Building on a couple of threads like this or this, I am looking for the fastest way to regularly copy files in an encrypted fashion between two Linux computers. There are some special constraints, which is why I decided to open a new question on it:
- The computers are connected via a 40Gbit (and later possibly even 80Gbit) LAN. Latency is not an issue, as we are talking about a switched fiber interconnect of 20 meters or so.
- The assumption is that both the sending and the receiving storage can saturate the 40Gbit connection.
- The transfer should be encrypted. Encryption might be disabled only if it would really bring down the speed.
- The transfer needs to happen a couple of times per day, so we need to be able to automate it in some way (a minimal cron sketch follows this list).
- The files to be transferred are probably in the range of 20-250 GB each.
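For the automation requirement, here is a minimal sketch of what a scheduled run could look like, assuming a hypothetical wrapper script at /usr/local/bin/fastcopy.sh that performs the actual transfer (whatever that ends up being):

```bash
# /etc/cron.d/fastcopy -- hypothetical cron entry; the paths, times and the
# script name are placeholders, the transfer command itself is still open
0 6,18 * * * root /usr/local/bin/fastcopy.sh >> /var/log/fastcopy.log 2>&1
```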
From what I have learned so far, netcat is probably the fastest way. But is it suitable for unsupervised, automated use? And what about its speed when tunneled over SSH?
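To make the comparison concrete, here is a minimal sketch of the two variants, where the hostname dst, port 5000 and the file names are placeholders of mine:

```bash
# Variant 1: plain netcat (no encryption) -- start the receiver first
nc -l 5000 > /data/in/dump.tar     # on dst (some netcat variants need -l -p 5000)
nc dst 5000 < /data/out/dump.tar   # on the sender

# Variant 2: the same payload over SSH, picking a fast AEAD cipher;
# aes128-gcm@openssh.com uses AES-NI where the CPU supports it
tar -cf - -C /data/out . | ssh -c aes128-gcm@openssh.com dst 'tar -xf - -C /data/in'
```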
Is there any encrypted protocol which can saturate a 40G or even 80G link?
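Related to that: I am not sure a single CPU core can even encrypt at 40Gbit/s. Assuming OpenSSL is installed, the single-threaded ceiling is easy to measure:

```bash
# Does the CPU advertise AES-NI?
grep -m1 -o aes /proc/cpuinfo

# Single-threaded AES-GCM throughput; 40Gbit/s is roughly 5 GB/s, so the
# reported numbers indicate how many parallel streams would be required
openssl speed -evp aes-128-gcm
```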
//edit: As requested, some details on the hardware:
- The servers are HPE ProLiant DL380 Gen9.
- The storage controller is an HPE Smart Array P840ar with 12Gb/s for each connected SAS SSD.
- The network controller is an HPE IB FDR/EN 40Gb 2P 544+FLR-QSFP Adapter.
Thanks a lot!
Comments:

- It is designed to be a reliable "back-end" tool that can be used directly or easily driven by other programs and scripts. – Seth Nov 21 '16 at 12:58
- […] netcat and encrypt the files beforehand. So the network transfer would be fast, but you'd need preparation time. You could use an SSH or IPSec connection and use connection-based encryption. In addition, you will have to think about the structure of the data. It could be worth it to zip them, or at least use tar to get a consecutive/monolithic file rather than a lot of small files (could be important if your solution is FTP-based). – Seth Nov 22 '16 at 07:12
- […] netcat example and the first one has an SSH/SCP example. That way you already have two ready-to-go approaches and just need to substitute your files. Depending on the application your data is from, there might still be a smarter way to go about it, like using a cluster with replication. – Seth Nov 22 '16 at 13:30
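A sketch of the "encrypt beforehand, then push with plain netcat" idea from the comments, assuming a pre-shared key file at the hypothetical path /root/transfer.key, OpenSSL 1.1.1+ for the -pbkdf2 option, and the same placeholder host dst and port 5000 as above:

```bash
# Bundle many small files into one monolithic stream and encrypt it up front
tar -cf - -C /data/out . | \
  openssl enc -aes-128-ctr -pbkdf2 -pass file:/root/transfer.key -out /tmp/payload.enc

# Move the ciphertext with plain netcat (no per-connection crypto cost)
nc -l 5000 > /tmp/payload.enc   # on the receiver
nc dst 5000 < /tmp/payload.enc  # on the sender

# Decrypt and unpack on the receiving side
openssl enc -d -aes-128-ctr -pbkdf2 -pass file:/root/transfer.key \
  -in /tmp/payload.enc | tar -xf - -C /data/in
```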