On 2016-08-24 Micah Garlich-Miller wrote:
- It's about 400 GB, but I don't know the data structure as I
haven't seen it yet. i.e., I'm not sure if it's naturally chunked.
- It's of a sensitivity that it needs to be encrypted before being
sent.
- Sending it on an encrypted external hard drive is not acceptable
in this situation.
A very easy, secure way I've found to give people large files from my Linux box is via Apache. If you already have a Linux box running Apache, this is a breeze. If you don't, you'll need to set up Apache (or any other web server) and have it serve port 80/443 to the external internet (either directly, through a DMZ, or via port forwarding).
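On a Debian/Ubuntu box, the bare minimum to get Apache serving HTTPS looks roughly like this (distro-specific; RHEL/CentOS uses the httpd and mod_ssl packages instead):

    apt-get install apache2
    a2enmod ssl
    a2ensite default-ssl      # the stock SSL vhost; point it at your cert/key
    systemctl restart apache2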
Since you want encryption, SSL is a must, even if it's only a self-signed cert (in that case, just send the fingerprint to the other user through a separate channel so they can verify it).
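If you need to generate a self-signed cert, something along these lines works (the paths and hostname are just examples, adjust for your setup):

    # generate a self-signed cert + key, no passphrase on the key
    openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
        -subj "/CN=foo.com" \
        -keyout /etc/ssl/private/foo.com.key \
        -out /etc/ssl/certs/foo.com.crt

    # print the fingerprint to send to the recipient out-of-band
    openssl x509 -in /etc/ssl/certs/foo.com.crt -noout -fingerprint -sha256

Point Apache's SSLCertificateFile/SSLCertificateKeyFile at those two files and restart.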
What I do is have a directory on my web server with directory indexing turned off in Apache. Inside it I put a directory whose name is some uberlong (like 64 chars) random alphanum string like pYwRYezIOz6EIAi4HijyF2hVe70L2V2hF3CR6UFMbvjCBgaI5XVsvDeiqzTJa307
Then, in that inner directory, turn directory indexing back on (you can use .htaccess for this). Put your big files in there.
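Roughly, the setup looks like this (paths are examples, assuming your DocumentRoot is /var/www so the URL comes out as https://foo.com/hidden/... ; for the Options lines in .htaccess to take effect you need AllowOverride Options, or All, for that directory tree in your Apache config):

    # generate a 64-char random directory name and create it
    RAND=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 64)
    mkdir -p /var/www/hidden/"$RAND"

    # parent: listing off; hidden dir: listing on
    echo 'Options -Indexes' > /var/www/hidden/.htaccess
    echo 'Options +Indexes' > /var/www/hidden/"$RAND"/.htaccess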
Then email or otherwise get the link to your end user:
https://foo.com/hidden/pYwRYezIOz6EIAi4HijyF2hVe70L2V2hF3CR6UFMbvjCBgaI5XVsv...
Voila: secure, encrypted access to the files. For someone else on the net to gain access, they'd have to intercept your email/backchannel with the link, or guess the random string, which is basically impossible given the 62**64 possible strings (or have local shell access to your server, though creative use of Apache group permissions can mitigate this). If you wanted to, you could even add .htpasswd basic auth on top of the above; see the sketch below. (Also, make sure you don't hand out an un-SSL port 80 http:// link to it!)
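If you do go the basic auth route, a rough sketch, assuming the example paths from above and AllowOverride AuthConfig (or All) for that directory:

    # create a password file and a user (htpasswd ships with the Apache utils)
    htpasswd -c /etc/apache2/dl.htpasswd someuser

    # require that login for the hidden directory
    printf '%s\n' 'AuthType Basic' 'AuthName "Downloads"' \
        'AuthUserFile /etc/apache2/dl.htpasswd' 'Require valid-user' \
        >> /var/www/hidden/"$RAND"/.htaccess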
Test that the no-dirindex part is actually working by going to https://foo.com/hidden/ , which should give you a 403 Forbidden error rather than a listing.
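Or from the command line (here $RAND is the random directory name from the sketch above; -k is only needed for a self-signed cert, and only after you've checked the fingerprint):

    curl -k -I https://foo.com/hidden/            # expect a 403
    curl -k -I "https://foo.com/hidden/$RAND/"    # expect a 200 and the listing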
As for breaking up the files, you could use split to cut them into chunks, which might be prudent, though many HTTP downloaders/browsers can resume a broken connection, so even a single 400 GB file might be OK (try wget with its resume option for that).
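For example, on the receiving end (the URL is a placeholder for your real link; add --no-check-certificate only if you used a self-signed cert and have verified the fingerprint):

    wget -c 'https://foo.com/hidden/<long-random-string>/bigfile'

-c (--continue) picks a partial download back up where it left off.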
To split into 1 GB-ish chunks you'd do:

    split -b 1000000000 infile dl-me

On the other side it's just:

    cat dl-me* > original-file
Even Windbloze might be able to reassemble it; the classic cmd way is copy /b (e.g. copy /b dl-meaa + dl-meab + dl-meac original-file), and type dl-me* > original-file may also work, though I'm not sure it's binary-safe. PowerShell probably gives better options these days.
Final note: this transfer will saturate your modem's upload connection, so you might want to schedule it for a weekend or something. Rate limiting within Apache is kind of a pain, but the receiving end could use wget's rate-limiting option (aim for maybe 75% of your nominal upload bandwidth). Or you can set up QoS/rate limiting (using tc and iptables) for egress web traffic on your Linux web server or a Linux router you control, but that gets complicated if you've never done it before.
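For the wget route, something like this (the 1.5m figure is just an example, roughly 75% of a 2 MB/s uplink; scale to your own connection):

    wget -c --limit-rate=1.5m 'https://foo.com/hidden/<long-random-string>/bigfile'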
Final final note: you may tick off your ISP and/or exceed your monthly transfer limits (upload allowances are usually much smaller than download ones). 400 GB is a huge amount to upload for normal home/SMB Shaw/MTS accounts. (You could bzip2 (or better) the file before starting the whole process to make it smaller, if both sides have enough disk space for two copies.)
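e.g., a rough sketch, assuming the data compresses at all (xz here as the "or better" option; note both tools replace the original with the compressed copy by default, so keep a backup if the data is irreplaceable):

    xz -T0 -6 infile        # or: bzip2 -9 infile
    # ...split/serve/download infile.xz as above, then on the far side:
    xz -d infile.xz         # or: bunzip2 infile.bz2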