[RndTbl] how to share/move sensitive data

Robert Keizer robert at keizer.ca
Wed Aug 24 17:51:33 CDT 2016

Tar it up and compress it. Scp it.
On Aug 24, 2016 17:19, "Trevor Cordes" <trevor at tecnopolis.ca> wrote:

On 2016-08-24 Micah Garlich-Miller wrote:
>    - It's about 400 GB, but I don't know the data structure as I
>      haven't seen it yet, i.e. I'm not sure if it's naturally chunked.
>    - It's of a sensitivity that it needs to be encrypted before being
>      sent.
>    - Sending it on an encrypted external hard drive is not acceptable
>      in this situation.

I find a very easy, secure way to give people large files from my Linux
box is via Apache.  If you already have a Linux box running Apache,
this is a breeze.  If you don't, you need Apache (or any other web
server) serving port 80/443 to the external internet (either directly,
through a DMZ, or via port forwarding).

Since you want encryption, SSL is a must, even if you use a self-signed
SSL cert (in that case, send the fingerprint to the other user
out-of-band so they can verify it).
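If you need a self-signed cert, a minimal sketch with openssl (the
hostname is a placeholder; your Apache SSL config would then point at
the resulting files):

```shell
# generate a throwaway self-signed cert + key (CN is a placeholder)
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/CN=files.example.com" \
  -keyout server.key -out server.crt

# print the fingerprint to pass to the other party out-of-band
openssl x509 -in server.crt -noout -fingerprint -sha256
```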

What I do is have a directory on my web server with directory indexing
turned off in Apache.  Inside it, I put a directory named with some
uberlong (like 64 chars) random alphanumeric string.

Then in that directory, turn directory indexing back on (you can use
.htaccess for this) and put your big files in there.
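Generating the name and setting up the directory might look like this
(the docroot path is an assumption; point it at your real Apache
docroot):

```shell
# docroot location is a placeholder; adjust to your server
DOCROOT="${DOCROOT:-/tmp/files}"

# 64 random alphanumeric chars from the kernel's CSPRNG
RAND=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 64)

mkdir -p "$DOCROOT/$RAND"

# re-enable indexing inside the secret directory only
printf 'Options +Indexes\n' > "$DOCROOT/$RAND/.htaccess"
echo "$DOCROOT/$RAND"
```

Note the .htaccess Options line only takes effect if the Apache config
has AllowOverride Options (or All) for that directory tree.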

Then email, or otherwise pass, the link to your end user.

Voila: secure, encrypted access to the files.  For someone else on the
net to gain access, they'd have to intercept your email/backchannel
with the link, or guess the random string, which is basically
impossible for a 64-char alphanumeric string (62**64 possibilities), or
have local shell access to your server (though creative use of Apache
group permissions can mitigate this).  If you wanted to, you could even
add .htpasswd basic auth on top of the above.  (Also, make sure you
don't hand out a non-SSL port-80 http:// link to it!)
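The basic-auth option sketched out (the username and password-file path
are placeholders; the htpasswd step needs root, so it's shown commented
out):

```shell
# as root, create the password file outside the docroot, e.g.:
#   htpasswd -cB /etc/apache2/.htpasswd alice

# then append auth directives to the secret directory's .htaccess
cat >> .htaccess <<'EOF'
AuthType Basic
AuthName "Restricted files"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user
EOF
```

As with indexing, this requires AllowOverride AuthConfig (or All) in
the Apache config for .htaccess auth directives to be honoured.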

Test that your no-dirindex setting is actually working by requesting
the parent directory (the one with indexing turned off), which should
give you a 403 error saying indexing is denied.

As for breaking up the files, you could use split to break them into
chunks, which might be prudent, though many http downloaders/browsers
will resume a broken connection so even a single 400G file might be ok
(try using wget with resume options for it).

To split you'd do:
split -b 1000000000 infile dl-me
for 1GB-ish chunks.

On the other side it's just:
cat dl-me* > original-file
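A quick sanity check of the split/cat round trip, demoed here on a
small dummy file (sending the sha256sum along with the link also lets
the recipient verify their reassembled copy):

```shell
# demo with a small dummy file; for the real thing use -b 1000000000
head -c 100000 /dev/urandom > original-file
split -b 30000 original-file dl-me      # -> dl-meaa .. dl-mead

cat dl-me* > reassembled-file
cmp original-file reassembled-file && echo "round trip OK"
sha256sum original-file                 # send this hash with the link
```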

Even Windbloze might be able to do it, probably with copy in binary
mode:
copy /b dl-me* original-file
??? not sure on that one, maybe PowerShell gives us better options now.

Final note, this transfer will saturate your modem's upload connection,
so you might want to schedule it for a weekend or something.
Rate-limiting with Apache is kind of a pain, but the receiving end
could use wget's rate-limiting options (aim for maybe 75% of your
nominal upload bandwidth).  Or you can set up QoS / rate-limiting
(using tc and iptables) for egress web traffic on your Linux web server
or a Linux router you control, but that gets complicated if you've
never done it before.
Final final note, you may tick off your ISP and/or exceed your monthly
transfer limits (uploads are usually quite limited vs downloads).
400GB is a huge amount to upload for normal home/SMB shaw/mts
accounts.  (You could bzip2 (or better) the file before starting the
whole process to make it smaller, if both sides have enough disk
space for two copies.)
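The compress-first step might look like this, demoed on a dummy tree
(gzip shown for portability; bzip2 or xz -9 usually compress tighter).
Streaming tar through the compressor means no uncompressed tarball ever
hits disk on the sending side:

```shell
# demo tree; in reality this would be your 400 GB dataset
mkdir -p dataset && head -c 50000 /dev/zero > dataset/blob

# stream tar through the compressor so no uncompressed tarball hits disk
tar -cf - dataset | gzip -9 > dataset.tar.gz
gzip -t dataset.tar.gz && echo "archive OK"
```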