How to process a series of files once the transfer is complete
What I have
I have 2 servers. Let's call them sen.der and recei.ver.
Sender generates files; these can range in size from 20 KB to 30 GB.
I've written a script that checks how big a file is once it has been generated. If it's smaller than 10 MB, the script sends it to recei.ver via SFTP; if it's larger than 10 MB, it splits it into 10 MB chunks and then sends those to recei.ver via SFTP.
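For reference, a minimal sketch of what such a sender script might look like (recei.ver is from the question, but the /incoming destination, the .part_ chunk suffix, and the use of GNU stat/split are assumptions for illustration):

    #!/bin/bash
    # Sketch of the sender side: files over 10 MB are split into 10 MB chunks,
    # then every piece is uploaded in a single SFTP session.
    # "/incoming" and the ".part_" suffix are placeholders, not the real setup.
    remote="recei.ver"
    dest="/incoming"
    file="$1"

    limit=$((10 * 1024 * 1024))
    size=$(stat -c %s "$file")           # GNU stat; BSD stat would need -f %z

    if [ "$size" -gt "$limit" ]; then
        # -a 4 keeps the suffixes from running out (~3000 chunks for a 30 GB file);
        # produces file.part_aaaa, file.part_aaab, ... in lexical order
        split -b 10M -a 4 "$file" "$file.part_"
        set -- "$file".part_*
    else
        set -- "$file"
    fi

    # Feed one "put" command per piece into a single sftp session
    for piece in "$@"; do
        printf 'put "%s" "%s/"\n' "$piece" "$dest"
    done | sftp "$remote"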
The sending time is obviously determined by the file size and the line speed. That line speed may be as low as 100 Kbps, meaning it could theoretically take as much as 11 hours to transfer the biggest file.
What I Need
What I'm trying to do is get recei.ver to automatically cat the chunks back into one big file (and run a few extra errands such as untarring it, sending a notification email, etc.).
What I could do
I could use inotifywait with the -m option, check the size of each file as it is written, and cat the lot once the last file created is smaller than 10,485,760 bytes.
I could then cat the chunks and test the tarball with tar tf <filename>; echo $?
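A rough sketch of that receiver loop, assuming the chunks land in /incoming with a .part_ suffix (the directory, suffix, notification address, and use of GNU stat and mail are all assumptions for illustration):

    #!/bin/bash
    # Sketch of the receiver side: watch for chunks, and when one smaller than
    # 10 MB is closed, treat it as the last chunk, reassemble, and test the tar.
    watch_dir="/incoming"
    limit=10485760                          # 10 MB in bytes

    inotifywait -m -e close_write --format '%f' "$watch_dir" |
    while read -r name; do
        case "$name" in
            *.part_*) ;;                    # only react to chunk files
            *) continue ;;
        esac

        size=$(stat -c %s "$watch_dir/$name")
        [ "$size" -lt "$limit" ] || continue    # not the final chunk yet

        base="${name%.part_*}"
        cat "$watch_dir/$base".part_* > "$watch_dir/$base"

        # Verify the reassembled tarball before cleaning up and notifying
        if tar tf "$watch_dir/$base" > /dev/null; then
            rm -f "$watch_dir/$base".part_*
            echo "$base reassembled OK" | mail -s "$base received" admin@example.com
        else
            echo "tar test failed for $base" >&2
        fi
    done

Note that a file whose size is an exact multiple of 10 MB would never produce a final chunk smaller than the limit, which is part of why this feels fragile.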
Is there a better way?
It would probably work, but it doesn't seem elegant. Is there a better way to do this?
bash tar split inotify
asked Aug 31 at 9:41
Jim
1 Answer
Using the size of a partial file as an implicit end marker may be brittle. Much better would be to send the parts first and then send a control file which lists the parts (maybe with sha256 checksums to detect transfer problems), so that the receiving program can check whether all parts have been transmitted and start reassembling only then.
answered Aug 31 at 11:07
Hans-Martin Mosner
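A sketch of how that could be wired together, assuming the sender writes a <name>.manifest file with sha256sum and uploads it only after all the chunks (the paths, the .manifest naming, and the .part_ chunk suffix are illustrative):

    # --- sender side, run from the directory containing the chunks ---
    # Send the manifest last: its arrival tells the receiver everything is there.
    base="bigfile.tar"                           # placeholder name
    sha256sum "$base".part_* > "$base.manifest"
    echo "put $base.manifest /incoming/" | sftp recei.ver

    # --- receiver side ---
    # React only to manifests; verify every listed chunk before reassembling.
    watch_dir="/incoming"
    cd "$watch_dir" || exit 1

    inotifywait -m -e close_write --format '%f' . |
    while read -r name; do
        case "$name" in *.manifest) ;; *) continue ;; esac
        base="${name%.manifest}"

        if sha256sum -c --quiet "$name"; then    # all chunks present and intact?
            cat "$base".part_* > "$base"         # glob expands in chunk order
            tar tf "$base" > /dev/null && echo "$base reassembled and verified"
        else
            echo "chunks for $base missing or corrupt" >&2
        fi
    done

This also removes the ambiguity of a final chunk that happens to be exactly 10 MB.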
Worth a shot (re the control file). I've been including the checksum in the tarball, but I see your point of using it instead of trying to check the integrity of the file with a tar tf.
– Jim
Aug 31 at 12:08