How to debug: tar: A lone zero block

How to debug this? This issue has suddenly appeared within the last couple of days. All backups of a website are corrupted.



If the backup is just left as a plain tar, there are no problems, but as soon as the tar is compressed as gz or xz, I can't uncompress it.



There is plenty of free disk space:



Local disk space 2.68 TB total / 2.26 TB free / 432.46 GB used


Error:



tar: Skipping to next header[===============================> ] 39% ETA 0:01:14
tar: A lone zero block at 2291466===============================> ] 44% ETA 0:01:13
tar: Exiting with failure status due to previous errors
878MiB 0:00:58 [15.1MiB/s] [===================================> ] 44%


And why does it say Skipping to next header? It has never done that before. Something is terribly wrong with some of the files.



There are about 15k pdf, jpg or png files in the directories.



Command:



pv $backup_file | tar -izxf - -C $import_dir


There must be some data that corrupts the compression.



I have also tried to check the HDD health by doing this:



# getting the drives
lsblk -dpno name

smartctl -H /dev/sda
smartctl -H /dev/sdb


On both drives I get this:



=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
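
(-H only prints the overall verdict; a deeper check, as a sketch with the same devices as above, would run a short self-test and then read the full report:)

# start a ~2 minute self-test, wait for it, then inspect the results
smartctl -t short /dev/sda
sleep 150
smartctl -a /dev/sda   # self-test log, reallocated sector counts, etc.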


How can I find out which files are corrupting the tar.gz? I just want to delete them.



Update



I have now copied all files to another server and I have the exact same issue. I can tar everything and extract it without problems, but as soon as I compress the files, I can't uncompress them (gz/xz).










Tags: debian tar corruption

asked May 20 '17 at 11:26 by clarkk · edited Feb 17 '18 at 12:35 by Jeff Schaller

  • Did a file system fill up during the backup? Any logs from the backup?

    – Jeff Schaller
    May 20 '17 at 11:36











  • Have any checksums of the files, or any files on the backup drive? RAM errors?

    – Xen2050
    May 20 '17 at 12:02






  • Can you show us the full tar (+ compression) command(s) that created the .tar.gz, and how they are called? Also, in the extraction command you show, add v to have it display which files it managed to extract; this will help you pinpoint the one(s) which cause errors as well.

    – Olivier Dulac
    May 22 '17 at 17:00






  • What happens if you run tar -cf xxx.tar ... without the compression, then gzip xxx.tar? Does that tarball extract cleanly? Is pv causing problems? What happens if you drop the pv ... | ... piping and just directly run tar -cvzf xxx.tar.gz ... and then tar -xvzf xxx.tar.gz ...?

    – Andrew Henle
    May 23 '17 at 9:41






  • What is the underlying filesystem type? What is the OS version, and the size and md5 sum of the binaries? Try calling the binaries with absolute paths and without pv.

    – MattBianco
    May 29 '17 at 13:28
















4 Answers






Your file is either truncated or corrupted, so xz can't get to the end of the data. tar complains because the archive stops in the middle, which is logical since xz didn't manage to read all the data.



Run the following commands to check where the problem is:



cat /var/www/bak/db/2017-05-20-1200_mysql.tar.xz >/dev/null
xzcat /var/www/bak/db/2017-05-20-1200_mysql.tar.xz >/dev/null


If cat complains then the file is corrupted on the disk and the operating system detected the corruption. Check the kernel logs for more information; usually the disk needs to be replaced at this point. If only xz complains then the OS didn't detect any corruption but the file is nevertheless not valid (either corrupted or truncated). Either way, you aren't going to be able to recover this file. You'll need to get it back from your offline backups.
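
The compression tools also have built-in integrity tests that exit non-zero on bad data; for the gzip variant of the backups, the analogous check would be (a sketch, file names following the example above):

# both commands read the whole stream and report any corruption
gzip -t /var/www/bak/db/2017-05-20-1200_mysql.tar.gz && echo "gzip stream OK"
xz -t /var/www/bak/db/2017-05-20-1200_mysql.tar.xz && echo "xz stream OK"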






answered May 20 '17 at 13:29 by Gilles


  • I have updated my question. If I test the uncompressed tar files I get no errors, but as soon as I compress them as gz or xz, I can't uncompress them.

    – clarkk
    May 20 '17 at 19:44






  • @clarkk Then the files got corrupted before they were stored, or on storage (but undetected errors on storage are very unlikely; for storage errors, cat or anything else would report that a part of the file is unreadable). The files may have been truncated (e.g. because the disk got full while writing them).

    – Gilles
    May 21 '17 at 19:05











  • If the files were corrupted before they were stored in the tarball, how can I then detect the corrupted files?

    – clarkk
    May 22 '17 at 19:31











  • The two commands with cat and xzcat don't return any errors.

    – clarkk
    May 22 '17 at 20:29











  • @clarkk They don't? They did in your initial question. The problem could be RAM failure on your machine. Do a memory test, and don't write anything from your machine if you can avoid it.

    – Gilles
    May 22 '17 at 22:47


















I don't see any mention of how the broken tar files are created.



You say it's backups from a web site, but the issues you're showing all occur when restoring/unpacking, so the source is where you need to put the troubleshooting effort.



If the files can't be uncompressed after moving the backup to another machine/location, they must either be created faulty or broken in transport.



To locate the source of the error:



  • manually create a backup on the web server (without pv and without -i)

  • manually test the backup on the web server (without pv and without -i)

If no problems found so far:



  • copy the backup from the web server

  • test the copied backup on the target machine (without pv and without -i)

If no problems are found so far, the backup script doesn't create the archive the same way you did when doing it by hand (and should probably be modified to do what you did manually). A sketch of these manual steps follows below.
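
A minimal sketch of those manual steps (paths and host names are placeholders; plain tar, no pv, no -i):

# on the web server: create the archive and verify it in place
tar -czf /tmp/manual-backup.tar.gz -C /var/www html
tar -tzf /tmp/manual-backup.tar.gz > /dev/null && echo "create/test OK"

# copy it off and verify again on the target machine
scp /tmp/manual-backup.tar.gz target:/tmp/
ssh target 'tar -tzf /tmp/manual-backup.tar.gz > /dev/null && echo "transport OK"'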



Also, make sure to use the absolute paths of all involved commands. If you have a bad $PATH and/or $LD_LIBRARY_PATH variable and an intruder in the system, you might be using trojaned binaries, which could cause unintentional side-effects.



It could of course also be that incompatible tar versions are involved, unless both systems are Debian. You could try forcing POSIX mode on both sides.
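
With GNU tar, forcing the portable format on the creating side would look like this (a sketch):

# write the archive in POSIX/pax format instead of the GNU default
tar --format=posix -czf backup.tar.gz -C /var/www html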






answered May 29 '17 at 13:42, edited May 29 '17 at 14:58 by MattBianco

You're using the flag -i, which in its long form is --ignore-zeros.
This is why tar does not complain about the files that are corrupted.
So, if you want to debug your tar file, just remove the -i option and you'll get the list of corrupted files.
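
As a minimal sketch (reusing $backup_file from the question): list the archive verbosely without -i; tar exits with an error at the first corrupt header, so the last entries printed sit just before the corruption.

pv "$backup_file" | tar -tvzf - > listing.txt
tail -n 5 listing.txt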



There are also two other ways to find corrupted files on Unix (in general). I quote an answer given to another question:




    rsync can be used to copy directories, and is capable of restarting the copy from the point at which it terminated if any error causes the rsync to die.



    Using rsync's --dry-run option you can see what would be copied without actually copying anything. The --stats and --progress options would also be useful, and --human-readable (-h) makes the output easier to read.



    e.g.



    rsync --dry-run -avh --stats --progress /path/to/src/ /path/to/destination/



    I'm not sure if rsync is installed by default on Mac OS X, but I have used it on Macs so I know it's definitely available.



    For a quick-and-dirty check on whether files in a subdirectory can be read or not, you could use grep -r XXX /path/to/directory/ > /dev/null. The search regexp doesn't matter, because output is being discarded anyway.



    STDOUT is being redirected to /dev/null, so you'll only see errors.



    The only reason I chose grep here was because of its -R recursion option. There are many other commands that could be used instead of grep here, and even more if used with find.




As a reference: Finding corrupted files






answered May 29 '17 at 12:29 by tmow

The line of reasoning in the answer by @MattBianco is what I would methodically follow to solve this particular issue.



Zeroed blocks indicate EOF, but that's dependent on the blocking factor (the default is a compiled constant, typically 20). Tar's --compare|--diff appears to execute with --ignore-zeros (-i) implicitly.



Given the extra complication of pv, I suspect tar -i is causing issues for xz. Looking at the tar manual on the blocking factor, I'd suggest first removing -i.



Then, if that doesn't help, replacing it with:



--read-full-records --blocking-factor=300
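
Applied to the command from the question, that would look something like this (a sketch; $backup_file and $import_dir as in the question):

pv "$backup_file" | tar --read-full-records --blocking-factor=300 -zxf - -C "$import_dir"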



If you're just reading this having googled "tar: A lone zero block at N", and aren't piping anything, then try --ignore-zeros.






answered Dec 30 '18 at 2:43, edited Dec 30 '18 at 2:50 by earcam