Is it possible to make a .tar.gz file directly from stdin? Or, I need to tar together already gzipped files

I'm going to tell you exactly what I need in order to clarify the cryptic question in the title. I'm currently making scheduled MySQL backups of all my databases with something like:



mysqldump ... | gzip -c > mysql-backup.gz


This is OK, but I'd like to make a separate file for each database, since that will make it easier to take a look at dumped data or to restore a single database:



for db in $dbs; do mysqldump ... $db | gzip -c > mysql-backup-$db.gz; done


I'd like to store all of the dumps for each single backup in a single .tar file, i.e. mysql-backup.tar.gz with all the dumped databases inside. I know I can simply leave the .sql files uncompressed and then tar -cz *.sql, but 1) I'm looking for a way that doesn't need to temporarily store big files. In my current script, in fact, mysqldump is piped into gzip, so no big file is created.



2) Is there a similar way in which I can create .tar.gz from stdin?



3) Is tar -c *.sql.gz equivalent to tar -cz *.sql?










  • 1




    See stackoverflow.com/questions/2597875/…
    – jhilmer
    Jul 8 '15 at 7:37






  • 3




    @jhilmer The linked question is about getting file names from stdin, not actual data.
    – lorenzo-s
    Jul 8 '15 at 7:42






  • 1




    Is tar -c *.sql.gz equivalent to tar -cz *.sql? - No, the latter is slightly more efficient, but that makes more of a difference for many small files rather than a few big files.
    – lcd047
    Jul 8 '15 at 7:51














Tags: tar, compression, gzip






asked Jul 8 '15 at 7:32 by lorenzo-s







4 Answers
Not easily. tar records not only file contents, but also file metadata (name, timestamps, permissions, owner and such). That information has to come from somewhere, and it won't be there in a pipe.



You could gzip your database dumps to a file (probably named for the database in question), append the file to a tar archive and then delete the file before proceeding to the next database. That'd result in a .gz.tar file, which is unusual but in no way a problem, and probably not use significantly more disk than gzipping a whole-database dump (it will be a little less efficiently compressed since it can't share across database borders).
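The approach described above can be sketched as a small loop; note that `echo` stands in for the real `mysqldump ... $db` call so the sketch is self-contained, and it assumes a tar that creates the archive on `-r` when it doesn't exist (GNU tar does):

```shell
# Sketch: gzip each dump to a short-lived file, append it with `tar -r`,
# then delete it before moving on to the next database.
rm -f mysql-backup.tar
for db in db1 db2; do
    echo "-- dump of $db" | gzip -c > "mysql-backup-$db.sql.gz"   # mysqldump ... $db in real use
    tar -rf mysql-backup.tar "mysql-backup-$db.sql.gz"            # -r appends to the archive
    rm "mysql-backup-$db.sql.gz"                                  # temp file gone before next db
done
tar -tf mysql-backup.tar
```

Only one gzipped dump exists on disk at a time, so the extra space used is bounded by the largest single compressed dump rather than the sum of them.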






answered Jul 8 '15 at 7:44 by Calle Dybedahl



























I cobbled together some Python to do what you want. It uses Python's tarfile library to append stdin to a tar file, and then simply seeks back in the tar to rewrite the header with the right size at EOF. The usage would be:



    rm -f mytar
    for db in $dbs; do
        mysqldump ... $db | gzip -c |
            tarappend -t mytar -f mysql-backup-$db.gz
    done
    tar tvf mytar


    Here's the tarappend Python script:



    #!/usr/bin/python
    # concat stdin to end of tar file, with given name. meuh on stackexchange
    # $Id: tarappend,v 1.3 2015/07/08 11:31:18 meuh $

    import sys, os, tarfile, time, copy
    from optparse import OptionParser
    try:
        import grp, pwd
    except ImportError:
        grp = pwd = None

    usage = """%prog: ... | %prog -t tarfile -f filename
    Appends stdin to tarfile under the given arbitrary filename.
    tarfile is created if it does not exist.
    """

    def doargs():
        parser = OptionParser(usage=usage)
        parser.add_option("-f", "--filename", help="filename to use")
        parser.add_option("-t", "--tarfile", help="existing tar archive")
        (options, args) = parser.parse_args()
        if options.filename is None or options.tarfile is None:
            parser.error("need filename and tarfile")
        if len(args):
            parser.error("unknown args: "+" ".join(args))
        return options

    def copygetlen(fsrc, fdst):
        """copy data from file-like object fsrc to file-like object fdst. return len"""
        totlen = 0
        while 1:
            buf = fsrc.read(16*1024)
            if not buf:
                return totlen
            fdst.write(buf)
            totlen += len(buf)

    class TarFileStdin(tarfile.TarFile):
        def addstdin(self, tarinfo, fileobj):
            """Add stdin to archive. based on addfile()"""
            self._check("aw")
            tarinfo = copy.copy(tarinfo)
            buf = tarinfo.tobuf(self.format, self.encoding, self.errors)
            bufoffset = self.offset
            self.fileobj.write(buf)
            self.offset += len(buf)

            tarinfo.size = copygetlen(fileobj, self.fileobj)
            blocks, remainder = divmod(tarinfo.size, tarfile.BLOCKSIZE)
            if remainder > 0:
                self.fileobj.write(tarfile.NUL * (tarfile.BLOCKSIZE - remainder))
                blocks += 1
            self.offset += blocks * tarfile.BLOCKSIZE
            # rewrite header with correct size
            buf = tarinfo.tobuf(self.format, self.encoding, self.errors)
            self.fileobj.seek(bufoffset)
            self.fileobj.write(buf)
            self.fileobj.seek(self.offset)
            self.members.append(tarinfo)

    class TarInfoStdin(tarfile.TarInfo):
        def __init__(self, name):
            if len(name) > 100:
                raise ValueError(name+": filename too long")
            if name.endswith("/"):
                raise ValueError(name+": is a directory name")
            tarfile.TarInfo.__init__(self, name)
            self.size = 99
            self.uid = os.getuid()
            self.gid = os.getgid()
            self.mtime = time.time()
            if pwd:
                self.uname = pwd.getpwuid(self.uid)[0]
                self.gname = grp.getgrgid(self.gid)[0]

    def run(tarfilename, newfilename):
        tar = TarFileStdin.open(tarfilename, 'a')
        tarinfo = TarInfoStdin(newfilename)
        tar.addstdin(tarinfo, sys.stdin)
        tar.close()

    if __name__ == '__main__':
        options = doargs()
        run(options.tarfile, options.filename)
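Once the gzipped dumps are inside the archive, a single one can be streamed back out without extracting anything to disk: `tar -xO` writes a member to stdout. A self-contained sketch with a stand-in member (in real use you would pipe the output into `mysql`):

```shell
# Build a tiny archive containing one gzipped member, as the loop above would.
printf 'hello\n' | gzip -c > member.gz
tar -cf demo.tar member.gz
rm member.gz
# Stream the member to stdout and decompress it; nothing is extracted to disk.
tar -xOf demo.tar member.gz | gunzip
```

This is what makes the one-archive layout convenient for restoring a single database: `tar -xOf mytar mysql-backup-somedb.gz | gunzip | mysql somedb` (names hypothetical).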





      No, and I miss that feature so much: my question on Ask Ubuntu.



      If the file to be archived is a raw stream with no filesystem metadata associated with it, tar has neither a filename nor a path, both of which it needs to build its internal directory / file tree (to say the least).



      I think something can be done in Perl, which has libraries dedicated to compression / decompression / archiving of files: see if you can get the most out of this answer: a related answer on Ask Ubuntu.
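For the same reason, a scripting-language tar library works where the tar CLI doesn't: you supply the missing metadata yourself. A minimal sketch with Python's standard tarfile module (file and member names are hypothetical); note it needs the size up front, which is exactly why the tarappend script in the other answer seeks back to patch the header instead:

```python
# Sketch: add in-memory bytes to a tar archive, no temp file on disk.
# tar needs name/size/mtime metadata, so we build a TarInfo by hand.
import io
import tarfile
import time

data = b"-- dump of mydb\n"                       # stand-in for a real dump
info = tarfile.TarInfo(name="mysql-backup-mydb.sql")
info.size = len(data)                             # size must be known up front
info.mtime = time.time()

with tarfile.open("demo.tar", "w") as tar:
    tar.addfile(info, io.BytesIO(data))           # fileobj supplies the content

with tarfile.open("demo.tar") as tar:
    print(tar.getnames())
```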






        You could consider using the tardy tar post-processor.



        However, you might question the use of tar and consider other ways to archive your things; in particular, consider rsync and afio.



        Notice that mysqldump understands an option to export every database in one stream (--all-databases; see this). You might pipe that into some script that understands the database boundaries, etc...
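A hedged sketch of such a splitter: mysqldump's combined output marks each database with a line like ``-- Current Database: `name` ``, which awk can split on. The `printf` below stands in for the real `mysqldump --all-databases` stream:

```shell
# Split a combined dump into one file per database at the marker lines,
# then gzip each piece.
printf '%s\n' \
    '-- Current Database: `db1`' 'CREATE TABLE t1 (x INT);' \
    '-- Current Database: `db2`' 'CREATE TABLE t2 (y INT);' |
awk '/^-- Current Database: `/ { gsub(/`/, "", $4); out = "mysql-backup-" $4 ".sql" }
     out != "" { print > out }'
gzip -f mysql-backup-db1.sql mysql-backup-db2.sql
ls mysql-backup-db*.sql.gz
```

This avoids one mysqldump invocation per database, at the cost of trusting the marker format of your mysqldump version.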






        share|improve this answer






















          Your Answer








          StackExchange.ready(function()
          var channelOptions =
          tags: "".split(" "),
          id: "106"
          ;
          initTagRenderer("".split(" "), "".split(" "), channelOptions);

          StackExchange.using("externalEditor", function()
          // Have to fire editor after snippets, if snippets enabled
          if (StackExchange.settings.snippets.snippetsEnabled)
          StackExchange.using("snippets", function()
          createEditor();
          );

          else
          createEditor();

          );

          function createEditor()
          StackExchange.prepareEditor(
          heartbeatType: 'answer',
          convertImagesToLinks: false,
          noModals: true,
          showLowRepImageUploadWarning: true,
          reputationToPostImages: null,
          bindNavPrevention: true,
          postfix: "",
          imageUploader:
          brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
          contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
          allowUrls: true
          ,
          onDemand: true,
          discardSelector: ".discard-answer"
          ,immediatelyShowMarkdownHelp:true
          );



          );













           

          draft saved


          draft discarded


















          StackExchange.ready(
          function ()
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2funix.stackexchange.com%2fquestions%2f214542%2fis-it-possible-to-make-a-tar-gz-file-directly-from-stdin-or-i-need-to-tar-tog%23new-answer', 'question_page');

          );

          Post as a guest






























          4 Answers
          4






          active

          oldest

          votes








          4 Answers
          4






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes








          up vote
          4
          down vote













          Not easily. tar records not only file contents, but also file metadata (name, timestamps, permissions, owner and such). That information has to come from somewhere, and it won't be there in a pipe.



          You could gzip your database dumps to a file (probably named for the database in question), append the file to a tar archive and then delete the file before proceeding to the next database. That'd result in a .gz.tar file, which is unusual but in no way a problem, and probably not use significantly more disk than gzipping a whole-database dump (it will be a little less efficiently compressed since it can't share across database borders).






          share|improve this answer
























            up vote
            4
            down vote













            Not easily. tar records not only file contents, but also file metadata (name, timestamps, permissions, owner and such). That information has to come from somewhere, and it won't be there in a pipe.



            You could gzip your database dumps to a file (probably named for the database in question), append the file to a tar archive and then delete the file before proceeding to the next database. That'd result in a .gz.tar file, which is unusual but in no way a problem, and probably not use significantly more disk than gzipping a whole-database dump (it will be a little less efficiently compressed since it can't share across database borders).






            share|improve this answer






















              up vote
              4
              down vote










              up vote
              4
              down vote









              Not easily. tar records not only file contents, but also file metadata (name, timestamps, permissions, owner and such). That information has to come from somewhere, and it won't be there in a pipe.



              You could gzip your database dumps to a file (probably named for the database in question), append the file to a tar archive and then delete the file before proceeding to the next database. That'd result in a .gz.tar file, which is unusual but in no way a problem, and probably not use significantly more disk than gzipping a whole-database dump (it will be a little less efficiently compressed since it can't share across database borders).






              share|improve this answer












              Not easily. tar records not only file contents, but also file metadata (name, timestamps, permissions, owner and such). That information has to come from somewhere, and it won't be there in a pipe.



              You could gzip your database dumps to a file (probably named for the database in question), append the file to a tar archive and then delete the file before proceeding to the next database. That'd result in a .gz.tar file, which is unusual but in no way a problem, and probably not use significantly more disk than gzipping a whole-database dump (it will be a little less efficiently compressed since it can't share across database borders).







              share|improve this answer












              share|improve this answer



              share|improve this answer










              answered Jul 8 '15 at 7:44









              Calle Dybedahl

              44424




              44424






















                  up vote
                  4
                  down vote













                  I cobbled together some python to do what you want. It uses python's tarfile library to append stdin to a tar file, and then simply seeks back in the tar to rewrite the header with the right size at eof. The usage would be:



                  rm -f mytar
                  for db in $dbs
                  do mysqldump ... $db | gzip -c |
                  tarappend -t mytar -f mysql-backup-$db.gz
                  done
                  tar tvf mytar


                  Here's the tarappend python script:



                  #!/usr/bin/python
                  # concat stdin to end of tar file, with given name. meuh on stackexchange
                  # $Id: tarappend,v 1.3 2015/07/08 11:31:18 meuh $

                  import sys, os, tarfile, time, copy
                  from optparse import OptionParser
                  try:
                  import grp, pwd
                  except ImportError:
                  grp = pwd = None

                  usage = """%prog: ... | %prog -t tarfile -f filename
                  Appends stdin to tarfile under the given arbitrary filename.
                  tarfile is created if it does not exist.
                  """

                  def doargs():
                  parser = OptionParser(usage=usage)
                  parser.add_option("-f", "--filename", help="filename to use")
                  parser.add_option("-t", "--tarfile", help="existing tar archive")
                  (options, args) = parser.parse_args()
                  if options.filename is None or options.tarfile is None:
                  parser.error("need filename and tarfile")
                  if len(args):
                  parser.error("unknown args: "+" ".join(args))
                  return options

                  def copygetlen(fsrc, fdst):
                  """copy data from file-like object fsrc to file-like object fdst. return len"""
                  totlen = 0
                  while 1:
                  buf = fsrc.read(16*1024)
                  if not buf:
                  return totlen
                  fdst.write(buf)
                  totlen += len(buf)

                  class TarFileStdin(tarfile.TarFile):
                  def addstdin(self, tarinfo, fileobj):
                  """Add stdin to archive. based on addfile() """
                  self._check("aw")
                  tarinfo = copy.copy(tarinfo)
                  buf = tarinfo.tobuf(self.format, self.encoding, self.errors)
                  bufoffset = self.offset
                  self.fileobj.write(buf)
                  self.offset += len(buf)

                  tarinfo.size = copygetlen(fileobj, self.fileobj)
                  blocks, remainder = divmod(tarinfo.size, tarfile.BLOCKSIZE)
                  if remainder > 0:
                  self.fileobj.write(tarfile.NUL * (tarfile.BLOCKSIZE - remainder))
                  blocks += 1
                  self.offset += blocks * tarfile.BLOCKSIZE
                  # rewrite header with correct size
                  buf = tarinfo.tobuf(self.format, self.encoding, self.errors)
                  self.fileobj.seek(bufoffset)
                  self.fileobj.write(buf)
                  self.fileobj.seek(self.offset)
                  self.members.append(tarinfo)

                  class TarInfoStdin(tarfile.TarInfo):
                  def __init__(self, name):
                  if len(name)>100:
                  raise ValueError(name+": filename too long")
                  if name.endswith("/"):
                  raise ValueError(name+": is a directory name")
                  tarfile.TarInfo.__init__(self, name)
                  self.size = 99
                  self.uid = os.getuid()
                  self.gid = os.getgid()
                  self.mtime = time.time()
                  if pwd:
                  self.uname = pwd.getpwuid(self.uid)[0]
                  self.gname = grp.getgrgid(self.gid)[0]

                  def run(tarfilename, newfilename):
                  tar = TarFileStdin.open(tarfilename, 'a')
                  tarinfo = TarInfoStdin(newfilename)
                  tar.addstdin(tarinfo, sys.stdin)
                  tar.close()

                  if __name__ == '__main__':
                  options = doargs()
                  run(options.tarfile, options.filename)





                  share|improve this answer


























                    up vote
                    4
                    down vote













                    I cobbled together some python to do what you want. It uses python's tarfile library to append stdin to a tar file, and then simply seeks back in the tar to rewrite the header with the right size at eof. The usage would be:



                    rm -f mytar
                    for db in $dbs
                    do mysqldump ... $db | gzip -c |
                    tarappend -t mytar -f mysql-backup-$db.gz
                    done
                    tar tvf mytar


                    Here's the tarappend python script:



                    #!/usr/bin/python
                    # concat stdin to end of tar file, with given name. meuh on stackexchange
                    # $Id: tarappend,v 1.3 2015/07/08 11:31:18 meuh $

                    import sys, os, tarfile, time, copy
                    from optparse import OptionParser
                    try:
                    import grp, pwd
                    except ImportError:
                    grp = pwd = None

                    usage = """%prog: ... | %prog -t tarfile -f filename
                    Appends stdin to tarfile under the given arbitrary filename.
                    tarfile is created if it does not exist.
                    """

                    def doargs():
                    parser = OptionParser(usage=usage)
                    parser.add_option("-f", "--filename", help="filename to use")
                    parser.add_option("-t", "--tarfile", help="existing tar archive")
                    (options, args) = parser.parse_args()
                    if options.filename is None or options.tarfile is None:
                    parser.error("need filename and tarfile")
                    if len(args):
                    parser.error("unknown args: "+" ".join(args))
                    return options

                    def copygetlen(fsrc, fdst):
                    """copy data from file-like object fsrc to file-like object fdst. return len"""
                    totlen = 0
                    while 1:
                    buf = fsrc.read(16*1024)
                    if not buf:
                    return totlen
                    fdst.write(buf)
                    totlen += len(buf)

                    class TarFileStdin(tarfile.TarFile):
                    def addstdin(self, tarinfo, fileobj):
                    """Add stdin to archive. based on addfile() """
                    self._check("aw")
                    tarinfo = copy.copy(tarinfo)
                    buf = tarinfo.tobuf(self.format, self.encoding, self.errors)
                    bufoffset = self.offset
                    self.fileobj.write(buf)
                    self.offset += len(buf)

                    tarinfo.size = copygetlen(fileobj, self.fileobj)
                    blocks, remainder = divmod(tarinfo.size, tarfile.BLOCKSIZE)
                    if remainder > 0:
                    self.fileobj.write(tarfile.NUL * (tarfile.BLOCKSIZE - remainder))
                    blocks += 1
                    self.offset += blocks * tarfile.BLOCKSIZE
                    # rewrite header with correct size
                    buf = tarinfo.tobuf(self.format, self.encoding, self.errors)
                    self.fileobj.seek(bufoffset)
                    self.fileobj.write(buf)
                    self.fileobj.seek(self.offset)
                    self.members.append(tarinfo)

                    class TarInfoStdin(tarfile.TarInfo):
                    def __init__(self, name):
                    if len(name)>100:
                    raise ValueError(name+": filename too long")
                    if name.endswith("/"):
                    raise ValueError(name+": is a directory name")
                    tarfile.TarInfo.__init__(self, name)
                    self.size = 99
                    self.uid = os.getuid()
                    self.gid = os.getgid()
                    self.mtime = time.time()
                    if pwd:
                    self.uname = pwd.getpwuid(self.uid)[0]
                    self.gname = grp.getgrgid(self.gid)[0]

                    def run(tarfilename, newfilename):
                    tar = TarFileStdin.open(tarfilename, 'a')
                    tarinfo = TarInfoStdin(newfilename)
                    tar.addstdin(tarinfo, sys.stdin)
                    tar.close()

                    if __name__ == '__main__':
                    options = doargs()
                    run(options.tarfile, options.filename)





                    share|improve this answer
























                      up vote
                      4
                      down vote










                      up vote
                      4
                      down vote









                      I cobbled together some python to do what you want. It uses python's tarfile library to append stdin to a tar file, and then simply seeks back in the tar to rewrite the header with the right size at eof. The usage would be:



                      rm -f mytar
                      for db in $dbs
                      do mysqldump ... $db | gzip -c |
                      tarappend -t mytar -f mysql-backup-$db.gz
                      done
                      tar tvf mytar


                      Here's the tarappend python script:



                      #!/usr/bin/python
                      # concat stdin to end of tar file, with given name. meuh on stackexchange
                      # $Id: tarappend,v 1.3 2015/07/08 11:31:18 meuh $

                      import sys, os, tarfile, time, copy
                      from optparse import OptionParser
                      try:
                      import grp, pwd
                      except ImportError:
                      grp = pwd = None

                      usage = """%prog: ... | %prog -t tarfile -f filename
                      Appends stdin to tarfile under the given arbitrary filename.
                      tarfile is created if it does not exist.
                      """

                      def doargs():
                      parser = OptionParser(usage=usage)
                      parser.add_option("-f", "--filename", help="filename to use")
                      parser.add_option("-t", "--tarfile", help="existing tar archive")
                      (options, args) = parser.parse_args()
                      if options.filename is None or options.tarfile is None:
                      parser.error("need filename and tarfile")
                      if len(args):
                      parser.error("unknown args: "+" ".join(args))
                      return options

                      def copygetlen(fsrc, fdst):
                      """copy data from file-like object fsrc to file-like object fdst. return len"""
                      totlen = 0
                      while 1:
                      buf = fsrc.read(16*1024)
                      if not buf:
                      return totlen
                      fdst.write(buf)
I cobbled together some Python to do what you want. It uses Python's tarfile library to append stdin to a tar file, then seeks back in the tar to rewrite the member's header with the correct size once it hits EOF. The usage would be:



rm -f mytar
for db in $dbs; do
    mysqldump ... $db | gzip -c |
        tarappend -t mytar -f mysql-backup-$db.gz
done
tar tvf mytar


                      Here's the tarappend python script:



#!/usr/bin/python
# concat stdin to end of tar file, with given name. meuh on stackexchange
# $Id: tarappend,v 1.3 2015/07/08 11:31:18 meuh $

import sys, os, tarfile, time, copy
from optparse import OptionParser
try:
    import grp, pwd
except ImportError:
    grp = pwd = None

usage = """%prog: ... | %prog -t tarfile -f filename
Appends stdin to tarfile under the given arbitrary filename.
tarfile is created if it does not exist.
"""

def doargs():
    parser = OptionParser(usage=usage)
    parser.add_option("-f", "--filename", help="filename to use")
    parser.add_option("-t", "--tarfile", help="existing tar archive")
    (options, args) = parser.parse_args()
    if options.filename is None or options.tarfile is None:
        parser.error("need filename and tarfile")
    if len(args):
        parser.error("unknown args: "+" ".join(args))
    return options

def copygetlen(fsrc, fdst):
    """copy data from file-like object fsrc to file-like object fdst. return len"""
    totlen = 0
    while 1:
        buf = fsrc.read(16*1024)
        if not buf:
            return totlen
        fdst.write(buf)
        totlen += len(buf)

class TarFileStdin(tarfile.TarFile):
    def addstdin(self, tarinfo, fileobj):
        """Add stdin to archive. based on addfile() """
        self._check("aw")
        tarinfo = copy.copy(tarinfo)
        buf = tarinfo.tobuf(self.format, self.encoding, self.errors)
        bufoffset = self.offset
        self.fileobj.write(buf)
        self.offset += len(buf)

        tarinfo.size = copygetlen(fileobj, self.fileobj)
        blocks, remainder = divmod(tarinfo.size, tarfile.BLOCKSIZE)
        if remainder > 0:
            self.fileobj.write(tarfile.NUL * (tarfile.BLOCKSIZE - remainder))
            blocks += 1
        self.offset += blocks * tarfile.BLOCKSIZE
        # rewrite header with correct size
        buf = tarinfo.tobuf(self.format, self.encoding, self.errors)
        self.fileobj.seek(bufoffset)
        self.fileobj.write(buf)
        self.fileobj.seek(self.offset)
        self.members.append(tarinfo)

class TarInfoStdin(tarfile.TarInfo):
    def __init__(self, name):
        if len(name) > 100:
            raise ValueError(name+": filename too long")
        if name.endswith("/"):
            raise ValueError(name+": is a directory name")
        tarfile.TarInfo.__init__(self, name)
        self.size = 99  # placeholder; the real size is written back after copying
        self.uid = os.getuid()
        self.gid = os.getgid()
        self.mtime = time.time()
        if pwd:
            self.uname = pwd.getpwuid(self.uid)[0]
            self.gname = grp.getgrgid(self.gid)[0]

def run(tarfilename, newfilename):
    tar = TarFileStdin.open(tarfilename, 'a')
    tarinfo = TarInfoStdin(newfilename)
    tar.addstdin(tarinfo, sys.stdin)
    tar.close()

if __name__ == '__main__':
    options = doargs()
    run(options.tarfile, options.filename)






edited 22 hours ago
answered Jul 8 '15 at 11:38 by meuh
                          up vote
                          1
                          down vote













                          No, and I miss that feature so much: my question on Ask Ubuntu.



If the file to be archived is a raw stream with no filesystem metadata attached to it, tar has neither a filename nor a path with which to build its internal directory/file tree (to say the least).



I think something can be done in Perl, which has libraries dedicated to compressing/decompressing/archiving files: see if you can get what you need from a related answer on Ask Ubuntu.
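The missing-metadata point can be seen directly with Python's tarfile module (the same library the tarappend script in the other answer builds on): to archive a bare stream you have to invent the member name and size yourself. A minimal sketch, with made-up file and member names:

```python
import io
import tarfile
import time

# Hypothetical data standing in for a piped mysqldump stream.
data = b"SQL dump contents here\n"

with tarfile.open("backup.tar", "w") as tar:
    # tar records a name and a size in each member header, so for a stream
    # we must supply that metadata ourselves.
    info = tarfile.TarInfo(name="mysql-backup-mydb.sql")
    info.size = len(data)      # size must be known before the header is written
    info.mtime = time.time()
    tar.addfile(info, io.BytesIO(data))

with tarfile.open("backup.tar") as tar:
    print(tar.getnames())      # ['mysql-backup-mydb.sql']
```

Note this buffers the whole stream in memory to learn its size up front; the tarappend script avoids that by writing a placeholder header and seeking back to rewrite it after the copy.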






edited Apr 13 '17 at 12:22 by Community
answered Jul 8 '15 at 7:55 by kos




















                                  up vote
                                  0
                                  down vote













You could consider using the tardy tar post-processor.



However, you might question the use of tar and consider other ways to archive your data; in particular, look at rsync and afio.



Notice that mysqldump understands the --export-all option (see this); you might pipe that into some script that understands the boundaries, etc.






edited May 23 '17 at 12:40 by Community
answered Jul 8 '15 at 7:44 by Basile Starynkevitch



























                                           
