Turn off buffering in pipe

Score: 337, favorites: 148












I have a script which calls two commands:



long_running_command | print_progress


The long_running_command prints progress, but I'm unhappy with it. I'm using print_progress to make it nicer (namely, I print the progress on a single line).



The problem: connecting a pipe to stdout also activates a 4K buffer, so the nice print program gets nothing ... nothing ... nothing ... a whole lot ... :)



How can I disable the 4K buffer for the long_running_command (no, I don't have the source)?




















migrated from stackoverflow.com Nov 25 '11 at 22:03


This question came from our site for professional and enthusiast programmers.










  • 1




    So when you run long_running_command without piping you can see the progress updates properly, but when piping they get buffered?
    – second
    Jun 16 '09 at 10:58






  • 1




    Yes, that's exactly what happens.
    – Aaron Digulla
    Jun 16 '09 at 11:50






  • 17




    The inability to control buffering in a simple way has been a problem for decades. For example, see: marc.info/?l=glibc-bug&m=98313957306297&w=4 which basically says "I can't be arsed doing this and here's some clap-trap to justify my position"
    – Adrian Pronk
    Oct 19 '10 at 21:59






  • 2




    serverfault.com/a/589614/67097
    – Nakilon
    Feb 9 '15 at 9:08






  • 1




    It is actually stdio not the pipe that causes a delay while waiting for enough data. Pipes do have a capacity, but as soon as there is any data written to the pipe, it is immediately ready to read at the other end.
    – Sam Watkins
    Dec 16 '16 at 11:37
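Sam Watkins' point can be checked directly: a pipe delivers data to the reader as soon as it is written, so the "nothing ... then a whole lot" effect has to come from stdio inside the writer. A minimal sketch (shell builtins flush after every write, so no stdio buffering interferes):

```shell
# printf's output crosses the pipe immediately; head sees the first
# line at once, with no 4K threshold involved.
printf 'one\ntwo\nthree\n' | head -n 1
```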














edited Dec 29 '16 at 23:06
asked Jun 16 '09 at 10:27 by Aaron Digulla

13 Answers

















Score: 210 (accepted). Answered Jun 16 '09 at 11:03 by cheduardo; edited Jul 1 '14 at 14:55 by mik.










You can use the expect command unbuffer, e.g.



unbuffer long_running_command | print_progress


unbuffer connects to long_running_command via a pseudoterminal (pty), which makes the system treat it as an interactive process and therefore skip the 4-KiB pipeline buffering that is the likely cause of the delay.



For longer pipelines, you may have to unbuffer each command (except the final one), e.g.



unbuffer x | unbuffer -p y | z
























  • 3




    In fact, the use of a pty to connect to interactive processes is true of expect in general.
    – cheduardo
    Jun 17 '09 at 7:58






  • 12




    When pipelining calls to unbuffer, you should use the -p argument so that unbuffer reads from stdin.
    – Chris Conway
    Oct 6 '09 at 20:18






  • 24




    Note: On Debian systems, this is called expect_unbuffer and is in the expect-dev package, not the expect package
    – bdonlan
    Jan 24 '11 at 11:14






  • 3




    @bdonlan: At least on Ubuntu (debian-based), expect-dev provides both unbuffer and expect_unbuffer (the former is a symlink to the latter). The links are available since expect 5.44.1.14-1 (2009).
    – jfs
    Apr 11 '13 at 13:00






  • 1




    Note: On Ubuntu 14.04.x systems, it's also in the expect-dev package.
    – Alexandre Mazel
    Dec 11 '15 at 16:08

















Score: 396













Another way to skin this cat is to use the stdbuf program, which is part of the GNU Coreutils (FreeBSD also has its own).



stdbuf -i0 -o0 -e0 command


This turns off buffering completely for input, output and error. For some applications, line buffering may be more suitable for performance reasons:



stdbuf -oL -eL command


Note that it only works for stdio buffering (printf(), fputs(), ...) in dynamically linked applications, and only if the application doesn't otherwise adjust the buffering of its standard streams itself, though that should cover most applications.
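As a concrete sketch of the approach (assuming GNU coreutils' stdbuf and grep are available): force grep into line-buffered output inside a pipeline. The pipeline's final output is unchanged; only the timing of the writes differs:

```shell
# With -oL, grep flushes each matching line as it is produced instead
# of waiting for a 4K block to fill.
printf 'foo\nbar\nfoo\n' | stdbuf -oL grep foo
```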

























  • 23




    Why doesn't this get more upvotes? It's a better solution IMHO. Does it have any downsides?
    – qdii
    May 12 '13 at 11:39






  • 5




    "unbuffer" needs to be installed in Ubuntu, which is inside the package: expect-dev which is 2MB...
    – lepe
    Jun 27 '13 at 6:21







  • 12




    @qdii stdbuf does not work with tee, because tee overwrites the defaults set by stdbuf. See the manual page of stdbuf.
    – ceving
    Jun 30 '14 at 11:51






  • 5




    @lepe Bizarrely, unbuffer has dependencies on x11 and tcl/tk, meaning it actually needs >80 MB if you're installing it on a server without them.
    – jpatokal
    Aug 28 '14 at 12:27






  • 8




    @qdii stdbuf uses the LD_PRELOAD mechanism to insert its own dynamically loaded library libstdbuf.so. This means that it will not work with these kinds of executables: with setuid or file capabilities set, statically linked, not using standard libc. In these cases it is better to use the solutions with unbuffer / script / socat. See also stdbuf with setuid/capabilities.
    – pabouk
    Oct 12 '15 at 9:20


















Score: 60













Yet another way to turn on line-buffered output for long_running_command is to use the script command, which runs your long_running_command in a pseudo terminal (pty).



script -q /dev/null long_running_command | print_progress # FreeBSD, Mac OS X
script -c "long_running_command" /dev/null | print_progress # Linux






















  • 12




    +1 nice trick, since script is such an old command, it should be available on all Unix-like platforms.
    – Aaron Digulla
    Jan 20 '13 at 13:01






  • 4




    you also need -q on Linux: script -q -c 'long_running_command' /dev/null | print_progress
    – jfs
    Apr 11 '13 at 12:51






  • 1




    It seems like script reads from stdin, which makes it impossible to run such a long_running_command in the background, at least when started from an interactive terminal. To work around this, I redirected stdin from /dev/null, since my long_running_command doesn't use stdin.
    – haridsv
    Nov 15 '13 at 12:44







  • 1




    Even works on Android.
    – not2qubit
    Jul 2 '14 at 23:36






  • 2




    One significant disadvantage: ctrl-z no longer works (i.e. I can't suspend the script). This can be fixed by, for example: echo | sudo script -c /usr/local/bin/ec2-snapshot-all /dev/null | ts , if you don't mind not being able to interact with the program.
    – rlpowell
    Jul 24 '15 at 0:03

















Score: 54













For grep, sed and awk you can force output to be line buffered. You can use:



grep --line-buffered


Force output to be line buffered.  By default, output is line buffered when standard output is a terminal and block buffered otherwise.



sed -u


Make output line buffered.



See this page for more information:
http://www.perkin.org.uk/posts/how-to-fix-stdio-buffering.html
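awk itself has no line-buffering flag, but POSIX awk provides fflush(), which can be called after every output line for the same effect; a small sketch:

```shell
# Flush awk's stdout after each record so a downstream pipe reader
# sees every line immediately rather than in 4K chunks.
printf 'a\nb\n' | awk '{ print toupper($0); fflush() }'
```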




































    Score: 45













    If it is a problem with libc modifying its buffering/flushing when output does not go to a terminal, you should try socat. You can create a bidirectional stream between almost any kind of I/O mechanism. One of those is a forked program speaking to a pseudo tty.



     socat EXEC:long_running_command,pty,ctty STDIO 


    What it does is



    • create a pseudo tty

    • fork long_running_command with the slave side of the pty as stdin/stdout

    • establish a bidirectional stream between the master side of the pty and the second address (here it is STDIO)

    If this gives you the same output as long_running_command, then you can continue with a pipe.



    Edit: I did not see the unbuffer answer! Well, socat is a great tool anyway, so I might just leave this answer.























    • 1




      ...and I didn't know about socat - looks kinda like netcat only perhaps more so. ;) Thanks and +1.
      – cheduardo
      Jun 20 '09 at 9:32






    • 1




      I'd use socat -u exec:long_running_command,pty,end-close - here
      – Stéphane Chazelas
      Aug 7 '15 at 13:10

















    Score: 17













    You can use



    long_running_command 1>&2 |& print_progress


    The problem is that libc line-buffers when stdout goes to a screen and fully buffers when stdout goes to a file or pipe, but doesn't buffer stderr at all.



    I don't think it's a problem with the pipe buffer; it's all about libc's buffering policy.
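The "no buffering for stderr" part is easy to verify: a message written to stderr crosses a pipe immediately, provided the pipe actually captures fd 2. A minimal sketch:

```shell
# Route stderr into the pipe with 2>&1; the message reaches cat at
# once, because stdio never buffers stderr.
{ echo "progress 1" >&2; } 2>&1 | cat
```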





























    • You're right; my question is still: How can I influence libc's buffer policy without recompiling?
      – Aaron Digulla
      Apr 4 '14 at 8:55










    • @StéphaneChazelas fd 1 will be redirected to stderr
      – Wang HongQin
      Aug 7 '15 at 9:26










    • @StéphaneChazelas I don't get your point. Please do a test; it works.
      – Wang HongQin
      Aug 7 '15 at 9:53






    • 3




      OK, what's happening is that with both zsh (where |& comes from adapted from csh) and bash, when you do cmd1 >&2 |& cmd2, both fd 1 and 2 are connected to the outer stdout. So it works at preventing buffering when that outer stdout is a terminal, but only because the output doesn't go through the pipe (so print_progress prints nothing). So it's the same as long_running_command & print_progress (except that print_progress stdin is a pipe that has no writer). You can verify with ls -l /proc/self/fd >&2 |& cat compared to ls -l /proc/self/fd |& cat.
      – Stéphane Chazelas
      Aug 7 '15 at 10:44







    • 1




      That's because |& is short for 2>&1 |, literally. So cmd1 |& cmd2 is cmd1 1>&2 2>&1 | cmd2. So, both fd 1 and 2 end up connected to the original stderr, and nothing is left writing to the pipe. (s/outer stdout/outer stderr/g in my previous comment).
      – Stéphane Chazelas
      Aug 7 '15 at 10:48


















    Score: 10













    It used to be the case, and probably still is the case, that when standard output is written to a terminal, it is line buffered by default - when a newline is written, the line is written to the terminal. When standard output is sent to a pipe, it is fully buffered - so the data is only sent to the next process in the pipeline when the standard I/O buffer is filled.



    That's the source of the trouble. I'm not sure whether there is much you can do to fix it without modifying the program writing into the pipe. You could use the setvbuf() function with the _IOLBF flag to unconditionally put stdout into line buffered mode. But I don't see an easy way to enforce that on a program. Or the program can do fflush() at appropriate points (after each line of output), but the same comment applies.



    I suppose that if you replaced the pipe with a pseudo-terminal, then the standard I/O library would think the output was a terminal (because it is a type of terminal) and would line buffer automatically. That is a complex way of dealing with things, though.


































      Score: 6













      I know this is an old question and it already has a lot of answers, but if you wish to avoid the buffer problem, just try something like:



      stdbuf -oL tail -f /var/log/messages | tee -a /home/your_user_here/logs.txt


      This will output the logs in real time and also save them to the logs.txt file, and the buffer will no longer affect the tail -f command.

























      • 3




        This looks like the second answer :-/
        – Aaron Digulla
        Aug 11 '15 at 11:17






      • 1




    stdbuf is included in GNU coreutils (I verified on the latest version, 8.25). Verified this works on an embedded Linux.
        – zhaorufei
        Jul 20 '16 at 10:18

















      Score: 4













      I don't think the problem is with the pipe. It sounds like your long-running process is not flushing its own buffer frequently enough. Changing the pipe's buffer size would be a hack to get around it, but I don't think it's possible without rebuilding the kernel - something you wouldn't want to do as a hack, as it would probably adversely affect a lot of other processes.























      • 15




        The root cause is that libc switches to 4k buffering if stdout is not a tty.
        – Aaron Digulla
        Jun 16 '09 at 11:50






      • 5




        That is very interesting! Pipes don't cause any buffering. They provide buffering, but if you read from a pipe, you get whatever data is available; you don't have to wait for a buffer in the pipe to fill. So the culprit would be the stdio buffering in the application.
        – shodanex
        Jun 16 '09 at 13:58

















      Score: 2













      In a similar vein to chad's answer, you can write a little script like this:



      # save as ~/bin/scriptee, or so
      script -q /dev/null sh -c 'exec cat > /dev/null'


      Then use this scriptee command as a replacement for tee.



      my-long-running-command | scriptee


      Alas, I can't seem to get a version like that to work perfectly on Linux, so it seems limited to BSD-style Unixes.



      On Linux, this is close, but you don't get your prompt back when it finishes (until you press enter, etc)...



      script -q -c 'cat > /proc/self/fd/1' /dev/null




























      • Why does that work? Does "script" turn off buffering?
        – Aaron Digulla
        Dec 6 '16 at 10:09










      • @Aaron Digulla: script emulates a terminal, so yes, I believe it turns off buffering. It also echoes back each character sent to it - which is why cat is sent to /dev/null in the example. As far as the program running inside script is concerned, it is talking to an interactive session. I believe it's similar to expect in this regard, but script likely is part of your base system.
        – jwd
        Dec 7 '16 at 18:54


















      Score: 1













      According to this post here, you could try reducing the pipe ulimit to a single 512-byte block. It certainly won't turn off buffering, but well, 512 bytes is way less than 4K :3




































        Score: 1













        I found this clever solution: (echo -e "cmd 1\ncmd 2" && cat) | ./shell_executable



        This does the trick. cat will read additional input (until EOF) and pass that to the pipe after the echo has put its arguments into the input stream of shell_executable.
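The trick can be sketched with a trivial consumer standing in for shell_executable (the names here are illustrative; cat is the real command):

```shell
# The subshell first writes the two prepared lines to the pipe, then
# cat relays whatever arrives on the outer stdin ("later") until EOF.
printf 'later\n' | (echo "first"; echo "second"; cat)
```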





























        • Actually, cat doesn't see the output of the echo; you just run two commands in a subshell and the output of both is sent into the pipe. The second command in the subshell ('cat') reads from the parent/outer stdin; that's why it works.
          – Aaron Digulla
          Nov 9 '16 at 11:19

















        Score: -1













        According to this, the pipe buffer size seems to be set in the kernel and would require you to recompile your kernel to alter it.























        • 7




          I believe that is a different buffer.
          – Samuel Edwin Ward
          Jan 8 '13 at 21:58











        13 Answers
        13






        active

        oldest

        votes








        13 Answers
        13






        active

        oldest

        votes









        active

        oldest

        votes






        active

        oldest

        votes








        up vote
        210
        down vote



        accepted










        You can use the expect command unbuffer, e.g.



        unbuffer long_running_command | print_progress


        unbuffer connects to long_running_command via a pseudoterminal (pty), which makes the system treat it as an interactive process, therefore not using the 4-kiB buffering in the pipeline that is the likely cause of the delay.



        For longer pipelines, you may have to unbuffer each command (except the final one), e.g.



        unbuffer x | unbuffer -p y | z





        share|improve this answer



















        • 3




          In fact, the use of a pty to connect to interactive processes is true of expect in general.
          – cheduardo
          Jun 17 '09 at 7:58






        • 12




          When pipelining calls to unbuffer, you should use the -p argument so that unbuffer reads from stdin.
          – Chris Conway
          Oct 6 '09 at 20:18






        • 24




          Note: On debian systems, this is called expect_unbuffer and is in the expect-dev package, not the expect package
          – bdonlan
          Jan 24 '11 at 11:14






        • 3




          @bdonlan: At least on Ubuntu (debian-based), expect-dev provides both unbuffer and expect_unbuffer (the former is a symlink to the latter). The links are available since expect 5.44.1.14-1 (2009).
          – jfs
          Apr 11 '13 at 13:00






        • 1




          Note: On Ubuntu 14.04.x systems, it's also in the expect-dev package.
          – Alexandre Mazel
          Dec 11 '15 at 16:08














        up vote
        210
        down vote



        accepted










        You can use the expect command unbuffer, e.g.



        unbuffer long_running_command | print_progress


        unbuffer connects to long_running_command via a pseudoterminal (pty), which makes the system treat it as an interactive process, therefore not using the 4-kiB buffering in the pipeline that is the likely cause of the delay.



        For longer pipelines, you may have to unbuffer each command (except the final one), e.g.



        unbuffer x | unbuffer -p y | z





        share|improve this answer



















        • 3




          In fact, the use of a pty to connect to interactive processes is true of expect in general.
          – cheduardo
          Jun 17 '09 at 7:58






        • 12




          When pipelining calls to unbuffer, you should use the -p argument so that unbuffer reads from stdin.
          – Chris Conway
          Oct 6 '09 at 20:18






        • 24




          Note: On debian systems, this is called expect_unbuffer and is in the expect-dev package, not the expect package
          – bdonlan
          Jan 24 '11 at 11:14






        • 3




          @bdonlan: At least on Ubuntu (debian-based), expect-dev provides both unbuffer and expect_unbuffer (the former is a symlink to the latter). The links are available since expect 5.44.1.14-1 (2009).
          – jfs
          Apr 11 '13 at 13:00






        • 1




          Note: On Ubuntu 14.04.x systems, it's also in the expect-dev package.
          – Alexandre Mazel
          Dec 11 '15 at 16:08












        up vote
        210
        down vote



        accepted







        up vote
        210
        down vote



        accepted






        You can use the expect command unbuffer, e.g.



        unbuffer long_running_command | print_progress


        unbuffer connects to long_running_command via a pseudoterminal (pty), which makes the system treat it as an interactive process, therefore not using the 4-kiB buffering in the pipeline that is the likely cause of the delay.



        For longer pipelines, you may have to unbuffer each command (except the final one), e.g.



        unbuffer x | unbuffer -p y | z





        share|improve this answer















        You can use the expect command unbuffer, e.g.



        unbuffer long_running_command | print_progress


        unbuffer connects to long_running_command via a pseudoterminal (pty), which makes the system treat it as an interactive process, therefore not using the 4-kiB buffering in the pipeline that is the likely cause of the delay.



        For longer pipelines, you may have to unbuffer each command (except the final one), e.g.



        unbuffer x | unbuffer -p y | z






        share|improve this answer















        share|improve this answer



        share|improve this answer








        edited Jul 1 '14 at 14:55









        mik

        830613




        830613











        answered Jun 16 '09 at 11:03







        cheduardo














        • 3




          In fact, the use of a pty to connect to interactive processes is true of expect in general.
          – cheduardo
          Jun 17 '09 at 7:58






        • 12




          When pipelining calls to unbuffer, you should use the -p argument so that unbuffer reads from stdin.
          – Chris Conway
          Oct 6 '09 at 20:18






        • 24




          Note: On debian systems, this is called expect_unbuffer and is in the expect-dev package, not the expect package
          – bdonlan
          Jan 24 '11 at 11:14






        • 3




          @bdonlan: At least on Ubuntu (debian-based), expect-dev provides both unbuffer and expect_unbuffer (the former is a symlink to the latter). The links are available since expect 5.44.1.14-1 (2009).
          – jfs
          Apr 11 '13 at 13:00






        • 1




          Note: On Ubuntu 14.04.x systems, it's also in the expect-dev package.
          – Alexandre Mazel
          Dec 11 '15 at 16:08












        • 3




          In fact, the use of a pty to connect to interactive processes is true of expect in general.
          – cheduardo
          Jun 17 '09 at 7:58






        • 12




          When pipelining calls to unbuffer, you should use the -p argument so that unbuffer reads from stdin.
          – Chris Conway
          Oct 6 '09 at 20:18






        • 24




          Note: On debian systems, this is called expect_unbuffer and is in the expect-dev package, not the expect package
          – bdonlan
          Jan 24 '11 at 11:14






        • 3




          @bdonlan: At least on Ubuntu (debian-based), expect-dev provides both unbuffer and expect_unbuffer (the former is a symlink to the latter). The links are available since expect 5.44.1.14-1 (2009).
          – jfs
          Apr 11 '13 at 13:00






        • 1




          Note: On Ubuntu 14.04.x systems, it's also in the expect-dev package.
          – Alexandre Mazel
          Dec 11 '15 at 16:08







        up vote
        396
        down vote













        Another way to skin this cat is to use the stdbuf program, which is part of GNU Coreutils (FreeBSD also ships one of its own).



        stdbuf -i0 -o0 -e0 command


        This turns off buffering completely for input, output and error. For some applications, line buffering may be more suitable for performance reasons:



        stdbuf -oL -eL command


        Note that stdbuf only affects stdio buffering (printf(), fputs(), ...) in dynamically linked applications, and only if the application does not itself adjust the buffering of its standard streams, though that should cover most applications.
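        As a runnable sketch of the idea (the shell loop and cat -n below are stand-ins for the question's hypothetical long_running_command and print_progress):

```shell
# Wrap only the producer: -oL switches its stdout to line buffering,
# so each progress line crosses the pipe as soon as it ends in a
# newline instead of sitting in a 4K stdio buffer.
stdbuf -oL sh -c 'for i in 1 2 3; do echo "progress $i"; done' | cat -n
```

        With the real commands this reads stdbuf -oL long_running_command | print_progress.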






        • 23




          Why doesn’t this get more upvotes? it's a better solution imho. Does it have any downsides?
          – qdii
          May 12 '13 at 11:39






        • 5




          "unbuffer" needs to be installed in Ubuntu, which is inside the package: expect-dev which is 2MB...
          – lepe
          Jun 27 '13 at 6:21







        • 12




          @qdii stdbuf does not work with tee, because tee overwrites the defaults set by stdbuf. See the manual page of stdbuf.
          – ceving
          Jun 30 '14 at 11:51






        • 5




          @lepe Bizarrely, unbuffer has dependencies on x11 and tcl/tk, meaning it actually needs >80 MB if you're installing it on a server without them.
          – jpatokal
          Aug 28 '14 at 12:27






        • 8




          @qdii stdbuf uses LD_PRELOAD mechanism to insert its own dynamically loaded library libstdbuf.so. This means that it will not work with these kinds executables: with setuid or file capabilities set, statically linked, not using standard libc. In these cases it is better to use the solutions with unbuffer / script / socat. See also stdbuf with setuid/capabilities.
          – pabouk
          Oct 12 '15 at 9:20















        edited Jan 25 at 12:05 – StackzOfZtuff
        answered Jun 19 '11 at 7:12 – a3nm







        up vote
        60
        down vote













        Yet another way to get line-buffered output from the long_running_command is to use the script command, which runs your long_running_command in a pseudo terminal (pty).



        script -q /dev/null long_running_command | print_progress # FreeBSD, Mac OS X
        script -c "long_running_command" /dev/null | print_progress # Linux
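        A minimal runnable sketch of the Linux form (echo stands in for long_running_command and cat -n for print_progress; stdin is redirected from /dev/null because script itself reads stdin, and the pty inserts a carriage return before each newline):

```shell
# Run the producer under a pty so its libc sees a terminal and
# line-buffers; strip the \r the pty adds before handing lines on.
script -q -c 'echo hello from a pty' /dev/null </dev/null | tr -d '\r' | cat -n
```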





        • 12




          +1 nice trick, since script is such an old command, it should be available on all Unix-like platforms.
          – Aaron Digulla
          Jan 20 '13 at 13:01






        • 4




          you also need -q on Linux: script -q -c 'long_running_command' /dev/null | print_progress
          – jfs
          Apr 11 '13 at 12:51






        • 1




          It seems like script reads from stdin, which makes it impossible to run such a long_running_command in the background, at least when started from interactive terminal. To workaround, I was able to redirect stdin from /dev/null, since my long_running_command doesn't use stdin.
          – haridsv
          Nov 15 '13 at 12:44







        • 1




          Even works on Android.
          – not2qubit
          Jul 2 '14 at 23:36






        • 2




          One significant disadvantage: ctrl-z no longer works (i.e. I can't suspend the script). This can be fixed by, for example: echo | sudo script -c /usr/local/bin/ec2-snapshot-all /dev/null | ts , if you don't mind not being able to interact with the program.
          – rlpowell
          Jul 24 '15 at 0:03














        answered Jan 19 '13 at 13:05 – chad






        up vote
        54
        down vote













        For grep, sed and awk you can force output to be line buffered. You can use:



        grep --line-buffered


        Force output to be line buffered.  By default, output is line buffered when standard output is a terminal and block buffered otherwise.



        sed -u


        Make output line buffered.



        See this page for more information:
        http://www.perkin.org.uk/posts/how-to-fix-stdio-buffering.html
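        For example, a self-contained pipeline (printf stands in for the long-running producer; note that sed -u is a GNU extension):

```shell
# grep flushes each matching line immediately, and sed -u does the
# same for its output, so prefixed lines appear as they are produced.
printf 'progress 1\nnoise\nprogress 2\n' \
  | grep --line-buffered 'progress' \
  | sed -u 's/^/[job] /'
# prints:
# [job] progress 1
# [job] progress 2
```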






            edited Oct 21 '14 at 12:28 – Braiam
            answered Oct 31 '12 at 16:00 – yaneku















                up vote
                45
                down vote













                If the problem is libc modifying its buffering/flushing behaviour when output does not go to a terminal, you should try socat. You can create a bidirectional stream between almost any kind of I/O mechanism. One of those is a forked program speaking to a pseudo tty.



                 socat EXEC:long_running_command,pty,ctty STDIO 


                What it does is:



                • create a pseudo tty

                • fork long_running_command with the slave side of the pty as stdin/stdout

                • establish a bidirectional stream between the master side of the pty and the second address (here STDIO)

                If this gives you the same output as long_running_command, then you can continue with a pipe.



                Edit: I did not see the unbuffer answer! Well, socat is a great tool anyway, so I might just leave this answer here.
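                A minimal runnable sketch of the same idea (echo stands in for long_running_command; -u makes the stream one-way, and the \r added by the pty is stripped before further processing):

```shell
# The child runs with a pty as its stdout, so libc inside it
# line-buffers; socat relays the pty's master side to our stdout.
socat -u EXEC:'echo hello via pty',pty STDIO | tr -d '\r'
```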






                • 1




                  ...and I didn't know about socat - looks kinda like netcat only perhaps more so. ;) Thanks and +1.
                  – cheduardo
                  Jun 20 '09 at 9:32






                • 1




                  I'd use socat -u exec:long_running_command,pty,end-close - here
                  – Stéphane Chazelas
                  Aug 7 '15 at 13:10














                answered Jun 17 '09 at 7:21 – shodanex







                up vote
                17
                down vote













                You can use



                long_running_command 1>&2 |& print_progress


                The problem is that libc line-buffers stdout when it goes to a screen and fully buffers it when it goes to a file, but never buffers stderr.



                I don't think the pipe buffer is the problem here; it's all about libc's buffering policy.






                • You're right; my question is still: How can I influence libc's buffer policy without recompiling?
                  – Aaron Digulla
                  Apr 4 '14 at 8:55










                • @StéphaneChazelas fd1 will redirected to stderr
                  – Wang HongQin
                  Aug 7 '15 at 9:26










                • @StéphaneChazelas i dont get your arguing point. plz do a test, it works
                  – Wang HongQin
                  Aug 7 '15 at 9:53






                • 3




                  OK, what's happening is that with both zsh (where |& comes from adapted from csh) and bash, when you do cmd1 >&2 |& cmd2, both fd 1 and 2 are connected to the outer stdout. So it works at preventing buffering when that outer stdout is a terminal, but only because the output doesn't go through the pipe (so print_progress prints nothing). So it's the same as long_running_command & print_progress (except that print_progress stdin is a pipe that has no writer). You can verify with ls -l /proc/self/fd >&2 |& cat compared to ls -l /proc/self/fd |& cat.
                  – Stéphane Chazelas
                  Aug 7 '15 at 10:44







                • 1




                  That's because |& is short for 2>&1 |, literally. So cmd1 |& cmd2 is cmd1 1>&2 2>&1 | cmd2. So, both fd 1 and 2 end up connected to the original stderr, and nothing is left writing to the pipe. (s/outer stdout/outer stderr/g in my previous comment).
                  – Stéphane Chazelas
                  Aug 7 '15 at 10:48
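Stéphane's claim is easy to check in bash (a minimal sketch; `echo hi` stands in for the real command):

```shell
# With plain |&, output travels through the pipe and cat prints it.
through_pipe=$(bash -c 'echo hi |& cat' 2>/dev/null)    # "hi"
# With the extra 1>&2, both fd 1 and fd 2 end up on the original stderr,
# so nothing is left writing to the pipe and cat prints nothing.
bypassed=$(bash -c 'echo hi 1>&2 |& cat' 2>/dev/null)   # ""
printf 'through_pipe=[%s] bypassed=[%s]\n' "$through_pipe" "$bypassed"
```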















                edited Apr 4 '14 at 6:18


























                answered Apr 4 '14 at 6:10









                Wang HongQin

















                up vote
                10
                down vote













                It used to be the case, and probably still is the case, that when standard output is written to a terminal, it is line buffered by default - when a newline is written, the line is written to the terminal. When standard output is sent to a pipe, it is fully buffered - so the data is only sent to the next process in the pipeline when the standard I/O buffer is filled.



                That's the source of the trouble. I'm not sure whether there is much you can do to fix it without modifying the program writing into the pipe. You could use the setvbuf() function with the _IOLBF flag to unconditionally put stdout into line buffered mode. But I don't see an easy way to enforce that on a program. Or the program can do fflush() at appropriate points (after each line of output), but the same comment applies.



                I suppose that if you replaced the pipe with a pseudo-terminal, then the standard I/O library would think the output was a terminal (because it is a type of terminal) and would line buffer automatically. That is a complex way of dealing with things, though.
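The pseudo-terminal approach described above can be tried without writing any C: on Linux, util-linux `script` allocates the pty for you. A hedged sketch, reusing `long_running_command` and `print_progress` from the question (the flags are util-linux's; BSD `script` takes different options):

```shell
# Run the producer under a pty so stdio sees a terminal and line-buffers.
#   -q  quiet (no "Script started" banner)
#   -f  flush output after each write
#   -c  command to run instead of an interactive shell
script -qfc 'long_running_command' /dev/null | print_progress
```

Note that `script` emits CRLF line endings, so the consumer may want to strip the `\r`.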































                    answered Jun 17 '09 at 0:47









                    Jonathan Leffler





















                        up vote
                        6
                        down vote













                        I know this is an old question and already has a lot of answers, but if you wish to avoid the buffering problem, just try something like:



                        stdbuf -oL tail -f /var/log/messages | tee -a /home/your_user_here/logs.txt


                        This will output the logs in real time and also save them to the logs.txt file, and buffering will no longer affect the tail -f command.
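Applied to the pipeline from the original question, the same idea looks like this (a sketch; `stdbuf` is GNU coreutils and only affects programs that use stdio and don't reset their own buffering):

```shell
# Ask glibc to line-buffer the producer's stdout instead of using the
# default 4K block buffer it gets when stdout is a pipe.
stdbuf -oL long_running_command | print_progress
```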

























                        • 3




                          This looks like the second answer :-/
                          – Aaron Digulla
                          Aug 11 '15 at 11:17






                        • 1




                          stdbuf is included in GNU coreutils (I verified on the latest version, 8.25). Verified this works on an embedded Linux.
                          – zhaorufei
                          Jul 20 '16 at 10:18














                        edited Aug 10 '15 at 3:57









                        Alois Mahdal












                        answered Aug 7 '15 at 8:31









                        Marin Nedea

                        up vote
                        4
                        down vote













                        I don't think the problem is with the pipe. It sounds like your long-running process is not flushing its own buffer frequently enough. Changing the pipe's buffer size would be a hack to get around it, but I don't think it's possible without rebuilding the kernel - something you wouldn't want to do as a hack, as it would probably adversely affect a lot of other processes.























                        • 15




                          The root cause is that libc switches to 4k buffering if the stdout is not a tty.
                          – Aaron Digulla
                          Jun 16 '09 at 11:50






                        • 5




                          That is very interesting! Pipes don't cause any buffering. They provide buffering, but if you read from a pipe, you get whatever data is available; you don't have to wait for a buffer in the pipe to fill. So the culprit is the stdio buffering in the application.
                          – shodanex
                          Jun 16 '09 at 13:58














                        answered Jun 16 '09 at 10:45







                        anon














                        up vote
                        2
                        down vote













                        In a similar vein to chad's answer, you can write a little script like this:



                        # save as ~/bin/scriptee, or so
                        script -q /dev/null sh -c 'exec cat > /dev/null'


                        Then use this scriptee command as a replacement for tee.



                        my-long-running-command | scriptee


                        Alas, I can't seem to get a version like that to work perfectly on Linux, so it seems limited to BSD-style Unixes.



                        On Linux, this is close, but you don't get your prompt back when it finishes (until you press enter, etc)...



                        script -q -c 'cat > /proc/self/fd/1' /dev/null




























                        • Why does that work? Does "script" turn off buffering?
                          – Aaron Digulla
                          Dec 6 '16 at 10:09










                        • @Aaron Digulla: script emulates a terminal, so yes, I believe it turns off buffering. It also echoes back each character sent to it - which is why cat is sent to /dev/null in the example. As far as the program running inside script is concerned, it is talking to an interactive session. I believe it's similar to expect in this regard, but script likely is part of your base system.
                          – jwd
                          Dec 7 '16 at 18:54















                        edited Apr 13 '17 at 12:36
                        Community♦
                        answered Nov 18 '16 at 1:12
                        jwd

                        • Why does that work? Does "script" turn off buffering?
                          – Aaron Digulla
                          Dec 6 '16 at 10:09










                        • @Aaron Digulla: script emulates a terminal, so yes, I believe it turns off buffering. It also echoes back each character sent to it - which is why cat is sent to /dev/null in the example. As far as the program running inside script is concerned, it is talking to an interactive session. I believe it's similar to expect in this regard, but script likely is part of your base system.
                          – jwd
                          Dec 7 '16 at 18:54


















                        up vote
                        1
                        down vote













                        According to this post here, you could try reducing the pipe ulimit to a single 512-byte block. It certainly won't turn off buffering, but well, 512 bytes is way less than 4K :3
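In bash, the current pipe-size limit can at least be inspected; note it is reported in 512-byte blocks. On Linux the value is typically a fixed kernel constant, so attempts to lower it may simply have no effect (an assumption worth verifying on your system):

```shell
# ulimit -p reports the pipe size in 512-byte blocks;
# 8 blocks * 512 bytes = 4096 bytes on a typical Linux system.
ulimit -p
```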






                            edited Apr 13 '17 at 12:37
                            Community♦
                            answered May 20 '14 at 19:43
                            RAKK

                                up vote
                                1
                                down vote













                                I found this clever solution: (echo -e "cmd 1\ncmd 2" && cat) | ./shell_executable



                                This does the trick. cat will read additional input (until EOF) and pass that to the pipe after the echo has put its arguments into the input stream of shell_executable.
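The same pattern can be demonstrated with plain shell tools (the strings below are just for illustration): the subshell first emits its own line, then `cat` forwards whatever arrives on the outer stdin until EOF.

```shell
# The subshell writes 'first', then cat relays the outer stdin
# ('from stdin') into the same output stream until EOF.
echo 'from stdin' | (echo 'first'; cat)
# prints:
#   first
#   from stdin
```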





























                                • Actually, cat doesn't see the output of the echo; you just run two commands in a subshell and the output of both is sent into the pipe. The second command in the subshell ('cat') reads from the parent/outer stdin, that's why it works.
                                  – Aaron Digulla
                                  Nov 9 '16 at 11:19














                                edited Nov 9 '16 at 12:09
                                Aaron Digulla
                                answered Nov 4 '16 at 11:01
                                jaggedsoft











                                up vote
                                -1
                                down vote













                                According to this, the pipe buffer size seems to be set in the kernel and would require recompiling your kernel to alter.
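On Linux since 2.6.35 this is no longer strictly true: a pipe's capacity can be changed per-pipe with fcntl(F_SETPIPE_SZ), no kernel rebuild needed, up to a system-wide cap. Bear in mind this is the pipe's capacity, a different buffer from the 4K stdio buffer the question is about. A Linux-specific sketch of inspecting that cap:

```shell
# Linux-specific: the unprivileged upper bound for
# fcntl(F_SETPIPE_SZ) is exposed via procfs.
cat /proc/sys/fs/pipe-max-size
```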























                                • 7




                                  I believe that is a different buffer.
                                  – Samuel Edwin Ward
                                  Jan 8 '13 at 21:58














                                answered Jun 16 '09 at 10:41







                                second


























                                 
