The process substitution output is out of order

The



echo one; echo two > >(cat); echo three; 


command gives unexpected output.



I read this: How process substitution is implemented in bash? and many other articles about process substitution on the internet, but I don't understand why it behaves this way.



Expected output:



one
two
three


Real output:



prompt$ echo one; echo two > >(cat); echo three;
one
three
prompt$ two


Also, these two commands should be equivalent from my point of view, but they aren't:



##### first command - the pipe is used.
prompt$ seq 1 5 | cat
1
2
3
4
5
##### second command - the process substitution and redirection are used.
prompt$ seq 1 5 > >(cat)
prompt$ 1
2
3
4
5


Why do I think they should be the same? Because both connect the seq output to the cat input through an anonymous pipe (Wikipedia, Process substitution).



Question: Why does it behave this way? Where is my error? A comprehensive answer is desired (with an explanation of how bash does it under the hood).







asked Nov 10 '17 at 16:47 by MiniMax, edited Nov 10 '17 at 17:29 by Jeff Schaller







  • 1




    Even if it's not so clear at first sight, it's actually a duplicate of bash wait for process in process substitution even if command is invalid
    – Stéphane Chazelas
    Nov 10 '17 at 16:52







  • 1




    Actually, it would be better if that other question was marked as a duplicate to this one as this one is more to the point. Which is why I copied my answer there.
    – Stéphane Chazelas
    Nov 10 '17 at 17:00













2 Answers
Accepted answer, 16 votes, answered Nov 10 '17 at 16:59 by Stéphane Chazelas

Yes, in bash like in ksh (where the feature comes from), the processes inside the process substitution are not waited for (before running the next command in the script).



For a <(...) one, that's usually fine, as in:



cmd1 <(cmd2)


the shell will be waiting for cmd1 and cmd1 will be typically waiting for cmd2 by virtue of it reading until end-of-file on the pipe that is substituted, and that end-of-file typically happens when cmd2 dies. That's the same reason several shells (not bash) don't bother waiting for cmd2 in cmd2 | cmd1.
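
For example, a quick way to see that ordering (a minimal sketch, reusing the question's seq):

# cat reads the substituted pipe until end-of-file, and that EOF only
# arrives once seq has exited, so "done" is printed after 1 to 5.
cat <(seq 1 5); echo done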



For cmd1 >(cmd2), however, that's generally not the case, as there it's rather cmd2 that typically waits for cmd1, so it will generally exit after it.



That's fixed in zsh, which waits for cmd2 there (but not if you write it as cmd1 > >(cmd2) and cmd1 is not a builtin; use { cmd1; } > >(cmd2) instead, as documented).
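
In zsh that braces form looks like this (a sketch; /bin/echo is used only to have an external command):

# Without the braces zsh would not wait for >(cat) here, since /bin/echo
# is external; with them, "two" is printed before "three".
{ /bin/echo two; } > >(cat); echo three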



ksh doesn't wait by default, but lets you wait for it with the wait builtin (it also makes the pid available in $!, though that doesn't help if you do cmd1 >(cmd2) >(cmd3))
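
A sketch of what that looks like in ksh93, per the description above (not bash):

# ksh93 puts the pid of the >(...) process in $!, and wait can be used on it:
seq 1 5 > >(cat)
wait "$!"    # returns once the cat from the process substitution has exited
echo done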



rc (with the cmd1 >{cmd2} syntax): same as ksh, except you can get the pids of all the background processes with $apids.



es (also with cmd1 >{cmd2}) waits for cmd2 like in zsh, and also waits for cmd2 in <{cmd2} process redirections.



bash does make the pid of cmd2 (or more exactly of the subshell as it does run cmd2 in a child process of that subshell even though it's the last command there) available in $!, but doesn't let you wait for it.
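
For example (a small sketch; it only shows the pid, since per the above bash does not let you wait for it):

seq 1 5 > >(cat)
# $! holds the pid of the subshell bash spawned for >(cat):
echo "process substitution pid: $!"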



If you do have to use bash, you can work around the problem by using a command that will wait for both commands with:



 { { cmd1 >(cmd2); } 3>&1 >&4 4>&- | cat; } 4>&1


That makes both cmd1 and cmd2 have their fd 3 open to a pipe. cat will wait for end-of-file at the other end, so will typically only exit when both cmd1 and cmd2 are dead. And the shell will wait for that cat command. You could see that as a net to catch the termination of all background processes (you can use it for other things started in background like with &, coprocs or even commands that background themselves provided they don't close all their file descriptors like daemons typically do).
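
Applied to the seq example from the question, that gives something like this (a sketch; echo done stands for whatever must only run afterwards):

# The fd 3 pipe is used purely for synchronisation: the outer cat only sees
# EOF once seq, the inner cat and the wasted subshell have all exited,
# so "done" is guaranteed to come after 1 to 5.
{ { seq 1 5 > >(cat); } 3>&1 >&4 4>&- | cat; } 4>&1; echo done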



Note that thanks to that wasted subshell process mentioned above, it works even if cmd2 closes its fd 3 (commands usually don't do that, but some like sudo or ssh do). Future versions of bash may eventually do the optimisation there like in other shells. Then you'd need something like:



 { { cmd1 >(sudo cmd2; exit); } 3>&1 >&4 4>&- | cat; } 4>&1


To make sure there's still an extra shell process with that fd 3 open waiting for that sudo command.



Note that cat won't read anything (since the processes don't write on their fd 3). It's just there for synchronisation. It will do just one read() system call that will return with nothing at the end.



You can actually avoid running cat by using a command substitution to do the pipe synchronisation:



 { unused=$( { cmd1 >(cmd2); } 3>&1 >&4 4>&-); } 4>&1


This time, it's the shell instead of cat that is reading from the pipe whose other end is open on fd 3 of cmd1 and cmd2. We're using a variable assignment so the exit status of cmd1 is available in $?.
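
For example, with the question's command (a sketch; the unused variable name is arbitrary):

# The command substitution reads the fd 3 pipe until EOF, so the assignment
# only completes once seq and the inner cat are done; $? is seq's exit status.
{ unused=$( { seq 1 5 > >(cat); } 3>&1 >&4 4>&-); } 4>&1
echo "seq exited with $?"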



Or you could do the process substitution by hand, and then you could even use your system's sh as that would become standard shell syntax:



 { cmd1 /dev/fd/3 3>&1 >&4 4>&- | cmd2 4>&-; } 4>&1


though note, as mentioned earlier, that not all sh implementations would wait for cmd1 after cmd2 has finished (though that's better than the other way round). That time, $? contains the exit status of cmd2; though bash and zsh make cmd1's exit status available in ${PIPESTATUS[0]} and $pipestatus[1] respectively (see also the pipefail option in a few shells so $? can report the failure of pipe components other than the last).
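
As a small aside on $? versus PIPESTATUS in bash (a self-contained illustration):

false | cat
# $? is cat's status (0), ${PIPESTATUS[0]} is false's status (1):
echo "last: $?  first component: ${PIPESTATUS[0]}"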



Note that yash has similar issues with its process redirection feature. cmd1 >(cmd2) would be written cmd1 /dev/fd/3 3>(cmd2) there. But cmd2 is not waited for, you can't use wait to wait for it, and its pid is not made available in the $! variable either. You'd use the same workarounds as for bash.






  • Firstly, I tried echo one; { { echo two > >(cat); } 3>&1 >&4 4>&- | cat; } 4>&1; echo three;, then simplified it to echo one; echo two > >(cat) | cat; echo three; and it outputs values in the right order, too. Are all these descriptor manipulations 3>&1 >&4 4>&- necessary? Also, I don't get this >&4 4>&-: we redirect stdout to the fourth fd, then close the fourth fd, then use 4>&1 again. Why is it needed and how does it work? Maybe I should create a new question on this topic?
    – MiniMax
    Nov 10 '17 at 23:14






  • 1




    @MiniMax, but there you're affecting the stdout of cmd1 and cmd2; the point of the little dance with the file descriptors is to restore the original ones and use only the extra pipe for the waiting, instead of also channelling the output of the commands.
    – Stéphane Chazelas
    Nov 11 '17 at 2:10










  • @MiniMax It took me a while to understand, I didn't get the pipes at such a low level before. The rightmost 4>&1 creates a file descriptor (fd) 4 for the outer braces command list, and makes it equal to the outer braces' stdout. The inner braces has stdin/stdout/stderr automatically set up to connect to the outer braces. However, 3>&1 makes fd 3 connect to the outer braces' stdin. >&4 makes the inner braces' stdout connect to the outer braces' fd 4 (the one we created before). 4>&- closes fd 4 from the inner braces (since the inner braces' stdout is already connected to the outer braces' fd 4).
    – Nicholas Pipitone
    Jul 25 at 13:24











  • @MiniMax The confusing part was the right-to-left part, 4>&1 gets executed first, before the other redirects, so you don't "again use 4>&1". Overall, the inner braces is sending data to its stdout, which was overwritten with whatever fd 4 it was given. The fd 4 that the inner braces was given, is the outer braces' fd 4, which is equal to the outer braces' original stdout.
    – Nicholas Pipitone
    Jul 25 at 13:25











  • Bash makes it feel like 4>5 means "4 goes to 5", but really "fd 4 is overwritten with fd 5". And before execution, fd 0/1/2 are auto connected (Along with any fd of the outer shell), and you can overwrite them as you wish. That's at least my interpretation of the bash documentation. If you understood something else out of this, lmk.
    – Nicholas Pipitone
    Jul 25 at 13:31


















Answer, 2 votes

You can pipe the second command into another cat, which will wait until its input pipe closes. Ex:



prompt$ echo one; echo two > >(cat) | cat; echo three;
one
two
three
prompt$


Short and simple.
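
Applied to the question's second example, the same trick reads (a sketch):

# The trailing cat keeps reading until every writer of the pipe has exited,
# so the numbers are printed before the shell moves on.
seq 1 5 > >(cat) | cat; echo done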



==========



As simple as it seems, a lot is going on behind the scenes. You can ignore the rest of the answer if you aren't interested in how this works.



When you have echo two > >(cat); echo three, the >(cat) is forked off by the interactive shell and runs independently of echo two. Thus echo two finishes, and echo three gets executed, before >(cat) finishes. When bash gets data from >(cat) when it didn't expect it (a couple of milliseconds later), it gives you that prompt-like situation where you have to hit newline to get back to the terminal (same as if another user mesg'ed you).



However, given echo two > >(cat) | cat; echo three, two subshells are spawned (as per the documentation of the | symbol).



One subshell, named A, is for echo two > >(cat), and one subshell, named B, is for cat. A is automatically connected to B (A's stdout is B's stdin). Then echo two and >(cat) begin executing. >(cat)'s stdout is set to A's stdout, which is equal to B's stdin.

After echo two finishes, A exits, closing its stdout. However, >(cat) is still holding a reference to B's stdin. The second cat's stdin is B's stdin, and that cat will not exit until it sees an EOF. An EOF is only given when no one has the file open in write mode anymore, so >(cat)'s stdout is blocking the second cat, and B remains waiting on that second cat.

Since echo two exited, >(cat) eventually gets an EOF, so >(cat) flushes its buffer and exits. No one is holding B's/the second cat's stdin anymore, so the second cat reads an EOF (B isn't reading its stdin at all; it doesn't care). This EOF causes the second cat to flush its buffer, close its stdout, and exit, and then B exits because cat exited and B was waiting on cat.



A caveat of this is that bash also spawns a subshell for >(cat)! Because of this, you'll see that



echo two > >(sleep 5) | cat; echo three



will still wait 5 seconds before executing echo three, even though sleep 5 isn't holding B's stdin. This is because a hidden subshell C spawned for >(sleep 5) is waiting on sleep, and C is holding B's stdin. You can see how



echo two > >(exec sleep 5) | cat; echo three



will not wait, however, since sleep isn't holding B's stdin and there's no ghost subshell C holding B's stdin (exec forces sleep to replace C, as opposed to forking and making C wait on sleep). Regardless of this caveat,



echo two > >(exec cat) | cat; echo three



will still properly execute the functions in order, as described previously.






  • As noted in the conversation with @MiniMax in the comments to my answer, that however has the downside of affecting the stdout of the command, and means the output needs to be read and written an extra time.
    – Stéphane Chazelas
    Jul 24 at 20:50










  • The explanation is not accurate. A is not waiting for the cat spawned in >(cat). As I mention in my answer, the reason why echo two > >(sleep 5 &>/dev/null) | cat; echo three outputs three after 5 seconds is because current versions of bash waste an extra shell process in >(sleep 5) that waits for sleep and that process still has stdout going to the pipe which prevents the second cat from terminating. If you replace it with echo two > >(exec sleep 5 &>/dev/null) | cat; echo three to eliminate that extra process, you'll find that it returns straight away.
    – Stéphane Chazelas
    Jul 24 at 21:32










  • It makes a nested subshell? I've been trying to look into the bash implementation to figure it out; I'm pretty sure the echo two > >(sleep 5 &>/dev/null) at the minimum gets its own subshell. Is it a non-documented implementation detail that causes sleep 5 to get its own subshell too? If it's documented then it would be a legitimate way to get it done with fewer characters (unless there's a tight loop, I don't think anyone will notice performance problems with a subshell, or a cat). If it's not documented then rip, nice hack though, won't work on future versions though.
    – Nicholas Pipitone
    Jul 25 at 13:19











  • $(...), <(...) do indeed involve a subshell, but ksh93 or zsh would run the last command in that subshell in the same process, not bash, which is why there's still another process holding the pipe open while sleep is running and not holding the pipe open itself. Future versions of bash may implement a similar optimisation.
    – Stéphane Chazelas
    Jul 25 at 13:23










  • A would still be waiting for >(sleep 5) though, wouldn't it? It wouldn't be able to close because if it closed then B would close immediately, since it's no longer connected to >(sleep 5) using A as a proxy. Even if >() doesn't get its own subshell, wouldn't A have to wait for it?
    – Nicholas Pipitone
    Jul 25 at 13:37











Your Answer







StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "106"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);

else
createEditor();

);

function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
convertImagesToLinks: false,
noModals: false,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);



);













 

draft saved


draft discarded


















StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2funix.stackexchange.com%2fquestions%2f403783%2fthe-process-substitution-output-is-out-of-the-order%23new-answer', 'question_page');

);

Post as a guest






























2 Answers
2






active

oldest

votes








2 Answers
2






active

oldest

votes









active

oldest

votes






active

oldest

votes








up vote
16
down vote



accepted










Yes, in bash like in ksh (where the feature comes from), the processes inside the process substitution are not waited for (before running the next command in the script).



for a <(...) one, that's usually fine as in:



cmd1 <(cmd2)


the shell will be waiting for cmd1 and cmd1 will be typically waiting for cmd2 by virtue of it reading until end-of-file on the pipe that is substituted, and that end-of-file typically happens when cmd2 dies. That's the same reason several shells (not bash) don't bother waiting for cmd2 in cmd2 | cmd1.



For cmd1 >(cmd2), however, that's generally not the case, as it's more cmd2 that typically waits for cmd1 there so will generally exit after.



That's fixed in zsh that waits for cmd2 there (but not if you write it as cmd1 > >(cmd2) and cmd1 is not builtin, use cmd1 > >(cmd2) instead as documented).



ksh doesn't wait by default, but lets you wait for it with the wait builtin (it also makes the pid available in $!, though that doesn't help if you do cmd1 >(cmd2) >(cmd3))



rc (with the cmd1 >cmd2 syntax), same as ksh except you can get the pids of all the background processes with $apids.



es (also with cmd1 >cmd2) waits for cmd2 like in zsh, and also waits for cmd2 in <cmd2 process redirections.



bash does make the pid of cmd2 (or more exactly of the subshell as it does run cmd2 in a child process of that subshell even though it's the last command there) available in $!, but doesn't let you wait for it.



If you do have to use bash, you can work around the problem by using a command that will wait for both commands with:



 cmd1 >(cmd2); 3>&1 >&4 4>&- 4>&1


That makes both cmd1 and cmd2 have their fd 3 open to a pipe. cat will wait for end-of-file at the other end, so will typically only exit when both cmd1 and cmd2 are dead. And the shell will wait for that cat command. You could see that as a net to catch the termination of all background processes (you can use it for other things started in background like with &, coprocs or even commands that background themselves provided they don't close all their file descriptors like daemons typically do).



Note that thanks to that wasted subshell process mentioned above, it works even if cmd2 closes its fd 3 (commands usually don't do that, but some like sudo or ssh do). Future versions of bash may eventually do the optimisation there like in other shells. Then you'd need something like:



 cat; 4>&1


To make sure there's still an extra shell process with that fd 3 open waiting for that sudo command.



Note that cat won't read anything (since the processes don't write on their fd 3). It's just there for synchronisation. It will do just one read() system call that will return with nothing at the end.



You can actually avoid running cat by using a command substitution to do the pipe synchronisation:



 unused=$( cmd1 >(cmd2); 3>&1 >&4 4>&-); 4>&1


This time, it's the shell instead of cat that is reading from the pipe whose other end is open on fd 3 of cmd1 and cmd2. We're using a variable assignment so the exit status of cmd1 is available in $?.



Or you could do the process substitution by hand, and then you could even use your system's sh as that would become standard shell syntax:



 cmd1 /dev/fd/3 3>&1 >&4 4>&- 4>&1


though note as noted earlier that not all sh implementations would wait for cmd1 after cmd2 has finished (though that's better than the other way round). That time, $? contains the exit status of cmd2; though bash and zsh make cmd1's exit status available in $PIPESTATUS[0] and $pipestatus[1] respectively (see also the pipefail option in a few shells so $? can report the failure of pipe components other than the last)



Note that yash has similar issues with its process redirection feature. cmd1 >(cmd2) would be written cmd1 /dev/fd/3 3>(cmd2) there. But cmd2 is not waited for and you can't use wait to wait for it either and its pid is not made available in the $! variable either. You'd use the same work arounds as for bash.






share|improve this answer






















  • Firstly, I tried echo one; cat; 4>&1; echo three;, then simplified it to the echo one; echo two > >(cat) | cat; echo three; and it outputs values in the right order, too. Does all this descriptor manipulations 3>&1 >&4 4>&- are necessary? Also, I don't get this >&4 4>& - we are redirect stdout to the fourth fd, then closing fourth fd, then again use 4>&1 it. Why it needed and how it works? May be, I should create new question on this topic?
    – MiniMax
    Nov 10 '17 at 23:14






  • 1




    @MiniMax, but there, you're affecting the stdout of cmd1 and cmd2, the point with the little dance with the file descriptor is to restore the original ones and using only the extra pipe for the waiting instead of also channelling the output of the commands.
    – Stéphane Chazelas
    Nov 11 '17 at 2:10










  • @MiniMax It took me a while to understand, I didn't get the pipes at such a low level before. The rightmost 4>&1 creates a file descriptor (fd) 4 for the outer braces command list, and makes it equal to outer braces' stdout. The inner braces has stdin/stdout/stderr automatically setup to connect to the outer braces. However, 3>&1 makes fd 3 connect to the outer braces' stdin. >&4 makes the inner braces' stdout connect to the outer braces fd 4 (The one we created before). 4>&- closes fd 4 from the inner braces (Since inner braces' stdout is already connected to out braces' fd 4).
    – Nicholas Pipitone
    Jul 25 at 13:24











  • @MiniMax The confusing part was the right-to-left part, 4>&1 gets executed first, before the other redirects, so you don't "again use 4>&1". Overall, the inner braces is sending data to its stdout, which was overwritten with whatever fd 4 it was given. The fd 4 that the inner braces was given, is the outer braces' fd 4, which is equal to the outer braces' original stdout.
    – Nicholas Pipitone
    Jul 25 at 13:25











  • Bash makes it feel like 4>5 means "4 goes to 5", but really "fd 4 is overwritten with fd 5". And before execution, fd 0/1/2 are auto connected (Along with any fd of the outer shell), and you can overwrite them as you wish. That's at least my interpretation of the bash documentation. If you understood something else out of this, lmk.
    – Nicholas Pipitone
    Jul 25 at 13:31















up vote
16
down vote



accepted










Yes, in bash like in ksh (where the feature comes from), the processes inside the process substitution are not waited for (before running the next command in the script).



for a <(...) one, that's usually fine as in:



cmd1 <(cmd2)


the shell will be waiting for cmd1 and cmd1 will be typically waiting for cmd2 by virtue of it reading until end-of-file on the pipe that is substituted, and that end-of-file typically happens when cmd2 dies. That's the same reason several shells (not bash) don't bother waiting for cmd2 in cmd2 | cmd1.



For cmd1 >(cmd2), however, that's generally not the case, as it's more cmd2 that typically waits for cmd1 there so will generally exit after.



That's fixed in zsh that waits for cmd2 there (but not if you write it as cmd1 > >(cmd2) and cmd1 is not builtin, use cmd1 > >(cmd2) instead as documented).



ksh doesn't wait by default, but lets you wait for it with the wait builtin (it also makes the pid available in $!, though that doesn't help if you do cmd1 >(cmd2) >(cmd3))



rc (with the cmd1 >cmd2 syntax), same as ksh except you can get the pids of all the background processes with $apids.



es (also with cmd1 >cmd2) waits for cmd2 like in zsh, and also waits for cmd2 in <cmd2 process redirections.



bash does make the pid of cmd2 (or more exactly of the subshell as it does run cmd2 in a child process of that subshell even though it's the last command there) available in $!, but doesn't let you wait for it.



If you do have to use bash, you can work around the problem by using a command that will wait for both commands with:



 cmd1 >(cmd2); 3>&1 >&4 4>&- 4>&1


That makes both cmd1 and cmd2 have their fd 3 open to a pipe. cat will wait for end-of-file at the other end, so will typically only exit when both cmd1 and cmd2 are dead. And the shell will wait for that cat command. You could see that as a net to catch the termination of all background processes (you can use it for other things started in background like with &, coprocs or even commands that background themselves provided they don't close all their file descriptors like daemons typically do).



Note that thanks to that wasted subshell process mentioned above, it works even if cmd2 closes its fd 3 (commands usually don't do that, but some like sudo or ssh do). Future versions of bash may eventually do the optimisation there like in other shells. Then you'd need something like:



 cat; 4>&1


To make sure there's still an extra shell process with that fd 3 open waiting for that sudo command.



Note that cat won't read anything (since the processes don't write on their fd 3). It's just there for synchronisation. It will do just one read() system call that will return with nothing at the end.



You can actually avoid running cat by using a command substitution to do the pipe synchronisation:



 unused=$( cmd1 >(cmd2); 3>&1 >&4 4>&-); 4>&1


This time, it's the shell instead of cat that is reading from the pipe whose other end is open on fd 3 of cmd1 and cmd2. We're using a variable assignment so the exit status of cmd1 is available in $?.



Or you could do the process substitution by hand, and then you could even use your system's sh as that would become standard shell syntax:



 cmd1 /dev/fd/3 3>&1 >&4 4>&- 4>&1


though note as noted earlier that not all sh implementations would wait for cmd1 after cmd2 has finished (though that's better than the other way round). That time, $? contains the exit status of cmd2; though bash and zsh make cmd1's exit status available in $PIPESTATUS[0] and $pipestatus[1] respectively (see also the pipefail option in a few shells so $? can report the failure of pipe components other than the last)



Note that yash has similar issues with its process redirection feature. cmd1 >(cmd2) would be written cmd1 /dev/fd/3 3>(cmd2) there. But cmd2 is not waited for and you can't use wait to wait for it either and its pid is not made available in the $! variable either. You'd use the same work arounds as for bash.






share|improve this answer






















  • Firstly, I tried echo one; cat; 4>&1; echo three;, then simplified it to the echo one; echo two > >(cat) | cat; echo three; and it outputs values in the right order, too. Does all this descriptor manipulations 3>&1 >&4 4>&- are necessary? Also, I don't get this >&4 4>& - we are redirect stdout to the fourth fd, then closing fourth fd, then again use 4>&1 it. Why it needed and how it works? May be, I should create new question on this topic?
    – MiniMax
    Nov 10 '17 at 23:14






  • 1




    @MiniMax, but there, you're affecting the stdout of cmd1 and cmd2, the point with the little dance with the file descriptor is to restore the original ones and using only the extra pipe for the waiting instead of also channelling the output of the commands.
    – Stéphane Chazelas
    Nov 11 '17 at 2:10










  • @MiniMax It took me a while to understand, I didn't get the pipes at such a low level before. The rightmost 4>&1 creates a file descriptor (fd) 4 for the outer braces command list, and makes it equal to outer braces' stdout. The inner braces has stdin/stdout/stderr automatically setup to connect to the outer braces. However, 3>&1 makes fd 3 connect to the outer braces' stdin. >&4 makes the inner braces' stdout connect to the outer braces fd 4 (The one we created before). 4>&- closes fd 4 from the inner braces (Since inner braces' stdout is already connected to out braces' fd 4).
    – Nicholas Pipitone
    Jul 25 at 13:24











  • @MiniMax The confusing part was the right-to-left part, 4>&1 gets executed first, before the other redirects, so you don't "again use 4>&1". Overall, the inner braces is sending data to its stdout, which was overwritten with whatever fd 4 it was given. The fd 4 that the inner braces was given, is the outer braces' fd 4, which is equal to the outer braces' original stdout.
    – Nicholas Pipitone
    Jul 25 at 13:25











  • Bash makes it feel like 4>5 means "4 goes to 5", but really "fd 4 is overwritten with fd 5". And before execution, fd 0/1/2 are auto connected (Along with any fd of the outer shell), and you can overwrite them as you wish. That's at least my interpretation of the bash documentation. If you understood something else out of this, lmk.
    – Nicholas Pipitone
    Jul 25 at 13:31













up vote
16
down vote



accepted







up vote
16
down vote



accepted






Yes, in bash like in ksh (where the feature comes from), the processes inside the process substitution are not waited for (before running the next command in the script).



for a <(...) one, that's usually fine as in:



cmd1 <(cmd2)


the shell will be waiting for cmd1 and cmd1 will be typically waiting for cmd2 by virtue of it reading until end-of-file on the pipe that is substituted, and that end-of-file typically happens when cmd2 dies. That's the same reason several shells (not bash) don't bother waiting for cmd2 in cmd2 | cmd1.



For cmd1 >(cmd2), however, that's generally not the case, as it's more cmd2 that typically waits for cmd1 there so will generally exit after.



That's fixed in zsh that waits for cmd2 there (but not if you write it as cmd1 > >(cmd2) and cmd1 is not builtin, use cmd1 > >(cmd2) instead as documented).



ksh doesn't wait by default, but lets you wait for it with the wait builtin (it also makes the pid available in $!, though that doesn't help if you do cmd1 >(cmd2) >(cmd3))



rc (with the cmd1 >cmd2 syntax), same as ksh except you can get the pids of all the background processes with $apids.



es (also with cmd1 >cmd2) waits for cmd2 like in zsh, and also waits for cmd2 in <cmd2 process redirections.



bash does make the pid of cmd2 (or more exactly of the subshell as it does run cmd2 in a child process of that subshell even though it's the last command there) available in $!, but doesn't let you wait for it.



If you do have to use bash, you can work around the problem by using a command that will wait for both commands with:



 cmd1 >(cmd2); 3>&1 >&4 4>&- 4>&1


That makes both cmd1 and cmd2 have their fd 3 open to a pipe. cat will wait for end-of-file at the other end, so will typically only exit when both cmd1 and cmd2 are dead. And the shell will wait for that cat command. You could see that as a net to catch the termination of all background processes (you can use it for other things started in background like with &, coprocs or even commands that background themselves provided they don't close all their file descriptors like daemons typically do).



Note that thanks to that wasted subshell process mentioned above, it works even if cmd2 closes its fd 3 (commands usually don't do that, but some like sudo or ssh do). Future versions of bash may eventually do the optimisation there like in other shells. Then you'd need something like:



 cat; 4>&1


To make sure there's still an extra shell process with that fd 3 open waiting for that sudo command.



Note that cat won't read anything (since the processes don't write on their fd 3). It's just there for synchronisation. It will do just one read() system call that will return with nothing at the end.



You can actually avoid running cat by using a command substitution to do the pipe synchronisation:



 unused=$( cmd1 >(cmd2); 3>&1 >&4 4>&-); 4>&1


This time, it's the shell instead of cat that is reading from the pipe whose other end is open on fd 3 of cmd1 and cmd2. We're using a variable assignment so the exit status of cmd1 is available in $?.



Or you could do the process substitution by hand, and then you could even use your system's sh as that would become standard shell syntax:



 cmd1 /dev/fd/3 3>&1 >&4 4>&- 4>&1


though note as noted earlier that not all sh implementations would wait for cmd1 after cmd2 has finished (though that's better than the other way round). That time, $? contains the exit status of cmd2; though bash and zsh make cmd1's exit status available in $PIPESTATUS[0] and $pipestatus[1] respectively (see also the pipefail option in a few shells so $? can report the failure of pipe components other than the last)



Note that yash has similar issues with its process redirection feature. cmd1 >(cmd2) would be written cmd1 /dev/fd/3 3>(cmd2) there. But cmd2 is not waited for and you can't use wait to wait for it either and its pid is not made available in the $! variable either. You'd use the same work arounds as for bash.






share|improve this answer














Yes, in bash like in ksh (where the feature comes from), the processes inside the process substitution are not waited for (before running the next command in the script).



for a <(...) one, that's usually fine as in:



cmd1 <(cmd2)


the shell will be waiting for cmd1 and cmd1 will be typically waiting for cmd2 by virtue of it reading until end-of-file on the pipe that is substituted, and that end-of-file typically happens when cmd2 dies. That's the same reason several shells (not bash) don't bother waiting for cmd2 in cmd2 | cmd1.



For cmd1 >(cmd2), however, that's generally not the case, as it's more cmd2 that typically waits for cmd1 there so will generally exit after.



That's fixed in zsh that waits for cmd2 there (but not if you write it as cmd1 > >(cmd2) and cmd1 is not builtin, use cmd1 > >(cmd2) instead as documented).



ksh doesn't wait by default, but lets you wait for it with the wait builtin (it also makes the pid available in $!, though that doesn't help if you do cmd1 >(cmd2) >(cmd3))



rc (with the cmd1 >cmd2 syntax), same as ksh except you can get the pids of all the background processes with $apids.



es (also with cmd1 >cmd2) waits for cmd2 like in zsh, and also waits for cmd2 in <cmd2 process redirections.



bash does make the pid of cmd2 (or more exactly of the subshell as it does run cmd2 in a child process of that subshell even though it's the last command there) available in $!, but doesn't let you wait for it.



If you do have to use bash, you can work around the problem by using a command that will wait for both commands with:



 cmd1 >(cmd2); 3>&1 >&4 4>&- 4>&1


That makes both cmd1 and cmd2 have their fd 3 open to a pipe. cat will wait for end-of-file at the other end, so will typically only exit when both cmd1 and cmd2 are dead. And the shell will wait for that cat command. You could see that as a net to catch the termination of all background processes (you can use it for other things started in background like with &, coprocs or even commands that background themselves provided they don't close all their file descriptors like daemons typically do).



Note that thanks to that wasted subshell process mentioned above, it works even if cmd2 closes its fd 3 (commands usually don't do that, but some like sudo or ssh do). Future versions of bash may eventually do the optimisation there like in other shells. Then you'd need something like:



 cat; 4>&1


To make sure there's still an extra shell process with that fd 3 open waiting for that sudo command.



Note that cat won't read anything (since the processes don't write on their fd 3). It's just there for synchronisation. It will do just one read() system call that will return with nothing at the end.



You can actually avoid running cat by using a command substitution to do the pipe synchronisation:



 unused=$( cmd1 >(cmd2); 3>&1 >&4 4>&-); 4>&1


This time, it's the shell instead of cat that is reading from the pipe whose other end is open on fd 3 of cmd1 and cmd2. We're using a variable assignment so the exit status of cmd1 is available in $?.



Or you could do the process substitution by hand, and then you could even use your system's sh as that would become standard shell syntax:



 cmd1 /dev/fd/3 3>&1 >&4 4>&- 4>&1


though note as noted earlier that not all sh implementations would wait for cmd1 after cmd2 has finished (though that's better than the other way round). That time, $? contains the exit status of cmd2; though bash and zsh make cmd1's exit status available in $PIPESTATUS[0] and $pipestatus[1] respectively (see also the pipefail option in a few shells so $? can report the failure of pipe components other than the last)



Note that yash has similar issues with its process redirection feature. cmd1 >(cmd2) would be written cmd1 /dev/fd/3 3>(cmd2) there. But cmd2 is not waited for and you can't use wait to wait for it either and its pid is not made available in the $! variable either. You'd use the same work arounds as for bash.







share|improve this answer














share|improve this answer



share|improve this answer








edited Dec 3 '17 at 23:07

























answered Nov 10 '17 at 16:59









Stéphane Chazelas

283k53521854




283k53521854











  • Firstly, I tried echo one; cat; 4>&1; echo three;, then simplified it to the echo one; echo two > >(cat) | cat; echo three; and it outputs values in the right order, too. Does all this descriptor manipulations 3>&1 >&4 4>&- are necessary? Also, I don't get this >&4 4>& - we are redirect stdout to the fourth fd, then closing fourth fd, then again use 4>&1 it. Why it needed and how it works? May be, I should create new question on this topic?
    – MiniMax
    Nov 10 '17 at 23:14






  • 1




    @MiniMax, but there, you're affecting the stdout of cmd1 and cmd2, the point with the little dance with the file descriptor is to restore the original ones and using only the extra pipe for the waiting instead of also channelling the output of the commands.
    – Stéphane Chazelas
    Nov 11 '17 at 2:10










  • @MiniMax It took me a while to understand, I didn't get the pipes at such a low level before. The rightmost 4>&1 creates a file descriptor (fd) 4 for the outer braces command list, and makes it equal to outer braces' stdout. The inner braces has stdin/stdout/stderr automatically setup to connect to the outer braces. However, 3>&1 makes fd 3 connect to the outer braces' stdin. >&4 makes the inner braces' stdout connect to the outer braces fd 4 (The one we created before). 4>&- closes fd 4 from the inner braces (Since inner braces' stdout is already connected to out braces' fd 4).
    – Nicholas Pipitone
    Jul 25 at 13:24











  • @MiniMax The confusing part was the right-to-left part, 4>&1 gets executed first, before the other redirects, so you don't "again use 4>&1". Overall, the inner braces is sending data to its stdout, which was overwritten with whatever fd 4 it was given. The fd 4 that the inner braces was given, is the outer braces' fd 4, which is equal to the outer braces' original stdout.
    – Nicholas Pipitone
    Jul 25 at 13:25











  • Bash makes it feel like 4>5 means "4 goes to 5", but really "fd 4 is overwritten with fd 5". And before execution, fd 0/1/2 are auto connected (Along with any fd of the outer shell), and you can overwrite them as you wish. That's at least my interpretation of the bash documentation. If you understood something else out of this, lmk.
    – Nicholas Pipitone
    Jul 25 at 13:31

















  • Firstly, I tried echo one; cat; 4>&1; echo three;, then simplified it to the echo one; echo two > >(cat) | cat; echo three; and it outputs values in the right order, too. Does all this descriptor manipulations 3>&1 >&4 4>&- are necessary? Also, I don't get this >&4 4>& - we are redirect stdout to the fourth fd, then closing fourth fd, then again use 4>&1 it. Why it needed and how it works? May be, I should create new question on this topic?
    – MiniMax
    Nov 10 '17 at 23:14






  • 1




    @MiniMax, but there, you're affecting the stdout of cmd1 and cmd2, the point with the little dance with the file descriptor is to restore the original ones and using only the extra pipe for the waiting instead of also channelling the output of the commands.
    – Stéphane Chazelas
    Nov 11 '17 at 2:10










  • @MiniMax It took me a while to understand, I didn't get the pipes at such a low level before. The rightmost 4>&1 creates a file descriptor (fd) 4 for the outer braces command list, and makes it equal to outer braces' stdout. The inner braces has stdin/stdout/stderr automatically setup to connect to the outer braces. However, 3>&1 makes fd 3 connect to the outer braces' stdin. >&4 makes the inner braces' stdout connect to the outer braces fd 4 (The one we created before). 4>&- closes fd 4 from the inner braces (Since inner braces' stdout is already connected to out braces' fd 4).
    – Nicholas Pipitone
    Jul 25 at 13:24











  • @MiniMax The confusing part was the right-to-left part, 4>&1 gets executed first, before the other redirects, so you don't "again use 4>&1". Overall, the inner braces is sending data to its stdout, which was overwritten with whatever fd 4 it was given. The fd 4 that the inner braces was given, is the outer braces' fd 4, which is equal to the outer braces' original stdout.
    – Nicholas Pipitone
    Jul 25 at 13:25











  • Bash makes it feel like 4>5 means "4 goes to 5", but really "fd 4 is overwritten with fd 5". And before execution, fd 0/1/2 are auto connected (Along with any fd of the outer shell), and you can overwrite them as you wish. That's at least my interpretation of the bash documentation. If you understood something else out of this, lmk.
    – Nicholas Pipitone
    Jul 25 at 13:31
















Firstly, I tried echo one; cat; 4>&1; echo three;, then simplified it to the echo one; echo two > >(cat) | cat; echo three; and it outputs values in the right order, too. Does all this descriptor manipulations 3>&1 >&4 4>&- are necessary? Also, I don't get this >&4 4>& - we are redirect stdout to the fourth fd, then closing fourth fd, then again use 4>&1 it. Why it needed and how it works? May be, I should create new question on this topic?
– MiniMax
Nov 10 '17 at 23:14




Firstly, I tried echo one; cat; 4>&1; echo three;, then simplified it to the echo one; echo two > >(cat) | cat; echo three; and it outputs values in the right order, too. Does all this descriptor manipulations 3>&1 >&4 4>&- are necessary? Also, I don't get this >&4 4>& - we are redirect stdout to the fourth fd, then closing fourth fd, then again use 4>&1 it. Why it needed and how it works? May be, I should create new question on this topic?
– MiniMax
Nov 10 '17 at 23:14




1




1




@MiniMax, but there, you're affecting the stdout of cmd1 and cmd2, the point with the little dance with the file descriptor is to restore the original ones and using only the extra pipe for the waiting instead of also channelling the output of the commands.
– Stéphane Chazelas
Nov 11 '17 at 2:10




@MiniMax, but there, you're affecting the stdout of cmd1 and cmd2, the point with the little dance with the file descriptor is to restore the original ones and using only the extra pipe for the waiting instead of also channelling the output of the commands.
– Stéphane Chazelas
Nov 11 '17 at 2:10












@MiniMax It took me a while to understand, I didn't get the pipes at such a low level before. The rightmost 4>&1 creates a file descriptor (fd) 4 for the outer braces command list, and makes it equal to outer braces' stdout. The inner braces has stdin/stdout/stderr automatically setup to connect to the outer braces. However, 3>&1 makes fd 3 connect to the outer braces' stdin. >&4 makes the inner braces' stdout connect to the outer braces fd 4 (The one we created before). 4>&- closes fd 4 from the inner braces (Since inner braces' stdout is already connected to out braces' fd 4).
– Nicholas Pipitone
Jul 25 at 13:24





@MiniMax It took me a while to understand, I didn't get the pipes at such a low level before. The rightmost 4>&1 creates a file descriptor (fd) 4 for the outer braces command list, and makes it equal to outer braces' stdout. The inner braces has stdin/stdout/stderr automatically setup to connect to the outer braces. However, 3>&1 makes fd 3 connect to the outer braces' stdin. >&4 makes the inner braces' stdout connect to the outer braces fd 4 (The one we created before). 4>&- closes fd 4 from the inner braces (Since inner braces' stdout is already connected to out braces' fd 4).
– Nicholas Pipitone
Jul 25 at 13:24













@MiniMax The confusing part was the right-to-left part, 4>&1 gets executed first, before the other redirects, so you don't "again use 4>&1". Overall, the inner braces is sending data to its stdout, which was overwritten with whatever fd 4 it was given. The fd 4 that the inner braces was given, is the outer braces' fd 4, which is equal to the outer braces' original stdout.
– Nicholas Pipitone
Jul 25 at 13:25





@MiniMax The confusing part was the right-to-left part, 4>&1 gets executed first, before the other redirects, so you don't "again use 4>&1". Overall, the inner braces is sending data to its stdout, which was overwritten with whatever fd 4 it was given. The fd 4 that the inner braces was given, is the outer braces' fd 4, which is equal to the outer braces' original stdout.
– Nicholas Pipitone
Jul 25 at 13:25













Bash makes it feel like 4>5 means "4 goes to 5", but really "fd 4 is overwritten with fd 5". And before execution, fd 0/1/2 are auto connected (Along with any fd of the outer shell), and you can overwrite them as you wish. That's at least my interpretation of the bash documentation. If you understood something else out of this, lmk.
– Nicholas Pipitone
Jul 25 at 13:31





Bash makes it feel like 4>5 means "4 goes to 5", but really "fd 4 is overwritten with fd 5". And before execution, fd 0/1/2 are auto connected (Along with any fd of the outer shell), and you can overwrite them as you wish. That's at least my interpretation of the bash documentation. If you understood something else out of this, lmk.
– Nicholas Pipitone
Jul 25 at 13:31













up vote
2
down vote













You can pipe the second command into another cat, which will wait until its input pipe closes. Ex:



prompt$ echo one; echo two > >(cat) | cat; echo three;
one
two
three
prompt$


Short and simple.



==========



As simple as it seems, a lot is going on behind the scenes. You can ignore the rest of the answer if you aren't interested in how this works.



When you have echo two > >(cat); echo three, >(cat) is forked off by the interactive shell, and runs independently of echo two. Thus, echo two finishes, and then echo three gets executed, but before the >(cat) finishes. When bash gets data from >(cat) when it didn't expect it (a couple of milliseconds later), it gives you that prompt-like situation where you have to hit newline to get back to the terminal (Same as if another user mesg'ed you).



However, given echo two > >(cat) | cat; echo three, two subshells are spawned (as per the documentation of the | symbol).



One subshell named A is for echo two > >(cat), and one subshell named B is for cat. A is automatically connected to B (A's stdout is B's stdin). Then, echo two and >(cat) begin executing. >(cat)'s stdout is set to A's stdout, which is equal to B's stdin. After echo two finishes, A exits, closing its stdout. However, >(cat) is still holding the reference to B's stdin. The second cat's stdin is holding B's stdin, and that cat will not exit until it sees an EOF. An EOF is only given when no one has the file open in write mode anymore, so >(cat)'s stdout is blocking the second cat. B remains waiting on that second cat. Since echo two exited, >(cat) eventually gets an EOF, so >(cat) flushes its buffer and exits. No one is holding B's/second cat's stdin anymore, so the second cat reads an EOF (B isn't reading its stdin at all, it doesn't care). This EOF causes the second cat to flush its buffer, close its stdout, and exit, and then B exits because cat exited and B was waiting on cat.



A caveat of this is that bash also spawns a subshell for >(cat)! Because of this, you'll see that



echo two > >(sleep 5) | cat; echo three



will still wait 5 seconds before executing echo three, even though sleep 5 isn't holding B's stdin. This is because a hidden subshell C spawned for >(sleep 5) is waiting on sleep, and C is holding B's stdin. You can see how



echo two > >(exec sleep 5) | cat; echo three



Will not wait however, since sleep isn't holding B's stdin, and there's no ghost subshell C that's holding B's stdin (exec will force sleep to replace C, as opposed to forking and making C wait on sleep). Regardless of this caveat,



echo two > >(exec cat) | cat; echo three



will still properly execute the functions in order, as described previously.






share|improve this answer






















  • As noted in the conversion with @MiniMax in the comments to my answer, that has however the downside of affecting the stdout of the command and means the output needs to be read and written an extra time.
    – Stéphane Chazelas
    Jul 24 at 20:50










  • The explanation is not accurate. A is not waiting for the cat spawned in >(cat). As I mention in my answer, the reason why echo two > >(sleep 5 &>/dev/null) | cat; echo three outputs three after 5 seconds is because current versions of bash waste an extra shell process in >(sleep 5) that waits for sleep and that process still has stdout going to the pipe which prevents the second cat from terminating. If you replace it with echo two > >(exec sleep 5 &>/dev/null) | cat; echo three to eliminate that extra process, you'll find that it returns straight away.
    – Stéphane Chazelas
    Jul 24 at 21:32










  • It makes a nested subshell? I've been trying to look into the bash implementation to figure it out, I'm pretty sure the echo two > >(sleep 5 &>/dev/null) at the minimum gets its own subshell. Is it a non-documented implementation detail that causes sleep 5 to get its own subshell too? If it's documented then it would be a legitimate way to get it done with fewer characters (Unless there's a tight loop i dont think anyone will notice performance problems with a subshell, or a cat)`. If it's not documented then rip, nice hack though, won't work on future versions though.
    – Nicholas Pipitone
    Jul 25 at 13:19











  • $(...), <(...) do indeed involve a subshell, but ksh93 or zsh would run the last command in that subshell in the same process, not bash which is why there's still another process holding the pipe open while sleep is running an not holding the pipe open. Future versions of bash may implement a similar optimisation.
    – Stéphane Chazelas
    Jul 25 at 13:23










  • A would still be waiting for >(sleep 5) though, wouldn't it? It wouldn't be able to close because if it closed then B would close immediately, since it's no longer connected to >(sleep 5) using A as a proxy. Even if >() doesn't get its own subshell, wouldnt A have to wait for it?
    – Nicholas Pipitone
    Jul 25 at 13:37















up vote
2
down vote













You can pipe the second command into another cat, which will wait until its input pipe closes. Ex:



prompt$ echo one; echo two > >(cat) | cat; echo three;
one
two
three
prompt$


Short and simple.



==========



As simple as it seems, a lot is going on behind the scenes. You can ignore the rest of the answer if you aren't interested in how this works.



When you have echo two > >(cat); echo three, >(cat) is forked off by the interactive shell, and runs independently of echo two. Thus, echo two finishes, and then echo three gets executed, but before the >(cat) finishes. When bash gets data from >(cat) when it didn't expect it (a couple of milliseconds later), it gives you that prompt-like situation where you have to hit newline to get back to the terminal (Same as if another user mesg'ed you).



However, given echo two > >(cat) | cat; echo three, two subshells are spawned (as per the documentation of the | symbol).



One subshell named A is for echo two > >(cat), and one subshell named B is for cat. A is automatically connected to B (A's stdout is B's stdin). Then, echo two and >(cat) begin executing. >(cat)'s stdout is set to A's stdout, which is equal to B's stdin. After echo two finishes, A exits, closing its stdout. However, >(cat) is still holding the reference to B's stdin. The second cat's stdin is holding B's stdin, and that cat will not exit until it sees an EOF. An EOF is only given when no one has the file open in write mode anymore, so >(cat)'s stdout is blocking the second cat. B remains waiting on that second cat. Since echo two exited, >(cat) eventually gets an EOF, so >(cat) flushes its buffer and exits. No one is holding B's/second cat's stdin anymore, so the second cat reads an EOF (B isn't reading its stdin at all, it doesn't care). This EOF causes the second cat to flush its buffer, close its stdout, and exit, and then B exits because cat exited and B was waiting on cat.



A caveat is that bash also keeps an extra shell process around for the command inside >(...). Because of this, you'll see that



echo two > >(sleep 5 &>/dev/null) | cat; echo three



will still wait about 5 seconds before executing echo three, even though sleep itself isn't holding the write end of B's stdin (its output is redirected to /dev/null). The wait happens because a hidden shell process C, spawned for >(sleep 5 &>/dev/null), is waiting on sleep, and C's stdout is still the pipe into B's stdin. You can see how



echo two > >(exec sleep 5 &>/dev/null) | cat; echo three



will not wait, however, since sleep isn't holding the write end of B's stdin and there is no ghost shell process C holding it either (exec makes sleep replace C, instead of C forking and waiting on sleep). Regardless of this caveat,



echo two > >(exec cat) | cat; echo three



will still print everything in the expected order, as described previously.
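

If you want to check the caveat yourself, here is a rough sketch. $BASHPID is only used to show that >(...) runs in a separate shell process, and the timings are approximate:


echo "interactive shell: $$"
: > >(echo "substitution shell: $BASHPID")      # a different PID; the line may show up after the next prompt
time { echo two > >(sleep 5 &>/dev/null) | cat; }        # takes about 5 seconds
time { echo two > >(exec sleep 5 &>/dev/null) | cat; }   # returns almost immediately


As Stéphane Chazelas points out in the comments, it is the extra shell process, not sleep, that keeps the pipe open in the first timed command (the &>/dev/null makes sure sleep itself isn't holding it); exec removes that extra process, so the pipeline finishes right away, at least with current versions of bash.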






edited Jul 26 at 14:44

























answered Jul 24 at 19:38









Nicholas Pipitone


  • As noted in the conversation with @MiniMax in the comments to my answer, that has the downside of affecting the stdout of the command, and it means the output needs to be read and written an extra time.
    – Stéphane Chazelas
    Jul 24 at 20:50










  • The explanation is not accurate. A is not waiting for the cat spawned in >(cat). As I mention in my answer, the reason why echo two > >(sleep 5 &>/dev/null) | cat; echo three outputs three after 5 seconds is that current versions of bash waste an extra shell process in >(sleep 5) that waits for sleep, and that process still has its stdout going to the pipe, which prevents the second cat from terminating. If you replace it with echo two > >(exec sleep 5 &>/dev/null) | cat; echo three to eliminate that extra process, you'll find that it returns straight away.
    – Stéphane Chazelas
    Jul 24 at 21:32










  • It makes a nested subshell? I've been trying to look into the bash implementation to figure it out; I'm pretty sure the echo two > >(sleep 5 &>/dev/null) at minimum gets its own subshell. Is it an undocumented implementation detail that causes sleep 5 to get its own subshell too? If it's documented, then it would be a legitimate way to get it done with fewer characters (unless there's a tight loop, I don't think anyone will notice performance problems with a subshell, or a cat). If it's not documented then rip; nice hack, though it won't work on future versions.
    – Nicholas Pipitone
    Jul 25 at 13:19











  • $(...), <(...) do indeed involve a subshell, but ksh93 or zsh would run the last command in that subshell in the same process, unlike bash, which is why there's still another process holding the pipe open while sleep is running and not itself holding the pipe open. Future versions of bash may implement a similar optimisation.
    – Stéphane Chazelas
    Jul 25 at 13:23










  • A would still be waiting for >(sleep 5) though, wouldn't it? It wouldn't be able to close, because if it closed then B would close immediately, since it's no longer connected to >(sleep 5) using A as a proxy. Even if >() doesn't get its own subshell, wouldn't A have to wait for it?
    – Nicholas Pipitone
    Jul 25 at 13:37
















