How to run multiple processes and exit if any of them exits or fails
I am running multiple processes and want to exit with the appropriate exit code (error on failure, success otherwise) as soon as any of them exits or fails.
Additionally, if any child process exits or fails, all other child processes should be shut down as well.
My current non-functional solution (yarn is just an example; it could be any other command):
#!/bin/bash -e
# Run other process before start
sh ./bin/optimize.sh
trap 'exit' INT TERM
trap 'kill -INT 0' EXIT
# Run Schedule
sh ./bin/schedule.sh &
# Run a long running task
yarn task &
wait
./bin/schedule.sh:
#!/bin/bash -e
while true; do
yarn schedule
sleep 30
done
If something in yarn schedule fails, everything exits correctly. But when I kill the process via Ctrl+C, or when yarn task exits, yarn schedule keeps running.
How can I get this working regardless of what the child processes are (bash, yarn, php or whatever)?
I can't use GNU parallel.
bash shell return-status concurrency
asked Dec 10 at 19:28 – timw
edited Dec 11 at 8:14 – Gilles
sorry ... is "Scheduler" the ./bin/schedule.sh referred to above? – trs, Dec 10 at 20:13
I can't reproduce this with small test scripts. Is yarn ignoring signals when used independently? – Kusalananda, Dec 10 at 20:32
@trs yes, I made it more clear in the question – timw, Dec 10 at 20:35
@Kusalananda yarn is just an example – timw, Dec 10 at 20:35
To have the script exit when yarn task exits, you would have to start yarn task with (yarn task; kill "$$") &. I can't at the moment see a good reason why the backgrounded shell script does not exit when you press Ctrl+C (it does when I'm testing this with yarn schedule replaced by a sleep). – Kusalananda, Dec 10 at 20:53
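(Putting Kusalananda's comment together with the original script, the suggested pattern would look roughly like this — an untested sketch that mirrors the comment, not a confirmed fix:)
#!/bin/bash -e
sh ./bin/optimize.sh
trap 'exit' INT TERM
trap 'kill -INT 0' EXIT
# Signal the parent from a subshell when either child exits,
# so that `wait` returns and the EXIT trap cleans up the rest.
(sh ./bin/schedule.sh; kill "$$") &
(yarn task; kill "$$") &
wait
Note this only propagates the fact that a child exited, not its exit status; the answers below address the status part.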
2 Answers
This is painful in shells because the wait builtin doesn't do "wait for any", it does "wait for all". wait with no argument waits for all the children to exit, and returns 0. wait with an explicit list of processes waits for all of them to exit, and returns the status of the last argument. To wait for multiple children and obtain their exit statuses, you need a different approach. wait can give you the exit status only if you know which child is already dead.
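(As an aside, and assuming bash 4.3 or newer: bash does offer wait -n, which returns as soon as any single child exits. A minimal untested sketch of that approach:)
#!/bin/bash
sh ./bin/schedule.sh &
yarn task &
wait -n                # returns when the first child exits (bash >= 4.3)
status=$?
trap '' TERM           # ignore the group-wide TERM in this shell
kill 0 2>/dev/null     # shut down any remaining children
exit "$status"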
One possible approach is to use a dedicated named pipe to report each child's status. The following snippet (untested!) returns the largest of the children's statuses.
mkfifo status_pipe
children=0
# Each child reports "<id> <status>" on the pipe when it finishes.
{ child1; echo 1 "$?" >status_pipe; } & children=$((children+1))
{ child2; echo 2 "$?" >status_pipe; } & children=$((children+1))
max_status=0
while [ "$children" -ne 0 ]; do
  read -r child status <status_pipe
  children=$((children-1))
  if [ "$status" -gt "$max_status" ]; then
    max_status=$status
  fi
done
rm status_pipe
Note that this will block forever if one of the subshells dies without reporting its status. This won't happen under typical conditions, but it could happen if the subshell was killed manually, or if the subshell ran out of memory.
If you want to do something as soon as one of the children fails, replace if [ "$status" -gt "$max_status" ]; then … with if [ "$status" -ne 0 ]; then … .
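(For that fail-fast variant, a rough sketch of the loop body — same assumptions and caveats as the snippet above, including that it is untested:)
# Fail-fast variant of the loop: react to the first non-zero status.
while [ "$children" -ne 0 ]; do
  read -r child status <status_pipe
  children=$((children-1))
  if [ "$status" -ne 0 ]; then
    max_status=$status
    # e.g. terminate the remaining children here (kill %1 %2, or
    # kill by saved PID), then stop waiting
    break
  fi
done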
answered Dec 11 at 8:14 – Gilles
GNU Parallel has --halt. It will kill all running jobs if one of the jobs finishes or dies, and will return false if the job that finished failed:
parallel --halt now,done=1 ::: 'sleep 1;echo a' 'sleep 2;echo b' ||
echo the job that finished failed
parallel --halt now,done=1 ::: 'sleep 1;echo a;false' 'sleep 2;echo b' ||
echo the job that finished failed
For systems that do not have GNU Parallel installed, you can typically write your script on a system that has GNU Parallel, and use --embed to embed GNU Parallel directly into the script:
parallel --embed > myscript.sh
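(Applied to the commands from the question, this might look like the following — a sketch only, using the question's paths:)
parallel --halt now,done=1 ::: 'sh ./bin/schedule.sh' 'yarn task' ||
  echo one of the jobs failed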
answered Dec 14 at 19:36, edited Dec 16 at 1:01 – Ole Tange