Trap and collect script output, “input file is output file” error?

I need to upload the output of the current script, so I added an EXIT trap and set -ex, e.g.:



#!/bin/bash

exec &> /tmp/error.log
trap 'cat /tmp/error.log; curl http://127.0.0.1/error.php?hostname=$(hostname) -F file=@/tmp/error.log' EXIT

set -ex
wtfwtf


When I execute it, I always get this error, and the PHP script does not receive the whole file:



%> cat /tmp/error.log
1.sh: line 6: wtfwtf: command not found
cat: /tmp/error.log: input file is output file


So far, the only workaround I have found is to copy error.log to a new file and upload that instead, e.g.:



#!/bin/bash

exec &> /tmp/error.log
trap 'cp /tmp/error.log 123; curl http://127.0.0.1/error.php?hostname=$(hostname) -F file=@123' EXIT

set -ex
wtfwtf


Is there any better way to do this?







  • Do you need the cat?
    – ctrl-alt-delor
    Jun 7 at 9:22










  • @ctrl-alt-delor Not really. The client uploads the file only partially, so you'll want to cat the file and see what was actually uploaded. The cat command here is only for debugging.
    – daisy
    Jun 7 at 9:24










  • And the cat was the bug. What sort of bug was it, a spider cat?
    – ctrl-alt-delor
    Jun 7 at 9:26
asked Jun 7 at 3:09 by daisy, edited Jun 7 at 7:05 by Kusalananda
1 Answer
With the exec, you are redirecting all output of the script to a specific log file.



In your trap, you want to display the contents of the log file using cat. Since all output is also redirected to that file, GNU cat notices that its input file and standard output stream (which is inherited from the shell) are the same thing, and refuses to perform its task.



BSD cat does not perform this check. There, if the script is not interrupted, cat keeps reading the lines it has just appended, producing an ever-growing log file with the same few lines repeated over and over.
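The GNU cat check is easy to reproduce in isolation; a minimal sketch (assumes GNU coreutils cat, scratch file via mktemp):

```shell
# GNU cat detects that its input and its (appending) output are the
# same file and refuses to copy, avoiding an endless self-append loop.
f=$(mktemp)
echo "first line" > "$f"
cat "$f" >> "$f" || echo "cat refused: input file is output file"
cat "$f"   # the file is unchanged; nothing was appended
rm -f "$f"
```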



A workaround is to save the original standard output file descriptor, do the redirection as before, and then reinstate it in the trap.



#!/bin/bash

exec 3>&1 # make fd 3 copy of original fd 1
exec >/tmp/error.log 2>&1

# in the trap, make fd 1 copy of fd 3 and close fd 3 (i.e. move fd 3 to fd 1)
trap 'exec 1>&3-; cat /tmp/error.log; curl "http://127.0.0.1/error.php?hostname=$(hostname)" -F file=@/tmp/error.log' EXIT

set -ex
wtfwtf


This makes a copy of file descriptor 1 (as fd 3) before redirecting it to the log file. In the trap, we move this copy back to fd 1 and do the output.



Note that in this example the standard error stream is still connected to the log file inside the trap. If curl emits a diagnostic message, it will therefore end up in the log file rather than on the terminal (or wherever the original standard error stream was connected).
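The same save-and-restore dance works for both streams at once; a standalone sketch (log file via mktemp, fd numbers illustrative):

```shell
#!/bin/sh
# Save copies of the original stdout (fd 3) and stderr (fd 4),
# redirect both into a log file, then restore them afterwards.
log=$(mktemp)

exec 3>&1 4>&2        # fd 3/4 = copies of the original fd 1/2
exec >"$log" 2>&1     # from here on, all output lands in the log

echo "captured in the log"
echo "stderr is captured too" >&2

exec 1>&3 2>&4 3>&- 4>&-   # restore both streams, close the copies
echo "back on the original stdout"
cat "$log"                 # show what was captured
rm -f "$log"
```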




Taking the comment from Stéphane Chazelas into account:



#!/bin/sh

exit_handler () {
    # 1. Make standard output be the original standard error
    #    (by using fd 3, which is a copy of original fd 2)
    # 2. Do the same with standard error
    # 3. Close fd 3.
    exec >&3 2>&3 3>&-
    cat "$logfile"
    curl "some URL" -F "file=@$logfile"
}


logfile='/var/log/myscript.log'

# Truncate the logfile.
: >"$logfile"

# 1. Make fd 3 a copy of standard error (fd 2)
# 2. Redirect original standard output to the logfile (appending)
# 3. Redirect original standard error to the logfile (will also append)
exec 3>&2 >>"$logfile" 2>&1

# Use shell function for exit trap (for neatness)
trap exit_handler EXIT

set -ex
wtfwtf


His point is that the logfile is only for diagnostic messages anyway, so it makes more sense to output the logfile to the original standard error stream.



He also points out that it's dangerous to use a fixed filename in a world-writable directory such as /tmp. This is because no check is put in place in the script to make sure that this file does not already exist (someone or some malware could have created a /tmp/error.log symlink to /etc/passwd or your ~/.bashrc for instance). His solution to this is to use a dedicated persistent log file for the script under /var/log instead (the file is persistent, but the contents will be cleared when running the script).
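The risk is easy to demonstrate in a scratch directory (a hypothetical attacker scenario, not the actual script):

```shell
# An attacker pre-creates the fixed log name as a symlink; the script's
# redirection then follows the link and clobbers the target file.
d=$(mktemp -d)
echo "important data" > "$d/victim"
ln -s "$d/victim" "$d/error.log"       # planted before the script runs
echo "script output" > "$d/error.log"  # ">" follows the symlink...
cat "$d/victim"                        # ...and the victim is overwritten
rm -rf "$d"
```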



A variation of this would be to use mktemp to create a unique filename under $TMPDIR (and then remove that file in the EXIT trap, unless curl failed in which case the rm would not be executed since set -e is in effect):



#!/bin/sh

exit_handler () {
    # 1. Make standard output be the original standard error
    #    (by using fd 3, which is a copy of original fd 2)
    # 2. Do the same with standard error
    # 3. Close fd 3.
    exec >&3 2>&3 3>&-
    cat "$logfile"
    curl "some URL" -F "file=@$logfile"
    rm -f "$logfile"
}


logfile=$( mktemp )

# 1. Make fd 3 a copy of standard error (fd 2)
# 2. Redirect original standard output to the logfile (appending)
# 3. Redirect original standard error to the logfile (will also append)
exec 3>&2 >>"$logfile" 2>&1

# Use shell function for exit trap (for neatness)
trap exit_handler EXIT

set -ex
wtfwtf



Your second example works, but only because you're not using cat on the log file, not because of copying it.




Minor nitpick: URLs on the command line should probably always be at least double-quoted as they tend to contain characters that the shell may interpret as special (for example ?).
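The ? globbing hazard is easy to see directly (scratch directory, hypothetical file name):

```shell
# With an unquoted URL, "?" is a glob character; if a matching file
# exists in the current directory, the shell substitutes the file name.
d=$(mktemp -d) && cd "$d"
touch 'error.phpXhostname=box'
echo error.php?hostname=box     # glob matches the file: prints its name
echo "error.php?hostname=box"   # quoted: the URL survives intact
cd / && rm -rf "$d"
```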






  • Where can I get more information about this exec?
    – Kamaraj
    Jun 7 at 6:50






  • @Kamaraj Duplicating and moving file descriptors is described in the bash manual. The sections are called "Duplicating File Descriptors" and "Moving File Descriptors". You may also search the web for tutorials about this if the manual's description is too terse (I have no good examples of tutorials unfortunately).
    – Kusalananda
    Jun 7 at 6:52






  • Here, since it's errors we're talking about, it would make more sense to send them to the original stderr. Also note that using files with fixed names in a world writable directory is a very dangerous thing to do. I'd do: log=/var/log/myscript.log; : > "$log"; exec 3>&2 >> /var/log/myscript.log 2>&1 and ... trap 'exec >&3 2>&3 3>&-; ... (and then you can even use sh instead of bash)
    – Stéphane Chazelas
    Jun 7 at 8:25











  • @StéphaneChazelas Much thanks. I have incorporated that in the answer.
    – Kusalananda
    Jun 7 at 9:05










  • Note that you can also define your exit_handler() as exit_handler() { ...; } >&3 2>&1 3>&-
    – Stéphane Chazelas
    Jun 7 at 10:31












answered Jun 7 at 6:42 by Kusalananda, edited Jun 7 at 10:30 by Stéphane Chazelas











  • Where can I get more information about this exec?
    – Kamaraj
    Jun 7 at 6:50

  • @Kamaraj Duplicating and moving file descriptors is described in the bash manual. The sections are called "Duplicating File Descriptors" and "Moving File Descriptors". You may also search the web for tutorials about this if the manual's description is too terse (I have no good examples of tutorials unfortunately).
    – Kusalananda
    Jun 7 at 6:52

  • Here, since it's errors we're talking about, it would make more sense to send them to the original stderr. Also note that using files with fixed names in a world writable directory is a very dangerous thing to do. I'd do: log=/var/log/myscript.log; : > "$log"; exec 3>&2 >> /var/log/myscript.log 2>&1 and ... trap 'exec >&3 2>&3 3>&-; ... (and then you can even use sh instead of bash)
    – Stéphane Chazelas
    Jun 7 at 8:25

  • @StéphaneChazelas Much thanks. I have incorporated that in the answer.
    – Kusalananda
    Jun 7 at 9:05

  • Note that you can also define your exit_handler() as exit_handler() { ...; } >&3 2>&1 3>&-
    – Stéphane Chazelas
    Jun 7 at 10:31