Subject: Output from your job 1843 / Body: Killed
I run a SheevaPlug (a small ARM server) with Debian 9. It does not have any third-party repos enabled in sources.list / sources.list.d.

I have a backup script which runs as root and uses at. I think something broke on Sep 13, because I am getting these emails that look like they come from at. They are daily, like my backups. The body of the message just says "Killed".
I can't think what would be sending SIGKILL to my process! Without gathering any more information than I have now, can you think of any reason this would happen?
It can't be the OOM killer (an out-of-memory condition), because I have a full kernel log in dmesg, and it shows no OOM messages.
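(If I wanted to double-check that, a one-liner like this should surface any OOM-killer activity or kernel-initiated kills, assuming the ring buffer hasn't wrapped since the event:)

# Search the kernel log for OOM-killer activity or "Killed process" messages
dmesg -T | grep -i -E 'oom|killed process'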
The at job is:
#!/bin/sh
# at uses the sh shell
set -e
cd /d/backup/jenkins-desktop/
for i in */; do
    nice ionice -c 3 rdiff-backup "$i" ../jenkins-desktop.rdiff/"$i"
done
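(For context, the script is queued with at, roughly along these lines; the script path and time here are illustrative, not my exact setup:)

# Hypothetical example of how such a job gets queued; path and time are illustrative
echo /usr/local/sbin/backup-jenkins.sh | at 02:00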
I doubt it's systemd's SystemCallFilter=, and that would send SIGSYS by default anyway. I see that a couple of rlimits send SIGKILL, but I'm not setting any rlimits myself; also, it looks like in both cases you would be killed by SIGXCPU first, which is fatal by default and should show "CPU time limit exceeded".
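(If I wanted to rule that out, I could have the at job dump the limits it actually runs under, e.g.:)

# Print the resource limits in effect for this shell and its children
cat /proc/$$/limits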
I have looked in journalctl --since=-2d -p notice and there are no errors, only some success messages from anacron.

For reference, here is the first message:
Return-path: <root@brick>
Envelope-to: root@brick
Delivery-date: Thu, 13 Sep 2018 02:14:15 +0100
Received: from root by brick with local (Exim 4.89)
(envelope-from <root@brick>)
id 1g0GD0-0000Xr-Bz
for root@brick; Thu, 13 Sep 2018 02:14:14 +0100
Subject: Output from your job 1843
To: root@brick
Message-Id: <E1g0GD0-0000Xr-Bz@brick>
From: root <root@brick>
Date: Thu, 13 Sep 2018 02:14:14 +0100
X-IMAPbase: 1541805998 113
Status: O
X-UID: 1
Killed
Tags: logs, signals, at, sigkill
asked 3 hours ago by sourcejedi (edited 3 hours ago)
1 Answer
"The body of the message just says Killed."
Sorry, this was incorrect. The body of the first message says "Killed"; I think that was a one-off killing performed by an admin (me) :-).

The reason I am getting daily messages can be investigated by looking at the subsequent messages. Or, to be careful, I should say that the second and last messages look the same :-).
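(Since the messages are delivered locally by Exim, inspecting them is just a matter of opening root's mail spool; the path below assumes the Debian default mbox location.)

# Read root's locally delivered mail; adjust the path if your MTA delivers elsewhere
less /var/mail/root

The second and last messages contain: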
Previous backup seems to have failed, regressing destination now.
Exception '[Errno 28] No space left on device' raised of class '<type 'exceptions.IOError'>':
  File "/usr/lib/python2.7/dist-packages/rdiff_backup/robust.py", line 32, in check_common_error
    try: return function(*args)
  File "/usr/lib/python2.7/dist-packages/rdiff_backup/restore.py", line 468, in get_fp
    Rdiff.write_patched_fp(current_fp, delta_fp, new_fp)
  File "/usr/lib/python2.7/dist-packages/rdiff_backup/Rdiff.py", line 73, in write_patched_fp
    rpath.copyfileobj(librsync.PatchedFile(basis_fp, delta_fp), out_fp)
  File "/usr/lib/python2.7/dist-packages/rdiff_backup/rpath.py", line 64, in copyfileobj
    outputfp.write(inbuf)

Exception '[Errno 28] No space left on device' raised of class '<type 'exceptions.IOError'>':
  File "/usr/lib/python2.7/dist-packages/rdiff_backup/Main.py", line 304, in error_check_Main
    try: Main(arglist)
  File "/usr/lib/python2.7/dist-packages/rdiff_backup/Main.py", line 324, in Main
    take_action(rps)
  File "/usr/lib/python2.7/dist-packages/rdiff_backup/Main.py", line 280, in take_action
    elif action == "backup": Backup(rps[0], rps[1])
You might wonder why "regressing destination" seems to fail with "No space left on device". I'm not sure either, because there seems to be a fair amount of free space on the drive, but that's a question for another day.
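(When I do get to it, checking inode usage as well as block usage seems like the first step, since "No space left on device" on a filesystem with apparently free space can mean it has run out of inodes. A sketch, with the mount point assumed from the script above:)

# Check free blocks and free inodes on the backup filesystem
df -h /d/backup
df -i /d/backup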
answered 3 hours ago by sourcejedi (edited 3 hours ago)