How to set a timeout for the systemd start job "dev-md125.device" (mdadm)
I've set up a RAID1 device with mdadm on CentOS 7. The system boots fine when both disks are inserted, but hangs when only one is present.
The error occurs at boot with the following message from systemd:
A start job is running for dev-md125.device (54s / no limit)
The problem here is the "no limit" part. How do I add a limit so that the system can continue booting?
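As an aside: if the boot can be interrupted (an emergency shell, or systemd.debug-shell=1 on the kernel command line for a root shell on tty9), the waiting job should be inspectable. A small diagnostic sketch, assuming the standard systemd tooling is available:
$ systemctl list-jobs    # the dev-md125.device start job should show as "running"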
I don't see anything in my mdadm.conf that would control this:
$ cat /etc/mdadm.conf
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/archive:boot level=raid1 num-devices=2 UUID=1104ad14:c378ffcd:5d2c92be:ffaace05
ARRAY /dev/md/archive:root level=raid1 num-devices=2 UUID=f30b5fcf:d194f469:404a464f:c1b0ba0a
ARRAY /dev/md/archive:swap level=raid1 num-devices=2 UUID=d6490a08:3c6a7311:cb7ddd3f:9eac77ff
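Note that mdadm.conf lists the arrays by name rather than by kernel device, so md125 should be one of the three above. A quick way to confirm the mapping from a successful two-disk boot (a sketch; the device names are from this setup):
$ cat /proc/mdstat              # kernel names: md125, md126, md127, ...
$ ls -l /dev/md/                # the archive:* names are symlinks to ../md12x
$ mdadm --detail /dev/md125     # the UUID shown should match one ARRAY line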
I tried adding timeouts to fstab:
$ cat /etc/fstab
UUID=309bc32c-d75b-4ddb-9141-f234be9b72ca / xfs defaults,x-systemd.device-timeout=5 1 1
UUID=b336e2bb-f5d2-4065-9aed-9de77c02c0e2 /boot xfs defaults,x-systemd.device-timeout=5 1 2
UUID=93434118-d16e-4cc7-8ff0-c0891bcbcb72 swap swap defaults,x-systemd.device-timeout=5 0 0
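As far as I understand it, x-systemd.device-timeout= only sets a job timeout on the .device units that systemd generates for the fstab entries themselves (the dev-disk-by\x2duuid-*.device units), not on dev-md125.device, which would explain why that unit still waits without limit. The effective value can be checked (a diagnostic sketch; JobTimeoutUSec is the property behind the "no limit" figure):
$ systemctl show dev-md125.device -p JobTimeoutUSec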
I thought that /etc/systemd/system/sysinit.target.wants/dmraid-activation.service
might be responsible, but adding TimeoutSec=5 to it did not change the behavior (still "no limit"):
$ cat /etc/systemd/system/sysinit.target.wants/dmraid-activation.service
[Unit]
Description=Activation of DM RAID sets
DefaultDependencies=no
Conflicts=shutdown.target
After=systemd-udev-settle.service
Before=lvm2-activation-early.service cryptsetup.target local-fs.target shutdown.target
Wants=systemd-udev-settle.service
[Service]
ExecStart=/lib/systemd/rhel-dmraid-activation
Type=oneshot
TimeoutSec=5
[Install]
WantedBy=sysinit.target
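In hindsight, TimeoutSec= in a [Service] section only bounds how long ExecStart= may run; it does not affect how long systemd waits on a queued job for a device unit, which is what the "no limit" counter is tracking. Would a drop-in on the device unit itself be the right fix? A sketch of what I mean (untested; the 30-second value is arbitrary, and if md125 is assembled inside the initramfs the drop-in would presumably have to be rebuilt into it, e.g. with dracut -f):
$ mkdir -p /etc/systemd/system/dev-md125.device.d
$ cat /etc/systemd/system/dev-md125.device.d/timeout.conf
[Unit]
# Stop waiting for the array after 30 seconds instead of forever.
JobTimeoutSec=30
$ systemctl daemon-reload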
Tags: centos, systemd, raid, mdadm, software-raid
asked by Zhro