Udev rule to automount media devices stopped working after systemd was updated to version 239
For some time I had a working udev rule to automount media devices.
/etc/udev/rules.d/61-mount_media_by_label.rules
#
# To propagate udev's mountpoint to the user space, MountFlags must have a value "shared" in the /usr/lib/systemd/system/systemd-udevd.service.
#
# Ignore devices that aren't storage block-devices and block-devices that are already listed in /etc/fstab.
KERNEL!="sd[a-z][1-9]*", GOTO="mount_media_by_label_end"
PROGRAM="/bin/grep -e '^UUID=%E{ID_FS_UUID}' /etc/fstab", RESULT!="", GOTO="mount_media_by_label_end"
# Decide the name for device's mountpoint directory, based on device's label.
ENV{ID_FS_LABEL}!="", ENV{mountpoint}="%E{ID_FS_LABEL}"
ENV{ID_FS_LABEL}=="", ENV{mountpoint}="usb-%k"
# If device is being plugged in, set options for mount command.
ACTION=="add", ENV{mount_options}="relatime"
ACTION=="add", ENV{ID_FS_TYPE}=="vfat|ntfs", ENV{mount_options}="%E{mount_options},utf8,gid=100,umask=002"
# If device is being plugged in, create mountpoint directory in /media and mount device node to it.
ACTION=="add", RUN+="/bin/mkdir -p /media/%E{mountpoint}", RUN+="/bin/mount -o %E{mount_options} /dev/%k /media/%E{mountpoint}"
# If device is being plugged out, unmount it and delete its mountpoint directory.
ACTION=="remove", ENV{mountpoint}!="", RUN+="/bin/umount -l /media/%E{mountpoint}", RUN+="/bin/rmdir /media/%E{mountpoint}"
# Label for early exit.
LABEL="mount_media_by_label_end"
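The PROGRAM line in the rule skips devices that are already listed in /etc/fstab by grepping for their filesystem UUID. Outside udev, the same check can be sketched in plain shell; the UUID values and fstab contents below are made-up illustrations:

```shell
# Sketch of the fstab check the udev rule performs (hypothetical UUID/fstab).
fstab=$(mktemp)
printf 'UUID=1234-ABCD /mnt/usb vfat defaults 0 0\n' > "$fstab"

in_fstab() {
    # Mirrors: /bin/grep -e '^UUID=<uuid>' /etc/fstab
    grep -q -e "^UUID=$1" "$2"
}

if in_fstab "1234-ABCD" "$fstab"; then echo "skip: already in fstab"; fi
if ! in_fstab "DEAD-BEEF" "$fstab"; then echo "mount: not in fstab"; fi
rm -f "$fstab"
```

Note this only matches fstab entries that use the `UUID=` form; entries referring to the same device by `/dev/...` path or `LABEL=` would not be caught, which is a limitation of the original rule as well.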
To make this rule work, I only had to change the value of the MountFlags option to shared in /usr/lib/systemd/system/systemd-udevd.service. After I updated systemd to version 239, this file looks different.
I noticed 2 changes that might be problematic:
- The MountFlags option is no longer specified in the default settings.
- There is a new option, PrivateMounts, set to yes.
From systemd's documentation I figured that now I only need to set PrivateMounts=no and the mount point would propagate to user space. However, this is not the case.
I have tried:
- Setting PrivateMounts=no
- Setting PrivateMounts=no and adding MountFlags=shared
but neither works.
What is the correct way to mount media devices from udev rules in systemd v239 and later?
mount systemd udev automounting
weird, it worked for someone else. github.com/systemd/systemd/issues/9873#issuecomment-413058745 – sourcejedi, Aug 26 at 20:38
you remembered to both reload systemd and restart udev, right? – sourcejedi, Aug 26 at 20:39
I rebooted the whole system and assume that is good enough? – Iskustvo, Aug 26 at 20:43
Can you compare the output of eval "$(systemctl show -p MainPID systemd-udevd)"; sudo ls -l /proc/$MainPID/ns/mnt with the unchanged service file from v239, vs. with the modification from case 1, vs. ls -l /proc/self/ns/mnt? – sourcejedi, Aug 26 at 20:43
whole-system reboot is good enough, thanks. – sourcejedi, Aug 26 at 20:44
asked Aug 26 at 20:20 by Iskustvo
1 Answer
This approach may be sub-optimal. For example, if you support mounting writable NTFS using ntfs-3g, the ntfs-3g process will be killed whenever you restart udev.
Note that modern security doctrine suggests desktops should start using FUSE for mounting all removable filesystems. https://lwn.net/Articles/755593/
It would be preferable if you could work out how to launch (and stop?) a separate systemd unit ... and write this up as the preferred approach, in whatever peculiar documents keep suggesting that Arch users use this pattern :-). Using a separate systemd unit will avoid the restrictions applied to the udev service.
For example, launch a command in a systemd scope unit using systemd-run --no-block --scope -- my mount command here.
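Applied to the original rule, this would mean delegating the mount command from RUN+= to a scope unit instead of running it inside udev's own service. A hedged sketch (the paths and option names mirror the question's rule; this is illustrative, not a tested drop-in replacement):

```
# Hypothetical variant of the "add" line, delegating the mount to a scope
# unit so it runs outside systemd-udevd's mount namespace and sandbox:
ACTION=="add", RUN+="/bin/mkdir -p /media/%E{mountpoint}", RUN+="/usr/bin/systemd-run --no-block --scope -- /bin/mount -o %E{mount_options} /dev/%k /media/%E{mountpoint}"
```

Because systemd-run asks PID 1 to start the scope, the mount happens in the host's namespace even when systemd-udevd itself runs with PrivateMounts=yes.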
Unfortunately, if you want the unit that contains ntfs-3g to have an identifiable name, it's not immediately obvious what the 100% correct way is. If an old unit with that name is still tracked as "active" but the process has just exited, then simply asking the service to start won't do anything. You could ignore the problem, generate a random suffix for the name, or try to rule out this sequence of events... but maybe there's a better way.
I have not tested this with FUSE, but I think the way to do this would be the systemd-mount command.
A SuperUser answer suggests that using systemd-mount on a device while the udev rule is still running might not work correctly. This would require rather baroque workarounds (RUN+="/path/to/my/script %k" which runs systemd-run --no-block --scope --unit=mount-$1 sh -c "systemctl start /dev/$1; systemd-mount ...").
I think the way to do this would look something like:
ENV{SYSTEMD_WANTS}="my-mounter@%k.service"
# /etc/systemd/system/my-mounter@.service
[Service]
Type=oneshot
ExecStart=systemd-mount %I
#!/bin/sh
# /usr/local/lib/my-mounter
# You can make this as complicated as you want.
# (Although curiously systemd-mount also reads SYSTEMD_MOUNT_WHERE and
# SYSTEMD_MOUNT_OPTIONS properties if you set them on the udev device.)
# You could also read udev properties yourself using
# eval "$(udevadm info --query=property --export)"
DEVNAME="$1"
systemd-mount "/dev/$DEVNAME"
The defaults for systemd-mount cause the filesystem to be unmounted automatically on removal, but they do not clean up the automatically-created mount point directory afterwards (!).
There were two separate changes in v239 – two separate directives that you must revert to get the old behaviour:
- PrivateMounts=yes. Replace this with PrivateMounts=no.
- SystemCallFilter=@system-service @module @raw-io. The use of this directive is new in v239, so the simplest way to regain the previous behaviour is to remove it entirely.
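Rather than editing /usr/lib/systemd/system/systemd-udevd.service in place (a package upgrade will overwrite it), both reverts can live in a drop-in override. A sketch, relying on systemd's usual drop-in semantics, where assigning an empty value to a list-type setting such as SystemCallFilter resets it:

```
# /etc/systemd/system/systemd-udevd.service.d/override.conf
# (e.g. created with: systemctl edit systemd-udevd)
[Service]
PrivateMounts=no
# Empty assignment clears the filter list set by the stock unit.
SystemCallFilter=
```

Then run systemctl daemon-reload and restart systemd-udevd (or reboot) for the override to take effect.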
edited Aug 27 at 10:52; answered Aug 27 at 9:39 by sourcejedi