Why is it possible to delete your entire file system? [closed]
After committing the infamous mistake of deleting my entire file system via sudo rm -rf /*
, recovering from the horrendous damage that I had done, and coping with the fact that I just lost 6 years off my lifespan, I started wondering why it is even possible to do that, and what could be done to prevent this mistake from happening.
One solution that was suggested to me is revoking root access from my account, but that is inconvenient, because a lot of commands require root access and when you have to run a few dozen commands every day, that gets annoying.
Backing up your system is the obvious way to go. But restoring a backup also requires some downtime, and depending on your system that downtime could be days or weeks, which could be unacceptable in some cases.
My question is: Why not implement a confirmation when the user tries to delete their filesystem? That way, when you actually want to do it you just hit Y or Enter, and if it was a mistake, at least you don't lose everything.
command-line rm
closed as primarily opinion-based by Pilot6, Sergiy Kolodyazhnyy, Xen2050, Eric Carvalho, Thomas Ward♦ Feb 13 at 15:19
Many good questions generate some degree of opinion based on expert experience, but answers to this question will tend to be almost entirely based on opinions, rather than facts, references, or specific expertise. If this question can be reworded to fit the rules in the help center, please edit the question.
See serverfault.com/q/337082
– jdv
Feb 12 at 19:53
"Why is it even possible to do that?" Why shouldn't it be possible? There are perfectly good reasons to delete the contents of a directory hierarchy, and there are plenty of subsets of / that would be nearly as bad to delete (/etc/, for example). It simply is not the job of rm to decide which directories can or can't easily be deleted.
– chepner
Feb 12 at 20:30
Title says "Why is it possible to delete the system?" whereas the question itself asks "Why not implement a confirmation when the user tries to delete their filesystem?". This makes the question unclear. Which one is your actual question, so we at least know what to answer? Please edit your post to clarify.
– Sergiy Kolodyazhnyy
Feb 13 at 6:57
What's actually the question here? I can see three: (1) Why is it possible? (2) How to prevent doing it?, and (3) Why not implement a confirmation? -- They are not the same question, the first asks for reasoning, the second for tools. (The third is related to the second, but still not really the same. A confirmation isn't the only way to prevent something.)
– ilkkachu
Feb 13 at 10:16
If you're not asking for clarification from the question's author, then please don't comment at all. I see a lot of self-congratulatory comments here, explaining how this is the OP's fault for not knowing what the flags mean, or for not having a backup or whatever. I am very happy to know that so many of our users are wise enough to have backups and not run commands they don't understand. That's absolutely great for them, but fundamentally unhelpful to the OP who, presumably, has also learnt this lesson by now. So let's stop basking in our own brilliance and just answer the question.
– terdon♦
Feb 13 at 14:23
edited Feb 14 at 15:13
asked Feb 12 at 14:44
Mister_Fix
7 Answers
rm is a low-level system tool. These tools are built as simply as possible, as they must be present on any system. rm is expected to have well-known behaviour, especially with regard to confirmation prompts, so that it can be used in scripts.
Adding a special case to prompt on rm /* would not be possible, as the rm command doesn't see it in this form. The * wildcard is expanded by the shell before being passed to rm, so the actual command which would need a special case is something like rm /bin /boot /dev /etc /home /initrd.img /lib /lib64 /lost+found /media /mnt /opt /proc /root /run /sbin /srv /sys /tmp /usr /var /vmlinuz. Adding code to check for this case (the list will probably differ between Linux distributions) would be a complex challenge, as well as being prone to subtle errors. The standard Linux rm does have one default protection against system destruction: it refuses to remove / without the --no-preserve-root option.
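You can watch the expansion happen by swapping in echo, which just prints its arguments (the /tmp/glob-demo path below is a made-up scratch directory):

```shell
# The shell, not rm, expands the wildcard. echo is a harmless way
# to preview exactly which arguments rm would receive.
mkdir -p /tmp/glob-demo
touch /tmp/glob-demo/a /tmp/glob-demo/b
echo rm /tmp/glob-demo/*
# prints: rm /tmp/glob-demo/a /tmp/glob-demo/b
```

By the time rm runs, it only sees a flat list of paths and has no way of knowing a wildcard was ever involved.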
By default there are three protections against deleting your system in this way:
- Permissions - regular users won't be able to remove important files. You bypassed this with sudo
- Directories - by default rm will not remove directories. You bypassed this with the -r flag
- Write protected files - by default, rm will ask for confirmation before deleting a write protected file (this would not have stopped all the damage, but may have provided a prompt before the system became unrecoverable). You bypassed this protection with the -f flag
To remove all the contents of a folder, rather than running rm /path/to/folder/*, do rm -rf /path/to/folder followed by mkdir /path/to/folder, as this benefits from the --preserve-root protection (if the path ever ends up empty) as well as removing any dotfiles in the folder.
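A sketch of that pattern with a throwaway directory (the path is hypothetical):

```shell
# Safer way to empty a directory: remove the directory itself, then
# recreate it. Unlike rm dir/*, this also removes dotfiles, and a slip
# that reduces the path to / is stopped by rm's --preserve-root default.
dir=/tmp/example-folder            # hypothetical path
mkdir -p "$dir"
touch "$dir/file" "$dir/.hidden"

rm -rf "$dir"                      # remove folder and ALL contents
mkdir "$dir"                       # recreate it, now empty
```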
"rm is expected to have well known behaviour", and it is in fact one of the tools specified by the POSIX standard. "The * wildcard is expanded by the shell before being passed to rm": exactly, so adding checks for all types of parameters, which may be symlinks to actual directories and files in /, would take a lot of combinations and considerations, so it's not practical. And going back to the idea of standards, adding such checks would break consistent behaviour.
– Sergiy Kolodyazhnyy
Feb 13 at 9:30
That’s exactly why safe-rm is a wrapper around rm: this way it can check every single argument (instead of the whole command line), verify it’s not on the configurable blacklist, and only then call rm with the verified arguments. That’s neither very complex nor prone to errors.
– dessert
Feb 13 at 9:44
Meet safe-rm, the “wrapper around the rm command to prevent accidental deletions”:

safe-rm prevents the accidental deletion of important files by replacing rm with a wrapper which checks the given arguments against a configurable blacklist of files and directories which should never be removed. Users who attempt to delete one of these protected files or directories will not be able to do so and will be shown a warning message instead. (man safe-rm)

If the installation link above doesn’t work for you, just use sudo apt install safe-rm instead.
The default configuration already contains the system directories, let’s try rm /*
for example:
$ rm /*
safe-rm: skipping /bin
safe-rm: skipping /boot
safe-rm: skipping /dev
safe-rm: skipping /etc
safe-rm: skipping /home
safe-rm: skipping /lib
safe-rm: skipping /proc
safe-rm: skipping /root
safe-rm: skipping /sbin
safe-rm: skipping /sys
safe-rm: skipping /usr
safe-rm: skipping /var
…
As you see, this would prevent you from deleting /home
, where I suppose your personal files are stored. However, it does not prevent you from deleting ~
or any of its subdirectories if you try deleting them directly. To add the ~/precious_photos
directory just add its absolute path with the tilde resolved to safe-rm
’s config file /etc/safe-rm.conf
, e.g.:
echo /home/dessert/precious_photos | sudo tee -a /etc/safe-rm.conf
For the cases where you run rm without sudo¹ and without the -f flag, it’s a good idea to add an alias for your shell that makes rm’s -i flag the default. This way rm asks for confirmation for every file before deleting it:
alias rm='rm -i'
A similarly useful flag is -I; it only warns “once before removing more than three files, or when removing recursively”, which is “less intrusive than -i, while still giving protection against most mistakes”:
alias rm='rm -I'
The general danger of these aliases is that you easily get in the habit of relying on them to save you, which may backfire badly when using a different environment.
¹ sudo ignores aliases, though one can work around that by defining alias sudo='sudo '.
Confirmation is already there: the problem is the -f in the command, that is, --force. When a user forces an operation, it is assumed they know what they are doing (obviously, a mistake can always happen).
An example:
rm -r ./*
rm: remove write-protected regular file './mozilla_mvaschetto0/WEBMASTER-04.DOC'? N
rm: cannot remove './mozilla_mvaschetto0': Directory not empty
rm: descend into write-protected directory './pulse-PKdhtXMmr18n'? n
rm: descend into write-protected directory './systemd-private-890f5b31987b4910a579d1c49930a591-bolt.service-rZWMCb'? n
rm: descend into write-protected directory './systemd-private-890f5b31987b4910a579d1c49930a591-colord.service-4ZBnUf'? n
rm: descend into write-protected directory './systemd-private-890f5b31987b4910a579d1c49930a591-fwupd.service-vAxdbk'? n
rm: descend into write-protected directory './systemd-private-890f5b31987b4910a579d1c49930a591-minissdpd.service-9G8GrR'? n
rm: descend into write-protected directory './systemd-private-890f5b31987b4910a579d1c49930a591-ModemManager.service-s43zUX'? n
rm: descend into write-protected directory './systemd-private-890f5b31987b4910a579d1c49930a591-rtkit-daemon.service-cfMePv'? n
rm: descend into write-protected directory './systemd-private-890f5b31987b4910a579d1c49930a591-systemd-timesyncd.service-oXT4pr'? n
rm: descend into write-protected directory './systemd-private-890f5b31987b4910a579d1c49930a591-upower.service-L0k9rT'? n
It is different with the --force option: I will not get any confirmation, and the files are deleted.
The real problem is knowing the command and its parameters: dig into the man page of a command, even if you found the command in a tutorial. For example, the first time I saw the command tar xzf some.tar.gz I asked myself, "what does xzf mean?" Then I read the tar manpage and found out.
I don’t think that’s relevant here. At the point where rm first asks for a write-protected or whatever file, it may already have deleted a whole bunch of important files.
– Jonas Schäfer
Feb 12 at 17:30
So personally, I have always thought -f was required to delete folders. I even opened a prompt to confirm and complain, but learned that just -r is needed. I suppose rm -rf has become the norm since it is so useful in a script (you don't want the script to fail just because you're trying to delete things that don't exist), so you see it often, but I suppose we need to be vigilant about just using rm -r as our "default" when in a shell (understandably there should be no "default" assumptions you don't understand, especially with sudo, but people will be people and at least this is safer).
– Captain Man
Feb 12 at 18:42
Rmdir is the safest way to delete a folder
– AtomiX84
Feb 12 at 18:43
rm does not ask for confirmation by default; it only asks for write-protected directories and files. If you ran that command on your machine, you probably deleted lots of your own files. If you need rm to ask for confirmation, pass the -i parameter. For example: rm -ir ./*
– Dan
Feb 13 at 14:16
Running without backups means you have to be super careful to never make any mistakes. And hope your hardware never fails. (Even RAID can't save you from filesystem corruption caused by faulty RAM.) So that's your first problem. (Which I assume you've already realized and will be doing backups in the future.)
But there are things you can do to reduce the likelihood of mistakes like this:
- alias rm='rm -I' to prompt if deleting more than 3 things.
- alias mv and cp to mv -i and cp -i (many normal use-cases for these don't involve overwriting a destination file).
- alias sudo='sudo ' to do alias expansion on the first argument to sudo.
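As a sketch, the three items above could live in ~/.bashrc like this:

```shell
# ~/.bashrc additions: a sketch of the aliases described above
alias rm='rm -I'      # warn once before removing >3 files or recursing
alias mv='mv -i'      # prompt before overwriting the destination
alias cp='cp -i'      # likewise for copies
alias sudo='sudo '    # trailing space: check the word after sudo for aliases too
```

The trailing space in the sudo alias works because bash checks the next word for alias expansion whenever an alias value ends in a blank.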
I find rm -I a lot more useful than rm -i. It usually doesn't prompt during normal use, so getting prompted when you didn't expect it is a lot more noticeable / a better warning. With -i (before I discovered -I), I got used to typing \rm to disable alias expansion, after being sure I'd typed the command correctly.
You don't want to get in the habit of relying on rm -i
or -I
aliases to save you. It's your safety line that you hope never gets used. If I actually want to interactively select which matches to delete, or I'm not sure if my glob might match some extra files, I manually type rm -i .../*whatever*
. (Also a good habit in case you're ever in an environment without your aliases).
Defend against fat-fingering Enter by typing ls -d /*foo* first, then hitting up-arrow and changing that to rm -r after you've finished typing. So the command line never contains rm -rf ~/ or similar dangerous commands at any point. You only "arm" it by changing ls to rm (control-a then alt-d to replace the first word at the start of the line) and adding the -r or the -f after you've finished typing the ~/some/sub/dir/ part of the command.
Depending on what you're deleting, actually run the ls -d
first, or not if that wouldn't add anything to what you see with tab-completion. You might start with rm
(without -r
or -rf
) so it's just control-a / control-right (or alt+f) / space / -r
.
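A hypothetical session illustrating that habit (the paths are made up):

```shell
# Set up a throwaway directory so the demo is self-contained.
mkdir -p /tmp/demo/old-foo-build /tmp/demo/keep

# 1. Type the glob behind a harmless command first and inspect the match:
ls -d /tmp/demo/*foo*
# prints: /tmp/demo/old-foo-build

# 2. Recall the line with up-arrow and only then swap "ls -d" for
#    "rm -r"; the dangerous command never exists half-typed:
rm -r /tmp/demo/*foo*
```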
(Get used to bash/readline's powerful editing keybindings for moving around quickly, like control-arrows or alt+f/b to move by words, and killing whole words with alt+backspace or alt+d, or control-w. And control-u to kill to the beginning of the line. And control-/ to undo an edit if you go one step too far.
And of course up-arrow history that you can search with control-r / control-s.)
Avoid -rf
unless you actually need it to silence prompts about removing read-only files.
Take extra time to think before pressing return on a sudo
command. Especially if you don't have full backups, or now would be a bad time to have to restore from them.
Well, the short answer is to not run such a command.
The long story is that it's part of the customization. Essentially there are two factors at play here. One is the fact that you are free to modify all files. The second is that the rm command offers the helpful syntactic sugar to delete all files under a folder.
Effectively this could be restated as a single simple tenet of Unix machines: everything is a file. To make matters better, there are access controls, but these are overridden by your usage of sudo.
I guess you could add an alias or a function to ensure that this can never be run.
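A minimal, hypothetical sketch of such a guard function (the names and rules are made up for illustration; this is not a vetted safety tool, and not a substitute for backups):

```shell
# Hypothetical guard: refuse to operate on / or on top-level
# directories like /etc, and hand everything else to the real rm.
# Note: a naive check; e.g. a trailing slash (/etc/) slips past it.
rm() {
    for arg in "$@"; do
        case "$arg" in
            -*) continue ;;        # option flags are not paths, skip
            /)  echo "rm: refusing to touch '/'" >&2; return 1 ;;
            /*/*) ;;               # deeper absolute paths are allowed
            /*) echo "rm: refusing to touch top-level '$arg'" >&2
                return 1 ;;
        esac
    done
    command rm "$@"                # all arguments passed the check
}
```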
If your system file space usage isn't immense (and these days 'immense' means 'hundreds of gigabytes or more') create some virtual machine instances, and always work inside of one. Recovery would just entail using a backup instance.
Or you could create a chroot jail, and work inside it. You'd still need some recovery if it got trashed, but that would be easier with a running (enclosing) system to work from.
This is probably the most effective answer, since it can protect against any damage, even third party scripts. You'd only have to worry about actual malware.
– PyRulez
Feb 12 at 19:32
Thought of another angle. It's worth asking why you need to do recursive deletions in the first place. Maybe what's really needed are some scripts to remove a project, etc.
– Loren Rosen
Feb 12 at 19:39
"It's worth asking why you need to do recursive deletions in the first place." Well, just because there's no built-in command doesn't mean you still can't make a mistake. Third party scripts might delete files one by one from some directory. And there are other ways to bork the system that only touch one file. However, replacing rm with safe-rm helps, at least.
– PyRulez
Feb 12 at 19:42
My notion with the script was that it would have a built-in notion of a 'project' or similar. Perhaps you'd have an empty file at the project root called .project_root, or, if the file system supports it, an attribute on the directory itself. Then the script would go up the file tree looking for the project root, and complain if the current directory wasn't in a project. Or, if the projects all live in the same place, the script could require you to name a project. You could still delete the wrong project, but not destroy the entire system.
– Loren Rosen
Feb 12 at 20:00
... also, a variant of chroot would be to use something like Docker (which I think actually uses chroot under the covers). For other files you just need to read, mount a read-only file-system.
– Loren Rosen
Feb 12 at 20:34
rm is a very old Unix command and was likely not designed with user-friendliness in mind. It tries to do precisely what it's asked to, when it has the permissions. A pitfall for many new users is that they frequently see code snippets with sudo and don't think much about using it. Tools that directly modify files, like rm, dd, chroot, etc., require extreme care in use.
Nowadays I like to use trash
(without sudo) from trash-cli. It functions like the Recycle Bin from Windows, in that you can easily retrieve accidentally deleted files. Ubuntu already has a Trash folder and move-to-trash functionality built into Files.
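A hypothetical session, assuming the trash-cli package is installed (sudo apt install trash-cli):

```shell
# trash-cli mirrors the desktop Recycle Bin on the command line:
# deleted files land in the XDG trash and can be recovered later.
touch "$HOME/notes.txt"            # throwaway file for the demo
trash-put "$HOME/notes.txt"        # move to trash instead of unlinking
trash-list                         # list trashed files with original paths
# trash-restore                    # interactively restore a trashed file
# trash-empty 30                   # purge items trashed >30 days ago
```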
Even then, you may make mistakes, so be sure to keep backups of your entire filesystem.
7 Answers
7
active
oldest
votes
7 Answers
7
active
oldest
votes
active
oldest
votes
active
oldest
votes
rm
is a low level system tool. These tools are built as simply as possible as they must be present on any system. rm
is expected to have well known behaviour, especially with regard to confirmation prompts so that it can be used in scripts.
Adding a special case to prompt on rm /*
would not be possible as the rm command doesn't see it in this form. The *
wildcard is expanded by the shell before being passed to rm
, so the actual command which needs a special case would be something like rm /bin /boot /dev /etc /home /initrd.img /lib /lib64 /lost+found /media /mnt /opt /proc /root /run /sbin /srv /sys /tmp /usr /var /vmlinuz
. Adding the code to check for this case (which will probably be different on diffferent linuxes) would be a complex challenge as well as being prone to subtle errors. The standard linux rm
does have a default protection against system destruction by refusing to remove /
without the --no-preserve-root
option.
By default there are three protections against deleting your system in this way:
- Permissions - regular users won't be able to remove important files. You bypassed this with sudo
- Directories - by default rm will not remove directories. You bypassed this with the -r flag
- Write protected files - by default, rm will ask for confirmation before deleting a write protected file (this would not have stopped all the damage, but may have provided a prompt before the system became unrecoverable). You bypassed this protection with the -f flag
To remove all the contents of a folder, rather than running rm /path/to/folder/*
, do rm -rf /path/to/folder
, then mkdir /path/to/folder
as this will trigger the --preserve-root
protection as well as removing any dotfiles in the folder
3
" rm is expected to have well known behaviour" and it is in fact one of the tools specified by POSIX standard. "he * wildcard is expanded by the shell before being passed to rm" Exactly, so adding checks for all types of parameters, which may be symlinks to actual directories and files in/
would take a lot of combinations and considerations, so it's not practical. And going back to the idea of standards, adding such checks would break consistent behavior
– Sergiy Kolodyazhnyy
Feb 13 at 9:30
That’s exactly whysafe-rm
is a wrapper aroundrm
: This way it can check every single argument (instead of the whole random command line), verify it’s not on the configurable blacklist and only then callrm
with the verified arguments. That’s neither very complex nor prone to errors.
– dessert
Feb 13 at 9:44
add a comment |
rm
is a low level system tool. These tools are built as simply as possible as they must be present on any system. rm
is expected to have well known behaviour, especially with regard to confirmation prompts so that it can be used in scripts.
Adding a special case to prompt on rm /* would not be possible, as the rm command doesn't see it in this form. The * wildcard is expanded by the shell before being passed to rm, so the actual command which would need a special case is something like rm /bin /boot /dev /etc /home /initrd.img /lib /lib64 /lost+found /media /mnt /opt /proc /root /run /sbin /srv /sys /tmp /usr /var /vmlinuz. Adding the code to check for this case (which would probably differ between Linux distributions) would be a complex challenge as well as prone to subtle errors. The standard Linux rm does have a default protection against system destruction: it refuses to remove / without the --no-preserve-root option.
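To see that the expansion really happens in the shell, a small sandbox experiment (using a hypothetical /tmp/glob-demo directory) makes the argument list visible:

```shell
# The shell expands the glob before rm ever runs; rm only sees a file list.
# Sandbox under a hypothetical /tmp path, safe to run and clean up.
mkdir -p /tmp/glob-demo/a /tmp/glob-demo/b

# Prefixing the line with echo shows the arguments rm would actually receive:
echo rm -rf /tmp/glob-demo/*
# prints: rm -rf /tmp/glob-demo/a /tmp/glob-demo/b

rm -rf /tmp/glob-demo   # remove the sandbox itself
```

From rm's point of view there is nothing special about this argument list, which is why a prompt for "rm /*" specifically cannot be implemented inside rm.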
By default there are three protections against deleting your system in this way:
- Permissions - regular users won't be able to remove important files. You bypassed this with sudo
- Directories - by default rm will not remove directories. You bypassed this with the -r flag
- Write protected files - by default, rm will ask for confirmation before deleting a write protected file (this would not have stopped all the damage, but may have provided a prompt before the system became unrecoverable). You bypassed this protection with the -f flag
To remove all the contents of a folder, rather than running rm /path/to/folder/*, do rm -rf /path/to/folder and then mkdir /path/to/folder. This form also removes any dotfiles in the folder (which the * glob would skip), and it benefits from the --preserve-root protection if the path ever ends up empty.
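A sketch of that remove-and-recreate pattern, using a hypothetical /tmp/demo-folder (note how it also catches dotfiles that a * glob would miss):

```shell
# Hypothetical sandbox; the same pattern applies to any folder you want to empty.
mkdir -p /tmp/demo-folder
touch /tmp/demo-folder/visible /tmp/demo-folder/.hidden

rm -rf /tmp/demo-folder    # removes the folder including dotfiles; if the path
                           # were ever empty, "rm -rf /" is refused by the
                           # default --preserve-root protection of GNU rm
mkdir -p /tmp/demo-folder  # recreate it empty

ls -A /tmp/demo-folder     # nothing left, not even .hidden
```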
3
" rm is expected to have well known behaviour" and it is in fact one of the tools specified by POSIX standard. "he * wildcard is expanded by the shell before being passed to rm" Exactly, so adding checks for all types of parameters, which may be symlinks to actual directories and files in/
would take a lot of combinations and considerations, so it's not practical. And going back to the idea of standards, adding such checks would break consistent behavior
– Sergiy Kolodyazhnyy
Feb 13 at 9:30
That’s exactly why safe-rm is a wrapper around rm: this way it can check every single argument (instead of the whole random command line), verify it’s not on the configurable blacklist and only then call rm with the verified arguments. That’s neither very complex nor prone to errors.
– dessert
Feb 13 at 9:44
answered Feb 13 at 9:17 by rhellen · edited Feb 13 at 9:26 by Sergiy Kolodyazhnyy
Meet safe-rm
, the “wrapper around the rm
command to prevent accidental deletions”:
safe-rm prevents the accidental deletion of important files by
replacing rm with a wrapper
which checks the given arguments against a configurable blacklist of files and directories
which should never be removed.
Users who attempt to delete one of these protected files or directories will not be able
to do so and will be shown a warning message instead. (man safe-rm
)
If the installation link above doesn’t work for you, just use sudo apt install safe-rm instead.
The default configuration already contains the system directories, let’s try rm /*
for example:
$ rm /*
safe-rm: skipping /bin
safe-rm: skipping /boot
safe-rm: skipping /dev
safe-rm: skipping /etc
safe-rm: skipping /home
safe-rm: skipping /lib
safe-rm: skipping /proc
safe-rm: skipping /root
safe-rm: skipping /sbin
safe-rm: skipping /sys
safe-rm: skipping /usr
safe-rm: skipping /var
…
As you see, this would prevent you from deleting /home
, where I suppose your personal files are stored. However, it does not prevent you from deleting ~
or any of its subdirectories if you try deleting them directly. To add the ~/precious_photos
directory just add its absolute path with the tilde resolved to safe-rm
’s config file /etc/safe-rm.conf
, e.g.:
echo /home/dessert/precious_photos | sudo tee -a /etc/safe-rm.conf
For the cases where you run rm
without sudo
1 and the -f
flag it’s a good idea to add an alias
for your shell that makes rm
’s -i
flag the default. This way rm
asks for every file before deleting it:
alias rm='rm -i'
A similarly useful flag is -I
, just that it only warns “once before removing more than three files, or when removing recursively”, which is “less intrusive than -i
, while still giving protection against most mistakes”:
alias rm='rm -I'
The general danger of these aliases is that you easily get in the habit of relying on them to save you, which may backfire badly when using a different environment.
1: sudo ignores aliases, but one can work around that by defining alias sudo='sudo ' (the trailing space makes the shell check the word after sudo for alias expansion too).
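Putting the pieces together, a possible ~/.bashrc fragment (a sketch; adjust to taste):

```shell
# Prompt once before removing more than three files or recursing:
alias rm='rm -I'
# Trailing space: the shell also alias-expands the word following sudo,
# so "sudo rm" picks up the rm alias above:
alias sudo='sudo '
```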
answered Feb 12 at 14:51 by dessert · edited Feb 13 at 11:11
Confirmation is already there; the problem is the -f in the command, that is, --force. When a user forces an operation, it is assumed they know what they are doing (obviously, a mistake can always happen).
An example:
rm -r ./*
rm: remove write-protected regular file './mozilla_mvaschetto0/WEBMASTER-04.DOC'? N
rm: cannot remove './mozilla_mvaschetto0': Directory not empty
rm: descend into write-protected directory './pulse-PKdhtXMmr18n'? n
rm: descend into write-protected directory './systemd-private-890f5b31987b4910a579d1c49930a591-bolt.service-rZWMCb'? n
rm: descend into write-protected directory './systemd-private- 890f5b31987b4910a579d1c49930a591-colord.service-4ZBnUf'? n
rm: descend into write-protected directory './systemd-private-890f5b31987b4910a579d1c49930a591-fwupd.service-vAxdbk'? n
rm: descend into write-protected directory './systemd-private-890f5b31987b4910a579d1c49930a591-minissdpd.service-9G8GrR'?
rm: descend into write-protected directory './systemd-private-890f5b31987b4910a579d1c49930a591-ModemManager.service-s43zUX'? nn
rm: descend into write-protected directory './systemd-private-890f5b31987b4910a579d1c49930a591-rtkit-daemon.service-cfMePv'? n
rm: descend into write-protected directory './systemd-private-890f5b31987b4910a579d1c49930a591-systemd-timesyncd.service-oXT4pr'? n
rm: descend into write-protected directory './systemd-private-890f5b31987b4910a579d1c49930a591-upower.service-L0k9rT'? n
It is different with the --force option: I will not get any confirmation and the files are deleted.
The problem is knowing the command and its parameters, so dig deeper into the man page of a command (even when the command comes from a tutorial). For example: the first time I saw the command tar xzf some.tar.gz I asked myself, "what does xzf mean?"
Then I read the tar manpage and found out.
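The difference between forcing and asking can be tried safely with -i, which always prompts (a sandbox sketch using a hypothetical /tmp/prompt-demo file):

```shell
mkdir -p /tmp/prompt-demo
touch /tmp/prompt-demo/a

# With -i, rm asks before every file; feeding it "n" keeps the file:
printf 'n\n' | rm -i /tmp/prompt-demo/a
ls /tmp/prompt-demo        # a  (still there)

# With -f, no question is asked and the file is gone:
rm -f /tmp/prompt-demo/a
rmdir /tmp/prompt-demo
```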
I don’t think that’s relevant here. At the point where rm first asks for a write-protected or whatever file, it may already have deleted a whole bunch of important files.
– Jonas Schäfer
Feb 12 at 17:30
1
So personally, I have always thought -f was required to delete folders. I even opened a prompt to confirm and complain but learned that just -r is needed. I suppose rm -rf has become the norm since it is so useful in a script (you don't want the script to fail just because you're trying to delete things that don't exist) so you see it often, but I suppose we need to be vigilant about just using rm -r as our "default" when in a shell (understandably there should be no "default" assumptions you don't understand, especially with sudo, but people will be people and at least this is safer).
– Captain Man
Feb 12 at 18:42
2
Rmdir is the safest way to delete a folder
– AtomiX84
Feb 12 at 18:43
rm does not ask for confirmation by default; it only asks for write-protected directories and files. If you ran that command on your machine you probably deleted lots of your own files. If you need rm to ask for confirmation, you need to pass the -i parameter. For example: rm -ir ./*
– Dan
Feb 13 at 14:16
answered Feb 12 at 14:56 by AtomiX84 · edited Feb 13 at 6:08 by SusanW
Running without backups means you have to be super careful to never make any mistakes. And hope your hardware never fails. (Even RAID can't save you from filesystem corruption caused by faulty RAM.) So that's your first problem. (Which I assume you've already realized and will be doing backups in the future.)
But there are things you can do to reduce the likelihood of mistakes like this:
- alias rm='rm -I' to prompt if deleting more than 3 things.
- alias mv and cp to mv -i and cp -i (many normal use-cases for these don't involve overwriting a destination file).
- alias sudo='sudo ' to do alias expansion on the first argument to sudo.
I find rm -I is a lot more useful than rm -i. It usually doesn't prompt during normal use, so getting prompted when you didn't expect it is a lot more noticeable / a better warning. With -i (before I discovered -I), I got used to typing \rm to disable alias expansion, after being sure I'd typed the command correctly.
You don't want to get in the habit of relying on rm -i
or -I
aliases to save you. It's your safety line that you hope never gets used. If I actually want to interactively select which matches to delete, or I'm not sure if my glob might match some extra files, I manually type rm -i .../*whatever*
. (Also a good habit in case you're ever in an environment without your aliases).
Defend against fat-fingering Enter by typing ls -d /*foo* first, then up-arrow and change that to rm -r after you've finished typing. So the command line never contains rm -rf ~/ or similar dangerous commands at any point. You only "arm" it by changing ls to rm (control-a to go to the start of the line, alt-d to delete the word) and adding the -r or the -f after you've finished typing the ~/some/sub/dir/ part of the command.
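As a concrete illustration of the preview-then-arm habit (the directory names here are made up for the example):

```shell
set -eu
tmp=$(mktemp -d)
mkdir -p "$tmp/proj_old" "$tmp/proj_keep"

ls -d "$tmp"/*old*   # 1. preview exactly what the glob matches
rm -r "$tmp"/*old*   # 2. recall the line, swap "ls -d" for "rm -r", then run it

ls "$tmp"            # proj_keep survives
rm -r "$tmp"
```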
Depending on what you're deleting, actually run the ls -d
first, or not if that wouldn't add anything to what you see with tab-completion. You might start with rm
(without -r
or -rf
) so it's just control-a / control-right (or alt+f) / space / -r
.
(Get used to bash/readline's powerful editing keybindings for moving around quickly, like control-arrows or alt+f/b to move by words, and killing whole words with alt+backspace or alt+d, or control-w. And control-u to kill to the beginning of the line. And control-/ to undo an edit if you go one step too far.
And of course up-arrow history that you can search with control-r / control-s.)
Avoid -rf
unless you actually need it to silence prompts about removing read-only files.
Take extra time to think before pressing return on a sudo
command. Especially if you don't have full backups, or now would be a bad time to have to restore from them.
answered Feb 13 at 3:58
Peter Cordes
1,019814
Well the short answer is to not run such a command.
The long story is that it's part of the customization. Essentially there are two factors at play here. One is the fact that you are free to modify all files.
The second is that the rm command offers the helpful syntactic sugar to delete all files under a folder.
Effectively this could be restated as a single, simple tenet of Unix machines: everything is a file. To make matters better, there are access controls, but these are overridden by your use of
sudo
I guess you could add an alias or a function to ensure that this can never be run.
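One sketch of such a guard, as a shell function (the protected-path list here is illustrative, not exhaustive; note that GNU rm's built-in --preserve-root already blocks rm -rf / itself, but not rm -rf /*, because the shell expands the glob before rm ever runs):

```shell
# Hypothetical wrapper: refuse to remove a few critical top-level paths.
rm() {
    local arg
    for arg in "$@"; do
        case "$arg" in
            /|/bin|/boot|/etc|/home|/usr|/var)
                echo "rm: refusing to remove '$arg'" >&2
                return 1 ;;
        esac
    done
    command rm "$@"   # fall through to the real rm for everything else
}
```

With rm -rf /* the glob expands to /bin /boot /etc ..., so the first protected argument aborts the whole call before anything is deleted.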
edited Feb 13 at 15:14
answered Feb 12 at 15:21
HaoZeke
14113
If your system file space usage isn't immense (and these days 'immense' means 'hundreds of gigabytes or more') create some virtual machine instances, and always work inside of one. Recovery would just entail using a backup instance.
Or you could create a chroot jail, and work inside it. You'd still need some recovery if it got trashed, but that would be easier with a running (enclosing) system to work from.
This is probably the most effective answer, since it can protect against any damage, even third party scripts. You'd only have to worry about actual malware.
– PyRulez
Feb 12 at 19:32
Thought of another angle. It's worth asking why you need to do recursive deletions in the first place. Maybe what's really needed are some scripts to remove a project, etc.
– Loren Rosen
Feb 12 at 19:39
"It's worth asking why you need to do recursive deletions in the first place." Well, just because there's no built-in command doesn't mean you still can't make a mistake. Third party scripts might delete files one by one from some directory. And there are other ways to bork the system that only touch one file. However, replacing rm with safe-rm helps, at least.
– PyRulez
Feb 12 at 19:42
My notion with the script was that it would have a built-in notion of a 'project' or similar. Perhaps you'd have an empty file at the project root called .project_root, or, if the file system supports it, an attribute on the directory itself. Then, the script would go up the file tree looking for the project root, and complain if the current directory wasn't in a project. Or, if the projects all live in the same place, the script could require you to name a project. You could still delete the wrong project, but not destroy the entire system.
– Loren Rosen
Feb 12 at 20:00
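A sketch of the marker-file idea from the comment above (the .project_root convention is the commenter's hypothetical, not an existing standard):

```shell
# Walk up from the current directory looking for a .project_root marker;
# a deletion script would refuse to act unless this succeeds.
find_project_root() {
    local d=$PWD
    while [ "$d" != "/" ]; do
        if [ -e "$d/.project_root" ]; then
            printf '%s\n' "$d"
            return 0
        fi
        d=$(dirname "$d")
    done
    echo "not inside a project" >&2
    return 1
}
```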
... also, a variant of chroot would be to use something like Docker (which I think actually uses chroot under the covers). For files you just need to read, mount a read-only file-system.
– Loren Rosen
Feb 12 at 20:34
answered Feb 12 at 16:03
Loren Rosen
623
rm is a very old Unix command and was likely not designed with user-friendliness in mind. It tries to do precisely what it's asked to, when it has the permissions. A pitfall for many new users is that they frequently see code with sudo and don't think much about using it. Commands that directly modify files, like rm, dd, chroot, etc., require extreme care in use.
Nowadays I like to use trash
(without sudo) from trash-cli. It functions like the Recycle Bin from Windows, in that you can easily retrieve accidentally deleted files. Ubuntu already has a Trash folder and move-to-trash functionality built into Files.
Even then you may make mistakes so make sure to make backups of your entire filesystem.
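To illustrate the move-to-trash idea (this is a minimal sketch, NOT how trash-cli itself is implemented; the TRASH_DIR default and the timestamp naming scheme are assumptions made for the example):

```shell
# Minimal move-to-trash sketch: files are moved aside, not destroyed,
# so an accidental "deletion" can be undone by moving them back.
TRASH_DIR="${TRASH_DIR:-$HOME/.local/share/Trash/files}"

trash() {
    mkdir -p "$TRASH_DIR"
    local f
    for f in "$@"; do
        # timestamp suffix avoids clobbering earlier deletions of the same name
        mv -- "$f" "$TRASH_DIR/$(basename -- "$f").$(date +%s%N)"
    done
}
```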
answered Feb 13 at 6:41
qwr
583519
See serverfault.com/q/337082
– jdv
Feb 12 at 19:53
"Why is it even possible to do that?" Why shouldn't it be possible? There are perfectly good reasons to delete the contents of a directory hierarchy, and there are plenty of subsets of / that would be nearly as bad to delete (/etc/, for example). It simply is not the job of rm to decide which directories can or can't easily be deleted.
– chepner
Feb 12 at 20:30
Title says "Why is it possible to delete the system?" whereas the question itself asks "My question is: Why not implement a confirmation when the user tries to delete their filesystem?". This makes the question unclear. Which one is your actual question, so we at least know what to answer? Please edit your post to clarify.
– Sergiy Kolodyazhnyy
Feb 13 at 6:57
What's actually the question here? I can see three: (1) Why is it possible? (2) How to prevent doing it?, and (3) Why not implement a confirmation? -- They are not the same question, the first asks for reasoning, the second for tools. (The third is related to the second, but still not really the same. A confirmation isn't the only way to prevent something.)
– ilkkachu
Feb 13 at 10:16
If you're not asking for clarification from the question's author, then please don't comment at all. I see a lot of self-congratulatory comments here, explaining how this is the OP's fault for not knowing what the flags mean, or for not having a backup or whatever. I am very happy to know that so many of our users are wise enough to have backups and not run commands they don't understand. That's absolutely great for them, but fundamentally unhelpful to the OP who, presumably, has also learnt this lesson by now. So let's stop basking in our own brilliance and just answer the question.
– terdon♦
Feb 13 at 14:23