Where do files go when the rm command is issued?
Recently I accidentally ran rm on a set of files, and it got me thinking: where exactly do these files end up?

That is to say, when working with a GUI, deleted files go to the Trash. What's the equivalent for rm, and is there a way of undoing an rm command?

command-line rm trash

asked Apr 8 '11 at 2:32 by boehj; edited Apr 8 '11 at 19:23 by Gilles
Here's a possible duplicate: "undo on linux". But I'm not really sure it is, as "where do files go" is quite different from "is there a way to undo". – xenoterracide, Apr 8 '11 at 12:47
6 Answers
Accepted answer (score 105):
Nowhere; gone; vanished. Well, more specifically, the file gets unlinked. The data is still sitting there on disk, but the link to it is removed. It used to be possible to retrieve the data, but nowadays the metadata is cleared and nothing's easily recoverable.

There is no Trash can for rm, nor should there be. If you need a Trash can, you should use a higher-level interface. There is a command-line utility, trash-cli, on Ubuntu, but most of the time GUI file managers like Nautilus or Dolphin are used to provide a standard Trash can. The Trash can itself is standardized: files trashed in Dolphin will be visible in the Trash from Nautilus.

Files are usually moved to somewhere like ~/.local/share/Trash/files/ when trashed. The rm command on UNIX/Linux is comparable to del on DOS/Windows, which also deletes files rather than moving them to the Recycle Bin. Another thing to realize is that moving a file across filesystems, say from your hard disk to a USB drive, is really 1) a copy of the file data followed by 2) unlinking the original file. You wouldn't want your Trash to fill up with these extra copies.

answered Apr 8 '11 at 3:54 by penguin359; edited Feb 4 at 20:24 by Jesse_b
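If you want Trash semantics from the shell, here is a minimal sketch using the trash-cli package mentioned above (assuming it is installed, e.g. via apt install trash-cli; the file name is a placeholder):

    trash-put notes.txt    # move the file to the freedesktop.org Trash instead of unlinking it
    trash-list             # show what is currently in the Trash
    trash-restore          # interactively restore a trashed file
    trash-empty            # permanently delete everything in the Trash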
Thanks very much for the clear explanation. I don't mind using the CLI, I just need to be a little more careful when using wildcards. :) – boehj, Apr 8 '11 at 4:42

I'd be cautious about using something like libtrash to change the behavior of rm. Lots of scripts use rm to clean up files, and you don't want those showing up in Trash. I recommend using a dedicated command like trash from the trash-cli package. @pedro I should add that I once created a file named * in my home directory. I had accidentally quoted * when I shouldn't have, creating it, so I decided to remove it with rm *, naturally. When I realized what I had done I quickly killed the command, but it had already deleted a number of files in my home directory. – penguin359, Apr 8 '11 at 5:13

It is very rare to have something like a trash can in the shell, so if you add one on your local machine and get used to it, or even come to depend on it in your daily work, you can get in trouble when using any of the other 99% of Unix systems without one... – Johan, Apr 8 '11 at 7:04

I think the take-home message here is that I need to stop CLI'ing in the wee hours and pay better attention. – boehj, Apr 8 '11 at 8:06

Along the same lines as what @Johan said, Red Hat used to (still does?) set aliases for commands like cp and mv, to cp -i and mv -i, when running as root. This changes the default behavior so those commands always ask before overwriting existing files. Some sysadmins recommend specifically removing those aliases so you don't come to expect that behavior, which may be lethal on another system that follows the default behavior. – penguin359, Apr 8 '11 at 8:13
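For reference, the Red Hat convention that comment describes amounts to a few lines in root's shell startup file; a sketch (illustrative, not the exact distro file):

    # traditionally set in root's ~/.bashrc on Red Hat systems
    alias rm='rm -i'    # prompt before every removal
    alias cp='cp -i'    # prompt before overwriting an existing file
    alias mv='mv -i'    # prompt before overwriting an existing file
    # bypass an alias for a single invocation: \rm file  or  command rm file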
Answer (score 10):
For ext3/ext4, you can try recovering files using tools like extundelete or ext3grep, or even go messing with the low-level structures manually (not for the faint of heart); for many filesystems, you can try to search for not-yet-overwritten blocks by certain patterns (e.g. magicrescue can search for JPEG headers, amongst other things). Note that these tools use heuristics to recover files from the metadata left behind, so full recovery is not guaranteed; it's more of a last-chance bet (they require that some traces of the files remain in the journal, and that the blocks weren't overwritten yet).

So, for all intents and purposes, files removed with rm are gone. You could try such necromancy as these tools offer, but don't depend on it: they are the tools to try when everything else fails. Better dig out your latest backups (you have been making backups, right? Oh well, live and learn...).

answered Apr 8 '11 at 18:21 by Piskvor; edited Apr 8 '11 at 20:52
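As an illustration, a typical extundelete session looks roughly like this (a sketch: the device /dev/sdb1 and the file path are placeholders, and the filesystem must not be mounted read-write while you work):

    umount /dev/sdb1                  # or: mount -o remount,ro /mnt/point
    extundelete /dev/sdb1 --restore-file home/user/thesis.txt   # path relative to the fs root
    extundelete /dev/sdb1 --restore-all                         # last resort: recover whatever it can
    # recovered files appear under ./RECOVERED_FILES/ in the current directory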
For high-value text data, you can always use any robust general-purpose tool (even emacs or perl) to look at the "raw device" for the disk that contained the removed file, and search for known strings. I've recovered Word documents for people this way; they lose the mark-up, but you can recover most of the text. Obviously this is disaster recovery, not "Undo". – alexis, Dec 7 '13 at 14:05

A proper 'manually' link: web.archive.org/web/20131221183925/http://… – sjas, Jan 29 '15 at 22:56
Answer (score 7):
Regarding undoing the effects of rm:

Given that most filesystems only remove the reference to the data and mark the blocks as free, you could try to locate your data by reading directly from the device. With a bit of luck, the blocks containing your file(s) haven't been claimed for something else yet.

This assumes you have something fairly unique to look for and that you have root on the system. I'm guessing that piecing together anything spanning more than one filesystem block (probably 4k) might end up quite laborious if the filesystem didn't manage to put the file(s) in contiguous blocks.

I have successfully recovered the contents of a couple of plain-text files by running strings on the device the filesystem was on, and using grep to look for something from those files with a large context (-C). (And shortly after that incident, the company decided to spend some resources on implementing backups.)

answered Apr 8 '11 at 17:32 by Kjetil Jorgensen; edited Dec 7 '13 at 13:06 by erch
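A sketch of that approach (run as root; /dev/sdb1 and the search string are placeholders):

    # scan the raw device for printable text, keeping 10 lines of context
    # around anything that looks like it came from the lost file
    strings /dev/sdb1 | grep -C 10 'some fairly unique phrase' > /mnt/other-disk/recovered.txt
    # write the output to a DIFFERENT filesystem, or you risk overwriting
    # the very blocks you are trying to recover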
It's slightly more complicated, e.g. by ext3 zeroing out the block pointers in the inode, but yes, looking for the files directly might work, if they're small enough or allocated in a contiguous block. This is sometimes called file carving, and there are tools like magicrescue that try to find images or sounds by their distinctive patterns. – Piskvor, Apr 8 '11 at 20:48
Answer (score 6):
Whenever you delete a file using the rm command, the file's data is not immediately erased. In other words, the blocks in the filesystem containing the data are still there.

What happens when you run the rm command is that the system marks the inode belonging to that file as unused, and the data blocks of that file as unused too (but not wiped out). However, ext3 zeros most of the fields in the inode when a file is deleted.

This marking-as-unused is done for speed; otherwise deletion would take more time. That's why you may have noticed that deleting even large files is fast (and why you can recover the data if those data blocks haven't been overwritten yet).

More Info: Inode Structure, How file deletion works

answered Dec 4 '12 at 3:46 by sarath; edited Dec 4 '12 at 8:38 by jasonwryan
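A quick way to look at the inode bookkeeping the answer describes, using standard coreutils (the file name is arbitrary):

    ls -i somefile.txt    # print the inode number next to the file name
    stat somefile.txt     # inode metadata: link count, block count, timestamps
    df -i .               # inode usage for the filesystem you are on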
...unless the file is explicitly marked with the chattr +s ("shred") attribute. It tells the filesystem to specifically overwrite this file with zeroes on deletion. Only some filesystems support that attribute. – telcoM, Feb 4 at 21:29
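To illustrate that attribute (a sketch; note that while chattr accepts the +s flag, mainline ext2/3/4 implementations have historically not honored it, so verify on your filesystem before relying on it):

    chattr +s secret.txt    # request zero-on-delete for this file
    lsattr secret.txt       # the 's' flag shows up in the attribute list
    rm secret.txt           # on a filesystem that honors 's', the blocks are zeroed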
Answer (score 3):
In Unix-style filesystems (including on Linux), files are not really "at" any particular place. Instead, the system uses hardlinks to point into pieces of what amounts to a big blob of data. So when you create a file, you also create its first hardlink: the one which actually resides at the place where you "saved" the file. If you make more hardlinks, then as far as the system knows, the file exists in several places at once.

When you "delete" a file, normally you're actually only deleting the hardlink that existed at the place you specified. This is why the system call that deletes files is called unlink(). The system won't actually delete the file until there are no hardlinks left to it. But once that last hardlink is destroyed, so is the data.

So, where do files you delete go? If there are still hardlinks, the files are wherever the hardlinks you didn't delete are. If there are no hardlinks left, the files are gone.

answered Sep 17 '13 at 16:55 by The Spooniest
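A minimal shell demonstration of that link counting (file names are arbitrary):

    echo hello > original.txt
    ln original.txt other.txt        # second hardlink to the same inode
    ls -li original.txt other.txt    # same inode number, link count 2 on both
    rm original.txt                  # unlink one name; the data survives
    cat other.txt                    # still prints: hello
    rm other.txt                     # last link removed; the data is now unreferenced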
Answer (score 0):
Also look into ~/.snapshot if the file was recently removed.

answered Mar 8 '12 at 2:11 by Andy
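That directory exists only on filesystems that expose snapshots this way (NetApp filers, for example). A hedged sketch, with the snapshot name purely illustrative:

    ls ~/.snapshot/                                # list available snapshots, if any
    cp ~/.snapshot/hourly.0/lost.txt ~/lost.txt    # copy the file back out of a snapshot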
This will only work if you have some magic filesystem that provides that feature (like a NetApp) or if you're using a special version of rm. – mattdm, Mar 8 '12 at 3:18