Rename folder with odd characters

I have a folder on my Mac called "␀␀␀␀HFS+ Private Data". I'm trying to delete it, but its name contains odd characters that are choking unlink, rm and mv, making it difficult to remove the folder and its contents. I even wrote a few lines of code to call unlink() directly, in case the unlink/rm/mv binaries were doing something extra on top. But no: unlink() can't handle this character either.



I used echo and od to figure out what character this is:



************@Trinity:~/Desktop/test$ echo -e "␀" | od -t oC -An
342 220 200 012
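A note on decoding those bytes: octal 342 220 200 is hex e2 90 80, which is a single three-byte UTF-8 sequence encoding U+2400 (SYMBOL FOR NULL), not three separate Latin-1 characters. A minimal sketch, assuming a POSIX printf and od:

```shell
# 342 220 200 (octal) == e2 90 80 (hex): one UTF-8 sequence for U+2400
# SYMBOL FOR NULL. The trailing 012 above is just echo's newline.
printf '\342\220\200' | od -An -t x1 | tr -d ' \n'   # prints e29080
```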


I looked up 342 here: http://ascii-code.com - and found that it's part of the Latin-1 set. I tried iconv to convert it to UTF-8:



************@Trinity:~/Desktop/test$ iconv -f latin1 -t utf-8 "␀␀␀␀HFS+ Private Data"
iconv: ␀␀␀␀HFS+ Private Data: I/O error


So how do I delete this folder? Can I pass hex/oct codes to rm or mv or something? I've tried everything I can think of, including rm *, invoking sudo, etc. The problem is that unlink chokes on that character, so I need to change that character somehow. I was also thinking about installing Debian in a VM and giving it access to this folder so that I could try from there, in case this is an issue with the tools I have in my OS X environment.
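For what it's worth, one route that avoids naming the entry at all is deleting by inode number. This is only a sketch under the assumption that find's -inum and -maxdepth work on the volume; on the affected system the underlying unlink() may still return Invalid argument:

```shell
# Recreate a name with the problem bytes in a scratch directory, read
# its inode with ls -i, and remove it via find -inum, so the odd bytes
# never have to be typed or quoted on the command line.
cd "$(mktemp -d)"
mkdir "$(printf '\342\220\200testTest')"
ino=$(ls -i | awk '{print $1; exit}')          # inode of the only entry
find . -mindepth 1 -maxdepth 1 -inum "$ino" -exec rm -rf {} +
```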



EDIT:
I tried this:



************@Trinity:~/Desktop/test$ echo -e "␀␀␀HFS+ Private Data" | od -t oC -An
342 220 200 342 220 200 342 220 200 110 106 123 053 040 120 162
151 166 141 164 145 040 104 141 164 141 012

************@Trinity:~/Desktop/test$ echo "342220200342220200342220200110106123053040120162151166141164145040104141164141012" | xargs rm

rm: 342220200342220200342220200110106123053040120162151166141164145040104141164141012: No such file or directory

************@Trinity:~/Desktop/test$ echo "342"
342
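The reason nothing matched: echo passes the digit string through literally, so rm then looks for a file literally named by those digits. printf is the tool that expands \NNN octal escapes into raw bytes. A sketch using three ␀ groups, matching the echo above (this round-trips on an unaffected system):

```shell
# Build the name from octal escapes with printf, then pass it quoted
# (the quotes preserve the spaces in "HFS+ Private Data").
cd "$(mktemp -d)"
name=$(printf '\342\220\200\342\220\200\342\220\200HFS+ Private Data')
mkdir "$name" && rm -r -- "$name"
```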


EDIT2: showing the unlink() error



************@Trinity:~/Desktop/test$ unlink test3.txt
************@Trinity:~/Desktop/test$ unlink "␀␀␀␀HFS+ Private Data/1.txt"
unlink: ␀␀␀␀HFS+ Private Data/1.txt: Invalid argument
************@Trinity:~/Desktop/test$ cd "␀␀␀␀HFS+ Private Data/"
************@Trinity:~/Desktop/test/␀␀␀␀HFS+ Private Data$ unlink 1.txt
unlink: 1.txt: Invalid argument


EDIT3: showing that it's not an HFS+/filesystem issue, but rather a filename issue



************@Trinity:~/Desktop/test$ mkdir "␀␀␀␀testTest"
************@Trinity:~/Desktop/test$ rm -r "␀␀␀␀testTest"
rm: ␀␀␀␀testTest: Invalid argument


EDIT4: this might be progress... I'm going to mess with the locale next.



************@Trinity:~/Desktop/test$ ls | grep -i *test* | xxd
0000000: e290 80e2 9080 e290 80e2 9080 7465 7374  ............test
0000010: 5465 7374 0a                             Test.

************@Trinity:~/Desktop/test$ rm -r $'\xe2\x90\x80\xe2\x90\x80\xe2\x90\x80\xe2\x90\x80\x74\x65\x73\x74\x54\x65\x73\x74\x0a'
rm: ␀␀␀␀testTest
: No such file or directory

Follow-up to this: nope, false hope. I dropped the \x0a on the end and it 'worked'... kind of.

************@Trinity:~/Desktop/test$ rm -r $'\xe2\x90\x80\xe2\x90\x80\xe2\x90\x80\xe2\x90\x80\x74\x65\x73\x74\x54\x65\x73\x74'
rm: ␀␀␀␀testTest: Invalid argument
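Two details worth separating out here: the $'...' form only expands escapes that actually carry a backslash (e.g. \xe2), and the trailing 0a in the xxd dump is the newline ls prints after the name, not part of the name itself. On an unaffected system (e.g. Linux) the corrected command removes the directory cleanly; the Invalid argument above appears specific to the asker's OS X install:

```shell
# $'...' (bash/zsh/ksh93) turns \xNN escapes into raw bytes; \x0a must
# be left off because it was only ls's trailing newline.
cd "$(mktemp -d)"
mkdir $'\xe2\x90\x80\xe2\x90\x80\xe2\x90\x80\xe2\x90\x80testTest'
rm -r $'\xe2\x90\x80\xe2\x90\x80\xe2\x90\x80\xe2\x90\x80testTest'
```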









  • The odd characters and the "I/O error" make it sound a lot like filesystem corruption. Have you run a disk check recently?

    – David King
    Jan 7 '16 at 19:17











  • @DavidKing I know, but it's not. This is a re-creation of an error on my client's machine, and both of them are clean in terms of filesystem corruption. The I/O error shows up in iconv but in all the other utilities where unlink() is involved it's a different error.

    – Harv
    Jan 7 '16 at 19:18











  • What is the other error?

    – David King
    Jan 7 '16 at 19:18











  • @DavidKing see edit2. The code I whipped up (a few lines of C++ basically just calling unlink()), had the same error. Invalid argument.

    – Harv
    Jan 7 '16 at 19:22











  • Thanks @don_crissti. How did you find that question? I've favorited it so that I can link back here if I do find a solution.

    – Harv
    Jan 7 '16 at 21:47















osx rm character-encoding mv special-characters






edited Feb 28 at 16:49







Harv

















asked Jan 7 '16 at 18:31









6 Answers














Have you tried simply renaming the folder to something else then deleting it?



A method that has worked for me: live-boot into a Linux environment via CD/USB, dismount the drive holding the oddly named directory/file, then delete it. This works most of the time for me, though not always.






  • I wish... while I haven't booted from a Linux LiveCD (I'd have to decrypt my entire partition for one), I did install Debian in a VM and give it full read/write access. Same problem. The rm binary itself seems unable to deal with this character. I also tried Windows 7 Pro. I've lined up a few other distributions to try, in the hopes they might have a different rm utility: PCLinuxOS, OpenSUSE, Fedora and Slackware. I'm getting desperate...

    – Harv
    Jan 9 '16 at 17:02












  • A VM also use the underlying OS filesystem. Try booting to a real linux OS. Testing in a Debian system, the directory could be created and erased at will without any problem.

    – user79743
    Jan 10 '16 at 0:51











  • Thanks, Edgar. I finally got around to this today. To add some details and clarification: I had to decrypt my volume (FileVault2 was enabled), get Ubuntu onto a bootable USB key (I just used Parallels/Windows 7 + Rufus + the ISO), installed rEFInd, rebooted into Ubuntu, dismounted the OS X volume because it mounts automatically as RO, re-mounted it as RW and deleted the directory. Back into OS X, re-enabled FV2, and away I go.

    – Harv
    Mar 13 '16 at 6:07

































According to https://apple.stackexchange.com/questions/31734/hfs-private-directory-data that folder is used for filesystem inner workings. You probably can't delete it and, even if you could, it would most likely brick your filesystem.






  • Understood, but actually that folder is from an old backup, not currently actually in use. I'm trying to remove old data. Also the re-created version on my machine isn't special in any way, I just use mkdir and pasted the filename, so IMO it's not the case of a special filesystem node or anything like that, it's a filename issue.

    – Harv
    Jan 7 '16 at 19:41












  • You keep shooting down all my good ideas :( Is it actually on an HFS+ filesystem?

    – David King
    Jan 7 '16 at 19:48











  • Sorry man. I appreciate the help. Yes, HFS+.

    – Harv
    Jan 7 '16 at 19:48











  • It's not a problem I just thought I had this solved for you a few times now. I'm guessing that HFS has protections built in to prevent deletion of that folder even if it isn't the actual folder that HFS is using.

    – David King
    Jan 7 '16 at 19:53











  • I have the same issue if I create a folder called "␀␀␀␀Test" and try to rm -r it.

    – Harv
    Jan 7 '16 at 19:55
































I know that this has already been resolved for the OP, but for anyone stumbling upon this question: it seems to be a problem specific to 10.11 El Capitan. I was able to delete files with this character on OS X 10.4 Tiger and OS X 10.10 Yosemite, so it very likely works on the other versions as well.




















    Just an FYI:



    The "␀␀␀␀HFS+ Private Data" folder is an HFS+ special folder that is used to hold the actual file-data and meta-data for hard-linked files.



    So multiple directory entries point to a 'file' in this hidden directory, which in turn has the actual file data and attributes attached.
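A quick illustration of that mechanism (a sketch assuming GNU coreutils stat; on OS X the equivalent query is stat -f %l):

```shell
# Two names, one inode: after ln, both directory entries point at the
# same data, and the inode's link count rises to 2.
cd "$(mktemp -d)"
echo data > original
ln original alias        # hard link: a second directory entry
stat -c %h original      # link count is now 2
```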



    It has some special attributes, like the four leading NUL characters in the name, as well as a few other bits in its meta-data, to make it very unlikely it is ever 'seen' by the end-user in normal use.



    When found in some backup (so not a live copy) as a visible folder, you can safely remove it, if the system allows you to do so (perhaps after low-level renaming with a hex editor or other tool).



    There is a similar hidden file called ".HFS+ Private Directory Data" that is used to store hard-link info on folders.




















      It looks like there's a (retired?) spec here:




      Indirect node files exist in a special directory called the metadata directory. This directory exists in the volume's root directory. The name of the metadata directory is four null characters followed by the string HFS+ Private Data. The directory's creation date is set to the creation date of the volume's root directory. The kIsInvisible and kNameLocked bits are set in the directory's Finder information. The icon location in the Finder info is set to the point (22460, 22460). These Finder info settings are not mandatory, but they tend to reduce accidental changes to the metadata directory. An implementation that automatically follows hard links should make the metadata directory inaccessible from its normal file system interface.




      Note:




      The case-insensitive Unicode string comparison used by HFS Plus and case-insensitive HFSX sorts null characters after all other characters, so the metadata directory will typically be the last item in the root directory. On case-sensitive HFSX volumes, null characters sort before other characters, so the metadata directory will typically be the first item in the root directory.





      POSIX semantics allow an open file to be unlinked (deleted). These open but unlinked files are stored on HFS Plus volumes much like a hard link. When the open file is deleted, it is renamed and moved into the metadata directory. The new name is the string "temp" followed by the catalog node ID converted to decimal text. When the file is eventually closed, this temporary file may be removed. All such temporary files may be removed when repairing an unmounted HFS Plus volume.




      Repairing the Metadata Directory



      When repairing an HFS Plus volume with hard links or a metadata directory, there are several conditions that might need to be repaired:



      • Opened but deleted files (which are now orphaned).


      • Orphaned indirect node files (no hard links refer to them).


      • Broken hard link (hard link exists, but indirect node file does not).


      • Incorrect link count.


      • Link reference was 0.


      Opened but deleted files are files whose names start with "temp", and are in the metadata directory. If the volume is not in use (not mounted, and not being used by any other utility), then these files can be deleted. Volumes with a journal, even one with no active transactions, may have opened but undeleted files that need to be deleted.



      Detecting an orphaned indirect node file, broken hard link, or incorrect link count requires finding all hard link files in the catalog, and comparing the number of found hard links for each link reference with the link count of the corresponding indirect node file.



      A hard link with a link reference equal to 0 is invalid. Such a hard link may be the result of a hard link being copied or restored by an implementation or utility that does not use the permissions in catalog records. It may be possible to repair the hard link by determining the proper link reference. Otherwise, the hard link should be deleted.








      • Thanks for putting in the work to find and post that. However, we're not actually dealing with the HFS+ Private Data folder on a live system; this is a remnant of an old system and I'm trying to delete old data that's not in use. But also, I was able to insert this problematic character into another folder name and found it equally undeletable - so it's a filesystem/unlink() bug, as opposed to a limitation of this being a special folder. Any folder or file with this special character isn't removable on my system.

        – Harv
        Jan 10 '16 at 2:48











      • @Harv - I don't think it can't be both. if you look through that link you'll notice a lot of stuff about special reserved fields. I'm assuming that the filesystem bug is interpreting any file which fits its expectation of its namespace to actually be in its namespace. So I would think the solution would be to treat the file as it does and repair it likewise - that's the first way I would go, anyway. And the docs said that such problems might only be addressed offline and unmounted.

        – mikeserv
        Jan 10 '16 at 2:51












      • oh interesting. So you think the special character (even on my system as opposed to where the bug first showed up - on another system), indicates to the fs somehow that this is a special folder. Is that accurate?

        – Harv
        Jan 10 '16 at 2:55











      • @Harv - well, it talks about how unlinking can't be done online, and so it sounds like your issue. And the filenames are pretty similar...? I dunno what you mean by "even on my system as opposed to...", but it does also mention failed backups in the same paragraph. It is an HFS filesystem, yes?

        – mikeserv
        Jan 10 '16 at 2:57












      • HFS+. I replicated the problem on my own system (after discovering it on a client's machine), then found that the existence of those weird characters in any folder name causes the same problem. If I insert that character and call the file or folder "test" it exhibits the same behaviour. That lead me to think it's not a special file or folder, but a problem in unlink() that it can't process that character.

        – Harv
        Jan 10 '16 at 3:00
































      Spent some time with Apple Support on this and they told me the only answer was to do a Time Machine backup of my volume, wipe the original volume, create a new user account and then selectively copy all my files over manually from the Time Machine backup. Realizing the work this would entail in restoring my wonderfully complex account with all my preferences, setups, software authorisations, scripts, handy utilities, etc, I didn't want to do that so I had another idea that worked for me.



      I worked out that I could still move the folder immediately above the undeletable files, so I corralled every null-character file that nothing would let me delete into one folder, then created a Carbon Copy Cloner clone of my entire boot HDD, excluding that folder. I then booted into the clone, reformatted my original disk, and restored the clone, sans the undeletable files.



      When someone works out how to make the various Unix-based OSes deal with these file names, we'll all be very happy, but in the meantime, CCC to the rescue for me.






      • See the accepted answer and comments here: unix.stackexchange.com/questions/254276/…

        – Harv
        Apr 3 '16 at 5:48










      Your Answer








      StackExchange.ready(function()
      var channelOptions =
      tags: "".split(" "),
      id: "106"
      ;
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function()
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled)
      StackExchange.using("snippets", function()
      createEditor();
      );

      else
      createEditor();

      );

      function createEditor()
      StackExchange.prepareEditor(
      heartbeatType: 'answer',
      autoActivateHeartbeat: false,
      convertImagesToLinks: false,
      noModals: true,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: null,
      bindNavPrevention: true,
      postfix: "",
      imageUploader:
      brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
      contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
      allowUrls: true
      ,
      onDemand: true,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      );



      );













      draft saved

      draft discarded


















      StackExchange.ready(
      function ()
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2funix.stackexchange.com%2fquestions%2f253932%2frename-folder-with-odd-characters%23new-answer', 'question_page');

      );

      Post as a guest















      Required, but never shown

























      6 Answers
      6






      active

      oldest

      votes








      6 Answers
      6






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      0














      Have you tried simply renaming the folder to something else then deleting it?



      A method that has worked for me was to live boot into a Linux environment via CD/USB, dismount the drive with the 'odd' named directory/file, THEN deleting it. This method works most of the time, not every, for me.






      share|improve this answer























      • I wish... while I haven't booted from a Linux LiveCD (I'd have to decrypt my entire partition for one), I did install Debian in a VM and give it full read/write access. Same problem. The rm binary itself seems unable to deal with this character. I also tried Windows 7 Pro. I've lined up a few other distributions to try, in the hopes they might have a different rm utility: PCLinuxOS, OpenSUSE, Fedora and Slackware. I'm getting desperate...

        – Harv
        Jan 9 '16 at 17:02












      • A VM also uses the underlying OS filesystem. Try booting into a real Linux OS. Testing on a Debian system, the directory could be created and erased at will without any problem.

        – user79743
        Jan 10 '16 at 0:51











      • Thanks, Edgar. I finally got around to this today. To add some details and clarification: I had to decrypt my volume (FileVault 2 was enabled), get Ubuntu onto a bootable USB key (I used Parallels/Windows 7 + Rufus + the ISO), install rEFInd, reboot into Ubuntu, unmount the OS X volume (it mounts automatically as read-only), re-mount it as read-write and delete the directory. Back into OS X, re-enabled FV2, and away I go.

        – Harv
        Mar 13 '16 at 6:07
















      – Edgar Naser, answered Jan 9 '16 at 11:55
Score: 4














      According to https://apple.stackexchange.com/questions/31734/hfs-private-directory-data that folder is used for filesystem inner workings. You probably can't delete it and, even if you could, it would most likely brick your filesystem.






      • Understood, but actually that folder is from an old backup, not currently in use. I'm trying to remove old data. Also, the re-created version on my machine isn't special in any way; I just used mkdir and pasted the filename, so IMO it's not a case of a special filesystem node or anything like that; it's a filename issue.

        – Harv
        Jan 7 '16 at 19:41












      • You keep shooting down all my good ideas :( Is it actually on an HFS+ filesystem?

        – David King
        Jan 7 '16 at 19:48











      • Sorry man. I appreciate the help. Yes, HFS+.

        – Harv
        Jan 7 '16 at 19:48











      • It's not a problem; I just thought I had this solved for you a few times now. I'm guessing that HFS has protections built in to prevent deletion of that folder even if it isn't the actual folder that HFS is using.

        – David King
        Jan 7 '16 at 19:53






      • I have the same issue if I create a folder called "␀␀␀␀Test" and try to rm -r it.

        – Harv
        Jan 7 '16 at 19:55
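A detail worth pinning down when reproducing this: the visible "␀" is not a literal NUL byte (which can never appear in a POSIX filename) but U+2400 SYMBOL FOR NULL, whose UTF-8 encoding is the three bytes 0xE2 0x90 0x80, exactly the octal 342 220 200 in the question's od output. Knowing the bytes lets you build or glob the name without ever typing the glyph; a sketch:

```shell
# "␀" is U+2400 (SYMBOL FOR NULL), not byte 0x00; in UTF-8 it is
# the three bytes 0xE2 0x90 0x80 (octal 342 220 200):
printf '\342\220\200' | od -An -t o1

# Build the troublesome name from its raw bytes, then remove it with a
# glob so the glyph never has to be typed:
mkdir -p "$(printf '\342\220\200\342\220\200\342\220\200\342\220\200')Test"
rm -rf ./*Test        # "*" matches the leading odd characters
```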















      – David King, answered Jan 7 '16 at 19:31 (edited Apr 13 '17 at 12:45)
Score: 1














      I know that this has already been resolved for the OP, but for anyone stumbling upon this question: this seems to be a problem specific to OS X 10.11 El Capitan. I tried and was able to delete files with this character in OS X 10.4 Tiger and OS X 10.10 Yosemite, so it very likely works on the other versions as well.






      – DisplayName, answered Feb 1 '16 at 22:21 (edited Feb 2 '16 at 14:10)

Score: 1














              Just an FYI:



              The "␀␀␀␀HFS+ Private Data" folder is an HFS+ special folder that is used to hold the actual file-data and meta-data for hard-linked files.



              So multiple directory entries point to a 'file' in this hidden directory, which in turn has the actual file data and attributes attached.



              It has some special attributes like the four leading ZERO characters in the name, as well as a few other bits in its meta-data to make it very unlikely it is ever 'seen' by the end-user in normal use.



              When found in some backup (so not a live copy) as a visible folder, you can safely remove it, if the system allows you to do so (perhaps after low-level renaming with a hex editor or other tool).



              There is a similar hidden file called ".HFS+ Private Directory Data" that is used to store hard-link info on folders.
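Since the folder exists purely to back hard links, a quick refresher on what a hard link is may help. The sketch below uses plain POSIX tools (GNU and BSD `stat` take different flags, so both are tried); on HFS+, creating the second directory entry is what populates the hidden private-data directory behind the scenes:

```shell
# Two directory entries, one inode:
tmp=$(mktemp -d) && cd "$tmp"
echo data > original
ln original alias                     # hard link, not a symlink

# Link count is now 2 (GNU stat uses -c '%h'; BSD/macOS stat uses -f '%l'):
stat -c '%h' original 2>/dev/null || stat -f '%l' original
```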






              – Jan van Wijk, answered Apr 14 '18 at 14:30
Score: 0














                      It looks like there's a (retired?) spec here:




                      Indirect node files exist in a special directory called the metadata directory. This directory exists in the volume's root directory. The name of the metadata directory is four null characters followed by the string HFS+ Private Data. The directory's creation date is set to the creation date of the volume's root directory. The kIsInvisible and kNameLocked bits are set in the directory's Finder information. The icon location in the Finder info is set to the point (22460, 22460). These Finder info settings are not mandatory, but they tend to reduce accidental changes to the metadata directory. An implementation that automatically follows hard links should make the metadata directory inaccessible from its normal file system interface.




                      Note:




                      The case-insensitive Unicode string comparison used by HFS Plus and case-insensitive HFSX sorts null characters after all other characters, so the metadata directory will typically be the last item in the root directory. On case-sensitive HFSX volumes, null characters sort before other characters, so the metadata directory will typically be the first item in the root directory.





                      POSIX semantics allow an open file to be unlinked (deleted). These open but unlinked files are stored on HFS Plus volumes much like a hard link. When the open file is deleted, it is renamed and moved into the metadata directory. The new name is the string "temp" followed by the catalog node ID converted to decimal text. When the file is eventually closed, this temporary file may be removed. All such temporary files may be removed when repairing an unmounted HFS Plus volume.




                      Repairing the Metadata Directory



                      When repairing an HFS Plus volume with hard links or a metadata directory, there are several conditions that might need to be repaired:



                      • Opened but deleted files (which are now orphaned).


                      • Orphaned indirect node files (no hard links refer to them).


                      • Broken hard link (hard link exists, but indirect node file does not).


                      • Incorrect link count.


                      • Link reference was 0.


                      Opened but deleted files are files whose names start with "temp", and are in the metadata directory. If the volume is not in use (not mounted, and not being used by any other utility), then these files can be deleted. Volumes with a journal, even one with no active transactions, may have opened but undeleted files that need to be deleted.



                      Detecting an orphaned indirect node file, broken hard link, or incorrect link count requires finding all hard link files in the catalog, and comparing the number of found hard links for each link reference with the link count of the corresponding indirect node file.



                      A hard link with a link reference equal to 0 is invalid. Such a hard link may be the result of a hard link being copied or restored by an implementation or utility that does not use the permissions in catalog records. It may be possible to repair the hard link by determining the proper link reference. Otherwise, the hard link should be deleted.
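Note that the spec says the on-disk name begins with four actual NUL (0x00) characters, a byte that can never pass through the POSIX path API. The restored backup copy's name evidently contains U+2400 instead (the question's od output shows its three-byte UTF-8 encoding), presumably substituted by whatever tool wrote the backup. A small sketch of why a real NUL cannot survive into a filename (how shells strip NULs from command substitution varies: bash warns, dash is silent):

```shell
# A real NUL byte exists happily in a pipe...
printf 'a\000b' | od -An -c

# ...but cannot reach the filesystem through a pathname: the shell's
# command substitution strips it, so the file created is just "ab".
touch "$(printf 'a\000b')" 2>/dev/null
ls a*
```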








                      • Thanks for putting in the work to find and post that. However, we're not actually dealing with the HFS+ Private Data folder on a live system; this is a remnant of an old system and I'm trying to delete old data that's not in use. But also, I was able to insert this problematic character into another folder name and it also wasn't deletable - so it's a filesystem/unlink() bug, as opposed to a limitation of this being a special folder. Any folder or file with this special character isn't removable on my system.

                        – Harv
                        Jan 10 '16 at 2:48











                      • @Harv - I don't think it can't be both. If you look through that link you'll notice a lot of stuff about special reserved fields. I'm assuming that the filesystem bug is interpreting any file which fits its expectation of its namespace to actually be in its namespace. So I would think the solution would be to treat the file as it does and repair it likewise - that's the first way I would go, anyway. And the docs said that such problems might only be addressed offline and unmounted.

                        – mikeserv
                        Jan 10 '16 at 2:51












                      • oh interesting. So you think the special character (even on my system as opposed to where the bug first showed up - on another system), indicates to the fs somehow that this is a special folder. Is that accurate?

                        – Harv
                        Jan 10 '16 at 2:55











                      • @Harv - well, it talks about how unlinking can't be done online, and so it sounds like your issue. And the filenames are pretty similar...? I don't know what you mean by "even on my system as opposed to...", but it does also mention failed backups in the same paragraph. It is an HFS filesystem, yes?

                        – mikeserv
                        Jan 10 '16 at 2:57












                      • HFS+. I replicated the problem on my own system (after discovering it on a client's machine), then found that the existence of those weird characters in any folder name causes the same problem. If I insert that character and call the file or folder "test" it exhibits the same behaviour. That led me to think it's not a special file or folder, but a problem in unlink() in that it can't process that character.

                        – Harv
                        Jan 10 '16 at 3:00















                      0














                      It looks like there's a (retired?) spec here:




                      Indirect node files exist in a special directory called the metadata directory. This directory exists in the volume's root directory. The name of the metadata directory is four null characters followed by the string HFS+ Private Data. The directory's creation date is set to the creation date of the volume's root directory. The kIsInvisible and kNameLocked bits are set in the directory's Finder information. The icon location in the Finder info is set to the point (22460, 22460). These Finder info settings are not mandatory, but they tend to reduce accidental changes to the metadata directory. An implementation that automatically follows hard links should make the metadata directory inaccessable from its normal file system interface.




                      Note:




                      The case-insensitive Unicode string comparison used by HFS Plus and case-insensitive HFSX sorts null characters after all other characters, so the metadata directory will typically be the last item in the root directory. On case-sensitive HFSX volumes, null characters sort before other characters, so the metadata directory will typically be the first item in the root directory.





                      POSIX semantics allow an open file to be unlinked (deleted). These open but unlinked files are stored on HFS Plus volumes much like a hard link. When the open file is deleted, it is renamed and moved into the metadata directory. The new name is the string "temp" followed by the catalog node ID converted to decimal text. When the file is eventually closed, this temporary file may be removed. All such temporary files may be removed when repairing an unmounted HFS Plus volume.




                      Repairing the Metadata Directory



                      When repairing an HFS Plus volume with hard links or a metadata directory, there are several conditions that might need to be repaired:



                      • Opened but deleted files (which are now orphaned).


                      • Orphaned indirect node files (no hard links refer to them).


                      • Broken hard link (hard link exists, but indirect node file does not).


                      • Incorrect link count.


                      • Link reference was 0.


                      Opened but deleted files are files whose names start with "temp", and are in the metadata directory. If the volume is not in use (not mounted, and not being used by any other utility), then these files can be deleted. Volumes with a journal, even one with no active transactions, may have opened but undeleted files that need to be deleted.



                      Detecting an orphaned indirect node file, broken hard link, or incorrect link count requires finding all hard link files in the catalog, and comparing the number of found hard links for each link reference with the link count of the corresponding indirect node file.



                      A hard link with a link reference equal to 0 is invalid. Such a hard link may be the result of a hard link being copied or restored by an implementation or utility that does not use the permissions in catalog records. It may be possible to repair the hard link by determining the proper link reference. Otherwise, the hard link should be deleted.
• Thanks for putting in the work to find and post that. However, we're not actually dealing with the HFS+ Private Data folder on a live system; this is a remnant of an old system, and I'm trying to delete old data that's not in use. But also, I was able to insert this problematic character into another folder name, and that folder also became undeletable, so it's a filesystem/unlink() bug, as opposed to a limitation of this being a special folder. Any folder or file with this special character isn't removable on my system.

                        – Harv
                        Jan 10 '16 at 2:48











• @Harv - I don't see why it can't be both. If you look through that link you'll notice a lot of stuff about special reserved fields. I'm assuming the filesystem bug is that it interprets any file which fits its expectation of its namespace as actually being in its namespace. So I would think the solution would be to treat the file as the filesystem does and repair it likewise; that's the first way I would go, anyway. And the docs say that such problems might only be addressable offline and unmounted.

                        – mikeserv
                        Jan 10 '16 at 2:51












• Oh, interesting. So you think the special character (even on my system, as opposed to where the bug first showed up, on another system) somehow indicates to the filesystem that this is a special folder. Is that accurate?

                        – Harv
                        Jan 10 '16 at 2:55











• @Harv - well, it talks about how unlinking can't be done online, so it sounds like your issue. And the filenames are pretty similar...? I don't know what you mean by "even on my system as opposed to...", but it does also mention failed backups in the same paragraph. It is an HFS filesystem, yes?

                        – mikeserv
                        Jan 10 '16 at 2:57












• HFS+. I replicated the problem on my own system (after discovering it on a client's machine), then found that the existence of those weird characters in any folder name causes the same problem. If I insert that character and call the file or folder "test", it exhibits the same behaviour. That led me to think it's not a special file or folder, but a problem in unlink(): it can't process that character.

                        – Harv
                        Jan 10 '16 at 3:00
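Since the thread hinges on unlink() choking on the name's bytes, one name-independent approach worth noting (not something suggested in the thread, and it may fail for the same underlying reason on this particular volume) is addressing the entry by inode number instead of by name:

```shell
# Print inode numbers so the problem entry can be identified
# without ever typing its name.
ls -lia .

# 1234567 is a hypothetical inode number standing in for whatever
# ls reported; find matches by inode, so the odd bytes in the name
# never have to be spelled out on the command line.
find . -maxdepth 1 -inum 1234567 -exec rm -rf {} +
```

If the kernel's unlink() itself rejects the name, this will fail too, but it rules out shell quoting and encoding as the culprit.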
answered Jan 10 '16 at 0:22 – mikeserv


Spent some time with Apple Support on this, and they told me the only answer was to do a Time Machine backup of my volume, wipe the original volume, create a new user account, and then selectively copy all my files over manually from the Time Machine backup. Realizing the work this would entail in restoring my wonderfully complex account, with all my preferences, setups, software authorisations, scripts, handy utilities, etc., I didn't want to do that, so I had another idea that worked for me.



I worked out that I could move the folder immediately above the undeletable file, so I corralled all the null-character files that nothing would let me delete into one folder, then created a Carbon Copy Cloner clone of my entire boot HDD, excluding the folder with the undeletables. I then booted from this disk, reformatted my original disk, and restored the clone, sans the undeletable files.



When someone works out how to make the various Unix-based OSes deal with these file names, we'll all be very happy, but in the meantime, CCC to the rescue for me.
                      • See the accepted answer and comments here: unix.stackexchange.com/questions/254276/…

                        – Harv
                        Apr 3 '16 at 5:48
answered Apr 2 '16 at 22:13 – user164004