Optimizing logical sector size for physical sector size 4096 HDD
With many new hard disk drives, the physical sector size is 4096 bytes. Is it possible to make the system use a logical sector size of the same size, rather than the default logical sector size of 512 bytes?
Will it speed up bulk reads and writes?
Where can it be configured?
Tags: hard-disk, performance, io
asked Jan 13 '15 at 13:14, edited Jan 13 '15 at 13:20 – Matan
See unix.stackexchange.com/a/18542/1131 for comments on alignment issues. Recent versions of mkfs.* should automatically use the optimal sector size. You can run some mkfs.* tests and inspect the result (either in the verbose output of mkfs or in a related filesystem utility). – maxschlepzig, Jan 13 '15 at 14:30
Thanks, but there are no alignment issues here. – Matan, Jan 13 '15 at 15:46
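A rough sketch of the mkfs test suggested in the first comment, assuming /dev/sdX is a scratch disk that may be reformatted (hypothetical device name); the block size mkfs chose can be read back afterwards with tune2fs:
# mkfs.ext4 -F /dev/sdX
# tune2fs -l /dev/sdX | grep 'Block size'
Block size: 4096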
4 Answers
Answer (19 votes) – maxschlepzig, answered Jan 13 '15 at 22:55, edited Dec 13 '17 at 10:01
512 bytes is not really the default sector size. It depends on your hardware.
You can display the physical/logical sector sizes your disk reports via the /sys pseudo-filesystem, for instance:
# cat /sys/block/sda/queue/physical_block_size
4096
# cat /sys/block/sda/queue/logical_block_size
512
What is the difference between those two values?
- The physical_block_size is the minimal size of a block the drive is able to write in an atomic operation.
- The logical_block_size is the smallest unit the drive can be addressed in, i.e. the smallest size it accepts for a read or write (cf. the Linux kernel documentation).
Thus, if you have a 4k drive, it makes sense for your storage stack (filesystem etc.) to use a block size equal to or greater than the physical sector size.
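As an aside, a reasonably recent util-linux lsblk can show both reported values (plus the minimum I/O size) in one go; the device name and output below are just an example for a 512e drive:
# lsblk -o NAME,LOG-SEC,PHY-SEC,MIN-IO /dev/sda
NAME LOG-SEC PHY-SEC MIN-IO
sda      512    4096   4096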
Those values are also displayed by recent versions of fdisk, for instance:
# fdisk -l /dev/sda
[..]
Sector size (logical/physical): 512 bytes / 4096 bytes
On current Linux distributions, programs that need to care about the optimal sector size, such as mkfs.xfs, pick it by default (e.g. 4096 bytes).
But you can also explicitly specify it via an option, for instance:
# mkfs.xfs -f -s size=4096 /dev/sda
Or:
# mkfs.ext4 -F -b 4096 /dev/sda
In any case, most mkfs variants also display the block size they use during execution.
For an existing filesystem the block size can be determined with a command like:
# xfs_info /mnt
[..]
meta-data=               sectsz=4096
data     =               bsize=4096
naming   =version 2      bsize=4096
log      =internal       bsize=4096
         =               sectsz=4096
realtime =none           extsz=4096
Or:
# tune2fs -l /dev/sda
Block size: 4096
Fragment size: 4096
Or:
# btrfs inspect-internal dump-super /dev/sda | grep size
csum_size 4
sys_array_size 97
sectorsize 4096
nodesize 16384
leafsize 16384
stripesize 4096
dev_item.sector_size 4096
When creating the filesystem on a partition, another thing to check is whether the partition start address is actually aligned to the physical block size. For example, look at the fdisk -l output, convert the start addresses into bytes and divide them by the physical block size: the remainder must be zero if the partitions are aligned.
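A minimal sketch of that check, reading the partition start from /sys instead of parsing fdisk output (device and partition names are examples; /sys reports the start in 512-byte units):
# start=$(cat /sys/block/sda/sda1/start)               # partition start, in 512-byte units
# pbs=$(cat /sys/block/sda/queue/physical_block_size)
# echo $(( start * 512 % pbs ))                        # prints 0 when /dev/sda1 is aligned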
Thanks, your clarification of what Linux describes as logical and physical block sizes was very helpful. I recently ran into a situation where the same 8TB hard drive showed up as 512-byte blocks in one USB enclosure and 4K blocks in another, both logical and physical. I thought something was wrong because Linux didn't see the partition map when I changed enclosures, but then I found that GPT starts on the second logical block, so it was just in the wrong place for the new alignment. I used gdisk to recreate one single big partition, and all of my data was still there (ext4 with 4K blocks). – Raptor007, Mar 13 at 8:08
Answer (1 vote) – D337z, answered Feb 24 '17 at 1:58
Yes, it is possible; however, doing so would cause the drive to fill much more quickly than it should. For files smaller than 512K, each file would take up a full 4096K (4MB) and fill the rest of the sector with zeros, because most filesystems (NTFS and the like) do not allow files to share sectors.
The best option for a filesystem would be to allow variable sector sizes; however, this increases the size of the MFT (master file table) and increases the risk of data corruption while reducing the ability to recover data easily. In other words, the boundaries would not be entirely known by the recovery software.
So, while a 4096K logical sector size is awesome for large files, for a normal everyday-use PC it's just a bunch of zeros.
Now, with that said, there is the option of storing data in the MFT itself when it is smaller than the logical sector size. This, however, means that your MFT becomes huge and the data gets written twice (there are two copies of the MFT on your HDD). You would also have to specify the maximum size of the MFT, which can cause problems when you either reach that maximum or the drive usage exceeds what is left free for the MFT to use.
All of this is based on the usage of an NTFS file system.
On the brighter side of things, NTFS does allow you to use native compression for files at the block level for any logical sector size of 4MB or less. This limitation comes from the way NTFS compression works: 4MB blocks are read and compressed regardless of the logical sector size. This, of course, cannot happen for anything larger than 4MB in sector size, because it would then cross boundaries and lose data.
So, does this clear things up for you a bit?
I think this answer would be better if it didn't use NTFS as the example filesystem, as (unless things have changed in the past few years) it isn't supported well, or at all, under Unix and friends. – Fox, Feb 24 '17 at 3:16
The K stands for kilo, which is a metric unit prefix meaning one thousand. 4 MiB is 1024 sectors, not 1 as you suggest. 4096 bytes is 4 KiB, or 0.00390625 MiB. – Aeyoun, Feb 15 at 4:21
Answer (0 votes) – psusi, answered Jan 13 '15 at 13:58
No, it is not possible, nor would it matter if it were. IO is typically done in units of at least 4096 bytes anyhow, and usually much more.
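For reference, the sizes involved can be queried directly with blockdev: the logical sector size, the physical sector size, and the block size the kernel uses for buffered I/O on the device. The device name and output below are an example for a 512e drive:
# blockdev --getss --getpbsz --getbsz /dev/sda
512
4096
4096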
Not sure how this statement relates to my question. The logical sector size may imply smaller chunks in some part of the IO pipeline, plus needless emulation in the HDD firmware. Care to clarify? – Matan, Jan 13 '15 at 15:48
@Matan, I can't make sense of your comment at all. I explained that IO is not performed 512 bytes at a time, so the fact that the disk is addressed in 512-byte sectors doesn't matter. The only time the drive has to do any emulation is if you try to do a write that is not 4k-aligned, and since IO is normally done in multiples of 4k anyhow, and modern partitioning tools make sure the partition starts on a 4k boundary, that won't happen. – psusi, Jan 13 '15 at 18:36
This "answer" is entirely wrong, both in its claims and its conclusions. Logical size is set when a volume is formatted. Mismatches between logical and physical size have a definite cost in technologies such as flash-based storage. – Chris Stratton, Jul 28 at 17:47
@ChrisStratton, no, you are thinking of the filesystem block/cluster size. Logical sector size is the size of a sector that the drive reports to the OS, and it can only read and write units of that size or a multiple of that size, and sectors are numbered in units of that size. Compare with the physical sector size, where some drives actually read and write 4k sectors internally, but pretend they are using 512 byte sectors for backward compatibility with older operating systems that can't deal with 4k logical sectors. – psusi, Aug 21 at 13:02
Incorrect. The problem exists any time software tries to operate on units smaller than the storage system's physical size. This is especially true with logical units smaller than a flash erase block size: software ends up having to copy the untouched portions to a new physical block, and that costs erase life, regardless of whether it's done by the operating system's filesystem code or by code internal to the drive. – Chris Stratton, Aug 21 at 13:06
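One rough way to observe the read-modify-write penalty discussed in this thread is to compare sub-physical-sector direct writes with physical-sector-sized ones. A sketch, assuming /dev/sdX is a scratch disk whose contents may be destroyed (hypothetical device name); note that the drive's own write cache may coalesce sequential writes and mask part of the difference:
# dd if=/dev/zero of=/dev/sdX bs=512 count=65536 oflag=direct    # 512-byte direct writes: each forces a read-modify-write of a 4K physical sector
# dd if=/dev/zero of=/dev/sdX bs=4096 count=8192 oflag=direct    # 4K direct writes: map 1:1 onto physical sectors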
Answer (0 votes) – Thomas (new contributor)
Sector:
1) Logical sector: also called the native sector. This is the manufacturer's default setting; the user cannot change it. Before 2010: 512 B/sector. After 2010: 4K/sector. A few manufacturers provide an HDD tool to change the native sector size.
2) Physical sector: called a cluster (or allocation unit, on FAT/Windows) or a block (Linux/Unix). The user can change the physical sector size (512 B, 1K, 2K, 4K, ...) with a format or partition tool. A physical sector contains one or more native sectors.
(Example 1: if your HDD has a 512 B native sector, you can set a 4K physical sector; this means 1 cluster = 8 native sectors.)
(Example 2: if your HDD has a 4K native sector, you can set a 4K physical sector; this means 1 cluster = 1 native sector.)
3) The file system deals only with the physical sector (block or cluster).
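A small sketch of the ratio the examples above describe (filesystem block size divided by the drive's reported sector size), assuming an ext4 filesystem mounted at /mnt on /dev/sda (hypothetical paths):
# lbs=$(blockdev --getss /dev/sda)     # logical ("native") sector size reported by the drive
# bsz=$(stat -f -c %S /mnt)            # filesystem block ("cluster") size
# echo $(( bsz / lbs ))                # e.g. 8 for 4K blocks on 512-byte logical sectors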