ZFS and Solaris 11: how to reduce RAM consumption?
sudo echo ::memstat | sudo mdb -k
Usage Type/Subtype              Pages     Bytes   %Tot
----------------------------  --------  --------  -----
Kernel                          291425      1.1g  17.5%
ZFS                             844447      3.2g  50.7%
ZFS is using over 3 GB, but I have set the ARC to consume at most 2 GB:
cat /etc/system
set zfs:zfs_arc_max = 2147483648
set zfs:zfs_arc_min = 1073741824
I have rebooted, of course. The version is Solaris 11.4 beta.
solaris zfs
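As an aside, `::memstat` attributes all ZFS kernel memory (including non-ARC metadata) to the ZFS line, so it is worth checking what the ARC itself reports. A sketch, assuming the standard Solaris ARC kstats and mdb dcmd (run on the affected box):

```shell
# Current ARC size and its configured ceiling, in bytes,
# from the zfs:0:arcstats kstat module:
kstat -p zfs:0:arcstats:size
kstat -p zfs:0:arcstats:c_max

# The same figures summarized by the kernel debugger:
echo ::arc | sudo mdb -k
```

If `c_max` matches your 2 GB setting but the `::memstat` ZFS line is still larger, the excess is ZFS memory outside the ARC cap rather than a tunable being ignored.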
Which version of Solaris 11?
– Andrew Henle, Feb 26 at 10:37
Version is 11.4 beta.
– elbarna, Feb 26 at 10:51
See this: docs.oracle.com/cd/E53394_01/html/E54818/… zfs_arc_max appears to have been deprecated in favor of some amorphous user_reserve_hint_pct. Seems like they still have some of the we-know-better-than-everyone-else "ZFS is ALWAYS consistent on disk so no fsck will EVER be needed!!!!" developers running things... :-/ That attitude from Sun's original ZFS implementation was misguided and misplaced 10+ years ago; it's a shame it appears to still live on.
– Andrew Henle, Feb 26 at 11:03
Thanks. Add it as an answer please, so I can close and vote.
– elbarna, Feb 26 at 12:09
1 Answer
Accepted answer (score 3), answered Feb 26 at 22:52 by Andrew Henle:
zfs_arc_max has apparently been deprecated. See https://docs.oracle.com/cd/E53394_01/html/E54818/chapterzfs-3.html#scrolltoc:
ZFS Memory Management Parameters
This section describes parameters related to ZFS memory management.
user_reserve_hint_pct
Description
Informs the system about how much memory is reserved for application
use, and therefore limits how much memory can be used by the ZFS ARC
cache as the cache increases over time.
By means of this parameter, administrators can maintain a large
reserve of available free memory for future application demands. The
user_reserve_hint_pct parameter is intended to be used in place of the
zfs_arc_max parameter to restrict the growth of the ZFS ARC cache.
Note - Review Document 1663862.1, Memory Management Between ZFS and
Applications in Oracle Solaris 11.2, in My Oracle Support (MOS) for
guidance in tuning this parameter.
Data Type
Unsigned Integer (64-bit)
Default
0
If a dedicated system is used to run a set of applications with a
known memory footprint, set the parameter to the value of that
footprint, such as the sum of the SGA of Oracle database.
To assign a value to the parameter, run the script that is provided in
Document 1663862.1 in My Oracle Support (MOS). To make the tuning
persistent across reboots, refer to the script output for instructions
about using the -p option.
Range
0-99
Units
Percent
Dynamic
Yes
You can adjust the setting of this parameter dynamically on a running
system.
When to Change
For upward adjustments, increase the value if the initial value is
determined to be insufficient over time for application requirements,
or if application demand increases on the system. Perform this
adjustment only within a scheduled system maintenance window. After
you have changed the value, reboot the system.
For downward adjustments, decrease the value if allowed by application
requirements. Make sure to decrease the value only in small amounts,
no greater than 5% at a time.
Commitment Level
Unstable
...
zfs_arc_max
Description
Determines the maximum size of the ZFS Adaptive Replacement Cache
(ARC). However, see user_reserve_hint_pct.
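Since user_reserve_hint_pct is a percentage of physical memory rather than a byte count, translating a known application footprint into a setting takes a small calculation. A minimal illustration; the helper name and ceiling-based rounding are my own choices, not anything from the Oracle docs:

```python
import math

# Illustrative helper (not an Oracle API): pick the smallest
# user_reserve_hint_pct value that covers a known application
# footprint, clamped to the documented 0-99 range.
def reserve_pct(app_bytes: int, phys_bytes: int) -> int:
    pct = math.ceil(app_bytes / phys_bytes * 100)
    return max(0, min(pct, 99))

# A 2 GiB application footprint on an 8 GiB machine: reserve 25%,
# leaving the ARC free to grow into the remaining ~75% of RAM.
print(reserve_pct(2 * 2**30, 8 * 2**30))  # 25
```

The actual value should still come from the MOS Document 1663862.1 script, which also handles making the setting persistent.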
In my opinion, this is a huge step backwards. A hard limit is replaced with a mere "hint". There can be very, very, very good reasons for a hard limit.
(I'm wondering if there really is an undocumented ARC hard limit. Sun/Oracle has a history of doing things like that with ZFS. "ZFS is always consistent on disk! You don't need fsck or any debugging tools. No, you don't. WE SAID YOU DON'T. WHY OH WHY WON'T YOU BELIEVE US?!?! Oh, ummm, ahhh, yeah, here's zdb. We've been using it internally for years so it's pretty mature...")
Oh, user_reserve_hint_pct is quite a hard limit. It just limits something different: it limits the memory consumed by the kernel, and it has an important advantage over the old way; it's dynamic and can be changed while the system is running. As the ARC is at the moment the only thing that can shrink on user demand, you limit the ARC by proxy. In the end it's the more correct way: you limit the ARC to ensure enough memory for the application, not because you want a certain size of ZFS ARC.
– c0t0d0s0, Mar 5 at 4:34
Regarding your ZFS rant: you should tell the other part of the "no ZFS error checking" story. The idea is that any generic repair mechanism can only force the filesystem into a state based on a set of assumptions; that state may be mountable, but may destroy a lot of data on the way. When there is a bug that damages the on-disk state, ZFS should repair the on-disk state with knowledge of the bug at every read or write, and do it without sending a lot of data into lost+found.
– c0t0d0s0, Mar 5 at 4:46
@c0t0d0s0 "Oh, user_reserve_hint_pct is quite a hard limit." Then why is it called a "hint"? Why not just document zfs_arc_max and make it dynamic? Do you have access to the source code? And the ZFS "always consistent on disk" marketing hype, even if possible, completely depends upon there not being any bugs in the code. The very fact that zdb secretly existed for years before becoming public is proof those claims were known to be lies when they were made: if ZFS is "always consistent on disk", why does zdb even exist? That claim generated much laughter when ZFS debuted.
– Andrew Henle, Mar 5 at 10:52