How to deal with high number of page faults?

I've got ~200,000 page faults per second, which I think is a huge number, and it affects overall system performance. I'm using a MacBook Pro with 16 GB of RAM.



How can I deal with such a constant rate of page faults? Is it possible to find the reason behind it (i.e. which process is causing them, and why)?




For example:



$ sudo vm_stat 1 
Mach Virtual Memory Statistics: (page size of 4096 bytes)
free active specul inactive throttle wired prgable faults copy 0fill reactive purged file-backed anonymous cmprssed cmprssor dcomprs comprs pageins pageout swapins swapouts
4917 96838 529 97567 0 3289215 2 11428M 127346K 5267290K 2537359K 215040K 47481 147453 11108999 703646 3631740K 3986806K 125027K 3038713 4551661K 4575495K
4728 119600 514 119225 0 3290072 0 197571 160 2067 28142 2 47479 191860 11063976 658555 153524 108572 31 0 11145 18981
4165 104525 514 104919 0 3288821 0 130758 0 2229 74497 12 47449 162509 11094522 689295 90301 121063 3 3 18085 7894
4648 106152 463 105365 0 3289210 0 169256 268 8150 73677 6 47404 164576 11097078 686991 122692 126220 0 0 12751 15474
5364 105849 246 101327 0 3291998 0 194376 81 24019 43351 10 47189 160233 11103191 687904 121794 129662 72 0 14705 14031
4800 131711 234 126573 0 3289384 0 272238 0 4346 110454 0 47177 211341 11035585 639095 167782 108490 0 0 10628 23480
3813 114535 203 114136 0 3289283 0 235409 0 3292 39228 6 47149 181725 11065977 670501 153957 184877 12 2 18041 17254
4568 115828 104 116790 0 3289299 0 211943 0 2536 81178 1 46680 186042 11061206 665872 139337 134989 0 0 18848 19316
3273 95211 105 95156 0 3289239 0 223742 268 2123 40588 4 46670 143802 11103048 708970 156600 198575 0 0 22910 8
$ top -n1 | head
Processes: 453 total, 8 running, 16 stuck, 429 sleeping, 2870 threads
2016/07/21 15:06:33
Load Avg: 7.95, 9.09, 9.22
CPU usage: 9.17% user, 41.17% sys, 49.64% idle
SharedLibs: 54M resident, 8520K data, 3092K linkedit.
MemRegions: 307835 total, 4421M resident, 11M private, 143M shared.
PhysMem: 16G used (13G wired), 4880K unused.
VM: 4577G vsize, 528M framework vsize, 4619730480(0) swapins, 4643542587(0) swapouts.
Networks: packets: 67336466/41G in, 59785188/11G out.
Disks: 620736722/18T read, 615258201/18T written.
$ uptime
15:25pm up 17 days 23:31, 48 users, load average: 7.95, 9.09, 9.22
$ sudo fs_usage | grep -e PgIn -e PgOut
15:25:02 PgOut[ST1P] 0.000146 W kernel_task
15:25:02 PgOut[ST1P] 0.000241 W kernel_task
15:25:02 PgOut[ST1P] 0.000234 W kernel_task
15:25:02 PgOut[ST1P] 0.000317 W kernel_task
15:25:02 PgOut[ST1P] 0.000333 W kernel_task
15:25:02 PgOut[ST1P] 0.000252 W kernel_task
15:25:02 PgOut[ST1P] 0.000248 W kernel_task
15:25:02 PgOut[ST1P] 0.000240 W kernel_task
15:25:02 PgOut[ST1P] 0.000236 W kernel_task
15:25:02 PgOut[ST1P] 0.000261 W kernel_task
15:25:02 PgOut[ST1P] 0.000257 W kernel_task
15:25:02 PgOut[ST1P] 0.000253 W kernel_task
15:25:02 PgOut[ST1P] 0.000289 W kernel_task
15:25:02 PgOut[ST1P] 0.000159 W kernel_task
15:25:02 PgOut[ST1P] 0.000120 W kernel_task
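Since the last field of each fs_usage line above is the responsible process, one way to narrow down the culprit is to capture a few seconds of paging events and tally them per process. This is only a sketch built on the column layout shown above, using standard grep/awk/sort; the log path is arbitrary:

$ sudo fs_usage | grep -e PgIn -e PgOut > /tmp/paging.log   # let it run for a few seconds, then press Ctrl-C
$ awk '{ n[$NF]++ } END { for (p in n) print n[p], p }' /tmp/paging.log | sort -rn | head

The awk program counts events by the last field (the process name); sort -rn | head then lists the processes generating the most paging activity.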









kernel osx performance swap virtual-memory

asked Jul 21 '16 at 14:30, edited Jul 21 '16 at 14:43 – kenorb
  • You might find something from the commands atop or iotop.
    – meuh
    Jul 21 '16 at 19:53

  • Tried, but it showed the same thing: the kernel has the most writes.
    – kenorb
    Jul 21 '16 at 19:55
1 Answer
If your kernel supports it, you can try to record the stack at the time of each page fault. Run this command, then interrupt it after a few seconds:



sudo perf record -e page-faults -ag


It will create a large binary file, perf.data, which you can visualise with:



perf report 


perf is a huge subject. You can start with the tutorial.
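For reference, a bounded session on a Linux box might look like the following. This is a sketch, assuming perf is installed; the -- sleep 10 idiom stops the recording after ten seconds instead of interrupting it by hand:

sudo perf record -e page-faults -ag -- sleep 10   # record system-wide (-a) with call graphs (-g) for 10 seconds
sudo perf report --sort comm,dso                  # summarise fault samples per command and per binary/library

--sort comm,dso groups the report by command and shared object, which makes the heaviest faulting processes stand out.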






answered Jul 23 '16 at 13:14 – meuh
  • Sounds like a great tool, but unfortunately I think perf is Linux-only rather than generally available on Unix. Still, the information is useful.
    – kenorb
    Jul 23 '16 at 15:10

  • MacOS seems to have DTrace instead of perf, and it is supposedly even better! There are a lot of dtrace scripts described here, one of which may help.
    – meuh
    Jul 23 '16 at 18:13
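For example, DTrace's vminfo provider can attribute faults to processes directly. A minimal one-liner sketch, assuming the vminfo provider and its as_fault probe are available on your macOS version and that System Integrity Protection does not block dtrace:

sudo dtrace -n 'vminfo:::as_fault { @faults[execname] = count(); }'   # press Ctrl-C to print per-process fault counts

execname and count() are standard DTrace built-ins; the aggregation is printed automatically when the script is interrupted.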









