Who's consuming my inotify resources?
After a recent upgrade to Fedora 15, I'm finding that a number of tools are failing with errors along the lines of:
tail: inotify resources exhausted
tail: inotify cannot be used, reverting to polling
It's not just tail that's reporting problems with inotify, either. Is there any way to interrogate the kernel to find out what process or processes are consuming the inotify resources? The current inotify-related sysctl settings look like this:
fs.inotify.max_user_instances = 128
fs.inotify.max_user_watches = 8192
fs.inotify.max_queued_events = 16384
Tags: fedora, kernel, inotify
asked Jun 23 '11 at 15:39 by larsks; edited Jun 23 '11 at 22:36 by Gilles
7 Answers
It seems that when a process creates an inotify instance via inotify_init(), the file representing the resulting file descriptor in the /proc filesystem is a symlink to the (non-existent) 'anon_inode:inotify' file.
$ cd /proc/5317/fd
$ ls -l
total 0
lrwx------ 1 puzel users 64 Jun 24 10:36 0 -> /dev/pts/25
lrwx------ 1 puzel users 64 Jun 24 10:36 1 -> /dev/pts/25
lrwx------ 1 puzel users 64 Jun 24 10:36 2 -> /dev/pts/25
lr-x------ 1 puzel users 64 Jun 24 10:36 3 -> anon_inode:inotify
lr-x------ 1 puzel users 64 Jun 24 10:36 4 -> anon_inode:inotify
Unless I misunderstood the concept, the following command should show you a list of processes (their representation in /proc), sorted by the number of inotify instances they use.
for foo in /proc/*/fd/*; do readlink -f $foo; done | grep inotify | sort | uniq -c | sort -nr
Excellent, thank you! I didn't know about the inotify inodes showing up in /proc. For my purposes, the command can be simplified to this: find /proc/*/fd/* -type l -lname 'anon_inode:inotify' -print
– larsks Jun 25 '11 at 12:29
I'm glad it helped. And your solution with find -lname is indeed much nicer than mine with for loop and readlink.
– Petr Uzel
Jun 26 '11 at 10:44
Note that you could also be out of watches (not instances). E.g., on my system, that gives a low-teens number of instances, but there are many tens of thousands of watches from KDE's desktop search. It's too bad there isn't an easier way to check how many watches/instances are in use, since the kernel clearly knows...
– derobert Jan 23 '13 at 16:18
To show the command lines of the offending programs: find /proc/*/fd/* -type l -lname 'anon_inode:inotify' -exec sh -c 'cat $(dirname {})/../cmdline; echo ""' \; 2>/dev/null
– Mark K Cowan
Mar 28 '17 at 15:42
You are probably running out of inotify watches rather than instances. To find out who's creating a lot of watches:
- Run echo 1 >> /sys/kernel/debug/tracing/events/syscalls/sys_exit_inotify_add_watch/enable to enable tracing of watch adds;
- Run cat /sys/kernel/debug/tracing/tracing_enabled to make sure it's set to 1, and if it isn't, run echo 1 >> /sys/kernel/debug/tracing/tracing_enabled;
- Restart the processes with inotify instances (determined as described in Petr Uzel's answer) that you suspect of creating a lot of watches; and
- Read the file /sys/kernel/debug/tracing/trace to see how many watches are created and by which processes.
When you're done, make sure to echo 0 into the enable file (and the tracing_enabled file if you had to enable that as well) to turn off tracing so you won't incur the performance hit of continuing to trace.
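The steps above can be collected into a small sketch. This assumes the tracepoint path given in this answer; note that on newer kernels the global switch is tracing_on rather than tracing_enabled, and the sketch uses that modern name.

```shell
#!/bin/sh
# Sketch of the steps above. Requires root and a mounted debugfs.
# TRACEFS is parameterized only so the sketch is easy to point elsewhere.
TRACEFS="${TRACEFS:-/sys/kernel/debug/tracing}"

start_watch_trace() {
    # Trace every return from inotify_add_watch()
    echo 1 > "$TRACEFS/events/syscalls/sys_exit_inotify_add_watch/enable"
    # Make sure the global tracing switch is on (tracing_on on modern kernels)
    echo 1 > "$TRACEFS/tracing_on"
}

stop_watch_trace() {
    echo 0 > "$TRACEFS/events/syscalls/sys_exit_inotify_add_watch/enable"
    echo 0 > "$TRACEFS/tracing_on"
}

# Usage (as root): start_watch_trace; restart the suspect processes;
#   cat "$TRACEFS/trace"
# then stop_watch_trace to avoid the overhead of continued tracing.
```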
It was a backup application creating lots of inotify watches, and the solution in the accepted answer helped identify the culprit. However, I wasn't previously familiar with the system call tracing you've demonstrated here. Very cool. Thanks for the information!
– larsks
Jan 23 '13 at 16:35
Are you sure it is '/sys/kernel/debug/tracing/tracing_enabled'? On my system it seems the correct path is '/sys/kernel/debug/tracing/tracing_on'...
– Kartoch Apr 10 '13 at 14:42
There is no /sys/kernel/debug/tracing/events/syscalls/sys_exit_inotify_add_watch/enable nor /sys/kernel/debug/tracing/tracing_enabled on Gentoo Linux, but /sys/kernel/debug/tracing/tracing_on exists. Why is that?
– zeekvfu
Dec 4 '13 at 16:32
As @Kartoch implies, you need to do echo 1 | sudo tee /sys/kernel/debug/tracing/tracing_on on modern distros (Ubuntu 18.04.2 LTS).
– oligofren
Feb 22 at 14:53
It wasn't sufficient to do the commands for me; I also needed to do: cd /sys/kernel/debug/tracing/; echo function > current_tracer; echo SyS_inotify_add_watch > set_ftrace_filter
– oligofren
Feb 25 at 9:05
To trace which processes consume inotify watches (not instances) you can use the dynamic ftrace feature of the kernel if it is enabled in your kernel.
The kernel option you need is CONFIG_DYNAMIC_FTRACE.
First mount the debugfs filesystem if it is not already mounted.
mount -t debugfs nodev /sys/kernel/debug
Go to the tracing subdirectory of this debugfs directory:
cd /sys/kernel/debug/tracing
Enable tracing of function calls
echo function > current_tracer
Filter only SyS_inotify_add_watch system calls:
echo SyS_inotify_add_watch > set_ftrace_filter
Clear the trace ring buffer if it wasn't empty
echo > trace
Enable tracing if it is not already enabled
echo 1 > tracing_on
Restart the suspected process (in my case it was crashplan, a backup application)
Watch the inotify watches being added:
wc -l trace
cat trace
Done
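The numbered steps above can be sketched as one pair of shell functions. The SyS_inotify_add_watch symbol name and the debugfs mount point vary by kernel version, so treat this as a template rather than something guaranteed to work verbatim.

```shell
#!/bin/sh
# Function-tracer variant of the steps above. Requires root and
# CONFIG_DYNAMIC_FTRACE; the traced symbol name differs across kernels.
TRACING="${TRACING:-/sys/kernel/debug/tracing}"

start_trace() {
    echo function > "$TRACING/current_tracer"
    echo SyS_inotify_add_watch > "$TRACING/set_ftrace_filter"
    echo > "$TRACING/trace"              # clear the ring buffer
    echo 1 > "$TRACING/tracing_on"
}

stop_trace() {
    echo 0 > "$TRACING/tracing_on"
    echo nop > "$TRACING/current_tracer"  # restore the default tracer
}

# After start_trace, restart the suspect process, then inspect:
#   wc -l "$TRACING/trace"; cat "$TRACING/trace"
# and finish with stop_trace.
```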
This counts open inotify instances per process:
find /proc/*/fd/* -type l -lname 'anon_inode:inotify' 2>/dev/null | cut -f 1-4 -d'/' | sort | uniq -c | sort -nr
I ran into this problem, and none of these answers give you the answer of "how many watches is each process currently using?" The one-liners all give you how many instances are open, which is only part of the story, and the trace stuff is only useful to see new watches being opened.
TL;DR: This will get you a file with a list of open inotify instances and the number of watches they have, along with the pids and binaries that spawned them, sorted in descending order by watch count:
sudo lsof | awk '/anon_inode/ { gsub(/[urw]$/,"",$4); print "/proc/"$2"/fdinfo/"$4; }' | while read fdi; do count=$(sudo grep -c inotify $fdi); exe=$(sudo readlink $(dirname $(dirname $fdi))/exe); echo -e $count"\t"$fdi"\t"$exe; done | sort -nr > watches
That's a big ball of mess, so here's how I got there. To start, I ran a tail on a test file, and looked at the fd's it opened:
joel@gladstone:~$ tail -f test > /dev/null &
[3] 22734
joel@opx1:~$ ls -ahltr /proc/22734/fd
total 0
dr-xr-xr-x 9 joel joel 0 Feb 22 22:34 ..
dr-x------ 2 joel joel 0 Feb 22 22:34 .
lr-x------ 1 joel joel 64 Feb 22 22:35 4 -> anon_inode:inotify
lr-x------ 1 joel joel 64 Feb 22 22:35 3 -> /home/joel/test
lrwx------ 1 joel joel 64 Feb 22 22:35 2 -> /dev/pts/2
l-wx------ 1 joel joel 64 Feb 22 22:35 1 -> /dev/null
lrwx------ 1 joel joel 64 Feb 22 22:35 0 -> /dev/pts/2
So, 4 is the fd we want to investigate. Let's see what's in the fdinfo for that:
joel@opx1:~$ cat /proc/22734/fdinfo/4
pos: 0
flags: 00
mnt_id: 11
inotify wd:1 ino:15f51d sdev:ca00003 mask:c06 ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:1df51500a75e538c
That looks like an entry for the watch at the bottom!
Let's try something with more watches, this time with the inotifywait utility, just watching whatever is in /tmp:
joel@gladstone:~$ inotifywait /tmp/* &
[4] 27862
joel@gladstone:~$ Setting up watches.
Watches established.
joel@gladstone:~$ ls -ahtlr /proc/27862/fd | grep inotify
lr-x------ 1 joel joel 64 Feb 22 22:41 3 -> anon_inode:inotify
joel@gladstone:~$ cat /proc/27862/fdinfo/3
pos: 0
flags: 00
mnt_id: 11
inotify wd:6 ino:7fdc sdev:ca00003 mask:fff ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:dc7f0000551e9d88
inotify wd:5 ino:7fcb sdev:ca00003 mask:fff ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:cb7f00005b1f9d88
inotify wd:4 ino:7fcc sdev:ca00003 mask:fff ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:cc7f00006a1d9d88
inotify wd:3 ino:7fc6 sdev:ca00003 mask:fff ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:c67f00005d1d9d88
inotify wd:2 ino:7fc7 sdev:ca00003 mask:fff ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:c77f0000461d9d88
inotify wd:1 ino:7fd7 sdev:ca00003 mask:fff ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:d77f00000053c98b
Aha! More entries! So we should have six things in /tmp then:
joel@opx1:~$ ls /tmp/ | wc -l
6
Excellent. My new inotifywait has one entry in its fd list (which is what the other one-liners here are counting), but six entries in its fdinfo file. So we can figure out how many watches a given fd for a given process is using by consulting its fdinfo file. Now to put it together with some of the above to grab a list of processes that have inotify watches open, and use that to count the entries in each fdinfo. This is similar to above, so I'll just dump the one-liner here:
sudo lsof | awk '/anon_inode/ { gsub(/[urw]$/,"",$4); print "/proc/"$2"/fdinfo/"$4; }' | while read fdi; do count=$(sudo grep -c inotify $fdi); echo -e $count"\t"$fdi; done
There's some thick stuff in here, but the basics are that I use awk to build an fdinfo path from the lsof output, grabbing the pid and fd number, stripping the u/r/w flag from the latter. Then for each constructed fdinfo path, I count the number of inotify lines and output the count and the pid.
It would be nice if I had what processes these pids represent in the same place though, right? I thought so. So, in a particularly messy bit, I settled on calling dirname twice on the fdinfo path to get back to /proc/<pid>, adding /exe to it, and then running readlink on that to get the exe name of the process. Throw that in there as well, sort it by number of watches, and redirect it to a file for safe-keeping, and we get:
sudo lsof | awk '/anon_inode/ { gsub(/[urw]$/,"",$4); print "/proc/"$2"/fdinfo/"$4; }' | while read fdi; do count=$(sudo grep -c inotify $fdi); exe=$(sudo readlink $(dirname $(dirname $fdi))/exe); echo -e $count"\t"$fdi"\t"$exe; done | sort -n > watches
Running that without sudo to just show my processes I launched above, I get:
joel@gladstone:~$ cat watches
6 /proc/4906/fdinfo/3 /usr/bin/inotifywait
1 /proc/22734/fdinfo/4 /usr/bin/tail
Perfect! A list of processes, fd's, and how many watches each is using, which is exactly what I needed.
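If you would rather not depend on lsof, the same counting can be sketched as a plain walk over /proc. This is not the answer's original method, just an equivalent written as a readable script; PROC is parameterized only so the sketch can be tested against a fake tree, and you would run it as root to see every process.

```shell
#!/bin/sh
# Plain-/proc variant with the same output shape as the lsof one-liner
# above: watch count, fdinfo path, executable.
PROC="${PROC:-/proc}"

count_watches() {
    for fd in "$PROC"/[0-9]*/fd/*; do
        # Keep only file descriptors backed by an inotify instance
        [ "$(readlink "$fd" 2>/dev/null)" = "anon_inode:inotify" ] || continue
        pid=${fd%/fd/*}; pid=${pid##*/}
        fdinfo="$PROC/$pid/fdinfo/${fd##*/}"
        # fdinfo contains one "inotify wd:..." line per watch
        watches=$(grep -c '^inotify' "$fdinfo" 2>/dev/null)
        exe=$(readlink "$PROC/$pid/exe" 2>/dev/null)
        printf '%s\t%s\t%s\n' "$watches" "$fdinfo" "$exe"
    done | sort -nr
}

# Usage: count_watches | head
```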
When using lsof for this purpose, I would recommend using the -nP flags to avoid unnecessary lookups of reverse DNS and port names. In this particular case, adding -bw to avoid potentially blocking syscalls is also recommended. That said, with lsof gobbling up 3 seconds of wall clock time on my humble workstation (of which 2 seconds are spent in the kernel), this approach is nice for exploration but alas unsuitable for monitoring purposes.
– BertD
Mar 15 '18 at 10:39
I have modified the script above to show the list of processes that are consuming inotify resources:
ps -p `find /proc/*/fd/* -type l -lname 'anon_inode:inotify' -print | sed -e 's/^\/proc\///' -e 's/\/fd.*$//'`
I think there is a way to replace my double sed.
Yes. Use either cut -f 3 -d '/' or sed -e 's/^\/proc\/\([0-9]*\)\/.*/\1/' and you'll only get the pid. Also, if you add 2> /dev/null in the find, you'll get rid of any pesky error lines thrown by find. So this would work:
ps -p $(find /proc/*/fd/* -type l -lname 'anon_inode:inotify' -print 2> /dev/null | sed -e 's/^\/proc\/\([0-9]*\)\/.*/\1/')
As @Jonathan Kamens said, you are probably running out of watches. I have a premade script, inotify-consumers, that lists this for you:
$ time inotify-consumers | head
INOTIFY
WATCHER
COUNT PID CMD
----------------------------------------
6688 27262 /home/dvlpr/apps/WebStorm-2018.3.4/WebStorm-183.5429.34/bin/fsnotifier64
411 27581 node /home/dvlpr/dev/kiwi-frontend/node_modules/.bin/webpack --config config/webpack.dev.js
79 1541 /usr/lib/gnome-settings-daemon/gsd-xsettings
30 1664 /usr/lib/gvfs/gvfsd-trash --spawner :1.22 /org/gtk/gvfs/exec_spaw/0
14 1630 /usr/bin/gnome-software --gapplication-service
real 0m0.099s
user 0m0.042s
sys 0m0.062s
Here you quickly see why the default limit of 8K watches is too little on a development machine, as just one WebStorm instance quickly maxes it out when encountering a node_modules folder with thousands of folders. Add a webpack watcher to guarantee problems...
Just copy the contents of the script (or the file on GitHub) and put it somewhere in your $PATH, like /usr/local/bin. For reference, the main content of the script is simply this:
find /proc/*/fd \
    -lname anon_inode:inotify \
    -printf '%hinfo/%f\n' 2>/dev/null \
    | xargs grep -c '^inotify' \
    | sort -n -t: -k2 -r
In case you are wondering how to increase the limits, here's how to make it permanent:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
Your Answer
StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "106"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);
else
createEditor();
);
function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);
);
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2funix.stackexchange.com%2fquestions%2f15509%2fwhos-consuming-my-inotify-resources%23new-answer', 'question_page');
);
Post as a guest
Required, but never shown
7 Answers
7
active
oldest
votes
7 Answers
7
active
oldest
votes
active
oldest
votes
active
oldest
votes
It seems that if the process creates inotify instance via inotify_init(), the resulting file that represents filedescriptor in the /proc filesystem is a symlink to (non-existing) 'anon_inode:inotify' file.
$ cd /proc/5317/fd
$ ls -l
total 0
lrwx------ 1 puzel users 64 Jun 24 10:36 0 -> /dev/pts/25
lrwx------ 1 puzel users 64 Jun 24 10:36 1 -> /dev/pts/25
lrwx------ 1 puzel users 64 Jun 24 10:36 2 -> /dev/pts/25
lr-x------ 1 puzel users 64 Jun 24 10:36 3 -> anon_inode:inotify
lr-x------ 1 puzel users 64 Jun 24 10:36 4 -> anon_inode:inotify
Unless I misunderstood the concept, the following command should show you list of processes (their representation in /proc), sorted by number of inotify instances they use.
for foo in /proc/*/fd/*; do readlink -f $foo; done | grep inotify | sort | uniq -c | sort -nr
8
Excellent, thank you! I didn't know about the inotify inodes showing up in /proc. For my purposes, the command can be simplified to this:find /proc/*/fd/* -type l -lname 'anon_inode:inotify' -print
– larsks
Jun 25 '11 at 12:29
I'm glad it helped. And your solution with find -lname is indeed much nicer than mine with for loop and readlink.
– Petr Uzel
Jun 26 '11 at 10:44
3
Note that you could also be out of watches (not instances). E.g., on my system, that gives a low-teens number of instances, but there are many tens of thousands of watches from KDE's desktop search. Its too bad there isn't an easier way to check how many watches/instances are in use, since the kernel clearly knows...
– derobert
Jan 23 '13 at 16:18
To show the command lines of the offending programs:find /proc/*/fd/* -type l -lname 'anon_inode:inotify' -exec sh -c 'cat $(dirname )/../cmdline; echo ""' ; 2>/dev/null
– Mark K Cowan
Mar 28 '17 at 15:42
add a comment |
It seems that if the process creates inotify instance via inotify_init(), the resulting file that represents filedescriptor in the /proc filesystem is a symlink to (non-existing) 'anon_inode:inotify' file.
$ cd /proc/5317/fd
$ ls -l
total 0
lrwx------ 1 puzel users 64 Jun 24 10:36 0 -> /dev/pts/25
lrwx------ 1 puzel users 64 Jun 24 10:36 1 -> /dev/pts/25
lrwx------ 1 puzel users 64 Jun 24 10:36 2 -> /dev/pts/25
lr-x------ 1 puzel users 64 Jun 24 10:36 3 -> anon_inode:inotify
lr-x------ 1 puzel users 64 Jun 24 10:36 4 -> anon_inode:inotify
Unless I misunderstood the concept, the following command should show you list of processes (their representation in /proc), sorted by number of inotify instances they use.
for foo in /proc/*/fd/*; do readlink -f $foo; done | grep inotify | sort | uniq -c | sort -nr
8
Excellent, thank you! I didn't know about the inotify inodes showing up in /proc. For my purposes, the command can be simplified to this:find /proc/*/fd/* -type l -lname 'anon_inode:inotify' -print
– larsks
Jun 25 '11 at 12:29
I'm glad it helped. And your solution with find -lname is indeed much nicer than mine with for loop and readlink.
– Petr Uzel
Jun 26 '11 at 10:44
3
Note that you could also be out of watches (not instances). E.g., on my system, that gives a low-teens number of instances, but there are many tens of thousands of watches from KDE's desktop search. Its too bad there isn't an easier way to check how many watches/instances are in use, since the kernel clearly knows...
– derobert
Jan 23 '13 at 16:18
To show the command lines of the offending programs:find /proc/*/fd/* -type l -lname 'anon_inode:inotify' -exec sh -c 'cat $(dirname )/../cmdline; echo ""' ; 2>/dev/null
– Mark K Cowan
Mar 28 '17 at 15:42
add a comment |
It seems that if the process creates inotify instance via inotify_init(), the resulting file that represents filedescriptor in the /proc filesystem is a symlink to (non-existing) 'anon_inode:inotify' file.
$ cd /proc/5317/fd
$ ls -l
total 0
lrwx------ 1 puzel users 64 Jun 24 10:36 0 -> /dev/pts/25
lrwx------ 1 puzel users 64 Jun 24 10:36 1 -> /dev/pts/25
lrwx------ 1 puzel users 64 Jun 24 10:36 2 -> /dev/pts/25
lr-x------ 1 puzel users 64 Jun 24 10:36 3 -> anon_inode:inotify
lr-x------ 1 puzel users 64 Jun 24 10:36 4 -> anon_inode:inotify
Unless I misunderstood the concept, the following command should show you list of processes (their representation in /proc), sorted by number of inotify instances they use.
for foo in /proc/*/fd/*; do readlink -f $foo; done | grep inotify | sort | uniq -c | sort -nr
It seems that if the process creates inotify instance via inotify_init(), the resulting file that represents filedescriptor in the /proc filesystem is a symlink to (non-existing) 'anon_inode:inotify' file.
$ cd /proc/5317/fd
$ ls -l
total 0
lrwx------ 1 puzel users 64 Jun 24 10:36 0 -> /dev/pts/25
lrwx------ 1 puzel users 64 Jun 24 10:36 1 -> /dev/pts/25
lrwx------ 1 puzel users 64 Jun 24 10:36 2 -> /dev/pts/25
lr-x------ 1 puzel users 64 Jun 24 10:36 3 -> anon_inode:inotify
lr-x------ 1 puzel users 64 Jun 24 10:36 4 -> anon_inode:inotify
Unless I misunderstood the concept, the following command should show you list of processes (their representation in /proc), sorted by number of inotify instances they use.
for foo in /proc/*/fd/*; do readlink -f $foo; done | grep inotify | sort | uniq -c | sort -nr
edited Jun 24 '11 at 11:12
answered Jun 24 '11 at 8:43
Petr UzelPetr Uzel
5,0772224
5,0772224
8
Excellent, thank you! I didn't know about the inotify inodes showing up in /proc. For my purposes, the command can be simplified to this:find /proc/*/fd/* -type l -lname 'anon_inode:inotify' -print
– larsks
Jun 25 '11 at 12:29
I'm glad it helped. And your solution with find -lname is indeed much nicer than mine with for loop and readlink.
– Petr Uzel
Jun 26 '11 at 10:44
3
Note that you could also be out of watches (not instances). E.g., on my system, that gives a low-teens number of instances, but there are many tens of thousands of watches from KDE's desktop search. Its too bad there isn't an easier way to check how many watches/instances are in use, since the kernel clearly knows...
– derobert
Jan 23 '13 at 16:18
To show the command lines of the offending programs:find /proc/*/fd/* -type l -lname 'anon_inode:inotify' -exec sh -c 'cat $(dirname )/../cmdline; echo ""' ; 2>/dev/null
– Mark K Cowan
Mar 28 '17 at 15:42
add a comment |
8
Excellent, thank you! I didn't know about the inotify inodes showing up in /proc. For my purposes, the command can be simplified to this:find /proc/*/fd/* -type l -lname 'anon_inode:inotify' -print
– larsks
Jun 25 '11 at 12:29
I'm glad it helped. And your solution with find -lname is indeed much nicer than mine with for loop and readlink.
– Petr Uzel
Jun 26 '11 at 10:44
3
Note that you could also be out of watches (not instances). E.g., on my system, that gives a low-teens number of instances, but there are many tens of thousands of watches from KDE's desktop search. Its too bad there isn't an easier way to check how many watches/instances are in use, since the kernel clearly knows...
– derobert
Jan 23 '13 at 16:18
To show the command lines of the offending programs:find /proc/*/fd/* -type l -lname 'anon_inode:inotify' -exec sh -c 'cat $(dirname )/../cmdline; echo ""' ; 2>/dev/null
– Mark K Cowan
Mar 28 '17 at 15:42
8
8
Excellent, thank you! I didn't know about the inotify inodes showing up in /proc. For my purposes, the command can be simplified to this:
find /proc/*/fd/* -type l -lname 'anon_inode:inotify' -print
– larsks
Jun 25 '11 at 12:29
Excellent, thank you! I didn't know about the inotify inodes showing up in /proc. For my purposes, the command can be simplified to this:
find /proc/*/fd/* -type l -lname 'anon_inode:inotify' -print
– larsks
Jun 25 '11 at 12:29
I'm glad it helped. And your solution with find -lname is indeed much nicer than mine with for loop and readlink.
– Petr Uzel
Jun 26 '11 at 10:44
I'm glad it helped. And your solution with find -lname is indeed much nicer than mine with for loop and readlink.
– Petr Uzel
Jun 26 '11 at 10:44
3
3
Note that you could also be out of watches (not instances). E.g., on my system, that gives a low-teens number of instances, but there are many tens of thousands of watches from KDE's desktop search. Its too bad there isn't an easier way to check how many watches/instances are in use, since the kernel clearly knows...
– derobert
Jan 23 '13 at 16:18
Note that you could also be out of watches (not instances). E.g., on my system, that gives a low-teens number of instances, but there are many tens of thousands of watches from KDE's desktop search. Its too bad there isn't an easier way to check how many watches/instances are in use, since the kernel clearly knows...
– derobert
Jan 23 '13 at 16:18
To show the command lines of the offending programs:
find /proc/*/fd/* -type l -lname 'anon_inode:inotify' -exec sh -c 'cat $(dirname )/../cmdline; echo ""' ; 2>/dev/null
– Mark K Cowan
Mar 28 '17 at 15:42
To show the command lines of the offending programs:
find /proc/*/fd/* -type l -lname 'anon_inode:inotify' -exec sh -c 'cat $(dirname )/../cmdline; echo ""' ; 2>/dev/null
– Mark K Cowan
Mar 28 '17 at 15:42
add a comment |
You are probably running out of inotify watches rather than instances. To find out who's creating a lot of watches:
- Do
echo 1 >> /sys/kernel/debug/tracing/events/syscalls/sys_exit_inotify_add_watch/enable
to enable tracing of watch adds; - Do
cat /sys/kernel/debug/tracing/tracing_enabled
to make sure it's set to 1 and if it isn't doecho 1 >> /sys/kernel/debug/tracing/tracing_enabled
; - Restart the processes with inotify instances (determined as described in Petr Uzel's answer) that you suspect of creating a lot of watches; and
- Read the file
/sys/kernel/debug/tracing/trace
to watch how many watches are created and by which processes.
When you're done, make sure to echo 0 into the enable file (and the tracing_enabled file if you had to enable that as well) to turn off tracing so you won't incur the performance hit of continuing to trace.
It was a backup application creating lots of inotify watches, and the solution in the accepted answer helped identify the culprit. However, I wasn't previously familiar with the system call tracing you've demonstrated here. Very cool. Thanks for the information!
– larsks
Jan 23 '13 at 16:35
2
are you sure it is '/sys/kernel/debug/tracing/tracing_enabled' ? On my system it seems the correct path is '/sys/kernel/debug/tracing/tracing_on'...
– Kartoch
Apr 10 '13 at 14:42
There is no /sys/kernel/debug/tracing/events/syscalls/sys_exit_inotify_add_watch/enable nor /sys/kernel/debug/tracing/tracing_enabled on Gentoo Linux, but /sys/kernel/debug/tracing/tracing_enabled exists. Why is that?
– zeekvfu
Dec 4 '13 at 16:32
As @Kartoch implies, you need to doecho 1 | sudo tee /sys/kernel/debug/tracing/tracing_on
on modern distros (Ubuntu 18.04.2 LTS).
– oligofren
Feb 22 at 14:53
It wasn't sufficient to do the commands for me, I also needed to do : ` cd /sys/kernel/debug/tracing/ ; echo function > current_tracer; echo SyS_inotify_add_watch > set_ftrace_filter`
– oligofren
Feb 25 at 9:05
|
show 1 more comment
You are probably running out of inotify watches rather than instances. To find out who's creating a lot of watches:
- Do
echo 1 >> /sys/kernel/debug/tracing/events/syscalls/sys_exit_inotify_add_watch/enable
to enable tracing of watch adds; - Do
cat /sys/kernel/debug/tracing/tracing_enabled
to make sure it's set to 1 and if it isn't doecho 1 >> /sys/kernel/debug/tracing/tracing_enabled
; - Restart the processes with inotify instances (determined as described in Petr Uzel's answer) that you suspect of creating a lot of watches; and
- Read the file
/sys/kernel/debug/tracing/trace
to watch how many watches are created and by which processes.
When you're done, make sure to echo 0 into the enable file (and the tracing_enabled file if you had to enable that as well) to turn off tracing so you won't incur the performance hit of continuing to trace.
It was a backup application creating lots of inotify watches, and the solution in the accepted answer helped identify the culprit. However, I wasn't previously familiar with the system call tracing you've demonstrated here. Very cool. Thanks for the information!
– larsks
Jan 23 '13 at 16:35
2
are you sure it is '/sys/kernel/debug/tracing/tracing_enabled' ? On my system it seems the correct path is '/sys/kernel/debug/tracing/tracing_on'...
– Kartoch
Apr 10 '13 at 14:42
There is no /sys/kernel/debug/tracing/events/syscalls/sys_exit_inotify_add_watch/enable nor /sys/kernel/debug/tracing/tracing_enabled on Gentoo Linux, but /sys/kernel/debug/tracing/tracing_enabled exists. Why is that?
– zeekvfu
Dec 4 '13 at 16:32
As @Kartoch implies, you need to doecho 1 | sudo tee /sys/kernel/debug/tracing/tracing_on
on modern distros (Ubuntu 18.04.2 LTS).
– oligofren
Feb 22 at 14:53
It wasn't sufficient to do the commands for me, I also needed to do : ` cd /sys/kernel/debug/tracing/ ; echo function > current_tracer; echo SyS_inotify_add_watch > set_ftrace_filter`
– oligofren
Feb 25 at 9:05
|
show 1 more comment
You are probably running out of inotify watches rather than instances. To find out who's creating a lot of watches:
- Do
echo 1 >> /sys/kernel/debug/tracing/events/syscalls/sys_exit_inotify_add_watch/enable
to enable tracing of watch adds; - Do
cat /sys/kernel/debug/tracing/tracing_enabled
to make sure it's set to 1 and if it isn't doecho 1 >> /sys/kernel/debug/tracing/tracing_enabled
; - Restart the processes with inotify instances (determined as described in Petr Uzel's answer) that you suspect of creating a lot of watches; and
- Read the file
/sys/kernel/debug/tracing/trace
to watch how many watches are created and by which processes.
When you're done, make sure to echo 0 into the enable file (and the tracing_enabled file if you had to enable that as well) to turn off tracing so you won't incur the performance hit of continuing to trace.
You are probably running out of inotify watches rather than instances. To find out who's creating a lot of watches:
- Do
echo 1 >> /sys/kernel/debug/tracing/events/syscalls/sys_exit_inotify_add_watch/enable
to enable tracing of watch adds; - Do
cat /sys/kernel/debug/tracing/tracing_enabled
to make sure it's set to 1 and if it isn't doecho 1 >> /sys/kernel/debug/tracing/tracing_enabled
; - Restart the processes with inotify instances (determined as described in Petr Uzel's answer) that you suspect of creating a lot of watches; and
- Read the file
/sys/kernel/debug/tracing/trace
to watch how many watches are created and by which processes.
When you're done, make sure to echo 0 into the enable file (and the tracing_enabled file if you had to enable that as well) to turn off tracing so you won't incur the performance hit of continuing to trace.
edited Jan 23 '13 at 16:01
slm♦
254k71538687
254k71538687
answered Jan 23 '13 at 14:36
Jonathan KamensJonathan Kamens
33123
33123
It was a backup application creating lots of inotify watches, and the solution in the accepted answer helped identify the culprit. However, I wasn't previously familiar with the system call tracing you've demonstrated here. Very cool. Thanks for the information!
– larsks
Jan 23 '13 at 16:35
2
are you sure it is '/sys/kernel/debug/tracing/tracing_enabled' ? On my system it seems the correct path is '/sys/kernel/debug/tracing/tracing_on'...
– Kartoch
Apr 10 '13 at 14:42
There is no /sys/kernel/debug/tracing/events/syscalls/sys_exit_inotify_add_watch/enable nor /sys/kernel/debug/tracing/tracing_enabled on Gentoo Linux, but /sys/kernel/debug/tracing/tracing_enabled exists. Why is that?
– zeekvfu
Dec 4 '13 at 16:32
As @Kartoch implies, you need to doecho 1 | sudo tee /sys/kernel/debug/tracing/tracing_on
on modern distros (Ubuntu 18.04.2 LTS).
– oligofren
Feb 22 at 14:53
It wasn't sufficient to do the commands for me, I also needed to do : ` cd /sys/kernel/debug/tracing/ ; echo function > current_tracer; echo SyS_inotify_add_watch > set_ftrace_filter`
– oligofren
Feb 25 at 9:05
To trace which processes consume inotify watches (not instances), you can use the kernel's dynamic ftrace feature, if it is enabled in your kernel. The kernel option you need is CONFIG_DYNAMIC_FTRACE.
First mount the debugfs filesystem if it is not already mounted.
mount -t debugfs nodev /sys/kernel/debug
Go into the tracing subdirectory of that debugfs mount
cd /sys/kernel/debug/tracing
Enable tracing of function calls
echo function > current_tracer
Filter only the SyS_inotify_add_watch system call
echo SyS_inotify_add_watch > set_ftrace_filter
Clear the trace ring buffer if it wasn't empty
echo > trace
Enable tracing if it is not already enabled
echo 1 > tracing_on
Restart the suspected process (in my case it was crashplan, a backup application)
Watch the inotify watches being added
wc -l trace
cat trace
Done
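The steps above, collected into a single function for convenience (a sketch, not a polished tool: it assumes debugfs is mounted and CONFIG_DYNAMIC_FTRACE is enabled, the SyS_inotify_add_watch symbol name varies between kernel versions, and the TRACING_DIR/TRACE_SECONDS parameters are my own additions so the logic can be dry-run without root):

```shell
#!/bin/sh
# Sketch of the steps above: trace calls to the inotify_add_watch
# syscall via the ftrace function tracer, then print a rough count
# of watches added while tracing was on. Run as root.
trace_inotify_adds() {
    T="${TRACING_DIR:-/sys/kernel/debug/tracing}"
    echo function > "$T/current_tracer"                  # function tracer on
    echo SyS_inotify_add_watch > "$T/set_ftrace_filter"  # only this symbol
    : > "$T/trace"                                       # clear ring buffer
    echo 1 > "$T/tracing_on"                             # start tracing
    sleep "${TRACE_SECONDS:-10}"   # restart the suspect process meanwhile
    wc -l < "$T/trace"                                   # watch-add count
    echo 0 > "$T/tracing_on"                             # stop tracing
}
```

Inspect `$T/trace` afterwards to see which processes made the calls.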
answered Dec 3 '14 at 10:01 by silvergun
find /proc/*/fd/* -type l -lname 'anon_inode:inotify' 2>/dev/null | cut -f 1-4 -d'/' | sort | uniq -c | sort -nr
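For readers parsing that pipeline, here is the same one-liner split out with comments (a sketch; the behaviour is unchanged):

```shell
# Count open inotify *instances* per process (not watches):
find /proc/*/fd/* -type l -lname 'anon_inode:inotify' 2>/dev/null |
  cut -f 1-4 -d'/' |   # trim /proc/<pid>/fd/<n> down to /proc/<pid>/fd
  sort | uniq -c |     # one line per process, prefixed by instance count
  sort -nr             # busiest processes first
```

Run it as root to see every process; unprivileged, it only reports your own.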
edited Sep 12 '12 at 19:02 by jasonwryan
answered Sep 12 '12 at 18:46 by Paul
I ran into this problem, and none of these answers give you the answer of "how many watches is each process currently using?" The one-liners all give you how many instances are open, which is only part of the story, and the trace stuff is only useful to see new watches being opened.
TL;DR: This will get you a file with a list of open inotify
instances and the number of watches they have, along with the pids and binaries that spawned them, sorted in descending order by watch count:
sudo lsof | awk '/anon_inode/ { gsub(/[urw]$/,"",$4); print "/proc/"$2"/fdinfo/"$4 }' | while read fdi; do count=$(sudo grep -c inotify $fdi); exe=$(sudo readlink $(dirname $(dirname $fdi))/exe); echo -e $count"\t"$fdi"\t"$exe; done | sort -nr > watches
That's a big ball of mess, so here's how I got there. To start, I ran a tail
on a test file, and looked at the fd's it opened:
joel@gladstone:~$ tail -f test > /dev/null &
[3] 22734
joel@opx1:~$ ls -ahltr /proc/22734/fd
total 0
dr-xr-xr-x 9 joel joel 0 Feb 22 22:34 ..
dr-x------ 2 joel joel 0 Feb 22 22:34 .
lr-x------ 1 joel joel 64 Feb 22 22:35 4 -> anon_inode:inotify
lr-x------ 1 joel joel 64 Feb 22 22:35 3 -> /home/joel/test
lrwx------ 1 joel joel 64 Feb 22 22:35 2 -> /dev/pts/2
l-wx------ 1 joel joel 64 Feb 22 22:35 1 -> /dev/null
lrwx------ 1 joel joel 64 Feb 22 22:35 0 -> /dev/pts/2
So, 4 is the fd we want to investigate. Let's see what's in the fdinfo
for that:
joel@opx1:~$ cat /proc/22734/fdinfo/4
pos: 0
flags: 00
mnt_id: 11
inotify wd:1 ino:15f51d sdev:ca00003 mask:c06 ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:1df51500a75e538c
That looks like an entry for the watch at the bottom!
Let's try something with more watches, this time with the inotifywait
utility, just watching whatever is in /tmp
:
joel@gladstone:~$ inotifywait /tmp/* &
[4] 27862
joel@gladstone:~$ Setting up watches.
Watches established.
joel@gladstone:~$ ls -ahtlr /proc/27862/fd | grep inotify
lr-x------ 1 joel joel 64 Feb 22 22:41 3 -> anon_inode:inotify
joel@gladstone:~$ cat /proc/27862/fdinfo/3
pos: 0
flags: 00
mnt_id: 11
inotify wd:6 ino:7fdc sdev:ca00003 mask:fff ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:dc7f0000551e9d88
inotify wd:5 ino:7fcb sdev:ca00003 mask:fff ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:cb7f00005b1f9d88
inotify wd:4 ino:7fcc sdev:ca00003 mask:fff ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:cc7f00006a1d9d88
inotify wd:3 ino:7fc6 sdev:ca00003 mask:fff ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:c67f00005d1d9d88
inotify wd:2 ino:7fc7 sdev:ca00003 mask:fff ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:c77f0000461d9d88
inotify wd:1 ino:7fd7 sdev:ca00003 mask:fff ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:d77f00000053c98b
Aha! More entries! So we should have six things in /tmp
then:
joel@opx1:~$ ls /tmp/ | wc -l
6
Excellent. My new inotifywait
has one entry in its fd
list (which is what the other one-liners here are counting), but six entries in its fdinfo
file. So we can figure out how many watches a given fd for a given process is using by consulting its fdinfo
file. Now to put it together with some of the above to grab a list of processes that have notify watches open and use that to count the entries in each fdinfo
. This is similar to above, so I'll just dump the one-liner here:
sudo lsof | awk '/anon_inode/ { gsub(/[urw]$/,"",$4); print "/proc/"$2"/fdinfo/"$4 }' | while read fdi; do count=$(sudo grep -c inotify $fdi); echo -e $count"\t"$fdi; done
There's some thick stuff in here, but the basics are that I use awk
to build an fdinfo
path from the lsof
output, grabbing the pid and fd number, stripping the u/r/w flag from the latter. Then for each constructed fdinfo
path, I count the number of inotify
lines and output the count and the pid.
It would be nice if I had what processes these pids represent in the same place though, right? I thought so. So, in a particularly messy bit, I settled on calling dirname
twice on the fdinfo
path to get back to /proc/<pid>
, adding /exe
to it, and then running readlink
on that to get the exe name of the process. Throw that in there as well, sort it by number of watches, and redirect it to a file for safe-keeping and we get:
sudo lsof | awk '/anon_inode/ { gsub(/[urw]$/,"",$4); print "/proc/"$2"/fdinfo/"$4 }' | while read fdi; do count=$(sudo grep -c inotify $fdi); exe=$(sudo readlink $(dirname $(dirname $fdi))/exe); echo -e $count"\t"$fdi"\t"$exe; done | sort -nr > watches
Running that without sudo to just show my processes I launched above, I get:
joel@gladstone:~$ cat watches
6 /proc/4906/fdinfo/3 /usr/bin/inotifywait
1 /proc/22734/fdinfo/4 /usr/bin/tail
Perfect! A list of processes, fd's, and how many watches each is using, which is exactly what I needed.
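An alternative sketch that walks /proc directly instead of invoking lsof; it reports the same count/fdinfo/binary triples (it assumes Linux's /proc layout, and like the lsof version it needs root to see every process):

```shell
#!/bin/sh
# Walk /proc ourselves: for every fd that is an inotify instance,
# count the 'inotify wd:' lines in the matching fdinfo file.
for fd in /proc/[0-9]*/fd/*; do
    [ "$(readlink "$fd" 2>/dev/null)" = "anon_inode:inotify" ] || continue
    pid=${fd#/proc/}; pid=${pid%%/*}       # /proc/<pid>/fd/<n> -> <pid>
    n=${fd##*/}                            # fd number
    count=$(grep -c '^inotify' "/proc/$pid/fdinfo/$n" 2>/dev/null || true)
    exe=$(readlink "/proc/$pid/exe" 2>/dev/null)
    printf '%s\t/proc/%s/fdinfo/%s\t%s\n' "$count" "$pid" "$n" "$exe"
done | sort -nr
```

It avoids the lsof startup cost, so it is somewhat friendlier for repeated use.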
When using lsof for this purpose, I would recommend using the -nP flags to avoid unnecessary lookups of reverse DNS and port names. In this particular case, adding -bw to avoid potentially blocking syscalls is also recommended. That said, with lsof gobbling up 3 seconds of wall clock time on my humble workstation (of which 2 seconds are spent in the kernel), this approach is nice for exploration but alas unsuitable for monitoring purposes.
– BertD
Mar 15 '18 at 10:39
edited Feb 23 '18 at 18:54
answered Feb 22 '18 at 22:58 by cincodenada
I have modified the script above to show the list of processes that are consuming inotify resources:
ps -p `find /proc/*/fd/* -type l -lname 'anon_inode:inotify' -print | sed 's/^\/proc\///' | sed 's/\/fd.*$//'`
I think there is a way to replace my double sed.
Yes. Use either
cut -f 3 -d '/'
or
sed -e 's/^\/proc\/\([0-9]*\)\/.*/\1/'
and you'll only get the pid.
Also, if you add
2> /dev/null
in the find, you'll get rid of any pesky error lines thrown by find. So this would work:
ps -p $(find /proc/*/fd/* -type l -lname 'anon_inode:inotify' -print 2> /dev/null | sed -e 's/^\/proc\/\([0-9]*\)\/.*/\1/')
edited Jul 24 '13 at 12:48
answered Jul 18 '13 at 12:38 by Arkadij Kuzhel
As @Jonathan Kamens said, you are probably running out of watches. I have a premade script, inotify-consumers, that lists this for you:
$ time inotify-consumers | head
   INOTIFY   WATCHER
    COUNT       PID  CMD
----------------------------------------
     6688     27262  /home/dvlpr/apps/WebStorm-2018.3.4/WebStorm-183.5429.34/bin/fsnotifier64
      411     27581  node /home/dvlpr/dev/kiwi-frontend/node_modules/.bin/webpack --config config/webpack.dev.js
       79      1541  /usr/lib/gnome-settings-daemon/gsd-xsettings
       30      1664  /usr/lib/gvfs/gvfsd-trash --spawner :1.22 /org/gtk/gvfs/exec_spaw/0
       14      1630  /usr/bin/gnome-software --gapplication-service

real 0m0.099s
user 0m0.042s
sys 0m0.062s
Here you quickly see why the default limit of 8K watches is too little on a development machine: a single WebStorm instance quickly maxes it out when it encounters a node_modules
folder with thousands of directories. Add a webpack watcher to guarantee problems ...
Just copy the contents of the script (or the file on GitHub) and put it somewhere in your $PATH
, like /usr/local/bin
. For reference, the main content of the script is simply this:
find /proc/*/fd \
  -lname anon_inode:inotify \
  -printf '%hinfo/%f\n' 2>/dev/null \
  | xargs grep -c '^inotify' \
  | sort -n -t: -k2 -r
In case you are wondering how to increase the limits, here's how to make it permanent:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
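Before bumping the limit, it can help to see how close you actually are to the ceiling. The sketch below sums the per-watch lines that the kernel exposes in /proc/&lt;pid&gt;/fdinfo (on kernels new enough to list inotify details there, as the script above already assumes) and compares the total with the sysctl; it only counts file descriptors your user can read.

```shell
#!/bin/sh
# Compare the per-user inotify watch limit with the total number of
# watches currently registered. Each watch shows up as a line starting
# with "inotify" in /proc/<pid>/fdinfo/<fd>, so grep -c counts them.
limit=$(cat /proc/sys/fs/inotify/max_user_watches)
used=$(find /proc/*/fd -lname anon_inode:inotify -printf '%hinfo/%f\n' 2>/dev/null \
  | xargs -r grep -c '^inotify' 2>/dev/null \
  | awk -F: '{ s += $NF } END { print s + 0 }')
# grep -c prints "file:count" for multiple files but a bare count for a
# single file; taking the last ':'-separated field handles both cases.
echo "inotify watches in use: $used of $limit"
```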
answered Feb 25 at 9:10 by oligofren