tmpfs /run/user/1000 ran out of inodes, but it only has 30 files
So today I happened to notice an error message being generated by a GUI program:
(FreeFileSync:21930): dconf-CRITICAL **: 11:46:39.475: unable to create file '/run/user/1000/dconf/user': No space left on device. dconf will not work properly.
Here /run/user/1000 is a tmpfs for the user's run folder. The thing is, there was plenty of free space on it:
$ df -h /run/user/1000
Filesystem Size Used Avail Use% Mounted on
tmpfs 1.6G 120K 1.6G 1% /run/user/1000
So why, then? Well, then I discovered that there are 0 free inodes remaining.
$ df -i /run/user/1000
Filesystem Inodes IUsed IFree IUse% Mounted on
tmpfs 2027420 2027420 0 100% /run/user/1000
OK, great. The problem is that I simply cannot find the reason for this, because there are very few files on this filesystem, as shown below:
$ echo $PWD ; find . | wc -l
/run/user/1000
30
...and beyond that, there are very few running programs still holding onto deleted files:
$ sudo lsof $PWD | grep deleted
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
Output information may be incomplete.
albert 17684 id 72u REG 0,69 1026 1200359 /run/user/1000/#1200359 (deleted)
Only albert. And after quitting albert, the number of used inodes (100%!) remained the same.
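(For completeness, another way to see everything holding the mount busy, not just deleted files, is fuser from the psmisc package, assuming it is installed:)
$ sudo fuser -vm /run/user/1000    # list every process with something open on this mount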
This is on Ubuntu 18.10. My system has been up for quite a long time without a reboot. I haven't rebooted yet, but will do so soon and see if that clears the error.
[edited]
BTW, here is a link showing the difference in output between the du and df commands with regard to the reported number of used inodes:
https://gist.github.com/dreamcat4/6740c40bb313c1a016d35a0c00a8ab92
They do not seem to agree with each other!
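For reference, a rough way to cross-check the two counts is something like the following (this assumes GNU coreutils 8.22 or newer for du --inodes; note that du can only count inodes reachable through the directory tree, so anything anonymous or unlinked will be missed):
$ df -i /run/user/1000                     # inode usage as the filesystem reports it
$ sudo du --inodes -d1 /run/user/1000      # inodes reachable per top-level directory
$ sudo find /run/user/1000 -xdev | wc -l   # total directory entries, for comparison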
Tags: ubuntu, tmpfs, dconf
asked Jan 3 at 12:04 by Dreamcat4, edited Jan 3 at 13:59 by Rui F Ribeiro
A related question is unix.stackexchange.com/questions/309898. – JdeBP, Jan 3 at 18:19
1 Answer
You can increase the number of inodes available on a remount. From the kernel documentation:
tmpfs has three mount options for sizing:
size: The limit of allocated bytes for this tmpfs instance. The
default is half of your physical RAM without swap. If you
oversize your tmpfs instances the machine will deadlock
since the OOM handler will not be able to free that memory.
nr_blocks: The same as size, but in blocks of PAGE_SIZE.
nr_inodes: The maximum number of inodes for this instance. The default
is half of the number of your physical RAM pages, or (on a
machine with highmem) the number of lowmem RAM pages,
whichever is the lower.
These parameters accept a suffix k, m or g for kilo, mega and giga and
can be changed on remount.
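For example, something like the following should raise the limit for this particular tmpfs (the nr_inodes value here is only illustrative, and on Ubuntu /run/user/<uid> is set up by systemd-logind, so a manual remount may not survive the next login session):
$ sudo mount -o remount,nr_inodes=4194304 /run/user/1000   # raise the inode limit for this tmpfs instance
$ df -i /run/user/1000                                      # verify the new Inodes total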
answered Jan 3 at 12:35 by JRFerguson
While true, this doesn't explain why the OP has so many used inodes, which seems to be the main thrust of the question. – terdon♦, Jan 3 at 12:42
Rather ironically, looking back at what I wrote yesterday, it seems I never actually asked a specific question! But yes: increasing the number of inodes isn't going to help very much if they all just get used up again anyway. And no actual reason has been presented here to think that, if 2027420 inodes are being consumed by only 30 files, doubling the number of available inodes won't simply consume all the extra inodes too. Hence the 'why?' that I really do have to ask! – Dreamcat4, Jan 4 at 7:26
Given the "can't stat" error, have a look at this link. – JRFerguson, Jan 4 at 16:35