Diagnose on Ubuntu 16: OSError: [Errno 28] No space left on device, but actually plenty of space

df reports no problems, with plenty of space and plenty of inodes available, and I can still write small new text files. The crashing Python program is writing to a subdirectory I created in my home directory. It is writing millions of very small files, over 10 million so far and maybe many more, but I expect well under half a terabyte in total. This is an (until now) lightly used conventional hard disk on a relatively new workstation. Is there some way to pinpoint the problem here? Is there a quota limit on Ubuntu home directories? I only use ssh into this host and have no local keyboard or GUI access (though I can do X forwarding), so please limit suggestions to command lines I can try. Thanks!



inFile: RC_2018-01-24
outDir: tmp
outputToScreenOnly: 0
Traceback (most recent call last):
File "/mnt/fastssd/bot_subreddit_recom/write_user_docs.py", line 84, in <module>
with open(fqfn, 'w') as f:
OSError: [Errno 28] No space left on device: '/home/ga/reddit_data/tmp/yourstrulytony.RC_2018-01-24.doc'
^C(py36) ga@ga-HP-Z820:~/reddit_data$ df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 65954704 0 65954704 0% /dev
tmpfs 13196056 9852 13186204 1% /run
/dev/mapper/ubuntu--vg-root 1789679056 318441852 1380303752 19% /
tmpfs 65980276 0 65980276 0% /dev/shm
tmpfs 5120 4 5116 1% /run/lock
tmpfs 65980276 0 65980276 0% /sys/fs/cgroup
/dev/nvme0n1 492128608 238291700 228815144 52% /mnt/fastssd
/dev/sda2 483946 157208 301753 35% /boot
/dev/sda1 523248 3496 519752 1% /boot/efi
tmpfs 13196056 4 13196052 1% /run/user/1000
(py36) ga@ga-HP-Z820:~/reddit_data$ man df
(py36) ga@ga-HP-Z820:~/reddit_data$ df /
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/ubuntu--vg-root 1789679056 318441852 1380303752 19% /
(py36) ga@ga-HP-Z820:~/reddit_data$ df -i /
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/ubuntu--vg-root 113647616 11444684 102202932 11% /
(py36) ga@ga-HP-Z820:~/reddit_data$ find tmp -maxdepth 1 -type f | wc -l
10603003
(py36) ga@ga-HP-Z820:~$ uname -a
Linux ga-HP-Z820 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64
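
One additional check that can be done entirely over ssh is to ask the kernel directly how much space and how many inodes are free on the filesystem that actually backs the target directory. The short Python 3 sketch below is not part of the original write_user_docs.py; the target path is copied from the traceback and everything else is an assumption. It does not check per-user quotas (the quota tools would be needed for that), only what statvfs reports for the fully resolved path.

import os

# Hypothetical check script, not taken from the question's program.
target = "/home/ga/reddit_data/tmp"          # directory from the traceback
real = os.path.realpath(target)              # follow any symlinks in the path
st = os.statvfs(real)

print(f"resolves to : {real}")
print(f"free space  : {st.f_bavail * st.f_frsize / 2**30:.1f} GiB")
print(f"free inodes : {st.f_favail}")

If those numbers disagree with df, the path is landing on a different filesystem than expected.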









Tags: ubuntu, disk

asked Dec 10 at 22:27 by Geoffrey Anderson











  • Are there any symlinks in the path for /home/ga/reddit_data/tmp/yourstrulytony.RC_2018-01-24.doc? A symlink could make it point to another disk. df /home/ga/reddit_data/tmp/ and df -i /home/ga/reddit_data/tmp/ may help verify that.
    – Stephen Harris
    Dec 10 at 23:54










  • Also, what is the type of the filesystem that you're writing to? Some filesystems may not allow a single file to be greater than 2 GB in size, even if you have free space. Showing the output of mount | grep _whatever_disk_you_write_to may help (see the sketch after these comments).
    – Stephen Harris
    Dec 11 at 0:06










  • Thanks for the input. I didn't get anything useful, though. The df commands output the same data as in the original post. The files written are uniformly small. The files being read are bigger than 2 GB, and the filesystem is ext4.
    – Geoffrey Anderson
    Dec 11 at 15:43










  • On Unix filesystems there are also inodes that can be exhausted, especially if you have many files.
    – Martin Sugioarto
    16 hours ago
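
To follow up on the symlink and filesystem-type questions in the comments above without guessing, the small sketch below (Python 3 again, and again only an illustration, not something from the original post) resolves the path and walks /proc/mounts to report which mount point and filesystem type it really lands on; this is the same information the suggested mount | grep would show.

import os

def mount_of(path):
    # Pick the longest mount point in /proc/mounts that is a prefix of the
    # fully resolved path.
    real = os.path.realpath(path)
    best = None
    with open("/proc/mounts") as mounts:
        for line in mounts:
            device, mountpoint, fstype = line.split()[:3]
            prefix = mountpoint.rstrip("/") + "/"
            if (real + "/").startswith(prefix):
                if best is None or len(mountpoint) > len(best[1]):
                    best = (device, mountpoint, fstype)
    return best

print(mount_of("/home/ga/reddit_data/tmp"))   # path from the traceback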
















1 Answer










It looks like you may be filling your RAM and swap quickly enough to produce this error. Such small files take very little time to create, so they are probably being generated faster than they can be physically written to disk. Try adding a sleep or wait cycle between each file creation and see if that helps, or add a read of some small bit of data to introduce a pause between file writes.
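
As a rough illustration of that suggestion, the write loop could be throttled with a short sleep after each file. The function below is only a sketch: the real script is not shown in the question, so the names, the input iterable, and the pause length are all made up.

import os
import time

def write_docs(records, out_dir, pause=0.001):
    # records: hypothetical iterable of (name, text) pairs; the real script's
    # data structures are unknown.
    for name, text in records:
        fqfn = os.path.join(out_dir, name + ".doc")
        with open(fqfn, "w") as f:
            f.write(text)
        time.sleep(pause)    # brief pause so writes are not issued faster
                             # than the disk can absorb them

Whether one millisecond is enough (or too much) would have to be measured; the point is only to give the disk time to drain its queue.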







answered Dec 10 at 22:40 by Michael Prokopec











  • Interesting. According to htop, there is a long delay waiting for IO: the S column often shows D for my programs. I am running 4 in parallel and they are definitely bottlenecked by IO delays on disk writes. There is a lot of reading at the beginning, and I can see the CPU briefly go to 100%, so each program instance is effectively already "pausing" between file writes. RAM and swap are never close to saturated: I have 100 GB of RAM, allocation stays around 5 GB or less, and no swap is used, so it is not a RAM issue.
    – Geoffrey Anderson
    Dec 10 at 22:43











  • I still think it is a latency issue, and you should find ways of leveling the load and/or slowing it down.
    – Michael Prokopec
    Dec 10 at 22:47










  • OK, I'm looking into latency changes I can make. I might drop the parallelism, or use an SSD rather than the spinning disk. There were definitely hints that you were correct: the host took a really long time to respond to my keystrokes when I ran some small typical commands in another ssh terminal while that job was running.
    – Geoffrey Anderson
    Dec 10 at 22:51
















