GNOME Nautilus: copying files to USB stops at (or near) 100%

I had similar issues before, but I don't remember how I solved them.



When I try to copy something to a USB stick formatted with FAT, the copy stops near the end, sometimes at 100%. And of course, when I take the memory stick somewhere else, it doesn't contain the complete file. (The file is a movie!)



I tried to mount the device with mount -o flush, but I get the same issue.
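
(For reference, a minimal sketch of that mount; /dev/sdb1 and /mnt/usb are placeholder names here, check yours with lsblk first:)

    # "flush" tells the vfat driver to write data out eagerly instead of caching it
    sudo mount -o flush /dev/sdb1 /mnt/usb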



Also, I did format the USB stick with a new FAT partition...



Any idea what I could do?



P.S. I believe it's not related to the OS, which is Debian, and I believe that copying from the SSD drive is not what makes it get stuck.










Tags: linux usb nautilus clipboard

asked Jan 24 '15 at 14:38 by mariotanenbaum

















  • Somewhere I came across the following explanation: the copy goes through main memory, and the indicator shows the progress of reading the data from the source drive. But writing is much slower, especially to a USB stick (it can be 100 times slower, e.g. 2 MB/s writing against 200 MB/s reading), and more so if you use non-native file systems like FAT or NTFS under Linux. So try waiting for the transaction to end, even if it seems stopped at 100%; as long as the dialog hasn't closed (which is what indicates it has finished), it is still working.

    – Costas
    Jan 24 '15 at 14:53












  • Just wondering: is it possible at all to check the progress in that situation?

    – mariotanenbaum
    Jan 24 '15 at 19:34











  • Try formatting the pen drive with the option "overwrite existing data with zeros". It works on my Transcend 8 GB pen drive.

    – Akshay Daundkar
    Jul 6 '17 at 7:37
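
(A command-line sketch of that "overwrite with zeros" format; /dev/sdX is a placeholder, and this wipes the whole stick, so double-check the device name with lsblk first:)

    # Zero the entire device, then make sure it has all reached the flash
    sudo dd if=/dev/zero of=/dev/sdX bs=4M status=progress; sync
    # Recreate a FAT filesystem on the bare device (-I: allow whole-device format)
    sudo mkfs.vfat -I /dev/sdX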











  • For anyone coming across this issue: just format your drive as NTFS.

    – Ricky Boyce
    Jan 4 at 10:02
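
(If you go that route, a minimal sketch, assuming the stick's first partition is /dev/sdX1, a placeholder; mkfs.ntfs comes from the ntfs-3g package on Debian:)

    # -f: quick format, skipping zeroing and the bad-block check; -L sets the label
    sudo mkfs.ntfs -f -L USBSTICK /dev/sdX1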















1 Answer
































The reason it happens that way is that the program says "write this data" and the Linux kernel copies it into a memory buffer that is queued to go to disk, and then says "ok, done". So the program thinks it has copied everything. Then the program closes the file, and suddenly the kernel makes it wait while that buffer is pushed out to disk.



So, unfortunately the program can't tell you how long it will take to flush the buffer because it doesn't know.



If you want to try some power-user tricks, you can reduce the size of the buffer that Linux uses by setting the kernel parameter vm.dirty_bytes to something like 15000000 (15 MB). This means the application can't get more than 15MB ahead of its actual progress. (You can change kernel parameters on the fly with sudo sysctl vm.dirty_bytes=15000000 but making them stay across a reboot requires changing a config file like /etc/sysctl.conf which might be specific to your distro.)
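
(A concrete sketch of both forms; the 15 MB value is the answer's, and the sysctl.d file name below is just an example of the usual Debian convention:)

    # Apply immediately (does not survive a reboot)
    sudo sysctl vm.dirty_bytes=15000000

    # Verify the current value
    sysctl vm.dirty_bytes

    # Persist across reboots, then reload all sysctl settings
    echo 'vm.dirty_bytes = 15000000' | sudo tee /etc/sysctl.d/99-dirty-bytes.conf
    sudo sysctl --system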



A side effect is that your computer might have lower data-writing throughput with this setting, but on the whole, I find it helpful to see that a program is running a long time while it writes lots of data vs. the confusion of having a program appear to be done with its job but the system lagging badly as the kernel does the actual work. Setting dirty_bytes to a reasonably small value can also help prevent your system from becoming unresponsive when you're low on free memory and run a program that suddenly writes lots of data.



But, don't set it too small! I use 15MB as a rough estimate that the kernel can flush the buffer to a normal hard drive in 1/4 of a second or less. It keeps my system from feeling "laggy".






answered Jan 27 '15 at 1:37 by dataless (edited Jun 16 '18 at 19:06)

























  • I was looking for a fix to this problem for a year or more; I thought it was just a bug in Linux. Thanks a lot.

    – Sidahmed
    Jan 13 '17 at 12:50






  • Linux noob here, could someone post how to change the <dirty_bytes> values?

    – Brofessor
    Jun 15 '18 at 9:21











  • @Brofessor Oh, sorry, I should have described it by the official name instead of /proc details. Answer is updated.

    – dataless
    Jun 16 '18 at 18:24






  • This is similar to unix.stackexchange.com/questions/107703/… (it should have been fixed, but believe me, it's not). I had to add this setting on Ubuntu 18.04 to stop it behaving funny...

    – Rmano
    Nov 9 '18 at 19:09









