Simple file copy (or write) causes ten-second-plus latency on Linux filesystem

I am running Linux on a spinning hard drive. WDC WD5000LPLX-7, "WD Black Mobile", 7200 RPM.



I noticed that a simple file copy (or write) causes fsync() latencies of over ten seconds. Is there some way to avoid this on Linux, without replacing the hardware or changing the cp command[*]? Or are those the only ways to avoid it?



[*] I am able to avoid it if I write to the files using O_DIRECT instead.
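As a rough illustration of the O_DIRECT workaround: GNU dd can copy with direct I/O (a sketch only, not a drop-in replacement for cp; the block size is arbitrary, and conv=fsync is kept so the copy still reaches stable storage):

# dd if=writetest of=copytest bs=1M oflag=direct conv=fsync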



What is fsync()? See https://thunk.org/tytso/blog/2009/03/15/dont-fear-the-fsync/



fsync() is part of the standard sequence for updating a file atomically, so the update is safe in case of power failure.
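The pattern is: write the new contents to a temporary file, fsync() it, then rename() it over the old name. A minimal sketch in shell, assuming a coreutils sync(1) new enough (8.24+) to accept file arguments - it is the same sequence the test loop below uses:

$ echo 'new contents' > myfile.tmp
$ sync myfile.tmp       # fsync() the new data to stable storage
$ mv myfile.tmp myfile  # rename() atomically replaces the old file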



Application developers are advised to write configuration/state updates from a separate thread, so they do not freeze the user interface if the write takes a while. (See example: freeze in gnome-shell.) However, this advice is less useful when saving user files. For example, when you edit files one at a time using an editor in the terminal: vi my-file, edit, :wq to finish. Naturally vi waits for fsync() to finish before exiting. You might prefer to use a different editor, but I bet yours does the same thing :-).
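If you want to check what your own editor does, strace can log the sync calls (a sketch; my-file is just a placeholder, and logging to a file keeps strace's output from fighting with the editor's UI):

$ strace -o /tmp/vi-trace -e trace=fsync,fdatasync vi my-file
$ grep fsync /tmp/vi-trace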



Test setup



$ sudo -i
# lvcreate alan_dell_2016 -n test --extents 100%FREE
Logical volume "test" created.
# ls -l /dev/alan_dell_2016/test
lrwxrwxrwx. 1 root root 7 Feb 18 13:34 /dev/alan_dell_2016/test -> ../dm-3

$ uname -r
4.20.3-200.fc29.x86_64

$ cat /sys/block/sda/queue/scheduler
mq-deadline [bfq] none
$ cat /sys/block/dm-3/queue/scheduler
none


I have reproduced the gnome-shell freeze using the CFQ I/O scheduler. CFQ goes away in the next kernel release anyway, so for the moment I have been configuring my system to use BFQ.
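For reference, one way to select the scheduler per-device at runtime (it does not persist across reboots) is through sysfs:

# echo bfq > /sys/block/sda/queue/scheduler
# cat /sys/block/sda/queue/scheduler
mq-deadline [bfq] none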



I have also tried the mq-deadline scheduler. With all of these I/O schedulers, I saw fsync() latencies longer than ten seconds. My kernel is built with CONFIG_BLK_WBT_MQ=y. (WBT applies to mq-deadline; it is not enabled by default when bfq is in use.)
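For anyone experimenting with WBT: it is exposed per-device in sysfs as a target latency in microseconds. Writing 0 disables it and -1 restores the kernel default; the 75000 shown here is only the typical default for a rotational disk:

# cat /sys/block/sda/queue/wbt_lat_usec
75000
# echo 0 > /sys/block/sda/queue/wbt_lat_usec
# echo -1 > /sys/block/sda/queue/wbt_lat_usec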



# mkfs.ext4 /dev/alan_dell_2016/test
mke2fs 1.44.3 (10-July-2018)
Creating filesystem with 2982912 4k blocks and 746304 inodes
Filesystem UUID: 736bee3c-f0eb-49ee-b5be-de56ef1f38d4
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

# mount /dev/alan_dell_2016/test /mnt
# cd /mnt
# df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/alan_dell_2016-test 12G 41M 11G 1% /mnt


Test run



# dd if=/dev/zero of=writetest bs=1M count=5k conv=fsync & sleep 1; while true; do time sh -c 'echo 1 > latencytest; time sync latencytest; mv latencytest latencytest2'; sleep 1; killall -0 dd || break; done
[1] 17060

real 1m14.972s
user 0m0.001s
sys 0m0.000s

real 1m14.978s
user 0m0.005s
sys 0m0.002s
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB, 5.0 GiB) copied, 75.9998 s, 70.6 MB/s
[1]+ Done dd if=/dev/zero of=writetest bs=1M count=5k conv=fsync
dd: no process found

# cp writetest copytest & sleep 3; while true; do time sh -c 'echo 1 > latencytest; time sync latencytest; mv latencytest latencytest2'; sleep 3; killall -0 cp || break; done
[1] 17397

real 0m59.479s
user 0m0.000s
sys 0m0.002s
[1]+ Done cp -i writetest copytest

real 0m59.504s
user 0m0.037s
sys 0m4.385s
cp: no process found


I suppose this latency comes from the filesystem layer. If I do the same sort of thing at the block-device level, the latency is much lower.



# cd / && umount /mnt
# dd if=/dev/zero of=/dev/alan_dell_2016/test bs=1M count=2000 conv=fsync &
[1] 6681
# dd if=/dev/zero of=/dev/alan_dell_2016/test oflag=sync bs=4096 count=1
1+0 records in
1+0 records out
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.193815 s, 21.1 kB/s









Tags: linux filesystems latency
asked Feb 18 at 16:19 by sourcejedi (edited Feb 21 at 11:41)




















1 Answer






Yes, it is filesystem-specific. The following tests were performed on a kernel built from the bcachefs tree, version v4.20-297-g2252e4b79f8f (2019-02-14). The figures are not rigorous statistics, but they show the gross differences I saw when testing different filesystems.



ext4 + bfq: writetest saw 15s, 30s; copytest 10s, 40s.
ext4 + mq-deadline: writetest saw 10s, 30s; copytest 5s, 45s.

ext3 + bfq: writetest saw 20s, 40s; copytest ~0.2s, once 0.5s and 2s.
ext3 + mq-deadline: writetest saw 50s; copytest ~0.2s, very occasionally 1.5s.
ext3 + mq-deadline, WBT disabled: writetest saw 10s, 40s; copytest similar to the above.

ext2 + bfq: writetest 0.1-0.9s; copytest ~0.5s.
ext2 + mq-deadline: writetest 0.2-0.6s; copytest ~0.4s.

xfs + bfq: writetest 0.5-2s; copytest 0.5-3.5s.
xfs + mq-deadline: writetest 0.2s, some 0.5s; copytest 0-3s.

bcachefs + bfq: writetest 1.5-3s.
bcachefs + mq-deadline: writetest 1-5s.

btrfs + bfq: writetest 0.5-2s; copytest 1-2s.
btrfs + mq-deadline: writetest ~0.4s; copytest 1-4s.


The ext3 figures look better for the copy test, but ext3 is not a good choice for latency in general (e.g. see the tytso link above :-). ext2 lacks journalling; journalling is generally desirable for robustness, but it is the ext journalling that causes this latency.



So the alternatives I am most interested in are XFS, the experimental bcachefs, and btrfs. I expect XFS is the simplest to use, at least on spinning hard drives. One prominent difference is that there is no tool to shrink an XFS filesystem, only to grow it.
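If anyone wants to repeat the comparison on the same test LV, switching it to XFS is quick (a sketch; -f forces mkfs.xfs to overwrite the existing filesystem signature):

# umount /mnt
# mkfs.xfs -f /dev/alan_dell_2016/test
# mount /dev/alan_dell_2016/test /mnt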






answered Feb 19 at 11:07 by sourcejedi (edited Feb 19 at 11:33)


























