Why would ssh perform slower than iperf3?

I'm frustrated by what appears to be the OpenSSH client/server topping out at ~1.03 Gbit/s, while iperf3 can easily sustain 7.03 Gbit/s over the same point-to-point link. I have tried:

  • forcing Nagle,
  • using nc as an ssh ProxyCommand,
  • disabling compression (per https://gist.github.com/KartikTalwar/4393116),
  • forcing specific ciphers,

but none of it has any effect (a representative invocation is sketched below).
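
For concreteness, a single test run combining those tweaks might look roughly like this; the target address and cipher choice are placeholders rather than values given in the question, and the Nagle change is a socket-level tweak with no standard ssh switch, so it is not shown:

    # Hypothetical test run: compression off, explicit cipher, nc as ProxyCommand.
    cat /dev/zero | ssh -o Compression=no \
                        -c aes128-gcm@openssh.com \
                        -o ProxyCommand='nc %h %p' \
                        10.0.0.2 'cat > /dev/null'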



Other observations:

  • CPU load stays under 70% (of 2400% total) on both boxes during the iperf3 and ssh transfer tests,
  • no block devices are involved.
I just don't get it. Is ssh simply incapable of 10 GbE? Are the ciphers or hashing slowing me down? Did somebody hard-code a gigabit limit in the OpenSSH client source? Will I have to open 8+ independent ssh connections to push data over this pipe at line speed?




As seen below, the tiny green blips are cat /dev/zero | ssh target 'cat >/dev/null'; the purple/orange blob is iperf3 over ssh port forwarding; the tall blips are plain iperf3.
[graph: throughput over time for the three tests]
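
For reference, the "iperf3 over ssh port forwarding" case can be reproduced with a local forward along these lines (5201 is simply iperf3's default port; "target" is a placeholder):

    # On the target: run a plain iperf3 server.
    iperf3 -s

    # On the client: tunnel the iperf3 port through ssh, then benchmark
    # against the forwarded local port.
    ssh -N -L 5201:localhost:5201 target &
    iperf3 -c localhost -p 5201 -t 30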



Some stats:




  • iperf3 over the dedicated link: 7.11 Gbit/s

    (I suspect my mishandling of the fiber has reduced this from its original ~9 Gbit/s when new; c'est la vie.)

  • iperf3 over the dedicated link (mtu=9000): 7.55 Gbit/s

  • iperf3 over the GbE LAN: 941 Mbit/s

  • iperf3 over ssh over the direct link: 1.03 Gbit/s

  • iperf3 over ssh over the gigabit LAN: 941 Mbit/s

    (So ssh is clearly using the right route, and it's still slightly faster than plain GbE.)

  • iperf3 over ssh with ProxyCommand nc: 1.1 Gbit/s

    (another very slim gain)

  • iperf3 over ssh with ProxyCommand nc (mtu=9000): 1.01 Gbit/s

    (the larger MTU seems to have degraded throughput in this case)
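
Regarding the "8+ independent ssh connections" idea: a crude way to check whether parallel streams help (a sketch, not one of the tests above; "target" is a placeholder) is to launch several bounded transfers at once and watch the aggregate rate with an interface monitor such as nload or ifstat:

    # Eight independent ssh streams, each pushing 4 GiB of zeros.
    for i in $(seq 1 8); do
        dd if=/dev/zero bs=1M count=4096 2>/dev/null | ssh target 'cat > /dev/null' &
    done
    wait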








asked Jun 27 at 7:17 by ThorSummoner; edited Jun 28 at 2:28

  • Try multiple ssh connections in parallel. It also depends on how you used iperf; iperf is not some random tool you mess with, you need to know what value you're reading.
    – Kiwy
    Jun 27 at 7:20










  • FWIW, I would include the specific options/settings used for the ssh client/server, in order to rule out certain items on that list of potential bottlenecks you provided.
    – ILMostro_7
    Jun 28 at 2:36
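
One way to capture those settings (a sketch using OpenSSH's own config-dump facilities; "target" is a placeholder) is:

    # Effective client-side options for this host, after all config files apply.
    ssh -G target | grep -Ei 'cipher|macs|compression|proxycommand'

    # Effective server-side settings (run as root on the server).
    sshd -T | grep -Ei 'cipher|macs|compression'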
















2 Answers






The almost 7 Gbit/s is apparently a misleading value, because it is just a peak (the "blips") and not a sustained transfer rate. This can happen because of the way your network software estimates transfer speed (a small packet that does not occupy all of your bandwidth may appear to arrive more quickly). As a matter of fact, you have a ~200 Mbit/s average on the same line as the 7 Gbit/s reading.
Try a sustained transfer of an adequately big file, say > 1 GB.
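
A concrete way to run that kind of sustained test (a sketch only; "target" is a placeholder) is to push a fixed, large amount of data and read the averaged rate, or to let iperf3 run longer and look at its final summary instead of the per-second figures:

    # Push 10 GiB through ssh; GNU dd reports the average throughput at the end.
    dd if=/dev/zero bs=1M count=10240 | ssh target 'cat > /dev/null'

    # Or run iperf3 for a full 60 seconds and read the sender/receiver average.
    iperf3 -c target -t 60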






answered Jun 27 at 9:01 by NeuroNik

Seems like ssh is just incapable of faster transfers, for a single instance anyway. I finally realized I should run the test on loopback (on Linux 4.10.0, loopback can sustain 12 Gbit/s), and ssh to loopback still only performed near 125~135 Mbit/s. Even then, the ext4 file system on an SSD might only barely tar up ~30-50 Mbit/s, so for large transfers I've settled on establishing a physically secure link and using dd over netcat, which bottlenecks on the disk array throughput rather than the file system or the transfer protocol. I was also unable to get aria2 to work with sftp, so I gave up on parallel ssh client transfers. Seems like there is a long way to go...
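
For what it's worth, the "dd over netcat" approach mentioned above usually looks something like this (the device paths and port are placeholders, and nc flags vary between netcat implementations):

    # On the receiving box: listen and write the stream to the destination.
    nc -l 1234 | dd of=/dev/sdX bs=1M status=progress

    # On the sending box: read the source and stream it to the receiver.
    dd if=/dev/sdY bs=1M | nc receiver-host 1234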






answered Jun 29 at 1:29 by ThorSummoner (community wiki)