Why would ssh perform slower than iperf3?
I'm frustrated by what appears to be the openssh client/server performing at a maximum of ~1.03 gigabits/s, while iperf3 can easily sustain 7.03 gigabits/s over the same point-to-point link.

- Forcing Nagle,
- using nc as an ssh ProxyCommand,
- [disabling compression](https://gist.github.com/KartikTalwar/4393116),
- forcing specific ciphers,

none of it has any effect.

- CPU load is under 70% (of 2400% total) on both boxes during the iperf and ssh transport tests
- no block devices are involved

I just don't get it. Is ssh simply incapable of 10GbE? Are the ciphers or hashing slowing me down? Did somebody hard-code a gigabit limit in the openssl client source? Will I have to open 8+ independent ssh connections to throw data over this pipe at line speed?

As seen below, the tiny green blips are `cat /dev/zero | ssh target 'cat >/dev/null'`; the purple/orange blob is iperf3 over ssh port forwarding, and the tall blips are regular iperf3.

Some stats:

- iperf3 over the dedicated link: 7.11 Gbit/s (I suspect my mishandling of the fiber has reduced this from its original ~9 Gbit/s performance when new; c'est la vie)
- iperf3 over the dedicated link (mtu=9000): 7.55 Gbit/s
- iperf3 over the GbE LAN: 941 Mbit/s
- iperf3 over ssh over the direct link: 1.03 Gbit/s
- iperf3 over ssh over the gigabit LAN: 941 Mbit/s (so ssh is clearly using the right route, and it's still slightly faster than plain GbE)
- iperf3 over ssh with ProxyCommand nc: 1.1 Gbit/s (another very slim gain)
- iperf3 over ssh with ProxyCommand nc (mtu=9000): 1.01 Gbit/s (the larger MTU seems to have degraded the link speed in this case)
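For reference, the tests above were run along these lines; this is a sketch, and the exact host name, cipher choice, and forwarded port are assumptions rather than the original commands:

```bash
# Raw ssh transport test: pipe zeros through ssh and discard them on the far side.
cat /dev/zero | ssh target 'cat >/dev/null'

# Same test with compression disabled and a specific cipher forced (the cipher here is just an example):
cat /dev/zero | ssh -o Compression=no -c aes128-gcm@openssh.com target 'cat >/dev/null'

# iperf3 over ssh port forwarding: tunnel the iperf3 port through ssh, then test against the tunnel.
ssh -N -L 5201:localhost:5201 target &
iperf3 -c localhost -p 5201

# Baseline iperf3 straight over the link.
iperf3 -c target
```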
asked Jun 27 at 7:17 by ThorSummoner, edited Jun 28 at 2:28
Go with multiple ssh connections in parallel. It also depends on how you used iperf; iperf is not some random tool you just mess with, you need to know what value you're reading. – Kiwy, Jun 27 at 7:20
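A rough sketch of what "multiple ssh in parallel" could look like; the file names, chunk count, and remote paths are made up for illustration:

```bash
# Split one large file into four chunks and push each through its own ssh connection.
split -n 4 bigfile part_
for p in part_*; do
    ssh target "cat > /tmp/$p" < "$p" &
done
wait
# Reassemble on the remote side (glob order matches split's naming).
ssh target 'cat /tmp/part_* > /tmp/bigfile && rm /tmp/part_*'
```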
FWIW, I would include the specific options/settings for the ssh client/server, in order to rule out certain items on that list of potential bottlenecks you provided. – ILMostro_7, Jun 28 at 2:36
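One way to gather those settings, as a sketch (the host name is a placeholder):

```bash
# Print the effective client options ssh would use for this host (OpenSSH 6.8+).
ssh -G target

# Dump the effective server configuration on the remote machine (requires root).
sudo sshd -T

# Run one transfer with verbose output to see the negotiated cipher and MAC.
head -c 1G /dev/zero | ssh -vv target 'cat >/dev/null'
```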
2 Answers
The almost 7 Gbps is apparently a false value, because it is just a peak (the "blips") and not a sustained transfer rate. This can happen because of the way your network-monitoring software estimates the transfer speed (a small burst that does not fill your bandwidth can appear to arrive very quickly). As a matter of fact, you have a ~200 Mbps average on the same line as the 7 Gbps reading.

Try a sustained transfer of an adequately large file, say > 1 GB.
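For example, a sustained-rate check rather than a peak reading might look like this (durations and sizes are just examples):

```bash
# Run iperf3 for a full minute and look at the reported average rate.
iperf3 -c target -t 60

# Or push a couple of gigabytes through ssh and let dd report the effective throughput.
dd if=/dev/zero bs=1M count=2048 | ssh target 'cat >/dev/null'
```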
answered Jun 27 at 9:01 by NeuroNik
It seems like SSH is just incapable of faster transfers, for a single instance anyway. I finally realized I should run the test over loopback (on Linux 4.10.0 loopback can sustain 12 Gbit/s), and ssh to loopback still only performed near 125~135 Mbit/s. Even so, the ext4 filesystem on an SSD might only barely tar up ~30-50 Mbit/s, so for large transfers I've settled on establishing a physically secure link and using dd over netcat, which bottlenecks on the disk array throughput rather than the filesystem or the transfer protocol. I was also unable to get aria2 to work with sftp, so I gave up on parallel ssh client transfers. Seems like there is a long way to go...
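A sketch of the two setups described above; the paths, port, and host names are placeholders, and netcat flag syntax differs between variants:

```bash
# Loopback baseline versus ssh-to-loopback.
iperf3 -s -D                                            # start an iperf3 server in the background
iperf3 -c 127.0.0.1                                     # raw loopback throughput
head -c 10G /dev/zero | ssh 127.0.0.1 'cat >/dev/null'  # ssh over loopback

# dd over netcat on a physically secure link (no encryption, trusted network only).
# On the receiver:
nc -l -p 5000 | dd of=/path/to/output bs=1M
# On the sender:
dd if=/path/to/input bs=1M | nc receiver-host 5000
```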
answered Jun 29 at 1:29 by ThorSummoner (community wiki)