Using iperf to measure Ethernet-to-InfiniBand gateway performance.
This time I’m retesting between two better equipped hosts:
- 1x Q6600 quad-core (20Gbps HCA, sender)
- 1x E8500 dual-core (1Gbit Ethernet, receiver)
Both systems have the same mainboard, OS release, etc.
[root@waxh0004 ~]# iperf -c 192.168.100.106
Client connecting to 192.168.100.106, TCP port 5001
TCP window size: 16.0 KByte (default)
[ 3] local 192.168.100.107 port 47438 connected with 192.168.100.106 port 5001
[ 3] 0.0-10.0 sec 1.10 GBytes 942 Mbits/sec
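For reference, nothing special was needed to get these numbers — the receiver just ran a plain iperf server, and the client side above is equivalent to the explicit form below (a sketch of the two commands, no tuning flags):

```shell
# On the receiver (192.168.100.106): start a TCP server on the
# default port 5001. -s = server mode.
iperf -s

# On the sender: connect to the server and run the default
# 10-second TCP test (-t 10 is the default duration).
iperf -c 192.168.100.106 -t 10
```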
So I don’t see a performance issue there now. Note that I haven’t even done any tuning yet.
For some reason the 20Gbit/s port is only running at 10Gbps; I haven’t gotten to the bottom of that yet.
The HCA (mthca3-something) is supposed to run at 20Gbit/s, the cable should be OK and the port is configured correctly, but the two ends won’t negotiate 20Gbit/s.
[root@waxh0004 ~]# ibstat
CA type: MT25204
Number of ports: 1
Firmware version: 1.2.0
Hardware version: a0
System image GUID:
Physical state: LinkUp
waxs0002# show interface ib 5/2
InfiniBand Interface Information
port : 5/2
name : 5/2
type : ib4xTXPD
desc : 5/2 (322)
last-change : Thu Apr 1 18:24:09 2010
mtu : 2048
auto-negotiate-supported : yes
auto-negotiate : enabled
admin-status : up
oper-status : up
admin-speed : 4x-ddr(20gbps)
oper-speed : 4x-sdr(10gbps)
link-trap : enabled
phy-state : link-up
dongle-type : none
dongle-state : no-state-change
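To dig into the SDR-vs-DDR negotiation, the OFED diagnostics are the obvious next step. A sketch of what I’d check — the LID and port number below are placeholders, not my actual values, and forcing the speed is a last resort:

```shell
# Query the port's link state (LID 5, port 2 are placeholders):
# this shows LinkSpeedSupported/Enabled/Active, so you can see
# whether DDR is even being offered on each end.
ibportstate 5 2 query

# If both ends claim DDR support, one can try pinning the enabled
# speed to DDR (speed 2 = DDR in the IB encoding) and bouncing the
# link so it renegotiates.
ibportstate 5 2 speed 2
ibportstate 5 2 reset
```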
But I’ll soon crack this, and also see to some tuning of the MTUs; InfiniBand should be able to support 4K, which would do far better for my block-IO-centric stuff.
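On the MTU side, my understanding is that the 4K link MTU has to be enabled fabric-wide in the subnet manager first; the IPoIB interfaces then follow. On the Linux hosts the knobs would look roughly like this (ib0 is an assumed interface name, and this is a sketch, not a tested recipe):

```shell
# Datagram-mode IPoIB MTU is the IB link MTU minus the 4-byte
# IPoIB header, so with a 4K fabric MTU:
ip link set ib0 mtu 4092

# Connected mode (if the driver supports it) decouples the IPoIB
# MTU from the fabric MTU and allows much larger values:
echo connected > /sys/class/net/ib0/mode
ip link set ib0 mtu 65520
```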
After that it’s time to move to using LACP for the Ethernet-bound Linux machines, and also to re-activate the 5 additional uplinks between the gigabit switch and the IB switch, although I will have to rethink whether it is worth wasting ports on the L3 switch for that. Unfortunately, QinQ and line-speed performance usually only come with an L3 switch around them 🙂
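The LACP part on the Linux hosts is just kernel bonding in 802.3ad mode; a minimal sketch, assuming eth0/eth1 as the slave NICs and the classic ifenslave-style tooling (the address is illustrative):

```shell
# Load the bonding driver in 802.3ad (LACP) mode; miimon enables
# link-state monitoring every 100 ms.
modprobe bonding mode=802.3ad miimon=100

# Bring the bond up and enslave the physical NICs
# (eth0/eth1 are assumptions for this sketch).
ip link set bond0 up
ifenslave bond0 eth0 eth1
ip addr add 192.168.100.200/24 dev bond0
```

The switch ports on the other end need to be in a matching LACP channel group, of course.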
If all works out as nicely as it seems to, the long-term route will be switching to the 2-port 10GE module for the Cisco InfiniBand gateway and using a similar module for the Ethernet switch. But this won’t happen anytime soon, as either of these runs around $2k even from wholesale dealers. By the time I could afford them, 10GE switches will have dropped some more in price and I’d be looking at a 10GE backbone. But as of today, this doesn’t make sense *at all*, and the 6Gbit I have will also take some time to saturate with mostly random IO.
Wait for some more peeks at how Amplidata, with their beautiful scale-out appliances, comes into play for that – after the break.
P.S.: Interestingly enough, the performance measured for UDP was lower than for TCP. My only guess would be something like firewall connection tracking on the sender?
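One thing worth ruling out before blaming conntrack: iperf’s UDP mode throttles itself to a 1 Mbit/s target rate by default, so a plain `iperf -c <host> -u` will always look far slower than TCP unless the rate is raised explicitly:

```shell
# UDP test at a 1 Gbit/s target rate instead of the 1 Mbit/s
# default; the server side needs UDP mode too (iperf -s -u).
iperf -c 192.168.100.106 -u -b 1000M
```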