Ilmari Heikkinen
Doin' stuff
7,473 followers
Posts

IO limits
It's all about latency, man. Latency, latency, latency. Latency drives your max IOPS. The other aspects are how big are your IOs and how many can you do in parallel. But, dude, it's mostly about latency. That's the thing, the big kahuna, the ultimate limit....
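
To put rough numbers on that (my own illustration, not from the post): max IOPS is roughly the number of IOs you keep in flight divided by per-IO latency, and throughput is IOPS times IO size.

/* Little's law sketch: IOPS ~= queue_depth / latency,
 * throughput ~= IOPS * io_size. Illustrative numbers only. */
#include <stdio.h>

int main(void) {
    double latency_s  = 100e-6; /* 100 us per IO, e.g. a fast NVMe read */
    int    queue_depth = 32;    /* IOs kept in flight in parallel */
    double io_size     = 4096;  /* bytes per IO */

    double iops = queue_depth / latency_s;
    printf("IOPS:       %.0f\n", iops);                      /* 320000 */
    printf("Throughput: %.1f MB/s\n", iops * io_size / 1e6); /* ~1310.7 */
    return 0;
}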

RDMA cat
Today I wrote a small RDMA test program using libibverbs. That library has a pretty steep learning curve.  Anyhow. To use libibverbs and librdmacm on CentOS, install rdma-core-devel and compile your things with -lrdmacm -libverbs. Timings below, my test set...
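
The test program itself isn't shown here, but as a minimal sanity check of a libibverbs setup, something like this sketch (mine, not the author's code) lists the HCAs and their first port's state. Build it with gcc ibv_list.c -o ibv_list -libverbs after installing rdma-core-devel.

/* Minimal libibverbs sanity check: list devices and port 1 state. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void) {
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs) { perror("ibv_get_device_list"); return 1; }

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx) continue;
        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0)
            printf("%s: port 1 state=%d, active_mtu=%d\n",
                   ibv_get_device_name(devs[i]),
                   port.state, port.active_mtu);
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}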

4k over IB
So, technically, I could stream uncompressed 4k@60Hz video over the Infiniband network. 4k60 needs about 2 GB/s of bandwidth, the network goes at 3 GB/s. This... how would I try this? I'd need a source of 4k frames. Draw on the GPU to a framebuffer, then gl...
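
The 2 GB/s figure checks out if you assume 4 bytes per pixel (8-bit RGBA, my assumption):

/* Bandwidth needed for uncompressed 4k @ 60 Hz, assuming 4 bytes/pixel. */
#include <stdio.h>

int main(void) {
    long w = 3840, h = 2160;
    int bytes_per_pixel = 4;
    int fps = 60;
    double bytes_per_sec = (double)w * h * bytes_per_pixel * fps;
    printf("4k60 RGBA: %.2f GB/s\n", bytes_per_sec / 1e9); /* ~1.99 GB/s */
    return 0;
}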

Quick timings
NVMe and NFS, cold cache on client and server. 4.3 GiB in under three seconds.

$ cat /nfs/nvme/Various/UV.zip | pv > /dev/null
4.3GiB 0:00:02 [1.55GiB/s]

The three-disk HDD pool gets around 300 MB/s, but once the ARC picks up the data it goes at network sp...

InfiniBanding, pt. 4. Almost there
Got my PCIe-M.2 adapters, plugged 'em in, one of them runs at PCIe 1.0 lane speeds instead of PCIe 3.0, capping perf to 850 MB/s. And causes a Critical Interrupt #0x18 | Bus Fatal Error that resets the machine. Then the thing overheats and makes the S...
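
For what it's worth, the 850 MB/s cap lines up with a link negotiating down to PCIe 1.0 on an x4 M.2 adapter; rough math (mine, not the post's):

/* Per-lane payload rates for a PCIe x4 link. */
#include <stdio.h>

int main(void) {
    /* PCIe 1.0: 2.5 GT/s per lane, 8b/10b encoding -> 250 MB/s payload */
    double pcie1_lane = 2.5e9 * 8.0 / 10.0 / 8.0;    /* bytes/s */
    /* PCIe 3.0: 8 GT/s per lane, 128b/130b encoding -> ~985 MB/s payload */
    double pcie3_lane = 8.0e9 * 128.0 / 130.0 / 8.0; /* bytes/s */
    int lanes = 4;
    printf("PCIe 1.0 x4: %.0f MB/s theoretical\n", pcie1_lane * lanes / 1e6);
    printf("PCIe 3.0 x4: %.0f MB/s theoretical\n", pcie3_lane * lanes / 1e6);
    /* TLP headers and flow control shave off another 10-20%, which lands
     * right around the observed 850 MB/s on the downgraded link. */
    return 0;
}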

InfiniBanding, pt. 3, now with ZFS
Latest on the weekend fileserver project: ib_send_bw 3.3 GB/s between two native CentOS boxes. The server has two cards so it should manage 6+ GB/s aggregate bandwidth and hopefully feel like a local NVMe SSD to the clients. (Or more like remote page cache....

Unified Interconnect
Playing with InfiniBand got me thinking. This thing is basically a PCIe-to-PCIe bridge. The old kit runs at x4 PCIe 3 speeds, the new stuff is x16 PCIe. The next generation is x16 PCIe 4.0 and 5.0. Why jump through all the hoops? Thunderbolt is x4 PCIe ov...

InfiniBanding, pt. 2
InfiniBand benchmarks with ConnectX-2 QDR cards (PCIe 2.0 x8 -- very annoying lane spec outside of server gear: either it eats up a PCIe 3.0 x16 slot, or you end up running at half speed, and it's too slow to hit the full 32 Gbps of QDR InfiniBand. Oh yes, ...
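
Back-of-the-envelope numbers for that (my arithmetic, not from the post): QDR 4x and PCIe 2.0 x8 both come out at 4 GB/s of raw payload, but PCIe protocol overhead pushes the usable figure below what the IB link can carry.

/* Why PCIe 2.0 x8 can't quite feed QDR InfiniBand. */
#include <stdio.h>

int main(void) {
    /* QDR 4x: 4 lanes * 10 Gbit/s signaling, 8b/10b -> 32 Gbit/s data */
    double qdr = 4 * 10e9 * 8.0 / 10.0 / 8.0;      /* = 4.0 GB/s */
    /* PCIe 2.0 x8: 8 lanes * 5 GT/s, 8b/10b -> 32 Gbit/s raw payload */
    double pcie2x8 = 8 * 5e9 * 8.0 / 10.0 / 8.0;   /* = 4.0 GB/s */
    printf("QDR 4x data rate:      %.1f GB/s\n", qdr / 1e9);
    printf("PCIe 2.0 x8 raw:       %.1f GB/s\n", pcie2x8 / 1e9);
    /* TLP/flow-control overhead eats roughly 15-20%, leaving ~3.3 GB/s,
     * so the PCIe link, not the IB fabric, is the bottleneck. */
    printf("PCIe 2.0 x8 realistic: ~%.1f GB/s\n", pcie2x8 * 0.83 / 1e9);
    return 0;
}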

Quick test with old Infiniband kit
Two IBM ConnectX-2 cards, hooked up to a Voltaire 4036 switch that sounds like a turbocharged hair dryer. CentOS 7, one host bare metal, the other on top of ESXi 6.5. Best I've seen so far: 3009 MB/s RDMA transfer. Around 2.4 GB/s with iperf3. These things seem t...

OpenVPN settings for 1 Gbps tunnel
Here are the relevant parts of the OpenVPN 2.4 server config that got me 900+ Mbps iperf3 on GbE LAN:

# Use TCP, I couldn't get good perf out of UDP.
proto tcp

# Use AES-256-GCM:
# - more secure than 128 bit
# - GCM has built-in authentication, see http...