Wednesday, November 25, 2009

HPCwire: An Ethernet Protocol for InfiniBand
The catch is that it will be based on Ethernet, so performance will initially be constrained to 10 gigabits/second throughput and multi-microsecond latencies. InfiniBand, of course, already offers much better performance, which is why it continues to expand its footprint in the HPC market. But since the technology behind lossless Ethernet is coming to resemble InfiniBand, vendors like Voltaire and Mellanox are using the convergence as an opportunity to enter the Ethernet arena. "We're not naive enough to think the entire world is going to convert to InfiniBand," says Mellanox marketing VP John Monson, who joined the company in March.
Voltaire has announced its intention to build 10 GigE datacenter switches, which the company plans to launch later this year. Meanwhile at Interop in Las Vegas, Mellanox demonstrated a number of Ethernet-centric technologies, including an RDMA over Ethernet (RDMAoE) capability on the company's ConnectX EN adapters.
RDMAoE is not iWARP (Internet Wide Area RDMA Protocol), currently the only RDMA-based Ethernet standard with a following among NIC vendors like Chelsio Communications and NetEffect (now part of Intel). Mellanox never jumped on the iWARP bandwagon, claiming that the technology's TCP offload model makes the design too complex and expensive to attract widespread support, and that scaling iWARP to 40 gigabits per second (datacenter Ethernet's next speed bump) would be problematic. More importantly, for a number of reasons, Linux support for TCP offload never materialized.
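
To make the distinction concrete, here is a minimal sketch (not from the article) of how an application might see such adapters through the standard verbs API from libibverbs. The assumption is that RDMAoE exposes the same verbs interface as an InfiniBand HCA, so the only visible difference at this level is the port's link layer; the single-port query and the IBV_LINK_LAYER_ETHERNET check further assume a verbs library recent enough to report a link layer at all.

/* Sketch: enumerate RDMA devices with the standard verbs API.
 * Illustration only -- assumes RDMAoE-capable NICs show up under
 * libibverbs just like InfiniBand HCAs, differing only in the
 * reported port link layer.
 * Typical build: gcc rdma_list.c -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0) {
            /* Same verbs, different wire: Ethernet for an
             * RDMAoE-capable NIC, InfiniBand for a classic HCA. */
            printf("%s: link layer %s, state %s\n",
                   ibv_get_device_name(devs[i]),
                   port.link_layer == IBV_LINK_LAYER_ETHERNET
                       ? "Ethernet" : "InfiniBand",
                   ibv_port_state_str(port.state));
        }

        ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}

The idea is that an RDMAoE port on a ConnectX EN adapter would report an Ethernet link layer while a classic HCA reports InfiniBand, and in principle queue pairs, completion queues, and RDMA operations are then set up the same way in either case.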
