A high-speed interface used to connect storage networks to computers and to interconnect clusters of computers. Introduced in 1999, InfiniBand uses switched, point-to-point channels and was designed for high-performance computing and for the switching fabrics that interconnect devices in local networks and cloud datacenters. Because of its high speed, it is widely used for AI (see
AI datacenter). See
SAN.
A Technology Merger
Supporting both copper wire and optical fiber and originally known as "System I/O," InfiniBand is a merger of Intel's NGIO (Next Generation I/O) and Future I/O from IBM, HP and Compaq. See
RDMA,
PCI Express,
NGIO and
Future I/O.
INFINIBAND SIGNALING SPEEDS
IN EACH DIRECTION
(gigabits per second)

                     Number of Bonded Channels
Type                 1x     4x     8x     12x
Single (SDR)         2.5    10     20     30
Double (DDR)         5      20     40     60
Quad (QDR)           10     40     80     120
Fourteen (FDR)       14     56     112    168
Enhanced (EDR)       26     104    208    312

DR = "data rate." SDR, DDR and QDR use 8b/10b encoding, so the data transfer rate is 80% of the signaling speed; FDR and EDR use the more efficient 64b/66b encoding, which carries roughly 97% of the signaling speed as data.
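The relationship between signaling speed and usable throughput is simple arithmetic: effective data rate = per-lane signaling speed × number of bonded lanes × encoding efficiency. A minimal sketch (the function name and defaults are illustrative, not part of any InfiniBand API):

```python
def effective_rate_gbps(lane_gbps, lanes, efficiency=0.8):
    """Usable data rate in Gb/s for a bonded InfiniBand link.

    lane_gbps  -- signaling speed of one lane (e.g. 10 for QDR)
    lanes      -- number of bonded channels (1, 4, 8 or 12)
    efficiency -- encoding overhead: 0.8 for 8b/10b (SDR/DDR/QDR)
    """
    return lane_gbps * lanes * efficiency

# QDR over a 4x link: 40 Gb/s signaling carries 32 Gb/s of user data
print(effective_rate_gbps(10, 4))  # 32.0
```

The same formula with an efficiency of 64/66 (about 0.97) approximates FDR and EDR links.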