AOC-IBH-XQD

Dual Port, Low Latency InfiniBand Adapter Cards For SuperBlade

This InfiniBand mezzanine card for the SuperBlade delivers low latency and high bandwidth for performance-driven server and storage clustering applications in Enterprise Data Centers, High-Performance Computing, and Embedded environments. Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications achieve significant performance improvements, resulting in reduced completion time and lower cost per operation. The AOC-IBH-XQD simplifies network deployment by consolidating clustering, communications, storage, and management I/O, and by providing enhanced performance in virtualized server environments. In addition to its InfiniBand capability, the AOC-IBH-XQD can alternatively be configured as a 10-Gigabit Ethernet NIC when used with the Supermicro SBM-XEM-002 10-Gigabit Pass-Through module or the SBM-XEM-X10SM 10-Gigabit Ethernet switch.

AOC-IBH-X3QS

Single Port, Low Latency InfiniBand Adapter Cards For SuperBlade

This InfiniBand mezzanine card for the SuperBlade delivers low latency and high bandwidth for performance-driven server and storage clustering applications in Enterprise Data Centers, High-Performance Computing, and Embedded environments. Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications achieve significant performance improvements, resulting in reduced completion time and lower cost per operation. The AOC-IBH-X3QS simplifies network deployment by consolidating clustering, communications, storage, and management I/O, and by providing enhanced performance in virtualized server environments. In addition to its InfiniBand capability, the AOC-IBH-X3QS can alternatively be configured as a 10-Gigabit Ethernet NIC when used with the Supermicro SBM-XEM-002M 10-Gigabit Pass-Through module or the SBM-XEM-X10SM 10-Gigabit Ethernet switch.

AOC-IBH-X3QD

Dual Port, Low Latency InfiniBand Adapter Cards For SuperBlade

This InfiniBand mezzanine card for the SuperBlade delivers low latency and high bandwidth for performance-driven server and storage clustering applications in Enterprise Data Centers, High-Performance Computing, and Embedded environments. Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications achieve significant performance improvements, resulting in reduced completion time and lower cost per operation. The AOC-IBH-X3QD simplifies network deployment by consolidating clustering, communications, storage, and management I/O, and by providing enhanced performance in virtualized server environments. In addition to its InfiniBand capability, the AOC-IBH-X3QD can alternatively be configured as a 10-Gigabit Ethernet NIC when used with the Supermicro SBM-XEM-002 10-Gigabit Pass-Through module or the SBM-XEM-X10SM 10-Gigabit Ethernet switch.

SBM-IBP-D14 (IB Pass-Through) †

Internal Ports
  • Fourteen internal 4x DDR ports (20 Gbps)
External Uplink Ports
  • Fourteen external 4x DDR copper ports (20 Gbps, CX-4 connectors)

SBM-IBS-001 (IB Switch) † – EOL

Switch Chip
  • Mellanox InfiniScale III
Internal Ports
  • Fourteen internal 4x DDR ports
External Uplink Ports
  • Ten 4x DDR external copper ports (CX-4 connectors)
Bandwidth
  • 4x DDR (20 Gbps) non-blocking architecture; 960 Gbps total switch bandwidth (24-port)

SBM-IBS-Q3616*/SBM-IBS-Q3616M* and SBM-IBS-Q3618*/SBM-IBS-Q3618M*

Switch Chip
  • Mellanox InfiniScale IV
Internal Ports
  • Twenty internal 4x QDR ports (SBM-IBS-Q3616/M)
  • Eighteen internal 4x QDR ports (SBM-IBS-Q3618/M)
External Uplink Ports
  • Sixteen 4x QDR ports with QSFP connectors (SBM-IBS-Q3616/M)
  • Eighteen 4x QDR ports with QSFP connectors (SBM-IBS-Q3618/M)
Bandwidth
  • 4x QDR (40 Gbps) non-blocking architecture; 2.88 Tbps total switch bandwidth (36-port)
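The aggregate-bandwidth figures quoted for the DDR and QDR switches follow from ports × per-port rate × 2 (both directions of a full-duplex link). A minimal sketch, using the signaling rates as quoted in this datasheet; the function name is illustrative only:

```python
def switch_bandwidth_gbps(ports: int, rate_gbps: int) -> int:
    """Total full-duplex switch bandwidth in Gbps: ports x rate x 2 directions."""
    return ports * rate_gbps * 2

# SBM-IBS-001: 24 ports at 4x DDR (20 Gbps)
print(switch_bandwidth_gbps(24, 20))  # 960 Gbps

# SBM-IBS-Q3616/Q3618: 36 ports at 4x QDR (40 Gbps)
print(switch_bandwidth_gbps(36, 40))  # 2880 Gbps = 2.88 Tbps
```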

SBM-IBS-F3616M

Switch Chip
  • Mellanox SwitchX
Internal Ports
  • Twenty FDR10/FDR ports at 40/56 Gbps
External Uplink Ports
  • Sixteen 4x FDR ports with QSFP connectors
Bandwidth
  • 4x FDR (56 Gbps) non-blocking architecture with 56 Gbps through external ports
  • 3.392 Tbps total switch bandwidth (36-port)
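Because the SBM-IBS-F3616M mixes port rates (twenty internal FDR10 ports at 40 Gbps, sixteen external FDR ports at 56 Gbps), its aggregate bandwidth is the sum of both groups, doubled for full duplex. A worked check of the quoted 3.392 Tbps figure:

```python
# SBM-IBS-F3616M: 20 internal FDR10 ports (40 Gbps) + 16 external FDR ports (56 Gbps)
internal_gbps = 20 * 40   # 800 Gbps one direction
external_gbps = 16 * 56   # 896 Gbps one direction
total_gbps = (internal_gbps + external_gbps) * 2  # full duplex
print(total_gbps)  # 3392 Gbps = 3.392 Tbps
```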

SBM-GEP-T20

Internal Ports
  • Twenty 1-Gbps downlink ports for LAN interfaces of the server blades
External Uplink Ports
  • Twenty 1-Gbps uplink RJ-45 ports fixed at 1 Gbps (no auto-negotiation)
Type
  • Ethernet pass-through module
Protocols
  • N/A

SBM-GEM-002

Internal Ports
  • Fourteen 1-Gbps downlink ports for LAN interfaces of server blades
External Uplink Ports
  • Fourteen RJ-45 uplink ports fixed at 1 Gbps (no auto-negotiation)
Type
  • Ethernet pass-through module
Protocols
  • N/A

SBM-XEM-002M †

Internal Ports
  • Fourteen 10-Gbps downlink XAUI ports
External Uplink Ports
  • Fourteen SFP+ uplink ports fixed at 10 Gbps (no auto-negotiation)
Type
  • Ethernet pass-through module
Connections
  • 10GBASE-SR, 10GBASE-LRM, 10GBASE-ER, 10GBASE-LR, Twinax