InfiniBand switches – MSB7800-ES2F

NVIDIA Mellanox MSB7800-ES2F 100G InfiniBand Switch 36-Port 7.2Tb/s Managed Switch with P2C Airflow

High-performance 100G InfiniBand switch with 36 QSFP28 ports, 7.2Tb/s throughput, SHARP acceleration, and P2C airflow for HPC/AI data centers.

RFQ

Inquiry ready

Clear path for part number, quantity and BOM-led contact.

EN / AR

Bilingual buyer flow

Useful for procurement teams coordinating technical review across regions.

PDF

Datasheet available

The product datasheet can be downloaded directly from this page.

Fast technical snapshot
Datasheet available
NVIDIA Mellanox MSB7800-ES2F 100G InfiniBand Switch 36-Port 7.2Tb/s Managed Switch with P2C Airflow
MSB7800-ES2F
Product photo (CMS image ready)

Selected specs

Deployment fit

Top‑of‑Rack (ToR) Leaf Connectivity – Ideal for connecting compute nodes in small to extremely large clusters, providing high‑density 100Gb/s access layer switching.

Follow-up path

Quote / Compatibility / Availability

Prepared for part-number led and BOM-led requests


Product media

From product visual to RFQ context

When product photography is available it appears here for faster review, and a clean technical outline remains useful for early evaluation when imagery is limited.

01 Product imagery when available
02 Clear technical outline for fast review
03 Built for specification review before inquiry

Overview

Product overview

The NVIDIA Mellanox MSB7800‑ES2F is a high‑performance 100Gb/s InfiniBand smart switch designed to meet the demanding requirements of modern HPC, AI, and cloud data centers. Part of the NVIDIA SB7800 series, it delivers 36 QSFP28 ports at 100Gb/s per port, an aggregate non‑blocking throughput of 7.2 Tb/s, and ultra‑low port latency. Built on the proven InfiniBand architecture, the MSB7800‑ES2F provides an ideal top‑of‑rack leaf connectivity solution for small to extremely large clusters.

The switch features a fully managed architecture with an onboard dual‑core x86 CPU running MLNX‑OS, enabling comprehensive chassis management of firmware, power supplies, fans, and ports. With support for NVIDIA SHARP in‑network computing technology, the MSB7800‑ES2F offloads collective operations from CPUs to the network fabric, dramatically accelerating MPI and deep learning workloads.

The MSB7800‑ES2F variant features P2C (power‑to‑connector) airflow, making it suitable for data center cooling architectures where cool air enters at the power‑supply side and exhausts at the port side.
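The headline figures above are internally consistent and can be sanity‑checked with a few lines of arithmetic. A minimal sketch follows; the factor of 2 reflects the convention of quoting aggregate switch throughput bidirectionally:

```python
# Sanity-check the MSB7800 headline numbers from the overview:
# 36 QSFP28 ports x 100 Gb/s per direction, quoted bidirectionally.
PORTS = 36
GBPS_PER_PORT = 100          # per direction, per port
BIDIRECTIONAL_FACTOR = 2     # aggregate throughput counts both directions

aggregate_tbps = PORTS * GBPS_PER_PORT * BIDIRECTIONAL_FACTOR / 1000
print(f"Aggregate throughput: {aggregate_tbps} Tb/s")  # 7.2 Tb/s
```

The same convention explains the related Quantum‑2 figure later on this page: 64 ports x 400 Gb/s x 2 = 51.2 Tb/s.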

Key features

  • 100Gb/s InfiniBand per port – 36 QSFP28 ports delivering full bidirectional bandwidth with non‑blocking architecture.
  • 7.2 Tb/s Aggregate Throughput – High‑density switching capacity in a compact 1U form factor.
  • In‑Network Computing Acceleration – NVIDIA SHARP technology offloads collective operations from CPUs to the switch fabric, reducing MPI operation time and freeing valuable CPU resources for computation.
  • Fully Managed with MLNX‑OS – Onboard dual‑core x86 CPU (Intel Celeron 1047UE) with 4GB RAM and 16GB SSD provides comprehensive management via CLI, WebUI, SNMP, and JSON interfaces.
  • Redundant & Hot‑Swappable Components – 1+1 redundant power supplies with 80 Plus Gold certification, plus hot‑swappable fan modules for maximum availability.
  • Energy Efficient Design – ATIS weighted power consumption as low as 122W for a fully populated system; dynamic power scaling reduces consumption when ports are not fully utilized.
  • Flexible Cooling Options – P2C (power‑to‑connector) airflow configuration (MSB7800‑ES2F) with power‑side intake and port‑side exhaust.
  • UFM Ready – Can be integrated with NVIDIA Unified Fabric Manager (UFM) for advanced telemetry, predictive analytics, and automated fabric orchestration.

Typical applications

  • Top‑of‑Rack (ToR) Leaf Connectivity – Ideal for connecting compute nodes in small to extremely large clusters, providing high‑density 100Gb/s access layer switching.
  • AI & Machine Learning Clusters – GPU‑based systems requiring high‑bandwidth, low‑latency interconnect with SHARP acceleration for NCCL collective operations.
  • High‑Performance Computing (HPC) – Research labs, universities, and enterprise HPC environments running MPI‑based simulations and modeling workloads.
  • Enterprise Data Centers – Storage and compute fabrics requiring reliable, high‑performance InfiniBand connectivity.
  • Cloud & Hyperscale Infrastructure – Scalable fabric deployments supporting fat tree, DragonFly+, and other advanced topologies.
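For the AI/ML cluster use case above, SHARP offload is normally exercised through NCCL's CollNet path on the compute hosts rather than configured on the switch itself. The fragment below is an illustrative job environment only, assuming NVIDIA HPC‑X and the nccl-rdma-sharp plugin are installed on the nodes; exact values depend on your fabric and software stack:

```shell
# Illustrative host-side environment for SHARP-accelerated NCCL collectives.
# Assumes HPC-X / nccl-rdma-sharp plugin on the hosts; not switch configuration.
export NCCL_COLLNET_ENABLE=1      # allow NCCL to use the CollNet (SHARP) path
export SHARP_COLL_LOG_LEVEL=3     # verbose SHARP logging for first bring-up
```

With NCCL debug logging enabled, a working setup typically mentions a CollNet backend during communicator initialization; its absence usually points at fabric or plugin setup rather than the switch itself.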

Buying flow

How this product review turns into a real inquiry

This section gives buyers a practical handoff from technical review into quotation or compatibility follow-up.

01

Identify the exact target model

Start from MSB7800-ES2F or the product name to reduce ambiguity before quotation starts.

02

Add quantity or deployment context

Share quantity, destination or BOM context so the commercial follow-up is aligned with the project scope.

03

Move into availability and fit review

Use the page as the handoff point for stock discussion, compatibility checks and delivery planning.

Technical snapshot

Trust signals

Why buyers use this page to review MSB7800-ES2F

The page is designed to reduce the distance between specification review and a real commercial inquiry.

RFQ

01

Part-number led inquiry path

MSB7800-ES2F can be used as the direct starting point for quotation, BOM sharing and availability discussion.

Review

02

Compatibility review before follow-up

This MSB7800-ES2F page is positioned to support compatibility questions and project-fit review before commercial action is finalized.

Project

03

Project and quantity alignment

This model can be framed against use cases such as top‑of‑rack (ToR) leaf connectivity for compute nodes in small to extremely large clusters, together with quantity and rollout timing.

Language

04

English and Arabic follow-up

The bilingual route supports regional buying teams that need technical review and RFQ handling in two languages.

FAQ

Frequently asked questions about MSB7800-ES2F

Short answers that give buyers clearer expectations before they move into the quote form.

01 Can I request a quote for MSB7800-ES2F directly from this page?

Yes. This page is designed to move MSB7800-ES2F review directly into quotation, availability or BOM discussion.

02 Does MSB7800-ES2F support compatibility discussion before purchase?

Yes. The page can be used to frame compatibility questions around top‑of‑rack (ToR) leaf connectivity in small to extremely large clusters before commercial follow-up.

03 Can MSB7800-ES2F be compared with nearby models in the same family?

Yes. Related products within the same category give buyers an internal comparison path before they submit an inquiry.

04 Can I send quantity or a BOM list with the inquiry?

Yes. The inquiry path is designed to accept part numbers, quantity, BOM context and availability requirements in one submission.

Related products

Related products in the same family

Internal links that support model comparison and adjacent product discovery.

MSB7890-ES2F – InfiniBand switches

NVIDIA Mellanox MSB7890-ES2F 100G InfiniBand Switch 36-Port 7.2Tb/s Unmanaged Switch with P2C Airflow UFM Ready

High-performance 36-port 100G InfiniBand switch with 7.2Tb/s throughput, ultra-low latency, and UFM-ready for HPC/AI data centers. Features SHARP acceleration.

MSB7890-ES2F
Product photo (CMS image ready)
MQM8700-HS2F – InfiniBand switches

NVIDIA Quantum MQM8700-HS2F 200G InfiniBand Switch 40-Port 16Tb/s

High-performance 200G InfiniBand switch with 40 QSFP56 ports, 16Tb/s throughput, and SHARP acceleration for AI/HPC clusters. Low latency, managed, RoHS compliant.

MQM8700-HS2F
Product photo (CMS image ready)
Ordering code
MQM8700-HS2F (920-9B110-00FH-0MD)
MQM8700-HS2R – InfiniBand switches

NVIDIA Quantum MQM8700-HS2R 200G InfiniBand Switch | 40-Port 16Tb/s Managed Switch with C2P Airflow

High-performance 200G InfiniBand switch with 40 QSFP56 ports, 16Tb/s throughput, and SHARP in-network computing for AI/HPC clusters.

MQM8700-HS2R
Product photo (CMS image ready)
Ordering code
MQM8700-HS2R (920-9B110-00RH-0M0)
MQM8790-HS2F – InfiniBand switches

NVIDIA Quantum MQM8790-HS2F 200G InfiniBand Switch Unmanaged 40-Port 16Tb/s P2C Airflow UFM Ready

High-performance 40-port 200G InfiniBand switch with 16Tb/s throughput, UFM ready for AI clusters and HPC. Features SHARP acceleration, dual PSU, and sub-130ns latency.

MQM8790-HS2F
Product photo (CMS image ready)
Ordering code
MQM8790-HS2F (920-9B110-00FH-0D0)
MQM8790-HS2R – InfiniBand switches

NVIDIA Quantum MQM8790-HS2R 200G InfiniBand Switch Unmanaged, 40-Port 16Tb/s, C2P Airflow UFM Ready

High-performance 200G InfiniBand switch with 40 QSFP56 ports, 16Tb/s throughput, and UFM-ready for AI clusters & HPC data centers.

MQM8790-HS2R
Product photo (CMS image ready)
Ordering code
MQM8790-HS2R (920-9B110-00RH-0D0)
MQM9790-NS2R – InfiniBand switches

NVIDIA Quantum-2 MQM9790-NS2R 64-Port 400Gb/s InfiniBand Switch Unmanaged, C2P Airflow

64-port 400Gb/s NDR InfiniBand switch with 51.2Tb/s throughput, SHARPv3 AI acceleration, and C2P airflow. Ideal for HPC and AI workloads.

MQM9790-NS2R
Product photo (CMS image ready)
Ordering code
MQM9790-NS2R (920-9B210-00RN-0D0)
64 ports 400Gb/s NDR (32x OSFP connectors) – non‑blocking
>66.5 billion packets/sec
MQM9700-NS2F – InfiniBand switches

NVIDIA Quantum-2 QM9700-NS2F Managed InfiniBand Switch 64-Port 400G NDR 51.2 Tb/s Throughput P2C Airflow

High-performance 64-port 400G InfiniBand switch with 51.2 Tb/s throughput, SHARPv3 acceleration, and P2C airflow for AI/HPC clusters.

MQM9700-NS2F
Product photo (CMS image ready)
Ordering code
MQM9700-NS2F (920-9B210-00FN-0M0)
51.2 Tb/s aggregate bidirectional throughput; >66.5 billion packets per second (BPPS)
x86 Coffee Lake i3, 8GB DDR4 SO-DIMM (2666 MT/s), 16GB M.2 SSD
MQM9700-NS2R – InfiniBand switches

NVIDIA Quantum-2 QM9700-NS2R 64-Port 400G InfiniBand Managed Switch

64-port 400G InfiniBand managed switch with 51.2 Tb/s throughput, SHARPv3 acceleration, and ultra-low latency for AI/HPC clusters.

MQM9700-NS2R
Product photo (CMS image ready)
Ordering code
MQM9700-NS2R (920-9B210-00RN-0M2)
51.2 Tb/s aggregate bidirectional throughput; >66.5 billion packets per second (BPPS)
x86 Coffee Lake i3, 8GB DDR4 SO-DIMM (2666 MT/s), 16GB M.2 SSD
MQM9790-NS2F – InfiniBand switches

NVIDIA Mellanox Quantum-2 MQM9790-NS2F InfiniBand Switch 64-Port 400G NDR Smart Switch

High-performance 400Gb/s InfiniBand switch with 64 ports, 51.2Tb/s throughput, and SHARPv3 acceleration for AI/HPC data centers. 1U rack-mount design.

MQM9790-NS2F
Product photo (CMS image ready)
Ordering code
MQM9790-NS2F (920-9B210-00FN-0D0)
64 ports of 400Gb/s (32 OSFP connectors)
Over 66.5 BPPS

MSB7800-ES2F

Quick quote

Request now