
NVIDIA ConnectX-7 MCX755106AS-HEAT NDR 400Gb/s InfiniBand Smart Adapter

Dual-port 400Gb/s InfiniBand & RoCE smart adapter with PCIe Gen5 x16, GPUDirect RDMA, and hardware security offloads for AI, HPC, and cloud data centers.

RFQ

Inquiry ready

A clear path for inquiries led by part number, quantity, or BOM.

EN / AR

Bilingual buyer flow

Useful for procurement teams coordinating technical review across regions.

PDF

Datasheet available

The product datasheet can be downloaded directly from this page.


Selected specs

Ordering code: MCX755106AS-HEAT (900-9X7AH-0078-DTZ)

Form factor: PCIe HHHL (half-height, half-length); optional FHHL bracket

Host interface: PCIe Gen5 x16 (up to 32 lanes, supporting bifurcation and Multi-Host)

Protocols: InfiniBand (NDR/HDR/EDR) and Ethernet (400GbE, 200GbE, 100GbE, 50GbE, 25GbE, 10GbE)

Ports: Dual-port QSFP112 (2x 400Gb/s NDR, 800Gb/s aggregate)

Data rates: NDR 400Gb/s per port; HDR 200Gb/s; EDR 100Gb/s; FDR compatible

On-board memory: Integrated in-network memory for rendezvous offload and burst buffer
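
A quick way to validate the host-interface row during bring-up is to read the negotiated PCIe link state. A minimal sketch in Python, assuming a Linux host with pciutils installed (run as root so link status is visible); 15b3 is the NVIDIA/Mellanox PCI vendor ID:

    # Minimal sketch: confirm the adapter negotiated a Gen5 x16 link on a Linux
    # host. Assumes pciutils is installed; run as root so link status is shown.
    import subprocess

    out = subprocess.run(
        ["lspci", "-vv", "-d", "15b3:"],      # list only NVIDIA/Mellanox devices
        capture_output=True, text=True, check=True,
    ).stdout

    for line in out.splitlines():
        if "LnkSta:" in line:
            # Expect "Speed 32GT/s, Width x16" for a healthy Gen5 x16 link.
            print(line.strip())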

Deployment fit

AI & Machine Learning Clusters - Large-scale training with NCCL, UCX, and GPUDirect RDMA.

Follow-up path

Quote / Compatibility / Availability

Prepared for part-number-led and BOM-led requests


Product media

From product visual to RFQ context

When product photography is available, it appears here for faster review; when imagery is limited, a clean technical outline still supports early evaluation.

01 Product imagery when available
02 Clear technical outline for fast review
03 Built for specification review before inquiry

Overview

Product overview

The NVIDIA ConnectX-7 family delivers groundbreaking performance with up to 400Gb/s of bandwidth per port, supporting both InfiniBand (NDR/HDR/EDR) and Ethernet (up to 400GbE). Model MCX755106AS-HEAT combines a PCIe Gen5 host interface (up to x32 lanes), dual-port 400Gb/s density, multi-host capability, and advanced engines for GPUDirect RDMA, NVMe-oF acceleration, and inline cryptography. Built for demanding AI training, simulation, and real-time analytics, this adapter reduces CPU overhead while maximizing data throughput and security. With on-board memory for rendezvous offload, SHARP collective acceleration, and ASAP2 SDN offloads, ConnectX-7 transforms standard servers into high-performance network nodes with near-zero jitter and nanosecond-precision timing (IEEE 1588v2 Class C).
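
As a sanity check on the lane count: a single Gen5 x16 link cannot carry both ports at line rate, which is why the up-to-x32-lane option matters for full dual-port throughput. A minimal back-of-envelope sketch (encoding overhead only; the figures are illustrative, not datasheet values):

    # Back-of-envelope: can one PCIe Gen5 x16 link carry both 400Gb/s ports?
    # Only line-coding overhead is modeled; real throughput loses a few more
    # percent to TLP/DLLP framing, so the gap is slightly wider in practice.
    GT_PER_LANE = 32              # PCIe 5.0 raw rate, GT/s per lane
    ENCODING = 128 / 130          # 128b/130b coding used by PCIe 3.0 and later
    LANES = 16

    pcie_gbps = GT_PER_LANE * LANES * ENCODING   # ~504 Gb/s per direction
    ports_gbps = 2 * 400                         # 800 Gb/s dual-port line rate

    print(f"PCIe Gen5 x16 usable: ~{pcie_gbps:.0f} Gb/s per direction")
    print(f"Dual-port NDR line rate: {ports_gbps} Gb/s")
    # 800 Gb/s > ~504 Gb/s, which is why the card exposes up to 32 lanes
    # (bifurcation / Multi-Host) to feed both ports at full rate.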

Key features

  • NDR InfiniBand & 400GbE ready - Up to 400Gb/s per port, dual-port configuration delivering 800Gb/s aggregate bandwidth; supports NDR, HDR, EDR InfiniBand and 400/200/100/50/25/10GbE.
  • PCIe Gen5 x16 (up to x32 lanes) - High-throughput host interface with TLP processing hints, ATS, PASID, and SR-IOV.
  • In-Network Computing - Hardware offload of collective operations (SHARP), rendezvous protocol, burst buffer offload.
  • GPUDirect RDMA & GPUDirect Storage - Direct GPU-to-NIC data path, accelerating deep learning and data analytics; see the NCCL sketch after this list.
  • Hardware Security Engines - Inline IPsec/TLS/MACsec encryption/decryption (AES-GCM 128/256-bit) + secure boot with hardware root-of-trust.
  • Advanced Storage Acceleration - NVMe-oF (over Fabrics/TCP), NVMe/TCP offload, T10-DIF signature handover, iSER, NFS over RDMA, SMB Direct.
  • ASAP2 SDN & VirtIO Acceleration - OVS offload, VXLAN/GENEVE/NVGRE encapsulation, connection tracking, and programmable parser.
  • Precision Timing - PTP (IEEE 1588v2) with 12ns accuracy, SyncE, time-triggered scheduling, packet pacing.
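
To make the GPUDirect RDMA path above concrete, here is a minimal PyTorch/NCCL sketch. The HCA names mlx5_0/mlx5_1 and the torchrun-style environment are illustrative assumptions, not values from the datasheet:

    # Minimal sketch: run an NCCL all-reduce over the adapter's InfiniBand
    # ports with GPUDirect RDMA. HCA names and env-var values are assumptions.
    import os
    import torch
    import torch.distributed as dist

    os.environ.setdefault("NCCL_IB_HCA", "mlx5_0,mlx5_1")  # pin NCCL to these HCAs
    os.environ.setdefault("NCCL_NET_GDR_LEVEL", "SYS")     # permit GPUDirect RDMA paths

    dist.init_process_group(backend="nccl")     # rendezvous via torchrun's env vars
    torch.cuda.set_device(int(os.environ.get("LOCAL_RANK", 0)))

    t = torch.ones(1 << 20, device="cuda")      # 1M floats
    dist.all_reduce(t)                          # sums across ranks over the NIC
    print(f"rank {dist.get_rank()}: t[0] = {t[0].item()}")
    dist.destroy_process_group()

Launched with torchrun --nproc_per_node=<gpus per node> across two or more nodes, the all-reduce traffic exercises the GPU-to-NIC path directly.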

Typical applications

  • AI & Machine Learning Clusters - Large-scale training with NCCL, UCX, and GPUDirect RDMA.
  • HPC Simulation & Research - Weather modeling, genomics, molecular dynamics requiring low-latency MPI.
  • Hyperscale Cloud & SDDC - Overlay networking, NFV acceleration, secure multi-tenancy (SR-IOV); a VF-enablement sketch follows this list.
  • Enterprise Storage Systems - NVMe-oF target offload and distributed file systems (Lustre, GPUDirect Storage).
  • 5G Edge & Telecom - Time-sensitive infrastructures with Class C PTP and MACsec security.
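
For the SR-IOV multi-tenancy case above, virtual functions are enabled through the standard Linux sysfs interface. A minimal sketch, assuming the port enumerates as enp1s0f0 and that eight VFs are wanted (run as root):

    # Minimal sketch: enable SR-IOV virtual functions through the standard
    # Linux sysfs ABI. The interface name and VF count are assumptions.
    from pathlib import Path

    IFACE = "enp1s0f0"      # assumed netdev name of the first port
    NUM_VFS = 8

    dev = Path(f"/sys/class/net/{IFACE}/device")
    limit = int((dev / "sriov_totalvfs").read_text())
    if NUM_VFS > limit:
        raise SystemExit(f"{IFACE} supports at most {limit} VFs")

    (dev / "sriov_numvfs").write_text("0")           # reset before reconfiguring
    (dev / "sriov_numvfs").write_text(str(NUM_VFS))  # VFs appear as new PCI functions
    print(f"enabled {NUM_VFS} VFs on {IFACE}")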

Buying flow

How this product review turns into a real inquiry

This section gives buyers a practical handoff from technical review into quotation or compatibility follow-up.

01

Identify the exact target model

Start from MCX755106AS-HEAT or the product name to reduce ambiguity before quotation starts.

02

Add quantity or deployment context

Share quantity, destination or BOM context so the commercial follow-up is aligned with the project scope.

03

Move into availability and fit review

Use the page as the handoff point for stock discussion, compatibility checks and delivery planning.

Technical snapshot

Ordering code: MCX755106AS-HEAT (900-9X7AH-0078-DTZ)
Form factor: PCIe HHHL (half-height, half-length); optional FHHL bracket
Host interface: PCIe Gen5 x16 (up to 32 lanes, supporting bifurcation and Multi-Host)
Protocols: InfiniBand (NDR/HDR/EDR) and Ethernet (400GbE, 200GbE, 100GbE, 50GbE, 25GbE, 10GbE)
Ports: Dual-port QSFP112 (2x 400Gb/s NDR, 800Gb/s aggregate)
Data rates: NDR 400Gb/s per port; HDR 200Gb/s; EDR 100Gb/s; FDR compatible
On-board memory: Integrated in-network memory for rendezvous offload and burst buffer
Security: Inline IPsec, TLS, MACsec (AES-GCM 128/256-bit); Secure Boot; flash encryption
Storage offloads: NVMe-oF (Fabrics/TCP), NVMe/TCP, T10-DIF, SRP, iSER, NFS over RDMA, SMB Direct
Timing: IEEE 1588v2 PTP (12ns accuracy), SyncE, programmable PPS, time-triggered scheduling
Virtualization & SDN: SR-IOV, VirtIO acceleration, VXLAN/NVGRE/GENEVE offload, connection tracking (L4 firewall)
Management: NC-SI, MCTP over SMBus/PCIe, PLDM (Monitor/Firmware/FRU/Redfish), SPDM, SPI flash, JTAG
Typical power: not publicly specified; this dual-port high-performance adapter requires adequate airflow - please confirm before ordering
Operating temperature: 0°C to 55°C (with appropriate chassis cooling)
Note: Some parameters may vary based on firmware and system configuration. Consult NVIDIA documentation or contact Starsurge for specific validation.
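
The storage-offloads row pairs with standard nvme-cli tooling on the host. A minimal sketch of attaching an NVMe-oF namespace over the adapter's RDMA path; the target address and subsystem NQN are placeholders, not values tied to this product:

    # Minimal sketch: attach an NVMe-oF namespace over the adapter's RDMA path
    # with nvme-cli (run as root). Address, port, and NQN are placeholders.
    import subprocess

    TARGET_ADDR = "192.168.1.10"                    # assumed RDMA-reachable target
    SUBSYS_NQN = "nqn.2024-01.io.example:subsys1"   # hypothetical subsystem NQN

    subprocess.run(
        ["nvme", "connect",
         "-t", "rdma",          # RDMA transport (RoCE or InfiniBand)
         "-a", TARGET_ADDR,     # target address
         "-s", "4420",          # conventional NVMe-oF service port
         "-n", SUBSYS_NQN],     # subsystem to attach
        check=True,
    )
    # The new namespaces then show up under /dev/nvme*; list them:
    print(subprocess.run(["nvme", "list"], capture_output=True, text=True).stdout)
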
Advantages - Built for Modern Data Centers

Lowest total cost of ownership: offloads networking, storage, and security tasks from the CPU, lowering power and cooling cost per Gb/s.

Trust signals

Why buyers use this page to review MCX755106AS-HEAT

The page is designed to reduce the distance between specification review and a real commercial inquiry.

RFQ

01

Part-number led inquiry path

MCX755106AS-HEAT can be used as the direct starting point for quotation, BOM sharing and availability discussion.

Review

02

Compatibility review before follow-up

The MCX755106AS-HEAT product page is positioned to support compatibility questions and project-fit review before commercial action is finalized.

Project

03

Project and quantity alignment

This model can be framed against use cases such as AI and machine-learning clusters (large-scale training with NCCL, UCX, and GPUDirect RDMA), together with quantity and rollout timing.

Language

04

English and Arabic follow-up

The bilingual route supports regional buying teams that need technical review and RFQ handling in two languages.

FAQ

Frequently asked questions about MCX755106AS-HEAT

Short answers that give buyers clearer expectations before they move into the quote form.

01 Can I request a quote for MCX755106AS-HEAT directly from this page?

Yes. This page is designed to move MCX755106AS-HEAT review directly into quotation, availability or BOM discussion.

02 Does MCX755106AS-HEAT support compatibility discussion before purchase?

Yes. The page can be used to frame compatibility questions around use cases such as AI and machine-learning clusters (NCCL, UCX, GPUDirect RDMA) before commercial follow-up.

03 Can MCX755106AS-HEAT be compared with nearby models in the same family?

Yes. Related products within the same category give buyers an internal comparison path before they submit an inquiry.

04 Can I send quantity or a BOM list with the inquiry?

Yes. The inquiry path is designed to accept part numbers, quantity, BOM context and availability requirements in one submission.

Related products

Related products in the same family

Internal links that support model comparison and adjacent product discovery.


MCX516A-CCAT Dual-Port 100GbE Ethernet Adapter by NVIDIA

Dual-port 100GbE PCIe adapter with RoCE, 750ns latency, 200Mpps throughput. Ideal for AI, cloud, and storage with NVMe-oF, SR-IOV, and ASAP2 offloads.

Part number: MCX516A-CCAT
Low-profile PCIe add-in card. Ships with tall bracket mounted, short bracket included.
PCIe 3.0 x16 (compatible with x8, x4, x2, x1; auto-negotiated)
Up to 200 million messages per second (Mpps); 197 Mpps with DPDK
SR-IOV: up to 512 Virtual Functions, 8 Physical Functions per port

NVIDIA ConnectX-4 Lx EN MCX4121A-XCAT – Dual-Port 10GbE SFP28 Adapter Card with RoCE Virtualization Offloads

Dual-port 10GbE SFP28 adapter with RoCE, SR-IOV virtualization, and VXLAN offloads. Ideal for cloud, storage, and database servers requiring low latency.

Part number: MCX4121A-XCAT
PCIe 3.0 x8 (compatible with x16, x4, x2, x1; auto-negotiated)
SR-IOV: up to 256 Virtual Functions, 8 Physical Functions per port
Yes – RDMA over Converged Ethernet (RoCE)
TCP/UDP checksum offload, LSO/LRO, RSS, TSS, VLAN insertion/stripping

NVIDIA ConnectX-6 Lx MCX631102AS-ADAT OCP 3.0 Adapter for Cloud & Enterprise

High-performance OCP 3.0 SmartNIC with 25/50GbE ports, PCIe Gen4, IPsec encryption, and SDN acceleration for cloud data centers.

Part number: MCX631102AN-ADAT
OCP 3.0 Small Form Factor (SFF), hot-pluggable
PCIe Gen4 x8 (compatible with PCIe Gen3 x8)
Zero-Touch RoCE, RoCE Congestion Control, IPsec over RoCE
Inline IPsec (AES-XTS 256/512-bit), hardware root-of-trust, secure firmware update

NVIDIA ConnectX-6 Lx MCX631432AN-ADAB 25/50GbE SmartNIC

Dual-port 25/50GbE SmartNIC with PCIe Gen4, IPsec encryption, and Zero-Touch RoCE. Ideal for cloud, enterprise, and NFV workloads with 75Mpps throughput.

Part number: MCX631432AN-ADAB
PCIe Low Profile (also available in OCP 3.0 SFF)
2x 10/25GbE or 1x 50GbE (SFP28 / QSFP28 depending on SKU variant)
PCIe Gen4 x8 (compatible with PCIe Gen3 x8)
Zero-Touch RoCE, RoCE Congestion Control, IPsec over RoCE

NVIDIA ConnectX-6 MCX653106A-HDAT 200Gb/s Dual-Port InfiniBand Smart Adapter

Dual-port 200Gb/s InfiniBand smart adapter with PCIe 4.0 support, hardware encryption, and in-network computing for HPC and AI workloads.

Part number: MCX653106A-HDAT-SP
PCIe Gen 4.0/3.0 x16 (also supports x8, x4, x2, x1)
RDMA, XRC, DCT, ODP, Hardware Congestion Control, 16M I/O channels, 8 VLs + VL15
RoCE, LSO/LRO, checksum offload, RSS/TSS, VXLAN/NVGRE/Geneve offload
NVMe-oF (target/initiator), T10-DIF, SRP, iSER, SMB Direct

NVIDIA ConnectX-7 MCX75310AAS-NEAT Dual-Port 400Gb/s InfiniBand & Ethernet Smart Adapter – PCIe 5.0 x16, NDR

High-performance 400Gb/s dual-port adapter with PCIe 5.0 x16, hardware security offloads, and NVMe-oF support for AI/HPC data centers.

Ordering code: MCX75310AAS-NEAT (900-9X766-003N-SQ0)
PCIe 5.0 x16 (32 lanes)
IPsec, TLS 1.3, MACsec, AES-XTS

NVIDIA ConnectX-8 SuperNIC C8180(900-9X81E-00EX-ST0) 800G AI Networking Adapter

800Gb/s AI networking adapter with PCIe Gen6, InfiniBand/Ethernet support, and GPUDirect RDMA for hyperscale AI data centers.

Ordering code: C8180 (900-9X81E-00EX-ST0)

NVIDIA ConnectX-8 SuperNIC C8240 | 800G AI Networking Adapter for Hyperscale GPU Clusters

800Gb/s dual-port AI networking adapter with PCIe Gen6, InfiniBand/Ethernet support, and GPUDirect RDMA for hyperscale GPU clusters.

Part number: 900-9X81Q-00CN-ST0

NVIDIA MCX555A-ECAT 100Gb/s Single-Port QSFP28 InfiniBand Adapter PCIe 3.0 x16 ConnectX-5 Network Card

High-performance 100Gb/s InfiniBand adapter with PCIe 3.0 x16, QSFP28 port, RDMA, and NVMe-oF offloads for HPC and AI workloads.

Part number: MCX555A-ECAT
1x QSFP28, up to 100Gb/s InfiniBand (EDR) and 100GbE
PCI Express 3.0 x16 (compatible with x8, x4, x2, x1)
100GbE, 50GbE, 40GbE, 25GbE, 10GbE, 1GbE; IEEE 802.3cd, 802.3bj, 802.3by, 802.3ba, 802.3ae
RDMA over Converged Ethernet (RoCE), hardware reliable transport, out-of-order RDMA, atomic operations

NVIDIA MCX653106A-ECAT ConnectX-6 100Gb/s Dual-Port InfiniBand & Ethernet Smart Network Interface Card

High-performance dual-port 100Gb/s InfiniBand & Ethernet NIC with RDMA offloads, ideal for HPC, AI, and cloud data centers. PCIe 4.0 ready.

Part number: MCX653106A-ECAT (part of the mq9700 compatible series)
Up to 100 Gb/s (EDR InfiniBand or 100GbE)
PCIe 3.0/4.0 x16
SR-IOV (Up to 1000 VFs per port)

NVIDIA MCX653106A-HDAT ConnectX-6 Dual-Port 200Gb/s InfiniBand Adapter | HDR Smart NIC for HPC & AI

NVIDIA MCX653106A-HDAT ConnectX-6 dual-port 200Gb/s InfiniBand/Ethernet smart adapter.

Part number: MCX653106A-HDAT
PCIe 4.0 x16 (3.0 compatible)
In-Network Computing: Offloads collective communication operations and memory access directly into the network
NVIDIA GPUDirect Technologies: Direct GPU-to-GPU communication and GPU-to-storage access
Hardware-Based I/O Virtualization (ASAP²): High-performance network isolation for virtual machines and containers

MCX755106AS-HEAT

Quick quote

Request now