Network adapters & DPUs › MCX653106A-HDAT-SP

NVIDIA ConnectX-6 MCX653106A-HDAT 200Gb/s Dual-Port InfiniBand Smart Adapter

Dual-port 200Gb/s InfiniBand smart adapter with PCIe 4.0 support, hardware encryption, and in-network computing for HPC and AI workloads.

RFQ

Inquiry ready

Clear path for part number, quantity and BOM-led contact.

EN / AR

Bilingual buyer flow

Useful for procurement teams coordinating technical review across regions.

PDF

Datasheet available

The product datasheet can be downloaded directly from this page.

Fast technical snapshot
Datasheet available
NVIDIA ConnectX-6 MCX653106A-HDAT 200Gb/s Dual-Port InfiniBand Smart Adapter
MCX653106A-HDAT-SP

Selected specs

PCIe Gen 4.0/3.0 x16 (also supports x8, x4, x2, x1)

RDMA, XRC, DCT, ODP, Hardware Congestion Control, 16M I/O channels, 8 VLs + VL15

RoCE, LSO/LRO, checksum offload, RSS/TSS, VXLAN/NVGRE/Geneve offload

NVMe-oF (target/initiator), T10-DIF, SRP, iSER, SMB Direct

Hardware XTS-AES 256/512-bit block encryption, FIPS compliant

NC-SI, MCTP over SMBus/PCIe, PLDM for monitoring and firmware update, I2C, JTAG

Deployment fit

High-Performance Computing (HPC): Large-scale clusters running weather simulation, computational fluid dynamics, and molecular dynamics.

Follow-up path

Quote / Compatibility / Availability

Prepared for part-number led and BOM-led requests


Product media

From product visual to RFQ context

When product photography is available, it appears here for faster review; when imagery is limited, a clean technical outline remains useful for early evaluation.

01 Product imagery when available
02 Clear technical outline for fast review
03 Built for specification review before inquiry

Overview

Product overview

The NVIDIA ConnectX-6 MCX653106A-HDAT is a dual-port 200Gb/s InfiniBand and Ethernet smart adapter, designed as a cornerstone of the NVIDIA Quantum InfiniBand platform. It integrates advanced features like Remote Direct Memory Access (RDMA), NVMe over Fabrics (NVMe-oF) offloads, and block-level encryption to drastically reduce CPU overhead. By moving computation into the network fabric, this adapter enhances scalability and efficiency for high-performance computing, machine learning workloads, and hyperscale cloud infrastructures.
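To see why per-message work must move off the CPU at this scale, consider the time budget implied by the adapter's peak message rate of 215 million messages per second (cited in the key features below). The arithmetic is a rough illustration; the 3 GHz core clock is an assumed figure, not a datasheet value.

```python
# Rough per-message time budget at the adapter's peak message rate.
# The 215 Mmps figure comes from the spec; the 3 GHz core clock is
# an illustrative assumption.

PEAK_MSG_RATE = 215e6    # messages per second (datasheet peak)
CORE_CLOCK_HZ = 3.0e9    # assumed CPU core clock

ns_per_msg = 1e9 / PEAK_MSG_RATE                 # time budget per message
cycles_per_msg = CORE_CLOCK_HZ / PEAK_MSG_RATE   # cycles on one assumed core

print(f"budget per message: {ns_per_msg:.2f} ns "
      f"(~{cycles_per_msg:.0f} cycles on one assumed 3 GHz core)")
```

At under 5 ns (roughly 14 cycles) per message, even a single cache miss exceeds the budget, which is why collective operations, tag matching, and transport processing are handled in adapter hardware rather than in software.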

Key features

  • Ultra-High Throughput: 200Gb/s connectivity per port, with maximum total bandwidth of 200Gb/s across the single PCIe 4.0 x16 host interface.
  • In-Network Computing: Hardware offloads for collective operations, MPI tag matching, and rendezvous protocol.
  • Block-Level Encryption: XTS-AES 256/512-bit hardware encryption for FIPS-compliant data security.
  • PCIe 4.0 Support: 16 GT/s link rate with full backward compatibility to PCIe 3.0/2.0/1.1.
  • Message Rate: Up to 215 million messages per second for extreme small-packet performance.
  • Storage Offloads: NVMe-oF target and initiator offloads, T10-DIF, and support for SRP, iSER, NFS RDMA.
  • Virtualization: SR-IOV with up to 1K virtual functions and ASAP² for OVS offload.

ConnectX-6 integrates NVIDIA’s In-Network Computing engines, offloading collective communication operations (such as MPI all-reduce) from the CPU to the network fabric. This reduces latency and frees CPU cycles for application processing. Combined with RDMA and advanced memory mapping (UMR), the adapter enables GPUDirect RDMA and peer-to-peer GPU communication across the network, accelerating AI training clusters and complex simulations.
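The relationship between the per-port rate and the 200Gb/s aggregate cap can be sanity-checked with back-of-envelope PCIe arithmetic. The 85% protocol-efficiency figure below is an illustrative assumption (TLP/DLLP framing overhead varies with payload size), not a datasheet value.

```python
# Back-of-envelope: why a single PCIe 4.0 x16 host interface caps the
# dual-port HDR adapter's aggregate bandwidth near 200 Gb/s.

LANES = 16
GT_PER_S = 16            # PCIe 4.0 raw line rate per lane, GT/s
ENCODING = 128 / 130     # PCIe 4.0 uses 128b/130b line encoding

raw_gbps = LANES * GT_PER_S * ENCODING   # usable line rate after encoding
# TLP/DLLP framing typically costs a further ~10-15%; 0.85 is an
# assumed efficiency for illustration only.
payload_gbps = raw_gbps * 0.85

print(f"line rate after encoding: {raw_gbps:.1f} Gb/s")
print(f"approx. payload bandwidth: {payload_gbps:.1f} Gb/s")
```

The link delivers roughly 252 Gb/s after encoding and on the order of 210-215 Gb/s of payload, so two ports at 200Gb/s each cannot both run at line rate; the second port instead provides redundancy and topology flexibility.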

Typical applications

  • High-Performance Computing (HPC): Large-scale clusters running weather simulation, computational fluid dynamics, and molecular dynamics.
  • AI & Machine Learning: Distributed training of deep neural networks requiring high throughput and low latency.
  • Enterprise Data Centers: NVMe-oF storage targets, database acceleration, and virtualized infrastructure.
  • Hyperscale Cloud: Multi-tenant environments requiring hardware-based isolation and QoS.
  • Liquid-Cooled Platforms: Compatible with Intel Server System D50TNP cold plate designs for high-density deployments.

Buying flow

How this product review turns into a real inquiry

This section gives buyers a practical handoff from technical review into quotation or compatibility follow-up.

01

Identify the exact target model

Start from MCX653106A-HDAT-SP or the product name to reduce ambiguity before quotation starts.

02

Add quantity or deployment context

Share quantity, destination or BOM context so the commercial follow-up is aligned with the project scope.

03

Move into availability and fit review

Use the page as the handoff point for stock discussion, compatibility checks and delivery planning.

Technical snapshot

PCIe Gen 4.0/3.0 x16 (also supports x8, x4, x2, x1)
RDMA, XRC, DCT, ODP, Hardware Congestion Control, 16M I/O channels, 8 VLs + VL15
RoCE, LSO/LRO, checksum offload, RSS/TSS, VXLAN/NVGRE/Geneve offload
NVMe-oF (target/initiator), T10-DIF, SRP, iSER, SMB Direct
Hardware XTS-AES 256/512-bit block encryption, FIPS compliant
NC-SI, MCTP over SMBus/PCIe, PLDM for monitoring and firmware update, I2C, JTAG

Trust signals

Why buyers use this page to review MCX653106A-HDAT-SP

The page is designed to reduce the distance between specification review and a real commercial inquiry.

RFQ

01

Part-number led inquiry path

MCX653106A-HDAT-SP can be used as the direct starting point for quotation, BOM sharing and availability discussion.

Review

02

Compatibility review before follow-up

The NVIDIA ConnectX-6 MCX653106A-HDAT 200Gb/s Dual-Port InfiniBand Smart Adapter page is positioned to support compatibility questions and project-fit review before commercial action is finalized.

Project

03

Project and quantity alignment

This model can be framed against use cases such as large-scale HPC clusters running weather simulation, computational fluid dynamics, and molecular dynamics, together with quantity and rollout timing.

Language

04

English and Arabic follow-up

The bilingual route supports regional buying teams that need technical review and RFQ handling in two languages.

FAQ

Frequently asked questions about MCX653106A-HDAT-SP

Short answers that give buyers clearer expectations before they move into the quote form.

01 Can I request a quote for MCX653106A-HDAT-SP directly from this page?

Yes. This page is designed to move MCX653106A-HDAT-SP review directly into quotation, availability or BOM discussion.

02 Does MCX653106A-HDAT-SP support compatibility discussion before purchase?

Yes. The page can be used to frame compatibility questions around HPC deployments such as large-scale clusters running weather simulation, computational fluid dynamics, and molecular dynamics, before commercial follow-up.

03 Can MCX653106A-HDAT-SP be compared with nearby models in the same family?

Yes. Related products within the same category give buyers an internal comparison path before they submit an inquiry.

04 Can I send quantity or a BOM list with the inquiry?

Yes. The inquiry path is designed to accept part numbers, quantity, BOM context and availability requirements in one submission.

Related products

Related products in the same family

Internal links that support model comparison and adjacent product discovery.

MCX516A-CCAT | Network adapters & DPUs

MCX516A-CCAT Dual-Port 100GbE Ethernet Adapter by NVIDIA

Dual-port 100GbE PCIe adapter with RoCE, 750ns latency, 200Mpps throughput. Ideal for AI, cloud, and storage with NVMe-oF, SR-IOV, and ASAP2 offloads.

MCX516A-CCAT Dual-Port 100GbE Ethernet Adapter by NVIDIA
MCX516A-CCAT
Low-profile PCIe add-in card. Ships with tall bracket mounted, short bracket included.
PCIe 3.0 x16 (compatible with x8, x4, x2, x1; auto-negotiated)
Up to 200 million messages per second (Mpps); 197 Mpps with DPDK
SR-IOV: up to 512 Virtual Functions, 8 Physical Functions per port
MCX4121A-XCAT | Network adapters & DPUs

NVIDIA ConnectX-4 Lx EN MCX4121A-XCAT – Dual-Port 10GbE SFP28 Adapter Card with RoCE Virtualization Offloads

Dual-port 10GbE SFP28 adapter with RoCE, SR-IOV virtualization, and VXLAN offloads. Ideal for cloud, storage, and database servers requiring low latency.

NVIDIA ConnectX-4 Lx EN MCX4121A-XCAT – Dual-Port 10GbE SFP28 Adapter Card with RoCE Virtualization Offloads
MCX4121A-XCAT
PCIe 3.0 x8 (compatible with x16, x4, x2, x1; auto-negotiated)
SR-IOV: up to 256 Virtual Functions, 8 Physical Functions per port
Yes – RDMA over Converged Ethernet (RoCE)
TCP/UDP checksum offload, LSO/LRO, RSS, TSS, VLAN insertion/stripping
MCX631102AN-ADAT | Network adapters & DPUs

NVIDIA ConnectX-6 Lx MCX631102AS-ADAT OCP 3.0 Adapter for Cloud & Enterprise

High-performance OCP 3.0 SmartNIC with 25/50GbE ports, PCIe Gen4, IPsec encryption, and SDN acceleration for cloud data centers.

NVIDIA ConnectX-6 Lx MCX631102AS-ADAT OCP 3.0 Adapter for Cloud & Enterprise
MCX631102AN-ADAT
OCP 3.0 Small Form Factor (SFF), hot-pluggable
PCIe Gen4 x8 (compatible with PCIe Gen3 x8)
Zero-Touch RoCE, RoCE Congestion Control, IPsec over RoCE
Inline IPsec (AES-XTS 256/512-bit), hardware root-of-trust, secure firmware update
MCX631432AN-ADAB | Network adapters & DPUs

NVIDIA ConnectX-6 Lx MCX631432AN-ADAB 25/50GbE SmartNIC

Dual-port 25/50GbE SmartNIC with PCIe Gen4, IPsec encryption, and Zero-Touch RoCE. Ideal for cloud, enterprise, and NFV workloads with 75Mpps throughput.

NVIDIA ConnectX-6 Lx MCX631432AN-ADAB 25/50GbE SmartNIC
MCX631432AN-ADAB
PCIe Low Profile (also available in OCP 3.0 SFF)
2x 10/25GbE or 1x 50GbE (SFP28 / QSFP28 depending on SKU variant)
PCIe Gen4 x8 (compatible with PCIe Gen3 x8)
Zero-Touch RoCE, RoCE Congestion Control, IPsec over RoCE
MCX75310AAS-NEAT | Network adapters & DPUs

NVIDIA ConnectX-7 MCX75310AAS-NEAT Dual-Port 400Gb/s InfiniBand & Ethernet Smart Adapter – PCIe 5.0 x16, NDR

High-performance 400Gb/s dual-port adapter with PCIe 5.0 x16, hardware security offloads, and NVMe-oF support for AI/HPC data centers.

NVIDIA ConnectX-7 MCX75310AAS-NEAT Dual-Port 400Gb/s InfiniBand & Ethernet Smart Adapter – PCIe 5.0 x16, NDR
MCX75310AAS-NEAT
Ordering code
MCX75310AAS-NEAT (900-9X766-003N-SQ0)
PCIe 5.0 x16 (32 lanes)
IPsec, TLS 1.3, MACsec, AES-XTS
MCX755106AS-HEAT | Network adapters & DPUs

NVIDIA ConnectX-7 MCX755106AS-HEAT NDR 400Gb/s InfiniBand Smart Adapter

Dual-port 400Gb/s InfiniBand & RoCE smart adapter with PCIe Gen5 x16, GPUDirect RDMA, and hardware security offloads for AI, HPC, and cloud data centers.

NVIDIA ConnectX-7 MCX755106AS-HEAT NDR 400Gb/s InfiniBand Smart Adapter
MCX755106AS-HEAT
Ordering code
MCX755106AS-HEAT (900-9X7AH-0078-DTZ)
PCIe HHHL (Half Height Half Length), FHHL bracket optional
PCIe Gen5.0 x16 (up to 32 lanes, supporting bifurcation & Multi-Host)
C8180 | Network adapters & DPUs

NVIDIA ConnectX-8 SuperNIC C8180 (900-9X81E-00EX-ST0) 800G AI Networking Adapter

800Gb/s AI networking adapter with PCIe Gen6, InfiniBand/Ethernet support, and GPUDirect RDMA for hyperscale AI data centers.

NVIDIA ConnectX-8 SuperNIC C8180 (900-9X81E-00EX-ST0) 800G AI Networking Adapter
C8180
Ordering code
C8180 (900-9X81E-00EX-ST0)
900-9X81Q-00CN-ST0 | Network adapters & DPUs

NVIDIA ConnectX-8 SuperNIC C8240 | 800G AI Networking Adapter for Hyperscale GPU Clusters

800Gb/s dual-port AI networking adapter with PCIe Gen6, InfiniBand/Ethernet support, and GPUDirect RDMA for hyperscale GPU clusters.

NVIDIA ConnectX-8 SuperNIC C8240 | 800G AI Networking Adapter for Hyperscale GPU Clusters
900-9X81Q-00CN-ST0
MCX555A-ECAT | Network adapters & DPUs

NVIDIA MCX555A-ECAT 100Gb/s Single-Port QSFP28 InfiniBand Adapter PCIe 3.0 x16 ConnectX-5 Network Card

High-performance 100Gb/s InfiniBand adapter with PCIe 3.0 x16, QSFP28 port, RDMA, and NVMe-oF offloads for HPC and AI workloads.

NVIDIA MCX555A-ECAT 100Gb/s Single-Port QSFP28 InfiniBand Adapter PCIe 3.0 x16 ConnectX-5 Network Card
MCX555A-ECAT
1x QSFP28, up to 100Gb/s InfiniBand (EDR) and 100GbE
PCI Express 3.0 x16 (compatible with x8, x4, x2, x1)
100GbE, 50GbE, 40GbE, 25GbE, 10GbE, 1GbE; IEEE 802.3cd, 802.3bj, 802.3by, 802.3ba, 802.3ae
RDMA over Converged Ethernet (RoCE), hardware reliable transport, out-of-order RDMA, atomic operations
MCX653106A-ECAT | Network adapters & DPUs

NVIDIA MCX653106A-ECAT ConnectX-6 100Gb/s Dual-Port InfiniBand & Ethernet Smart Network Interface Card

High-performance dual-port 100Gb/s InfiniBand & Ethernet NIC with RDMA offloads, ideal for HPC, AI, and cloud data centers. PCIe 4.0 ready.

NVIDIA MCX653106A-ECAT ConnectX-6 100Gb/s Dual-Port InfiniBand & Ethernet Smart Network Interface Card
MCX653106A-ECAT
MCX653106A-ECAT (part of the mq9700 compatible series)
Up to 100 Gb/s (EDR InfiniBand or 100GbE)
PCIe 3.0/4.0 x16
SR-IOV (Up to 1000 VFs per port)
MCX653106A-HDAT | Network adapters & DPUs

NVIDIA MCX653106A-HDAT ConnectX-6 Dual-Port 200Gb/s InfiniBand Adapter | HDR Smart NIC for HPC & AI

NVIDIA MCX653106A-HDAT ConnectX-6 dual-port 200Gb/s InfiniBand/Ethernet smart adapter.

NVIDIA MCX653106A-HDAT ConnectX-6 Dual-Port 200Gb/s InfiniBand Adapter | HDR Smart NIC for HPC & AI
MCX653106A-HDAT
PCIe 4.0 x16 (3.0 compatible)
In-Network Computing: Offloads collective communication operations and memory access directly into the network
NVIDIA GPUDirect Technologies: Direct GPU-to-GPU communication and GPU-to-storage access
Hardware-Based I/O Virtualization (ASAP²): High-performance network isolation for virtual machines and containers

MCX653106A-HDAT-SP

Quick quote

Request now