Network adapters & DPUs
900-9X81Q-00CN-ST0

NVIDIA ConnectX-8 SuperNIC C8240 | 800G AI Networking Adapter for Hyperscale GPU Clusters

800Gb/s dual-port AI networking adapter with PCIe Gen6, InfiniBand/Ethernet support, and GPUDirect RDMA for hyperscale GPU clusters.

RFQ

Inquiry ready

Clear path for part number, quantity and BOM-led contact.

EN / AR

Bilingual buyer flow

Useful for procurement teams coordinating technical review across regions.

PDF

Datasheet available

The product datasheet can be downloaded directly from this page.

Fast technical snapshot
Datasheet available
NVIDIA ConnectX-8 SuperNIC C8240 | 800G AI Networking Adapter for Hyperscale GPU Clusters
900-9X81Q-00CN-ST0

Selected specs

Deployment fit

The ConnectX-8 SuperNIC C8240 is purpose-built for next-generation AI fabrics and hyperscale cloud environments.

Follow-up path

Quote / Compatibility / Availability

Prepared for part-number led and BOM-led requests


Product media

From product visual to RFQ context

When product photography is available, it appears here for faster review; a clean technical outline remains useful for early evaluation when imagery is limited.

01 Product imagery when available
02 Clear technical outline for fast review
03 Built for specification review before inquiry

Overview

Product overview

The NVIDIA ConnectX-8 SuperNIC (C8240, 900-9X81Q-00CN-ST0) represents a generational leap in AI fabric acceleration. Supporting up to 800 gigabits per second (Gb/s) over InfiniBand or Ethernet, this adapter eliminates network bottlenecks in large-scale GPU clusters. With native PCIe Gen6 (up to 48 lanes) and advanced features such as NVIDIA GPUDirect RDMA, SHARP in-network computing, and programmable congestion control, the ConnectX-8 ensures maximum throughput and lowest latency for training, inference, and data-intensive HPC workloads. Its power-efficient design aligns with sustainable AI data center goals while enabling scaling beyond hundreds of thousands of GPUs.

• 800Gb/s total bandwidth – supports 800/400/200/100 Gb/s InfiniBand speeds and 400/200/100/50/25 Gb/s Ethernet.
• PCIe Gen6 host interface – up to 48 lanes, low overhead, and Multi-Host support for up to four hosts.
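To put the 800 Gb/s headline figure in context, the sketch below estimates how long a large payload takes to move at the full aggregate line rate. It is a back-of-envelope illustration only: real throughput depends on protocol overhead, congestion, and host-side limits, and the 100 GiB payload size is an arbitrary example.

```python
# Back-of-envelope transfer-time estimate for an 800 Gb/s link.
# Illustrative only: assumes the ideal aggregate line rate from the
# datasheet, with no protocol or encoding overhead.

LINK_GBPS = 800  # ConnectX-8 C8240 aggregate bandwidth, gigabits per second

def transfer_seconds(payload_gib: float, link_gbps: float = LINK_GBPS) -> float:
    """Seconds to move `payload_gib` GiB at `link_gbps` gigabits per second."""
    payload_bits = payload_gib * (1024 ** 3) * 8   # GiB -> bits
    return payload_bits / (link_gbps * 1e9)        # bits / (bits per second)

# Example: a 100 GiB model checkpoint at the full 800 Gb/s line rate
print(f"{transfer_seconds(100):.2f} s")  # roughly 1.07 s
```

At this scale the interconnect, not storage or compute, is rarely the bottleneck for checkpoint-sized transfers, which is the practical point of an 800G fabric.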

Key features

    Typical applications

    The ConnectX-8 SuperNIC C8240 is purpose-built for next-generation AI fabrics and hyperscale cloud environments:
    • AI Factories & Large Language Model Clusters – trillion-parameter model training with 800G front-end and back-end networks.
    • High-Performance Computing (HPC) – SHARPv3 in-network reduction accelerates MPI collectives for scientific simulations.
    • GPU-Accelerated Cloud Data Centers – multi-tenant isolation, overlay offloads, and advanced QoS.
    • Enterprise AI Infrastructure – from inference farms to AI data platforms requiring deterministic low latency.
    • Storage & Converged Fabrics – GPUDirect Storage and RoCEv2 for NVMe-oF and distributed file systems.

    Buying flow

    How this product review turns into a real inquiry

    This section gives buyers a practical handoff from technical review into quotation or compatibility follow-up.

    01

    Identify the exact target model

    Start from 900-9X81Q-00CN-ST0 or the product name to reduce ambiguity before quotation starts.

    02

    Add quantity or deployment context

    Share quantity, destination or BOM context so the commercial follow-up is aligned with the project scope.

    03

    Move into availability and fit review

    Use the page as the handoff point for stock discussion, compatibility checks and delivery planning.

    Technical snapshot

    Trust signals

    Why buyers use this page to review 900-9X81Q-00CN-ST0

    The page is designed to reduce the distance between specification review and a real commercial inquiry.

    RFQ

    01

    Part-number led inquiry path

    900-9X81Q-00CN-ST0 can be used as the direct starting point for quotation, BOM sharing and availability discussion.

    Review

    02

    Compatibility review before follow-up

    The NVIDIA ConnectX-8 SuperNIC C8240 | 800G AI Networking Adapter for Hyperscale GPU Clusters page is positioned to support compatibility questions and project-fit review before commercial action is finalized.

    Project

    03

    Project and quantity alignment

    This model can be framed against use cases such as next-generation AI fabrics and hyperscale cloud deployments, together with quantity and rollout timing.

    Language

    04

    English and Arabic follow-up

    The bilingual route supports regional buying teams that need technical review and RFQ handling in two languages.

    FAQ

    Frequently asked questions about 900-9X81Q-00CN-ST0

    Short answers that give buyers clearer expectations before they move into the quote form.

    01 Can I request a quote for 900-9X81Q-00CN-ST0 directly from this page?

    Yes. This page is designed to move 900-9X81Q-00CN-ST0 review directly into quotation, availability or BOM discussion.

    02 Does 900-9X81Q-00CN-ST0 support compatibility discussion before purchase?

    Yes. The page can be used to frame compatibility questions around next-generation AI fabrics and hyperscale cloud deployments before commercial follow-up.

    03 Can 900-9X81Q-00CN-ST0 be compared with nearby models in the same family?

    Yes. Related products within the same category give buyers an internal comparison path before they submit an inquiry.

    04 Can I send quantity or a BOM list with the inquiry?

    Yes. The inquiry path is designed to accept part numbers, quantity, BOM context and availability requirements in one submission.

    Related products

    Related products in the same family

    Internal links that support model comparison and adjacent product discovery.

    MCX516A-CCAT | Network adapters & DPUs

    MCX516A-CCAT Dual-Port 100GbE Ethernet Adapter by NVIDIA

    Dual-port 100GbE PCIe adapter with RoCE, 750ns latency, 200Mpps throughput. Ideal for AI, cloud, and storage with NVMe-oF, SR-IOV, and ASAP2 offloads.

    Low-profile PCIe add-in card. Ships with tall bracket mounted, short bracket included.
    PCIe 3.0 x16 (compatible with x8, x4, x2, x1; auto-negotiated)
    Up to 200 million messages per second (Mpps); 197 Mpps with DPDK
    SR-IOV: up to 512 Virtual Functions, 8 Physical Functions per port
    MCX4121A-XCAT | Network adapters & DPUs

    NVIDIA ConnectX-4 Lx EN MCX4121A-XCAT – Dual-Port 10GbE SFP28 Adapter Card with RoCE Virtualization Offloads

    Dual-port 10GbE SFP28 adapter with RoCE, SR-IOV virtualization, and VXLAN offloads. Ideal for cloud, storage, and database servers requiring low latency.

    PCIe 3.0 x8 (compatible with x16, x4, x2, x1; auto-negotiated)
    SR-IOV: up to 256 Virtual Functions, 8 Physical Functions per port
    Yes – RDMA over Converged Ethernet (RoCE)
    TCP/UDP checksum offload, LSO/LRO, RSS, TSS, VLAN insertion/stripping
    MCX631102AN-ADAT | Network adapters & DPUs

    NVIDIA ConnectX-6 Lx MCX631102AS-ADAT OCP 3.0 Adapter for Cloud & Enterprise

    High-performance OCP 3.0 SmartNIC with 25/50GbE ports, PCIe Gen4, IPsec encryption, and SDN acceleration for cloud data centers.

    OCP 3.0 Small Form Factor (SFF), hot-pluggable
    PCIe Gen4 x8 (compatible with PCIe Gen3 x8)
    Zero-Touch RoCE, RoCE Congestion Control, IPsec over RoCE
    Inline IPsec (AES-XTS 256/512-bit), hardware root-of-trust, secure firmware update
    MCX631432AN-ADAB | Network adapters & DPUs

    NVIDIA ConnectX-6 Lx MCX631432AN-ADAB 25/50GbE SmartNIC

    Dual-port 25/50GbE SmartNIC with PCIe Gen4, IPsec encryption, and Zero-Touch RoCE. Ideal for cloud, enterprise, and NFV workloads with 75Mpps throughput.

    PCIe Low Profile (also available in OCP 3.0 SFF)
    2x 10/25GbE or 1x 50GbE (SFP28 / QSFP28 depending on SKU variant)
    PCIe Gen4 x8 (compatible with PCIe Gen3 x8)
    Zero-Touch RoCE, RoCE Congestion Control, IPsec over RoCE
    MCX653106A-HDAT-SP | Network adapters & DPUs

    NVIDIA ConnectX-6 MCX653106A-HDAT 200Gb/s Dual-Port InfiniBand Smart Adapter

    Dual-port 200Gb/s InfiniBand smart adapter with PCIe 4.0 support, hardware encryption, and in-network computing for HPC and AI workloads.

    PCIe Gen 4.0/3.0 x16 (also supports x8, x4, x2, x1)
    RDMA, XRC, DCT, ODP, Hardware Congestion Control, 16M I/O channels, 8 VLs + VL15
    RoCE, LSO/LRO, checksum offload, RSS/TSS, VXLAN/NVGRE/Geneve offload
    NVMe-oF (target/initiator), T10-DIF, SRP, iSER, SMB Direct
    MCX75310AAS-NEAT | Network adapters & DPUs

    NVIDIA ConnectX-7 MCX75310AAS-NEAT Dual-Port 400Gb/s InfiniBand & Ethernet Smart Adapter – PCIe 5.0 x16, NDR

    High-performance 400Gb/s dual-port adapter with PCIe 5.0 x16, hardware security offloads, and NVMe-oF support for AI/HPC data centers.

    Ordering code
    MCX75310AAS-NEAT (900-9X766-003N-SQ0)
    PCIe 5.0 x16 (32 lanes)
    IPsec, TLS 1.3, MACsec, AES-XTS
    MCX755106AS-HEAT | Network adapters & DPUs

    NVIDIA ConnectX-7 MCX755106AS-HEAT NDR 400Gb/s InfiniBand Smart Adapter

    Dual-port 400Gb/s InfiniBand & RoCE smart adapter with PCIe Gen5 x16, GPUDirect RDMA, and hardware security offloads for AI, HPC, and cloud data centers.

    Ordering code
    MCX755106AS-HEAT (900-9X7AH-0078-DTZ)
    PCIe HHHL (Half Height Half Length), FHHL bracket optional
    PCIe Gen5.0 x16 (up to 32 lanes, supporting bifurcation & Multi-Host)
    C8180 | Network adapters & DPUs

    NVIDIA ConnectX-8 SuperNIC C8180(900-9X81E-00EX-ST0) 800G AI Networking Adapter

    800Gb/s AI networking adapter with PCIe Gen6, InfiniBand/Ethernet support, and GPUDirect RDMA for hyperscale AI data centers.

    Ordering code
    C8180 (900-9X81E-00EX-ST0)
    MCX555A-ECAT | Network adapters & DPUs

    NVIDIA MCX555A-ECAT 100Gb/s Single-Port QSFP28 InfiniBand Adapter PCIe 3.0 x16 ConnectX-5 Network Card

    High-performance 100Gb/s InfiniBand adapter with PCIe 3.0 x16, QSFP28 port, RDMA, and NVMe-oF offloads for HPC and AI workloads.

    1x QSFP28, up to 100Gb/s InfiniBand (EDR) and 100GbE
    PCI Express 3.0 x16 (compatible with x8, x4, x2, x1)
    100GbE, 50GbE, 40GbE, 25GbE, 10GbE, 1GbE; IEEE 802.3cd, 802.3bj, 802.3by, 802.3ba, 802.3ae
    RDMA over Converged Ethernet (RoCE), hardware reliable transport, out-of-order RDMA, atomic operations
    MCX653106A-ECAT | Network adapters & DPUs

    NVIDIA MCX653106A-ECAT ConnectX-6 100Gb/s Dual-Port InfiniBand & Ethernet Smart Network Interface Card

    High-performance dual-port 100Gb/s InfiniBand & Ethernet NIC with RDMA offloads, ideal for HPC, AI, and cloud data centers. PCIe 4.0 ready.

    MCX653106A-ECAT (part of the MQM9700-compatible series)
    Up to 100 Gb/s (EDR InfiniBand or 100GbE)
    PCIe 3.0/4.0 x16
    SR-IOV (Up to 1000 VFs per port)
    MCX653106A-HDAT | Network adapters & DPUs

    NVIDIA MCX653106A-HDAT ConnectX-6 Dual-Port 200Gb/s InfiniBand Adapter | HDR Smart NIC for HPC & AI

    NVIDIA MCX653106A-HDAT ConnectX-6 dual-port 200Gb/s InfiniBand/Ethernet smart adapter.

    PCIe 4.0 x16 (3.0 compatible)
    In-Network Computing: Offloads collective communication operations and memory access directly into the network
    NVIDIA GPUDirect Technologies: Direct GPU-to-GPU communication and GPU-to-storage access
    Hardware-Based I/O Virtualization (ASAP²): High-performance network isolation for virtual machines and containers

    900-9X81Q-00CN-ST0

    Quick quote

    Request now