RFQ
01 · Part-number-led inquiry path
MCX516A-CCAT can be used as the direct starting point for quotation, BOM sharing and availability discussion.
Dual-port 100GbE PCIe adapter with RoCE, 750ns latency, 200Mpps throughput. Ideal for AI, cloud, and storage with NVMe-oF, SR-IOV, and ASAP2 offloads.
RFQ
Inquiry ready
Clear path for part number, quantity and BOM-led contact.
EN / AR
Bilingual buyer flow
Useful for procurement teams coordinating technical review across regions.
Datasheet available
The product datasheet can be downloaded directly from this page.

Selected specs
Form factor: low-profile PCIe add-in card; ships with the tall bracket mounted, short bracket included
Host interface: PCIe 3.0 x16 (compatible with x8, x4, x2, x1; auto-negotiated)
Message rate: up to 200 million messages per second (Mpps); 197 Mpps with DPDK
SR-IOV: up to 512 Virtual Functions, up to 8 Physical Functions per host
RDMA: yes, RDMA over Converged Ethernet (RoCE)
Overlay offloads: VXLAN, NVGRE, GENEVE, MPLS, NSH hardware encapsulation/de-encapsulation
Storage offloads: NVMe-oF target offloads, T10-DIF Signature Handover, SRP, iSER, NFS RDMA, SMB Direct
HPC features: tag matching, rendezvous offload, adaptive routing, burst buffer offload, embedded PCIe switch, ODP, XRC, DCT
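The SR-IOV entry above maps onto the standard Linux sysfs control file for PCI devices. A minimal sketch of enabling Virtual Functions follows; the PCI address is hypothetical (find the real one with `lspci`), and the dry-run default only reports the equivalent command instead of writing to sysfs:

```python
# Sketch: enabling SR-IOV Virtual Functions on a ConnectX-5 function via the
# standard Linux sysfs interface. The PCI address used below is hypothetical.
from pathlib import Path

CX5_MAX_VFS = 512  # ConnectX-5 family limit from the spec table above

def sriov_numvfs_path(pci_addr: str) -> Path:
    """Return the sysfs file that controls the VF count for a PCI function."""
    return Path(f"/sys/bus/pci/devices/{pci_addr}/sriov_numvfs")

def enable_vfs(pci_addr: str, num_vfs: int, dry_run: bool = True) -> str:
    """Request num_vfs VFs; with dry_run=True, report the action instead.

    Note: the kernel requires writing 0 first if VFs are already enabled.
    """
    if not 0 <= num_vfs <= CX5_MAX_VFS:
        raise ValueError(f"VF count must be 0..{CX5_MAX_VFS}, got {num_vfs}")
    path = sriov_numvfs_path(pci_addr)
    if dry_run:
        return f"echo {num_vfs} > {path}"
    path.write_text(str(num_vfs))  # requires root and the actual device
    return f"wrote {num_vfs} to {path}"

# Dry-run example (no root, no hardware required):
print(enable_vfs("0000:3b:00.0", 16))
```

The dry-run guard keeps the sketch safe to run anywhere; on a real host, the write takes effect immediately and the new VFs appear as additional PCI functions.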
Deployment fit
Cloud and Web 2.0 Data Centers: High-density virtualization, overlay networks, and vSwitch offloads reduce CPU utilization while maintaining wire-speed performance.
Follow-up path
Quote / Compatibility / Availability
Prepared for part-number-led and BOM-led requests

Product media
When product photography is available it appears here for faster review; until then, a clean technical outline remains useful for early evaluation.
Overview
The NVIDIA ConnectX-5 EN MCX516A-CCAT is a dual-port 100GbE Ethernet adapter card designed for the most demanding data center workloads. Built on the ConnectX-5 architecture, this adapter supports multiple speeds including 100GbE, 50GbE, 40GbE, 25GbE, 10GbE, and 1GbE, providing seamless migration paths and infrastructure flexibility. With 750ns latency, up to 200 million messages per second (Mpps), and PCIe 3.0 x16 host interface, the MCX516A-CCAT delivers industry-leading throughput and CPU efficiency. Key capabilities include RoCE (RDMA over Converged Ethernet), SR-IOV virtualization with up to 512 Virtual Functions, ASAP2 accelerated switching and packet processing for vSwitch/vRouter offloads, NVMe over Fabric target offloads, T10-DIF Signature Handover, and comprehensive overlay network offloads (VXLAN, NVGRE, GENEVE). This adapter is available in a low-profile PCIe form factor with enhanced host management features.
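One back-of-envelope check worth keeping in mind during project-fit review: the aggregate rate of the two 100GbE ports exceeds the usable bandwidth of the PCIe 3.0 x16 host link, which is common for dual-port cards deployed for redundancy or asymmetric traffic rather than simultaneous full line rate on both ports. A rough arithmetic sketch (ignoring PCIe protocol overhead, so real throughput is somewhat lower):

```python
# Back-of-envelope host-link headroom check for a dual-port 100GbE card
# on PCIe 3.0 x16 (8 GT/s per lane, 128b/130b line encoding). TLP and
# flow-control overhead are ignored, so achievable throughput is lower.

LANE_RATE_GT = 8.0       # PCIe 3.0 transfer rate per lane, GT/s
ENCODING = 128 / 130     # 128b/130b line-encoding efficiency
LANES = 16

pcie_gbps = LANE_RATE_GT * ENCODING * LANES  # usable raw bandwidth, Gb/s
ports_gbps = 2 * 100                         # two 100GbE ports, aggregate

print(f"PCIe 3.0 x16 raw: {pcie_gbps:.1f} Gb/s vs 2x100GbE: {ports_gbps} Gb/s")
```

The roughly 126 Gb/s of raw Gen3 x16 bandwidth comfortably carries one port at line rate plus headroom, but not both ports saturated at once.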
Buying flow
This section gives buyers a practical handoff from technical review into quotation or compatibility follow-up.
Start from MCX516A-CCAT or the product name to reduce ambiguity before quotation starts.
Share quantity, destination or BOM context so the commercial follow-up is aligned with the project scope.
Use the page as the handoff point for stock discussion, compatibility checks and delivery planning.
Trust signals
The page is designed to reduce the distance between specification review and a real commercial inquiry.
RFQ
01 · MCX516A-CCAT can be used as the direct starting point for quotation, BOM sharing and availability discussion.
Review
02 · The MCX516A-CCAT (NVIDIA Dual-Port 100GbE Ethernet Adapter) page is positioned to support compatibility questions and project-fit review before commercial action is finalized.
Project
03 · This model can be framed against use cases such as cloud and Web 2.0 data centers, where high-density virtualization, overlay networks, and vSwitch offloads reduce CPU utilization at wire speed, together with quantity and rollout timing.
Language
04 · The bilingual route supports regional buying teams that need technical review and RFQ handling in two languages.
FAQ
Short answers that give buyers clearer expectations before they move into the quote form.
Can an RFQ for MCX516A-CCAT start directly from this page?
Yes. This page is designed to move MCX516A-CCAT review directly into quotation, availability or BOM discussion.
Can compatibility be reviewed before commercial follow-up?
Yes. The page can be used to frame compatibility questions around cloud and Web 2.0 data center deployments before commercial follow-up.
Are related products available for comparison?
Yes. Related products within the same category give buyers an internal comparison path before they submit an inquiry.
Can one inquiry cover part numbers, quantity and BOM context?
Yes. The inquiry path is designed to accept part numbers, quantity, BOM context and availability requirements in one submission.
Related products
Internal links that support model comparison and adjacent product discovery.
Dual-port 10GbE SFP28 adapter with RoCE, SR-IOV virtualization, and VXLAN offloads. Ideal for cloud, storage, and database servers requiring low latency.

High-performance OCP 3.0 SmartNIC with 25/50GbE ports, PCIe Gen4, IPsec encryption, and SDN acceleration for cloud data centers.

Dual-port 25/50GbE SmartNIC with PCIe Gen4, IPsec encryption, and Zero-Touch RoCE. Ideal for cloud, enterprise, and NFV workloads with 75Mpps throughput.

Dual-port 200Gb/s InfiniBand smart adapter with PCIe 4.0 support, hardware encryption, and in-network computing for HPC and AI workloads.

High-performance 400Gb/s dual-port adapter with PCIe 5.0 x16, hardware security offloads, and NVMe-oF support for AI/HPC data centers.

Dual-port 400Gb/s InfiniBand & RoCE smart adapter with PCIe Gen5 x16, GPUDirect RDMA, and hardware security offloads for AI, HPC, and cloud data centers.

800Gb/s AI networking adapter with PCIe Gen6, InfiniBand/Ethernet support, and GPUDirect RDMA for hyperscale AI data centers.

800Gb/s dual-port AI networking adapter with PCIe Gen6, InfiniBand/Ethernet support, and GPUDirect RDMA for hyperscale GPU clusters.

High-performance 100Gb/s InfiniBand adapter with PCIe 3.0 x16, QSFP28 port, RDMA, and NVMe-oF offloads for HPC and AI workloads.

High-performance dual-port 100Gb/s InfiniBand & Ethernet NIC with RDMA offloads, ideal for HPC, AI, and cloud data centers. PCIe 4.0 ready.

NVIDIA MCX653106A-HDAT ConnectX-6 dual-port 200Gb/s InfiniBand/Ethernet smart adapter.

MCX516A-CCAT
Quick quote