RFQ
01 Part-number-led inquiry path
MCX555A-ECAT can be used as the direct starting point for quotation, BOM sharing and availability discussion.
High-performance 100Gb/s InfiniBand adapter with PCIe 3.0 x16, QSFP28 port, RDMA, and NVMe-oF offloads for HPC and AI workloads.
RFQ
Inquiry ready
Clear path for part number, quantity and BOM-led contact.
EN / AR
Bilingual buyer flow
Useful for procurement teams coordinating technical review across regions.
Datasheet available
The product datasheet can be downloaded directly from this page.

Selected specs
Ports: 1x QSFP28, up to 100Gb/s InfiniBand (EDR) and 100GbE
Host interface: PCI Express 3.0 x16 (compatible with x8, x4, x2, x1)
Ethernet rates: 100GbE, 50GbE, 40GbE, 25GbE, 10GbE, 1GbE; IEEE 802.3cd, 802.3bj, 802.3by, 802.3ba, 802.3ae
RDMA: RDMA over Converged Ethernet (RoCE), hardware reliable transport, out-of-order RDMA, atomic operations
Overlay offloads: hardware VXLAN, NVGRE and GENEVE encapsulation/decapsulation
Management: NC-SI over MCTP, PLDM for monitoring/control and firmware update, I2C, SPI, JTAG
Boot: remote boot over InfiniBand, Ethernet or iSCSI; UEFI and PXE support
Power: not publicly specified; typically in the sub-20W range. Please confirm for your system.
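For buyers doing a hands-on bring-up check, the sketch below is a minimal example (not vendor tooling) that uses the standard libibverbs API from rdma-core to enumerate RDMA devices and print each device's active port state, width and speed. On an EDR 4x link, ibv_query_port conventionally reports active_width 2 (4x) and active_speed 32 (25 Gb/s per lane); device names such as mlx5_0 vary per host.

```c
/* Minimal verbs bring-up check, assuming rdma-core (libibverbs) is
 * installed. Build: cc check_port.c -o check_port -libverbs
 * (the file name is illustrative). */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }
    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(list[i]);
        if (!ctx)
            continue;
        struct ibv_port_attr port;
        /* Port numbers are 1-based; MCX555A-ECAT is single-port. */
        if (ibv_query_port(ctx, 1, &port) == 0)
            printf("%s: state=%s active_width=%d active_speed=%d\n",
                   ibv_get_device_name(list[i]),
                   ibv_port_state_str(port.state),
                   port.active_width, port.active_speed);
        ibv_close_device(ctx);
    }
    ibv_free_device_list(list);
    return 0;
}
```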
Deployment fit
High-Performance Computing (HPC): Ideal for supercomputing clusters, MPI-based simulations, and scientific research workloads requiring low latency and high message rates.
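Because the deployment fit centers on latency-sensitive MPI workloads, a minimal ping-pong sketch of the kind commonly used to sanity-check fabric latency is shown below. It uses only standard MPI calls; the iteration count and message size are arbitrary assumptions, and measured numbers depend on the full stack (CPU, fabric, MPI library), not the adapter alone.

```c
/* Minimal MPI ping-pong latency sketch; run with exactly two ranks,
 * e.g. mpirun -np 2 -host nodeA,nodeB ./pingpong
 * (launcher syntax varies by MPI distribution). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0)
            fprintf(stderr, "needs at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    const int iters = 1000;     /* arbitrary */
    char buf[8] = {0};          /* small message to expose latency */
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();
    if (rank == 0)  /* half round-trip = one-way latency estimate */
        printf("one-way latency: %.2f us\n",
               (t1 - t0) / iters / 2 * 1e6);
    MPI_Finalize();
    return 0;
}
```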
Follow-up path
Quote / Compatibility / Availability
Prepared for part-number led and BOM-led requests

Product media
When product photography is available, it appears here for faster review; until then, a clean technical outline remains useful for early evaluation.
Overview
The NVIDIA ConnectX-5 MCX555A-ECAT is a single-port 100Gb/s InfiniBand adapter card in a low-profile PCIe form factor. Leveraging the proven ConnectX-5 architecture, it delivers up to 100Gb/s throughput with sub-microsecond latency and a high message rate. The card supports both InfiniBand (up to EDR) and 100GbE, providing versatile connectivity for high-performance computing, storage, and virtualized environments. Built with an embedded PCIe switch and advanced RDMA capabilities, the MCX555A-ECAT offloads critical communication tasks from the CPU — enabling higher application performance, lower power consumption, and reduced total cost of ownership. It is fully compatible with PCIe 3.0 x16 slots and supports a wide range of operating systems and acceleration frameworks.
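To confirm that the card actually trained at PCIe 3.0 x16 in a given server, one low-level option on Linux is reading the negotiated link parameters from sysfs, as sketched below. The PCI address 0000:3b:00.0 is a placeholder; substitute the adapter's own address (visible via lspci). A healthy Gen3 x16 link reports a speed of 8.0 GT/s and a width of 16.

```c
/* Minimal Linux sysfs check of negotiated PCIe link speed/width.
 * Assumes a Linux host; the device address below is a placeholder. */
#include <stdio.h>

static void print_attr(const char *path)
{
    char buf[64];
    FILE *f = fopen(path, "r");
    if (!f) { perror(path); return; }
    if (fgets(buf, sizeof buf, f))
        printf("%s: %s", path, buf);   /* value includes newline */
    fclose(f);
}

int main(void)
{
    /* Replace with the adapter's PCI address from `lspci`. */
    const char *dev = "/sys/bus/pci/devices/0000:3b:00.0";
    char path[256];
    snprintf(path, sizeof path, "%s/current_link_speed", dev);
    print_attr(path);
    snprintf(path, sizeof path, "%s/current_link_width", dev);
    print_attr(path);
    return 0;
}
```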
Buying flow
This section gives buyers a practical handoff from technical review into quotation or compatibility follow-up.
Start from MCX555A-ECAT or the product name to reduce ambiguity before quotation starts.
Share quantity, destination or BOM context so the commercial follow-up is aligned with the project scope.
Use the page as the handoff point for stock discussion, compatibility checks and delivery planning.
Trust signals
The page is designed to reduce the distance between specification review and a real commercial inquiry.
RFQ
01 MCX555A-ECAT can be used as the direct starting point for quotation, BOM sharing and availability discussion.
Review
02 The MCX555A-ECAT product page (NVIDIA ConnectX-5, single-port QSFP28, 100Gb/s InfiniBand, PCIe 3.0 x16) is positioned to support compatibility questions and project-fit review before commercial action is finalized.
Project
03 This model can be framed against HPC use cases such as supercomputing clusters, MPI-based simulations and scientific research workloads requiring low latency and high message rates, together with quantity and rollout timing.
Language
04 The bilingual route supports regional buying teams that need technical review and RFQ handling in two languages.
FAQ
Short answers that give buyers clearer expectations before they move into the quote form.
Can MCX555A-ECAT review move straight into a quote from this page?
Yes. This page is designed to move MCX555A-ECAT review directly into quotation, availability or BOM discussion.
Can compatibility be checked against the intended workload first?
Yes. The page can be used to frame compatibility questions around HPC use cases such as supercomputing clusters, MPI-based simulations and scientific research workloads before commercial follow-up.
Is there a way to compare related models before inquiring?
Yes. Related products within the same category give buyers an internal comparison path before they submit an inquiry.
Can part numbers, quantities and BOM context go into one inquiry?
Yes. The inquiry path is designed to accept part numbers, quantity, BOM context and availability requirements in one submission.
Related products
Internal links that support model comparison and adjacent product discovery.
Dual-port 100GbE PCIe adapter with RoCE, 750ns latency, 200Mpps throughput. Ideal for AI, cloud, and storage with NVMe-oF, SR-IOV, and ASAP2 offloads.
Dual-port 10GbE SFP28 adapter with RoCE, SR-IOV virtualization, and VXLAN offloads. Ideal for cloud, storage, and database servers requiring low latency.
High-performance OCP 3.0 SmartNIC with 25/50GbE ports, PCIe Gen4, IPsec encryption, and SDN acceleration for cloud data centers.
Dual-port 25/50GbE SmartNIC with PCIe Gen4, IPsec encryption, and Zero-Touch RoCE. Ideal for cloud, enterprise, and NFV workloads with 75Mpps throughput.
Dual-port 200Gb/s InfiniBand smart adapter with PCIe 4.0 support, hardware encryption, and in-network computing for HPC and AI workloads.
High-performance 400Gb/s dual-port adapter with PCIe 5.0 x16, hardware security offloads, and NVMe-oF support for AI/HPC data centers.
Dual-port 400Gb/s InfiniBand & RoCE smart adapter with PCIe Gen5 x16, GPUDirect RDMA, and hardware security offloads for AI, HPC, and cloud data centers.
800Gb/s AI networking adapter with PCIe Gen6, InfiniBand/Ethernet support, and GPUDirect RDMA for hyperscale AI data centers.
800Gb/s dual-port AI networking adapter with PCIe Gen6, InfiniBand/Ethernet support, and GPUDirect RDMA for hyperscale GPU clusters.
High-performance dual-port 100Gb/s InfiniBand & Ethernet NIC with RDMA offloads, ideal for HPC, AI, and cloud data centers. PCIe 4.0 ready.
NVIDIA MCX653106A-HDAT ConnectX-6 dual-port 200Gb/s InfiniBand/Ethernet smart adapter.

MCX555A-ECAT
Quick quote