Standard

NVMe-oF / NVMe over TCP

A protocol family for accessing NVMe storage devices over network fabrics (RDMA, TCP, Fibre Channel), enabling disaggregated flash storage with near-local access latency.

Summary

Where it fits

NVMe-oF provides the high-performance storage tier beneath S3-compatible object stores. Systems such as VAST Data and MinIO can use NVMe-oF to reach disaggregated flash arrays at microsecond-scale latency, removing the local-disk constraint on high-performance object storage.

Misconceptions / Traps
  • NVMe over TCP is not the same as NVMe over RDMA: the TCP transport adds tens of microseconds of latency versus single-digit microseconds for RDMA, but it runs on standard Ethernet without RDMA-capable NICs.
  • NVMe-oF disaggregates storage from compute but introduces network reliability as a storage dependency. Network partitions become storage failures.
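
The transport distinction above is visible directly in how a host attaches a remote target. A minimal sketch using nvme-cli on Linux, assuming the nvme-tcp kernel module is available; the address, port, and NQN are placeholders, and the commands need root plus a reachable target. Note that only the transport flag separates the TCP and RDMA paths:

```shell
# Load the TCP transport module (RDMA would use nvme-rdma instead)
modprobe nvme-tcp

# Discover subsystems exported by a target (address/port are placeholders;
# 4420 is the IANA-assigned NVMe-oF port)
nvme discover -t tcp -a 192.0.2.10 -s 4420

# Connect: only -t differs between the TCP and RDMA variants
nvme connect -t tcp  -a 192.0.2.10 -s 4420 -n nqn.2024-01.io.example:flash-pool
# nvme connect -t rdma -a 192.0.2.10 -s 4420 -n nqn.2024-01.io.example:flash-pool

# The remote namespace now appears as a local block device, e.g. /dev/nvme1n1
nvme list
```

Once connected, the remote namespace is indistinguishable from a local NVMe device to the block layer, which is what lets object-store backends treat disaggregated flash as local media.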
Key Connections
  • enables NVMe-backed Object Tier — the protocol for accessing disaggregated flash
  • scoped_to Object Storage — underlying transport for high-performance object stores

Definition

What it is

A transport protocol specification for accessing NVMe storage devices over network fabrics (TCP, RDMA, Fibre Channel), enabling disaggregated all-flash storage with near-local-disk latency.
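
On the TCP transport, everything on the wire is framed as PDUs, and a connection opens with an Initialize Connection Request (ICReq). A sketch of that 128-byte PDU in Python, with the field layout being a hedged reading of the NVMe/TCP transport binding rather than a full initiator:

```python
import struct

def build_icreq(maxr2t: int = 0, hdr_digest: bool = False, data_digest: bool = False) -> bytes:
    """Build an NVMe/TCP Initialize Connection Request (ICReq) PDU.

    Layout sketch per the NVMe/TCP transport binding:
      byte 0       PDU-Type = 00h (ICReq)
      byte 1       FLAGS (none defined for ICReq)
      byte 2       HLEN = 128  (the whole PDU is header)
      byte 3       PDO  = 0    (no data offset)
      bytes 4-7    PLEN = 128  (total PDU length, little-endian)
      bytes 8-9    PFV  = 0    (PDU format version)
      byte 10      HPDA        (host PDU data alignment)
      byte 11      DGST        (bit 0: header digest, bit 1: data digest)
      bytes 12-15  MAXR2T      (max outstanding R2Ts, 0's based)
      bytes 16-127 reserved
    """
    dgst = (1 if hdr_digest else 0) | (2 if data_digest else 0)
    common = struct.pack("<BBBBI", 0x00, 0, 128, 0, 128)   # common PDU header
    specific = struct.pack("<HBBI", 0, 0, dgst, maxr2t)    # ICReq-specific fields
    return common + specific + bytes(112)                  # pad reserved region

# Example: request header digests and up to 4 outstanding R2Ts (0's based)
pdu = build_icreq(maxr2t=3, hdr_digest=True)
```

After establishing the TCP connection (conventionally to port 4420), an initiator sends this PDU and waits for the target's ICResp before issuing Fabrics Connect commands; all subsequent NVMe commands are carried in capsule PDUs on the same stream.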

Why it exists

Object storage backends ultimately write to physical storage. NVMe-oF allows the flash tier to be disaggregated from compute while maintaining the low-latency characteristics of local NVMe, enabling flexible all-flash object storage architectures.

Primary use cases

Disaggregated flash storage backends for object stores, high-performance storage networking, remote NVMe access for AI/ML workloads.
