Notice
This document is for a development version of Ceph.
NVMe/TCP Initiator for VMware ESX
Prerequisites
A VMware ESXi host running VMware vSphere Hypervisor (ESXi) version 7.0U3 or later.
Deployed Ceph NVMe-oF gateway.
Ceph cluster with NVMe-oF configuration.
Subsystem defined in the gateway.
Configuration
The following instructions use the vSphere web client and the esxcli command-line tool.
Enable NVMe/TCP on a NIC:
esxcli nvme fabric enable --protocol TCP --device vmnicN
Replace N with the number of the NIC.
Tag a VMkernel NIC to permit NVMe/TCP traffic:
esxcli network ip interface tag add --interface-name vmkN --tagname NVMeTCP
Replace N with the ID of the VMkernel interface.
Configure the VMware ESXi host for NVMe/TCP:
List the NVMe-oF adapter:
esxcli nvme adapter list
Discover NVMe-oF subsystems:
esxcli nvme fabric discover -a NVME_TCP_ADAPTER -i GATEWAY_IP -p 4420
Connect to an NVMe-oF gateway subsystem:
esxcli nvme connect -a NVME_TCP_ADAPTER -i GATEWAY_IP -p 4420 -s SUBSYSTEM_NQN
List the NVMe/TCP controllers:
esxcli nvme controller list
List the NVMe-oF namespaces in the subsystem:
esxcli nvme namespace list
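The steps above can be collected into a single sketch script. GATEWAY_IP, SUBSYSTEM_NQN, vmnic0, vmk0, and vmhba64 are placeholder values, not taken from this document (the real adapter name comes from `esxcli nvme adapter list`). The run() wrapper only echoes each command so the sequence can be previewed off-host; drop the wrapper to execute the commands for real on the ESXi host.

```shell
#!/bin/sh
# Sketch of the NVMe/TCP initiator setup sequence from this page.
# All values below are placeholders -- substitute your own.
GATEWAY_IP="10.0.0.1"                       # assumption: example gateway address
SUBSYSTEM_NQN="nqn.2016-06.io.spdk:cnode1"  # assumption: example subsystem NQN
ADAPTER="vmhba64"                           # assumption: from 'esxcli nvme adapter list'

# Preview wrapper: prints each command instead of running it.
# On the ESXi host, remove the 'echo' so commands actually execute.
run() { echo "+ $*"; }

run esxcli nvme fabric enable --protocol TCP --device vmnic0
run esxcli network ip interface tag add --interface-name vmk0 --tagname NVMeTCP
run esxcli nvme fabric discover -a "$ADAPTER" -i "$GATEWAY_IP" -p 4420
run esxcli nvme connect -a "$ADAPTER" -i "$GATEWAY_IP" -p 4420 -s "$SUBSYSTEM_NQN"
run esxcli nvme controller list
run esxcli nvme namespace list
```

Previewing first makes it easy to check the IP, port, and NQN before touching the host.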
Verify that the initiator has been set up correctly:
From the vSphere client go to the ESXi host.
On the Storage page go to the Devices tab.
Verify that the NVMe/TCP disks are listed in the table.
Brought to you by the Ceph Foundation
The Ceph Documentation is a community resource funded and hosted by the non-profit Ceph Foundation. If you would like to support this and our other efforts, please consider joining now.