Forum Discussion
Q1 (subnet size: /24 vs /23-/22)
No real problem using /23 or /22 for the NVMe/TCP fabrics. NVMe/TCP is still unicast TCP; the main tradeoff is a bigger L2 blast radius (more ARP/neighbor state, broader troubleshooting scope) vs. managing more VLANs/subnets. VMware's NVMe/TCP guidance is basically "use port binding" with a vmk per subnet; it doesn't constrain you to /24.
https://www.vmware.com/docs/configuring-nvmeof-tcp
https://support.purestorage.com/bundle/m_howtos_for_vmware_solutions/page/Solutions/VMware_Platform_Guide/How-To_s_for_VMware_Solutions/NVMe_over_Fabrics/topics/concept/c_confirm_nvmetcp_support_02.html
Q2 (multiple FlashArrays into one cluster over the same NVMe/TCP fabrics)
Yes, totally valid: a single VMware cluster can consume datastores from multiple arrays over the same two NVMe/TCP fabrics using the same host initiators (NQNs). It only gets “silly” when ops/failure-domain isolation matters (tenant separation, different change windows, different teams), or when the storage VLANs become too large/complex to operate cleanly.
Q3 (host count / mapping limit)
1,000–1,024 host NQNs per array is the commonly cited limit across many FlashArray models; just validate it for your exact FA//X or FA//C generation and Purity version in the current Pure docs/support matrix.
Bigger L2 subnet = more ARP chatter + bigger tables. In a /22, every host still only ARPs for what it talks to, but when you have hundreds of hosts and multiple arrays, events like boot storms, vMotion storms, link flaps, failovers, or maintenance can cause a burst of ARP resolution and table churn.
Where it surfaces:
Hosts: ARP cache churn (CPU spikes, transient pathing delays while neighbors resolve).
Switches: CAM/ARP/ND scale and control-plane load (esp. if your ToR is doing any L3 SVIs for those VLANs).
Blast radius: an ARP/loop/broadcast issue hits more endpoints in a larger L2.
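A back-of-envelope calculation makes the scale concrete. All counts below are hypothetical placeholders; plug in your own host/array/port numbers.

```python
# Neighbor-table math for one flat L2 storage VLAN (illustrative).
hosts = 300          # ESXi hosts on this fabric (assumption)
arrays = 4           # FlashArrays (assumption)
ports_per_array = 4  # target IPs per array on this fabric (assumption)

target_ips = arrays * ports_per_array

# Each host only ARPs for the targets it has paths to:
host_arp_entries = target_ips

# But a ToR doing L3 (an SVI) for the VLAN must hold an ARP/ND
# entry for every endpoint in the broadcast domain:
tor_neighbor_entries = hosts + target_ips

print(host_arp_entries)      # per-host ARP entries for storage
print(tor_neighbor_entries)  # switch-side neighbor entries
```

Per-host state stays small; it's the switch control plane and the simultaneous-churn events (boot storms, failovers) that feel the larger domain.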
How to keep it under control:
Prefer smaller L2 domains (multiple VLANs) even if you keep two fabrics.
If you need more than /24 capacity, consider routing (L3) between host storage VLANs and array target VLANs. That can reduce host ARP to “next hop only” instead of every target IP (the router handles ARP to targets). NVMe/TCP works fine routed; it’s an operational choice.
On the network side: make sure your switches are sized for ARP/ND scale, and use storm control / ARP rate limiting / ARP inspection as appropriate.
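The routed option in practice: when hosts and array targets sit in different subnets, a host resolves only its next hop, not every target IP. A minimal sketch with `ipaddress` (subnets and addresses below are made up for illustration):

```python
import ipaddress

# Routed (L3) layout: host vmks and array targets in separate subnets.
host_net   = ipaddress.ip_network("10.10.10.0/24")  # host vmk subnet (example)
target_net = ipaddress.ip_network("10.10.20.0/24")  # array target subnet (example)
gateway    = ipaddress.ip_address("10.10.10.1")     # host-side SVI/gateway

target_ip = ipaddress.ip_address("10.10.20.50")     # one array port

# Flat L2 would ARP for target_ip directly; routed, the host
# resolves only the next hop, regardless of how many targets exist:
next_hop = target_ip if target_ip in host_net else gateway
print(next_hop)
```

One ARP entry per host for storage instead of one per target; the router carries the target-side neighbor state instead.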
Using /22 isn’t wrong, but it enlarges the failure domain and raises the risk of ARP/neighbor churn, so either segment with more VLANs or route if you’re pushing into “hundreds of hosts + multiple arrays” territory.