Forum Discussion
Hey mrstorey, I chatted with Lenny about this (he posted your exact question over to me).
So long as the initiators and targets are all on the same L2 network, it works perfectly fine. It will probably come down more to the workload profile of the initiators. If your inits only have 2 x 25 Gb ports, you'd get constrained there, but if it's 2 x 100 Gb or 4 x 100 Gb, then you're more likely to hit CPU/memory limits on the inits.
While it can get a little more complex than I'd want to manage, even if the arrays have multiple network cards and the inits have multiple network cards, you can be specific about how each init's port/IP connects with each array's target controller/IP, and not worry about some of the issues we'd see with iSCSI and its pathing/routing.
Making the switch from /24 to /22 would be a little bit tricky, but you could do it one switch path at a time: work on the ct0.eth20 and ct1.eth20 connections and vmknicA on the hosts first. It might be useful to actually go to the ESXi hosts' NVMe-TCP target controller list and remove the ct0.eth20 and ct1.eth20 IPs from it, confirm that everything is still good, then switch the IPs on vmknicA and eth20 to /22. Then add those controller IPs back, confirm you've got active paths, and repeat with the other vmk and eth21.
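A rough sketch of one cutover leg from the host side — all names here are placeholders (vmhba65 for the NVMe-TCP storage adapter, vmk1 for vmknicA, the NQN and IPs are examples), the array-side eth20 re-IP happens in Purity in between, and the exact esxcli flags can vary by ESXi release, so treat this as an outline rather than a runbook:

```shell
# Placeholders: vmhba65 = NVMe-TCP adapter, vmk1 = vmknicA,
# nqn.2010-06.com.purestorage:flasharray.example = array subsystem NQN,
# 192.168.0.10 / 192.168.0.11 = ct0.eth20 / ct1.eth20 target IPs.

# 1. Drop the eth20 controller connections on this host.
esxcli nvme fabrics disconnect -a vmhba65 -s nqn.2010-06.com.purestorage:flasharray.example

# 2. Confirm paths via the remaining eth21 controllers are still active.
esxcli nvme controller list

# 3. Re-IP the vmkernel port with the /22 mask (255.255.252.0).
esxcli network ip interface ipv4 set -i vmk1 -I 192.168.0.20 -N 255.255.252.0 -t static

# 4. After re-IPing eth20 on the array, reconnect the controllers.
esxcli nvme fabrics connect -a vmhba65 -i 192.168.0.10 -s nqn.2010-06.com.purestorage:flasharray.example
esxcli nvme fabrics connect -a vmhba65 -i 192.168.0.11 -s nqn.2010-06.com.purestorage:flasharray.example

# 5. Verify active paths before repeating with the other vmk and eth21.
esxcli nvme controller list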
I truthfully wouldn't say it's a bad idea to use a /22 instead of a /24 for your L2 connections, though. It just gives you the comfort of knowing you aren't going to run out of IPs unless you get to some crazy scale of arrays and initiators (which is a good problem to have, since it means there's been some success). It really comes down to what you're comfortable using, and less on Pure saying not to do it.
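As a quick sanity check on the headroom difference (plain POSIX shell arithmetic, nothing vendor-specific — just subtracting the network and broadcast addresses; a gateway, if you use one, costs one more):

```shell
# Usable host addresses for a given prefix length.
usable() { echo $(( (1 << (32 - $1)) - 2 )); }

echo "/24: $(usable 24) usable IPs"   # 254
echo "/22: $(usable 22) usable IPs"   # 1022
```

Even with two controllers per array and two vmknics per host, a /22 leaves a lot of room before you're anywhere near that "crazy scale" ceiling.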
- mrstorey, 4 months ago
Perfect, thanks Alex - appreciated.
I'm inclined to go with 2 x /22s for our datacenter sites then, to buy us the flexibility of being able to present storage from any array to any cluster, and not hit a ceiling on the number of hosts we can connect storage to in a single site.
That's not to say we'll be mounting every host to every array by default, but I think having multiple VLANs would reduce our agility, i.e. "because this array is on VLAN 1x and 1y, you can't map storage to this cluster because its initiators are on VLANs 2x and 2y".
Whereas if everything were on a single pair, we'd have options. It does sound like it would be wise to avoid mounting datastores on a cluster from multiple arrays if you can avoid it, however, to limit the number of paths / rescan time etc. on a single pair of hosts' Mellanox initiators?
Thanks