Hello all. What is the best practice for presenting RDM disks to VMware?
Are there any limitations? From the Pure documentation I see that an RDM can be presented to a host group, so more than one ESXi host is supported. What about presenting that same RDM to hosts that are in the same host group on the Pure array but are part of a different cluster within VMware? I have seen some RDM devices presented to all hosts in the entire datacenter in VMware. Thanks.
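
For context on the host-group piece: on the FlashArray side, connecting the volume at the host-group level (rather than host by host) is what keeps the LUN ID consistent across every ESXi host in the group, which shared RDMs need. A minimal sketch, assuming the Pure Storage PowerShell SDK v2 and that New-Pfa2Connection accepts host-group and volume names as shown; the array, host group, and volume names are all illustrative:

```powershell
# Minimal sketch, assuming the Pure Storage PowerShell SDK v2 and that
# New-Pfa2Connection takes -HostGroupNames/-VolumeNames as shown; all names
# below are illustrative. A host-group connection presents the volume to every
# ESXi host in the group at the same LUN ID, which shared RDMs require.
Import-Module PureStoragePowerShellSDK2

$fa = Connect-Pfa2Array -Endpoint 'array01.example.com' `
                        -Credential (Get-Credential) `
                        -IgnoreCertificateError

# Connect the RDM-backing volume to the host group, not to individual hosts
New-Pfa2Connection -Array $fa -HostGroupNames 'esxi-hg-01' -VolumeNames 'rdm-wsfc-01'
```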

Hi all, I'm new to the channel but have a question. I'm trying to copy a snapshot to an existing vVol using the command "Copy-PfaSnapshotToExistingVvolVmdk". It works great 95% of the time. However, we replicate data from one array to another, and I'm trying to do this on the secondary array. About 5% of the time the command fails with an error indicating that it can't proceed because all of the data is not available yet. The worst part is that my entire PowerShell script stops executing when that happens, even when I wrap the call in a Try/Catch block. Does anyone know how to verify that a snapshot is 100% ready to use before I attempt the Copy-PfaSnapshotToExistingVvolVmdk command?
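
One way to attack the 5% case is to poll the snapshot's replication transfer on the target array and only run the copy once the transfer reports complete, and to pass -ErrorAction Stop so the failure becomes a terminating error the catch block can actually see. A hedged sketch, assuming the Pure Storage PowerShell SDK v2's Get-Pfa2VolumeSnapshotTransfer cmdlet and its Progress property behave as named; the array, snapshot, and VM names are illustrative:

```powershell
# Sketch only: poll the replication transfer for the snapshot on the target
# array before copying. Get-Pfa2VolumeSnapshotTransfer and its Progress
# property are my assumption of the SDK v2 surface; names are illustrative.
Import-Module PureStoragePowerShellSDK2
Import-Module PureStorage.FlashArray.VMware

Connect-VIServer -Server 'vcenter.example.com'
$fa = Connect-Pfa2Array -Endpoint 'array02.example.com' -Credential (Get-Credential) -IgnoreCertificateError

$snapName = 'array01:pg-sql.daily.sql-data'           # replicated snapshot on the secondary
$vmdk = Get-HardDisk -VM (Get-VM -Name 'sql-dev') |
        Where-Object { $_.Name -eq 'Hard disk 2' }    # target vVol VMDK

# Wait until the snapshot's transfer reports 100% before touching it
do {
    $xfer = Get-Pfa2VolumeSnapshotTransfer -Array $fa -Name $snapName
    if ($xfer.Progress -lt 1.0) { Start-Sleep -Seconds 30 }
} while ($xfer.Progress -lt 1.0)

try {
    # -ErrorAction Stop makes the failure terminating so the catch block fires
    Copy-PfaSnapshotToExistingVvolVmdk -snapshotName $snapName -vmdk $vmdk -ErrorAction Stop
}
catch {
    Write-Warning "Copy failed: $_"   # script keeps running
}
```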

SQL within VMware

If we are setting up SQL Server within VMware using NVMe vVols and NVMe controllers on the VM, is there a reason to put the data/log/tempdb volumes on separate virtual NVMe storage controllers (for reference: https://www.nocentino.com/posts/2021-09-27-sqlserver-vms-best-practices/#vm-configuration)? Or do those benefits not really translate when using NVMe storage protocols?
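
For what it's worth, the rationale in the linked post is queue parallelism: each virtual controller brings its own queues, so spreading data/log/tempdb across controllers avoids one hot controller becoming the bottleneck, the same argument as multiple PVSCSI adapters. Whether virtual NVMe changes that math is exactly the open question here, but adding controllers is cheap to test. A hedged PowerCLI sketch that adds a second virtual NVMe controller through the vSphere API (VM name and bus number are illustrative):

```powershell
# Sketch: add a second virtual NVMe controller to a VM via the vSphere API,
# so data/log/tempdb disks can be split across controllers. VM name and bus
# number are illustrative; the VM needs a hardware version with NVMe support.
Connect-VIServer -Server 'vcenter.example.com'

$vm = Get-VM -Name 'sql01'

$nvme = New-Object VMware.Vim.VirtualNVMEController
$nvme.BusNumber = 1        # controller 0 typically carries the OS disk
$nvme.Key = -101           # negative key marks a new device

$devChange = New-Object VMware.Vim.VirtualDeviceConfigSpec
$devChange.Operation = [VMware.Vim.VirtualDeviceConfigSpecOperation]::add
$devChange.Device = $nvme

$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.DeviceChange = @($devChange)

$vm.ExtensionData.ReconfigVM($spec)   # synchronous reconfigure
```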

Migrating my VMware Microsoft SQL VM clusters to use vVols

Good day everyone. Since migrating my VMware Microsoft SQL VM clusters to use vVols, I am finding it extremely difficult to map a Windows drive to the VM hard disk number, and then to the Pure vVol. I have PowerShell scripts that perform that function with RDMs, but apparently they don't work with vVols. I have been searching GitHub and elsewhere with no luck. The environment is VMware 8 hosting Windows Server 2022 failover cluster VMs with attached vVols. Any assistance would be greatly appreciated.
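
The usual chain with vVols is guest disk serial, to VMDK backing UUID, to FlashArray volume. A hedged sketch of that chain, assuming disk.EnableUUID is set on the VM (so Windows reports the VMDK UUID as the disk serial) and assuming the PureStorage.FlashArray.VMware module's Get-VvolUuidFromHardDisk and Get-PfaVolumeNameFromVvolUuid cmdlets work as their names suggest; the VM name and serial value are illustrative:

```powershell
# Hedged sketch: map a Windows disk to its VMDK, then to the FlashArray vVol.
# Assumes disk.EnableUUID = TRUE on the VM and that the two Pure cmdlets named
# below exist with these parameters; VM name and serial value are illustrative.
Import-Module PureStorage.FlashArray.VMware

Connect-VIServer -Server 'vcenter.example.com'

# Inside the Windows guest:  Get-Disk | Select-Object Number, SerialNumber
$guestSerial = '6000C295A1B2C3D4E5F60718293A4B5C'   # illustrative serial from Get-Disk

$vm = Get-VM -Name 'sqlclu-node1'

# Match the guest serial to the VMDK backing UUID (normalize hyphens/spaces)
$vmdk = Get-HardDisk -VM $vm | Where-Object {
    ($_.ExtensionData.Backing.Uuid -replace '[- ]', '') -eq $guestSerial
}

# Resolve the VMDK to its FlashArray volume; $fa is an existing array connection
$vvolUuid = Get-VvolUuidFromHardDisk -vmdk $vmdk
Get-PfaVolumeNameFromVvolUuid -flasharray $fa -vvolUUID $vvolUuid
```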

Hi everyone, we're about to virtualize some physical servers. My idea, since they have some larger data disks which are already on Pure, was to use VMware Converter for the OS disk (local drive) and use vVols for the data disks. I know I read a guide about this somewhere (probably on Cody Hosterman's blog) but I can't find it anymore. Can someone point me in the right direction? Thanks in advance.

Hi everyone, our big show stopper for moving to vVols has always been the lack of SAN-based backup support in VDDK/VADP. To my surprise, I noticed today that with VDDK 8.0.3 it is even supported with NVMe-oF: https://docs.vmware.com/en/VMware-vSphere/8.0/rn/vddk-803-release-notes.html. Since the release notes on VMware's side are a little vague, does anyone know when vVol backup/restore support over the SAN transport was introduced? The VDDK 8.0 documentation says it's not supported; that's the last clear statement I found before 8.0.3, which supports all of it.

We have 3 arrays at this datacenter on which we run VMs leveraging vVols. In all 3 upgrades we saw VMs reporting anywhere from 200 ms to 85,000 ms of latency while the controllers were being upgraded/rebooted. From a reporting standpoint within Pure1, we see the majority of the latency at the VM level and not at the array level; at the array level, latency hits 3-14 ms. Since we have seen this across multiple systems, it feels like an issue with either Purity or the vCenter/VASA configuration. A few things we have looked at/checked:

• We have confirmed that vCenter is running on a different cluster/storage platform.
• We have confirmed that no VMs have any QoS enabled via SPBM policies.
• We do see that the storage provider versions are reported differently between the arrays' controllers. However, it's our understanding that the version reported within vCenter does not update and only reflects the version at the time the provider was registered.
• Reported provider versions: 1.1.1 to 2.0.0
• vCenter version: 7.0.3.01200
• Purity upgrade: 6.3.8 to 6.3.13

Given the lengthy pause the VMs are seeing, it feels like they are not failing over to the secondary controller/provider. Our next set of array upgrades is the week of Sept 11th, so we have about two weeks to figure out what's happening. If anybody has any suggestions or things to try, we are open to all of them.
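
One thing worth checking in the meantime is what vCenter itself thinks of the registered VASA providers, since a provider that never fails over to the standby controller would line up with the pause described above. A sketch using PowerCLI's standard VASA cmdlets; re-registering (the commented lines) is one way to force vCenter to refresh the provider version and state, and the URL shown is my assumption of the Pure provider endpoint format:

```powershell
# Sketch: inspect the VASA providers vCenter has registered. The re-register
# lines are commented out; the provider URL format is an assumption and the
# provider name is illustrative.
Connect-VIServer -Server 'vcenter.example.com'

Get-VasaProvider | Select-Object Name, Status, Version, Url | Format-Table -AutoSize

# To force vCenter to pick up the post-upgrade provider version/state:
# Remove-VasaProvider -Provider (Get-VasaProvider -Name 'flasharray-ct0') -Confirm:$false
# New-VasaProvider -Name 'flasharray-ct0' -Credential (Get-Credential) `
#                  -Url 'https://<ct0-mgmt-ip>:8084/version.xml'
```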

VMware ESXi 7.0.3, build 23307199. A customer asked to move a VM from a non-replicated to a replicated LUN (synchronous replication). The storage team supplied me with a new LUN and made sure it was on the same array, so Storage vMotion would use VAAI and be quick. But it wasn't. While waiting, and noticing the Storage vMotion was much slower than expected, I checked on the host whether VAAI was working and saw:

```
esxcli storage core device vaai status get | grep naa.624a937097ffbb24a2f6483c02ea169b -A 5
naa.624a937097ffbb24a2f6483c02ea169b
   VAAI Plugin Name:
   ATS Status: supported
   Clone Status: unsupported
   Zero Status: supported
   Delete Status: supported
```

Moving the 3 TB VM took 2 hours in total. Because it was late at night I didn't investigate further, but this morning I wanted to dive into it and noticed:

```
esxcli storage core device vaai status get -d=naa.624a937097ffbb24a2f6483c02ea169b
naa.624a937097ffbb24a2f6483c02ea169b
   VAAI Plugin Name:
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: supported
```

How come the clone status is now 'supported'?
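
When chasing this kind of thing, the same esxcli check can be run against every host in the cluster at once through PowerCLI's Get-EsxCli -V2 interface, which makes it easy to spot a host whose VAAI view of the device differs from the others. A sketch (cluster name is illustrative; the naa device ID is the one from the post):

```powershell
# Run the same VAAI status check across all hosts in a cluster via PowerCLI.
# Cluster name is illustrative; the naa device ID is the one from the post.
Connect-VIServer -Server 'vcenter.example.com'

foreach ($vmhost in Get-Cluster -Name 'prod-cluster' | Get-VMHost) {
    $esxcli  = Get-EsxCli -VMHost $vmhost -V2
    $cliArgs = $esxcli.storage.core.device.vaai.status.get.CreateArgs()
    $cliArgs.device = 'naa.624a937097ffbb24a2f6483c02ea169b'

    Write-Host "== $($vmhost.Name) =="
    $esxcli.storage.core.device.vaai.status.get.Invoke($cliArgs)
}
```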