Hello all. What is the best practice in terms of presenting RDM disks to VMware?
Any limitations? I see from the Pure documentation that an RDM can be presented to a host group, so more than one ESXi host is supported. What about presenting that same RDM to hosts that are in the same host group on the Pure array but belong to a different cluster within VMware? I have seen some RDM devices presented to all hosts in the entire datacenter in VMware. Thanks.
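For reference, a host-group connection can be made from the Purity CLI; a minimal sketch, assuming a host group named `esxi-cluster` containing the ESXi hosts and a volume `rdm-vol-01` backing the RDM (both names are placeholders):

```
# Connect the volume that backs the RDM to the whole host group, so every
# ESXi host in the group sees the same LUN (needed for vMotion / clustering).
purevol connect --hgroup esxi-cluster rdm-vol-01

# List the host and host-group connections for the volume to verify visibility.
purevol list --connect rdm-vol-01
```

Connecting at the host-group level rather than per host keeps the LUN ID consistent across all hosts in the group, which is what a shared RDM requires.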
SQL within VMware

If we are setting up SQL within VMware using NVMe vVols and NVMe controllers on the VM, is there a reason to have your data/log/tempdb volumes on separate virtual NVMe storage controllers (for reference: https://www.nocentino.com/posts/2021-09-27-sqlserver-vms-best-practices/#vm-configuration)? Or do those benefits not really carry over when using NVMe storage protocols?
Hi all, new to the channel but have a question.

I'm trying to copy a snapshot to an existing vVol using the command "Copy-PfaSnapshotToExistingVvolvmdk". It works great 95% of the time. However, we replicate data from one array to another, and I'm trying to do this on the secondary array. 5% of the time the command fails with an error indicating that it can't proceed because all of the data is not available yet. The worst part is that my entire PowerShell script stops executing when that happens, even when I wrap the call in a Try/Catch block. My question is: does anyone know how to verify that a snapshot is 100% ready to use before I attempt the Copy-PfaSnapshotToExistingVvolvmdk command?
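One thing worth checking first: PowerShell's Try/Catch only traps *terminating* errors, so if the cmdlet raises a non-terminating error the catch block never fires. Adding `-ErrorAction Stop` forces its errors to terminate, which makes them catchable. A hedged sketch of a retry wrapper, assuming `$copyParams` is a placeholder hashtable holding your usual arguments for the cmdlet (I'm not aware of a documented "snapshot fully replicated" check, so this simply retries until the data is available):

```powershell
# Sketch: retry the copy until the replicated snapshot is usable.
# -ErrorAction Stop turns the cmdlet's errors into terminating errors,
# which is what Try/Catch actually traps.
$maxAttempts = 10
for ($attempt = 1; $attempt -le $maxAttempts; $attempt++) {
    try {
        Copy-PfaSnapshotToExistingVvolvmdk @copyParams -ErrorAction Stop
        break   # success: leave the retry loop
    }
    catch {
        Write-Warning "Attempt ${attempt} failed: $($_.Exception.Message)"
        if ($attempt -eq $maxAttempts) { throw }  # re-throw after the final attempt
        Start-Sleep -Seconds 30                   # give replication time to finish
    }
}
```

On the array side you may also be able to poll the replication transfer status of the snapshot (e.g. `purevol list --snap --transfer`, which reports progress/completion) before attempting the copy; treat that as an assumption to verify against your Purity version.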
Hi Guys - a customer has pointed out some confusing information regarding FA, ESXi iSCSI and port binding

According to https://core.vmware.com/resource/best-practices-running-vmware-vsphere-iscsi#sec7262-sub12, port binding is unsupported if the FlashArray has iSCSI interfaces in two subnets. However, https://support.purestorage.com/Solutions/VMware_Platform_Guide/User_Guides_for_VMware_Solutions/FlashArray_VMware_Best_Practices_User_Guide/VMware_and_iSCSI_FAQs#NIC_Teaming_vs_Port_Binding.2C_which_should_I_use.3F states: "So which should you use? The answer here is clear, port binding whenever possible. This is not only recommended as a best practice with Pure Storage but by VMware as well."
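To see what a given host is actually doing today, the bound VMkernel ports and their subnets can be inspected from the ESXi shell; a sketch, where the adapter name `vmhba64` is a placeholder for your software iSCSI adapter:

```
# VMkernel NICs currently bound to the software iSCSI adapter (empty output
# means no port binding is configured on this host).
esxcli iscsi networkportal list --adapter=vmhba64

# IPv4 addresses/netmasks of the VMkernel interfaces, to confirm whether the
# initiator-side ports sit in one subnet or two.
esxcli network ip interface ipv4 get
```

Comparing the bound ports against the interface subnets is a quick way to tell which of the two documents' scenarios a given host actually falls into.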
Migrating my VMware Microsoft SQL VM clusters to use vVols

Good day everyone. Since migrating my VMware Microsoft SQL VM clusters to use vVols, I am finding it extremely difficult to map a Windows drive to the VM hard disk number, and then to the Pure vVol. I have PowerShell scripts that perform that function with RDMs, but apparently they don't work with vVols. I have been searching GitHub and other places with no luck. Environment: VMware 8 hosting Windows Server 2022 failover cluster VMs with attached vVols. Any assistance would be greatly appreciated.
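One approach that works with RDMs tends to carry over to vVols: match on serial numbers, since a Windows guest reports a disk SerialNumber derived from the backing FlashArray volume's serial. A hedged sketch using the Pure Storage PowerShell SDK v2 (the endpoint is a placeholder, and the cmdlet/parameter names are assumptions to verify against your module version):

```powershell
# 1) Inside the Windows guest (or via Invoke-Command): disk number + serial.
$guestDisks = Get-Disk | Select-Object Number, SerialNumber

# 2) On the FlashArray: volume names + serials.
$array = Connect-Pfa2Array -Endpoint 'array.example.com' `
                           -Credential (Get-Credential) -IgnoreCertificateError
$vols  = Get-Pfa2Volume -Array $array | Select-Object Name, Serial

# 3) Join the two sets on serial. The guest-side serial usually embeds the
#    array serial, so a substring match is more forgiving than strict equality.
foreach ($d in $guestDisks) {
    $match = $vols | Where-Object { $d.SerialNumber -like "*$($_.Serial)*" }
    [PSCustomObject]@{
        DiskNumber = $d.Number
        Serial     = $d.SerialNumber
        PureVolume = $match.Name
    }
}
```

From the volume name you can then walk back to the VM hard disk via the vVol's volume group on the array; the exact naming convention varies by Purity version, so verify the mapping on one disk by hand before scripting it.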
Having issues with long HBA rescan times. Any pointer in the right direction is appreciated. We already have a high-priority ticket in, but this community has been helpful more than once! vSphere 7.0.3.00600, Cisco UCS, Cisco MDS 9132 switches, Pure X90R3 arrays.
We have 3 arrays at this datacenter in which we run VMs leveraging vVols

In all 3 upgrades we saw VMs that reported 200-85,000 ms of latency while the controllers were being upgraded/rebooted. From a reporting perspective within Pure1, we see the majority of latency at the VM level and not at the array level. At the array level we see latency hitting 3-14 ms. As we have seen this across multiple systems, it feels like it's either an issue with Purity or with the vCenter/VASA configuration. A few things we have looked at/checked:
• We have confirmed that vCenter is running on a different cluster/storage platform.
• We have confirmed that no VMs have any QoS enabled via SPBM policies.
• We do see that the Storage Provider versions are reported differently between the arrays' controllers. However, it's our understanding that the version reported within vCenter does not update and only reflects the version at the time the provider was registered.
• Reported provider versions: 1.1.1 to 2.0.0
• vCenter version: 7.0.3.01200
• Purity upgrade: 6.3.8 to 6.3.13
Due to the lengthy pause the VMs are seeing, it feels like they are not failing over to the secondary controller/provider. Our next set of array upgrades is the week of Sept 11th, so we have about 2 weeks to figure out what's happening. If anybody has any suggestions or things to try, we are open to all of them.
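On the provider-version question, PowerCLI can show what vCenter currently believes about each VASA provider; a short sketch (the selected property names are what I'd expect from `Get-VasaProvider`, so verify them against your PowerCLI version):

```powershell
# List the registered VASA providers with the status and version vCenter has
# on record; re-registering a provider is what refreshes the reported version.
Get-VasaProvider | Select-Object Name, Status, Version, Url
```

A provider stuck in a non-healthy status here during the upgrade window would line up with the theory that VMs are not rolling over to the secondary controller's provider.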
Hi everyone, we're about to virtualize some physical servers

My idea, since they have some larger data disks which are already on Pure, was to use VMware Converter for the OS disk (local drive) and use vVols for the data disks. I know I read a guide about it somewhere (probably on Cody Hosterman's blog) but can't find it anymore. Can someone push me in the right direction? Thanks in advance.
Hi everyone, our big show stopper for moving to vVols has always been the lack of SAN-based backup support in VDDK/VADP. To my surprise, I noticed today that with VDDK 8.0.3 it is even supported with NVMe-oF: https://docs.vmware.com/en/VMware-vSphere/8.0/rn/vddk-803-release-notes.html. As the release notes on VMware's side are a little vague, does anyone know when vVol backup/restore support over SAN transport was introduced? The VDDK 8.0 notes say it's not supported; that's the last clear statement I found before 8.0.3, which supports all of it.