Pittsburgh PUG - Launch Party @ River's Casino Drum Bar
You’re Invited! PUG - Time to Launch
Celebrating Pure Storage + Nutanix + Expedient

Join us for a special Pure User Group event as we celebrate the launch of two of the industry’s most loved technologies: Pure Storage and Nutanix are coming together in a powerful new way. Even better: Expedient becomes the FIRST Cloud Service Provider to bring this combined solution to market. This event is all about bringing our Pittsburgh-area community together to learn, connect, and celebrate a major milestone in the hybrid cloud and on-prem cloud ecosystem.

What You’ll Experience
- A deep dive into the new Pure Storage + Nutanix integration
- How Expedient is delivering it as a fully managed cloud service
- Real-world use cases for cloud-smart modernization
- Customer-driven conversation, not vendor slides
- Networking with peers, experts, and the local PUG community
- Food, drinks, and launch-party fun

Why This Matters
This three-way partnership brings customers:
- NVMe-fast, always-on performance
- Effortless scalability and hybrid cloud freedom
- A cloud service built for simplicity and resiliency
- Lower operational overhead: no firefighting, no forklift upgrades

It’s the stack that “just works,” so your teams can focus on innovation instead of maintenance.

REGISTER NOW

Pure Report Podcast: Nutanix and Pure Storage: Propelling Enterprise Virtualization Forward
Check out the latest edition of the Pure Report podcast, where we unpack the GA announcement for the Nutanix and Pure Storage partnership. Hear from Cody_Hosterman and Nutanix VP of Product Ketan Shah on the technical details of the integration and how this partnership came together.
Nutanix and Pure Storage are Changing Virtualization
Big news, virtualization fans! The combined Nutanix + Pure Storage solution is now available. You can read all about it in the blog and get further details on our Nutanix partner page. We’ve been talking to lots of folks about this offering, both Pure users and non-users, and the consensus is that people are glad to see some new stability being brought to the virtualization world with a solution from two customer-centric organizations. To give you a sense of the value of using external storage with FlashArray, an early adopter (I’m not at liberty to name them) running a nearly 2 PB database workload will save about 50% on rack space, with significant savings on power, cooling, and operational costs. Please contact your Pure sales team if you want to learn more about this solution.

Pure Report Podcast - Nutanix and Pure
Join industry experts Don Poorman and Erin Stevens as we unpack the latest trends in virtualization strategies, why the timing is perfect for a new approach, and how Nutanix, with its AHV hypervisor, is well positioned with Pure to deliver a solution designed from the ground up for high performance and enterprise scale. This episode explores the "what" and "how" of the jointly engineered Pure and Nutanix solution, detailing how Nutanix AHV hosts can leverage Pure FlashArray for shared storage, offering an apples-to-apples replacement for traditional setups. We'll cover the details around joint integrations, including the NVMe/TCP connection between Nutanix and Pure FlashArray, and how VMs are managed through Nutanix Prism for a granular vVols-like experience. Learn about the specific workloads and use cases this solution targets, particularly environments needing a balance of computing and networking, and those with high transactional database demands.
Who's using Pure Protect?
Hey everyone,

Just wondering if anyone else is using Pure Protect yet. We have gone through the quick start guide and have a VMware-to-VMware configuration set up. We have configured our first policy and group utilizing a test VM, but it seems to be stuck in the protection phase. I would be very interested to hear what others have seen or experienced.

-Charles

Azure lovers, you'll like this one...
☁️ Did everyone catch the big cloud news at last week's Accelerate? Pure Storage Cloud for Azure Native is now GA! That's right, it's here! You can now tap into Pure's block storage directly inside Azure: no extra layers, no hassle. It works just like the rest of your Azure services, but with the simplicity and efficiency you expect from Pure Storage. If you are thinking about how to get more out of Azure, definitely give 👉 this blog a read.

Pure User Group | Denver: Winter Meetup Sponsored by Nutanix
Don’t miss our Pure User Group (PUG) Winter Meetup, now sponsored by Nutanix, on Thursday, October 16th from 3:00–5:00 PM MT at Tavern 242 in Morrison, CO. Whether you’re a longtime Pure Storage customer or just joining the community, this is your chance to connect with fellow users, hear about the future of Pure Storage, and discover what’s next in the world of hypervisors, all while enjoying local brews and getting your skis waxed and prepped for the season.

What to Expect:
- Learn about Pure Storage’s vision for next-generation hypervisors: how we’re approaching modern virtualization, empowering choice and agility, and driving new capabilities for data resilience and performance.
- A special introduction to Nutanix’s hypervisor technology: discover how Pure Storage and Nutanix are partnering to enable seamless, high-performance virtualization solutions for organizations of every size.
- Drinks, snacks, and open networking with Pure Storage and Nutanix experts, PLUS on-site ski waxing and prep to get you ready for winter on the slopes.

In partnership with: Nutanix

Date & Time: October 16, 2025, 3:00 PM – 5:00 PM MT
Location: Tavern 242, 4285 S. Eldridge St., Unit A 09, Morrison, CO 80465

Register Now!

The Spokane Pure User Group is coming Sept. 22nd
Join us for a dynamic Pure Storage Customer Event on September 22 at Uprise Brewing Co! This exclusive gathering brings together Pure Storage experts, featured speakers, and fellow customers for an afternoon packed with learning, networking, and fun.

Explore our featured talk tracks:
- Accelerate Updates: Get an inside look at the latest enhancements to Pure Accelerate. Our experts will share what’s new, showcase the most impactful features, and provide guidance on how to take advantage of these updates to drive innovation and boost operational efficiency within your organization.
- Virtualization & Cloud: Dive deep into best practices for optimizing your virtualization stacks and seamlessly integrating Pure Storage with today’s cloud environments. Learn strategies for simplifying management, ensuring data protection, and enabling flexible, high-performance hybrid cloud solutions.

Space is limited, so mark your calendar for September 22, 3:00 PM – 5:30 PM, and get ready for an engaging, informative event you won't want to miss. We look forward to seeing you at Uprise Brewing Co.

Register Now!

Optimizing VMware ESXi iSCSI Performance with iSCSI Buffers
Optimizing VMware ESXi iSCSI performance involves a multi-faceted approach, touching on network configuration, ESXi settings, and even your storage array's capabilities. Two of the most common areas to address are network configuration (crucial for iSCSI) and ESXi host configuration. In this blog post, I’ll focus on improving iSCSI performance via ESXi host advanced configuration settings.

To improve ESXi iSCSI performance via advanced settings, you're primarily looking at parameters that control how the ESXi host interacts with the iSCSI storage at a deeper level. These settings should always be modified with caution, preferably after consulting VMware (Broadcom) documentation or your storage vendor's recommendations, as incorrect changes can lead to instability or worse performance.

Recommended steps for adjusting ESXi advanced settings:
- Understand your workload: Identify whether your workload is sequential or random, small block or large block, read-heavy or write-heavy. This influences which settings might be most beneficial.
- Identify bottlenecks: Use esxtop, vCenter performance charts, and your storage array's monitoring tools to pinpoint where the bottleneck lies (host CPU, network, storage array controllers, disks).
- Consult documentation: Always refer to VMware's (Broadcom) official KBs and your storage vendor's best practices guides.
- Change one setting at a time: Make only one change, then thoroughly test and monitor the impact. This allows you to isolate the effect of each change.
- Make incremental adjustments: Don't make drastic changes. Increase or decrease values incrementally.
- Test in a lab: If possible, test performance changes in a lab environment before implementing them in production.
- Be prepared to revert: Make a note of default values before making changes so you can easily revert if issues arise.

There are several ESXi advanced settings (VMkernel parameters) that can influence iSCSI performance, such as iSCSI session cloning, DSNRO, iSCSI adapter device queue depth, and MaxIoSize. I’ll focus on a relatively new configuration setting, available from vSphere 7.0 U3d onwards, that allows adjusting iSCSI socket buffer sizes.

iSCSI Socket Buffer Sizes
There are two advanced parameters for adjusting iSCSI socket buffer sizes: SocketSndBufLenKB and SocketRcvBufLenKB. Both parameters control the size of the TCP send and receive buffers for iSCSI connections and are configurable via ESXi host advanced settings (go to Host > Configure > System > Advanced Settings > Search for ISCSI.SocketSndBufLenKB and ISCSI.SocketRcvBufLenKB). The receive buffer size affects read performance, while the send buffer size affects write performance. For high-bandwidth (10Gbps+) or high-latency networks, increasing these buffers can significantly improve TCP throughput by allowing more data to be "in flight" over the network. This is related to the bandwidth delay product (BDP); see details below.

What Value Should I Use?
These settings are tunable from vSphere 7.0 U3d onwards; the default values are set to 600KB for SocketSndBufLenKB and 256KB for SocketRcvBufLenKB, and both can be adjusted up to 6MB. My recommendation is to calculate the BDP in your environment, adjust the iSCSI socket buffer sizes, test them, and monitor the results with esxtop (see the how-to below). Note that larger buffers consume more memory on the ESXi host. While generally not a major concern unless extremely large values are used, it's something to be aware of.
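If you want to record the current values before changing anything, the snippet below is a minimal, read-only sketch using the pyVmomi Python SDK. The hostname and credentials are placeholders for your environment, and the two option names are the same ISCSI.SocketSndBufLenKB and ISCSI.SocketRcvBufLenKB settings described above (available from vSphere 7.0 U3d onwards).

```python
# Read-only sketch: print the current iSCSI socket buffer settings of an ESXi host.
# Assumes pyVmomi is installed (pip install pyvmomi); host/user/pwd are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience only; use valid certs in production
si = SmartConnect(host="esxi01.example.com", user="root", pwd="changeme", sslContext=ctx)

try:
    content = si.RetrieveContent()
    # When connected directly to an ESXi host there is exactly one HostSystem.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]

    opt_mgr = host.configManager.advancedOption  # OptionManager for host advanced settings
    for key in ("ISCSI.SocketSndBufLenKB", "ISCSI.SocketRcvBufLenKB"):
        for opt in opt_mgr.QueryOptions(name=key):
            print(f"{opt.key} = {opt.value} KB")
finally:
    Disconnect(si)
```

Applying a new value can then be done through the UI path above (or via the same OptionManager's UpdateOptions call); either way, record the defaults first and change one parameter at a time, as outlined in the steps earlier.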
Bandwidth Delay Product (BDP)
Now, let’s take a closer look at the bandwidth delay product (BDP). BDP is a fundamental concept in networking that represents the maximum amount of data that can be in transit (on the "wire") at any given time over a network path. It's essentially the "volume" of the network pipe between two points.

Why Is BDP Important for TCP/iSCSI?
Transmission Control Protocol (TCP), which iSCSI relies on, uses a "windowing" mechanism to control the flow of data. The TCP send and receive buffers (also known as TCP windows) dictate how much data can be sent before an acknowledgment (ACK) is received.
- If your TCP buffers are smaller than the BDP, the TCP window will close before the network pipe is full. The sender has to stop and wait for ACKs even though the network link has more capacity, which leads to underutilization of bandwidth and reduced throughput.
- If your TCP buffers are equal to or larger than the BDP, the sender can keep sending data continuously, filling the network pipe. This ensures maximum throughput and efficiency.

When Is BDP Configuration Most Relevant?
BDP configuration is most important for:
- High-bandwidth networks: 10/25/40/50/100Gbps iSCSI networks
- High-latency networks: Stretched clusters, long-distance iSCSI, cloud environments, or environments with multiple network hops between ESXi and storage

For typical 1Gbps iSCSI networks with low latency, the default buffer sizes are usually sufficient, as the BDP will likely be smaller than the defaults. However, as network speeds increase, accurately sizing your TCP buffers becomes more critical for maximizing performance.

How to Calculate BDP
BDP = Bandwidth (BW) × Round Trip Time (RTT)
Where:
- Bandwidth (BW): The data rate of the network link, typically measured in bits per second (bps) or bytes per second (Bps). In ESXi contexts, this refers to the speed of your iSCSI NICs (e.g., 1Gbps, 10Gbps).
- Round Trip Time (RTT): The time it takes for a packet to travel from the sender to the receiver and back again, measured in seconds (or in milliseconds, which then need to be converted to seconds for the formula). This accounts for network latency.
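To make the formula concrete, here is a small Python sketch of the calculation. The 10Gbps and 1ms figures are example inputs, not recommendations; plug in your own link speed and measured RTT.

```python
def bdp_bytes(bandwidth_gbps: float, rtt_ms: float) -> float:
    """Bandwidth Delay Product = bandwidth (bytes/s) x RTT (s)."""
    bytes_per_second = bandwidth_gbps * 1e9 / 8   # convert Gbps to bytes per second
    return bytes_per_second * (rtt_ms / 1000.0)   # convert ms to seconds

# Example: a 10Gbps iSCSI link with a 1 ms round-trip time.
bdp = bdp_bytes(10, 1.0)
print(f"BDP: {bdp / 1024:.0f} KB")  # ~1221 KB (about 1.25 MB)

# Compare against the default 256 KB receive buffer: here the default is well
# below the BDP, so reads on this link could benefit from a larger
# SocketRcvBufLenKB (up to the 6 MB maximum).
print("Default 256 KB receive buffer covers the pipe:", 256 * 1024 >= bdp)
```

Running the same calculation for a 1Gbps link at 1ms RTT gives roughly 122KB, which is why the default buffer sizes are usually sufficient on slower, low-latency networks.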
Monitoring with esxtop
Modifying ESXi advanced settings can yield significant performance benefits, but it requires a deep understanding of your environment and careful, methodical execution and monitoring. I highly recommend watching the esxtop storage metrics to monitor the results and see the outcomes of the above changes.

How to Use esxtop
The most common way is to SSH into your ESXi host, but you can also access the ESXi command line directly from the host console. Once you are there, type esxtop and press Enter. You'll see the CPU view by default. To get to the disk-related views, press one of the following keys:
- d (Disk Adapter View/HBA View): Shows performance metrics for your storage adapters (HBAs, software iSCSI adapters, etc.). This is useful for identifying bottlenecks at the host bus adapter level.
- u (Disk Device View/LUN View): Displays metrics for individual storage devices (LUNs or datastores). This is often the most useful view for identifying shared storage issues.
- v (Disk VM View/Virtual Machine Disk View): Shows disk performance metrics per virtual machine. This helps you identify which VMs are consuming the most I/O or experiencing high latency.

Once you're in a disk view (d, u, or v), you can monitor these key storage metrics:

Latency metrics (the most important):
- DAVG/cmd (device average latency): How long the storage array itself is taking to process commands. High DAVG often indicates a bottleneck on the storage array (e.g., slow disks, busy controllers, insufficient IOPS).
- KAVG/cmd (kernel average latency): The time commands spend within the ESXi VMkernel's storage stack. High KAVG often points to queuing issues on the ESXi host. Look at QUED along with KAVG: if KAVG is high and QUED is consistently high, it suggests the ESXi host is queuing too many commands because the path to the storage (or the storage itself) can't keep up. This could be due to a low configured queue depth (Disk.SchedNumReqOutstanding, iscsivmk_LunQDepth) or a saturated network path.
- GAVG/cmd (guest average latency): The end-to-end latency seen by the virtual machine's guest operating system. It's the sum of DAVG + KAVG, and it is what the VM and its applications are experiencing. If GAVG is high, use DAVG and KAVG to pinpoint where the problem lies.

Thresholds: While specific thresholds vary by workload and expectation (e.g., database VMs need lower latency than file servers), general guidelines are:
- ~10-20ms sustained: Starting to see performance impact.
- >20-30ms sustained: Significant performance issues are likely.
- >50ms sustained: Severe performance degradation.

I/O metrics:
- CMDS/s (commands per second): The total number of SCSI commands (reads, writes, and others like reservations). This is often used interchangeably with IOPS.
- READS/s and WRITES/s: The number of read/write I/O operations per second.
- MBREAD/s and MBWRTN/s: The throughput in megabytes per second. This tells you how much data is being transferred.

Queuing metrics:
- QUED (queued commands): The number of commands waiting in the queue on the ESXi host. A persistently high QUED value indicates a bottleneck further down the storage path (the network, the iSCSI adapter, or the storage array itself). This is a strong indicator that your queue depth settings might be too low or that your storage can't handle the incoming load.
- ACTV (active commands): The number of commands currently being processed by the storage device.
- QLEN (queue length): The configured queue depth for the device/adapter.
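To tie the latency metrics together, here is a short Python sketch that applies the DAVG/KAVG/GAVG reasoning and the rough thresholds above to a set of readings you might jot down from esxtop. The sample numbers are made up for illustration.

```python
def assess_latency(davg_ms: float, kavg_ms: float, qued: int = 0) -> str:
    """Rough triage based on the esxtop latency metrics discussed above."""
    gavg_ms = davg_ms + kavg_ms  # GAVG is what the guest OS actually experiences

    if gavg_ms < 10:
        return f"GAVG {gavg_ms:.1f} ms: healthy"
    if gavg_ms > 50:
        severity = "severe degradation"
    elif gavg_ms > 20:
        severity = "significant impact likely"
    else:
        severity = "starting to impact performance"

    if davg_ms >= kavg_ms:
        cause = "high DAVG: look at the storage array (disks, controllers, IOPS limits)"
    elif qued > 0:
        cause = "high KAVG with queued commands: host-side queuing (queue depth or saturated path)"
    else:
        cause = "high KAVG: check the ESXi storage stack and path configuration"

    return f"GAVG {gavg_ms:.1f} ms ({severity}); {cause}"

# Hypothetical samples:
print(assess_latency(davg_ms=2.1, kavg_ms=0.4))            # healthy
print(assess_latency(davg_ms=28.0, kavg_ms=1.2))           # array-side latency
print(assess_latency(davg_ms=3.0, kavg_ms=22.0, qued=64))  # host-side queuing
```

Treat this as a triage aid only; always confirm what you see with vCenter performance charts and your array's own monitoring tools, as recommended earlier.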
Conclusion
Modifying iSCSI socket buffer sizes is another method to tune the ESXi iSCSI connection for better performance. Together with other ESXi tunables, it can bring better performance to your storage backend. If the iSCSI connection is already tuned for maximum performance, another option is to implement a more modern protocol such as NVMe over TCP, which Pure Storage fully supports with our arrays.

How can I help?
Hey Pure Community,

I’m Brian Heck, and I get to be your Senior Systems Engineer if you’re in Alaska, Washington, or Oregon (yup, that’s me, the person behind the emails!). I’m part of the SLED team at Pure, which just means I spend my days helping schools, local governments, and all sorts of public organizations, Tribal organizations included, make the most of their data.

I’ve always believed the secret sauce at Pure isn’t just our tech (though I’m pretty biased about how rock solid flash storage is). It’s actually the people on our team and in this community. There’s zero ego, just lots of curiosity and a drive to solve real problems for real folks. So if you ever just want to ask a question, vent about a challenge, or swap stories about your favorite upgrades, trust me, you’re speaking my language.

If you’re wondering:
- What’s it really like working at Pure (spoiler: the culture here rocks)
- What new tech or trends I’m excited about in storage, cloud, or AI
- How to get the most out of your SE team (or what the heck SEs actually do behind the scenes)
…please shout! I love sharing tips, diving into rabbit holes, and figuring out better ways to do things together.

I’m always up for good conversation, honest feedback, or brainstorming sessions, whether it’s in the forums or over coffee (virtual or real). This community means a lot to me, and I’d really love to hear your stories, see your questions, and learn from your experiences. I've been a part of the VMUG for quite a while, so things like this are my jam. I’ll do my best to share the good stuff: tech advice, a peek at life at Pure, and maybe a few dad jokes if you’re lucky.

How can I help? What’s on your mind?