Featured Content
Learn how a leading MSO switched from cloud-based video storage to Pure Storage FlashBlade//E and saved 87% on cloud costs, all while getting a more reliable service and a better customer experience via reduced latency.
See how Pure Cloud Block Store™ enables hybrid cloud in healthcare by supporting workload flexibility and smart migration tools. Speakers: Thomas Whalen, Jon Kimerle.
Fusion in the spotlight - highlights from our first community AUE (Ask Us Everything)!
We had our first "Ask Us Everything" for our community and we spent the time focusing on Pure Fusion.
Welcome!
You've taken the first step and created an account here. What to do next, you ask? Here are five simple steps to take after registering to ensure you're getting the most out of this community...
Recent Content
Optimizing VMware ESXi iSCSI Performance with iSCSI Buffers
Optimizing VMware ESXi iSCSI performance involves a multi-faceted approach, touching on network configuration, ESXi settings, and even your storage array's capabilities. Common ways to improve iSCSI performance include network configuration (crucial for iSCSI) and ESXi host configuration. In this blog post, I'll focus on improving iSCSI performance via ESXi host advanced configuration settings.

To improve ESXi iSCSI performance via advanced settings, you're primarily looking at parameters that control how the ESXi host interacts with the iSCSI storage at a deeper level. These settings should always be modified with caution, preferably after consulting VMware (Broadcom) documentation or your storage vendor's recommendations, as incorrect changes can lead to instability or worse performance.

Recommended steps for adjusting ESXi advanced settings:

- Understand your workload: Identify whether your workload is sequential or random, small block or large block, read-heavy or write-heavy. This influences which settings might be most beneficial.
- Identify bottlenecks: Use esxtop, vCenter performance charts, and your storage array's monitoring tools to pinpoint where the bottleneck lies (host CPU, network, storage array controllers, disks).
- Consult documentation: Always refer to VMware's (Broadcom) official KBs and your storage vendor's best practices guides.
- Change one setting at a time: Make only one change, then thoroughly test and monitor the impact. This allows you to isolate the effect of each change.
- Make incremental adjustments: Don't make drastic changes. Increase or decrease values incrementally.
- Test in a lab: If possible, test performance changes in a lab environment before implementing them in production.
- Be prepared to revert: Make a note of default values before making changes so you can easily revert if issues arise.

There are several ESXi advanced settings (VMkernel parameters) that can influence iSCSI performance, for example, iSCSI session cloning, DSNRO, iSCSI adapter device queue depth, MaxIoSize, and others. I'll focus on a relatively new configuration setting available from vSphere 7.0 U3d onwards, which allows adjusting iSCSI socket buffer sizes.

iSCSI Socket Buffer Sizes

There are two advanced parameters for adjusting iSCSI socket buffer sizes: SocketSndBufLenKB and SocketRcvBufLenKB. Both parameters control the size of the TCP send and receive buffers for iSCSI connections and are configurable via ESXi host advanced settings (Host > Configure > System > Advanced Settings, then search for ISCSI.SocketSndBufLenKB and ISCSI.SocketRcvBufLenKB). The receive buffer size affects read performance, while the send buffer size affects write performance. For high-bandwidth (10Gbps+) or high-latency networks, increasing these buffers can significantly improve TCP throughput by allowing more data to be "in flight" over the network. This is related to the bandwidth delay product (BDP); see details below.

What Value Should I Use?

These settings are tunable from vSphere 7.0 U3d onwards; the default values are 600KB for SocketSndBufLenKB and 256KB for SocketRcvBufLenKB, and both parameters can be adjusted up to 6MB. My recommendation is to calculate the BDP in your environment, adjust the iSCSI socket buffer sizes, test them, and monitor the results with esxtop (see the how-to below). Note that larger buffers consume more memory on the ESXi host. While this is generally not a major concern unless extremely large values are used, it's something to be aware of.
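If you prefer to inspect and adjust these settings programmatically rather than through the vSphere Client, the sketch below shows one way it might be done with pyVmomi's OptionManager. This is only an illustration under stated assumptions, not an official procedure: the host name, credentials, and the 1024 KB target value are placeholders, and you should verify value types and behavior in a lab before touching production.

```python
# Illustrative sketch (not an official procedure): reading and updating the
# ISCSI.SocketSndBufLenKB / ISCSI.SocketRcvBufLenKB advanced settings via pyVmomi.
# Host name, credentials, and the target value below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host="esxi01.example.com", user="root", pwd="***", sslContext=context)
try:
    # Direct host connection: datacenter -> compute resource -> first host
    host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    opt_mgr = host.configManager.advancedOption

    # Show the current values
    for key in ("ISCSI.SocketSndBufLenKB", "ISCSI.SocketRcvBufLenKB"):
        for opt in opt_mgr.QueryOptions(key):
            print(opt.key, "=", opt.value)

    # Example: raise both buffers to 1024 KB (placeholder; size from your own BDP calculation).
    # Depending on pyVmomi/ESXi versions, the value type may need adjusting (int vs. long).
    changes = [
        vim.option.OptionValue(key="ISCSI.SocketSndBufLenKB", value=1024),
        vim.option.OptionValue(key="ISCSI.SocketRcvBufLenKB", value=1024),
    ]
    opt_mgr.UpdateOptions(changedValue=changes)
finally:
    Disconnect(si)
```

The same settings can of course be changed interactively in the Advanced Settings UI described above; the point here is only that they are ordinary host advanced options and can be automated like any other.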
Bandwidth Delay Product (BDP)

Now, let's take a closer look at the bandwidth delay product (BDP). BDP is a fundamental concept in networking that represents the maximum amount of data that can be in transit (on the "wire") at any given time over a network path. It's essentially the "volume" of the network pipe between two points.

Why Is BDP Important for TCP/iSCSI?

Transmission Control Protocol (TCP), which iSCSI relies on, uses a "windowing" mechanism to control the flow of data. The TCP send and receive buffers (also known as TCP windows) dictate how much data can be sent before an acknowledgment (ACK) is received. If your TCP buffers are smaller than the BDP, the TCP window will close before the network pipe is full. This means the sender has to stop and wait for ACKs, even if the network link has more capacity. This leads to underutilization of bandwidth and reduced throughput. If your TCP buffers are equal to or larger than the BDP, the sender can keep sending data continuously, filling the network pipe. This ensures maximum throughput and efficiency.

When Is BDP Configuration Most Relevant?

BDP configuration is important for:

- High-bandwidth networks: 10/25/40/50/100Gbps iSCSI networks
- High-latency networks: stretched clusters, long-distance iSCSI, cloud environments, or environments with multiple network hops between ESXi and storage

For typical 1Gbps iSCSI networks with low latency, the default buffer sizes are usually sufficient, as the BDP will likely be smaller than the defaults. However, as network speeds increase, accurately sizing your TCP buffers becomes more critical for maximizing performance.

How to Calculate BDP

BDP = Bandwidth (BW) × Round Trip Time (RTT)

Where:

- Bandwidth (BW): The data rate of the network link, typically measured in bits per second (bps) or bytes per second (Bps). In ESXi contexts, this refers to the speed of your iSCSI NICs (e.g., 1Gbps, 10Gbps).
- Round Trip Time (RTT): The time it takes for a packet to travel from the sender to the receiver and back again, measured in seconds (or milliseconds, which then need conversion to seconds for the formula). This accounts for network latency.
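As a quick worked example, the sketch below applies this formula for a hypothetical 10Gbps iSCSI link with 1 ms RTT and compares the result to the default buffer sizes mentioned earlier. The link speed and RTT are illustrative assumptions, not recommendations; plug in your own measured values.

```python
# Illustrative BDP calculation for sizing iSCSI socket buffers.
# The link speed and RTT below are hypothetical examples, not recommendations.

def bdp_kb(bandwidth_gbps: float, rtt_ms: float) -> float:
    """BDP = bandwidth x round-trip time, returned in kilobytes."""
    bits_in_flight = (bandwidth_gbps * 1e9) * (rtt_ms / 1000.0)  # bits on the wire
    return bits_in_flight / 8 / 1024                             # bits -> bytes -> KB

DEFAULT_SND_KB = 600   # default ISCSI.SocketSndBufLenKB (vSphere 7.0 U3d+)
DEFAULT_RCV_KB = 256   # default ISCSI.SocketRcvBufLenKB
MAX_KB = 6 * 1024      # both parameters can be raised up to 6MB

bdp = bdp_kb(bandwidth_gbps=10, rtt_ms=1.0)   # e.g., 10Gbps link, 1 ms RTT
print(f"BDP ~ {bdp:.0f} KB")                  # ~1221 KB for this example

for name, default in (("SocketSndBufLenKB", DEFAULT_SND_KB),
                      ("SocketRcvBufLenKB", DEFAULT_RCV_KB)):
    if bdp > default:
        print(f"{name}: default {default} KB is below the BDP; "
              f"consider raising it toward {min(bdp, MAX_KB):.0f} KB and re-testing.")
    else:
        print(f"{name}: default {default} KB already covers the BDP.")
```

For this hypothetical link, the BDP (~1.2MB) exceeds both defaults, which is exactly the situation where raising the socket buffer sizes, one change at a time and with monitoring, can pay off.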
Monitoring with esxtop

Modifying ESXi advanced settings can yield significant performance benefits, but it requires a deep understanding of your environment and careful, methodical execution and monitoring. I highly recommend watching esxtop metrics for storage performance to monitor the results and see the outcomes of the above changes.

How to Use esxtop

The most common way is to SSH into your ESXi host, but you can also access the ESXi command line directly from the ESXi host console. Once you are there, type esxtop and press Enter. You'll see the CPU view by default. To get to the disk-related views, press one of the following keys:

- d (Disk Adapter View/HBA View): Shows performance metrics for your storage adapters (HBAs, software iSCSI adapters, etc.). This is useful for identifying bottlenecks at the host bus adapter level.
- u (Disk Device View/LUN View): Displays metrics for individual storage devices (LUNs or datastores). This is often the most useful view for identifying shared storage issues.
- v (Disk VM View/Virtual Machine Disk View): Shows disk performance metrics per virtual machine. This helps you identify which VMs are consuming the most I/O or experiencing high latency.

Once you're in a disk view (d, u, or v), you can monitor these key storage metrics.

Latency metrics (the most important):

- DAVG/cmd (device average latency): This tells you how long the storage array itself is taking to process commands. High DAVG often indicates a bottleneck on the storage array (e.g., slow disks, busy controllers, insufficient IOPS).
- KAVG/cmd (kernel average latency): This represents the time commands spend within the ESXi VMkernel's storage stack. High KAVG often points to queuing issues on the ESXi host. Look at QUED along with KAVG. If KAVG is high and QUED is consistently high, it suggests the ESXi host is queuing too many commands because the path to the storage (or the storage itself) can't keep up. This could be due to a low configured queue depth (Disk.SchedNumReqOutstanding, iscsivmk_LunQDepth) or a saturated network path.
- GAVG/cmd (guest average latency): This is the end-to-end latency seen by the virtual machine's guest operating system. It's the sum of DAVG + KAVG. This is what the VM and its applications are experiencing. If GAVG is high, you then use DAVG and KAVG to pinpoint where the problem lies.

Thresholds: While specific thresholds vary by workload and expectation (e.g., database VMs need lower latency than file servers), general guidelines are:

- ~10-20ms sustained: Starting to see performance impact.
- >20-30ms sustained: Significant performance issues are likely.
- >50ms sustained: Severe performance degradation.

I/O metrics:

- CMDS/s (commands per second): The total number of SCSI commands (reads, writes, and others like reservations). This is often used interchangeably with IOPS.
- READS/s and WRITES/s: The number of read/write I/O operations per second.
- MBREAD/s and MBWRTN/s: The throughput in megabytes per second. This tells you how much data is being transferred.

Queuing metrics:

- QUED (queued commands): The number of commands waiting in the queue on the ESXi host. A persistently high QUED value indicates a bottleneck further down the storage path (either the network, the iSCSI adapter, or the storage array itself). This is a strong indicator that your queue depth settings might be too low, or your storage can't handle the incoming load.
- ACTV (active commands): The number of commands currently being processed by the storage device.
- QLEN (queue length): The configured queue depth for the device/adapter.
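To make the diagnostic logic above concrete, here is a small illustrative helper that applies these guideline thresholds to sampled DAVG/KAVG values. It is not part of esxtop or any VMware tooling; the function name and sample numbers are made up for the example, and real triage should always account for your workload's expectations.

```python
# Illustrative only: applies the guideline latency thresholds above to esxtop-style
# DAVG/KAVG samples (in milliseconds). Not part of esxtop or any VMware tooling.

def assess_latency(davg_ms: float, kavg_ms: float) -> str:
    gavg_ms = davg_ms + kavg_ms  # GAVG is the sum of device and kernel latency
    if gavg_ms < 10:
        return f"GAVG {gavg_ms:.1f} ms: within normal range."
    if gavg_ms <= 20:
        severity = "starting to see performance impact"
    elif gavg_ms <= 50:
        severity = "significant performance issues are likely"
    else:
        severity = "severe performance degradation"
    # High DAVG points at the array; high KAVG points at host-side queuing.
    culprit = ("storage array side (check array load, disks, controllers)"
               if davg_ms >= kavg_ms
               else "ESXi host side (check QUED and queue depth settings)")
    return (f"GAVG {gavg_ms:.1f} ms sustained: {severity}; "
            f"largest contributor is the {culprit}.")

print(assess_latency(davg_ms=18.0, kavg_ms=4.0))   # hypothetical sample
print(assess_latency(davg_ms=2.0, kavg_ms=35.0))   # hypothetical sample
```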
This week's "Clear the Path to VMware in AWS" webinar was very insightful. I wrote a post about it on my blog. Here is the link: https://dmitrywashere.github.io/data/cloud/aws/evs/cbs/purestorage/2025/09/06/pure-vmware-aws.html Let me know your thoughts.3Views0likes0CommentsWelcome Train & ICYMI: Highlights from the week of September 5!
Hello everyone! We had a great week here in the forums, with new faces joining us and some fantastic conversations taking place. A special welcome to the newest member of our community, benjib! We're so happy to have you here. If you haven't already, please take a moment to introduce yourself in our dedicated section. We'd love to get to know you!

In case you missed it, here are some highlights from the past week.

Join the Conversation!
- Optimizing VMware ESXi iSCSI Performance with iSCSI Buffers
- Video Storage: Less Cost, More Reliability

Take a breather and catch up on some of our fun discussions:
- Weekly Creative Corner: Labor Day Edition! 🏝️

We're looking forward to another week of great discussions with all of you!

Cloud - A Place or a Strategy? Unpacking the Pure Storage Enterprise Data Cloud
September 16 | Register Now! 11:00AM PT • 2:00PM ET

Have you ever heard that cloud isn't a place but rather a strategy? Join us for this month's Coffee Break as host Andrew Miller and JD Wallace (Principal Technologist Director and Pure Report Unplugged Co-host) explore the data center and cloud landscape, how the Pure Storage Enterprise Data Cloud (EDC) strategy is based on customer conversations, and where we see the industry going.

We'll cover:
- Challenges around data control, workload variability, and budget strain
- Transformation options for storage infrastructure, IT operations, dataset management, and more
- Why EDC isn't something you buy but instead something we help you build
- Intelligent control plane (aka Pure Fusion™): digging into the history behind fleet management, previous failed industry attempts, and how Pure Storage is building Pure Fusion for both today and tomorrow

We'll also get practical, looking at capabilities around fleet and remote management, presets and workloads, compliance reporting and being compliant from Day 1, built-in cyber resilience, and workload orchestration. Plus, we'll have live Q&A to answer your questions.

LIVE GIVEAWAY: One lucky attendee will win a Home Office premium set, which includes a Yeti mug and a Qi Wireless Charger (approx. value $150). See Terms and Conditions.

PowerShell & Pure Fusion blog series by Anthony Nocentino
Anthony Nocentino, a Principal Field Solutions Architect at Pure, has started a blog series on using the PowerShell SDK with Pure Fusion. He has several posts coming, and here is his first. Be sure to bookmark his site!

Join Us October 8th for the Milwaukee Pure User Group Event!
Join us for an exclusive Pure Storage User Group event in Milwaukee to hear how Milwaukee Tool transformed its IT infrastructure with Pure solutions. This is your chance to connect with fellow Pure users, learn from real-world experiences, and discover strategies for getting the most out of your Pure investment.

John Akemann, Storage Compute Architect, will share Milwaukee Tool's journey with Pure Storage. He'll cover:
- How Milwaukee Tool conducted rigorous POC testing to evaluate Pure Storage.
- The seamless transition from a successful POC to a full implementation.
- The specific metrics and business benefits Milwaukee Tool has seen since adopting Pure.

You'll also have the opportunity to explore the iconic Harley Davidson Museum after the event! Whether you're a current Pure customer or considering our solutions, you'll gain valuable insights and hear firsthand how our technology delivers tangible results. Space is limited, so reserve your spot today!

Agenda:
- 2:00 pm | Welcome and check-in
- 2:00 - 4:00 pm | Pure at Milwaukee Tool
- 4:00 pm | Happy hour and explore the Museum!

REGISTER NOW!

Date & Time: October 8, 2025, 2:00 PM - 5:00 PM CT

Accelerate Breakout Replay: Pure Cloud Block Store™ for Healthcare: Focus on Epic
See how Pure Cloud Block Store™ enables hybrid cloud in healthcare by supporting workload flexibility and smart migration tools.

Speakers: Thomas Whalen, Jon Kimerle

https://www.purestorage.com/video/webinars/pure-cloud-block-store-for-healthcare-focus-on-epic/6375810690112.html

Accelerate Breakout Replay: Using Pure Storage DRaaS Technology to Create Safe and Secure Protections for Today's Healthcare Organic Data Growth Needs in a Post-COVID World
Learn how Pure Protect™ v2+ and SafeMode™ Snapshots enhance EMR cyber defense with ransomware detection and layered data protection.

Speakers: Thomas Whalen, Chad Monteith, Suresh Madhu

https://www.purestorage.com/video/webinars/using-pure-storage-draas-technology-to-create-safe-and-secure-pr/6375812173112.html

Accelerate Breakout Replay: The Mainframe Is Not Dead: Long Live the Mainframe
Explore how Luminex + Pure Storage enable cyber resilience across mainframe and open systems in healthcare.

Speakers: Priscilla Sandberg; Andy Stewart, Luminex; Christopher Rogers, Companion Data Services

https://www.purestorage.com/video/webinars/the-mainframe-is-not-dead/6375810492112.html

Video Storage: Less Cost, More Reliability
Learn how a leading MSO switched from cloud-based video storage to Pure Storage FlashBlade//E and saved 87% on cloud costs, all while getting a more reliable service and a better customer experience via reduced latency. Read all about it.
Featured Places
Introductions
Welcome! Please introduce yourself to the Pure Storage Community.

Pure User Groups

Build and grow your professional network. Explore groups and meetups near you. Don't see a PUG for your area? Reach out to the admins to request a meetup and group.

Industry Groups

Join other community members in your industry to learn and share how Pure is making an impact for your organization.

/CODE

The Pure /Code community is where collaboration thrives and everyone, from beginners taking their first steps to experts honing their craft, comes together to learn, share, and grow. In this inclusive space, you'll find support, inspiration, and opportunities to elevate your automation, scripting, and coding skills, no matter your starting point or career position. The goal is to break barriers, solve challenges, and most of all, learn from each other.

Cloud Native
A group for app developers, platform engineers, and Portworx users working with containers and Kubernetes.