Recent Content
Pure Storage Cloud Dedicated on Azure: An intro to Performance
Introduction

With Pure Storage Cloud Dedicated on Microsoft Azure, performance is largely governed by three factors: front-end controller networking, the controllers' back-end connection to managed disks, and the Purity data path. This post explains how Azure building blocks and these factors influence overall performance.

Disclaimer: This post assumes a basic understanding of PSC Dedicated architecture. Real-life performance varies with configuration and workload; the examples here are illustrative.

Architecture: the building blocks that shape performance

Cloud performance comes largely from how compute, storage, and networking are assembled. PSC Dedicated deploys two Azure VMs as storage controllers running the Purity operating environment and uses Azure Managed Disks as persistent media. Initiator VMs connect over the Azure Virtual Network using in-guest iSCSI or NVMe/TCP. Features like inline data reduction, write coalescing through NVRAM, and an I/O rate limiter help keep the array stable and its performance predictable under saturation.

Front-end performance: networking caps

Azure limits the outbound (egress) bandwidth of virtual machines: each Azure VM is assigned a network egress cap and cannot send more data than that limit allows. Because PSC Dedicated controllers run on Azure VMs, this translates into the following:

Network traffic going INTO the PSC Dedicated array (writes) is not throttled by Azure outbound bandwidth limits.
Network traffic going OUT of the PSC Dedicated array (reads) is limited.

User-requested reads (e.g. from an application) and any replication traffic leaving the controller share the same egress budget. Workloads with replication should therefore be planned carefully so replication does not compete with client reads.

Back-end performance: VM caps, NVMe, and the write path

The controller VM caps

Similarly to front-end network read throughput, Azure enforces per-VM limits on total back-end IOPS and combined read/write throughput. The overall IOPS/throughput of a VM is therefore limited by the lower of: the controller VM's IOPS/throughput cap, and the combined IOPS/throughput of all attached managed disks. To avoid unnecessary spend from overprovisioning, the managed disks of PSC Dedicated arrays are configured to just saturate the controller back-end caps.

NVMe backend raises the ceiling

Recent PSC Dedicated releases adopt an NVMe backend on supported Azure Premium SSD v2 based SKUs, increasing the controller VM's back-end IOPS and bandwidth ceilings. The disk layout and economics remain the same while the array gains back-end headroom.

The write path

Purity secures initiator writes to NVRAM (for fast acknowledgment) and later destages them to the data managed disks. For each logical write, the back-end cap is therefore tapped multiple times:

a write to NVRAM
a read from NVRAM during flush
a write to the data managed disks

Under mixed read/write non-reducible workloads this can exhaust the combined read/write back-end bandwidth and IOPS of the controller VM. The raised caps of the NVMe backend help here; a rough model of this arithmetic is sketched below.
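To make this arithmetic concrete, here is a back-of-the-envelope Python model, a sketch rather than sizing guidance. It assumes each write-path hop moves roughly the full logical write size, that data reduction shrinks only the destage traffic, and it uses a purely illustrative combined back-end cap:

```python
# Back-of-the-envelope model of controller-VM backend consumption.
# Assumptions (not official sizing guidance): every write-path hop moves
# the full logical write size, and data reduction shrinks only the destage.

def backend_mbps(host_read_mbps: float, host_write_mbps: float,
                 reduction_ratio: float = 1.0) -> float:
    """Estimate combined backend read/write MBps at the controller VM."""
    nvram_write = host_write_mbps                # hop 1: secure to NVRAM
    nvram_read = host_write_mbps                 # hop 2: read back during flush
    destage = host_write_mbps / reduction_ratio  # hop 3: write reduced data to disks
    return host_read_mbps + nvram_write + nvram_read + destage

CAP_MBPS = 2_300  # illustrative combined backend cap for this example

# 500 MBps of reads + 400 MBps of non-reducible writes consume 1,700 MBps
# of backend budget, more than 4x the logical write rate alone.
print(backend_mbps(500, 400, reduction_ratio=1.0), "of", CAP_MBPS)
# The same workload at 3:1 reduction frees roughly 267 MBps of headroom.
print(backend_mbps(500, 400, reduction_ratio=3.0), "of", CAP_MBPS)
```

The point of the sketch is simply that the write path multiplies backend traffic, so a workload can hit the VM cap well before its logical throughput suggests it should.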
Workload characteristics: iSCSI sessions and data reducibility

Block size and session count

Increasing the iSCSI session count between initiator VMs and the array does not guarantee better performance; with large blocks, too many sessions can increase latency without improving throughput, especially when multiple initiators converge on the same controller. Establish at least one session per controller for resiliency, then tune based on measured throughput and latency.

Data reduction helps extend backend headroom

When data is reducible, PSC Dedicated writes fewer physical bytes to the backend managed disks. That directly reduces backend write MBps for the same logical workload, delaying the point at which Azure's VM backend caps are reached. The effect is most pronounced for write-heavy and mixed workloads. Conversely, non-reducible data translates almost 1:1 into backend traffic, hitting the limits sooner and raising latency at high load.

Conclusion

Predictable performance in the cloud is about aligning architecture and operations with the platform's limits. For PSC Dedicated on Azure, that means selecting the right controller and initiator VM SKUs, co-locating resources to minimise network distance, enabling accelerated networking, and tuning workloads (block size, sessions, protocol) to the caps that actually matter. Inline data reduction and the NVMe backend extend headroom meaningfully (particularly for mixed workloads), while Purity's design keeps the experience consistent. Hopefully, this post shed light on at least some of the performance factors of PSC Dedicated on Azure.

Veeam v13 Integration and Plugin
Hi Everyone, We're new Pure customers this year and have two FlashArray//C models: one for virtual infrastructure, and the other will be used solely as a storage repository to back up those virtual machines using Veeam Backup and Replication. Our plan is to move away from our current Windows-based Veeam v12 in favor of Veeam v13 hardened Linux appliances. We're in the design phase now but have Veeam v13 working great in a separate environment with VMware and HPE Nimble. Our question is around Pure Storage and Veeam v13 integration and plugin support. Veeam's product team mentions there is native integration in v12, but that storage vendors should be "adopting USAPI" going forward. Is this something that Pure is working on, or perhaps has already completed, for Veeam Backup and Replication v13?

Announcing the General Availability of Purity//FB 4.6.6
We are happy to announce the general availability of 4.6.6, the seventh release in the 4.6 Feature Release line. See the release notes for full details on the features, bug fixes, and security updates included in the 4.6 release line.

UPGRADE RECOMMENDATIONS AND EOL SCHEDULE

Customers who are running any previous 4.6 version should upgrade to 4.6.6. Customers who are looking for long-term maintenance of a consistent feature set are recommended to upgrade to the 4.5 LLR. Check out our AI Copilot intelligent assistant for deeper insights into release content and recommendations.

Development on the 4.6 release line will continue through February 2026. After this time the full 4.6 feature set will roll into the 4.7 Long Life Release line for long-term maintenance, and the 4.6 line will be declared End-of-Life (EOL).

HARDWARE SUPPORT

This release is supported on the following FlashBlade platforms: FB//S100, FB//S200 (R1, R2), FB//S500 (R1, R2), FB//ZMT, FB//E, FB//EXA

LINKS AND REFERENCES

Purity//FB 4.6 Release Notes
Purity//FB Release and End-of-Life Schedule
Purity//FB Release Guidelines
FlashBlade Hardware and End-of-Support
FlashBlade Capacity and Feature Limits
Pure1 Manage AI Copilot

PSC Dedicated on Azure - 6.10.x so far
In this post, I thought I'd take a quick look back at the 6.10.x releases of PSC Dedicated on Azure, as we've seen quite a few interesting features added.

Let's start with the NVMe-based backend. Prior to the 6.10.0 release, Pure Storage Cloud Dedicated for Azure used a SCSI-based backend to connect Managed Disks (both SSDs and NVRAM) to its controller VMs. Starting with 6.10.0, PSC Dedicated SKUs with Premium SSD v2 disks leverage NVMe-based access for Managed Disks. NVMe is a high-speed storage protocol that enables direct communication with storage devices over the PCIe bus. Compared to SCSI, NVMe brings improvements that can result in lower latency, higher IOPS, and reduced CPU utilization.

To begin using the NVMe backend, upgrade the array to Purity version 6.10.0. As part of this upgrade, the existing SCSI-based controller VM is automatically replaced with an equivalent NVMe-enabled VM. The transition is fully automated and transparent: it requires no manual steps or redeployment, and there are no changes to the user interface or management workflows. The cost of the array also remains unchanged. From 6.10.0 onward, NVMe is the only supported backend protocol; there is no option to revert to SCSI.

Let's also look at the backend performance characteristics to better understand the change. The backend performance - meaning the IOPS and throughput between the controller VM and the attached managed disks - is primarily determined by the VM size, because Azure imposes VM-level caps on both backend IOPS and throughput. These limits apply regardless of the number of attached disks.

The maximum achievable backend IOPS for the primary controller is the lower of:

the IOPS cap defined by Azure for the VM SKU
the combined IOPS of all attached SSDs (Azure Managed Disks)

The performance of each individual PSC Dedicated SSD managed disk was selected and configured to exactly saturate the controller VM backend limits, i.e. each SSD's IOPS = maximum VM backend IOPS / number of SSD disks.

Azure also enforces a VM-level backend bandwidth limit, which is a combined cap across both read and write operations. This means that even with multiple high-throughput disks, the total achievable bandwidth cannot exceed what the VM SKU allows.

With the switch to the NVMe protocol, these backend IOPS and bandwidth caps are raised for compatible VMs, including the ones used as PSC Dedicated controllers (for MP2R2 SKUs):

| VM Size | Backend type | Max Backend IOPS | Max Backend R/W Throughput (MBps) | Frontend Network Bandwidth (Mbps) |
|---|---|---|---|---|
| V10MP2R2 | NVMe | 88,400 | 2,300 | 12,500 |
| V10MP2R2 | SCSI | 64,800 | 1,370 | 12,500 |
| V20MP2R2 | NVMe | 174,200 | 4,800 | 16,000 |
| V20MP2R2 | SCSI | 129,700 | 2,740 | 16,000 |

Source: https://learn.microsoft.com/en-us/azure/virtual-machines/ebdsv5-ebsv5-series

From the table above it is clear that both IOPS and bandwidth see significant improvements, positively influencing certain workloads. The increase in backend IOPS is expected to benefit mixed read/write workloads with small I/O sizes. The increase in backend bandwidth can benefit non-reducible mixed read/write workloads at high array utilisation. However, keep in mind that the managed disk configuration (both SSD and NVRAM) remains the same, which ensures the overall cost is unchanged by this switch. Also, while the NVMe backend may raise the array's storage performance capabilities, other limits (such as frontend network bandwidth and IOPS) still apply.
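To illustrate how the "lower of" rule and the per-disk provisioning formula interact, here is a small Python sketch using the V10MP2R2 NVMe figure from the table above. The disk count and per-disk IOPS are invented for the example; they are not actual PSC Dedicated configuration values.

```python
# Effective backend IOPS is the lower of the Azure VM cap and the sum of the
# attached disks' IOPS. Disk count and per-disk IOPS below are illustrative,
# not actual PSC Dedicated provisioning values.

VM_BACKEND_IOPS_CAP = 88_400   # V10MP2R2, NVMe backend (from the table above)
NUM_SSDS = 8                   # hypothetical disk count
PER_SSD_IOPS = VM_BACKEND_IOPS_CAP / NUM_SSDS  # provisioned to just saturate the cap

effective_iops = min(VM_BACKEND_IOPS_CAP, NUM_SSDS * PER_SSD_IOPS)
print(f"per-SSD IOPS: {PER_SSD_IOPS:,.0f}, effective backend IOPS: {effective_iops:,.0f}")

# Provisioning faster disks would cost more without raising effective_iops,
# since min() would still return the VM cap - which is exactly why the disks
# are sized to saturate the cap "just right".
```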
To further extend the performance potential of PSC Dedicated beyond the backend limits, 6.10.2 introduced a brand-new SKU, the Azure V50MP2R2. For the new SKU, Azure D128ds v6 virtual machines (VMs) are used as controller VMs, along with Premium SSD v2 managed disks. VMs in this class provide up to 6.75 GBps of network egress for read/replication traffic and significantly higher backend IOPS and bandwidth for managed disk connectivity. The NVMe backend is used by default on this SKU and, similarly to the current V10 and V20 models, it supports both customer-driven non-disruptive Purity upgrades and Controller Scaling (e.g. it is possible to non-disruptively scale to the V50MP2R2 from lower MP2R2 SKUs). At launch, the V50 is available in the following regions:

Central US
East US
East US 2
South Central US
Canada Central
Canada East

Last but not least, 6.10.3 aims to address Azure maintenance and brief infrastructure events, during which the array can experience short-lived increases in I/O latency to backend managed disks. These spikes may be transient yet noticeable to hosts and applications. To harden array behavior against these conditions, PSC Dedicated 6.10.3 on Azure ships with a newly configured set of array-level tunables. These adjust how controllers interpret delayed I/O, coordinate takeovers, and manage internal leases, so the array prefers riding out transient backend conditions over initiating a controller failover.

Turn Your Data into a Competitive Advantage
AI adoption is accelerating across every industry, but the real gap isn't in ambition; it's in operationalizing AI reliably and at scale. If your organization is looking to move from early pilots to production-grade AI, FlashStack for AI shows how you can make that shift with confidence. FlashStack AI Factories, co-engineered by Pure Storage, Cisco, and NVIDIA, deliver AI Factory frameworks and provide clients with a predictable, scalable path to train, tune, and deploy AI workloads without introducing operational risk.

FlashStack delivers meaningful advantages that help teams operationalize AI more effectively:

Consistent, production-grade AI performance, powered by NVIDIA's full-stack architecture, ensuring compute, networking, and storage operate as a synchronized system for dependable training and inference.
Faster deployment and easier scaling, enabled by unified management through Pure1 and Cisco Intersight, reducing operational overhead and accelerating time to value.
Stronger cyber resilience and reduced risk, with SafeMode immutable snapshots and deep integration with leading SIEM/SOAR/XDR ecosystems to safeguard high-value AI data.
Meaningful business outcomes, from shortening AI innovation cycles to powering new copilots, intelligent assistants, and data-driven services.

Together, these capabilities help enterprises turn raw data and processing power into AI-driven results: securely, sustainably, and without operational complexity.

Read more: FlashStack AI Factories

AUE - Key Insights
Good morning/afternoon/evening everyone! This is Rich Barlow, Principal Technologist @ Pure. It was super fun to proctor this AUE session with Antonia and Jon. Hopefully everyone got in all of the questions that they wanted to ask; we had so many that we had to answer several of them out of band. Thank you for your enthusiasm and support. Looking forward to the next one! Here's a rundown of the most interesting and impactful questions we were asked. If you have any more, please feel free to reach out.

FlashArray File: Your Questions, Our Answers (Ask Us Everything Recap)

Our latest "Ask Us Everything" webinar with Pure Storage experts Rich Barlow, Antonia Abu Matar, and Jonathan Carnes was another great session. You came ready with sharp questions, making it clear you're all eager to leverage the simplicity of your FlashArray to ditch the complexity of legacy file storage. Here are some of the best insights shared during the session:

Unify Everything: Performance By Design

You asked about the foundation, and it's a game-changer.

No Middleman, Low Latency: Jon Carnes confirmed that FlashArray File isn't a bolt-on solution. Since the file service lands directly on the drives, just like block data, there's effectively "no middle layer." The takeaway? You get the same awesome, low-latency performance for file that you rely on for block workloads.
Kill the Data Silos: Antonia Abu Matar emphasized the vision behind FlashArray File: combining block and file on a single, shared storage pool. This isn't just tidy; it means you benefit from global data reduction and unified data services across everything.

Scale, Simplicity, and Your Weekends Back

The community was focused on escaping the complexities of traditional NAS systems.

Always-On File Shares: Worried about redundancy? Jon confirmed that FlashArray File implements an "always-on" version of Continuously Available (CA) shares for SMB3 (in Purity 6.9/6.10). It's on by default for transparent failover and simple client access.
Multi-Server Scale-Up: For customers migrating from legacy vendors and needing lots of "multi-servers," we're on it. Jon let us know that engineering is actively working to significantly raise the current limits (aiming for around 100 in the next Purity release), stressing that Pure raises these limits non-disruptively to ensure stability.
NDU, Always and Forever: The best part? No more weekend maintenance marathons. The FlashArray philosophy is a "data in place, non-disruptive upgrade." That applies to both block and file, eliminating the painful data migrations you're used to.
Visibility at Your Fingertips: You can grab real-time IOPS and throughput from the GUI or via APIs. For auditing, file access events are pushed via syslog in native JSON format, which makes integrating with tools like Splunk super easy (see the sketch at the end of this recap).

Conquering Distance and Bandwidth

A tough question came in about supporting 800 ESRI users across remote Canadian sites (Yellowknife, Iqaluit, etc.) with real-time file access despite low bandwidth.

Smart Access over Replication: Jon suggested looking at Rapid Replicas (available on FlashBlade File). This isn't full replication; it's a smart solution that synchronizes metadata across sites and only pulls the full data on demand (pull-on-access). This is key for remote locations because it dramatically cuts down on the constant bandwidth consumption of typical data replication.
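Since a few of you asked about the Splunk angle: here is a minimal sketch of catching those JSON audit events over syslog. The field names (user, operation, path) are invented for illustration; check your array's actual event schema and your SIEM's preferred ingestion path before building on this.

```python
import json
import socketserver

# Minimal UDP syslog listener for JSON audit events. The field names used
# below (user/operation/path) are hypothetical, not a documented schema.
class AuditHandler(socketserver.BaseRequestHandler):
    def handle(self):
        line = self.request[0].decode("utf-8", errors="replace")
        start = line.find("{")          # skip the syslog header, keep the JSON
        if start == -1:
            return
        event = json.loads(line[start:])
        print(event.get("user"), event.get("operation"), event.get("path"))

if __name__ == "__main__":
    # Unprivileged port for testing; point the array's syslog target here.
    with socketserver.UDPServer(("0.0.0.0", 5514), AuditHandler) as server:
        server.serve_forever()
```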
Ready to Simplify?

FlashArray File Services lets you consolidate your infrastructure and get back to solving bigger problems, not babysitting your storage. Start leveraging the power of a truly unified and non-disruptive platform today! Join the conversation and share your own experiences in the Pure Community.

Ask Us Everything ... Evergreen//One edition!
💬 Have more questions for our experts around Evergreen//One after today's live "Ask Us Everything"? Feel free to drop them below and our experts will answer! dpoorman, abarnes, and Tago- : Tag! You're it!

Or, check out some of these self-serve resources:

EG//1 website
Introduction to Evergreen//One (video)
Evergreen//One for AI: Modern Storage Economics for the AI Era (blog)
The Economics of Pure Storage Evergreen Subscriptions (blog)
DATIC Protects Citizen Data from Attack (customer case study)

Feature Request: Certificate Automation with ACME
Hi Pure people,

How about reducing my workload a little by supporting the ACME protocol for certificate renewal? Certificate lifespans are just getting shorter, and while I have a horrid expect script to renew certificates via ssh to FlashArray, it would be much simpler if Purity ran an ACME client itself.

PS: We use the DNS challenge method to avoid having to run webservices where they aren't needed.
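For context, the automation being asked for looks roughly like the sketch below: a certbot-driven ACME DNS-01 renewal, followed by an upload step that today has to be scripted against the array. The domain, the DNS plugin choice, and the upload helper are all placeholders for illustration; this is not a Pure-provided workflow.

```python
import subprocess

DOMAIN = "array1.example.com"  # placeholder FQDN for the array

def push_to_flasharray(cert_pem: str, key_pem: str) -> None:
    """Placeholder for the array-side upload (today: an SSH/expect script).
    With a native ACME client in Purity, this step would disappear."""
    raise NotImplementedError

# Renew via an ACME DNS-01 challenge (no webserver needed, matching the
# poster's setup). Assumes a certbot DNS plugin is configured; --dns-route53
# is just an example, swap in your DNS provider's plugin.
subprocess.run(
    ["certbot", "certonly", "--non-interactive",
     "--preferred-challenges", "dns",
     "--dns-route53",
     "-d", DOMAIN],
    check=True,
)

with open(f"/etc/letsencrypt/live/{DOMAIN}/fullchain.pem") as f:
    cert = f.read()
with open(f"/etc/letsencrypt/live/{DOMAIN}/privkey.pem") as f:
    key = f.read()
push_to_flasharray(cert, key)
```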
Why Your Writes Are Always Safe on FlashArray

The promise of modern storage is simple: when the system says "yes," your data better be safe. No matter what happens next (power failure, controller hiccup, or the universe throwing whatever else it has at you), acknowledged writes need to stay acknowledged. FlashArray is engineered around this non-negotiable principle. Let me walk you through how we deliver on it.

Durable First, Fast Always

When your application issues a write to FlashArray, here's the path it takes:

1. Land in DRAM for inline data reduction (dedupe, compression, you know, the lightweight stuff).
2. Persist redundantly in NVRAM (mirrored or RAID-6/DNVR, depending on platform), in a log accessible by either controller.
3. Acknowledge to the host. ← This is the critical moment.
4. Flush to flash media in the background, efficiently and asynchronously.

Notice what happens between steps 2 and 3? We don't acknowledge until data is durably persisted in non-volatile memory. Not "mostly safe," not "probably fine," but safe and durable. This isn't a write-back cache we'll get around to flushing later. The acknowledgement means your data survived the critical path and is now protected, period.

Power Loss? No Problem.

FlashArray NVRAM modules include integrated supercapacitors that provide power hold-up during unexpected power events. When the power drops, these capacitors ensure the buffered write log is safely preserved: no batteries to maintain, and no external UPS required for write safety, though many sites still deploy a UPS for broader data center and facility reasons. Because durability is achieved at the NVRAM layer, we eliminate the most common failure mode in legacy systems: the volatile write cache that promises safety but can't deliver when it matters most.

Simpler Path with Integrated DNVR

In our latest architectures, we integrate Distributed NVRAM (DNVR) directly into the DirectFlash Module (DFMD). This simplifies the write path (fewer hops, tighter integration, better efficiency) and scales NVRAM bandwidth and capacity with the number of modules. By bringing persistence closer to the media, we're not just maintaining our durability guarantees; we're increasing capacity and streamlining the data path at the same time.

Graceful Under Pressure

What happens if write ingress temporarily exceeds what the system can flush to flash? FlashArray applies deterministic backpressure: you may see latency increase, but I/O is not dropped, so data is not at risk. Background processes yield and lower-priority internal tasks are throttled to prioritize destage operations, keeping the system stable and predictable. Translation: we slow down gracefully; we don't fail unpredictably.

High Availability by Design

Controllers are stateless, with writes durably persisted in NVRAM accessible by either controller. If one controller faults, the peer automatically takes over, replays any in-flight operations from the durable log, and resumes service. A brief I/O pause may occur during takeover; platforms are sized so a single controller can handle the full workload afterward to minimize disruption to your applications. No acknowledged data is lost. No manual intervention required. Just continuous operation.

Beyond the ACK: Protection on Flash

After the destage, data on flash is protected with wide-striped erasure coding for fast, predictable rebuilds and multi-device fault tolerance, with no hot-spare overhead.
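To make the ordering concrete, here is a toy Python model of the ack-after-durability idea described above. It is a teaching sketch only: the class and structures are invented, and it models none of Purity's actual internals (mirroring, data reduction, backpressure).

```python
import queue
import threading

class ToyWritePath:
    """Toy model of 'acknowledge only after durable persistence'.
    Invented for illustration; not Purity internals."""

    def __init__(self):
        self.nvram_log = []            # stands in for mirrored/RAID-protected NVRAM
        self.flash = {}                # stands in for flash media
        self.destage_q = queue.Ueue() if False else queue.Queue()
        threading.Thread(target=self._destage_loop, daemon=True).start()

    def write(self, lba: int, data: bytes) -> str:
        # Step 1: inline reduction would happen here, in DRAM (omitted).
        self.nvram_log.append((lba, data))   # Step 2: persist durably first...
        self.destage_q.put((lba, data))
        return "ACK"                         # Step 3: ...only then acknowledge.

    def _destage_loop(self):
        while True:
            lba, data = self.destage_q.get() # Step 4: background flush to flash
            self.flash[lba] = data
            self.destage_q.task_done()

array = ToyWritePath()
print(array.write(42, b"hello"))  # "ACK" is returned only after the log append
array.destage_q.join()            # wait for the background flush in this demo
```

The key property the sketch demonstrates: a crash after "ACK" but before the flush still leaves the write recoverable, because it already lives in the durable log.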
The Bottom Line

Modern flash gives you incredible performance, but performance means nothing if your data isn't safe. FlashArray's architecture makes durability the first principle: not an optimization, not an add-on, but the foundation everything else is built on. When FlashArray says your write is safe, it's safe. That's not marketing. That's engineering. This approach to write safety is part of Pure's commitment to Better Science, doing things the right way, not the easy way. We didn't just swap drives into an existing architecture; we reimagined the entire system from the ground up, from how we co-design hardware and software with DirectFlash to how we map and manage petabytes of metadata at scale.

Want to dive deeper?

Better Science, Volume 1 - Hardware and Software Co-design with DirectFlash: https://blog.purestorage.com/products/better-science-volume-1-hardware-and-software-co-design-with-directflash/
Better Science, Volume 2 - Maps, Metadata, and the Pyramid: https://blog.purestorage.com/perspectives/better-science-volume-2-maps-metadata-and-the-pyramid/
The Pure Report - Better Science Vol. 1 (DirectFlash): https://podcasts.apple.com/gb/podcast/better-science-volume-1-directflash/id1392639991?i=1000569574821

Purity//FA 6.9 is (Finally) Enterprise Ready!
A few months ago I wrote about the top 10 reasons to upgrade to Purity 6.9, and here are 10 more reasons, because 6.9 has just gone Enterprise Ready!

https://support.purestorage.com/bundle/m_flasharray_release/page/FlashArray/FlashArray_Release/01_Purity_FA_Release_Notes/topics/concept/c_purityfa_69x_release_notes.html

10. 💍 It's "Long-Life"! Stability until June 2028. That's a longer, more successful relationship than 90% of reality TV couples achieve.
9. ⚰️ Your Pure SE Won't Keep Bugging You About Running an EOL Release. You know who you are...
8. 💯 It's Been to College. It met the criteria for "customer fleet adoption, cumulative runtime, and observed uptime." Basically, it passed the field test with flying colors.
7. 🤝 You Get a Side of Fusion. Upgrade to 6.9 and get the powerful, simple-to-use multi-array storage platform management system included. You know you want it!
6. 😴 The Engineers Can Finally Go Home. A big thank you to the engineering, support, technical program management, and product management teams for all the hard work. Go take a nap!
5. 🛡️ We Have a Stable Alternative to Chasing New Features. For customers who want rock-solid reliability, you can skip the Feature Release (FR) line drama and stick with the LLR.
4. ✅ It's the Complete 6.8 Feature Set. You don't lose any capabilities; you just gain the confidence of a battle-tested release. Full meal deal, no compromises.
3. 🖱️ It's So Easy to Get There, Even the Intern Could Do It. Customers on compatible hardware are encouraged to use Self-Service Upgrades (SSU). Less work, more coffee breaks.
2. 🔒 Guaranteed Bug Fixes and Security Updates. This release is officially maintained, meaning your security team can finally relax... slightly.
1. 🚨 When You Call Support, We Won't Start With "Did You Upgrade Yet?"