Recent Discussions
AI Governance: It’s Time to Close the Widening Gap
Traditional governance is no longer enough to manage the scale of modern AI. As global regulations begin to fragment, Onur Korucu, DataRep Non-Executive Director, points out in the article "Inside the Shift Toward Internal Data Governance As Global AI Regulation Fragments" that organizations must move toward dynamic, internal industry frameworks. She says true AI control isn't just about software rules; it requires a deep understanding of your data flows and the infrastructure they run on. Since AI magnifies the biases of its inputs, effective AI governance is, at its core, rigorous data governance. To stay ahead, leaders must stop waiting for universal standards and start embedding continuous, technical monitoring into their everyday operations.

🗣️ Let's talk about it! 📣 Community Question: In your experience, where is the biggest gap between the legal intent of AI policy and the technological reality of how these systems actually run? Let's discuss! Click through to read the full article above and share your thoughts in the comments below!

We are just one week away from PUG #3
On January 28th, the Cincinnati Pure User Group will be convening at Ace's Pickleball to discuss enterprise file. We will be joined by Matt Niederhelman, Unstructured Data Field Solutions Architect, to help guide the conversation and answer questions about what he is seeing among other customers. Click the link below to register and come join us. Help us guide the conversation with your ideas for future topics. https://info.purestorage.com/2025-Q4AMS-COMREPLTFSCincinnatiPUG-LP_01---Registration-Page.html

Welcome & Intro
I'm VERY excited about this new Pure Storage Community site! Paired with the return of the PUG (Pure Users Group), these are both GREAT opportunities for the MOST important people out there, YOU, our customers, to meet others in the industry tackling similar problems. Thought I'd start off with a thread for introductions.

Joe Mudra (or just Mudra) here. I've been at Pure Storage ~3 years; prior to that I was a Sr. SE at Arctic Wolf Networks, and before that Veeam Software (for a while, ~6 yrs). I started in IT at Ohio University while attending school there before moving to the Columbus area, where I worked at a variety of IT shops locally (Omnicare, Residential Finance, Pacer Logistics, XPO Logistics, & Commercial Vehicle Group). I'm currently working with State, Local & Education accounts in Ohio. Needless to say, in that time I worked with a lot of network, server, and storage infrastructure stacks. But I honestly got my start in software administration: Esker Deliverware (faxing software), Microsoft Server administration (a whole slew of MSFT products), VMware!!! (RIP), Cisco, Cisco UCS, IBM, NetApp, EMC, HPE & Dell. I was a bit of the "Give it to Joe, he'll figure it out" guy for a while, and I learned so much by just raising my hand when asked if anyone wanted the new (sometimes tedious) sounding project.

I've been a Pure fanboy from the start. Unfortunately, in my years in the data centers, Pure at the time was out of my price range, as flash was $$$ back then (wish I had run the long-term TCO for my employers!) and I didn't understand Pure's Evergreen//Forever program, i.e. refreshed storage for the cost of normal maintenance + flat maintenance costs. (My apologies to my old employers for missing this opportunity.) I learn the most when I get to chat with customers and hear about their challenges. So THANK YOU! To every one of you who takes the time to share, I am forever grateful and appreciative!!!

Personally, I've got 2 daughters at Dublin Jerome HS, one who will graduate this year and head off to college, and another in her freshman year. I spend as much time as life allows with them. And the newest member of my family... a new Jeep Wrangler Willy's ER (Annie). Let's talk about Jeeps!!! :)

Ask Us Everything about Pure Storage + Nutanix
💬 Get ready for our January 2026 edition of Ask Us Everything, this Friday, January 16th at 9 AM Pacific. This month is all about Pure Storage + Nutanix. If you have a burning question, feel free to ask it here early and we'll add it to the list to answer on Friday. Or if we can't get it answered live, our Pure Storage + Nutanix experts can follow up here. thomasbrown, Cody_Hosterman, jhoughes & dpoorman are the experts answering your questions during the conversation and here on the community. See you this Friday! (Oh, and if you haven't registered yet, there's still time!)

Or, check out some of these self-serve resources:
- Solution Brief
- Pure Report Podcast
- Pure360 Video
- Nutanix, Intel, & Pure white paper

EDIT: Thanks for joining in! If you have additional burning questions and comments, leave them in the comments below for the team!

Pure Storage Cloud Dedicated on Azure: An intro to Performance
Introduction

With Pure Storage Cloud Dedicated on Microsoft Azure, performance is largely governed by three factors: front-end controller networking, the controllers' back-end connection to managed disks, and the Purity data path. This post explains how Azure building blocks and these factors influence overall performance.

Disclaimer: This post assumes a basic understanding of PSC Dedicated architecture. Real-life performance varies based on configuration and workload; examples here are illustrative.

Architecture: the building blocks that shape performance

Cloud performance often comes from how compute, storage, and networking are assembled. PSC Dedicated deploys two Azure VMs as storage controllers running the Purity operating environment and uses Azure Managed Disks as persistent media. Initiator VMs connect over the Azure Virtual Network using in-guest iSCSI or NVMe/TCP. Features like inline data reduction, write coalescing through NVRAM, and an I/O rate limiter help keep the array stable and its performance predictable under saturation.

Front-end performance: networking caps

Azure limits the outbound (egress) bandwidth of Virtual Machines. Each Azure VM has a certain network egress cap assigned and cannot send out more data than the limit allows. Because PSC Dedicated controllers run on Azure VMs, this translates into the following:

- Network traffic going INTO the PSC Dedicated array (writes) is not throttled by Azure outbound bandwidth limits.
- Network traffic going OUT of the PSC Dedicated array (reads) is limited.

User-requested reads (e.g. from an application) as well as any replication traffic leaving the controller share the same egress budget. Because of that, workloads that include replication should be planned carefully to avoid competing with client reads.

Back-end performance: VM caps, NVMe, and the write path

The controller VM caps

Similarly to front-end network read throughput, Azure enforces per-VM limits on total backend IOPS and combined read/write throughput. The overall IOPS/throughput of a VM is therefore limited by the lower of:

- the controller VM's IOPS/throughput cap, and
- the combined IOPS/throughput of all attached managed disks.

To avoid unnecessary spend due to overprovisioning, the managed disks of PSC Dedicated arrays are configured to just saturate the controller backend caps.

NVMe backend raises the ceiling

Recent PSC Dedicated releases adopt an NVMe backend on supported Azure Premium SSD v2 based SKUs, increasing the controller VM's backend IOPS and bandwidth ceilings. The disk layout and economics remain the same while the array gains backend headroom.

The write path

Purity secures initiator writes to NVRAM (for fast acknowledgment) and later destages them to the data managed disks. For each logical write, the backend cap is therefore tapped multiple times:

- a write to NVRAM,
- a read from NVRAM during flush, and
- a write to the data managed disks.

Under mixed read/write, non-reducible workloads this can exhaust the combined read/write backend bandwidth and IOPS of the controller VM. The raised caps of the NVMe backend help here.

Workload characteristics: iSCSI sessions and data reducibility

Block size and session count

Increasing the iSCSI session count between initiator VMs and the array does not guarantee better performance; with large blocks, too many sessions can increase latency without improving throughput, especially when multiple initiators converge on the same controller.
Establish at least one session per controller for resiliency, then tune based on measured throughput and latency.

Data reduction helps extend backend headroom

When data is reducible, PSC Dedicated writes fewer physical bytes to the backend managed disks. That directly reduces backend write MBps for the same logical workload, delaying the point where Azure's VM backend caps are reached. The effect is most pronounced for write-heavy and mixed workloads. Conversely, non-reducible data translates almost 1:1 to backend traffic, hitting limits sooner and raising latency at high load.

Conclusion

Predictable performance in the cloud is about aligning architecture and operations with the platform's limits. For PSC Dedicated on Azure, that means selecting the right controller and initiator VM SKUs, co-locating resources to minimise network distance, enabling accelerated networking, and tuning workloads (block size, sessions, protocol) to the caps that actually matter. Inline data reduction and the NVMe backend extend headroom meaningfully (particularly for mixed workloads) while Purity's design keeps the experience consistent. Hopefully, this post was able to shed light on at least some of the performance factors of PSC Dedicated on Azure.
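To make the back-end discussion above a little more concrete, here is a minimal back-of-the-envelope sketch in Python. It models the three write-path hops (NVRAM write, NVRAM read on flush, data-disk write), assumes data reduction applies only to the final data-disk leg, and compares the resulting demand against the lower of a controller VM cap and the combined managed-disk caps. Every number in it (caps, disk count, reduction ratio, workload mix) is an illustrative placeholder rather than a published PSC Dedicated or Azure limit, and the model deliberately ignores caching, metadata, and garbage collection.

```python
# Back-of-the-envelope model of PSC Dedicated backend throughput on Azure.
# Every number used below is an illustrative placeholder, not a published limit.

def backend_write_mbps(logical_write_mbps: float, data_reduction_ratio: float) -> float:
    """Estimate backend MBps generated by logical writes.

    Simplified view of the write path described above:
      1. the logical data is written to NVRAM (full size),
      2. read back from NVRAM during the flush (full size),
      3. written, reduced, to the data managed disks.
    Reduction is assumed to apply only to the final data-disk leg.
    """
    nvram_write = logical_write_mbps
    nvram_flush_read = logical_write_mbps
    data_disk_write = logical_write_mbps / data_reduction_ratio
    return nvram_write + nvram_flush_read + data_disk_write


def effective_backend_cap_mbps(vm_cap_mbps: float,
                               per_disk_cap_mbps: float,
                               disk_count: int) -> float:
    """The controller never exceeds the lower of its own VM backend cap
    and the combined caps of all attached managed disks."""
    return min(vm_cap_mbps, per_disk_cap_mbps * disk_count)


if __name__ == "__main__":
    # Hypothetical workload: 1,500 MBps of logical writes at 2.5:1 reduction,
    # plus 1,000 MBps of logical reads served from the data disks.
    demand = backend_write_mbps(1500, data_reduction_ratio=2.5) + 1000

    # Hypothetical platform: controller VM capped at 4,000 MBps of combined
    # backend throughput, with 8 managed disks at 600 MBps each.
    cap = effective_backend_cap_mbps(vm_cap_mbps=4000,
                                     per_disk_cap_mbps=600,
                                     disk_count=8)

    print(f"Estimated backend demand: {demand:.0f} MBps")
    print(f"Effective backend cap:    {cap:.0f} MBps")
    print("Within headroom" if demand <= cap else "Expect throttling and added latency")
```

Real sizing should always come from the Azure documentation for the specific VM and managed disk SKUs in use, plus Pure's sizing guidance, but the shape of the math (amplified writes compared against the minimum of the caps) stays the same.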
Veeam v13 Integration and Plugin

Hi Everyone, we're new Pure customers this year and have two FlashArray//C models: one for virtual infrastructure, and the other will be used solely as a storage repository to back up those virtual machines using Veeam Backup and Replication. Our plan is to move away from the current Windows-based Veeam v12 in favor of Veeam v13 hardened Linux appliances. We're in the design phase now but have Veeam v13 working great in a separate environment with VMware and HPE Nimble. Our question is around Pure Storage and Veeam v13 integration and plugin support. Veeam's product team mentions there are native integrations in v12, but that storage vendors should be "adopting USAPI" going forward. Is this something that Pure is working on, or maybe has already completed, with Veeam Backup and Replication v13?
Proxmox VE

Hi all, hope you're all having a great day. We have several customers going down the Proxmox VE road. One of my colleagues was put onto https://github.com/kolesa-team/pve-purestorage-plugin as a possible solution, as using Pure behind Proxmox (with the native Proxmox release) is not a particularly Pure-like experience. Could someone from Pure comment on the plugin's validity/supportability?
Taking Snapshots of Databases on VMFS Datastores

Hey friends - hopefully you all are taking advantage of our snapshots for copy data management? Well, those of you who use VMDKs know that there's an extra headache thanks to the VMFS datastore layer. Fortunately, I've just published some new examples in our GitHub repository and have written up a solution overview on my blog! Check it out! https://sqlbek.wordpress.com/2026/01/08/taking-snapshots-of-databases-on-vmfs-datastores/ GitHub: Refresh VMFS VMDK(s) with Snapshot Point in Time Recovery – VMFS
Announcing the General Availability of Purity//FB 4.6.6

We are happy to announce the general availability of 4.6.6, the seventh release in the 4.6 Feature Release line. See the release notes for all the details about these, and the many other features, bug fixes, and security updates included in the 4.6 release line.

UPGRADE RECOMMENDATIONS AND EOL SCHEDULE

Customers who are running any previous 4.6 version should upgrade to 4.6.6. Customers who are looking for long-term maintenance of a consistent feature set are recommended to upgrade to the 4.5 LLR. Check out our AI Copilot intelligent assistant for deeper insights into release content and recommendations. Development on the 4.6 release line will continue through February 2026. After this time the full 4.6 feature set will roll into the 4.7 Long Life Release line for long-term maintenance, and the 4.6 line will be declared End-of-Life (EOL).

HARDWARE SUPPORT

This release is supported on the following FlashBlade Platforms: FB//S100, FB//S200 (R1, R2), FB//S500 (R1, R2), FB//ZMT, FB//E, FB//EXA

LINKS AND REFERENCES

- Purity//FB 4.6 Release Notes
- Purity//FB Release and End-of-Life Schedule
- Purity//FB Release Guidelines
- FlashBlade Hardware and End-of-Support
- FlashBlade Capacity and Feature Limits
- Pure1 Manage AI Copilot
Our Pug (Mascot) Needs a Name

I was thinking yesterday... we need a name for our Pug. Not the community itself, but our beloved Orange Pug, our mascot... Let's put a smile on our mascot's face! Share your suggestions here, and we'll vote both online and in person on January 28th at Aces.
Upcoming Events
- Thursday, Feb 05, 2026, 09:00 AM PST
- Thursday, Feb 12, 2026, 10:00 AM PST
Featured Places
Introductions
Welcome! Please introduce yourself to the Pure Storage Community.
Pure User Groups
Explore groups and meetups near you.
/CODE
The Pure /Code community is where collaboration thrives and everyone, from beginners taking their first steps to experts honing their craft, comes together to learn, share, and grow. In this inclusive space, you'll find support, inspiration, and opportunities to elevate your automation, scripting, and coding skills, no matter your starting point or career position. The goal is to break barriers, solve challenges, and most of all, learn from each other.
Career Growth
A forum to discuss career growth and skill development for technology professionals.
Featured Content
February 5 | Register now! Cyberattacks are faster and smarter—recovery must be too. Join Pure Storage and Rubrik to see the industry’s first integrated cyber-recovery solution that delivers full...
This blog post argues that Context Engineering is the critical new discipline for building autonomous, goal-driven AI agents. Since Large Language Models (LLMs) are stateless and forget information o...
This article originally appeared on Medium.com and is republished with permission from the author.
Cloud-native applications must often co-exist with legacy applications. Those legacy applications ...