Recent Discussions
Pure Fusion File Presets & Workloads on FB 4.6.7 and FA 6.10.4: Less Click‑Ops, More Policy
If you've ever built the "standard" NFS/SMB layout for an app for the fifth time in a week and thought "this should be a function, not a job", this release is for you. With FlashBlade 4.6.7 and FlashArray 6.10.4, Pure Fusion finally gives file the same treatment block has had for a while: presets and workloads for file services across FlashBlade and FlashArray, a much sharper Presets & Workloads UI, plus smarter placement and resource naming controls tuned for real environments, not demos. This post is written for people who already know what NFS export policies and snapshot rules are and are mostly annoyed they still have to configure them by hand.

Problem Statement: Your "Standard" File Config is a Lie

The current pattern in most environments: every app team needs "just a few file shares", so you (or your scripts) manually:
- Pick an array and hope it's the right one.
- Create file systems and exports.
- Glue on snapshot/replication policies.
- Try to respect naming conventions and tagging.

Six months later:
- The same logical workload looks different on every array.
- Audit and compliance people open tickets.
- Nobody remembers what "fs01-old2-bak" was supposed to be.

Fusion File Presets & Workloads exist to eradicate that pattern:
- Presets = declarative templates describing how to provision a workload (block or file).
- Workloads = concrete instances of those presets deployed somewhere in a Fusion fleet (FA, FB, or both).

In nerd-speak, think: Helm chart for storage (preset) vs. Helm release (workload).

Quick Mental Model: What Presets & Workloads Actually Are

A file preset can include, for example:
- Number of file systems (FlashBlade or FlashArray File).
- Directory layout and export policies (for NFS/SMB).
- Snapshot policies and async replication (through protection groups, or pgroups).
- Per-workload tags (helps in finding a needle in a haystack, and more).
- Quota and snapshot parameters.

A workload is a Fusion object that:
- References the preset in its entirety.
- Tracks where the underlying Purity objects live.
- Surfaces health, capacity, and placement at the fleet level.

In code-brain terms:

  preset: app-file-gold
  parameters:
    env: prod
    fs_count: 4
    fs_size: 10TB
    qos_iops_max: 50000
  placement:
    strategy: recommended   # Pure1 or dark-site heuristic depending on connectivity
    constraints:
      platform: flashblade

Fusion resolves that into resources and objects on one or more arrays: purefs objects, exports, pgroups, QoS, tags, and consistently named resources. (There's a toy code sketch of this preset-to-workload relationship a little further down.)

So, what's new, you ask?

What's New on FlashBlade in Purity//FB 4.6.7

1. Fusion File Presets & Workloads for FlashBlade

Purity//FB 4.6.7 is the release where FlashBlade joins the Fusion presets/workloads party for file. Key points:
- You can now define Fusion file presets that describe:
  - Number/size of file systems.
  - Export policies (NFS/SMB).
  - Snapshot/replication policies.
  - Tags and other metadata.
- You then create Fusion file workloads from those presets, deployed onto any compatible FlashBlade or FlashArray in the fleet, depending on your constraints and placement recommendations.

That means you stop hand-crafting per-array configs and start stamping out idempotent policies.
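To make the preset-to-workload mental model above a bit more concrete, here is a minimal Python sketch of the idea. This is not the Fusion API: the class names, fields, and validation logic are illustrative stand-ins for what a preset's parameter bounds give you (the real parameter schema appears in the JSON example later in this post).

  # Toy model of the preset -> workload relationship. Not the Fusion API;
  # names and fields are illustrative only.
  from dataclasses import dataclass

  @dataclass
  class Preset:
      """Declarative template: which parameters exist and what values are allowed."""
      name: str
      bounds: dict  # parameter -> {"min": .., "max": .., "default": .., "allowed": [..]}

      def validate(self, params: dict) -> dict:
          resolved = {}
          for key, rule in self.bounds.items():
              value = params.get(key, rule.get("default"))
              if value is None:
                  raise ValueError(f"{key} is required by preset {self.name}")
              if "allowed" in rule and value not in rule["allowed"]:
                  raise ValueError(f"{key}={value} not in {rule['allowed']}")
              if ("min" in rule and value < rule["min"]) or ("max" in rule and value > rule["max"]):
                  raise ValueError(f"{key}={value} outside allowed bounds")
              resolved[key] = value
          return resolved

  @dataclass
  class Workload:
      """Concrete instance of a preset, deployed somewhere in the fleet."""
      name: str
      preset: Preset
      params: dict

  gold = Preset("app-file-gold", {
      "fs_count": {"min": 1, "max": 16, "default": 4},
      "fs_size_tib": {"min": 1, "max": 50, "default": 10},
      "env": {"allowed": ["dev", "test", "prod"], "default": "dev"},
  })
  wl = Workload("payments-file-prod", gold, gold.validate({"fs_count": 8, "env": "prod"}))
  print(wl.params)  # {'fs_count': 8, 'fs_size_tib': 10, 'env': 'prod'}

The point of the sketch: the preset author decides what is tunable and within which bounds; the person (or pipeline) creating the workload only supplies values, and anything out of bounds is rejected before a single Purity object gets created.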
2. New Presets & Workloads GUI on FlashBlade

Purity//FB 4.6.7 brings proper Fusion GUI surfaces to FB:
- Storage → Presets: create/edit/delete Fusion presets (block + file), and upload/download preset JSON directly from the GUI.
- Storage → Workloads: instantiate workloads from presets; see placement, status, and underlying resources across the fleet.

Why this is a real improvement, not just new tabs:
- Single mental model across FA and FB: the same abstractions (preset → workload → Purity objects) and the same UX for block and file.
- Guard-railed customization: the GUI only exposes parameters marked as configurable in the preset (with limits), so you can safely delegate provisioning to less storage-obsessed humans without getting random snapshot policies.

3. JSON Preset Upload/Download (CLI + GUI)

This release also adds full round-trip JSON support for presets, including in the GUI. On the CLI side:

  # Export an existing preset definition as JSON
  purepreset workload download app-file-gold > app-file-gold.json

  # Edit the JSON, save it to a file share, version control it, commit to git, run it through CI, etc.

  # Import the preset into another fleet or array
  purepreset workload upload --context fleet-prod app-file-gold < app-file-gold.json

Effects:
- Presets become versionable artifacts (Git, code review, promotion).
- You can maintain a central preset catalog and promote from dev → QA → prod like any other infra-as-code.
- Sharing configs stops being "here's a screenshot of my settings."

4. Fusion Dark Site File Workload Placement + Get Recommendations

Many folks run fleets without outbound connectivity, for various reasons. Until now, that meant "no fancy AI placement recommendations" for those sites. Fusion Dark Site File Workload Placement changes that. When Pure1 isn't reachable, Fusion can still compute placement recommendations for file workloads across the fleet using local telemetry:
- Capacity utilization.
- Performance headroom.
- QoS ceilings/commitments (where applicable).

In the GUI, when you're provisioning a file workload from a preset, you can hit "Get Recommendations": Fusion evaluates candidate arrays within the fleet and returns a ranked list of suitable targets, even in an air-gapped environment (a toy illustration of this kind of ranking follows the FlashArray section below).

So, in dark sites you still get:
- Data-driven "put it here, not there" hints.
- Consistency with what you're used to on the block side when Pure1 is available, but without the cloud dependency.

What's New on FlashArray in Purity//FA 6.10.4

1. Fusion File Presets & Workloads for FlashArray File

Purity//FA 6.10.4 extends Fusion presets and workloads to FlashArray File Services. You can now:
- Define file presets on FA that capture:
  - File system count/size.
  - NFS/SMB export behavior.
  - QoS caps at the workload/volume group level.
  - Snapshot/async replication policies via pgroups.
  - Tags and metadata.
- Provision file workloads on FlashArray using those presets, from any Fusion-enabled FA in the fleet, with the same UX and API that you use for block workloads.

This effectively normalizes block and file in Fusion: a fleet-level view, the same provisioning primitives (preset → workload), and the same policy and naming controls.

2. Fusion Pure1-WLP Replication Placement (Block Workloads)

Also introduced is Fusion Pure1 Workload Replication Placement for block workloads. When you define replication in a block preset:
- Fusion can ask Pure1 Workload Planner for a placement plan: primary/replica arrays are chosen using capacity and performance projections, which avoids packing everything onto that one "lucky" array.
- Workload provisioning then uses this plan automatically. You can override, but the default is data-backed rather than "whatever's top of the list."

It's the same idea as dark-site file placement, just with more telemetry and projection thanks to Pure1.
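Since both the dark-site file placement and the Pure1-backed replication placement boil down to "rank candidate arrays by telemetry," here is a deliberately tiny Python illustration of that kind of ranking. The array names, utilization numbers, and scoring weights are made up; Fusion's actual heuristics and the Pure1 Workload Planner projections are far more sophisticated than a two-term weighted sum.

  # Toy "Get Recommendations"-style ranking. Numbers and weights are illustrative only.
  candidates = [
      {"array": "fb-dc1-01", "capacity_used_pct": 62, "perf_headroom_pct": 45},
      {"array": "fb-dc1-02", "capacity_used_pct": 38, "perf_headroom_pct": 70},
      {"array": "fa-dc2-01", "capacity_used_pct": 81, "perf_headroom_pct": 20},
  ]

  def score(a: dict) -> float:
      # Prefer arrays with more free capacity and more performance headroom.
      free_capacity_pct = 100 - a["capacity_used_pct"]
      return 0.5 * free_capacity_pct + 0.5 * a["perf_headroom_pct"]

  for a in sorted(candidates, key=score, reverse=True):
      print(f'{a["array"]:10s}  score={score(a):5.1f}')
  # fb-dc1-02   score= 66.0
  # fb-dc1-01   score= 41.5
  # fa-dc2-01   score= 19.5

The value isn't the math; it's that the ranking is computed from fleet telemetry instead of from whichever array name someone remembers first.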
Resource Naming Controls: Have it your way

If you care about naming standards, compliance, and audit (or just hate chaos and stress), this one matters. Fusion Presets Resource Naming Controls let you define deterministic naming patterns for all the objects a preset creates.

Allowed variables might include:
- workload_name
- tenant / app / env
- platform (flasharray-x, flashblade-s, etc.)
- datacenter site code
- sequenced IDs

You can then define patterns like:

  fs_name_pattern: "{tenant}-{env}-{workload_name}-fs{seq}"
  export_name_pattern: "{tenant}_{env}_{app}_exp{seq}"
  pgroup_name_pattern: "pg-{app}-{env}-{region}"

(A tiny pattern-expansion sketch follows the GUI section below.)

The result: every file system, export, pgroup, and volume created by that preset follows the pattern and satisfies internal CS/IT naming policies for compliance and audits. You can still parameterize inputs (e.g., tenant=finops, env=prod), but the structure is enforced. No more hunting down "test2-final-old" in front of auditors and pretending that was intentional. Not speaking from experience, though :-)

The Updated Presets & Workloads GUI: Simple is better

Across Purity//FB 4.6.7 and Purity//FA 6.10.4, Fusion's UI for presets and workloads is now a graphical, wizard-style interface that is easier to follow, with more help along the way.

Single Pane, Shared Semantics
- Storage → Presets: block + file presets (FA + FB) in one place, with JSON import/export.
- Storage → Workloads: all workloads, all arrays; filter by type, platform, tag, or preset.

Benefits for technical users:
- Quick answers to "What's our standard for <workload X>?" and "Where did we deploy it, and how many variants exist?"
- Easy diff between "what the preset says" and "what's actually deployed."

Guard-Rails Through Parameterization

Preset authors (yes, we're looking at you) decide:
- Which fields are fixed (prescriptive) vs. configurable.
- The bounds on configurable fields (e.g., fs_size between 1-50 TB).

In the GUI, that becomes:
- A minimal set of fields for provisioners to fill in.
- Validation baked into the wizard.
- Workloads that align with standards without needing a 10-page runbook.

Integrated Placement and Naming

When you create a workload via the new GUI, you get:
- "Get Recommendations" for placement: Pure1-backed in connected sites (block), dark-site logic for file workloads on FB when offline.
- Naming patterns from the resource naming controls baked in, not bolted on afterward.

So you're not manually choosing:
- Which array is "least bad" today.
- How to hack the name so it still passes your log-parsing scripts.
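The naming patterns shown earlier are plain placeholder substitution, which is exactly what makes them easy to reason about and to test. Here is a tiny Python sketch of how such a pattern expands; the tenant/app/region values are made-up examples, and Fusion enforces the pattern on the array side rather than in your scripts.

  # Illustrative expansion of the naming patterns shown above.
  fs_pattern = "{tenant}-{env}-{workload_name}-fs{seq}"
  pg_pattern = "pg-{app}-{env}-{region}"

  params = {"tenant": "finops", "env": "prod", "app": "payments",
            "workload_name": "payments-file-prod", "region": "emea"}

  for seq in range(1, 4):
      print(fs_pattern.format(seq=seq, **params))
  print(pg_pattern.format(**params))
  # finops-prod-payments-file-prod-fs1
  # finops-prod-payments-file-prod-fs2
  # finops-prod-payments-file-prod-fs3
  # pg-payments-prod-emea

If an auditor asks how a file system got its name and why, the answer is a one-line pattern in a versioned preset rather than tribal knowledge.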
CLI / API: What This Looks Like in Practice

If you prefer the CLI over the GUI, Fusion doesn't punish you.

Example: Defining and Using a File Preset

Author a preset JSON (simplified example):

  {
    "name": "app-file-gold",
    "type": "file",
    "parameters": {
      "fs_count":    { "min": 1, "max": 16, "default": 4 },
      "fs_size_tib": { "min": 1, "max": 50, "default": 10 },
      "tenant":      { "required": true },
      "env":         { "allowed": ["dev", "test", "prod"], "default": "dev" }
    },
    "naming": {
      "filesystem_pattern": "{tenant}-{env}-{workload_name}-fs{seq}"
    },
    "protection": {
      "snapshot_policy": "hourly-24h-daily-30d",
      "replication_targets": ["dr-fb-01"]
    }
  }

Upload the preset into a fleet:

  purepreset workload upload --context fleet-core app-file-gold < app-file-gold.json

Create a workload and let Fusion pick the array:

  pureworkload create \
    --context fleet-core \
    --preset app-file-gold \
    --name payments-file-prod \
    --parameter tenant=payments \
    --parameter env=prod \
    --parameter fs_count=8 \
    --parameter fs_size_tib=20

Inspect placement and underlying resources:

  pureworkload list --context fleet-core --name payments-file-prod --verbose

Behind the scenes:
- Fusion picks suitable arrays using Pure1 Workload Placement (for connected sites) or dark-site logic.
- purefs/exports/pgroups are created with names derived from the preset's naming rules.

Example: Binding Existing Commands to Workloads

The new version also extends several CLI commands with workload awareness:

  purefs list --workload payments-file-prod
  purefs setattr --workload payments-file-prod ...
  purefs create --workload payments-file-prod --workload-configuration app-file-gold

This is handy when you need to:
- Troubleshoot or resize all file systems in a given workload.
- Script around logical workloads instead of individual file systems.

Why This Matters for You (Not Just for Slides)

Net impact of FB 4.6.7 + FA 6.10.4 from an admin's perspective:
- File is now truly first-class in Fusion, across both FlashArray and FlashBlade.
- You can encode "how we do storage here" as code: presets (JSON + GUI), parameterization and naming rules, placement and protection choices.
- Dark sites get sane placement via "Get Recommendations" for file workloads, instead of best-guess manual picks.
- Resource naming is finally policy-driven, not left to whoever is provisioning at 2 AM.
- GUI, CLI, and API are aligned around the same abstractions, so you can prototype in the UI, commit JSON to Git, and automate via CLI/API without re-learning concepts.

Next Steps

If you want to kick the tires:
- Upgrade FlashBlade to Purity//FB 4.6.7 and FlashArray to Purity//FA 6.10.4.
- Pick one or two high-value patterns (e.g., "DB file services", "analytics scratch", "home directories").
- Implement them as Fusion presets with parameters, placement hints, and naming rules.
- Wire them into your existing tooling: use the GUI for ad-hoc work, and wrap purepreset / pureworkload in your pipelines for everything else (a minimal wrapper sketch follows at the end of this post).

You already know how to design good storage. These releases just make it a lot harder for your environment to drift away from that design the moment humans touch it.
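That last bullet about pipelines deserves one concrete example. Here is a hedged sketch of wrapping pureworkload in a small Python helper; the CLI flags mirror the examples earlier in this post, but the wrapper itself (function name, error handling) is only an illustration of the idea, not a shipped tool.

  # Minimal pipeline-style wrapper around the pureworkload CLI shown above.
  import subprocess
  import sys

  def create_workload(context: str, preset: str, name: str, **params) -> None:
      cmd = ["pureworkload", "create", "--context", context,
             "--preset", preset, "--name", name]
      for key, value in params.items():
          cmd += ["--parameter", f"{key}={value}"]
      print("running:", " ".join(cmd))
      result = subprocess.run(cmd, capture_output=True, text=True)
      if result.returncode != 0:
          sys.exit(f"workload create failed: {result.stderr.strip()}")

  create_workload("fleet-core", "app-file-gold", "payments-file-prod",
                  tenant="payments", env="prod", fs_count=8, fs_size_tib=20)

Drop something like that behind your CI gate of choice and the "standard" layout really does become a function instead of a job.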
Pure Certifications

Hey gang, if any of you currently hold a Flash Array certification, there is an alternative to retaking the test to renew your cert. The Continuing Pure Education (CPE) program takes into account learning activities and community engagement and contribution hours to renew your FA certification. I just successfully renewed my Flash Array Storage Professional cert by tracking my activities. Below are the details I received from Pure:
- Customers can earn 1 CPE credit per hour of session attendance at Accelerate, for a maximum of 10 CPEs total (i.e., up to 10 hours of sessions). Sessions must be attended live. I would go ahead and add all the sessions you attended at Accelerate to the CPE_Submission form.
- Associate-level certifications will auto-renew as long as there is at least one active higher-level certification (e.g., Data Storage Associate will auto-renew anytime a Professional-level cert is renewed). All certifications other than the Data Storage Associate should be renewed separately.
- At this time, the CPE program only applies to FlashArray-based exams. Non-FA exams may be renewed by retaking the respective test every three years.

You should be able to get the CPE submission form from your account team. Once complete, email your recertification log to peak-education@purestorage.com for formal processing.
Coming soon! The Pure Fusion MCP Server

Have you tried out the power and flexibility of using MCP Servers in your daily admin life? If you haven't, you should really look into the power they can provide. Pure has developed its own MCP server for Pure Fusion and we will be releasing it soon. Check out this blog article to read more about the "sneak peek" into what is coming. And always remember - Automate! Automate! Automate!
AI Governance: It's Time to Close the Widening Gap

Traditional governance is no longer enough to manage the scale of modern AI. As global regulations begin to fragment, the article "Inside the Shift Toward Internal Data Governance As Global AI Regulation Fragments" by Onur Korucu, DataRep Non-Executive Director, argues that organizations must move toward dynamic, internal industry frameworks. She says true AI control isn't just about software rules; it requires a deep understanding of your data flows and the infrastructure they run on. Since AI magnifies the biases of its inputs, effective AI governance is, at its core, rigorous data governance. To stay ahead, leaders must stop waiting for universal standards and start embedding continuous, technical monitoring into their own everyday operations.

---------------------------------------------------------------

🗣️ Let's talk about it!

📣 Community Question: In your experience, where is the biggest gap between the legal intent of AI policy and the technological reality of how these systems actually run? Let's discuss! Click through to read the entire article above and let us know your thoughts in the comments below!
We are just one week away from PUG #3

On January 28th, the Cincinnati Pure User Group will be convening at Ace's Pickleball to discuss enterprise file. We will be joined by Matt Niederhelman, Unstructured Data Field Solutions Architect, to help guide the conversation and answer questions about what he is experiencing amongst other customers. Click the link below to register and come join us. Help us guide the conversation with your ideas for future topics.

https://info.purestorage.com/2025-Q4AMS-COMREPLTFSCincinnatiPUG-LP_01---Registration-Page.html
Welcome & Intro

I'm VERY excited about this new Pure Storage Community site! Paired with the return of the PUG (Pure Users Group), these are both GREAT opportunities for the MOST important people out there, YOU, our customers, to meet others in the industry tackling similar problems. Thought I'd start off with a thread for introductions.

Joe Mudra (or just Mudra) here. I've been at Pure Storage ~3 years; prior to that I was a Sr. SE at Arctic Wolf Networks, and before that Veeam Software (for a while, ~6 yrs). I started in IT at Ohio University while attending school there, before moving to the Columbus area where I worked at a variety of IT shops locally (Omnicare, Residential Finance, Pacer Logistics, XPO Logistics, & Commercial Vehicle Group). I'm currently working with State, Local & Education accounts in Ohio.

Needless to say, in that time I worked with a lot of network, server, and storage infrastructure stacks. But I honestly got my start in software administration: Esker Deliverware (faxing software), Microsoft Server administration (a whole slew of MSFT products), VMWARE!!! (rip), Cisco, Cisco UCS, IBM, NetApp, EMC, HPE & Dell. I was a bit of the "Give it to Joe, he'll figure it out." guy for a while, and I learned so much by just raising my hand when asked if anyone wanted the new (sometimes tedious-sounding) project.

I've been a Pure fanboy from the start. Unfortunately, in my years in the data centers, Pure at the time was out of my price range, as flash was $$$ back then (wish I had run the long-term TCO for my employers!) and I didn't understand Pure's Evergreen//Forever program, i.e. refreshed storage for the cost of normal maintenance + flat maintenance costs. (My apologies to my old employers for missing this opportunity.)

I learn the most when I get to chat with customers and hear about their challenges. So THANK YOU! To every one of you who takes the time to share, I am forever grateful and appreciative!!!

Personally, I've got 2 daughters at Dublin Jerome HS, one who will graduate this year and head off to college, and another in her freshman year. I spend as much time as life allows with them. And the newest member of my family... a new Jeep Wrangler Willy's ER (Annie). Let's talk about Jeeps!!! :)
Ask Us Everything about Pure Storage + Nutanix

💬 Get ready for our January 2026 edition of Ask Us Everything, this Friday, January 16th at 9 AM Pacific. This month is all about Pure Storage + Nutanix. If you have a burning question, feel free to ask it here early and we'll add it to the list to answer on Friday. Or if we can't get it answered live, our Pure Storage + Nutanix experts can follow up here. thomasbrown Cody_Hosterman jhoughes & dpoorman are the experts answering your questions during the conversation and here on the community. See you this Friday! (Oh, and if you haven't registered yet, there's still time!)

Or, check out some of these self-serve resources:
- Solution Brief
- Pure Report Podcast
- Pure360 Video
- Nutanix, Intel, & Pure white paper

EDIT: Thanks for joining in! If you have additional burning questions and comments, leave them in the comments below for the team!
Pure Storage Cloud Dedicated on Azure: An intro to Performance

Introduction

With Pure Storage Cloud Dedicated on Microsoft Azure, performance is largely governed by three factors that need to be taken into consideration: front-end controller networking, the controllers' backend connection to managed disks, and the Purity data path. This post explains how Azure building blocks and these factors influence overall performance.

Disclaimer: This post requires a basic understanding of PSC Dedicated architecture. Real-life performance varies based on configuration and workload; examples here are illustrative.

Architecture: the building blocks that shape performance

Cloud performance often comes from how compute, storage, and networking are assembled. PSC Dedicated deploys two Azure VMs as storage controllers running the Purity operating environment and uses Azure Managed Disks as persistent media. Initiator VMs connect over the Azure Virtual Network using in-guest iSCSI or NVMe/TCP. Features like inline data reduction, write coalescing through NVRAM, and an I/O rate limiter help keep the array stable and its performance predictable under saturation.

Front-end performance: networking caps

Azure limits the outbound (egress) bandwidth of virtual machines. Each Azure VM has a network egress cap assigned and cannot send out more data than that limit allows. Because PSC Dedicated controllers run on Azure VMs, this translates into the following:
- Network traffic going INTO the PSC Dedicated array (writes) is not throttled by Azure outbound bandwidth limits.
- Network traffic going OUT of the PSC Dedicated array (reads) is limited.

User-requested reads (e.g., from an application) as well as any replication traffic leaving the controller share the same egress budget. Because of that, workloads with replication should be planned carefully to avoid competing with client reads.

Back-end performance: VM caps, NVMe, and the write path

The controller VM caps

Similarly to front-end network read throughput, Azure enforces per-VM limits on total backend IOPS and combined read/write throughput. The overall IOPS/throughput of a VM is therefore limited by the lower of:
- the controller VM's IOPS/throughput cap, and
- the combined IOPS/throughput of all attached managed disks.

To avoid unnecessary spend due to overprovisioning, the managed disks of PSC Dedicated arrays are configured to saturate the controller backend caps just right.

NVMe backend raises the ceiling

Recent PSC Dedicated releases adopt an NVMe backend on supported Azure Premium SSD v2 based SKUs, increasing the controller VM's backend IOPS and bandwidth ceilings. The disk layout and economics remain the same while the array gains backend headroom.

The write path

Purity secures initiator writes to NVRAM (for fast acknowledgment) and later destages them to the data managed disks. For each logical write, the backend cap is therefore tapped multiple times:
- a write to NVRAM,
- a read from NVRAM during the flush, and
- a write to the data managed disks.

Under mixed read/write, non-reducible workloads this can exhaust the combined read/write backend bandwidth and IOPS of the controller VM. The raised caps of the NVMe backend help here. (A back-of-envelope sizing sketch follows below.)
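Here is a back-of-envelope Python sketch that pulls the pieces above together: the "lower of VM cap vs. disk sum" rule and the multiple backend touches per logical write, with data reducibility (covered in the next section) included as a factor. Every number in it is an illustrative placeholder, not an actual Azure or PSC Dedicated limit, and it simplifies by applying the data reduction ratio across the whole write path; use your SKU's published caps for real planning.

  # Back-of-envelope backend sizing check. All numbers are illustrative placeholders.
  vm_backend_mbps  = 1200   # hypothetical controller VM combined read/write throughput cap
  disks_total_mbps = 1600   # hypothetical sum of attached managed-disk throughput
  backend_cap_mbps = min(vm_backend_mbps, disks_total_mbps)   # the lower of the two wins

  logical_write_mbps = 300  # what initiators send to the array
  logical_read_mbps  = 200  # what initiators read back
  data_reduction     = 2.0  # 2:1 reducible data roughly halves physical write bytes

  # Each logical write taps the backend ~3 times:
  # NVRAM write + NVRAM read during flush + write to the data managed disks.
  physical_write_mbps = logical_write_mbps / data_reduction
  backend_demand_mbps = physical_write_mbps * 3 + logical_read_mbps

  print(f"backend demand ~{backend_demand_mbps:.0f} MBps "
        f"of a {backend_cap_mbps} MBps cap "
        f"({backend_demand_mbps / backend_cap_mbps:.0%})")
  # backend demand ~650 MBps of a 1200 MBps cap (54%)

Re-run the same arithmetic with data_reduction = 1.0 and demand jumps to 1100 MBps, most of the cap before the workload even grows; that is the "non-reducible data hits limits sooner" point made below, in numbers.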
Workload characteristics: iSCSI sessions and data reducibility

Block size and session count

Increasing the iSCSI session count between initiator VMs and the array does not guarantee better performance; with large blocks, too many sessions can increase latency without improving throughput, especially when multiple initiators converge on the same controller. Establish at least one session per controller for resiliency, then tune based on measured throughput and latency.

Data reduction helps extend backend headroom

When data is reducible, PSC Dedicated writes fewer physical bytes to the backend managed disks. That directly reduces backend write MBps for the same logical workload, delaying the point where Azure's VM backend caps are reached. The effect is most pronounced for write-heavy and mixed workloads. Conversely, non-reducible data translates almost 1:1 into backend traffic, hitting limits sooner and raising latency at high load.

Conclusion

Predictable performance in the cloud is about aligning architecture and operations with the platform's limits. For PSC Dedicated on Azure, that means selecting the right controller and initiator VM SKUs, co-locating resources to minimise network distance, enabling accelerated networking, and tuning workloads (block size, sessions, protocol) to the caps that actually matter. Inline data reduction and the NVMe backend extend headroom meaningfully (particularly for mixed workloads), while Purity's design keeps the experience consistent. Hopefully, this post was able to shed light on at least some of the performance factors of PSC Dedicated on Azure.
Veeam v13 Integration and Plugin

Hi Everyone,

We're new Pure customers this year and have two FlashArray C models: one for virtual infrastructure, and the other will be used solely as a storage repository to back up those virtual machines using Veeam Backup and Replication. Our plan is to move away from the current Windows-based Veeam v12 in favor of Veeam v13 hardened Linux appliances. We're in the design phase now, but we have Veeam v13 working great in a separate environment with VMware and HPE Nimble.

Our question is around Pure Storage and Veeam v13 integration and plugin support. Veeam's product team mentions there are native integrations in v12, but that storage vendors should be "adopting USAPI" going forward. Is this something that Pure is working on, or maybe has already completed with Veeam Backup and Replication v13?
Proxmox VE

Hi all,

Hope you're all having a great day. We have several customers going down the Proxmox VE road. One of my colleagues was put onto https://github.com/kolesa-team/pve-purestorage-plugin as a possible solution, as using Pure behind Proxmox with the native Proxmox release is not a particularly Pure-like experience. Could someone from Pure comment on the plugin's validity/supportability?
Upcoming Events
- Thursday, Feb 05, 2026, 09:00 AM PST
- Tuesday, Feb 10, 2026, 11:00 AM PST
- Thursday, Feb 12, 2026, 10:00 AM PST
Featured Places
Introductions
Welcome! Please introduce yourself to the Pure Storage Community.
Pure User Groups
Explore groups and meetups near you.
/CODE
The Pure /Code community is where collaboration thrives and everyone, from beginners taking their first steps to experts honing their craft, comes together to learn, share, and grow. In this inclusive space, you'll find support, inspiration, and opportunities to elevate your automation, scripting, and coding skills, no matter your starting point or career position. The goal is to break barriers, solve challenges, and most of all, learn from each other.
Career Growth
A forum to discuss career growth and skill development for technology professionals.
Featured Content
February 5 | Register now! Cyberattacks are faster and smarter—recovery must be too. Join Pure Storage and Rubrik to see the industry’s first integrated cyber-recovery solution that delivers full...
This blog post argues that Context Engineering is the critical new discipline for building autonomous, goal-driven AI agents. Since Large Language Models (LLMs) are stateless and forget information o...
This article originally appeared on Medium.com and is republished with permission from the author.
Cloud-native applications must often co-exist with legacy applications. Those legacy applications ...