# Pure Fusion File Presets & Workloads on FB 4.6.7 and FA 6.10.4: Less Click‑Ops, More Policy
If you’ve ever built the “standard” NFS/SMB layout for an app for the fifth time in a week and thought “this should be a function, not a job,” this release is for you. With FlashBlade version 4.6.7 and FlashArray version 6.10.4, Pure Fusion finally gives file the same treatment block has had for a while: presets and workloads for file services across FlashBlade and FlashArray, a much sharper Presets & Workloads UI, plus smarter placement and resource naming controls tuned for real environments, not demos.

This post is written for people who already know what NFS export policies and snapshot rules are and are mostly annoyed they still have to configure them by hand.

## Problem Statement: Your “Standard” File Config is a Lie

The current pattern in most environments: every app team needs “just a few file shares.” You (or your scripts) manually:

- Pick an array, and hope it’s the right one.
- Create file systems and exports.
- Glue on snapshot/replication policies.
- Try to respect naming conventions and tagging.

Six months later:

- The same logical workload looks different on every array.
- Audit and compliance people open tickets.
- Nobody remembers what “fs01-old2-bak” was supposed to be.

Fusion File Presets & Workloads exist to eradicate that pattern:

- **Presets** = declarative templates describing how to provision a workload (block or file).
- **Workloads** = concrete instances of those presets deployed somewhere in a Fusion fleet (FA, FB, or both).

In nerd-speak, think: Helm chart for storage (preset) vs. Helm release (workload).

## Quick Mental Model: What Presets & Workloads Actually Are

A File Preset can include, for example:

- Number of file systems (FlashBlade or FlashArray File).
- Directory layout and export policies (for NFS/SMB).
- Snapshot policies and async replication (through protection groups, or pgroups).
- Per‑workload tags (these help when finding a needle in a haystack, and more).
- Quota and snapshot parameters.

A Workload is a Fusion object that:

- References the preset in its entirety.
- Tracks where the underlying Purity objects live.
- Surfaces health, capacity, and placement at the fleet level.

In code‑brain terms:

```yaml
preset: app-file-gold
parameters:
  env: prod
  fs_count: 4
  fs_size: 10TB
  qos_iops_max: 50000
placement:
  strategy: recommended  # Pure1 or dark-site heuristic depending on connectivity
  constraints:
    platform: flashblade
```

Fusion resolves that into resources and objects on one or more arrays: purefs objects, exports, pgroups, QoS, tags, and consistently named resources.

So, what’s new, you ask?

## What’s New on FlashBlade in Purity//FB 4.6.7

### 1. Fusion File Presets & Workloads for FlashBlade

Purity//FB 4.6.7 is the release where FlashBlade joins the Fusion presets/workloads party for file. Key points:

- You can now define Fusion file presets that describe:
  - Number/size of file systems.
  - Export policies (NFS/SMB).
  - Snapshot/replication policies.
  - Tags and other metadata.
- You then create Fusion file workloads from those presets, deployed onto any compatible FlashBlade or FlashArray in the fleet, depending on your constraints and placement recommendations.

That means you stop hand‑crafting per‑array configs and start stamping out idempotent policies.

### 2. New Presets & Workloads GUI on FlashBlade

Purity//FB 4.6.7 brings proper Fusion GUI surfaces to FB:

- Storage → Presets
  - Create/edit/delete Fusion presets (block + file).
  - Upload/download preset JSON directly from the GUI.
- Storage → Workloads
  - Instantiate workloads from presets.
  - See placement, status, and underlying resources across the fleet.

Why this is a real improvement, not just new tabs:

- Single mental model across FA and FB: the same abstractions (preset → workload → Purity objects) and the same UX for block and file.
- Guard‑railed customization: the GUI only exposes parameters marked as configurable in the preset (with limits), so you can safely delegate provisioning to less storage‑obsessed humans without getting random snapshot policies.
### 3. JSON Preset Upload/Download (CLI + GUI)

This release also adds full round‑trip JSON support for presets, including in the GUI. On the CLI side:

```shell
# Export an existing preset definition as JSON
purepreset workload download app-file-gold > app-file-gold.json

# Edit the JSON, save it to a file share, put it under version control,
# commit to git, run it through CI, etc.

# Import the preset into another fleet or array
purepreset workload upload --context fleet-prod app-file-gold < app-file-gold.json
```

Effects:

- Presets become versionable artifacts (Git, code review, promotion).
- You can maintain a central preset catalog and promote from dev → QA → prod like any other infra‑as‑code.
- Sharing configs stops being “here’s a screenshot of my settings.”

### 4. Fusion Dark Site File Workload Placement + Get Recommendations

Many folks run fleets without outbound connectivity, for various reasons. Until now, that meant “no fancy AI placement recommendations” for those sites. Fusion Dark Site File Workload Placement changes that. When Pure1 isn’t reachable, Fusion can still compute placement recommendations for file workloads across the fleet using local telemetry:

- Capacity utilization.
- Performance headroom.
- QoS ceilings/commitments (where applicable).

In the GUI, when you’re provisioning a file workload from a preset, you can hit “Get Recommendations”:

- Fusion evaluates candidate arrays within the fleet.
- It returns a ranked list of suitable targets, even in an air‑gapped environment.

So, in dark sites you still get:

- Data‑driven “put it here, not there” hints.
- Consistency with what you’re used to on the block side when Pure1 is available, but without the cloud dependency.

## What’s New on FlashArray in Purity//FA 6.10.4

### 1. Fusion File Presets & Workloads for FlashArray File

Version 6.10.4 extends Fusion presets and workloads to FlashArray File Services. You can now define file presets on FA that capture:

- File system count/size.
- NFS/SMB export behavior.
- QoS caps at the workload/volume group level.
- Snapshot/async replication policies via pgroups.
- Tags and metadata.

You can then provision file workloads on FlashArray using those presets:

- From any Fusion‑enabled FA in the fleet.
- With the same UX and API that you use for block workloads.

This effectively normalizes block and file in Fusion: a fleet‑level view, the same provisioning primitives (preset → workload), and the same policy and naming controls.

### 2. Fusion Pure1‑WLP Replication Placement (Block Workloads)

Also introduced is Fusion Pure1 Workload Replication Placement for block workloads. When you define replication in a block preset:

- Fusion can ask Pure1 Workload Planner for a placement plan:
  - Primary/replica arrays are chosen using capacity and performance projections.
  - It avoids packing everything onto that one “lucky” array.
- Workload provisioning then uses this plan automatically. You can override it, but the default is data‑backed rather than “whatever’s top of the list.”

It’s the same idea as dark‑site file placement, just with more telemetry and projection thanks to Pure1.

## Resource Naming Controls: Have It Your Way

If you care about naming standards, compliance, and audit (or just hate chaos and stress), this one matters. Fusion Presets Resource Naming Controls let you define deterministic naming patterns for all the objects a preset creates.

Allowed variables might include:

- workload_name
- tenant / app / env
- platform (flasharray-x, flashblade-s, etc.)
- datacenter / site code
- Sequenced IDs

You can then define patterns like:

```yaml
fs_name_pattern: "{tenant}-{env}-{workload_name}-fs{seq}"
export_name_pattern: "{tenant}_{env}_{app}_exp{seq}"
pgroup_name_pattern: "pg-{app}-{env}-{region}"
```

The result: every file system, export, pgroup, and volume created by that preset follows the pattern and satisfies internal CS/IT naming policies for compliance and audits. You can still parameterize inputs (e.g., tenant=finops, env=prod), but the structure is enforced.
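To make the determinism concrete, here is a minimal sketch of how a pattern like `{tenant}-{env}-{workload_name}-fs{seq}` could expand into object names. This is plain Python for illustration only, not a Pure API; the function and variable names are invented:

```python
# Illustrative sketch only: Fusion's real naming engine lives inside Purity.
# This just models "deterministic pattern + parameters -> object names".

def expand_names(pattern: str, params: dict, count: int) -> list:
    """Expand a naming pattern once per sequenced object (seq starts at 1)."""
    return [pattern.format(**params, seq=n) for n in range(1, count + 1)]

params = {"tenant": "finops", "env": "prod", "workload_name": "payments"}
fs_names = expand_names("{tenant}-{env}-{workload_name}-fs{seq}", params, count=3)
# Every name is derived, never hand-typed, so the convention cannot drift.
# fs_names -> ['finops-prod-payments-fs1', 'finops-prod-payments-fs2',
#              'finops-prod-payments-fs3']
```

The point of the sketch: once the pattern lives in the preset, the only free inputs are the parameters, and the structure of every generated name is enforced by construction.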
No more hunting down “test2-final-old” in front of auditors and pretending that was intentional. Not that I’m speaking from experience, though. :-)

## The Updated Presets & Workloads GUI: Simple is Better

Across Purity//FB 4.6.7 and Purity//FA 6.10.4, Fusion’s UI for presets and workloads is now a graphical, wizard‑style interface that is easier to follow, with more help along the way.

### Single Pane, Shared Semantics

- Storage → Presets
  - Block + file presets (FA + FB) in one place.
  - JSON import/export.
- Storage → Workloads
  - All workloads, all arrays.
  - Filter by type, platform, tag, or preset.

Benefits for technical users:

- Quick answers to “What’s our standard for <workload X>?” and “Where did we deploy it, and how many variants exist?”
- An easy diff between “what the preset says” and “what’s actually deployed.”

### Guard‑Rails Through Parameterization

Preset authors (yes, we’re looking at you) decide:

- Which fields are fixed (prescriptive) vs. configurable.
- The bounds on configurable fields (e.g., fs_size between 1–50 TB).

In the GUI, that becomes:

- A minimal set of fields for provisioners to fill in.
- Validation baked into the wizard.
- Workloads that align with standards without needing a 10‑page runbook.

### Integrated Placement and Naming

When you create a workload via the new GUI, you get:

- “Get Recommendations” for placement:
  - Pure1‑backed in connected sites (block).
  - Dark‑site logic for file workloads on FB when offline.
- Naming patterns from the resource naming controls baked in, not bolted on afterward.

So you’re not manually choosing which array is “least bad” today, or how to hack the name so it still passes your log‑parsing scripts.

## CLI / API: What This Looks Like in Practice

If you prefer the CLI over the GUI, Fusion doesn’t punish you.
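The guard‑rail idea described above (bounded, defaulted parameters that keep provisioners inside the preset author's rails) is simple enough to sketch. The toy validator below is plain Python with hypothetical field names, not a Pure API:

```python
# Hypothetical sketch of preset guard-rails: the preset author fixes the bounds,
# and the provisioning wizard validates user inputs against them.

BOUNDS = {
    "fs_count": {"min": 1, "max": 16, "default": 4},
    "fs_size_tib": {"min": 1, "max": 50, "default": 10},
    "env": {"allowed": ["dev", "test", "prod"], "default": "dev"},
}

def resolve(inputs: dict) -> dict:
    """Fill defaults and reject out-of-bounds values, like the GUI wizard does."""
    resolved = {}
    for name, rule in BOUNDS.items():
        value = inputs.get(name, rule.get("default"))
        if "min" in rule and not (rule["min"] <= value <= rule["max"]):
            raise ValueError(f"{name}={value} outside {rule['min']}..{rule['max']}")
        if "allowed" in rule and value not in rule["allowed"]:
            raise ValueError(f"{name}={value} not one of {rule['allowed']}")
        resolved[name] = value
    return resolved

settings = resolve({"fs_count": 8, "env": "prod"})  # fs_size_tib falls back to 10
```

Anything a provisioner doesn't set falls back to the preset author's default, and anything out of bounds fails before it ever reaches an array, which is exactly why delegation becomes safe.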
### Example: Defining and Using a File Preset

Author a preset JSON (simplified example):

```json
{
  "name": "app-file-gold",
  "type": "file",
  "parameters": {
    "fs_count": { "min": 1, "max": 16, "default": 4 },
    "fs_size_tib": { "min": 1, "max": 50, "default": 10 },
    "tenant": { "required": true },
    "env": { "allowed": ["dev", "test", "prod"], "default": "dev" }
  },
  "naming": {
    "filesystem_pattern": "{tenant}-{env}-{workload_name}-fs{seq}"
  },
  "protection": {
    "snapshot_policy": "hourly-24h-daily-30d",
    "replication_targets": ["dr-fb-01"]
  }
}
```

Upload the preset into a fleet:

```shell
purepreset workload upload --context fleet-core app-file-gold < app-file-gold.json
```

Create a workload and let Fusion pick the array:

```shell
pureworkload create \
  --context fleet-core \
  --preset app-file-gold \
  --name payments-file-prod \
  --parameter tenant=payments \
  --parameter env=prod \
  --parameter fs_count=8 \
  --parameter fs_size_tib=20
```

Inspect placement and underlying resources:

```shell
pureworkload list --context fleet-core --name payments-file-prod --verbose
```

Behind the scenes:

- Fusion picks suitable arrays using Pure1 Workload Placement (for connected sites) or dark‑site logic.
- purefs objects, exports, and pgroups are created with names derived from the preset’s naming rules.

### Example: Binding Existing Commands to Workloads

The new version also extends several CLI commands with workload awareness:

```shell
purefs list --workload payments-file-prod
purefs setattr --workload payments-file-prod ...
purefs create --workload payments-file-prod --workload-configuration app-file-gold
```

This is handy when you need to:

- Troubleshoot or resize all file systems in a given workload.
- Script around logical workloads instead of individual file systems.

## Why This Matters for You (Not Just for Slides)

The net impact of FB 4.6.7 + FA 6.10.4 from an admin’s perspective:

- File is now truly first‑class in Fusion, across both FlashArray and FlashBlade.
- You can encode “how we do storage here” as code:
  - Presets (JSON + GUI).
  - Parameterization and naming rules.
  - Placement and protection choices.
- Dark sites get sane placement via “Get Recommendations” for file workloads, instead of best‑guess manual picks.
- Resource naming is finally policy‑driven, not left to whoever is provisioning at 2 AM.
- GUI, CLI, and API are aligned around the same abstractions, so you can prototype in the UI, commit JSON to Git, and automate via CLI/API without re‑learning concepts.

## Next Steps

If you want to kick the tires:

1. Upgrade FlashBlade to Purity//FB 4.6.7 and FlashArray to Purity//FA 6.10.4.
2. Pick one or two high‑value patterns (e.g., “DB file services,” “analytics scratch,” “home directories”).
3. Implement them as Fusion presets with parameters, placement hints, and naming rules.
4. Wire them into your existing tooling: use the GUI for ad‑hoc work, and wrap purepreset / pureworkload in your pipelines for everything else.

You already know how to design good storage. These releases just make it a lot harder for your environment to drift away from that design the moment humans touch it.

# Feature Request: Certificate Automation with ACME
Hi Pure people,

How about reducing my workload a little by supporting the ACME protocol for certificate renewal? Certificate lifespans are just getting shorter, and while I have a horrid expect script to renew certificates via ssh to FlashArray, it would be much simpler if Purity ran an ACME client itself.

PS: We use the DNS challenge method to avoid having to run web services where they aren't needed.

# 4 steps to enable Pure Fusion
Several teams like yours have recently switched on Pure Fusion, saving 39.5 hours of staff time per day and boosting application response times. It’s been a game changer for enterprise data management. Read more on how the Mississippi Department of Revenue deployed the Pure Storage® platform for faster, more versatile storage to boost application performance, protect data, and support hypervisor mobility.

Pure Fusion unifies enterprise data and automates workflows with simplified storage management, workload automation, and AI-driven workload placement. With the power of an Intelligent Control Plane, Fusion automates storage management across cloud, edge, or core, and any protocol: file, object, or block. Anchoring the Enterprise Data Cloud, it unifies data services and integrates with existing infrastructure, turning complex, manual tasks into streamlined, policy-driven operations. Fusion enables end-to-end automation, freeing you to accelerate innovation while reducing operational risk and overhead.

Here are the 4 steps to enable Pure Fusion:

Click here for the complete Pure Fusion Quick Start Guide. Using Secure LDAP (LDAPS) requires additional configuration with certificates; please reference the Quick Start Guide for more information. For compatibility reference, please see the Compatibility Matrix.

# Ask us everything about Purity Upgrades!
💬 Have more questions for our experts around Purity upgrades after today's live "Ask Us Everything"? Feel free to drop them below and our experts will answer! dpoorman, skennedy, rquast, jhoughes: tag, you're it!

You can also check out these upgrade resources:

- Bulk self-service upgrades demo video
- Upgrade your own FlashArray with Pure1 blog
- Fleet-wide self-service upgrades brief

# Feature Request for Puritan Heroes — Manage snapshot schedules directly in Pure1
An opportunity to shine.

## Summary / problem

Today, snapshot scheduling and edits are performed per-array (e.g., via Purity protection policies/groups). At fleet scale this is manual, slow for change control, and hard to audit centrally. We’d like first-class, Pure1-native capabilities to create, modify, and bulk-apply snapshot schedules, without logging into each array.

## Impact / why this matters

- Fleet operations: one place to set and adjust RPO/retention across many FlashArrays/FlashBlades.
- Change management & compliance: audited, role-scoped changes with an approver workflow.
- Risk & capacity: pre-change impact analysis (capacity, performance) using Pure1 telemetry.
- Consistency: policy templates and guardrails reduce configuration drift.

## Scope (initial ask)

Pure1 UI:

- A “Snapshot Policies” workspace to create/edit/delete schedules and retentions (minute/hour/day/week/month), including SafeMode/immutability options where supported.
- Bulk apply/override to multiple arrays, volume groups, and file systems/shares.
- What-if capacity forecast prior to save (based on Pure1 analytics).
- Change review & approvals (optional), with rationale and ticket ID fields.
- Maintenance/blackout windows and exceptions.
- Full audit trail (who, what changed, before/after, when, target objects).
- RBAC: granular rights (view vs. edit vs. approve), with per-array or per-group scoping.

Pure1 Public API:

- New endpoints to programmatically create/read/update/delete snapshot policies, bind/unbind them to objects, run an on-demand snapshot from a policy, and pull audit events, so we can integrate with CI/CD and ITSM. (We rely on the Pure1 API today for telemetry; adding write ops for policies would unlock safe automation.)

Edge / connectivity:

- Use Pure1 Edge Service for secure, two-way execution on arrays. Include graceful handling for dark/offline assets (queue and notify).

## Acceptance criteria (examples)

- From Pure1, I can create a policy “gold-daily-35d” and attach it to 100+ arrays / 1,000+ objects in a single operation.
- A non-admin can propose a policy change; an approver must OK it before it takes effect.
- The audit page shows a human-readable diff (before/after crontab-style schedule and retention) plus API payloads.
- A capacity impact estimator shows expected snapshot object growth over 30/90 days based on historical change rates.
- API: POST /snapshot-policies, PATCH /snapshot-policies/{id}, POST /snapshot-policy-attachments, GET /audit-events?type=snapshot_policy.

## Nice-to-haves

- Prebuilt policy templates (e.g., 15-min RPO for 24h + daily 35d + monthly 12m).
- Guardrails (“warn/deny if estimated growth > X TB or RPO < Y min”).
- ServiceNow/Jira webhook on approval/change.

## Environment

- Mixed FA/FB fleet; Pure1 connected, Edge Service available.
- SafeMode enabled on arrays requiring immutability.

Thanks in advance,
Garry Ohanian

# Getting Started with Pure Storage Fusion: A Quick Guide to Unified Fleet Management
One of the most powerful updates in the Pure Storage ecosystem is the ability to federate arrays into a unified fleet with Fusion. Whether you're scaling out infrastructure or simplifying operations across data centers, Fusion makes multi-array management seamless, and the setup process is refreshingly simple. Here’s a quick walkthrough to get your fleet up and running:

🔹 **Step 1: Create or Join a Fleet**

From the Fleet Management tab in the Purity UI, you can either create a new fleet or join an existing one. Creating a fleet? Just assign a memorable name and generate a one-time fleet key. This key acts like a secure handshake, ensuring that only authorized arrays can join.

🔹 **Step 2: Add Arrays to the Fleet**

On each array you want to bring into the fold, select Join Fleet, enter the fleet name, and paste in the fleet key. Once verified, the array becomes part of your managed fleet.

🔹 **Step 3: Manage as One**

With federation complete, you now have a single, unified control plane. Any array in the fleet can serve as your management entry point: configure, monitor, and operate across the entire environment from one location.

This capability is a big leap forward for simplifying scale and operations, especially for hybrid cloud or multi-site environments. If you're testing it out, I’d love to hear how it's working for you or what use cases you're solving.

# FREE BEER FOR ALL!!! Now That I Have Your Attention, Let's Talk About Purity Updates.
WAIT WAIT WAIT - don't leave yet because of my free beer tomfoolery... hear me out.

Listen, we get it. Storage OS updates are historically the LAST thing you ever want to consider for your already impossibly thin maintenance windows. And we all know NOBODY ever grew up saying, "When I get older, I want to manage enterprise storage for its rock and roll lifestyle." 😀

But hear me out: any past pain, suffering, or heavy drinking you may have taken on during previous OS updates with other legacy vendors has been minimized or even flat-out eliminated by how we handle updating Purity for FlashArray and FlashBlade. We offer two tracks for making them happen: work directly with Support for a white-glove update experience where they do all the work remotely, or complete them via the Self-Service Upgrade (SSU) feature built into Pure1.

We encourage regular Purity updates for two reasons:

1. Performance, stability, and security improvements... obviously.
2. New feature adoption. Want Fusion 2.0? Want the ability to deliver NFS/SMB shares on your FlashArray? These are bundled into your Purity updates and require no additional licensing costs to adopt if you want them. Think of them as the over-the-air feature updates that are all the rage for EVs...

For now, take a quick look at the Purity version you are running. If you haven't updated it in a year or two (which many of you probably haven't), you're missing out on being able to squeeze extra value out of your storage.

I will be posting some supporting demos and other materials to help you visualize the process in the coming month or so. I would LOVE any feedback from the community, good or bad, on current or past experiences with our update process; through it all we can get more boats to rise with the tide!

Stay tuned!

DP

# Announcing the General Availability of Purity//FA 6.8.6
We are happy to announce the general availability of Purity//FA 6.8.6, the seventh release in the 6.8 feature release line. It includes the SMB Continuous Availability feature, which guarantees zero downtime during controller disruptions and upgrades, ensuring uninterrupted access to shared files. Some of the improvements to Purity contained in this release include:

- **SMB Continuous Availability** preserves file handles to ensure uninterrupted SMB access during controller failovers and upgrades.
- **Target Pods for Pgroup Replication** allows customers to target a specific pod for protection groups, avoiding the clutter of snapshots replicating to an array’s root pod.
- **CBS for AWS Write Optimization for Amazon S3** improves how data is committed and managed on Amazon S3 and can significantly drop AWS infrastructure operating costs, providing customers with write- or replication-heavy workloads with cost reductions of up to 50%.
- **Allow NVMe Read-Only Volumes for ActiveDR** eliminates the restriction on promotion/demotion of pods containing NVMe-connected volumes, saving customers from unexpected command failures and time-consuming workarounds.

For more detailed information about features, bug fixes, and security updates included in each release, see the release notes.

## UPGRADE RECOMMENDATIONS AND EOL SCHEDULE

We recommend that customers with compatible hardware who are looking for the fastest access to the latest features upgrade to this new feature release. When possible, customers should make use of Self-Service Upgrades (SSU) to ease the process of planning and executing non-disruptive Purity upgrades for their fleet. The 6.8 release line is planned for feature development through May 2025, with additional fixes coming in June 2025 to end the release line.

## HARDWARE SUPPORT

This release is supported on the following FlashArray platforms: Cloud Block Store for Azure and AWS, FA//X (R3, R4), FA//C (R3, R4), FA//XL, FA//E, and FA//RC (starting with 6.8.5).
Note: DFS firmware version 2.2.3 is recommended with this release.

## ACKNOWLEDGEMENTS

We would like to thank everyone within the engineering, support, technical program management, product management, product marketing, finance, and technical product specialist teams who contributed to this release.

## LINKS AND REFERENCES

- Purity//FA 6.8 Release Notes
- Self-Service Upgrades
- Purity//FA Release and End-of-Life Schedule
- FlashArray Hardware and End-of-Support
- DirectFlash Shelf Software Compatibility Matrix
- FlashArray Capacity and Feature Limits

# Try Fusion Presets and Workloads in Pure Test Drive
Pure Test Drive now has a hands-on lab designed to let you demo the Presets and Workloads capability included in Purity//FA 6.8.3. A Preset is a reusable template that describes how to provision and configure the Purity objects that make up a Workload; Workloads are container objects that hold references to other Purity objects, enabling them to be managed and monitored as a set.

Watch a recorded walkthrough, or roll up your sleeves and try it yourself by filling out this form or asking your local Pure SE for a voucher for the Pure Fusion Presets & Workloads lab.
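If the template-vs-instance relationship above is new to you, here is a rough mental-model sketch in Python. The dataclasses and the `deploy` function are purely illustrative (none of this is Pure's actual API): a Preset is the reusable template, and a Workload is an instance that holds references to the objects created from it.

```python
# Conceptual sketch of the Preset/Workload relationship described above.
# Pure's actual objects live inside Purity; these dataclasses are illustrative.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Preset:
    """Reusable template: how to provision and configure Purity objects."""
    name: str
    fs_count: int
    fs_pattern: str

@dataclass
class Workload:
    """Container object holding references to the objects created for it."""
    name: str
    preset: Preset
    resources: list = field(default_factory=list)

def deploy(preset: Preset, workload_name: str, tenant: str, env: str) -> Workload:
    """Instantiate a preset: generate object names and record them on the workload."""
    wl = Workload(name=workload_name, preset=preset)
    for seq in range(1, preset.fs_count + 1):
        wl.resources.append(preset.fs_pattern.format(
            tenant=tenant, env=env, workload_name=workload_name, seq=seq))
    return wl

gold = Preset("app-file-gold", fs_count=2,
              fs_pattern="{tenant}-{env}-{workload_name}-fs{seq}")
wl = deploy(gold, "payments-file-prod", tenant="payments", env="prod")
# wl.resources now lists the generated file system names, so the whole
# workload can be managed and monitored as a set via those references.
```

One preset can be deployed many times; each resulting workload remembers which preset it came from and which objects it owns, which is what makes set-level management possible.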