Recent Discussions
Ask Us Everything about Pure1® Self-service!
💬 Get ready for our first May 2026 edition of Ask Us Everything, this Friday, May 1st at 9 AM Pacific. This month is all about Pure1® Self-service. If you have a burning question, feel free to ask it here early and we'll add it to the list to answer on Friday. Or if we can't get it answered live, our Everpure experts can follow up here. jclark, mbradford, plus dpoorman are the moderators and experts answering your questions during the conversation as well as here on the community. See you this Friday! (Oh, and if you haven't registered yet, there's still time!)

Ask Us Everything Recap: Rethinking Storage with the Intelligent Control Plane
The latest Ask Us Everything session focused on a topic that’s quickly becoming central to Everpure’s strategy: the intelligent control plane. And based on the questions from the community, it’s clear that many teams are starting to think beyond individual arrays and toward managing storage as a unified platform. Here are the key takeaways—driven by the questions attendees asked and the answers from Everpure experts Don Poorman, Zane Allyn, and Mike Nelson.

“Do I need to rebuild my automation to use Everpure Fusion?”

Most teams already have automation in place, whether it’s Terraform, Ansible, or years of scripts. The good news: you don’t have to start over. Everpure Fusion is API-driven, so existing workflows can stay intact. In practice, you’re simply shifting from targeting individual arrays to targeting the fleet as a whole. That often means adding a parameter, not rewriting everything. Everpure Fusion picks up where any existing automation gets bogged down, so tasks get simpler as you scale, not more complex. The takeaway: Everpure Fusion helps you scale your existing automation—it simplifies, standardizes, and extends it across your data estate.

“What does API-first really mean here?”

At Everpure, API-first isn’t just a label. The APIs are built before the GUI, which means everything you can do in the GUI is already available programmatically. For practitioners, that translates to flexibility. Whether you’re scripting, using infrastructure-as-code, or experimenting with AI-driven workflows, you’re not waiting for features to be exposed—you already have access. It’s a subtle difference from legacy storage, where automation often lags behind the interface.

“How do I approach automation without losing control?”

Attendees raised a common concern: automation can feel risky. The advice was straightforward—start with outcomes, not everything at once. Automate a single workflow, apply guardrails, and expand gradually. Automation here isn’t about removing control. It’s about:
- Reducing repetitive work
- Minimizing human error
- Freeing up time for higher-value tasks

For most admins juggling multiple systems, that shift is practical—not theoretical.

“What does this look like in real workflows?”

One of the most relatable examples discussed was a ServiceNow-style request flow. Instead of manually provisioning storage across multiple systems, a user submits a request describing what they need—performance, protection, and resiliency. From there, Everpure Fusion and Pure1 handle the process automatically. The result is faster, more consistent delivery with fewer manual steps. More importantly, it abstracts the complexity away from both the admin and the requester. That’s a major difference from legacy environments, where admins must manage each step across each array.

“What do I actually need to install?”

This answer surprised some people. Everpure Fusion isn’t a separate product. It’s built into Purity. Once you’re on the right version (Purity//FA 6.8.1 or later, Purity//FB 4.5.5 or later), getting started is simple:
1. Create a fleet
2. Add arrays

That’s it. No additional infrastructure, no separate control plane to deploy. This lowers the barrier significantly and makes it easy to start small and build as your needs require.

“How does this scale?”

As expected, scale came up quickly. Instead of managing arrays individually, Everpure Fusion introduces fleet-level management. New capabilities like topology groups allow further organization within that fleet—by region, workload, or compliance requirements. This is where Everpure’s approach really diverges from legacy storage. You’re no longer limited to thinking in terms of hardware. You can organize storage in ways that reflect how your business actually operates.

“What happens if something fails?”

Everpure Fusion is distributed across the arrays in the fleet. There’s no single point of failure. If one system goes offline, the rest of the fleet continues operating normally. That design keeps management resilient while still enabling centralized control.

Final thoughts

The biggest shift highlighted in this session is simple: stop managing arrays; start managing outcomes. With the intelligent control plane—powered by Everpure Fusion and Pure1—Everpure enables:
- Policy-driven automation
- Fleet-scale visibility
- Simpler, faster operations

For storage teams, that means less time on manual tasks and more time focused on how data supports the business. And based on the conversation, that’s exactly where our customers want to go. Find out more about the Everpure Intelligent Control Plane here. Check out this and all our other Ask Us Everything sessions. And keep the conversation going by jumping into the Everpure Community.

Ask Us Everything About Intelligent Control with Everpure Fusion + Pure1
💬 Get ready for our April 2026 edition of Ask Us Everything, this Friday, April 17th at 9 AM Pacific. This month is all about Intelligent Control with Everpure Fusion + Pure1. If you have a burning question, feel free to ask it here early and we'll add it to the list to answer on Friday. Or if we can't get it answered live, our Everpure experts can follow up here. Allynz, mikenelson-pure, plus dpoorman are the moderators and experts answering your questions during the conversation as well as here on the community. See you this Friday! (Oh, and if you haven't registered yet, there's still time!) Or, check out these self-serve resources:
- Presets and Workloads in Everpure Fusion video
- Pure Fusion Presets and Workloads: Enabling Automation Innovation for Storage Workloads
- Unlock the Future of Data Management with Pure Fusion File Presets

Pure Fusion File Presets & Workloads on FB 4.6.7 and FA 6.10.4: Less Click‑Ops, More Policy
If you’ve ever built the “standard” NFS/SMB layout for an app for the fifth time in a week and thought “this should be a function, not a job”, this release is for you. With FlashBlade version 4.6.7 and FlashArray version 6.10.4, Pure Fusion finally gives file the same treatment block has had for a while: presets and workloads for file services across FlashBlade and FlashArray, a much sharper Presets & Workloads UI, plus smarter placement and resource naming controls tuned for real environments—not demos. This post is written for people who already know what NFS export policies and snapshot rules are and are mostly annoyed they still have to configure them by hand.

Problem Statement: Your “Standard” File Config is a Lie

Current pattern in most environments: every app team needs “just a few file shares”. You (or your scripts) manually:
- Pick an array, hope it’s the right one.
- Create file systems and exports.
- Glue on snapshot/replication policies.
- Try to respect naming conventions and tagging.

Six months later:
- The same logical workload looks different on every array.
- Audit and compliance people open tickets.
- Nobody remembers what “fs01-old2-bak” was supposed to be.

Fusion File Presets & Workloads exist to eradicate that pattern:
- Presets = declarative templates describing how to provision a workload (block or file).
- Workloads = concrete instances of those presets deployed somewhere in a Fusion fleet (FA, FB, or both).

In nerd-speak, think: Helm chart for storage (preset) vs Helm release (workload).

Quick Mental Model: What Presets & Workloads Actually Are

A File Preset can include, for example:
- Number of file systems (FlashBlade or FlashArray File).
- Directory layout and export policies (for NFS/SMB).
- Snapshot policies and async replication (through protection groups, or pgroups).
- Per‑workload tags (helps in finding a needle in a haystack, and more).
- Quota and snapshot parameters.

A Workload is a Fusion object that:
- References the preset in its entirety.
- Tracks where the underlying Purity objects live.
- Surfaces health, capacity, and placement at the fleet level.

In code‑brain terms:

  preset: app-file-gold
  parameters:
    env: prod
    fs_count: 4
    fs_size: 10TB
    qos_iops_max: 50000
  placement:
    strategy: recommended   # Pure1 or dark‑site heuristic depending on connectivity
    constraints:
      platform: flashblade

Fusion resolves that into resources and objects on one or more arrays: purefs objects, exports, pgroups, QoS, tags, and consistently named resources. So, what’s new, you ask?

What’s New on FlashBlade in Purity//FB 4.6.7

1. Fusion File Presets & Workloads for FlashBlade

Purity//FB 4.6.7 is the release where FlashBlade joins the Fusion presets/workloads party for file. Key points:
- You can now define Fusion file presets that describe:
  - Number/size of file systems.
  - Export policies (NFS/SMB).
  - Snapshot/replication policies.
  - Tags and other metadata.
- You then create Fusion file workloads from those presets, deployed onto any compatible FlashBlade or FlashArray in the fleet, depending on your constraints and placement recommendations.

That means you stop hand‑crafting per‑array configs and start stamping out idempotent policies.

2. New Presets & Workloads GUI on FlashBlade

Purity//FB 4.6.7 brings proper Fusion GUI surfaces to FB:
- Storage → Presets: create/edit/delete Fusion presets (block + file); upload/download preset JSON directly from the GUI.
- Storage → Workloads: instantiate workloads from presets; see placement, status, and underlying resources across the fleet.

Why this is a real improvement, not just new tabs:
- Single mental model across FA and FB: same abstractions (preset → workload → Purity objects), same UX for block and file.
- Guard‑railed customization: the GUI only exposes parameters marked as configurable in the preset (with limits), so you can safely delegate provisioning to less storage‑obsessed humans without getting random snapshot policies.

3. JSON Preset Upload/Download (CLI + GUI)

This release also adds full round‑trip JSON support for presets, including in the GUI. On the CLI side:

  # Export an existing preset definition as JSON
  purepreset workload download app-file-gold > app-file-gold.json

  # Edit the JSON, save it to a file share, version control it, commit to git, run through CI, etc.

  # Import the preset into another fleet or array
  purepreset workload upload --context fleet-prod app-file-gold < app-file-gold.json

Effects:
- Presets become versionable artifacts (Git, code review, promotion).
- You can maintain a central preset catalog and promote from dev → QA → prod like any other infra‑as‑code.
- Sharing configs stops being “here’s a screenshot of my settings.”

4. Fusion Dark Site File Workload Placement + Get Recommendations

Many folks run fleets without outbound connectivity, for various reasons. Until now, that meant “no fancy AI placement recommendations” for those sites. Fusion Dark Site File Workload Placement changes that: when Pure1 isn’t reachable, Fusion can still compute placement recommendations for file workloads across the fleet using local telemetry:
- Capacity utilization.
- Performance headroom.
- QoS ceilings/commitments (where applicable).

In the GUI, when you’re provisioning a file workload from a preset, you can hit “Get Recommendations”: Fusion evaluates candidate arrays within the fleet and returns a ranked list of suitable targets, even in an air‑gapped environment. So, in dark sites you still get:
- Data‑driven “put it here, not there” hints.
- Consistency with what you’re used to on the block side when Pure1 is available, but without the cloud dependency.

What’s New on FlashArray in Purity//FA 6.10.4

1. Fusion File Presets & Workloads for FlashArray File

Version 6.10.4 extends Fusion presets and workloads to FlashArray File Services. You can now define file presets on FA that capture:
- File system count/size.
- NFS/SMB export behavior.
- QoS caps at workload/volume group level.
- Snapshot/async replication policies via pgroups.
- Tags and metadata.

You can then provision file workloads on FlashArray using those presets, from any Fusion‑enabled FA in the fleet, with the same UX and API that you use for block workloads. This effectively normalizes block and file in Fusion:
- Fleet‑level view.
- Same provisioning primitives (preset → workload).
- Same policy and naming controls.

2. Fusion Pure1‑WLP Replication Placement (Block Workloads)

Also introduced is Fusion Pure1 Workload Replication Placement for block workloads. When you define replication in a block preset, Fusion can ask Pure1 Workload Planner for a placement plan: primary/replica arrays are chosen using capacity and performance projections, which avoids packing everything onto that one “lucky” array. Workload provisioning then uses this plan automatically; you can override, but the default is data‑backed rather than “whatever’s top of the list.” It’s the same idea as dark‑site file placement, just with more telemetry and projection thanks to Pure1.

Resource Naming Controls: Have It Your Way

If you care about naming standards, compliance, and audit (or just hate chaos and stress), this one matters. Fusion Presets Resource Naming Controls let you define deterministic naming patterns for all the objects a preset creates. Allowed variables might include:
- workload_name
- tenant / app / env
- platform (flasharray-x, flashblade-s, etc.)
- datacenter / site code
- sequenced IDs

You can also define patterns like:

  fs_name_pattern: "{tenant}-{env}-{workload_name}-fs{seq}"
  export_name_pattern: "{tenant}_{env}_{app}_exp{seq}"
  pgroup_name_pattern: "pg-{app}-{env}-{region}"

Result: every file system, export, pgroup, and volume created by that preset follows the pattern and satisfies internal CS/IT naming policies for compliance and audits. You can still parameterize inputs (e.g., tenant=finops, env=prod), but the structure is enforced.
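For intuition, the pattern expansion can be sketched in a few lines of Python — a minimal illustration using str.format-style substitution with a sequence counter. The function name and behavior here are assumptions for illustration only, not Fusion's actual naming engine:

```python
# Illustrative only: minimal expansion of a naming pattern like
# "{tenant}-{env}-{workload_name}-fs{seq}". Fusion's real engine is
# internal to Purity; this just shows the intended semantics.

def expand_pattern(pattern: str, variables: dict, count: int) -> list:
    """Expand a naming pattern once per resource, filling {seq} with 1..count."""
    return [pattern.format(**variables, seq=seq) for seq in range(1, count + 1)]

names = expand_pattern(
    "{tenant}-{env}-{workload_name}-fs{seq}",
    {"tenant": "finops", "env": "prod", "workload_name": "payments"},
    count=3,
)
print(names)
# ['finops-prod-payments-fs1', 'finops-prod-payments-fs2', 'finops-prod-payments-fs3']
```

Because the structure is fixed and only the inputs vary, two teams provisioning the same preset always produce names that sort and grep identically.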
No more hunting down “test2-final-old” in front of auditors and pretending that was intentional. Not speaking from experience, though :-)

The Updated Presets & Workloads GUI: Simple Is Better

Across Purity//FB 4.6.7 and Purity//FA 6.10.4, Fusion’s UI for presets and workloads is now a graphical wizard-type interface that is easier to follow, with more help along the way.

Single Pane, Shared Semantics
- Storage → Presets: block + file presets (FA + FB) in one place; JSON import/export.
- Storage → Workloads: all workloads, all arrays; filter by type, platform, tag, or preset.

Benefits for technical users:
- Quick answers to “What’s our standard for <workload X>?” and “Where did we deploy it, and how many variants exist?”
- Easy diff between “what the preset says” and “what’s actually deployed.”

Guard‑Rails Through Parameterization

Preset authors (yes, we’re looking at you) decide which fields are fixed (prescriptive) vs configurable, and the bounds on configurable fields (e.g., fs_size between 1–50 TB). In the GUI, that becomes:
- A minimal set of fields for provisioners to fill in.
- Validation baked into the wizard.
- Workloads that align with standards without needing a 10‑page runbook.

Integrated Placement and Naming

When you create a workload via the new GUI, you get:
- “Get Recommendations” for placement: Pure1‑backed in connected sites (block), dark‑site logic for file workloads on FB when offline.
- Naming patterns from the resource naming controls baked in, not bolted on afterward.

So you’re not manually choosing which array is “least bad” today, or how to hack the name so it still passes your log‑parsing scripts.

CLI / API: What This Looks Like in Practice

If you prefer the CLI over the GUI, Fusion doesn’t punish you.
Example: Defining and Using a File Preset

Author a preset JSON (simplified example):

  {
    "name": "app-file-gold",
    "type": "file",
    "parameters": {
      "fs_count": { "min": 1, "max": 16, "default": 4 },
      "fs_size_tib": { "min": 1, "max": 50, "default": 10 },
      "tenant": { "required": true },
      "env": { "allowed": ["dev", "test", "prod"], "default": "dev" }
    },
    "naming": {
      "filesystem_pattern": "{tenant}-{env}-{workload_name}-fs{seq}"
    },
    "protection": {
      "snapshot_policy": "hourly-24h-daily-30d",
      "replication_targets": ["dr-fb-01"]
    }
  }

Upload the preset into a fleet:

  purepreset workload upload --context fleet-core app-file-gold < app-file-gold.json

Create a workload and let Fusion pick the array:

  pureworkload create \
    --context fleet-core \
    --preset app-file-gold \
    --name payments-file-prod \
    --parameter tenant=payments \
    --parameter env=prod \
    --parameter fs_count=8 \
    --parameter fs_size_tib=20

Inspect placement and underlying resources:

  pureworkload list --context fleet-core --name payments-file-prod --verbose

Behind the scenes:
- Fusion picks suitable arrays using Pure1 Workload Placement (for connected sites) or dark‑site logic.
- purefs/exports/pgroups are created with names derived from the preset’s naming rules.

Example: Binding Existing Commands to Workloads

The new release also extends several CLI commands with workload awareness:

  purefs list --workload payments-file-prod
  purefs setattr --workload payments-file-prod ...
  purefs create --workload payments-file-prod --workload-configuration app-file-gold

This is handy when you need to:
- Troubleshoot or resize all file systems in a given workload.
- Script around logical workloads instead of individual file systems.

Why This Matters for You (Not Just for Slides)

Net impact of FB 4.6.7 + FA 6.10.4 from an admin’s perspective:
- File is now truly first‑class in Fusion, across both FlashArray and FlashBlade.
- You can encode “how we do storage here” as code:
  - Presets (JSON + GUI).
  - Parameterization and naming rules.
  - Placement and protection choices.
- Dark sites get sane placement via “Get Recommendations” for file workloads, instead of best‑guess manual picks.
- Resource naming is finally policy‑driven, not left to whoever is provisioning at 2 AM.
- GUI, CLI, and API are aligned around the same abstractions, so you can prototype in the UI, commit JSON to Git, and automate via CLI/API without re‑learning concepts.

Next Steps

If you want to kick the tires:
1. Upgrade: FlashBlade to Purity//FB 4.6.7, FlashArray to Purity//FA 6.10.4.
2. Pick one or two high‑value patterns (e.g., “DB file services”, “analytics scratch”, “home directories”).
3. Implement them as Fusion presets with parameters, placement hints, and naming rules.
4. Wire them into your existing tooling: use the GUI for ad‑hoc work, and wrap purepreset / pureworkload in your pipelines for everything else.

You already know how to design good storage. These releases just make it a lot harder for your environment to drift away from that design the moment humans touch it.

Feature Request: Certificate Automation with ACME
Hi Pure people, How about reducing my workload a little by supporting the ACME protocol for certificate renewal? Certificate lifespans are just getting shorter, and while I have a horrid expect script to renew certificates via ssh to FlashArray, it would be much simpler if Purity ran an ACME client itself. PS: We use the DNS Challenge method to avoid having to run webservices where they aren't needed.

Pure Report Podcast - Pure Fusion
Jump into the latest Pure Report podcast - all about Pure Fusion and featuring dpoorman and mikenelson-pure delivering all the great details about the technology at the heart of the intelligent control plane. Learn what Fusion is – a powerful capability included in the latest versions of the Purity operating environment that provides a centralized, unified management experience across an entire fleet of arrays. Mike and Don explain how Fusion inherently adopts Pure's API-First strategy, offering robust automation capabilities through PowerShell SDK, Ansible, and Python. We highlight how Fusion drives management, compliance, and workload configuration consistency from a single pane of glass, and how it's a vital foundation of Pure's Enterprise Data Cloud (EDC) vision. Check it out and let us know how your journey to adopting Fusion is going!
4 steps to enable Pure Fusion
Several teams like yours have recently switched on Pure Fusion and saved 39.5 hours of staff time per day by boosting application-response times. It’s been a game changer for enterprise data management. Read more on how the Mississippi Department of Revenue deployed the Pure Storage® platform for faster, more versatile storage to boost application performance, protect data, and support hypervisor mobility. Pure Fusion unifies enterprise data and automates workflows with simplified storage management, workload automation, and AI-driven workload placement. With the power of an Intelligent Control Plane, Fusion automates storage management across cloud, edge, or core, and any protocol: file, object, or block. Anchoring the Enterprise Data Cloud, it unifies data services and integrates with existing infrastructures, turning complex, manual tasks into streamlined, policy-driven operations. Fusion enables end-to-end automation—freeing you to accelerate innovation while reducing operational risk and overhead. Here are the 4 steps to enable Pure Fusion: Click here for the complete Pure Fusion Quick Start Guide. Using Secure LDAP (LDAPS) requires additional configuration with certificates. Please reference the Quick Start Guide for more information. For compatibility reference, please see the Compatibility Matrix.

Announcing the General Availability of Purity//FA 6.10.1
We are happy to announce the general availability of Purity//FA 6.10.1, the second release in the 6.10 Feature Release line, continuing to deliver on our Evergreen promise, offering customers new integrations with third-party solutions and expanded platform capabilities, allowing them to extract even more value from their enterprise data cloud. Some of the Purity features contained in this release include:

- Rubrik Tag Visualization highlights compromised storage volumes and snapshots identified by Rubrik, directly in Fusion’s fleet-wide GUI, enabling customers to quickly gain actionable insights to protect data and minimize downtime—making cyber resilience simple and scalable.
- Unified Replication Support on X20R4/R5 aligns support for ActiveDR and Block Async replication use cases across the currently-available product line, expanding data protection capabilities for customers on entry-level platforms.
- Object Tagging Phase 4 adds REST and CLI support for adding metadata tags to remote pods, remote p-group snapshots, and volume snapshots, giving users and processes more options for organizing and filtering storage objects.

See the release notes for all the details about these, and the many other features, bug fixes, and security updates included in the 6.10 release line.

UPGRADE RECOMMENDATIONS AND EOL SCHEDULE

Customers who are looking for continued delivery of all the newest capabilities should upgrade to 6.10.1. Customers who are looking for long-term maintenance of the 6.8 feature set are recommended to upgrade to the 6.9 LLR. When possible, customers should make use of Self-Service Upgrades (SSU) to ease the process of planning and executing non-disruptive Purity upgrades for their fleet. Development on the 6.10 release line will continue through March 2026. After this time the full 6.10 feature set will roll into the 6.11 Long Life Release line for long-term maintenance, and the 6.10 line will be declared End-of-Life (EOL).
HARDWARE SUPPORT

This release is supported on the following FlashArray platforms: Cloud Block Store for Azure and AWS, FA//X (R3, R4, R5), FA//C (R3, R4, R5), FA//XL (R1, R5), FA//E, and FA//RC20. Note: DFS software version 2.2.5 is recommended with this release.

LINKS AND REFERENCES

- Purity//FA 6.10 Release Notes
- Self-Service Upgrades
- Purity//FA Release and End-of-Life Schedule
- FlashArray Hardware and End-of-Support
- DirectFlash Shelf Software Compatibility Matrix
- FlashArray Capacity and Feature Limits
- FlashArray Feature Interoperability Matrix

Ask us everything about Purity Upgrades!
💬 Have more questions for our experts around Purity Upgrades after today's live "Ask Us Everything"? Feel free to drop them below and our experts will answer! dpoorman, skennedy, rquast, jhoughes: tag, you're it! You can also check out these upgrade resources:
- Bulk self-service upgrades demo video
- Upgrade your own FlashArray with Pure1 blog
- Fleet-wide self-service upgrades brief

Feature Request for Puritan Heroes — Manage snapshot schedules directly in Pure1
An opportunity to shine.

Summary / problem

Today, snapshot scheduling and edits are performed per-array (e.g., via Purity protection policies / groups). At fleet scale this is manual, slow for change control, and hard to audit centrally. We’d like first-class, Pure1-native capabilities to create, modify, and bulk-apply snapshot schedules—without logging into each array.

Impact / why this matters
- Fleet operations: one place to set/adjust RPO/retention across many FlashArrays/FlashBlades.
- Change management & compliance: audited, role-scoped changes with an approver workflow.
- Risk & capacity: pre-change impact analysis (capacity, performance) using Pure1 telemetry.
- Consistency: policy templates and guardrails reduce configuration drift.

Scope (initial ask)
- Pure1 UI: a “Snapshot Policies” workspace to create/edit/delete schedules and retentions (minute/hour/day/week/month), including SafeMode/immutability options where supported.
- Bulk apply/override to multiple arrays, volume groups, file systems/shares.
- What-if capacity forecast prior to save (based on Pure1 analytics).
- Change review & approvals (optional), with rationale and ticket ID fields.
- Maintenance/blackout windows and exceptions.
- Full audit trail (who, what changed, before/after, when, target objects).
- RBAC: granular rights (view vs. edit vs. approve), and per-array or per-group scoping.
- Pure1 Public API: new endpoints to programmatically create/read/update/delete snapshot policies, bind/unbind them to objects, run an on-demand snapshot from a policy, and pull audit events—so we can integrate with CI/CD and ITSM. (We rely on the Pure1 API today for telemetry; adding write ops for policies would unlock safe automation.)
- Edge / connectivity: use Pure1 Edge Service for secure, two-way execution on arrays. Include graceful handling for dark/offline assets (queue and notify).

Acceptance criteria (examples)
- From Pure1, I can create a policy “gold-daily-35d” and attach it to 100+ arrays / 1,000+ objects in a single operation.
- A non-admin can propose a policy change; an approver must OK it before it takes effect.
- The audit page shows a human-readable diff (before/after crontab-style schedule and retention) plus API payloads.
- The capacity impact estimator shows expected snapshot object growth over 30/90 days based on historical change rates.
- API: POST /snapshot-policies, PATCH /snapshot-policies/{id}, POST /snapshot-policy-attachments, GET /audit-events?type=snapshot_policy.

Nice-to-haves
- Prebuilt policy templates (e.g., 15-min RPO for 24h + daily 35d + monthly 12m).
- Guardrails (“warn/deny if estimated growth > X TB or RPO < Y min”).
- ServiceNow/Jira webhook on approval/change.

Environment
- Mixed FA/FB fleet; Pure1 connected, Edge Service available.
- SafeMode enabled on arrays requiring immutability.

Thanks in advance - Garry Ohanian
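To make the capacity-impact-estimator criterion above concrete, here is a minimal Python sketch of the kind of math involved. It assumes a steady daily change rate and that snapshot consumption grows linearly until the oldest snapshot begins expiring; the function and model are hypothetical illustrations, not Pure1's actual analytics:

```python
# Hypothetical sketch of the requested capacity-impact estimator.
# Assumption: each retained snapshot holds roughly one day's worth of
# changed (unique) data, so consumption grows linearly and plateaus
# at daily_change_tb * retention_days once retention kicks in.

def estimate_snapshot_growth_tb(daily_change_tb: float,
                                retention_days: int,
                                horizon_days: int) -> float:
    """Estimated extra capacity (TB) consumed by snapshots after horizon_days."""
    # Linear growth until the oldest snapshot expires, then steady state.
    return daily_change_tb * min(horizon_days, retention_days)

# Example: 0.5 TB/day of change against a 35-day retention policy.
print(estimate_snapshot_growth_tb(0.5, 35, 30))  # 30-day estimate: 15.0
print(estimate_snapshot_growth_tb(0.5, 35, 90))  # 90-day estimate: 17.5
```

A real estimator would replace the constant change rate with the per-object historical rates Pure1 already collects, but the 30/90-day projection shape would be the same.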