Flash Array Certification
All FlashArray Admins,

If any of you currently hold a FlashArray certification, there is an alternative to retaking the test to renew your cert. The Continuing Pure Education (CPE) program takes learning activities, community engagement, and contribution hours into account to renew your FA certification. I just successfully renewed my FlashArray Storage Professional cert by tracking my activities. Below are the details I received from Pure.

- Customers can earn 1 CPE credit per hour of session attendance at Accelerate, for a maximum of 10 CPEs total (i.e., up to 10 hours of sessions). Sessions must be attended live. I would go ahead and add all the sessions you attended at Accelerate to the CPE submission form.
- Associate-level certifications will auto-renew as long as there is at least one active higher-level certification (e.g., Data Storage Associate will auto-renew anytime a Professional-level cert is renewed). All certifications other than the Data Storage Associate must be renewed separately.
- At this time, the CPE program only applies to FlashArray-based exams. Non-FA exams may be renewed by retaking the respective test every three years.

You should be able to get the CPE submission form from your account team. Once complete, email your recertification log to peak-education@purestorage.com for formal processing.

-Charlie

Pure Fusion File Presets & Workloads on FB 4.6.7 and FA 6.10.4: Less Click-Ops, More Policy
If you've ever built the "standard" NFS/SMB layout for an app for the fifth time in a week and thought "this should be a function, not a job", this release is for you.

With FlashBlade version 4.6.7 and FlashArray version 6.10.4, Pure Fusion finally gives file the same treatment block has had for a while: presets and workloads for file services across FlashBlade and FlashArray, a much sharper Presets & Workloads UI, plus smarter placement and resource naming controls tuned for real environments, not demos. This post is written for people who already know what NFS export policies and snapshot rules are and are mostly annoyed they still have to configure them by hand.

Problem Statement: Your "Standard" File Config is a Lie

Current pattern in most environments:
- Every app team needs "just a few file shares".
- You (or your scripts) manually:
  - Pick an array and hope it's the right one.
  - Create file systems and exports.
  - Glue on snapshot/replication policies.
  - Try to respect naming conventions and tagging.
- Six months later:
  - The same logical workload looks different on every array.
  - Audit and compliance people open tickets.
  - Nobody remembers what "fs01-old2-bak" was supposed to be.

Fusion File Presets & Workloads exist to eradicate that pattern:
- Presets = declarative templates describing how to provision a workload (block or file).
- Workloads = concrete instances of those presets deployed somewhere in a Fusion fleet (FA, FB, or both).

In nerd-speak, think: Helm chart for storage (preset) vs Helm release (workload).

Quick Mental Model: What Presets & Workloads Actually Are

A file preset can include, for example:
- Number of file systems (FlashBlade or FlashArray File).
- Directory layout and export policies (for NFS/SMB).
- Snapshot policies and async replication (through protection groups, or pgroups).
- Per-workload tags (helps in finding a needle in a haystack, and more).
- Quota and snapshot parameters.

A workload is a Fusion object that:
- References the preset in its entirety.
- Tracks where the underlying Purity objects live.
- Surfaces health, capacity, and placement at the fleet level.

In code-brain terms:

```yaml
preset: app-file-gold
parameters:
  env: prod
  fs_count: 4
  fs_size: 10TB
  qos_iops_max: 50000
placement:
  strategy: recommended   # Pure1 or dark-site heuristic depending on connectivity
  constraints:
    platform: flashblade
```

Fusion resolves that into resources and objects on one or more arrays: purefs objects, exports, pgroups, QoS, tags, and consistently named resources.

So, what's new, you ask?

What's New on FlashBlade in Purity//FB 4.6.7

1. Fusion File Presets & Workloads for FlashBlade

Purity//FB 4.6.7 is the release where FlashBlade joins the Fusion presets/workloads party for file. Key points:
- You can now define Fusion file presets that describe:
  - Number/size of file systems.
  - Export policies (NFS/SMB).
  - Snapshot/replication policies.
  - Tags and other metadata.
- You then create Fusion file workloads from those presets, deployed onto any compatible FlashBlade or FlashArray in the fleet, depending on your constraints and placement recommendations.

That means you stop hand-crafting per-array configs and start stamping out idempotent policies.

2. New Presets & Workloads GUI on FlashBlade

Purity//FB 4.6.7 brings proper Fusion GUI surfaces to FB:
- Storage → Presets
  - Create/edit/delete Fusion presets (block + file).
  - Upload/download preset JSON directly from the GUI.
- Storage → Workloads
  - Instantiate workloads from presets.
  - See placement, status, and underlying resources across the fleet.
Why this is a real improvement, not just new tabs:
- Single mental model across FA and FB: the same abstractions (preset → workload → Purity objects) and the same UX for block and file.
- Guard-railed customization: the GUI only exposes parameters marked as configurable in the preset (with limits), so you can safely delegate provisioning to less storage-obsessed humans without getting random snapshot policies.

3. JSON Preset Upload/Download (CLI + GUI)

This release also adds full round-trip JSON support for presets, including in the GUI. On the CLI side:

```bash
# Export an existing preset definition as JSON
purepreset workload download app-file-gold > app-file-gold.json

# Edit the JSON, save it to a file share, put it under version control,
# commit to git, run it through CI, etc.

# Import the preset into another fleet or array
purepreset workload upload --context fleet-prod app-file-gold < app-file-gold.json
```

Effects:
- Presets become versionable artifacts (Git, code review, promotion).
- You can maintain a central preset catalog and promote from dev → QA → prod like any other infra-as-code, as in the sketch below.
- Sharing configs stops being "here's a screenshot of my settings."
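To make "presets as code" concrete, here's a minimal promotion sketch. The purepreset commands are the ones shown above; the repository layout, fleet names, and CI wiring are assumptions you'd adapt to your own environment.

```bash
#!/usr/bin/env bash
# Hypothetical promotion flow: export a preset from the fleet you're working
# against, review it in Git, then push the approved artifact to prod.
# Fleet names and the repo layout are illustrative only.
set -euo pipefail

PRESET="app-file-gold"

# 1. Export the current definition (run against your dev fleet) into the repo
purepreset workload download "$PRESET" > "presets/${PRESET}.json"

# 2. Commit it so the change goes through normal code review
git add "presets/${PRESET}.json"
git commit -m "Update ${PRESET} preset from dev"

# 3. After the change is approved/merged, promote the same artifact to prod
purepreset workload upload --context fleet-prod "$PRESET" < "presets/${PRESET}.json"
```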
4. Fusion Dark Site File Workload Placement + Get Recommendations

Many folks run fleets without outbound connectivity, for various reasons. Until now, that meant no fancy placement recommendations for those sites. Fusion Dark Site File Workload Placement changes that.

When Pure1 isn't reachable, Fusion can still compute placement recommendations for file workloads across the fleet using local telemetry:
- Capacity utilization.
- Performance headroom.
- QoS ceilings/commitments (where applicable).

In the GUI, when you're provisioning a file workload from a preset, you can hit "Get Recommendations":
- Fusion evaluates candidate arrays within the fleet.
- It returns a ranked list of suitable targets, even in an air-gapped environment.

So, in dark sites you still get:
- Data-driven "put it here, not there" hints.
- Consistency with what you're used to on the block side when Pure1 is available, but without the cloud dependency.

What's New on FlashArray in Purity//FA 6.10.4

1. Fusion File Presets & Workloads for FlashArray File

Purity//FA 6.10.4 extends Fusion presets and workloads to FlashArray File Services. You can now:
- Define file presets on FA that capture:
  - File system count/size.
  - NFS/SMB export behavior.
  - QoS caps at the workload/volume group level.
  - Snapshot/async replication policies via pgroups.
  - Tags and metadata.
- Provision file workloads on FlashArray using those presets:
  - From any Fusion-enabled FA in the fleet.
  - With the same UX and API that you use for block workloads.

This effectively normalizes block and file in Fusion: a fleet-level view, the same provisioning primitives (preset → workload), and the same policy and naming controls.

2. Fusion Pure1-WLP Replication Placement (Block Workloads)

Also introduced is Fusion Pure1 Workload Replication Placement for block workloads. When you define replication in a block preset:
- Fusion can ask Pure1 Workload Planner for a placement plan: primary/replica arrays are chosen using capacity and performance projections, which avoids packing everything onto that one "lucky" array.
- Workload provisioning then uses this plan automatically. You can override, but the default is data-backed rather than "whatever's top of the list."

It's the same idea as dark-site file placement, just with more telemetry and projection thanks to Pure1.

Resource Naming Controls: Have It Your Way

If you care about naming standards, compliance, and audit (or just hate chaos and stress), this one matters.

Fusion Presets Resource Naming Controls let you define deterministic naming patterns for all the objects a preset creates. Allowed variables might include:
- workload_name
- tenant / app / env
- platform (flasharray-x, flashblade-s, etc.)
- datacenter or site code
- sequenced IDs

You can also define patterns like:

```yaml
fs_name_pattern: "{tenant}-{env}-{workload_name}-fs{seq}"
export_name_pattern: "{tenant}_{env}_{app}_exp{seq}"
pgroup_name_pattern: "pg-{app}-{env}-{region}"
```

Result: every file system, export, pgroup, and volume created by that preset:
- Follows the pattern.
- Satisfies internal CS/IT naming policies for compliance and audits.

You can still parameterize inputs (e.g., tenant=finops, env=prod), but the structure is enforced; a small local sketch of how a pattern expands follows below. No more hunting down "test2-final-old" in front of auditors and pretending that was intentional. Not speaking from experience though :-)
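If you want to sanity-check a pattern before baking it into a preset, you can expand it locally. This is a purely illustrative shell sketch, not a Pure tool; it just substitutes the placeholder variables the way you'd expect the pattern to resolve, using example input values.

```bash
#!/usr/bin/env bash
# Illustrative only: preview what a naming pattern like
#   {tenant}-{env}-{workload_name}-fs{seq}
# would produce for a given workload. The values below are example inputs.
tenant="finops"
env="prod"
workload_name="payments-file-prod"
pattern='{tenant}-{env}-{workload_name}-fs{seq}'

for seq in 1 2 3 4; do
  name=$pattern
  name=${name//"{tenant}"/$tenant}
  name=${name//"{env}"/$env}
  name=${name//"{workload_name}"/$workload_name}
  name=${name//"{seq}"/$seq}
  echo "$name"   # e.g., finops-prod-payments-file-prod-fs1
done
```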
The Updated Presets & Workloads GUI: Simple Is Better

Across Purity//FB 4.6.7 and Purity//FA 6.10.4, Fusion's UI for presets and workloads is now a graphical wizard-type interface that is easier to follow, with more help along the way.

Single Pane, Shared Semantics

- Storage → Presets
  - Block + file presets (FA + FB) in one place.
  - JSON import/export.
- Storage → Workloads
  - All workloads, all arrays.
  - Filter by type, platform, tag, or preset.

Benefits for technical users:
- Quick answers to "What's our standard for workload X?" and "Where did we deploy it, and how many variants exist?"
- Easy diff between what the preset says and what's actually deployed.

Guard-Rails Through Parameterization

Preset authors (yes, we're looking at you) decide:
- Which fields are fixed (prescriptive) vs configurable.
- The bounds on configurable fields (e.g., fs_size between 1–50 TB).

In the GUI, that becomes:
- A minimal set of fields for provisioners to fill in.
- Validation baked into the wizard.
- Workloads that align with standards without needing a 10-page runbook.

Integrated Placement and Naming

When you create a workload via the new GUI, you get:
- "Get Recommendations" for placement: Pure1-backed in connected sites (block), dark-site logic for file workloads on FB when offline.
- Naming patterns from the resource naming controls baked in, not bolted on afterward.

So you're not manually choosing which array is "least bad" today, or how to hack the name so it still passes your log-parsing scripts.

CLI / API: What This Looks Like in Practice

If you prefer the CLI over the GUI, Fusion doesn't punish you.

Example: Defining and Using a File Preset

Author a preset JSON (simplified example):

```json
{
  "name": "app-file-gold",
  "type": "file",
  "parameters": {
    "fs_count":    { "min": 1, "max": 16, "default": 4 },
    "fs_size_tib": { "min": 1, "max": 50, "default": 10 },
    "tenant":      { "required": true },
    "env":         { "allowed": ["dev", "test", "prod"], "default": "dev" }
  },
  "naming": {
    "filesystem_pattern": "{tenant}-{env}-{workload_name}-fs{seq}"
  },
  "protection": {
    "snapshot_policy": "hourly-24h-daily-30d",
    "replication_targets": ["dr-fb-01"]
  }
}
```

Upload the preset into a fleet:

```bash
purepreset workload upload --context fleet-core app-file-gold < app-file-gold.json
```

Create a workload and let Fusion pick the array:

```bash
pureworkload create \
  --context fleet-core \
  --preset app-file-gold \
  --name payments-file-prod \
  --parameter tenant=payments \
  --parameter env=prod \
  --parameter fs_count=8 \
  --parameter fs_size_tib=20
```

Inspect placement and underlying resources:

```bash
pureworkload list --context fleet-core --name payments-file-prod --verbose
```

Behind the scenes:
- Fusion picks suitable arrays using Pure1 Workload Placement (for connected sites) or dark-site logic.
- purefs/exports/pgroups are created with names derived from the preset's naming rules.

Example: Binding Existing Commands to Workloads

The new version also extends several CLI commands with workload awareness:

```bash
purefs list --workload payments-file-prod
purefs setattr --workload payments-file-prod ...
purefs create --workload payments-file-prod --workload-configuration app-file-gold
```

This is handy when you need to:
- Troubleshoot or resize all file systems in a given workload.
- Script around logical workloads instead of individual file systems.

Why This Matters for You (Not Just for Slides)

Net impact of FB 4.6.7 + FA 6.10.4 from an admin's perspective:
- File is now truly first-class in Fusion, across both FlashArray and FlashBlade.
- You can encode "how we do storage here" as code: presets (JSON + GUI), parameterization and naming rules, placement and protection choices.
- Dark sites get sane placement via "Get Recommendations" for file workloads, instead of best-guess manual picks.
- Resource naming is finally policy-driven, not left to whoever is provisioning at 2 AM.
- GUI, CLI, and API are aligned around the same abstractions, so you can prototype in the UI, commit JSON to Git, and automate via CLI/API without re-learning concepts.

Next Steps

If you want to kick the tires:
- Upgrade: FlashBlade to Purity//FB 4.6.7, FlashArray to Purity//FA 6.10.4.
- Pick one or two high-value patterns (e.g., "DB file services", "analytics scratch", "home directories").
- Implement them as Fusion presets with parameters, placement hints, and naming rules.
- Wire into your existing tooling: use the GUI for ad-hoc work, and wrap purepreset / pureworkload in your pipelines for everything else (a minimal sketch follows below).
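Here's one way that pipeline wrapping might look. It reuses the pureworkload commands shown above; the existence check (grepping list output) and the exact output format it assumes are illustrative, so treat this as a sketch to adapt rather than a drop-in script.

```bash
#!/usr/bin/env bash
# Sketch: idempotent workload provisioning for a CI/CD pipeline.
# The pureworkload calls are the ones shown earlier in this post; how the
# script detects "already exists" (grepping list output) is an assumption.
set -euo pipefail

CONTEXT="fleet-core"
PRESET="app-file-gold"
NAME="payments-file-prod"

if pureworkload list --context "$CONTEXT" --name "$NAME" 2>/dev/null | grep -q "$NAME"; then
  echo "Workload $NAME already exists, nothing to do."
else
  pureworkload create \
    --context "$CONTEXT" \
    --preset "$PRESET" \
    --name "$NAME" \
    --parameter tenant=payments \
    --parameter env=prod \
    --parameter fs_count=8 \
    --parameter fs_size_tib=20
fi
```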
You already know how to design good storage. These releases just make it a lot harder for your environment to drift away from that design the moment humans touch it.

AUE - Key Insights

Good morning/afternoon/evening everyone! This is Rich Barlow, Principal Technologist @ Pure. It was super fun to proctor this AUE session with Antonia and Jon. Hopefully everyone got in all of the questions that they wanted to ask; we had so many that we had to answer many of them out of band. So thank you for your enthusiasm and support. Looking forward to the next one! Here's a rundown of the most interesting and impactful questions we were asked. If you have any more, please feel free to reach out.

FlashArray File: Your Questions, Our Answers (Ask Us Everything Recap)

Our latest "Ask Us Everything" webinar with Pure Storage experts Rich Barlow, Antonia Abu Matar, and Jonathan Carnes was another great session. You came ready with sharp questions, making it clear you're all eager to leverage the simplicity of your FlashArray to ditch the complexity of legacy file storage. Here are some of the best insights shared during the session.

Unify Everything: Performance By Design

You asked about the foundation, and it's a game-changer.
- No Middleman, Low Latency: Jon Carnes confirmed that FlashArray File isn't a bolt-on solution. Since the file service lands directly on the drives, just like block data, there's effectively "no middle layer." The takeaway? You get the same awesome, low-latency performance for file that you rely on for block workloads.
- Kill the Data Silos: Antonia Abu Matar emphasized the vision behind FlashArray File: combining block and file on a single, shared storage pool. This isn't just tidy; it means you benefit from global data reduction and unified data services across everything.

Scale, Simplicity, and Your Weekends Back

The community was focused on escaping the complexities of traditional NAS systems.
- Always-On File Shares: Worried about redundancy? Jon confirmed that FlashArray File implements an "always-on" version of Continuously Available (CA) shares for SMB3 (in Purity 6.9/6.10). It's on by default for transparent failover and simple client access.
- Multi-Server Scale-Up: For customers migrating from legacy vendors and needing lots of multi-servers, we're on it. Jon let us know that engineering is actively working to significantly raise the current limits (aiming for around 100 in the next Purity release), stressing that Pure increases these limits non-disruptively to ensure stability.
- NDU—Always and Forever: The best part? No more weekend maintenance marathons. The FlashArray philosophy is a "data in place, non-disruptive upgrade." That applies to both block and file, eliminating the painful data migrations you're used to.
- Visibility at Your Fingertips: You can grab real-time IOPS and throughput from the GUI or via APIs. For auditing, file access events are pushed via syslog in native JSON format, which makes integrating with tools like Splunk super easy (see the sketch below for one way to consume them).
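As a quick illustration of why JSON-formatted audit events are convenient: you can filter them with standard tooling before they ever reach Splunk. The log path and field names below (operation, user, path, time) are hypothetical placeholders, and the sketch assumes each forwarded line is a bare JSON object; check the actual event schema your array emits before building anything on it.

```bash
# Hypothetical example: tail JSON file-audit events forwarded to a local syslog
# file and surface only deletes. Field names and the log path are placeholders.
tail -f /var/log/flasharray-file-audit.log \
  | jq -r 'select(.operation == "delete")
           | "\(.time) \(.user) deleted \(.path)"'
```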
Conquering Distance and Bandwidth

A tough question came in about supporting 800 ESRI users across remote Canadian sites (Yellowknife, Iqaluit, etc.) with real-time file access despite low bandwidth.
- Smart Access over Replication: Jon suggested looking at Rapid Replicas (available on FlashBlade File). This isn't full replication; it's a smart solution that synchronizes metadata across sites and only pulls the full data on demand (pull-on-access). This is key for remote locations because it dramatically cuts down on the constant bandwidth consumption of typical data replication.

Ready to Simplify?

FlashArray File Services lets you consolidate your infrastructure and get back to solving bigger problems, not babysitting your storage. Start leveraging the power of a truly unified and non-disruptive platform today! Join the conversation and share your own experiences in the Pure Community.

Feature Request: Certificate Automation with ACME
Hi Pure people,

How about reducing my workload a little by supporting the ACME protocol for certificate renewal? Certificate lifespans are just getting shorter, and while I have a horrid expect script that renews certificates via SSH to the FlashArray, it would be much simpler if Purity ran an ACME client itself.

PS: We use the DNS challenge method to avoid having to run web services where they aren't needed.
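For anyone in the same boat until (or unless) a native client ships, a common interim pattern is to run an ACME client with a DNS-01 challenge off-array and then push the result to the array. The sketch below uses acme.sh with an example DNS provider plugin; the final deploy step is deliberately a placeholder, because the exact cert-import command (purecert, GUI, or REST) depends on your Purity version and existing workflow.

```bash
#!/usr/bin/env bash
# Sketch: off-array ACME renewal using a DNS-01 challenge.
# dns_cf (Cloudflare) is just an example DNS plugin; the deploy step is a
# placeholder rather than a specific Purity command.
set -euo pipefail

ARRAY_FQDN="flasharray1.example.com"

# 1. Issue or renew the certificate via DNS-01 (no web service required)
acme.sh --issue --dns dns_cf -d "$ARRAY_FQDN"

# 2. Copy the key and full chain somewhere your deployment tooling can read
acme.sh --install-cert -d "$ARRAY_FQDN" \
  --key-file       "./${ARRAY_FQDN}.key" \
  --fullchain-file "./${ARRAY_FQDN}.fullchain.pem"

# 3. Deploy to the array: this is where today's SSH/expect (or REST) workflow
#    still lives until Purity can run an ACME client natively.
echo "Import ${ARRAY_FQDN}.fullchain.pem and its key on the array per your Purity version."
```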
Why Your Writes Are Always Safe on FlashArray

The promise of modern storage is simple: when the system says "yes," your data better be safe. No matter what happens next (power failure, controller hiccup, or whatever else the universe throws at you), writes need to stay acknowledged. FlashArray is engineered around this non-negotiable principle. Let me walk you through how we deliver on it.

Durable First, Fast Always

When your application issues a write to FlashArray, here's the path it takes:
1. Land in DRAM for inline data reduction (dedupe, compression, you know, the lightweight stuff).
2. Persist redundantly in NVRAM (mirrored or RAID-6/DNVR, depending on platform), in a log accessible by either controller.
3. Acknowledge to the host. ← This is the critical moment.
4. Flush to flash media in the background, efficiently and asynchronously.

Notice what happens between steps 2 and 3? We don't acknowledge until data is durably persisted in non-volatile memory. Not "mostly safe," not "probably fine," but safe and durable. This isn't a write-back cache we'll get around to flushing later. The acknowledgement means your data survived the critical path and is now protected, period.

Power Loss? No Problem.

FlashArray NVRAM modules include integrated supercapacitors that provide power hold-up during unexpected power events. When the power drops, these capacitors ensure the buffered write log is safely preserved. There are no batteries to maintain, and no external UPS is required just to have write safety; many sites still deploy a UPS for broader data center and facility reasons, and that remains a recommended practice. Because durability is achieved at the NVRAM layer, we eliminate the most common failure mode in legacy systems: the volatile write cache that promises safety but can't deliver when it matters most.

Simpler Path with Integrated DNVR

In our latest architectures, we integrate Distributed NVRAM (DNVR) directly into the DirectFlash Module (DFMD). This simplifies the write path (fewer hops, tighter integration, better efficiency) and scales NVRAM bandwidth and capacity with the number of modules. By bringing persistence closer to the media, we're not just maintaining our durability guarantees; we're increasing capacity and streamlining the data path at the same time.

Graceful Under Pressure

What happens if write ingress temporarily exceeds what the system can flush to flash? FlashArray applies deterministic backpressure: you may see latency increase, but I/O is not dropped, so data is not at risk. Background processes yield, and lower-priority internal tasks are throttled to prioritize destage operations, keeping the system stable and predictable. Translation: we slow down gracefully and don't fail unpredictably.

High Availability by Design

Controllers are stateless, with writes durably persisted in NVRAM accessible by either controller. If one controller faults, the peer automatically takes over, replays any in-flight operations from the durable log, and resumes service. A brief I/O pause may occur during takeover; platforms are sized so a single controller can handle the full workload afterward to minimize disruption to your applications. No acknowledged data is lost. No manual intervention required. Just continuous operation.

Beyond the ACK: Protection on Flash

After the destage, data on flash is protected with wide-striped erasure coding for fast, predictable rebuilds and multi-device fault tolerance. And no hot-spare overhead.
The Bottom Line

Modern flash gives you incredible performance, but performance means nothing if your data isn't safe. FlashArray's architecture makes durability the first principle—not an optimization, not an add-on, but the foundation everything else is built on. When FlashArray says your write is safe, it's safe. That's not marketing. That's engineering.

This approach to write safety is part of Pure's commitment to Better Science: doing things the right way, not the easy way. We didn't just swap drives in an existing architecture; we reimagined the entire system from the ground up, from how we co-design hardware and software with DirectFlash to how we map and manage petabytes of metadata at scale.

Want to dive deeper?
- Better Science, Volume 1 — Hardware and Software Co-design with DirectFlash: https://blog.purestorage.com/products/better-science-volume-1-hardware-and-software-co-design-with-directflash/
- Better Science, Volume 2 — Maps, Metadata, and the Pyramid: https://blog.purestorage.com/perspectives/better-science-volume-2-maps-metadata-and-the-pyramid/
- The Pure Report — Better Science Vol. 1 (DirectFlash): https://podcasts.apple.com/gb/podcast/better-science-volume-1-directflash/id1392639991?i=1000569574821

Purity//FA 6.9 is (Finally) Enterprise Ready!
A few months ago I wrote about the top 10 reasons to upgrade to Purity 6.9, and here are 10 more reasons, because 6.9 has just gone Enterprise Ready!

https://support.purestorage.com/bundle/m_flasharray_release/page/FlashArray/FlashArray_Release/01_Purity_FA_Release_Notes/topics/concept/c_purityfa_69x_release_notes.html

10. 💍 It's "Long-Life"! Stability until June 2028. That's a longer, more successful relationship than 90% of reality TV couples achieve.
9. ⚰️ Your Pure SE Won't Keep Bugging You About Running an EOL Release. You know who you are….
8. 💯 It's Been to College. It met the criteria for "customer fleet adoption, cumulative runtime, and observed uptime." Basically, it passed the field test with flying colors.
7. 🤝 You Get a Side of Fusion. Upgrade to 6.9 and get the powerful, simple-to-use multi-array storage platform management system included. You know you want it!
6. 😴 The Engineers Can Finally Go Home. A big thank you to the engineering, support, technical program management, and product management teams for all the hard work. Go take a nap!
5. 🛡️ We Have a Stable Alternative to Chasing New Features. For customers who want rock-solid reliability, you can skip the Feature Release (FR) line drama and stick with the LLR.
4. ✅ It's the Complete 6.8 Feature Set. You don't lose any capabilities; you just gain the confidence of a battle-tested release. Full meal deal, no compromises.
3. 🖱️ It's So Easy to Get There, Even the Intern Could Do It. Compatible hardware customers are encouraged to use Self-Service Upgrades (SSU). Less work, more coffee breaks.
2. 🔒 Guaranteed Bug Fixes and Security Updates. This release is officially maintained, meaning your security team can finally relax... slightly.
1. 🚨 When You Call Support, We Won't Start With "Did You Upgrade Yet?"

Announcing the General Availability of Purity//FA 6.7.7 LLR
We are happy to announce the general availability of 6.7.7, the eighth release in the 6.7 Long-Life Release (LLR) line! This release line is based on the feature set introduced in 6.6, providing long-term consistency in capabilities, user experience, and interoperability, with the latest fixes and security updates. For more detailed information about bug fixes and security updates included in each release, see the release notes.

UPGRADE RECOMMENDATIONS AND EOL SCHEDULE

We recommend customers already running 6.7 who are looking for the latest fixes and updates to upgrade to this long-life release. Customers looking for a newer feature set, including Fusion fleet management, should consider an upgrade to the 6.9 LLR. When possible, customers should make use of Self-Service Upgrades (SSU) to ease the process of planning and executing non-disruptive Purity upgrades for their fleet. The 6.7 LLR line is planned for development through October 2027.

HARDWARE SUPPORT

This release is supported on the following FlashArray platforms: FA//X (R2, R3, R4), FA//C (R1, R3, R4), FA//XL (R1), FA//E, and Pure Storage Cloud Dedicated. The PSC Dedicated release may take up to a week to be available on the AWS Marketplace and Azure Marketplace. Note: DFS software version 2.2.4 is recommended with this release.

LINKS AND REFERENCES

- Purity//FA 6.7 Release Notes
- Purity//FA 6.6/6.7 Feature Content
- Self-Service Upgrades
- Purity//FA Release and End-of-Life Schedule
- FlashArray Hardware and End-of-Support
- DirectFlash Shelf Software Compatibility Matrix
- FlashArray Capacity and Feature Limits

IT'S PURITY UPGRADE TUESDAY AGAIN!
Yes, I am going to die on the hill of making Tuesdays a new special day for Pure Storage 😀. For those of you out there who have upgraded Purity within the last two years or so: what was the feature that was added or upgraded that made you say "WOW"?

I'll start. FA Files was my first launch project here, so I got a solid peek behind the curtains at all that went into it and was blown away by where the PM team was planning on taking it.

What about your experience? Sound off below!

Choosing Between Snapshots and Backups? Use Both
Let's settle the old debate: snapshots or backups for data protection? The answer is you need both, working together.

The Problem

VMware snapshots are great for quick rollbacks, but they create redo logs that strain storage I/O and need eventual consolidation. During active snapshots, your storage reads multiple files simultaneously, potentially impacting production. Storage snapshots like Pure's are instantaneous and lightweight, but they capture entire volumes at once, are only crash-consistent, and require full restores or manual workarounds to extract specific data. Neither alone covers every recovery scenario you'll face.

The Solution

Integrate VMware, Pure Storage, and Veeam into a cohesive platform:
- Leverage Pure snapshots for fast, efficient data capture without production impact.
- Use Veeam to orchestrate application-consistent backups and enable granular restores.
- Keep snapshots close to the source for quick recovery.
- Maintain backup files for long-term retention.
- Replicate everything to DR sites with the same capabilities.

The Payoff

One integrated solution gives you flexibility for any situation: ransomware recovery from immutable snapshots, granular file restores, site failovers, or long-term archive retrieval. All without impacting production.

Modern data protection isn't about picking sides. It's about making your storage, hypervisor, and backup solution work together intelligently.

Hear more here on Pure360: Pure Storage and Veeam - Why Architecture Matters.

4 steps to enable Pure Fusion
Several teams like yours have recently switched on Pure Fusion and saved 39.5 hours of staff time per day by boosting application response times. It's been a game changer for enterprise data management. Read more on how the Mississippi Department of Revenue deployed the Pure Storage® platform for faster, more versatile storage to boost application performance, protect data, and support hypervisor mobility.

Pure Fusion unifies enterprise data and automates workflows with simplified storage management, workload automation, and AI-driven workload placement. With the power of an Intelligent Control Plane, Fusion automates storage management across cloud, edge, or core, and any protocol: file, object, or block. Anchoring the Enterprise Data Cloud, it unifies data services and integrates with existing infrastructures, turning complex, manual tasks into streamlined, policy-driven operations. Fusion enables end-to-end automation—freeing you to accelerate innovation while reducing operational risk and overhead.

Here are the 4 steps to enable Pure Fusion:

Click here for the complete Pure Fusion Quick Start Guide. Using Secure LDAP (LDAPS) requires additional configuration with certificates. Please reference the Quick Start Guide for more information. For compatibility reference, please see the Compatibility Matrix.