Recent Content
Flash Array Certification
All FlashArray Admins,

If any of you currently hold a FlashArray certification, there is an alternative to retaking the test to renew your cert. The Continuing Pure Education (CPE) program counts learning activities and community engagement and contribution hours toward renewing your FA certification. I just successfully renewed my FlashArray Storage Professional cert by tracking my activities. Below are the details I received from Pure.

- Customers can earn 1 CPE credit per hour of session attendance at Accelerate, for a maximum of 10 CPEs total (i.e., up to 10 hours of sessions). Sessions must be attended live. I would go ahead and add all the sessions you attended at Accelerate to the CPE_Submission form.
- Associate-level certifications will auto-renew as long as there is at least one active higher-level certification (e.g., Data Storage Associate will auto-renew anytime a Professional-level cert is renewed). All certifications other than the Data Storage Associate must be renewed separately.
- At this time, the CPE program only applies to FlashArray-based exams. Non-FA exams may be renewed by retaking the respective test every three years.

You should be able to get the CPE submission form from your account team. Once complete, email your recertification log to peak-education@purestorage.com for formal processing.

-Charlie

Pure Fusion File Presets & Workloads on FB 4.6.7 and FA 6.10.4: Less Click‑Ops, More Policy
If you’ve ever built the “standard” NFS/SMB layout for an app for the fifth time in a week and thought “this should be a function, not a job”, this release is for you.

With FlashBlade version 4.6.7 and FlashArray version 6.10.4, Pure Fusion finally gives file the same treatment block has had for a while: presets and workloads for file services across FlashBlade and FlashArray, a much sharper Presets & Workloads UI, plus smarter placement and resource naming controls tuned for real environments, not demos. This post is written for people who already know what NFS export policies and snapshot rules are and are mostly annoyed they still have to configure them by hand.

Problem Statement: Your “Standard” File Config is a Lie

Current pattern in most environments: every app team needs “just a few file shares”. You (or your scripts) manually:
- Pick an array, hope it’s the right one.
- Create file systems and exports.
- Glue on snapshot/replication policies.
- Try to respect naming conventions and tagging.

Six months later:
- The same logical workload looks different on every array.
- Audit and compliance people open tickets.
- Nobody remembers what “fs01-old2-bak” was supposed to be.

Fusion File Presets & Workloads exist to eradicate that pattern:
- Presets = declarative templates describing how to provision a workload (block or file).
- Workloads = concrete instances of those presets deployed somewhere in a Fusion fleet (FA, FB, or both).

In nerd-speak, think: Helm chart for storage (preset) vs Helm release (workload).

Quick Mental Model: What Presets & Workloads Actually Are

A File Preset can include, for example:
- Number of file systems (FlashBlade or FlashArray File).
- Directory layout and export policies (for NFS/SMB).
- Snapshot policies and async replication (through protection groups, or pgroups).
- Per‑workload tags (helps in finding a needle in a haystack, and more).
- Quota and snapshot parameters.

A Workload is a Fusion object that:
- References the preset in its entirety.
- Tracks where the underlying Purity objects live.
- Surfaces health, capacity, and placement at the fleet level.

In code‑brain terms:

  preset: app-file-gold
  parameters:
    env: prod
    fs_count: 4
    fs_size: 10TB
    qos_iops_max: 50000
  placement:
    strategy: recommended   # Pure1 or dark‑site heuristic depending on connectivity
    constraints:
      platform: flashblade

Fusion resolves that into resources and objects on one or more arrays: purefs objects, exports, pgroups, QoS, tags, and consistently named resources.

So, what’s new, you ask?

What’s New on FlashBlade in Purity//FB 4.6.7

1. Fusion File Presets & Workloads for FlashBlade

Purity//FB 4.6.7 is the release where FlashBlade joins the Fusion presets/workloads party for file. Key points:
- You can now define Fusion file presets that describe:
  - Number/size of file systems.
  - Export policies (NFS/SMB).
  - Snapshot/replication policies.
  - Tags and other metadata.
- You then create Fusion file workloads from those presets, deployed onto any compatible FlashBlade or FlashArray in the fleet, depending on your constraints and placement recommendations.

That means you stop hand‑crafting per‑array configs and start stamping out idempotent policies.

2. New Presets & Workloads GUI on FlashBlade

Purity//FB version 4.6.7 brings proper Fusion GUI surfaces to FB:
- Storage → Presets
  - Create/edit/delete Fusion presets (block + file).
  - Upload/download preset JSON directly from the GUI.
- Storage → Workloads
  - Instantiate workloads from presets.
  - See placement, status, and underlying resources across the fleet.

Why this is a real improvement, not just new tabs:
- Single mental model across FA and FB: same abstractions (preset → workload → Purity objects), same UX for block and file.
- Guard‑railed customization: the GUI only exposes parameters marked as configurable in the preset (with limits), so you can safely delegate provisioning to less storage‑obsessed humans without getting random snapshot policies.

3.
JSON Preset Upload/Download (CLI + GUI)

This release also adds full round‑trip JSON support for presets, including in the GUI. On the CLI side:

  # Export an existing preset definition as JSON
  purepreset workload download app-file-gold > app-file-gold.json

  # Edit the JSON, save it to a file share, version control it, commit to git, run it through CI, etc.

  # Import the preset into another fleet or array
  purepreset workload upload --context fleet-prod app-file-gold < app-file-gold.json

Effects:
- Presets become versionable artifacts (Git, code review, promotion).
- You can maintain a central preset catalog and promote from dev → QA → prod like any other infra‑as‑code.
- Sharing configs stops being “here’s a screenshot of my settings.”

4. Fusion Dark Site File Workload Placement + Get Recommendations

Many folks run fleets without outbound connectivity, for various reasons. Until now, that meant “no fancy AI placement recommendations” for those sites. Fusion Dark Site File Workload Placement changes that. When Pure1 isn’t reachable, Fusion can still compute placement recommendations for file workloads across the fleet using local telemetry:
- Capacity utilization.
- Performance headroom.
- QoS ceilings/commitments (where applicable).

In the GUI, when you’re provisioning a file workload from a preset, you can hit “Get Recommendations”: Fusion evaluates candidate arrays within the fleet and returns a ranked list of suitable targets, even in an air‑gapped environment. So, in dark sites you still get:
- Data‑driven “put it here, not there” hints.
- Consistency with what you’re used to on the block side when Pure1 is available, but without the cloud dependency.

What’s New on FlashArray in Purity//FA 6.10.4

1. Fusion File Presets & Workloads for FlashArray File

Version 6.10.4 extends Fusion presets and workloads to FlashArray File Services. You can now:
- Define file presets on FA that capture:
  - File system count/size.
  - NFS/SMB export behavior.
  - QoS caps at workload/volume group level.
  - Snapshot/async replication policies via pgroups.
  - Tags and metadata.
- Provision file workloads on FlashArray using those presets:
  - From any Fusion‑enabled FA in the fleet.
  - With the same UX and API that you use for block workloads.

This effectively normalizes block and file in Fusion: a fleet‑level view, the same provisioning primitives (preset → workload), and the same policy and naming controls.

2. Fusion Pure1‑WLP Replication Placement (Block Workloads)

Also introduced is Fusion Pure1 Workload Replication Placement for block workloads. When you define replication in a block preset:
- Fusion can ask Pure1 Workload Planner for a placement plan: primary/replica arrays are chosen using capacity and performance projections, and it avoids packing everything onto that one “lucky” array.
- Workload provisioning then uses this plan automatically. You can override, but the default is data‑backed rather than “whatever’s top of the list.”

It’s the same idea as dark‑site file placement, just with more telemetry and projection thanks to Pure1.

Resource Naming Controls: Have It Your Way

If you care about naming standards, compliance, and audit (or just hate chaos and stress), this one matters. Fusion Presets Resource Naming Controls let you define deterministic naming patterns for all the objects a preset creates.

Allowed variables might include:
- workload_name
- tenant / app / env
- platform (flasharray-x, flashblade-s, etc.)
- datacenter site code
- sequenced IDs

You can also define patterns like:

  fs_name_pattern: "{tenant}-{env}-{workload_name}-fs{seq}"
  export_name_pattern: "{tenant}_{env}_{app}_exp{seq}"
  pgroup_name_pattern: "pg-{app}-{env}-{region}"

Result: every file system, export, pgroup, and volume created by that preset follows the pattern and satisfies internal CS/IT naming policies for compliance and audits. You can still parameterize inputs (e.g., tenant=finops, env=prod), but the structure is enforced.
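To make the pattern idea concrete, here is a minimal sketch (in Python, not a Pure API) of how a deterministic pattern like the ones above could be expanded; the expand() helper and all parameter values are hypothetical:

```python
# Minimal sketch of deterministic name expansion for a Fusion-style naming
# pattern. expand() is a hypothetical helper, not part of any Pure CLI/API;
# the pattern string mirrors the fs_name_pattern example above.
def expand(pattern: str, seq: int, **params) -> str:
    """Fill a naming pattern with parameters and a sequence number."""
    return pattern.format(seq=seq, **params)

fs_pattern = "{tenant}-{env}-{workload_name}-fs{seq}"

# Stamp out names for a 3-filesystem workload with tenant=finops, env=prod:
names = [
    expand(fs_pattern, seq=i, tenant="finops", env="prod",
           workload_name="payments")
    for i in range(1, 4)
]
print(names)
# ['finops-prod-payments-fs1', 'finops-prod-payments-fs2', 'finops-prod-payments-fs3']
```

Because the structure lives in the preset and only the parameter values vary, every object the preset creates sorts and greps predictably.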
No more hunting down “test2-final-old” in front of auditors and pretending that was intentional. Not speaking from experience, though :-)

The Updated Presets & Workloads GUI: Simple Is Better

Across Purity//FB 4.6.7 and Purity//FA 6.10.4, Fusion’s UI for presets and workloads is now a graphical wizard-style interface that is easier to follow, with more help along the way.

Single Pane, Shared Semantics

- Storage → Presets
  - Block + file presets (FA + FB) in one place.
  - JSON import/export.
- Storage → Workloads
  - All workloads, all arrays.
  - Filter by type, platform, tag, or preset.

Benefits for technical users:
- Quick answers to: “What’s our standard for <workload X>?” and “Where did we deploy it, and how many variants exist?”
- Easy diff between “what the preset says” and “what’s actually deployed.”

Guard‑Rails Through Parameterization

Preset authors (yes, we’re looking at you) decide:
- Which fields are fixed (prescriptive) vs configurable.
- The bounds on configurable fields (e.g., fs_size between 1–50 TB).

In the GUI, that becomes:
- A minimal set of fields for provisioners to fill in.
- Validation baked into the wizard.
- Workloads that align with standards without needing a 10‑page runbook.

Integrated Placement and Naming

When you create a workload via the new GUI, you get:
- “Get Recommendations” for placement: Pure1‑backed in connected sites (block), dark‑site logic for file workloads on FB when offline.
- Naming patterns from the resource naming controls baked in, not bolted on afterward.

So you’re not manually choosing which array is “least bad” today, or how to hack the name so it still passes your log‑parsing scripts.

CLI / API: What This Looks Like in Practice

If you prefer the CLI over the GUI, Fusion doesn’t punish you.
Example: Defining and Using a File Preset

Author a preset JSON (simplified example):

  {
    "name": "app-file-gold",
    "type": "file",
    "parameters": {
      "fs_count": { "min": 1, "max": 16, "default": 4 },
      "fs_size_tib": { "min": 1, "max": 50, "default": 10 },
      "tenant": { "required": true },
      "env": { "allowed": ["dev", "test", "prod"], "default": "dev" }
    },
    "naming": {
      "filesystem_pattern": "{tenant}-{env}-{workload_name}-fs{seq}"
    },
    "protection": {
      "snapshot_policy": "hourly-24h-daily-30d",
      "replication_targets": ["dr-fb-01"]
    }
  }

Upload the preset into a fleet:

  purepreset workload upload --context fleet-core app-file-gold < app-file-gold.json

Create a workload and let Fusion pick the array:

  pureworkload create \
    --context fleet-core \
    --preset app-file-gold \
    --name payments-file-prod \
    --parameter tenant=payments \
    --parameter env=prod \
    --parameter fs_count=8 \
    --parameter fs_size_tib=20

Inspect placement and underlying resources:

  pureworkload list --context fleet-core --name payments-file-prod --verbose

Behind the scenes:
- Fusion picks suitable arrays using Pure1 Workload Placement (for connected sites) or dark‑site logic.
- purefs/exports/pgroups are created with names derived from the preset’s naming rules.

Example: Binding Existing Commands to Workloads

The new version also extends several CLI commands with workload awareness:

  purefs list --workload payments-file-prod
  purefs setattr --workload payments-file-prod ...
  purefs create --workload payments-file-prod --workload-configuration app-file-gold

This is handy when you need to:
- Troubleshoot or resize all file systems in a given workload.
- Script around logical workloads instead of individual file systems.

Why This Matters for You (Not Just for Slides)

Net impact of FB 4.6.7 + FA 6.10.4 from an admin’s perspective:
- File is now truly first‑class in Fusion, across both FlashArray and FlashBlade.
- You can encode “how we do storage here” as code:
  - Presets (JSON + GUI).
  - Parameterization and naming rules.
  - Placement and protection choices.
- Dark sites get sane placement via “Get Recommendations” for file workloads, instead of best‑guess manual picks.
- Resource naming is finally policy‑driven, not left to whoever is provisioning at 2 AM.
- GUI, CLI, and API are aligned around the same abstractions, so you can prototype in the UI, commit JSON to Git, and automate via CLI/API without re‑learning concepts.

Next Steps

If you want to kick the tires:
- Upgrade FlashBlade to Purity//FB 4.6.7 and FlashArray to Purity//FA 6.10.4.
- Pick one or two high‑value patterns (e.g., “DB file services”, “analytics scratch”, “home directories”).
- Implement them as Fusion presets with parameters, placement hints, and naming rules.
- Wire them into your existing tooling: use the GUI for ad‑hoc work, and wrap purepreset / pureworkload in your pipelines for everything else.

You already know how to design good storage. These releases just make it a lot harder for your environment to drift away from that design the moment humans touch it.

Pure Storage Cloud Dedicated on Azure: An intro to Performance
Introduction

With Pure Storage Cloud Dedicated on Microsoft Azure, performance is largely governed by three factors: front-end controller networking, the controllers' back‑end connection to managed disks, and the Purity data path. This post explains how these factors and the underlying Azure building blocks influence overall performance.

Disclaimer: This post requires a basic understanding of PSC Dedicated architecture. Real-life performance varies based on configuration and workload; examples here are illustrative.

Architecture: the building blocks that shape performance

Cloud performance often comes from how compute, storage, and networking are assembled. PSC Dedicated deploys two Azure VMs as storage controllers running the Purity operating environment and uses Azure Managed Disks as persistent media. Initiator VMs connect over the Azure Virtual Network using in‑guest iSCSI or NVMe/TCP. Features like inline data reduction, write coalescing through NVRAM, and an I/O rate limiter help keep the array stable and predictable under saturation.

Front-end performance: networking caps

Azure limits the outbound (egress) bandwidth of virtual machines. Each Azure VM has a network egress cap and cannot send out more data than the limit allows. As PSC Dedicated controllers run on Azure VMs, this translates into the following:
- Network traffic going INTO the PSC Dedicated array (writes) is not throttled by Azure outbound bandwidth limits.
- Network traffic going OUT of the PSC Dedicated array (reads) is limited.

User-requested reads (e.g., from an application) as well as any replication traffic leaving the controller share the same egress budget. Because of that, workloads with replication should be planned carefully to avoid competing with client reads.
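As a back-of-the-envelope illustration of that shared egress budget, the sketch below (hypothetical numbers, not a Pure sizing tool) estimates how much client-read throughput remains once replication traffic is accounted for:

```python
# Back-of-the-envelope model of the shared egress budget on a PSC Dedicated
# controller: client reads and outbound replication both consume the same
# Azure VM egress cap. All figures are illustrative, not sizing guidance.
def read_budget_mb_per_s(vm_egress_cap_mbit, replication_mb_per_s):
    """Return the client-read budget left (in MB/s) after replication."""
    replication_mbit = replication_mb_per_s * 8          # bytes -> bits
    remaining_mbit = max(vm_egress_cap_mbit - replication_mbit, 0)
    return remaining_mbit / 8                            # bits -> bytes

# Example: a 12,500 Mbps egress cap with 200 MB/s of replication leaving
# the controller still allows roughly 1,362.5 MB/s of client reads.
print(read_budget_mb_per_s(12_500, 200))  # 1362.5
```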
Back-end performance: VM caps, NVMe, and the write path

The controller VM caps

Similarly to the frontend network read throughput, Azure enforces per‑VM limits on total backend IOPS and combined read/write throughput. The overall IOPS/throughput of a VM is therefore limited by the lower of:
- the controller VM's IOPS/throughput cap, and
- the combined IOPS/throughput of all attached managed disks.

To avoid unnecessary spend due to overprovisioning, the managed disks of PSC Dedicated arrays are configured to saturate the controller backend caps just right.

NVMe backend raises the ceiling

Recent PSC Dedicated releases adopt an NVMe backend on supported Azure Premium SSD v2 based SKUs, increasing the controller VM’s backend IOPS and bandwidth ceilings. The disk layout and economics remain the same while the array gains backend headroom.

The write path

Purity secures initiator writes to NVRAM (for fast acknowledgment) and later destages them to the data managed disks. For each logical write, the backend cap is therefore tapped multiple times:
- a write to NVRAM,
- a read from NVRAM during flush, and
- a write to the data managed disks.

Under mixed read/write non-reducible workloads this can exhaust the combined read/write backend bandwidth and IOPS of the controller VM. The raised caps of the NVMe backend help here.

Workload characteristics: iSCSI sessions and data reducibility

Block size and session count

Increasing the iSCSI session count between initiator VMs and the array does not guarantee better performance; with large blocks, too many sessions can increase latency without improving throughput, especially when multiple initiators converge on the same controller. Establish at least one session per controller for resiliency, then tune based on measured throughput and latency.

Data reduction helps extend backend headroom

When data is reducible, PSC Dedicated writes fewer physical bytes to backend managed disks.
That directly reduces backend write MBps for the same logical workload, delaying the point where Azure’s VM backend caps are reached. The effect is most pronounced for write‑heavy and mixed workloads. Conversely, non‑reducible data translates almost 1:1 to backend traffic, hitting limits sooner and raising latency at high load.

Conclusion

Predictable performance in the cloud is about aligning architecture and operations with the platform’s limits. For PSC Dedicated on Azure, that means selecting the right controller and initiator VM SKUs, co‑locating resources to minimise network distance, enabling accelerated networking, and tuning workloads (block size, sessions, protocol) to the caps that actually matter. Inline data reduction and the NVMe backend extend headroom meaningfully (particularly for mixed workloads), while Purity’s design keeps the experience consistent. Hopefully, this post was able to shed light on at least some of the performance factors of PSC Dedicated on Azure.

Veeam v13 Integration and Plugin
Hi Everyone,

We're new Pure customers this year and have two FlashArray//C models, one for virtual infrastructure and the other to be used solely as a storage repository to back up those virtual machines using Veeam Backup and Replication. Our plan is to move away from the current Windows-based Veeam v12 in favor of Veeam v13 hardened Linux appliances. We're in the design phase now but have Veeam v13 working great in a separate environment with VMware and HPE Nimble.

Our question is around Pure Storage and Veeam v13 integration and plugin support. Veeam's product team mentions there are native integrations in v12, but that storage vendors should be "adopting USAPI" going forward. Is this something that Pure is working on, or maybe has already completed with Veeam Backup and Replication v13?

Announcing the General Availability of Purity//FB 4.6.6
We are happy to announce the general availability of 4.6.6, the seventh release in the 4.6 Feature Release line. See the release notes for all the details about these and the many other features, bug fixes, and security updates included in the 4.6 release line.

UPGRADE RECOMMENDATIONS AND EOL SCHEDULE

Customers who are running any previous 4.6 version should upgrade to 4.6.6. Customers who are looking for long-term maintenance of a consistent feature set are recommended to upgrade to the 4.5 LLR. Check out our AI Copilot intelligent assistant for deeper insights into release content and recommendations.

Development on the 4.6 release line will continue through February 2026. After this time the full 4.6 feature set will roll into the 4.7 Long Life Release line for long-term maintenance, and the 4.6 line will be declared End-of-Life (EOL).

HARDWARE SUPPORT

This release is supported on the following FlashBlade platforms: FB//S100, FB//S200 (R1, R2), FB//S500 (R1, R2), FB//ZMT, FB//E, FB//EXA

LINKS AND REFERENCES

- Purity//FB 4.6 Release Notes
- Purity//FB Release and End-of-Life Schedule
- Purity//FB Release Guidelines
- FlashBlade Hardware and End-of-Support
- FlashBlade Capacity and Feature Limits
- Pure1 Manage AI Copilot

PSC Dedicated on Azure - 6.10.x so far
In this post, I thought to take a quick look back at 6.10.x for PSC Dedicated on Azure, as we've seen quite a few interesting features added.

Let's start with the NVMe-based backend. Prior to the 6.10.0 release, Pure Storage Cloud Dedicated for Azure used a SCSI-based backend to connect Managed Disks (both SSDs and NVRAM) to its controller VMs. Starting with 6.10.0, PSC Dedicated SKUs with Premium SSD v2 disks leverage NVMe-based access for Managed Disks. NVMe is a high-speed storage protocol that enables direct communication with storage devices over the PCIe bus. Compared to SCSI, NVMe brings improvements potentially resulting in lower latency, higher IOPS, and reduced CPU utilization.

To begin using the NVMe backend, upgrade the array to Purity version 6.10.0. As part of this upgrade, the existing SCSI-based controller VM is automatically replaced with an equivalent NVMe-enabled VM. This transition is fully automated and transparent; no manual steps or redeployment are required, and there are no changes to the user interface or management workflows. The cost of the array also remains unchanged. NVMe becomes the only supported backend protocol from 6.10.0 onward; there is no option to revert back to SCSI.

Let's also look at the backend performance characteristics to better understand the change here. The backend performance, meaning the IOPS and throughput between the controller VM and the attached managed disks, is primarily determined by the VM size. This is because Azure imposes VM-level caps on both backend IOPS and throughput. These limits apply regardless of the number of attached disks.
The maximum achievable backend IOPS for the primary controller is based on the lower of:
- The IOPS cap defined by Azure for the VM SKU
- The combined IOPS of all attached SSDs (Azure Managed Disks)

Individual PSC Dedicated SSD Managed Disk performance was selected and configured so as to saturate the controller VM backend limits, i.e.:

  Maximum VM backend IOPS / # of SSD disks = each SSD's IOPS

Azure also enforces a VM-level backend bandwidth limit, which is a combined cap across both read and write operations. This means that even with multiple high-throughput disks, the total achievable bandwidth cannot exceed what the VM SKU allows. With the switch to the NVMe protocol, Azure raises these backend IOPS and bandwidth caps for compatible VMs, including the ones used as PSC Dedicated controllers (for MP2R2 SKUs):

  VM Size   | Backend type | Max Backend IOPS | Max Backend R/W Throughput (MBps) | Frontend Network Bandwidth (Mbps)
  V10MP2R2  | NVMe         | 88,400           | 2,300                             | 12,500
  V10MP2R2  | SCSI         | 64,800           | 1,370                             | 12,500
  V20MP2R2  | NVMe         | 174,200          | 4,800                             | 16,000
  V20MP2R2  | SCSI         | 129,700          | 2,740                             | 16,000

  Source: https://learn.microsoft.com/en-us/azure/virtual-machines/ebdsv5-ebsv5-series

From the table above it is clear both IOPS and bandwidth see significant improvement, positively influencing certain workloads. The increase in backend IOPS is expected to benefit mixed read/write workloads with small I/O sizes. The increase in backend bandwidth can be beneficial for non-reducible mixed read/write workloads with high array utilisation. However, keep in mind the managed disk configuration (both SSD and NVRAM) remains the same; this ensures the overall cost remains unchanged with the switch. Also, while the NVMe backend may contribute to increased storage performance capabilities, other limits (such as frontend network bandwidth and IOPS) still apply.
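The "lower of the two" rule above can be sketched in a few lines. This is an illustrative model only; the VM cap comes from the table, while the disk counts and per-disk IOPS are hypothetical:

```python
# Illustrative model of the backend IOPS ceiling: the effective limit is the
# lower of the Azure VM SKU cap and the combined IOPS of the attached managed
# disks. Disk counts/values here are hypothetical examples, not a real layout.
def effective_backend_iops(vm_cap_iops, per_disk_iops, disk_count):
    return min(vm_cap_iops, per_disk_iops * disk_count)

# V10MP2R2 NVMe cap is 88,400 IOPS (from the table above). If 10 data disks
# were each provisioned at 8,840 IOPS, the disks would saturate the VM cap
# exactly, with nothing overprovisioned:
print(effective_backend_iops(88_400, 8_840, 10))  # 88400

# Adding an 11th identical disk would not raise the ceiling; the VM cap wins:
print(effective_backend_iops(88_400, 8_840, 11))  # 88400
```

This is why PSC Dedicated sizes each disk at roughly (VM cap / disk count): more disk IOPS than the VM cap would simply be wasted spend.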
To further extend the performance potential of PSC Dedicated beyond the backend limits, 6.10.2 introduced a brand new SKU, the Azure V50MP2R2. For the new SKU, Azure D128ds v6 virtual machines are used as controller VMs, along with Premium SSD v2 managed disks. VMs in this class provide up to 6.75 GBps of network egress for read/replication traffic and significantly higher back‑end IOPS and bandwidth for managed disk connectivity. The NVMe back‑end is used by default on the SKU, and similarly to the current V10 and V20 models, it supports both customer-driven non-disruptive Purity upgrades and Controller Scaling (e.g., it is possible to non-disruptively scale to the V50MP2R2 from lower MP2R2 SKUs).

At launch, the V50 is available in the following regions:
- Central US
- East US
- East US 2
- South Central US
- Canada Central
- Canada East

Last but not least, 6.10.3 aims to address Azure maintenance or brief infrastructure events, during which the array can experience short-lived increases in I/O latency to backend managed disks. These spikes may be transient yet noticeable by hosts and applications. To harden array behavior against these conditions, PSC Dedicated 6.10.3 on Azure comes with a newly configured set of array-level tunables. These adjust how controllers interpret delayed I/O, coordinate takeovers, and manage internal leases so the array prefers riding out transient backend conditions rather than initiating a controller failover.

Turn Your Data into a Competitive Advantage
AI adoption is accelerating across every industry, but the real gap isn’t in ambition; it’s in operationalizing AI reliably and at scale. If your organization is looking to move from early pilots to production-grade AI, FlashStack for AI shows how you can make that shift with confidence. FlashStack AI Factories, co-engineered by Pure Storage, Cisco, and NVIDIA, deliver AI Factory frameworks and give clients a predictable, scalable path to train, tune, and deploy AI workloads without introducing operational risk.

FlashStack delivers meaningful advantages that help teams operationalize AI more effectively:
- Consistent, production-grade AI performance powered by NVIDIA’s full-stack architecture, ensuring compute, networking, and storage operate as a synchronized system for dependable training and inference.
- Faster deployment and easier scaling, enabled by unified management through Pure1 and Cisco Intersight, reducing operational overhead and accelerating time to value.
- Stronger cyber resilience and reduced risk, with SafeMode immutable snapshots and deep integration with leading SIEM/SOAR/XDR ecosystems to safeguard high-value AI data.
- Meaningful business outcomes, from shortening AI innovation cycles to powering new copilots, intelligent assistants, and data-driven services.

Together, these capabilities help enterprises turn raw data and processing power into AI-driven results: securely, sustainably, and without operational complexity.

Read more: FlashStack AI Factories

AUE - Key Insights
Good morning/afternoon/evening everyone! This is Rich Barlow, Principal Technologist @ Pure. It was super fun to proctor this AUE session with Antonia and Jon. Hopefully everyone got in all of the questions that they wanted to ask; we had so many that we had to answer many of them out of band. So thank you for your enthusiasm and support. Looking forward to the next one! Here's a rundown of the most interesting and impactful questions we were asked. If you have any more, please feel free to reach out.

FlashArray File: Your Questions, Our Answers (Ask Us Everything Recap)

Our latest "Ask Us Everything" webinar with Pure Storage experts Rich Barlow, Antonia Abu Matar, and Jonathan Carnes was another great session. You came ready with sharp questions, making it clear you're all eager to leverage the simplicity of your FlashArray to ditch the complexity of legacy file storage. Here are some of the best shared insights from the session:

Unify Everything: Performance By Design

You asked about the foundation, and it's a game-changer.
- No Middleman, Low Latency: Jon Carnes confirmed that FlashArray File isn't a bolt-on solution. Since the file service lands directly on the drives, just like block data, there's effectively "no middle layer." The takeaway? You get the same awesome, low-latency performance for file that you rely on for block workloads.
- Kill the Data Silos: Antonia Abu Matar emphasized the vision behind FlashArray File: combining block and file on a single, shared storage pool. This isn't just tidy; it means you benefit from global data reduction and unified data services across everything.

Scale, Simplicity, and Your Weekends Back

The community was focused on escaping the complexities of traditional NAS systems.
- Always-On File Shares: Worried about redundancy? Jon confirmed that FlashArray File implements an "always-on" version of Continuously Available (CA) shares for SMB3 (in Purity 6.9/6.10). It’s on by default for transparent failover and simple client access.
- Multi-Server Scale-Up: For customers migrating from legacy vendors and needing lots of "multi-servers," we're on it. Jon let us know that engineering is actively working to significantly raise the current limits (aiming for around 100 in the next Purity release), stressing that Pure increases these limits non-disruptively to ensure stability.
- NDU, Always and Forever: The best part? No more weekend maintenance marathons. The FlashArray philosophy is a "data in place, non-disruptive upgrade." That applies to both block and file, eliminating the painful data migrations you’re used to.
- Visibility at Your Fingertips: You can grab real-time IOPS and throughput from the GUI or via APIs. For auditing, file access events are pushed via syslog in native JSON format, which makes integrating with tools like Splunk super easy.

Conquering Distance and Bandwidth

A tough question came in about supporting 800 ESRI users across remote Canadian sites (Yellowknife, Iqaluit, etc.) with real-time file access despite low bandwidth.
- Smart Access over Replication: Jon suggested looking at Rapid Replicas (available on FlashBlade File). This isn't full replication; it’s a smart solution that synchronizes metadata across sites and only pulls the full data on demand (pull-on-access). This is key for remote locations because it dramatically cuts down on the constant bandwidth consumption of typical data replication.

Ready to Simplify?

FlashArray File Services lets you consolidate your infrastructure and get back to solving bigger problems, not babysitting your storage. Start leveraging the power of a truly unified and non-disruptive platform today! Join the conversation and share your own experiences in the Pure Community.

Ask Us Everything ... Evergreen//One edition!
💬 Have more questions for our experts around Evergreen//One after today's live "Ask Us Everything"? Feel free to drop them below and our experts will answer! dpoorman, abarnes, and Tago- - Tag! You're it!

Or, check out some of these self-serve resources:
- EG//1 website
- Introduction to Evergreen//One (video)
- Evergreen//One for AI: Modern Storage Economics for the AI Era (blog)
- The Economics of Pure Storage Evergreen Subscriptions (blog)
- DATIC Protects Citizen Data from Attack (customer case study)