Pure FlashArray CLI Quick References (daily feeds)
Q: How to display the NTP servers configured on a Pure FlashArray?
   Command: purearray list --ntpserver
   Explanation: List the NTP servers configured on the array.

Q: How to enable phonehome on a Pure FlashArray?
   Command: purearray enable phonehome
   Explanation: Enable the phonehome (dial home) feature of the array.

Q: How to list all the FC ports in a Pure FlashArray?
   Command: purehw list --type fc
   Explanation: List all the FC ports in the array with status and speed information.

Q: How to configure the DNS attributes of a Pure FlashArray?
   Command: puredns setattr --domain test.com --nameservers 192.168.0.10,192.168.2.11
   Explanation: Add the IPv4 addresses of two DNS servers for the array to use to resolve hostnames to IP addresses, and the domain suffix test.com for DNS searches.

Q: How to list all the connected volumes for a hostgroup on a Pure FlashArray?
   Command: purehgroup list --connect MY-HOSTS
   Explanation: List all the connected volumes for hostgroup MY-HOSTS.

Q: How to add hosts to an existing hostgroup on a Pure FlashArray?
   Command: purehgroup setattr MY-HOSTS --addhostlist MY-HOST-002,MY-HOST-003
   Explanation: Add MY-HOST-002 and MY-HOST-003 to the existing hostgroup MY-HOSTS.

Q: How to list all the controllers in a Pure FlashArray?
   Command: purehw list --type ct
   Explanation: List all the controllers in the array.

Q: How to eradicate multiple volumes on a Pure FlashArray?
   Command: purevol eradicate MY_VOL_001 MY_VOL_002
   Explanation: Eradicate volumes MY_VOL_001 and MY_VOL_002, which were destroyed earlier. This fully destroys the volumes; they cannot be recovered afterwards.

Q: How to add a new HBA WWN to a host object on a Pure FlashArray?
   Command: purehost setattr MY-SERVER-001 --addwwnlist 1000000000000003
   Explanation: Add the new HBA WWN 1000000000000003 to host MY-SERVER-001. 1000000000000003 must not be part of any other host.

Q: How to display all the host initiators known to the FlashArray?
   Command: pureport list --initiator
   Explanation: Display all the host initiator WWNs, IQNs, and NQNs known to the FlashArray. This also shows the target ports on which the initiators are eligible to communicate.
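The constraint noted in the `--addwwnlist` entry above (a WWN must not already belong to another host) can be sketched as a small helper. This is an illustrative Python model with invented names, not Purity or Pure SDK code:

```python
# Toy model of the WWN-uniqueness rule behind `purehost setattr --addwwnlist`.
# The data structure and function are hypothetical, for illustration only.

def add_wwn(hosts: dict[str, set[str]], host: str, wwn: str) -> None:
    """Add a WWN to a host, refusing if any other host already owns it."""
    for name, wwns in hosts.items():
        if name != host and wwn in wwns:
            raise ValueError(f"WWN {wwn} already belongs to host {name}")
    hosts.setdefault(host, set()).add(wwn)

hosts = {"MY-SERVER-001": {"1000000000000001"}}
add_wwn(hosts, "MY-SERVER-001", "1000000000000003")  # succeeds: WWN is unused
```

Attempting to add the same WWN to a second host raises an error, mirroring the CLI's refusal to share an initiator between host objects.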
Q: How to list all the flagged alerts on a Pure FlashArray?
   Command: purealert list --flagged
   Explanation: List all the alerts that are flagged. By default all alerts are flagged; alerts can be unflagged once they are resolved.

Q: How to display the dial home status of a Pure FlashArray?
   Command: purearray list --phonehome
   Explanation: Display the dial home configuration status of the array.

Q: How to unflag an alert on a Pure FlashArray?
   Command: purealert unflag 121212
   Explanation: Unflag the alert with ID 121212. It will no longer appear in the flagged alert list.

Q: How to rename a Pure FlashArray?
   Command: purearray rename MYARRAY001
   Explanation: Set the name of the array to MYARRAY001.

Q: How to admit newly connected drive modules on a Pure FlashArray?
   Command: puredrive admit
   Explanation: Admit all drive modules that have been added or connected but not yet admitted to the array. Once successfully admitted, the status of the drive modules changes from unadmitted to healthy.

Q: How to display the replication throttle limit of a Pure FlashArray?
   Command: purearray list --connect --throttle
   Explanation: Display the replication throttle limit.

Q: How to eradicate a volume on a Pure FlashArray?
   Command: purevol eradicate MY_VOL_001
   Explanation: Eradicate volume MY_VOL_001, which was destroyed earlier. This fully destroys the volume; it cannot be recovered afterwards.

Q: How to unstretch a pod?
   Command: purepod remove --array PFAX70-REMOTE MYPOD001
   Explanation: Remove the remote array PFAX70-REMOTE from the pod MYPOD001. This unstretches the pod, and volume data inside the pod is no longer synchronously replicated between the two arrays. Volumes within the pod will be visible only on the local array.

Q: How to list all the open alerts on a Pure FlashArray?
   Command: purealert list --filter "state='open'"
   Explanation: List all the alerts in the open state.

Q: How to list all the hosts with connected volumes?
   Command: purehost list --connect
   Explanation: List all the hosts on a FlashArray which have connected volumes.

Q: How to create a volume and include it in a pod?
   Command: purevol create --size 1G MYPOD001::MY_VOL_001
   Explanation: Create a volume of 1 GiB and include it in MYPOD001.
   If MYPOD001 is stretched, the same volume will be created and visible on the remote arrays too. The volume name and WWN will appear the same on each array.

Q: How to list all the volumes sorted by size and consumption on a Pure FlashArray?
   Command: purevol list --space --sort size,total
   Explanation: List all the volumes sorted by the size of each volume and then by total space consumed. Both fields are sorted in ascending order.

Q: How to pause a replication link on a Pure FlashArray?
   Command: purepod replica-link pause PRDPOD001 --remote ARRAY002 --remote-pod DRPOD001
   Explanation: Pause ActiveDR replication by pausing the replica link between the local and remote array. To continue the replication, resume the replica link.

Q: How to recover a volume on a Pure FlashArray?
   Command: purevol recover MY_VOL_001
   Explanation: Recover volume MY_VOL_001, which was destroyed earlier.

Q: How to change the role of a user on a FlashArray?
   Command: pureadmin setattr testuser --role array_admin
   Explanation: Change the role of the user testuser to array_admin. Possible roles are readonly, ops_admin, storage_admin, and array_admin.

Q: How to move a volume out of a pod on a Pure FlashArray?
   Command: purevol move MYPOD001::vol001 ""
   Explanation: Move the volume vol001 out of the non-stretched pod MYPOD001. This throws an error if you try to move a volume out of a stretched pod.

Q: How to connect a volume to a hostgroup on a Pure FlashArray?
   Command: purevol connect MY_VOL_001 --hgroup MY-HOSTS
   Explanation: Connect volume MY_VOL_001 to hostgroup MY-HOSTS. This assigns a LUN ID to the volume; LUN IDs start from 1 and go up to 16383.

Q: How to list all the hosts on a FlashArray?
   Command: purehost list
   Explanation: List all the hosts on a FlashArray with their member WWNs, IQNs, or NQNs. This also shows the host group, if the host is part of one.

Q: How to create a copy of a volume on a Pure FlashArray?
   Command: purevol copy MY_VOL_001 MY_VOL_002
   Explanation: Create a copy of MY_VOL_001 and name it MY_VOL_002. If MY_VOL_002 already exists, this throws an error.
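The `--sort size,total` behavior above (primary key first, then secondary key, both ascending) maps directly onto a tuple sort key. A minimal Python sketch with made-up volume records; the field names are assumptions, not the CLI's actual column names:

```python
# Illustrative re-implementation of the ordering produced by
# `purevol list --space --sort size,total`: sort by provisioned size,
# then by total space consumed, both ascending.

volumes = [
    {"name": "vol-a", "size": 10, "total": 4},
    {"name": "vol-b", "size": 1,  "total": 1},
    {"name": "vol-c", "size": 10, "total": 2},
]

# Tuples compare element by element, giving us size-then-total ordering.
by_size_then_total = sorted(volumes, key=lambda v: (v["size"], v["total"]))
print([v["name"] for v in by_size_then_total])  # ['vol-b', 'vol-c', 'vol-a']
```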
Q: How to rename a volume on a Pure FlashArray?
   Command: purevol rename MY_VOL_001 MY_VOL_002
   Explanation: Rename volume MY_VOL_001 to MY_VOL_002.

Q: How to display historical capacity and usage statistics for a Pure FlashArray?
   Command: purearray list --space --historical 30d
   Explanation: Display the capacity and usage statistics of the array for the last 30 days.

Q: How to connect a host to a volume with a specific LUN ID on a Pure FlashArray?
   Command: purehost connect MY-SERVER-001 --vol MY_VOL_001 --lun 10
   Explanation: Connect volume MY_VOL_001 to host MY-SERVER-001 and assign LUN ID 10. This provides R/W access to the volume.

Q: How to list all the snapshots on a Pure FlashArray?
   Command: purevol list --snap
   Explanation: List all the snapshots.

Q: How to list all the users with API tokens configured on a FlashArray?
   Command: pureadmin list --api-token
   Explanation: List all the users with API tokens configured.

Q: How to reduce the size of a volume on a Pure FlashArray?
   Command: purevol truncate --size 1G MY_VOL_001
   Explanation: Reduce the size of MY_VOL_001 to 1 GB (from a current size of 8 GB, for example).

Q: How to list all flash drives and NVRAM modules in a Pure FlashArray with total capacity?
   Command: puredrive list --total
   Explanation: List all the flash drive modules in the array with the total capacity figure.

Q: How to disconnect a volume from a host on a Pure FlashArray?
   Command: purevol disconnect MY_VOL_001 --host MY-SERVER-001
   Explanation: Disconnect volume MY_VOL_001 from host MY-SERVER-001. This removes the visibility of the volume to the host.

Q: How to create a hostgroup with existing hosts on a Pure FlashArray?
   Command: purehgroup create MY-HOSTS --hostlist MY-HOST-001,MY-HOST-002
   Explanation: Create hostgroup MY-HOSTS and add the existing hosts MY-HOST-001 and MY-HOST-002 to it.

Q: How to stretch a pod?
   Command: purepod add --array PFAX70-REMOTE MYPOD001
   Explanation: Add the remote array PFAX70-REMOTE to the pod MYPOD001. This stretches the pod, and volume data inside the pod is synchronously replicated between the two arrays. The arrays in a stretched pod are considered peers; there is no concept of source and target.
   Volumes within the pod will be visible on each array with the same serial numbers.

Q: How to create multiple volumes on a Pure FlashArray?
   Command: purevol create --size 10G MY_VOLUME_001 MY_VOLUME_002
   Explanation: Create volumes MY_VOLUME_001 and MY_VOLUME_002, each 10 GB in size.

Q: How to remove hosts from a hostgroup on a Pure FlashArray?
   Command: purehgroup setattr MY-HOSTS --remhostlist MY-HOST-002,MY-HOST-003
   Explanation: Remove MY-HOST-002 and MY-HOST-003 from hostgroup MY-HOSTS.

Q: How to delete a host object on a Pure FlashArray?
   Command: purehost delete MY-SERVER-001
   Explanation: Delete host MY-SERVER-001.

Q: How to search for an HBA WWN and the FC port it is logged in to on a FlashArray?
   Command: pureport list --initiator --raw --filter "initiator.wwn='1000000000000001'"
   Explanation: Search for HBA WWN 1000000000000001 and show the FC port it is logged in to.

Q: How to list all the closed alerts on a Pure FlashArray?
   Command: purealert list --filter "state='closed'"
   Explanation: List all the closed alerts.

Q: How to disconnect a specific volume from a host on a Pure FlashArray?
   Command: purehost disconnect MY-SERVER-001 --vol MY_VOL_001
   Explanation: Disconnect volume MY_VOL_001 from host MY-SERVER-001. This removes the visibility of the volume to the host.

Pure FlashArray CLI Quick References (daily feeds)
Q: How to display the serial number of a specific hardware component of a Pure FlashArray?
   Command: purehw list CT0 --spec
   Explanation: Display the model, part number, and serial number of controller 0.

Q: How to check whether Remote Assist is active or inactive on a Pure FlashArray?
   Command: purearray remoteassist --status
   Explanation: Check whether Remote Assist is active or inactive.

Q: How to change the password for a user on a FlashArray?
   Command: pureadmin setattr testuser --password
   Explanation: Change the password for the user testuser.

Q: How to display the name, serial number, and firmware version of a Pure FlashArray?
   Command: purearray list
   Explanation: Display the array name, serial number, and firmware version.

Q: How to generate an API token for a user on a FlashArray?
   Command: pureadmin create testuser --api-token
   Explanation: Generate an API token for the user testuser.

Q: How to increase the size of a volume on a Pure FlashArray?
   Command: purevol setattr --size 2G MY_VOL_001
   Explanation: Increase the size of MY_VOL_001 to 2 GB (from a current size of 1 GB, for example).

Q: How to list all the Ethernet ports in a Pure FlashArray?
   Command: purehw list --type eth
   Explanation: List all the Ethernet ports in the array.

Q: How to set the NTP server for a Pure FlashArray?
   Command: purearray setattr --ntpserver time.google.com
   Explanation: Set the NTP server.

Q: How to list all the host initiators and connected volumes?
   Command: purehost list --all
   Explanation: List all the hosts on a FlashArray along with their member initiators connected to volumes through target ports.

Q: How to list all the destroyed volumes pending eradication on a Pure FlashArray?
   Command: purevol list --pending-only
   Explanation: List all the destroyed volumes pending eradication.

Q: How to list all the hardware components of a Pure FlashArray along with part and serial numbers?
   Command: purehw list --spec
   Explanation: List all the hardware components along with information such as model name, part number, and serial number.

Q: How to list the pods with mediator connectivity status on a Pure FlashArray?
   Command: purepod list --mediator
   Explanation: List all the pods along with the connectivity status from each array to the mediator.
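A recurring distinction in these digests is that `purevol setattr --size` grows a volume while `purevol truncate --size` shrinks it. A hypothetical helper (not part of any Pure tooling) that picks the right verb from the current and desired sizes:

```python
# Sketch of the grow-vs-shrink rule from the digest entries above.
# Hypothetical helper; the Purity CLI itself does not have this wrapper.

def resize_command(volume: str, current_gb: int, new_gb: int) -> str:
    """Return the CLI command to resize a volume to new_gb gigabytes."""
    if new_gb == current_gb:
        raise ValueError("new size equals current size; nothing to do")
    # setattr grows; truncate shrinks (and is potentially destructive).
    verb = "setattr" if new_gb > current_gb else "truncate"
    return f"purevol {verb} --size {new_gb}G {volume}"

print(resize_command("MY_VOL_001", 1, 2))  # purevol setattr --size 2G MY_VOL_001
print(resize_command("MY_VOL_001", 8, 1))  # purevol truncate --size 1G MY_VOL_001
```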
Q: How to create a user on a FlashArray?
   Command: pureadmin create testuser --role storage_admin
   Explanation: Create user testuser with the storage_admin role. Possible roles are readonly, ops_admin, storage_admin, and array_admin.

Q: How to destroy a volume on a Pure FlashArray?
   Command: purevol destroy MY_VOL_001
   Explanation: Destroy volume MY_VOL_001. The volume can be recovered within 24 hours; after that, the physical storage occupied by the volume is reclaimed.

Q: How to display audit log records on a Pure FlashArray?
   Command: pureaudit list
   Explanation: Display the list of audit records. Audit trail records are created whenever administrative actions are performed by a user (e.g., creating, destroying, or eradicating a volume).

Q: How to list all the controllers connected to a Pure FlashArray?
   Command: purearray list --controller
   Explanation: List all the controllers connected to the array. This also displays the model and status of each controller.

Q: How to list all the hardware components of a Pure FlashArray?
   Command: purehw list
   Explanation: List all the hardware components along with information such as status, temperature, and voltage.

Q: How to list all the HBA WWNs logged in to a FlashArray target FC port?
   Command: pureport list --initiator --raw --filter "name='CT0.FC0'"
   Explanation: Display all HBA WWNs logged in to the FC port CT0.FC0.

Q: How to expose the API token for the current user?
   Command: pureadmin list --api-token --expose
   Explanation: List all the users with API tokens configured and expose the API token for the currently logged-in user.

Q: How to set the personality for a host?
   Command: purehost setattr MY-SERVER-001 --personality esxi
   Explanation: Set the personality of host MY-SERVER-001 to esxi. Other values include aix, solaris, etc.
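The fixed role set mentioned in these entries (readonly, ops_admin, storage_admin, array_admin) lends itself to a simple pre-flight validation check before issuing a `pureadmin` command. An illustrative sketch, not Pure SDK code:

```python
# Validate a role name against the role set listed in the digest above.
# The helper itself is hypothetical, for illustration only.

ALLOWED_ROLES = {"readonly", "ops_admin", "storage_admin", "array_admin"}

def validate_role(role: str) -> str:
    """Return the role unchanged if it is valid, else raise ValueError."""
    if role not in ALLOWED_ROLES:
        raise ValueError(
            f"unknown role {role!r}; expected one of {sorted(ALLOWED_ROLES)}"
        )
    return role

validate_role("storage_admin")  # passes silently
```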
Q: How to display all the FC ports on a FlashArray?
   Command: pureport list --raw --filter "name='*FC*'"
   Explanation: Display all the Fibre Channel ports with their WWNs.

Q: How to display the host with a specified WWN?
   Command: purehost list --filter "wwn='1000000000000003'"
   Explanation: Display the host with WWN 1000000000000003 as a member.

Q: How to connect a volume to a host with a specific LUN ID on a Pure FlashArray?
   Command: purevol connect MY_VOL_001 --host MY-SERVER-001 --lun 10
   Explanation: Connect volume MY_VOL_001 to host MY-SERVER-001 and assign LUN ID 10. This provides R/W access to the volume.

Q: How to copy data from one volume to another on a Pure FlashArray?
   Command: purevol copy MY_VOL_001 MY_VOL_002 --overwrite
   Explanation: Copy data from MY_VOL_001 to the existing volume MY_VOL_002. The contents of MY_VOL_002 will be overwritten.
ActiveCluster for File
We’re proud to announce the availability of ActiveCluster for file, Everpure’s premier business continuity solution and a fundamental enabler of our Enterprise Data Cloud vision, where Service Level Agreements define which storage, network, and compute resources are assigned dynamically to application data sets, rather than a hardware-to-app architecture. With ActiveCluster for file, Everpure is extending the benefits of data mobility, continuous access, and policy-driven management to file workloads.

What is ActiveCluster?

Everpure launched ActiveCluster in 2017, and it rapidly took the mission-critical, enterprise block storage world by storm. ActiveCluster enabled enterprise customers with the most demanding block workloads to deploy synchronous, always-available, always up-to-date LUNs or volumes to hosts stretched across geographic distances. What set ActiveCluster apart from the existing solutions at the time, and even now, is how simple Everpure’s RTO-0 and RPO-0 solutions are to set up, and how flexible and adaptable these data sets remain to ever-changing business needs after being deployed on Everpure Fusion fleets. Today, we’re adding file protocol support, including NFSv3, NFSv4.1, SMB 2.0, and SMB 3.0 with continuously available shares, to our ActiveCluster solution.

Realms as a new container

ActiveCluster for file utilizes a new, high-level container called a Realm to synchronously mirror both user data and the storage configuration information necessary to provide data access to authorized users on either side of the stretched file system(s). Realms are a handy way to group applications with similar Recovery Point Objectives and Recovery Time Objectives.

Realm Synchronous Replication

The act of synchronously mirroring both the user data and storage configuration information across two different FlashArrays is called ‘stretching’.
Similar to how a pod is stretched across two FlashArrays, a Realm can be stretched between any pair of FlashArray systems that have no more than 11 ms average round-trip time on their array replication links. Either Fibre Channel or Ethernet array replication links can be used to replicate file data synchronously.

Figure 1. ActiveCluster for file can be deployed in different modalities

Realms as namespaces for policies

Realms contain unique snapshot, audit logging, replication, and export policies. These policies are only viewable and attachable to storage objects within the Realm, creating a building block for hosting multiple different end customers or tenants on Fusion fleets. These policies are automatically replicated over to the other array if the Realm is stretched, reducing operator burden in failover scenarios. To prevent split-brain scenarios (where a network partition in the array links or replication links stops communication between the pair of FlashArrays), Everpure’s fully managed Cloud Mediator service determines which remaining FlashArray controller pair can process writes, and which array cannot. Unlike other business continuity solutions, ActiveCluster customers don’t have to worry about patching or maintaining the security of separate VMs acting as a mediator service to prevent split-brain scenarios.

Multiple servers supported per Realm, different IDPs allowed

Each Realm can have one or more servers configured in it, which act as protocol endpoints for clients and hosts to connect to. Each server in a Realm can have a different IP address, or utilize a different Identity Provider service.
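The 11 ms round-trip-time ceiling described above can be expressed as a trivial eligibility check. A toy Python sketch with invented sample values (this is not how Purity measures link latency):

```python
# Toy stretch-eligibility check: the replication links must average
# no more than 11 ms round-trip time. Sample RTTs below are made up.

MAX_AVG_RTT_MS = 11.0

def can_stretch(rtt_samples_ms: list[float]) -> bool:
    """True if the average measured RTT is within the stretch ceiling."""
    return sum(rtt_samples_ms) / len(rtt_samples_ms) <= MAX_AVG_RTT_MS

print(can_stretch([8.2, 9.0, 10.5]))    # True: average is ~9.2 ms
print(can_stretch([12.0, 15.3, 11.9]))  # False: average is ~13.1 ms
```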
When a failover condition occurs (like a site disaster on one side) and the clients in either data center are on the same Ethernet segment or broadcast domain, the automatic failover will emit a gratuitous Reverse Address Resolution Protocol (RARP) request, mapping the new MAC address of the Ethernet interface on the surviving side to the same IP address already in use. Applications may see a small pause in reads or writes being serviced, but will not have to re-issue I/O or remount or remap shares or exports. Managed directory quotas can also be used for any filesystem or managed directory attached to the servers in the Realm being stretched. These quota policies are automatically replicated along with user data, so the same customer experience in terms of usable space exists both before and after an automatic failover.

New Guided Setup available for ActiveCluster for file

Deploying new ActiveCluster for file solutions can take less than five minutes on already racked and powered arrays. A Guided Setup wizard is available to quickly capture the information necessary to stretch a Realm. This wizard can be started from multiple locations within the Purity GUI. ActiveCluster for file fully takes advantage of Fusion fleets and the ability to manage storage infrastructure as code, programmatically and via policy.

Realms are not tied to hardware, and can ‘float’

Realms with ActiveCluster for file support not only provide 0-RTO and 0-RPO at the storage layer for mission-critical applications; they also provide a mechanism to transparently and non-disruptively move the data and configuration in the Realm somewhere else within your fleet, whether that’s follow-the-sun round-robin hops, where the Realm’s location changes depending on the time of day, or a move as part of a data-center migration.
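The follow-the-sun idea above, where a floating Realm’s home changes with the time of day, can be illustrated with a toy placement rule. The array names and the schedule here are entirely invented for illustration:

```python
# Toy "follow the sun" placement rule for a floating Realm: choose the
# hosting array by UTC hour. Names and hour bands are hypothetical.

def realm_home(hour_utc: int) -> str:
    """Return the array that should host the Realm at the given UTC hour."""
    if 0 <= hour_utc < 8:
        return "ARRAY-APAC"
    if 8 <= hour_utc < 16:
        return "ARRAY-EMEA"
    if 16 <= hour_utc < 24:
        return "ARRAY-AMER"
    raise ValueError("hour must be in the range 0-23")

print(realm_home(10))  # ARRAY-EMEA
```

A real deployment would trigger the stretch/unstretch moves through Fusion rather than a clock lookup; this only sketches the scheduling idea.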
Coupled with Fusion, Everpure’s intelligent control plane, ActiveCluster for file enables workloads, application data, and their configuration information to dynamically and seamlessly move to the right location, at the right time, at the right granularity. Seamless movement across greater geographic distances can be accomplished by stretching and unstretching the same Realm between different arrays, as long as the RTT latency between them is under 11 ms.

Service Level Agreements are the lingua franca of the Enterprise Data Cloud

Service Level Agreements are the natural language of business owners, and are integral for companies who want to move away from managing storage arrays to managing their business data. They capture answers to questions like “How fast do you need access to this data? Does it need to be backed up or otherwise protected against site-wide failure?” SLAs form the basis of our vision behind the app-to-data operational model. This app-to-data model takes abstract, high-level business requirements as input, and then automatically configures and deploys the required storage services to meet the service level agreement just defined. A Fusion fleet manager’s perspective is one of many different application tiles and their health, not just a series of HA pairs spread out across different data centers. Data management operations, like instant backups, cloning, and movement, are applied as “verbs” to the application data set’s name or workload ID, not to a mismatched storage container whose hardware boundaries impose limits on your app team. An intelligent, unified control plane manages and enforces SLAs across the fleet autonomously, like a modern cloud operating model, but one that can be deployed in any modality: on-prem, in the cloud, or hybrid. This scalable model, with Fusion’s intelligent control plane, supports ALL workloads, from modern AI workloads, containers, and high-performance computing to extremely large image or rich-media archives.
An Enterprise Data Cloud: made up of discrete nodes tied loosely together, where Service Level Definitions define autonomous system behavior. Stop managing your storage arrays, and start managing your data.

Learn more about ActiveCluster for file:
- Read the support documentation for Purity 6.12.0
- Test and deploy Fusion fleets and file presets
- Ask your account executive or system engineer for a demo!

Flash Array Certification
All FlashArray Admins,

If any of you currently hold a FlashArray certification, there is an alternative to retaking the test to renew your cert. The Continuing Pure Education (CPE) program takes into account learning activities and community engagement and contribution hours to renew your FA certification. I just successfully renewed my FlashArray Storage Professional cert by tracking my activities. Below are the details I received from Pure:

- Customers can earn 1 CPE credit per hour of session attendance at Accelerate, for a maximum of 10 CPEs total (i.e., up to 10 hours of sessions). Sessions must be attended live. I would go ahead and add all the sessions you attended at Accelerate to the CPE_Submission form.
- Associate-level certifications will auto-renew as long as there is at least one active higher-level certification (e.g., Data Storage Associate will auto-renew anytime a Professional-level cert is renewed). All certifications other than the Data Storage Associate should be renewed separately.
- At this time, the CPE program only applies to FlashArray-based exams. Non-FA exams may be renewed by retaking the respective test every three years.

You should be able to get the CPE submission form from your account team. Once complete, email your recertification log to peak-education@purestorage.com for formal processing.

-Charlie

Veeam v13 Integration and Plugin
Hi Everyone,

We're new Pure customers this year and have two FlashArray//C models: one for virtual infrastructure, and the other will be used solely as a storage repository to back up those virtual machines using Veeam Backup and Replication. Our plan is to move away from the current Windows-based Veeam v12 in favor of Veeam v13 hardened Linux appliances. We're in the design phase now, but have Veeam v13 working great in a separate environment with VMware and HPE Nimble. Our question is around Pure Storage and Veeam v13 integration and plugin support. Veeam's product team mentions there are native integrations in v12, but that storage vendors should be "adopting USAPI" going forward. Is this something that Pure is working on, or maybe has already completed with Veeam Backup and Replication v13?

AUE - Key Insights
Good morning/afternoon/evening everyone! This is Rich Barlow, Principal Technologist @ Pure. It was super fun to proctor this AUE session with Antonia and Jon. Hopefully everyone got in all of the questions that they wanted to ask - we had so many that we had to answer many of them out of band. So thank you for your enthusiasm and support. Looking forward to the next one! Here's a rundown of the most interesting and impactful questions we were asked. If you have any more, please feel free to reach out.

FlashArray File: Your Questions, Our Answers (Ask Us Everything Recap)

Our latest "Ask Us Everything" webinar with Pure Storage experts Rich Barlow, Antonia Abu Matar, and Jonathan Carnes was another great session. You came ready with sharp questions, making it clear you're all eager to leverage the simplicity of your FlashArray to ditch the complexity of legacy file storage. Here are some of the best insights shared in the session:

Unify Everything: Performance By Design

You asked about the foundation, and it's a game-changer.

- No Middleman, Low Latency: Jon Carnes confirmed that FlashArray File isn't a bolt-on solution. Since the file service lands directly on the drives, just like block data, there's effectively "no middle layer." The takeaway? You get the same awesome, low-latency performance for file that you rely on for block workloads.
- Kill the Data Silos: Antonia Abu Matar emphasized the vision behind FlashArray File: combining block and file on a single, shared storage pool. This isn't just tidy; it means you benefit from global data reduction and unified data services across everything.

Scale, Simplicity, and Your Weekends Back

The community was focused on escaping the complexities of traditional NAS systems.

- Always-On File Shares: Worried about redundancy? Jon confirmed that FlashArray File implements an "always-on" version of Continuously Available (CA) shares for SMB3 (in Purity 6.9/6.10). It's on by default for transparent failover and simple client access.
- Multi-Server Scale-Up: For customers migrating from legacy vendors and needing lots of "multi-servers," we're on it. Jon let us know that engineering is actively working to significantly raise the current limits (aiming for around 100 in the next Purity release), stressing that Pure increases these limits non-disruptively to ensure stability.
- NDU, Always and Forever: The best part? No more weekend maintenance marathons. The FlashArray philosophy is a "data in place, non-disruptive upgrade." That applies to both block and file, eliminating the painful data migrations you're used to.
- Visibility at Your Fingertips: You can grab real-time IOPS and throughput from the GUI or via APIs. For auditing, file access events are pushed via syslog in native JSON format, which makes integrating with tools like Splunk super easy.

Conquering Distance and Bandwidth

A tough question came in about supporting 800 ESRI users across remote Canadian sites (Yellowknife, Iqaluit, etc.) with real-time file access despite low bandwidth.

- Smart Access over Replication: Jon suggested looking at Rapid Replicas (available on FlashBlade File). This isn't full replication; it's a smart solution that synchronizes metadata across sites and only pulls the full data on demand (pull-on-access). This is key for remote locations because it dramatically cuts down on the constant bandwidth consumption of typical data replication.

Ready to Simplify?

FlashArray File Services lets you consolidate your infrastructure and get back to solving bigger problems, not babysitting your storage. Start leveraging the power of a truly unified and non-disruptive platform today! Join the conversation and share your own experiences in the Pure Community.

Why Your Writes Are Always Safe on FlashArray
The promise of modern storage is simple: when the system says "yes," your data had better be safe. No matter what happens next (power failure, controller hiccup, or the universe throwing whatever else it has at you), writes need to stay acknowledged. FlashArray is engineered around this non-negotiable principle. Let me walk you through how we deliver on it.

Durable First, Fast Always

When your application issues a write to FlashArray, here's the path it takes:

1. Land in DRAM for inline data reduction (dedupe, compression, you know, the lightweight stuff).
2. Persist redundantly in NVRAM (mirrored or RAID-6/DNVR, depending on platform), in a log accessible by either controller.
3. Acknowledge to the host ← This is the critical moment.
4. Flush to flash media in the background, efficiently and asynchronously.

Notice what happens between steps 2 and 3? We don't acknowledge until data is durably persisted in non-volatile memory. Not "mostly safe," not "probably fine," but safe and durable. This isn't a write-back cache we'll get around to flushing later. The acknowledgement means your data survived the critical path and is now protected, period.

Power Loss? No Problem.

FlashArray NVRAM modules include integrated supercapacitors that provide power hold-up during unexpected power events. When the power drops, these capacitors ensure the buffered write log is safely preserved, with no batteries to maintain and no external UPS required just to have write safety. Many sites still deploy a UPS for broader data center and facility reasons, but it isn't necessary for write durability. Because durability is achieved at the NVRAM layer, we eliminate the most common failure mode in legacy systems: the volatile write cache that promises safety but can't deliver when it matters most.

Simpler Path with Integrated DNVR

In our latest architectures, we integrate Distributed NVRAM (DNVR) directly into the DirectFlash Module (DFMD).
This simplifies the write path (fewer hops, tighter integration, better efficiency) and scales NVRAM bandwidth and capacity with the number of modules. By bringing persistence closer to the media, we're not just maintaining our durability guarantees; we're increasing capacity and streamlining the data path at the same time.

Graceful Under Pressure

What happens if write ingress temporarily exceeds what the system can flush to flash? FlashArray applies deterministic backpressure: you may see latency increase, but I/O is not dropped, so data is not at risk. Background processes yield, and lower-priority internal tasks are throttled to prioritize destage operations, keeping the system stable and predictable. Translation: we slow down gracefully; we don't fail unpredictably.

High Availability by Design

Controllers are stateless, with writes durably persisted in NVRAM accessible by either controller. If one controller faults, the peer automatically takes over, replays any in-flight operations from the durable log, and resumes service. A brief I/O pause may occur during takeover; platforms are sized so a single controller can handle the full workload afterwards, minimizing disruption to your applications. No acknowledged data is lost. No manual intervention required. Just continuous operation.

Beyond the ACK: Protection on Flash

After the destage, data on flash is protected with wide-striped erasure coding for fast, predictable rebuilds and multi-device fault tolerance, with no hot-spare overhead.

The Bottom Line

Modern flash gives you incredible performance, but performance means nothing if your data isn't safe. FlashArray's architecture makes durability the first principle: not an optimization, not an add-on, but the foundation everything else is built on. When FlashArray says your write is safe, it's safe. That's not marketing. That's engineering. This approach to write safety is part of Pure's commitment to Better Science, doing things the right way, not the easy way.
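The ordering guarantee at the heart of this post (persist to a non-volatile log first, acknowledge second, destage later) can be modeled as a toy class. This is purely illustrative Python, not Purity code; real NVRAM, mirroring, and erasure coding are all abstracted away:

```python
# Toy model of acknowledge-after-durable-persist ordering.
# Illustrative only; not how Purity is actually implemented.

class ToyArray:
    def __init__(self):
        self.nvram_log = []  # stands in for the durable, non-volatile write log
        self.flash = {}      # stands in for backing flash media

    def write(self, key, value) -> str:
        self.nvram_log.append((key, value))  # persist first ...
        return "ACK"                         # ... acknowledge only afterwards

    def destage(self):
        """Background flush: drain the log onto flash."""
        while self.nvram_log:
            key, value = self.nvram_log.pop(0)
            self.flash[key] = value

a = ToyArray()
assert a.write("lba0", b"data") == "ACK"
assert ("lba0", b"data") in a.nvram_log  # durable before destage runs
a.destage()
assert a.flash["lba0"] == b"data"
```

The key property: there is no code path where `"ACK"` is returned before the append to the durable log completes, which is the same ordering the numbered write path above describes.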
We didn't just swap drives in an existing architecture; we reimagined the entire system from the ground up, from how we co-design hardware and software with DirectFlash to how we map and manage petabytes of metadata at scale.

Want to dive deeper?

- Better Science, Volume 1: Hardware and Software Co-design with DirectFlash: https://blog.purestorage.com/products/better-science-volume-1-hardware-and-software-co-design-with-directflash/
- Better Science, Volume 2: Maps, Metadata, and the Pyramid: https://blog.purestorage.com/perspectives/better-science-volume-2-maps-metadata-and-the-pyramid/
- The Pure Report: Better Science Vol. 1 (DirectFlash): https://podcasts.apple.com/gb/podcast/better-science-volume-1-directflash/id1392639991?i=1000569574821

Purity//FA 6.9 is (Finally) Enterprise Ready!
A few months ago I wrote about the top 10 reasons to upgrade to Purity 6.9, and here are 10 more, because 6.9 has just gone Enterprise Ready!

https://support.purestorage.com/bundle/m_flasharray_release/page/FlashArray/FlashArray_Release/01_Purity_FA_Release_Notes/topics/concept/c_purityfa_69x_release_notes.html

10 💍 It's "Long-Life"! Stability until June 2028. That's a longer, more successful relationship than 90% of reality TV couples achieve.
9 ⚰️ Your Pure SE Won't Keep Bugging You About Running an EOL Release. You know who you are….
8 💯 It's Been to College. It met the criteria for "customer fleet adoption, cumulative runtime, and observed uptime." Basically, it passed the field test with flying colors.
7 🤝 You Get a Side of Fusion. Upgrade to 6.9 and get the powerful, simple-to-use multi-array storage platform management system included. You know you want it!
6 😴 The Engineers Can Finally Go Home. A big thank you to the engineering, support, technical program management, and product management teams for all the hard work. Go take a nap!
5 🛡️ We Have a Stable Alternative to Chasing New Features. For customers who want rock-solid reliability, you can skip the Feature Release (FR) line drama and stick with the LLR.
4 ✅ It's the Complete 6.8 Feature Set. You don't lose any capabilities; you just gain the confidence of a battle-tested release. Full meal deal, no compromises.
3 🖱️ It's So Easy to Get There, Even the Intern Could Do It. Customers with compatible hardware are encouraged to use Self-Service Upgrades (SSU). Less work, more coffee breaks.
2 🔒 Guaranteed Bug Fixes and Security Updates. This release is officially maintained, meaning your security team can finally relax... slightly.
1 🚨 When You Call Support, We Won't Start With "Did You Upgrade Yet?"