Recent Content
Getting Started with Pure Storage Fusion: A Quick Guide to Unified Fleet Management
One of the most powerful updates in the Pure Storage ecosystem is the ability to federate arrays into a unified fleet with Fusion. Whether you're scaling out infrastructure or simplifying operations across data centers, Fusion makes multi-array management seamless, and the setup process is refreshingly simple. Here's a quick walkthrough to get your fleet up and running:

🔹 Step 1: Create or Join a Fleet
From the Fleet Management tab in the Purity UI, you can either create a new fleet or join an existing one. Creating a fleet? Just assign a memorable name and generate a one-time fleet key. This key acts like a secure handshake, ensuring that only authorized arrays can join.

🔹 Step 2: Add Arrays to the Fleet
On each array you want to bring into the fold, select Join Fleet, enter the fleet name, and paste in the fleet key. Once verified, the array becomes part of your managed fleet.

🔹 Step 3: Manage as One
With federation complete, you now have a single, unified control plane. Any array in the fleet can serve as your management entry point: configure, monitor, and operate across the entire environment from one location.

This capability is a big leap forward for simplifying scale and operations, especially for hybrid cloud or multi-site environments. If you'd rather script the join step than click through the UI, there's a rough sketch below. If you're testing it out, I'd love to hear how it's working for you or what use cases you're solving.
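A scripted version of Step 2 might look like the sketch below. Treat it as an illustration only: the login flow (POST /api/<version>/login with an "api-token" header, returning an "x-auth-token" header) follows the usual FlashArray REST 2.x pattern, but the fleet endpoint path, payload shape, array address, and API version shown here are my assumptions, so check the REST reference for your Purity version before using anything like this.

```python
import requests

# Sketch only: the /fleets/members endpoint and its payload are assumptions,
# not confirmed FlashArray REST API. Verify against the REST 2.x reference
# for your Purity version.

ARRAY = "https://array01.example.com"   # hypothetical management address
API_VERSION = "2.36"                    # placeholder; see GET /api/api_version on your array
API_TOKEN = "<api-token>"               # created under Settings > Users in Purity

def login(array, api_token):
    """Exchange an API token for a short-lived session token."""
    resp = requests.post(
        f"{array}/api/{API_VERSION}/login",
        headers={"api-token": api_token},
        verify=False,  # lab convenience only; use proper certificate verification in production
    )
    resp.raise_for_status()
    return resp.headers["x-auth-token"]

def join_fleet(array, auth_token, fleet_name, fleet_key):
    """Ask this array to join an existing fleet using the one-time fleet key (assumed endpoint/payload)."""
    resp = requests.post(
        f"{array}/api/{API_VERSION}/fleets/members",
        headers={"x-auth-token": auth_token},
        json={"fleet": {"name": fleet_name}, "key": fleet_key},
        verify=False,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    token = login(ARRAY, API_TOKEN)
    print(join_fleet(ARRAY, token, "prod-fleet", "<one-time-fleet-key>"))
```

Creating the fleet and generating the one-time key on the first array would follow the same pattern, just with a different endpoint and payload.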
Announcing the General Availability of Purity//FA 6.8.6

We are happy to announce the general availability of 6.8.6, the seventh release in the 6.8 Feature Release line. It includes the SMB Continuous Availability feature, which keeps customers' businesses running through controller disruptions and upgrades by ensuring uninterrupted access to shared files. Some of the improvements to Purity contained in this release include:

SMB Continuous Availability preserves file handles to ensure uninterrupted SMB access during controller failovers and upgrades.
Target Pods for Pgroup Replication allows customers to target a specific pod for protection groups, avoiding the clutter of snapshots replicating to an array's root pod.
CBS for AWS Write Optimization for Amazon S3 improves how data is committed and managed on Amazon S3 and can significantly reduce AWS infrastructure operating costs, giving customers with write- or replication-heavy workloads cost reductions of up to 50%.
Allow NVMe Read-Only Volumes for ActiveDR removes the restriction on promotion/demotion of pods containing NVMe-connected volumes, saving customers from unexpected command failures and time-consuming workarounds.

For more detailed information about features, bug fixes, and security updates included in each release, see the release notes.

UPGRADE RECOMMENDATIONS AND EOL SCHEDULE
We recommend that customers with compatible hardware who are looking for the fastest access to the latest features upgrade to this new feature release. When possible, customers should make use of Self-Service Upgrades (SSU) to ease the process of planning and executing non-disruptive Purity upgrades for their fleet. The 6.8 release line is planned for feature development through May 2025, with additional fixes coming in June 2025 to end the release line.

HARDWARE SUPPORT
This release is supported on the following FlashArray platforms: Cloud Block Store for Azure and AWS, FA//X (R3, R4), FA//C (R3, R4), FA//XL, FA//E, and FA//RC (starting with 6.8.5). Note: DFS firmware version 2.2.3 is recommended with this release.

ACKNOWLEDGEMENTS
We would like to thank everyone within the engineering, support, technical program management, product management, product marketing, finance, and technical product specialist teams who contributed to this release.

LINKS AND REFERENCES
Purity//FA 6.8 Release Notes
Self-Service Upgrades
Purity//FA Release and End-of-Life Schedule
FlashArray Hardware and End-of-Support
DirectFlash Shelf Software Compatibility Matrix
FlashArray Capacity and Feature Limits
Snapshots and growth

I have a question about snapshot growth and retention. Last week we had 14 days' worth of snapshots, and due to some storage growth we changed this to 7 days' worth. Before the change, snapshots were taking up about 21 TB of space; after the change, that number is around 10 TB. This reduction was more than expected. We expected around a 5 TB reduction, which we got by adding up days 8-14. The other 6 TB of reduction came from the most recent snapshot, which at the time was 11 TB in size and is now down to around 5 TB. Does anybody know why the most recent snapshot also had a large reduction after making this change? We are trying to figure out future growth, including snapshot growth.
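To make the numbers easier to follow, here is the arithmetic from the question written out. This is purely a restatement of the figures above, not an explanation of the behavior:

```python
# Restating the figures from the question above; no explanation implied.
space_before_tb = 21                        # snapshot space with 14 days of retention
space_after_tb = 10                         # snapshot space after moving to 7 days
newest_before_tb, newest_after_tb = 11, 5   # newest snapshot before/after the change

observed_reduction = space_before_tb - space_after_tb   # 11 TB total
expected_reduction = 5                                  # sum of the deleted days 8-14
unexplained = observed_reduction - expected_reduction   # 6 TB
newest_shrink = newest_before_tb - newest_after_tb      # 6 TB

print(observed_reduction, expected_reduction, unexplained, newest_shrink)
# -> 11 5 6 6  (the "extra" 6 TB matches exactly what the newest snapshot gave up)
```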
ActiveCluster Asynchronous

Hello. I have a question about ActiveCluster asynchronous replication: is it possible to configure it between two existing clusters, Poland to Germany? The idea is to be able to perform DR tests in data centers located in a different country. The customer is currently running a configuration based on X20R2/R4. They have two systems in Poland and two in Germany, and local replication between the arrays is configured using ActiveCluster synchronous replication.

##################

Reviewed documentation: ActiveCluster over Fibre Channel and ActiveDR or async replication are supported on the same system starting with Purity 6.1.3+. ActiveDR must be configured with separate volumes (and pods) from ActiveCluster volumes. ActiveCluster asynchronous replication works by having both metro arrays replicate data to a third array as protection group snapshots only.

Leveraging asynchronous replication is easy to do; it's a simple matter of defining a target array in a Protection Group after connecting the array. Once defined in a Protection Group, the Protection Group itself can be moved into an ActiveCluster (our synchronous, RPO-zero replication service) pod, where the Protection Group is owned by two arrays. The defined target can then receive regularly scheduled snapshots on a third array. This active-active asynchronous replication is shared by the ActiveCluster arrays, and in the event that either array is offline, the alternate array will assume ownership of continual snapshot replication to the third array. In summary, you can replicate snapshots as desired from any number of arrays to any other number of arrays; all that is required is a defined array connection and a Protection Group target. These Protection Groups can also be moved into a pod for sharing between ActiveCluster arrays for disaster recovery purposes as well.

The sequence of steps for enabling asynchronous replication (a scripted sketch follows this list):
1. Connect the arrays so the source and target arrays are aware of each other.
2. Create a Protection Group with the desired snapshot policies.
3. Add the array that should receive the replicated snapshots to the Target field.
4. If using an ActiveCluster pair, move the Protection Group into the ActiveCluster pod.
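A rough scripted version of those steps, using the older purestorage Python SDK (REST 1.x), might look like the sketch below. Treat it as an illustration only: the method names are written from memory, the array names, pod name, and schedule values are made up, and the final "move into the pod" step in particular is shown as a rename into the pod namespace ("pod::pgroup"), which may not be the exact call for your Purity or SDK version.

```python
# Illustrative only: method names are from memory and arguments are examples;
# verify against the purestorage SDK / REST 1.x docs for your Purity version.
import purestorage

SOURCE = "array-pl-01.example.com"   # one of the ActiveCluster members (hypothetical)
TARGET = "array-de-01.example.com"   # the third, async target array (hypothetical)

src = purestorage.FlashArray(SOURCE, api_token="<source-api-token>")

# Step 1: connect the arrays for replication.
# The connection key comes from the target array's connection settings.
src.connect_array(TARGET, "<target-connection-key>", ["replication"])

# Step 2: create a Protection Group with the desired snapshot/replication policy.
src.create_pgroup("dr-test-pg")
src.set_pgroup("dr-test-pg",
               snap_frequency=3600,        # local snapshot every hour (example values)
               replicate_frequency=3600,   # replicate every hour
               snap_enabled=True,
               replicate_enabled=True)

# Step 3: add the third array as the replication target.
src.set_pgroup("dr-test-pg", targetlist=["array-de-01"])

# Step 4: move the Protection Group into the ActiveCluster pod so either
# metro array can own the replication. Shown as a rename into the pod
# namespace; confirm the exact mechanism for your version.
src.rename_pgroup("dr-test-pg", "metro-pod::dr-test-pg")
```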
Does anyone have any advice for stork pods that keep restarting

Does anyone have any advice for stork pods that keep restarting with:

```
Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x29aad38]
```

I'm running storkctl Version: 2.7.0-2e5098a and k8s version 1.22.5.
Having issues with Long HBA rescan times

We're having issues with long HBA rescan times. Any pointers in the right direction are appreciated. We already have a high-priority ticket in, but this community has been helpful more than once! Environment: vSphere 7.0.3.00600, Cisco UCS, Cisco MDS 9132 switches, Pure X90R3 arrays.
Is there a way, on the FlashArray (either GUI or CLI), to check connections hitting a specific port on one of the controllers?

I've got a support case open on this as well, but it appears that we have no hosts hitting one particular port on our array despite all configs looking correct.
A customer wants to be able to see host side performance metrics through Pure1

Hi everyone! A customer wants to be able to see host-side performance metrics through Pure1 (port status, HBAs, etc.) in a storage monitoring tool (like Hyper scale from IBM), something that Pure1 does not offer. Do we have an API that a third party can use to do that, or that Hyper scale can use for that matter? Another request is the ability to manage the whole fleet from a single pane of glass vs. having to log in to each array separately. Do we have a solution for this?
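On the "pull Pure1 data into a third-party tool" part of the question, Pure1 does expose a public REST API (also wrapped by the py-pure-client SDK), and a sketch of pulling fleet-level data might look like the following. This is illustrative only: the constructor arguments and method names are written from memory and should be checked against the py-pure-client / Pure1 API docs, and note that it covers array-level data, not the host-side details (HBAs, port status) asked about above.

```python
# Illustrative sketch only: constructor arguments and method names below are
# from memory; verify against the py-pure-client and Pure1 REST API docs.
from pypureclient import pure1

# An application ID and private key are registered in Pure1 under API Registration.
client = pure1.Client(
    app_id="pure1:apikey:example",          # hypothetical application ID
    private_key_file="pure1_private.pem",   # key whose public half was registered in Pure1
)

# List the arrays Pure1 knows about for this organization.
for item in client.get_arrays().items:
    print(item.name, item.model, item.os)
```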
Hello All, I am a Sr. Architect with the Pure Professional Services team

Hello All, I am a Sr. Architect with the Pure Professional Services team. In our PS Delivery team, we spend a large amount of our time automating against FlashArray for various tasks, such as database refreshes, automated volume provisioning, DR/BC, zero-touch provisioning, etc. Recently, we had the thought that it would be mutually beneficial for us to share our successes with the customer base, as well as create an open forum for you all to share with each other where you have had success. I have made a Slack channel, topic-automation-user-group, which you can join if interested. If we can spark enough interest, we will attempt to meet quarterly, with an agenda of demos, guest speakers (including your peers), open Q&A, etc.