Recent Content
Announcing the General Availability of Purity//FA 6.7.4 LLR
We are happy to announce the general availability of 6.7.4, the fifth release in the 6.7 Long-Life Release (LLR) line! This release line is based on the feature set introduced in 6.6, providing long-term consistency in capabilities, user experience, and interoperability, with the latest fixes and security updates. When the 6.7 LLR line demonstrates sufficient accumulated runtime data to be recommended for critical customer workloads, it will be declared Enterprise Ready (ER). Until then, Purity//FA 6.5 is the latest ER-designated LLR line. For more detailed information about bug fixes and security updates included in each release, see the release notes.

UPGRADE RECOMMENDATIONS AND EOL SCHEDULE
We recommend customers with compatible hardware who are looking for the latest feature set offered for long-term maintenance upgrade to this long-life release. When possible, customers should make use of Self-Service Upgrades (SSU) to ease the process of planning and executing non-disruptive Purity upgrades for their fleet. The 6.7 LLR line is planned for development through October 2027.

HARDWARE SUPPORT
This release is supported on the following FlashArray platforms: Cloud Block Store for Azure and AWS, FA//X (R2, R3, R4), FA//C, FA//XL, and FA//E. Note: DFS software version 2.2.3 is recommended with this release.

ACKNOWLEDGEMENTS
We would like to thank everyone within the engineering, support, technical program management, product management, product marketing, finance, and technical product specialist teams who contributed to this release.

LINKS AND REFERENCES
Purity//FA 6.7 Release Notes
Purity//FA 6.6/6.7 Feature Content
Self-Service Upgrades
Purity//FA Release and End-of-Life Schedule
FlashArray Hardware and End-of-Support
DirectFlash Shelf Software Compatibility Matrix
FlashArray Capacity and Feature Limits
FlashArray Feature Interoperability Matrix
Flashblade S3 Integration with VMware vCloud
Does anyone know if Pure has an integration between FlashBlade and vCloud, or is perhaps working on one? Dell and Hitachi seem to have extensions for vCloud, but nothing for Pure. We want to support consumption of Pure object storage by users through the vCloud model.
Backup and Restore FA Configuration
Hi all, does Pure Storage have a way to save (back up) and restore the array configuration when an issue occurs? For example, if the software crashed and could no longer be used, once the problem was fixed we would restore the configuration from a backup file.
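As a hedged illustration of one way to approach the configuration-backup question above: configuration inventory can be pulled over the REST API and kept as a JSON snapshot for later reference or rebuild. The sketch below assumes a client object exposing `list_volumes()`, `list_hosts()`, and `list_hgroups()` (the `purestorage` Python REST client exposes methods by these names); a tiny stub stands in for a real array connection so the flow can run anywhere, and is not an official backup/restore mechanism.

```python
import json

def export_config(array, path):
    """Dump a JSON inventory of key array objects for later reference.

    `array` is any client exposing list_volumes/list_hosts/list_hgroups,
    e.g. purestorage.FlashArray(target, api_token=...) -- an assumption,
    not a verified end-to-end backup procedure.
    """
    config = {
        "volumes": array.list_volumes(),
        "hosts": array.list_hosts(),
        "host_groups": array.list_hgroups(),
    }
    with open(path, "w") as f:
        json.dump(config, f, indent=2)
    return config

class StubArray:
    """Stand-in for a real FlashArray connection (illustration only)."""
    def list_volumes(self):
        return [{"name": "vol1", "size": 1099511627776}]
    def list_hosts(self):
        return [{"name": "esx-01", "wwn": []}]
    def list_hgroups(self):
        return [{"name": "esx-cluster", "hosts": ["esx-01"]}]

cfg = export_config(StubArray(), "fa-config.json")
print(sorted(cfg))  # ['host_groups', 'hosts', 'volumes']
```

Note this captures the logical configuration (volumes, hosts, host groups) as reference data for rebuilding, not the data itself; for a supported recovery path, Pure Support would be the right contact.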
Portworx Enterprise Version 3.2.3 - Release Announcement
We are excited to announce that Portworx Enterprise 3.2.3 is now Generally Available. This version delivers solutions for some key customer asks and adds support for FlashArray capabilities.

Key Highlights
- General Availability of FlashArray Direct Access (FADA) shared raw block (RWX) volumes
- Support for FADA shared raw block (RWX) volumes, to enable live migration of KubeVirt VMs with high-performance raw block storage
- Early access support for FlashArray Direct Access volumes with the FlashArray ActiveCluster feature
- Early access of the ability to create custom labels for storage pools, which helps placement of volumes and replicas based on environment or workload requirements

Other Notable Improvements
- Support for the sticky option for FlashArray Direct Access volumes
- Scalability improvements to reduce the load on the Kubernetes API server

Usage Instructions
- Prerequisites
- Supported Kernels
- Install, Upgrade and Usage Documentation

Additional Resources and Documentation
- Release Notes
- Portworx Enterprise Documentation
- Run KubeVirt VMs with FlashArray Direct Access shared raw block (RWX) volumes
- ActiveCluster on FlashArray Direct Access volumes (Early Access)
- Custom labels for device pools (Early Access)
Purity//FB 4.5.7 is now Generally Available!
We are pleased to announce that Purity//FB 4.5.7 is now Generally Available! Purity//FB 4.5.7 supports first-generation FlashBlade, FlashBlade//S, and FlashBlade//E.

Documentation and Resources:
- Release and Upgrade Guidelines: Recommended for proofs of concept, new installs, and upgrades of FlashBlade//S and FlashBlade//E systems. Object Multi-Site Replication features require a workload fit assessment, and feature enablement is required. Please contact your Pure Storage representative. Please refer to the Release Guidelines for more up-to-date information.
- Release Package: Download here
- For details on fixes included in 4.5.7, please refer to the External Release Notes (Support Site).

IMPORTANT: Both 3.3.x and 4.1.x are bridgehead releases, which means any existing FlashBlade systems will need to upgrade to 3.3.x and then 4.1.x before upgrading to 4.5.0.
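The bridgehead note above amounts to a fixed ordering of upgrade hops. As an illustrative sketch only (the bridgehead list is taken from the announcement text, not an official compatibility matrix), a small helper can compute which hops a given system must take:

```python
# Illustrative only: derive the upgrade hops implied by bridgehead releases.
# The bridgehead versions come from the announcement above; treat them as an
# assumption, not an official Pure Storage compatibility matrix.
BRIDGEHEADS = [(3, 3), (4, 1)]  # must pass through 3.3.x, then 4.1.x

def upgrade_path(current, target):
    """Return the (major, minor) hops needed to reach `target` from `current`."""
    cur = tuple(current[:2])
    tgt = tuple(target[:2])
    # Every bridgehead strictly between the current and target versions
    # must be visited, in order, before the final hop.
    hops = [bh for bh in BRIDGEHEADS if cur < bh < tgt]
    return hops + [tgt]

print(upgrade_path((3, 2), (4, 5)))  # [(3, 3), (4, 1), (4, 5)]
```

A 3.2.x system therefore hops through 3.3.x and 4.1.x, while a system already past both bridgeheads can go straight to the target.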
Getting Started with Pure Storage Fusion: A Quick Guide to Unified Fleet Management
One of the most powerful updates in the Pure Storage ecosystem is the ability to federate arrays into a unified fleet with Fusion. Whether you're scaling out infrastructure or simplifying operations across data centers, Fusion makes multi-array management seamless, and the setup process is refreshingly simple. Here's a quick walkthrough to get your fleet up and running:

🔹 Step 1: Create or Join a Fleet
From the Fleet Management tab in the Purity UI, you can either create a new fleet or join an existing one. Creating a fleet? Just assign a memorable name and generate a one-time fleet key. This key acts like a secure handshake, ensuring that only authorized arrays can join.

🔹 Step 2: Add Arrays to the Fleet
On each array you want to bring into the fold: select Join Fleet, enter the fleet name, and paste in the fleet key. Once verified, the array becomes part of your managed fleet.

🔹 Step 3: Manage as One
With federation complete, you now have a single, unified control plane. Any array in the fleet can serve as your management entry point: configure, monitor, and operate across the entire environment from one location.

This capability is a big leap forward for simplifying scale and operations, especially for hybrid cloud or multi-site environments. If you're testing it out, I'd love to hear how it's working for you or what use cases you're solving.
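The one-time fleet key in the walkthrough above can be modeled as a consumable credential. This is purely a toy illustration of the handshake concept (the class, method names, and key format are invented here, not the Fusion API): the key admits exactly one join and is then invalidated.

```python
import secrets

class Fleet:
    """Toy model of a fleet create/join handshake (not the real Fusion API)."""

    def __init__(self, name):
        self.name = name
        self.members = []
        self._pending_keys = set()

    def generate_key(self):
        # One-time key: valid for a single successful join.
        key = secrets.token_hex(16)
        self._pending_keys.add(key)
        return key

    def join(self, array_name, key):
        if key not in self._pending_keys:
            raise PermissionError("invalid or already-used fleet key")
        self._pending_keys.remove(key)  # the key is consumed by this join
        self.members.append(array_name)

fleet = Fleet("prod-fleet")
key = fleet.generate_key()
fleet.join("array-01", key)
print(fleet.members)  # ['array-01']
```

Each additional array needs a freshly generated key, mirroring the "generate a one-time fleet key" step: reusing a consumed key is rejected, which is what keeps unauthorized arrays out.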
Announcing the General Availability of Purity//FA 6.8.6
We are happy to announce the general availability of 6.8.6, the seventh release in the 6.8 Feature Release line, including the SMB Continuous Availability feature, which guarantees zero downtime for customers' businesses during controller disruptions and upgrades, ensuring uninterrupted access to shared files. Some of the improvements to Purity contained in this release include:

- SMB Continuous Availability preserves file handles to ensure uninterrupted SMB access during controller failovers and upgrades.
- Target Pods for Pgroup Replication allows customers to target a specific pod for protection groups, avoiding the clutter of snapshots replicating to an array's root pod.
- CBS for AWS Write Optimization for Amazon S3 improves how data is committed and managed on Amazon S3 and can significantly reduce AWS infrastructure operating cost, providing customers with write- or replication-heavy workloads with cost reductions of up to 50%.
- Allow NVMe Read-Only Volumes for ActiveDR eliminates the restriction on promotion/demotion of pods containing NVMe-connected volumes, saving customers from unexpected command failures and time-consuming workarounds.

For more detailed information about features, bug fixes, and security updates included in each release, see the release notes.

UPGRADE RECOMMENDATIONS AND EOL SCHEDULE
We recommend customers with compatible hardware who are looking for the fastest access to the latest features upgrade to this new feature release. When possible, customers should make use of Self-Service Upgrades (SSU) to ease the process of planning and executing non-disruptive Purity upgrades for their fleet. The 6.8 release line is planned for feature development through May 2025, with additional fixes coming in June 2025 to end the release line.

HARDWARE SUPPORT
This release is supported on the following FlashArray platforms: Cloud Block Store for Azure and AWS, FA//X (R3, R4), FA//C (R3, R4), FA//XL, FA//E, and FA//RC (starting with 6.8.5). Note: DFS firmware version 2.2.3 is recommended with this release.

ACKNOWLEDGEMENTS
We would like to thank everyone within the engineering, support, technical program management, product management, product marketing, finance, and technical product specialist teams who contributed to this release.

LINKS AND REFERENCES
Purity//FA 6.8 Release Notes
Self-Service Upgrades
Purity//FA Release and End-of-Life Schedule
FlashArray Hardware and End-of-Support
DirectFlash Shelf Software Compatibility Matrix
FlashArray Capacity and Feature Limits
Snapshots and growth
I have a question about snapshot growth and retention. Last week we had 14 days' worth of snapshots, and due to some storage growth we changed this to 7 days' worth. Before the change, snapshots were taking up about 21 TB of space; after the change, that number is around 10 TB. This reduction was more than expected: we expected around a 5 TB reduction, which we got by adding up the reported sizes of days 8-14. The other 6 TB of the reduction came from the most recent snapshot, which at the time was 11 TB in size and is now down to around 5 TB. Does anybody know why the most recent snapshot also showed a large reduction after this change? We are trying to figure out future growth, including snapshot growth.
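Without array-specific details it is impossible to say exactly what happened above, but one general property of shared-block snapshot accounting reproduces the headline discrepancy: the space freed by deleting a group of snapshots is the union of all blocks only they reference, which can far exceed the sum of their individually reported unique sizes, and the figures reported for the snapshots that remain are relative to the set that exists, so they shift when retention changes. A toy model (block IDs and sizes invented, one block = one unit of space):

```python
# Toy model of shared-block snapshot accounting (illustration only; the
# sharing pattern is invented, not taken from a real array).
def unique_size(snap, others):
    """Space reported for `snap`: blocks no other snapshot references."""
    shared = set().union(*others) if others else set()
    return len(snap - shared)

def freed_by_deleting(deleted, kept):
    """Space actually reclaimed: blocks referenced only by deleted snapshots."""
    kept_blocks = set().union(*kept) if kept else set()
    return len(set().union(*deleted) - kept_blocks)

# Each snapshot is a set of block IDs.
old_snaps = [{1, 2, 3, 4}, {1, 2, 3, 5}, {1, 2, 3, 6}]  # days 8-14, condensed
new_snaps = [{7, 8}, {7, 9}]                             # days 1-7, condensed

per_snap = [unique_size(s, old_snaps[:i] + old_snaps[i + 1:] + new_snaps)
            for i, s in enumerate(old_snaps)]
print("sum of reported sizes:", sum(per_snap))                 # 3 (blocks 4, 5, 6)
print("actually freed:", freed_by_deleting(old_snaps, new_snaps))  # 6 (plus 1, 2, 3)
```

Blocks 1-3 are shared among the older snapshots, so they appear in no individual snapshot's reported size, yet all of them are freed once every snapshot referencing them is destroyed. The same sharing effect means a retention change can reclaim more than the per-snapshot numbers suggest and can move the numbers reported for the snapshots you keep.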
ActiveCluster Asynchronous
Hello, I have a question about ActiveCluster asynchronous replication: is it possible to configure it between two existing clusters, Poland to Germany? The idea is to have the ability to perform DR tests in data centers located in a different country. The customer is currently running a configuration based on X20R2/R4. They have two systems in Poland and two in Germany. Local replication between arrays is configured using ActiveCluster synchronous replication.

##################

Reviewed documentation:

ActiveCluster over Fibre Channel and ActiveDR or Async is supported on the same system starting with Purity 6.1.3+. ActiveDR must be configured with separate volumes (and pods) from ActiveCluster volumes.

ActiveCluster asynchronous replication works in such a way that both metro arrays replicate data to a third array as protection group snapshots only. Leveraging asynchronous replication is easy to do; it's a simple matter of defining a target array in a Protection Group after connecting the arrays. Once defined in a Protection Group, the Protection Group itself can be moved into an ActiveCluster (synchronous, RPO-0 replication) pod, where the Protection Group is owned by two arrays. The defined target can then replicate regularly scheduled snapshots to a third array. This active-active asynchronous replication is shared by the ActiveCluster arrays, and in the event that either array is offline, the alternate array will assume ownership of continual snapshot replication to the third array. In summary, you can replicate snapshots as desired between any number of arrays, requiring only a defined array connection and a Protection Group target. These Protection Groups can also be moved into a pod for sharing between ActiveCluster arrays for disaster recovery purposes.
The sequence of steps for enabling asynchronous replication:
1. Connect the arrays so the source and target arrays are aware of each other.
2. Create a Protection Group with the desired snapshot policies.
3. Add any array to replicate snapshots to in the Target field.
4. If using an ActiveCluster pair, move the Protection Group into the ActiveCluster pod.
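The four steps above can be sketched against a generic array client. This is a hedged sketch, not the real Purity CLI or SDK: the client below is a stub, and every method name (connect_array, create_pgroup, add_pgroup_target, move_pgroup_to_pod) is invented for illustration; consult the Purity documentation for the actual commands and signatures.

```python
# Hedged sketch of the four-step async-replication setup. The client and all
# method names here are invented stand-ins, not the real Purity API.
class StubArrayClient:
    def __init__(self, name):
        self.name = name
        self.log = []  # records the order of configuration actions

    def connect_array(self, target):              # step 1: make arrays aware
        self.log.append(("connect", target))

    def create_pgroup(self, pgroup, schedule):    # step 2: pgroup + policy
        self.log.append(("create_pgroup", pgroup, schedule))

    def add_pgroup_target(self, pgroup, target):  # step 3: replication target
        self.log.append(("add_target", pgroup, target))

    def move_pgroup_to_pod(self, pgroup, pod):    # step 4: share with AC peer
        self.log.append(("move_to_pod", pgroup, pod))

src = StubArrayClient("poland-a")
src.connect_array("germany-dr")               # 1. connect source and target
src.create_pgroup("pg-dr", schedule="15m")    # 2. snapshot policy
src.add_pgroup_target("pg-dr", "germany-dr")  # 3. third array as target
src.move_pgroup_to_pod("pg-dr", "metro-pod")  # 4. owned by both AC arrays
print([step[0] for step in src.log])
```

The point of the sketch is the ordering: the array connection must exist before a target can be added to the Protection Group, and moving the group into the pod comes last so both ActiveCluster arrays share ownership of the replication schedule.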
Does anyone have any advice for stork pods that keep restarting
Does anyone have any advice for stork pods that keep restarting with:

```
Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x29aad38]
```

I'm running storkctl Version: 2.7.0-2e5098a and k8s version 1.22.5