New Reference Architecture: SQL Server on Azure VMs with Pure Cloud Block Store
This is a brand new, weeks-old reference architecture, and I'm really excited about this one. During development, one of the most surprising discoveries was just how much Azure VM performance is limited by the IOPS cap tied to managed disks. It caught me off guard how much planning it takes just to size storage and compute together when you go the native route. With CBS, I was able to bypass those constraints. It felt more like working with enterprise storage (which is what it's meant to do!): I could pull from a pool, scale performance independently of VM size, and provision storage volumes in a clean and easy way.

This new RA covers:
- SQL Server architecture on Azure VMs with Pure Cloud Block Store
- Snapshot-based backup and restore
- DR patterns using ActiveDR™ and HA using ActiveCluster™
- Dev/test database cloning with volume snapshots
- Performance benchmarking vs. Azure Premium SSD v2

It proved:
- ~40% more transactional throughput (TPROC-C)
- ~93% better analytical query performance (TPROC-H, normalized by queries per minute)
- 3–5x data reduction vs. raw data

Download the full reference architecture here. Would love to hear your thoughts on this architecture and how we could improve the experience!
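If you're curious what that "clean and easy" provisioning looks like in practice, here's a minimal sketch using the purestorage Python SDK (Cloud Block Store presents the same FlashArray REST API as an on-prem array). The management address, API token, IQN, and object names are placeholders I made up for illustration, not values from the RA:

```python
# Minimal sketch: provision a CBS volume and attach it to a SQL Server VM.
# Uses the `purestorage` (REST 1.x) Python SDK; the endpoint, token, and
# names below are illustrative placeholders.
import purestorage

array = purestorage.FlashArray("cbs-mgmt.example.com", api_token="API-TOKEN")

# Carve a data volume out of the CBS pool; performance scales with the
# CBS instance, independent of the Azure VM size.
array.create_volume("sql-prod-data", "4T")

# Register the SQL Server VM's iSCSI initiator and connect the volume.
array.create_host("sql-vm-01", iqnlist=["iqn.2024-01.com.example:sql-vm-01"])
array.connect_host("sql-vm-01", "sql-prod-data")
```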
Announcing the General Availability of Purity//FA 6.7.4 LLR

We are happy to announce the general availability of 6.7.4, the fifth release in the 6.7 Long-Life Release (LLR) line! This release line is based on the feature set introduced in 6.6, providing long-term consistency in capabilities, user experience, and interoperability, with the latest fixes and security updates. When the 6.7 LLR line demonstrates sufficient accumulated runtime data to be recommended for critical customer workloads, it will be declared Enterprise Ready (ER). Until then, Purity//FA 6.5 is the latest ER-designated LLR line. For more detailed information about bug fixes and security updates included in each release, see the release notes.

UPGRADE RECOMMENDATIONS AND EOL SCHEDULE
We recommend customers with compatible hardware who are looking for the latest feature set offered for long-term maintenance upgrade to this long-life release. When possible, customers should make use of Self-Service Upgrades (SSU) to ease the process of planning and executing non-disruptive Purity upgrades for their fleet. The 6.7 LLR line is planned for development through October 2027.

HARDWARE SUPPORT
This release is supported on the following FlashArray platforms: Cloud Block Store for Azure and AWS, FA//X (R2, R3, R4), FA//C, FA//XL, and FA//E. Note: DFS software version 2.2.3 is recommended with this release.

ACKNOWLEDGEMENTS
We would like to thank everyone within the engineering, support, technical program management, product management, product marketing, finance, and technical product specialist teams who contributed to this release.

LINKS AND REFERENCES
- Purity//FA 6.7 Release Notes
- Purity//FA 6.6/6.7 Feature Content
- Self-Service Upgrades
- Purity//FA Release and End-of-Life Schedule
- FlashArray Hardware and End-of-Support
- DirectFlash Shelf Software Compatibility Matrix
- FlashArray Capacity and Feature Limits
- FlashArray Feature Interoperability Matrix
Backup and Restore FA Configuration

Hi All, does Pure Storage have a way to save or back up the array configuration and restore it when an issue occurs? For example, if the software crashed and could not be used anymore, once the problem was fixed we would restore the existing configuration from a backup file.
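As far as I know there isn't a single user-facing "export config" command on FlashArray (full array recovery goes through Pure Support), but one pragmatic approach is to keep your own offline record of the logical configuration via the REST API and use it to re-create objects after an incident. A hedged sketch with the purestorage Python SDK; the address and token are placeholders:

```python
# Hedged sketch: dump key FlashArray configuration objects to JSON as an
# offline record (volumes, hosts, host groups, protection groups).
# Uses the `purestorage` REST 1.x SDK; address and token are placeholders.
import json
import purestorage

array = purestorage.FlashArray("fa-mgmt.example.com", api_token="API-TOKEN")

config = {
    "volumes": array.list_volumes(),
    "hosts": array.list_hosts(),
    "host_groups": array.list_hgroups(),
    "protection_groups": array.list_pgroups(),
}

# Store alongside your other infrastructure records; this is a reference
# for re-creating objects, not a Purity-level restore mechanism.
with open("fa-config-backup.json", "w") as f:
    json.dump(config, f, indent=2, default=str)
```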
Announcing the General Availability of Purity//FA 6.8.6

We are happy to announce the general availability of 6.8.6, the seventh release in the 6.8 Feature Release line, including the SMB Continuous Availability feature, which guarantees zero downtime for customers' businesses during controller disruptions and upgrades, ensuring uninterrupted access to shared files. Some of the improvements to Purity contained in this release include:

- SMB Continuous Availability preserves file handles to ensure uninterrupted SMB access during controller failovers and upgrades.
- Target Pods for Pgroup Replication allows customers to target a specific pod for protection groups, avoiding the clutter of snapshots replicating to an array's root pod.
- CBS for AWS Write Optimization for Amazon S3 improves how data is committed and managed on Amazon S3 and can significantly reduce the AWS infrastructure operating cost, providing customers with write- or replication-heavy workloads with cost reductions of up to 50%.
- Allow NVMe Read-Only Volumes for ActiveDR eliminates the restriction on promotion/demotion of pods containing NVMe-connected volumes, saving customers from unexpected command failures and time-consuming workarounds.

For more detailed information about features, bug fixes, and security updates included in each release, see the release notes.

UPGRADE RECOMMENDATIONS AND EOL SCHEDULE
We recommend customers with compatible hardware who are looking for the fastest access to the latest features upgrade to this new feature release. When possible, customers should make use of Self-Service Upgrades (SSU) to ease the process of planning and executing non-disruptive Purity upgrades for their fleet. The 6.8 release line is planned for feature development through May 2025, with additional fixes coming in June 2025 to end the release line.

HARDWARE SUPPORT
This release is supported on the following FlashArray platforms: Cloud Block Store for Azure and AWS, FA//X (R3, R4), FA//C (R3, R4), FA//XL, FA//E, and FA//RC (starting with 6.8.5). Note: DFS firmware version 2.2.3 is recommended with this release.

ACKNOWLEDGEMENTS
We would like to thank everyone within the engineering, support, technical program management, product management, product marketing, finance, and technical product specialist teams who contributed to this release.

LINKS AND REFERENCES
- Purity//FA 6.8 Release Notes
- Self-Service Upgrades
- Purity//FA Release and End-of-Life Schedule
- FlashArray Hardware and End-of-Support
- DirectFlash Shelf Software Compatibility Matrix
- FlashArray Capacity and Feature Limits
Snapshots and growth

I have a question about snapshot growth and retention. Last week we had 14 days' worth of snapshots, and due to some storage growth we changed this to 7 days' worth. Before the change, snapshots were taking up about 21 TB of space; after the change, that number is around 10 TB. This reduction was more than expected: we anticipated around a 5 TB reduction, which we got by adding up the reported sizes of the day 8-14 snapshots. The other 6 TB of reduction came from the most recent snapshot, which at the time was 11 TB in size and is now down to around 5 TB. Does anybody know why the most recent snapshot also had a large reduction after making this change? We are trying to figure out future growth, including snapshot growth.
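One thing worth noting when modeling this: per-snapshot sizes on a deduplicating array generally report only the space unique to each snapshot, so they aren't additive, and the reported numbers shift whenever neighboring snapshots are destroyed. Here's a toy Python model (not Purity's actual accounting, just an illustration) of why summing days 8-14 won't predict the reclaimed space:

```python
# Toy model (NOT Purity's actual accounting) of why per-snapshot sizes
# don't simply add up: data is shared, and a block is only reclaimed
# when the *last* snapshot referencing it goes away.
snapshots = {
    "day14": {"a", "b", "c"},   # each letter = one block of data
    "day10": {"b", "c", "d"},
    "day1":  {"c", "d", "e"},   # most recent snapshot
}

def unique_space(name):
    """Blocks reported against `name`: those no other snapshot holds."""
    others = set().union(*(blocks for n, blocks in snapshots.items() if n != name))
    return snapshots[name] - others

print({n: unique_space(n) for n in snapshots})  # per-snapshot 'size'

# Destroy the old snapshots: blocks a and b are freed, but c and d
# survive because day1 still references them -- so the space actually
# freed is not the sum of day14's and day10's reported sizes, and
# day1's attributed size changes too.
for old in ("day14", "day10"):
    del snapshots[old]
print(unique_space("day1"))
```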
ActiveCluster Asynchronous

Hello, I have a question about ActiveCluster asynchronous replication: is it possible to configure it between two existing clusters, Poland to Germany? The idea is to have the ability to perform DR tests in data centers located in a different country. The customer is currently running a configuration based on X20R2/R4. They have two systems in Poland and two in Germany, and local replication between arrays is configured using ActiveCluster synchronous replication.

##################

Reviewed documentation:
ActiveCluster over Fibre Channel and ActiveDR or async replication are supported on the same system starting with Purity 6.1.3+. ActiveDR must be configured with separate volumes (and pods) from ActiveCluster volumes. ActiveCluster asynchronous replication works in such a way that both metro arrays replicate data to a third array as protection group snapshots only.

Leveraging asynchronous replication is easy to do; it's a simple matter of defining a target array in a Protection Group after connecting the array. Once defined in a Protection Group, the Protection Group itself can be moved into an ActiveCluster (our synchronous, RPO-zero replication service) pod, where the Protection Group is owned by two arrays. The defined target can replicate regularly scheduled snapshots to a third array. This active-active asynchronous replication is shared by the ActiveCluster arrays, and in the event that either array is offline, the alternate array will assume ownership of continual snapshot replication to the third array. In summary, you can replicate snapshots as desired between any number of arrays, requiring only a defined array connection and a Protection Group target. These Protection Groups can also be moved into a pod for sharing between ActiveCluster arrays for disaster recovery purposes.

The sequence of steps for enabling asynchronous replication (a rough API sketch follows below):
1. Connect the arrays so the source and target arrays are aware of each other.
2. Create a Protection Group with the desired snapshot policies.
3. Add the array to replicate snapshots to in the Target field.
4. If using an ActiveCluster pair, move the Protection Group into the ActiveCluster pod.
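Here is a rough, hedged sketch of those four steps using the purestorage Python SDK. The addresses, connection key, schedule, and object names are placeholders, and the final pod move is shown as an approximate CLI equivalent since I'm less certain of the SDK call for it:

```python
# Rough sketch of the four steps with the `purestorage` REST 1.x SDK.
# Addresses, connection key, and names are placeholders.
import purestorage

source = purestorage.FlashArray("fa-poland.example.com", api_token="API-TOKEN")

# 1. Connect the source and target arrays for async replication.
source.connect_array("fa-germany.example.com",
                     "CONNECTION-KEY-FROM-TARGET",
                     ["replication"])

# 2. Create a protection group targeting the third array, with a
#    replication schedule.
source.create_pgroup("pg-dr", targetlist=["fa-germany"])
source.set_pgroup("pg-dr",
                  replicate_enabled=True,
                  replicate_frequency=3600)  # replicate hourly (seconds)

# 3. Add the volumes to protect.
source.set_pgroup("pg-dr", vollist=["sql-prod-data"])

# 4. For ActiveCluster, move the pgroup into the stretched pod so either
#    array can continue replication. I believe the CLI equivalent is
#    something like: purepgroup rename pg-dr pod1::pg-dr
```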
Purity FA 6.6.10* Introduces Better Data Security

Purity//FA 6.6.10 introduces better data security by auditing file access. This means that for all SMB and NFS shares, all access events can be captured and recorded to a local file or sent to a remote syslog server. Check out the Pure Storage //Launch Round-Up for August! Have more questions? We're all ears!
General Availability of Purity//FB 4.5.6

We are excited to announce the general availability of Purity//FB 4.5.6.

Purity 4.5.6 Release Highlights
Introducing Purity//FB 4.5.6, designed to simplify managing a fleet of systems, optimize geographically distributed workflows, and expand the environments that can deploy FlashBlade. Key highlights include:

- Fusion for FlashBlade: Support for FlashBlade is available now on Pure Fusion, enabling FlashBlade to create or join a fleet of arrays, simplifying deployment, scaling, and management of data across both FlashArrays and FlashBlades. Pure Fusion now also delivers a single consistent interface for deploying file and object workloads.
- Rapid Replicas: The remote fetching and caching capability enables file data to be distributed efficiently and allows collaborative development across multiple data centers, remote sites or, increasingly, workloads in the cloud.
- QoS ceiling for File System: Purity//FB adds support for creating custom QoS policies defining ceiling limits for IOPS and bandwidth per file system. Storage administrators can leverage QoS ceilings to ensure predictable performance and mitigate resource contention.
- Legal Hold support for File System: Allows users to apply a legal hold on files, folders, and sub-folders. Once applied, the file or folder can't be deleted by the user until the legal hold is removed. This capability supports compliance for enterprises in regulated industries by providing mandatory legal hold capabilities for files and folders.
- 18.6TB QLC DFM Support for S100: Provides a lower entry point (130TB raw) for customers to experience the power and goodness of FlashBlade//S, and also provides heterogeneous expansion capability on //S100 systems with 18.6TB and 37.5TB DFMs.
- FlashBlade//S500 with NVIDIA DGX SuperPOD: Purity//FB 4.5.6 introduces the integration of FlashBlade with NVIDIA DGX SuperPOD to provide a high-performance, scalable solution for AI and other high-performance computing applications.
- Object Secure Token Service: Secure Token Service helps FlashBlade integrate with Single Sign-On (SSO) architectures (federated identity models). It simplifies management of user and group access to FlashBlade resources like buckets and objects.
- Increased Object Account and Bucket Scale: Scalability in FlashBlade object storage is enhanced with the latest release. FlashBlade will be able to support up to 30K replication relations and up to 5K remote credentials while setting up replication.
- Object Active-Active Replication: Beginning with Purity//FB 4.5.6, active-active replication between multi-site writable buckets is now Generally Available. No qualification document or approval for setting up replication is needed. However, active-active replication of classic buckets still requires PM approval; please contact your Pure Storage representative for assessment and approval for classic buckets. We recommend using multi-site writable buckets for all scenarios, including single-site deployments (a minimal S3-client sketch follows after this list).
- Capacity Consolidation on FB//S & FB//E: Provides support for upgrading smaller-capacity DFMs to larger-capacity DFMs in FlashBlade//S and FlashBlade//E systems.
- TLS Policy and Ciphers: TLS Policy allows customers to create and maintain custom TLS policies and ciphers for FlashBlade network interfaces. TLS policy helps centralize management and controls for all inbound network traffic. The policy helps narrow the attack surface and reduce vulnerability exploitation for transports over the network.
- Updates to Zero Move Tiering (ZMT): The FlashBlade//ZMT system adds support for 1, 2, 3, and 4 DFM-per-blade configs in the 1S:1E offering. New SKUs offer low starting capacity points for the hot storage class and large cold archival storage. This indirectly improves TCO for customers with predictable performance and cost efficiency.
- ABE Support for File System: Access-Based Enumeration (ABE) allows administrators to hide objects (files and folders) on a network shared folder from users who don't have permissions (Read or List) to access them.
- MMC Support for File System: Administrators can now go into Microsoft Management Console (MMC) to list open or inaccessible files and choose to close them.

The full list of new features and enhancements can be found in the Purity//FB 4.5.6 Release Notes.
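Since several of these highlights are object-storage features, here's the minimal S3-client sketch mentioned above. FlashBlade speaks the standard S3 API, so a stock boto3 client works; the endpoint, credentials, and bucket name below are placeholders:

```python
# Minimal sketch: talk to FlashBlade object storage with a stock S3 client.
# Endpoint, credentials, and bucket name are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://flashblade-data.example.com",  # FB data VIP
    aws_access_key_id="ACCESS-KEY",        # object account credentials
    aws_secret_access_key="SECRET-KEY",
)

# Write an object and list the buckets visible to this account.
s3.put_object(Bucket="analytics", Key="hello.txt", Body=b"hello FlashBlade")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```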
Try Fusion Presets and Workloads in Pure Test Drive

Pure Test Drive now has a hands-on lab designed to let you demo the Presets and Workloads capability included in Purity//FA 6.8.3. A Preset is a reusable template that describes how to provision and configure the Purity objects that make up a Workload, and Workloads are container objects that hold references to other Purity objects, enabling them to be managed and monitored as a set. Watch a recorded walkthrough, or roll up your sleeves and try it yourself by filling out this form or asking your local Pure SE for a voucher for the Pure Fusion Presets & Workloads lab.
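To make the Preset/Workload relationship concrete, here's a purely hypothetical Python sketch; the class names and fields are invented for this post and are not the real Fusion schema, so treat the lab as the source of truth:

```python
# Purely hypothetical illustration of the Preset -> Workload relationship.
# These dataclasses and fields are invented for this post; they are NOT
# the real Fusion schema.
from dataclasses import dataclass, field

@dataclass
class Preset:
    """Reusable template describing the Purity objects to provision."""
    name: str
    volume_count: int
    volume_size: str
    protection_policy: str

@dataclass
class Workload:
    """Container holding references to objects provisioned from a preset,
    so they can be managed and monitored as a set."""
    name: str
    preset: Preset
    volumes: list = field(default_factory=list)

sql_preset = Preset("sql-server-prod", volume_count=4,
                    volume_size="2T", protection_policy="hourly-snap")
workload = Workload("erp-database", preset=sql_preset)
```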