ActiveCluster Asynchronous
Hello, I have a question about ActiveCluster asynchronous replication: is it possible to configure it between two existing clusters, Poland to Germany? The idea is to have the ability to perform DR tests in datacenters located in a different country. The customer is currently running a configuration based on X20R2/R4: two systems in Poland and two in Germany. Local replication between the arrays is configured using ActiveCluster synchronous replication.

##################

Reviewed documentation: ActiveCluster over Fibre Channel and ActiveDR or async replication are supported on the same system starting with Purity 6.1.3+. ActiveDR must be configured with separate volumes (and pods) from ActiveCluster volumes.

ActiveCluster asynchronous replication works by having both metro arrays replicate protection group snapshots to a third array (snapshot replication only). Leveraging asynchronous replication is easy: after connecting the arrays, you simply define a target array in a protection group. Once defined, the protection group itself can be moved into an ActiveCluster pod (our synchronous, RPO-zero replication service), where the protection group is owned by both arrays. The defined target can then receive regularly scheduled snapshots on the third array. This active-active asynchronous replication is shared by the ActiveCluster arrays: in the event that either array is offline, the alternate array assumes ownership of continued snapshot replication to the third array.

In summary, you can replicate snapshots as desired between any number of arrays; all that is required is a defined array connection and a protection group target. These protection groups can also be moved into a pod and shared between ActiveCluster arrays for disaster recovery purposes.
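The connect → protection group → target → pod sequence described above can be sketched as an ordered plan of FlashArray REST calls. This is a minimal illustration, not a tested procedure: the endpoint paths, payload fields, and the `pod::pgroup` rename convention in step 4 are assumptions based on the REST 2.x API, so verify them against your Purity version's API reference.

```python
# Sketch of the ActiveCluster async-replication setup as an ordered plan
# of FlashArray REST calls. Endpoint paths, payload fields, and the
# "pod::pgroup" rename convention are assumptions -- check your Purity
# version's REST 2.x API reference before using them.

def async_replication_plan(remote, pgroup, pod=None):
    """Return (method, path, payload) tuples for the four setup steps."""
    steps = [
        # 1. Connect the arrays so source and target know each other.
        ("POST", "/api/2.4/array-connections",
         {"management_address": remote, "type": "async-replication"}),
        # 2. Create a protection group with a replication schedule.
        ("POST", f"/api/2.4/protection-groups?names={pgroup}",
         {"replication_schedule": {"frequency": 3600}}),
        # 3. Add the remote array as a target of the protection group.
        ("POST", f"/api/2.4/protection-groups/targets"
                 f"?group_names={pgroup}&member_names={remote}", {}),
    ]
    if pod:
        # 4. For an ActiveCluster pair, move the group into the stretched
        #    pod by renaming it into the pod's namespace.
        steps.append(("PATCH", f"/api/2.4/protection-groups?names={pgroup}",
                      {"name": f"{pod}::{pgroup}"}))
    return steps

plan = async_replication_plan("array-de-1", "dr-pg", pod="ac-pod")
for method, path, _ in plan:
    print(method, path)
```

Once the group is in the pod, either array in the ActiveCluster pair can drive the scheduled snapshot replication to the third array, which is what makes the failover behavior described above possible.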
The sequence of steps for enabling asynchronous replication:

1. Connect the arrays so the source and target arrays are aware of each other.
2. Create a protection group with the desired snapshot policies.
3. Add the array to replicate snapshots to in the protection group's Target field.
4. If using an ActiveCluster pair, move the protection group into the ActiveCluster pod.

Purity//FA 6.6.10 Introduces Better Data Security
Purity//FA 6.6.10 introduces better data security by auditing file access. For all SMB and NFS shares, all access events can be captured and recorded to a local file or sent to a remote syslog server. Check out the Pure Storage //Launch Round-Up for August! Have more questions? We’re all ears!

General Availability of Purity//FB 4.5.6
We are excited to announce the general availability of Purity//FB 4.5.6.

Purity//FB 4.5.6 Release Highlights

Purity//FB 4.5.6 is designed to simplify managing a fleet of systems, optimize geographically distributed workflows, and expand the environments that can deploy FlashBlade. Key highlights include:

Fusion for FlashBlade: Support for FlashBlade is now available in Pure Fusion, enabling a FlashBlade to create or join a fleet of arrays and simplifying deployment, scaling, and management of data across both FlashArrays and FlashBlades. Pure Fusion now also delivers a single, consistent interface for deploying file and object workloads.

Rapid Replicas: A remote fetching and caching capability that enables file data to be distributed efficiently, allowing collaborative development across multiple data centers, remote sites, or, increasingly, workloads in the cloud.

QoS ceiling for File System: Purity//FB adds support for creating custom QoS policies that define ceiling limits for IOPS and bandwidth per filesystem. Storage administrators can leverage QoS ceilings to ensure predictable performance and mitigate resource contention.

Legal Hold support for File System: Allows users to apply a legal hold to files, folders, and sub-folders. Once applied, the file or folder cannot be deleted by the user until the legal hold is removed. This capability supports compliance for enterprises in regulated industries by providing mandatory legal hold capabilities for files and folders.

18.6TB QLC DFM Support for S100: Provides a lower entry point (130TB raw) for customers to experience the power of FlashBlade//S, and also provides heterogeneous expansion capability on //S100 systems with 18.6TB and 37.5TB DFMs.

FlashBlade//S500 with NVIDIA DGX SuperPOD: Purity//FB 4.5.6 introduces the integration of FlashBlade with NVIDIA DGX SuperPOD to provide a high-performance, scalable solution for AI and other high-performance computing applications.
Object Secure Token Service: Secure Token Service helps FlashBlade integrate with Single Sign-On (SSO) architectures (federated identity models). It simplifies management of user and group access to FlashBlade resources such as buckets and objects.

Increased Object Account and Bucket Scale: Scalability in FlashBlade object storage is enhanced with the latest release. FlashBlade can now support up to 30K replication relationships and up to 5K remote credentials when setting up replication.

Object Active-Active Replication: Beginning with Purity//FB 4.5.6, active-active replication between multi-site writable buckets is generally available; no qualification document or approval is needed to set up replication. However, active-active replication of classic buckets still requires PM approval; please contact your Pure Storage representative for assessment and approval for classic buckets. We recommend using multi-site writable buckets for all scenarios, including single-site deployments.

Capacity Consolidation on FB//S & FB//E: Provides support for upgrading smaller-capacity DFMs to larger-capacity DFMs in FlashBlade//S and FlashBlade//E systems.

TLS Policy and Ciphers: TLS Policy allows customers to create and maintain custom TLS policies and ciphers for FlashBlade network interfaces. TLS policy helps centralize management and controls for all inbound network traffic, narrowing the attack surface and reducing vulnerability exploitation for transports over the network.

Updates to Zero Move Tiering (ZMT): FlashBlade//ZMT systems add support for 1, 2, 3, and 4 DFMs-per-blade configurations in the 1S:1E offering. New SKUs offer low starting capacity points for the hot storage class and large cold archival storage, improving TCO for customers with predictable performance and cost efficiency.
ABE Support for File System: Access-based Enumeration (ABE) allows administrators to hide objects (files and folders) on a network shared folder from users who do not have the permissions (Read or List) needed to access them.

MMC Support for File System: Administrators can now use the Microsoft Management Console (MMC) to list open or inaccessible files and choose to close them.

The full list of new features and enhancements can be found in the Purity//FB 4.5.6 Release Notes.

Try Fusion Presets and Workloads in Pure Test Drive
Pure Test Drive now has a hands-on lab designed to let you demo the Presets and Workloads capability included in Purity//FA 6.8.3. A Preset is a reusable template that describes how to provision and configure the Purity objects that make up a Workload; a Workload is a container object that holds references to other Purity objects, enabling them to be managed and monitored as a set. Watch a recorded walkthrough, or roll up your sleeves and try it yourself by filling out this form or asking your local Pure SE for a voucher for the Pure Fusion Presets & Workloads lab.
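As a loose mental model of the Preset/Workload relationship described above, a preset expands into the set of Purity objects that a workload then groups and tracks. The structure below is invented purely for illustration; it is not the actual Fusion schema or API, and the object names and fields are hypothetical.

```python
# Hypothetical illustration of the preset -> workload relationship.
# Field names and the template format are invented for this sketch;
# they are NOT the Pure Fusion schema -- see the Purity//FA 6.8.3
# documentation for the real preset and workload definitions.

def realize_workload(preset: dict, name: str, params: dict) -> dict:
    """Expand a preset's object templates into a named workload."""
    objects = []
    for tmpl in preset["object_templates"]:
        objects.append({
            "type": tmpl["type"],
            # Fill template placeholders like "{name}" from the
            # workload name and any extra parameters.
            "name": tmpl["name"].format(name=name, **params),
        })
    # The workload is just a container referencing the created objects,
    # so they can be managed and monitored as a set.
    return {"workload": name, "preset": preset["name"], "objects": objects}

sql_preset = {
    "name": "sql-server",
    "object_templates": [
        {"type": "volume", "name": "{name}-data"},
        {"type": "volume", "name": "{name}-log"},
        {"type": "protection-group", "name": "{name}-pg"},
    ],
}

wl = realize_workload(sql_preset, "erp-db", {})
```

The point of the model: one template, many consistently provisioned workloads, with each workload keeping a handle on everything it created.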
It’s a little unsettling when we don’t know exactly what’s happening with our personal data. Do you think companies are doing enough to make us feel confident about their data practices, or do you feel like they could do more? Ever wonder what really happens to your data once you hit "delete"? This blog takes a deep dive into data retention and deletion policies, exploring how companies are balancing compliance and your privacy.
Managing storage at scale can be a lot at times! What’s the most frustrating part of managing large-scale storage for you? Is it the constant troubleshooting, juggling different systems, or just the pressure of making sure nothing goes down? Check out these demos on how to simplify operations, automate the tedious stuff, and stay ahead of issues before they happen.