Forum Discussion

qsummers
Day Hiker III
1 day ago

ActiveCluster for File

We’re proud to announce the availability of ActiveCluster for file, Everpure’s premier business continuity solution and a fundamental enabler of our Enterprise Data Cloud vision, in which Service Level Agreements define the storage, network, and compute resources dynamically assigned to application data sets, rather than a hardware-to-app architecture. With ActiveCluster for file, Everpure is extending the benefits of data mobility, continuous access, and policy-driven management to file workloads.

What is ActiveCluster?

Everpure launched ActiveCluster in 2017, and it quickly took the mission-critical enterprise block storage world by storm. ActiveCluster enabled enterprise customers with the most demanding block workloads to deploy synchronous, always-available, always-up-to-date LUNs or volumes to hosts stretched across geographic distances.

What set ActiveCluster apart from existing solutions at the time, and even now, is how simple Everpure’s RTO-0 and RPO-0 solutions are to set up, and how flexible and adaptable the hosting of these data sets remains as business needs change after deployment on Everpure Fusion fleets.

Today, we’re adding file protocol support to our ActiveCluster solution: NFSv3, NFSv4.1, SMB 2.0, and SMB 3.0 with continuously available shares.

Realms as a new container

ActiveCluster for file utilizes a new, high-level container called a Realm to synchronously mirror both user data and the storage configuration information necessary to provide data access to authorized users on either side of the stretched file system(s).

Realms are a convenient way to group applications with similar Recovery Point Objectives (RPOs) and Recovery Time Objectives (RTOs).

Realm Synchronous Replication

The act of synchronously mirroring both the user data and storage configuration information across two different FlashArrays is called ‘stretching’. Similar to how a pod is stretched across two FlashArrays, a Realm can be stretched between any pair of FlashArray systems whose array replication links average no more than 11 ms round-trip time (RTT). Either Fibre Channel or Ethernet array replication links can be used to replicate file data synchronously.
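As a back-of-the-envelope illustration of that eligibility rule, the 11 ms limit amounts to averaging RTT samples from the replication links. The helper below is a hypothetical sketch, not part of any Everpure API:

```python
# Sketch: check whether two arrays are eligible for Realm stretching, based
# on the documented 11 ms average round-trip-time limit on the array
# replication links. Hypothetical helper, not an Everpure interface.
MAX_STRETCH_RTT_MS = 11.0

def can_stretch(rtt_samples_ms):
    """Return True if the average replication-link RTT is within the limit."""
    if not rtt_samples_ms:
        raise ValueError("need at least one RTT sample")
    avg = sum(rtt_samples_ms) / len(rtt_samples_ms)
    return avg <= MAX_STRETCH_RTT_MS
```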

Figure 1. ActiveCluster for file can be deployed in different modalities.

Realms as namespaces for policies

Realms contain their own snapshot, audit logging, replication, and export policies. These policies are visible and attachable only to storage objects within the Realm, creating a building block for hosting multiple end customers or tenants on Fusion fleets. If the Realm is stretched, these policies are automatically replicated to the other array, reducing operator burden in failover scenarios.
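To make that scoping concrete, here is a minimal Python sketch of Realm-scoped policies. The class and field names are assumptions for illustration, not Everpure objects:

```python
from dataclasses import dataclass, field

# Sketch: a policy defined in one Realm is visible and attachable only to
# storage objects in that same Realm. Names are illustrative.
@dataclass
class Policy:
    name: str
    kind: str  # "snapshot", "audit", "replication", or "export"

@dataclass
class Realm:
    name: str
    policies: dict = field(default_factory=dict)  # policy name -> Policy
    objects: dict = field(default_factory=dict)   # object name -> [policy names]

    def add_policy(self, policy):
        self.policies[policy.name] = policy

    def attach(self, obj_name, policy_name):
        # Policies from outside this Realm simply don't exist here.
        if policy_name not in self.policies:
            raise LookupError(f"{policy_name!r} is not visible in realm {self.name!r}")
        self.objects.setdefault(obj_name, []).append(policy_name)
```

Because the whole Realm (policies included) is what gets mirrored when stretched, the tenant boundary survives a failover intact.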

To prevent split-brain scenarios (where a network partition in the array links or replication links stops communication between the pair of FlashArrays), Everpure’s fully managed Cloud Mediator service determines which FlashArray controller pair continues to process writes, and which does not. Unlike other business continuity solutions, ActiveCluster customers don’t have to patch or maintain the security of separate mediator VMs to prevent split-brain scenarios.
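The mediator’s role can be pictured as a simple race: after a partition, whichever array still reaches the mediator keeps serving writes, and the other stops. This is a toy model of the general quorum idea, not the Cloud Mediator protocol itself:

```python
# Toy model of split-brain resolution: when the two arrays lose contact with
# each other, the first array to reach the mediator continues serving writes;
# the other array stops. Illustrative only, not the Cloud Mediator protocol.
def resolve_partition(reached_mediator):
    """reached_mediator: array names in the order they contacted the mediator.

    Returns the name of the array allowed to keep processing writes, or
    None if neither side can reach the mediator (both must stop).
    """
    if not reached_mediator:
        return None
    return reached_mediator[0]
```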

Multiple servers supported per Realm, different IdPs allowed

Each Realm can have one or more servers configured in it, which act as protocol endpoints for clients and hosts to connect to. Each server in a Realm can have a different IP address, or utilize a different Identity Provider (IdP) service. When a failover condition occurs (such as a site disaster on one side) and the clients in either data center are on the same Ethernet segment or broadcast domain, the automatic failover will emit a gratuitous Reverse Address Resolution Protocol (RARP) request, mapping the new MAC address of the Ethernet interface on the surviving side to the same IP address already in use. Applications may see a brief pause in reads or writes being serviced, but will not have to re-issue I/O or remount/remap shares or exports.

Managed directory quotas can also be used for any file system or managed directory attached to the servers in the stretched Realm. These quota policies are automatically replicated along with user data, so the customer experience in terms of usable space is the same before and after an automatic failover.
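The practical effect is that the same space-accounting check yields the same answer on either array. A trivial sketch with hypothetical names, not a Purity interface:

```python
# Sketch: because a managed-directory quota policy is replicated along with
# the user data, the same usable-space check applies identically on either
# side of the stretched Realm. Hypothetical helper, not a Purity interface.
def write_allowed(quota_limit_bytes, used_bytes, write_bytes):
    """Admit a write only if it still fits under the directory quota."""
    return used_bytes + write_bytes <= quota_limit_bytes

# The replicated policy carries the same limit, so pre- and post-failover
# decisions agree for the same directory state.
```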

New Guided Setup available for ActiveCluster for file


Deploying a new ActiveCluster for file solution takes less than five minutes on already racked and powered arrays. A Guided Setup wizard quickly captures the information needed to stretch a Realm, and can be started from multiple locations within the Purity GUI.

ActiveCluster for file fully takes advantage of Fusion fleets and the ability to manage storage infrastructure as code, programmatically and via policy.  

Realms are not tied to hardware, and can ‘float’

Realms with ActiveCluster for file support not only provide zero RTO and zero RPO at the storage layer for mission-critical applications; they also provide a mechanism to transparently and non-disruptively move the data and configuration in the Realm elsewhere within your fleet, whether that’s follow-the-sun round-robin hops, where the Realm’s location changes with the time of day, or a move made as part of a data center migration. Coupled with Fusion, Everpure’s intelligent control plane, ActiveCluster for file enables workloads, application data, and their configuration information to dynamically and seamlessly move to the right location, at the right time, at the right granularity. Seamless movement across greater geographic distances can be accomplished by stretching and unstretching the same Realm between different arrays, as long as the RTT latency between them is under 11 ms.
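The stretch-then-unstretch hop can be sketched as two operations on the Realm’s membership list. The function names below are illustrative, not Everpure commands:

```python
# Sketch of moving a Realm across a fleet: stretch adds a second array (a
# synchronous mirror), then unstretch drops the original array, leaving the
# Realm resident on the new array. Illustrative only, not Everpure commands.
def stretch(realm_arrays, new_array):
    if len(realm_arrays) >= 2:
        raise RuntimeError("realm is already stretched")
    return realm_arrays + [new_array]

def unstretch(realm_arrays, drop_array):
    if len(realm_arrays) != 2:
        raise RuntimeError("realm is not stretched")
    return [a for a in realm_arrays if a != drop_array]

def migrate(realm_arrays, from_array, to_array):
    """Relocate a Realm by stretching to the target, then unstretching."""
    return unstretch(stretch(realm_arrays, to_array), from_array)
```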


Service Level Agreements are the lingua franca of the Enterprise Data Cloud

Service Level Agreements are the natural language of business owners, and are integral for companies who want to move away from managing storage arrays to managing their business data. They capture answers to questions like “How fast do you need access to this data? Does it need to be backed up or otherwise protected against site-wide failure?” SLAs form the vision behind our app-to-data operational model, which takes abstract, high-level business requirements as input, and then automatically configures and deploys the storage services required to meet the service level agreement just defined.
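One way to picture the app-to-data model is SLA-in, storage-services-out. The SLA fields and the mapping below are assumptions for illustration, not Everpure’s actual preset catalog:

```python
# Sketch of the app-to-data idea: a business-level SLA is translated into
# concrete storage services. Field names and the mapping are assumptions
# for illustration, not Everpure's preset catalog.
def provision_from_sla(sla):
    services = {"protocol": "file"}  # assumed default for file workloads
    if sla.get("max_data_loss_seconds") == 0 and sla.get("max_downtime_seconds") == 0:
        # RPO 0 / RTO 0 implies a synchronously stretched Realm.
        services["replication"] = "synchronous"
    elif sla.get("max_data_loss_seconds", 0) <= 300:
        services["replication"] = "async-periodic"
    else:
        services["replication"] = "snapshots-only"
    if sla.get("site_failure_protection"):
        services["mediator"] = True
    return services
```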

A Fusion fleet manager’s perspective is one of many different application tiles and their health, not just a series of HA pairs spread out across different data centers.

Data management operations, like instant backups, cloning, and movement, are applied as “verbs” to the application data set’s name or workload ID, not to a mismatched storage container whose hardware boundaries impose limits on your app team.
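That verb-on-a-workload framing can be sketched as a tiny dispatcher keyed by workload ID. The verb names and behavior here are purely illustrative:

```python
# Sketch: data-management "verbs" addressed to a workload ID rather than a
# storage container. Verb names and return values are illustrative only.
VERBS = {}

def verb(name):
    """Register a function as a data-management verb."""
    def register(fn):
        VERBS[name] = fn
        return fn
    return register

@verb("snapshot")
def snapshot(workload_id):
    return f"snapshot taken for {workload_id}"

@verb("clone")
def clone(workload_id):
    return f"clone created for {workload_id}"

def apply_verb(verb_name, workload_id):
    # The caller names the workload, not the array or volume holding it.
    return VERBS[verb_name](workload_id)
```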

An intelligent, unified control plane manages and enforces SLAs across the fleet autonomously, like a modern cloud operating model, but one that can be deployed in any modality: on-prem, in the cloud, or hybrid.

This scalable model, with Fusion’s intelligent control plane, supports all workloads, from modern AI workloads, containers, and high-performance computing to extremely large image or rich media archives.


An Enterprise Data Cloud, made up of discrete nodes tied loosely together, where Service Level Agreements define autonomous system behavior. Stop managing your storage arrays, and start managing your data.

 

Learn more about ActiveCluster for file

Read the support documentation for Purity 6.12.0

Test and deploy Fusion fleets and file presets

Ask your account executive or system engineer for a demo!
