FlashArray File Multi-Server
File support on FlashArray gains another highly demanded feature. With version 6.8.7, Purity introduces the concept of a Server, which ties together exports, directory services, and the other objects required for such a setup, namely DNS configuration and networking. From this version onwards, every directory export is associated with exactly one server.

To recap, a server holds associations to the following objects:

- DNS
- Active Directory / Directory Service (LDAP)
- Directory Export
- Local Directory Service

Local Directory Service is another new entity introduced in version 6.8.7; it represents a container for Local Users and Groups. Each server has its own Local Directory Service (LDS) assigned to it, and an LDS also has a domain name, which means "domain" is no longer the hardcoded name of the local domain but a user-configurable option.

All of these statements imply plenty of changes in the user experience. Fortunately, in most cases this amounts to adding a reference or an option to link a server, and the GUI now contains a Server management page, including a Server details page, which puts everything together and makes a Server configuration easy to understand, validate, and modify.

One question you might be asking right now is: can I use File services without Servers? The answer is no, not really. But don't be alarmed. Significant effort has been made to keep all commands and flows backwards compatible, so unless some script parses exact output and needs to be aligned because of the newly added "Server" column, there should be no need to change anything. How did we manage to do that? A special Server called _array_server has been created, and if your configuration contains anything file related, it will be migrated to that server during the upgrade.

Let me also offer a taste of how the configuration could look once the array is updated to the latest version.

List of Servers

# pureserver list
Name           Dns         Directory Services  Local Directory Service  Created
_array_server  management  -                   domain                   2025-06-09 01:00:26 MDT
prod           prod        -                   prod                     2025-06-09 01:38:14 MDT
staging        management  stage               staging                  2025-06-09 01:38:12 MDT
testing        management  testing             testing                  2025-06-09 01:38:11 MDT

List of Active Directory accounts

Since we can join multiple AD servers, we can now have multiple AD accounts, up to one per server.

# puread account list
Name           Domain            Computer Name  TLS       Source
ad-array       <redacted>.local  ad-array       required  -
prod::ad-prod  <redacted>.local  ad-prod       required  -

ad-array is the configuration for the _array_server; for backwards compatibility reasons, the server-name prefix hasn't been added there. The prefix is present for the account connected to the server prod (and for any other server).

List of Directory Services (LDAP)

Directory services were also slightly reworked: before 6.8.7 there were only two configurations, management and data. Obviously, that's not enough for more than one server (management is reserved for array management access and can't be used for File services). As of the 6.8.7 release, it's possible to fully manage Directory Service configurations and link them to individual servers.

# pureserver list
Name           Dns         Directory Services  Local Directory Service  Created
_array_server  management  -                   domain                   2025-06-09 01:00:26 MDT
prod           prod        -                   prod                     2025-06-09 01:38:14 MDT
staging        management  stage               staging                  2025-06-09 01:38:12 MDT
testing        management  testing             testing                  2025-06-09 01:38:11 MDT

Please note that these objects are intentionally not enabled / not configured.
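As a taste of how those links are made: attaching an LDAP Directory Service to a server and creating a per-server AD account are each a single command. A minimal sketch reusing the command shapes shown later in this series; the names here (the directory service stage, the server staging, the domain, and the computer name) are placeholders rather than output from a real system:

# pureserver setattr --ds stage staging
# puread account create staging::ad-staging --domain mycorp.local --computer-name ad-staging

Note how the AD account name carries the staging:: server prefix, which is what scopes it to that server.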
List of Directory exports

# puredir export list
Name                       Export Name  Server   Directory                     Path  Policy                  Type  Enabled
prod::smb::accounting      accounting   prod     prodpod::accounting:root      /     prodpod::smb-simple     smb   True
prod::smb::engineering     engineering  prod     prodpod::engineering:root     /     prodpod::smb-simple     smb   True
prod::smb::sales           sales        prod     prodpod::sales:root           /     prodpod::smb-simple     smb   True
prod::smb::shipping        shipping     prod     prodpod::shipping:root        /     prodpod::smb-simple     smb   True
staging::smb::accounting   accounting   staging  stagingpod::accounting:root   /     stagingpod::smb-simple  smb   True
staging::smb::engineering  engineering  staging  stagingpod::engineering:root  /     stagingpod::smb-simple  smb   True
staging::smb::sales        sales        staging  stagingpod::sales:root        /     stagingpod::smb-simple  smb   True
staging::smb::shipping     shipping     staging  stagingpod::shipping:root     /     stagingpod::smb-simple  smb   True
testing::smb::accounting   accounting   testing  testpod::accounting:root      /     testpod::smb-simple     smb   True
testing::smb::engineering  engineering  testing  testpod::engineering:root     /     testpod::smb-simple     smb   True
testing::smb::sales        sales        testing  testpod::sales:root           /     testpod::smb-simple     smb   True
testing::smb::shipping     shipping     testing  testpod::shipping:root        /     testpod::smb-simple     smb   True

The notable change here is that Export Name and Name now have slightly different meanings. Pre-6.8.7 versions used the Export Name as the unique identifier, since the single (implicit, now explicit) server naturally created a scope. Now the Export Name only has to be unique within the scope of a single server, so the same Export Name can appear on multiple servers, as seen in this example. The Name is different: it provides the array-unique export identifier and is a combination of the server name, the protocol name, and the export name.
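To make the scoping concrete, here is how two exports sharing an Export Name could be created on two different servers. A minimal sketch based on the puredir export create syntax demonstrated in the follow-up post below; the directory accounting:root and the policy smb-simple are hypothetical stand-ins:

# puredir export create --dir accounting:root --policy smb-simple --server prod --export-name accounting
# puredir export create --dir accounting:root --policy smb-simple --server staging --export-name accounting

Clients of both servers mount the share as "accounting", while the array keeps the two exports apart as prod::smb::accounting and staging::smb::accounting.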
List of Network file interfaces

# purenetwork eth list --service file
Name     Enabled  Type  Subnet  Address  Mask  Gateway  MTU   MAC                Speed     Services  Subinterfaces  Servers
array    False    vif   -       -        -     -        1500  56:e0:c2:c6:f2:1a  0.00 b/s  file      -              _array_server
prod     False    vif   -       -        -     -        1500  de:af:0e:80:bc:76  0.00 b/s  file      -              prod
staging  False    vif   -       -        -     -        1500  f2:95:53:3d:0a:0a  0.00 b/s  file      -              staging
testing  False    vif   -       -        -     -        1500  7e:c3:89:94:8d:5d  0.00 b/s  file      -              testing

As seen above, File network VIFs now reference a specific server. (This list is particularly artificial, since none of the interfaces is properly configured or enabled; the main message is that a File VIF now "points" to a specific server.)

Local Directory Services

Local Directory Service (LDS) is a newly introduced container for Local Users and Groups.

# pureds local ds list
Name     Domain
domain   domain
testing  testing
staging  staging.mycorp
prod     prod.mycorp

As already mentioned, every local user and group now has to belong to an LDS, which means the management output for those also carries that information.

# pureds local user list
Name           Local Directory Service  Built In  Enabled  Primary Group   Uid
Administrator  domain                   True      True     Administrators  0
Guest          domain                   True      False    Guests          65534
Administrator  prod                     True      True     Administrators  0
Guest          prod                     True      False    Guests          65534
Administrator  staging                  True      True     Administrators  0
Guest          staging                  True      False    Guests          65534
Administrator  testing                  True      True     Administrators  0
Guest          testing                  True      False    Guests          65534

# pureds local group list
Name              Local Directory Service  Built In  Gid
Audit Operators   domain                   True      65536
Administrators    domain                   True      0
Guests            domain                   True      65534
Backup Operators  domain                   True      65535
Audit Operators   prod                     True      65536
Administrators    prod                     True      0
Guests            prod                     True      65534
Backup Operators  prod                     True      65535
Audit Operators   staging                  True      65536
Administrators    staging                  True      0
Guests            staging                  True      65534
Backup Operators  staging                  True      65535
Audit Operators   testing                  True      65536
Administrators    testing                  True      0
Guests            testing                  True      65534
Backup Operators  testing                  True      65535
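Since the LDS scopes user names, the same user name can exist independently in several local directory services. A minimal sketch using the pureds local user create syntax shown in the follow-up post; the user alice and the target LDS names are hypothetical:

# pureds local user create alice --primary-group Administrators --password --local-ds prod
# pureds local user create alice --primary-group Administrators --password --local-ds staging

Each alice is a separate account with its own password, identified by the combination of user name and LDS.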
Conclusion

I showed how the FlashArray configuration might look, without providing many details about how to actually configure or test these setups; still, this article should provide a good overview of what to expect from version 6.8.7. There is plenty of information about this particular aspect of the release in the updated product documentation. Please let me know if there is any demand for a deep dive into any aspect of this feature.

Getting started with FlashArray File Multi-Server

The previous blog post, FlashArray File Multi-Server, was a feature overview from the perspective of a system that already has that setup. Let's look at the same feature from the viewpoint of a storage admin who needs to start using it. For the purpose of this blog post, I'll be starting with an empty test array. Please let me know if there is demand for a similar post focused on a brownfield use case.

Setting up data to share

Let's create a filesystem and a managed directory.

# purefs create myfs
Name  Created
myfs  2025-06-27 06:30:53 MDT

# puredir create myfs:mydir --path dir
Name        Path  File System  Created
myfs:mydir  /dir  myfs         2025-06-27 06:31:27 MDT

So far, nothing new.

Setting up a server

To create a server on FlashArray, a Local Directory Service either needs to be created during Server creation or a reference to an existing one needs to be provided. What's a Local Directory Service? It's a container for Local Users and Local Groups: a new container that helps manage users for different servers.

# pureds local ds create mylds --domain domain.my
Name   Domain
mylds  domain.my

Nothing prevents us now from creating the actual Server object.

# pureserver create myserver --local-ds mylds
Name      Dns         Directory Services  Local Directory Service  Created
myserver  management  -                   mylds                    2025-06-27 06:41:49 MDT

(Another option would be to use the "built-in" server, which is guaranteed to be there: _array_server. That is also the server which contains all the exports created before migration to a Multi-Server enabled release. As stated before, this post focuses on a greenfield scenario, thus we create a new server object.)

Setting up an export

The server can now be used when creating an export.

# puredir export create --dir myfs:mydir --policy smb-simple --server myserver --export-name myexport
Name                     Export Name  Server    Directory   Path  Policy      Type  Enabled
myserver::smb::myexport  myexport     myserver  myfs:mydir  /dir  smb-simple  smb   True

One configuration object wasn't created as part of this blog post: the policy "smb-simple". That's a pre-created policy which (unless modified) only sets the protocol to SMB and accepts all clients. The name of the export has been set to "myexport", meaning that this is the string to be used by the client while mounting. The address of this export will be "${hostname}/myexport".

Setting up networking

This is a bit tough to follow, since networking depends heavily on the local network setup and won't be reproducible in your environment. Still, let's see what needs to be done in the lab setup; hopefully it is similar to what needs to be done in the simple "test" scenario any reader might attempt. Let's create and enable the simplest File VIF possible.

# purenetwork eth create vif vif1 --address 192.168.1.100/24 --subinterfacelist ct0.eth2 --serverlist myserver
Name  Enabled  Type  Subnet  Address        Mask           Gateway  MTU   MAC                Speed       Services  Subinterfaces  Servers
vif1  False    vif   -       192.168.1.100  255.255.255.0  -        8950  22:39:87:5e:f4:79  10.00 Gb/s  file      eth2           myserver

# purenetwork eth enable vif1
Name  Enabled  Type  Subnet  Address        Mask           Gateway  MTU   MAC                Speed       Services  Subinterfaces  Servers
vif1  True     vif   -       192.168.1.100  255.255.255.0  -        8950  22:39:87:5e:f4:79  10.00 Gb/s  file      eth2           myserver

That should do it.
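Before wiring up directory services, it's worth a quick sanity check that the server, the VIF, and the export all reference each other as expected. A minimal verification sketch reusing list commands from the previous post:

# pureserver list
# purenetwork eth list --service file
# puredir export list

The outputs should show myserver in the Name, Servers, and Server columns respectively.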
Setting up Directory Services

A server on FlashArray always has a Local Directory Service; additionally, it can be configured to verify users against either LDAP or Active Directory. An LDAP configuration would be set up as:

# pureds create myds ...
# pureserver setattr --ds myds myserver

Or we can opt in to join Active Directory:

# puread account create myserver::ad-myserver --domain some.domain --computer-name myserver-computer-name

But we don't have to! Let's keep this very simple and use the Local Directory Service created before: it's already used by our Server, so the only thing left is to create a user (and let's join the Administrators group... because we can).

# pureds local user create pure --primary-group Administrators --password --local-ds mylds
Enter password:
Retype password:
Name  Local Directory Service  Built In  Enabled  Primary Group   Uid
pure  mylds                    False     True     Administrators  1000

Now we should have everything set up for a client to mount the exposed share.

Mounting an export (on Linux)

Let's use a Linux client, since it fits nicely with the rest of the operations and command-line examples we have so far. At this point, the share can just as easily be mounted on any Windows box, and all the configuration done on the command line can also be done in the GUI.

client # mount -v -t cifs -o 'user=pure,domain=domain.my,pass=pure,vers=3.02' //192.168.1.100/myexport /mnt
client # mount | grep mnt
//192.168.1.100/myexport on /mnt type cifs (rw,relatime,vers=3.02,sec=ntlmssp,cache=strict,username=pure,domain=domain.my,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.1.100,file_mode=0755,dir_mode=0755,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)

And now the "/mnt" directory on the client machine represents the Managed Directory "myfs:mydir" created before and can be used up to the permissions the user "pure" has. (And since this user is a member of the Administrators group, it can do anything.)
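For completeness, a sketch of the equivalent mount from a Windows client, using the standard net use command with the same address, export name, and local user as above (the drive letter is arbitrary):

C:\> net use Z: \\192.168.1.100\myexport /user:domain.my\pure

After the password prompt, drive Z: presents the same Managed Directory "myfs:mydir".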
Conclusion

This post showed how to set up a File Export on FlashArray using Servers. We can use the same FlashArray to create another server and export the same or a different managed directory, while using other network interfaces or Directory Services.

What We Learned About ActiveCluster for File from the Latest "Ask Us Everything"

The newly announced ActiveCluster for file extends Everpure's synchronous replication to unstructured workloads, so it was no surprise that the latest Ask Us Everything session drew a lot of attention. Attendees came ready with practical questions about how it works, where it fits, and what it could mean for real production environments. And host Don Poorman, Product Manager Quinn Summers, and Principal Technologist Russell Pope brought the Everpure answers. The conversation showed just how this new approach can help modernize resiliency, mobility, and day-to-day operations. Let's break down the biggest takeaways.

"Is This Just HA… or Something More?"

One of the most interesting threads came early: is ActiveCluster for file just another high availability solution? Short answer: no. Attendees pushed on this, and the response from Everpure's team was clear—this is about data mobility and policy-driven management, not just surviving a failure. Instead of treating HA as a one-off configuration, ActiveCluster is designed to align storage behavior with business intent. That shift matters. In traditional environments, HA is often bolted on and managed manually. Here, policies define things like performance, protection, and placement—and the system enforces them automatically across the fleet. For many in the session, that was a "wait, this is different" moment.

The Big Comparison: Legacy Replication vs. ActiveCluster

A standout question came from someone evaluating ActiveCluster as a replacement for legacy approaches like NetApp SVMDR. The discussion highlighted a key difference: granularity and consistency. Legacy solutions often replicate at a coarser level (think entire systems or large aggregates), which doesn't always align with how applications are structured. ActiveCluster instead works at the realm level, where both data and configuration are synchronously mirrored. That means:

- No mismatched failover scope
- No rebuilding configs on the other side
- No "did we forget something?" during a failover

It's a cleaner, more application-aligned model—and that resonated with the audience.

"What Actually Happens During a Failover?"

Attendees asked the right questions: Is failover automatic? What about DNS changes? How fast does it happen? The answers were refreshingly direct. In a stretched Layer 2 setup, failover is fully automatic and transparent—clients don't even notice. In more complex network designs, there may be some redirection (like DNS updates), but the data is already in sync. And timing? The expectation is on the order of seconds (often under 10). This is a capability currently unmatched by any legacy storage competitor to Everpure.

There was also a lot of interest in how Everpure avoids split-brain scenarios. The mediator service—hosted by Everpure or deployed locally if needed—acts as a lightweight "tie breaker" during network partitions. No extra infrastructure to manage in most cases, and no guesswork about which side should stay active.

Simplicity Came Up… A Lot

If there was one theme that kept coming back, it was simplicity. One attendee asked about setup, and the answer was basically: it's wizard-driven. That sparked a broader discussion about how legacy storage often assumes admins have time to relearn complex workflows. In reality, most teams are juggling multiple systems. The ability to stand up synchronous replication with a few guided steps—not scripts, not custom tooling—landed well. Even testing reflects that philosophy.
Instead of complex test procedures, the guidance was simple: pull cables, simulate real failures, and observe behavior. No artificial "test modes"—just real-world validation.

Data Mobility Is the Real Story

Another strong theme was mobility. ActiveCluster doesn't just protect data—it enables you to move it. The "stretch and unstretch" workflow means datasets can be mirrored, shifted, and re-homed without disruption. That's a big departure from traditional models, where moving data often means downtime, migration projects, or both. For teams thinking about workload placement, lifecycle management, or hybrid environments, this opens up new options.
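For context, on the block side ActiveCluster has long expressed this workflow through pod membership: adding a second array to a pod stretches it, removing one unstretches it. A hedged illustration using the classic block pod commands; the array and pod names are hypothetical, and since the file feature applies the analogous flow at the realm level, the exact file syntax may differ:

# purepod add --array arrayB pod1
(stretch: begins synchronous mirroring of pod1 to arrayB)
# purepod remove --array arrayB pod1
(unstretch: re-homes pod1 to a single array)

The point is the model: membership changes drive the mirroring, rather than a separate migration project.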
Real-World Use Cases

The audience also pushed beyond file shares into real workloads:

- Financial trading and payment systems
- Healthcare imaging and research data
- VMware/NFS environments

The takeaway: if it's mission-critical and file-based, it's a candidate.

Final Thought: Even More on the Horizon

Even with some initial constraints (like starting with new file systems), the field feedback shared during the session was telling: customers are ready to adopt this early. Why? Because the core value—resiliency, mobility, and simplicity—is already there. And if the session proved anything, it's that Everpure is building this in close collaboration with the community. The questions weren't just answered—they're shaping what comes next. If you're evaluating how to modernize file services, Everpure's approach is definitely one to consider. Check out this and all our other Ask Us Everything sessions. And keep the conversation going by jumping into the Everpure Community.

Boosting SQL Server Backup/Restore Performance: Threads and Parallelism

In this post, we'll discuss day 1 tuning you can do on your database hosts to take full advantage of your new high-performance backup storage. We'll go over a few tricks around database layout and backup configuration for maximum throughput, discuss some quirks with SMB, and finally discuss using S3 effectively.