FlashArray File Multi-Server
File support on FlashArray gains another highly demanded feature. With version 6.8.7, Purity introduces the concept of a Server, which ties together exports, directory services, and all of the other objects required for such a setup, namely DNS configuration and networking. From this version onwards, every directory export is associated with exactly one server. To recap, a server has associations to the following objects:

- DNS
- Active Directory / Directory Service (LDAP)
- Directory Export
- Local Directory Service

Local Directory Service (LDS) is another new entity introduced in version 6.8.7, and it represents a container for Local Users and Groups. Each server has its own LDS assigned to it, and an LDS also has a domain name, which means "domain" is no longer the hardcoded name of the local domain but a user-configurable option.

All of this implies plenty of changes in the user experience. Fortunately, most of them amount to adding a reference or the option to link a server, and the GUI now contains a Server management page, including a Server details page, which puts everything together and makes a Server configuration easy to understand, validate, and modify.

One question you might be asking right now is: can I use File services without Servers? The answer is no, not really. But don't be alarmed. Significant effort has been made to keep all commands and flows backwards compatible, so unless a script parses exact output and needs to be adjusted because of the new "Server" column, there should be no need to change anything. How did we manage to do that? A special Server called _array_server has been created, and if your configuration contains anything file related, it will be migrated to it during the upgrade.
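To make those relationships concrete, here is a minimal Python sketch of what a Server ties together. This is an illustrative model only, not the Purity API; all class and field names are my own.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LocalDirectoryService:
    # Each server owns exactly one LDS; its domain name is user-configurable
    # in 6.8.7 rather than being hardcoded to "domain".
    name: str
    domain: str

@dataclass
class Server:
    # A Server links DNS, a directory service (AD or LDAP), its LDS,
    # and the directory exports that belong to it.
    name: str
    dns: str
    directory_service: Optional[str]  # e.g. an AD account or LDAP config name
    local_directory_service: LocalDirectoryService
    exports: List[str] = field(default_factory=list)

# The implicit pre-6.8.7 configuration becomes an explicit server named
# _array_server, whose LDS keeps the historical local domain name "domain".
array_server = Server(
    name="_array_server",
    dns="management",
    directory_service=None,
    local_directory_service=LocalDirectoryService("domain", "domain"),
)
```

The point of the sketch is simply that every file-related object now hangs off exactly one server, with _array_server playing that role for migrated configurations.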
Let me also offer a taste of how the configuration could look once the array is updated to the latest version.

List of Servers

# pureserver list
Name           Dns         Directory Services  Local Directory Service  Created
_array_server  management  -                   domain                   2025-06-09 01:00:26 MDT
prod           prod        -                   prod                     2025-06-09 01:38:14 MDT
staging        management  stage               staging                  2025-06-09 01:38:12 MDT
testing        management  testing             testing                  2025-06-09 01:38:11 MDT

List of Active Directory accounts

Since we can join multiple AD servers, we can now have multiple AD accounts, up to one per server.

# puread account list
Name           Domain            Computer Name  TLS       Source
ad-array       <redacted>.local  ad-array       required  -
prod::ad-prod  <redacted>.local  ad-prod        required  -

ad-array is the configuration for the _array_server; for backwards compatibility reasons, the server-name prefix has not been added to it. The prefix is there for the account connected to server prod (and for any other server).

List of Directory Services (LDAP)

Directory Services were also slightly reworked. Before 6.8.7 there were only two configurations, management and data, which is obviously not enough for more than one server (management is reserved for array management access and can't be used for File services). With the 6.8.7 release, it is possible to fully manage Directory Service configurations and link them to individual servers.

# pureserver list
Name           Dns         Directory Services  Local Directory Service  Created
_array_server  management  -                   domain                   2025-06-09 01:00:26 MDT
prod           prod        -                   prod                     2025-06-09 01:38:14 MDT
staging        management  stage               staging                  2025-06-09 01:38:12 MDT
testing        management  testing             testing                  2025-06-09 01:38:11 MDT

Please note that these objects are intentionally not enabled / not configured.
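The account-naming rule visible in the `puread account list` output above can be sketched as a tiny helper. This is an illustrative reconstruction of the naming convention described in this article, not a Purity function:

```python
def ad_account_display_name(server: str, account: str) -> str:
    """Reconstruct the displayed AD account name.

    For backwards compatibility, the account belonging to _array_server
    keeps its bare name; accounts on any other server are shown with a
    "<server>::" prefix.
    """
    if server == "_array_server":
        return account
    return f"{server}::{account}"

# The two rows from the listing above:
assert ad_account_display_name("_array_server", "ad-array") == "ad-array"
assert ad_account_display_name("prod", "ad-prod") == "prod::ad-prod"
```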
List of Directory exports

# puredir export list
Name                       Export Name  Server   Directory                     Path  Policy                  Type  Enabled
prod::smb::accounting      accounting   prod     prodpod::accounting:root      /     prodpod::smb-simple     smb   True
prod::smb::engineering     engineering  prod     prodpod::engineering:root     /     prodpod::smb-simple     smb   True
prod::smb::sales           sales        prod     prodpod::sales:root           /     prodpod::smb-simple     smb   True
prod::smb::shipping        shipping     prod     prodpod::shipping:root        /     prodpod::smb-simple     smb   True
staging::smb::accounting   accounting   staging  stagingpod::accounting:root   /     stagingpod::smb-simple  smb   True
staging::smb::engineering  engineering  staging  stagingpod::engineering:root  /     stagingpod::smb-simple  smb   True
staging::smb::sales        sales        staging  stagingpod::sales:root        /     stagingpod::smb-simple  smb   True
staging::smb::shipping     shipping     staging  stagingpod::shipping:root     /     stagingpod::smb-simple  smb   True
testing::smb::accounting   accounting   testing  testpod::accounting:root      /     testpod::smb-simple     smb   True
testing::smb::engineering  engineering  testing  testpod::engineering:root     /     testpod::smb-simple     smb   True
testing::smb::sales        sales        testing  testpod::sales:root           /     testpod::smb-simple     smb   True
testing::smb::shipping     shipping     testing  testpod::shipping:root        /     testpod::smb-simple     smb   True

The notable change here is that Export Name and Name now have slightly different meanings. Pre-6.8.7 versions used the Export Name as the unique identifier, since there was a single (implicit, now explicit) server, which naturally created a scope. Now the Export Name only has to be unique within the scope of a single server, so the same Export Name can appear on multiple servers, as seen in this example. The Name is different: it provides the array-unique export identifier and is a combination of the server name, the protocol name, and the export name.
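The composition of the array-unique Name described above can be expressed as a pair of small helpers. Again, this is an illustrative sketch of the naming scheme shown in the listing, not Purity code:

```python
from typing import Tuple

def export_full_name(server: str, protocol: str, export_name: str) -> str:
    # Array-unique Name = server + protocol + per-server Export Name.
    return f"{server}::{protocol}::{export_name}"

def parse_export_name(full_name: str) -> Tuple[str, str, str]:
    # Invert the composition: recover (server, protocol, export_name).
    server, protocol, export_name = full_name.split("::")
    return server, protocol, export_name

# The same Export Name on two different servers yields two distinct Names:
assert export_full_name("prod", "smb", "accounting") == "prod::smb::accounting"
assert export_full_name("staging", "smb", "accounting") == "staging::smb::accounting"
```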
List of Network file interfaces

# purenetwork eth list --service file
Name     Enabled  Type  Subnet  Address  Mask  Gateway  MTU   MAC                Speed     Services  Subinterfaces  Servers
array    False    vif   -       -        -     -        1500  56:e0:c2:c6:f2:1a  0.00 b/s  file      -              _array_server
prod     False    vif   -       -        -     -        1500  de:af:0e:80:bc:76  0.00 b/s  file      -              prod
staging  False    vif   -       -        -     -        1500  f2:95:53:3d:0a:0a  0.00 b/s  file      -              staging
testing  False    vif   -       -        -     -        1500  7e:c3:89:94:8d:5d  0.00 b/s  file      -              testing

As seen above, File network VIFs now reference a specific server. (This list is admittedly artificial, since none of the interfaces is properly configured or enabled; the main message is that a File VIF now "points" to a specific server.)

Local Directory Services

Local Directory Service (LDS) is a newly introduced container for Local Users and Groups.

# pureds local ds list
Name     Domain
domain   domain
testing  testing
staging  staging.mycorp
prod     prod.mycorp

As already mentioned, all local users and groups now have to belong to an LDS, which means their management also includes that information.

# pureds local user list
Name           Local Directory Service  Built In  Enabled  Primary Group   Uid
Administrator  domain                   True      True     Administrators  0
Guest          domain                   True      False    Guests          65534
Administrator  prod                     True      True     Administrators  0
Guest          prod                     True      False    Guests          65534
Administrator  staging                  True      True     Administrators  0
Guest          staging                  True      False    Guests          65534
Administrator  testing                  True      True     Administrators  0
Guest          testing                  True      False    Guests          65534

# pureds local group list
Name              Local Directory Service  Built In  Gid
Audit Operators   domain                   True      65536
Administrators    domain                   True      0
Guests            domain                   True      65534
Backup Operators  domain                   True      65535
Audit Operators   prod                     True      65536
Administrators    prod                     True      0
Guests            prod                     True      65534
Backup Operators  prod                     True      65535
Audit Operators   staging                  True      65536
Administrators    staging                  True      0
Guests            staging                  True      65534
Backup Operators  staging                  True      65535
Audit Operators   testing                  True      65536
Administrators    testing                  True      0
Guests            testing                  True      65534
Backup Operators  testing                  True      65535

Conclusion

I have shown how the FA configuration might look, without going into much detail about how to actually configure or test these setups; still, this article should provide a good overview of what to expect from version 6.8.7. There is plenty of information about this particular aspect of the release in the updated product documentation. Please let me know if there is any demand to deep-dive into any aspect of this feature.

Pure Storage Delivers Critical Cyber Outcomes
“We don’t have storage problems. We have outcome problems.” - Pure customer in a recent cyber briefing

No matter what we are buying, what we are really buying is a desired outcome. If you buy a car, you are buying some sort of outcome, or multiple outcomes: Point A to Point B, comfort, dependability, seat heaters, or, if you are like me, a real, live Florida Man, seat coolers! The same is true when solving for cyber outcomes, and a storage foundation that drives cyber resilience is often overlooked. A strong storage foundation improves data security, resilience, and recovery. With these characteristics, organizations can recover in hours vs. days. Here are some top cyber resilience outcomes Pure Storage is delivering:

- Native, Layered Resilience
- Fast Analytics
- Rapid Restore
- Enhanced Visibility

We will tackle all of these in this blog space (multi-part post alert!), but let's start with the native, layered resilience Pure provides customers. Layered Resilience refers to a comprehensive approach to ensuring data protection and recovery through multiple layers of security and redundancy. This architecture is designed to provide robust protection against data loss, corruption, and cyber threats, ensuring business continuity and rapid recovery in the event of a disaster.

Why is layered resilience important? Different data needs different protection. My photo collection, while important to me, doesn't require the same level of protection as the critical application data needed to keep the company running. Layered resilience means having different layers of resilience and recovery. Super critical data needs super critical recovery. We are referring to the applications that are the life-blood of organizations: order processing, patient services, or trading applications. These may only account for 5% of your data, but they drive 95% of the revenue. Many organizations protect these with high availability, which provides excellent resilience against disasters and system outages.
But for malicious events, such as ransomware, protection is needed to ensure that recoverable data is available if an attack corrupts or destroys the production data. Scheduled snapshots can protect that data from the time the data is born. Little baby data. Protect the baby!

Pure Snapshots are a critical feature, providing efficient, zero-footprint copies of data that can be quickly created and restored, ensuring data protection and business continuity. Pure snapshots are optimized for data reduction, ensuring minimal space consumption. This is achieved through global data reduction technologies that compress and deduplicate data, making snapshots space-efficient. They are designed to be simple and flexible, with zero performance overhead and the ability to create tens of thousands of snapshots instantly. They are also integrated with Pure1 (part of our Enhanced Visibility discussion) for enhanced visibility, management, and security, reducing the need for complex orchestration and manual intervention.

Snapshots can be used to create new volumes with full capabilities, allowing for mounting, reading, writing, and further snapshotting without dependencies on one another. This flexibility supports various use cases, including point-in-time restores and data recovery. In events that require clean and secure recovery, it is far more desirable to leverage snapshots, where you can scan and verify cleanliness and safety, often in parallel, and resetting to an earlier point in time takes seconds rather than days.

But not even these amazing local snapshots are enough. What if your local site is rendered unavailable for some reason? Do you have control of your data to be able to recover in that scenario? Replicating those local snapshots to a second site could enable more flexibility in recovery.
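The snapshot behavior described above — an immutable point-in-time image from which fully independent, writable volumes can be created — can be illustrated with a toy model. This is not Pure's implementation (FlashArray snapshots are zero-footprint and metadata-based, not deep copies); the classes below only demonstrate the independence semantics:

```python
from copy import deepcopy

class Snapshot:
    """Toy point-in-time, read-only image of a volume's contents."""
    def __init__(self, name, data):
        self.name = name
        self._data = data  # treated as immutable in this sketch

    def copy_to_volume(self, new_name):
        # A volume created from a snapshot has full capabilities
        # (read/write/snapshot) and no dependency on its source.
        return Volume(new_name, deepcopy(self._data))

class Volume:
    """Toy writable volume that can be snapshotted at any time."""
    def __init__(self, name, data=None):
        self.name = name
        self.data = dict(data or {})

    def snapshot(self, suffix):
        return Snapshot(f"{self.name}.{suffix}", deepcopy(self.data))

# Take a snapshot, let production move on, then restore independently:
prod = Volume("accounting", {"ledger": "v1"})
snap = prod.snapshot("before-attack")
prod.data["ledger"] = "corrupted"                     # malicious event
restore = snap.copy_to_volume("accounting-restore")   # clean recovery copy
```

After these steps, `restore` still holds the pre-attack data while `prod` reflects the corruption, which is exactly why a scheduled snapshot gives you a clean recovery point.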
We have had customers leverage our High Availability solution (ActiveCluster) across sites and then engage snapshots and asynchronous replication to a third site as part of their recovery plan.

Data that requires extended retention and granularity is typically handled by a data control plane application that streams a backup copy to a repository. This is usually a last line of defense in case of an event, as the recovery time objective is longer when considering a streaming recovery of 50%, 75%, or 100% of a data center. Still, this is a layer of resiliency that a comprehensive plan should account for. And if these repositories are on Pure Storage, they too can be protected by SafeMode methodologies and other security measures such as Object Lock API, Freeze Locked Objects, and WORM compliance. Most importantly, this last line of defense can be supercharged for recovery by the predictable, performant platform Pure provides. Some outcomes of this layer of resilience involve Isolated Recovery Environments, which add even more security and create Clean Rooms that isolate recovery to ensure you will not re-introduce the event's origin back into production. In these solutions, the speed benefits that Pure provides are critical to making these designs a reality.

Of course, the final frontier is the archive layer. This is the part of the plan that usually falls under compliance SLAs, where data is required to be maintained for longer periods of time. Still, more and more, there are performance and warm-data requirements for even these data sets, where AI and other queries can benefit from even the oldest of data.

One never knows what layer of resilience will be required for any single event. Having the best possible resilience enables any company to recover, and recover quickly, from an attack. But native resilience is just one of the outcomes we deliver.
Come back to read how we are delivering fast analytics outcomes in an environment that seeks to discover anomalies as fast as possible. Exit Question: How resilient is your data today? Jason Walker is a technical strategy director for cyber related areas at Pure Storage and a real, live, Florida Man. No animals or humans were injured in the creation of this post.