OT: The Architecture of Interoperability
In a previous post, we explored the fundamental divide between Information Technology (IT) and Operational Technology (OT). We established that while IT manages data and applications, OT controls the physical heartbeat of our world, from factory floors to water treatment plants. In this post we dive deeper into the bridge that connects them: interoperability.

As Industry 4.0 and the Internet of Things (IoT) accelerate, the "air gap" that once separated these domains is eroding. For modern enterprises, the goal isn't just to have IT and OT coexist, but to have them communicate seamlessly. Whether the use case is security, real-time quality control, or predictive maintenance, to name a few, interoperability becomes the critical engine for operational excellence.

The Interoperability Architecture

Interoperability is more than connecting cables; it’s about creating a unified architecture where data flows securely between the shop floor and the “top floor.” In legacy environments, OT systems (like SCADA and PLCs) often run on isolated, proprietary networks that don’t speak the same language as IT’s cloud-based analytics platforms. Bridging this gap requires a robust interoperability architecture. This architecture must support:

Industrial Data Lake: A single storage platform that can handle block, file, and object data is essential for bridging the gap between IT and OT. This unified approach prevents data silos by allowing proprietary OT sensor data to coexist on the same high-performance storage as IT applications (such as ERP and CRM). The benefit is a high-performance industrial data lake where OT and IT data from various sources can be streamed directly (see the short sketch at the end of this section), minimizing the need for data movement, a critical efficiency gain.

Real-Time Analytics: OT sensors continuously monitor machine conditions, including vibration, temperature, and other critical parameters, generating real-time telemetry. An interoperable architecture built on high-performance flash storage enables instant processing of this data stream. By integrating IT analytics platforms with predictive algorithms, the system identifies anomalies before they escalate, accelerating maintenance response, optimizing operations, and streamlining exception handling. This reduces downtime, lowers maintenance costs, and extends overall asset life.

Standards-Based Design: As recent cybersecurity research outlines, modern OT environments require datasets that correlate physical process data with network traffic logs to detect anomalies effectively. An interoperable architecture facilitates this by centralizing data for analysis without compromising the security posture. IT/OT convergence also requires a platform capable of securely managing OT data, often through IT standards. An API-first design allows the entire platform to be driven through robust APIs, so IT can easily integrate storage provisioning, monitoring, and data protection into standard, policy-driven IT automation tools (e.g., Kubernetes, orchestration software).

Pure Storage addresses these interoperability requirements with the Purity operating environment, which abstracts the complexity of the underlying hardware and provides a seamless, multiprotocol experience (NFS, SMB, S3, FC, iSCSI). This ensures that whether data originates from a robotic arm or a CRM application, it is stored, protected, and accessible through a single, unified data plane.
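To make "streaming OT telemetry directly into an industrial data lake" a bit more concrete, here is a minimal sketch that writes sensor readings to an S3-compatible bucket with boto3. The endpoint URL, bucket name, sensor fields, and key layout are illustrative assumptions, not part of any Pure Storage product API; credentials are resolved through the usual boto3 mechanisms (environment variables or a local config).

```python
import json
import time

import boto3

# Assumed values for illustration only: point these at your own
# S3-compatible object endpoint and telemetry bucket.
ENDPOINT = "https://datalake.example.internal"
BUCKET = "ot-telemetry"

s3 = boto3.client("s3", endpoint_url=ENDPOINT)


def publish_reading(sensor_id: str, vibration: float, temperature: float) -> None:
    """Write one telemetry sample as a timestamped JSON object."""
    sample = {
        "sensor_id": sensor_id,
        "vibration_mm_s": vibration,
        "temperature_c": temperature,
        "ts": time.time(),
    }
    # One object per sample keeps the layout simple; real pipelines
    # would typically batch samples before writing.
    key = f"raw/{sensor_id}/{int(sample['ts'])}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(sample).encode())


publish_reading("press-line-07", vibration=2.4, temperature=71.3)
```

Once the raw samples land in the object store, the same bucket can be queried by IT analytics tools without ever touching the OT network again, which is the efficiency gain described above.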
Real-World Application: A Large Regional Water District

Consider a large regional water district, a major provider serving millions of residents. In an environment like this, maintaining water quality and service reliability is a 24/7, mission-critical OT function. Its infrastructure relies on complex SCADA systems to monitor variables like flow rates, tank levels, and chemical compositions across hundreds of miles of pipelines and treatment facilities.

By adopting an interoperable architecture, an organization like this can break down the silos between its operational data and its IT capabilities. Instead of SCADA data remaining locked in a control room, it can be securely replicated to IT environments for long-term trending and capacity planning. For instance, historical flow data combined with predictive analytics can help forecast demand spikes or identify aging infrastructure before a leak occurs; a brief illustrative sketch of this kind of analysis follows at the end of this post. This convergence transforms raw operational data into actionable business intelligence, ensuring reliability for the communities the district serves.

Why We Champion Compliance and Governance

Opening up OT systems to IT networks can introduce new risks. In the world of OT, "move fast and break things" is not an option; reliability and safety are paramount. This is why Pure Storage wraps interoperability in a framework of compliance and governance, including:

FIPS 140-2 Certification & Common Criteria: We utilize FIPS 140-2 certified encryption modules and have achieved Common Criteria certification.

Data Sovereignty: Our architecture includes built-in governance features like Always-On Encryption and rapid data locking to ensure compliance with domestic and international regulations, protecting sensitive data regardless of where it resides.

Compliance: Pure Fusion delivers policy-defined storage provisioning, automating deployments with specified requirements for tags, protection, and replication.

By embedding these standards directly into the storage array, Pure Storage allows organizations to innovate with interoperability while maintaining the security posture that critical OT infrastructure demands.

Next in the series: We will explore IT/OT interoperability further and look at processing data at the edge. Stay tuned!
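As promised above, here is a minimal sketch of the kind of predictive analysis that becomes possible once SCADA flow data is replicated to the IT side. The sampling interval, values, and threshold are assumptions for illustration, not district data; it simply flags flow readings that deviate sharply from a rolling baseline using pandas.

```python
import pandas as pd


def flag_flow_anomalies(flow: pd.Series, window: int = 96,
                        threshold: float = 3.0) -> pd.DataFrame:
    """Flag flow readings that deviate strongly from recent behaviour.

    flow:      time-indexed series of flow-rate readings (e.g. 15-minute SCADA samples)
    window:    samples in the rolling baseline (96 = one day at 15-minute intervals)
    threshold: how many rolling standard deviations counts as an anomaly
    """
    baseline = flow.rolling(window, min_periods=window).mean()
    spread = flow.rolling(window, min_periods=window).std()
    zscore = (flow - baseline) / spread
    return pd.DataFrame({
        "flow": flow,
        "zscore": zscore,
        "anomaly": zscore.abs() > threshold,  # possible leak or demand spike
    })


# Synthetic example: steady flow with a short spike. Replace with readings
# replicated from the SCADA historian.
readings = pd.Series(
    [100.0] * 200 + [160.0] * 4 + [100.0] * 50,
    index=pd.date_range("2025-01-01", periods=254, freq="15min"),
)
print(flag_flow_anomalies(readings).query("anomaly"))
```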
Understanding Deduplication Ratios

It’s super important to understand where deduplication ratios come from in relation to backup applications and data storage. Deduplication prevents the same data from being stored again, lowering the data storage footprint. When hosting virtual environments on arrays like FlashArray//X™ and FlashArray//C™, you can see tremendous amounts of native deduplication due to the repetitive nature of these environments. Backup applications and targets have a different makeup. Even so, deduplication ratios have long been a talking point in the data storage industry and continue to be a decision point and factor in buying cycles. Data Domain pioneered the tactic of overstating this effectiveness, leaving customers thinking the vendor’s appliance must have a magic wand to reduce data by 40:1. I want to take the time to explain how deduplication ratios are derived in this industry and which variables to look for to figure out exactly what to expect in terms of deduplication and data footprint.

Let’s look at a simple example of a data protection scenario.

Example: A company has 100TB of assorted data it wants to protect with its backup application. The necessary agents are configured, perform their intelligent data collection, and send the data to the target. Initially, and typically, the application will leverage both software compression and deduplication. Compression by itself will almost always yield a decent amount of data reduction. In this example, we’ll assume 2:1, which means the first data set goes from 100TB to 50TB. Deduplication doesn’t usually do much data reduction on the first baseline backup. Sometimes there are some efficiencies, like the repetitive data in virtual machines, but for the sake of this generic example, we’ll leave it at 50TB total.

So, full backup 1 (baseline): 50TB

Now, there are scheduled incremental backups that occur daily from Monday to Friday. Let’s say these daily changes are 1% of the data set, so each day there would be 1TB of additional data stored. 5 days at 1TB = 5TB. Add the 2:1 compression, and you have an additional 2.5TB. The 50TB baseline plus 2.5TB of unique blocks means a total of 52.5TB of data stored.

Let’s check the deduplication rate now: 105TB / 52.5TB = 2x

You may ask: “Wait, that 2:1 is really just the compression? Where is the deduplication?” Great question, and the reason I’m writing this blog. Deduplication prevents the same data from being stored again. With a single full backup and incremental backups, you wouldn’t see much more than just the compression. Where deduplication measures its impact is in the assumption that you would be sending duplicate data to your target. This is usually discussed as data under management.

Data under management is the logical data footprint of your backup data, as if you were regularly backing up the entire data set, not just changes, without deduplication or compression. For example, let’s say we didn’t schedule incremental backups but scheduled full backups every day instead. Without compression or deduplication, the data load would be 100TB for the initial baseline and then the same 100TB plus the daily growth:

Day 0 (baseline): 100TB
Day 1 (baseline + changes): 101TB
Day 2 (baseline + changes): 102TB
Day 3 (baseline + changes): 103TB
Day 4 (baseline + changes): 104TB
Day 5 (baseline + changes): 105TB
Total, if no compression/deduplication: 615TB

This 615TB total is data under management.
Now, if we look at our actual post-compression, post-dedupe number from before (52.5TB), we can figure out the deduplication impact: 615 / 52.5 = 11.714x

Looking at this over a 30-day period, you can see how the dedupe ratios can get really aggressive. For example:

100TB x 30 days = 3,000TB + (1TB x 30 days) = 3,030TB
3,030TB / 65TB (actual data stored) = 46.62x dedupe ratio

In summary, for 100TB with a 1% change rate over one week:

Full backup + daily incremental backups = 52.5TB stored, and a 2x DRR
Full daily backups = 52.5TB stored, and an 11.7x DRR

That is how deduplication ratios really work: they are a fictional function of “what if dedupe didn’t exist, but you stored everything on disk anyway” scenarios. They’re a math exercise, not a reality exercise. Front-end data size, daily change rate, and retention are the biggest variables to look at when sizing or understanding the expected data footprint and the related data reduction/deduplication impact.

In our scenario, we looked at one particular data set. Most companies will have multiple data types, and there can be even greater redundancy when accounting for full backups across those as well. So while it matters, consider that a bonus.
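If you want to check this math yourself, the one-week scenario above is easy to reproduce. Here is a small sketch with the same assumed inputs (100TB front end, 1% daily change, 2:1 compression); the function name and model are mine, not any vendor tool.

```python
def backup_footprint(front_end_tb: float, change_rate: float,
                     days: int, compression: float) -> dict:
    """Model the simple scenario from this post.

    front_end_tb: size of the protected data set (baseline full backup)
    change_rate:  fraction of the data set that changes each day (e.g. 0.01)
    days:         number of daily backups after the baseline
    compression:  software compression ratio (e.g. 2.0 for 2:1)
    """
    daily_change_tb = front_end_tb * change_rate

    # What actually lands on the target: baseline + unique changed blocks,
    # both reduced by compression.
    stored_tb = (front_end_tb + daily_change_tb * days) / compression

    # Data under management: as if a full copy were kept every day,
    # with no deduplication or compression.
    under_mgmt_tb = sum(front_end_tb + daily_change_tb * d
                        for d in range(days + 1))

    return {
        "stored_tb": stored_tb,
        "data_under_management_tb": under_mgmt_tb,
        "dedupe_ratio": under_mgmt_tb / stored_tb,
    }


# One week, 1% daily change, 2:1 compression:
# 52.5TB stored, 615TB under management, ~11.7x ratio.
print(backup_footprint(100, 0.01, days=5, compression=2.0))
```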
Ask Us Everything Recap: Making Purity Upgrades Simple

At our recent Ask Us Everything session, we put a spotlight on something every storage admin has an opinion about: software upgrades. Traditionally, storage upgrades have been dreaded: late nights, service windows, and the fear of downtime. But as attendees quickly learned, Pure Storage Purity upgrades are designed to be a very different experience. Our panel of Pure Storage experts included our host Don Poorman, Technical Evangelist, and special guests Sean Kennedy and Rob Quast, Principal Technologists. Here are the questions that sparked the most conversation, and the insights our panel shared.

“Are Purity upgrades really non-disruptive?”

This one came up right away, and for good reason. Many admins have scars from upgrade events at other vendors. Pure experts emphasized that non-disruptive upgrades (NDUs) are the default. With thousands performed in the field, even for mission-critical applications, upgrades run safely in the background. Customers don’t need to schedule middle-of-the-night windows just to stay current.

“Do I need to wait for a major release?”

Attendees wanted to know how often they should upgrade, and whether “dot-zero” releases are safe. The advice: don’t wait too long. With Pure’s long-life releases (like Purity 6.9), you can stay current without chasing every new feature release. And because Purity upgrades are included in your Evergreen subscription, you’re not paying extra to get value; you just need to install the latest version. Attendees found the session slide illustrating the different kinds of Purity releases especially helpful.

“How do self-service upgrades work?”

Admins were curious about how much they can do themselves versus involving Pure Storage support. The good news: self-service upgrades are straightforward through Pure1, but you’re never on your own. Pure Technical Services knows that you’re running an upgrade, and if an issue arises you’re automatically moved to the front of the queue. If you want a co-pilot, Pure Storage support can walk you through it live. Either way, the process is fast, repeatable, and built for confidence. Upgrading your Purity version has never been easier now that Self-Service Upgrades lets you modernize on your schedule.

“Why should I upgrade regularly?”

This is where the conversation shifted from fear to excitement. Staying current doesn’t just keep systems secure; it unlocks new capabilities like:

Pure Fusion™: a unified, fleet-wide control plane for storage.
FlashArray™ Files: modern file services, delivered from the same trusted platform.
Ongoing performance, security, and automation enhancements that come with every release.

One attendee summed it up perfectly: “Upgrading isn’t about fixing problems; it’s about getting new toys.”

The Takeaway

The biggest lesson from this session? Purity upgrades aren’t something to fear; they’re something to look forward to. They’re included with your Evergreen subscription, they don’t disrupt your environment, and they unlock powerful features that make storage easier to manage. So if you’ve been putting off your next upgrade, take a fresh look. Chances are, Fusion, Files, or another feature you’ve been waiting for is already there; you just need to turn it on.

👉 Want to keep the conversation going? Join the discussion in the Pure Community and share your own upgrade tips and stories.
Be sure to join our next Ask Us Everything session, and catch up with past sessions here!
Don’t Wait, Innovate: Long‑Life Release 6.9.0 Is Your Gateway to Continuous Innovation

How Pure Releases Work (and Why You Should Care)

Pure Storage doesn’t make you choose between stability and innovation:

Feature Releases arrive monthly and are supported for 9 months. They’re production‑ready and ideal if you like to live on the cutting edge.
Long‑Life Releases (LLRs) bundle those feature releases into a thoroughly tested version that is supported for three years. LLR 6.9.0 is essentially all the innovation of those feature releases rolled into one update.

This dual approach means you can adopt new features as soon as they’re ready or wait for the next stable release; either way, you keep moving forward.

Not sure what features you’re missing? Not a problem, as we have a tool for that. A coworker reminded me last week that Pure1’s AI Copilot can tell you exactly what you’ve been missing. Here’s how easy it is to find out: log into Pure1, click on the AI Copilot tab, and type your question. I tried: “Please provide all features for FlashArray since version 6.4 of Purity OS.” Copilot returned a detailed rundown of new capabilities across each release. In just a couple of minutes, I saw everything I’d overlooked, with no digging through release notes or calling support required.

A Taste of What You’ve Been Missing

Here’s a snapshot of the goodies you may have missed across the last few years of releases:

Platform enhancements:
FlashArray//E platform (6.6.0) extends Pure’s simplicity to tier‑3 workloads.
Gen 2 chassis support (6.8.0) delivers more performance and density with better efficiency.
150 TB DirectFlash modules (6.8.2) boost capacity without compromising speed.

File services advancements:
FlashArray File (GA in 6.8.2) lets you manage block and file workloads from the same array.
SMB Continuous Availability shares (6.8.6) keep file services online through failures.
Multi‑server/domain support (6.8.7) scales file services across larger environments.

Security and protection:
Enhanced SafeMode protection (6.4.3) quadruples local snapshot capacity and adds hardware tokens for instant data locking, which is vital in a ransomware era.
Over‑the‑wire encryption (6.6.7) secures asynchronous replication.

Pure Fusion: We can’t talk about this enough. Think of it as fleet intelligence. Fusion applies your policies across every array and optimizes placement automatically, cutting operational overhead.

Purity OS: It’s Not Just Firmware

Every Purity OS update adds value to your existing hardware. Recent improvements include support for new NAND sources, “titanium” efficiency power supplies, and advanced diagnostics. These aren’t minor tweaks; they’re part of Pure’s Evergreen promise that your hardware investment keeps getting better over time.

Why Waiting Doesn’t Pay Off

It’s tempting to delay updates, but with Pure, waiting often means you’re missing out on:

Security upgrades that counter new threats.
Performance gains like NVMe/TCP support and ActiveCluster improvements.
Operational efficiencies such as open metrics and better diagnostics.
Future‑proofing features that prepare you for upcoming innovations.

Your Roadmap to Capture These Benefits

Assess your current state: Use AI Copilot to see exactly what you’d gain by moving to LLR 6.9.0.
Plan your update: Pure’s non‑disruptive upgrades let you modernize without downtime.
Explore new features: Dive into Fusion, enhanced file services, and expanded security capabilities.
Connect with the community: Share experiences with other users to accelerate your learning curve.
The Bottom Line

Pure’s Evergreen model means your hardware doesn’t just retain its value; it continues to gain value. Long‑Life Release 6.9.0 is a gateway to innovation. In a world where data is your competitive edge, standing still is equivalent to moving backward. Ready to see what you’ve been missing? Log into Pure1, fire up Copilot, and let it show you the difference between where you are and where you could be.
Hyper-V: The Municipal Fleet Pickup: Familiar, Capable, and Still Worth Considering

Hyper-V remains a practical, cost-efficient option for Windows-centric environments, offering strong features and seamless Azure integration. This blog explores where it shines, where it struggles, and how Pure ensures enterprise-grade data protection no matter which virtualization road you take.
Tips for High Availability SQL Server Environments with ActiveCluster

Tip 1: Use Synchronous Replication for Zero RPO/RTO
Why it matters: ActiveCluster mirrors every write across two FlashArrays before acknowledging the operation to the host. This ensures zero Recovery Point Objective (RPO) and zero Recovery Time Objective (RTO), which are critical for maintaining business continuity in SQL Server environments.
Best practice: Keep inter-site latency below 5 ms for optimal performance. While the system tolerates up to 11 ms, staying under 5 ms minimizes write latencies and transactional slowdowns.

Tip 2: Group Related Volumes with Stretched Pods
Why it matters: Stretched pods ensure all volumes within them are synchronously replicated as a unit, maintaining data consistency and simplifying management. This is crucial for SQL Server deployments where data, log, and tempdb volumes need to fail over together.
Best practice: Place all volumes related to a single SQL Server instance into the same pod. Use separate pods only for unrelated SQL Server instances or non-database workloads that have different replication, performance, or management requirements.

Tip 3: Use Uniform Host Access with SCSI ALUA Optimization
Why it matters: Uniform host access allows each SQL Server node to see both arrays. Combined with SCSI ALUA (Asymmetric Logical Unit Access), this setup enables the host to prefer the local array, improving latency while maintaining redundancy.
Best practice: Use the Preferred Array setting in FlashArray for each host to route I/O to the closest array. This avoids redundant round trips across WAN links, especially in multi-site or metro-cluster topologies. Install the correct MPIO drivers, validate paths, and use load-balancing policies like Round Robin or Least Queue Depth.

Tip 4: Test Failover on a Regular Cadence
Why it matters: ActiveCluster is designed for transparent failover, but you shouldn’t assume it just works. Testing failover on a regular schedule validates the full stack, from storage to SQL Server clustering, and exposes misconfigurations before they cause downtime.
Best practice: Simulate array failure by disconnecting one side and verifying that SQL Server remains online via the surviving array. Monitor replication and quorum health using Pure1, and ensure Windows Server Failover Clustering (WSFC) responds correctly. (A small scripted validation example is included at the end of this post.)

Tip 5: Use ActiveCluster for Seamless Storage Migration
Why it matters: Storage migrations are inevitable for lifecycle refreshes, performance upgrades, or data center moves. ActiveCluster lets you replicate and migrate SQL Server databases with zero downtime.
Best practice: Follow a six-step phased migration:
1. Assess and plan
2. Set up the environment
3. Configure ActiveCluster
4. Test replication and failover
5. Migrate by removing paths from the source array
6. Validate with DBCC CHECKDB and application testing
This ensures a smooth handover with no data loss or service interruption.

Tip 6: Align with VMware for Virtualized SQL Server Deployments
Why it matters: Many SQL Server instances run on VMware. Using ActiveCluster with vSphere VMFS or vVols brings granular control, high availability, and site-aware storage policies.
Best practice: Deploy SQL Server on vVols for tighter storage integration, or use VMFS when simplicity is preferred. Stretch datastores across sites with ActiveCluster for seamless VM failover and workload mobility.

Tip 7: Avoid Unsupported Topologies
Why it matters: ActiveCluster is designed for two-site, synchronous setups.
Misusing it across unsupported configurations, like hybrid cloud sync or mixing non-uniform host access with SQL Server FCI, can break failover logic and introduce data risks.
Best practice: Do not use ActiveCluster between cloud and on-prem FlashArrays. Avoid non-uniform host access with SQL Server Failover Cluster Instances; failover will not be coordinated. Instead, use ActiveDR™ or asynchronous replication for cloud or multi-site DR scenarios.

Next Steps

Pure Storage ActiveCluster simplifies high availability for SQL Server without extra licensing or complex configuration. If you want to go deeper, check out this whitepaper on FlashArray ActiveCluster for more details.
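Finally, here is the scripted check referenced in Tip 4. It is a minimal sketch, assuming a SQL Server listener name of sql-listener.example.local and a database called SalesDB (both placeholders), that connects through the listener after a simulated failover and runs a physical-only DBCC CHECKDB. It uses the pyodbc package with the Microsoft ODBC driver; adjust names and authentication to your environment.

```python
import pyodbc

# Placeholder connection details for illustration only.
CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=sql-listener.example.local;"
    "DATABASE=SalesDB;"
    "Trusted_Connection=yes;"
    "Encrypt=yes;TrustServerCertificate=yes;"
)


def post_failover_check() -> None:
    """Connect through the listener and run a quick consistency check."""
    with pyodbc.connect(CONN_STR, timeout=15, autocommit=True) as conn:
        cursor = conn.cursor()

        # Confirm which node actually answered after the failover.
        cursor.execute("SELECT @@SERVERNAME, DB_NAME()")
        server, database = cursor.fetchone()
        print(f"Connected to {server}, database {database}")

        # PHYSICAL_ONLY keeps runtime short for routine checks; any corruption
        # is surfaced as an ODBC error, which pyodbc raises as an exception.
        cursor.execute("DBCC CHECKDB (SalesDB) WITH PHYSICAL_ONLY, NO_INFOMSGS")
        print("DBCC CHECKDB completed with no reported errors")


if __name__ == "__main__":
    post_failover_check()
```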
Getting started with FlashArray File Multi-Server

The previous blog post, FlashArray File Multi-Server, was a feature overview from the perspective of a system that already has the feature set up. Let's look at the same feature from the viewpoint of a storage admin who needs to start using it. For the purpose of this blog post, I'll be starting with an empty test array. Please let me know if there is demand for a similar post focused on a brownfield use case.

Setting up data to share

Let's create a filesystem and a managed directory:

# purefs create myfs
Name Created
myfs 2025-06-27 06:30:53 MDT

# puredir create myfs:mydir --path dir
Name Path File System Created
myfs:mydir /dir myfs 2025-06-27 06:31:27 MDT

So far nothing new.

Setting up a server

To create a server on FlashArray, a Local Directory Service needs to be either created during server creation or a reference to an existing one needs to be provided. What's a Local Directory Service? It's a container for Local Users and Local Groups, a new object that helps manage users for different servers.

# pureds local ds create mylds --domain domain.my
Name Domain
mylds domain.my

Nothing prevents us now from creating the actual Server object:

# pureserver create myserver --local-ds mylds
Name Dns Directory Services Local Directory Service Created
myserver management - mylds 2025-06-27 06:41:49 MDT

(Another option would be to use the "built-in" server, which is guaranteed to be there: "_array_server". That is also the server that contains all the exports created before migration to a Multi-Server-enabled release. As stated before, this post focuses on a greenfield scenario, hence creating a new server object.)

Setting up an export

The server can now be used when creating an export:

# puredir export create --dir myfs:mydir --policy smb-simple --server myserver --export-name myexport
Name Export Name Server Directory Path Policy Type Enabled
myserver::smb::myexport myexport myserver myfs:mydir /dir smb-simple smb True

One configuration object wasn't created as part of this blog post: the policy "smb-simple". That's a pre-created policy which (unless modified) only sets the protocol to "SMB" and accepts all clients. The name of the export has been set to "myexport", meaning this is the string to be used by the client while mounting. The address of this export will be "${hostname}/myexport".

Setting up networking

This part is hard to follow exactly, since networking depends on the local network setup and won't be reproducible in your environment. Still, let's see what needs to be done in the lab setup; it should be similar to what a reader would do in a simple test scenario. Let's create and enable the simplest File VIF possible:

# purenetwork eth create vif vif1 --address 192.168.1.100/24 --subinterfacelist ct0.eth2 --serverlist myserver
Name Enabled Type Subnet Address Mask Gateway MTU MAC Speed Services Subinterfaces Servers
vif1 False vif 192.168.1.100 255.255.255.0 - 8950 22:39:87:5e:f4:79 10.00 Gb/s file eth2 myserver

# purenetwork eth enable vif1
Name Enabled Type Subnet Address Mask Gateway MTU MAC Speed Services Subinterfaces Servers
vif1 True vif - 192.168.1.100 255.255.255.0 - 8950 22:39:87:5e:f4:79 10.00 Gb/s file eth2 myserver

That should do it.

Setting up Directory Services

A server on FlashArray always has a Local Directory Service, and it can additionally be configured to verify users against LDAP or Active Directory. An LDAP configuration would be set up as:

# pureds create myds ...
# pureserver setattr --ds myds myserver

Or we can opt in to join Active Directory:

# puread account create myserver::ad-myserver --domain some.domain --computer-name myserver-computer-name

But we don't have to! Let's keep this very simple and use the Local Directory Service created before. It's already used by our server, so the only thing left is to create a user (and join the Administrators group, because we can):

# pureds local user create pure --primary-group Administrators --password --local-ds mylds
Enter password:
Retype password:
Name Local Directory Service Built In Enabled Primary Group Uid
pure mylds False True Administrators 1000

Now we should have everything set up for a client to mount the exposed share.

Mounting the export (on Linux)

Let's use a Linux client, since it fits nicely with the rest of the operations and command-line examples so far. At this point, the share can just as easily be mounted on any Windows box, and all of the configuration done on the command line can also be done in the GUI. (A short scripted check of the same share is included at the end of this post.)

client # mount -v -t cifs -o 'user=pure,domain=domain.my,pass=pure,vers=3.02' //192.168.1.100/myexport /mnt

client # mount | grep mnt
//192.168.1.100/myexport on /mnt type cifs (rw,relatime,vers=3.02,sec=ntlmssp,cache=strict,username=pure,domain=domain.my,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.1.100,file_mode=0755,dir_mode=0755,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)

And now the "/mnt" directory on the client machine represents the managed directory "myfs:mydir" created before and can be used up to the permissions the user "pure" has (and since this user is a member of the Administrators group, it can do anything).

Conclusion

This post showed how to set up a File Export on FlashArray using Servers. We can use the same FlashArray to create another server and export the same or a different managed directory, while using other network interfaces or directory services.
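As referenced in the mounting section, here is a short scripted check of the same export. It is a minimal sketch assuming the third-party smbprotocol Python package; the address, credentials, and file name simply mirror the lab values used in this post and are not meaningful outside it.

```python
import smbclient

# Lab values from this post; replace with your own export address and user.
SERVER = "192.168.1.100"
SHARE_PATH = rf"\\{SERVER}\myexport"

# Authenticate as the Local Directory Service user created earlier.
smbclient.register_session(SERVER, username="pure", password="pure")

# Write a small file into the managed directory behind the export...
with smbclient.open_file(rf"{SHARE_PATH}\hello.txt", mode="w") as f:
    f.write("hello from the multi-server export\n")

# ...read it back to confirm the round trip...
with smbclient.open_file(rf"{SHARE_PATH}\hello.txt", mode="r") as f:
    print(f.read())

# ...and list the export contents.
print(smbclient.listdir(SHARE_PATH))
```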
FlashArray File Multi-Server

File support on FlashArray gains another highly demanded feature. With version 6.8.7, Purity introduces the concept of a Server, which ties together exports, directory services, and the other objects required for this setup, namely DNS configuration and networking. From this version onwards, every directory export is associated with exactly one server.

To recap, a server has associations to the following objects:
DNS
Active Directory / Directory Service (LDAP)
Directory Export
Local Directory Service

Local Directory Service is another new entity introduced in version 6.8.7; it represents a container for Local Users and Groups. Each server has its own Local Directory Service (LDS) assigned to it, and an LDS also has a domain name, which means "domain" is no longer the hardcoded name of the local domain but a user-configurable option.

All of these statements imply lots of changes in the user experience. Fortunately, this mostly comes down to adding a reference or the ability to link a server, and the GUI now contains a Server management page, including a Server details page, which puts everything together and makes a server configuration easy to understand, validate, and modify.

One question you might be asking right now is: can I use File services without Servers? The answer is no, not really. But don't be alarmed. Significant effort has been made to keep all commands and flows backwards compatible, so unless a script is parsing exact output and needs to be aligned because a new "Server" column was added, there should be no need to change it. How did we manage that? A special Server called _array_server is created, and if your configuration has anything file related, it will be migrated during the upgrade.

Let me also offer a taste of how the configuration could look once the array is updated to the latest version.

List of Servers

# pureserver list
Name Dns Directory Services Local Directory Service Created
_array_server management - domain 2025-06-09 01:00:26 MDT
prod prod - prod 2025-06-09 01:38:14 MDT
staging management stage staging 2025-06-09 01:38:12 MDT
testing management testing testing 2025-06-09 01:38:11 MDT

List of Active Directory accounts

Since we can join multiple AD servers, we can now have multiple AD accounts, up to one per server.

# puread account list
Name Domain Computer Name TLS Source
ad-array <redacted>.local ad-array required -
prod::ad-prod <redacted>.local ad-prod required -

ad-array is the configuration for _array_server, and for backwards-compatibility reasons the server-name prefix hasn't been added to it. The prefix is there for the account connected to the server prod (and for any other server).

List of Directory Services (LDAP)

Directory Services also got slightly reworked, since before 6.8.7 there were only two configurations, management and data. Obviously, that's not enough for more than one server (management is reserved for array management access and can't be used for File services). After the 6.8.7 release, it's possible to fully manage Directory Service configurations and link them to individual servers.

# pureserver list
Name Dns Directory Services Local Directory Service Created
_array_server management - domain 2025-06-09 01:00:26 MDT
prod prod - prod 2025-06-09 01:38:14 MDT
staging management stage staging 2025-06-09 01:38:12 MDT
testing management testing testing 2025-06-09 01:38:11 MDT

Please note that these objects are intentionally not enabled / not configured.
List of Directory exports

# puredir export list
Name Export Name Server Directory Path Policy Type Enabled
prod::smb::accounting accounting prod prodpod::accounting:root / prodpod::smb-simple smb True
prod::smb::engineering engineering prod prodpod::engineering:root / prodpod::smb-simple smb True
prod::smb::sales sales prod prodpod::sales:root / prodpod::smb-simple smb True
prod::smb::shipping shipping prod prodpod::shipping:root / prodpod::smb-simple smb True
staging::smb::accounting accounting staging stagingpod::accounting:root / stagingpod::smb-simple smb True
staging::smb::engineering engineering staging stagingpod::engineering:root / stagingpod::smb-simple smb True
staging::smb::sales sales staging stagingpod::sales:root / stagingpod::smb-simple smb True
staging::smb::shipping shipping staging stagingpod::shipping:root / stagingpod::smb-simple smb True
testing::smb::accounting accounting testing testpod::accounting:root / testpod::smb-simple smb True
testing::smb::engineering engineering testing testpod::engineering:root / testpod::smb-simple smb True
testing::smb::sales sales testing testpod::sales:root / testpod::smb-simple smb True
testing::smb::shipping shipping testing testpod::shipping:root / testpod::smb-simple smb True

The notable change here is that Export Name and Name now have slightly different meanings. Pre-6.8.7 versions used the Export Name as a unique identifier, since there was a single (implicit, now explicit) server, which naturally created a scope. Now the Export Name can be the same as long as it's unique within the scope of a single server, as seen in this example. The Name is different and provides an array-unique export identifier; it is a combination of the server name, the protocol name, and the export name.

List of Network file interfaces

# purenetwork eth list --service file
Name Enabled Type Subnet Address Mask Gateway MTU MAC Speed Services Subinterfaces Servers
array False vif - - - - 1500 56:e0:c2:c6:f2:1a 0.00 b/s file - _array_server
prod False vif - - - - 1500 de:af:0e:80:bc:76 0.00 b/s file - prod
staging False vif - - - - 1500 f2:95:53:3d:0a:0a 0.00 b/s file - staging
testing False vif - - - - 1500 7e:c3:89:94:8d:5d 0.00 b/s file - testing

As seen above, File network VIFs now reference a specific server. (This list is admittedly artificial, since none of the interfaces is properly configured or enabled; the main message is that a File VIF now "points" to a specific server.)

Local Directory Services

Local Directory Service (LDS) is a newly introduced container for Local Users and Groups.
# pureds local ds list
Name Domain
domain domain
testing testing
staging staging.mycorp
prod prod.mycorp

As already mentioned, all local users and groups now have to belong to an LDS, which means that managing them also involves that information:

# pureds local user list
Name Local Directory Service Built In Enabled Primary Group Uid
Administrator domain True True Administrators 0
Guest domain True False Guests 65534
Administrator prod True True Administrators 0
Guest prod True False Guests 65534
Administrator staging True True Administrators 0
Guest staging True False Guests 65534
Administrator testing True True Administrators 0
Guest testing True False Guests 65534

# pureds local group list
Name Local Directory Service Built In Gid
Audit Operators domain True 65536
Administrators domain True 0
Guests domain True 65534
Backup Operators domain True 65535
Audit Operators prod True 65536
Administrators prod True 0
Guests prod True 65534
Backup Operators prod True 65535
Audit Operators staging True 65536
Administrators staging True 0
Guests staging True 65534
Backup Operators staging True 65535
Audit Operators testing True 65536
Administrators testing True 0
Guests testing True 65534
Backup Operators testing True 65535

Conclusion

I showed how the FlashArray configuration might look without going into much detail about how to actually configure or test these setups; still, this article should give a good overview of what to expect from the 6.8.7 version. There is plenty of information about this particular aspect of the release in the updated product documentation. Please let me know if there is any demand to deep-dive into any aspect of this feature.