Don’t Wait, Innovate: Long‑Life Release 6.9.0 Is Your Gateway to Continuous Innovation
How Pure Releases Work (and Why You Should Care)

Pure Storage doesn't make you choose between stability and innovation. Feature Releases arrive monthly and are supported for 9 months; they're production-ready and ideal if you like to live on the cutting edge. Long-Life Releases (LLRs) bundle those Feature Releases into a thoroughly tested version that is supported for three years. LLR 6.9.0 is essentially all the innovation of those Feature Releases, rolled into one update. This dual approach means you can adopt new features as soon as they're ready or wait for the next stable release. Either way, you keep moving forward.

Not sure what features you're missing? Not a problem: we have a tool for that. A coworker reminded me last week that Pure1's AI Copilot can tell you exactly what you've been missing. Here's how easy it is to find out: log into Pure1, click on the AI Copilot tab, and type your question. I tried: "Please provide all features for FlashArray since version 6.4 of Purity OS." Copilot returned a detailed rundown of new capabilities across each release. In just a couple of minutes, I saw everything I'd overlooked, with no digging through release notes or calling support required.

A Taste of What You've Been Missing

Here's a snapshot of the goodies you may have missed across the last few years of releases:

Platform enhancements: The FlashArray//E platform (6.6.0) extends Pure's simplicity to tier-3 workloads. Gen 2 chassis support (6.8.0) delivers more performance and density with better efficiency. 150 TB DirectFlash modules (6.8.2) boost capacity without compromising speed.

File services advancements: FlashArray File (GA in 6.8.2) lets you manage block and file workloads from the same array. SMB Continuous Availability shares (6.8.6) keep file services online through failures. Multi-server/domain support (6.8.7) scales file services across larger environments.

Security and protection: Enhanced SafeMode protection (6.4.3) quadruples local snapshot capacity and adds hardware tokens for instant data locking, which is vital in a ransomware era. Over-the-wire encryption (6.6.7) secures asynchronous replication.

Pure Fusion: We can't talk about this enough. Think of it as fleet intelligence: Fusion applies your policies across every array and optimizes placement automatically, cutting operational overhead.

Purity OS: It's Not Just Firmware

Every Purity OS update adds value to your existing hardware. Recent improvements include support for new NAND sources, "titanium" efficiency power supplies, and advanced diagnostics. These aren't minor tweaks; they're part of Pure's Evergreen promise that your hardware investment keeps getting better over time.

Why Waiting Doesn't Pay Off

It's tempting to delay updates, but with Pure, waiting often means you're missing out on:

Security upgrades that counter new threats.
Performance gains like NVMe/TCP support and ActiveCluster improvements.
Operational efficiencies such as open metrics and better diagnostics.
Future-proofing features that prepare you for upcoming innovations.

Your Roadmap to Capture These Benefits

1. Assess your current state: Use AI Copilot to see exactly what you'd gain by moving to LLR 6.9.0.
2. Plan your update: Pure's non-disruptive upgrades let you modernize without downtime.
3. Explore new features: Dive into Fusion, enhanced file services, and expanded security capabilities.
4. Connect with the community: Share experiences with other users to accelerate your learning curve.
The Bottom Line

Pure's Evergreen model means your hardware doesn't just retain value; it continues to gain it. Long-Life Release 6.9.0 is a gateway to innovation. In a world where data is your competitive edge, standing still is equivalent to moving backward. Ready to see what you've been missing? Log into Pure1, fire up Copilot, and let it show you the difference between where you are and where you could be.

Hyper-V: The Municipal Fleet Pickup: Familiar, Capable, and Still Worth Considering
Hyper-V remains a practical, cost-efficient option for Windows-centric environments, offering strong features and seamless Azure integration. This blog explores where it shines, where it struggles, and how Pure ensures enterprise-grade data protection no matter which virtualization road you take.

Tips for High Availability SQL Server Environments with ActiveCluster
Tip 1: Use Synchronous Replication for Zero RPO/RTO

Why it matters: ActiveCluster mirrors every write across two FlashArrays before acknowledging the operation to the host. This ensures zero Recovery Point Objective (RPO) and zero Recovery Time Objective (RTO), which are critical for maintaining business continuity in SQL Server environments.

Best practice: Keep inter-site latency below 5 ms for optimal performance. While the system tolerates up to 11 ms, staying under 5 ms minimizes write latencies and transactional slowdowns.

Tip 2: Group Related Volumes with Stretched Pods

Why it matters: Stretched pods ensure all volumes within them are synchronously replicated as a unit, maintaining data consistency and simplifying management. This is crucial for SQL Server deployments where data, log, and tempdb volumes need to fail over together.

Best practice: Place all volumes related to a single SQL Server instance into the same pod. Use separate pods only for unrelated SQL Server instances or non-database workloads that have different replication, performance, or management requirements.

Tip 3: Use Uniform Host Access with SCSI ALUA Optimization

Why it matters: Uniform host access allows each SQL Server node to see both arrays. Combined with SCSI ALUA (Asymmetric Logical Unit Access), this setup enables the host to prefer the local array, improving latency while maintaining redundancy.

Best practice: Use the Preferred Array setting in FlashArray for each host to route I/O to the closest array. This avoids redundant round trips across WAN links, especially in multi-site or metro-cluster topologies. Install the correct MPIO drivers, validate paths, and use load-balancing policies like Round Robin or Least Queue Depth. (A CLI sketch at the end of this post ties Tips 2 and 3 together.)

Tip 4: Test Failover on a Regular Cadence

Why it matters: ActiveCluster is designed for transparent failover, but you shouldn't assume it just works. Testing failover on a regular schedule validates the full stack, from storage to SQL Server clustering, and exposes misconfigurations before they cause downtime.

Best practice: Simulate array failure by disconnecting one side and verifying that SQL Server remains online via the surviving array. Monitor replication and quorum health using Pure1, and ensure Windows Server Failover Clustering (WSFC) responds correctly.

Tip 5: Use ActiveCluster for Seamless Storage Migration

Why it matters: Storage migrations are inevitable for lifecycle refreshes, performance upgrades, or datacenter moves. ActiveCluster lets you replicate and migrate SQL Server databases with zero downtime.

Best practice: Follow a six-step phased migration:

1. Assess and plan
2. Set up the environment
3. Configure ActiveCluster
4. Test replication and failover
5. Migrate by removing paths from the source array
6. Validate with DBCC CHECKDB and application testing

This ensures a smooth handover with no data loss or service interruption.

Tip 6: Align with VMware for Virtualized SQL Server Deployments

Why it matters: Many SQL Server instances run on VMware. Using ActiveCluster with vSphere VMFS or vVols brings granular control, high availability, and site-aware storage policies.

Best practice: Deploy SQL Server on vVols for tighter storage integration, or use VMFS when simplicity is preferred. Stretch datastores across sites with ActiveCluster for seamless VM failover and workload mobility.

Tip 7: Avoid Unsupported Topologies

Why it matters: ActiveCluster is designed for two-site, synchronous setups.
Misusing it across unsupported configurations, such as hybrid cloud sync or mixing non-uniform host access with SQL Server FCI, can break failover logic and introduce data risks.

Best practice: Do not use ActiveCluster between cloud and on-prem FlashArrays. Avoid non-uniform host access with SQL Server Failover Cluster Instances, where failover will not be coordinated. Instead, use ActiveDR™ or asynchronous replication for cloud or multi-site DR scenarios.

Next Steps

Pure Storage ActiveCluster simplifies high availability for SQL Server without extra licensing or complex configuration. If you want to go deeper, check out this whitepaper on FlashArray ActiveCluster for more details.
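To tie Tips 2 and 3 together, here is a minimal CLI sketch. It assumes two arrays (arrayA and arrayB) already connected for synchronous replication; the pod, volume, and host names are all hypothetical, and the preferred-array flag spelling should be verified against your Purity release.

# Create a pod and stretch it to the second array (Tip 2)
purepod create sql-pod
purepod add --array arrayB sql-pod

# Keep one instance's data, log, and tempdb volumes in the same pod
purevol create --size 2T sql-pod::sql-data
purevol create --size 500G sql-pod::sql-log
purevol create --size 256G sql-pod::sql-tempdb

# Route each node's I/O to its local array (Tip 3); the flag name here is
# an assumption, so confirm it against "purehost setattr" help on your array
purehost setattr --preferred-array arrayA sql-node1
purehost setattr --preferred-array arrayB sql-node2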
Getting started with FlashArray File Multi-Server

The previous blog post, FlashArray File Multi-Server, was a feature overview from the perspective of a system that already has the setup in place. Let's look at the same feature from the viewpoint of a storage admin who needs to start using it. For the purposes of this post, I'll be starting with an empty test array. Please let me know if there is demand for a similar post focused on the brownfield use case.

Setting up data to share

Let's create a filesystem and a managed directory.

# purefs create myfs
Name Created
myfs 2025-06-27 06:30:53 MDT

# puredir create myfs:mydir --path dir
Name Path File System Created
myfs:mydir /dir myfs 2025-06-27 06:31:27 MDT

So far, nothing new.

Setting up a server

To create a server on FlashArray, a Local Directory Service needs to be either created during server creation or referenced as an existing one. What's a Local Directory Service? It's a container for Local Users and Local Groups: a new container that helps to manage users for different servers.

# pureds local ds create mylds --domain domain.my
Name Domain
mylds domain.my

Nothing prevents us now from creating an actual Server object.

# pureserver create myserver --local-ds mylds
Name Dns Directory Services Local Directory Service Created
myserver management - mylds 2025-06-27 06:41:49 MDT

(Another option would be to use the "built-in" server, which is guaranteed to be there: "_array_server". That is also the server that contains all the exports created before migration to a Multi-Server-enabled release. As stated before, this post focuses on a greenfield scenario, hence the new Server object.)

Setting up an export

The server can now be used when creating an export.

# puredir export create --dir myfs:mydir --policy smb-simple --server myserver --export-name myexport
Name Export Name Server Directory Path Policy Type Enabled
myserver::smb::myexport myexport myserver myfs:mydir /dir smb-simple smb True

One configuration object wasn't created as part of this blog post: the policy "smb-simple". That's a pre-created policy which (unless modified) only sets the protocol to SMB and accepts all clients. The name of the export has been set to "myexport", meaning that this is the string to be used by the client while mounting. The address of this export will be "${hostname}/myexport".

Setting up networking

This is a bit tough to follow, since networking depends heavily on the local network setup and won't be reproducible in your environment. Still, let's see what needs to be done in the lab setup; hopefully it is similar to what a simple test scenario would require. Let's create and enable the simplest File VIF possible.

# purenetwork eth create vif vif1 --address 192.168.1.100/24 --subinterfacelist ct0.eth2 --serverlist myserver
Name Enabled Type Subnet Address Mask Gateway MTU MAC Speed Services Subinterfaces Servers
vif1 False vif - 192.168.1.100 255.255.255.0 - 8950 22:39:87:5e:f4:79 10.00 Gb/s file eth2 myserver

# purenetwork eth enable vif1
Name Enabled Type Subnet Address Mask Gateway MTU MAC Speed Services Subinterfaces Servers
vif1 True vif - 192.168.1.100 255.255.255.0 - 8950 22:39:87:5e:f4:79 10.00 Gb/s file eth2 myserver

That should do it.
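Before moving on, a quick sanity check doesn't hurt. This is a minimal sketch, assuming a Linux client sits on the same subnet; the address is the one assigned above.

# Confirm the File VIF is enabled and bound to myserver
purenetwork eth list --service file

# From the client, confirm the VIP answers
ping -c 3 192.168.1.100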
Setting up Directory Services

A server on FlashArray always has a Local Directory Service, and additionally it can be configured to verify users against LDAP or Active Directory. An LDAP configuration would be set up as:

# pureds create myds ...
# pureserver setattr --ds myds myserver

Or we can opt in to join Active Directory:

# puread account create myserver::ad-myserver --domain some.domain --computer-name myserver-computer-name

But we don't have to! Let's make this very simple and use the Local Directory Service created before. It's already used by our server, so the only thing left is to create a user (and join the Administrators group... because we can).

# pureds local user create pure --primary-group Administrators --password --local-ds mylds
Enter password:
Retype password:
Name Local Directory Service Built In Enabled Primary Group Uid
pure mylds False True Administrators 1000

Now we should have everything set up for a client to mount the exposed share.

Mounting an export (on Linux)

Let's use a Linux client, since it fits nicely with the rest of the command-line examples we have so far. At this point, the share can just as easily be mounted on any Windows box, and all the configuration made on the command line can also be done in the GUI.

client # mount -v -t cifs -o 'user=pure,domain=domain.my,pass=pure,vers=3.02' //192.168.1.100/myexport /mnt
client # mount | grep mnt
//192.168.1.100/myexport on /mnt type cifs (rw,relatime,vers=3.02,sec=ntlmssp,cache=strict,username=pure,domain=domain.my,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.1.100,file_mode=0755,dir_mode=0755,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)

And now the "/mnt" directory on the client machine represents the Managed Directory "myfs:mydir" created before and can be used up to the permissions the user "pure" has. (And since this user is a member of the Administrators group, it can do anything.)
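For an even quicker end-to-end check that needs no mount point, smbclient works too. This is a minimal sketch, assuming the Samba client tools are installed on the Linux box; the credentials are the local user created above.

# List the export root as the local user "pure"
smbclient //192.168.1.100/myexport -W domain.my -U pure%pure -c 'ls'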
Conclusion

This post shows how to set up a File Export on FlashArray using Servers. We can use the same FlashArray to create another server and export the same or a different managed directory, while using other network interfaces or Directory Services.

FlashArray File Multi-Server

File support on FlashArray gets another highly demanded feature. With version 6.8.7, Purity introduces the concept of a Server, which ties together exports, directory services, and all the other objects required for this setup, namely DNS configuration and networking. From this version onwards, all directory exports are associated with exactly one server. To recap, a server has associations to the following objects:

DNS
Active Directory / Directory Service (LDAP)
Directory Export
Local Directory Service

Local Directory Service is another new entity introduced in version 6.8.7, and it represents a container for Local Users and Groups. Each server has its own Local Directory Service (LDS) assigned to it, and an LDS also has a domain name, which means "domain" is no longer the hardcoded name of the local domain but a user-configurable option.

All of these statements imply many changes to the user experience. Fortunately, most of them amount to adding a reference or the ability to link a server, and the GUI now contains a Server management page, including a Server details page, which puts everything together and makes a server configuration easy to understand, validate, and modify.

One question you might be asking right now is: can I use File services without Servers? The answer is no, not really. But don't be alarmed. Significant effort has been made to keep all commands and flows backwards compatible, so unless some script parses exact output and needs to be aligned because of a newly added "Server" column, there should be no need to change anything. How did we manage to do that? A special Server called _array_server is created, and if your configuration has anything file-related, it will be migrated during upgrade.

Let me also offer a taste of how the configuration could look once the array is updated to the latest version.

List of Servers

# pureserver list
Name Dns Directory Services Local Directory Service Created
_array_server management - domain 2025-06-09 01:00:26 MDT
prod prod - prod 2025-06-09 01:38:14 MDT
staging management stage staging 2025-06-09 01:38:12 MDT
testing management testing testing 2025-06-09 01:38:11 MDT

List of Active Directory accounts

Since we can join multiple AD servers, we can now have multiple AD accounts, up to one per server.

# puread account list
Name Domain Computer Name TLS Source
ad-array <redacted>.local ad-array required -
prod::ad-prod <redacted>.local ad-prod required -

ad-array is the configuration for _array_server, and for backwards-compatibility reasons the server-name prefix hasn't been added there. The prefix is present for the account connected to server prod (and to any other server).

List of Directory Services (LDAP)

Directory services were also slightly reworked: before 6.8.7, there were only two configurations, management and data. Obviously, that's not enough for more than one server (management is reserved for array management access and can't be used for File services). As of the 6.8.7 release, it's possible to fully manage Directory Service configurations and link them to individual servers. Please note that these objects are intentionally not enabled / not configured.
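As a minimal sketch of that linking step, reusing the commands shown in the getting-started post (the name "stage" is hypothetical, and the pureds create options for URIs and bind credentials are omitted):

# Create a named Directory Service configuration
pureds create stage ...

# Attach it to a server; exports under that server now authenticate against it
pureserver setattr --ds stage staging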
List of Directory exports

# puredir export list
Name Export Name Server Directory Path Policy Type Enabled
prod::smb::accounting accounting prod prodpod::accounting:root / prodpod::smb-simple smb True
prod::smb::engineering engineering prod prodpod::engineering:root / prodpod::smb-simple smb True
prod::smb::sales sales prod prodpod::sales:root / prodpod::smb-simple smb True
prod::smb::shipping shipping prod prodpod::shipping:root / prodpod::smb-simple smb True
staging::smb::accounting accounting staging stagingpod::accounting:root / stagingpod::smb-simple smb True
staging::smb::engineering engineering staging stagingpod::engineering:root / stagingpod::smb-simple smb True
staging::smb::sales sales staging stagingpod::sales:root / stagingpod::smb-simple smb True
staging::smb::shipping shipping staging stagingpod::shipping:root / stagingpod::smb-simple smb True
testing::smb::accounting accounting testing testpod::accounting:root / testpod::smb-simple smb True
testing::smb::engineering engineering testing testpod::engineering:root / testpod::smb-simple smb True
testing::smb::sales sales testing testpod::sales:root / testpod::smb-simple smb True
testing::smb::shipping shipping testing testpod::shipping:root / testpod::smb-simple smb True

The notable change here is that Export Name and Name have slightly different meanings. Pre-6.8.7 versions used the Export Name as a unique identifier, since there was a single (implicit, now explicit) server, which naturally created a scope. Now the Export Name can repeat as long as it's unique within the scope of a single server, as seen in this example. The Name is different: it provides an array-unique export identifier, a combination of the server name, the protocol, and the export name.

List of Network file interfaces

# purenetwork eth list --service file
Name Enabled Type Subnet Address Mask Gateway MTU MAC Speed Services Subinterfaces Servers
array False vif - - - - 1500 56:e0:c2:c6:f2:1a 0.00 b/s file - _array_server
prod False vif - - - - 1500 de:af:0e:80:bc:76 0.00 b/s file - prod
staging False vif - - - - 1500 f2:95:53:3d:0a:0a 0.00 b/s file - staging
testing False vif - - - - 1500 7e:c3:89:94:8d:5d 0.00 b/s file - testing

As seen above, File network VIFs now reference a specific server. (This list is particularly artificial, since none of the VIFs is properly configured or enabled; the main message is that a File VIF now points to a specific server.)

Local Directory Services

Local Directory Service (LDS) is a newly introduced container for Local Users and Groups.
# pureds local ds list
Name Domain
domain domain
testing testing
staging staging.mycorp
prod prod.mycorp

As already mentioned, all local users and groups now have to belong to an LDS, which means the management of those also carries that information.

# pureds local user list
Name Local Directory Service Built In Enabled Primary Group Uid
Administrator domain True True Administrators 0
Guest domain True False Guests 65534
Administrator prod True True Administrators 0
Guest prod True False Guests 65534
Administrator staging True True Administrators 0
Guest staging True False Guests 65534
Administrator testing True True Administrators 0
Guest testing True False Guests 65534

# pureds local group list
Name Local Directory Service Built In Gid
Audit Operators domain True 65536
Administrators domain True 0
Guests domain True 65534
Backup Operators domain True 65535
Audit Operators prod True 65536
Administrators prod True 0
Guests prod True 65534
Backup Operators prod True 65535
Audit Operators staging True 65536
Administrators staging True 0
Guests staging True 65534
Backup Operators staging True 65535
Audit Operators testing True 65536
Administrators testing True 0
Guests testing True 65534
Backup Operators testing True 65535

Conclusion

I've shown how the FlashArray configuration might look without going into much detail about how to actually configure or test it; still, this article should provide a good overview of what to expect from the 6.8.7 release. There is plenty of information about this particular aspect of the release in the updated product documentation. Please let me know if there is any demand to deep-dive into any aspect of this feature.
Pure Storage Delivers Critical Cyber Outcomes

"We don't have storage problems. We have outcome problems." - Pure customer in a recent cyber briefing

No matter what we are buying, what we are really buying is a desired outcome. If you buy a car, you are buying some sort of outcome or multiple outcomes: point A to point B, comfort, dependability, seat heaters, or, if you are like me, a real, live Florida Man, seat coolers! The same is true when solving for cyber outcomes, and often overlooked is a storage foundation to drive cyber resilience. A strong storage foundation improves data security, resilience, and recovery. With these characteristics, organizations can recover in hours vs. days. Here are some top cyber resilience outcomes Pure Storage is delivering:

Native, Layered Resilience
Fast Analytics
Rapid Restore
Enhanced Visibility

We will tackle all of these in this blog space (multi-part post alert!), but let's start with the native, layered resilience Pure provides customers. Layered resilience refers to a comprehensive approach to ensuring data protection and recovery through multiple layers of security and redundancy. This architecture is designed to provide robust protection against data loss, corruption, and cyber threats, ensuring business continuity and rapid recovery in the event of a disaster.

Why is layered resilience important? Different data needs different protection. My photo collection, while important to me, doesn't require the same level of protection as critical application data needed to keep the company running. Layered resilience means there need to be different layers of resilience and recovery.

Super-critical data needs super-critical recovery. We are referring to the applications that are the lifeblood of organizations: order processing, patient services, or trading applications. These may only account for 5% of your data but drive 95% of the revenue. Many organizations protect these with high availability, which provides excellent resilience against disasters and system outages. But for malicious events, such as ransomware, protection is needed to ensure that recoverable data is available if an attack corrupts or destroys the production data. Scheduled snapshots can protect that data from the time the data is born. Little baby data. Protect the baby!

Pure snapshots are a critical feature, providing efficient, zero-footprint copies of data that can be quickly created and restored, ensuring data protection and business continuity. Pure snapshots are optimized for data reduction, ensuring minimal space consumption; this is achieved through global data reduction technologies that compress and deduplicate data, making snapshots space-efficient. They are designed to be simple and flexible, with zero performance overhead and the ability to create tens of thousands of snapshots instantly. They are also integrated with Pure1 (part of our Enhanced Visibility discussion) for enhanced visibility, management, and security, reducing the need for complex orchestration and manual intervention. Snapshots can be used to create new volumes with full capabilities, allowing for mounting, reading, writing, and further snapshotting without dependencies on one another. This flexibility supports various use cases, including point-in-time restores and data recovery.
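To make that concrete, here is a minimal CLI sketch of the snapshot-and-restore loop; the volume names and suffix are hypothetical, and SafeMode retention and snapshot schedules would normally be layered on top.

# Take a snapshot of a production volume
purevol snap --suffix pre-upgrade prod-db

# Seed a brand-new volume from the snapshot for inspection or a clean room
purevol copy prod-db.pre-upgrade prod-db-verify

# Or roll the original volume back to that point in time, in seconds
purevol copy --overwrite prod-db.pre-upgrade prod-db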
In events that require clean recovery, and secure recovery at that, it is much more desirable to leverage snapshots, where you can scan for cleanliness and safety, often in parallel efforts, and where resetting to an earlier point in time takes seconds rather than days. But not even these amazing local snapshots are enough. What if your local site is rendered unavailable for some reason? Do you have control of your data to be able to recover in that scenario? Replicating those local snapshots to a second site could enable more flexibility in recovery. We have had customers leverage our high availability solution (ActiveCluster) across sites and then engage snapshots and asynchronous replication to a third site as part of their recovery plan.

Data that requires extended retention and granularity is typically handled by a data control plane application that streams a backup copy to a repository. This is usually a last line of defense in case of an event, as the recovery time objective is longer when considering a streaming recovery of 50%, 75%, or 100% of a data center. Still, this is a layer of resiliency that a comprehensive plan should account for. And if these repositories are on Pure Storage, they too can be protected by SafeMode methodologies and other security measures such as Object Lock API, Freeze Locked Objects, and WORM compliance. Most importantly, this last line of defense can be supercharged for recovery by the predictable, performant platform Pure provides. Some outcomes at this layer of resilience involve Isolated Recovery Environments, which add security and create the clean rooms that isolate recovery so you will not re-introduce the event's origin back into production. In these solutions, the speed benefits Pure provides are critical to making these designs a reality.

Of course, the final frontier is the archive layer. This part of the plan usually falls under compliance SLAs, where data is required to be maintained for longer periods of time. Still, more and more there are performance and warm-data requirements for even these data sets, where AI and other queries can benefit from even the oldest of data.

One never knows what layer of resilience is required for any single event. Having the best possible resilience enables any company to recover, and recover quickly, from an attack. But native resilience is just one of the outcomes we deliver. Come back to read how we are delivering fast analytics outcomes in an environment that seeks to discover anomalies as fast as possible.

Exit Question: How resilient is your data today?

Jason Walker is a technical strategy director for cyber-related areas at Pure Storage and a real, live Florida Man. No animals or humans were injured in the creation of this post.