Don’t Wait, Innovate: Long‑Life Release 6.9.0 Is Your Gateway to Continuous Innovation
How Pure Releases Work (and Why You Should Care)

Pure Storage doesn’t make you choose between stability and innovation. Feature Releases arrive monthly and are supported for 9 months; they’re production-ready and ideal if you like to live on the cutting edge. Long-Life Releases (LLRs) bundle those Feature Releases into a thoroughly tested version supported for three years. LLR 6.9.0 is essentially all the innovation of those Feature Releases rolled into one update. This dual approach means you can adopt new features as soon as they’re ready or wait for the next stable release—either way, you keep moving forward.

Not sure what features you’re missing? Not a problem: we have a tool for that. A coworker reminded me last week that Pure1’s AI Copilot can tell you exactly what you’ve been missing. Here’s how easy it is to find out: log into Pure1, click on the AI Copilot tab, and type your question. I tried: “Please provide all features for FlashArray since version 6.4 of Purity OS.” Copilot returned a detailed rundown of new capabilities across each release. In just a couple of minutes, I saw everything I’d overlooked—no digging through release notes or calling support required.

A Taste of What You’ve Been Missing

Here’s a snapshot of the goodies you may have missed across the last few years of releases:

Platform enhancements:
- FlashArray//E platform (6.6.0) extends Pure’s simplicity to tier-3 workloads.
- Gen 2 chassis support (6.8.0) delivers more performance and density with better efficiency.
- 150 TB DirectFlash modules (6.8.2) boost capacity without compromising speed.

File services advancements:
- FlashArray File (GA in 6.8.2) lets you manage block and file workloads from the same array.
- SMB Continuous Availability shares (6.8.6) keep file services online through failures.
- Multi-server/domain support (6.8.7) scales file services across larger environments.

Security and protection:
- Enhanced SafeMode protection (6.4.3) quadruples local snapshot capacity and adds hardware tokens for instant data locking, which is vital in a ransomware era.
- Over-the-wire encryption (6.6.7) secures asynchronous replication.

Pure Fusion: We Can’t Talk About This Enough

Think of Fusion as fleet intelligence. It applies your policies across every array and optimizes placement automatically, cutting operational overhead.

Purity OS: It’s Not Just Firmware

Every Purity OS update adds value to your existing hardware. Recent improvements include support for new NAND sources, “titanium” efficiency power supplies, and advanced diagnostics. These aren’t minor tweaks; they’re part of Pure’s Evergreen promise that your hardware investment keeps getting better over time.

Why Waiting Doesn’t Pay Off

It’s tempting to delay updates, but with Pure, waiting often means you’re missing out on:
- Security upgrades that counter new threats.
- Performance gains like NVMe/TCP support and ActiveCluster improvements.
- Operational efficiencies such as open metrics and better diagnostics.
- Future-proofing features that prepare you for upcoming innovations.

Your Roadmap to Capture These Benefits

1. Assess your current state: Use AI Copilot to see exactly what you’d gain by moving to LLR 6.9.0.
2. Plan your update: Pure’s non-disruptive upgrades let you modernize without downtime.
3. Explore new features: Dive into Fusion, enhanced file services, and expanded security capabilities.
4. Connect with the community: Share experiences with other users to accelerate your learning curve.
The Bottom Line

Pure’s Evergreen model means your hardware doesn’t just retain value; it continues to gain it. Long-Life Release 6.9.0 is a gateway to innovation. In a world where data is your competitive edge, standing still is equivalent to moving backward. Ready to see what you’ve been missing? Log into Pure1, fire up Copilot, and let it show you the difference between where you are and where you could be.

Tips for High Availability SQL Server Environments with ActiveCluster
Tip 1: Use Synchronous Replication for Zero RPO/RTO

Why it matters: ActiveCluster mirrors every write across two FlashArrays before acknowledging the operation to the host. This ensures a zero Recovery Point Objective (RPO) and zero Recovery Time Objective (RTO), which are critical for maintaining business continuity in SQL Server environments.

Best practice: Keep inter-site latency below 5 ms for optimal performance. While the system tolerates up to 11 ms, staying under 5 ms minimizes write latencies and transactional slowdowns.

Tip 2: Group Related Volumes with Stretched Pods

Why it matters: Stretched pods ensure all volumes within them are synchronously replicated as a unit, maintaining data consistency and simplifying management. This is crucial for SQL Server deployments where data, log, and tempdb volumes need to fail over together.

Best practice: Place all volumes related to a single SQL Server instance into the same pod. Use separate pods only for unrelated SQL Server instances or non-database workloads that have different replication, performance, or management requirements.

Tip 3: Use Uniform Host Access with SCSI ALUA Optimization

Why it matters: Uniform host access allows each SQL Server node to see both arrays. Combined with SCSI ALUA (Asymmetric Logical Unit Access), this setup enables the host to prefer the local array, improving latency while maintaining redundancy.

Best practice: Use the Preferred Array setting in FlashArray for each host to route I/O to the closest array. This avoids redundant round trips across WAN links, especially in multi-site or metro-cluster topologies. Install the correct MPIO drivers, validate paths, and use load-balancing policies like Round Robin or Least Queue Depth. A CLI sketch covering Tips 2 and 3 follows below.
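To make Tips 2 and 3 concrete, here is a minimal Purity CLI sketch. Everything in it is illustrative: the pod, volume, array, and host names (sqlpod, sql-data, arrayA, arrayB, sql-node-1) are hypothetical, the two arrays are assumed to already be connected for synchronous replication, and exact flag spellings can vary between Purity versions, so verify against your array's CLI help before running anything.

Create the pod and place the instance's volumes inside it (the pod::volume naming does the placement):

# purepod create sqlpod
# purevol create --size 1T sqlpod::sql-data
# purevol create --size 500G sqlpod::sql-log
# purevol create --size 256G sqlpod::sql-tempdb

Stretch the pod to the second array, then mark each host's local array as preferred so ALUA steers I/O to the nearest paths:

# purepod add --array arrayB sqlpod
# purehost setattr --preferred-array arrayA sql-node-1

Once purepod list shows the pod online on both arrays, connect the volumes to the host on both sides as usual.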
Tip 4: Test Failover with a Regular Cadence

Why it matters: ActiveCluster is designed for transparent failover, but you shouldn’t assume it just works. Testing failover on a regular schedule validates the full stack, from storage to SQL Server clustering, and exposes misconfigurations before they cause downtime.

Best practice: Simulate array failure by disconnecting one side and verifying that SQL Server remains online via the surviving array. Monitor replication and quorum health using Pure1, and ensure Windows Server Failover Clustering (WSFC) responds correctly.

Tip 5: Use ActiveCluster for Seamless Storage Migration

Why it matters: Storage migrations are inevitable for lifecycle refreshes, performance upgrades, or datacenter moves. ActiveCluster lets you replicate and migrate SQL Server databases with zero downtime.

Best practice: Follow a six-step phased migration:
1. Assess and plan
2. Set up the environment
3. Configure ActiveCluster
4. Test replication and failover
5. Migrate by removing paths from the source array
6. Validate with DBCC CHECKDB and application testing (a minimal sketch follows this list)

This ensures a smooth handover with no data loss or service interruption.
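As promised in step 6 above, here is a minimal validation sketch using sqlcmd. The server and database names are placeholders; -E uses Windows integrated authentication, so substitute -U/-P if you authenticate with SQL logins.

sqlcmd -S sql-node-1 -E -Q "DBCC CHECKDB (MyDatabase) WITH NO_INFOMSGS, ALL_ERRORMSGS;"

A run that reports no errors, followed by an application-level smoke test, is a reasonable signal that the cutover is complete.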
Tip 6: Align with VMware for Virtualized SQL Server Deployments

Why it matters: Many SQL Server instances run on VMware. Using ActiveCluster with vSphere VMFS or vVols brings granular control, high availability, and site-aware storage policies.

Best practice: Deploy SQL Server on vVols for tighter storage integration, or use VMFS when simplicity is preferred. Stretch datastores across sites with ActiveCluster for seamless VM failover and workload mobility.

Tip 7: Avoid Unsupported Topologies

Why it matters: ActiveCluster is designed for two-site, synchronous setups. Misusing it across unsupported configurations, like hybrid cloud sync or mixing non-uniform host access with SQL Server FCI, can break failover logic and introduce data risks.

Best practice: Do not use ActiveCluster between cloud and on-prem FlashArrays. Avoid non-uniform host access in SQL Server Failover Cluster Instances; failover will not be coordinated. Instead, use ActiveDR™ or asynchronous replication for cloud or multi-site DR scenarios.

Next Steps

Pure Storage ActiveCluster simplifies high availability for SQL Server without extra licensing or complex configuration. If you want to go deeper, check out this whitepaper on FlashArray ActiveCluster for more details.

Getting started with FlashArray File Multi-Server

The previous blog post, FlashArray File Multi-Server, was a feature overview from the perspective of a system that already has that setup. Let's look at the same feature from the viewpoint of a storage admin who needs to start using it. For the purpose of this blog post, I'll be starting with an empty test array. Please let me know if there is demand for a similar post focused on a brownfield use case.

Setting up data to share

Let's create a filesystem and a managed directory:

# purefs create myfs
Name  Created
myfs  2025-06-27 06:30:53 MDT

# puredir create myfs:mydir --path dir
Name        Path  File System  Created
myfs:mydir  /dir  myfs         2025-06-27 06:31:27 MDT

So far, nothing new.

Setting up a server

To create a server on FlashArray, a Local Directory Service either needs to be created during server creation or a reference to an existing one needs to be provided. What's a Local Directory Service? It's a container for local users and local groups, a new construct that helps manage users for different servers.

# pureds local ds create mylds --domain domain.my
Name   Domain
mylds  domain.my

Nothing prevents us now from creating the actual server object:

# pureserver create myserver --local-ds mylds
Name      Dns         Directory Services  Local Directory Service  Created
myserver  management  -                   mylds                    2025-06-27 06:41:49 MDT

(Another option would be to use the built-in server, "_array_server", which is guaranteed to be there. That is also the server containing all the exports created before migration to a Multi-Server-enabled release. As stated before, this post focuses on a greenfield scenario, hence creating a new server object.)

Setting up an export

The server can now be used when creating an export:

# puredir export create --dir myfs:mydir --policy smb-simple --server myserver --export-name myexport
Name                     Export Name  Server    Directory   Path  Policy      Type  Enabled
myserver::smb::myexport  myexport     myserver  myfs:mydir  /dir  smb-simple  smb   True

One configuration object wasn't created as part of this blog post: the policy "smb-simple". That's a pre-created policy which (unless modified) only sets the protocol to SMB and accepts all clients. The name of the export has been set to "myexport", meaning that this is the string to be used by the client while mounting. The address of this export will be "${hostname}/myexport".

Setting up networking

This part is a bit tough to follow, since networking depends heavily on the local network setup and won't be reproducible in your environment. Still, let's see what needs to be done in the lab setup; hopefully it is similar to what any reader would do in a simple test scenario. Let's create and enable the simplest file VIF possible:

# purenetwork eth create vif vif1 --address 192.168.1.100/24 --subinterfacelist ct0.eth2 --serverlist myserver
Name  Enabled  Type  Subnet  Address        Mask           Gateway  MTU   MAC                Speed       Services  Subinterfaces  Servers
vif1  False    vif   -       192.168.1.100  255.255.255.0  -        8950  22:39:87:5e:f4:79  10.00 Gb/s  file      eth2           myserver

# purenetwork eth enable vif1
Name  Enabled  Type  Subnet  Address        Mask           Gateway  MTU   MAC                Speed       Services  Subinterfaces  Servers
vif1  True     vif   -       192.168.1.100  255.255.255.0  -        8950  22:39:87:5e:f4:79  10.00 Gb/s  file      eth2           myserver

That should do it.
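Before moving on, it's worth a quick sanity check that the new VIF answers on the network. This assumes your client sits on the same 192.168.1.0/24 segment as the lab VIF created above:

client # ping -c 3 192.168.1.100

Share access itself can't be tested yet; there is no user to authenticate with until the next step.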
Setting up Directory Services

A server on FlashArray always has a Local Directory Service; additionally, it can be configured to verify users against either LDAP or Active Directory. An LDAP configuration would be set up as:

# pureds create myds ...
# pureserver setattr --ds myds myserver

Or we can opt in to join Active Directory:

# puread account create myserver::ad-myserver --domain some.domain --computer-name myserver-computer-name

But we don't have to! Let's keep this very simple and use the Local Directory Service created before. It's already used by our server, so the only thing left is to create a user (and join the Administrators group... because we can):

# pureds local user create pure --primary-group Administrators --password --local-ds mylds
Enter password:
Retype password:
Name  Local Directory Service  Built In  Enabled  Primary Group   Uid
pure  mylds                    False     True     Administrators  1000

Now we should have everything set up for a client to mount the exposed share.

Mounting an export (on Linux)

Let's use a Linux client, since it fits nicely with the rest of the operations and command-line examples we have so far. At this point, the share can just as easily be mounted on any Windows box (a net use sketch follows the Linux example), and all the configuration made on the command line can also be done in the GUI.

client # mount -v -t cifs -o 'user=pure,domain=domain.my,pass=pure,vers=3.02' //192.168.1.100/myexport /mnt
client # mount | grep mnt
//192.168.1.100/myexport on /mnt type cifs (rw,relatime,vers=3.02,sec=ntlmssp,cache=strict,username=pure,domain=domain.my,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.1.100,file_mode=0755,dir_mode=0755,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)

And now the /mnt directory on the client machine represents the managed directory "myfs:mydir" created before and can be used up to the permissions the user "pure" has. (And since this user is a member of the Administrators group, it can do anything.)
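For completeness, the equivalent mapping from a Windows client uses standard net use syntax; the drive letter is arbitrary and the credentials are the local user created above:

C:\> net use Z: \\192.168.1.100\myexport /user:domain.my\pure

Windows will prompt for the password, after which Z: behaves just like the /mnt mount on the Linux client.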
Conclusion

This post shows how to set up a file export on FlashArray using servers. We can use the same FlashArray to create another server and export the same or a different managed directory, while using different network interfaces or directory services.

Why Are We Still Designing IT Like It's 2012?

Let’s talk about complexity in IT. Not the fun kind—like building a Raspberry Pi-powered coffee machine or arguing over whether Terraform should be capitalized. I mean the kind of complexity that slows teams down, bloats your stack, and makes security people question their career choices. You know the type: five backup platforms, three monitoring tools, two storage vendors “for resilience,” and a bunch of scripts someone wrote in 2019 that nobody’s brave enough to touch. We tell ourselves it’s “best-of-breed,” “cloud-first,” or my personal favorite—“strategic.” But let’s call it what it is: chaos without any direction.

Enter Conway’s Law (aka the Mirror You’ve Been Avoiding)

Melvin Conway dropped this gem in 1967: “Organizations design systems that mirror their own communication structures.” Still true. Still brutal. If your company has six teams that don’t talk to each other except through passive-aggressive Jira tickets, your architecture is going to reflect that—fragmented, redundant, over-engineered, and impossible to secure. Conway’s Law isn’t just a quirky observation. It’s a diagnostic tool. If your architecture feels like a group project gone off the rails, chances are it’s because your org works that way too.

Cloud Chaos: Now with More Vendors!

And just when you thought it couldn’t get worse—we bring in the cloud. Or clouds. Somewhere between “cloud-first” and “cloud-only,” we lost the plot. We started treating hyperscalers like interchangeable gas stations: need compute? Just pull over at the nearest one. We’ve seen it:
- Migrations from AWS to Azure to GCP like it’s some weird tech pilgrimage
- Applications lifted and shifted with zero refactoring
- Hybrid architectures that “just sort of happened”

Look, the cloud’s not the problem. I like cloud, and I believe it is here to stay. But designing 100% for the cloud without actually understanding your why? That’s Conway’s Law, just with bigger invoices. Even worse? Bouncing between cloud providers because someone read a Forrester report and got nervous about lock-in. That’s not strategy—that’s cloud-induced panic.

The Two-Vendor Lie We Keep Telling Ourselves

Ah yes, the old two-vendor strategy. Meant to be safe. Designed to reduce dependency. What it really does? Doubles your complexity and halves your team’s sanity.
- Two vendors = two playbooks, two consoles, two support teams blaming each other
- It’s not more resilient—it’s just more confusing
- Gartner even calls it out: more vendors = more risk, not less

If you think managing multiple tools with overlapping functions is safer than consolidation, congrats—you’ve just invented the world’s most expensive “Oops” button.

Manual ≠ Secure. It Just Feels That Way

Let’s talk about the weird rituals we still do in the name of security:
- Manually copying data to “safe zones”
- Turning off network access like it’s a security blanket
- Spinning up siloed sandboxes to avoid risk

It’s not protection. It’s procrastination. Manual controls introduce human error, waste time, and don’t scale. If your “strategy” depends on someone remembering to toggle a firewall rule every Thursday, you’re not secure—you’re just lucky. And outsourcing that chaos to a vendor doesn’t make it better. Handing over management to a provider that’s Frankensteined a bunch of loosely integrated tech with baling wire and hope isn’t a strategy—it’s just renting someone else’s mess. If there’s no real roadmap, no cohesion, no architectural vision—it’s not a partnership. It’s a future support ticket waiting to happen.
Hybrid Cloud Needs Purpose, Not Permission

Hybrid isn’t a backup plan. It’s a design decision. Too many shops end up hybrid by accident—because apps don’t get refactored, budgets don’t stretch, or politics get in the way. The result is an environment that’s technically working but operationally exhausting. A good hybrid strategy is opinionated. You should know:
- What runs where (and why)
- How data moves
- What your north-star architecture looks like

If you don’t have answers to those? You’re not doing hybrid—you’re doing hope.

So What Do We Do About It?

We simplify. On purpose. Relentlessly.
- Design like a startup, not a committee. Keep the stack lean. Less is more when you have tools that actually integrate.
- Use Conway’s Law in reverse. Want systems that work together? Build teams that do too. Break silos before they become dependencies.
- Treat cloud like architecture, not an escape route. Cloud is amazing if you design for it. Otherwise, it’s just someone else’s complexity in your billing statement.
- Stop solving people problems with platform purchases. Most complexity isn’t technical—it’s cultural. No vendor can fix your org chart.

Final Thought: Complexity Is a Tax. Stop Paying It.

Every extra platform, every vendor “just in case,” every manual handoff is a tax. And it’s compounding interest on your ability to execute. If you want to move fast, secure your data, and stay sane, you’ve got to design with purpose. That means fewer tools, better alignment, and architectures that reflect how you want to operate, not how your politics force you to. You want resilience? Start with intention.

But what I’m really curious about is your perspective:
- How are you dealing with complexity?
- Is hybrid working for you—or just holding you hostage?
- Have you successfully simplified your architecture without sacrificing flexibility?

Let's make this a real convo—not another “cloud is the answer” thread.

—Zane Allyn