Why You Should Make Adopting Current Long-Life Releases a Habit
Hey everyone — At Pure Storage, we see many customers who still think about storage upgrades like old-school firmware: “set it and forget it” until it’s forced to change. But FlashArray isn’t firmware: it’s modern, continually improved, and designed as an agile, secure, predictable data platform. That means it’s time to make adopting recent Long-Life Releases (LLRs) a regular habit, not just something you reluctantly do when you have to. LLRs should be your standard practice:

✅ Fresh Features, Mature Code
Each LLR is built on code that’s been running in production for at least 8 months before it branches. That means you get the innovations from recent Feature Releases — tested, stabilized, and production-proven. You avoid missing out on valuable improvements while still benefiting from enterprise-grade predictability.

✅ Consistent Security and Compliance
Falling too far behind, even on an LLR, can expose you to security vulnerabilities and unsupported configurations. By habitually adopting recent LLRs, you ensure you’re in the supported window for critical patches and compliance audits, and you avoid fire drills later.

✅ Reduce Technical Debt
Getting stuck on very old LLRs can build up technical debt. Skipping multiple versions makes your next upgrade harder, riskier, and more time-consuming. Keeping up with recent LLRs means smoother transitions, less operational friction, and easier adoption of the next improvements.

✅ Keep Innovation Flowing
The idea that an LLR is “old code” is a myth. Recent LLRs contain carefully chosen, well-hardened feature improvements. If you wait too long, you lock yourself out of meaningful performance, efficiency, and capability gains that your peers are already using.

✅ Break the Firmware Mentality
FlashArray is software-driven and has a rapid but reliable development model. Treat it like outdated firmware, and you miss the true value. The LLR program is designed precisely to let you safely adopt modern features while maintaining enterprise-grade stability and a predictable cadence.

Bottom line? Adopting recent Long-Life Releases, habitually, is the best way to get modern features, maintain security, reduce upgrade risk, and keep your environment aligned with Pure’s best practices. You deserve innovation and peace of mind. Don’t settle for less by sticking with outdated code.

If you want help reviewing which LLR is right for you, or understanding the timelines, just reach out — we’re here to help you stay current, secure, and ahead of the game.

ActiveCluster for File
We’re proud to announce the availability of ActiveCluster for file, Everpure’s premier business continuity solution and a fundamental enabler of our Enterprise Data Cloud vision, where Service Level Agreements define what storage, network, and compute resources are assigned dynamically to application data sets, rather than a hardware-to-app architecture. With ActiveCluster for file, Everpure is extending the benefits of data mobility, continuous access, and policy-driven management to file workloads.

What is ActiveCluster?
Everpure launched ActiveCluster in 2017, and it rapidly took the mission-critical, enterprise block storage world by storm. ActiveCluster enabled enterprise customers with the most demanding block workloads to deploy synchronous, always-available, always up-to-date LUNs or volumes to hosts stretched across geographic distances. What set ActiveCluster apart from the existing solutions at the time, and even now, is how simple Everpure RTO-0 and RPO-0 solutions are to set up, and how flexible and adaptable to ever-changing business needs these data sets remain after being deployed on Everpure Fusion fleets. Today, we’re adding file protocol support (NFSv3, NFSv4.1, SMB 2.0, and SMB 3.0 with continuously available shares) to our ActiveCluster solution.

Realms as a new container
ActiveCluster for file utilizes a new, high-level container called a Realm to synchronously mirror both user data and the storage configuration information necessary to provide data access to authorized users on either side of the stretched file system(s). Realms are a handy way to group applications with similar Recovery Point Objectives and similar Recovery Time Objectives together.

Realm Synchronous Replication
The act of synchronously mirroring both the user data and storage configuration information across two different FlashArrays is called ‘stretching’. Similar to how a pod is stretched across two FlashArrays, a Realm can be stretched between any pair of FlashArray systems with no more than 11 ms average Round Trip Time on their array replication links. Either Fibre Channel or Ethernet array replication links can be used to replicate file data synchronously.

Figure 1. ActiveCluster for file can be deployed in different modalities

Realms as namespaces for policies
Realms contain unique snapshot, audit logging, replication, and export policies. These policies are only viewable and attachable to storage objects within the Realm, creating a building block for hosting multiple different end customers or tenants on Fusion fleets. These policies are automatically replicated over to the other array if the Realm is stretched, reducing operator burden in failover scenarios. To prevent split-brain scenarios (where a network partition in the array links or replication links stops communication between the pair of FlashArrays), Everpure’s fully managed Cloud Mediator service will determine which remaining FlashArray controller pair can process writes, and which array will not. Unlike other business continuity solutions, ActiveCluster customers don’t have to worry about patching or maintaining the security of separate VMs to act as a mediator service to prevent split-brain scenarios.

Multiple servers supported per Realm, different IDPs allowed
Each Realm can have one or more servers configured in it, which act as protocol endpoints for clients and hosts to connect to. Each server in a Realm can have a different IP address, or utilize a different Identity Provider Service.
When a failover condition occurs (like a site disaster on one side) and the clients in either data center are on the same Ethernet segment or broadcast domain, the automatic failover will emit a gratuitous Reverse Address Resolution Protocol (RARP) request, mapping the new MAC address of the Ethernet interface on the surviving side to the same IP address already in use. Applications may see a small pause in reads or writes being serviced, but will not have to re-issue I/O or remount / remap shares or exports. Managed directory quotas can also be used for any filesystem or managed directory attached to the servers in the Realm being stretched. These quota policies automatically get replicated with the user data, so the same customer experience in terms of usable space exists both before and after an automatic failover.

New Guided Setup available for ActiveCluster for file
Deploying new ActiveCluster for file solutions can occur in less than five minutes on already racked and powered arrays. A Guided Setup wizard is available to quickly capture the necessary information to stretch a Realm. This wizard can be started from multiple locations within the Purity GUI. ActiveCluster for file fully takes advantage of Fusion fleets and the ability to manage storage infrastructure as code, programmatically and via policy.

Realms are not tied to hardware, and can ‘float’
Realms with ActiveCluster for file support not only provide 0-RTO and 0-Recovery Point Objective at the storage layer for mission-critical applications, but they also provide a mechanism to transparently move the data and configuration in the Realm non-disruptively somewhere else within your fleet, whether it’s follow-the-sun round-robin hops, where the Realm’s location changes depending on the time of day, or a move as part of a data-center migration. Coupled with Fusion, Everpure’s intelligent control plane, ActiveCluster for file enables workloads, application data, and their configuration information to dynamically and seamlessly move to the right location, at the right time, at the right granularity. Seamless movement across greater geographic distances can be accomplished by stretching and unstretching the same Realm between different arrays, as long as the RTT latency between them is <11ms.

Service Level Agreements are the lingua franca of the Enterprise Data Cloud
Service Level Agreements are the natural language of business owners, and are integral for companies who want to move away from managing storage arrays to managing their business data. They capture answers to questions like “How fast do you need access to this data? Does it need to be backed up or otherwise protected against site-wide failure?” SLAs are what form our vision behind the App-to-data operational model. This App-to-data model takes abstract, high-level business requirements as input, and then automatically configures and deploys the required storage services to meet the service level agreement just defined. A Fusion fleet manager’s perspective is one of many different application tiles and their health, not just a series of HA pairs spread out across different data centers. Data management operations, like instant backups, cloning, and movement, are applied as “verbs” to the application data set’s name or workload ID, and not to a mismatched storage container whose hardware boundaries impose limits on your app team.
An intelligent, unified control plane manages and enforces SLAs across the fleet autonomously, like a modern cloud operating model, but one that can be deployed in any modality: on-prem, in the cloud, or hybrid. This scalable model, with Fusion’s intelligent control plane, supports ALL workloads, from modern AI workloads, containers, and high-performance workloads to extremely large image or rich media archives. This is an Enterprise Data Cloud: discrete nodes tied loosely together, where Service Level Definitions define autonomous system behavior. Stop managing your storage arrays, and start managing your data.

Learn more about ActiveCluster for file:
- Read the support documentation for Purity 6.12.0
- Test and deploy Fusion fleets and file presets
- Ask your account executive or system engineer for a demo!

Top 10 Reasons to Love Purity 6.9
(Because 6.7 is so 2024)

10. 🏋️♂️ Long-Life Release means it’s supported until June 2028 — which is about three years longer than that gym membership you swore you’d use.
9. 🌐 Works with all the latest FlashArray platforms, AWS, Azure… pretty much everything except your toaster (for now).
8. 🕵️♂️ Security updates so strong, even your data will feel like it’s in the witness protection program.
7. 🚀 Turn on File Services without downtime or approval from Pure product management — finally, a software update you don’t have to schedule for “that one weekend in Q4 when no one’s looking.”
6. 🙌 Encourages Self-Service Upgrades. Translation: fewer support tickets, more “Look, Mom, I did it myself!” moments.
5. 🔑 Default password warning. Yes, “pureuser” is adorable… until it becomes a resume-generating event.
4. 🍍 VMware improvements so good, your virtual machines just sent a fruit basket.
3. 🎛️ Fusion, Fusion, Fusion! Which is like having a universal remote for your data… without the panic of losing it between the couch cushions.
2. 📜 REST API 2.x release notes so thorough, they make War and Peace look like a sticky note.
🏆 You get to tell your boss you're on a "Long-Life Release," which sounds much more impressive than "I'm not doing an upgrade for a while."

Check out the release notes for more! https://support.purestorage.com/bundle/m_flasharray_release/page/FlashArray/FlashArray_Release/01_Purity_FA_Release_Notes/topics/concept/c_purityfa_69x_release_notes.html

A Platform for the Future
So, I’ve been at Pure Storage for a few seasons now. Hint: when I joined, the “//M” generation of FlashArray was still a little wet behind the ears, and was then styled as a lowercase “//m” (bonus points if you can guess the year I joined in your reply!).

One of the things that has always impressed me the most about Pure is how purposeful and thoughtful our development and engineering teams are. Most of us here had realized for years that Pure isn’t a collection of disparate products and features–it’s a real, integrated, intelligent storage platform! One OS (Purity). One flash architecture (DirectFlash). A universal NDU architecture, both software and hardware (Evergreen). Last year, we launched the Pure Storage Platform to make that engineering vision official.

Today's announcements mark another huge milestone in the evolution of the Pure Storage Platform. We’ve unified operations across distributed infrastructures, maximized efficiency for AI, and embedded cyber resilience at every layer. Our engineers have outdone themselves once again. It all works together, so your organization can master its data, while you get more done.

Pure Storage is helping enterprises turn data into a true business advantage. From edge to core to cloud, the message is clear: data should be unified, efficient, and resilient — so organizations like yours can innovate without compromise.

Find out more about what we announced today in our blog. And let us know what you think below!

FlashArray File Multi-Server
File support on FlashArray gets another highly demanded feature. With version 6.8.7, Purity introduces the concept of a Server, which ties together exports, directory services, and all the other objects required for this setup, namely DNS configuration and networking. From this version onwards, all directory exports are associated with exactly one server.

To recap, a server has associations to the following objects:
- DNS
- Active Directory / Directory Service (LDAP)
- Directory Export
- Local Directory Service

Local Directory Service is another new entity introduced in version 6.8.7, and it represents a container for Local Users and Groups. Each server has its own Local Directory Service (LDS) assigned to it, and the LDS also has a domain name, which means "domain" is no longer the hardcoded name of the local domain, but a user-configurable option.

All of these changes imply a lot of differences in the user experience. Fortunately, most of them come down to adding a reference or the ability to link a server, and our GUI now contains a new Server management page, including a Server details page, which puts everything together and makes a server configuration easy to understand, validate, and modify.

One question you might be asking right now is: can I use File services without Servers? The answer is: no, not really. But don't be alarmed. Significant effort has been made to keep all commands and flows backwards compatible, so unless some script is parsing exact output and needs to be aligned because a new "Server" column has been added, there shouldn't be any need to change them. How did we manage to do that? A special server called _array_server has been created, and if your configuration has anything file related, it will be migrated during the upgrade.

Let me also offer a taste of how the configuration could look once the array is updated to the latest version.

List of Servers

```
# pureserver list
Name           Dns         Directory Services  Local Directory Service  Created
_array_server  management  -                   domain                   2025-06-09 01:00:26 MDT
prod           prod        -                   prod                     2025-06-09 01:38:14 MDT
staging        management  stage               staging                  2025-06-09 01:38:12 MDT
testing        management  testing             testing                  2025-06-09 01:38:11 MDT
```

List of Active Directory accounts

Since we can join multiple AD servers, we can now have multiple AD accounts, up to one per server.

```
# puread account list
Name           Domain            Computer Name  TLS       Source
ad-array       <redacted>.local  ad-array       required  -
prod::ad-prod  <redacted>.local  ad-prod        required  -
```

ad-array is the configuration for _array_server and, for backwards compatibility reasons, the server-name prefix hasn't been added there. The prefix is present for the account connected to server prod (and to any other server).

List of Directory Services (LDAP)

Directory services also got slightly reworked, since before 6.8.7 there were only two configurations, management and data. Obviously, that's not enough for more than one server (management is reserved for array management access and can't be used for File services). After the 6.8.7 release, it's possible to completely manage Directory Service configurations and link them to individual servers.

```
# pureserver list
Name           Dns         Directory Services  Local Directory Service  Created
_array_server  management  -                   domain                   2025-06-09 01:00:26 MDT
prod           prod        -                   prod                     2025-06-09 01:38:14 MDT
staging        management  stage               staging                  2025-06-09 01:38:12 MDT
testing        management  testing             testing                  2025-06-09 01:38:11 MDT
```

Please note that these objects are intentionally not enabled / not configured.
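As mentioned above, the main backwards-compatibility concern is scripts that parse exact CLI output, since a new "Server" column can now appear. If you do script against the CLI over SSH, parsing by header columns rather than by position copes with added columns without breaking. Here is a minimal sketch in Python, using the `pureserver list` output shown above as sample data; it is illustrative only, not an official tool:

```python
import re

def parse_cli_table(output: str) -> list[dict]:
    """Parse column-aligned Purity CLI output into a list of dicts.

    Columns are separated by two or more spaces, so single spaces inside
    values (e.g. 'Local Directory Service' or timestamps) are preserved.
    """
    lines = [line for line in output.splitlines() if line.strip()]
    header, *rows = lines
    columns = re.split(r"\s{2,}", header.strip())
    return [dict(zip(columns, re.split(r"\s{2,}", row.strip()))) for row in rows]

# Sample taken from the `pureserver list` output above.
sample = """\
Name           Dns         Directory Services  Local Directory Service  Created
_array_server  management  -                   domain                   2025-06-09 01:00:26 MDT
prod           prod        -                   prod                     2025-06-09 01:38:14 MDT
"""

for server in parse_cli_table(sample):
    # Columns added in newer releases (such as the new 'Server' column on other
    # commands) simply become extra keys instead of shifting positional fields.
    print(server["Name"], "->", server["Local Directory Service"])
```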
List of Directory exports

```
# puredir export list
Name                        Export Name  Server   Directory                     Path  Policy                  Type  Enabled
prod::smb::accounting       accounting   prod     prodpod::accounting:root      /     prodpod::smb-simple     smb   True
prod::smb::engineering      engineering  prod     prodpod::engineering:root     /     prodpod::smb-simple     smb   True
prod::smb::sales            sales        prod     prodpod::sales:root           /     prodpod::smb-simple     smb   True
prod::smb::shipping         shipping     prod     prodpod::shipping:root        /     prodpod::smb-simple     smb   True
staging::smb::accounting    accounting   staging  stagingpod::accounting:root   /     stagingpod::smb-simple  smb   True
staging::smb::engineering   engineering  staging  stagingpod::engineering:root  /     stagingpod::smb-simple  smb   True
staging::smb::sales         sales        staging  stagingpod::sales:root        /     stagingpod::smb-simple  smb   True
staging::smb::shipping      shipping     staging  stagingpod::shipping:root     /     stagingpod::smb-simple  smb   True
testing::smb::accounting    accounting   testing  testpod::accounting:root      /     testpod::smb-simple     smb   True
testing::smb::engineering   engineering  testing  testpod::engineering:root     /     testpod::smb-simple     smb   True
testing::smb::sales         sales        testing  testpod::sales:root           /     testpod::smb-simple     smb   True
testing::smb::shipping      shipping     testing  testpod::shipping:root        /     testpod::smb-simple     smb   True
```

The notable change here is that the Export Name and the Name now have slightly different meanings. Pre-6.8.7 versions used the Export Name as a unique identifier, since we had a single (implicit, now explicit) server, which naturally created a scope. Now the Export Name can be the same as long as it's unique within the scope of a single server, as seen in this example. The Name is different and provides an array-unique export identifier. It is a combination of the server name, the protocol name, and the export name.

List of Network file interfaces

```
# purenetwork eth list --service file
Name     Enabled  Type  Subnet  Address  Mask  Gateway  MTU   MAC                Speed     Services  Subinterfaces  Servers
array    False    vif   -       -        -     -        1500  56:e0:c2:c6:f2:1a  0.00 b/s  file      -              _array_server
prod     False    vif   -       -        -     -        1500  de:af:0e:80:bc:76  0.00 b/s  file      -              prod
staging  False    vif   -       -        -     -        1500  f2:95:53:3d:0a:0a  0.00 b/s  file      -              staging
testing  False    vif   -       -        -     -        1500  7e:c3:89:94:8d:5d  0.00 b/s  file      -              testing
```

As seen above, file network VIFs now reference a specific server. (This list is particularly artificial, since none of them is properly configured or enabled; the main message is that a file VIF now "points" to a specific server.)

Local Directory Services

Local Directory Service (LDS) is a newly introduced container for Local Users and Groups.
```
# pureds local ds list
Name     Domain
domain   domain
testing  testing
staging  staging.mycorp
prod     prod.mycorp
```

As already mentioned, all local users and groups now have to belong to an LDS, which means the management of those also contains that information.

```
# pureds local user list
Name           Local Directory Service  Built In  Enabled  Primary Group   Uid
Administrator  domain                   True      True     Administrators  0
Guest          domain                   True      False    Guests          65534
Administrator  prod                     True      True     Administrators  0
Guest          prod                     True      False    Guests          65534
Administrator  staging                  True      True     Administrators  0
Guest          staging                  True      False    Guests          65534
Administrator  testing                  True      True     Administrators  0
Guest          testing                  True      False    Guests          65534
```

```
# pureds local group list
Name              Local Directory Service  Built In  Gid
Audit Operators   domain                   True      65536
Administrators    domain                   True      0
Guests            domain                   True      65534
Backup Operators  domain                   True      65535
Audit Operators   prod                     True      65536
Administrators    prod                     True      0
Guests            prod                     True      65534
Backup Operators  prod                     True      65535
Audit Operators   staging                  True      65536
Administrators    staging                  True      0
Guests            staging                  True      65534
Backup Operators  staging                  True      65535
Audit Operators   testing                  True      65536
Administrators    testing                  True      0
Guests            testing                  True      65534
Backup Operators  testing                  True      65535
```

Conclusion

I showed how the FA configuration might look without going into much detail about how to actually configure or test these setups; still, this article should provide a good overview of what to expect from the 6.8.7 version. There is plenty of information about this particular aspect of the release in the updated product documentation. Please let me know if there is any demand to deep-dive into any aspect of this feature.

Why Object Storage Still Matters
In Part 2, I wrote a line that, at the time, felt almost like a side comment — something I typed without fully appreciating how much it would change the direction of the story: “BREAKING NEWS: The FlashArray now supports Object??? What in the world? I may need to write an article about that!!”

That reaction wasn’t planned, and it definitely wasn’t me being clever. It was me looking at the GUI and thinking, “that can’t be right… can it?” It didn’t line up with how I’ve been modeling storage architectures in my head for years, which usually means one of two things: either something fundamentally changed… or I’ve been confidently wrong about part of this for a while.

And if I’m being completely honest, there was also a second reaction happening in parallel — one that I didn’t write down at the time because it sounded slightly ridiculous even in my own head: “Wait… do I actually understand why object storage exists in the first place? And more importantly… what exactly was wrong with files?”

That’s the part nobody likes to admit out loud. We’ve all spent years confidently explaining block, file, and object as if we were born with that knowledge, when in reality most of us learned it incrementally, retroactively, and with just enough conviction to sound credible in front of a customer. Object storage, in particular, has always carried this aura of inevitability — like of course it’s better, of course it scales, of course it’s what modern applications need — without always forcing us to question why the previous model stopped being enough.

Because for as long as most of us have been designing infrastructure, object storage has not simply been another protocol layered onto an existing system. It has represented a fundamentally different way of organizing and accessing data, one that required its own architectural approach, its own scaling model, and, more often than not, its own dedicated platform. The separation between block, file, and object was not arbitrary; it was a reflection of how deeply different those paradigms were in terms of metadata handling, access patterns, and performance expectations.

This is precisely why platforms such as Everpure FlashBlade exist in the first place. They were not created as extensions of traditional storage systems but as purpose-built architectures designed to treat unstructured data — and particularly object data — as a first-class citizen. The use of distributed metadata services, sharded across independent nodes, combined with a key-value store storage model, allows such systems to achieve levels of parallelism and throughput that simply cannot be replicated within a controller-based design. In that context, object storage is not something that is “added” to the system; it is the system.

Which is why seeing S3 support appear on FlashArray required a pause. Not excitement. Not skepticism alone. Something closer to intellectual friction.

Reconciling Two Architectural Worlds

The most important step in understanding what FlashArray has introduced is to resist the temptation to treat it as a direct comparison to FlashBlade. These aren’t two different ways of solving the same problem. They’re two different answers to two different problems—and pretending otherwise is where people get themselves into trouble.

FlashBlade is built for object, not adapted to it. S3 talks directly to a distributed engine that thinks in objects, not files pretending to be objects.
Metadata is spread across blades instead of becoming a centralized choke point, and the whole system scales the way modern workloads actually need it to. There’s no file system layer to fight with, no directory structure to navigate, no POSIX semantics getting in the way. It just does what you’d expect when you remove all of that: it goes fast, it scales cleanly, and it keeps up with workloads like HPC, AI, and analytics without breaking a sweat.

FlashArray takes a very different path, and in reality, it’s not what most people expect. It doesn’t try to reinvent itself as an object platform, and it doesn’t throw an S3 gateway in front of the array and call it a day. With Purity 6.10.5+, S3 just shows up as another protocol the system understands, right next to block and file. That distinction matters more than it seems. This isn’t something duct-taped on the side — it’s part of the same control plane, the same data path, the same system you’ve already been running.

But let’s not pretend it turned into FlashBlade overnight. This is still a controller-driven architecture. The primary controller does the heavy lifting — handling requests, authenticating them, coordinating operations — before anything actually hits the storage engine. Which means it behaves differently, especially as workloads scale. So it ends up in this interesting middle ground. Not a native object system in the pure sense, but not a hack either. Just a different way of exposing what’s already there.

The Translation Layer and Its Consequences

It would be irresponsible to discuss FlashArray S3 without explicitly addressing the implications of this design. Even with its native integration into Purity, S3 operations are still subject to the realities of a controller-bound architecture. Every request must be processed, authenticated, and coordinated before it is executed, introducing a measurable difference in behavior compared to both native block operations and distributed object systems.

The most immediate effect is latency. While FlashArray continues to deliver sub-150 microsecond performance for block workloads, S3 operations typically operate at higher latencies (in the 1 millisecond range) due to the additional processing steps involved. This is not a flaw; it is the natural outcome of introducing a protocol that was designed for scale and flexibility into a system optimized for low-latency transactional workloads.

Metadata handling further reinforces this distinction. FlashBlade distributes metadata across its architecture, enabling massive parallelism and consistent performance at scale. FlashArray processes metadata through its controller framework, which introduces natural serialization points under high concurrency. As workloads become increasingly metadata-heavy — particularly with small objects — this difference becomes more pronounced.

The system also enforces clearly defined operational limits to maintain predictable performance. As of Purity 6.10.5+, FlashArray supports up to 250 S3 buckets per array and a maximum of 1,000,000 objects per bucket.

FlashArray Object Store Limits

Object storage operates at the array scope and does not integrate with multi-tenancy or “realms”, which has implications for service provider models and strict tenant isolation requirements. These constraints are not arbitrary limitations; they are guardrails that ensure the system behaves consistently within its architectural boundaries.
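To make the "S3 is just another protocol" point concrete, here is a minimal sketch of talking to a FlashArray bucket with boto3. The endpoint URL, access keys, and bucket name are placeholders for whatever you configure on your own array; the boto3 calls themselves are standard S3 API and nothing here is specific to Purity.

```python
import boto3

# Placeholders: substitute the array's S3 endpoint and the access key pair
# you created on the FlashArray, plus your own bucket name.
s3 = boto3.client(
    "s3",
    endpoint_url="https://fa-s3.example.local",   # hypothetical endpoint name
    aws_access_key_id="FA_ACCESS_KEY",
    aws_secret_access_key="FA_SECRET_KEY",
    # verify="/path/to/ca-bundle.pem",  # if the array presents a private CA cert
)

bucket = "staging-artifacts"
s3.create_bucket(Bucket=bucket)

# Write and read back a small object, exactly as you would against any S3 target.
s3.put_object(Bucket=bucket, Key="builds/app-1.2.3.tar.gz", Body=b"example payload")
body = s3.get_object(Bucket=bucket, Key="builds/app-1.2.3.tar.gz")["Body"].read()
print(len(body), "bytes read back")

# The published limits (250 buckets per array, 1,000,000 objects per bucket) are
# worth designing around; listing keys is one cheap way to sanity-check counts.
paginator = s3.get_paginator("list_objects_v2")
count = sum(len(page.get("Contents", [])) for page in paginator.paginate(Bucket=bucket))
print(f"{bucket}: {count} objects")
```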
Where the Architecture Becomes Secondary

Having established those boundaries, the conversation naturally shifts from “how it works” to “why it matters”. In many enterprise environments, particularly within SLED organizations, the challenge is not achieving exabyte-scale throughput or supporting billions of objects. The challenge is delivering capabilities in a way that is operationally sustainable, economically efficient, and aligned with existing infrastructure.

This is where FlashArray’s approach becomes compelling. By exposing object storage within the same platform that already supports block and file workloads, it eliminates the need to introduce a separate system, a separate operational model, and a separate set of dependencies. The same management interface, the same automation framework, and the same data services extend across all protocols.

More importantly, object data inherits the full set of Purity capabilities. Global inline deduplication and compression apply to S3 workloads, significantly improving storage efficiency compared to many object-native platforms. SafeMode snapshots extend immutability to object storage, providing a critical layer of protection against ransomware. ActiveCluster, combined with ActiveDR, enables a three-site resilience model that ensures data availability across multiple locations with zero RPO between primary sites. These are not incremental improvements. They represent a shift in how object storage can be consumed within an enterprise.

Practical Use Cases in a Unified Model

When viewed through this lens, the use cases for FlashArray S3 become both clear and grounded in reality.

Development and Staging Environments
Some applications rely on S3 APIs but do not require massive scale. FlashArray provides a consistent and integrated object interface without introducing additional infrastructure. Developers can build and test against a familiar model while remaining within the same operational environment.

Backup and Recovery Workflows
FlashArray S3 enables modern data protection strategies that leverage object storage while benefiting from flash performance, deduplication, and indelible snapshots. This combination improves both recovery times and storage efficiency.

Tier-two repositories and application-integrated storage represent another natural fit. Workloads such as document management systems, logs, and archival data often require object semantics but do not justify the higher cost of a dedicated object platform. Consolidating these workloads onto FlashArray simplifies operations while maintaining reliability and performance.

Where the Boundaries Still Matter

None of this diminishes the importance of selecting the appropriate platform for workloads that demand a different architecture. High-performance AI pipelines, large-scale analytics environments, and use cases requiring massive parallelism remain firmly within the domain of FlashBlade. The ability to scale performance linearly, distribute metadata across many nodes, and support billions of objects is not optional in these scenarios — it is essential. What has changed is not the relevance of those systems, but the necessity of deploying them for every object storage use case.

A Subtle but Significant Shift

The introduction of S3 on FlashArray does not represent a replacement of one architecture with another. It represents a convergence of capabilities within a unified operational framework. Object storage, in this model, is no longer a destination that requires its own platform.
It becomes a capability — one of several ways to access and manage data within the same system. That shift is easy to overlook, but its implications are significant. It allows organizations to design around outcomes rather than protocols, to reduce complexity without sacrificing capability, and to align infrastructure more closely with the needs of modern applications.

Closing Reflection

Looking back at that line in Part 2, it is clear that the reaction was not just about a new feature appearing in the interface. It was about the recognition — however incomplete at the time — that something foundational was beginning to change. Object storage did not suddenly become simpler, nor did it lose the architectural complexity that defines it. What changed is where it lives. And once that becomes clear, you start asking a slightly uncomfortable but very honest question: If this works… and it works well enough for most of what I actually need… why was I so convinced it had to live somewhere else in the first place?

That is usually where the interesting work begins.

Appreciate you reading.

Dmitry Gorbatov
© 2025 Dmitry Gorbatov | #dmitrywashere

Stop Running File Servers on VMs
Dmitry Gorbatov, Apr 06, 2026

One of the superstar Pre-Sales Systems Engineers on my team was in a customer meeting not too long ago, walking through what was, by all accounts, a well-run environment. The team knew what they were doing, the infrastructure was stable, and nothing stood out as particularly problematic. It was one of those conversations where everything feels “fine,” which in our world usually means there are inefficiencies hiding in plain sight.

Then he started asking questions about enterprise file services. They were running a couple of Windows Server virtual machines on top of VMware vSphere, serving SMB shares to the rest of the organization. Again, nothing unusual there. This is still the default design in a lot of places, and it works well enough that nobody feels compelled to question it.

But as the meeting went on, a few details started to surface. One of the VMs was consistently running hot during backup windows. Another one hadn’t been patched in a while because nobody wanted to risk disrupting access to shared data. The storage policies applied at the VM layer didn’t quite line up with what was actually configured on the array. And there was an unspoken understanding that maintaining these systems was just part of the job — something you deal with, not something you optimize.

What made it more interesting was that the same environment had an Everpure FlashArray running their critical workloads. It was handling databases, transactional systems, and anything else that required consistent performance and reliable data services. It was protected, replicated, and trusted. File services, however, were living on top of virtual machines, with their own lifecycle (please, please… don’t say VMware snapshots), their own dependencies, and their own set of operational overhead.

That disconnect is what stuck with me. So instead of continuing the theoretical discussion about architecture and “best practices,” I went back to my lab and decided to try something very simple. I wanted to see what would actually happen if I enabled file services directly on the array and treated it as a first-class file platform instead of assuming that role belonged to something else. There was no redesign exercise, no migration plan, and no phased rollout. I wasn’t trying to prove a point on a whiteboard. I just wanted to turn it on and see if the experience matched what we tend to claim in conversations.

Nothing broke. Nothing felt forced. And more importantly, nothing about it felt like a compromise.

This post walks through exactly what I did to enable and run file services on a FlashArray //X20R4 running Purity 6.9.2. The goal is not to explain the architecture in abstract terms, but to show how straightforward it is to take something that already exists in your environment and use it in a way that removes unnecessary complexity.

What I realized (and why this matters)

Once everything was up and running, the first realization was that this is not a workaround or a secondary feature designed to fill a gap. FlashArray File is integrated into the platform in a way that makes it behave like a natural extension of what the system already does well. It uses the same controllers, the same global storage pool, and the same data services that are already in place for block workloads. There is no separate management layer, no additional appliance (remember Data Movers and NAS Personas?), and no need to think about it as something different from the rest of the system.
That by itself is useful, but it is not the most important part. What stood out more was the amount of operational overhead that simply disappeared. When file services run on virtual machines, you inherit everything that comes with them. You are responsible for the guest operating system, including patching cycles, security updates, and the occasional issue that appears at the worst possible time. You are also consuming hypervisor resources and, in many cases, paying for licensing that exists solely to support a function that could be handled elsewhere. On top of that, you end up managing data protection, performance, and capacity in two different places (remember RDMs, or in-guest iSCSI?), which introduces opportunities for inconsistency.

By moving file services onto the array, that entire layer is removed. You are not just changing where the workload runs; you are simplifying how it is operated, protected, and maintained over time.

The second realization was that this approach aligns with where things are clearly heading. Everpure is already extending these capabilities with ActiveCluster for File, which will bring synchronous replication and continuous availability to unstructured data. I do not have that running in my lab yet, but it is not difficult to see the direction. As those capabilities become more widely available, the remaining reasons to maintain separate file platforms will continue to shrink. That will be a conversation for a future post. Let’s tentatively call it Part 3 of the series.

Before you start (the part that actually matters)

Enabling file services on the array is straightforward. The part that tends to create friction is everything that surrounds the configuration, particularly networking and integration with existing services.

The first consideration is the choice of network interfaces. Although the array provides 1GbE management ports, those interfaces are not intended for serving file workloads. Using them for SMB or NFS traffic introduces an artificial bottleneck that will affect performance and, more importantly, perception. File services should be configured on the 10 or 25GbE data ports, which are designed to handle production traffic and provide the throughput expected from the platform.

Here is what my array looked like earlier today: the highlighted ports are ETH10 and ETH11 on both controllers.

Redundancy should be planned, but it does not need to be over-engineered. A simple and reliable starting point is to use at least two ports per controller, ensuring that the configuration remains consistent across both sides. The goal is to achieve predictable failover behavior rather than to build a complex network design that is difficult to troubleshoot.

One concept that is worth understanding early is the File Virtual Network Interface, or File VIF. This is the logical identity of the file service—the IP address that clients use to connect. It is designed to move between controllers as needed, maintaining availability during failover events. Once this concept is clear, the rest of the networking configuration becomes much easier to follow.

My lab was built within budgetary constraints - that means I don’t have separate Ethernet switches and I don’t have the time to build a separate DNS server for FA File Services. Everpure recommends separating file client traffic from management traffic, but that’s a best practice, not a requirement.
Since my lab switch is a single flat, untagged network and the environment is really just 192.168.1.0/24, I will just use the most practical approach: put the FA File VIFs on that same 192.168.1.0/24 network with their own IP addresses.

Here is what I did: I just kept the file VIFs on 192.168.1.0/24 since that is the only real network available. FlashArray expects unique layer-3 subnets and does not support overlapping networks.

DNS

In my specific configuration, I don’t need a new DNS server. My existing management DNS servers can resolve the AD/DC hostnames and the FA File names/computer object. FA File can use the same DNS as management with no extra file-DNS configuration. By default, DNS lookups will go out the management interfaces, so my DNS server just needs to be reachable from the management network. And it is.

Let’s turn the lights on, shall we?

After assigning the IP addresses and enabling the ports, the lights came on.

Important design note

I will use one client-facing VIF IP for the file service, for example:
- File VIF IP: 192.168.1.135
- Netmask: 255.255.255.0
- Gateway: 192.168.1.2 (default gateway)

Do not try to use 192.168.1.131-134 as four separate FA File IPs unless you intentionally want multiple VIFs. The ct*.eth* ports are transport underlay, not the SMB/NFS endpoint IPs.

Configuring a File Server and File VIF

1. Open the File Services server page
- Go to Storage → Servers.

2. Open the default server (_array_server) or create a new file server if you want a dedicated namespace
- Stay on that server’s details page.

3. Create the File VIF
Use physical bonding first; it’s the simplest.
- In the Virtual Interfaces section, click + Create VIF.
- Choose Physical Bonding.
- Select the underlying port pairs:
  - Pair 1: ct0.eth10 and ct1.eth10
  - Pair 2: ct0.eth11 and ct1.eth11
- Name the VIF something simple, e.g. filevip1.
- Enter network settings:
  - IP Address: 192.168.1.135
  - Netmask: 255.255.255.0
  - Gateway: 192.168.1.1
  - Leave VLAN blank since there are no VLANs.
- Save and Enable the VIF.

That creates the client-facing IP for SMB/NFS.

4. Configure DNS
Integration with DNS and Active Directory is another area where a bit of preparation goes a long way. File services rely on proper name resolution and domain integration, and it is important to recognize that file-related DNS settings are separate from the array’s management DNS configuration. The system effectively becomes a participant in the domain as a file server, which means that DNS records, domain join operations, and permissions should be planned accordingly rather than improvised during setup.

Since my DNS is 192.168.1.2 and I want to reuse management DNS:
- Go to the server’s DNS Settings.
- My management DNS is already configured and points to 192.168.1.2.
- If you want to explicitly add file DNS, click + in DNS and enter:
  - Name: file-dns
  - Domain suffix: your AD/domain suffix
  - DNS server: 192.168.1.2
  - Service: file
- Source interface can remain default unless you specifically need file VIF sourcing.

5. Create required DNS A records
On my DNS server 192.168.1.2, I created an A record for the file service name pointing to the File VIF IP:
- Name: fa-file01
- IP: 192.168.1.135

If you are joining AD for SMB/Kerberos:
- Make sure DNS also has A records for all relevant domain controllers.
- Create the A record that matches the AD computer object / FA File service name.

6. Join Active Directory or configure LDAP
If using SMB, use Active Directory.
Go to: Storage → Servers → _array_server. Then look for one of these panels: Remote Directory Service.
- Click Edit Configuration.
- Select Active Directory.
- Enter: Name, Domain, DNS Name, Computer Name, Use Existing Account (if applicable), AD User, Password, TLS Mode.
- Save / Join.

This part took me 2 hours. I was getting some crazy error messages that I’m simply embarrassed to share here. It was not the DNS. It was an NTP server misconfiguration that was causing Kerberos to not authenticate properly. There was a 10 minute time skew between the FlashArray and the domain controller.

7. Create a File System
The file system is the top-level container for your unstructured data.
- GUI Method: Navigate to Storage > File Systems and click the plus sign (+). Enter a name and click Create.
- CLI Method: Use the following command: purefs create <file-system-name>

8. Create a Managed Directory
Managed directories allow you to apply specific policies (like quotas or snapshots) to subfolders within a file system.
- GUI Method: Go to Storage > File Systems. Click on the name of the file system you just created. Select the Directories tab and click the plus sign (+). Enter the directory name and the internal path (e.g., /users).
- CLI Method: Use the following command: puredir create filesystem1:users --path /users

9. Create an Export
The export makes the managed directory accessible to clients over the network.
- GUI Method: Navigate to Storage > Policies > Export Policies. Select an existing policy (e.g., a standard SMB or NFS policy) or create a new one. Within the policy view, click the plus sign (+) to add an export. Select your Managed Directory, choose the appropriate Server (use _array_server for standard configurations), and provide an Export Name (this is the name clients will use to mount the share).
- CLI Method: Use the following command: puredir export create --dir <file-system-name>:<directory-name> --policy <policy-name> --server <server-name> --export-name <client-facing-name>

A quick validation step

At this point, it is worth validating access from a client system. Map the SMB share and perform a simple set of operations—create files, read data, and verify permissions. This is less about testing performance and more about confirming that networking, authentication, and access controls are behaving as expected. In most cases, if the earlier steps around DNS and Active Directory were done correctly, this validation step is uneventful, which is exactly what you want.
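If you prefer to script that check instead of clicking around in Finder or Explorer, below is a minimal sketch using the third-party smbprotocol package (pip install smbprotocol). The server name, export name, and credentials are placeholders for my lab values; this is just one convenient way to validate access, not a required step.

```python
import smbclient  # from the 'smbprotocol' package: pip install smbprotocol

# Placeholders: the file VIF DNS name (fa-file01), an export, and AD credentials.
smbclient.register_session("fa-file01", username="LAB\\svc-validate", password="********")

test_path = r"\\fa-file01\myexport\smoke-test.txt"

# Create, read back, list, and clean up a small file to confirm write/read permissions.
with smbclient.open_file(test_path, mode="w") as f:
    f.write("FlashArray File validation\n")

with smbclient.open_file(test_path, mode="r") as f:
    assert "validation" in f.read()

print(smbclient.listdir(r"\\fa-file01\myexport"))
smbclient.remove(test_path)
```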
And now let the data migration begin. I am actually doing it from my Mac. And it just works!!!

What becomes apparent after completing these steps is how little effort is required to stand up a fully functional file platform on infrastructure that is already in place. Unless, of course, your NTP server crashed. The system behaves predictably, integrates cleanly with existing services, and avoids many of the operational burdens associated with VM-based file servers.

And that is where things start to get interesting. Because everything described so far is still being done manually—selecting where things live, defining configurations, and applying policies one step at a time. It works, and it works well, but it also mirrors the way storage has traditionally been managed. In the next post, I will show what happens when you stop doing these steps manually and let Pure Fusion handle placement, policy, and provisioning instead.

Appreciate you reading.

© 2025 Dmitry Gorbatov | #dmitrywashere

Boosting SQL Server Backup/Restore Performance: Threads and Parallelism
In this post, we’ll discuss day 1 tuning you can do on your database hosts to take full advantage of your new high-performance backup storage. We’ll go over a few tricks around database layout and backup configuration for maximum throughput, discuss some quirks with SMB, and finally discuss using S3 effectively.
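The post itself digs into the details, but as a taste of the "threads and parallelism" theme: striping a backup across multiple files gives SQL Server more writer threads, while MAXTRANSFERSIZE and BUFFERCOUNT control how aggressively it pushes data at the target. Here is a minimal sketch driven from Python with pyodbc; the connection string, database name, and UNC paths are placeholders, and the numbers are illustrative starting points rather than tuned recommendations.

```python
import pyodbc

# Placeholders: adjust the driver/server details, database name, and backup share.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=sql01;DATABASE=master;"
    "Trusted_Connection=yes;TrustServerCertificate=yes",
    autocommit=True,  # BACKUP DATABASE cannot run inside a user transaction
)

# Striping across multiple files creates multiple writer threads; MAXTRANSFERSIZE
# and BUFFERCOUNT control transfer size and outstanding buffers. Values are examples.
backup_sql = """
BACKUP DATABASE [SalesDB]
TO  DISK = N'\\\\backup-share\\sql\\SalesDB_1.bak',
    DISK = N'\\\\backup-share\\sql\\SalesDB_2.bak',
    DISK = N'\\\\backup-share\\sql\\SalesDB_3.bak',
    DISK = N'\\\\backup-share\\sql\\SalesDB_4.bak'
WITH COMPRESSION, MAXTRANSFERSIZE = 4194304, BUFFERCOUNT = 64, STATS = 10;
"""

cursor = conn.cursor()
cursor.execute(backup_sql)
while cursor.nextset():  # drain progress/info results so we wait for completion
    pass
print("Backup completed")
```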
AUE - Key Insights

Good morning/afternoon/evening everyone! This is Rich Barlow, Principal Technologist @ Pure. It was super fun to proctor this AUE session with Antonia and Jon. Hopefully everyone got in all of the questions that they wanted to ask - we had so many that we had to answer many of them out of band. So thank you for your enthusiasm and support. Looking forward to the next one! Here's a rundown of the most interesting and impactful questions we were asked. If you have any more please feel free to reach out.

FlashArray File: Your Questions, Our Answers (Ask Us Everything Recap)

Our latest "Ask Us Everything" webinar with Pure Storage experts Rich Barlow, Antonia Abu Matar, and Jonathan Carnes was another great session. You came ready with sharp questions, making it clear you're all eager to leverage the simplicity of your FlashArray to ditch the complexity of legacy file storage. Here are some of the best insights shared during the session:

Unify Everything: Performance By Design
You asked about the foundation—and it's a game-changer.
- No Middleman, Low Latency: Jon Carnes confirmed that FlashArray File isn't a bolt-on solution. Since the file service lands directly on the drives, just like block data, there's effectively "no middle layer." The takeaway? You get the same awesome, low-latency performance for file that you rely on for block workloads.
- Kill the Data Silos: Antonia Abu Matar emphasized the vision behind FlashArray File: combining block and file on a single, shared storage pool. This isn't just tidy; it means you benefit from global data reduction and unified data services across everything.

Scale, Simplicity, and Your Weekends Back
The community was focused on escaping the complexities of traditional NAS systems.
- Always-On File Shares: Worried about redundancy? Jon confirmed that FlashArray File implements an "always-on" version of Continuously Available (CA) shares for SMB3 (in Purity 6.9/6.10). It’s on by default for transparent failover and simple client access.
- Multi-Server Scale-Up: For customers migrating from legacy vendors and needing lots of "multi-servers," we're on it. Jon let us know that engineering is actively working to significantly raise the current limits (aiming for around 100 in the next Purity release), stressing that Pure increases these limits non-disruptively to ensure stability.
- NDU—Always and Forever: The best part? No more weekend maintenance marathons. The FlashArray philosophy is a "data in place, non-disruptive upgrade." That applies to both block and file, eliminating the painful data migrations you’re used to.
- Visibility at Your Fingertips: You can grab real-time IOPS and throughput from the GUI or via APIs. For auditing, file access events are pushed via syslog in native JSON format, which makes integrating with tools like Splunk super easy.

Conquering Distance and Bandwidth
A tough question came in about supporting 800 ESRI users across remote Canadian sites (Yellowknife, Iqaluit, etc.) with real-time file access despite low bandwidth.
- Smart Access over Replication: Jon suggested looking at Rapid Replicas (available on FlashBlade File). This isn't full replication; it’s a smart solution that synchronizes metadata across sites and only pulls the full data on demand (pull-on-access). This is key for remote locations because it dramatically cuts down on the constant bandwidth consumption of typical data replication.
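One practical note on the auditing point above: because file access events arrive over syslog as JSON, you don't strictly need Splunk to start looking at them. Here is a minimal sketch of a UDP syslog listener that pulls out the JSON payload; the port is arbitrary and the field names at the end are purely illustrative placeholders, not the documented event schema.

```python
import json
import socket

# Bind an unprivileged port for the sketch; point your syslog forwarding at it.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5514))
print("Listening for syslog on UDP 5514")

while True:
    data, peer = sock.recvfrom(65535)
    message = data.decode("utf-8", errors="replace")
    start = message.find("{")  # syslog header first, JSON payload after it
    if start == -1:
        continue
    try:
        event = json.loads(message[start:])
    except json.JSONDecodeError:
        continue
    # Field names below are illustrative placeholders, not the actual schema.
    print(peer[0], event.get("user"), event.get("operation"), event.get("path"))
```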
Ready to Simplify?

FlashArray File Services lets you consolidate your infrastructure and get back to solving bigger problems—not babysitting your storage. Start leveraging the power of a truly unified and non-disruptive platform today! Join the conversation and share your own experiences in the Pure Community.

Why Your Writes Are Always Safe on FlashArray
The promise of modern storage is simple: when the system says “yes,” your data better be safe. No matter what happens next (power failure, controller hiccup, or whatever else the universe throws at you), writes need to stay acknowledged. FlashArray is engineered around this non‑negotiable principle. Let me walk you through how we deliver on it.

Durable First, Fast Always

When your application issues a write to FlashArray, here’s the path it takes:
1. Land in DRAM for inline data reduction (dedupe, compression, you know, the lightweight stuff).
2. Persist redundantly in NVRAM (mirrored or RAID‑6/DNVR, depending on platform), in a log accessible by either controller.
3. Acknowledge to the host ← This is the critical moment.
4. Flush to flash media in the background, efficiently and asynchronously.

Notice what happens between steps 2 and 3? We don’t acknowledge until data is durably persisted in non‑volatile memory. Not “mostly safe,” not “probably fine,” but safe and durable. This isn’t a write‑back cache we’ll get around to flushing later. The acknowledgement means your data survived the critical path and is now protected, period.

Power Loss? No Problem.

FlashArray NVRAM modules include integrated supercapacitors that provide power hold‑up during unexpected power events. When the power drops, these capacitors ensure the buffered write log is safely preserved, with no batteries to maintain and no external UPS required just to have write safety. (A UPS is still recommended, and many sites deploy one for broader data center and facility reasons, but it isn’t necessary for write safety.) Because durability is achieved at the NVRAM layer, we eliminate the most common failure mode in legacy systems: the volatile write cache that promises safety but can’t deliver when it matters most.

Simpler Path with Integrated DNVR

In our latest architectures, we integrate Distributed NVRAM (DNVR) directly into the DirectFlash Module (DFMD). This simplifies the write path (fewer hops, tighter integration, better efficiency) and scales NVRAM bandwidth and capacity with the number of modules. By bringing persistence closer to the media, we’re not just maintaining our durability guarantees; we’re increasing capacity and streamlining the data path at the same time.

Graceful Under Pressure

What happens if write ingress temporarily exceeds what the system can flush to flash? FlashArray applies deterministic backpressure: you may see latency increase, but I/O is not being dropped, so data is not at risk. Background processes yield and lower‑priority internal tasks are throttled to prioritize destage operations, keeping the system stable and predictable. Translation: we slow down gracefully and don't fail unpredictably.

High Availability by Design

Controllers are stateless, with writes durably persisted in NVRAM accessible by either controller. If one controller faults, the peer automatically takes over, replays any in‑flight operations from the durable log, and resumes service. A brief I/O pause may occur during takeover; platforms are sized so a single controller can handle the full workload afterward to minimize disruption to your applications. No acknowledged data is lost. No manual intervention required. Just continuous operation.

Beyond the ACK: Protection on Flash

After the destage, data on flash is protected with wide‑striped erasure coding for fast, predictable rebuilds and multi‑device fault tolerance. And NO hot‑spare overhead.
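None of the following is Purity code. It is only a toy sketch, in Python, of the ordering guarantee described above (persist to a non-volatile log first, acknowledge second, destage in the background), with a local file standing in for NVRAM.

```python
import os
import queue
import threading

class ToyDurableWriteLog:
    """Toy illustration of 'persist before acknowledge'; a file stands in for NVRAM."""

    def __init__(self, log_path: str):
        self._log = open(log_path, "ab")
        self._destage_queue: "queue.Queue[bytes]" = queue.Queue()
        threading.Thread(target=self._destage_worker, daemon=True).start()

    def write(self, payload: bytes) -> str:
        # Steps 1-2: land the write in the log and force it to stable media.
        self._log.write(len(payload).to_bytes(4, "big") + payload)
        self._log.flush()
        os.fsync(self._log.fileno())      # the durability point
        # Step 4 happens later, asynchronously, off the acknowledgement path.
        self._destage_queue.put(payload)
        # Step 3: only now is it safe to acknowledge to the host.
        return "ACK"

    def _destage_worker(self) -> None:
        while True:
            payload = self._destage_queue.get()
            # Background destage to long-term media would happen here.
            self._destage_queue.task_done()

log = ToyDurableWriteLog("/tmp/toy_nvram.log")
print(log.write(b"application data"))  # returns only after the payload is durable
```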
The Bottom Line

Modern flash gives you incredible performance, but performance means nothing if your data isn't safe. FlashArray's architecture makes durability the first principle—not an optimization, not an add-on, but the foundation everything else is built on. When FlashArray says your write is safe, it's safe. That's not marketing. That's engineering.

This approach to write safety is part of Pure's commitment to Better Science, doing things the right way, not the easy way. We didn't just swap drives in an existing architecture; we reimagined the entire system from the ground up, from how we co-design hardware and software with DirectFlash to how we map and manage petabytes of metadata at scale.

Want to dive deeper?
- Better Science, Volume 1 — Hardware and Software Co‑design with DirectFlash: https://blog.purestorage.com/products/better-science-volume-1-hardware-and-software-co-design-with-directflash/
- Better Science, Volume 2 — Maps, Metadata, and the Pyramid: https://blog.purestorage.com/perspectives/better-science-volume-2-maps-metadata-and-the-pyramid/
- The Pure Report — Better Science Vol. 1 (DirectFlash): https://podcasts.apple.com/gb/podcast/better-science-volume-1-directflash/id1392639991?i=1000569574821