FlashCrew London & Glasgow, May/June 2025: Register Now!
I'd like to invite you to our upcoming FlashCrew Customer User Group in London on May 15th, from midday. Throughout May, we'll be taking our FlashCrew User Group on the road to share ideas and best practices, and to network on all things Pure over drinks and food. Plus, as a thank-you for your continued support and attendance, we will of course have the latest FlashCrew branded gifts for you to take with you! If you can make it, please register at the links below.

London: 10-11 Carlton House Terrace, Thursday 15th May: REGISTER HERE for FLASHCREW LONDON
Glasgow: Radisson Blu Hotel, Thursday 5th June: REGISTER HERE for FLASHCREW GLASGOW

These are user group meetings targeted at a technical audience across Pure's existing customers. Not only will you hear the latest news on the Pure Enterprise Data Cloud, but you will also get to network with other like-minded users and exchange ideas and experiences.

Agenda:
12:00 - 12:50 Arrival, Lunch and Welcome
13:00 - 14:00 Pure Platform: Features and Roadmap, with demo
14:00 - 14:15 Break
14:15 - 14:45 SQL Databases and Pure
14:45 - 15:15 Voice of the Customer
15:15 - 15:30 Break
15:30 - 16:15 Portworx and the Enterprise Data Cloud
16:15 - 16:45 Modern Virtualisation
16:45 - 17:00 Open Floor Q&A, Raffle, Wrap Up
17:00 - 19:00 Drinks and Networking

Why Are We Still Designing IT Like It's 2012?
Let's talk about complexity in IT. Not the fun kind, like building a Raspberry Pi-powered coffee machine or arguing over whether Terraform should be capitalized. I mean the kind of complexity that slows teams down, bloats your stack, and makes security people question their career choices. You know the type: five backup platforms, three monitoring tools, two storage vendors "for resilience," and a bunch of scripts someone wrote in 2019 that nobody's brave enough to touch. We tell ourselves it's "best-of-breed," "cloud-first," or my personal favorite, "strategic." But let's call it what it is: chaos without direction.

Enter Conway's Law (aka the Mirror You've Been Avoiding)

Melvin Conway dropped this gem in 1967: "Organizations design systems that mirror their own communication structures." Still true. Still brutal. If your company has six teams that don't talk to each other except through passive-aggressive Jira tickets, your architecture is going to reflect that: fragmented, redundant, over-engineered, and impossible to secure. Conway's Law isn't just a quirky observation. It's a diagnostic tool. If your architecture feels like a group project gone off the rails, chances are it's because your org works that way too.

Cloud Chaos: Now with More Vendors!

And just when you thought it couldn't get worse, we bring in the cloud. Or clouds. Somewhere between "cloud-first" and "cloud-only," we lost the plot. We started treating hyperscalers like interchangeable gas stations: need compute? Just pull over at the nearest one. We've seen it:

Migrations from AWS to Azure to GCP like it's some weird tech pilgrimage
Applications lifted and shifted with zero refactoring
Hybrid architectures that "just sort of happened"

Look, the cloud's not the problem. I like the cloud, and I believe it's here to stay. But designing 100% for the cloud without actually understanding your why? That's Conway's Law, just with bigger invoices. Even worse?
Bouncing between cloud providers because someone read a Forrester report and got nervous about lock-in. That's not strategy, that's cloud-induced panic.

The Two-Vendor Lie We Keep Telling Ourselves

Ah yes, the old two-vendor strategy. Meant to be safe. Designed to reduce dependency. What it really does? Doubles your complexity and halves your team's sanity.

Two vendors = two playbooks, two consoles, two support teams blaming each other
It's not more resilient, it's just more confusing
Gartner even calls it out: more vendors = more risk, not less

If you think managing multiple tools with overlapping functions is safer than consolidation, congrats: you've just invented the world's most expensive "Oops" button.

Manual ≠ Secure. It Just Feels That Way

Let's talk about the weird rituals we still do in the name of security:

Manually copying data to "safe zones"
Turning off network access like it's a security blanket
Spinning up siloed sandboxes to avoid risk

It's not protection. It's procrastination. Manual controls introduce human error, waste time, and don't scale. If your "strategy" depends on someone remembering to toggle a firewall rule every Thursday, you're not secure, you're just lucky.

And outsourcing that chaos to a vendor doesn't make it better. Handing over management to a provider that's Frankensteined a bunch of loosely integrated tech with baling wire and hope isn't a strategy; it's just renting someone else's mess. If there's no real roadmap, no cohesion, no architectural vision, it's not a partnership. It's a future support ticket waiting to happen.

Hybrid Cloud Needs Purpose, Not Permission

Hybrid isn't a backup plan. It's a design decision. Too many shops end up hybrid by accident, because apps don't refactor, budgets don't stretch, or politics get in the way. The result is an environment that's technically working but operationally exhausting. A good hybrid strategy is opinionated.
You should know:

What runs where (and why)
How data moves
What your north star architecture looks like

If you don't have answers to those questions, you're not doing hybrid; you're doing hope.

So What Do We Do About It?

We simplify. On purpose. Relentlessly.

Design like a startup, not a committee. Keep the stack lean. Less is more when you have tools that actually integrate.
Use Conway's Law in reverse. Want systems that work together? Build teams that do too. Break silos before they become dependencies.
Treat cloud like architecture, not an escape route. Cloud is amazing if you design for it. Otherwise, it's just someone else's complexity in your billing statement.
Stop solving people problems with platform purchases. Most complexity isn't technical; it's cultural. No vendor can fix your org chart.

Final Thought: Complexity Is a Tax. Stop Paying It.

Every extra platform, every vendor "just in case," every manual handoff is a tax, and it's compounding interest on your ability to execute. If you want to move fast, secure your data, and stay sane, you've got to design with purpose. That means fewer tools, better alignment, and architectures that reflect how you want to operate, not how your politics force you to. You want resilience? Start with intention.

But what I'm really curious about is your perspective:

How are you dealing with complexity?
Is hybrid working for you, or just holding you hostage?
Have you successfully simplified your architecture without sacrificing flexibility?

Let's make this a real convo, not another "cloud is the answer" thread.

—Zane Allyn

Don't Wait, Innovate: Long-Life Release 6.9.0 Is Your Gateway to Continuous Innovation
How Pure Releases Work (and Why You Should Care)

Pure Storage doesn't make you choose between stability and innovation:

Feature Releases arrive monthly and are supported for 9 months. They're production-ready and ideal if you like to live on the cutting edge.
Long-Life Releases (LLRs) bundle those feature releases into a thoroughly tested version that is supported for three years. LLR 6.9.0 is essentially all the innovation of those feature releases, rolled into one update.

This dual approach means you can adopt new features as soon as they're ready or wait for the next stable release; either way, you keep moving forward.

Not sure what features you're missing? Not a problem, as we have a tool for that. A coworker reminded me last week that Pure1's AI Copilot can tell you exactly what you've been missing. Here's how easy it is to find out: log into Pure1, click on the AI Copilot tab, and type your question. I tried: "Please provide all features for FlashArray since version 6.4 of Purity OS." Copilot returned a detailed rundown of new capabilities across each release. In just a couple of minutes, I saw everything I'd overlooked, with no digging through release notes or calling support required.

A Taste of What You've Been Missing

Here's a snapshot of the goodies you may have missed across the last few years of releases:

Platform enhancements:
The FlashArray//E platform (6.6.0) extends Pure's simplicity to tier-3 workloads.
Gen 2 chassis support (6.8.0) delivers more performance and density with better efficiency.
150 TB DirectFlash modules (6.8.2) boost capacity without compromising speed.

File services advancements:
FlashArray File (GA in 6.8.2) lets you manage block and file workloads from the same array.
SMB Continuous Availability shares (6.8.6) keep file services online through failures.
Multi-server/domain support (6.8.7) scales file services across larger environments.
Security and protection:
Enhanced SafeMode protection (6.4.3) quadruples local snapshot capacity and adds hardware tokens for instant data locking, which is vital in a ransomware era.
Over-the-wire encryption (6.6.7) secures asynchronous replication.

Pure Fusion: We can't talk about this enough. Think of it as fleet intelligence. Fusion applies your policies across every array and optimizes placement automatically, cutting operational overhead.

Purity OS: It's Not Just Firmware

Every Purity OS update adds value to your existing hardware. Recent improvements include support for new NAND sources, "titanium" efficiency power supplies, and advanced diagnostics. These aren't minor tweaks; they're part of Pure's Evergreen promise that your hardware investment keeps getting better over time.

Why Waiting Doesn't Pay Off

It's tempting to delay updates, but with Pure, waiting often means you're missing out on:

Security upgrades that counter new threats.
Performance gains like NVMe/TCP support and ActiveCluster improvements.
Operational efficiencies such as open metrics and better diagnostics.
Future-proofing features that prepare you for upcoming innovations.

Your Roadmap to Capture These Benefits

Assess your current state: use AI Copilot to see exactly what you'd gain by moving to LLR 6.9.0.
Plan your update: Pure's non-disruptive upgrades let you modernize without downtime.
Explore new features: dive into Fusion, enhanced file services, and expanded security capabilities.
Connect with the community: share experiences with other users to accelerate your learning curve.

The Bottom Line

Pure's Evergreen model means your hardware doesn't just retain value; it continues to gain it. Long-Life Release 6.9.0 is a gateway to innovation. In a world where data is your competitive edge, standing still is equivalent to moving backward. Ready to see what you've been missing?
Log into Pure1, fire up Copilot, and let it show you the difference between where you are and where you could be.

Ask Us Everything ... Evergreen//One edition!
💬 Have more questions for our experts around Evergreen//One after today's live "Ask Us Everything"? Feel free to drop them below and our experts will answer! dpoorman, abarnes, and Tago-: Tag! You're it!

Or, check out some of these self-serve resources:

EG//1 website
Introduction to Evergreen//One (video)
Evergreen//One for AI: Modern Storage Economics for the AI Era (blog)
The Economics of Pure Storage Evergreen Subscriptions (blog)
DATIC Protects Citizen Data from Attack (customer case study)

Ask Us Everything Recap: Making Purity Upgrades Simple
At our recent Ask Us Everything session, we put a spotlight on something every storage admin has an opinion about: software upgrades. Traditionally, storage upgrades have been dreaded: late nights, service windows, and the fear of downtime. But as attendees quickly learned, Pure Storage Purity upgrades are designed to be a very different experience. Our panel of Pure Storage experts included our host Don Poorman, Technical Evangelist, and special guests Sean Kennedy and Rob Quast, Principal Technologists. Here are the questions that sparked the most conversation, and the insights our panel shared.

"Are Purity upgrades really non-disruptive?"

This one came up right away, and for good reason. Many admins have scars from upgrade events at other vendors. Pure experts emphasized that non-disruptive upgrades (NDUs) are the default. With thousands performed in the field, even for mission-critical applications, upgrades run safely in the background. Customers don't need to schedule middle-of-the-night windows just to stay current.

"Do I need to wait for a major release?"

Attendees wanted to know how often they should upgrade, and whether "dot-zero" releases are safe. The advice: don't wait too long. With Pure's long-life releases (like Purity 6.9), you can stay current without chasing every new feature release. And because Purity upgrades are included in your Evergreen subscription, you're not paying extra to get value; you just need to install the latest version. Session attendees found this slide helpful, illustrating the different kinds of Purity releases.

"How do self-service upgrades work?"

Admins were curious about how much they can do themselves versus involving Pure Storage support. The good news: self-service upgrades are straightforward through Pure1, but you're never on your own. Pure Technical Services knows that you're running an upgrade, and if an issue arises you're automatically moved to the front of the queue.
If you want a co-pilot, then of course Pure Storage support can walk you through it live. Either way, the process is fast, repeatable, and built for confidence. Upgrading your Purity version has never been easier, now that Self Service Upgrades lets you modernize on your schedule.

"Why should I upgrade regularly?"

This is where the conversation shifted from fear to excitement. Staying current doesn't just keep systems secure; it unlocks new capabilities like:

Pure Fusion™: a unified, fleet-wide control plane for storage.
FlashArray™ Files: modern file services, delivered from the same trusted platform.
Ongoing performance, security, and automation enhancements that come with every release.

One attendee summed it up perfectly: "Upgrading isn't about fixing problems — it's about getting new toys."

The Takeaway

The biggest lesson from this session? Purity upgrades aren't something to fear; they're something to look forward to. They're included with your Evergreen subscription, they don't disrupt your environment, and they unlock powerful features that make storage easier to manage. So if you've been putting off your next upgrade, take a fresh look. Chances are, Fusion, Files, or another feature you've been waiting for is already there; you just need to turn it on.

👉 Want to keep the conversation going? Join the discussion in the Pure Community and share your own upgrade tips and stories. Be sure to join our next Ask Us Everything session, and catch up with past sessions here!

Ask Us Everything About Evergreen//One
Got questions about Evergreen//One? Get answers.

December 11, 2025 | 09:00am PT • 12:00pm ET

In this month's episode of Ask Us Everything, we're diving into Evergreen//One™, our storage-as-a-service solution that gives you flexibility, protection, and cloud-ready capabilities. Whether you already use Evergreen//One or are exploring it for the first time, you'll see how to get more value from your storage, without added cost or complexity. Then it's your turn. Our experts will answer your questions and show you how Evergreen//One enables you to focus on business outcomes instead of storage management. Reserve your seat!

Ask a question for your chance to win: the first 10 eligible Pure Storage customers to submit a question during the live webinar will receive one (1) Pure Storage Customer Appreciation Kit (approximate retail value: $65). Limit one kit per customer. Offer valid only during the live event and while supplies last. See Terms and Conditions.

Why Object Storage Still Matters
In Part 2, I wrote a line that, at the time, felt almost like a side comment, something I typed without fully appreciating how much it would change the direction of the story:

"BREAKING NEWS: The FlashArray now supports Object??? What in the world? I may need to write an article about that!!"

That reaction wasn't planned, and it definitely wasn't me being clever. It was me looking at the GUI and thinking, "that can't be right… can it?" It didn't line up with how I've been modeling storage architectures in my head for years, which usually means one of two things: either something fundamentally changed, or I've been confidently wrong about part of this for a while.

And if I'm being completely honest, there was also a second reaction happening in parallel, one that I didn't write down at the time because it sounded slightly ridiculous even in my own head: "Wait… do I actually understand why object storage exists in the first place? And more importantly… what exactly was wrong with files?"

That's the part nobody likes to admit out loud. We've all spent years confidently explaining block, file, and object as if we were born with that knowledge, when in reality most of us learned it incrementally, retroactively, and with just enough conviction to sound credible in front of a customer. Object storage, in particular, has always carried this aura of inevitability (of course it's better, of course it scales, of course it's what modern applications need) without always forcing us to question why the previous model stopped being enough.

Because for as long as most of us have been designing infrastructure, object storage has not simply been another protocol layered onto an existing system. It has represented a fundamentally different way of organizing and accessing data, one that required its own architectural approach, its own scaling model, and, more often than not, its own dedicated platform.
The separation between block, file, and object was not arbitrary; it was a reflection of how deeply different those paradigms were in terms of metadata handling, access patterns, and performance expectations.

This is precisely why platforms such as Pure Storage FlashBlade exist in the first place. They were not created as extensions of traditional storage systems but as purpose-built architectures designed to treat unstructured data, and particularly object data, as a first-class citizen. The use of distributed metadata services, sharded across independent nodes, combined with a key-value store storage model, allows such systems to achieve levels of parallelism and throughput that simply cannot be replicated within a controller-based design. In that context, object storage is not something that is "added" to the system; it is the system.

Which is why seeing S3 support appear on FlashArray required a pause. Not excitement. Not skepticism alone. Something closer to intellectual friction.

Reconciling Two Architectural Worlds

The most important step in understanding what FlashArray has introduced is to resist the temptation to treat it as a direct comparison to FlashBlade. These aren't two different ways of solving the same problem. They're two different answers to two different problems, and pretending otherwise is where people get themselves into trouble.

FlashBlade is built for object, not adapted to it. S3 talks directly to a distributed engine that thinks in objects, not files pretending to be objects. Metadata is spread across blades instead of becoming a centralized choke point, and the whole system scales the way modern workloads actually need it to. There's no file system layer to fight with, no directory structure to navigate, no POSIX semantics getting in the way. It just does what you'd expect when you remove all of that: it goes fast, it scales cleanly, and it keeps up with workloads like HPC, AI, and analytics without breaking a sweat.
FlashArray takes a very different path, and in reality, it's not what most people expect. It doesn't try to reinvent itself as an object platform, and it doesn't throw an S3 gateway in front of the array and call it a day. With Purity 6.10.5+, S3 just shows up as another protocol the system understands, right next to block and file. That distinction matters more than it seems. This isn't something duct-taped on the side; it's part of the same control plane, the same data path, the same system you've already been running.

But let's not pretend it turned into FlashBlade overnight. This is still a controller-driven architecture. The primary controller does the heavy lifting (handling requests, authenticating them, coordinating operations) before anything actually hits the storage engine. Which means it behaves differently, especially as workloads scale. So it ends up in this interesting middle ground: not a native object system in the pure sense, but not a hack either. Just a different way of exposing what's already there.

The Translation Layer and Its Consequences

It would be irresponsible to discuss FlashArray S3 without explicitly addressing the implications of this design. Even with its native integration into Purity, S3 operations are still subject to the realities of a controller-bound architecture. Every request must be processed, authenticated, and coordinated before it is executed, introducing a measurable difference in behavior compared to both native block operations and distributed object systems.

The most immediate effect is latency. While FlashArray continues to deliver sub-150 microsecond performance for block workloads, S3 operations typically operate at higher latencies (in the 1 millisecond range) due to the additional processing steps involved. This is not a flaw; it is the natural outcome of introducing a protocol that was designed for scale and flexibility into a system optimized for low-latency transactional workloads.
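To make that gap concrete, here is a back-of-the-envelope sketch. It uses only the two figures quoted above (sub-150 microseconds for block, roughly 1 millisecond for S3) and a deliberately simplified serial-request model; real throughput depends on concurrency, object size, and queuing, so treat this as intuition, not a benchmark.

```python
# Rough request rate a single synchronous client could sustain if each
# operation takes the quoted latency. One outstanding request at a time,
# so the rate is simply 1 / latency (a simplifying assumption).

def serial_ops_per_second(latency_seconds: float) -> float:
    """Operations per second for a client that issues one request at a time."""
    return 1.0 / latency_seconds

block_latency = 150e-6   # sub-150 microseconds for block I/O (from the text)
s3_latency = 1e-3        # ~1 millisecond for S3 operations (from the text)

print(f"block: ~{serial_ops_per_second(block_latency):,.0f} ops/s per serial client")
print(f"S3:    ~{serial_ops_per_second(s3_latency):,.0f} ops/s per serial client")

# The gap narrows with parallelism, which is exactly why small-object,
# high-concurrency workloads are where the architectural difference shows.
```

The takeaway is not that either number is bad, but that a per-request overhead measured in fractions of a millisecond compounds quickly for workloads made of millions of tiny operations.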
Metadata handling further reinforces this distinction. FlashBlade distributes metadata across its architecture, enabling massive parallelism and consistent performance at scale. FlashArray processes metadata through its controller framework, which introduces natural serialization points under high concurrency. As workloads become increasingly metadata-heavy, particularly with small objects, this difference becomes more pronounced.

The system also enforces clearly defined operational limits to maintain predictable performance. FlashArray object store limits, as of Purity 6.10.5+:

Up to 250 S3 buckets per array
A maximum of 1,000,000 objects per bucket

Object storage operates at the array scope and does not integrate with multi-tenancy or "realms," which has implications for service provider models and strict tenant isolation requirements. These constraints are not arbitrary limitations; they are guardrails that ensure the system behaves consistently within its architectural boundaries.

Where the Architecture Becomes Secondary

Having established those boundaries, the conversation naturally shifts from "how it works" to "why it matters." In many enterprise environments, particularly within SLED organizations, the challenge is not achieving exabyte-scale throughput or supporting billions of objects. The challenge is delivering capabilities in a way that is operationally sustainable, economically efficient, and aligned with existing infrastructure.

This is where FlashArray's approach becomes compelling. By exposing object storage within the same platform that already supports block and file workloads, it eliminates the need to introduce a separate system, a separate operational model, and a separate set of dependencies. The same management interface, the same automation framework, and the same data services extend across all protocols. More importantly, object data inherits the full set of Purity capabilities.
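Those operational limits are concrete enough to encode in a pre-deployment sanity check. Here is a hypothetical capacity-planning sketch: the two numbers are the Purity 6.10.5+ limits quoted above, while the helper function and the bucket names are illustrative, not part of any Pure Storage API.

```python
# Hypothetical pre-flight check of a planned bucket layout against the
# FlashArray object store limits quoted above (Purity 6.10.5+).
# The planner function and workload names are illustrative only.

MAX_BUCKETS_PER_ARRAY = 250
MAX_OBJECTS_PER_BUCKET = 1_000_000

def fits_on_array(planned_buckets: dict) -> list:
    """Return a list of violations for a planned {bucket_name: object_count} layout."""
    problems = []
    if len(planned_buckets) > MAX_BUCKETS_PER_ARRAY:
        problems.append(
            f"{len(planned_buckets)} buckets exceeds the "
            f"{MAX_BUCKETS_PER_ARRAY}-bucket per-array limit")
    for name, count in planned_buckets.items():
        if count > MAX_OBJECTS_PER_BUCKET:
            problems.append(
                f"bucket '{name}' plans {count:,} objects "
                f"(limit {MAX_OBJECTS_PER_BUCKET:,})")
    return problems

# Example: a backup repository fits; a high-churn log bucket does not.
plan = {"backup-repo": 800_000, "app-logs": 1_500_000}
for issue in fits_on_array(plan):
    print("WARNING:", issue)
```

A check like this belongs in whatever automation provisions buckets, so a layout that will eventually hit a guardrail fails at planning time rather than in production.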
Global inline deduplication and compression apply to S3 workloads, significantly improving storage efficiency compared to many object-native platforms. SafeMode snapshots extend immutability to object storage, providing a critical layer of protection against ransomware. ActiveCluster, combined with ActiveDR, enables a three-site resilience model that ensures data availability across multiple locations with zero RPO between primary sites. These are not incremental improvements. They represent a shift in how object storage can be consumed within an enterprise.

Practical Use Cases in a Unified Model

When viewed through this lens, the use cases for FlashArray S3 become both clear and grounded in reality.

Development and Staging Environments

Some applications rely on S3 APIs but do not require massive scale. For these, FlashArray provides a consistent and integrated object interface without introducing additional infrastructure. Developers can build and test against a familiar model while remaining within the same operational environment.

Backup and Recovery Workflows

FlashArray S3 enables modern data protection strategies that leverage object storage while benefiting from flash performance, deduplication, and indelible snapshots. This combination improves both recovery times and storage efficiency.

Tier-two repositories and application-integrated storage represent another natural fit. Workloads such as document management systems, logs, and archival data often require object semantics but do not justify the higher cost of a dedicated object platform. Consolidating these workloads onto FlashArray simplifies operations while maintaining reliability and performance.

Where the Boundaries Still Matter

None of this diminishes the importance of selecting the appropriate platform for workloads that demand a different architecture. High-performance AI pipelines, large-scale analytics environments, and use cases requiring massive parallelism remain firmly within the domain of FlashBlade.
The ability to scale performance linearly, distribute metadata across many nodes, and support billions of objects is not optional in these scenarios; it is essential. What has changed is not the relevance of those systems, but the necessity of deploying them for every object storage use case.

A Subtle but Significant Shift

The introduction of S3 on FlashArray does not represent a replacement of one architecture with another. It represents a convergence of capabilities within a unified operational framework. Object storage, in this model, is no longer a destination that requires its own platform. It becomes a capability: one of several ways to access and manage data within the same system. That shift is easy to overlook, but its implications are significant. It allows organizations to design around outcomes rather than protocols, to reduce complexity without sacrificing capability, and to align infrastructure more closely with the needs of modern applications.

Closing Reflection

Looking back at that line in Part 2, it is clear that the reaction was not just about a new feature appearing in the interface. It was about the recognition, however incomplete at the time, that something foundational was beginning to change. Object storage did not suddenly become simpler, nor did it lose the architectural complexity that defines it. What changed is where it lives.

And once that becomes clear, you start asking a slightly uncomfortable but very honest question: if this works, and it works well enough for most of what I actually need, why was I so convinced it had to live somewhere else in the first place? That is usually where the interesting work begins.

Appreciate you reading.

Dmitry Gorbatov
© 2025 Dmitry Gorbatov | #dmitrywashere

Stop Running File Servers on VMs
Dmitry Gorbatov | Apr 06, 2026

One of the superstar Pre-Sales Systems Engineers on my team was in a customer meeting not too long ago, walking through what was, by all accounts, a well-run environment. The team knew what they were doing, the infrastructure was stable, and nothing stood out as particularly problematic. It was one of those conversations where everything feels "fine," which in our world usually means there are inefficiencies hiding in plain sight.

Then he started asking questions about enterprise file services. They were running a couple of Windows Server virtual machines on top of VMware vSphere, serving SMB shares to the rest of the organization. Again, nothing unusual there. This is still the default design in a lot of places, and it works well enough that nobody feels compelled to question it.

But as the meeting went on, a few details started to surface. One of the VMs was consistently running hot during backup windows. Another one hadn't been patched in a while because nobody wanted to risk disrupting access to shared data. The storage policies applied at the VM layer didn't quite line up with what was actually configured on the array. And there was an unspoken understanding that maintaining these systems was just part of the job: something you deal with, not something you optimize.

What made it more interesting was that the same environment had a Pure Storage FlashArray running their critical workloads. It was handling databases, transactional systems, and anything else that required consistent performance and reliable data services. It was protected, replicated, and trusted. File services, however, were living on top of virtual machines, with their own lifecycle (please, please… don't say VMware snapshots), their own dependencies, and their own set of operational overhead. That disconnect is what stuck with me.
So instead of continuing the theoretical discussion about architecture and "best practices," I went back to my lab and decided to try something very simple. I wanted to see what would actually happen if I enabled file services directly on the array and treated it as a first-class file platform instead of assuming that role belonged to something else. There was no redesign exercise, no migration plan, and no phased rollout. I wasn't trying to prove a point on a whiteboard. I just wanted to turn it on and see if the experience matched what we tend to claim in conversations. Nothing broke. Nothing felt forced. And more importantly, nothing about it felt like a compromise.

This post walks through exactly what I did to enable and run file services on a FlashArray //X20R4 running Purity 6.9.2. The goal is not to explain the architecture in abstract terms, but to show how straightforward it is to take something that already exists in your environment and use it in a way that removes unnecessary complexity.

What I realized (and why this matters)

Once everything was up and running, the first realization was that this is not a workaround or a secondary feature designed to fill a gap. FlashArray File is integrated into the platform in a way that makes it behave like a natural extension of what the system already does well. It uses the same controllers, the same global storage pool, and the same data services that are already in place for block workloads. There is no separate management layer, no additional appliance (remember Data Movers and NAS Personas?), and no need to think about it as something different from the rest of the system.

That by itself is useful, but it is not the most important part. What stood out more was the amount of operational overhead that simply disappeared. When file services run on virtual machines, you inherit everything that comes with them.
You are responsible for the guest operating system, including patching cycles, security updates, and the occasional issue that appears at the worst possible time. You are also consuming hypervisor resources and, in many cases, paying for licensing that exists solely to support a function that could be handled elsewhere. On top of that, you end up managing data protection, performance, and capacity in two different places (remember RDMs, or in-guest iSCSI?), which introduces opportunities for inconsistency. By moving file services onto the array, that entire layer is removed. You are not just changing where the workload runs; you are simplifying how it is operated, protected, and maintained over time.

The second realization was that this approach aligns with where things are clearly heading. Pure Storage is already extending these capabilities with ActiveCluster for File, which will bring synchronous replication and continuous availability to unstructured data. I do not have that running in my lab yet, but it is not difficult to see the direction. As those capabilities become more widely available, the remaining reasons to maintain separate file platforms will continue to shrink. That will be a conversation for a future post. Let's tentatively call it Part 3 of the series.

Before you start (the part that actually matters)

Enabling file services on the array is straightforward. The part that tends to create friction is everything that surrounds the configuration, particularly networking and integration with existing services.

The first consideration is the choice of network interfaces. Although the array provides 1GbE management ports, those interfaces are not intended for serving file workloads. Using them for SMB or NFS traffic introduces an artificial bottleneck that will affect performance and, more importantly, perception.
File services should be configured on the 10 or 25GbE data ports, which are designed to handle production traffic and provide the throughput expected from the platform. Here is what my array looked like earlier today: the highlighted ports are ETH10 and ETH11 on both controllers.

Redundancy should be planned, but it does not need to be over-engineered. A simple and reliable starting point is to use at least two ports per controller, ensuring that the configuration remains consistent across both sides. The goal is to achieve predictable failover behavior rather than to build a complex network design that is difficult to troubleshoot.

One concept that is worth understanding early is the File Virtual Network Interface, or File VIF. This is the logical identity of the file service—the IP address that clients use to connect. It is designed to move between controllers as needed, maintaining availability during failover events. Once this concept is clear, the rest of the networking configuration becomes much easier to follow.

My lab was built within budgetary constraints: I don’t have separate Ethernet switches, and I don’t have the time to build a separate DNS server for FA File services. Pure recommends separating file client traffic from management traffic, but that’s a best practice, not a requirement. Since my lab switch is a single flat, untagged network and the environment is really just 192.168.1.0/24, I will just use the most practical approach: put the FA File VIFs on that same 192.168.1.0/24 network with their own IP addresses. So that is what I did, since it is the only real network available. Note that FlashArray expects unique layer-3 subnets and does not support overlapping networks.

DNS

In my specific configuration, I don’t need a new DNS server. My existing management DNS servers can resolve the AD/DC hostnames and the FA File names/computer object.
FA File can use the same DNS as management with no extra file-DNS configuration. By default, DNS lookups will go out the management interfaces, so my DNS server just needs to be reachable from the management network. And it is.

Let’s turn the lights on, shall we? After assigning the IP addresses and enabling the ports, the lights came on.

Important design note

I will use one client-facing VIF IP for the file service, for example:
File VIF IP: 192.168.1.135
Netmask: 255.255.255.0
Gateway: 192.168.1.2 (the default gateway)

Do not try to use 192.168.1.131-134 as four separate FA File IPs unless you intentionally want multiple VIFs. The ct*.eth* ports are transport underlay, not the SMB/NFS endpoint IPs.

Configuring a File Server and File VIF

1. Open the File Services server page

Go to Storage → Servers.

2. Open the file server

Open the default server (_array_server), or create a new file server if you want a dedicated namespace. Stay on that server’s details page.

3. Create the File VIF

Use physical bonding first; it’s the simplest.
In the Virtual Interfaces section, click + Create VIF.
Choose Physical Bonding.
Select the underlying port pairs:
Pair 1: ct0.eth10 and ct1.eth10
Pair 2: ct0.eth11 and ct1.eth11
Name the VIF something simple, e.g.: filevip1
Enter network settings:
IP Address: 192.168.1.135
Netmask: 255.255.255.0
Gateway: 192.168.1.2
Leave VLAN blank since there are no VLANs.
Save and enable the VIF. That creates the client-facing IP for SMB/NFS.

4. Configure DNS

Integration with DNS and Active Directory is another area where a bit of preparation goes a long way. File services rely on proper name resolution and domain integration, and it is important to recognize that file-related DNS settings are separate from the array’s management DNS configuration. The system effectively becomes a participant in the domain as a file server, which means that DNS records, domain join operations, and permissions should be planned accordingly rather than improvised during setup.
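Because these prerequisites are where most of the friction lives, it is worth sanity-checking them before touching the array. Here is a minimal sketch in plain Python of the two checks that matter: the hostname and VIF address are my lab values, and the 5-minute limit is Active Directory’s default Kerberos clock-skew tolerance.

```python
import datetime

# Active Directory's default Kerberos clock-skew tolerance is 5 minutes.
MAX_KERBEROS_SKEW = datetime.timedelta(minutes=5)

def dns_matches(expected_vif_ip: str, resolved_ip: str) -> bool:
    # The file service name must resolve to the File VIF,
    # not to a management or underlay port address.
    return resolved_ip == expected_vif_ip

def skew_ok(array_time: datetime.datetime, dc_time: datetime.datetime) -> bool:
    # If the array and the domain controller drift apart by more than
    # the Kerberos tolerance, the domain join fails with errors that
    # look like anything but a time problem.
    return abs(array_time - dc_time) <= MAX_KERBEROS_SKEW

# Lab values: fa-file01 should resolve to the File VIF 192.168.1.135.
print(dns_matches("192.168.1.135", "192.168.1.135"))  # True

# A 10-minute skew fails the check.
noon = datetime.datetime(2025, 6, 1, 12, 0, 0)
print(skew_ok(noon, noon + datetime.timedelta(minutes=10)))  # False
```

Nothing here talks to the array, of course; it just encodes the two things (name resolution and time sync) you want to confirm against your real DNS and NTP servers before attempting the domain join.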
Since my DNS is 192.168.1.2 and I want to reuse management DNS:
Go to the server’s DNS Settings. My management DNS is already configured and points to 192.168.1.2.
If you want to explicitly add file DNS, click + in DNS and enter:
Name: file-dns
Domain suffix: your AD/domain suffix
DNS server: 192.168.1.2
Service: file
Source interface can remain default unless you specifically need file VIF sourcing.

5. Create required DNS A records

On my DNS server, 192.168.1.2, I created an A record for the file service name pointing to the File VIF IP:
Name: fa-file01
IP: 192.168.1.135
If you are joining AD for SMB/Kerberos:
Make sure DNS also has A records for all relevant domain controllers.
Create the A record that matches the AD computer object / FA File service name.

6. Join Active Directory or configure LDAP

If you are using SMB, use Active Directory. Go to Storage → Servers → _array_server, then look for the Remote Directory Service panel:
Click Edit Configuration.
Select Active Directory.
Enter: Name, Domain DNS Name, Computer Name, Use Existing Account (if applicable), AD User, Password, and TLS Mode.
Save / Join.

This part took me two hours. I was getting some crazy error messages that I’m simply embarrassed to share here. It was not the DNS. It was an NTP server misconfiguration that was causing Kerberos authentication to fail: there was a 10-minute time skew between the FlashArray and the domain controller.

7. Create a File System

The file system is the top-level container for your unstructured data.
GUI Method: Navigate to Storage > File Systems and click the plus sign (+). Enter a name and click Create.
CLI Method: Use the following command: purefs create <file-system-name>

8. Create a Managed Directory

Managed directories allow you to apply specific policies (like quotas or snapshots) to subfolders within a file system.
GUI Method: Go to Storage > File Systems. Click on the name of the file system you just created. Select the Directories tab and click the plus sign (+).
Enter the directory name and the internal path (e.g., /users).
CLI Method: Use the following command: puredir create filesystem1:users --path /users

9. Create an Export

The export makes the managed directory accessible to clients over the network.
GUI Method: Navigate to Storage > Policies > Export Policies. Select an existing policy (e.g., a standard SMB or NFS policy) or create a new one. Within the policy view, click the plus sign (+) to add an export. Select your Managed Directory, choose the appropriate Server (use _array_server for standard configurations), and provide an Export Name (this is the name clients will use to mount the share).
CLI Method: Use the following command: puredir export create --dir <file-system-name>:<directory-name> --policy <policy-name> --server <server-name> --export-name <client-facing-name>

A quick validation step

At this point, it is worth validating access from a client system. Map the SMB share and perform a simple set of operations—create files, read data, and verify permissions. This is less about testing performance and more about confirming that networking, authentication, and access controls are behaving as expected. In most cases, if the earlier steps around DNS and Active Directory were done correctly, this validation step is uneventful, which is exactly what you want.

And now let the data migration begin. I am actually doing it from my Mac. And it just works!!!

What becomes apparent after completing these steps is how little effort is required to stand up a fully functional file platform on infrastructure that is already in place. Unless, of course, your NTP server crashed. The system behaves predictably, integrates cleanly with existing services, and avoids many of the operational burdens associated with VM-based file servers. And that is where things start to get interesting.
Because everything described so far is still being done manually—selecting where things live, defining configurations, and applying policies one step at a time. It works, and it works well, but it also mirrors the way storage has traditionally been managed. In the next post, I will show what happens when you stop doing these steps manually and let Pure Fusion handle placement, policy, and provisioning instead.

Appreciate you reading.

© 2025 Dmitry Gorbatov | #dmitrywashere

We are just one week away: PUG#3
On January 28th, the Cincinnati Pure User Group will be convening at Ace's Pickleball to discuss Enterprise file. We will be joined by Matt Niederhelman, Unstructured Data Field Solutions Architect, who will help guide the conversation and answer questions about what he is seeing among other customers. Click the link below to register and come join us. Help us guide the conversation with your ideas for future topics.

https://info.purestorage.com/2025-Q4AMS-COMREPLTFSCincinnatiPUG-LP_01---Registration-Page.html