Why Object Storage Still Matters
In Part 2, I wrote a line that, at the time, felt almost like a side comment — something I typed without fully appreciating how much it would change the direction of the story: “BREAKING NEWS: The FlashArray now supports Object??? What in the world? I may need to write an article about that!!”

That reaction wasn’t planned, and it definitely wasn’t me being clever. It was me looking at the GUI and thinking, “that can’t be right… can it?” It didn’t line up with how I’ve been modeling storage architectures in my head for years, which usually means one of two things: either something fundamentally changed… or I’ve been confidently wrong about part of this for a while.

And if I’m being completely honest, there was also a second reaction happening in parallel — one that I didn’t write down at the time because it sounded slightly ridiculous even in my own head: “Wait… do I actually understand why object storage exists in the first place? And more importantly… what exactly was wrong with files?”

That’s the part nobody likes to admit out loud. We’ve all spent years confidently explaining block, file, and object as if we were born with that knowledge, when in reality most of us learned it incrementally, retroactively, and with just enough conviction to sound credible in front of a customer. Object storage, in particular, has always carried this aura of inevitability — like of course it’s better, of course it scales, of course it’s what modern applications need — without always forcing us to question why the previous model stopped being enough.

Because for as long as most of us have been designing infrastructure, object storage has not simply been another protocol layered onto an existing system. It has represented a fundamentally different way of organizing and accessing data, one that required its own architectural approach, its own scaling model, and, more often than not, its own dedicated platform. The separation between block, file, and object was not arbitrary; it was a reflection of how deeply different those paradigms were in terms of metadata handling, access patterns, and performance expectations.

This is precisely why platforms such as Everpure FlashBlade exist in the first place. They were not created as extensions of traditional storage systems but as purpose-built architectures designed to treat unstructured data — and particularly object data — as a first-class citizen. The use of distributed metadata services, sharded across independent nodes, combined with a key-value storage model, allows such systems to achieve levels of parallelism and throughput that simply cannot be replicated within a controller-based design. In that context, object storage is not something that is “added” to the system; it is the system.

Which is why seeing S3 support appear on FlashArray required a pause. Not excitement. Not skepticism alone. Something closer to intellectual friction.

Reconciling Two Architectural Worlds

The most important step in understanding what FlashArray has introduced is to resist the temptation to treat it as a direct comparison to FlashBlade. These aren’t two different ways of solving the same problem. They’re two different answers to two different problems — and pretending otherwise is where people get themselves into trouble.

FlashBlade is built for object, not adapted to it. S3 talks directly to a distributed engine that thinks in objects, not files pretending to be objects.
Metadata is spread across blades instead of becoming a centralized choke point, and the whole system scales the way modern workloads actually need it to. There’s no file system layer to fight with, no directory structure to navigate, no POSIX semantics getting in the way. It just does what you’d expect when you remove all of that: it goes fast, it scales cleanly, and it keeps up with workloads like HPC, AI, and analytics without breaking a sweat.

FlashArray takes a very different path, and in reality, it’s not what most people expect. It doesn’t try to reinvent itself as an object platform, and it doesn’t throw an S3 gateway in front of the array and call it a day. With Purity 6.10.5+, S3 just shows up as another protocol the system understands, right next to block and file. That distinction matters more than it seems. This isn’t something duct-taped on the side — it’s part of the same control plane, the same data path, the same system you’ve already been running.

But let’s not pretend it turned into FlashBlade overnight. This is still a controller-driven architecture. The primary controller does the heavy lifting — handling requests, authenticating them, coordinating operations — before anything actually hits the storage engine. Which means it behaves differently, especially as workloads scale. So it ends up in this interesting middle ground. Not a native object system in the pure sense, but not a hack either. Just a different way of exposing what’s already there.

The Translation Layer and Its Consequences

It would be irresponsible to discuss FlashArray S3 without explicitly addressing the implications of this design. Even with its native integration into Purity, S3 operations are still subject to the realities of a controller-bound architecture. Every request must be processed, authenticated, and coordinated before it is executed, introducing a measurable difference in behavior compared to both native block operations and distributed object systems.

The most immediate effect is latency. While FlashArray continues to deliver sub-150-microsecond performance for block workloads, S3 operations typically operate at higher latencies (in the 1-millisecond range) due to the additional processing steps involved. This is not a flaw; it is the natural outcome of introducing a protocol that was designed for scale and flexibility into a system optimized for low-latency transactional workloads.

Metadata handling further reinforces this distinction. FlashBlade distributes metadata across its architecture, enabling massive parallelism and consistent performance at scale. FlashArray processes metadata through its controller framework, which introduces natural serialization points under high concurrency. As workloads become increasingly metadata-heavy — particularly with small objects — this difference becomes more pronounced.

The system also enforces clearly defined operational limits to maintain predictable performance. As of Purity 6.10.5+, FlashArray supports up to 250 S3 buckets per array and a maximum of 1,000,000 objects per bucket. Object storage operates at the array scope and does not integrate with multi-tenancy or “realms”, which has implications for service provider models and strict tenant isolation requirements. These constraints are not arbitrary limitations; they are guardrails that ensure the system behaves consistently within its architectural boundaries.
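To make the consumption model concrete, here is a minimal sketch of talking to a FlashArray bucket with the standard AWS CLI. The endpoint URL, credentials, and bucket name are placeholders I made up for illustration, not values from this article; the point is simply that ordinary S3-compatible tooling works once it is pointed at the array.

```
# Hypothetical endpoint and credentials -- substitute the values from your own array.
export AWS_ACCESS_KEY_ID=PSFBEXAMPLEKEY
export AWS_SECRET_ACCESS_KEY=example-secret-from-purity
ENDPOINT=https://fa-s3.lab.local

# Create a bucket, upload an object, and list it back.
aws s3api create-bucket --bucket demo-bucket --endpoint-url "$ENDPOINT"
aws s3 cp ./backup.tar s3://demo-bucket/backups/backup.tar --endpoint-url "$ENDPOINT"
aws s3api list-objects-v2 --bucket demo-bucket --endpoint-url "$ENDPOINT"
```

Nothing in the client changes; only the endpoint does, which is exactly the point of the unified model.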
Where the Architecture Becomes Secondary

Having established those boundaries, the conversation naturally shifts from “how it works” to “why it matters”. In many enterprise environments, particularly within SLED organizations, the challenge is not achieving exabyte-scale throughput or supporting billions of objects. The challenge is delivering capabilities in a way that is operationally sustainable, economically efficient, and aligned with existing infrastructure.

This is where FlashArray’s approach becomes compelling. By exposing object storage within the same platform that already supports block and file workloads, it eliminates the need to introduce a separate system, a separate operational model, and a separate set of dependencies. The same management interface, the same automation framework, and the same data services extend across all protocols.

More importantly, object data inherits the full set of Purity capabilities. Global inline deduplication and compression apply to S3 workloads, significantly improving storage efficiency compared to many object-native platforms. SafeMode snapshots extend immutability to object storage, providing a critical layer of protection against ransomware. ActiveCluster, combined with ActiveDR, enables a three-site resilience model that ensures data availability across multiple locations with zero RPO between primary sites. These are not incremental improvements. They represent a shift in how object storage can be consumed within an enterprise.

Practical Use Cases in a Unified Model

When viewed through this lens, the use cases for FlashArray S3 become both clear and grounded in reality.

Development and Staging Environments

Some applications rely on S3 APIs but do not require massive scale. For these, FlashArray provides a consistent and integrated object interface without introducing additional infrastructure. Developers can build and test against a familiar model while remaining within the same operational environment.

Backup and Recovery Workflows

FlashArray S3 enables modern data protection strategies that leverage object storage while benefiting from flash performance, deduplication, and indelible snapshots. This combination improves both recovery times and storage efficiency.

Tier-two repositories and application-integrated storage represent another natural fit. Workloads such as document management systems, logs, and archival data often require object semantics but do not justify the higher cost of a dedicated object platform. Consolidating these workloads onto FlashArray simplifies operations while maintaining reliability and performance.

Where the Boundaries Still Matter

None of this diminishes the importance of selecting the appropriate platform for workloads that demand a different architecture. High-performance AI pipelines, large-scale analytics environments, and use cases requiring massive parallelism remain firmly within the domain of FlashBlade. The ability to scale performance linearly, distribute metadata across many nodes, and support billions of objects is not optional in these scenarios — it is essential. What has changed is not the relevance of those systems, but the necessity of deploying them for every object storage use case.

A Subtle but Significant Shift

The introduction of S3 on FlashArray does not represent a replacement of one architecture with another. It represents a convergence of capabilities within a unified operational framework. Object storage, in this model, is no longer a destination that requires its own platform.
It becomes a capability — one of several ways to access and manage data within the same system. That shift is easy to overlook, but its implications are significant. It allows organizations to design around outcomes rather than protocols, to reduce complexity without sacrificing capability, and to align infrastructure more closely with the needs of modern applications.

Closing Reflection

Looking back at that line in Part 2, it is clear that the reaction was not just about a new feature appearing in the interface. It was about the recognition — however incomplete at the time — that something foundational was beginning to change. Object storage did not suddenly become simpler, nor did it lose the architectural complexity that defines it. What changed is where it lives.

And once that becomes clear, you start asking a slightly uncomfortable but very honest question: If this works… and it works well enough for most of what I actually need… why was I so convinced it had to live somewhere else in the first place?

That is usually where the interesting work begins.

Appreciate you reading.

Dmitry Gorbatov
© 2025 Dmitry Gorbatov | #dmitrywashere

Fusion for the Win: You No Longer Have to Decide Where the Data Lives
Dmitry Gorbatov
Apr 10, 2026

In the first post, I walked through enabling file services on a FlashArray. There was nothing particularly complicated about it. The process was clean, predictable, and by the end of it I had a fully functional file platform running on the same system that was already supporting the rest of the environment. It behaved exactly the way you would expect it to behave. And that is precisely what started to bother me.

Because if you step back and look at what we actually did, the workflow has not really changed in years. I still made a series of decisions in a very specific order. I chose where the workload should live, I created the file system, I attached protection, and I made sure everything was named and organized in a way that made sense at that moment. It was structured. It was controlled. It was also entirely dependent on me.

That model works well enough when the environment is small or when the same person is making the same decisions repeatedly. But as soon as you introduce scale, or simply more people, those decisions start to drift. Not in a dramatic way, but in small inconsistencies that accumulate over time. A slightly different naming convention here, a missed policy there, a workload placed somewhere because it “felt right.” Nothing breaks. It just becomes harder to operate.

When the model stops making sense

What stood out to me after going through the manual process is that we are still treating storage as something that needs to be individually managed, even though the platform itself has already moved beyond that. We have systems that can deliver consistent performance, global data services, and non-disruptive operations, yet we still rely on human judgment to decide where things go and how they should be configured. That disconnect is where Everpure Fusion begins to make sense. Not as an additional feature, but as a way to remove an entire class of decisions that we have simply accepted as part of the job.

From managing infrastructure to defining intent

The idea behind the Enterprise Data Cloud is not particularly complicated, but it does require a shift in perspective. Instead of treating each array as a separate system with its own boundaries, the environment becomes a unified pool of resources. Data is no longer something that you place on a specific array. It is something that exists within a global pool, governed by policies that define how it should behave.

Once you start thinking this way, the questions change. You are no longer asking where a workload should go. You are asking what that workload needs to look like. Performance expectations, protection requirements, naming, and lifecycle behavior become the inputs, and the system’s automation takes responsibility for everything else. That is the role of Everpure Fusion.

What actually changes in practice

The easiest way to understand Fusion is to look at what it removes. In the manual model, every step is explicit. You build storage object by object, and then you attach policies to those objects. You rely on memory, experience, and sometimes documentation to make sure everything is done correctly. With Fusion, that entire process becomes declarative.

Instead of building storage step by step, you define a preset. A preset is a reusable definition of what “correct” looks like for a given workload. It captures performance expectations, protection policies, naming conventions, and any constraints that should apply. Once that definition exists, it becomes the standard.
When you create a workload from that preset, Fusion evaluates the environment and places it on the array that best satisfies those requirements. It creates the necessary objects, applies the policies, and ensures that everything is consistent with the definition. The important shift is not that tasks are automated. It is that decisions are no longer made ad hoc.

Trying it in the lab

After building file services manually in the previous post, I wanted to see what this would look like using the same environment, but driven through Fusion. I started by defining a fleet, grouping the array into a logical boundary where resources and policies could be managed collectively. Once the array becomes part of a fleet, you stop thinking of it as an individual system and start treating it as part of a shared pool.

From there, identity becomes the next requirement. Fusion relies on centralized authentication, typically through secure LDAP backed by Active Directory. This is what governs access to presets and workloads, and it ensures that everything aligns with existing organizational controls. Up to this point, everything felt exactly like I expected. Then I moved to the part I was actually interested in.

Where things didn’t quite line up

The goal was to take the file services I had already built and express them as a preset. I wanted a single definition that would describe the file system, its structure, its policies, and its behavior, and then use that definition to create workloads without going through the manual steps again. Conceptually, that is exactly what Fusion is supposed to do. In practice, I ran into a limit that I had not fully appreciated at the start.

I was running Purity 6.9.2, which, to be fair, is where most production environments should be. It is a Long-Life Release, stable, predictable, and already capable of delivering Fusion for fleet management, intelligent placement, and policy-driven storage classes. You can create Presets and Workloads for block workloads. What it does not include is full support for File Presets on FlashArray. That capability, where a file system, its directories, and its access policies are all defined and deployed as a single unit, arrives in the 6.10.X Feature Release line. Which means that the exact outcome I was trying to demonstrate was sitting just one version ahead of me.

This is where I had to laugh at myself

There is always a moment in a lab where you realize that the limitation is not the platform. It is you. In this case, it was me getting ahead of the version I was actually running. My intentions were “ever” so “pure” (IYKYK). The execution was slightly behind the feature set.

So I upgraded

One of the advantages of working with this platform is that upgrading does not carry the same weight it used to. The system is designed for non-disruptive operations, and moving between versions does not require downtime or migration. The upgrade to 6.10.5 was uneventful in the best possible way. Controllers were updated in sequence, workloads continued to run, and the system transitioned to a new set of capabilities without introducing risk. There is something very satisfying about performing an upgrade not because something is broken, but because you want access to what comes next.

BREAKING NEWS: The FlashArray now supports Object??? What in the world? I may need to write an article about that!!

When it finally clicks

Once on 6.10.5, the model finally aligns with the intent.
Once I clicked on Create Your First Preset, it presented the available options. I defined a preset that described the file workload I had previously built manually. It included the expected behavior, protection policies, and naming conventions. Instead of creating individual components, I was defining the service as a whole.

Now this was really neat: when you select Storage Class, it knows which arrays are available in your environment. In my case, I only have FA //X. At this point a new field opens and allows you to select the Storage Resources. Once I hit Publish, the preset was created.

Think of this entire process like this:

Define your Recipe (Preset)
Order from the Menu (Workload)

Let’s create a workload from that preset. Once I clicked on + to add a new Workload, the wizard opened and asked me to give the Workload a name. Since the Fusion fleet has both of my lab arrays, I had the option to select an array for the workload placement. Out of curiosity I clicked “Get Recommendations” and reviewed the result. Once I hit Deploy, within seconds the workflow executed and I had my file system created. How awesome is this? Come on, give me a cheer!

Think about the magnitude of what just happened. I provided minimal input, and Fusion handled the rest. It selected the appropriate array based on capacity and performance, created the file system, applied the policies, and ensured that everything matched the definition. There was no second pass. There were no additional steps. The outcome matched the intent. By moving to this model, I just shifted from being a “storage admin” to a “data architect.” I defined the outcomes and it happened “automagically”.

Why this matters more than efficiency

It would be easy to describe this as a way to reduce manual effort, but that misses the point. The real value is consistency. When every workload is created from a defined preset, variability disappears. Policies are enforced by default. Naming is consistent. Placement is based on a complete view of the environment rather than individual judgment. Over time, that consistency reduces operational friction and lowers risk in ways that are difficult to measure but easy to recognize. Environments behave predictably, scaling becomes simpler, and the likelihood of human error decreases.

Where this leads

In the first post, I showed that file services can run natively on the array without additional infrastructure. In this post, the focus shifted to removing the manual decisions involved in building and managing those services. The next step is where things move beyond automation. As capabilities like ActiveCluster for File continue to evolve, the conversation shifts toward mobility and continuous availability. At that point, it is no longer just about simplifying operations, but about removing the constraints that tie workloads to a specific system or location. That is a conversation for Part 4.

Appreciate you reading.
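P.S. For readers who would rather script this than click through the wizard, Fusion's preset-and-workload model is also exposed through the Purity CLI and REST API. I have not validated the exact syntax below against 6.10.5, so treat the command names and flags as assumptions and confirm them with purehelp (or the REST documentation) on your own array before relying on them.

```
# ASSUMPTION: command names and flags are illustrative, not verified on 6.10.5.
# Define the recipe (preset), then order from the menu (workload).
purepreset create --workload-type file file-standard     # hypothetical file preset
pureworkload create --preset file-standard fileserver01  # workload placed by Fusion
pureworkload list                                        # confirm placement and policies
```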
© 2025 Dmitry Gorbatov | #dmitrywashere

Stop Running File Servers on VMs

Dmitry Gorbatov
Apr 06, 2026

One of the superstar Pre-Sales Systems Engineers on my team was in a customer meeting not too long ago, walking through what was, by all accounts, a well-run environment. The team knew what they were doing, the infrastructure was stable, and nothing stood out as particularly problematic. It was one of those conversations where everything feels “fine,” which in our world usually means there are inefficiencies hiding in plain sight.

Then he started asking questions about enterprise file services. They were running a couple of Windows Server virtual machines on top of VMware vSphere, serving SMB shares to the rest of the organization. Again, nothing unusual there. This is still the default design in a lot of places, and it works well enough that nobody feels compelled to question it.

But as the meeting went on, a few details started to surface. One of the VMs was consistently running hot during backup windows. Another one hadn’t been patched in a while because nobody wanted to risk disrupting access to shared data. The storage policies applied at the VM layer didn’t quite line up with what was actually configured on the array. And there was an unspoken understanding that maintaining these systems was just part of the job — something you deal with, not something you optimize.

What made it more interesting was that the same environment had an Everpure FlashArray running their critical workloads. It was handling databases, transactional systems, and anything else that required consistent performance and reliable data services. It was protected, replicated, and trusted. File services, however, were living on top of virtual machines, with their own lifecycle (please, please… don’t say VMware snapshots), their own dependencies, and their own set of operational overhead. That disconnect is what stuck with me.

So instead of continuing the theoretical discussion about architecture and “best practices,” I went back to my lab and decided to try something very simple. I wanted to see what would actually happen if I enabled file services directly on the array and treated it as a first-class file platform instead of assuming that role belonged to something else. There was no redesign exercise, no migration plan, and no phased rollout. I wasn’t trying to prove a point on a whiteboard. I just wanted to turn it on and see if the experience matched what we tend to claim in conversations. Nothing broke. Nothing felt forced. And more importantly, nothing about it felt like a compromise.

This post walks through exactly what I did to enable and run file services on a FlashArray //X20R4 running Purity 6.9.2. The goal is not to explain the architecture in abstract terms, but to show how straightforward it is to take something that already exists in your environment and use it in a way that removes unnecessary complexity.

What I realized (and why this matters)

Once everything was up and running, the first realization was that this is not a workaround or a secondary feature designed to fill a gap. FlashArray File is integrated into the platform in a way that makes it behave like a natural extension of what the system already does well. It uses the same controllers, the same global storage pool, and the same data services that are already in place for block workloads. There is no separate management layer, no additional appliance (remember Data Movers and NAS Personas?), and no need to think about it as something different from the rest of the system.
That by itself is useful, but it is not the most important part. What stood out more was the amount of operational overhead that simply disappeared. When file services run on virtual machines, you inherit everything that comes with them. You are responsible for the guest operating system, including patching cycles, security updates, and the occasional issue that appears at the worst possible time. You are also consuming hypervisor resources and, in many cases, paying for licensing that exists solely to support a function that could be handled elsewhere. On top of that, you end up managing data protection, performance, and capacity in two different places (remember RDMs, or in-guest iSCSI?), which introduces opportunities for inconsistency. By moving file services onto the array, that entire layer is removed. You are not just changing where the workload runs; you are simplifying how it is operated, protected, and maintained over time.

The second realization was that this approach aligns with where things are clearly heading. Everpure is already extending these capabilities with ActiveCluster for File, which will bring synchronous replication and continuous availability to unstructured data. I do not have that running in my lab yet, but it is not difficult to see the direction. As those capabilities become more widely available, the remaining reasons to maintain separate file platforms will continue to shrink. That will be a conversation for a future post. Let’s tentatively call it Part 3 of the series.

Before you start (the part that actually matters)

Enabling file services on the array is straightforward. The part that tends to create friction is everything that surrounds the configuration, particularly networking and integration with existing services.

The first consideration is the choice of network interfaces. Although the array provides 1GbE management ports, those interfaces are not intended for serving file workloads. Using them for SMB or NFS traffic introduces an artificial bottleneck that will affect performance and, more importantly, perception. File services should be configured on the 10 or 25GbE data ports, which are designed to handle production traffic and provide the throughput expected from the platform. Here is what my array looked like earlier today: the relevant ports are ETH10 and ETH11 on both controllers.

Redundancy should be planned, but it does not need to be over-engineered. A simple and reliable starting point is to use at least two ports per controller, ensuring that the configuration remains consistent across both sides. The goal is to achieve predictable failover behavior rather than to build a complex network design that is difficult to troubleshoot.

One concept that is worth understanding early is the File Virtual Network Interface, or File VIF. This is the logical identity of the file service — the IP address that clients use to connect. It is designed to move between controllers as needed, maintaining availability during failover events. Once this concept is clear, the rest of the networking configuration becomes much easier to follow.

My lab was built within budgetary constraints - that means I don’t have separate ethernet switches and I don’t have the time to build a separate DNS server for FA File services. Everpure recommends separating file client traffic from management traffic, but that’s a best practice, not a requirement.
Since my lab switch is a single flat, untagged network and the environment is really just 192.168.1.0/24, I will just use the most practical approach: put the FA File VIFs on that same 192.168.1.0/24 network with their own IP addresses. Here is what I did: I kept the file VIFs on 192.168.1.0/24, since that is the only real network available. FlashArray expects unique layer-3 subnets and does not support overlapping networks.

DNS

In my specific configuration, I don’t need a new DNS server. My existing management DNS servers can resolve the AD/DC hostnames and the FA File names/computer object. FA File can use the same DNS as management with no extra file-DNS configuration. By default, DNS lookups will go out the management interfaces, so my DNS server just needs to be reachable from the management network. And it is.

Let’s turn the lights on, shall we? After assigning the IP addresses and enabling the ports, the lights came on.

Important design note

I will use one client-facing VIF IP for the file service, for example:

File VIF IP: 192.168.1.135
Netmask: 255.255.255.0
Gateway: 192.168.1.1 (default gateway)

Do not try to use 192.168.1.131-134 as four separate FA File IPs unless you intentionally want multiple VIFs. The ct*.eth* ports are transport underlay, not the SMB/NFS endpoint IPs.

Configuring a File Server and File VIF

1. Open the File Services server page. Go to Storage → Servers.

2. Open the default server (_array_server) or create a new file server if you want a dedicated namespace. Stay on that server’s details page.

3. Create the File VIF. Use physical bonding first; it’s the simplest. In the Virtual Interfaces section, click + Create VIF and choose Physical Bonding. Select the underlying port pairs:

Pair 1: ct0.eth10 and ct1.eth10
Pair 2: ct0.eth11 and ct1.eth11

Name the VIF something simple, e.g. filevip1, and enter the network settings:

IP Address: 192.168.1.135
Netmask: 255.255.255.0
Gateway: 192.168.1.1

Leave VLAN blank since there are no VLANs. Save and enable the VIF. That creates the client-facing IP for SMB/NFS.

4. Configure DNS. Integration with DNS and Active Directory is another area where a bit of preparation goes a long way. File services rely on proper name resolution and domain integration, and it is important to recognize that file-related DNS settings are separate from the array’s management DNS configuration. The system effectively becomes a participant in the domain as a file server, which means that DNS records, domain join operations, and permissions should be planned accordingly rather than improvised during setup. Since my DNS is 192.168.1.2 and I want to reuse management DNS, I went to the server’s DNS Settings; my management DNS is already configured and points to 192.168.1.2. If you want to explicitly add file DNS, click + in DNS and enter: Name: file-dns; Domain suffix: your AD/domain suffix; DNS server: 192.168.1.2; Service: file. The source interface can remain default unless you specifically need file VIF sourcing.

5. Create the required DNS A records. On my DNS server 192.168.1.2, I created an A record for the file service name pointing to the File VIF IP: Name: fa-file01, IP: 192.168.1.135. If you are joining AD for SMB/Kerberos, make sure DNS also has A records for all relevant domain controllers, and create the A record that matches the AD computer object / FA File service name.
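To make step 5 concrete: on a Windows DNS server, the A record and its verification come down to two commands. This is a sketch assuming a Windows-hosted zone; the zone name lab.local is a stand-in for whatever your AD domain suffix actually is.

```
:: Run from an elevated prompt on a host with DNS admin rights; lab.local is hypothetical.
dnscmd 192.168.1.2 /RecordAdd lab.local fa-file01 A 192.168.1.135

:: Verify resolution against the same DNS server.
nslookup fa-file01.lab.local 192.168.1.2
```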
6. Join Active Directory or configure LDAP. If using SMB, use Active Directory. Go to Storage → Servers → _array_server, then look for the Remote Directory Service panel. Click Edit Configuration, select Active Directory, and enter:

Name
Domain DNS Name
Computer Name
Use Existing Account (if applicable)
AD User
Password
TLS Mode

Then Save / Join. This part took me 2 hours. I was getting some crazy error messages that I’m simply embarrassed to share here. It was not the DNS. It was an NTP server misconfiguration that was causing Kerberos to not authenticate properly. There was a 10-minute time skew between the FlashArray and the domain controller.

7. Create a File System. The file system is the top-level container for your unstructured data.
GUI method: navigate to Storage > File Systems and click the plus sign (+). Enter a name and click Create.
CLI method: use the following command: purefs create <file-system-name>

8. Create a Managed Directory. Managed directories allow you to apply specific policies (like quotas or snapshots) to subfolders within a file system.
GUI method: go to Storage > File Systems, click on the name of the file system you just created, select the Directories tab and click the plus sign (+). Enter the directory name and the internal path (e.g., /users).
CLI method: use the following command: puredir create filesystem1:users --path /users

9. Create an Export. The export makes the managed directory accessible to clients over the network.
GUI method: navigate to Storage > Policies > Export Policies. Select an existing policy (e.g., a standard SMB or NFS policy) or create a new one. Within the policy view, click the plus sign (+) to add an export. Select your Managed Directory, choose the appropriate Server (use _array_server for standard configurations), and provide an Export Name (this is the name clients will use to mount the share).
CLI method: use the following command: puredir export create --dir <file-system-name>:<directory-name> --policy <policy-name> --server <server-name> --export-name <client-facing-name>

A quick validation step

At this point, it is worth validating access from a client system. Map the SMB share and perform a simple set of operations — create files, read data, and verify permissions (a concrete client-side sketch follows at the end of this post). This is less about testing performance and more about confirming that networking, authentication, and access controls are behaving as expected. In most cases, if the earlier steps around DNS and Active Directory were done correctly, this validation step is uneventful, which is exactly what you want.

And now let the data migration begin. I am actually doing it from my Mac. And it just works!!!

What becomes apparent after completing these steps is how little effort is required to stand up a fully functional file platform on infrastructure that is already in place. Unless, of course, your NTP server crashed. The system behaves predictably, integrates cleanly with existing services, and avoids many of the operational burdens associated with VM-based file servers.

And that is where things start to get interesting. Because everything described so far is still being done manually — selecting where things live, defining configurations, and applying policies one step at a time. It works, and it works well, but it also mirrors the way storage has traditionally been managed. In the next post, I will show what happens when you stop doing these steps manually and let Pure Fusion handle placement, policy, and provisioning instead.

Appreciate you reading.
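P.S. Here is the client-side validation sketch promised above. The hostname and export name come from this lab's configuration, so adjust them for your own environment; domain-qualified users can be passed as DOMAIN;username.

```
# macOS SMB mount -- fa-file01 and the export name 'users' are from this lab.
mkdir -p /Volumes/users
mount_smbfs //testuser@fa-file01/users /Volumes/users

# Basic create/read/delete to confirm permissions end to end.
echo "hello from the array" > /Volumes/users/smoke-test.txt
cat /Volumes/users/smoke-test.txt
rm /Volumes/users/smoke-test.txt
```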
© 2025 Dmitry Gorbatov | #dmitrywashere

What We Learned About ActiveCluster for File from the Latest “Ask Us Everything”
The newly-announced ActiveCluster for file extends Everpure’s synchronous replication to unstructured workloads — so it was no surprise that the latest Ask Us Everything session drew a lot of attention. Attendees came ready with practical questions about how it works, where it fits, and what it could mean for real production environments. And host Don Poorman, Product Manager Quinn Summers, and Principal Technologist Russell Pope brought the Everpure answers. The conversation showed just how this new approach can help modernize resiliency, mobility, and day-to-day operations. Let’s break down the biggest takeaways.

“Is This Just HA… or Something More?”

One of the most interesting threads came early: is ActiveCluster for file just another high availability solution? Short answer: no. Attendees pushed on this, and the response from Everpure’s team was clear — this is about data mobility and policy-driven management, not just surviving a failure. Instead of treating HA as a one-off configuration, ActiveCluster is designed to align storage behavior with business intent. That shift matters. In traditional environments, HA is often bolted on and managed manually. Here, policies define things like performance, protection, and placement — and the system enforces them automatically across the fleet. For many in the session, that was a “wait, this is different” moment.

The Big Comparison: Legacy Replication vs. ActiveCluster

A standout question came from someone evaluating ActiveCluster as a replacement for legacy approaches like NetApp SVMDR. The discussion highlighted a key difference: granularity and consistency. Legacy solutions often replicate at a coarser level (think entire systems or large aggregates), which doesn’t always align with how applications are structured. ActiveCluster instead works at the realm level, where both data and configuration are synchronously mirrored. That means:

No mismatched failover scope
No rebuilding configs on the other side
No “did we forget something?” during a failover

It’s a cleaner, more application-aligned model — and that resonated with the audience.

“What Actually Happens During a Failover?”

Attendees asked the right questions: Is failover automatic? What about DNS changes? How fast does it happen? The answers were refreshingly direct. In a stretched Layer 2 setup, failover is fully automatic and transparent — clients don’t even notice. In more complex network designs, there may be some redirection (like DNS updates), but the data is already in sync. And timing? The expectation is on the order of seconds, often under 10. This is a capability currently unmatched by any of Everpure’s legacy storage competitors.

There was also a lot of interest in how Everpure avoids split-brain scenarios. The mediator service — hosted by Everpure or deployed locally if needed — acts as a lightweight “tie breaker” during network partitions. No extra infrastructure to manage in most cases, and no guesswork about which side should stay active.

Simplicity Came Up… A Lot

If there was one theme that kept coming back, it was simplicity. One attendee asked about setup, and the answer was basically: it’s wizard-driven. That sparked a broader discussion about how legacy storage often assumes admins have time to relearn complex workflows. In reality, most teams are juggling multiple systems. The ability to stand up synchronous replication with a few guided steps — not scripts, not custom tooling — landed well. Even testing reflects that philosophy.
Instead of complex test procedures, the guidance was simple: pull cables, simulate real failures, and observe behavior. No artificial “test modes” — just real-world validation.

Data Mobility Is the Real Story

Another strong theme was mobility. ActiveCluster doesn’t just protect data — it enables you to move it. The “stretch and unstretch” workflow means datasets can be mirrored, shifted, and re-homed without disruption. That’s a big departure from traditional models, where moving data often means downtime, migration projects, or both. For teams thinking about workload placement, lifecycle management, or hybrid environments, this opens up new options.

Real-World Use Cases

The audience also pushed beyond file shares into real workloads:

Financial trading and payment systems
Healthcare imaging and research data
VMware/NFS environments

The takeaway: if it’s mission-critical and file-based, it’s a candidate.

Final Thought: Even More on the Horizon

Even with some initial constraints (like starting with new file systems), the field feedback shared during the session was telling: customers are ready to adopt this early. Why? Because the core value — resiliency, mobility, and simplicity — is already there. And if the session proved anything, it’s that Everpure is building this in close collaboration with the community. The questions weren’t just answered — they’re shaping what comes next. If you’re evaluating how to modernize file services, Everpure’s approach is definitely one to consider.

Check out this and all our other Ask Us Everything sessions. And keep the conversation going by jumping into the Everpure Community.
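As a practical footnote to the stretch-and-unstretch workflow discussed above: on FlashArray it maps to a small number of pod operations. The array and pod names below are placeholders; the purepod add form is documented elsewhere in this community, while purepod remove as the unstretch verb is my assumption, so verify it on your Purity release.

```
# Stretch: add a second array to the pod; data resyncs and the arrays become peers.
purepod add --array PFAX70-REMOTE MYPOD001

# Watch the pod and its array membership while the resync completes.
purepod list MYPOD001

# Unstretch: re-home the data by removing one array from the pod.
# ASSUMPTION: symmetric to 'purepod add'; confirm before using in production.
purepod remove --array PFAX70-REMOTE MYPOD001
```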
FlashArray File Multi-Server

File support on FlashArray gets another highly demanded feature. With version 6.8.7, Purity introduces the concept of a Server, which connects exports, directory services, and all the other necessary objects required for this setup, namely DNS configuration and networking. From this version onwards, all directory exports are associated with exactly one server. To recap, a server has associations to the following objects:

DNS
Active Directory / Directory Service (LDAP)
Directory Export
Local Directory Service

Local Directory Service is another new entity introduced in version 6.8.7, and it represents a container for Local Users and Groups. Each server has its own Local Directory Service (LDS) assigned to it, and an LDS also has a domain name, which means “domain” is no longer the hardcoded name of the local domain; it is a user-configurable option.

All of these statements imply lots of changes in the user experience. Fortunately, this is commonly about adding a reference or the possibility to link a server, and our GUI now contains a Server management page, including a Server details page, which puts everything together and makes a server configuration easy to understand, validate, and modify.

One question you might be asking right now is: can I use File services without Servers? The answer is no, not really. But don’t be alarmed. Significant effort has been made to keep all commands and flows backwards compatible, so unless some script is parsing exact output and needs to be aligned because there is a new “Server” column added, there should not be any need to change those. How did we manage to do that? A special server called _array_server has been created, and if your configuration has anything file-related, it will be migrated during the upgrade. Let me also offer a taste of how the configuration could look once the array is updated to the latest version.

List of Servers

```
# pureserver list
Name           Dns         Directory Services  Local Directory Service  Created
_array_server  management  -                   domain                   2025-06-09 01:00:26 MDT
prod           prod        -                   prod                     2025-06-09 01:38:14 MDT
staging        management  stage               staging                  2025-06-09 01:38:12 MDT
testing        management  testing             testing                  2025-06-09 01:38:11 MDT
```

List of Active Directory accounts

Since we can join multiple AD servers, we can now have multiple AD accounts, up to one per server:

```
# puread account list
Name           Domain            Computer Name  TLS       Source
ad-array       <redacted>.local  ad-array       required  -
prod::ad-prod  <redacted>.local  ad-prod        required  -
```

ad-array is the configuration for the _array_server, and for backwards-compatibility reasons the server-name prefix hasn’t been added. The prefix is there for the account connected to server prod (and to any other server).

List of Directory Services (LDAP)

Directory services also got slightly reworked, since before 6.8.7 there were only two configurations, management and data. Obviously, that’s not enough for more than one server (management is reserved for array management access and can’t be used for File services). After the 6.8.7 release, it’s possible to completely manage Directory Service configurations and link them to individual servers.

```
# pureserver list
Name           Dns         Directory Services  Local Directory Service  Created
_array_server  management  -                   domain                   2025-06-09 01:00:26 MDT
prod           prod        -                   prod                     2025-06-09 01:38:14 MDT
staging        management  stage               staging                  2025-06-09 01:38:12 MDT
testing        management  testing             testing                  2025-06-09 01:38:11 MDT
```

Please note that these objects are intentionally not enabled / not configured.
List of Directory exports

```
# puredir export list
Name                       Export Name  Server   Directory                    Path  Policy                  Type  Enabled
prod::smb::accounting      accounting   prod     prodpod::accounting:root     /     prodpod::smb-simple     smb   True
prod::smb::engineering     engineering  prod     prodpod::engineering:root    /     prodpod::smb-simple     smb   True
prod::smb::sales           sales        prod     prodpod::sales:root          /     prodpod::smb-simple     smb   True
prod::smb::shipping        shipping     prod     prodpod::shipping:root       /     prodpod::smb-simple     smb   True
staging::smb::accounting   accounting   staging  stagingpod::accounting:root  /     stagingpod::smb-simple  smb   True
staging::smb::engineering  engineering  staging  stagingpod::engineering:root /     stagingpod::smb-simple  smb   True
staging::smb::sales        sales        staging  stagingpod::sales:root       /     stagingpod::smb-simple  smb   True
staging::smb::shipping     shipping     staging  stagingpod::shipping:root    /     stagingpod::smb-simple  smb   True
testing::smb::accounting   accounting   testing  testpod::accounting:root     /     testpod::smb-simple     smb   True
testing::smb::engineering  engineering  testing  testpod::engineering:root    /     testpod::smb-simple     smb   True
testing::smb::sales        sales        testing  testpod::sales:root          /     testpod::smb-simple     smb   True
testing::smb::shipping     shipping     testing  testpod::shipping:root       /     testpod::smb-simple     smb   True
```

The notable change here is that the Export Name and Name have slightly different meanings. Pre-6.8.7 versions used the Export Name as a unique identifier, since we had a single (implicit, now explicit) server, which naturally created a scope. Now the Export Name can be the same as long as it is unique within the scope of a single server, as seen in this example. The Name is different and provides an array-unique export identifier: it is a combination of the server name, the protocol name, and the export name.

List of Network file interfaces

```
# purenetwork eth list --service file
Name     Enabled  Type  Subnet  Address  Mask  Gateway  MTU   MAC                Speed     Services  Subinterfaces  Servers
array    False    vif   -       -        -     -        1500  56:e0:c2:c6:f2:1a  0.00 b/s  file      -              _array_server
prod     False    vif   -       -        -     -        1500  de:af:0e:80:bc:76  0.00 b/s  file      -              prod
staging  False    vif   -       -        -     -        1500  f2:95:53:3d:0a:0a  0.00 b/s  file      -              staging
testing  False    vif   -       -        -     -        1500  7e:c3:89:94:8d:5d  0.00 b/s  file      -              testing
```

As seen above, File network VIFs now reference a specific server. (This list is particularly artificial, since none of them is properly configured or enabled; anyway, the main message is that a File VIF now “points” to a specific server.)

Local Directory Services

Local Directory Service (LDS) is a newly introduced container for Local Users and Groups.
```
# pureds local ds list
Name     Domain
domain   domain
testing  testing
staging  staging.mycorp
prod     prod.mycorp
```

As already mentioned, all local users and groups now have to belong to an LDS, which means management of those also carries that information:

```
# pureds local user list
Name           Local Directory Service  Built In  Enabled  Primary Group   Uid
Administrator  domain                   True      True     Administrators  0
Guest          domain                   True      False    Guests          65534
Administrator  prod                     True      True     Administrators  0
Guest          prod                     True      False    Guests          65534
Administrator  staging                  True      True     Administrators  0
Guest          staging                  True      False    Guests          65534
Administrator  testing                  True      True     Administrators  0
Guest          testing                  True      False    Guests          65534
```

```
# pureds local group list
Name              Local Directory Service  Built In  Gid
Audit Operators   domain                   True      65536
Administrators    domain                   True      0
Guests            domain                   True      65534
Backup Operators  domain                   True      65535
Audit Operators   prod                     True      65536
Administrators    prod                     True      0
Guests            prod                     True      65534
Backup Operators  prod                     True      65535
Audit Operators   staging                  True      65536
Administrators    staging                  True      0
Guests            staging                  True      65534
Backup Operators  staging                  True      65535
Audit Operators   testing                  True      65536
Administrators    testing                  True      0
Guests            testing                  True      65534
Backup Operators  testing                  True      65535
```

Conclusion

I showed how the FA configuration might look, without providing much detail about the actual way to configure or test these configs; anyway, this article should provide a good overview of what to expect from the 6.8.7 version. There is plenty of information about this particular aspect of the release in the updated product documentation. Please let me know if there is any demand to deep-dive into any aspect of this feature.
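To round out the listings above, here is roughly how a new server and a server-scoped export come together from the CLI. The puredir export create signature matches what is documented elsewhere in this community; pureserver create is my assumption for the creation verb, so verify it on your array before scripting against it.

```
# ASSUMPTION: 'pureserver create' is inferred from 'pureserver list', not verified.
pureserver create prod

# Export an existing managed directory through that server only.
puredir export create --dir prodpod::accounting:root --policy prodpod::smb-simple \
    --server prod --export-name accounting

# Confirm the export is scoped to the 'prod' server.
puredir export list
```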
Ask Us Everything About ActiveCluster for File!

💬 No April Fools here! Get ready to kick off this month with an edition of Ask Us Everything, this Friday, April 3rd at 9 AM Pacific. For this session, we are all about ActiveCluster for File!

If you have a burning question, feel free to ask it here early and we'll add it to the list to answer on Friday. Or if we can't get it answered live, our Everpure experts can follow up right here on this thread. Russell Pope, qsummers Flashman & dpoorman are the experts rallying and answering your questions during the conversation as well as here on the community. See you all this Friday! (Oh, and if you haven't registered yet, there's still time!)

Or, check out these self-serve resources:

Blog Post: https://blog.purestorage.com/products/introducing-activecluster-for-file/
Demo: https://www.purestorage.com/demos/platform/what-is-the-pure-storage-platform/introducing-activecluster-for-file-on-flasharray/6390644453112.html
Boosting SQL Server Backup/Restore Performance: Threads and Parallelism

In this post, we’ll discuss day 1 tuning you can do on your database hosts to take full advantage of your new high-performance backup storage. We’ll go over a few tricks around database layout and backup configuration for maximum throughput, discuss some quirks with SMB, and finally discuss using S3 effectively.

A list of useful Purity CLI commands to manage Pure Flash Storage arrays
"pureadmin" commands The pureadmin command displays and manage administrative accounts in Pure Flash Storage Array (22 Commands) Explanation pureadmin create testuser --api-token Generate an API token for the user testuser pureadmin create testuser --api-token --timeout 2h Create API Token for testuser valid for 2 hours pureadmin create testuser --role storage_admin Create user testuser with storage_admin role. Possible roles are readonly, ops_admin, storage_admin, array_admin pureadmin delete --api-token Delete API Token for current user pureadmin delete testuser Delete user testuser from Flash Array pureadmin delete testuser --api-token Delete API Token for user testuser pureadmin global disable --single-sign-on This will disable single sign-on on the current array. Enabling single sign-on gives LDAP users the ability to navigate seamlessly from Pure1 Manage to the current array through a single login. pureadmin global enable --single-sign-on This enables single sign-on on the current array. Enabling single sign-on gives LDAP users the ability to navigate seamlessly from Pure1 Manage to the current array through a single login. pureadmin global list List the global administration attributes like Lockout Duration, Maximum Login Attempts, Minimum Password Length, etc.. pureadmin global setattr --lockout-duration 1m Set the lockout duration to 1 minute after maximum unsuccessful login attempts. pureadmin global setattr --max-login-attempts 3 Set the maximum failed login attempts to 3 before the user get locked out. pureadmin global setattr --min-password-length 8 Set the minimum length of characters required for all the local user account passwords to 8. Minimum length allowed is 1. This will not affect the existing user accounts, but all future password assignment must meet the new value. pureadmin list List all the users configured in the Flash Array pureadmin list --api-token List all the users with api tokens configured pureadmin list --api-token --expose List all the users with api tokens configured and expose the api token for the current user loggedin. pureadmin list --lockout List all the user accounts that are currently lockout pureadmin refresh --clear Clears the permission cache for all the users pureadmin refresh --clear testuser Clears the permission cache for testuser pureadmin refresh testuser Refresh the permission cache for testuser pureadmin reset testuser --lockout Unlock locked user testuser pureadmin setattr testuser --password Change the password for the user testuser pureadmin setattr testuser --role array_admin Change the role of the user testuser to array_admin role. Possible roles are readonly, ops_admin, storage_admin, array_admin "purealert" commands The purealert command manages alert history and the list of designated email addresses for alert notifications (8 Commands) Explanation purealert flag 121212 Flag an alert with ID 121212. This will appear in the flagged alert list. purealert list List all the alerts generated in the Pure Flash Array purealert list --filter "issue='failure'" List all the alerts generated for failures purealert list --filter "severity='critical'" List all the alerts with Critical severity. purealert list --filter "state='closed'" List all the closed alerts purealert list --filter "state='open'" List all the alerts in Open state purealert list --flagged List all the alerts that are flagged. By default all alerts are flagged. We can unflag command once those are resolved. purealert unflag 121212 Unflag alert with ID 121212. 
    This will not appear in the flagged alert list.

"purearray" commands

The purearray command displays attributes and monitors I/O performance on a Pure Flash Storage array (24 commands):

purearray connect --management-address 10.0.0.1 --type async-replication --connection-key
    Connect the local array to remote array 10.0.0.1 for asynchronous replication using the connection key. You will be prompted to enter the connection key.
purearray connect --management-address 10.0.0.1 --type sync-replication --connection-key
    Connect the local array to remote array 10.0.0.1 for synchronous replication using the connection key. You will be prompted to enter the connection key.
purearray connect --management-address 10.0.0.1 --type sync-replication --replication-transport ip --connection-key
    Connect the local array to remote array 10.0.0.1 for synchronous replication via Ethernet transport using the connection key. You will be prompted to enter the connection key.
purearray disable phonehome
    Disable the phonehome (dial-home) feature of the array.
purearray disconnect 10.0.0.1
    Disconnect array 10.0.0.1 from the local array connected for remote replication.
purearray enable phonehome
    Enable the phonehome (dial-home) feature of the array.
purearray list
    Display the array name, serial number, and firmware version.
purearray list --connect
    Display remotely connected arrays for replication.
purearray list --connect --path
    Display arrays connected for remote replication along with connection paths.
purearray list --connect --throttle
    Display the replication throttle limit.
purearray list --connection-key
    Display the connection key that can be used to connect to the array.
purearray list --controller
    List all the controllers connected to the array, including the model and status of each controller.
purearray list --ntpserver
    List the NTP servers configured.
purearray list --phonehome
    Display the dial-home configuration status of the array.
purearray list --space
    Display the capacity and usage statistics of the array.
purearray list --space --historical 30d
    Display the capacity and usage statistics of the array over the last 30 days.
purearray list --syslogserver
    List the syslog servers configured to receive the array's logs.
purearray monitor --interval 4 --repeat 5
    Display the array-wide I/O performance of a FlashArray every 4 seconds, 5 times.
purearray remoteassist --status
    Check whether Remote Assist is active or inactive.
purearray rename MYARRAY001
    Set the name of the array to MYARRAY001.
purearray setattr --ntpserver ''
    Remove all the NTP servers configured for the array.
purearray setattr --ntpserver time.google.com
    Set the NTP server.
purearray setattr --syslogserver ''
    Remove all the syslog servers configured for the array.
purearray setattr --syslogserver log.server.com
    Set the syslog server for the array.

"pureaudit" commands

The pureaudit command displays and manages the audit log records on a Pure Flash Storage array (7 commands):

pureaudit list
    Display the list of audit records.
    Audit trail records are created whenever administrative actions are performed by a user (e.g., creating, destroying, or eradicating a volume).
pureaudit list --filter 'command="purepod" and subcommand="create"'
    List all the audit records for purepod create commands executed on the array.
pureaudit list --filter 'command="purepod" and user="pureuser"'
    List all the audit records for purepod commands executed by pureuser on the array.
pureaudit list --filter 'command="purepod"'
    List all the audit records for purepod commands executed on the array.
pureaudit list --filter 'user = "root"'
    Display the list of audit records for the root user.
pureaudit list --limit 10
    Display the first 10 rows of audit records.
pureaudit list --sort user
    Display the list of audit records sorted by the user field. By default the records are sorted by the time field.

"pureconfig" commands

The pureconfig command provides commands to reproduce the current Pure Flash Storage array configuration (4 commands):

pureconfig list
    Display the list of commands to reproduce the volume, host, host group, connection, network, alert, and array configurations. Copying this and running it on another array will create an exact copy.
pureconfig list --all
    Display all the commands required to reproduce the current FlashArray configuration of hosts, host groups, pods, protection groups, volumes, volume groups, connections, file systems and directories, alerts, network, policies, and support.
pureconfig list --object
    Display the object configuration of the FlashArray, including hosts, host groups, pods, protection groups, volumes, volume groups, and connections, as well as file systems and directories if file services are enabled.
pureconfig list --system
    Display the system configuration of the FlashArray, including network, policies, alerts, and support.

"puredns" commands

The puredns command manages the DNS attributes for an array's administrative network (4 commands):

puredns list
    Display the current DNS parameters configured on the array, including the domain suffixes and the IP addresses of the name servers.
puredns setattr --domain ""
    Remove the domain suffix from Purity//FA DNS queries.
puredns setattr --domain test.com --nameservers 192.168.0.10,192.168.2.11
    Add the IPv4 addresses of two DNS servers for the array to use to resolve hostnames to IP addresses, and the domain suffix test.com for DNS searches.
puredns setattr --nameservers ""
    Unassign the DNS server IP addresses from the DNS entry. This stops the array from making DNS queries.

"puredrive" commands

The puredrive command provides information about the flash drives and NVRAM modules in a Pure Flash Storage array (6 commands):

puredrive admit
    Admit all drive modules that have been added or connected but not yet admitted to the array. Once successfully admitted, the status of the drive modules changes from unadmitted to healthy.
puredrive list
    List all the flash drive modules in an array, including the capacity of each module.
puredrive list --spec
    List all the flash drive modules in an array along with protocol (SAS/NVMe) information.
puredrive list --total
    List all the flash drive modules in an array with the total capacity figure.
puredrive list CH0.BAY10
    Display information about flash drive BAY10 in CH0.
puredrive list CH0.BAY10 --pack
    Display information about flash drive BAY10 in CH0 and all other drives in the same pack.
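One practical footnote on the pureadmin API tokens covered above: they are mostly consumed through the FlashArray REST API, where the token is first exchanged for a short-lived session token. The array address below is a placeholder, and the API version in the path may differ on your Purity release.

```
ARRAY=array01.lab.local            # placeholder address
API_TOKEN=your-api-token-here      # from 'pureadmin create <user> --api-token'

# Exchange the API token for a session token (returned in the x-auth-token header).
AUTH=$(curl -sk -D - -o /dev/null -X POST \
      -H "api-token: ${API_TOKEN}" \
      "https://${ARRAY}/api/2.4/login" \
      | awk 'tolower($1)=="x-auth-token:" {print $2}' | tr -d '\r')

# Use the session token on subsequent calls.
curl -sk -H "x-auth-token: ${AUTH}" "https://${ARRAY}/api/2.4/arrays"
```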
Pure FlashArray CLI Quick References (daily feeds)

How to reduce the size of a volume in a Pure FlashArray
  purevol truncate --size 1G MY_VOL_001
  Reduces the size of MY_VOL_001 to 1 GB (from a current size of 8 GB, for example).
How to list all flash drives and NVRAM modules in a Pure FlashArray with total capacity
  puredrive list --total
  Lists all the flash drive modules in the array with a total capacity figure.
How to disconnect a volume from a host in a Pure FlashArray
  purevol disconnect MY_VOL_001 --host MY-SERVER-001
  Disconnects volume MY_VOL_001 from host MY-SERVER-001. This removes the host's visibility of the volume.
How to create a host group with existing hosts in a Pure FlashArray
  purehgroup create MY-HOSTS --hostlist MY-HOST-001,MY-HOST-002
  Creates host group MY-HOSTS and adds the existing hosts MY-HOST-001 and MY-HOST-002 into it.
How to stretch a pod
  purepod add --array PFAX70-REMOTE MYPOD001
  Adds the remote array PFAX70-REMOTE to pod MYPOD001. This stretches the pod, and volume data inside the pod is synchronously replicated between the two arrays. The arrays in a stretched pod are peers; there is no concept of source and target. Volumes within the pod are visible on each array with the same serial numbers.
How to create multiple volumes in a Pure FlashArray
  purevol create --size 10G MY_VOLUME_001 MY_VOLUME_002
  Creates volumes MY_VOLUME_001 and MY_VOLUME_002, each 10 GB in size.
How to remove hosts from a host group in a Pure FlashArray
  purehgroup setattr MY-HOSTS --remhostlist MY-HOST-002,MY-HOST-003
  Removes MY-HOST-002 and MY-HOST-003 from host group MY-HOSTS.
How to delete a host object in a Pure FlashArray
  purehost delete MY-SERVER-001
  Deletes host MY-SERVER-001.
How to search for an HBA WWN and see which FC port it is logged in to
  pureport list --initiator --raw --filter "initiator.wwn='1000000000000001'"
  Searches for HBA WWN 1000000000000001 and shows which FC port it is logged in to.
How to list all the closed alerts in a Pure FlashArray
  purealert list --filter "state='closed'"
  Lists all the closed alerts.
How to disconnect a specific volume from a host in a Pure FlashArray
  purehost disconnect MY-SERVER-001 --vol MY_VOL_001
  Disconnects volume MY_VOL_001 from host MY-SERVER-001. This removes the host's visibility of the volume.
How to display the connection key that can be used to connect to a Pure FlashArray
  purearray list --connection-key
  Displays the connection key that can be used to connect to the array.
How to list all the users configured on the FlashArray
  pureadmin list
  Lists all the users configured on the FlashArray.
How to list all the volumes in a Pure FlashArray
  purevol list
  Lists all the volumes.
How to list all the drive modules in a Pure FlashArray
  purehw list --type bay
  Lists all the drive bays in the array.
How to display all the FlashArray target ports
  pureport list
  Displays all the target ports within the FlashArray, including FC, iSCSI, and NVMe ports. Also displays the WWNs of FC ports, the iSCSI Qualified Names (IQNs) of iSCSI ports, and the NVMe Qualified Names (NQNs) of NVMe ports.
How to move a volume into a pod in a Pure FlashArray
  purevol move vol001 MYPOD001
  Moves volume vol001 into a non-stretched pod. This throws an error if the target pod is stretched.
How to display the current DNS parameters configured on a Pure FlashArray
  puredns list
  Displays the current DNS parameters configured on the array, including the domain suffixes and the IP addresses of the name servers.
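Several of the entries above form the standard first-provisioning flow. A minimal sketch, reusing the illustrative host and volume names from these examples:

  # Create the host object (HBA WWNs can be added later with purehost setattr)
  purehost create MY-SERVER-001
  # Create a 10 GB volume
  purevol create --size 10G MY_VOL_001
  # Connect the volume to the host; the next available LUN is assigned
  purevol connect MY_VOL_001 --host MY-SERVER-001
  # Confirm what the host can now see
  purehost list MY-SERVER-001 --connect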
How to destroy a pod
  purepod destroy MYPOD001
  Destroys (deletes) pod MYPOD001. The pod must be empty and unstretched. It is not destroyed immediately, but placed in a 24-hour eradication-pending period.
How to display the syslog server settings of a Pure FlashArray
  purearray list --syslogserver
  Lists the syslog servers configured to receive logs from the array.
How to list all flash drives and NVRAM modules in a Pure FlashArray
  puredrive list
  Lists all the flash drive modules in the array, along with the capacity of each module.
How to monitor replica links on a Pure FlashArray
  purepod replica-link monitor --replication
  Monitors the data transfer speed on the replica links of the array. If a replication link is paused, the speed shows as 0.
How to list all the volumes connected to a host
  purehost list MY-SERVER-001 --connect
  Lists all the volumes connected to host MY-SERVER-001.
How to list all the pods on a Pure FlashArray
  purepod list
  Lists all the pods on the array, showing the arrays in each pod and the status of each.
How to display the array-wide I/O performance of a Pure FlashArray
  purearray monitor --interval 4 --repeat 5
  Displays the array-wide I/O performance every 4 seconds, 5 times.
How to display all the FC ports in a controller
  pureport list --raw --filter "name='CT0.FC*'"
  Displays all the Fibre Channel ports in controller 0, along with their WWNs.
How to disconnect a volume from a host group in a Pure FlashArray
  purehgroup disconnect MY-HOSTS --vol MY_VOL_001
  Disconnects volume MY_VOL_001 from host group MY-HOSTS.
How to remove all the hosts from a host group in a Pure FlashArray
  purehgroup setattr MY-HOSTS --hostlist ""
  Removes all the hosts from host group MY-HOSTS.
How to disconnect a remote array from the local Pure FlashArray
  purearray disconnect 10.0.0.1
  Disconnects remote array 10.0.0.1 from the local array, removing the replication connection.
How to set the minimum password length for user accounts on a Pure FlashArray
  pureadmin global setattr --min-password-length 8
  Sets the minimum number of characters required for all local user account passwords to 8. The minimum length allowed is 1. Existing accounts are not affected, but all future password assignments must meet the new value.
How to connect a volume to a host group in a Pure FlashArray
  purehgroup connect MY-HOSTS --vol MY_VOL_001
  Connects volume MY_VOL_001 to host group MY-HOSTS and assigns it a LUN ID. The LUN ID starts at 254 and counts down toward 1. If all LUNs in the [1...254] range are taken, Purity//FA starts at LUN 255 and counts up to the maximum LUN of 16383, assigning the first available LUN to the connection.
How to rename a host object in a Pure FlashArray
  purehost rename MY-SERVER-001 YOUR-SERVER-001
  Renames host MY-SERVER-001 to YOUR-SERVER-001.
How to demote a pod on a Pure FlashArray
  purepod demote MYPOD001
  Demotes pod MYPOD001.
How to change the bandwidth limit of a volume in a Pure FlashArray
  purevol setattr --bw-limit 1M MY_VOL_001
  Changes the bandwidth limit of MY_VOL_001 to 1 MB/s.
How to connect a volume to a host in a Pure FlashArray
  purevol connect MY_VOL_001 --host MY-SERVER-001
  Connects volume MY_VOL_001 to host MY-SERVER-001, providing read/write access to the volume. The next available LUN address is used by default.
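Host-group plumbing follows the same connect/setattr pattern at group scope. A minimal sketch, assuming the member hosts already exist (names reused from the examples above):

  # Group two existing hosts
  purehgroup create MY-HOSTS --hostlist MY-HOST-001,MY-HOST-002
  # Expose a volume to every host in the group; a LUN ID is assigned
  purehgroup connect MY-HOSTS --vol MY_VOL_001
  # Later, pull one host out of the group without touching the others
  purehgroup setattr MY-HOSTS --remhostlist MY-HOST-002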
How to connect a volume to multiple hosts in a Pure FlashArray
  purehost connect MY-SERVER-001 MY-SERVER-002 --vol MY_VOL_001
  Connects volume MY_VOL_001 to hosts MY-SERVER-001 and MY-SERVER-002.
How to connect the local Pure FlashArray to a remote array for asynchronous replication
  purearray connect --management-address 10.0.0.1 --type async-replication --connection-key
  Connects the local array to remote array 10.0.0.1 for asynchronous replication using the connection key. The connection key is prompted for interactively.
How to display all the iSCSI ports in a FlashArray
  pureport list --raw --filter "name='*ETH*'"
  Displays all the iSCSI ports in the array, along with their IQNs.
How to connect multiple volumes to a host in a Pure FlashArray
  purevol connect MY_VOL_001 MY_VOL_002 --host MY-SERVER-001
  Connects volumes MY_VOL_001 and MY_VOL_002 to host MY-SERVER-001.
How to create a host object in a Pure FlashArray
  purehost create MY-SERVER-001
  Creates a host object called MY-SERVER-001. HBA WWNs can be added later using the purehost setattr command.
How to create a snapshot of a volume in a Pure FlashArray
  purevol snap MY_VOL_001
  Creates a snapshot of MY_VOL_001 with an automatically numbered suffix (MY_VOL_001.2 for the first snapshot in this example).
How to increase the size of multiple volumes in a Pure FlashArray
  purevol setattr --size 2G MY_VOL_001 MY_VOL_002
  Increases the size of MY_VOL_001 and MY_VOL_002 to 2 GB (from current sizes of, for example, 500 MB and 1 GB).
How to remove an HBA WWN from a host object in a Pure FlashArray
  purehost setattr MY-SERVER-001 --remwwnlist 1000000000000003
  Removes HBA WWN 1000000000000003 from host MY-SERVER-001.
How to create multiple host objects in a Pure FlashArray
  purehost create MY-SERVER-001 MY-SERVER-002
  Creates hosts MY-SERVER-001 and MY-SERVER-002.
How to display the personality of a host
  purehost list MY-SERVER-001 --personality
  Displays the personality of host MY-SERVER-001.
How to list the hosts and the personality assigned to each
  purehost list --personality
  Displays the list of hosts along with the personality set for each. The personality is defined using the purehost setattr command.
How to unlock a locked user on a FlashArray
  pureadmin reset testuser --lockout
  Unlocks the locked user testuser.
How to list all the volumes sorted by serial number in descending order on a Pure FlashArray
  purevol list --sort "serial-"
  Lists all the volumes sorted by serial number in descending order.
How to create a volume in a Pure FlashArray
  purevol create --size 10G MY_VOLUME_001
  Creates a volume called MY_VOLUME_001 of size 10 GB.
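Snapshots and clones are the natural next step once a volume is provisioned. A minimal sketch (names reused from the examples above):

  # Take a point-in-time snapshot of the volume
  purevol snap MY_VOL_001
  # Confirm the snapshot exists
  purevol list --snap
  # Clone the volume to a new, independently addressable volume
  purevol copy MY_VOL_001 MY_VOL_002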
Pure FlashArray CLI Quick References (daily feeds)

How to display the NTP servers configured on a Pure FlashArray
  purearray list --ntpserver
  Lists the NTP servers configured.
How to enable phonehome on a Pure FlashArray
  purearray enable phonehome
  Enables the phonehome (dial home) feature of the array.
How to list all the FC ports in a Pure FlashArray
  purehw list --type fc
  Lists all the FC ports in the array, with status and speed information.
How to configure the DNS attributes of a Pure FlashArray
  puredns setattr --domain test.com --nameservers 192.168.0.10,192.168.2.11
  Adds the IPv4 addresses of two DNS servers for the array to use to resolve hostnames to IP addresses, and sets the domain suffix test.com for DNS searches.
How to list all the connected volumes for a host group on a Pure FlashArray
  purehgroup list --connect MY-HOSTS
  Lists all the connected volumes for host group MY-HOSTS.
How to add hosts to an existing host group in a Pure FlashArray
  purehgroup setattr MY-HOSTS --addhostlist MY-HOST-002,MY-HOST-003
  Adds MY-HOST-002 and MY-HOST-003 to the existing host group MY-HOSTS.
How to list all the controllers in a Pure FlashArray
  purehw list --type ct
  Lists all the controllers in the array.
How to eradicate multiple volumes in a Pure FlashArray
  purevol eradicate MY_VOL_001 MY_VOL_002
  Eradicates volumes MY_VOL_001 and MY_VOL_002, which were destroyed earlier. This fully destroys the volumes; they cannot be recovered afterwards.
How to add a new HBA WWN to a host object in a Pure FlashArray
  purehost setattr MY-SERVER-001 --addwwnlist 1000000000000003
  Adds the new HBA WWN 1000000000000003 to host MY-SERVER-001. The WWN must not already be part of any other host.
How to display all the host initiators known to the FlashArray
  pureport list --initiator
  Displays all the host initiator WWNs, IQNs, and NQNs known to the FlashArray. This also shows the target ports on which the initiators are eligible to communicate.
How to list all the flagged alerts on a Pure FlashArray
  purealert list --flagged
  Lists all the alerts that are flagged. By default, all alerts are flagged; an alert can be unflagged once it is resolved.
How to display the dial home status of a Pure FlashArray
  purearray list --phonehome
  Displays the dial home configuration status of the array.
How to unflag an alert on a Pure FlashArray
  purealert unflag 121212
  Unflags the alert with ID 121212, so it no longer appears in the flagged alert list.
How to rename a Pure FlashArray
  purearray rename MYARRAY001
  Sets the name of the array to MYARRAY001.
How to admit newly connected drive modules in a Pure FlashArray
  puredrive admit
  Admits all drive modules that have been added or connected but not yet admitted to the array. Once successfully admitted, the status of the drive modules changes from unadmitted to healthy.
How to display the replication throttle limit of a Pure FlashArray
  purearray list --connect --throttle
  Displays the replication throttle limit.
How to eradicate a volume in a Pure FlashArray
  purevol eradicate MY_VOL_001
  Eradicates volume MY_VOL_001, which was destroyed earlier. This fully destroys the volume; it cannot be recovered afterwards.
How to unstretch a pod
  purepod remove --array PFAX70-REMOTE MYPOD001
  Removes the remote array PFAX70-REMOTE from pod MYPOD001. This unstretches the pod, and volume data inside the pod is no longer synchronously replicated between the two arrays. Volumes within the pod are visible only on the local array.
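The alert commands above combine into a simple triage loop. A minimal sketch, reusing the alert ID 121212 from the example above:

  # See what is currently open
  purealert list --filter "state='open'"
  # Review everything still flagged for attention
  purealert list --flagged
  # Once an alert is resolved, drop it from the flagged list
  purealert unflag 121212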
How to list all the open alerts on a Pure FlashArray
  purealert list --filter "state='open'"
  Lists all the alerts in the open state.
How to list all the hosts with connected volumes
  purehost list --connect
  Lists all the hosts on the FlashArray that have connected volumes.
How to create a volume and include it in a pod
  purevol create --size 1G MYPOD001::MY_VOL_001
  Creates a volume of 1 GiB and includes it in MYPOD001. If MYPOD001 is stretched, the same volume is created and visible on the remote array too; the volume name and WWN appear the same from each array.
How to list all the volumes sorted by size and consumption on a Pure FlashArray
  purevol list --space --sort size,total
  Lists all the volumes sorted by the size of each volume and then by the total space consumed. Both fields are sorted in ascending order.
How to pause a replication link on a Pure FlashArray
  purepod replica-link pause PRDPOD001 --remote ARRAY002 --remote-pod DRPOD001
  Pauses the Active/DR replication by pausing the replica-link connection between the local and remote arrays. To continue the replication, resume the replica link.
How to recover a volume in a Pure FlashArray
  purevol recover MY_VOL_001
  Recovers volume MY_VOL_001, which was destroyed earlier.
How to change the role of a user on a FlashArray
  pureadmin setattr testuser --role array_admin
  Changes the role of user testuser to array_admin. Possible roles are readonly, ops_admin, storage_admin, and array_admin.
How to move a volume out of a pod on a Pure FlashArray
  purevol move MYPOD001::vol001 ""
  Moves volume vol001 out of the non-stretched pod MYPOD001. This throws an error if the pod is stretched.
How to connect a volume to a host group in a Pure FlashArray
  purevol connect MY_VOL_001 --hgroup MY-HOSTS
  Connects volume MY_VOL_001 to host group MY-HOSTS and assigns it a LUN ID, which ranges from 1 up to 16383.
How to list all the hosts on a FlashArray
  purehost list
  Lists all the hosts on the FlashArray with their member WWNs, IQNs, or NQNs. This also shows the host group each host belongs to, if any.
How to create a copy of a volume in a Pure FlashArray
  purevol copy MY_VOL_001 MY_VOL_002
  Copies MY_VOL_001 to a new volume named MY_VOL_002. If MY_VOL_002 already exists, this throws an error.
How to rename a volume in a Pure FlashArray
  purevol rename MY_VOL_001 MY_VOL_002
  Renames volume MY_VOL_001 to MY_VOL_002.
How to display the historical capacity and usage statistics of a Pure FlashArray
  purearray list --space --historical 30d
  Displays the capacity and usage statistics of the array for the last 30 days.
How to connect a host to a volume with a specific LUN ID in a Pure FlashArray
  purehost connect MY-SERVER-001 --vol MY_VOL_001 --lun 10
  Connects volume MY_VOL_001 to host MY-SERVER-001 and assigns LUN ID 10, providing read/write access to the volume.
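User administration follows the same list/setattr/reset shape as the other object types. A minimal sketch, reusing the testuser account from the examples above:

  # Review the configured users
  pureadmin list
  # Promote the account to the array_admin role
  pureadmin setattr testuser --role array_admin
  # Clear the lockout if the account has been locked
  pureadmin reset testuser --lockout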
How to list all the snapshots in a Pure FlashArray
  purevol list --snap
  Lists all the snapshots.
How to list all the users with API tokens configured on the FlashArray
  pureadmin list --api-token
  Lists all the users with API tokens configured.
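To close the loop on the volume-lifecycle entries in this reference: a minimal sketch of checking snapshots before recovering a destroyed (but not yet eradicated) volume, reusing names from the examples above:

  # Inspect the existing snapshots
  purevol list --snap
  # Bring back a volume that was destroyed but not yet eradicated
  purevol recover MY_VOL_001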