FlashCrew London & Glasgow May/June 2025 !!!! Register NOW...
I'd like to invite you to our upcoming FlashCrew Customer User Group in London on May 15th, from midday. Throughout May, we'll be taking our FlashCrew User Group on the road to share ideas, best practices and network on all things Pure over some drinks and food. Plus, as a thank you for your continued support and attendance, we will of course have the latest FlashCrew branded gifts for you to take with you! If you can make it, please register at the links below.

London
10-11 Carlton House Terrace
Thursday 15th May: REGISTER HERE for FLASHCREW LONDON

Glasgow
Radisson Blu Hotel
Thursday 5th June: REGISTER HERE for FLASHCREW GLASGOW

These are user group meetings, targeted at a technical audience across Pure's existing customers. Not only will you hear the latest news on the Pure Enterprise Data Cloud, but you will also get to network with other like-minded users and exchange ideas and experiences.

Agenda:
12:00 - 12:50 Arrival, Lunch and Welcome
13:00 - 14:00 Pure Platform: Features and Roadmap (with demo)
14:00 - 14:15 Break
14:15 - 14:45 SQL Databases and Pure
14:45 - 15:15 Voice of the Customer
15:15 - 15:30 Break
15:30 - 16:15 Portworx and the Enterprise Data Cloud
16:15 - 16:45 Modern Virtualisation
16:45 - 17:00 Open Floor Q&A, Raffle, Wrap Up
17:00 - 19:00 Drinks and Networking

Coming soon! The Pure Fusion MCP Server
Have you tried out the power and flexibility of using MCP servers in your daily admin life? If you haven't, you should really look into what they can provide. Pure has developed its own MCP server for Pure Fusion, and we will be releasing it soon. Check out this blog article to read more about the "sneak peek" into what is coming. And always remember: Automate! Automate! Automate!

Ask Us Everything Recap: Making Purity Upgrades Simple
At our recent Ask Us Everything session, we put a spotlight on something every storage admin has an opinion about: software upgrades. Traditionally, storage upgrades have been dreaded: late nights, service windows, and the fear of downtime. But as attendees quickly learned, Pure Storage Purity upgrades are designed to be a very different experience. Our panel of Pure Storage experts included our host Don Poorman, Technical Evangelist, and special guests Sean Kennedy and Rob Quast, Principal Technologists. Here are the questions that sparked the most conversation, and the insights our panel shared.

"Are Purity upgrades really non-disruptive?"
This one came up right away, and for good reason. Many admins have scars from upgrade events at other vendors. Pure experts emphasized that non-disruptive upgrades (NDUs) are the default. With thousands performed in the field, even for mission-critical applications, upgrades run safely in the background. Customers don't need to schedule middle-of-the-night windows just to stay current.

"Do I need to wait for a major release?"
Attendees wanted to know how often they should upgrade, and whether "dot-zero" releases are safe. The advice: don't wait too long. With Pure's long-life releases (like Purity 6.9), you can stay current without chasing every new feature release. And because Purity upgrades are included in your Evergreen subscription, you're not paying extra to get value; you just need to install the latest version. Session attendees found this slide helpful, illustrating the different kinds of Purity releases.

"How do self-service upgrades work?"
Admins were curious about how much they can do themselves versus involving Pure Storage support. The good news: self-service upgrades are straightforward through Pure1, but you're never on your own. Pure Technical Services knows that you're running an upgrade, and if an issue arises you're automatically moved to the front of the queue. If you want a co-pilot, then of course Pure Storage support can walk you through it live. Either way, the process is fast, repeatable, and built for confidence. Upgrading your Purity version has never been easier, now that Self Service Upgrades lets you modernize on your schedule.

"Why should I upgrade regularly?"
This is where the conversation shifted from fear to excitement. Staying current doesn't just keep systems secure; it unlocks new capabilities like:
Pure Fusion™: a unified, fleet-wide control plane for storage.
FlashArray™ Files: modern file services, delivered from the same trusted platform.
Ongoing performance, security, and automation enhancements that come with every release.
One attendee summed it up perfectly: "Upgrading isn't about fixing problems — it's about getting new toys."

The Takeaway
The biggest lesson from this session? Purity upgrades aren't something to fear; they're something to look forward to. They're included with your Evergreen subscription, they don't disrupt your environment, and they unlock powerful features that make storage easier to manage. So if you've been putting off your next upgrade, take a fresh look. Chances are, Fusion, Files, or another feature you've been waiting for is already there; you just need to turn it on.

👉 Want to keep the conversation going? Join the discussion in the Pure Community and share your own upgrade tips and stories.
Be sure to join our next Ask Us Everything session, and catch up with past sessions here!

Pure's Intelligent Control Plane: Powered by AI Copilot, MCP Connectivity and Workflow Orchestration
At Accelerate 2025, we announced two capabilities that change how you manage Pure Storage in your broader infrastructure: AI Copilot with Model Context Protocol (MCP) and Workflow Orchestration with production-ready templates. Here's what they do and why they matter.

AI Copilot with MCP: Your Infrastructure, One Conversation

The Problem
Your infrastructure spans multiple platforms: Pure Storage managing your data, VMware running VMs, OpenShift handling containers, security tools monitoring threats, application platforms tracking performance - each with its own console, APIs, and workflows. When you need to migrate a VM or respond to a security incident, you're manually pulling information from each system, correlating it yourself, then executing actions across platforms. You become the integration layer.

The Solution
Pure1 now supports Model Context Protocol (MCP), taking Copilot from a suggestive assistant to an active operator. With MCP enabled, Copilot doesn't just recommend - it acts. It serves as a secure bridge between natural language and your infrastructure, capable of fetching data, executing APIs, and orchestrating workflows across diverse systems.

Here's what makes this powerful: you deploy MCP servers within your environment - one for VMware, another for OpenShift, and others for the systems you use. Each server exposes your environment's capabilities through a standard, interoperable protocol (see the sketch at the end of this post). Pure Storage AI Copilot connects seamlessly to these MCP servers, as well as to Pure services such as Data Intelligence, Workflow Orchestration, and Portworx Monitoring, enabling unified and secure automation across your hybrid ecosystem.

What You Can Connect
You can deploy an MCP server on any system, whether it's your VMware environment, Kubernetes clusters, security platforms like CrowdStrike, databases, monitoring tools, or custom applications. Pure Storage AI Copilot connects to these servers under your control, securely combining their data with Pure Storage services to deliver richer insights and automation.

Getting Started: If you have a use case around MCP, please contact your Pure Storage account team.

Workflow Orchestration: Deploy in Minutes, Not Months

The Problem
Building production-grade automation takes months. You need error handling, integration with multiple systems, testing for edge cases, documentation, ongoing maintenance. Most teams end up with half-finished scripts that only one person understands.

The Solution
We built workflow templates for common operations, tested them at scale, and made them available in Pure1. Install them, customize them to your needs, and run them in minutes.

Key Templates

VMware to OpenShift Migration with Portworx
Handles the complete migration: extracts VM metadata, identifies backing Pure volumes, checks OpenShift capacity, configures the vVols Datastore and DirectAccess, uses array-based replication, and converts to Portworx format. Traditional migration takes hours for TB-scale VMs. This takes 20 to 30 minutes.

SQL / Oracle Database Clone and Copy
Automates cloning and copying of SQL Server and Oracle databases for dev/test or refresh needs. Instantly creates storage-efficient clones from snapshots, mounts them to target environments, and applies Pure-optimized settings. The hours-long manual process becomes a quick, consistent workflow completed in minutes.

Daily Fleet Health Check
Scans all arrays for capacity trends, performance issues, protection gaps, and hardware health. Posts a summary to Slack. Proactive visibility without manually checking each array.
Rubrik Threat Detection Response
When Rubrik detects a threat, the workflow automatically tags affected Pure volumes, creates isolated immutable snapshots, and notifies the security team. Security events propagate to your storage layer automatically.

How It Works
Workflow Orchestration is a SaaS feature in Pure1. Deploy lightweight agents (Windows, Linux, or Docker) in your data center to execute workflows locally. Group agents together for high availability and governance controls.

Integrations
Native Pure Storage: Pure1 Connector for full API access, Fusion Connector for storage provisioning (works for Fusion and non-Fusion FlashArray/FlashBlade customers).
Third-Party: ServiceNow, Slack, Google, Microsoft, CrowdStrike, HTTP/Webhooks, PagerDuty, Salesforce and more. The connector library continues expanding.

Getting Started: Opt in now in Pure1 - Workflows. An introductory offer is available at this time. Check with your Pure account team if you have questions.

How They Work Together
At Accelerate 2025 in New York, we showcased this capability in action. Here's the scenario: an organization wants to migrate VMs to Kubernetes. Action-enabled Copilot orchestrates communication with Pure Storage appliances and services, as well as third-party MCP servers, to collect the information required to address a problem across a heterogeneous environment. With Pure1 MCP, AI Copilot, and Workflows, there's now a programmatic way to collect information from the OpenShift MCP, the VMware MCP, and Pure1 storage insights, then recommend an approach on which VMs to migrate based on your selection criteria.

You prompt Copilot: "How can I move my VMs to OpenShift in an efficient way?"

Copilot communicates across:
Your VMware MCP server - to get VM specifications, current configurations, and resource usage
Your OpenShift MCP server - to check available cluster capacity and validate compatibility
Portworx monitoring - to understand current storage performance

Copilot reasons across all this information, identifies ideal VM candidates based on your criteria, and recommends the migration approach: which VMs to move, target configurations, and how to preserve policies. Then it can trigger the migration workflow, keeping you updated throughout the process.

Why This Matters
Storage Admins: Stop being the bottleneck. Enable self-service while maintaining governance.
DevOps Teams: Deploy production-tested automation without writing code.
Security Teams: Build automated response workflows spanning detection, isolation, and recovery.
Infrastructure Leaders: Reduce operational overhead. Teams focus on strategy, not repetitive tasks.

Get Started
MCP Integration: If you have a use case around MCP, please contact your Pure Storage account team.
Workflow Orchestration: Opt in at Pure1 → Workflows.
Learn More: Documentation in Pure1, or contact your Pure Storage account team.

Pure1 evolved from a monitoring platform to an Intelligent Control Plane. AI Copilot reasons across your infrastructure. Workflow Orchestration executes. Together, they change how you manage data with Pure Storage.
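To make the "deploy your own MCP server" idea above a little more concrete, here is a minimal sketch of a custom MCP server exposing a single storage-related tool, written against the open-source Python MCP SDK (FastMCP). The server name, tool, and capacity numbers are hypothetical placeholders, not part of Pure's shipping integration; a real server would call your own monitoring or storage APIs inside the tool, and connecting it to Pure1 AI Copilot goes through your account team as described above.

# Minimal sketch of a custom MCP server, assuming the open-source Python MCP SDK ("pip install mcp").
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-storage-mcp")  # server name shown to MCP clients

@mcp.tool()
def get_array_capacity(array_name: str) -> dict:
    """Return a capacity summary for the named array (placeholder data)."""
    # A real implementation would query the array's REST API or Pure1 here.
    return {"array": array_name, "used_tib": 42.0, "total_tib": 100.0}

if __name__ == "__main__":
    # Runs the server over stdio so an MCP-capable client can discover and call the tool.
    mcp.run()

Anything that can answer a question like this - a vSphere inventory, a CMDB, a ticketing queue - can be wrapped the same way and surfaced to an MCP-capable assistant.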
FlashBlade Ansible Collection 1.22.0 released!

🎊 FlashBlade Ansible Collection 1.22.0

THIS IS A SIGNIFICANT RELEASE, as it removes all REST v1 components from the collection and adds Fusion support! Update your collections!

Download the collection via the Ansible command: ansible-galaxy collection install purestorage.flashblade
Download it from Ansible Galaxy here
Read the Release Notes here.

New Pure Code site is live!
After many months of messing with some very old code, we have launched a revised site for the Pure Code Portal. It is much more minimalistic and cleaner than the old one, and we have plans to add our Code videos and Pure Employee website links in the near future. Have a look and feel free to leave a comment if you would like to see something on the site.

https://code.purestorage.com/

Cheers,
//Mike

Some Fleet PowerShell code using Invoke-RestMethod
Hello fellow scripters!

This script is a PowerShell script that uses native PowerShell cmdlets to do the tasks. It does not use the Pure Storage PowerShell SDK2. This is for folks who do raw API calls using automation packages, runbooks, and scripts. It is not intended to be used in its entirety, but rather as code snippets and starters for your own scripts. The full script is available in this GitHub repository.

This script will:
Use native PowerShell (non-SDK) Invoke-RestMethod calls to the FlashArray API
Authenticate an API token user and get the x-auth-token for requests
Query a fleet and determine the fleet members
Query fleet Presets & Workloads
List fleet volumes and hosts (top X, configurable)
Create a host and a volume, then connect the volume to the host on a member array

<#
.SYNOPSIS
    Authenticates to Pure Storage FlashArray REST API and retrieves session token.
.DESCRIPTION
    - Authenticates using API token.
    - Retrieves the x-auth-token from response headers for subsequent requests.
    - Dynamically queries the FlashArray for the latest available API version and uses it for requests.
.PARAMETER Target
    Required. The FQDN or IP address of the FlashArray to target for REST API calls.
.PARAMETER ApiToken
    Required. The API token used for authentication with the FlashArray REST API.
.EXAMPLE
    .\Connect-FAApi.ps1 -Target "10.0.0.100" -ApiToken "<Your API Token here>"
.NOTES
    Author: mnelson@purestorage.com
    Origin Date: 10/23/2023
    Version: 1.1
#>

param (
    [Parameter(Mandatory = $true)]
    [string]$Target,

    [Parameter(Mandatory = $true)]
    [string]$ApiToken
)

################ SETUP ################

# Query the array for the latest available API version
try {
    $apiVersions = Invoke-RestMethod -Uri "https://$Target/api/api_version" -Method Get -SkipCertificateCheck
    $numericApiVersions = $apiVersions.version | Where-Object { $_ -match '^\d+(\.\d+)*$' -and $_ -notmatch '^2\.x$' }
    $latestApiVersion = ($numericApiVersions | Sort-Object { [version]$_ } -Descending)[0]
    Write-Host "Latest API Version detected:" $latestApiVersion
}
catch {
    Write-Host "Could not retrieve API version, defaulting to 2.45"
    $latestApiVersion = "2.45"
}

# Set the base URI
if ($latestApiVersion) {
    $baseUrl = "https://$Target/api/$latestApiVersion"
}

# Prepare headers for authentication
$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$headers["api-token"] = $ApiToken

# Authenticate and get session token
$response = Invoke-RestMethod "https://$Target/api/$latestApiVersion/login" -Method 'POST' -Headers $headers -SkipCertificateCheck -ResponseHeadersVariable "respHeaders"

# Display the value of "username" from the response, if present
if ($response.items -and $response.items[0].username) {
    Write-Host "Username:" $response.items[0].username
}
else {
    Write-Host "Username field not found in response."
}

# TO-DO: Check if user is LDAP or local

# Parse "x-auth-token" from response headers and store in $xAuthHeader
$xAuthHeader = $respHeaders["x-auth-token"]
Write-Host "x-auth-token:" $xAuthHeader

# Add x-auth-token to headers for subsequent requests
$headers.Add("x-auth-token", $xAuthHeader)

# You can now use $headers for further authenticated requests to the FA API
###########################################################################

Add pagination, query the fleet:

# optional pagination & limit code
$continuation_token = $null
$limit = 10 # Adjust as needed

################ FLEETS ################

# Get Fleet name
$fleetsResponse = Invoke-RestMethod -Uri "$baseUrl/fleets" -Method Get -Headers $headers -SkipCertificateCheck
$fleetName = $fleetsResponse.items[0].name
#Write-Host "Fleet Name: $fleetName"

# Get fleet members
$membersUrl = "$baseUrl/fleets/members?fleet_name=$fleetName"
$membersResponse = Invoke-RestMethod -Uri $membersUrl -Method Get -Headers $headers -SkipCertificateCheck
if (-not $membersResponse.items -or $membersResponse.items.Count -eq 0) {
    Write-Error "No fleet members found."
    exit 1
}

# Extract Fleet member names
$VAR_RESULTS = @()
foreach ($item in $membersResponse.items) {
    if ($item.member -and $item.member.name) {
        $VAR_RESULTS += $item.member.name
    }
    elseif ($item.name) {
        $VAR_RESULTS += $item.name
    }
}
if ($VAR_RESULTS.Count -eq 0) {
    Write-Error "No member names found in fleet members response."
    exit 1
}

# Write out the fleet members
#Write-Host "Extracted Member Names: $($VAR_RESULTS -join ', ')"

Query for volumes, hosts:

################ FLEET VOLUMES QUERY ################

# Query volumes for extracted member names
$volumesUrl = "$baseUrl/volumes?context_names=$($VAR_RESULTS -join ',')"

## uncomment for full response - no limit, and comment out pagination code below
#$volumesResponse = Invoke-RestMethod -Uri $volumesUrl -Method Get -Headers $headers -SkipCertificateCheck
#$volumesResponse | ConvertTo-Json -Depth 5

## with paginated response
do {
    ## Build the query string for pagination
    $queryString = "?limit=$limit"
    if ($continuation_token) {
        $queryString += "&continuation_token=$continuation_token"
    }
    $volumesUrl = "$baseUrl/volumes$queryString"

    ## Invoke REST method and capture response headers
    $volumesResponse = Invoke-RestMethod -Uri $volumesUrl -Method Get -Headers $headers -SkipCertificateCheck -ResponseHeadersVariable respHeaders

    ## Output volumes data
    $volumesResponse | ConvertTo-Json -Depth 5

    ## Extract x-next-token from response headers for next page
    $continuation_token = $respHeaders["x-next-token"]

    ## Continue if x-next-token is present
} while ($continuation_token)

################ FLEET HOSTS QUERY ################

# Query hosts for extracted member names
$hostsUrl = "$baseUrl/hosts?context_names=$($VAR_RESULTS -join ',')"

## full response - no limit, and comment out pagination code below
#$hostsResponse = Invoke-RestMethod -Uri $hostsUrl -Method Get -Headers $headers -SkipCertificateCheck
#$hostsResponse | ConvertTo-Json -Depth 5

## with paginated response
do {
    ## Build the query string for pagination
    $queryString = "?limit=$limit"
    if ($continuation_token) {
        $queryString += "&continuation_token=$continuation_token"
    }
    $hostsUrl = "$baseUrl/hosts$queryString"

    ## Invoke REST method and capture response headers
    $hostsResponse = Invoke-RestMethod -Uri $hostsUrl -Method Get -Headers $headers -SkipCertificateCheck -ResponseHeadersVariable respHeaders

    ## Output hosts data
    $hostsResponse | ConvertTo-Json -Depth 5

    ## Extract x-next-token from response headers for next page
    $continuation_token = $respHeaders["x-next-token"]

    ## Continue if x-next-token is present
} while ($continuation_token)

Query for Presets & Workloads:

################ FLEET PRESETS QUERY ################

$presetsUrl = "$baseUrl/presets?context_names=$($VAR_RESULTS -join ',')"
$presetsResponse = Invoke-RestMethod -Uri $presetsUrl -Method Get -Headers $headers -SkipCertificateCheck -ResponseHeadersVariable respHeaders
$presetsResponse | ConvertTo-Json -Depth 5

################ FLEET WORKLOADS QUERY ################

$workloadsUrl = "$baseUrl/workloads?context_names=$($VAR_RESULTS -join ',')"
$workloadsResponse = Invoke-RestMethod -Uri $workloadsUrl -Method Get -Headers $headers -SkipCertificateCheck -ResponseHeadersVariable respHeaders
$workloadsResponse | ConvertTo-Json -Depth 5

Create a host on a fleet member array, create a volume, connect the volume to the host:

################ CREATE VOLUME, HOST, AND CONNECT THEM ON ANOTHER FLASHARRAY IN THE FLEET ################

# Select a secondary FlashArray in the fleet
$otherArrayName = $VAR_RESULTS | Where-Object { $_ -ne $Target } | Select-Object -First 1
if (-not $otherArrayName) {
    Write-Error "No other FlashArray found in the fleet."
    exit 1
}
Write-Host "Selected secondary FlashArray for operations: $otherArrayName"

# Create a new volume on the secondary FlashArray
$newVolumeName = "APIDemo-Vol01"
$volumePayload = @{
    name = $newVolumeName
    size = 10737418240 # 10 GiB in bytes
    context = @{ name = $otherArrayName }
}
$createVolumeUrl = "$baseUrl/volumes"
$createVolumeResponse = Invoke-RestMethod -Uri $createVolumeUrl -Method Post -Headers $headers -Body ($volumePayload | ConvertTo-Json) -ContentType "application/json" -SkipCertificateCheck
Write-Host "Created volume:" $newVolumeName "on" $otherArrayName

# Create a new host on the secondary FlashArray
$newHostName = "FleetDemoHost01"
$IQN = "iqn.2023-07.com.fleetdemo:host01"
$hostPayload = @{
    name = $newHostName
    iqn = @($IQN)
    context = @{ name = $otherArrayName }
}
$createHostUrl = "$baseUrl/hosts"
$createHostResponse = Invoke-RestMethod -Uri $createHostUrl -Method Post -Headers $headers -Body ($hostPayload | ConvertTo-Json) -ContentType "application/json" -SkipCertificateCheck
Write-Host "Created host:" $newHostName "with IQN:" $IQN "on" $otherArrayName

# Connect the newly created volume to the newly created host
$connectPayload = @{
    volume = @{
        name = $newVolumeName
        context = @{ name = $otherArrayName }
    }
    host = @{
        name = $newHostName
        context = @{ name = $otherArrayName }
    }
}
$connectUrl = "$baseUrl/host-volume-connections"
$connectResponse = Invoke-RestMethod -Uri $connectUrl -Method Post -Headers $headers -Body ($connectPayload | ConvertTo-Json) -ContentType "application/json" -SkipCertificateCheck
Write-Host "Connected volume" $newVolumeName "to host" $newHostName "on" $otherArrayName

# Output results
$createVolumeResponse | ConvertTo-Json -Depth 5
$createHostResponse | ConvertTo-Json -Depth 5
$connectResponse | ConvertTo-Json -Depth 5

Why Object Storage Still Matters
In Part 2, I wrote a line that, at the time, felt almost like a side comment — something I typed without fully appreciating how much it would change the direction of the story: "BREAKING NEWS: The FlashArray now supports Object??? What in the world? I may need to write an article about that!!"

That reaction wasn't planned, and it definitely wasn't me being clever. It was me looking at the GUI and thinking, "that can't be right… can it?" It didn't line up with how I've been modeling storage architectures in my head for years, which usually means one of two things: either something fundamentally changed… or I've been confidently wrong about part of this for a while.

And if I'm being completely honest, there was also a second reaction happening in parallel — one that I didn't write down at the time because it sounded slightly ridiculous even in my own head: "Wait… do I actually understand why object storage exists in the first place? And more importantly… what exactly was wrong with files?"

That's the part nobody likes to admit out loud. We've all spent years confidently explaining block, file, and object as if we were born with that knowledge, when in reality most of us learned it incrementally, retroactively, and with just enough conviction to sound credible in front of a customer. Object storage, in particular, has always carried this aura of inevitability — like of course it's better, of course it scales, of course it's what modern applications need — without always forcing us to question why the previous model stopped being enough.

Because for as long as most of us have been designing infrastructure, object storage has not simply been another protocol layered onto an existing system. It has represented a fundamentally different way of organizing and accessing data, one that required its own architectural approach, its own scaling model, and, more often than not, its own dedicated platform. The separation between block, file, and object was not arbitrary; it was a reflection of how deeply different those paradigms were in terms of metadata handling, access patterns, and performance expectations.

This is precisely why platforms such as Everpure FlashBlade exist in the first place. They were not created as extensions of traditional storage systems but as purpose-built architectures designed to treat unstructured data — and particularly object data — as a first-class citizen. The use of distributed metadata services, sharded across independent nodes, combined with a key-value store storage model, allows such systems to achieve levels of parallelism and throughput that simply cannot be replicated within a controller-based design. In that context, object storage is not something that is "added" to the system; it is the system.

Which is why seeing S3 support appear on FlashArray required a pause. Not excitement. Not skepticism alone. Something closer to intellectual friction.

Reconciling Two Architectural Worlds

The most important step in understanding what FlashArray has introduced is to resist the temptation to treat it as a direct comparison to FlashBlade. These aren't two different ways of solving the same problem. They're two different answers to two different problems — and pretending otherwise is where people get themselves into trouble.

FlashBlade is built for object, not adapted to it. S3 talks directly to a distributed engine that thinks in objects, not files pretending to be objects.
Metadata is spread across blades instead of becoming a centralized choke point, and the whole system scales the way modern workloads actually need it to. There's no file system layer to fight with, no directory structure to navigate, no POSIX semantics getting in the way. It just does what you'd expect when you remove all of that: it goes fast, it scales cleanly, and it keeps up with workloads like HPC, AI and analytics without breaking a sweat.

FlashArray takes a very different path, and in reality, it's not what most people expect. It doesn't try to reinvent itself as an object platform, and it doesn't throw an S3 gateway in front of the array and call it a day. With Purity 6.10.5+, S3 just shows up as another protocol the system understands, right next to block and file. That distinction matters more than it seems. This isn't something duct-taped on the side — it's part of the same control plane, the same data path, the same system you've already been running.

But let's not pretend it turned into FlashBlade overnight. This is still a controller-driven architecture. The primary controller does the heavy lifting — handling requests, authenticating them, coordinating operations — before anything actually hits the storage engine. Which means it behaves differently, especially as workloads scale. So it ends up in this interesting middle ground. Not a native object system in the pure sense, but not a hack either. Just a different way of exposing what's already there.

The Translation Layer and Its Consequences

It would be irresponsible to discuss FlashArray S3 without explicitly addressing the implications of this design. Even with its native integration into Purity, S3 operations are still subject to the realities of a controller-bound architecture. Every request must be processed, authenticated, and coordinated before it is executed, introducing a measurable difference in behavior compared to both native block operations and distributed object systems.

The most immediate effect is latency. While FlashArray continues to deliver sub-150 microsecond performance for block workloads, S3 operations typically operate at higher latencies (in the 1-millisecond range) due to the additional processing steps involved. This is not a flaw; it is the natural outcome of introducing a protocol that was designed for scale and flexibility into a system optimized for low-latency transactional workloads.

Metadata handling further reinforces this distinction. FlashBlade distributes metadata across its architecture, enabling massive parallelism and consistent performance at scale. FlashArray processes metadata through its controller framework, which introduces natural serialization points under high concurrency. As workloads become increasingly metadata-heavy — particularly with small objects — this difference becomes more pronounced.

The system also enforces clearly defined operational limits to maintain predictable performance. As of Purity 6.10.5+, FlashArray supports up to 250 S3 buckets per array and a maximum of 1,000,000 objects per bucket.

[Table: FlashArray Object Store Limits]

Object storage operates at the array scope and does not integrate with multi-tenancy or "realms", which has implications for service provider models and strict tenant isolation requirements. These constraints are not arbitrary limitations; they are guardrails that ensure the system behaves consistently within its architectural boundaries.
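From a client's point of view, those guardrails do not change how the endpoint is consumed: the array simply presents an S3 interface. Below is a minimal sketch using boto3 as a generic S3 client; the endpoint URL, credentials, and bucket name are hypothetical placeholders for whatever your array's object store actually exposes, and path-style addressing is assumed, as is common for on-prem S3-compatible endpoints.

import boto3
from botocore.config import Config

# Hypothetical endpoint and credentials: substitute your array's object-store
# endpoint and access keys. Nothing below is specific to any one vendor's S3.
s3 = boto3.client(
    "s3",
    endpoint_url="https://flasharray.example.com",    # placeholder
    aws_access_key_id="EXAMPLEACCESSKEY",              # placeholder
    aws_secret_access_key="EXAMPLESECRETKEY",          # placeholder
    config=Config(s3={"addressing_style": "path"}),    # assumed path-style addressing
)

bucket = "dev-staging-bucket"  # placeholder name, well under the per-array bucket limit
s3.create_bucket(Bucket=bucket)

# Write and read back a small object, as a dev/test workflow would.
s3.put_object(Bucket=bucket, Key="app/config.json", Body=b'{"env": "staging"}')
obj = s3.get_object(Bucket=bucket, Key="app/config.json")
print(obj["Body"].read().decode())

The application code is identical to what it would be against any other S3 target; the difference is operational - the data lands on the same array, with the same data services, that already hosts block and file workloads.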
Where the Architecture Becomes Secondary

Having established those boundaries, the conversation naturally shifts from "how it works" to "why it matters". In many enterprise environments, particularly within SLED organizations, the challenge is not achieving exabyte-scale throughput or supporting billions of objects. The challenge is delivering capabilities in a way that is operationally sustainable, economically efficient, and aligned with existing infrastructure.

This is where FlashArray's approach becomes compelling. By exposing object storage within the same platform that already supports block and file workloads, it eliminates the need to introduce a separate system, a separate operational model, and a separate set of dependencies. The same management interface, the same automation framework, and the same data services extend across all protocols.

More importantly, object data inherits the full set of Purity capabilities. Global inline deduplication and compression apply to S3 workloads, significantly improving storage efficiency compared to many object-native platforms. SafeMode snapshots extend immutability to object storage, providing a critical layer of protection against ransomware. ActiveCluster, combined with ActiveDR, enables a three-site resilience model that ensures data availability across multiple locations with zero RPO between primary sites. These are not incremental improvements. They represent a shift in how object storage can be consumed within an enterprise.

Practical Use Cases in a Unified Model

When viewed through this lens, the use cases for FlashArray S3 become both clear and grounded in reality.

Development and Staging Environments
Some applications rely on S3 APIs but do not require massive scale; for these, FlashArray provides a consistent and integrated object interface without introducing additional infrastructure. Developers can build and test against a familiar model while remaining within the same operational environment.

Backup and Recovery Workflows
FlashArray S3 enables modern data protection strategies that leverage object storage while benefiting from flash performance, deduplication, and indelible snapshots. This combination improves both recovery times and storage efficiency.

Tier-two repositories and application-integrated storage represent another natural fit. Workloads such as document management systems, logs, and archival data often require object semantics but do not justify the higher cost of a dedicated object platform. Consolidating these workloads onto FlashArray simplifies operations while maintaining reliability and performance. A short sketch of this pattern follows below.
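As one concrete illustration of that application-integrated pattern, the sketch below hands a front-end application a time-limited link to an object instead of proxying the data through the application itself. The endpoint, credentials, bucket, and key are hypothetical placeholders, and this assumes the endpoint honors standard SigV4 presigned requests - verify that against your array's documented S3 capabilities before relying on it.

import boto3
from botocore.config import Config

# Same hypothetical endpoint and credentials as in the earlier sketch.
s3 = boto3.client(
    "s3",
    endpoint_url="https://flasharray.example.com",   # placeholder
    aws_access_key_id="EXAMPLEACCESSKEY",            # placeholder
    aws_secret_access_key="EXAMPLESECRETKEY",        # placeholder
    config=Config(s3={"addressing_style": "path"}),
)

# Generate a link that lets a document-management front end fetch the object
# directly for the next hour, without embedding storage credentials in the app.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "doc-archive", "Key": "contracts/2025/agreement.pdf"},  # placeholders
    ExpiresIn=3600,
)
print(url)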
Where the Boundaries Still Matter

None of this diminishes the importance of selecting the appropriate platform for workloads that demand a different architecture. High-performance AI pipelines, large-scale analytics environments, and use cases requiring massive parallelism remain firmly within the domain of FlashBlade. The ability to scale performance linearly, distribute metadata across many nodes, and support billions of objects is not optional in these scenarios — it is essential. What has changed is not the relevance of those systems, but the necessity of deploying them for every object storage use case.

A Subtle but Significant Shift

The introduction of S3 on FlashArray does not represent a replacement of one architecture with another. It represents a convergence of capabilities within a unified operational framework. Object storage, in this model, is no longer a destination that requires its own platform. It becomes a capability — one of several ways to access and manage data within the same system.

That shift is easy to overlook, but its implications are significant. It allows organizations to design around outcomes rather than protocols, to reduce complexity without sacrificing capability, and to align infrastructure more closely with the needs of modern applications.

Closing Reflection

Looking back at that line in Part 2, it is clear that the reaction was not just about a new feature appearing in the interface. It was about the recognition — however incomplete at the time — that something foundational was beginning to change. Object storage did not suddenly become simpler, nor did it lose the architectural complexity that defines it. What changed is where it lives.

And once that becomes clear, you start asking a slightly uncomfortable but very honest question: If this works… and it works well enough for most of what I actually need… why was I so convinced it had to live somewhere else in the first place?

That is usually where the interesting work begins.

Appreciate you reading.

Dmitry Gorbatov
© 2025 Dmitry Gorbatov | #dmitrywashere

Spring is Calling, and so is Reds Baseball
I don't know about you, but I am more than ready for Spring, though I could definitely skip the rain. Wiping muddy dog paws after every walk is getting old! On the bright side, who else is ready for some Reds baseball? I have a few exciting updates and resources to share with the community:

🚀 PUG Meeting Update
charles_sheppar and I are currently hard at work on the next PUG meeting. Details to come.

🛡️ Strengthening Your Cyber Resilience
Given the current geopolitical climate and the rise in cyber threats, now is the perfect time to audit your data protection. Features like SafeMode and Pure1 Security Assessments act as a resilient last line of defense. If you want to see these tools in action, we recently hosted an expert-led demo on building a foundation for cyber resilience. Watch the recording here: https://www.purestorage.com/video/webinars/the-foundations-of-cyber-resilience/6389889927112.html
Questions? Reach out to your Everpure SE or partner for a deeper dive.

📅 Upcoming Events
March 12: Nutanix Webinar. Exploring virtualization alternatives? Nutanix is hosting a session tomorrow focused on simplifying IT operations and highlighting the Everpure partnership. https://event.nutanix.com/simplifyitandonprem
March 19: Or perhaps you're interested in running virtual machines alongside containerized workloads within K8s clusters. If that's the case, join Greg McNutt and Sagar Srinivasa for Virtualization Reimagined: Inside the Everpure Journey. https://www.purestorage.com/events/webinars/virtualization-reimagined.html
March 19: Ask Us Everything About Storage for Databases. Join experts Anthony Nocentino, Ryan Arsenault, and Don Poorman for a live Q&A session. https://www.purestorage.com/events/webinars/ask-us-everything-about-storage-for-databases.html
March 24: Presets & Workloads for Consistent DB Environments. We're extending the database conversation to discuss how Everpure helps you transition from "managing storage" to "managing data" through automated presets. https://www.purestorage.com/events/webinars/presets-and-workload-setups-for-consistent-database-environments.html