Enabling Agentic AI via Pure1 Manage MCP Server
Everpure now offers a Pure1® Manage MCP Server so you can query information about your fleet using natural language questions. In this post, I'll explain how the Pure1 Manage MCP Server works. The first section will explain MCP in general, and the second section will explain how to use our specific server. Feel free to skip to the Quick Start section if you're already familiar with MCP and just need the parameters to plug into your host.

What is MCP?

MCP stands for "Model Context Protocol," and it's a way for users to connect their AI applications to external systems using tool calls.

MCP tools are fundamentally rooted in application programming interfaces (APIs). An API is a set of rules and protocols that allows different software applications to communicate with each other. It acts as an intermediary, enabling one piece of software (the client) to request information or functionality from another piece of software (the server) without needing to know the server's internal workings. For instance, when you check the weather on your phone, the weather app uses an API to send a request to a weather service, which then returns the current weather data.

AI applications have trouble making API calls directly because APIs are designed for completeness and correctness, not for an LLM to use easily. When an AI application wants to use an external system to handle a user's request, it uses the MCP protocol to make a tool call. The AI (client) requests a function (the tool) from an external system (the server), and the system executes the function and returns a result. This makes MCP a system that standardizes and mediates API-like interactions, allowing AI models to leverage external, real-world capabilities.

For more information, see this article on the MCP website: "What is the Model Context Protocol (MCP)?"

How can customers benefit from the Pure1 Manage MCP Server?
The Pure1 Manage MCP Server enables customers to securely integrate AI assistants, copilots, and agentic systems with live Pure1 telemetry and operational data—without building custom API integrations. It transforms Pure1 from a dashboard-centric experience into an AI-accessible platform, enabling natural language interaction, contextual automation, and real-time operational intelligence. Customers benefit from faster AI integration, reduced engineering effort, preserved security controls, and improved decision velocity across hybrid environments.

What types of customer workflows are best suited for MCP?

The Pure1 Manage MCP Server is particularly well-suited for agentic and AI-driven workflows, including:

Fleet telemetry integration with customer copilots
Expose Pure1 telemetry—arrays, volumes, workloads, metrics, and alerts—into internal copilots, chatbots, or AI platforms via MCP endpoints.
Value: Unified operational visibility across hybrid and multi-platform environments

Automation with context awareness
Use MCP to validate storage state, health, performance, or capacity before executing provisioning, backup, or disaster recovery workflows.
Value: Safer automation with contextual validation, reduced execution errors, and greater rollback confidence

Hybrid cloud observability
Correlate Everpure array performance and capacity metrics with application, VM, container, or cloud telemetry across environments.
Value: Faster troubleshooting and improved end-to-end performance insights

Conversational operational analytics
Enable operators to ask real-time natural language questions, such as:
"Which arrays are nearing capacity risk?"
"Show me the top latency spikes in the last 24 hours."
"Summarize all critical alerts across regions."
Value: Rapid insight without navigating dashboards or exporting reports

What the Pure1 Manage MCP Server can do

The Pure1 Manage MCP Server exposes the Pure1 Manage REST APIs to AI applications.
This means you can ask ad hoc questions about your fleet using natural language, and your AI application will use the server's tools to query the answer. No programming required.

Example conversation

Here's an example conversation demonstrating the use of the Pure1 Manage MCP Server:

User: "What is the name of my oldest FlashArray and what is its serial number?"
Chatbot:
Processing Request: The AI recognizes this as a request for fleet information requiring a tool call.
Tool Invocation: Get_hardware
Response: The oldest FlashArray in your fleet is named FA-Prod-01 with serial number FA10123456.

User: "Can you show me the average I/O latency for that array over the last 7 days?"
Chatbot: Let me first check the available latency metrics, then query the historical data:
Tool Invocation: Get_metrics
Tool Invocation: Get_metrics_history
Response: The average I/O latency for FA-Prod-01 (serial FA10123456) over the last 7 days was 0.385 milliseconds.

User: "How many arrays do I have in total that are still under warranty?"
Chatbot:
Processing Request: The AI recognizes this as a request for a count of arrays filtered by warranty status.
Tool Invocation: Get_array_support_contracts
Response: You currently have 12 arrays in your fleet that are still under active warranty.

Quick start

Step 1: Register an API key in Pure1 Manage

The Pure1 Manage MCP Server leverages the Pure1 Manage REST APIs. In order to access those APIs, you need to register an API key in Pure1 Manage. To do that, follow the directions in the Pure1® REST API introductory blog post. After going through the instructions, you will have an application ID and a private key file, which will be used to generate an access token to access the MCP server in step 2.

Step 2: Set up the pure1_token_factory.py script

Prerequisites: you need Python 3.12 or greater to run the script.

Download pure1_token_factory.zip.
Unzip the archive.
Go to the unzipped folder in your command-line terminal.
Optional but recommended: create and activate a Python virtual environment:
python3 -m venv .venv
source .venv/bin/activate
Install the requirements: pip3 install -r requirements.txt
Run python3 pure1_token_factory.py <application_id> <private_key_file>
Copy the generated access token from the script output for the next step.

Step 3: Add the remote MCP server to your AI application

Follow the directions for your AI application to add a remote MCP server (see the Pure1 Manage MCP Server User Guide for instructions for specific chatbots). In general, they need the following information:

Remote MCP Server address: https://api.pure1.purestorage.com/mcp
Authorization type: header
Header name: Authorization
Header value: Bearer <access-token>

Important: <access-token> is just a placeholder for the access token you generated in step 2. The actual header value should look something like "Bearer eyJ0eXAiO…"

Important: access tokens expire every 10 hours. You'll need to rerun pure1_token_factory.py to generate a new access token, then manually copy it into your AI application's config.

Claude Desktop instructions

Claude Desktop is a special case because it doesn't let you set the Authorization header directly. You have to run the mcp-remote local MCP server and configure it to use the Pure1 Manage remote MCP server.

Prerequisites

You need to have Node.js version 18 or newer installed on your system.

Configuration

In Claude Desktop, go to Settings > Developer, and click Edit Config.
Open the claude_desktop_config.json file in a plain-text editor like VS Code.
Configure the mcp-remote server, which is necessary to pass the Authorization header to the Pure1 Manage MCP Server.
Paste the token into the configuration file, then restart Claude Desktop.
{
  "mcpServers": {
    "Pure1 API": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://api.pure1.purestorage.com/mcp",
        "--header",
        "Authorization:${AUTHORIZATION_HEADER}"
      ],
      "env": {
        "AUTHORIZATION_HEADER": " Bearer <paste access token here>"
      }
    }
  }
}

Note: there might be other configuration options in this file. Be sure to leave them unchanged, and only insert the Pure1 API config in the mcpServers section.

Note: the leading space in the AUTHORIZATION_HEADER environment variable is important. It's there to work around a bug in Windows argument parsing.

Please note that the first time Claude uses a tool, it will ask you for permission. You can grant permission to all tools at once by going to Customize > Connectors > Pure1 API, and selecting Always Allow under Other tools.

For more detailed instructions from Anthropic, please refer to: Connect to local MCP servers - Model Context Protocol.

Fusion for the Win: You No Longer Have to Decide Where the Data Lives
Dmitry Gorbatov
Apr 10, 2026

In the first post, I walked through enabling file services on a FlashArray. There was nothing particularly complicated about it. The process was clean, predictable, and by the end of it I had a fully functional file platform running on the same system that was already supporting the rest of the environment. It behaved exactly the way you would expect it to behave.

And that is precisely what started to bother me.

Because if you step back and look at what we actually did, the workflow has not really changed in years. I still made a series of decisions in a very specific order. I chose where the workload should live, I created the file system, I attached protection, and I made sure everything was named and organized in a way that made sense at that moment. It was structured. It was controlled. It was also entirely dependent on me.

That model works well enough when the environment is small or when the same person is making the same decisions repeatedly. But as soon as you introduce scale, or simply more people, those decisions start to drift. Not in a dramatic way, but in small inconsistencies that accumulate over time. A slightly different naming convention here, a missed policy there, a workload placed somewhere because it "felt right." Nothing breaks. It just becomes harder to operate.

When the model stops making sense

What stood out to me after going through the manual process is that we are still treating storage as something that needs to be individually managed, even though the platform itself has already moved beyond that. We have systems that can deliver consistent performance, global data services, and non-disruptive operations, yet we still rely on human judgment to decide where things go and how they should be configured.

That disconnect is where Everpure Fusion begins to make sense. Not as an additional feature, but as a way to remove an entire class of decisions that we have simply accepted as part of the job.
From managing infrastructure to defining intent

The idea behind the Enterprise Data Cloud is not particularly complicated, but it does require a shift in perspective. Instead of treating each array as a separate system with its own boundaries, the environment becomes a unified pool of resources. Data is no longer something that you place on a specific array. It is something that exists within a global pool, governed by policies that define how it should behave.

Once you start thinking this way, the questions change. You are no longer asking where a workload should go. You are asking what that workload needs to look like. Performance expectations, protection requirements, naming, and lifecycle behavior become the inputs, and the system automation takes responsibility for everything else.

That is the role of Everpure Fusion.

What actually changes in practice

The easiest way to understand Fusion is to look at what it removes. In the manual model, every step is explicit. You build storage object by object, and then you attach policies to those objects. You rely on memory, experience, and sometimes documentation to make sure everything is done correctly.

With Fusion, that entire process becomes declarative. Instead of building storage step by step, you define a preset. A preset is a reusable definition of what "correct" looks like for a given workload. It captures performance expectations, protection policies, naming conventions, and any constraints that should apply. Once that definition exists, it becomes the standard.

When you create a workload from that preset, Fusion evaluates the environment and places it on the array that best satisfies those requirements. It creates the necessary objects, applies the policies, and ensures that everything is consistent with the definition.

The important shift is not that tasks are automated. It is that decisions are no longer made ad hoc.
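To make the declarative shift concrete, here is a small hypothetical sketch in plain Python (not the Fusion API): the preset is a reusable definition of "correct," and placement is a function of that definition plus the current state of the fleet. All names, fields, and values are illustrative.

```python
# Hypothetical illustration of the declarative model; not the Fusion API.
preset = {
    "name": "file-services-standard",   # reusable definition of "correct"
    "performance": "premium",           # performance expectation
    "protection": "snap-every-4h",      # protection policy
    "naming": "fs-{workload}",          # naming convention
}

# Illustrative view of the fleet's current state.
arrays = [
    {"name": "FA-Prod-01", "free_tib": 12, "class": "premium"},
    {"name": "FA-Prod-02", "free_tib": 40, "class": "premium"},
]

def place_workload(preset, arrays, needed_tib):
    """Pick an array that satisfies the preset, preferring the most free capacity."""
    candidates = [a for a in arrays
                  if a["class"] == preset["performance"] and a["free_tib"] >= needed_tib]
    if not candidates:
        return None  # no array satisfies the intent
    return max(candidates, key=lambda a: a["free_tib"])["name"]

print(place_workload(preset, arrays, needed_tib=10))  # FA-Prod-02
```

The point of the sketch is the inversion: the preset is the input, and placement falls out of it, rather than being a decision made per workload.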
Trying it in the lab

After building file services manually in the previous post, I wanted to see what this would look like using the same environment, but driven through Fusion.

I started by defining a fleet, grouping the array into a logical boundary where resources and policies could be managed collectively. Once the array becomes part of a fleet, you stop thinking of it as an individual system and start treating it as part of a shared pool.

From there, identity becomes the next requirement. Fusion relies on centralized authentication, typically through secure LDAP backed by Active Directory. This is what governs access to presets and workloads, and it ensures that everything aligns with existing organizational controls.

Up to this point, everything felt exactly like I expected. Then I moved to the part I was actually interested in.

Where things didn't quite line up

The goal was to take the file services I had already built and express them as a preset. I wanted a single definition that would describe the file system, its structure, its policies, and its behavior, and then use that definition to create workloads without going through the manual steps again. Conceptually, that is exactly what Fusion is supposed to do.

In practice, I ran into a limit that I had not fully appreciated at the start. I was running Purity OS 6.9.2, which, to be fair, is where most production environments should be. It is a Long-Life Release: stable, predictable, and already capable of delivering Fusion for fleet management, intelligent placement, and policy-driven storage classes. You can create presets and workloads for block workloads.

What it does not include is full support for File Presets on FlashArray. That capability, where a file system, its directories, and its access policies are all defined and deployed as a single unit, arrives in the 6.10.X Feature Release line.

Which means that the exact outcome I was trying to demonstrate was sitting just one version ahead of me.
This is where I had to laugh at myself

There is always a moment in a lab where you realize that the limitation is not the platform. It is you. In this case, it was me getting ahead of the version I was actually running. My intentions were "ever" so "pure" (IYKYK). The execution was slightly behind the feature set.

So I upgraded

One of the advantages of working with this platform is that upgrading does not carry the same weight it used to. The system is designed for non-disruptive operations, and moving between versions does not require downtime or migration.

The upgrade to 6.10.5 was uneventful in the best possible way. Controllers were updated in sequence, workloads continued to run, and the system transitioned to a new set of capabilities without introducing risk. There is something very satisfying about performing an upgrade not because something is broken, but because you want access to what comes next.

BREAKING NEWS: The FlashArray now supports Object??? What in the world? I may need to write an article about that!!

When it finally clicks

Once on 6.10.5, the model finally aligns with the intent. Once I clicked on Create Your First Preset, it gave me these options:

I defined a preset that described the file workload I had previously built manually. It included the expected behavior, protection policies, and naming conventions. Instead of creating individual components, I was defining the service as a whole.

Now this was really neat: when you select Storage Class, it knows which arrays are available in your environment. In my case, I only have FA //X. At this point a new field opens and allows you to select the Storage Resources. Once I hit "Publish," this was the result:

Think of this entire process like this:
Define your Recipe (Preset)
Order from the Menu (Workload)

Let's create a workload from that preset.
Once I clicked on + to add a new Workload, the wizard opened:

Give a name to that Workload:

Since the Fusion fleet has both of my lab arrays, I have the option to select an array for the workload placement. Out of curiosity I clicked "Get Recommendations," and this was the result:

Once I hit Deploy, within seconds the workflow executed and I had my file system created. How awesome is this? Come on, give me a cheer!

Think about the magnitude of what just happened. I provided minimal input, and Fusion handled the rest. It selected the appropriate array based on capacity and performance, created the file system, applied the policies, and ensured that everything matched the definition. There was no second pass. There were no additional steps. The outcome matched the intent.

By moving to this model, I just shifted from being a "storage admin" to a "data architect." I defined the outcomes and it happened "automagically".

Why this matters more than efficiency

It would be easy to describe this as a way to reduce manual effort, but that misses the point. The real value is consistency. When every workload is created from a defined preset, variability disappears. Policies are enforced by default. Naming is consistent. Placement is based on a complete view of the environment rather than individual judgment.

Over time, that consistency reduces operational friction and lowers risk in ways that are difficult to measure but easy to recognize. Environments behave predictably, scaling becomes simpler, and the likelihood of human error decreases.

Where this leads

In the first post, I showed that file services can run natively on the array without additional infrastructure. In this post, the focus shifted to removing the manual decisions involved in building and managing those services.

The next step is where things move beyond automation. As capabilities like ActiveCluster for File continue to evolve, the conversation shifts toward mobility and continuous availability.
At that point, it is no longer just about simplifying operations, but about removing the constraints that tie workloads to a specific system or location. That is a conversation for Part 4. Appreciate you reading.

© 2025 Dmitry Gorbatov | #dmitrywashere

Ask Us Everything: Everpure & Databases - From Firefighting to Forward Thinking
Databases aren't going anywhere—in fact, they're becoming more important than ever. In this Ask Us Everything session, Don Poorman sat down with Everpure database experts Anthony Nocentino and Ryan Arsenault to talk all things structured data. And while AI continues to dominate headlines, one theme came through clearly: AI doesn't replace databases—it depends on them. If you're running Oracle, SQL Server, SAP, or anything mission-critical, here's what stood out.

Stop Prompting, Start Context Engineering
This blog post argues that Context Engineering is the critical new discipline for building autonomous, goal-driven AI agents. Since Large Language Models (LLMs) are stateless and forget information outside their immediate context window, Context Engineering focuses on assembling and managing the necessary information—such as session history, long-term memory (embeddings, RAG indexes), and tool outputs—for the agent every single turn. The post asserts that storage, not the LLM or the prompt, is the primary performance bottleneck for AI at scale. The speed of the underlying storage architecture dictates the agent's responsiveness because it must quickly retrieve and persist context data repeatedly.

OT: The Architecture of Interoperability
In the previous post, we explored the fundamental divide between Information Technology (IT) and Operational Technology (OT). We established that while IT manages data and applications, OT controls the physical heartbeat of our world, from factory floors to water treatment plants. In this post, we dive deeper into the bridge that connects them: interoperability.

As Industry 4.0 and the Internet of Things (IoT) accelerate, the "air gap" that once separated these domains is evolving. For modern enterprises, the goal isn't just to have IT and OT coexist, but to have them communicate seamlessly. Whether the use cases are security, real-time quality control, or predictive maintenance, to name a few, this is why interoperability becomes the critical engine for operational excellence.

The Interoperability Architecture

Interoperability is more than just connecting cables; it's about creating a unified architecture where data flows securely between the shop floor and the "top floor". In legacy environments, OT systems (like SCADA and PLCs) often run on isolated, proprietary networks that don't speak the same language as IT's cloud-based analytics platforms. To bridge this, a robust interoperability architecture is required. This architecture must support:

Industrial Data Lake: A single storage platform that can handle block, file, and object data is essential for bridging the gap between IT and OT. This unified approach prevents data silos by allowing proprietary OT sensor data to coexist on the same high-performance storage as IT applications (such as ERP and CRM). The benefit is the creation of a high-performance Industrial Data Lake, where OT and IT data from various sources can be streamed directly, minimizing the need for data movement, a critical efficiency gain.

Real-Time Analytics: OT sensors continuously monitor machine conditions, including vibration, temperature, and other critical parameters, generating real-time telemetry data.
An interoperable architecture built on high-performance flash storage enables instant processing of this data stream. By integrating IT analytics platforms with predictive algorithms, the system identifies anomalies before they escalate, accelerating maintenance response, optimizing operations, and streamlining exception handling. This approach reduces downtime, lowers maintenance costs, and extends overall asset life.

Standards-Based Design: As outlined in recent cybersecurity research, modern OT environments require datasets that correlate physical process data with network traffic logs to detect anomalies effectively. An interoperable architecture facilitates this by centralizing data for analysis without compromising the security posture. IT/OT convergence also requires a platform capable of securely managing OT data, often through IT standards. An API-first design allows the entire platform to be built on robust APIs, enabling IT to easily integrate storage provisioning, monitoring, and data protection into standard, policy-driven IT automation tools (e.g., Kubernetes, orchestration software).

Pure Storage addresses these interoperability requirements with the Purity operating environment, which abstracts the complexity of the underlying hardware and provides a seamless, multiprotocol experience (NFS, SMB, S3, FC, iSCSI). This ensures that whether data originates from a robotic arm or a CRM application, it is stored, protected, and accessible through a single, unified data plane.

Real-World Application: A Large Regional Water District

Consider a large regional water district, a major provider serving millions of residents. In an environment like this, maintaining water quality and service reliability is a 24/7 mission-critical OT function. Its infrastructure relies on complex SCADA systems to monitor variables like flow rates, tank levels, and chemical compositions across hundreds of miles of pipelines and treatment facilities.
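As a deliberately simplified sketch of the anomaly detection described above, a rolling z-score check over a stream of sensor readings might look like the following. The window size, threshold, and readings are illustrative and are not taken from any Everpure product; real OT analytics pipelines are far more sophisticated.

```python
from collections import deque

def make_anomaly_detector(window=20, threshold=3.0):
    """Flag telemetry readings that deviate sharply from the recent baseline.

    A simple rolling z-score check: a reading is anomalous when it sits more
    than `threshold` standard deviations from the mean of the last `window`
    readings. Parameters here are illustrative.
    """
    history = deque(maxlen=window)

    def check(value):
        if len(history) >= 2:
            mean = sum(history) / len(history)
            var = sum((x - mean) ** 2 for x in history) / len(history)
            std = var ** 0.5
            anomalous = std > 0 and abs(value - mean) > threshold * std
        else:
            anomalous = False  # not enough baseline yet
        history.append(value)
        return anomalous

    return check

# Vibration readings from a hypothetical pump sensor: steady, then a spike.
check = make_anomaly_detector(window=10, threshold=3.0)
readings = [0.50, 0.51, 0.49, 0.50, 0.52, 0.48, 0.51, 0.50, 2.40]
flags = [check(r) for r in readings]
print(flags)  # only the final spike is flagged
```

The design point is that the check runs per reading as data streams in, which is why the latency of the underlying storage and analytics path matters: the value of the alert depends on it arriving before the failure does.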
By adopting an interoperable architecture, an organization like this can break down the silos between its operational data and its IT capabilities. Instead of SCADA data remaining locked in a control room, it can be securely replicated to IT environments for long-term trending and capacity planning. For instance, historical flow data combined with predictive analytics can help forecast demand spikes or identify aging infrastructure before a leak occurs. This convergence transforms raw operational data into actionable business intelligence, ensuring reliability for the communities they serve.

Why We Champion Compliance and Governance

Opening up OT systems to IT networks can introduce new risks. In the world of OT, "move fast and break things" is not an option; reliability and safety are paramount. This is why Pure Storage wraps interoperability in a framework of compliance and governance, including but not limited to:

FIPS 140-2 Certification & Common Criteria: We utilize FIPS 140-2 certified encryption modules and have achieved Common Criteria certification.

Data Sovereignty: Our architecture includes built-in governance features like Always-On Encryption and rapid data locking to ensure compliance with domestic and international regulations, protecting sensitive data regardless of where it resides.

Compliance: Pure Fusion delivers policy-defined storage provisioning, automating deployment with specified requirements for tags, protection, and replication.

By embedding these standards directly into the storage array, Pure Storage allows organizations to innovate with interoperability while maintaining the security posture that critical OT infrastructure demands.

Next in the series: we will explore IT/OT interoperability further, along with the processing of data at the edge. Stay tuned!

Pure's Intelligent Control Plane: Powered by AI Copilot, MCP Connectivity and Workflow Orchestration
At Accelerate 2025, we announced two capabilities that change how you manage Pure Storage in your broader infrastructure: AI Copilot with Model Context Protocol (MCP) and Workflow Orchestration with production-ready templates. Here's what they do and why they matter.

AI Copilot with MCP: Your Infrastructure, One Conversation

The Problem

Your infrastructure spans multiple platforms: Pure Storage managing your data, VMware running VMs, OpenShift handling containers, security tools monitoring threats, application platforms tracking performance, each with its own console, APIs, and workflows. When you need to migrate a VM or respond to a security incident, you're manually pulling information from each system, correlating it yourself, then executing actions across platforms. You become the integration layer.

The Solution

Pure1 now supports Model Context Protocol (MCP), taking Copilot from a suggestive assistant to an active operator. With MCP enabled, Copilot doesn't just recommend, it acts. It serves as a secure bridge between natural language and your infrastructure, capable of fetching data, executing APIs, and orchestrating workflows across diverse systems.

Here's what makes this powerful: you deploy MCP servers within your environment, one for VMware, another for OpenShift, and others for the systems you use. Each server exposes your environment's capabilities through a standard, interoperable protocol. Pure Storage AI Copilot connects seamlessly to these MCP servers, as well as to Pure services such as Data Intelligence, Workflow Orchestration, and Portworx Monitoring, enabling unified and secure automation across your hybrid ecosystem.

What You Can Connect

You can deploy an MCP server on any system, whether it's your VMware environment, Kubernetes clusters, security platforms like CrowdStrike, databases, monitoring tools, or custom applications.
Pure Storage AI Copilot connects to these servers under your control, securely combining their data with Pure Storage services to deliver richer insights and automation.

Getting Started: If you have a use case around MCP, please contact your Pure Storage account team.

Workflow Orchestration: Deploy in Minutes, Not Months

The Problem

Building production-grade automation takes months. You need error handling, integration with multiple systems, testing for edge cases, documentation, and ongoing maintenance. Most teams end up with half-finished scripts that only one person understands.

The Solution

We built workflow templates for common operations, tested them at scale, and made them available in Pure1. Install them, customize them to your needs, and run them in minutes.

Key Templates

VMware to OpenShift Migration with Portworx
Handles the complete migration: extracts VM metadata, identifies backing Pure volumes, checks OpenShift capacity, configures the vVols Datastore and DirectAccess, uses array-based replication, and converts to Portworx format. Traditional migration takes hours for TB-scale VMs. This takes 20 to 30 minutes.

SQL / Oracle Database Clone and Copy
Automates cloning and copying of SQL Server and Oracle databases for dev/test or refresh needs. Instantly creates storage-efficient clones from snapshots, mounts them to target environments, and applies Pure-optimized settings. The hours-long manual process becomes a quick, consistent workflow completed in minutes.

Daily Fleet Health Check
Scans all arrays for capacity trends, performance issues, protection gaps, and hardware health, then posts a summary to Slack. Proactive visibility without manually checking each array.

Rubrik Threat Detection Response
When Rubrik detects a threat, automatically tags affected Pure volumes, creates isolated immutable snapshots, and notifies the security team. Security events propagate to your storage layer automatically.

How It Works

Workflow Orchestration is a SaaS feature in Pure1.
Deploy lightweight agents (Windows, Linux, or Docker) in your data center to execute workflows locally. Group agents together for high availability and governance controls.

Integrations

Native Pure Storage: Pure1 Connector for full API access, Fusion Connector for storage provisioning (works for Fusion and non-Fusion FlashArray/FlashBlade customers)

Third-Party: ServiceNow, Slack, Google, Microsoft, CrowdStrike, HTTP/Webhooks, PagerDuty, Salesforce, and more. The connector library continues expanding.

Getting Started: Opt in now in Pure1 - Workflow. An introductory offer is available at this time. Check with your Pure account team if you have questions.

How They Work Together

At Accelerate 2025 in New York, we showcased this capability in action. Here's the scenario: an organization wants to migrate VMs to Kubernetes. Action-enabled Copilot orchestrates communication with Pure Storage appliances and services as well as third-party MCP servers to collect the required information for addressing a problem across a heterogeneous environment. With Pure1 MCP, AI Copilot, and Workflows, there's now a programmatic way to collect information from OpenShift MCP, VMware MCP, and Pure1 storage insights, then recommend an approach on which VMs to migrate based on your selection criteria.

You prompt Copilot: "How can I move my VMs to OpenShift in an efficient way?"

Copilot communicates across:
Your VMware MCP server - to get VM specifications, current configurations, resource usage
Your OpenShift MCP server - to check available cluster capacity, validate compatibility
Portworx monitoring - to understand current storage performance

Copilot reasons across all this information, identifies ideal VM candidates based on your criteria, and recommends the migration approach: which VMs to move, target configurations, and how to preserve policies. Then it can trigger the migration workflow, keeping you updated throughout the process.

Why This Matters

Storage Admins: Stop being the bottleneck.
Enable self-service while maintaining governance.

DevOps Teams: Deploy production-tested automation without writing code.

Security Teams: Build automated response workflows spanning detection, isolation, and recovery.

Infrastructure Leaders: Reduce operational overhead. Teams focus on strategy, not repetitive tasks.

Get Started

MCP Integration: If you have a use case around MCP, please contact your Pure Storage account team.

Workflow Orchestration: Opt in at Pure1 → Workflows. An introductory offer is available at this time. Check with your Pure account team if you have questions.

Learn More: Documentation in Pure1, or contact your Pure Storage account team.

Pure1 has evolved from a monitoring platform into an Intelligent Control Plane. AI Copilot reasons across your infrastructure. Workflow Orchestration executes. Together, they change how you manage data with Pure Storage.

Configuring Apache Spark on FlashBlade, Part 3: Tuning for True Parallelism
This post will explore how to diagnose and resolve performance bottlenecks that are not related to storage I/O, ensuring you can take full advantage of the high-performance, disaggregated architecture of FlashBlade. We'll use a real-world scenario to illustrate how specific tuning can unlock massive parallelism.