Enabling Agentic AI via Pure1 Manage MCP Server
Everpure now offers a Pure1® Manage MCP Server so you can query information about your fleet using natural language questions. In this post, I’ll explain how the Pure1 Manage MCP Server works. The first section will explain MCP in general, and the second section will explain how to use our specific server. Feel free to skip to the Quick Start section if you’re already familiar with MCP and just need the parameters to plug into your host.

What is MCP?

MCP stands for "Model Context Protocol," and it's a way for users to connect their AI applications to external systems using tool calls.

MCP tools are fundamentally rooted in application programming interfaces (APIs). An API is a set of rules and protocols that allows different software applications to communicate with each other. It acts as an intermediary, enabling one piece of software (the client) to request information or functionality from another piece of software (the server) without needing to know the server's internal workings. For instance, when you check the weather on your phone, the weather app uses an API to send a request to a weather service, which then returns the current weather data.

AI applications have trouble making API calls directly because APIs are designed for completeness and correctness, not for an LLM to use easily. When an AI application wants to use an external system to handle a user’s request, it uses the MCP protocol to make a tool call. The AI (client) requests a function (the tool) from an external system (the server), and the system executes the function and returns a result. This makes MCP a system that standardizes and mediates API-like interactions, allowing AI models to leverage external, real-world capabilities.

For more information, see this article on the MCP website: “What is the Model Context Protocol (MCP)?”

How can customers benefit from the Pure1 Manage MCP Server?
The Pure1 Manage MCP Server enables customers to securely integrate AI assistants, copilots, and agentic systems with live Pure1 telemetry and operational data, without building custom API integrations. It transforms Pure1 from a dashboard-centric experience into an AI-accessible platform, enabling natural language interaction, contextual automation, and real-time operational intelligence. Customers benefit from faster AI integration, reduced engineering effort, preserved security controls, and improved decision velocity across hybrid environments.

What types of customer workflows are best suited for MCP?

The Pure1 Manage MCP Server is particularly well-suited for agentic and AI-driven workflows, including:

- Fleet telemetry integration with customer copilots: Expose Pure1 telemetry (arrays, volumes, workloads, metrics, and alerts) to internal copilots, chatbots, or AI platforms via MCP endpoints. Value: Unified operational visibility across hybrid and multi-platform environments.
- Automation with context awareness: Use MCP to validate storage state, health, performance, or capacity before executing provisioning, backup, or disaster recovery workflows. Value: Safer automation with contextual validation, reduced execution errors, and greater rollback confidence.
- Hybrid cloud observability: Correlate Everpure array performance and capacity metrics with application, VM, container, or cloud telemetry across environments. Value: Faster troubleshooting and improved end-to-end performance insights.
- Conversational operational analytics: Enable operators to ask real-time natural language questions, such as "Which arrays are nearing capacity risk?", "Show me the top latency spikes in the last 24 hours," or "Summarize all critical alerts across regions." Value: Rapid insight without navigating dashboards or exporting reports.

What the Pure1 Manage MCP Server can do

The Pure1 Manage MCP Server exposes the Pure1 Manage REST APIs to AI applications.
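Each of those REST endpoints is surfaced to the AI application as an MCP tool, invoked over JSON-RPC. As a concrete illustration, here is a minimal Python sketch of the "tools/call" message an MCP host sends when it invokes a tool. The envelope shape follows the MCP specification, but the tool name and arguments here are hypothetical placeholders, not the server's actual tool signatures.

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 "tools/call" request, the message shape
    MCP hosts use to ask an MCP server to run a tool."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical example: a host asking a server to run a hardware-inventory tool.
request = make_tool_call(1, "get_hardware", {"filter": "model='FlashArray'"})
print(json.dumps(request, indent=2))
```

The server executes the named tool and returns a result message, and the host's LLM folds that result into its reply; that round trip is what mediates the example conversation below.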
This means you can ask ad hoc questions about your fleet using natural language, and your AI application will use the server’s tools to query the answer. No programming required.

Example conversation

Here’s an example conversation demonstrating the use of the Pure1 Manage MCP Server:

User: "What is the name of my oldest FlashArray and what is its serial number?"
Chatbot:
  Processing Request: The AI recognizes this as a request for fleet information requiring a tool call.
  Tool Invocation: Get_hardware
  Response: The oldest FlashArray in your fleet is named FA-Prod-01 with serial number FA10123456.

User: "Can you show me the average I/O latency for that array over the last 7 days?"
Chatbot: Let me first check the available latency metrics, then query the historical data.
  Tool Invocation: Get_metrics
  Tool Invocation: Get_metrics_history
  Response: The average I/O latency for FA-Prod-01 (serial FA10123456) over the last 7 days was 0.385 milliseconds.

User: "How many arrays do I have in total that are still under warranty?"
Chatbot:
  Processing Request: The AI recognizes this as a request for a count of arrays filtered by warranty status.
  Tool Invocation: Get_array_support_contracts
  Response: You currently have 12 arrays in your fleet that are still under active warranty.

Quick start

Step 1: Register an API key in Pure1 Manage

The Pure1 Manage MCP Server leverages the Pure1 Manage REST APIs. To access those APIs, you need to register an API key in Pure1 Manage. To do that, follow the directions in the Pure1® REST API introductory blog post. After going through the instructions, you will have an application ID and a private key file, which will be used to generate an access token for the MCP server in step 2.

Step 2: Set up the pure1_token_factory.py script

Prerequisites: you need Python 3.12 or greater to run the script.

- Download pure1_token_factory.zip.
- Unzip the archive.
- Go to the unzipped folder in your command-line terminal.
- Optional but recommended: create and activate a Python virtual environment:

  python3 -m venv .venv
  source .venv/bin/activate

- Install the requirements: pip3 install -r requirements.txt
- Run: python3 pure1_token_factory.py <application_id> <private_key_file>
- Copy the generated access token from the script output for the next step.

Step 3: Add the remote MCP server to your AI application

Follow the directions for your AI application to add a remote MCP server (see the Pure1 Manage MCP Server User Guide for instructions for specific chatbots). In general, they need the following information:

Remote MCP server address: https://api.pure1.purestorage.com/mcp
Authorization type: header
Header name: Authorization
Header value: Bearer <access-token>

Important: <access-token> is just a placeholder for the access token you generated in step 2. The actual header value should look something like “Bearer eyJ0eXAiO…”

Important: you need to generate a new access token every 10 hours. Run pure1_token_factory.py to generate the new token, then manually copy it into your AI application’s config.

Claude Desktop instructions

Claude Desktop is a special case because it doesn’t let you set the Authorization header directly. Instead, you run the mcp-remote local MCP server and configure it to use the Pure1 Manage remote MCP server.

Prerequisites

You need Node.js version 18 or newer installed on your system.

Configuration

In Claude Desktop, go to Settings > Developer, and click Edit Config. Open the claude_desktop_config.json file in a plain-text editor such as VS Code. Configure the mcp-remote server, which is necessary to pass the Authorization header to the Pure1 Manage MCP Server. Paste the token into the configuration file, then restart Claude Desktop.
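If you would rather script this edit than paste by hand, here is one safe way to do it. This is a hypothetical helper, not part of any Everpure tooling: it merges the Pure1 API entry into an existing claude_desktop_config.json (matching the configuration shown next) while leaving any other mcpServers entries untouched. The config path and token are values you supply.

```python
import json
from pathlib import Path

def add_pure1_server(config_path, access_token):
    """Merge the Pure1 API entry into claude_desktop_config.json,
    preserving any other mcpServers entries already present.
    Hypothetical helper for illustration only."""
    path = Path(config_path)
    config = json.loads(path.read_text()) if path.exists() else {}
    servers = config.setdefault("mcpServers", {})
    servers["Pure1 API"] = {
        "command": "npx",
        "args": [
            "-y",
            "mcp-remote",
            "https://api.pure1.purestorage.com/mcp",
            "--header",
            "Authorization:${AUTHORIZATION_HEADER}",
        ],
        # The leading space before "Bearer" is deliberate; it works
        # around the Windows argument-parsing bug noted in this post.
        "env": {"AUTHORIZATION_HEADER": f" Bearer {access_token}"},
    }
    path.write_text(json.dumps(config, indent=2))
    return config
```

Because the helper uses setdefault and only assigns the "Pure1 API" key, re-running it with a fresh token every 10 hours simply overwrites that one entry.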
{
  "mcpServers": {
    "Pure1 API": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://api.pure1.purestorage.com/mcp",
        "--header",
        "Authorization:${AUTHORIZATION_HEADER}"
      ],
      "env": {
        "AUTHORIZATION_HEADER": " Bearer <paste access token here>"
      }
    }
  }
}

Note: there might be other configuration options in this file. Be sure to leave them unchanged, and only insert the Pure1 API config in the mcpServers section.

The leading space in the AUTHORIZATION_HEADER environment variable is important. It's there to work around a bug in Windows argument parsing.

Please note that the first time Claude Desktop uses a tool, it will ask you for permission. You can grant permission to all tools at once by going to Customize > Connectors > Pure1 API and selecting Always Allow under Other tools.

For more detailed instructions from Anthropic, please refer to: Connect to local MCP servers - Model Context Protocol.

TechSummit: Seattle
May 14, Register Now!

Details

Looking to tackle today’s toughest infrastructure challenges head-on? Join us at TechSummit, an exclusive, half-day technical event for IT leaders, architects, and data professionals like you.

What we’ll cover:

- Enterprise Data Cloud (EDC): Get an inside look at how a unified, intelligent data platform brings agility, resilience, and performance to any workload.
- AI: Learn the benefits of AI-ready infrastructure designed and optimized to support the evolving needs of AI applications and development workflows.
- Cyber Resilience: Discover the advantages of a proactive, layered, operationally viable cyber resilience strategy to not just survive a cyberattack, but thrive after one.
- Virtualization/Cloud: Explore ongoing disruptions in the server virtualization market and evaluate whether you should consider cloud-managed VMware solutions or take the leap into cloud native and containers.

It won’t be all business. We’ll also make time for fun. After the insightful discussions and learning, we’ll unwind together at a relaxed happy hour. Spots are limited, so register now to learn more and save your seat.

Register Now!

Architectural Deep Dive: Building Data Pipelines for AI Agents
May 7 | Register Now!

The leap from a "hello world" AI agent to a production-ready system is a massive data challenge. Autonomous agents are coming your way, and it's up to you to figure out how to get your data stack ready for production. In this live session, we'll build a high-velocity data pipeline for AI agents from scratch. Starting with the fundamentals of a strong data storage foundation, we'll walk through every layer end-to-end. We'll cover real-time data ingestion, vector storage, retrieval, orchestration, and inference.

In this session you’ll learn:

- How to build a production-ready data storage pipeline for AI agents
- The foundational decisions IT, Data, and AI teams need to make to handle "context lag" and memory before the first agent goes live
- A practical framework for assessing whether your current infrastructure is ready to support AI agents at scale

Register Now!

Fusion for the Win: You No Longer Have to Decide Where the Data Lives
Dmitry Gorbatov
Apr 10, 2026

In the first post, I walked through enabling file services on a FlashArray. There was nothing particularly complicated about it. The process was clean, predictable, and by the end of it I had a fully functional file platform running on the same system that was already supporting the rest of the environment. It behaved exactly the way you would expect it to behave.

And that is precisely what started to bother me.

Because if you step back and look at what we actually did, the workflow has not really changed in years. I still made a series of decisions in a very specific order. I chose where the workload should live, I created the file system, I attached protection, and I made sure everything was named and organized in a way that made sense at that moment. It was structured. It was controlled. It was also entirely dependent on me.

That model works well enough when the environment is small or when the same person is making the same decisions repeatedly. But as soon as you introduce scale, or simply more people, those decisions start to drift. Not in a dramatic way, but in small inconsistencies that accumulate over time. A slightly different naming convention here, a missed policy there, a workload placed somewhere because it “felt right.” Nothing breaks. It just becomes harder to operate.

When the model stops making sense

What stood out to me after going through the manual process is that we are still treating storage as something that needs to be individually managed, even though the platform itself has already moved beyond that. We have systems that can deliver consistent performance, global data services, and non-disruptive operations, yet we still rely on human judgment to decide where things go and how they should be configured.

That disconnect is where Everpure Fusion begins to make sense. Not as an additional feature, but as a way to remove an entire class of decisions that we have simply accepted as part of the job.
From managing infrastructure to defining intent

The idea behind the Enterprise Data Cloud is not particularly complicated, but it does require a shift in perspective. Instead of treating each array as a separate system with its own boundaries, the environment becomes a unified pool of resources. Data is no longer something that you place on a specific array. It is something that exists within a global pool, governed by policies that define how it should behave.

Once you start thinking this way, the questions change. You are no longer asking where a workload should go. You are asking what that workload needs to look like. Performance expectations, protection requirements, naming, and lifecycle behavior become the inputs, and the system automation takes responsibility for everything else. That is the role of Everpure Fusion.

What actually changes in practice

The easiest way to understand Fusion is to look at what it removes. In the manual model, every step is explicit. You build storage object by object, and then you attach policies to those objects. You rely on memory, experience, and sometimes documentation to make sure everything is done correctly.

With Fusion, that entire process becomes declarative. Instead of building storage step by step, you define a preset. A preset is a reusable definition of what “correct” looks like for a given workload. It captures performance expectations, protection policies, naming conventions, and any constraints that should apply. Once that definition exists, it becomes the standard.

When you create a workload from that preset, Fusion evaluates the environment and places it on the array that best satisfies those requirements. It creates the necessary objects, applies the policies, and ensures that everything is consistent with the definition.

The important shift is not that tasks are automated. It is that decisions are no longer made ad hoc.
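As a thought experiment, the declarative model can be sketched in a few lines of Python. To be clear, this is not Fusion's actual API; every name and the placement rule below are invented purely to illustrate "define the intent once, let placement follow from it."

```python
from dataclasses import dataclass

# Toy model only: none of these names are Fusion's real objects.
@dataclass
class Preset:
    name: str
    performance_tier: str   # e.g. "gold"
    protection_policy: str  # e.g. "snap-every-4h"
    naming_pattern: str     # e.g. "{app}-fs"

@dataclass
class Array:
    name: str
    free_tb: float
    tier: str

def place_workload(preset, app, arrays):
    """Pick the array matching the preset's tier with the most free space,
    then return the fully specified workload to create there."""
    candidates = [a for a in arrays if a.tier == preset.performance_tier]
    if not candidates:
        raise ValueError(f"no array satisfies tier {preset.performance_tier!r}")
    target = max(candidates, key=lambda a: a.free_tb)
    return {
        "file_system": preset.naming_pattern.format(app=app),
        "array": target.name,
        "protection": preset.protection_policy,
    }

fleet = [Array("fa-x-01", 120.0, "gold"), Array("fa-x-02", 80.0, "gold")]
gold_files = Preset("gold-files", "gold", "snap-every-4h", "{app}-fs")
print(place_workload(gold_files, "erp", fleet))
```

Notice where the decisions live: the preset carries them all, and creating a second or tenth workload from it cannot drift, because no human picks a name, a policy, or a target array a second time.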
Trying it in the lab

After building file services manually in the previous post, I wanted to see what this would look like using the same environment, but driven through Fusion.

I started by defining a fleet, grouping the array into a logical boundary where resources and policies could be managed collectively. Once the array becomes part of a fleet, you stop thinking of it as an individual system and start treating it as part of a shared pool.

From there, identity becomes the next requirement. Fusion relies on centralized authentication, typically through secure LDAP backed by Active Directory. This is what governs access to presets and workloads, and it ensures that everything aligns with existing organizational controls.

Up to this point, everything felt exactly like I expected. Then I moved to the part I was actually interested in.

Where things didn’t quite line up

The goal was to take the file services I had already built and express them as a preset. I wanted a single definition that would describe the file system, its structure, its policies, and its behavior, and then use that definition to create workloads without going through the manual steps again. Conceptually, that is exactly what Fusion is supposed to do.

In practice, I ran into a limit that I had not fully appreciated at the start. I was running Purity OS 6.9.2, which, to be fair, is where most production environments should be. It is a Long-Life Release: stable, predictable, and already capable of delivering Fusion for fleet management, intelligent placement, and policy-driven storage classes. You can create Presets and Workloads for block workloads.

What it does not include is full support for File Presets on FlashArray. That capability, where a file system, its directories, and its access policies are all defined and deployed as a single unit, arrives in the 6.10.X Feature Release line. Which means that the exact outcome I was trying to demonstrate was sitting just one version ahead of me.
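The gap I hit is easy to express as a version check. The sketch below is my own illustration (not product code): it compares a Purity version string against the release line that, per the text above, introduces File Presets, using the usual tuple-comparison trick.

```python
# Illustrative version gate: File Presets on FlashArray arrive in the
# 6.10.x line, so anything below 6.10.0 (like my 6.9.2) lacks them.
FILE_PRESETS_MIN = (6, 10, 0)

def parse_version(version):
    """Turn a dotted version string like '6.9.2' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def supports_file_presets(version):
    return parse_version(version) >= FILE_PRESETS_MIN

print(supports_file_presets("6.9.2"))   # the version I started on
print(supports_file_presets("6.10.5"))  # the version I upgraded to
```

The tuple conversion matters: naive string comparison gets this exactly backwards, since "6.9.2" sorts after "6.10.5" lexicographically even though 9 < 10 numerically.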
This is where I had to laugh at myself

There is always a moment in a lab where you realize that the limitation is not the platform. It is you. In this case, it was me getting ahead of the version I was actually running. My intentions were “ever” so “pure” (IYKYK). The execution was slightly behind the feature set.

So I upgraded

One of the advantages of working with this platform is that upgrading does not carry the same weight it used to. The system is designed for non-disruptive operations, and moving between versions does not require downtime or migration.

The upgrade to 6.10.5 was uneventful in the best possible way. Controllers were updated in sequence, workloads continued to run, and the system transitioned to a new set of capabilities without introducing risk. There is something very satisfying about performing an upgrade not because something is broken, but because you want access to what comes next.

BREAKING NEWS: The FlashArray now supports Object??? What in the world? I may need to write an article about that!!

When it finally clicks

Once on 6.10.5, the model finally aligns with the intent. Once I clicked on Create Your First Preset, it gave me these options:

I defined a preset that described the file workload I had previously built manually. It included the expected behavior, protection policies, and naming conventions. Instead of creating individual components, I was defining the service as a whole.

Now this was really neat: when you select Storage Class, it knows which arrays are available in your environment. In my case, I only have FA //X. At this point a new field opens and allows you to select the Storage Resources. Once I hit “Publish” this was the result:

Think of this entire process like this:

- Define your Recipe (Preset)
- Order from the Menu (Workload)

Let’s create a workload from that preset.
Once I clicked on + to add a new Workload, the wizard opened. Give a name to that Workload:

Since the Fusion fleet has both of my lab arrays, I have an option to select an array for the workload placement. Out of curiosity I clicked “Get Recommendations” and this was the result:

Once I hit Deploy, within seconds the workflow executed and I had my file system created. How awesome is this? Come on, give me a cheer!

Think about the magnitude of what just happened. I provided minimal input, and Fusion handled the rest. It selected the appropriate array based on capacity and performance, created the file system, applied the policies, and ensured that everything matched the definition. There was no second pass. There were no additional steps. The outcome matched the intent.

By moving to this model, I just shifted from being a "storage admin" to a "data architect." I defined the outcomes and it happened “automagically”.

Why this matters more than efficiency

It would be easy to describe this as a way to reduce manual effort, but that misses the point. The real value is consistency.

When every workload is created from a defined preset, variability disappears. Policies are enforced by default. Naming is consistent. Placement is based on a complete view of the environment rather than individual judgment.

Over time, that consistency reduces operational friction and lowers risk in ways that are difficult to measure but easy to recognize. Environments behave predictably, scaling becomes simpler, and the likelihood of human error decreases.

Where this leads

In the first post, I showed that file services can run natively on the array without additional infrastructure. In this post, the focus shifted to removing the manual decisions involved in building and managing those services.

The next step is where things move beyond automation. As capabilities like ActiveCluster for File continue to evolve, the conversation shifts toward mobility and continuous availability.
At that point, it is no longer just about simplifying operations, but about removing the constraints that tie workloads to a specific system or location.

That is a conversation for Part 4. Appreciate you reading.

© 2025 Dmitry Gorbatov | #dmitrywashere

Bring your hardest questions: Building an AI Factory Live with FlashStack
April 21 | Register Now!

Most organizations racing to build AI infrastructure are assembling point solutions that create hidden bottlenecks before a single model ever trains. This session cuts through the complexity with a live, practitioner-led walkthrough of the Everpure and Cisco FlashStack® for AI CVD, a proven reference architecture purpose-built for the NVIDIA AI Factory. Experts from Everpure will show you exactly how FlashStack eliminates the guesswork from AI infrastructure deployment, from storage and networking to compute and data pipeline readiness.

Key takeaways include:

- Why a CVD-backed reference architecture reduces deployment risk and accelerates time-to-AI
- How FlashStack integrates with NVIDIA technologies to support training, inference, and agentic workloads at scale
- Perspective on what enterprises get wrong when standing up AI infrastructure, and how to avoid the most costly mistakes

Register Now!

Ask Us Everything: Everpure & Databases - From Firefighting to Forward Thinking
Databases aren’t going anywhere; in fact, they’re becoming more important than ever. In this Ask Us Everything session, Don Poorman sat down with Everpure database experts Anthony Nocentino and Ryan Arsenault to talk all things structured data. And while AI continues to dominate headlines, one theme came through clearly: AI doesn’t replace databases, it depends on them. If you’re running Oracle, SQL Server, SAP, or anything mission-critical, here’s what stood out.

New IDC Research: Building Data Infrastructure for AI
March 5 | Register now!

What's behind AI agents that are fast, safe, and reliable? It starts with smarter data infrastructure. Pure Storage and IDC have been studying what separates organizations that are pulling ahead in AI from those that are stalling. Join us to see the research, hear what's changing, and rethink your data strategy to get ready for AI.

In this webinar, we’ll share:

- Drivers of AI performance today
- New IDC research on how leaders are building for AI
- What separates fast-movers from the rest
- A preview of where Pure Storage and IDC see the market heading

Register Now!

Secrets to Success Workshop: Supercharge Your Adoption of AI
February 12 | Register Now!

Get ready to supercharge and fast-track your adoption of AI! Join Pure Storage for an exclusive virtual workshop on Thursday, February 12th. We'll unlock the secrets for successfully transforming AI pilots and exploratory initiatives into experience-backed, production-ready deployments.

Register now to secure your spot for this highly engaging, virtual experience exploring new horizons with AI:

- Hear about AI industry trends and best practices for successful AI deployments.
- Learn why the right storage platform makes a difference in production AI.
- Take a deep dive into MLOps workflows in production for various use cases as part of an interactive exercise.
- Collaborate with peers, learn best practices, and workshop potential solutions with AI experts.
- Immerse yourself in expert insight on architecting, deploying, de-risking, simplifying, and fast-tracking AI into production through demonstrations and engagement with industry experts and peers.

Register Now!

The Only Constant Is Change: Architecting for AI, Cloud, Virtualization, and Beyond
February 10 | Register Now! 11:00AM PT • 2:00PM ET

Foundations matter: their flexibility enables or constrains what you can do years down the road with your applications and business initiatives. For February 2026, host Andrew Miller explores the idea of data center foundations and how Pure Storage works with Cisco around FlashStack® to enable both current and future business initiatives. He’ll be joined by Eugene McGrath, Principal Field Solutions Architect for FlashStack, and previously a partner and customer, who will do a deep dive on this topic.

We’ll wander through:

- Data center architecture history: from Reference Architecture to Converged Infrastructure to Hyperconverged Infrastructure, the benefits we hoped for from each architectural phase, and how well we achieved them at an industry level.
- FlashStack: the core design principles behind the Cisco and Pure Storage partnership and the benefits, including statelessness, innovation without disruption, benefits during Day 0/1/2, and Cisco Validated Designs.
- What matters today: new announcements and capabilities with Nutanix, hypervisor optionality, AI designs, AI Pods, and more!

In short, FlashStack reduces risk with prevalidated solutions for AI and analytics, cyber resilience, business-critical applications, and modern virtualization.

5 AI Predictions Every Infrastructure Leader Needs to Know in 2026
January 22 | Register Now!

In 2026, AI moves from experimentation to execution, and infrastructure leaders are in the driver's seat. Organizations that build the right data foundations now will be the ones turning AI into a true competitive advantage. Join us as we explore five trends shaping the future of AI infrastructure, from unlocking the value of your data to architecting for production-scale AI workloads. Walk away with a clear roadmap for positioning your infrastructure as the engine that powers your organization's AI ambitions.

In this webinar, you'll learn:

- How to get your data ready for AI
- All about the NVIDIA AI Factory and Data Platform
- What it takes to scale AI from pilot to production
- Practical ways to bring AI into your organization

Register Now!