Architectural Deep Dive: Building Data Pipelines for AI Agents
May 7 | Register Now!

The leap from a "hello world" AI agent to a production-ready system is a massive data challenge. Autonomous agents are coming your way, and it's up to you to get your data stack ready for production. In this live session, we'll build a high-velocity data pipeline for AI agents from scratch. Starting with the fundamentals of a strong data storage foundation, we'll walk through every layer end to end, covering real-time data ingestion, vector storage, retrieval, orchestration, and inference.

In this session you'll learn:
- How to build a production-ready data storage pipeline for AI agents
- The foundational decisions IT, data, and AI teams need to make to handle "context lag" and memory before the first agent goes live
- A practical framework for assessing whether your current infrastructure is ready to support AI agents at scale

Register Now!

Enabling Agentic AI via Pure1 Manage MCP Server
Everpure now offers a Pure1® Manage MCP Server so you can query information about your fleet using natural-language questions. In this post, I'll explain how the Pure1 Manage MCP Server works. The first section explains MCP in general, and the second explains how to use our specific server. Feel free to skip to the Quick Start section if you're already familiar with MCP and just need the parameters to plug into your host.

What is MCP?

MCP stands for "Model Context Protocol," and it's a way for users to connect their AI applications to external systems using tool calls.

MCP tools are fundamentally rooted in application programming interfaces (APIs). An API is a set of rules and protocols that allows different software applications to communicate with each other. It acts as an intermediary, enabling one piece of software (the client) to request information or functionality from another piece of software (the server) without needing to know the server's internal workings. For instance, when you check the weather on your phone, the weather app uses an API to send a request to a weather service, which then returns the current weather data.

AI applications have trouble making API calls directly because APIs are designed for completeness and correctness, not for an LLM to use easily. When an AI application wants to use an external system to handle a user's request, it uses the MCP protocol to make a tool call: the AI (client) requests a function (the tool) from an external system (the server), and the system executes the function and returns a result. This makes MCP a system that standardizes and mediates API-like interactions, allowing AI models to leverage external, real-world capabilities.

For more information, see this article on the MCP website: "What is the Model Context Protocol (MCP)?"

How can customers benefit from the Pure1 Manage MCP Server?
The Pure1 Manage MCP Server enables customers to securely integrate AI assistants, copilots, and agentic systems with live Pure1 telemetry and operational data—without building custom API integrations. It transforms Pure1 from a dashboard-centric experience into an AI-accessible platform, enabling natural language interaction, contextual automation, and real-time operational intelligence. Customers benefit from faster AI integration, reduced engineering effort, preserved security controls, and improved decision velocity across hybrid environments.

What types of customer workflows are best suited for MCP?

The Pure1 Manage MCP Server is particularly well suited for agentic and AI-driven workflows, including:

- Fleet telemetry integration with customer copilots: Expose Pure1 telemetry—arrays, volumes, workloads, metrics, and alerts—to internal copilots, chatbots, or AI platforms via MCP endpoints. Value: unified operational visibility across hybrid and multi-platform environments.
- Automation with context awareness: Use MCP to validate storage state, health, performance, or capacity before executing provisioning, backup, or disaster recovery workflows. Value: safer automation with contextual validation, reduced execution errors, and greater rollback confidence.
- Hybrid cloud observability: Correlate Everpure array performance and capacity metrics with application, VM, container, or cloud telemetry across environments. Value: faster troubleshooting and improved end-to-end performance insights.
- Conversational operational analytics: Enable operators to ask real-time natural language questions, such as "Which arrays are nearing capacity risk?", "Show me the top latency spikes in the last 24 hours.", or "Summarize all critical alerts across regions." Value: rapid insight without navigating dashboards or exporting reports.

What the Pure1 Manage MCP Server can do

The Pure1 Manage MCP Server exposes the Pure1 Manage REST APIs to AI applications.
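Under the hood, an MCP tool call is a JSON-RPC 2.0 request from the AI host to the server. As a rough illustration of the wire format only, here is a sketch of the kind of "tools/call" request a host might send; the tool name reuses Get_metrics_history from the example conversation, but the argument names are hypothetical and not the Pure1 server's actual schema:

```python
import json

def make_tool_call(call_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 request of the kind an MCP host sends to a server.

    MCP frames tool invocations as a "tools/call" request whose params carry
    the tool name and its arguments.
    """
    return {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# e.g. asking for 7 days of latency history for one array
# (parameter names below are made up for illustration)
request = make_tool_call(1, "Get_metrics_history", {
    "resource_names": ["FA-Prod-01"],
    "metric": "avg_io_latency_us",
    "days": 7,
})
print(json.dumps(request, indent=2))
```

The server executes the named tool and returns a JSON-RPC result, which the AI application then turns into a natural-language answer.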
This means you can ask ad hoc questions about your fleet using natural language, and your AI application will use the server's tools to query the answer. No programming required.

Example conversation

Here's an example conversation demonstrating the use of the Pure1 Manage MCP Server:

User: "What is the name of my oldest FlashArray and what is its serial number?"
Chatbot:
- Processing request: the AI recognizes this as a request for fleet information requiring a tool call.
- Tool invocation: Get_hardware
- Response: "The oldest FlashArray in your fleet is named FA-Prod-01 with serial number FA10123456."

User: "Can you show me the average I/O latency for that array over the last 7 days?"
Chatbot: "Let me first check the available latency metrics, then query the historical data."
- Tool invocation: Get_metrics
- Tool invocation: Get_metrics_history
- Response: "The average I/O latency for FA-Prod-01 (serial FA10123456) over the last 7 days was 0.385 milliseconds."

User: "How many arrays do I have in total that are still under warranty?"
Chatbot:
- Processing request: the AI recognizes this as a request for a count of arrays filtered by warranty status.
- Tool invocation: Get_array_support_contracts
- Response: "You currently have 12 arrays in your fleet that are still under active warranty."

Quick start

Step 1: Register an API key in Pure1 Manage

The Pure1 Manage MCP Server leverages the Pure1 Manage REST APIs. To access those APIs, you need to register an API key in Pure1 Manage. To do that, follow the directions in the Pure1® REST API introductory blog post. After going through the instructions, you will have an application ID and a private key file, which will be used in step 2 to generate an access token for the MCP server.

Step 2: Set up the pure1_token_factory.py script

Prerequisites: you need Python 3.12 or greater to run the script.

- Download pure1_token_factory.zip.
- Unzip the archive.
- Go to the unzipped folder in your command-line terminal.
- Optional but recommended: create and activate a Python virtual environment:
  python3 -m venv .venv
  source .venv/bin/activate
- Install the requirements: pip3 install -r requirements.txt
- Run python3 pure1_token_factory.py <application_id> <private_key_file>
- Copy the generated access token from the script output for the next step.

Step 3: Add the remote MCP server to your AI application

Follow the directions for your AI application to add a remote MCP server (see the Pure1 Manage MCP Server User Guide for instructions for specific chatbots). In general, it needs the following information:

- Remote MCP server address: https://api.purestorage.com/mcp
- Authorization type: header
- Header name: Authorization
- Header value: Bearer <access-token>

Important: <access-token> is just a placeholder for the access token you generated in step 2. The actual header value should look something like "Bearer eyJ0eXAiO…"

Important: the access token expires, so you'll need to re-run pure1_token_factory.py every 10 hours and manually copy the new access token into your AI application's config.

Claude Desktop instructions

Claude Desktop is a special case because it doesn't let you set the Authorization header directly. You have to run the mcp-remote local MCP server and configure that to use the Pure1 Manage remote MCP server.

Prerequisites: you need Node.js version 18 or newer installed on your system.

Configuration:
- In Claude Desktop, go to Settings > Developer, and click Edit Config.
- Open the claude_desktop_config.json file in a plain-text editor like VS Code.
- Configure the mcp-remote server, which is necessary to pass the Authorization header to the Pure1 Manage MCP Server.
- Paste the token into the configuration file, then restart Claude Desktop.
{
  "mcpServers": {
    "Pure1 API": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://api.pure1.purestorage.com/mcp",
        "--header",
        "Authorization:${AUTHORIZATION_HEADER}"
      ],
      "env": {
        "AUTHORIZATION_HEADER": " Bearer <paste access token here>"
      }
    }
  }
}

Note: there might be other configuration options in this file. Be sure to leave them unchanged, and only insert the Pure1 API config in the mcpServers section. The leading space in the AUTHORIZATION_HEADER environment variable is important: it's there to work around a bug in Windows argument parsing.

Please note that the first time Claude Desktop uses a tool, it will ask you for permission. You can grant permission to all tools at once by going to Customize > Connectors > Pure1 API and selecting Always Allow under Other tools.

For more detailed instructions from Anthropic, please refer to: Connect to local MCP servers - Model Context Protocol.

Bring your hardest questions: Building an AI Factory Live with FlashStack
April 21 | Register Now!

Most organizations racing to build AI infrastructure are assembling point solutions that create hidden bottlenecks before a single model ever trains. This session cuts through the complexity with a live, practitioner-led walkthrough of the Everpure and Cisco FlashStack® for AI CVD—a proven reference architecture purpose-built for the NVIDIA AI Factory. Experts from Everpure will show you exactly how FlashStack eliminates the guesswork from AI infrastructure deployment, from storage and networking to compute and data pipeline readiness.

Key takeaways include:
- Why a CVD-backed reference architecture reduces deployment risk and accelerates time-to-AI
- How FlashStack integrates with NVIDIA technologies to support training, inference, and agentic workloads at scale
- Perspective on what enterprises get wrong when standing up AI infrastructure—and how to avoid the most costly mistakes

Register Now!

Catching up
Hey all! It's been a while since I've posted here and I feel compelled to reach out to see what everyone is working on. Like all of us, I've been pulled in many different directions lately (power, cooling, security cameras), and it has made me appreciate that managing our Everpure environment leaves me cycles to focus elsewhere. Current storage-related projects:

- Cloudsnap: working with the Everpure support team to get Cloudsnap working so that we can investigate long-term backups to our FlashBlades or S3 in the cloud.
- Integration with CyberArk: again, working with the Everpure support team to enable privileged users with rotating passwords to work with our Everpure management environment.
- Pureprotect: Chad Montieth and Suresh Madhu have been instrumental in our testing and development of a case to possibly replace SRM for DR failover and testing.

Don't forget about Accelerate, June 16th-18th in Las Vegas. This is a worthwhile event that provides free training classes and certification tests. Jason Finley and I from SEHP get to attend this year. Register here: Begin Registration - Pure Accelerate 2026

What are you working on? Share any successes or challenges with the group. Keep an eye on the community page next week for an update from Nick Fritsch. Happy Easter all! - Charlie

Boosting SQL Server Backup/Restore Performance: Threads and Parallelism
In this post, we'll discuss day-1 tuning you can do on your database hosts to take full advantage of your new high-performance backup storage. We'll go over a few tricks around database layout and backup configuration for maximum throughput, discuss some quirks with SMB, and finally discuss using S3 effectively.

🍀 Don't Rely on Luck: A St. Patrick's Day Reminder to Secure Your Fleet
St. Patrick's Day is a celebration of luck, fortune, and four-leaf clovers—but when it comes to cybersecurity, luck is not a strategy. You cannot rely on chance to secure your environment. You need visibility, control, and proactive remediation. As threats continue to evolve and vulnerabilities are discovered across the industry, the most important first step in protecting your infrastructure is simple: know exactly what you're running.

Step 1: Build a current, accurate fleet inventory

The adage "You can't protect what you can't see" is a fundamental principle of cybersecurity. A comprehensive, real-time inventory of your storage fleet sets the foundation for security hygiene. That includes:
- Every array in your fleet
- Every active version of the Purity operating environment
- Exposure to known security vulnerabilities
- Identification of arrays that may require upgrades or patches

The Everpure Pure1® Fleet Security Assessment Center provides this visibility in a single, centralized view:

🔗 Pure1 Fleet Security Assessment Center (login required)
https://pure1.purestorage.com/app/dashboard/assessment/security

This dashboard identifies:
- All Purity versions active in your fleet
- Arrays running non-recommended versions
- Potential exposure to known CVEs
- Security posture gaps requiring action

Step 2: Understand vulnerability exposure

Staying informed about known vulnerabilities is critical. The Everpure CVE Database provides transparent tracking of security advisories affecting our products:

🔗 Everpure CVE Database (login required)
https://support.purestorage.com/bundle/z-kb-articles-cve/page/cve-database.html

This resource allows you to:
- Review impacted Purity versions
- Understand severity and CVSS scoring
- Identify fixed or remediated versions
- Access mitigation guidance

Step 3: Upgrade or patch—don't wait

If your fleet assessment identifies risk exposure, action is required.
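The Step 1 and Step 2 checks above boil down to one comparison: is each array's running Purity version at or above the fixed version published for known CVEs? A toy sketch of that comparison (the fleet inventory, version numbers, and fixed-version table below are all hypothetical; real data would come from the Pure1 dashboard and the CVE database):

```python
def parse_version(v):
    """Turn a dotted version string like "6.4.10" into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

# Hypothetical inventory and hypothetical minimum fixed versions per release line.
fleet = [
    {"array": "FA-Prod-01", "purity": "6.4.3"},
    {"array": "FA-Prod-02", "purity": "6.4.11"},
    {"array": "FB-Analytics", "purity": "6.3.9"},
]
fixed_versions = {"6.4": "6.4.10", "6.3": "6.3.14"}

def needs_upgrade(entry):
    """Flag an array running below the fixed version for its release line."""
    line = ".".join(entry["purity"].split(".")[:2])
    fixed = fixed_versions.get(line)
    return fixed is not None and parse_version(entry["purity"]) < parse_version(fixed)

at_risk = [e["array"] for e in fleet if needs_upgrade(e)]
print(at_risk)  # → ['FA-Prod-01', 'FB-Analytics']
```

Anything that lands in the at-risk list is exactly what Step 3 below is about: upgrade or patch, don't wait.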
We strongly urge customers to ensure that:
- All arrays are upgraded to the recommended fixed Purity versions, OR
- Appropriate patches are applied to remediate identified vulnerabilities

Security is not static. Staying current ensures:
- Reduced attack surface
- Stronger cryptographic protections
- Hardened operating environments
- Continued alignment with best practices

Reinforce with security best practices

Beyond version management, follow our published security guidance for both FlashArray™ and FlashBlade® platforms:

FlashArray Security Best Practices (login required)
https://support.purestorage.com/bundle/m_flasharray_security/page/FlashArray/FlashArray_Security/topics/c_flasharray_security_overview_best_practices.html

FlashBlade Security Best Practices (login required)
https://support.purestorage.com/bundle/m_security_resources/page/FlashBlade/FlashBlade_Security/topics/concept/c_purityfb_4.5_security_best_practices.html

These white papers outline:
- Secure configuration recommendations
- Access control hardening
- Encryption best practices
- Monitoring and logging guidance

Final thought

On St. Patrick's Day, luck may bring you a pot of gold. But in cybersecurity, luck only buys you time—and time runs out. A secure environment requires:
- A current fleet inventory
- Continuous vulnerability awareness
- Timely upgrades and patching
- Adherence to security best practices

Don't rely on luck to protect your data. Take control of your security posture today. Happy St. Patrick's Day—and stay secure. 🍀💪

Unlock AI Capabilities: Best Practices for Risk-Free Oracle 26ai Upgrades
April 16 | Register Now!

Upgrading to Oracle AI Database 26ai with AutoUpgrade simplifies the database side of the process. Yet many teams struggle with storage infrastructure risk: performance variability, downtime, and operational complexity during the transition. This session explores how to remove that infrastructure friction from Oracle 19c to 26ai upgrades by leveraging non-disruptive storage operations, snapshot-based rollback, and consistent performance at scale.

Learn how to:
- Upgrade to Oracle 26ai using infrastructure best practices
- Align to a unified data platform
- Provide a stable foundation for AI-enabled workloads

Register Now!

Ask Us Everything: Everpure Object — What You Need to Know
Why Object Exists (and Why It's Different)

Justin opened with a reset that resonated: file and object may both store unstructured data, but they are built on different assumptions. File storage evolved from human workflows—folders, directories, locking semantics, POSIX guarantees. That model works well for users and shared drives, but those same assumptions become friction at cloud scale.

Object storage was built for machines. It uses a flat namespace, atomic operations, embedded metadata, and native versioning. That's why modern applications—backup platforms, analytics engines, AI frameworks—increasingly request S3 buckets instead of file shares. It's not that file storage is going away; it's that machines prefer object.

Scale: 3.8 Trillion Objects and Counting

One of the standout moments was a validation that Everpure ran for a customer, which tested 3.8 trillion objects in a single bucket on FlashBlade. They didn't stop because they hit a ceiling—they stopped because they ran out of time. That matters because unlimited scaling isn't guaranteed in most on-prem object systems. Many legacy solutions quietly impose metadata or bucket limits that don't surface until you're deep into production. If your roadmap includes AI datasets, large backup repositories, analytics pipelines, or content delivery use cases, scale limits quickly become real-world constraints.

Object for AI: Performance Has Changed the Conversation

Using object for AI dominated the Q&A—and for good reason. Training workloads demand enormous throughput, especially for checkpointing bursts across large GPU clusters. Inference workloads are more latency-sensitive and read-heavy. FlashBlade's architecture, including S3 over RDMA, separates metadata authentication from the data path and enables direct, high-throughput access to data nodes. The team referenced performance in the hundreds of GB/sec range on multi-chassis systems.
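The flat-namespace point is worth making concrete. An object store has no real directories: "/" is just a byte in the key, and S3-style clients fake hierarchy at list time by filtering the flat key set with a prefix and delimiter. A stdlib-only sketch of that listing trick (no real S3 calls; the keys below are made up):

```python
def list_common_prefixes(keys, prefix="", delimiter="/"):
    """Emulate S3-style delimiter listing over a flat namespace.

    Returns (objects, prefixes): the keys sitting directly under `prefix`,
    plus the "folder-like" common prefixes one delimiter level down -- the
    hierarchy the client presents but the store itself never maintains.
    """
    objects, prefixes = [], set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            prefixes.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            objects.append(key)
    return sorted(objects), sorted(prefixes)

# A flat key set -- the slashes are part of the names, not directories.
keys = [
    "datasets/train/0001.parquet",
    "datasets/train/0002.parquet",
    "datasets/val/0001.parquet",
    "checkpoints/step-1000.pt",
]
print(list_common_prefixes(keys, prefix="datasets/"))
# → ([], ['datasets/train/', 'datasets/val/'])
```

Because nothing like a directory tree has to be kept consistent, listing, writing, and deleting stay simple atomic operations per key, which is part of why the machine-oriented model scales so far.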
Justin made an important observation: AI initially landed on file systems simply because object storage wasn't considered performant enough. That assumption is changing rapidly.

Object on FlashArray: The "Alongside Block" Story

A lot of questions focused on object running on FlashArray—resiliency, performance expectations, and which workloads are a fit. Writes are acknowledged only after safe persistence, and standard object retry logic handles failure scenarios cleanly, so you can be sure of data integrity even if a controller fails. FlashArray Object is designed for smaller-scale S3 use cases: artifact repositories, container workloads, image stores, edge environments, and test/dev scenarios. FlashBlade remains the scale-out platform for massive object footprints. Over time, Everpure Fusion will increasingly abstract placement decisions so workloads land on the right platform without adding operational complexity.

Data Reduction and Garbage Collection: The Hidden Advantages

One of the more practical differentiators discussed was garbage collection. Many legacy object systems struggle with delete churn because of layered indirection—objects are marked, then nodes are marked, then underlying file systems are marked, then media eventually reclaims space. Because Everpure controls the stack end-to-end—logical object through physical media—reclamation is cohesive and efficient. Combined with always-on compression and similarity-based DeepReduce techniques, customers see meaningful space savings without sacrificing performance.

Migration: It's an Application Decision

Perhaps the most important takeaway: moving from file to object isn't a storage copy exercise—it's an application transition. Backup software, artifact repositories, and analytics platforms increasingly support object natively. Let the application drive the migration instead of trying to brute-force a file-to-object copy.
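One way to see the "application decision" framing: applications that already isolate storage behind a small interface can swap a file backend for an object backend without touching business logic. A deliberately simplified sketch of that pattern (both backends are in-memory stand-ins; a real ObjectStore would wrap an S3 client pointed at a FlashBlade or FlashArray endpoint, which is not shown here):

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Minimal storage interface an application might already code against."""
    @abstractmethod
    def put(self, key, data): ...
    @abstractmethod
    def get(self, key): ...

class FileStore(BlobStore):
    """File-backed stand-in: keys behave like paths on a share."""
    def __init__(self):
        self._files = {}
    def put(self, key, data):
        self._files[key] = data
    def get(self, key):
        return self._files[key]

class ObjectStore(BlobStore):
    """Object-backed stand-in: the same keys live in a flat bucket namespace."""
    def __init__(self):
        self._bucket = {}
    def put(self, key, data):
        self._bucket[key] = data
    def get(self, key):
        return self._bucket[key]

def archive_report(store: BlobStore, name, payload):
    # Application logic is identical regardless of which backend is wired in.
    store.put(f"reports/{name}", payload)
    return store.get(f"reports/{name}")

print(archive_report(ObjectStore(), "q1.txt", b"totals"))  # → b'totals'
```

When the application (or its backup/analytics platform) speaks object natively like this, "migration" is a configuration change plus a data re-ingest, not a brute-force file-to-object copy.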
Object is growing quickly, but the shift doesn't require abandoning everything at once. With FlashArray for edge and unified workloads, FlashBlade for scale-out performance, and Everpure Fusion tying it together, we are building a platform where object can grow naturally alongside block — not replace it overnight. If you have follow-up questions, bring them into the Pure Community. The conversation around object is only getting bigger.