OT: The Architecture of Interoperability
In our previous post, we explored the fundamental divide between Information Technology (IT) and Operational Technology (OT). We established that while IT manages data and applications, OT controls the physical heartbeat of our world, from factory floors to water treatment plants. In this post, we dive deeper into the bridge that connects them: interoperability.

As Industry 4.0 and the Internet of Things (IoT) accelerate, the "air gap" that once separated these domains is evolving. For modern enterprises, the goal isn't just to have IT and OT coexist, but to have them communicate seamlessly. Whether the use case is security, real-time quality control, or predictive maintenance, to name a few, interoperability becomes the critical engine for operational excellence.

The Interoperability Architecture

Interoperability is more than just connecting cables; it's about creating a unified architecture where data flows securely between the shop floor and the "top floor". In legacy environments, OT systems (like SCADA and PLCs) often run on isolated, proprietary networks that don't speak the same language as IT's cloud-based analytics platforms. Bridging this divide requires a robust interoperability architecture that supports:

Industrial Data Lake: A single storage platform that can handle block, file, and object data is essential for bridging the gap between IT and OT. This unified approach prevents data silos by allowing proprietary OT sensor data to coexist on the same high-performance storage as IT applications (such as ERP and CRM). The result is a high-performance Industrial Data Lake, where OT and IT data from various sources can be streamed directly, minimizing the need for data movement, a critical efficiency gain.

Real-Time Analytics: OT sensors continuously monitor machine conditions, including vibration, temperature, and other critical parameters, generating real-time telemetry data.
An interoperable architecture built on high-performance flash storage enables instant processing of this data stream. By integrating IT analytics platforms with predictive algorithms, the system identifies anomalies before they escalate, accelerating maintenance response, optimizing operations, and streamlining exception handling. This approach reduces downtime, lowers maintenance costs, and extends overall asset life.

Standards-Based Design: As outlined in recent cybersecurity research, modern OT environments require datasets that correlate physical process data with network traffic logs to detect anomalies effectively. An interoperable architecture facilitates this by centralizing data for analysis without compromising the security posture. IT/OT convergence also requires a platform capable of securely managing OT data, often through IT standards. An API-first design allows the entire platform to be built on robust APIs, enabling IT to easily integrate storage provisioning, monitoring, and data protection into standard, policy-driven IT automation tools (e.g., Kubernetes, orchestration software).

Pure Storage addresses these interoperability requirements with the Purity operating environment, which abstracts the complexity of the underlying hardware and provides a seamless, multiprotocol experience (NFS, SMB, S3, FC, iSCSI). This ensures that whether data originates from a robotic arm or a CRM application, it is stored, protected, and accessible through a single, unified data plane.

Real-World Application: A Large Regional Water District

Consider a large regional water district, a major provider serving millions of residents. In an environment like this, maintaining water quality and service reliability is a 24/7, mission-critical OT function. Its infrastructure relies on complex SCADA systems to monitor variables like flow rates, tank levels, and chemical compositions across hundreds of miles of pipelines and treatment facilities.
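The streaming anomaly detection described above can be illustrated with a minimal, generic sketch (this is illustrative Python, not Pure Storage code; the window size, threshold, and telemetry values are invented for the example). A rolling z-score check flags any reading that deviates sharply from recent history:

```python
from collections import deque
from statistics import mean, stdev

def make_detector(window=10, threshold=3.0):
    """Return a checker that flags readings deviating more than
    `threshold` standard deviations from the recent rolling window."""
    history = deque(maxlen=window)

    def check(value):
        anomalous = False
        if len(history) >= window:  # only judge once we have enough history
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalous = True
        history.append(value)
        return anomalous

    return check

# Simulated sensor telemetry: a steady signal, then a sudden spike.
check = make_detector(window=10, threshold=3.0)
readings = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 9.0]
flags = [check(r) for r in readings]  # only the final spike is flagged
```

In practice the predictive models are far richer, but the principle is the same: compare each incoming reading against recent history and raise an exception before a small deviation becomes a failure.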
By adopting an interoperable architecture, an organization like this can break down the silos between its operational data and its IT capabilities. Instead of SCADA data remaining locked in a control room, it can be securely replicated to IT environments for long-term trending and capacity planning. For instance, historical flow data combined with predictive analytics can help forecast demand spikes or identify aging infrastructure before a leak occurs. This convergence transforms raw operational data into actionable business intelligence, ensuring reliability for the communities the district serves.

Why We Champion Compliance and Governance

Opening up OT systems to IT networks can introduce new risks. In the world of OT, "move fast and break things" is not an option; reliability and safety are paramount. This is why Pure Storage wraps interoperability in a framework of compliance and governance, including:

- FIPS 140-2 Certification & Common Criteria: We utilize FIPS 140-2 certified encryption modules and have achieved Common Criteria certification.
- Data Sovereignty: Our architecture includes built-in governance features like Always-On Encryption and rapid data locking to ensure compliance with domestic and international regulations, protecting sensitive data regardless of where it resides.
- Compliance: Pure Fusion delivers policy-defined storage provisioning, automating deployment with specified requirements for tags, protection, and replication.

By embedding these standards directly into the storage array, Pure Storage allows organizations to innovate with interoperability while maintaining the security posture that critical OT infrastructure demands.

Next in the series: We will explore IT/OT interoperability further, along with processing of data at the edge. Stay tuned!

Pure's Intelligent Control Plane: Powered by AI Copilot, MCP Connectivity and Workflow Orchestration
At Accelerate 2025, we announced two capabilities that change how you manage Pure Storage in your broader infrastructure: AI Copilot with Model Context Protocol (MCP) and Workflow Orchestration with production-ready templates. Here's what they do and why they matter.

AI Copilot with MCP: Your Infrastructure, One Conversation

The Problem

Your infrastructure spans multiple platforms: Pure Storage managing your data, VMware running VMs, OpenShift handling containers, security tools monitoring threats, application platforms tracking performance, each with its own console, APIs, and workflows. When you need to migrate a VM or respond to a security incident, you're manually pulling information from each system, correlating it yourself, then executing actions across platforms. You become the integration layer.

The Solution

Pure1 now supports Model Context Protocol (MCP), taking Copilot from a suggestive assistant to an active operator. With MCP enabled, Copilot doesn't just recommend; it acts. It serves as a secure bridge between natural language and your infrastructure, capable of fetching data, executing APIs, and orchestrating workflows across diverse systems.

Here's what makes this powerful: you deploy MCP servers within your environment, one for VMware, another for OpenShift, and others for the systems you use. Each server exposes your environment's capabilities through a standard, interoperable protocol. Pure Storage AI Copilot connects seamlessly to these MCP servers, as well as to Pure services such as Data Intelligence, Workflow Orchestration, and Portworx Monitoring, enabling unified and secure automation across your hybrid ecosystem.

What You Can Connect

You can deploy an MCP server on any system, whether it's your VMware environment, Kubernetes clusters, security platforms like CrowdStrike, databases, monitoring tools, or custom applications.
Pure Storage AI Copilot connects to these servers under your control, securely combining their data with Pure Storage services to deliver richer insights and automation.

Getting Started: If you have a use case around MCP, please contact your Pure Storage account team.

Workflow Orchestration: Deploy in Minutes, Not Months

The Problem

Building production-grade automation takes months. You need error handling, integration with multiple systems, testing for edge cases, documentation, and ongoing maintenance. Most teams end up with half-finished scripts that only one person understands.

The Solution

We built workflow templates for common operations, tested them at scale, and made them available in Pure1. Install them, customize them to your needs, and run them in minutes.

Key Templates

VMware to OpenShift Migration with Portworx: Handles the complete migration: extracts VM metadata, identifies backing Pure volumes, checks OpenShift capacity, configures the vVols Datastore and DirectAccess, uses array-based replication, and converts to Portworx format. Traditional migration takes hours for TB-scale VMs; this takes 20 to 30 minutes.

SQL / Oracle Database Clone and Copy: Automates cloning and copying of SQL Server and Oracle databases for dev/test or refresh needs. Instantly creates storage-efficient clones from snapshots, mounts them to target environments, and applies Pure-optimized settings. The hours-long manual process becomes a quick, consistent workflow completed in minutes.

Daily Fleet Health Check: Scans all arrays for capacity trends, performance issues, protection gaps, and hardware health, then posts a summary to Slack. Proactive visibility without manually checking each array.

Rubrik Threat Detection Response: When Rubrik detects a threat, automatically tags affected Pure volumes, creates isolated immutable snapshots, and notifies the security team. Security events propagate to your storage layer automatically.

How It Works

Workflow Orchestration is a SaaS feature in Pure1.
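As a rough standalone illustration of the kind of logic a template like the Daily Fleet Health Check packages up (a hypothetical sketch, not the actual template; the per-array fields, the 80% capacity threshold, and the Slack webhook call below are invented for the example):

```python
import json
from urllib import request

def summarize_fleet(arrays, capacity_warn=0.80):
    """Build a plain-text health summary from per-array stats.
    Each entry: {"name": str, "used": int, "total": int, "alerts": int}."""
    lines = []
    for a in arrays:
        pct = a["used"] / a["total"]
        status = "WARN" if pct >= capacity_warn or a["alerts"] else "OK"
        lines.append(f"{a['name']}: {pct:.0%} full, {a['alerts']} open alerts [{status}]")
    return "\n".join(lines)

def post_to_slack(webhook_url, text):
    """Post the summary text to a Slack incoming webhook."""
    body = json.dumps({"text": text}).encode()
    req = request.Request(webhook_url, data=body,
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)

fleet = [
    {"name": "array-01", "used": 42, "total": 100, "alerts": 0},
    {"name": "array-02", "used": 91, "total": 100, "alerts": 2},
]
summary = summarize_fleet(fleet)
# post_to_slack("https://hooks.slack.com/services/...", summary)  # needs a real webhook URL
```

The real template runs against Pure1 data via agents, with governance built in; the point here is just the shape of the job: gather per-array stats, reduce them to a summary, and push it to a chat channel.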
Deploy lightweight agents (Windows, Linux, or Docker) in your data center to execute workflows locally. Group agents together for high availability and governance controls.

Integrations

Native Pure Storage: Pure1 Connector for full API access; Fusion Connector for storage provisioning (works for Fusion and non-Fusion FlashArray/FlashBlade customers).

Third-Party: ServiceNow, Slack, Google, Microsoft, CrowdStrike, HTTP/Webhooks, PagerDuty, Salesforce, and more. The connector library continues expanding.

Getting Started: Opt in now in Pure1 under Workflows. An introductory offer is available at this time. Check with your Pure account team if you have questions.

How They Work Together

At Accelerate 2025 in New York, we showcased this capability in action. Here's the scenario: an organization wants to migrate VMs to Kubernetes. Action-enabled Copilot orchestrates communication with Pure Storage appliances and services, as well as third-party MCP servers, to collect the information required to address a problem across a heterogeneous environment. With Pure1 MCP, AI Copilot, and Workflows, there's now a programmatic way to collect information from the OpenShift MCP, the VMware MCP, and Pure1 storage insights, then recommend which VMs to migrate based on your selection criteria.

You prompt Copilot: "How can I move my VMs to OpenShift in an efficient way?" Copilot communicates across:

- Your VMware MCP server: to get VM specifications, current configurations, and resource usage.
- Your OpenShift MCP server: to check available cluster capacity and validate compatibility.
- Portworx monitoring: to understand current storage performance.

Copilot reasons across all this information, identifies ideal VM candidates based on your criteria, and recommends the migration approach: which VMs to move, target configurations, and how to preserve policies. Then it can trigger the migration workflow, keeping you updated throughout the process.

Why This Matters

Storage Admins: Stop being the bottleneck.
Enable self-service while maintaining governance.

DevOps Teams: Deploy production-tested automation without writing code.

Security Teams: Build automated response workflows spanning detection, isolation, and recovery.

Infrastructure Leaders: Reduce operational overhead. Teams focus on strategy, not repetitive tasks.

Get Started

MCP Integration: If you have a use case around MCP, please contact your Pure Storage account team.

Workflow Orchestration: Opt in at Pure1 → Workflows.

Learn More: See the documentation in Pure1 or contact your Pure Storage account team.

Pure1 has evolved from a monitoring platform to an Intelligent Control Plane. AI Copilot reasons across your infrastructure; Workflow Orchestration executes. Together, they change how you manage data with Pure Storage.

FlashBlade Ansible Collection 1.22.0 released!
🎊 FlashBlade Ansible Collection 1.22.0

THIS IS A SIGNIFICANT RELEASE, as it removes all REST v1 components from the collection and adds Fusion support! Update your collections!

Download the collection via the Ansible command:

ansible-galaxy collection install purestorage.flashblade

Download it from Ansible Galaxy here. Read the Release Notes here.

PowerShell SDK v2.44.111 released with REST API 2.44 support!
🎉 The Pure PowerShell SDK v2 version 2.44.111 has been released! This release marks full compatibility with Purity REST API version 2.44 and contains many additions, changes, and deprecations for cmdlets. Please read the Release Notes for more detailed release information.

To install the updated module:

Install-Module -Name PureStoragePowerShellSDK2

Here is a brief review of the newest additions and changes. In this release, we added cmdlets for managing Directories, DirectoryServices, Policies, ProtectionGroupSnapshot tags, and Servers. Multiple endpoints got new parameters. Find detailed information about the cmdlets in the sections below.

This release adds the following 31 new cmdlets:

- Get-Pfa2DirectoryGroup
- Get-Pfa2DirectoryPolicyUserGroupQuota
- New-Pfa2DirectoryPolicyUserGroupQuota
- Remove-Pfa2DirectoryPolicyUserGroupQuota
- Get-Pfa2DirectoryUser
- Get-Pfa2DirectoryGroupQuota
- New-Pfa2DirectoryService
- Remove-Pfa2DirectoryService
- Get-Pfa2DirectoryServiceLocalDirectoryService
- New-Pfa2DirectoryServiceLocalDirectoryService
- Update-Pfa2DirectoryServiceLocalDirectoryService
- Remove-Pfa2DirectoryServiceLocalDirectoryService
- Get-Pfa2DirectoryUserQuota
- Get-Pfa2PolicyUserGroupQuota
- New-Pfa2PolicyUserGroupQuota
- Update-Pfa2PolicyUserGroupQuota
- Remove-Pfa2PolicyUserGroupQuota
- Get-Pfa2PolicyUserGroupQuotaMember
- New-Pfa2PolicyUserGroupQuotaMember
- Remove-Pfa2PolicyUserGroupQuotaMember
- Get-Pfa2PolicyUserGroupQuotaRule
- New-Pfa2PolicyUserGroupQuotaRule
- Update-Pfa2PolicyUserGroupQuotaRule
- Remove-Pfa2PolicyUserGroupQuotaRule
- Get-Pfa2ProtectionGroupSnapshotTag
- Remove-Pfa2ProtectionGroupSnapshotTag
- Set-Pfa2ProtectionGroupSnapshotTagBatch
- Get-Pfa2Servers
- New-Pfa2Servers
- Update-Pfa2Servers
- Remove-Pfa2Servers

The following 29 cmdlets have new parameters:

- New-Pfa2ActiveDirectory: SourcesId, SourcesName
- Update-Pfa2ActiveDirectory: SourcesId, SourcesName
- New-Pfa2DirectoryPolicyNfs: PoliciesServerId, PoliciesServerName
- New-Pfa2DirectoryPolicySmb: PoliciesServerId, PoliciesServerName
- Get-Pfa2DirectoryExport: Name
- New-Pfa2DirectoryExport: Name, ServerId, ServerName
- Remove-Pfa2DirectoryExport: Name
- Get-Pfa2DirectoryService: Id
- Update-Pfa2DirectoryService: Id, CaCertificateRefId, CaCertificateRefName, CaCertificateRefResourceType, SourcesId, SourcesName
- Get-Pfa2DirectoryServiceLocalGroup: AllowErrors, ContextName
- New-Pfa2DirectoryServiceLocalGroup: ContextName, LocalDirectoryServiceIds, LocalDirectoryServiceNames
- Update-Pfa2DirectoryServiceLocalGroup: ContextName, LocalDirectoryServiceIds, LocalDirectoryServiceNames, ContextId
- Remove-Pfa2DirectoryServiceLocalGroup: ContextName, LocalDirectoryServiceIds, LocalDirectoryServiceNames
- Get-Pfa2DirectoryServiceLocalGroupMember: AllowErrors, ContextName
- New-Pfa2DirectoryServiceLocalGroupMember: ContextName, LocalDirectoryServiceIds, LocalDirectoryServiceNames
- Remove-Pfa2DirectoryServiceLocalGroupMember: ContextName, LocalDirectoryServiceIds, LocalDirectoryServiceNames
- Get-Pfa2DirectoryServiceLocalUser: AllowErrors, ContextName
- New-Pfa2DirectoryServiceLocalUser: ContextName, LocalDirectoryServiceIds, LocalDirectoryServiceNames
- Update-Pfa2DirectoryServiceLocalUser: ContextName, LocalDirectoryServiceIds, LocalDirectoryServiceNames
- Remove-Pfa2DirectoryServiceLocalUser: ContextName, LocalDirectoryServiceIds, LocalDirectoryServiceNames
- Get-Pfa2DirectoryServiceLocalUserMember: AllowErrors, ContextName
- New-Pfa2DirectoryServiceLocalUserMember: ContextName, LocalDirectoryServiceIds, LocalDirectoryServiceNames
- Remove-Pfa2DirectoryServiceLocalUserMember: ContextName, LocalDirectoryServiceIds, LocalDirectoryServiceNames
- New-Pfa2NetworkInterface: AttachedServersId, AttachedServersName
- Update-Pfa2NetworkInterface: AttachedServersId, AttachedServersName
- New-Pfa2PolicyNfsMember: MembersServerId, MembersServerName
- New-Pfa2PolicySmbMember: MembersServerId, MembersServerName
- New-Pfa2ProtectionGroupSnapshot: TagsCopyable, TagsKey, TagsNamespace, TagsValue, TagsContextId, TagsContextName, TagsResourceId, TagsResourceName
- New-Pfa2ProtectionGroupSnapshotTest: TagsCopyable, TagsKey, TagsNamespace, TagsValue, TagsContextId, TagsContextName, TagsResourceId, TagsResourceName

The following 16 cmdlets had the ContextNames parameter dropped:

- Update-Pfa2Array
- Update-Pfa2ContainerDefaultProtection
- Update-Pfa2DirectoryService
- Update-Pfa2DirectoryServiceRole
- Set-Pfa2PresetWorkload
- New-Pfa2ProtectionGroupSnapshot
- Update-Pfa2ProtectionGroupSnapshot
- New-Pfa2ProtectionGroupSnapshotTest
- Update-Pfa2ProtectionGroup
- New-Pfa2RemoteProtectionGroupSnapshot
- New-Pfa2RemoteProtectionGroupSnapshotTest
- Update-Pfa2RemoteProtectionGroup
- New-Pfa2SyslogServer
- Update-Pfa2SyslogServer
- Update-Pfa2SyslogServerSetting
- New-Pfa2WorkloadPlacementRecommendation

Ask Us Everything Recap: Making Purity Upgrades Simple
At our recent Ask Us Everything session, we put a spotlight on something every storage admin has an opinion about: software upgrades. Traditionally, storage upgrades have been dreaded — late nights, service windows, and the fear of downtime. But as attendees quickly learned, Pure Storage Purity upgrades are designed to be a very different experience.

Our panel of Pure Storage experts included our host Don Poorman, Technical Evangelist, and special guests Sean Kennedy and Rob Quast, Principal Technologists. Here are the questions that sparked the most conversation, and the insights our panel shared.

"Are Purity upgrades really non-disruptive?"

This one came up right away, and for good reason. Many admins have scars from upgrade events at other vendors. Pure experts emphasized that non-disruptive upgrades (NDUs) are the default. With thousands performed in the field — even for mission-critical applications — upgrades run safely in the background. Customers don't need to schedule middle-of-the-night windows just to stay current.

"Do I need to wait for a major release?"

Attendees wanted to know how often they should upgrade, and whether "dot-zero" releases are safe. The advice: don't wait too long. With Pure's long-life releases (like Purity 6.9), you can stay current without chasing every new feature release. And because Purity upgrades are included in your Evergreen subscription, you're not paying extra to get value — you just need to install the latest version. Session attendees found this slide helpful, illustrating the different kinds of Purity releases.

"How do self-service upgrades work?"

Admins were curious about how much they can do themselves versus involving Pure Storage support. The good news: self-service upgrades are straightforward through Pure1, but you're never on your own. Pure Technical Services knows that you're running an upgrade, and if an issue arises you're automatically moved to the front of the queue.
If you want a co-pilot, then of course Pure Storage support can walk you through it live. Either way, the process is fast, repeatable, and built for confidence. Upgrading your Purity version has never been easier, now that Self-Service Upgrades lets you modernize on your schedule.

"Why should I upgrade regularly?"

This is where the conversation shifted from fear to excitement. Staying current doesn't just keep systems secure — it unlocks new capabilities like:

- Pure Fusion™: a unified, fleet-wide control plane for storage.
- FlashArray™ Files: modern file services, delivered from the same trusted platform.
- Ongoing performance, security, and automation enhancements that come with every release.

One attendee summed it up perfectly: "Upgrading isn't about fixing problems — it's about getting new toys."

The Takeaway

The biggest lesson from this session? Purity upgrades aren't something to fear — they're something to look forward to. They're included with your Evergreen subscription, they don't disrupt your environment, and they unlock powerful features that make storage easier to manage. So if you've been putting off your next upgrade, take a fresh look. Chances are, Fusion, Files, or another feature you've been waiting for is already there — you just need to turn it on.

👉 Want to keep the conversation going? Join the discussion in the Pure Community and share your own upgrade tips and stories. Be sure to join our next Ask Us Everything session, and catch up with past sessions here!

Flashcrew Manchester
🤝 Flashcrew Pure Usergroup | Manchester | For our amazing customers!

Connect with fellow Pure users and dive deep into the //Accelerate announcements. Learn how to extract even more value from the Pure ecosystem and get your technical questions answered by the experts.

Register here: https://info.purestorage.com/2025-Q2EMEA-UKREPLHCOFY26Q3-FlashCrew-Manchester-LP_01---Registration-Page.html

We also open these up to non-customers interested in Pure, helping you learn from those already benefitting from the Pure Enterprise Data Platform. Please DM me if you would like an invite.

Pure Storage PowerShell SDK2 version 2.43.46 released!
We released Pure Storage PowerShell SDK version 2.43.30 not too long ago. We have since received reports of users having issues with the Connect-Pfa2Array cmdlet, but only when using PowerShell version 5.x. The error would look similar to this:

Could not load file or assembly 'System.ComponentModel.Annotations, Version=4.2.0.0, Culture=neutral, PublicKeyToken=someToken' or one of its dependencies.

We have released a fix for this in the PowerShell Gallery: https://www.powershellgallery.com/packages/PureStoragePowerShellSDK2/2.43.46. This was the only fix in this release, and the issue does not affect the cmdlet when used with PowerShell version 7.x.

To upgrade, run Install-Module:

Install-Module -Name PureStoragePowerShellSDK2 -Repository PSGallery -RequiredVersion 2.43.46 -Verbose -Force

We apologize for any inconvenience this may have caused.