OT: The Architecture of Interoperability
In the previous post, we explored the fundamental divide between Information Technology (IT) and Operational Technology (OT). We established that while IT manages data and applications, OT controls the physical heartbeat of our world, from factory floors to water treatment plants. In this post, we dive deeper into the bridge that connects them: interoperability.

As Industry 4.0 and the Internet of Things (IoT) accelerate, the "air gap" that once separated these domains is evolving. For modern enterprises, the goal isn't just to have IT and OT coexist, but to have them communicate seamlessly. Whether the use case is security, real-time quality control, or predictive maintenance, interoperability becomes the critical engine for operational excellence.

The Interoperability Architecture

Interoperability is more than just connecting cables; it's about creating a unified architecture where data flows securely between the shop floor and the "top floor." In legacy environments, OT systems (like SCADA and PLCs) often run on isolated, proprietary networks that don't speak the same language as IT's cloud-based analytics platforms. To bridge this, a robust interoperability architecture is required. This architecture must support:

- Industrial Data Lake: A single storage platform that can handle block, file, and object data is essential for bridging the gap between IT and OT. This unified approach prevents data silos by allowing proprietary OT sensor data to coexist on the same high-performance storage as IT applications (such as ERP and CRM). The benefit is a high-performance industrial data lake where OT and IT data from various sources can be streamed directly, minimizing the need for data movement, a critical efficiency gain.

- Real-Time Analytics: OT sensors continuously monitor machine conditions, including vibration, temperature, and other critical parameters, generating real-time telemetry data. An interoperable architecture built on high-performance flash storage enables instant processing of this data stream. By integrating IT analytics platforms with predictive algorithms, the system identifies anomalies before they escalate, accelerating maintenance response, optimizing operations, and streamlining exception handling (a simple illustration appears at the end of this section). This approach reduces downtime, lowers maintenance costs, and extends overall asset life.

- Standards-Based Design: As outlined in recent cybersecurity research, modern OT environments require datasets that correlate physical process data with network traffic logs to detect anomalies effectively. An interoperable architecture facilitates this by centralizing data for analysis without compromising the security posture. IT/OT convergence also requires a platform capable of securely managing OT data, often through IT standards.

- API-First Design: Building the entire platform on robust APIs enables IT to integrate storage provisioning, monitoring, and data protection into standard, policy-driven IT automation tools (e.g., Kubernetes and orchestration software).

Pure Storage addresses these interoperability requirements with the Purity operating environment, which abstracts the complexity of the underlying hardware and provides a seamless, multiprotocol experience (NFS, SMB, S3, FC, iSCSI). This ensures that whether data originates from a robotic arm or a CRM application, it is stored, protected, and accessible through a single, unified data plane.
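To make the real-time analytics point a little more concrete, here is a minimal, illustrative sketch of streaming anomaly detection on OT telemetry. The sensor values, window size, and threshold are hypothetical placeholders; a production system would rely on the organization's own analytics platform and tuned predictive models rather than a simple rolling z-score.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=50, z_threshold=3.0):
    """Flag telemetry samples that deviate sharply from the recent rolling baseline.

    readings: iterable of (timestamp, value) pairs, e.g., vibration in mm/s.
    Returns a list of (timestamp, value, z_score) for flagged samples.
    """
    history = deque(maxlen=window)
    anomalies = []
    for ts, value in readings:
        if len(history) >= 10:  # wait for enough samples to form a stable baseline
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma >= z_threshold:
                anomalies.append((ts, value, round((value - mu) / sigma, 2)))
        history.append(value)
    return anomalies

# Hypothetical vibration readings (mm/s): steady around 2.0 with one injected spike.
stream = [(t, 2.0 + 0.05 * (t % 3)) for t in range(100)]
stream[60] = (60, 6.5)  # simulated fault signature
print(detect_anomalies(stream))  # flags the spike at t=60
```

In a converged architecture, a routine like this would run against telemetry streamed into the industrial data lake, with flagged readings feeding the same exception-handling workflows IT already automates.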
Real-World Application: A Large Regional Water District

Consider a large regional water district, a major provider serving millions of residents. In an environment like this, maintaining water quality and service reliability is a 24/7, mission-critical OT function. Its infrastructure relies on complex SCADA systems to monitor variables like flow rates, tank levels, and chemical compositions across hundreds of miles of pipelines and treatment facilities.

By adopting an interoperable architecture, an organization like this can break down the silos between its operational data and its IT capabilities. Instead of SCADA data remaining locked in a control room, it can be securely replicated to IT environments for long-term trending and capacity planning. For instance, historical flow data combined with predictive analytics can help forecast demand spikes or identify aging infrastructure before a leak occurs (a simple illustration appears at the end of this post). This convergence transforms raw operational data into actionable business intelligence, ensuring reliability for the communities the district serves.

Why We Champion Compliance and Governance

Opening up OT systems to IT networks can introduce new risks. In the world of OT, "move fast and break things" is not an option; reliability and safety are paramount. This is why Pure Storage wraps interoperability in a framework of compliance and governance, including:

- FIPS 140-2 Certification & Common Criteria: We utilize FIPS 140-2 certified encryption modules and have achieved Common Criteria certification.
- Data Sovereignty: Our architecture includes built-in governance features like Always-On Encryption and rapid data locking to ensure compliance with domestic and international regulations, protecting sensitive data regardless of where it resides.
- Compliance: Pure Fusion delivers policy-defined storage provisioning, automating deployment with specified requirements for tags, protection, and replication.

By embedding these standards directly into the storage array, Pure Storage allows organizations to innovate with interoperability while maintaining the security posture that critical OT infrastructure demands.

Next in the series: We will dig further into IT/OT interoperability and processing data at the edge. Stay tuned!
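As a small, hypothetical footnote to the water district example above, the sketch below shows one very simple way historical flow data could be used to flag likely demand spikes ahead of time. The flow figures, seasonal window, and threshold are all made up for illustration; a real utility would use its own historian data and forecasting models.

```python
from statistics import mean

def flag_demand_spikes(daily_flow, season=7, horizon=7, spike_factor=1.25):
    """Naive seasonal forecast over historical daily flow readings.

    For each of the next `horizon` days, predict demand as the average of the
    last few same-weekday observations, and flag the day if that forecast
    exceeds the long-run average by `spike_factor`.
    """
    overall_avg = mean(daily_flow)
    flagged = []
    for i in range(horizon):
        weekday = (len(daily_flow) + i) % season
        history = daily_flow[weekday::season][-4:]  # last ~4 same-weekday values
        forecast = mean(history)
        if forecast > spike_factor * overall_avg:
            flagged.append((i + 1, round(forecast, 1)))
    return flagged

# Hypothetical daily flow (million gallons/day): weekends run noticeably higher.
flow = [20, 21, 20, 22, 23, 31, 33] * 4
print(flag_demand_spikes(flow))  # flags the two high-demand weekend days ahead
```

Even a naive seasonal baseline like this makes the point: once SCADA data is replicated into the IT environment, standard analytics tooling can turn it into planning signals.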
Pittsburgh PUG - Launch Party @ River's Casino Drum Bar

You're Invited! PUG - Time to Launch: Celebrating Pure Storage + Nutanix + Expedient

Join us for a special Pure User Group event as we celebrate two of the industry's most loved technologies coming together: Pure Storage and Nutanix are joining forces in a powerful new way. Even better: Expedient becomes the FIRST Cloud Service Provider to bring this combined solution to market. This event is all about bringing our Pittsburgh-area community together to learn, connect, and celebrate a major milestone in the hybrid cloud and on-prem cloud ecosystem.

What You'll Experience
- A deep dive into the new Pure Storage + Nutanix integration
- How Expedient is delivering it as a fully managed cloud service
- Real-world use cases for cloud-smart modernization
- Customer-driven conversation, not vendor slides
- Networking with peers, experts, and the local PUG community
- Food, drinks, and launch-party fun

Why This Matters
This three-way partnership brings customers:
- NVMe-fast, always-on performance
- Effortless scalability and hybrid cloud freedom
- A cloud service built for simplicity and resiliency
- Lower operational overhead: no firefighting, no forklift upgrades

It's the stack that "just works," so your teams can focus on innovation instead of maintenance.
Announcing the General Availability of Purity//FA 6.7.7 LLR

We are happy to announce the general availability of 6.7.7, the eighth release in the 6.7 Long-Life Release (LLR) line! This release line is based on the feature set introduced in 6.6, providing long-term consistency in capabilities, user experience, and interoperability, with the latest fixes and security updates. For more detailed information about the bug fixes and security updates included in each release, see the release notes.

UPGRADE RECOMMENDATIONS AND EOL SCHEDULE
We recommend that customers already running 6.7 who are looking for the latest fixes and updates upgrade to this long-life release. Customers looking for a newer feature set, including Fusion fleet management, should consider an upgrade to the 6.9 LLR. When possible, customers should make use of Self-Service Upgrades (SSU) to ease the process of planning and executing non-disruptive Purity upgrades for their fleet. The 6.7 LLR line is planned for development through October 2027.

HARDWARE SUPPORT
This release is supported on the following FlashArray platforms: FA//X (R2, R3, R4), FA//C (R1, R3, R4), FA//XL (R1), FA//E, and Pure Storage Cloud Dedicated. The PSC Dedicated release may take up to a week to be available on the AWS Marketplace and Azure Marketplace. Note: DFS software version 2.2.4 is recommended with this release.

LINKS AND REFERENCES
- Purity//FA 6.7 Release Notes
- Purity//FA 6.6/6.7 Feature Content
- Self-Service Upgrades
- Purity//FA Release and End-of-Life Schedule
- FlashArray Hardware and End-of-Support
- DirectFlash Shelf Software Compatibility Matrix
- FlashArray Capacity and Feature Limits
Ask Us Everything about FlashArray File Services

Got questions about managing files with FlashArray? Get answers.

November 21, 2025 | 09:00am PT • 12:00pm ET

In this month's episode of Ask Us Everything, we're talking file storage, without the headaches. We'll start with a quick overview of how Pure Storage FlashArray™ File Services simplify file management, helping you get up and running quickly, manage data stress-free, scale with flexibility, and protect your files with confidence.

Then it's your turn. Bring your questions about capabilities like intuitive management, snapshots and replication, or seamless virtualization support. Our experts will help you cut through the complexity of legacy systems and build confidence in keeping your files available, secure, and easy to manage.

Reserve your seat!

Ask a question for your chance to win: The first 10 eligible Pure Storage customers to submit a question during the live webinar will receive one (1) Pure Storage Customer Appreciation Kit (approximate retail value: $65). Limit one kit per customer. Offer valid only during the live event and while supplies last. See Terms and Conditions.
Understanding Deduplication Ratios

It's important to understand where deduplication ratios come from in relation to backup applications and data storage. Deduplication prevents the same data from being stored again, lowering the data storage footprint. When hosting virtual environments on platforms like FlashArray//X™ and FlashArray//C™, you can see tremendous amounts of native deduplication due to the repetitive nature of those environments. Backup applications and targets have a different makeup. Even still, deduplication ratios have long been a talking point in the data storage industry and continue to be a decision point and factor in buying cycles. Data Domain pioneered this tactic to overstate its effectiveness, leaving customers thinking the vendor's appliance must have a magic wand to reduce data by 40:1. I wanted to take the time to explain how deduplication ratios are derived in this industry and the variables to look for in figuring out exactly what to expect in terms of deduplication and data footprint.

Let's look at a simple example of a data protection scenario.

Example: A company has 100TB of assorted data it wants to protect with its backup application. The necessary and configured agents go about doing the intelligent data collection and send the data to the target. Initially, and typically, the application will leverage both software compression and deduplication. Compression by itself will almost always yield a decent amount of data reduction. In this example, we'll assume 2:1, which means the first data set goes from 100TB to 50TB. Deduplication doesn't usually do much data reduction on the first baseline backup. Sometimes there are some efficiencies, like the repetitive data in virtual machines, but for the sake of this generic example, we'll leave it at 50TB total.

So, full backup 1 (baseline): 50TB

Now, there are scheduled incremental backups that occur daily from Monday to Friday. Let's say these daily changes are 1% of the aforementioned data set. Each day, then, there would be 1TB of additional data stored. 5 days at 1TB = 5TB. Add the 2:1 compression, and you have an additional 2.5TB added. A 50TB baseline plus 2.5TB of unique blocks means a total of 52.5TB of data stored.

Let's check the deduplication rate now: 105TB of logical data protected (the 100TB baseline plus 5TB of changes) / 52.5TB stored = 2x

You may ask: "Wait, that 2:1 is really just the compression? Where is the deduplication?" Great question, and the reason why I'm writing this blog. Deduplication prevents the same data from being stored again. With a single full backup and incremental backups, you wouldn't see much more than just the compression. Where deduplication measures impact is in the assumption that you would be sending duplicate data to your target. This is usually discussed as data under management.

Data under management is the logical data footprint of your backup data, as if you were regularly backing up the entire data set, not just the changes, without deduplication or compression. For example, let's say we didn't schedule incremental backups but scheduled full backups every day instead. Without compression/deduplication, the data load would be 100TB for the initial baseline and then the same 100TB plus the daily growth:

Day 0 (baseline): 100TB
Day 1 (baseline + changes): 101TB
Day 2 (baseline + changes): 102TB
Day 3 (baseline + changes): 103TB
Day 4 (baseline + changes): 104TB
Day 5 (baseline + changes): 105TB
Total, with no compression/deduplication: 615TB

This 615TB total is data under management.
Now, if we look at our actual, post-compression/post-dedupe number from before (52.5TB), we can figure out the deduplication impact: 615TB / 52.5TB = 11.714x

Looking at this over a 30-day period, you can see how the dedupe ratios can get really aggressive. For example: 100TB x 30 days = 3,000TB, plus 1TB x 30 days of changes = 3,030TB of data under management. With 65TB actually stored (the 50TB baseline plus 30 days of 0.5TB compressed daily changes), that's 3,030TB / 65TB = 46.62x dedupe ratio.

In summary, for 100TB with a 1% change rate over 1 week:
- Full backup + daily incremental backups = 52.5TB stored, and a 2x DRR
- Full daily backups = 52.5TB stored, and an 11.7x DRR

That is how deduplication ratios really work: it's a fictional function of "what if dedupe didn't exist, but you stored everything on the disk anyway" scenarios. They're a math exercise, not a reality exercise. Front-end data size, daily change rate, and retention are the biggest variables to look at when sizing or understanding the expected data footprint and the related data reduction/deduplication impact.

In our scenario, we're looking at one particular data set. Most companies will have multiple data types, and there can be even greater redundancy when accounting for full backups across those as well. So while it matters, consider that a bonus.
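The arithmetic above is easy to reproduce. Below is a minimal sketch that recomputes the one-week worked example (100TB front end, 2:1 compression, 1% daily change) using the same simplifying assumptions as this post; it is a math illustration, not a model of any particular backup product.

```python
def dedup_example(front_end_tb=100.0, change_rate=0.01, days=5, compression=2.0):
    """Reproduce the one-week worked example: stored footprint vs. data under management."""
    daily_change_tb = front_end_tb * change_rate                 # 1 TB/day at 1%

    # Physically stored: compressed baseline plus compressed daily unique blocks.
    stored_tb = (front_end_tb + daily_change_tb * days) / compression        # 52.5 TB

    # Logical front-end data actually protected (one full + incrementals).
    logical_incremental_tb = front_end_tb + daily_change_tb * days           # 105 TB

    # Data under management: as if a growing full copy were kept every day.
    data_under_mgmt_tb = sum(front_end_tb + daily_change_tb * d
                             for d in range(days + 1))                       # 615 TB

    return {
        "stored_tb": stored_tb,
        "drr_incremental": logical_incremental_tb / stored_tb,               # ~2.0x
        "drr_daily_fulls": data_under_mgmt_tb / stored_tb,                   # ~11.7x
    }

print(dedup_example())
```

Changing front_end_tb, change_rate, or days (retention) shows how quickly the reported ratio moves, which is exactly why those three variables matter most when sizing.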
Who's using Pure Protect?

Hey everyone,

Just wondering if anyone else is using Pure Protect yet. We have gone through the quick start guide and have a VMware-to-VMware configuration set up. We have configured our first policy and group using a test VM, but it seems to be stuck in the protection phase. I would be very interested to hear what others have seen or experienced.

-Charles
Announcing the General Availability of Purity//FA 6.9.2

We are happy to announce the general availability of 6.9.2, the third release of the 6.9 Long-Life Release (LLR) line and the thirteenth release based on the 6.8 code line! This LLR line provides long-term maintenance of the complete feature set introduced in the 6.8 Feature Release line, including Fusion, with consistency in capabilities, user experience, and interoperability. This release includes support for R5 controllers for FlashArray //X and //C, bringing performance, density, and data protection improvements to the 6.9 LLR line. For more detailed information about the bug fixes and security updates included in each release, see the release notes.

UPGRADE RECOMMENDATIONS AND EOL SCHEDULE
Customers who are looking for long-term maintenance of the complete 6.8 feature set are encouraged to upgrade to the 6.9 LLR. Customers who are looking for continued delivery of all the newest capabilities as soon as they are available should upgrade to the 6.10 Feature Release line. When possible, customers should make use of Self-Service Upgrades (SSU) to ease the process of planning and executing non-disruptive Purity upgrades for their fleet. The 6.9 LLR line is planned for development through June 2028.

HARDWARE SUPPORT
This release is supported on the following FlashArray platforms: FA//X (R3, R4, R5), FA//C (R3, R4, R5), FA//XL (R1, R5), FA//E, FA//RC20, and Pure Storage Cloud Dedicated (PSCD) for Azure and AWS. The PSCD release may take up to a week to be available on the AWS Marketplace and Azure Marketplace. Note: DFS software version 2.2.5 is recommended with this release.

LINKS AND REFERENCES
- Purity//FA 6.9 Release Notes
- Self-Service Upgrades
- Purity//FA Release and End-of-Life Schedule
- FlashArray Hardware and End-of-Support
- DirectFlash Shelf Software Compatibility Matrix
- FlashArray Capacity and Feature Limits
- FlashArray Feature Interoperability Matrix
Ask Us Everything Recap: Making Purity Upgrades Simple

At our recent Ask Us Everything session, we put a spotlight on something every storage admin has an opinion about: software upgrades. Traditionally, storage upgrades have been dreaded: late nights, service windows, and the fear of downtime. But as attendees quickly learned, Pure Storage Purity upgrades are designed to be a very different experience.

Our panel of Pure Storage experts included our host Don Poorman, Technical Evangelist, and special guests Sean Kennedy and Rob Quast, Principal Technologists. Here are the questions that sparked the most conversation, and the insights our panel shared.

"Are Purity upgrades really non-disruptive?"
This one came up right away, and for good reason. Many admins have scars from upgrade events at other vendors. Pure experts emphasized that non-disruptive upgrades (NDUs) are the default. With thousands performed in the field, even for mission-critical applications, upgrades run safely in the background. Customers don't need to schedule middle-of-the-night windows just to stay current.

"Do I need to wait for a major release?"
Attendees wanted to know how often they should upgrade, and whether "dot-zero" releases are safe. The advice: don't wait too long. With Pure's long-life releases (like Purity 6.9), you can stay current without chasing every new feature release. And because Purity upgrades are included in your Evergreen subscription, you're not paying extra to get value; you just need to install the latest version. Session attendees found the slide illustrating the different kinds of Purity releases especially helpful.

"How do self-service upgrades work?"
Admins were curious about how much they can do themselves versus involving Pure Storage support. The good news: self-service upgrades are straightforward through Pure1, but you're never on your own. Pure Technical Services knows that you're running an upgrade, and if an issue arises, you're automatically moved to the front of the queue. If you want a co-pilot, Pure Storage support can of course walk you through it live. Either way, the process is fast, repeatable, and built for confidence. Upgrading your Purity version has never been easier, now that Self-Service Upgrades lets you modernize on your own schedule.

"Why should I upgrade regularly?"
This is where the conversation shifted from fear to excitement. Staying current doesn't just keep systems secure; it unlocks new capabilities like:
- Pure Fusion™: a unified, fleet-wide control plane for storage.
- FlashArray™ Files: modern file services, delivered from the same trusted platform.
- Ongoing performance, security, and automation enhancements that come with every release.

One attendee summed it up perfectly: "Upgrading isn't about fixing problems; it's about getting new toys."

The Takeaway
The biggest lesson from this session? Purity upgrades aren't something to fear; they're something to look forward to. They're included with your Evergreen subscription, they don't disrupt your environment, and they unlock powerful features that make storage easier to manage. So if you've been putting off your next upgrade, take a fresh look. Chances are, Fusion, Files, or another feature you've been waiting for is already there; you just need to turn it on.

👉 Want to keep the conversation going? Join the discussion in the Pure Community and share your own upgrade tips and stories. Be sure to join our next Ask Us Everything session, and catch up with past sessions here!
Non-disruptive DR Testing with SQL Server and ActiveDR

October 28 | Register Now!

Ensuring data recovery during an outage is one of the most critical responsibilities when managing SQL Server. A disaster recovery strategy is essential. But the real question is: how do you know your plan will actually work when it matters most? The only way is through consistent, rigorous testing.

In this expert-led demo, you'll discover how Pure Storage ActiveDR™ technology can:
- Simplify disaster recovery planning with continuous replication and near-instant failover
- Validate your strategy through seamless, non-disruptive testing
- Minimize downtime and risk by ensuring your SQL Server data is always protected and recoverable

Register Now!
Layered Cyber Resiliency: Superna Data Security Pit Stop

October 15 | Register now!

Safeguard your organisation's data against sophisticated ransomware, insider threats, and compliance risks with Superna Data Security Essentials, seamlessly integrated with Pure Storage FlashBlade and FlashArray. Join our upcoming webinar to explore how Superna cyberstorage delivers active, real-time protection directly at the data layer, going far beyond conventional security approaches.

Discover a hands-on demonstration with actionable insights on:
- Real-time ransomware and unauthorised access detection with continuous monitoring of file activity, catching threats before they inflict damage
- Automated alerts and instantaneous response actions, including account lockout and integration with Active Directory, to swiftly halt attacks and prevent data compromise
- Lightweight, scalable deployment: no extra hardware or infrastructure required. Superna overlays seamlessly on your Pure Storage environment for instant protection and uninterrupted performance
- Comprehensive auditing, granular security policies, and forensic reporting, supporting regulatory compliance and accelerating incident investigations
- Seamless integration with SIEM, SOAR, and IT operations platforms for unified security management and rapid incident response

Whether you operate in AI, analytics, healthcare, financial services, or any data-driven sector, this session demonstrates how Superna equips your Pure Storage infrastructure to withstand evolving threats and meet today's compliance demands.

Register now to stay ahead of threats, see real-world attack containment demos, and get expert answers on building true cyber resilience with Superna and Pure Storage.