OT: The Architecture of Interoperability
In our previous post, we explored the fundamental divide between Information Technology (IT) and Operational Technology (OT). We established that while IT manages data and applications, OT controls the physical heartbeat of our world, from factory floors to water treatment plants. In this post we dive deeper into the bridge that connects them: interoperability. As Industry 4.0 and the Internet of Things (IoT) accelerate, the "air gap" that once separated these domains is evolving. For modern enterprises, the goal isn't just to have IT and OT coexist, but to have them communicate seamlessly. Whether the use case is security, real-time quality control, or predictive maintenance, to name a few, interoperability becomes the critical engine for operational excellence.

The Interoperability Architecture

Interoperability is more than just connecting cables; it's about creating a unified architecture where data flows securely between the shop floor and the "top floor." In legacy environments, OT systems (like SCADA and PLCs) often run on isolated, proprietary networks that don't speak the same language as IT's cloud-based analytics platforms. To bridge this, a robust interoperability architecture is required. This architecture must support:

Industrial Data Lake: A single storage platform that can handle block, file, and object data is essential for bridging the gap between IT and OT. This unified approach prevents data silos by allowing proprietary OT sensor data to coexist on the same high-performance storage as IT applications (such as ERP and CRM). The benefit is a high-performance industrial data lake where OT and IT data from various sources can be streamed directly, minimizing the need for data movement, a critical efficiency gain.

Real-Time Analytics: OT sensors continuously monitor machine conditions, including vibration, temperature, and other critical parameters, generating real-time telemetry data. An interoperable architecture built on high-performance flash storage enables instant processing of this data stream. By integrating IT analytics platforms with predictive algorithms, the system identifies anomalies before they escalate, accelerating maintenance response, optimizing operations, and streamlining exception handling. This approach reduces downtime, lowers maintenance costs, and extends overall asset life (a minimal sketch of this pattern appears below).

Standards-Based Design: As outlined in recent cybersecurity research, modern OT environments require datasets that correlate physical process data with network traffic logs to detect anomalies effectively. An interoperable architecture facilitates this by centralizing data for analysis without compromising the security posture. IT/OT convergence also requires a platform capable of securely managing OT data, often through IT standards. An API-first design allows the entire platform to be built on robust APIs, enabling IT to easily integrate storage provisioning, monitoring, and data protection into standard, policy-driven IT automation tools (e.g., Kubernetes, orchestration software).

Pure Storage addresses these interoperability requirements with the Purity operating environment, which abstracts the complexity of the underlying hardware and provides a seamless, multiprotocol experience (NFS, SMB, S3, FC, iSCSI). This ensures that whether data originates from a robotic arm or a CRM application, it is stored, protected, and accessible through a single, unified data plane.
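To make the real-time analytics idea concrete, here is a minimal sketch of the pattern: pull a batch of vibration telemetry from an S3-compatible bucket and flag readings whose rolling z-score drifts too far from recent history. This is an illustration only, not Pure Storage tooling or a production anomaly detector; the endpoint, bucket, key, window size, and threshold below are hypothetical placeholders.

```python
# Minimal sketch only: rolling z-score anomaly check over vibration telemetry.
# The endpoint, bucket, key, and threshold are hypothetical placeholders.
import json
import statistics

import boto3  # works against any S3-compatible endpoint, e.g. an object bucket in the data lake

s3 = boto3.client("s3", endpoint_url="https://object.example.internal")


def load_vibration_series(bucket: str, key: str) -> list[float]:
    """Fetch a JSON array of vibration readings (mm/s) from object storage."""
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    return json.loads(body)


def flag_anomalies(readings: list[float], window: int = 60, z_limit: float = 3.0):
    """Yield (index, value) pairs whose rolling z-score exceeds z_limit."""
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.fmean(recent)
        stdev = statistics.pstdev(recent) or 1e-9  # guard against a flat window
        if abs((readings[i] - mean) / stdev) > z_limit:
            yield i, readings[i]


if __name__ == "__main__":
    series = load_vibration_series("ot-telemetry", "press-07/vibration/2024-06-01.json")
    for idx, value in flag_anomalies(series):
        print(f"sample {idx}: {value:.2f} mm/s looks anomalous")
```

In practice the same check could run against a streaming source; the point is that once OT telemetry and IT analytics share a storage platform, this kind of analysis becomes a few lines of code rather than a data-movement project.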
Real-World Application: A Large Regional Water District

Consider a large regional water district, a major provider serving millions of residents. In an environment like this, maintaining water quality and service reliability is a 24/7 mission-critical OT function. Its infrastructure relies on complex SCADA systems to monitor variables like flow rates, tank levels, and chemical compositions across hundreds of miles of pipelines and treatment facilities. By adopting an interoperable architecture, an organization like this can break down the silos between its operational data and its IT capabilities. Instead of SCADA data remaining locked in a control room, it can be securely replicated to IT environments for long-term trending and capacity planning. For instance, historical flow data combined with predictive analytics can help forecast demand spikes or identify aging infrastructure before a leak occurs. This convergence transforms raw operational data into actionable business intelligence, ensuring reliability for the communities it serves.

Why We Champion Compliance and Governance

Opening up OT systems to IT networks can introduce new risks. In the world of OT, "move fast and break things" is not an option; reliability and safety are paramount. This is why Pure Storage wraps interoperability in a framework of compliance and governance, including:

FIPS 140-2 Certification & Common Criteria: We utilize FIPS 140-2 certified encryption modules and have achieved Common Criteria certification.

Data Sovereignty: Our architecture includes built-in governance features like Always-On Encryption and rapid data locking to ensure compliance with domestic and international regulations, protecting sensitive data regardless of where it resides.

Compliance: Pure Fusion delivers policy-defined storage provisioning, automating deployments with specified requirements for tags, protection, and replication.

By embedding these standards directly into the storage array, Pure Storage allows organizations to innovate with interoperability while maintaining the security posture that critical OT infrastructure demands.

Next in the series: We will explore IT/OT interoperability further, along with processing data at the edge. Stay tuned!
Healthcare AI: Why the "Build Reflex" is Killing Your ROI

In the article For Healthcare Leaders, Build vs. Buy Determines ROI on Enterprise AI, Matthew Crowson, MD, of Wolters Kluwer argues that healthcare organizations must abandon their traditional "build reflex" for AI solutions, citing a 95% failure rate. This habit is hard to square with the healthcare system's tight margins and the competitive AI talent market. Crowson advocates a shift to a hybrid partnership model in which the organization "buys" a vendor's customizable platform. This model is crucial because it addresses trust issues by ensuring that sensitive patient data (PHI) remains secure behind the facility's firewall. He stresses that organizations should first focus on problem diagnosis, be realistic about their in-house talent, and ensure their data foundation is clean before engaging any vendors. This pragmatic approach is essential for achieving a positive ROI on enterprise AI.

Community Question: What do you think? Is your organization currently struggling with the build vs. buy decision? Let's discuss! Click through to read the entire article above and let us know your thoughts in the comments below!
Understanding Deduplication Ratios

It's important to understand where deduplication ratios come from in relation to backup applications and data storage. Deduplication prevents the same data from being stored again, lowering the data storage footprint. When hosting virtual environments on platforms like FlashArray//X™ and FlashArray//C™, you can see tremendous amounts of native deduplication due to the repetitive nature of these environments. Backup applications and targets have a different makeup. Even so, deduplication ratios have long been a talking point in the data storage industry and continue to be a decision point and factor in buying cycles. Data Domain pioneered this tactic to overstate its effectiveness, leaving customers thinking the vendor's appliance must have a magic wand to reduce data by 40:1. I wanted to take the time to explain how deduplication ratios are derived in this industry and the variables to look for in figuring out exactly what to expect in terms of deduplication and data footprint.

Let's look at a simple example of a data protection scenario.

Example: A company has 100TB of assorted data it wants to protect with its backup application. The necessary and configured agents go about doing the intelligent data collection and send the data to the target. Initially, and typically, the application will leverage both software compression and deduplication. Compression by itself will almost always yield a decent amount of data reduction. In this example, we'll assume 2:1, which would mean the first data set goes from 100TB to 50TB. Deduplication doesn't usually do much data reduction on the first baseline backup. Sometimes there are some efficiencies, like the repetitive data in virtual machines, but for the sake of this generic example scenario, we'll leave it at 50TB total.

So, full backup 1 (baseline): 50TB

Now, there are scheduled incremental backups that occur daily from Monday to Friday. Let's say these daily changes are 1% of the aforementioned data set. Each day, then, there would be 1TB of additional data stored. 5 days at 1TB = 5TB. Let's add the compression in to reduce that 2:1, and you have an additional 2.5TB added. The 50TB baseline plus 2.5TB of unique blocks means a total of 52.5TB of data stored. Let's check the deduplication rate now: 105TB/52.5TB = 2x.

You may ask: "Wait, that 2:1 is really just the compression? Where is the deduplication?" Great question, and the reason why I'm writing this blog. Deduplication prevents the same data from being stored again. With a single full backup and incremental backups, you wouldn't see much more than just the compression. Where deduplication measures impact is in the assumption that you would be sending duplicate data to your target. This is usually discussed as data under management. Data under management is the logical data footprint of your backup data, as if you were regularly backing up the entire data set, not just changes, without deduplication or compression.

For example, let's say we didn't schedule incremental backups but scheduled full backups every day instead. Without compression/deduplication, the data load would be 100TB for the initial baseline and then the same 100TB plus the daily growth:

Day 0 (baseline): 100TB
Day 1 (baseline+changes): 101TB
Day 2 (baseline+changes): 102TB
Day 3 (baseline+changes): 103TB
Day 4 (baseline+changes): 104TB
Day 5 (baseline+changes): 105TB
Total, if no compression/deduplication: 615TB

This 615TB total is data under management.
Now, if we look at our actual, post-compression/post-dedupe number from before (52.5TB), we can figure out the deduplication impact: 615TB/52.5TB = 11.714x.

Looking at this over a 30-day period, you can see how the dedupe ratios can get really aggressive. For example:

100TB x 30 days = 3,000TB + (1TB x 30 days) = 3,030TB
3,030TB/65TB (actual data stored) = 46.62x dedupe ratio

In summary, for 100TB with a 1% change rate over 1 week:

Full backup + daily incremental backups = 52.5TB stored, and a 2x DRR
Full daily backups = 52.5TB stored, and an 11.7x DRR

That is how deduplication ratios really work: they're a function of fictional "what if dedupe didn't exist, but you stored everything on the disk anyway" scenarios. They're a math exercise, not a reality exercise. Front-end data size, daily change rate, and retention are the biggest variables to look at when sizing or understanding the expected data footprint and the related data reduction/deduplication impact. In our scenario, we're looking at one particular data set. Most companies will have multiple data types, and there can be even greater redundancy when accounting for full backups across those as well. So while it matters, consider that a bonus.
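If you want to play with the variables yourself, here is a small Python sketch of the same arithmetic. It simply reproduces the figures above (100TB front end, 2:1 compression, 1% daily change, and the simplified 30-day data-under-management figure); swap in your own front-end size, change rate, and retention to see how the ratio moves.

```python
# Recap of the arithmetic above: data actually stored vs. "data under management",
# using the same assumptions (100TB front end, 2:1 compression, 1% daily change).

def stored_tb(front_end_tb: float, compression: float, change_rate: float, days: int) -> float:
    """Unique data written: compressed baseline plus compressed daily changes."""
    baseline = front_end_tb / compression
    deltas = days * front_end_tb * change_rate / compression
    return baseline + deltas

# One week: full baseline plus five daily incrementals
stored_week = stored_tb(100, 2.0, 0.01, 5)            # 52.5TB actually stored
dum_week = sum(100 + d for d in range(6))              # 615TB if every daily full were kept
print(f"1 week:  {dum_week}TB / {stored_week}TB = {dum_week / stored_week:.1f}x dedupe ratio")

# 30 days of daily fulls, using the simplified 100TB x 30 + 1TB x 30 figure above
stored_month = stored_tb(100, 2.0, 0.01, 30)           # 65TB actually stored
dum_month = 100 * 30 + 1 * 30                          # 3,030TB under management
print(f"30 days: {dum_month}TB / {stored_month}TB = {dum_month / stored_month:.2f}x dedupe ratio")
```

Nothing magic is happening here: the ratio grows because the stored footprint stays nearly flat while the fictional "everything kept in full" numerator grows with every retained backup.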
Minutes to Meltdown

Join Pure Storage and Commvault for... 💥 Minutes to Meltdown | Manchester

This isn't your usual cyber security event! You'll take part in a LIVE simulated cyber-attack. Experience firsthand how security, infrastructure, legal, leadership, and others must unite when the worst happens. Gain insights into how your business can be better prepared and learn from those who have been through this before.

Register here: 👉 https://discover.commvault.com/event-minutes-to-meltdown-manchester-with-pure-storage-registration.html
Flashcrew Manchester

🤝 Flashcrew Pure Usergroup | Manchester | For our amazing customers!

Connect with fellow Pure users and dive deep into the //Accelerate announcements. Learn how to extract even more value from the Pure ecosystem and get your technical questions answered by the experts.

Register here: https://info.purestorage.com/2025-Q2EMEA-UKREPLHCOFY26Q3-FlashCrew-Manchester-LP_01---Registration-Page.html

We also open these up to non-customers interested in Pure, helping you learn from those already benefitting from the Pure Enterprise Data Platform. Please DM me if you would like an invite.
Video Storage: Less Cost, More Reliability

Learn how a leading MSO switched from cloud-based video storage to Pure Storage FlashBlade//E and saved 87% on cloud costs, all while getting a more reliable service and a better customer experience via reduced latency. Read all about it.
How can I help?

Hey Pure Community,

I'm Brian Heck, and I get to be your Senior Systems Engineer if you're in Alaska, Washington, or Oregon (yup, that's me—the person behind the emails!). I'm part of the SLED team at Pure, which just means I spend my days helping schools, local governments, and all sorts of public organizations, including Tribal organizations, make the most of their data.

I've always believed the secret sauce at Pure isn't just our tech (though I'm pretty biased about how rock solid flash storage is). It's actually the people on our team and in this community. There's zero ego, just lots of curiosity and a drive to solve real problems for real folks. So if you ever just want to ask a question, vent about a challenge, or swap stories about your favorite upgrades, trust me, you're speaking my language.

If you're wondering:

What's it really like working at Pure (spoiler: the culture here rocks)
What new tech or trends I'm excited about in storage, cloud, or AI
How to get the most out of your SE team (or what the heck SEs actually do behind the scenes)

…please shout! I love sharing tips, diving into rabbit holes, and figuring out better ways to do things together. I'm always up for good conversation, honest feedback, or brainstorming sessions, whether it's in the forums or over coffee (virtual or real).

This community means a lot to me, and I'd really love to hear your stories, see your questions, and learn from your experiences. I've been a part of the VMUG for quite a while, so things like this are my jam. I'll do my best to share the good stuff—tech advice, a peek at life at Pure, and maybe a few dad jokes if you're lucky.

How can I help? What's on your mind?
AI is changing everything in Telco

As in every industry, AI is having a massive impact on telecom. But what do your peers think about it? We partnered with NVIDIA to sponsor a research report about AI in the telecom space. It includes expert opinions plus thoughts from AI leaders at MetTel, Telus, and Verizon. There's also a very informative webinar featuring Pure Storage Telecom Field CTO Patrick Lopez and Chris Penrose, VP, Head of Telecoms Business Development at NVIDIA. All the links are available in this blog post.
Join us for TechSummit: File Day Atlanta!

Looking to tackle today's toughest infrastructure challenges head-on? Join us at TechSummit, an exclusive, half-day technical event for IT leaders, architects, and data professionals like you.

What we'll cover:

Enterprise Data Cloud (EDC) - Get an inside look at how a unified, intelligent data platform brings agility, resilience, and performance to any workload.
Real-time Enterprise File - See how a platform-driven approach to unstructured data speeds up AI, analytics, and mission-critical applications.
AI - Learn the benefits of AI-ready infrastructure designed and optimized to support the evolving needs of AI applications and development workflows.
Cyber Resilience - Discover the advantages of a proactive, layered, operationally viable cyber resilience strategy to not just survive a cyberattack, but thrive after one.

It won't be all business. We'll also make time for fun. After the insightful discussions and learning, we'll unwind together at a relaxed happy hour.

Speakers:

Joey Clark | Principal Technologist
Josh Lay | Field Solutions Architect - AI / Analytics / HPC
Antonia Abu Matar | Field Solutions Architect
Drew Kessel | Field Solutions Architect