Microsoft Azure: Get All-flash Storage for Azure Local
May 21 | Register Now!

Azure Local now supports on-premises storage, and Everpure is a certified solution. It offers a new alternative to legacy virtualization platforms, using familiar Windows and Hyper-V technology.

Why add local storage to Azure?

- Delivers data sovereignty as all data remains on premises
- Ultra-low latency storage for your most demanding workloads, such as databases and AI
- Separate compute from storage, to allow granular scaling and avoid over-spending

Join our webinar to learn how easy it is to deploy Everpure FlashArray™ with Azure Local and how it can completely transform your virtualization journey. Register Now!

TechSummit: Seattle
May 14 | Register Now!

Looking to tackle today’s toughest infrastructure challenges head on? Join us at TechSummit, an exclusive, half-day technical event for IT leaders, architects, and data professionals like you.

What we’ll cover:

- Enterprise Data Cloud (EDC) - Get an inside look at how a unified, intelligent data platform brings agility, resilience, and performance to any workload.
- AI - Learn the benefits of AI-ready infrastructure designed and optimized to support the evolving needs of AI applications and development workflows.
- Cyber Resilience - Discover the advantages of a proactive, layered, operationally viable cyber resilience strategy to not just survive a cyberattack, but thrive after one.
- Virtualization/Cloud - Explore ongoing disruptions in the server virtualization market and evaluate whether you should consider cloud-managed VMware solutions or take the leap into cloud native and containers.

It won’t be all business. We’ll also make time for fun. After the insightful discussions and learning, we’ll unwind together at a relaxed happy hour.

Spots are limited, so register now to learn more and save your seat. Register Now!

When Data Becomes the Mission
Why state and local government, cities, and research universities are reorganizing infrastructure around data itself

If you remember one thing from this article: infrastructure used to organize around applications. Increasingly, it now organizes around data.

If you spend enough time around enterprise infrastructure, you start to notice something about how conversations begin. Someone asks about storage. Not in a philosophical way. In a practical way. How much capacity do we have left? What’s the refresh cycle? Is this staying on premises or moving to cloud? What’s the backup strategy?

For years, that framing made perfect sense. Infrastructure was the foundation, and the job of infrastructure teams was to keep the lights on and the foundation solid. But lately, in conversations with customers across state and local government, municipalities, cities, and universities, something feels different. Because eventually someone says something like this:

“We have this data… but we can’t actually use it.”

And that is when the real conversation begins.

Why the public sector reveals the truth about data

There’s a perspective I heard recently that stuck with me. The public sector isn’t a niche market. It’s a microcosm of the entire enterprise technology world.

At first that sounds counterintuitive. The stereotype is that government IT has been quietly living under a rock since the previous century, next to a beige server and a stack of COBOL manuals. But if you look closely, the opposite is true. State agencies, cities, and research institutions operate in environments that combine nearly every architectural challenge the private sector faces — all at once:

- Massive datasets
- Highly distributed users
- Strict security requirements
- Long retention policies
- Global collaboration
- And an absolute requirement that systems remain available when people need them most

In other words, the public sector experiences the full spectrum of data challenges simultaneously. If you want to stress-test a data architecture, put it inside government.

Think about it. A state government may run thousands of systems across dozens of agencies, each serving different missions but increasingly sharing the same underlying data. A city manages infrastructure at the physical edge of society — traffic, water, SCADA, emergency services — where real-time decisions depend on accurate information. Universities generate some of the largest research datasets on earth while collaborating across institutions and countries.

Each of these environments demands something slightly different from infrastructure. But they all demand the same thing from data: Security. Integrity. Mobility. Context. Availability.

And when those requirements collide in one environment, something interesting happens. The solutions that work there tend to work everywhere.

A laboratory for the modern data enterprise

This is why many technology leaders quietly view the public sector as something more than a vertical market. It’s a laboratory for enterprise-scale data architecture.

If a platform can operate in a world where:

- sensitive personal data must remain protected
- systems span thousands of locations
- regulatory oversight is constant
- and uptime has real public consequences

…then that architecture will almost certainly succeed in commercial environments. Banks, manufacturers, healthcare providers, and global enterprises face the same challenges. Just rarely all at once. Government simply compresses those problems into a single environment.
Solve the data problem for government, and you solve it for the enterprise.

That’s one reason the shift toward data-centric platforms is becoming so important. When organizations treat infrastructure as a place to store files, they solve only a small part of the problem. But when they treat data as the central operational asset — something that must be understood, governed, protected, and made usable across environments — the architecture begins to look very different. And the public sector, with all its complexity, becomes the place where those architectures are tested first.

Which brings us back to the shift we’re seeing across the industry. Because once you start looking at infrastructure through the lens of data itself, something else becomes obvious. The center of gravity has moved. When multiple systems depend on the same dataset, the data becomes part of the operating foundation. And once that happens, moving it — or even restructuring it — becomes dramatically harder. Which brings us to the concept that explains a lot of what is happening right now.

The quiet physics of data gravity

The first time I heard the term “data gravity” wasn’t in a conference keynote or a vendor presentation. It was in 2015, when a recruiter from a startup called DataGravity (now Anomalo) reached out and asked if I would be interested in interviewing. At the time, the idea sounded fascinating — and slightly theoretical. The company was built around the premise that data itself was becoming the most valuable asset in the data center, and that infrastructure needed to understand the content, context, and behavior of data, not just store it. The name alone hinted at something deeper: the idea that as datasets grow, they start exerting a kind of gravitational pull on the systems around them.

Back then, it felt like an interesting concept. Today it feels like a description of reality.

The term “data gravity” itself was introduced by Dave McCrory back in 2010 (see Dave McCrory’s blog), and it turns out to be a remarkably accurate way to describe modern infrastructure.

The idea is simple. As datasets grow, they become harder to move. More applications depend on them. More workflows connect to them. More policies govern them. Eventually, the architecture starts organizing around the data itself. Not because someone designed it that way. Because the physics of large systems leave you very little choice.

Imagine trying to relocate a state Medicaid dataset that has been integrated with multiple benefit programs, identity verification systems, and fraud detection tools. Technically possible? Sure. Operationally trivial? Not even close. The larger and more interconnected the dataset becomes, the stronger its gravitational pull. Compute moves closer to the data. Applications move closer to the data. Infrastructure reorganizes around the data.

This is why organizations that once talked primarily about storage capacity are now talking about data platforms. The center of gravity moved.

When data stops being passive

The moment data becomes operational, everything changes. For years, most organizations treated data as something that accumulated quietly inside systems. Applications produced it. Storage kept it safe. Backups made sure it could be restored. But that model starts to break down when the data itself becomes part of real-time decision making.

You can see this most clearly in environments that generate enormous volumes of information.
Cities now run infrastructure that continuously streams telemetry — traffic sensors, utility meters, environmental monitors, emergency response platforms. A water meter that once reported usage once a month might now generate thousands of readings per year. A traffic system that once relied on static timing can adapt dynamically to real-time conditions. Each improvement creates more data. More importantly, it creates operational dependence on that data.

Universities experience the same phenomenon in a different form. Research environments produce extraordinary datasets across genomics, climate science, and artificial intelligence. Sequencing a single human genome generates roughly 100 gigabytes of raw data, and large research programs may create terabytes or petabytes of new information every week. In those environments the challenge isn’t just storing data. It’s feeding it fast enough to the systems that depend on it. Modern research clusters and GPU environments can process enormous volumes of information, but only if the underlying data pipeline keeps up. When storage cannot deliver data fast enough, expensive compute resources sit idle and discovery slows down.

And that reveals an important truth about modern infrastructure. When systems depend on data in real time, the question stops being where the infrastructure lives. The question becomes whether the data is available, trustworthy, and recoverable.

That distinction also explains why ransomware has become so disruptive to public institutions. Attackers understand that the real leverage is not the servers or the network. It’s the data. When access to data disappears, the services built on top of it disappear as well.

Which brings us back to the deeper shift happening across the industry. If data has become this central to operations, services, and discovery, then managing it as a passive byproduct of infrastructure is no longer enough. Infrastructure alone is no longer the strategic layer. The strategic layer is the data itself.

Organizations still need performance, availability, and resilience. Those fundamentals have not changed. What has changed is the expectation that infrastructure should also help organizations understand, govern, protect, and use their data more effectively. That is a very different problem than simply storing it. And it is the reason the conversation is evolving from storage management to data management platforms.

The real punch line

Public sector organizations didn’t set out to become data enterprises. Over time the data accumulated. Then the dependencies formed. And eventually everything started orbiting the datasets that mattered most.

Data has gravity. Data has risk. Data has power.

Infrastructure still matters. But increasingly, the real mission is something else entirely. The mission is the data.

Appreciate you reading.

Dmitry Gorbatov
© 2025 Dmitry Gorbatov | #dmitrywashere

What We Learned About ActiveCluster for File from the Latest “Ask Us Everything”
The newly announced ActiveCluster for file extends Everpure’s synchronous replication to unstructured workloads, so it was no surprise that the latest Ask Us Everything session drew a lot of attention. Attendees came ready with practical questions about how it works, where it fits, and what it could mean for real production environments. And host Don Poorman, Product Manager Quinn Summers, and Principal Technologist Russell Pope brought the Everpure answers.

The conversation showed just how this new approach can help modernize resiliency, mobility, and day-to-day operations. Let’s break down the biggest takeaways.

“Is This Just HA… or Something More?”

One of the most interesting threads came early: is ActiveCluster for file just another high availability solution? Short answer: no. Attendees pushed on this, and the response from Everpure’s team was clear—this is about data mobility and policy-driven management, not just surviving a failure. Instead of treating HA as a one-off configuration, ActiveCluster is designed to align storage behavior with business intent.

That shift matters. In traditional environments, HA is often bolted on and managed manually. Here, policies define things like performance, protection, and placement—and the system enforces them automatically across the fleet. For many in the session, that was a “wait, this is different” moment.

The Big Comparison: Legacy Replication vs. ActiveCluster

A standout question came from someone evaluating ActiveCluster as a replacement for legacy approaches like NetApp SVMDR. The discussion highlighted a key difference: granularity and consistency. Legacy solutions often replicate at a coarser level (think entire systems or large aggregates), which doesn’t always align with how applications are structured. ActiveCluster instead works at the realm level, where both data and configuration are synchronously mirrored. That means:

- No mismatched failover scope
- No rebuilding configs on the other side
- No “did we forget something?” during a failover

It’s a cleaner, more application-aligned model—and that resonated with the audience.

“What Actually Happens During a Failover?”

Attendees asked the right questions: Is failover automatic? What about DNS changes? How fast does it happen? The answers were refreshingly direct. In a stretched Layer 2 setup, failover is fully automatic and transparent—clients don’t even notice. In more complex network designs, there may be some redirection (like DNS updates), but the data is already in sync. And timing? The expectation is on the order of seconds (often under 10). This is a capability currently unmatched by any legacy storage competitor to Everpure.

There was also a lot of interest in how Everpure avoids split-brain scenarios. The mediator service—hosted by Everpure or deployed locally if needed—acts as a lightweight “tie breaker” during network partitions. No extra infrastructure to manage in most cases, and no guesswork about which side should stay active.

Simplicity Came Up… A Lot

If there was one theme that kept coming back, it was simplicity. One attendee asked about setup, and the answer was basically: it’s wizard-driven. That sparked a broader discussion about how legacy storage often assumes admins have time to relearn complex workflows. In reality, most teams are juggling multiple systems. The ability to stand up synchronous replication with a few guided steps—not scripts, not custom tooling—landed well. Even testing reflects that philosophy.
Instead of complex test procedures, the guidance was simple: pull cables, simulate real failures, and observe behavior. No artificial “test modes”—just real-world validation.

Data Mobility Is the Real Story

Another strong theme was mobility. ActiveCluster doesn’t just protect data—it enables you to move it. The “stretch and unstretch” workflow means datasets can be mirrored, shifted, and re-homed without disruption. That’s a big departure from traditional models, where moving data often means downtime, migration projects, or both. For teams thinking about workload placement, lifecycle management, or hybrid environments, this opens up new options.

Real-World Use Cases

The audience also pushed beyond file shares into real workloads:

- Financial trading and payment systems
- Healthcare imaging and research data
- VMware/NFS environments

The takeaway: if it’s mission-critical and file-based, it’s a candidate.

Final Thought: Even More on the Horizon

Even with some initial constraints (like starting with new file systems), the field feedback shared during the session was telling: customers are ready to adopt this early. Why? Because the core value—resiliency, mobility, and simplicity—is already there. And if the session proved anything, it’s that Everpure is building this in close collaboration with the community. The questions weren’t just answered—they’re shaping what comes next.

If you’re evaluating how to modernize file services, Everpure’s approach is definitely one to consider. Check out this and all our other Ask Us Everything sessions. And, keep the conversation going by jumping into the Everpure Community.

Spring is Calling, and so is Reds Baseball
I don't know about you, but I am more than ready for Spring, though I could definitely skip the rain. Wiping muddy dog paws after every walk is getting old! On the bright side, who else is ready for some Reds baseball?

I have a few exciting updates and resources to share with the community:

🚀 PUG Meeting Update

charles_sheppar and I are currently hard at work on the next PUG meeting. Details to come.

🛡️ Strengthening Your Cyber Resilience

Given the current geopolitical climate and the rise in cyber threats, now is the perfect time to audit your data protection. Features like SafeMode and Pure1 Security Assessments act as a resilient last line of defense. If you want to see these tools in action, we recently hosted an expert-led demo on building a foundation for cyber resilience. Watch the recording here: https://www.purestorage.com/video/webinars/the-foundations-of-cyber-resilience/6389889927112.html

Questions? Reach out to your Everpure SE or partner for a deeper dive.

📅 Upcoming Events

March 12: Nutanix Webinar. Exploring virtualization alternatives? Nutanix is hosting a session tomorrow focused on simplifying IT operations and highlighting the Everpure partnership. https://event.nutanix.com/simplifyitandonprem

March 19: Or perhaps you're interested in running virtual machines alongside containerized workloads within K8s clusters. If that's the case, join Greg McNutt and Sagar Srinivasa for Virtualization Reimagined: Inside the Everpure Journey. https://www.purestorage.com/events/webinars/virtualization-reimagined.html

March 19: Ask Us Everything About Storage for Databases. Join experts Anthony Nocentino, Ryan Arsenault, and Don Poorman for a live Q&A session. https://www.purestorage.com/events/webinars/ask-us-everything-about-storage-for-databases.html

March 24: Presets & Workloads for Consistent DB Environments. We’re extending the database conversation to discuss how Everpure helps you transition from "managing storage" to "managing data" through automated presets. https://www.purestorage.com/events/webinars/presets-and-workload-setups-for-consistent-database-environments.html

Level Up Your Virtualization Game
The virtualization landscape doesn’t stop evolving—and neither should your strategy. Join us at Topgolf King of Prussia on April 15 for an in-person Everpure User Group session featuring Cody Hosterman, Senior Director – Product Management. Cody will walk through the latest shifts across core virtualization technologies and what they mean for your environment today and tomorrow. This interactive, technical conversation is designed for practitioners and architects who want real-world guidance—not just slideware.

What We’ll Cover

Cody will break down what’s new, what’s next, and what actually matters across:

- VMware & Hyper-V – Current state, roadmap signals, and practical considerations
- OpenShift & OpenStack – Where they fit, how they’re evolving, and key design decisions
- New Entrants & Emerging Platforms – Who’s worth watching and why

Expect candid discussion, best practices, and plenty of time for Q&A with your peers and the Everpure team. Register here to join in!

Virtualization Reimagined: Inside the Everpure Journey
March 19 | Register Now!

Rising virtualization costs triggered a mandate for Everpure to find an alternative—fast. What began as an exploration of Kubernetes with Portworx® evolved into a virtualization strategy built on KubeVirt to simplify and accelerate VM migration. Today, more than 5,000 virtual machines run on KubeVirt alongside containerized workloads within Kubernetes clusters. This TechTalks webinar highlights key milestones in our journey and technical solutions to help customers accelerate similar migrations.

Key takeaways:

- How to migrate large-scale VM workloads using KubeVirt
- Running virtual machines and containers side by side on a single platform
- The role of Portworx in automating storage and data management at scale
- Practical lessons from operating KubeVirt in production

Register Now!

The Only Constant Is Change: Architecting for AI, Cloud, Virtualization, and Beyond
February 10 | Register Now! 11:00AM PT • 2:00PM ET

Foundations matter: Their flexibility enables or constrains what you can do years down the road with your applications and business initiatives. For February 2026, host Andrew Miller explores the idea of data center foundations and how Pure Storage works with Cisco around FlashStack® to enable both current and future business initiatives. He’ll be joined by Eugene McGrath, Principal Field Solutions Architect for FlashStack—and previously a partner and customer—who will do a deep dive on this topic.

We’ll wander through:

- Data center architecture history - from Reference Architecture to Converged Infrastructure, to Hyperconverged Infrastructure, the benefits we hoped for from each architectural phase, and how well we achieved them at an industry level.
- FlashStack - the core design principles behind the Cisco and Pure Storage partnership and the benefits, including statelessness, innovation without disruption, benefits during Day 0/1/2, and Cisco Validated Designs.
- What matters today - new announcements and capabilities with Nutanix, hypervisor optionality, AI designs, AI Pods, and more!

In short, FlashStack reduces risk with prevalidated solutions for AI and analytics, cyber resilience, business-critical applications, and modern virtualization.

Ask Us Everything: Pure Storage + Nutanix — What the Community Really Wanted to Know
The January Ask Us Everything (AUE) session tackled one of the hottest topics in infrastructure right now: what Pure Storage and Nutanix are doing together—and what that means for our customers. Judging by the volume and depth of questions, it’s clear that many of you are actively evaluating next-generation virtualization options and want real answers, not marketing slides.

With Cody Hosterman (Sr Director Product Management, Pure Storage), Thomas Brown (Field CTO, Nutanix), myself - Joe Houghes (Field Solutions Architect, Pure Storage), and our host Don Poorman (Technical Evangelist, Pure Storage), the conversation went deep into architecture, migration realities, and the practical problems this joint solution is designed to solve. Here are the biggest takeaways from what attendees asked—and what they learned.

This is joint engineering, not just “interoperability”

One of the most important clarifications came early: this isn’t a case of “here’s a LUN, good luck.” Nutanix has natively integrated Pure Storage FlashArray APIs directly into the Nutanix stack. That means:

- No plugins to install
- No bolt-on frameworks to manage
- No separate operational silos

In Prism, the Nutanix management plane, Pure Storage behaves like a first-class storage backend. Snapshots, protection, provisioning, and automation are driven from Nutanix, while Pure Storage delivers its strengths—performance, data reduction, SafeMode, and simplicity—under the covers.

NVMe/TCP support is a deliberate, forward-looking choice

Several attendees asked why Fibre Channel or legacy protocols weren’t the focus. The answer: this solution is built for where infrastructure is going, not where it’s been. By standardizing on NVMe/TCP over Ethernet, Pure and Nutanix:

- Avoid decades of SCSI and FC tech debt
- Enable massive bandwidth scalability (100G, 400G, and beyond)
- Lay the groundwork for modern security features like TLS and in-band authentication

This is a design meant to still make sense 10 years from now.

Object-style vDisks eliminate old datastore limits

A recurring “aha” moment came when attendees learned how vDisks are implemented. Instead of traditional filesystem-based datastores (with all their historical limits), each virtual disk maps directly to a Pure Storage volume. What that unlocks:

- Petabyte-scale virtual disks (no more 64TB ceilings)
- No datastore gymnastics to scale performance
- No artificial limits inherited from legacy file systems

This felt especially relevant for customers running large databases, analytics platforms, or fast-growing enterprise apps.

HCI isn’t going away—this complements it

A key question from the audience: Does this replace Nutanix HCI? The answer was a clear no. Nutanix HCI still makes perfect sense for many workloads. But when customers:

- Need to scale storage independently of compute
- Have performance-heavy or capacity-dense workloads
- Want an “apples-to-apples” replacement for traditional VMware + external storage

…Pure Storage + Nutanix provides a clean alternative without forcing architectural compromises.

Migration is real, and the hard parts were addressed honestly

Migration questions dominated the session—and the tone was refreshingly pragmatic. Attendees learned:

- Nutanix Move is fully supported and preserves Purity’s data reduction, which makes this a zero-cost migration in terms of storage capacity
- VMware NSX rules can be translated into Nutanix Flow during migration
- Backup tools (Veeam, Rubrik, Commvault, Cohesity, etc.) continue to work without re-engineering or changes in backup operations
- Most migration risk doesn’t lie in the hypervisor—it’s overlooked third-party dependencies

The guidance was consistent: plan carefully, take stock of any dependencies, and don’t rush a wholesale cutover just to meet an artificial deadline. No user ever wants to be forced to do that.

Operational simplicity is a major design goal

A subtle but powerful theme emerged: you don’t need to tune this solution. VMware users often ask about “nerd knobs” and the need to tweak things to get them working right. In this solution, they’re mostly gone—and intentionally so. Best practices for queue depths, multipathing, performance tuning, and more are already baked into the platform by the joint engineering teams. Improvements are managed through upgrades, eliminating the need for manual scripting or one-off performance tweaks for a "snowflake" deployment. The result of this best-of-breed, jointly engineered solution is consistency, predictability, and easier support, especially during migrations, so that you can focus on the work that makes your business run.

The roadmap is active—and community feedback matters

This solution was not positioned as “done and dusted.” The GA release is the foundation, not the finish line. Capabilities like Kubernetes support, deeper snapshot orchestration, VDI validation, and migration optimizations are all on the roadmap. And importantly: your use cases drive priorities. The Pure Storage Community is a great place to drop your feedback for the teams!

Keep the conversation going

This partnership sparked a lot of interest for a reason: it’s not just about changing hypervisors—it’s about modernizing how infrastructure works. If you missed the live session—or want to dive deeper—join the ongoing discussion in the Pure Storage Community:

👉 https://purecommunity.purestorage.com/discussions/virtualization/ask-us-everything-about-pure-storage--nutanix/3634

You’ll find Pure Storage and Nutanix experts answering follow-ups, clarifying edge cases, and sharing lessons learned from real deployments. While you’re there, be sure to check out past Ask Us Everything events—they’re packed with practical, practitioner-level insights.