Recent Discussions
The Idea That Was Supposed to Fail
Why DirectFlash and Evergreen//One suddenly look a lot smarter in a world of NAND and DRAM price shocks

Dmitry Gorbatov | Mar 20, 2026

Important Note for my readers: Writing this piece took me a lot longer than I normally spend on a post. It took a lot of reading and research. Many articles and blogs were written on the subject before NAND and DRAM costs went crazy. The dry-humor version is that the storage industry spent years insisting flash was just disk with better manners, and then acted surprised when the underlying physics eventually asked to speak with management. Now, let’s get to it.

I can still picture the room. It wasn’t anything special — just another corporate competitive training session, the kind you’ve sat through many times if you’ve spent enough years in enterprise tech. This was at NetApp, in 2015 or 2016, back when flash was still a question mark. Not if, but how. The industry had not fully committed yet, and everyone was trying to figure out what role it would play.

The presenter clicked to the next slide, paused for a second, and said something that stuck with me in a way most of those sessions never do: “Pure Storage is crazy! They’re building their own flash modules. That’s stupid. It’s not sustainable. They won’t survive.”

It wasn’t said for effect. There was no dramatic pause afterward, no attempt to persuade. It was delivered as a simple, almost obvious conclusion. And to be fair, it felt obvious. Because the entire storage industry operated on a shared assumption: you didn’t build components, you assembled them. You relied on a mature ecosystem of suppliers who specialized in drives, storage controllers, and memory, and you focused your differentiation on software features and integration. That was the efficient path. That was the scalable path. That was how serious companies behaved.

What Pure was proposing at the time — what would later become Everpure — felt like a deviation from that logic. Building your own flash modules didn’t just introduce complexity; it seemed to reject the economic advantages of the broader supply chain. It looked like a risk without a clear payoff. So the conclusion made sense. Until it didn’t.

Looking Back, Differently

If I think back to that training session now, I do not really see it as a moment where someone was foolish. I see it as a moment where the industry was trapped inside the logic of its own assumptions. If you believe flash should look like disk, then building your own flash modules sounds silly. If you believe storage is just a sequence of refresh cycles, then a model built around non-disruptive evolution sounds unnecessary. If you believe component pricing will keep trending in the right direction forever, then architectural efficiency feels like an academic luxury.

But once those assumptions start to crack, the logic changes. And when it changes, the things that once looked eccentric start to look oddly prescient.

A Change You Don’t Notice Right Away

For years, nothing about that statement felt particularly worth revisiting. The industry moved forward in predictable ways. Flash became mainstream. Performance improved. Density increased. Vendors competed on features, benchmarks, and price points. The conversations most of us had with customers followed familiar patterns.

If anything, the abstraction layers built around flash made things easier to consume. SSDs behaved like faster disks — and that was good enough. There is a reason they showed up in familiar HDD form factors.
The industry was trying to preserve the old world while sneaking in a new medium. Keep the slots. Keep the enclosures. Keep the assumptions. Change as little as possible. That made adoption easier, but it also buried the problem. Because flash is not a disk. It never was. It does not behave like one, and it does not particularly enjoy being treated like one. The only reason the illusion worked is because the industry built a fairly elaborate translation layer to maintain it.

That translation layer is where the story really starts.

The Trick That Made Flash Look Simple

When commodity SSDs became the standard way to bring flash into enterprise storage, they depended on a piece of internal firmware called the Flash Translation Layer, or FTL. Its job was deceptively simple: make raw NAND look like a disk.

That sounds harmless enough until you think about what that actually requires. NAND cannot just overwrite data in place the way the rest of the stack would like it to. It has to handle erase cycles, wear leveling, garbage collection, bad block management, and the constant translation between logical addresses and physical locations on the media. So every SSD became its own little self-contained world, complete with its own controller, its own metadata tables, and its own DRAM to keep track of everything. In other words, every drive became a tiny independent computer, making local decisions in isolation.

That design solved the adoption problem. It did not solve the architecture problem.

For a while, the tradeoff seemed worth it. The drives were fast enough, the packaging was familiar, and the whole system kept pretending that flash was just a much nicer version of disk. But what looked neat and modular at small scale turned out to be awkward and expensive at enterprise scale. And that is where the “stupid” decision begins to look a lot smarter.

What Commodity SSDs Actually Drag Along With Them

The more I researched this topic (and believe me I did), the more I realized how much of the industry got comfortable with an abstraction that was doing a lot of quiet damage. Commodity SSDs carry four structural inefficiencies that matter much more today than they did when pricing was stable.

Trapped DRAM. Every SSD maintains its own mapping tables, so large-scale systems end up carrying a remarkable amount of DRAM inside the drives themselves. That memory is necessary for the SSD to function, but it does not really help the array think globally. It is duplicated overhead, repeated again and again, drive by drive. In a petabyte-scale system, that is not a rounding error. It is cost, power, and complexity hiding in plain sight.

Unpredictable Latency. Garbage collection inside a traditional SSD happens when the drive decides it needs to happen. When that occurs, the drive may become temporarily less responsive, and in an array full of independent drives, those little stalls start to show up as tail-latency spikes. The system is always vulnerable to one drive having a private crisis at exactly the wrong time.

Write Amplification. Because the SSD does not really understand the workload or the data structures above it, it moves data more often than necessary. More movement means more writes. More writes mean more wear. More wear means the media gets consumed faster than it should.

Over-provisioning. Every SSD holds back some raw capacity for its own housekeeping and spare-cell management, but that reserved space is siloed.
The array cannot use it intelligently across the system because each drive is managing its own private affairs.

None of this sounded especially dramatic when NAND kept getting cheaper and the economics of flash kept improving. It sounded like engineering trivia. The sort of thing infrastructure people argue about while everyone else waits for the quote. Today it is not trivia. Today it is exposure.

Why AI Made This Suddenly Everyone’s Problem

For years, one of the quiet assumptions in enterprise IT was that storage capacity would continue to become cheaper and more abundant over time. Not perfectly, not smoothly, but predictably enough that the inefficiencies of the underlying architecture could be tolerated. That assumption is now not only under pressure, it is getting decimated.

AI did not just create a new category of interesting workloads. It created a global appetite for silicon that is large enough to bend supply curves. The cute part of AI is easy to mock. The cat kicking the T-Rex. The surreal generated videos. The deepfakes that make you look twice and then sigh a little for civilization. But behind every one of those outputs is a less funny reality: extraordinary consumption of DRAM, NAND, GPUs, and supporting infrastructure. The novelty at the edge is powered by very serious resource demand at the core. And that demand is landing directly on the components enterprise storage depends on.

This is the part customers are beginning to feel in ways that are no longer abstract. Expansion quotes do not look as comfortable as they once did. Refresh cycles feel more expensive. Delivery windows stretch. Budgets built on assumptions from even two years ago suddenly need more explaining than anyone wanted.

There is a tendency to call this inflation because that is the easiest word available. It is not really inflation. It is supply and demand, with a side of semiconductor reality. And that matters, because a traditional SSD array is exposed to both sides of the problem at once. It is exposed to NAND because that is the medium you are buying, and it is exposed to DRAM because every SSD drags its own DRAM overhead along for the ride. When those two markets tighten at the same time, the cost of the architecture gets hit twice. That is not just a technical nuance. That is economics.

Revisiting the “Stupid” Decision

This is where the old training-room comment starts to age badly. Because what looked like unnecessary vertical integration was really a decision to stop pretending flash was a disk and start treating it like what it actually is: semiconductor media with very specific physical behaviors that should be managed at the system level, not hidden inside dozens of drives.

That is the DirectFlash idea in plain English. Take the Flash Translation Layer out of the individual drive. Pull media management into the operating environment. Let Purity manage flash globally instead of leaving each device to improvise its own local strategy.

That changes more than performance charts. It means metadata no longer has to be duplicated and trapped inside every SSD. It means wear leveling can happen across the full system instead of inside the borders of a single device. It means bad block handling, garbage collection, and data placement can be coordinated with global context. It means the platform can see the difference between data that should live together and data that should not, which dramatically reduces unnecessary movement and lowers write amplification.
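If terms like trapped DRAM and write amplification still feel abstract, a quick back-of-the-envelope sketch helps. Every number below is an illustrative assumption on my part (drive size, mapping granularity, a write amplification factor of 3), not a vendor specification, but it shows why per-drive mapping tables and extra data movement stop being a rounding error once you multiply them across a large array.

```python
# Back-of-the-envelope view of what a commodity SSD carries internally.
# All figures are illustrative assumptions, not vendor specifications.

drive_capacity_tb = 15.36          # assumed size of one large commodity NVMe SSD
logical_page_kib = 4               # assumed FTL mapping granularity (4 KiB pages)
mapping_entry_bytes = 4            # assumed DRAM cost per mapping entry

# "Trapped DRAM": the logical-to-physical mapping table each drive keeps for itself.
pages = drive_capacity_tb * 1e12 / (logical_page_kib * 1024)
dram_per_drive_gib = pages * mapping_entry_bytes / 2**30
print(f"FTL mapping table: ~{dram_per_drive_gib:.1f} GiB of DRAM inside one drive")

drives_in_array = 100              # hypothetical large configuration
total_dram_tib = dram_per_drive_gib * drives_in_array / 1024
print(f"Across {drives_in_array} drives: ~{total_dram_tib:.2f} TiB of DRAM doing per-drive bookkeeping")

# Write amplification: garbage collection relocates still-valid data,
# so the NAND absorbs more writes than the hosts ever issued.
host_writes_tb = 100               # what the application layer actually wrote
waf = 3.0                          # assumed write amplification factor; varies widely by workload
nand_writes_tb = host_writes_tb * waf
print(f"{host_writes_tb} TB of host writes -> ~{nand_writes_tb:.0f} TB of NAND writes at WAF {waf}")
```

Shrink the amplification factor or free up the per-drive tables, and the same workload simply consumes less silicon.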
And when write amplification drops, the economics change. The NAND lasts longer. The useful life of the media extends. Lower-endurance flash, like QLC, becomes viable for serious enterprise use because the software is smart enough not to abuse it. The system extracts more useful work from the same raw silicon. That is not just clever engineering. That is insulation from volatility.

The reason this matters now is that DirectFlash changes the ratio between the silicon you buy and the value you get from it. If the rest of the market is paying more for NAND and more for DRAM, an architecture that reduces trapped DRAM, minimizes wasted writes, extends media life, and packs far more capacity into far denser modules is not just elegant. It is economically defensive.

This is where the old “they build their own flash” criticism misses the point. Building your own flash modules was never the point by itself. The point was controlling the relationship between software and media well enough to eliminate the inefficiencies the commodity model had normalized.

Why Purity Is the Real Story

DirectFlash makes for a good visual. It is a module. You can point to it. You can talk about density and reliability and the fact that a 150TB module can do work that would have required a small army of traditional devices not all that long ago. But the real story is the Purity Operating Environment, i.e. software. Purity is where the architectural bet pays off. It is what turns raw NAND into a coordinated system instead of a pile of politely disagreeing SSDs.

Because Purity sees the entire media pool, it can write more intelligently. It can group data with similar expected lifespans together, so that when a snapshot or a temporary workload disappears, whole regions of storage can be retired cleanly instead of forcing background reshuffling of still-live data. That reduces unnecessary churn. Less churn means fewer writes. Fewer writes mean longer media life.

Because Purity sees when a NAND die is busy with an erase or program cycle, it can avoid letting that become a host-visible performance problem. RAID-3D and system-level awareness allow the platform to reconstruct data from parity rather than simply waiting for a busy drive to get its act together. The end result is deterministic performance rather than a roulette wheel of occasional latency spikes.

Because Purity owns media management globally, the over-provisioning and spare resources are no longer trapped in per-drive silos. The system can use them strategically.

I know that all of this sounds a bit scientific, and to be fair, it is. I did spend over 7 years working for Everpure and a few weeks researching for this post. I wanted to sit with that science for a bit.

Where the Economics Start to Matter

The moment component pricing becomes unpredictable, architecture stops being an engineering preference and starts becoming a financial strategy. That is the part that matters most to customers right now.

A traditional buying model assumes that at some point you will hit a refresh cycle, a capacity wall, or a migration event that forces a purchase whether the market timing is good or terrible. You buy when you have to buy. If NAND is expensive, that is unfortunate. If DRAM is expensive too, even better, because apparently the universe enjoys symmetry.

That is what makes the combination of DirectFlash and Evergreen so important. DirectFlash reduces the amount of waste, duplication, and premature wear in the system.
Evergreen removes the old habit of tying innovation to forklift replacement. Controllers evolve. Capacity can be consolidated into denser modules over time. Data stays in place. The customer is not forced into rebuying the whole environment every few years just to remain current.

That already changes the economics. But it still leaves one more question: who is carrying the price risk? And this is where Evergreen//One matters more than ever.

The Part I Actually Wanted to Get To

Evergreen//One is not just a consumption model. It is not just a nicer way to finance storage. It is a mechanism for moving volatility away from the customer. That is the conclusion I wanted to earn, not just declare.

When NAND and DRAM prices start climbing, most traditional models push that turbulence straight into the customer’s planning cycle. The customer eats the increase, absorbs the uncertainty, and tries to explain to the business why the infrastructure line now behaves like it has a gambling problem.

Evergreen//One changes that relationship. The customer consumes capacity as a service. Everpure owns the burden of the underlying hardware lifecycle, the media strategy, and the ongoing optimization. DirectFlash makes that model stronger because the platform is structurally more efficient with the silicon it uses. It needs less trapped DRAM, wastes fewer writes, extends media life, and supports denser modules that deliver more usable capacity per unit of power, space, and raw media. Purity compounds that advantage with data reduction, ongoing software improvements, and smarter system-wide media management.

Put differently, Everpure is in a much better position to absorb and manage component volatility than a customer buying boxes on a refresh schedule. That is the real price protection story. Not some magical promise that economics no longer apply. They do. NAND still costs what NAND costs. DRAM still costs what DRAM costs. Physics remains annoyingly undefeated. The difference is who is exposed to that volatility, how much inefficiency is built into the system before the customer ever sees it, and whether the operating model gives the customer a stable runway instead of a quarterly surprise.

DirectFlash reduces the waste. Evergreen removes the forced disruption. Evergreen//One shifts the risk. That combination is a lot more interesting than it sounded in that room 11 years ago.

The Part I Didn’t Appreciate Then

What I did not understand sitting in that room 11 years ago was that some decisions are made for futures that have not arrived yet. The market eventually caught up to the architecture. That does not happen often enough in enterprise tech to ignore when it does.

DirectFlash was never interesting just because it was different. It was interesting because it removed layers of inherited inefficiency that the rest of the market had accepted as normal. And in a period where NAND and DRAM pricing are under pressure, removing inefficiency is no longer just a performance story. It is a protection story.

That is why this matters now. Not because it makes for a clever slide. Because it gives customers a more predictable way forward when the underlying component markets are anything but predictable. And in the current environment, that might be the most practical definition of innovation there is.

Appreciate you reading.

Dmitry Gorbatov
© 2025 Dmitry Gorbatov | #dmitrywashere

When Data Becomes the Mission
Why state and local government, cities, and research universities are reorganizing infrastructure around data itself

If you remember one thing from this article: infrastructure used to organize around applications. Increasingly, it now organizes around data.

If you spend enough time around enterprise infrastructure, you start to notice something about how conversations begin. Someone asks about storage. Not in a philosophical way. In a practical way. How much capacity do we have left? What’s the refresh cycle? Is this staying on premises or moving to cloud? What’s the backup strategy?

For years, that framing made perfect sense. Infrastructure was the foundation, and the job of infrastructure teams was to keep the lights on and the foundation solid. But lately, in conversations with customers across state and local government, municipalities, cities, and universities, something feels different. Because eventually someone says something like this: “We have this data… but we can’t actually use it.”

And that is when the real conversation begins.

Why the public sector reveals the truth about data

There’s a perspective I heard recently that stuck with me. The public sector isn’t a niche market. It’s a microcosm of the entire enterprise technology world. At first that sounds counterintuitive. The stereotype is that government IT has been quietly living under a rock since the previous century, next to a beige server and a stack of COBOL manuals. But if you look closely, the opposite is true.

State agencies, cities, and research institutions operate in environments that combine nearly every architectural challenge the private sector faces — all at once:

- Massive datasets
- Highly distributed users
- Strict security requirements
- Long retention policies
- Global collaboration
- And an absolute requirement that systems remain available when people need them most.

In other words, the public sector experiences the full spectrum of data challenges simultaneously. If you want to stress-test a data architecture, put it inside government.

Think about it. A state government may run thousands of systems across dozens of agencies, each serving different missions but increasingly sharing the same underlying data. A city manages infrastructure at the physical edge of society — traffic, water, SCADA, emergency services — where real-time decisions depend on accurate information. Universities generate some of the largest research datasets on earth while collaborating across institutions and countries.

Each of these environments demands something slightly different from infrastructure. But they all demand the same thing from data: Security. Integrity. Mobility. Context. Availability. And when those requirements collide in one environment, something interesting happens. The solutions that work there tend to work everywhere.

A laboratory for the modern data enterprise

This is why many technology leaders quietly view the public sector as something more than a vertical market. It’s a laboratory for enterprise-scale data architecture. If a platform can operate in a world where:

- sensitive personal data must remain protected
- systems span thousands of locations
- regulatory oversight is constant
- and uptime has real public consequences

…then that architecture will almost certainly succeed in commercial environments. Banks, manufacturers, healthcare providers, and global enterprises face the same challenges. Just rarely all at once. Government simply compresses those problems into a single environment.
Solve the data problem for government, and you solve it for the enterprise.

That’s one reason the shift toward data-centric platforms is becoming so important. When organizations treat infrastructure as a place to store files, they solve only a small part of the problem. But when they treat data as the central operational asset — something that must be understood, governed, protected, and made usable across environments — the architecture begins to look very different. And the public sector, with all its complexity, becomes the place where those architectures are tested first.

Which brings us back to the shift we’re seeing across the industry. Because once you start looking at infrastructure through the lens of data itself, something else becomes obvious. The center of gravity has moved. When multiple systems depend on the same dataset, the data becomes part of the operating foundation. And once that happens, moving it — or even restructuring it — becomes dramatically harder. Which brings us to the concept that explains a lot of what is happening right now.

The quiet physics of data gravity

The first time I heard the term “data gravity” wasn’t in a conference keynote or a vendor presentation. It was in 2015, when a recruiter from a startup called DataGravity (now Anomalo) reached out and asked if I would be interested in interviewing. At the time, the idea sounded fascinating — and slightly theoretical. The company was built around the premise that data itself was becoming the most valuable asset in the data center, and that infrastructure needed to understand the content, context, and behavior of data, not just store it. The name alone hinted at something deeper: the idea that as datasets grow, they start exerting a kind of gravitational pull on the systems around them. Back then, it felt like an interesting concept. Today it feels like a description of reality.

The term “data gravity” itself was introduced by Dave McCrory back in 2010 (see Dave McCrory’s blog), and it turns out to be a remarkably accurate way to describe modern infrastructure.

The idea is simple. As datasets grow, they become harder to move. More applications depend on them. More workflows connect to them. More policies govern them. Eventually, the architecture starts organizing around the data itself. Not because someone designed it that way. Because the physics of large systems leave you very little choice.

Imagine trying to relocate a state Medicaid dataset that has been integrated with multiple benefit programs, identity verification systems, and fraud detection tools. Technically possible? Sure. Operationally trivial? Not even close. The larger and more interconnected the dataset becomes, the stronger its gravitational pull. Compute moves closer to the data. Applications move closer to the data. Infrastructure reorganizes around the data.

This is why organizations that once talked primarily about storage capacity are now talking about data platforms. The center of gravity moved.

When data stops being passive

The moment data becomes operational, everything changes. For years, most organizations treated data as something that accumulated quietly inside systems. Applications produced it. Storage kept it safe. Backups made sure it could be restored. But that model starts to break down when the data itself becomes part of real-time decision making. You can see this most clearly in environments that generate enormous volumes of information.
Cities now run infrastructure that continuously streams telemetry — traffic sensors, utility meters, environmental monitors, emergency response platforms. A water meter that once reported usage once a month might now generate thousands of readings per year. A traffic system that once relied on static timing can adapt dynamically to real-time conditions. Each improvement creates more data. More importantly, it creates operational dependence on that data.

Universities experience the same phenomenon in a different form. Research environments produce extraordinary datasets across genomics, climate science, and artificial intelligence. Sequencing a single human genome generates roughly 100 gigabytes of raw data, and large research programs may create terabytes or petabytes of new information every week. In those environments the challenge isn’t just storing data. It’s feeding it fast enough to the systems that depend on it. Modern research clusters and GPU environments can process enormous volumes of information, but only if the underlying data pipeline keeps up. When storage cannot deliver data fast enough, expensive compute resources sit idle and discovery slows down.

And that reveals an important truth about modern infrastructure. When systems depend on data in real time, the question stops being where the infrastructure lives. The question becomes whether the data is available, trustworthy, and recoverable.

That distinction also explains why ransomware has become so disruptive to public institutions. Attackers understand that the real leverage is not the servers or the network. It’s the data. When access to data disappears, the services built on top of it disappear as well.

Which brings us back to the deeper shift happening across the industry. If data has become this central to operations, services, and discovery, then managing it as a passive byproduct of infrastructure is no longer enough. Infrastructure alone is no longer the strategic layer. The strategic layer is the data itself.

Organizations still need performance, availability, and resilience. Those fundamentals have not changed. What has changed is the expectation that infrastructure should also help organizations understand, govern, protect, and use their data more effectively. That is a very different problem than simply storing it. And it is the reason the conversation is evolving from storage management to data management platforms.

The real punch line

Public sector organizations didn’t set out to become data enterprises. Over time the data accumulated. Then the dependencies formed. And eventually everything started orbiting the datasets that mattered most. Data has gravity. Data has risk. Data has power. Infrastructure still matters. But increasingly, the real mission is something else entirely. The mission is the data.

Appreciate you reading.

Dmitry Gorbatov
© 2025 Dmitry Gorbatov | #dmitrywashere

Ask Us Everything About Intelligent Control with Everpure Fusion + Pure1
💬 Get ready for our April 2026 edition of Ask Us Everything, this Friday, April 17th at 9 AM Pacific. This month is all about Intelligent Control with Everpure Fusion + Pure1. If you have a burning question, feel free to ask it here early and we'll add it to the list to answer on Friday. Or if we can't get it answered live, our Everpure experts can follow up here. Allynz, mikenelson-pure, and dpoorman are the moderators and experts answering your questions during the conversation as well as here on the community. See you this Friday! (Oh, and if you haven't registered yet, there's still time!)

Or, check out these self-serve resources:
- Presets and Workloads in Everpure Fusion (video)
- Pure Fusion Presets and Workloads: Enabling Automation Innovation for Storage Workloads
- Unlock the Future of Data Management with Pure Fusion File Presets

Proxy Exclusions
We are setting up two new arrays (Purity//FA 6.9.4), and for phone home we need to access the internet. In our environment that is possible via a proxy. But for internal traffic (servers, etc.) I don't want to go via the proxy server. Is it possible to set a proxy exclusion list on the array itself? And how do I do that?

Community Meetups at 2026 Pure//Accelerate
Excited to announce that we are growing our community meetups at Accelerate in June. Building on the successful Modern Virtualization and Cyber gatherings last year, we have added new topics so you can engage with peers and fellow attendees to share and learn about the topics you care most about. Check out the community meetup sessions at the link below and plan to add them to your session builder once it goes live in the Event app. All meetups will run at 5:30 PM local time and feature customers and Everpure discussion leaders across Pure1, Database solutions, Modern Virtualization, Cyber, and Cloud, plus a special Women in Tech meetup led by our own Flashmaven Valerie Harrison.

https://www.purestorage.com/accelerate/learning-tracks.html?filter5=tag%3Ainline%2Faccelerate%2Fsession-type%2Fcommunity-meetup

Did anyone attend RSA 2026?
Everpure exhibited at and attended RSA 2026, the biggest annual gathering of cybersecurity professionals and companies. Besides a booth, we presented and sponsored several activities. Let us know if you attended and share what you observed with the community. Here are the key trends Everpure noted at the RSA 2026 conference:

The RSA 2026 Narrative
RSA 2026 signaled a significant shift in the industry’s mindset, moving away from reactive defense toward a proactive business configuration that leverages "active" systems to sense, pivot, and self-correct.

Agentic AI: We are officially in an "AI vs. AI" war. RSAC 2026 highlighted that adversaries now have the upper hand, leveraging agentic AI to expose vulnerabilities that have remained undiscovered by humans for 10+ years. Because human-led defense cannot keep pace with machine-speed exploits, the focus has shifted from "human-in-the-loop" to "human-on-the-loop." This model relies on autonomous, self-healing systems to isolate threats and restore environments in real time, allowing humans to act as strategic governors of AI insights rather than manual controllers of the recovery process. In addition, identity security must deal with emerging polymorphic social engineering attacks.

MTTA: JPMorgan introduced Mean Time to Adapt, prioritizing real-time posture reconfiguration over static recovery (RTO) to neutralize active threats.

Data Integrity: Bruce Schneier identified a "resilience gap" from silent AI corruption, making integrity checks a mandatory prerequisite for trustworthy recovery.

Quantum Readiness: Resilience now requires migrating to Post-Quantum Cryptography (PQC) to shield long-lived data from "Harvest Now, Decrypt Later" tactics.

Defense to Disruption: "Active Defense" aims to increase attacker costs and efforts.

Future Threats: Panels warned of "Harvest Now, Decrypt Later" quantum risks and polymorphic social engineering, while honoring quantum networking breakthroughs.

Data Intelligence and Cyber Resilience
Over the next few months you will be hearing more about data intelligence from Everpure. What is it? How is it relevant to cyber resilience?

Data intelligence is the practice of transforming raw data into actionable insights through automated discovery, classification, and metadata analysis. In the modern threat landscape, it is the essential bridge between simple "backup" and true Active Resilience. Without intelligence, resilience is blind. Data intelligence provides the "who, what, and where" of your digital estate, allowing you to:

- Prioritize Recovery: Identify mission-critical applications and sensitive PII to ensure the most vital services are restored first.
- Accelerate Detection: Use AI-driven behavioral analysis to spot "silent" corruption or unauthorized access at the storage layer.
- Ensure Clean Restoration: Precisely tag compromised data to prevent re-infecting environments during recovery.

By unifying data security with intelligence, organizations move from being passive targets to Active Defenders, ensuring operational survivability even in the face of sophisticated agentic attacks.
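To make the classification step a little less abstract, here is a deliberately simplified, hypothetical Python sketch: it walks a directory tree, flags files that match a naive PII pattern, and orders everything into a restore plan. It is not Everpure tooling, and real data intelligence platforms use far richer classifiers, metadata, and policy engines, but the basic shape of the workflow (discover, classify, prioritize) is the same.

```python
# Hypothetical sketch of discover -> classify -> prioritize for recovery planning.
# Real data intelligence tooling uses far richer classifiers and metadata.
import re
from pathlib import Path

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # naive US SSN pattern, illustration only

def classify(path: Path) -> dict:
    """Attach simple classification metadata to a single file."""
    try:
        text = path.read_text(errors="ignore")
    except (OSError, UnicodeDecodeError):
        text = ""
    contains_pii = bool(SSN_PATTERN.search(text))
    return {
        "path": str(path),
        "contains_pii": contains_pii,
        "size_bytes": path.stat().st_size,
        # Hypothetical policy: anything containing PII is restored first.
        "restore_priority": 0 if contains_pii else 1,
    }

def build_restore_plan(root: str) -> list:
    """Scan a directory tree and order files for recovery."""
    records = [classify(p) for p in Path(root).rglob("*") if p.is_file()]
    return sorted(records, key=lambda r: (r["restore_priority"], -r["size_bytes"]))

if __name__ == "__main__":
    for record in build_restore_plan("."):
        print(record)
```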
"In the midst of chaos, there is also opportunity". These are the words of Sun Tzu who famously wrote The Art of War. While customers, partners, and Puritans all get acquainted to the rebrand from Pure Storage, to EverPure; it also provides us the opportunity to create a unique identity for our PUG chapter. And this is where we need your help. We know our community is filled with creative minds. And Cincinnati has many unique identities, from our beautiful skyline (and chili), to our sports teams (Reds, Bengals, FCC, Cyclones, UC, X), to our landmarks (Roebling Bridge, the Museum Center and Zoo, Fountain Square). This is your chance to help us create our own PUG identity. Get your creative juices flowing and visit the below link for additional details and how YOU can help us create the Cincinnati PUG logo. Help create your chapter's logo | Everpure Community636Views0likes1CommentCatching up
Hey all! It's been a while since I've posted here, and I feel compelled to reach out to see what everyone is working on. Like all of us, I've been pulled in many different directions lately (power, cooling, security cameras), and it has made me appreciate that managing our Everpure environment allows me cycles to focus elsewhere.

Current storage-related projects:

- Cloudsnap: working with the Everpure support team to get Cloudsnap working so that we can investigate long-term backups to our Flashblades or S3 in the cloud.
- Integration with CyberArk: again, working with the Everpure support team to enable privileged users with rotating passwords to work with our Everpure management environment.
- Pureprotect: Chad Montieth and Suresh Madhu have been instrumental in our testing and development of a case to possibly replace SRM for DR failover and testing.

Don't forget about Accelerate, June 16th - 18th in Las Vegas. This is a worthwhile event that provides free training classes and certification tests. Jason Finley and I from SEHP get to attend this year. Register here: Begin Registration - Pure Accelerate 2026

What are you working on? Share with the group any successes or challenges. Keep an eye on the community page next week for an update from Nick Fritsch. Happy Easter all! - Charlie

Spring is Calling, and so is Reds Baseball
I don't know about you, but I am more than ready for Spring, though I could definitely skip the rain. Wiping muddy dog paws after every walk is getting old! On the bright side, who else is ready for some Reds baseball? I have a few exciting updates and resources to share with the community:

🚀 PUG Meeting Update
charles_sheppar and I are currently hard at work on the next PUG meeting. Details to come.

🛡️ Strengthening Your Cyber Resilience
Given the current geopolitical climate and the rise in cyber threats, now is the perfect time to audit your data protection. Features like SafeMode and Pure1 Security Assessments act as a resilient last line of defense. If you want to see these tools in action, we recently hosted an expert-led demo on building a foundation for cyber resilience. Watch the recording here: https://www.purestorage.com/video/webinars/the-foundations-of-cyber-resilience/6389889927112.html
Questions? Reach out to your Everpure SE or partner for a deeper dive.

📅 Upcoming Events

March 12: Nutanix Webinar. Exploring virtualization alternatives? Nutanix is hosting a session tomorrow focused on simplifying IT operations and highlighting the Everpure partnership. https://event.nutanix.com/simplifyitandonprem

March 19: Or perhaps you're interested in running virtual machines alongside containerized workloads within K8s clusters. If that's the case, join Greg McNutt and Sagar Srinivasa for Virtualization Reimagined: Inside the Everpure Journey. https://www.purestorage.com/events/webinars/virtualization-reimagined.html

March 19: Ask Us Everything About Storage for Databases. Join experts Anthony Nocentino, Ryan Arsenault, and Don Poorman for a live Q&A session. https://www.purestorage.com/events/webinars/ask-us-everything-about-storage-for-databases.html

March 24: Presets & Workloads for Consistent DB Environments. We're extending the database conversation to discuss how Everpure helps you transition from "managing storage" to "managing data" through automated presets. https://www.purestorage.com/events/webinars/presets-and-workload-setups-for-consistent-database-environments.html
Upcoming Events
- Tuesday, Apr 21, 2026, 09:00 AM PDT
- Thursday, Apr 30, 2026, 02:00 PM PDT
- Thursday, May 07, 2026, 09:00 AM PDT
- Thursday, May 14, 2026, 04:00 AM PDT
- Thursday, May 21, 2026, 04:00 AM PDT
Featured Places
Introductions
Welcome! Please introduce yourself to the Pure Storage Community.

Pure User Groups
Explore groups and meetups near you.

/CODE
The Everpure /Code community is where collaboration thrives and everyone, from beginners taking their first steps to experts honing their craft, comes together to learn, share, and grow. In this inclusive space, find support, inspiration, and opportunities to elevate your automation, scripting, and coding skills, no matter your starting point or career position. The goal is to break barriers, solve challenges, and most of all, learn from each other.

Career Growth
A forum to discuss career growth and skill development for technology professionals.
Featured Content
We're excited to announce that the latest Purity//FA release is now GA!
With the latest Purity//FA 6.10.5 release, customers get a tighter combination of performance, protection, and access control...
As enterprises modernize and accelerate their infrastructure through automation, blind spots become more expensive. When systems move faster, teams need telemetry that’s reliable, portable, and easy ...
We’re constantly trying to improve and look for ways to make this community the best it can be for you all. In order to do that, we need your unique perspective.
We’ve put together a quick C...