Pure Storage Delivers Critical Cyber Outcomes
“We don’t have storage problems. We have outcome problems.” - Pure customer in a recent cyber briefing

No matter what we are buying, what we are really buying is a desired outcome. If you buy a car, you are buying one or more outcomes: getting from Point A to Point B, comfort, dependability, seat heaters, or, if you are like me, a real, live Florida Man, seat coolers! The same is true when solving for cyber outcomes, and a storage foundation that drives cyber resilience is often overlooked. A strong storage foundation improves data security, resilience, and recovery, and with those characteristics, organizations can recover in hours instead of days. Here are some of the top cyber resilience outcomes Pure Storage is delivering:

- Native, Layered Resilience
- Fast Analytics
- Rapid Restore
- Enhanced Visibility

We will tackle all of these in this blog space (multi-part post alert!), but let’s start with the native, layered resilience Pure provides customers. Layered resilience is a comprehensive approach to data protection and recovery through multiple layers of security and redundancy. This architecture is designed to provide robust protection against data loss, corruption, and cyber threats, ensuring business continuity and rapid recovery in the event of a disaster.

Why is layered resilience important? Different data needs different protection. My photo collection, while important to me, doesn’t require the same level of protection as the critical application data needed to keep the company running. Layered resilience means there need to be different layers of resilience and recovery. Super-critical data needs super-critical recovery. We are referring to the applications that are the lifeblood of organizations: order processing, patient services, or trading applications. These may account for only 5% of your data but drive 95% of the revenue. Many organizations protect these with high availability, which provides excellent resilience against disasters and system outages.
But for malicious events, such as ransomware, protection is needed to ensure that recoverable data is available if an attack corrupts or destroys the production data. Scheduled snapshots can protect that data from the moment the data is born. Little baby data. Protect the baby!

Pure snapshots are a critical feature, providing efficient, zero-footprint copies of data that can be quickly created and restored, ensuring data protection and business continuity. They are optimized for data reduction, consuming minimal space thanks to global data reduction technologies that compress and deduplicate data. They are designed to be simple and flexible, with zero performance overhead and the ability to create tens of thousands of snapshots instantly. They are also integrated with Pure1 (part of our Enhanced Visibility discussion) for enhanced visibility, management, and security, reducing the need for complex orchestration and manual intervention.

Snapshots can be used to create new volumes with full capabilities, allowing for mounting, reading, writing, and further snapshotting without dependencies on one another. This flexibility supports various use cases, including point-in-time restores and data recovery. In events that require clean, secure recovery, it is far more desirable to recover from snapshots: you can scan them to determine they are clean and safe, often in parallel, and resetting to an earlier point in time takes seconds rather than days.

But not even these amazing local snapshots are enough. What if your local site is rendered unavailable for some reason? Do you have control of your data so you can recover in that scenario? Replicating those local snapshots to a second site can enable more flexibility in recovery.
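To make the zero-footprint idea concrete, here is a minimal copy-on-write sketch in plain Python. This is an illustration of the general technique, not how FlashArray implements snapshots internally; the `Volume`, `snapshot`, and `restore` names are mine. A snapshot records block references rather than copying data, so creating one is instant and consumes no extra space, and a restore is just a pointer reset.

```python
# Minimal copy-on-write sketch of zero-footprint snapshots.
# Illustrative only -- not the FlashArray implementation or API.

class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)   # block_id -> data
        self.snapshots = {}

    def snapshot(self, name):
        # A snapshot is just a map of block references:
        # instant to create, zero footprint until blocks diverge.
        self.snapshots[name] = dict(self.blocks)

    def write(self, block_id, data):
        # Overwrites change production; the snapshot's references
        # keep the old data reachable.
        self.blocks[block_id] = data

    def restore(self, name):
        # Restore is a pointer reset to the snapshot's view:
        # seconds, not a days-long streaming recovery.
        self.blocks = dict(self.snapshots[name])

vol = Volume({0: "orders", 1: "patients"})
vol.snapshot("pre-attack")                # no data copied
vol.write(0, "ENCRYPTED-BY-RANSOMWARE")   # simulated corruption
vol.restore("pre-attack")                 # roll back to the clean point in time
print(vol.blocks[0])                      # -> orders
```

The same property is what makes parallel scanning practical: each snapshot can be presented as a new volume and inspected without disturbing production.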
We have had customers leverage our high availability solution (ActiveCluster) across sites and then engage snapshots and asynchronous replication to a third site as part of their recovery plan.

Data that requires extended retention and granularity is typically handled by a data control plane application that streams a backup copy to a repository. This is usually a last line of defense in case of an event, as the recovery time objective is longer when you consider a streaming recovery of 50%, 75%, or 100% of a data center. Still, this is a layer of resiliency that a comprehensive plan should account for. And if these repositories are on Pure Storage, they can also be protected by SafeMode methodologies and other security measures such as Object Lock API, Freeze Locked Objects, and WORM compliance. Most importantly, this last line of defense can be supercharged for recovery by the predictable, performant platform Pure provides.

Some outcomes of this layer of resilience involve Isolated Recovery Environments, which add further security and create Clean Rooms that isolate recovery and ensure you do not re-introduce the event's origin back into production. In these solutions, the speed benefits Pure provides are critical to making these designs a reality.

Of course, the final frontier is the archive layer. This part of the plan usually falls under compliance SLAs, where data is required to be maintained for longer periods of time. Still, more and more, there are performance and warm-data requirements for even these data sets, where AI and other queries can benefit from even the oldest data.

One never knows what layer of resilience will be required for any single event. Having the best possible resilience enables any company to recover, and recover quickly, from an attack. But native resilience is just one of the outcomes we deliver.
Come back to read how we are delivering fast analytics outcomes in an environment that seeks to discover anomalies as fast as possible.

Exit Question: How resilient is your data today?

Jason Walker is a technical strategy director for cyber-related areas at Pure Storage and a real, live Florida Man. No animals or humans were injured in the creation of this post.

Ask us Everything About Cyber Resilience
Our latest Ask Us Everything session landed right in the middle of Cybersecurity Awareness Month, and the timing couldn’t have been better. The Pure Storage Community came ready with smart, practical questions about one thing every IT team has top of mind: how to build cyber resilience before an attack happens.

Pure1 Manage Assessment
Hey Cincy PUG, I found a cool feature for detecting changes on your FlashArray. Looking at Data Protection under the Assessment menu, I saw a lightning bolt on one of my arrays. That lightning bolt led me to an evaluation showing a significant drop in DRR for a group of volumes. It turns out the change was benign: one of my teammates had refreshed an environment, causing the change in the Data Reduction Ratio. I see this as just another way Pure1 Manage can help admins detect threats or problems with data sets. How are you using the tools in Pure1? Share something with the group! -Charles

Minutes to Meltdown
Join Pure Storage and Commvault for... 💥 Minutes to Meltdown | Manchester

This isn't your usual cyber security event! You'll take part in a LIVE simulated cyber-attack. Experience firsthand how security, infrastructure, legal, leadership, and others must unite when the worst happens. Gain insights into how your business can be better prepared, and learn from those who have been through this before.

Register here: 👉 https://discover.commvault.com/event-minutes-to-meltdown-manchester-with-pure-storage-registration.html

The SafeMode Seance: A Cyber Security Haunting
Topic: How are you protecting your data from cyber threats? Are you both protecting your data and preparing to recover in the event that your organization is impacted by a cyber event? Join us for a spooky, cyber-focused meeting: a supportive and open forum where we’ll share scary stories and explore solutions to ensure your data is protected from even the most ghoulish threats. This customer-driven discussion will focus on your experiences and challenges with protecting your data from a cyber attack. Pure Storage experts will offer insights and guidance to help you protect your data from the zombie apocalypse.

Get Involved: We're looking for security-focused individuals who would be willing to attend and share their perspective on how they are helping their organization protect against cyber threats and prepare in case recovery is needed. And all are welcome. You don't have to be a Pure Storage customer to attend. Join the community, talk to your peers, and have some fun.

Agenda:
- Welcome, Introductions, Updates
- Customer Presentation - How to use Pure1 Assessments to review and improve your security posture
- Customer Presentation - Something strange is happening, but we don't know what it is. How I used Pure1 AI CoPilot along with Varonis to narrow the scope on the "strange stuff"
- Pure Presentation (w/ alliance partners) - SafeMode, Cyber Resiliency, and Isolated Recovery Environments
- Panel/Q&A - Open discussion amongst the community, w/ security-focused individuals (hopefully) in attendance
- Anonymous Group Feedback: Share your thoughts and experiences regarding data protection. What’s working? What’s not? Where could you use some feedback from the community?
- Understanding Your Needs: What does your organization need to fully protect your data, and to recover if you were ever attacked? We’ll help you pinpoint what truly matters.
- Exploration Circle: Hear from Pure’s subject matter experts on what they are seeing regarding the latest cyber security and cyber resiliency topics.
- Support & Resources: Find out where you can get additional help, training, and resources.

Date: Wednesday, October 15th, 2-4pm ET
Location: Aces Pickleball, 2730 Maverick Dr, Norwood, OH 45212 (Factory 52)
RSVP: https://info.purestorage.com/2025-Q3AMS-COMREPLCRCincinnatiPUGLP_01---Registration-Page.html

Stick around after the Pure User Group meeting and enjoy Pies & Pints with Pure Storage, our partners, and fellow customers.

Spring is Calling, and so is Reds Baseball
I don't know about you, but I am more than ready for Spring; though I could definitely skip the rain. Wiping muddy dog paws after every walk is getting old! On the bright side, who else is ready for some Reds baseball? I have a few exciting updates and resources to share with the community:

🚀 PUG Meeting Update
charles_sheppar and I are currently hard at work on the next PUG meeting. We can't share the specifics just yet, but we are planning something unique and fun for the community. Stay tuned!

🛡️ Strengthening Your Cyber Resilience
Given the current geopolitical climate and the rise in cyber threats, now is the perfect time to audit your data protection. Features like SafeMode and Pure1 Security Assessments act as a resilient last line of defense. If you want to see these tools in action, we recently hosted an expert-led demo on building a foundation for cyber resilience. Watch the recording here: https://www.purestorage.com/video/webinars/the-foundations-of-cyber-resilience/6389889927112.html Questions? Reach out to your Everpure SE or partner for a deeper dive.

📅 Upcoming Events
- March 12: Nutanix Webinar. Exploring virtualization alternatives? Nutanix is hosting a session tomorrow focused on simplifying IT operations and highlighting the Everpure partnership. https://event.nutanix.com/simplifyitandonprem
- March 19: Or perhaps you're interested in running virtual machines alongside containerized workloads within K8s clusters. If that's the case, join Greg McNutt and Sagar Srinivasa for Virtualization Reimagined: Inside the Everpure Journey. https://www.purestorage.com/events/webinars/virtualization-reimagined.html
- March 19: Ask Us Everything About Storage for Databases. Join experts Anthony Nocentino, Ryan Arsenault, and Don Poorman for a live Q&A session. https://www.purestorage.com/events/webinars/ask-us-everything-about-storage-for-databases.html
- March 24: Presets & Workloads for Consistent DB Environments.
We’re extending the database conversation to discuss how Everpure helps you transition from "managing storage" to "managing data" through automated presets. https://www.purestorage.com/events/webinars/presets-and-workload-setups-for-consistent-database-environments.html

Understanding Deduplication Ratios
It’s super important to understand where deduplication ratios come from in relation to backup applications and data storage. Deduplication prevents the same data from being stored again, lowering the data storage footprint. On platforms hosting virtual environments, like FlashArray//X™ and FlashArray//C™, you can see tremendous amounts of native deduplication due to the repetitive nature of those environments. Backup applications and targets have a different makeup.

Even so, deduplication ratios have long been a talking point in the data storage industry and continue to be a decision point and factor in buying cycles. Data Domain pioneered this tactic to overstate its effectiveness, leaving customers thinking the vendor’s appliance must have a magic wand to reduce data by 40:1. I wanted to take the time to explain how deduplication ratios are derived in this industry and the variables to look for in figuring out exactly what to expect in terms of deduplication and data footprint.

Let’s look at a simple example of a data protection scenario.

Example: A company has 100TB of assorted data it wants to protect with its backup application. The necessary and configured agents go about doing the intelligent data collection and send the data to the target. Initially, and typically, the application will leverage both software compression and deduplication. Compression by itself will almost always yield a decent amount of data reduction. In this example, we’ll assume 2:1, which means the first data set goes from 100TB to 50TB. Deduplication doesn’t usually do much data reduction on the first baseline backup. Sometimes there are some efficiencies, like the repetitive data in virtual machines, but for the sake of this generic example, we’ll leave it at 50TB total.

So, full backup 1 (baseline): 50TB

Now, there are scheduled incremental backups that occur daily from Monday to Friday. Let’s say these daily changes are 1% of the aforementioned data set.
Each day, then, there would be 1TB of additional data stored. Five days at 1TB = 5TB. Add the 2:1 compression to reduce that, and you have an additional 2.5TB stored. The 50TB baseline plus 2.5TB of unique blocks means a total of 52.5TB of data stored.

Let’s check the deduplication rate now: 105TB/52.5TB = 2x.

You may ask: “Wait, that 2:1 is really just the compression? Where is the deduplication?” Great question, and the reason why I’m writing this blog. Deduplication prevents the same data from being stored again. With a single full backup and incremental backups, you wouldn’t see much more than just the compression. Where deduplication measures its impact is in the assumption that you would be sending duplicate data to your target. This is usually discussed as data under management.

Data under management is the logical data footprint of your backup data, as if you were regularly backing up the entire data set, not just changes, without deduplication or compression. For example, let’s say we didn’t schedule incremental backups but scheduled full backups every day instead. Without compression/deduplication, the data load would be 100TB for the initial baseline and then the same 100TB plus the daily growth:

Day 0 (baseline): 100TB
Day 1 (baseline+changes): 101TB
Day 2 (baseline+changes): 102TB
Day 3 (baseline+changes): 103TB
Day 4 (baseline+changes): 104TB
Day 5 (baseline+changes): 105TB
Total, with no compression/deduplication: 615TB

This 615TB total is data under management. Now, if we compare it to our actual, post-compression/post-dedupe number from before (52.5TB), we can figure out the deduplication impact: 615/52.5 = 11.714x.

Looking at this over a 30-day period, you can see how the dedupe ratios can get really aggressive.
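The week-long arithmetic above is easy to check with a few lines of Python. This is just a sketch of the math from this example; the variable names are mine, not from any backup product, and the 2:1 compression and 1% change rate are the same assumptions as above.

```python
# Reproduce the dedupe-ratio arithmetic from the example above.
# Illustrative sketch; assumptions match the blog's example scenario.

front_end_tb = 100.0   # front-end data set
change_rate = 0.01     # 1% daily change
days = 5               # Monday-Friday incrementals
compression = 2.0      # assumed 2:1 software compression

# Actual data stored: compressed baseline + compressed daily unique blocks
stored = front_end_tb / compression + (front_end_tb * change_rate * days) / compression
print(stored)          # -> 52.5

# Incremental schedule: logical data is one full plus the changes
dum_incremental = front_end_tb + front_end_tb * change_rate * days
print(dum_incremental / stored)   # -> 2.0 (mostly just the compression)

# Daily fulls: "data under management" counts every full as if stored raw
dum_fulls = sum(front_end_tb + front_end_tb * change_rate * d for d in range(days + 1))
print(dum_fulls)                  # -> 615.0
print(round(dum_fulls / stored, 3))   # -> 11.714
```

Changing `days` and `change_rate` shows how quickly the ratio inflates with retention, which is exactly the effect the 30-day example below illustrates.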
For example: 100TB x 30 days = 3,000TB, plus (1TB x 30 days) = 3,030TB of data under management.

3,030TB/65TB (actual data stored) = 46.62x dedupe ratio

In summary, with 100TB of front-end data, a 1% change rate, and 1 week of retention:
- Full backup + daily incremental backups = 52.5TB stored, and a 2x DRR
- Full daily backups = 52.5TB stored, and an 11.7x DRR

That is how deduplication ratios really work: they are a fictional function of “what if dedupe didn’t exist, but you stored everything on the disk anyway” scenarios. They’re a math exercise, not a reality exercise. Front-end data size, daily change rate, and retention are the biggest variables to look at when sizing or understanding the expected data footprint and the related data reduction/deduplication impact.

In our scenario, we’re looking at one particular data set. Most companies will have multiple data types, and there can be even greater redundancy when accounting for full backups across those as well. So while it matters, consider that a bonus.

Ask Us Everything about Cyber Resilience
Got questions about cyber resilience + storage? 🤔 Get answers.

Register Now | October 17, 2025 | 09:00am PT • 12:00pm ET

In this month’s episode of Ask Us Everything, we’re tackling cyber resilience head-on. We'll start with a quick overview of how to use the features already built into your Pure Storage systems to help you defend your platform against malicious users, detect cyber threats and ransomware attacks, and minimize disruption with reliable and rapid recovery. Then, we’ll answer your questions about existing capabilities like SafeMode™ Snapshots, layered resilience, or our latest Fall launch announcements. Our experts will help you prepare and build your confidence in keeping your data secure and available.

Ask a question for your chance to win: The first 10 eligible Pure Storage customers to submit a question during the live webinar will receive one (1) Pure Storage Customer Appreciation Kit (approximate retail value: $65). Limit one kit per customer. Offer valid only during the live event and while supplies last. See Terms and Conditions.

Flashcrew Manchester
🤝 Flashcrew Pure Usergroup | Manchester | For our amazing customers!

Connect with fellow Pure users and dive deep into the //Accelerate announcements. Learn how to extract even more value from the Pure ecosystem and get your technical questions answered by the experts.

Register here: https://info.purestorage.com/2025-Q2EMEA-UKREPLHCOFY26Q3-FlashCrew-Manchester-LP_01---Registration-Page.html

We also open these up to non-customers interested in Pure, helping you learn from those already benefiting from the Pure Enterprise Data Platform. Please DM me if you would like an invite.