Getting Started: 5 Steps to Get the Most Out of the Pure Customer Community
Welcome! You've taken the first step and created an account here. What to do next, you ask? Here are five simple steps to take after registering to make sure you get the most out of this community.

1. Fill out your profile: Let the community know who you are! Click your avatar in the top right corner of this window and select 'My Settings' from the dropdown. Fill in your name, location, and bio, then pick one of the default avatars or upload your own image.

2. Write an introduction post: Head over to the Social Space and write your intro post. Tell us about yourself, your role at your company, and your goals for participating in this community. What have you been thinking about a lot lately at work? (And we won't shy away from pictures of your pets either!)

3. Follow a couple of Forum areas: Find the products you use most and the solution areas you're most focused on in our Forums, and click the bell icon in the upper right of those forums so you get notifications on the latest activity. If you work in Finance, Healthcare, Public Sector, or Telco, there are groups dedicated to the unique needs of your industry too. And if you're an open source or automation fan, a Cloud Native and Kubernetes devotee, or a Pure Partner, there's a dedicated group you can join for each of those areas as well.

4. Join your local Pure User Group: Click Groups in the top nav and select Pure User Groups (formerly known as FlashCrew). Select your region, find the group for your local area, click on that group, and then click 'Join Group'. This ensures you hear about any Pure events happening in your local area, including when and where the next meetup is.

5. Pick 3-5 tags to follow: This community makes heavy use of tags; we require them for every post, so each thread you browse will have them. Find the tags most relevant to your interest areas and click the bell icon on those pages to keep up to date with the latest posts in those categories, regardless of which forum or group the discussion happens in.

Finally, feel free to ask questions! Your friendly admins (bmcdougall and Ludes) are here to answer any questions you have and to take suggestions. And we have deputized experts across Pure Storage to be on hand for deep technical questions. So don't be shy; there's always someone around to help you out.

Pure Certifications
Hey gang, if any of you currently hold a FlashArray certification, there is an alternative to retaking the test to renew your cert. The Continuing Pure Education (CPE) program counts learning activities and community engagement and contribution hours toward renewing your FA certification. I just successfully renewed my FlashArray Storage Professional cert by tracking my activities. Below are the details I received from Pure:

- Customers can earn 1 CPE credit per hour of session attendance at Accelerate, for a maximum of 10 CPEs total (i.e., up to 10 hours of sessions). Sessions must be attended live. I would go ahead and add all the sessions you attended at Accelerate to the CPE_Submission form.
- Associate-level certifications will auto-renew as long as there is at least one active higher-level certification (e.g., Data Storage Associate will auto-renew anytime a Professional-level cert is renewed). All certifications other than the Data Storage Associate must be renewed separately.
- At this time, the CPE program only applies to FlashArray-based exams. Non-FA exams may be renewed by retaking the respective test every three years.

You should be able to get the CPE submission form from your account team. Once complete, email your recertification log to peak-education@purestorage.com for formal processing.

Cincinnati PUG Community; We Need Your Help!
"In the midst of chaos, there is also opportunity." These are the words of Sun Tzu, who famously wrote The Art of War. While customers, partners, and Puritans all get acquainted with the rebrand from Pure Storage to EverPure, it also gives us the opportunity to create a unique identity for our PUG chapter. And this is where we need your help. We know our community is filled with creative minds, and Cincinnati has many unique identities: from our beautiful skyline (and chili), to our sports teams (Reds, Bengals, FCC, Cyclones, UC, X), to our landmarks (Roebling Bridge, the Museum Center and Zoo, Fountain Square). This is your chance to help us create our own PUG identity. Get your creative juices flowing and visit the link below for additional details on how YOU can help us create the Cincinnati PUG logo. Help create your chapter's logo | Everpure Community

Don't Wait, Innovate: Long-Life Release 6.9.0 Is Your Gateway to Continuous Innovation
How Pure Releases Work (and Why You Should Care)

Pure Storage doesn't make you choose between stability and innovation:

- Feature Releases arrive monthly and are supported for 9 months. They're production-ready and ideal if you like to live on the cutting edge.
- Long-Life Releases (LLRs) bundle those feature releases into a thoroughly tested version that is supported for three years. LLR 6.9.0 is essentially all the innovation of those feature releases, rolled into one update.

This dual approach means you can adopt new features as soon as they're ready or wait for the next stable release; either way, you keep moving forward.

Not sure what features you're missing? Not a problem: we have a tool for that. Pure1's AI Copilot can tell you exactly what you've been missing. Here's how easy it is to find out: log into Pure1, click on the AI Copilot tab, and type your question. A coworker reminded me of this last week, so I tried: "Please provide all features for FlashArray since version 6.4 of Purity OS." Copilot returned a detailed rundown of new capabilities across each release. In just a couple of minutes, I saw everything I'd overlooked, with no digging through release notes or calling support required.

A Taste of What You've Been Missing

Here's a snapshot of the goodies you may have missed across the last few years of releases:

Platform enhancements:
- FlashArray//E platform (6.6.0) extends Pure's simplicity to tier-3 workloads.
- Gen 2 chassis support (6.8.0) delivers more performance and density with better efficiency.
- 150 TB DirectFlash modules (6.8.2) boost capacity without compromising speed.

File services advancements:
- FlashArray File (GA in 6.8.2) lets you manage block and file workloads from the same array.
- SMB Continuous Availability shares (6.8.6) keep file services online through failures.
- Multi-server/domain support (6.8.7) scales file services across larger environments.
Security and protection:
- Enhanced SafeMode protection (6.4.3) quadruples local snapshot capacity and adds hardware tokens for instant data locking, which is vital in a ransomware era.
- Over-the-wire encryption (6.6.7) secures asynchronous replication.

Pure Fusion: We can't talk about this enough. Think of it as fleet intelligence. Fusion applies your policies across every array and optimizes placement automatically, cutting operational overhead.

Purity OS: It's Not Just Firmware

Every Purity OS update adds value to your existing hardware. Recent improvements include support for new NAND sources, "titanium" efficiency power supplies, and advanced diagnostics. These aren't minor tweaks; they're part of Pure's Evergreen promise that your hardware investment keeps getting better over time.

Why Waiting Doesn't Pay Off

It's tempting to delay updates, but with Pure, waiting often means you're missing out on:
- Security upgrades that counter new threats.
- Performance gains like NVMe/TCP support and ActiveCluster improvements.
- Operational efficiencies such as open metrics and better diagnostics.
- Future-proofing features that prepare you for upcoming innovations.

Your Roadmap to Capture These Benefits

1. Assess your current state: Use AI Copilot to see exactly what you'd gain by moving to LLR 6.9.0.
2. Plan your update: Pure's non-disruptive upgrades let you modernize without downtime.
3. Explore new features: Dive into Fusion, enhanced file services, and expanded security capabilities.
4. Connect with the community: Share experiences with other users to accelerate your learning curve.

The Bottom Line

Pure's Evergreen model means your hardware doesn't just retain value; it continues to gain it. Long-Life Release 6.9.0 is a gateway to innovation. In a world where data is your competitive edge, standing still is equivalent to moving backward. Ready to see what you've been missing?
Log into Pure1, fire up Copilot, and let it show you the difference between where you are and where you could be.

Complexity Creeps. Let's Audit It Before It Breaks You.
Complexity in IT isn't built overnight, and it won't be unwound that way either. This blog walks through a practical, no-fluff approach to auditing and simplifying your IT environment: building a visibility map of your tools and integrations, prioritizing what to fix, executing cleanly, and proving the value with real metrics. This is about intentional, incremental change. Win big. Choose simplicity.

Simplifying Observability: Native OpenTelemetry in Purity
As enterprises modernize and accelerate their infrastructure through automation, blind spots become more expensive. When systems move faster, teams need telemetry that's reliable, portable, and easy to integrate across a heterogeneous stack. Pure Storage's Enterprise Data Cloud vision reflects that shift: infrastructure that delivers cloud-like simplicity and speed while preserving the control, security, and performance enterprises expect. Fusion supports this by standardizing and scaling self-service workflows, turning storage into an on-demand platform. But faster operations require a stronger feedback loop. As automation increases, teams need confidence that systems remain healthy and predictable. That's why consolidated observability is foundational. Instead of running separate monitoring tools per layer, organizations are centralizing telemetry into a single observability platform that can correlate signals end-to-end: from the end user's experience (e.g., a browser or mobile app), through the network and application code, all the way down to infrastructure like servers, databases, containers, and storage. This consolidation reduces redundant tools and fragmented dashboards while giving teams the correlated insights they need to resolve incidents faster and make better decisions.

The Siloed Vendor Problem

Yet achieving this unified vision has proven challenging. Traditional infrastructure vendors have long provided proprietary monitoring tools designed exclusively for their own products. A storage vendor offers one monitoring interface, the compute vendor another, and the network vendor yet another. Each tool uses different data formats, separate dashboards, and incompatible alerting mechanisms. For organizations running heterogeneous environments (which is nearly all of them), this creates an untenable situation.
IT teams must context-switch between multiple tools, correlate data manually across platforms, and maintain expertise in numerous vendor-specific interfaces. When an application performance issue arises, determining whether the root cause lies in storage latency, network congestion, or compute resource exhaustion becomes an exercise in detective work across disconnected systems. The promise of consolidated observability cannot be realized with vendor-specific, siloed monitoring tools. A different approach is needed.

The Open Standard Solution

This challenge has driven the industry toward open, vendor-agnostic standards that enable telemetry interoperability. OpenMetrics emerged as one such standard, providing a common data model for exposing metrics (counters, gauges, and histograms) in a format that any observability platform can consume. By standardizing metric exposition, OpenMetrics reduced vendor lock-in and became foundational to Prometheus-based monitoring at scale. However, standardizing the format of metrics is only one part of what organizations need to make consolidated observability work in practice. Enterprises also need consistency in how telemetry is named, described, transported, and exported, so that infrastructure data can flow cleanly across heterogeneous environments without bespoke integrations. Enter OpenTelemetry, which expands on the same vendor-neutral principles to create a comprehensive observability framework. In other words, it helps ensure telemetry isn't just emitted in a readable format, but is also structured and delivered in a way that remains portable across vendors and backends. Think of it as establishing the equivalent of a USB standard for telemetry data: any "device" (an application or infrastructure component) can plug into any "peripheral" (an observability platform) without requiring proprietary connectors. The primary benefit is profound: freedom from vendor lock-in.
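The "any device, any peripheral" idea shows up directly in an OpenTelemetry Collector configuration, where receivers and exporters are mixed and matched freely. Here is a minimal sketch of such a pipeline (the endpoint is a placeholder, not a real service, and this is an illustration rather than Purity's actual configuration):

```yaml
# Hedged sketch of an OpenTelemetry Collector pipeline: accept OTLP
# telemetry from any source and forward it to any OTLP-capable backend.
receivers:
  otlp:
    protocols:
      grpc:        # listen for OTLP over gRPC

exporters:
  otlphttp:
    endpoint: https://observability.example.com:4318   # placeholder backend

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlphttp]
```

Swapping the backend means changing only the exporter section; the sources emitting telemetry are untouched, which is exactly the decoupling the standard is meant to provide.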
Organizations can choose best-of-breed observability platforms based on capabilities and cost rather than being constrained by what their infrastructure vendors support.

The External Agent Bottleneck

OpenTelemetry and OpenMetrics have made consolidated observability technically feasible, but most storage vendors have adopted these standards through what can only be described as a "bolt-on" approach. This forces customers to manage a complex chain of external agents, sidecars, or dedicated VMs just to get telemetry from their platforms visualized on their dashboards. The problem is two-fold:

- Operational overhead: Instead of simply consuming data, IT teams are burdened with sizing, patching, and troubleshooting the monitoring infrastructure itself.
- New failure modes: If an agent crashes or becomes misconfigured, visibility into critical infrastructure disappears precisely when it's needed most. Teams find themselves monitoring their monitoring infrastructure, a meta-problem that defeats the original purpose.

The Native Integration Imperative

In the Pure Storage platform, observability is a first-class capability instead of an afterthought. Pure Storage has therefore taken a different path: an OpenTelemetry collector embedded into Purity OS. Instead of asking customers to deploy and maintain external agents, exporters, or intermediary infrastructure, Pure Storage platforms will now expose telemetry in standardized OpenTelemetry format as an intrinsic platform capability. The result is storage telemetry sent directly into any OpenTelemetry-compatible observability platform of choice (e.g., Datadog, Dynatrace, Splunk, Grafana, etc.).

[Figure: numbers represent the sequence of steps in the workflow.]

Pure Storage's commitment has always been simplicity. Native OpenTelemetry in Purity OS extends that principle to observability: less integration friction, fewer moving parts, and more time spent acting on insight instead of maintaining the pipeline.
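To make the metric-exposition piece concrete, here is a minimal, standard-library-only sketch of the plain-text format popularized by Prometheus and standardized by OpenMetrics. The metric names are hypothetical, and this is not how Purity's embedded collector is implemented; it only illustrates why a common text format is easy for any platform to consume:

```python
# Hedged illustration: rendering metrics in an OpenMetrics-style text
# exposition format using only the standard library. Metric names are
# made up for the example.
def render_openmetrics(metrics):
    """Render (name, type, help, value) tuples as OpenMetrics-style text."""
    lines = []
    for name, mtype, help_text, value in metrics:
        lines.append(f"# HELP {name} {help_text}")  # human-readable description
        lines.append(f"# TYPE {name} {mtype}")      # counter, gauge, histogram...
        lines.append(f"{name} {value}")             # the sample itself
    lines.append("# EOF")                           # OpenMetrics terminator
    return "\n".join(lines) + "\n"

sample = [
    ("array_read_ops_total", "counter", "Total read operations served.", 125),
    ("array_used_bytes", "gauge", "Used capacity in bytes.", 42000000000),
]
print(render_openmetrics(sample), end="")
```

Because the output is just self-describing text, any scraper that speaks the format can consume it without a vendor-specific connector.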
More information on the native integration of the OpenTelemetry Collector within Purity//FB can be found here; Purity//FA support is to follow soon.

Who's using Pure Protect?
Hey everyone, just wondering if anyone else is using Pure Protect yet. We have gone through the quick start guide and have a VMware-to-VMware configuration set up. We have configured our first policy and group using a test VM, but it seems to be stuck in the protection phase. I would be very interested to hear what others have seen or experienced. -Charles

Boston Pure User Group (PUG) at Trillium - Fort Point!
Simplify IT, Empower Data - Over a Pint at Trillium - Fort Point

Join us on August 21, 2025 to connect, learn, and engage with your fellow IT pros for an afternoon filled with exciting announcements from the recent Pure//Accelerate event and our vision for the Enterprise Data Cloud, as well as an engaging discussion on modern virtualization and a demo of Fusion. Fusion is a fully integrated platform that federates multiple arrays, such as FlashArray and FlashBlade, into a unified fleet, enabling centralized, cloud-like management, streamlined resource provisioning, and enhanced visibility across multi-array environments. Rob Quast, Principal Technologist at Pure Storage, will be presenting on the above topics with additional input from other Pure technologists. The complete agenda is below. Please register if you plan on attending: Register Here

Agenda
2:00 PM - Welcome & Cheers: Light intro by host & Pure representatives. Local brew served!!
2:15 PM - Accelerate Highlights: What You Missed (or Want More Of): A high-level recap of key announcements: Fusion, Evergreen One, FlashBlade//SR2
2:45 PM - Enterprise Data Cloud: The Vision and The Why: Why it matters: cutting complexity, controlling cost, and scaling for AI
3:15 PM - Fusion in Action: Simplifying Storage with Intelligence: Live demo or use-case storytelling around automation, presets, and governance
3:45 PM - Break & Bites: Grab a drink, mingle, enjoy local food
4:15 PM - Rethinking Virtualization in 2025: What's next after VMware? Discuss Pure + Nutanix, KubeVirt, Azure/AWS paths
4:45 PM - Ask Me Anything (AMA) Panel: Interactive Q&A with Pure team + customer guest if available
5:15 PM - Cheers & Networking: Open networking, brewery tour optional

We look forward to seeing you!

Pure Storage Delivers Critical Cyber Outcomes
"We don't have storage problems. We have outcome problems." - Pure customer in a recent cyber briefing

No matter what we are buying, what we are really buying is a desired outcome. If you buy a car, you are buying some sort of outcome or multiple outcomes: Point A to Point B, comfort, dependability, seat heaters, or if you are like me, a real, live Florida Man, seat coolers! The same is true when solving for cyber outcomes, and a storage foundation that drives cyber resilience is often overlooked. A strong storage foundation improves data security, resilience, and recovery. With these characteristics, organizations can recover in hours vs. days. Here are some top cyber resilience outcomes Pure Storage is delivering:

- Native, Layered Resilience
- Fast Analytics
- Rapid Restore
- Enhanced Visibility

We will tackle all of these in this blog space (multi-part post alert!), but let's start with the native, layered resilience Pure provides customers. Layered resilience refers to a comprehensive approach to ensuring data protection and recovery through multiple layers of security and redundancy. This architecture is designed to provide robust protection against data loss, corruption, and cyber threats, ensuring business continuity and rapid recovery in the event of a disaster.

Why is layered resilience important? Different data needs different protection. My photo collection, while important to me, doesn't require the same level of protection as the critical application data needed to keep the company running. Layered resilience means there need to be different layers of resilience and recovery. Super critical data needs super critical recovery. We are referring to the applications that are the lifeblood of organizations: order processing, patient services, or trading applications. These may only account for 5% of your data, but drive 95% of the revenue. Many organizations protect these with high availability, which provides excellent resilience against disasters and system outages.
But for malicious events, such as ransomware, protection is needed to ensure that recoverable data is available if an attack corrupts or destroys the production data. Scheduled snapshots can protect that data from the time the data is born. Little baby data. Protect the baby! Pure snapshots are a critical feature, providing efficient, zero-footprint copies of data that can be quickly created and restored, ensuring data protection and business continuity. Pure snapshots are optimized for data reduction, ensuring minimal space consumption. This is achieved through global data reduction technologies that compress and deduplicate data, making snapshots space-efficient. They are designed to be simple and flexible, with zero performance overhead and the ability to create tens of thousands of snapshots instantly. They are also integrated with Pure1 (part of our Enhanced Visibility discussion) for enhanced visibility, management, and security, reducing the need for complex orchestration and manual intervention. Snapshots can be used to create new volumes with full capabilities, allowing for mounting, reading, writing, and further snapshotting without dependencies on one another. This flexibility supports various use cases, including point-in-time restores and data recovery. In events that require clean, secure recovery, it is far more desirable to leverage snapshots: you can scan them to determine cleanliness and safety, often in parallel, and resetting to an earlier point in time takes seconds rather than days. But not even these amazing local snapshots are enough. What if your local site is rendered unavailable for some reason? Do you have control of your data to be able to recover in that scenario? Replicating those local snapshots to a second site could enable more flexibility in recovery.
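As a rough illustration of what a scheduled-snapshot policy has to decide, here is a hedged, standalone sketch of a simple retention rule (keep the newest N snapshots, plus the newest snapshot from each of the last M calendar days). The function and numbers are illustrative assumptions, not Pure's protection-policy implementation:

```python
# Hedged sketch of a snapshot retention rule; not Pure's implementation.
from datetime import datetime, timedelta

def snapshots_to_keep(snaps, recent=24, daily=7):
    """Keep the newest `recent` snapshots, plus the newest snapshot taken
    on each of the most recent `daily` distinct calendar days."""
    snaps = sorted(snaps, reverse=True)      # newest first
    keep = set(snaps[:recent])               # short-term, fine-grained points
    seen_days = set()
    for s in snaps:                          # longer-term daily points
        if s.date() not in seen_days:
            seen_days.add(s.date())
            if len(seen_days) <= daily:
                keep.add(s)
    return sorted(keep)

# Hourly snapshots over ten days: 240 candidates, trimmed by the policy.
base = datetime(2025, 1, 10, 23, 0)
hourly = [base - timedelta(hours=h) for h in range(240)]
kept = snapshots_to_keep(hourly)
print(len(kept))  # 24 recent + 6 extra daily points = 30
```

The point of the sketch is the trade-off it encodes: fine-grained recent restore points for fast, clean recovery, with coarser daily points retained further back.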
We have had customers leverage our High Availability solution (ActiveCluster) across sites and then engage snapshots and asynchronous replication to a third site as part of their recovery plan. Data that requires extended retention and granularity is typically handled by a data control plane application that streams a backup copy to a repository. This is usually a last line of defense in case of an event, as the recovery time objective is longer when considering a streaming recovery of 50%, 75%, or 100% of a data center. Still, this is a layer of resiliency that a comprehensive plan should account for. And if these repositories are on Pure Storage, they too can be protected by SafeMode methodologies and other security measures such as Object Lock API, Freeze Locked Objects, and WORM compliance. Most importantly, this last line of defense can be supercharged for recovery by the predictable, performant platform Pure provides. Some outcomes of this layer of resilience involve Isolated Recovery Environments, which add further security and create Clean Rooms that isolate recovery so you will not re-introduce the event origin back into production. In these solutions, the speed benefits that Pure provides are critical to making these designs a reality. Of course, the final frontier is the archive layer. This is the part of the plan that usually falls under a compliance SLA, where data is required to be maintained for longer periods of time. Still, more and more, there are performance and warm-data requirements for even these data sets, where AI and other queries can benefit from even the oldest data. One never knows what layer of resilience will be required for any single event. Having the best possible resilience enables any company to recover, and recover quickly, from an attack. But native resilience is just one of the outcomes we deliver.
Come back to read how we are delivering fast analytics outcomes in an environment that seeks to discover anomalies as fast as possible. Exit question: How resilient is your data today?

Jason Walker is a technical strategy director for cyber-related areas at Pure Storage and a real, live Florida Man. No animals or humans were injured in the creation of this post.