The Foundations of Cyber Resilience: Visibility and Indelibility
February 24 | Register now!

Ransomware and operational risk haven’t gone away, yet many organizations still overlook the fundamentals that provide the strongest protection. In this back-to-basics webinar, we’ll break down how Pure Storage SafeMode™ Snapshots and Pure1® security assessments work together to form a resilient last line of defense for your data. The live demonstration will show how they protect data integrity, accelerate recovery, and simplify security operations.

Key takeaways:
- How immutable SafeMode Snapshots protect data from ransomware and insider threats
- Best practices for snapshot policies that balance recovery speed and operational efficiency
- What “secure by default” looks like when these features work together in real environments

Register Now!

Simplifying Observability: Native OpenTelemetry in Purity
As enterprises modernize and accelerate their infrastructure through automation, blind spots become more expensive. When systems move faster, teams need telemetry that’s reliable, portable, and easy to integrate across a heterogeneous stack. Pure Storage’s Enterprise Data Cloud vision reflects that shift: infrastructure that delivers cloud-like simplicity and speed while preserving the control, security, and performance enterprises expect. Fusion supports this by standardizing and scaling self-service workflows, turning storage into an on-demand platform.

But faster operations require a stronger feedback loop. As automation increases, teams need confidence that systems remain healthy and predictable. That’s why consolidated observability is foundational. Instead of running separate monitoring tools per layer, organizations are centralizing telemetry into a single observability platform that can correlate signals end to end: from the end user’s experience (e.g., browser or mobile app), through the network and application code, all the way down to infrastructure like servers, databases, containers, and storage. This consolidation reduces redundant tools and fragmented dashboards while giving teams the correlated insights they need to resolve incidents faster and make better decisions.

The Siloed Vendor Problem

Yet achieving this unified vision has proven challenging. Traditional infrastructure vendors have long provided proprietary monitoring tools designed exclusively for their own products. A storage vendor offers one monitoring interface, the compute vendor another, and the network vendor yet another. Each tool uses different data formats, separate dashboards, and incompatible alerting mechanisms. For organizations running heterogeneous environments (which is nearly all of them), this creates an untenable situation.
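The payoff of end-to-end correlation can be sketched in a few lines of Python. This is a hypothetical illustration only — the event shapes, layer names, and 30-second window are invented for the example, not any vendor's API: events from different layers that land in the same time window are grouped and treated as one incident.

```python
from datetime import datetime, timedelta

def correlate(events, window_s=30):
    """Group telemetry events from different layers (app, network, storage)
    that occur within the same time window, so a single incident can be
    viewed end to end instead of per-tool."""
    events = sorted(events, key=lambda e: e["ts"])
    groups, current = [], []
    for e in events:
        # Start a new group when the gap from the group's first event
        # exceeds the correlation window.
        if current and (e["ts"] - current[0]["ts"]) > timedelta(seconds=window_s):
            groups.append(current)
            current = []
        current.append(e)
    if current:
        groups.append(current)
    return groups

# Illustrative signals from three layers around one incident
t0 = datetime(2025, 1, 1, 12, 0, 0)
incident = correlate([
    {"ts": t0, "layer": "storage", "signal": "read latency spike"},
    {"ts": t0 + timedelta(seconds=5), "layer": "app", "signal": "slow query"},
    {"ts": t0 + timedelta(seconds=8), "layer": "browser", "signal": "page load > 3s"},
])
```

With a shared timeline, the storage latency spike, the slow query, and the degraded page load collapse into a single correlated incident instead of three unrelated alerts in three tools.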
IT teams must context-switch between multiple tools, correlate data manually across platforms, and maintain expertise in numerous vendor-specific interfaces. When an application performance issue arises, determining whether the root cause lies in storage latency, network congestion, or compute resource exhaustion becomes an exercise in detective work across disconnected systems. The promise of consolidated observability cannot be realized with vendor-specific, siloed monitoring tools. A different approach is needed.

The Open Standard Solution

This challenge has driven the industry toward open, vendor-agnostic standards that enable telemetry interoperability. OpenMetrics emerged as one such standard, providing a common data model for exposing metrics (counters, gauges, and histograms) in a format that any observability platform can consume. By standardizing metric exposition, OpenMetrics reduced vendor lock-in and became foundational to Prometheus-based monitoring at scale.

However, standardizing the format of metrics is only one part of what organizations need to make consolidated observability work in practice. Enterprises also need consistency in how telemetry is named, described, transported, and exported, so that infrastructure data can flow cleanly across heterogeneous environments without bespoke integrations. Enter OpenTelemetry, which expands on the same vendor-neutral principles to create a comprehensive observability framework. In other words, it helps ensure telemetry isn’t just emitted in a readable format, but is also structured and delivered in a way that remains portable across vendors and backends. Think of it as establishing the equivalent of a USB standard for telemetry data: any "device" (an application or infrastructure component) can plug into any "peripheral" (an observability platform) without requiring proprietary connectors. The primary benefit is profound: freedom from vendor lock-in.
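As a concrete picture of what standardized exposition buys, here is a small pure-Python sketch that renders metric samples in an OpenMetrics-style text format. The metric name and labels are invented for illustration, and real exporters use library code rather than hand-built strings:

```python
def render_openmetrics(name, mtype, help_text, samples):
    """Render metric samples in an OpenMetrics-style text exposition format.
    samples: list of (labels_dict, value) pairs."""
    lines = [
        f"# TYPE {name} {mtype}",
        f"# HELP {name} {help_text}",
    ]
    for labels, value in samples:
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        lines.append(f"{name}{{{label_str}}} {value}")
    lines.append("# EOF")  # OpenMetrics requires a final EOF marker
    return "\n".join(lines) + "\n"

# Hypothetical gauge exposed for two arrays
exposition = render_openmetrics(
    "array_read_latency_ms", "gauge", "Read latency per array in milliseconds",
    [({"array": "fa-01"}, 0.42), ({"array": "fa-02"}, 0.38)],
)
```

Because the output is a well-defined text format rather than a proprietary payload, any Prometheus-compatible scraper can consume it without a vendor-specific connector.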
Organizations can choose best-of-breed observability platforms based on capabilities and cost rather than being constrained by what their infrastructure vendors support.

The External Agent Bottleneck

OpenTelemetry and OpenMetrics have made consolidated observability technically feasible, but most storage vendors have adopted these standards through what can only be described as a "bolt-on" approach. This forces customers to manage a complex chain of external agents, sidecars, or dedicated VMs just to get telemetry from their platforms visualized on their dashboards. The problem is two-fold:
- Operational overhead: Instead of simply consuming data, IT teams are burdened with sizing, patching, and troubleshooting the monitoring infrastructure itself.
- New failure modes: If an agent crashes or becomes misconfigured, visibility into critical infrastructure disappears precisely when it's needed most. Teams find themselves monitoring their monitoring infrastructure, a meta-problem that defeats the original purpose.

The Native Integration Imperative

In the Pure Storage platform, observability is a first-class capability instead of an afterthought. Pure Storage has therefore taken a different path: an OpenTelemetry collector embedded into Purity OS. Instead of asking customers to deploy and maintain external agents, exporters, or intermediary infrastructure, Pure Storage platforms will now expose telemetry in the standardized OpenTelemetry format as an intrinsic platform capability. The result: storage telemetry flows directly into any OpenTelemetry-compatible observability platform of choice (e.g., Datadog, Dynatrace, Splunk, Grafana).

[Figure: numbers represent the sequence of steps in the workflow]

Pure Storage’s commitment has always been simplicity. Native OpenTelemetry in Purity OS extends that principle to observability: less integration friction, fewer moving parts, and more time spent acting on insight instead of maintaining the pipeline.
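On the receiving side, an OpenTelemetry Collector pipeline is typically declared in a short configuration file. The sketch below is a generic, hypothetical example — the endpoints and exporter are placeholders, not Pure-specific settings: it accepts OTLP telemetry, batches it, and forwards it to a backend of choice.

```yaml
receivers:
  otlp:                      # accept OTLP telemetry from any source
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch: {}                  # batch signals before export

exporters:
  otlphttp:                  # placeholder backend endpoint
    endpoint: https://observability.example.com/otlp

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```

Swapping observability platforms then becomes a change to the `exporters` section, not a re-instrumentation of the infrastructure.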
More information on the native integration of the OpenTelemetry Collector within Purity//FB can be found here. Purity//FA support will follow soon.

Pure1 Manage Assessment
Hey Cincy PUG, I found a cool feature for detecting changes on your FlashArray. Looking at Data Protection under the Assessment menu, I saw a lightning bolt on one of my arrays. That lightning bolt led me to an evaluation showing a significant drop in the data reduction ratio (DRR) for a group of volumes. It turns out the change was benign: one of my teammates had refreshed an environment, which caused the drop in DRR. I see this as just another way Pure1 Manage can help admins detect threats or problems with data sets. How are you using the tools in Pure1? Share something with the group!

-Charles

FlashBlade & SQL Server: Enterprise Scale Rapid Recovery
February 19 | Register now!

SQL Server estates are continually growing, and traditional backup targets often become the primary bottleneck, failing to meet aggressive recovery time objectives. Stop managing backups and start orchestrating them. Explore how Pure Storage FlashBlade®, as part of the Enterprise Data Cloud, overcomes these constraints with a scale-out architecture built for rapid backup and recovery throughput. We will share baseline performance data, including restore speeds exceeding 100 TB/hr, and provide practical guidance on integrating FlashBlade with native T-SQL backup using the SMB and S3 protocols.

Key Takeaways:
- Operational simplicity: Use policy templates to eliminate manual configuration and human error.
- Flexible multi-protocol integration: Advantages of using both SMB and S3-compatible object storage as native backup targets, including the use of multiple Virtual Interfaces (VIFs) to maximize throughput.
- Optimized performance tuning: Gain insights from real-world validation data on how to balance host CPU usage and compression (using ZSTD) to achieve the most efficient backup and restore windows for your environment.

Register Now!

Pure Certifications
Hey gang, if any of you currently hold a FlashArray certification, there is an alternative to retaking the test to renew your cert. The Continuing Pure Education (CPE) program counts learning activities, community engagement, and contribution hours toward renewing your FA certification. I just successfully renewed my FlashArray Storage Professional cert by tracking my activities. Below are the details I received from Pure:
- Customers can earn 1 CPE credit per hour of session attendance at Accelerate, for a maximum of 10 CPEs total (i.e., up to 10 hours of sessions). Sessions must be attended live. Go ahead and add all the sessions you attended at Accelerate to the CPE_Submission form.
- Associate-level certifications will auto-renew as long as there is at least one active higher-level certification (e.g., Data Storage Associate will auto-renew anytime a Professional-level cert is renewed). All certifications other than the Data Storage Associate must be renewed separately.
- At this time, the CPE program only applies to FlashArray-based exams. Non-FA exams may be renewed by retaking the respective test every three years.

You should be able to get the CPE submission form from your account team. Once complete, email your recertification log to peak-education@purestorage.com for formal processing.

Ask Us Everything: Evergreen//One™ Edition — What the Community Learned
A recent Ask Us Everything (AUE) session on Pure Storage Evergreen//One™ was a lively, deeply technical conversation—and exactly the kind of dialogue that makes the Pure Community special. Here are some of the biggest takeaways, organized around the questions asked and the insights that followed.

Migrating and Managing Nutanix Workloads on Pure Storage FlashArray
January 27 | Register Now!

Let’s move past the slide deck and get into the Nutanix and Pure Storage solution. Join two of our senior technical experts for a live, end-to-end demonstration of this new integrated solution. We will simulate a real-world deployment scenario, showing you exactly how to leverage the performance of Pure Storage FlashArray™ within your Nutanix environment. This is a technical "how-to" designed for architects and admins who want to see the plumbing behind the partnership.

Here’s what we’ll demo live:
- Connectivity and setup: A step-by-step connection of the FlashArray to the Nutanix cluster, ensuring optimal configuration for low-latency workloads.
- Seamless migration: The workflow for migrating active workloads onto the joint solution without breaking a sweat.
- Provisioning in action: Create a virtual machine and track the corresponding volume directly in the FlashArray management console.
- Advanced data protection: How to execute and manage high-performance snapshots for instant recovery and data mobility.

Register now!

We are just one week away from PUG #3
On January 28th, the Cincinnati Pure User Group will convene at Aces Pickleball to discuss enterprise file. We will be joined by Matt Niederhelman, Unstructured Data Field Solutions Architect, who will help guide the conversation and answer questions about what he is seeing among other customers. Click the link below to register and come join us. Help us guide the conversation with your ideas for future topics.

https://info.purestorage.com/2025-Q4AMS-COMREPLTFSCincinnatiPUG-LP_01---Registration-Page.html

Cincinnati Pure User Group: Real-time Enterprise File
Register Now => Join us for an exclusive Pure User Group (PUG) session dedicated to the future of file services. This isn't just a technical briefing; it’s a community gathering designed for peer-to-peer learning and strategic roadmap building. We’re diving deep into the Real-time Enterprise File vision—exploring how to unify your environment across FlashArray and FlashBlade to eliminate silos and escape the "forklift upgrade" trap forever. Whether you’re managing simple departmental shares or complex AI/ML pipelines, this is your chance to connect with local experts, share battle-tested insights, and see how to make your data plane as agile as your business demands.

What You’ll Learn
- The Power of Choice: Understand how Pure’s file capabilities span the entire portfolio. We’ll clarify exactly when to leverage FlashArray vs. FlashBlade for workloads ranging from VDI and VMware over NFS to massive AI/ML repositories.
- Production-Ready Excellence: Go beyond the basics with a look at the capabilities that matter in the real world: multi-protocol support (SMB/NFS), directory integration, Kerberos security, and multi-tenancy for segmented environments.
- The "Last Refresh" Strategy: Get practical, no-nonsense guidance on sizing and migration tooling. Learn how to consolidate legacy filers and execute a migration that ensures you never have to do a forklift upgrade again.
- Peer-to-Peer Wisdom: This is a user group first. You’ll hear directly from local customers about their real-world journeys—what worked, what didn't, and the lessons they learned that you can apply to your own data center tomorrow.

Event Agenda
2:00 PM | Welcome & Round-the-Room: We start with quick intros. We want to know who you are and exactly what technical hurdles you’re looking to clear.
2:15 PM | The Real-time Enterprise File Vision: An overview of the vision and where the portfolio is headed. See what’s new and what’s next for FlashArray and FlashBlade.
2:40 PM | Deep Dive: Design Patterns & Use Cases: We’ll walk through common architectural designs for home directories, content repositories, and NFS datastores, including proven protection and recovery patterns.
3:10 PM | Customer Spotlight & Panel: A 25-minute interactive session with local peers. Hear their architecture stories and get your toughest questions answered in an open Q&A.
3:35 PM | Whiteboard Session: Your File Roadmap: An open, interactive conversation about your specific challenges—from unstructured data growth to migration blockers. Let’s map out where Pure can help.
3:55 PM | Wrap-up & Next Steps: Key takeaways, resources for your team, and a preview of our next PUG event.
4:00 PM | Networking & Happy Hour

Date & Time
January 28, 2026, 2:00 PM - 4:00 PM EST

Location
Aces Pickleball, 2730 Maverick Dr, Norwood, OH 45212 (Factory 52)

Stop Prompting, Start Context Engineering
This blog post argues that Context Engineering is the critical new discipline for building autonomous, goal-driven AI agents. Since Large Language Models (LLMs) are stateless and forget information outside their immediate context window, Context Engineering focuses on assembling and managing the necessary information—such as session history, long-term memory (embeddings, RAG indexes), and tool outputs—for the agent on every single turn. The post asserts that storage, not the LLM or the prompt, is the primary performance bottleneck for AI at scale. The speed of the underlying storage architecture dictates the agent's responsiveness, because the agent must repeatedly retrieve and persist context data.
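The per-turn assembly loop the post describes can be sketched as follows. This is a hypothetical illustration — token counting is stubbed out as a word count and retrieval results are passed in as plain strings, not any particular framework's API:

```python
def assemble_context(system_prompt, history, retrieved_docs, tool_outputs,
                     budget_tokens, count=lambda s: len(s.split())):
    """Rebuild the LLM's context every turn: fixed instructions first,
    then retrieved long-term memory and tool results, then as much
    recent session history as the token budget allows."""
    parts = [system_prompt] + retrieved_docs + tool_outputs
    used = sum(count(p) for p in parts)
    kept = []
    for turn in reversed(history):          # walk from the most recent turn
        c = count(turn)
        if used + c > budget_tokens:
            break                           # budget exhausted: drop older turns
        kept.append(turn)
        used += c
    return parts + list(reversed(kept))     # restore chronological order

# Illustrative turn: the oldest history entry is evicted to fit the budget
ctx = assemble_context(
    "You are a storage assistant.",
    ["user: hi", "agent: hello", "user: check array latency"],
    ["doc: fa-01 baseline latency 0.4 ms"],
    ["tool: current latency 2.1 ms"],
    budget_tokens=22,
)
```

Because this loop runs on every turn, each of the inputs (history, embeddings, tool outputs) implies a read from persistent storage before the model can even start generating, which is the post's point about storage dictating agent responsiveness.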