Pure Storage Delivers Critical Cyber Outcomes, Part Two: Fast Analytics

"You think you hate it now, wait 'til you drive it" - Eugene Levy, Vacation, and probably after reading this blog.

“We don’t have storage problems. We have outcome problems.” - Pure customer in a recent cyber briefing

No matter what we are buying, what we are buying is a desired outcome. 

If you buy a car, you are buying some sort of outcome or multiple outcomes. Point A to Point B, comfort, dependability, seat heaters, or if you are like me, a real, live Florida Man, seat coolers!

The same is true when solving for cyber outcomes, and the storage foundation that drives cyber resilience is often overlooked. A strong storage foundation improves data security, resilience, and recovery. With these characteristics, organizations can recover in hours instead of days. Here are some of the top cyber resilience outcomes Pure Storage is delivering.

  1. Native, Layered Resilience
  2. Fast Analytics
  3. Rapid Restore
  4. Enhanced Visibility

We tackled Layered Resilience in the first installment, but what about Fast Analytics?

Fast Analytics refers to native log storage used to review logs and spot possible anomalies and other potential threats to an environment. This is a category of outcomes that has largely been moved to the cloud, by the vendors themselves and, consequently, by their customers, but it is now seeing a repatriation trend back to on-premises.

Why is repatriation occurring in this space? 

This is a trend we are seeing in larger enterprises, driven by rising ingest rates and the runaway growth of logs.

It is more important than ever to discover attacks as soon as possible. The rising cost of downtime and the work required to recover go hand in hand, making each attack more costly than the last.

To discover anomalies quickly, logs must be interrogated as fast as possible. To keep up, vendors have beefed up the compute behind their cloud offerings. Next-gen SIEM is moving from the classic, static rule sets of earlier offerings to AI-driven, adaptive rules that evolve on the fly to detect issues as quickly as possible.

To deliver that outcome, you need a storage platform capable of the fastest possible reads. As stated, cloud vendors attempt to do this by raising compute performance, but what we see enterprises dealing with are the rising costs of those cloud solutions.

How is this affecting these customers?

As organizations ingest more log and telemetry data (driven by cloud adoption, endpoint proliferation, and compliance), costs soar due to vendors' reliance on ingest-based and workload-based pricing.

More data means larger daily ingestion, rapidly pushing customers into higher pricing tiers, resulting in substantial cost increases if volumes are not carefully managed.

Increasing needs for real-time anomaly detection translate to greater compute demands and more frequent queries, which for workload-based models triggers faster consumption of compute credits and higher overall bills.

To control costs, many organizations limit which data sources they ingest or perform data tiering, risking reduced visibility and slower detection for some threats.
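To make the tiering effect concrete, here is a minimal sketch of how a hypothetical ingest-based pricing model compounds as daily log volume grows. The per-GB rates, tier breakpoints, and growth curve below are invented for illustration only; real vendor pricing varies widely by contract and workload model.

```python
# Hypothetical illustration of ingest-based SIEM pricing tiers.
# Rates and breakpoints are made up for this example; actual vendor
# pricing differs by contract, region, and workload model.

TIERS = [
    (500, 0.60),            # first 500 GB/day at $0.60 per GB (hypothetical)
    (2000, 0.45),           # up to 2 TB/day at $0.45 per GB (hypothetical)
    (float("inf"), 0.35),   # everything beyond 2 TB/day at $0.35 per GB
]

def daily_cost(ingest_gb: float) -> float:
    """Cost of one day's ingest under the hypothetical tiered rates."""
    cost, prev_cap = 0.0, 0.0
    for cap, rate in TIERS:
        band = min(ingest_gb, cap) - prev_cap
        if band <= 0:
            break
        cost += band * rate
        prev_cap = cap
    return cost

# Log volume growing ~40% per year quickly moves the bill up the tiers.
for year, gb_per_day in enumerate([800, 1120, 1568, 2195], start=1):
    print(f"Year {year}: {gb_per_day:>5} GB/day -> ${daily_cost(gb_per_day) * 365:,.0f}/year")
```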

How does an on-premises solution relieve some of these issues?

An on-premises solution such as Pure Storage FlashBlade offers the power of all-flash and fast reads for faster anomaly detection, supporting the dynamic aspects of next-gen SIEM tools, while also offering more control over storage growth and its associated costs, without sacrificing the needed outcomes.

For example, our partnership with Splunk allows customers to retain more logs for richer analysis, run more concurrent queries in less time, and test new analyses to innovate faster.
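As a loose illustration of what "more concurrent queries" looks like in practice, here is a minimal Python sketch that fires several searches at Splunk's REST search API in parallel. The host, token, indexes, and searches are hypothetical placeholders; the sketch assumes a standard Splunk Enterprise management endpoint on port 8089 with token authentication.

```python
# Hypothetical sketch: run several Splunk searches concurrently via the
# REST API. Host, token, indexes, and searches are placeholders only.
import concurrent.futures
import requests

SPLUNK_HOST = "https://splunk.example.com:8089"    # hypothetical host
AUTH_HEADER = {"Authorization": "Bearer <token>"}  # hypothetical token

SEARCHES = [
    "search index=firewall action=blocked earliest=-24h | stats count by src_ip",
    "search index=wineventlog EventCode=4625 earliest=-24h | stats count by user",
    "search index=netflow earliest=-24h | stats sum(bytes) by dest_ip",
]

def run_oneshot(spl: str) -> dict:
    """Submit a blocking 'oneshot' search and return the JSON results."""
    resp = requests.post(
        f"{SPLUNK_HOST}/services/search/jobs",
        headers=AUTH_HEADER,
        data={"search": spl, "exec_mode": "oneshot", "output_mode": "json"},
        verify=False,  # lab-only shortcut; verify TLS in production
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()

# Fast storage reads matter most when many searches hit the indexers at once.
with concurrent.futures.ThreadPoolExecutor(max_workers=len(SEARCHES)) as pool:
    for spl, result in zip(SEARCHES, pool.map(run_oneshot, SEARCHES)):
        print(spl, "->", len(result.get("results", [])), "rows")
```

The faster the underlying storage can serve reads, the more of these searches can complete side by side without starving one another.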

Visual 1: Snazzy, high-level look at Fast Analytics with our technology alliance partners

Customers at our annual user extravaganza, Accelerate, told us about their process of bringing their logs back on-prem, in order to address some of these issues. 

One customer in particular, FiServ, told their story in our Cyber Resilience breakout session, where we were speaking on what to do before, during, and after an attack, specifically in the area of visibility, where the race is on to identify threats faster. They told of their desire to rein in the cost of growth and regain control of their environment.

There is nothing wrong with cloud solutions, but the economics of scaling them have had real-world consequences. Bringing those workloads back on-prem, to a proven, predictable platform for performance, is beginning to look like the better long-term strategy in the ongoing fight for cybersecurity and resilience.

On-premises storage is a valuable tool for managing the financial impact of growing data ingestion and analytics needs, by supporting precision data management, retention policy enforcement, and infrastructure sizing, while reducing expensive cloud subscription fees for long-term, large-scale operations. 

Exit Question: Are you seeing these issues developing in your log strategies? Are you considering on-premises for your log workloads today?  

Jason Walker is a technical strategy director for cyber related areas at Pure Storage and a real, live, Florida Man. No animals or humans, nor the author himself, were injured in the creation of this post.
