Recent Discussions
Veeam v13 Integration and Plugin
Hi everyone, we're new Pure customers this year and have two FlashArray//C models: one for virtual infrastructure, and the other used solely as a storage repository to back up those virtual machines with Veeam Backup and Replication. Our plan is to move away from our current Windows-based Veeam v12 in favor of the Veeam v13 hardened Linux appliances. We're in the design phase now, but we have Veeam v13 working great in a separate environment with VMware and HPE Nimble. Our question is about Pure Storage and Veeam v13 integration and plugin support. Veeam's product team mentions there are native integrations in v12, but that storage vendors should be "adopting USAPI" going forward. Is this something Pure is working on, or perhaps has already completed, with Veeam Backup and Replication v13?
Everpure Accelerate 2026
Hey gang, Everpure Accelerate is approaching fast: June 16-18 in Las Vegas. Chime in if you will be attending. I've been fortunate to attend the last few years and always find this event full of great information, not to mention that the training classes and certification tests are included. Hope to see some of you out there!
Claude Mythos: The Next Frontier of Autonomous Cyber Intelligence
Model Performance and Capabilities

Claude Mythos represents a significant performance leap for Anthropic, reportedly beating their current best Opus model by a large margin. This kind of improvement hasn't been seen since OpenAI released their reasoning model o1 in September 2024. Key performance metrics include:

- Coding ability: 77% on SWE-Bench Pro (compared to Opus at 53%)
- Terminal usage: substantial improvements in the model's ability to use terminal commands
- General purpose: despite the cybersecurity focus in marketing, Mythos is a general-purpose LLM like other Claude models

Cybersecurity Focus and Access Restrictions

Anthropic has positioned Mythos around cybersecurity concerns, emphasizing AI as a potential national security risk - similar to OpenAI's approach with GPT-2 in 2019. However, the model is not cybersecurity-specific but rather a general-purpose AI.

Limited release strategy: Anthropic has restricted access to select partners, most of whom are investors in the company:

- Microsoft (Series C and G investor)
- NVIDIA (Series G)
- JP Morgan (conventional loan, May 2025)
- Google (Series C and E, plus convertible debt)
- Amazon (Series D and E)
- Cisco (Series E)

Market Implications and Competitive Advantages

This restricted access creates what the video calls a "privatization of tokens," giving certain companies advantages in:

- Cybersecurity: finding vulnerabilities (benefiting companies like Cisco and Palo Alto)
- Legal services: discovering legal loopholes and litigation strategies
- Finance and software development: enhanced capabilities across various domains

The core issue isn't cybersecurity itself, but rather the rapid improvement in AI capabilities outpacing society's ability to adapt.

Infrastructure and Pricing

Infrastructure dependencies: despite committing $50 billion to data centers in Texas and New York, Anthropic still relies on partners (Amazon, Google, Microsoft) for training and inference.

Pricing structure:

- Mythos will cost $125 per million output tokens
- Available through cloud APIs (Amazon Bedrock, Google Cloud Vertex, Microsoft Foundry)
- Unlikely to be included in the subsidized Pro and Max plans
- Comparable to OpenAI's GPT-4 Pro at $180 per million tokens

Business Strategy and Market Position

IPO positioning: the Mythos release strategically positions Anthropic for a potential IPO, with the company recently surpassing OpenAI by achieving $30 billion in annualized run rate - though this is run rate rather than the more conservative annual recurring revenue metric.

Adoption challenges: the rapid advancement creates both excitement and concern, highlighting the growing divide between companies that adopt AI quickly and those that don't. The key is matching the right level of AI intelligence to appropriate tasks rather than using premium models for basic workflows.

Future Outlook

Based on historical patterns (like DeepSeek R1 catching up to OpenAI's o1 within five months), the performance gap created by Mythos will likely be bridged by competitors relatively quickly. The real competitive advantage lies in how quickly companies can adopt and properly allocate AI intelligence to solve complex problems.
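For context on those prices, here's a quick back-of-the-envelope comparison. This is only a sketch: the per-token figures are the ones quoted above, and the monthly token volume is a made-up workload, not a benchmark.

```python
# Rough output-token cost comparison. Prices are the figures quoted
# above ($ per 1M output tokens); the monthly volume is hypothetical.
PRICE_PER_MILLION = {"claude_mythos": 125.0, "gpt4_pro": 180.0}

def monthly_cost(output_tokens: int, model: str) -> float:
    """Dollar cost for a month's worth of output tokens."""
    return output_tokens / 1_000_000 * PRICE_PER_MILLION[model]

tokens_per_month = 250_000_000  # assumed: 250M output tokens per month
for model in PRICE_PER_MILLION:
    print(f"{model}: ${monthly_cost(tokens_per_month, model):,.0f}/month")
# claude_mythos: $31,250/month
# gpt4_pro: $45,000/month
```

At that assumed volume, the quoted gap works out to roughly $14,000 per month, which is why the post's point about matching model tier to task matters in practice.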
Ask Us Everything about Pure1® Self-service!
💬 Get ready for the first May 2026 edition of Ask Us Everything, this Friday, May 1st at 9 AM Pacific. This month is all about Pure1® Self-service. If you have a burning question, feel free to ask it here early and we'll add it to the list to answer on Friday. And if we can't get it answered live, our Everpure experts can follow up here. jclark, mbradford, and dpoorman are the moderators and experts answering your questions during the conversation, as well as here on the community. See you this Friday! (Oh, and if you haven't registered yet, there's still time!)
Ask Us Everything Recap: Rethinking Storage with the Intelligent Control Plane
The latest Ask Us Everything session focused on a topic that's quickly becoming central to Everpure's strategy: the intelligent control plane. And based on the questions from the community, it's clear that many teams are starting to think beyond individual arrays and toward managing storage as a unified platform. Here are the key takeaways, driven by the questions attendees asked and the answers from Everpure experts Don Poorman, Zane Allyn, and Mike Nelson.

"Do I need to rebuild my automation to use Everpure Fusion?"

Most teams already have automation in place, whether it's Terraform, Ansible, or years of scripts. The good news: you don't have to start over. Everpure Fusion is API-driven, so existing workflows can stay intact. In practice, you're simply shifting from targeting individual arrays to targeting the fleet as a whole. That often means adding a parameter, not rewriting everything (see the sketch after this recap). Everpure Fusion picks up where existing automation gets bogged down, so tasks get simpler as you scale, not more complex. The takeaway: Everpure Fusion helps you scale your existing automation; it simplifies, standardizes, and extends it across your data estate.

"What does API-first really mean here?"

At Everpure, API-first isn't just a label. The APIs are built before the GUI, which means everything you can do in the GUI is already available programmatically. For practitioners, that translates to flexibility. Whether you're scripting, using infrastructure-as-code, or experimenting with AI-driven workflows, you're not waiting for features to be exposed; you already have access. It's a subtle difference from legacy storage, where automation often lags behind the interface.

"How do I approach automation without losing control?"

Attendees raised a common concern: automation can feel risky. The advice was straightforward: start with outcomes, not everything at once. Automate a single workflow, apply guardrails, and expand gradually. Automation here isn't about removing control. It's about:

- Reducing repetitive work
- Minimizing human error
- Freeing up time for higher-value tasks

For most admins juggling multiple systems, that shift is practical, not theoretical.

"What does this look like in real workflows?"

One of the most relatable examples discussed was a ServiceNow-style request flow. Instead of manually provisioning storage across multiple systems, a user submits a request describing what they need: performance, protection, and resiliency. From there, Everpure Fusion and Pure1 handle the process automatically. The result is faster, more consistent delivery with fewer manual steps. More importantly, it abstracts the complexity away from both the admin and the requester. That's a major difference from legacy environments, where admins must manage each step on each array.

"What do I actually need to install?"

This answer surprised some people. Everpure Fusion isn't a separate product; it's built into Purity. Once you're on the right version (Purity//FA 6.8.1 or later, Purity//FB 4.5.5 or later), getting started is simple:

- Create a fleet
- Add arrays

That's it. No additional infrastructure, no separate control plane to deploy. This lowers the barrier significantly and makes it easy to start small and build as your needs require.

"How does this scale?"

As expected, scale came up quickly. Instead of managing arrays individually, Everpure Fusion introduces fleet-level management. New capabilities like topology groups allow further organization within that fleet: by region, workload, or compliance requirements. This is where Everpure's approach really diverges from legacy storage. You're no longer limited to thinking in terms of hardware; you can organize storage in ways that reflect how your business actually operates.

"What happens if something fails?"

Everpure Fusion is distributed across the arrays in the fleet, so there's no single point of failure. If one system goes offline, the rest of the fleet continues operating normally. That design keeps management resilient while still enabling centralized control.

Final thoughts

The biggest shift highlighted in this session is simple: stop managing arrays, start managing outcomes. With the intelligent control plane, powered by Everpure Fusion and Pure1, Everpure enables:

- Policy-driven automation
- Fleet-scale visibility
- Simpler, faster operations

For storage teams, that means less time on manual tasks and more time focused on how data supports the business. And based on the conversation, that's exactly where our customers want to go. Find out more about the Everpure Intelligent Control Plane here. Check out this and all our other Ask Us Everything sessions. And keep the conversation going by jumping into the Everpure Community.
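To make the "adding a parameter" point from the first question concrete, here is a minimal sketch of array-scoped versus fleet-scoped provisioning. The URL, endpoint, and field names are hypothetical stand-ins, not taken from the actual Everpure Fusion API reference; the shape of the change is the point.

```python
import requests

BASE = "https://fusion.example.com/api"            # placeholder URL
HEADERS = {"Authorization": "Bearer <api-token>"}  # placeholder token

def provision_volume(name: str, size: str, array: str | None = None) -> dict:
    """Create a volume. Passing 'array' pins it to one system (the old
    habit); omitting it lets fleet-level placement policy decide."""
    body = {"name": name, "size": size}
    if array is not None:
        body["array"] = array
    resp = requests.post(f"{BASE}/volumes", json=body, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

# Array-scoped: how many existing scripts are written today.
provision_volume("app-db-01", "2T", array="array-east-01")

# Fleet-scoped: the same call, minus one argument.
provision_volume("app-db-02", "2T")
```

The design takeaway from the session is that migrating automation is mostly this kind of change: the request body loses (or gains) a targeting parameter, while the surrounding workflow stays intact.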
GA: What's new in Purity//FA 6.10.5

We're excited to announce that the latest Purity//FA release is now GA! With the Purity//FA 6.10.5 release, customers get a tighter combination of performance, protection, and access control across their FlashArray estate.

By combining ActiveCluster™ with near-synchronous ActiveDR™, the platform now delivers a three-site resilience capability that can help meet compliance expectations: zero-RPO protection between two primary sites with ActiveCluster, replicating to a third site with ActiveDR. It's engineered to deliver sub-30-second RPO and sub-minute RTO.

At the same time, support for FlashArray//ST™ R5 arrives as part of the FlashArray family. It's optimized to deliver consistent ultra-low-latency performance for OLTP, in-memory, and real-time workloads, so you can take advantage of extreme performance while maintaining consistent enterprise-grade data services.

Support for NVMe/TCP with VMware 9.x and ActiveCluster means the most demanding VMware workloads don't have to choose between performance and resilience: ActiveCluster provides zero-RPO protection, automatic failover, and active-active access for stretched datastores over NVMe/TCP.

Rounding it out, Everpure Fusion now supports SAML 2.0 SSO, giving you a faster, simpler, and more secure authentication experience across the Everpure Platform. Replace legacy local/LDAP logins with MFA-backed SSO and IP-based controls that harden your perimeter without adding operational friction when accessing FlashArray, FlashBlade, and Everpure Fusion fleets.

Update to Purity//FA 6.10.5 to leverage these latest innovations. Want to learn what else is new? Check out the Everpure What's New webpage today!
Cincinnati PUG Community; We Need Your Help!
"In the midst of chaos, there is also opportunity". These are the words of Sun Tzu who famously wrote The Art of War. While customers, partners, and Puritans all get acquainted to the rebrand from Pure Storage, to EverPure; it also provides us the opportunity to create a unique identity for our PUG chapter. And this is where we need your help. We know our community is filled with creative minds. And Cincinnati has many unique identities, from our beautiful skyline (and chili), to our sports teams (Reds, Bengals, FCC, Cyclones, UC, X), to our landmarks (Roebling Bridge, the Museum Center and Zoo, Fountain Square). This is your chance to help us create our own PUG identity. Get your creative juices flowing and visit the below link for additional details and how YOU can help us create the Cincinnati PUG logo. Help create your chapter's logo | Everpure Community787Views0likes2CommentsNVMe-over-Fabrics (NVMe-oF) with Windows Server Initiator and Everpure FlashArray
Are you actively using, or considering using, NVMe-oF with Linux and FlashArray? Did you know that Microsoft recently announced a preview of a Windows Server NVMe-oF initiator? For those interested in this topic, I tested the initial preview with FlashArray and posted the results to the Everpure blog here.
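For anyone starting from the Linux side of that question, a minimal NVMe/TCP bring-up is short enough to sketch here. It assumes a host with nvme-cli installed; the portal IP and subsystem NQN are placeholders to replace with your own array's values, and the ports shown are the NVMe/TCP defaults.

```python
import subprocess

PORTAL = "192.0.2.10"  # placeholder: array controller data interface
SUBSYS_NQN = "nqn.2010-06.com.purestorage:flasharray.example"  # placeholder

def run(*args: str) -> None:
    """Run an nvme-cli command and fail loudly if it errors."""
    subprocess.run(list(args), check=True)

# Discover subsystems advertised on the default discovery port (8009).
run("nvme", "discover", "-t", "tcp", "-a", PORTAL, "-s", "8009")

# Connect to the subsystem on the default NVMe/TCP I/O port (4420).
run("nvme", "connect", "-t", "tcp", "-a", PORTAL, "-s", "4420", "-n", SUBSYS_NQN)

# Confirm the namespaces now appear as local NVMe block devices.
run("nvme", "list")
```

The Windows Server initiator preview discussed in the linked post covers the same discover/connect flow; see the blog for the Windows-specific steps and results.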
The Idea That Was Supposed to Fail
Why DirectFlash and Evergreen//One suddenly look a lot smarter in a world of NAND and DRAM price shocks
Dmitry Gorbatov | Mar 20, 2026

Important Note for my readers: Writing this piece took me a lot longer than I normally spend on a post. It took a lot of reading and research. Many articles and blogs were written on the subject before NAND and DRAM costs went crazy. The dry-humor version is that the storage industry spent years insisting flash was just disk with better manners, and then acted surprised when the underlying physics eventually asked to speak with management. Now, let's get to it.

I can still picture the room. It wasn't anything special — just another corporate competitive training session, the kind you've sat through many times if you've spent enough years in enterprise tech. This was at NetApp, in 2015 or 2016, back when flash was still a question mark. Not if, but how. The industry had not fully committed yet, and everyone was trying to figure out what role it would play.

The presenter clicked to the next slide, paused for a second, and said something that stuck with me in a way most of those sessions never do: "Pure Storage is crazy! They're building their own flash modules. That's stupid. It's not sustainable. They won't survive."

It wasn't said for effect. There was no dramatic pause afterward, no attempt to persuade. It was delivered as a simple, almost obvious conclusion. And to be fair, it felt obvious. Because the entire storage industry operated on a shared assumption: you didn't build components, you assembled them. You relied on a mature ecosystem of suppliers who specialized in drives, storage controllers, and memory, and you focused your differentiation on software features and integration. That was the efficient path. That was the scalable path. That was how serious companies behaved.

What Pure was proposing at the time — what would later become Everpure — felt like a deviation from that logic. Building your own flash modules didn't just introduce complexity; it seemed to reject the economic advantages of the broader supply chain. It looked like a risk without a clear payoff. So the conclusion made sense. Until it didn't.

Looking Back, Differently

If I think back to that training session now, I do not really see it as a moment where someone was foolish. I see it as a moment where the industry was trapped inside the logic of its own assumptions. If you believe flash should look like disk, then building your own flash modules sounds silly. If you believe storage is just a sequence of refresh cycles, then a model built around non-disruptive evolution sounds unnecessary. If you believe component pricing will keep trending in the right direction forever, then architectural efficiency feels like an academic luxury. But once those assumptions start to crack, the logic changes. And when it changes, the things that once looked eccentric start to look oddly prescient.

A Change You Don't Notice Right Away

For years, nothing about that statement felt particularly worth revisiting. The industry moved forward in predictable ways. Flash became mainstream. Performance improved. Density increased. Vendors competed on features, benchmarks, and price points. The conversations most of us had with customers followed familiar patterns. If anything, the abstraction layers built around flash made things easier to consume. SSDs behaved like faster disks — and that was good enough. There is a reason they showed up in familiar HDD form factors.
The industry was trying to preserve the old world while sneaking in a new medium. Keep the slots. Keep the enclosures. Keep the assumptions. Change as little as possible. That made adoption easier, but it also buried the problem. Because flash is not a disk. It never was. It does not behave like one, and it does not particularly enjoy being treated like one. The only reason the illusion worked is because the industry built a fairly elaborate translation layer to maintain it. That translation layer is where the story really starts.

The Trick That Made Flash Look Simple

When commodity SSDs became the standard way to bring flash into enterprise storage, they depended on a piece of internal firmware called the Flash Translation Layer, or FTL. Its job was deceptively simple: make raw NAND look like a disk. That sounds harmless enough until you think about what that actually requires. NAND cannot just overwrite data in place the way the rest of the stack would like it to. It has to handle erase cycles, wear leveling, garbage collection, bad block management, and the constant translation between logical addresses and physical locations on the media. So every SSD became its own little self-contained world, complete with its own controller, its own metadata tables, and its own DRAM to keep track of everything. In other words, every drive became a tiny independent computer, making local decisions in isolation.

That design solved the adoption problem. It did not solve the architecture problem. For a while, the tradeoff seemed worth it. The drives were fast enough, the packaging was familiar, and the whole system kept pretending that flash was just a much nicer version of disk. But what looked neat and modular at small scale turned out to be awkward and expensive at enterprise scale. And that is where the "stupid" decision begins to look a lot smarter.

What Commodity SSDs Actually Drag Along With Them

The more I researched this topic (and believe me I did), the more I realized how much of the industry got comfortable with an abstraction that was doing a lot of quiet damage. Commodity SSDs carry four structural inefficiencies that matter much more today than they did when pricing was stable.

Trapped DRAM. Every SSD maintains its own mapping tables, so large-scale systems end up carrying a remarkable amount of DRAM inside the drives themselves. That memory is necessary for the SSD to function, but it does not really help the array think globally. It is duplicated overhead, repeated again and again, drive by drive. In a petabyte-scale system, that is not a rounding error. It is cost, power, and complexity hiding in plain sight.

Unpredictable Latency. Garbage collection inside a traditional SSD happens when the drive decides it needs to happen. When that occurs, the drive may become temporarily less responsive, and in an array full of independent drives, those little stalls start to show up as tail-latency spikes. The system is always vulnerable to one drive having a private crisis at exactly the wrong time.

Write Amplification. Because the SSD does not really understand the workload or the data structures above it, it moves data more often than necessary. More movement means more writes. More writes mean more wear. More wear means the media gets consumed faster than it should.

Over-provisioning. Every SSD holds back some raw capacity for its own housekeeping and spare-cell management, but that reserved space is siloed.
The array cannot use it intelligently across the system because each drive is managing its own private affairs.

None of this sounded especially dramatic when NAND kept getting cheaper and the economics of flash kept improving. It sounded like engineering trivia. The sort of thing infrastructure people argue about while everyone else waits for the quote. Today it is not trivia. Today it is exposure.

Why AI Made This Suddenly Everyone's Problem

For years, one of the quiet assumptions in enterprise IT was that storage capacity would continue to become cheaper and more abundant over time. Not perfectly, not smoothly, but predictably enough that the inefficiencies of the underlying architecture could be tolerated. That assumption is now not only under pressure, it is getting decimated. AI did not just create a new category of interesting workloads. It created a global appetite for silicon that is large enough to bend supply curves.

The cute part of AI is easy to mock. The cat kicking the T-Rex. The surreal generated videos. The deepfakes that make you look twice and then sigh a little for civilization. But behind every one of those outputs is a less funny reality: extraordinary consumption of DRAM, NAND, GPUs, and supporting infrastructure. The novelty at the edge is powered by very serious resource demand at the core. And that demand is landing directly on the components enterprise storage depends on.

This is the part customers are beginning to feel in ways that are no longer abstract. Expansion quotes do not look as comfortable as they once did. Refresh cycles feel more expensive. Delivery windows stretch. Budgets built on assumptions from even two years ago suddenly need more explaining than anyone wanted. There is a tendency to call this inflation because that is the easiest word available. It is not really inflation. It is supply and demand, with a side of semiconductor reality.

And that matters, because a traditional SSD array is exposed to both sides of the problem at once. It is exposed to NAND because that is the medium you are buying, and it is exposed to DRAM because every SSD drags its own DRAM overhead along for the ride. When those two markets tighten at the same time, the cost of the architecture gets hit twice. That is not just a technical nuance. That is economics.

Revisiting the "Stupid" Decision

This is where the old training-room comment starts to age badly. Because what looked like unnecessary vertical integration was really a decision to stop pretending flash was a disk and start treating it like what it actually is: semiconductor media with very specific physical behaviors that should be managed at the system level, not hidden inside dozens of drives.

That is the DirectFlash idea in plain English. Take the Flash Translation Layer out of the individual drive. Pull media management into the operating environment. Let Purity manage flash globally instead of leaving each device to improvise its own local strategy.

That changes more than performance charts. It means metadata no longer has to be duplicated and trapped inside every SSD. It means wear leveling can happen across the full system instead of inside the borders of a single device. It means bad block handling, garbage collection, and data placement can be coordinated with global context. It means the platform can see the difference between data that should live together and data that should not, which dramatically reduces unnecessary movement and lowers write amplification.
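To put a number on that, the usual metric is the write-amplification factor: NAND bytes actually programmed divided by bytes the host asked to write. A tiny sketch, with deliberately made-up volumes, shows why the ratio is worth caring about:

```python
def write_amplification(host_bytes: float, nand_bytes: float) -> float:
    """WAF = bytes physically programmed to NAND / bytes written by the
    host. 1.0 is the ideal; garbage collection pushes it higher."""
    return nand_bytes / host_bytes

host_tb = 100.0  # hypothetical: the host writes 100 TB

# Per-drive FTL shuffling still-live data during garbage collection
# (illustrative figure, not a measurement):
print(write_amplification(host_tb, 320.0))  # -> 3.2

# System-level placement that groups data by expected lifetime so whole
# regions can be retired together (again, illustrative):
print(write_amplification(host_tb, 130.0))  # -> 1.3

# At 3.2x the media absorbs 320 TB of wear to do 100 TB of useful work;
# at 1.3x the same flash lasts roughly 2.5x longer.
```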
And when write amplification drops, the economics change. The NAND lasts longer. The useful life of the media extends. Lower-endurance flash, like QLC, becomes viable for serious enterprise use because the software is smart enough not to abuse it. The system extracts more useful work from the same raw silicon. That is not just clever engineering. That is insulation from volatility.

The reason this matters now is that DirectFlash changes the ratio between the silicon you buy and the value you get from it. If the rest of the market is paying more for NAND and more for DRAM, an architecture that reduces trapped DRAM, minimizes wasted writes, extends media life, and packs far more capacity into far denser modules is not just elegant. It is economically defensive.

This is where the old "they build their own flash" criticism misses the point. Building your own flash modules was never the point by itself. The point was controlling the relationship between software and media well enough to eliminate the inefficiencies the commodity model had normalized.

Why Purity Is the Real Story

DirectFlash makes for a good visual. It is a module. You can point to it. You can talk about density and reliability and the fact that a 150TB module can do work that would have required a small army of traditional devices not all that long ago. But the real story is the Purity Operating Environment, i.e., software. Purity is where the architectural bet pays off. It is what turns raw NAND into a coordinated system instead of a pile of politely disagreeing SSDs.

Because Purity sees the entire media pool, it can write more intelligently. It can group data with similar expected lifespans together, so that when a snapshot or a temporary workload disappears, whole regions of storage can be retired cleanly instead of forcing background reshuffling of still-live data. That reduces unnecessary churn. Less churn means fewer writes. Fewer writes mean longer media life.

Because Purity sees when a NAND die is busy with an erase or program cycle, it can avoid letting that become a host-visible performance problem. RAID-3D and system-level awareness allow the platform to reconstruct data from parity rather than simply waiting for a busy drive to get its act together. The end result is deterministic performance rather than a roulette wheel of occasional latency spikes.

Because Purity owns media management globally, the over-provisioning and spare resources are no longer trapped in per-drive silos. The system can use them strategically.

I know that all of this sounds a bit scientific, and to be fair, it is. I did spend over 7 years working for Everpure and a few weeks researching for this post. I wanted to sit with that science for a bit.

Where the Economics Start to Matter

The moment component pricing becomes unpredictable, architecture stops being an engineering preference and starts becoming a financial strategy. That is the part that matters most to customers right now. A traditional buying model assumes that at some point you will hit a refresh cycle, a capacity wall, or a migration event that forces a purchase whether the market timing is good or terrible. You buy when you have to buy. If NAND is expensive, that is unfortunate. If DRAM is expensive too, even better, because apparently the universe enjoys symmetry.

That is what makes the combination of DirectFlash and Evergreen so important. DirectFlash reduces the amount of waste, duplication, and premature wear in the system.
Evergreen removes the old habit of tying innovation to forklift replacement. Controllers evolve. Capacity can be consolidated into denser modules over time. Data stays in place. The customer is not forced into rebuying the whole environment every few years just to remain current.

That already changes the economics. But it still leaves one more question: who is carrying the price risk? And this is where Evergreen//One matters more than ever.

The Part I Actually Wanted to Get To

Evergreen//One is not just a consumption model. It is not just a nicer way to finance storage. It is a mechanism for moving volatility away from the customer. That is the conclusion I wanted to earn, not just declare.

When NAND and DRAM prices start climbing, most traditional models push that turbulence straight into the customer's planning cycle. The customer eats the increase, absorbs the uncertainty, and tries to explain to the business why the infrastructure line now behaves like it has a gambling problem. Evergreen//One changes that relationship. The customer consumes capacity as a service. Everpure owns the burden of the underlying hardware lifecycle, the media strategy, and the ongoing optimization.

DirectFlash makes that model stronger because the platform is structurally more efficient with the silicon it uses. It needs less trapped DRAM, wastes fewer writes, extends media life, and supports denser modules that deliver more usable capacity per unit of power, space, and raw media. Purity compounds that advantage with data reduction, ongoing software improvements, and smarter system-wide media management. Put differently, Everpure is in a much better position to absorb and manage component volatility than a customer buying boxes on a refresh schedule.

That is the real price protection story. Not some magical promise that economics no longer apply. They do. NAND still costs what NAND costs. DRAM still costs what DRAM costs. Physics remains annoyingly undefeated. The difference is who is exposed to that volatility, how much inefficiency is built into the system before the customer ever sees it, and whether the operating model gives the customer a stable runway instead of a quarterly surprise.

DirectFlash reduces the waste. Evergreen removes the forced disruption. Evergreen//One shifts the risk. That combination is a lot more interesting than it sounded in that room 11 years ago.

The Part I Didn't Appreciate Then

What I did not understand sitting in that room 11 years ago was that some decisions are made for futures that have not arrived yet. The market eventually caught up to the architecture. That does not happen often enough in enterprise tech to ignore when it does. DirectFlash was never interesting just because it was different. It was interesting because it removed layers of inherited inefficiency that the rest of the market had accepted as normal. And in a period where NAND and DRAM prices are under pressure, removing inefficiency is no longer just a performance story. It is a protection story.

That is why this matters now. Not because it makes for a clever slide. Because it gives customers a more predictable way forward when the underlying component markets are anything but predictable. And in the current environment, that might be the most practical definition of innovation there is.

Appreciate you reading.

Dmitry Gorbatov
© 2025 Dmitry Gorbatov | #dmitrywashere
Upcoming Events
- Wednesday, May 06, 2026, 09:00 AM PDT
- Thursday, May 14, 2026, 04:00 AM PDT
- Thursday, May 14, 2026, 04:30 AM PDT
- Thursday, May 14, 2026, 12:00 PM PDT
- Monday, May 18, 2026, 04:00 AM PDT
- Tuesday, May 19, 2026, 04:00 AM PDT
- Wednesday, May 20, 2026, 12:00 AM PDT
- Thursday, May 21, 2026, 04:00 AM PDT
- Thursday, May 21, 2026, 09:00 AM PDT
- Tuesday, May 26, 2026, 09:00 AM PDT
- Tuesday, Jun 16, 2026, 04:00 PM PDT
Featured Places
Introductions
Welcome! Please introduce yourself to the Pure Storage Community.
Pure User Groups
Explore groups and meetups near you.
/CODE
The Everpure /Code community is where collaboration thrives and everyone, from beginners taking their first steps to experts honing their craft, comes together to learn, share, and grow. In this inclusive space, find support, inspiration, and opportunities to elevate your automation, scripting, and coding skills, no matter your starting point or career stage. The goal is to break barriers, solve challenges, and most of all, learn from each other.
Career Growth
A forum to discuss career growth and skill development for technology professionals.
Featured Content
Purity//FA 6.10.6 introduces NFS over TLS for FlashArray File Services: an in-transit encryption layer that wraps NFSv3 and NFSv4.1 RPC traffic in a TLS 1.3 session as defined by RFC 9289 - Towards R...
31 Views · 1 Like · 0 Comments
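If you want to try that from a Linux client, a minimal sketch looks like the following. It assumes a distribution recent enough to support the xprtsec NFS mount option, with the tlshd handshake daemon (from ktls-utils) running; the server name, export path, and mount point are placeholders.

```python
import subprocess

# Mount a FlashArray File Services export with in-transit TLS. The
# xprtsec=tls option asks the kernel NFS client to wrap its RPC traffic
# in TLS per RFC 9289; names below are placeholders, not real hosts.
subprocess.run([
    "mount", "-t", "nfs",
    "-o", "vers=4.1,xprtsec=tls",
    "filer.example.com:/export01",
    "/mnt/export01",
], check=True)

# Verify the mount negotiated TLS (xprtsec shows up in the options).
subprocess.run(["grep", "export01", "/proc/mounts"], check=True)
```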
The latest Ask Us Everything session focused on a topic that’s quickly becoming central to Everpure’s strategy: the intelligent control plane. And based on the questions from the community, it...
175 Views · 1 Like · 0 Comments
We’re constantly trying to improve and look for ways to make this community the best it can be for you all. In order to do that, we need your unique perspective.
We’ve put together a quick C...
1.1K Views · 2 Likes · 0 Comments