New Toys!
Hey folks, today we announced some really cool new stuff. Apps are everywhere - Edge, Core, Cloud - and they're changing the way we do business. What's needed is a unified way to manage it all - one ring to rule them all - with the scale and functionality to support AI while staying sustainable for the planet. The new toolbox includes improvements to the Enterprise Data Cloud, better support for containers and automation, and stronger protection against cybercrime. Check out this blog for more info.

New stuff!!!
Hey everyone - I'm JP, the launch lead for Pure, and I'm just joining the community. This week at our NYC event we're announcing some cool new stuff. If you don't have the opportunity to attend, we'll be posting lots of info here in the community, along with links to where you can find more. Stay tuned, and watch for posts on Thursday!

Thanks for Joining the INAUGURAL Ask Us Everything for Fusion
Hello Pure Community! Thanks a TON to everybody who tuned in for the Fusion Ask Us Everything session this morning. We had a blast hosting it, and we were stoked to see all the questions that came in - it was great to see everybody looking at Fusion's capabilities and how it can add value to their array fleet management. MUCHO thanks to jd and RichBarlow for navigating all the questions and talking through it all. Also, BIG SHOUTOUT to gcarl for parachuting in and answering some Federal questions! Be sure to join us next month for the next session, which will cover Purity upgrades - we're looking forward to a spirited discussion that will convince even the biggest skeptic that upgrading your Pure array is not like the scary old days of upgrading legacy storage. Until then, enjoy this awesome meme of a skateboarding possum. See you in September! DP

Does anyone have any advice for stork pods that keep restarting
Does anyone have any advice for stork pods that keep restarting with:

```
Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x29aad38]
```

I'm running storkctl version 2.7.0-2e5098a and Kubernetes version 1.22.5.

Pure Storage PowerShell SDK v2.16 for FlashArray is Released!
This release contains 150+ new cmdlets for automating the latest features of the Purity 2.16 API, plus extended cmdlet Help, connection persistence, an Invoke REST API cmdlet, and more! SDK 2.16 introduces the latest and greatest Purity features, so we're encouraging everyone, Puritans and customers alike, to start planning the migration of their SDK 1.x scripts to SDK version 2. Quick Start:
• PowerShell Gallery: https://www.powershellgallery.com/packages/PureStoragePowerShellSDK2 (or just run Install-Module -Name PureStoragePowerShellSDK2 from a PowerShell session)
• GitHub repository: https://github.com/PureStorage-Connect/PowerShellSDK2
• Documentation in the Pure Storage Microsoft Platform Guide: https://support.purestorage.com/Solutions/Microsoft_Platform_Guide/a_Windows_PowerShell/Pure_Storage_PowerShell_SDK
Thanks, Microsoft Integrations Team

Attention coders! Pure Dev event is in May
The Pure Dev event is on May 11th and features an industry panel with Intel, HashiCorp, CTO Advisor, and Pure Storage. Topics include building cloud with infrastructure-as-code, self-service storage, and data services on Kubernetes. Register here.

We have had a major outage
We have had a major outage because of a combination of a network outage and the way we designed our environment. We're working with Cisco and Pure support to get to the exact details, but the main issue is this: our design has ESXi hosts connected only to the Pure array in DC1, with pods that replicate synchronously to DC2. During network maintenance, a planned core switch failover caused a longer outage than expected, which made the mediator unavailable and pushed the latency for the preferred site too high. This caused the pods to become unavailable on our preferred site and fail over to the non-preferred site.

We're still looking into the details, but I'm also searching for a possible workaround during future core maintenance. My question is whether it's possible to automate (via REST API / PowerShell) pausing the pods' sync replication, to prevent failovers during network maintenance, and of course to re-enable it afterwards.
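For anyone exploring the REST API angle of the question above, here is a minimal Python sketch of the generic shape such maintenance automation could take. This is not a confirmed Purity capability: the session-token header name follows the published FlashArray REST 2.x pattern, but the endpoint path, query string, and request body below are hypothetical placeholders - confirm the actual operation for quiescing pod replication with Pure support before using anything like this.

```python
# Sketch only: builds (but does not send) a PATCH request against a
# FlashArray REST 2.x-style endpoint. Array name, token, endpoint, and
# body are all illustrative assumptions, not verified Purity API details.
import json
import urllib.request


def build_patch_request(array: str, session_token: str,
                        endpoint: str, body: dict) -> urllib.request.Request:
    """Construct a PATCH request for a Purity-style REST endpoint."""
    return urllib.request.Request(
        url=f"https://{array}/api/2.4/{endpoint}",
        data=json.dumps(body).encode(),
        method="PATCH",
        headers={
            # REST 2.x sessions pass a token in the x-auth-token header.
            "x-auth-token": session_token,
            "Content-Type": "application/json",
        },
    )


# Hypothetical endpoint and body: whatever operation Pure support
# recommends for pausing replication before core maintenance goes here.
req = build_patch_request(
    "array1.example.com", "SESSION-TOKEN",
    "pods?names=pod01", {"requested_state": "paused"},
)
print(req.get_method(), req.full_url)
```

The same pattern (one function per maintenance step, sent before and after the switch failover window) could be wrapped in a pre/post-maintenance script; the equivalent PowerShell SDK route would presumably use its REST-invocation cmdlet.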