Recent Content
Lyrical Memory Challenge: What Song Do You Know By Heart? 🎤
Happy Monday, Pure Storage Community! It's the start of a new week in December, so let's kick things off with some fun and get to know you all a little better. Since last week we gave you a tidbit of music history, let's keep the music theme going with this week's Monday Funday question: What is a song you know every word and lyric to? Is it a classic rock anthem, a 90s pop hit, a musical showstopper, or maybe a more recent track that instantly lodged itself in your brain? We're talking about the song you can sing, rap, or belt out from start to finish without missing a beat. Share your lyrical memories in the comments below! Bonus points if you have a story behind why it's so memorable to you. Don't forget to give a 'like' to the songs that transport you back as well!

Proxmox VE
Hi all, hope you're all having a great day. We have several customers going down the Proxmox VE road. One of my colleagues was put onto https://github.com/kolesa-team/pve-purestorage-plugin as a possible solution, as using Pure behind Proxmox (with the native Proxmox release) is not a particularly Pure-like experience. Could someone from Pure comment on the plugin's validity/supportability?

How to Leverage Object Storage via Fuse Filesystems
This article originally appeared on Medium.com and is republished with permission from the author.

Cloud-native applications must often co-exist with legacy applications. Those legacy applications are hardened and just work, so rewriting them can seem hardly worth the trouble. For legacy applications to take advantage of new technology requires bridges, and fuse clients for object storage are a bridge that allows most (but not all) applications that expect to read and write files to work in the new world of object storage. I will focus on three different implementations of a fuse-based filesystem on top of object storage: s3fs, goofys, and rclone. Prior work on performance comparisons of s3fs and goofys includes theoretical upper bounds and the goofys GitHub readme.

General guidelines for when to use a fuse filesystem adaptor for object storage:

- The application expecting files requires only moderate performance and does not have complicated dependencies on POSIX semantics.
- You are using the filesystem adaptor for either reads or writes of the data, but not both. If your application is both reading and writing files, then it's best to use a real filesystem for the working data and copy only the final results to an object store.
- You are using the adaptor because one part of your data pipeline is an application that expects files, whereas other applications expect objects. If you find yourself primarily copying data between local filesystems and remote object storage, then tools like s5cmd or rclone will provide better performance.

There is also a Python library named s3fs with similar functionality, but despite sharing a name, they are distinct pieces of software. The Python version indeed makes access to objects much easier than direct boto3 code but is not as performant due to the nature of Python itself. Of the three choices, I personally suggest using goofys due to its significantly better performance. It may have less POSIX compatibility, but if that difference matters to your use case, then a fuse client might not be the right answer.

Fuse Best Practices and Limitations

First, a FUSE client is a filesystem client written in userspace. This is in contrast to most standard filesystem clients, like EXT4 or NFS, which are implemented in the Linux kernel. This leads to more flexibility to implement filesystems, including ones that only roughly resemble a traditional filesystem. It also means you can more easily mount fuse filesystems without root privileges. Conceptually, these fuse clients are lightweight client-side gateways that translate between objects and files. You could also run a separate server that acts as a gateway, but that incurs the additional cost and complexity of an extra server. A fuse client is most useful when one part of a workflow requires simple reading or writing of files, whereas the rest of your workflow directly accesses objects via the native S3 API. In other words, a fuse client is a tactical choice for bringing a data set and associated workflow from filesystem to object storage, where the fuse client specifically bridges the gap where an application expects to read or write files.

Things to avoid when using a fuse client:

- Do not expect ownership or permissions to work right. Control permissions with your S3 key policies instead.
- Do not use renames ('mv' command).
- Avoid lots of directory listing operations.
- Write to files sequentially and avoid random writes or appending to existing files.
- Do not use symlinks or hard links.
- Do not expect consistency across clients; avoid sharing files through multiple clients with fuse mounts.
- No really large files (1TB or larger).

Both s3fs and goofys publish their respective limitations. One advantage of s3fs is that it preserves file owner/group bits as custom object metadata. In short, the application using the fuse filesystem should be a simple reader or writer of files. If that does not match your use case, I would suggest careful consideration before proceeding.

Installation and Mounting Instructions

Basics

Installing s3fs is straightforward on a variety of platforms, for example via 'apt' on Ubuntu:

sudo apt install s3fs

The mount operation uses two additional options to specify the endpoint as the FlashBlade® data VIP and to use path-style requests:

sudo mkdir -p /mnt/fuse_s3fs && sudo chown $USER /mnt/fuse_s3fs
s3fs $BUCKETNAME /mnt/fuse_s3fs -o url=https://10.62.64.200 -o use_path_request_style

The FlashBlade's data VIP is 10.62.64.200 in all the example commands.

Install goofys by downloading the standalone binary from the GitHub release page:

wget -N https://github.com/kahing/goofys/releases/latest/download/goofys
chmod a+x goofys

Then mount a bucket as a filesystem as follows:

sudo mkdir -p /mnt/fuse_goofys && sudo chown $USER /mnt/fuse_goofys
./goofys --endpoint=https://10.62.64.200 $BUCKETNAME /mnt/fuse_goofys

With goofys you can also mount a specific prefix, i.e., mount only a "subdirectory" and limit the visibility of data via fuse to just a certain key prefix:

goofys <bucket:prefix> <mountpoint>

Rclone-mount relies on the same installation and configuration as standard rclone. This means that if you're already using rclone, it is trivial to also mount a bucket. Here "fb" refers to my FlashBlade's s3 configuration in rclone.conf:

[fb]
type = s3
env_auth = true
region = us-east-1
endpoint = https://10.62.64.200

Replace the endpoint with the appropriate IP address and then mount with the following command:

rclone --vfs-cache-mode writes mount fb:$BUCKETNAME /mnt/fuse_rclone &

Note that I use the ampersand operator to background the mounting operation, as the default is to keep rclone in the foreground.

Simulating a Directory Structure with Object Keys

When using a fuse client with S3, a "mkdir" operation corresponds to creating an empty object with a key that ends in a "/" character. In other words, the directory marker is explicitly created even though "/" is not a special character in an object store; it indicates a directory only by convention. The other common approach leaves directories implicit in the key structure, meaning no extra empty placeholder objects. While explicit markers may complicate some tooling, they also mean that the fuse client approach supports empty directories as you would expect in a filesystem. And if you are reading a file structure that was laid out using implicit directories, it will still work the same!
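To make the marker convention concrete, here is a minimal boto3 sketch (not from the original article; the bucket name is a placeholder and the endpoint reuses the example data VIP) showing roughly what a fuse client's "mkdir" translates to:

import boto3

# Assumption: placeholder endpoint and bucket; adapt to your environment.
s3 = boto3.client('s3', endpoint_url='https://10.62.64.200')

# Roughly what "mkdir logs" does through a fuse client: an empty object
# whose key ends in "/" acts as an explicit directory marker.
s3.put_object(Bucket='mybucket', Key='logs/', Body=b'')

# Files "inside" the directory are simply keys sharing the prefix.
s3.put_object(Bucket='mybucket', Key='logs/app.log', Body=b'hello\n')

# Listing with Delimiter='/' reconstructs the directory-style view.
resp = s3.list_objects_v2(Bucket='mybucket', Delimiter='/')
print([p['Prefix'] for p in resp.get('CommonPrefixes', [])])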
Permissions

One of the main challenges of using fuse clients is that standard POSIX permissions no longer work as expected. Due to the mismatch between file and object permission models, I recommend restricting permissions by using access policies on the keys used by the fuse client. This means that regardless of how fuse clients apply or even ignore permission bits (via "chmod"), the read/write/delete permissions are strictly enforced at the storage layer.

Angle 1: Reader

The following two FlashBlade Access Policies are required to configure the fuse client for read-only application usage: object-list and object-read.

Note that if clients try to write files without permission, it is possible to see inconsistencies. For example, if I touch a file through goofys with read-only permissions, an immediate listing ('ls') will see a phantom file, which eventually goes away. The 'touch' command does fail, so many (but not all) programs or scripts that unexpectedly write will fail too.

$ touch foo
touch: failed to close 'foo': Permission denied
$ ls
foo linux-5.12.13 …
$ ls
linux-5.12.13

Most operations fail without the "list" permission due to expectations of being able to browse directory structures, but it is still possible, for example, to read individual files with 'cat' without the object-list policy enabled. Alternatively, you can mount using goofys's "-o r" flag for read-only access, but using keys and access policies provides stronger protection than mounting in read-only mode: restricting permissions with keys prevents users from simply re-mounting without "-o r" to work around an issue. And of course, without the object-read permission, the client can list directories and files but not access any of the file content:

$ cat pod.yaml
cat: pod.yaml: Permission denied

Angle 2: Writer

The second major way to use fuse clients for S3 access is for file-based applications to write data to an object store. For these applications, the required policies are object-list and object-write. With write and list permissions, I can write files and read them back locally for a short period of time due to local caching. Note that writing appears to require the 'list' permission and also enables overwrites.

Enabling Deletions

Sometimes, in addition to write permissions, the client also needs the ability to delete files. Enable the "pure:policy/object-delete" policy to allow "rm" commands. See the following section on "undo" for more information about how to combine deletions with the ability to undo those deletions when necessary.

Full Control

For the most flexible control of files within the mount, combine the list, read, write, and delete policies above. This avoids giving users more permissions than necessary, for example, the ability to create and delete buckets, but they can still write, read, and delete files.

Bonus: Undo an Accidental Deletion

Object stores support object versioning, which provides functionality beyond traditional filesystems. Versioning keeps multiple copies of an object if a key is overwritten and inserts a DeleteMarker instead of erasing data when deletes are issued. An associated lifecycle policy ensures that deleted or overwritten data is eventually removed. First, enable versioning on the bucket if it isn't already: in the FlashBlade GUI's bucket view, the "Enable versioning…" option can be accessed in the upper right corner.
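If you prefer scripting over the GUI, versioning can also be toggled through the standard S3 API. A minimal sketch, assuming the same placeholder endpoint and a hypothetical bucket named 'mybucket':

import boto3

s3 = boto3.client('s3', endpoint_url='https://10.62.64.200')

# Enable versioning on the bucket (equivalent to the GUI toggle above).
s3.put_bucket_versioning(
    Bucket='mybucket',
    VersioningConfiguration={'Status': 'Enabled'},
)
print(s3.get_bucket_versioning(Bucket='mybucket').get('Status'))  # 'Enabled'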
To undelete files that have been accidentally deleted, you can simply find the delete marker and remove it. There is no "undelete" operation at the filesystem level, so this needs to happen out-of-band through a different mechanism or script.

An example Python script (gist here) to undelete an object by removing its DeleteMarker:

#!/usr/bin/python3
import boto3
import sys

FB_DATAVIP = '10.62.64.200'

if len(sys.argv) != 3:
    print("Usage: {} bucketname key".format(sys.argv[0]))
    sys.exit(1)

bucketname = sys.argv[1]
key = sys.argv[2]

s3 = boto3.resource('s3', endpoint_url='https://' + FB_DATAVIP)

# Page through all versions of this key, looking for its DeleteMarker.
kwargs = {'Bucket': bucketname, 'Prefix': key}
pageresponse = s3.meta.client.get_paginator('list_object_versions').paginate(**kwargs)
for pageobject in pageresponse:
    if 'DeleteMarkers' in pageobject.keys() and pageobject['DeleteMarkers'][0]['Key'] == key:
        print("Undeleting s3://{}/{}".format(bucketname, key))
        s3.ObjectVersion(bucketname, key, pageobject['DeleteMarkers'][0]['VersionId']).delete()

And then the object can be undeleted as simply as this:

./s3-undelete.py phrex temp/pod.yaml
Undeleting s3://phrex/temp/pod.yaml

A safe and secure undelete would restrict the usage of this script to an administrator in order to limit the use of keys with broader delete permissions.

Finally, create a lifecycle rule to automatically clean up old object versions: if an object version is no longer the most recent, it can eventually be deleted so that space is reclaimed. Similarly, if an object is deleted, the original is kept for the same period, allowing a user to undo that deletion within the lifecycle's time window.
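The article does not specify exact lifecycle settings; as a hedged illustration (assuming your environment exposes the standard S3 lifecycle API, and using an arbitrary 14-day window), such a rule could look like:

import boto3

s3 = boto3.client('s3', endpoint_url='https://10.62.64.200')

# Assumption: the 14-day window is an illustrative choice, not a
# recommendation. Overwritten or deleted versions are reclaimed after
# that window; within it, the undelete script above still works.
s3.put_bucket_lifecycle_configuration(
    Bucket='mybucket',
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'expire-old-versions',
            'Status': 'Enabled',
            'Filter': {'Prefix': ''},
            'NoncurrentVersionExpiration': {'NoncurrentDays': 14},
            'Expiration': {'ExpiredObjectDeleteMarker': True},
        }]
    },
)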
Object Storage Performance Testing

While a fuse client for S3 is never the highest-performing data access path, it is important to understand the performance differences between the clients, as well as against traditional shared filesystems like NFS. This section presents performance testing of basic scenarios to help understand when and where the S3 fuse clients are useful. In each test, I compare the fuse clients presenting an object bucket as a "filesystem" with a true NFS shared filesystem.

Test scenario:

- All tests run against a small nine-blade FlashBlade.
- Client is 16 cores, 96GB DRAM, Ubuntu 20.04.
- A ramdisk is used as the source or sink for write and read tests, respectively.
- A direct S3 performance test gets 1.1GB/s writes and 1.5GB/s reads. I also compare with a high-performance NFS filesystem, backed by the same FlashBlade, to illustrate the fuse-client overhead.
- Tested goofys version 0.24.0, s3fs version v1.86, and rclone version 1.50.2.

I use filesystem tools like "cp," "rm," and "cat" for these tests, but it is important to note that in most cases the filesystem operations will be built into existing legacy applications, e.g., fwrite() and fread(). I chose these tools because they achieve good throughput on native filesystems, are simple to understand, and are easily reproducible. The summary of the performance results is that across read/write and metadata-intensive tests, the performance ordering is goofys first, then s3fs, with rclone the slowest.

Throughput Results

The first test reads and writes large files to determine the basic throughput of each fuse client. I either write via "cp" or read via "cat" 24 files, each 1GB in size. Each test is repeated with files accessed serially or in parallel.

As an example, writing to the fuse filesystem serially:

for i in {1..24}; do
  cp /mnt/ramdisk/file_1G /mnt/$d/temp/file_1G_$i
done

The parallel version uses '&' to launch each copy in the background, and then 'wait' blocks until all background processes complete:

for i in {1..24}; do
  cp /mnt/ramdisk/file_1G /mnt/$d/temp/file_1G_$i &
done
wait

Two observations stand out from the write results. First, goofys is significantly faster than the other fuse clients on serial writes, though still slightly slower than direct NFS. Second, parallelizing the filesystem operations improves write speeds in all cases, but goofys remains the fastest.

The second test uses 'cat' to read files through the fuse clients, using the same set of 24 1GB files. As with the writes, the reads are tested both serially and in parallel. Performance trends are similar, with goofys fastest for serial reads, though s3fs handles parallel reads slightly better. The more surprising result is that both goofys and s3fs are faster than true NFS for serial reads. This is a consequence of the Linux kernel NFS client performing readahead less aggressively than the fuse clients.

Metadata Results

The next set of tests focuses on metadata-intensive workloads: small files, nested directories, listings, and recursive deletes. The test data set is the linux-5.12.13 source code, which contains roughly 1GB of data in 4,700 directories and 71k files; the average file size is 14KB.

Goofys is fastest for both the untar and the removal operations, but the gap relative to native NFS is larger than in the throughput tests, indicating that these metadata-heavy workloads suffer a larger performance penalty. The test populates the source repo by untarring files directly into object storage using the fuse layer as an intermediary, which pushes at the edge of where a fuse client makes sense from a performance perspective. Directly untarring to an NFS mount is 6x faster. In this case, an alternative approach of untarring to local storage and then using s5cmd to upload directly to the object store is 5x faster (257 seconds) than goofys! Using local storage as a staging area is faster because local storage has lower latencies for the serial untar operation, and s5cmd can then upload files concurrently. Of course, this technique only works if the local storage has capacity for the temporary data.

The last test uses the "find" command to locate files with a certain extension (".h" in this case), exercising metadata responsiveness exclusively. As with the other tests, goofys performs best.

Comparing to AWS

Next, I focus on the fastest client, goofys, and compare performance when using either the FlashBlade or AWS S3 as the backing object store. I compare relative performance on the four major test scenarios previously presented: writing and reading large files, and then copying and removing a source code repository with directories and mixed file sizes. To match the VM used to test against the FlashBlade, I used a single m5.4xlarge instance with Ubuntu 20.04. The test scenarios here consist of serial access patterns because this is the default in most workflows; parallelization often involves modifying source programs, in which case it is better to simply switch to native S3 accesses. Note that due to the fuse client, none of these tests actually stress the FlashBlade or AWS throughput bounds; the lower latency of S3 operations on the FlashBlade is what results in better performance.
For simple large-file (1GB) operations, the FlashBlade's lower latency results in 28% faster runtimes relative to AWS S3. In contrast, when writing or removing nested directories with small-to-medium file sizes, the performance advantage increases to 3x-6x in favor of FlashBlade. This indicates that the metadata overheads of LIST operations and small objects are much higher with AWS S3.

Summary

Goofys, s3fs, and rclone-mount are fuse clients that enable the use of an object store with applications that expect files. These fuse clients enable the migration of workflows to object storage even when you have legacy file-based applications: those applications can still work with objects through the fuse client layer. Summarizing best practices for when and how to use S3 fuse clients:

- Best used for only one part of your data workflow: either simple writing or simple reading of files.
- Do not rely on POSIX filesystem features like permissions, file renames, random overwrites, etc.
- Prefer goofys as the fuse client choice because of its superior performance.

💡Technology Tip Thursday💡: Napster & Your All-Time Playlist 🎶
Hello everyone, the Thanksgiving holiday has passed and another Thursday is here: a 💡Technology Tip Thursday💡, to be exact! This is a thread where we sprinkle tidbits of knowledge on technology milestones that happened during this month throughout the years. This week we're bringing you some crucial musical tech history. Did you know that it was this month, on December 7, 1999, that the Recording Industry Association of America (RIAA) sued Napster, the music file-sharing service? The RIAA sued Napster for copyright infringement, alleging the peer-to-peer file-sharing service enabled the illegal distribution of copyrighted music. The lawsuit, which involved multiple record companies, argued Napster was facilitating widespread music piracy, leading to a landmark legal battle that eventually contributed to Napster's shutdown in 2001. Napster didn't exactly invent file sharing, but its peer-to-peer (P2P) model created a massive, decentralized data ecosystem seemingly overnight. This legal battle eventually pushed the music industry fully into the digital age. It forever changed how we access and listen to music, leading us all to create massive digital libraries! So for our community question this week: What are the go-to songs or albums you could listen to over and over? Share your memories below! Be sure to comment on your peers' replies!

Join us for Nutanix IT Unplugged with Isaac Slade
Our friends at Nutanix are sponsoring a fun virtual event on Dec. 5, IT Unplugged, featuring a live performance by Isaac Slade, formerly of The Fray. You'll hear from both Nutanix and Pure about our new joint solution, along with some musical fun. Go here for more details and to register: https://event.nutanix.com/itunplugged-december2025?utm_source=PureStorage

OT: The Architecture of Interoperability
In a previous post, we explored the fundamental divide between Information Technology (IT) and Operational Technology (OT). We established that while IT manages data and applications, OT controls the physical heartbeat of our world, from factory floors to water treatment plants. In this post, we dive deeper into the bridge that connects them: interoperability. As Industry 4.0 and the Internet of Things (IoT) accelerate, the "air gap" that once separated these domains is evolving. For modern enterprises, the goal isn't just to have IT and OT coexist, but to have them communicate seamlessly. Whether the use case is security, real-time quality control, or predictive maintenance, to name a few, interoperability is the critical engine for operational excellence.

The Interoperability Architecture

Interoperability is more than just connecting cables; it's about creating a unified architecture where data flows securely between the shop floor and the "top floor." In legacy environments, OT systems (like SCADA and PLCs) often run on isolated, proprietary networks that don't speak the same language as IT's cloud-based analytics platforms. Bridging this requires a robust interoperability architecture, which must support:

- Industrial Data Lake: A single storage platform that can handle block, file, and object data is essential for bridging the gap between IT and OT. This unified approach prevents data silos by allowing proprietary OT sensor data to coexist on the same high-performance storage as IT applications (such as ERP and CRM). The benefit is the creation of a high-performance industrial data lake, where OT and IT data from various sources can be streamed directly, minimizing the need for data movement, a critical efficiency gain.
- Real-Time Analytics: OT sensors continuously monitor machine conditions, including vibration, temperature, and other critical parameters, generating real-time telemetry data. An interoperable architecture built on high-performance flash storage enables instant processing of this data stream. By integrating IT analytics platforms with predictive algorithms, the system identifies anomalies before they escalate, accelerating maintenance response, optimizing operations, and streamlining exception handling (see the sketch at the end of this section). This approach reduces downtime, lowers maintenance costs, and extends overall asset life.
- Standards-Based Design: As outlined in recent cybersecurity research, modern OT environments require datasets that correlate physical process data with network traffic logs to detect anomalies effectively. An interoperable architecture facilitates this by centralizing data for analysis without compromising the security posture. IT/OT convergence also requires a platform capable of securely managing OT data through IT standards. An API-first design allows the entire platform to be built on robust APIs, enabling IT to easily integrate storage provisioning, monitoring, and data protection into standard, policy-driven IT automation tools (e.g., Kubernetes, orchestration software).

Pure Storage addresses these interoperability requirements with the Purity operating environment, which abstracts the complexity of the underlying hardware and provides a seamless, multiprotocol experience (NFS, SMB, S3, FC, iSCSI). This ensures that whether data originates from a robotic arm or a CRM application, it is stored, protected, and accessible through a single, unified data plane.
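As a toy illustration of the real-time analytics pattern described above (this is illustrative Python, not Pure Storage code; the window size, threshold, and telemetry values are invented), a rolling z-score over vibration telemetry is one simple way to flag anomalies before they escalate:

from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=60, threshold=3.0):
    # Flag readings more than `threshold` standard deviations away from
    # the rolling mean of the previous `window` samples (toy example).
    history = deque(maxlen=window)
    for t, value in readings:
        if len(history) >= 10:  # wait for a minimal baseline first
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                yield t, value  # candidate anomaly: escalate to maintenance
        history.append(value)

# Hypothetical vibration telemetry: (timestamp, mm/s RMS) pairs.
telemetry = [(t, 2.0 + 0.1 * (t % 5)) for t in range(100)] + [(100, 9.5)]
for t, v in detect_anomalies(telemetry):
    print(f"anomaly at t={t}: {v} mm/s")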
Real-World Application: A Large Regional Water District

Consider a large regional water district, a major provider serving millions of residents. In an environment like this, maintaining water quality and service reliability is a 24/7, mission-critical OT function. Its infrastructure relies on complex SCADA systems to monitor variables like flow rates, tank levels, and chemical compositions across hundreds of miles of pipelines and treatment facilities. By adopting an interoperable architecture, an organization like this can break down the silos between its operational data and its IT capabilities. Instead of SCADA data remaining locked in a control room, it can be securely replicated to IT environments for long-term trending and capacity planning. For instance, historical flow data combined with predictive analytics can help forecast demand spikes or identify aging infrastructure before a leak occurs. This convergence transforms raw operational data into actionable business intelligence, ensuring reliability for the communities they serve.

Why We Champion Compliance and Governance

Opening up OT systems to IT networks can introduce new risks. In the world of OT, "move fast and break things" is not an option; reliability and safety are paramount. This is why Pure Storage wraps interoperability in a framework of compliance and governance, including:

- FIPS 140-2 Certification & Common Criteria: We utilize FIPS 140-2 certified encryption modules and have achieved Common Criteria certification.
- Data Sovereignty: Our architecture includes built-in governance features like always-on encryption and rapid data locking to ensure compliance with domestic and international regulations, protecting sensitive data regardless of where it resides.
- Compliance: Pure Fusion delivers policy-defined storage provisioning, automating deployment with specified requirements for tags, protection, and replication.

By embedding these standards directly into the storage array, Pure Storage allows organizations to innovate with interoperability while maintaining the security posture that critical OT infrastructure demands.

Next in the series: We will explore IT/OT interoperability further and the processing of data at the edge. Stay tuned!

👤 Addressing the "Shadow AI" Threat in Healthcare Security
Driven by clinician burnout and the desperate need for efficiency, healthcare providers are increasingly turning to unsanctioned, public-facing AI tools (like general-purpose chatbots) to assist with tasks. This practice, often referred to as Shadow AI, creates a major security risk because the data entered into these tools can leave Protected Health Information (PHI) exposed and compromise compliance with regulations like HIPAA. In the article "In Healthcare, Threat of Shadow AI Outpaces Security as Clinician Adoption Accelerates," Nate Moore, Founder of Enlite IT Solutions Inc., argues that the problem is the pace of AI adoption quickly outpacing security governance. The goal isn't to ban innovation, but to enable it safely. Instead of banning AI, Moore recommends a shift: organizations must create secure "AI sandboxes." These governed environments enable staff to test pre-vetted models safely, balancing innovation with data protection.

📣 Community Question: Given the balance between enhancing clinician efficiency and maintaining strict patient data security, what is the most vital step healthcare IT leadership should take right now to effectively manage the risks of Shadow AI? Let's discuss! Click through to read the entire article and let us know your thoughts in the comments below!

No Downtime, No Surprises: Best Practices to De-Risk Oracle Database Storage
December 4 | Register Now! Modernizing Oracle database infrastructure is essential for performance and agility, but legacy storage often turns upgrades, migrations, and refreshes into high-risk projects. In this session, discover how organizations are eliminating those risks with infrastructure designed for continuous availability, predictable costs, and simplified operations.

Key takeaways:
- Keep Oracle environments online during upgrades and migrations with resilient, non-disruptive infrastructure.
- Recover instantly from outages or data loss to maintain business continuity.
- Simplify backup, recovery, and cloning for faster test and dev cycles.
- Ensure consistent performance and scalability across on-premises and hybrid cloud environments.

Register Now!

Feature Request: Certificate Automation with ACME
Hi Pure people, how about reducing my workload a little by supporting the ACME protocol for certificate renewal? Certificate lifespans are just getting shorter, and while I have a horrid expect script to renew certificates via ssh to FlashArray, it would be much simpler if Purity ran an ACME client itself. PS: We use the DNS challenge method to avoid having to run web services where they aren't needed.

Ask Us Everything About Evergreen//One
Got questions about Evergreen//One? Get answers. December 11, 2025 | 09:00am PT • 12:00pm ET In this month's episode of Ask Us Everything, we're diving into Evergreen//One™, our storage-as-a-service solution that gives you flexibility, protection, and cloud-ready capabilities. Whether you already use Evergreen//One or are exploring it for the first time, you'll see how to get more value from your storage, without added cost or complexity. Then it's your turn. Our experts will answer your questions and show you how Evergreen//One enables you to focus on business outcomes instead of storage management. Reserve your seat! Ask a question for your chance to win: The first 10 eligible Pure Storage customers to submit a question during the live webinar will receive one (1) Pure Storage Customer Appreciation Kit (approximate retail value: $65). Limit one kit per customer. Offer valid only during the live event and while supplies last. See Terms and Conditions.
Upcoming Events
Featured Places
Introductions
Welcome! Please introduce yourself to the Pure Storage Community.

Pure User Groups
Explore groups and meetups near you.

/CODE
The Pure /Code community is where collaboration thrives and everyone, from beginners taking their first steps to experts honing their craft, comes together to learn, share, and grow. In this inclusive space, you'll find support, inspiration, and opportunities to elevate your automation, scripting, and coding skills, no matter your starting point or career position. The goal is to break barriers, solve challenges, and most of all, learn from each other.

Career Growth
A forum to discuss career growth and skill development for technology professionals.
Featured Content
We know our community is best because of members like you!
And we want to grow our network and reward you for helping us.
To celebrate your commitment and expand our peer-to-peer connectio...