A list of useful Purity CLI commands for managing Pure Storage FlashArray systems.
"pureadmin" commands The pureadmin command displays and manage administrative accounts in Pure Flash Storage Array (22 Commands) Explanation pureadmin create testuser --api-token Generate an API token for the user testuser pureadmin create testuser --api-token --timeout 2h Create API Token for testuser valid for 2 hours pureadmin create testuser --role storage_admin Create user testuser with storage_admin role. Possible roles are readonly, ops_admin, storage_admin, array_admin pureadmin delete --api-token Delete API Token for current user pureadmin delete testuser Delete user testuser from Flash Array pureadmin delete testuser --api-token Delete API Token for user testuser pureadmin global disable --single-sign-on This will disable single sign-on on the current array. Enabling single sign-on gives LDAP users the ability to navigate seamlessly from Pure1 Manage to the current array through a single login. pureadmin global enable --single-sign-on This enables single sign-on on the current array. Enabling single sign-on gives LDAP users the ability to navigate seamlessly from Pure1 Manage to the current array through a single login. pureadmin global list List the global administration attributes like Lockout Duration, Maximum Login Attempts, Minimum Password Length, etc.. pureadmin global setattr --lockout-duration 1m Set the lockout duration to 1 minute after maximum unsuccessful login attempts. pureadmin global setattr --max-login-attempts 3 Set the maximum failed login attempts to 3 before the user get locked out. pureadmin global setattr --min-password-length 8 Set the minimum length of characters required for all the local user account passwords to 8. Minimum length allowed is 1. This will not affect the existing user accounts, but all future password assignment must meet the new value. pureadmin list List all the users configured in the Flash Array pureadmin list --api-token List all the users with api tokens configured pureadmin list --api-token --expose List all the users with api tokens configured and expose the api token for the current user loggedin. pureadmin list --lockout List all the user accounts that are currently lockout pureadmin refresh --clear Clears the permission cache for all the users pureadmin refresh --clear testuser Clears the permission cache for testuser pureadmin refresh testuser Refresh the permission cache for testuser pureadmin reset testuser --lockout Unlock locked user testuser pureadmin setattr testuser --password Change the password for the user testuser pureadmin setattr testuser --role array_admin Change the role of the user testuser to array_admin role. Possible roles are readonly, ops_admin, storage_admin, array_admin "purealert" commands The purealert command manages alert history and the list of designated email addresses for alert notifications (8 Commands) Explanation purealert flag 121212 Flag an alert with ID 121212. This will appear in the flagged alert list. purealert list List all the alerts generated in the Pure Flash Array purealert list --filter "issue='failure'" List all the alerts generated for failures purealert list --filter "severity='critical'" List all the alerts with Critical severity. purealert list --filter "state='closed'" List all the closed alerts purealert list --filter "state='open'" List all the alerts in Open state purealert list --flagged List all the alerts that are flagged. By default all alerts are flagged. We can unflag command once those are resolved. purealert unflag 121212 Unflag alert with ID 121212. 
"purearray" commands

The purearray command displays attributes and monitors I/O performance on a Pure FlashArray (24 commands).

purearray connect --management-address 10.0.0.1 --type async-replication --connection-key
    Connect the local array to remote array 10.0.0.1 for asynchronous replication using the connection key. You are prompted to enter the connection key.
purearray connect --management-address 10.0.0.1 --type sync-replication --connection-key
    Connect the local array to remote array 10.0.0.1 for synchronous replication using the connection key. You are prompted to enter the connection key.
purearray connect --management-address 10.0.0.1 --type sync-replication --replication-transport ip --connection-key
    Connect the local array to remote array 10.0.0.1 for synchronous replication over Ethernet (IP) transport using the connection key. You are prompted to enter the connection key.
purearray disable phonehome
    Disable the phone home (dial home) feature of the array.
purearray disconnect 10.0.0.1
    Disconnect array 10.0.0.1, previously connected for remote replication, from the local array.
purearray enable phonehome
    Enable the phone home (dial home) feature of the array.
purearray list
    Display the array name, serial number, and firmware version.
purearray list --connect
    Display the remote arrays connected for replication.
purearray list --connect --path
    Display the arrays connected for remote replication along with their connection paths.
purearray list --connect --throttle
    Display the replication throttle limit.
purearray list --connection-key
    Display the connection key that can be used to connect to the array.
purearray list --controller
    List all controllers in the array, including the model and status of each controller.
purearray list --ntpserver
    List the configured NTP servers.
purearray list --phonehome
    Display the dial home configuration status of the array.
purearray list --space
    Display capacity and usage statistics for the array.
purearray list --space --historical 30d
    Display capacity and usage statistics for the array over the last 30 days.
purearray list --syslogserver
    List the syslog servers configured to receive logs from the array.
purearray monitor --interval 4 --repeat 5
    Display array-wide I/O performance every 4 seconds, repeated 5 times.
purearray remoteassist --status
    Check whether Remote Assist is active or inactive.
purearray rename MYARRAY001
    Set the name of the array to MYARRAY001.
purearray setattr --ntpserver ''
    Remove all NTP servers configured for the array.
purearray setattr --ntpserver time.google.com
    Set the NTP server.
purearray setattr --syslogserver ''
    Remove all syslog servers configured for the array.
purearray setattr --syslogserver log.server.com
    Set the syslog server for the array.
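A minimal sketch of pairing two arrays for asynchronous replication with the commands above (10.0.0.1 stands in for the remote array's management address):

# purearray list --connection-key     (run on the remote array to obtain its key)
# purearray connect --management-address 10.0.0.1 --type async-replication --connection-key
# purearray list --connect --path     (verify the connection and its paths)

The connect command prompts for the key displayed by the first step; the final command confirms that the arrays are paired and shows the replication paths.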
"pureaudit" commands

The pureaudit command displays and manages audit log records on a Pure FlashArray (7 commands).

pureaudit list
    Display the list of audit records. Audit trail records are created whenever a user performs administrative actions (for example, creating, destroying, or eradicating a volume).
pureaudit list --filter 'command="purepod" and subcommand="create"'
    List all audit records for the purepod create command executed on the array.
pureaudit list --filter 'command="purepod" and user="pureuser"'
    List all audit records for purepod commands executed by pureuser on the array.
pureaudit list --filter 'command="purepod"'
    List all audit records for the purepod command executed on the array.
pureaudit list --filter 'user = "root"'
    Display the list of audit records for the root user.
pureaudit list --limit 10
    Display the first 10 audit records.
pureaudit list --sort user
    Display the list of audit records sorted by the user field. By default, records are sorted by the time field.

"pureconfig" commands

The pureconfig command provides commands to reproduce the current Pure FlashArray configuration (4 commands).

pureconfig list
    Display the list of commands needed to reproduce the volume, host, host group, connection, network, alert, and array configuration. Copying this output and running it on another array creates an exact copy.
pureconfig list --all
    Display all commands required to reproduce the current FlashArray configuration of hosts, host groups, pods, protection groups, volumes, volume groups, connections, file systems and directories, alerts, network, policies, and support.
pureconfig list --object
    Display the object configuration of the FlashArray, including hosts, host groups, pods, protection groups, volumes, volume groups, and connections, as well as file systems and directories if file services are enabled.
pureconfig list --system
    Display the system configuration of the FlashArray, including network, policies, alerts, and support.

"puredns" commands

The puredns command manages the DNS attributes of an array's administrative network (4 commands).

puredns list
    Display the current DNS parameters configured on the array, including the domain suffixes and the IP addresses of the name servers.
puredns setattr --domain ""
    Remove the domain suffix from Purity//FA DNS queries.
puredns setattr --domain test.com --nameservers 192.168.0.10,192.168.2.11
    Set the IPv4 addresses of two DNS servers for the array to use to resolve hostnames to IP addresses, and the domain suffix test.com for DNS searches.
puredns setattr --nameservers ""
    Unassign the DNS server IP addresses from the DNS configuration, which stops the array from making DNS queries.

"puredrive" commands

The puredrive command provides information about the flash drives and NVRAM modules in a Pure FlashArray (6 commands).

puredrive admit
    Admit all drive modules that have been added or connected but not yet admitted to the array. Once successfully admitted, the status of the drive modules changes from unadmitted to healthy.
puredrive list
    List all flash drive modules in the array, including the capacity of each module.
puredrive list --spec
    List all flash drive modules in the array along with their protocol (SAS/NVMe).
puredrive list --total
    List all flash drive modules in the array with a total capacity figure.
puredrive list CH0.BAY10
    Display information about flash drive BAY10 in chassis CH0.
puredrive list CH0.BAY10 --pack
    Display information about flash drive BAY10 in chassis CH0 and all other drives in the same pack.
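As a closing example for this reference, the sequence below sketches a typical drive-expansion workflow using the puredrive commands above:

# puredrive list            (new modules appear with unadmitted status)
# puredrive admit           (admit all unadmitted modules)
# puredrive list --total    (confirm the new total capacity)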
Why You Should Make Adopting Current Long-Life Releases a Habit

Hey everyone — At Pure Storage, we see many customers who still think about storage upgrades like old-school firmware: "set it and forget it" until it's forced to change. But FlashArray isn't firmware: it's modern, continually improved, and designed to be an agile, secure, predictable data platform. That means it's time to make adopting recent Long-Life Releases (LLRs) a regular habit, not just something you reluctantly do "when you have to". LLRs should be your standard practice:

✅ Fresh Features, Mature Code
Each LLR is built on code that has been running in production for at least 8 months before it branches. That means you get the innovations from recent Feature Releases — tested, stabilized, and production-proven. You avoid missing out on valuable improvements while still benefiting from enterprise-grade predictability.

✅ Consistent Security and Compliance
Falling too far behind, even on an LLR, can expose you to security vulnerabilities and unsupported configurations. By habitually adopting recent LLRs, you ensure you're in the supported window for critical patches and compliance audits, avoiding fire drills later.

✅ Reduce Technical Debt
Getting stuck on very old LLRs builds up technical debt. Skipping multiple versions makes your next upgrade harder, riskier, and more time-consuming. Keeping up with recent LLRs means smoother transitions, less operational friction, and easier adoption of the next improvements.

✅ Keep Innovation Flowing
The idea that an LLR is "old code" is a myth. Recent LLRs contain carefully chosen, well-hardened feature improvements. If you wait too long, you lock yourself out of meaningful performance, efficiency, and capability gains that your peers are already using.

✅ Break the Firmware Mentality
FlashArray is software-driven and has a rapid but reliable development model. Treat it like outdated firmware, and you miss the true value. The LLR program is designed precisely to let you safely adopt modern features while maintaining enterprise-grade stability and a predictable cadence.

Bottom line? Adopting recent Long-Life Releases, habitually, is the best way to get modern features, maintain security, reduce upgrade risk, and keep your environment aligned with Pure's best practices. You deserve innovation and peace of mind. Don't settle for less by sticking with outdated code.

If you want help reviewing which LLR is right for you, or understanding the timelines, just reach out — we're here to help you stay current, secure, and ahead of the game.
File services permissions FA

Hello everyone! Is it possible to apply file-level permissions through Purity? I ask because I've already researched this and couldn't find anything. I believe not, but maybe some of you have a client who has asked about this possibility. Thanks!
Top 10 Reasons to Love Purity 6.9

(Because 6.7 is so 2024)

10. 🏋️♂️ Long-Life Release means it's supported until June 2028 — which is about three years longer than that gym membership you swore you'd use.
9. 🌐 Works with all the latest FlashArray platforms, AWS, Azure… pretty much everything except your toaster (for now).
8. 🕵️♂️ Security updates so strong, even your data will feel like it's in the witness protection program.
7. 🚀 Turn on File Services without downtime or approval from Pure product management — finally, a software update you don't have to schedule for "that one weekend in Q4 when no one's looking."
6. 🙌 Encourages Self-Service Upgrades. Translation: fewer support tickets, more "Look, Mom, I did it myself!" moments.
5. 🔑 Default password warning. Yes, "pureuser" is adorable… until it becomes a resume-generating event.
4. 🍍 VMware improvements so good, your virtual machines just sent a fruit basket.
3. 🎛️ Fusion, Fusion, Fusion! Which is like having a universal remote for your data… without the panic of losing it between the couch cushions.
2. 📜 REST API 2.x release notes so thorough, they make War and Peace look like a sticky note.
🏆 You get to tell your boss you're on a "Long-Life Release," which sounds much more impressive than "I'm not doing an upgrade for a while."

Check out the release notes for more!
https://support.purestorage.com/bundle/m_flasharray_release/page/FlashArray/FlashArray_Release/01_Purity_FA_Release_Notes/topics/concept/c_purityfa_69x_release_notes.html
Purity//FA 6.9 is (Finally) Enterprise Ready!

A few months ago I wrote about the top 10 reasons to upgrade to Purity 6.9, and here are 10 more reasons, because… 6.9 has just gone Enterprise Ready!
https://support.purestorage.com/bundle/m_flasharray_release/page/FlashArray/FlashArray_Release/01_Purity_FA_Release_Notes/topics/concept/c_purityfa_69x_release_notes.html

10. 💍 It's "Long-Life"! Stability until June 2028. That's a longer, more successful relationship than 90% of reality TV couples achieve.
9. ⚰️ Your Pure SE Won't Keep Bugging You About Running an EOL Release. You know who you are…
8. 💯 It's Been to College. It met the criteria for "customer fleet adoption, cumulative runtime, and observed uptime." Basically, it passed the field test with flying colors.
7. 🤝 You Get a Side of Fusion. Upgrade to 6.9 and get the powerful, simple-to-use multi-array storage platform management system included. You know you want it!
6. 😴 The Engineers Can Finally Go Home. A big thank you to the engineering, support, technical program management, and product management teams for all the hard work. Go take a nap!
5. 🛡️ We Have a Stable Alternative to Chasing New Features. For customers who want rock-solid reliability, you can skip the Feature Release (FR) line drama and stick with the LLR.
4. ✅ It's the Complete 6.8 Feature Set. You don't lose any capabilities; you just gain the confidence of a battle-tested release. Full meal deal, no compromises.
3. 🖱️ It's So Easy to Get There, Even the Intern Could Do It. Customers with compatible hardware are encouraged to use Self-Service Upgrades (SSU). Less work, more coffee breaks.
2. 🔒 Guaranteed Bug Fixes and Security Updates. This release is officially maintained, meaning your security team can finally relax... slightly.
1. 🚨 When You Call Support, We Won't Start With "Did You Upgrade Yet?"
FlashArray File Multi-Server

File support on FlashArray gets another highly demanded feature. With version 6.8.7, Purity introduces the concept of a Server, which connects exports and directory services and all the other necessary objects required for this setup, namely DNS configuration and networking. From this version onwards, every directory export is associated with exactly one server.

To recap, a server has associations to the following objects:

DNS
Active Directory / Directory Service (LDAP)
Directory Export
Local Directory Service

Local Directory Service is another new entity introduced in version 6.8.7, and it represents a container for Local Users and Groups. Each server has its own Local Directory Service (LDS) assigned to it, and an LDS also has a domain name, which means "domain" is no longer the hardcoded name of the local domain but a user-configurable option.

All of these statements imply lots of changes in the user experience. Fortunately, this is mostly about adding a reference or the possibility to link a server, and our GUI newly contains a Server management page, including a Server details page, which puts everything together and makes a server configuration easy to understand, validate, and modify.

One question you might be asking right now is: can I use File services without Servers? The answer is no, not really. But don't be alarmed. Significant effort has been made to keep all commands and flows backwards compatible, so unless some script parses exact output and needs to be aligned because of the new "Server" column, there should be no need to change anything. How did we manage to do that? A special server called _array_server has been created, and if your configuration has anything file-related, it will be migrated to it during the upgrade.

Let me also offer a taste of how the configuration could look once the array is updated to the latest version.

List of Servers

# pureserver list
Name           Dns         Directory Services  Local Directory Service  Created
_array_server  management  -                   domain                   2025-06-09 01:00:26 MDT
prod           prod        -                   prod                     2025-06-09 01:38:14 MDT
staging        management  stage               staging                  2025-06-09 01:38:12 MDT
testing        management  testing             testing                  2025-06-09 01:38:11 MDT

List of Active Directory accounts

Since we can join multiple AD servers, we can now have multiple AD accounts, up to one per server.

# puread account list
Name           Domain            Computer Name  TLS       Source
ad-array       <redacted>.local  ad-array       required  -
prod::ad-prod  <redacted>.local  ad-prod        required  -

ad-array is the configuration for _array_server, and for backwards-compatibility reasons the server-name prefix hasn't been added to it. The prefix is there for the account connected to server prod (and to any other server).

List of Directory Services (LDAP)

Directory Services also got slightly reworked, since before 6.8.7 there were only two configurations, management and data. Obviously, that's not enough for more than one server (management is reserved for array management access and can't be used for File services). Since the 6.8.7 release, it's possible to fully manage Directory Service configurations and link them to individual servers.

# pureserver list
Name           Dns         Directory Services  Local Directory Service  Created
_array_server  management  -                   domain                   2025-06-09 01:00:26 MDT
prod           prod        -                   prod                     2025-06-09 01:38:14 MDT
staging        management  stage               staging                  2025-06-09 01:38:12 MDT
testing        management  testing             testing                  2025-06-09 01:38:11 MDT

Please note that these objects are intentionally not enabled / not configured.
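As a hedged illustration of how a Directory Service configuration like "stage" gets linked to the server "staging" from the listing above, the command would follow the pureserver setattr syntax shown in the getting-started post later in this digest:

# pureserver setattr --ds stage staging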
List of Directory exports

# puredir export list
Name                       Export Name  Server   Directory                     Path  Policy                  Type  Enabled
prod::smb::accounting      accounting   prod     prodpod::accounting:root      /     prodpod::smb-simple     smb   True
prod::smb::engineering     engineering  prod     prodpod::engineering:root     /     prodpod::smb-simple     smb   True
prod::smb::sales           sales        prod     prodpod::sales:root           /     prodpod::smb-simple     smb   True
prod::smb::shipping        shipping     prod     prodpod::shipping:root        /     prodpod::smb-simple     smb   True
staging::smb::accounting   accounting   staging  stagingpod::accounting:root   /     stagingpod::smb-simple  smb   True
staging::smb::engineering  engineering  staging  stagingpod::engineering:root  /     stagingpod::smb-simple  smb   True
staging::smb::sales        sales        staging  stagingpod::sales:root        /     stagingpod::smb-simple  smb   True
staging::smb::shipping     shipping     staging  stagingpod::shipping:root     /     stagingpod::smb-simple  smb   True
testing::smb::accounting   accounting   testing  testpod::accounting:root      /     testpod::smb-simple     smb   True
testing::smb::engineering  engineering  testing  testpod::engineering:root     /     testpod::smb-simple     smb   True
testing::smb::sales        sales        testing  testpod::sales:root           /     testpod::smb-simple     smb   True
testing::smb::shipping     shipping     testing  testpod::shipping:root        /     testpod::smb-simple     smb   True

The notable change here is that Export Name and Name now have slightly different meanings. Pre-6.8.7 versions used the Export Name as a unique identifier, since we had a single (implicit, now explicit) server, which naturally created a scope. Now the Export Name can be the same across servers as long as it's unique within the scope of a single server, as seen in this example. The Name is different: it provides an array-unique export identifier and is a combination of the server name, the protocol name, and the export name.

List of Network file interfaces

# purenetwork eth list --service file
Name     Enabled  Type  Subnet  Address  Mask  Gateway  MTU   MAC                Speed     Services  Subinterfaces  Servers
array    False    vif   -       -        -     -        1500  56:e0:c2:c6:f2:1a  0.00 b/s  file      -              _array_server
prod     False    vif   -       -        -     -        1500  de:af:0e:80:bc:76  0.00 b/s  file      -              prod
staging  False    vif   -       -        -     -        1500  f2:95:53:3d:0a:0a  0.00 b/s  file      -              staging
testing  False    vif   -       -        -     -        1500  7e:c3:89:94:8d:5d  0.00 b/s  file      -              testing

As seen above, File network VIFs now reference a specific server. (This list is admittedly artificial, since none of the interfaces is properly configured or enabled; the main message is that a file VIF now "points" to a specific server.)

Local Directory Services

Local Directory Service (LDS) is a newly introduced container for Local Users and Groups.
# pureds local ds list
Name     Domain
domain   domain
testing  testing
staging  staging.mycorp
prod     prod.mycorp

As already mentioned, all local users and groups now have to belong to an LDS, which means the management of those also carries that information.

# pureds local user list
Name           Local Directory Service  Built In  Enabled  Primary Group   Uid
Administrator  domain                   True      True     Administrators  0
Guest          domain                   True      False    Guests          65534
Administrator  prod                     True      True     Administrators  0
Guest          prod                     True      False    Guests          65534
Administrator  staging                  True      True     Administrators  0
Guest          staging                  True      False    Guests          65534
Administrator  testing                  True      True     Administrators  0
Guest          testing                  True      False    Guests          65534

# pureds local group list
Name              Local Directory Service  Built In  Gid
Audit Operators   domain                   True      65536
Administrators    domain                   True      0
Guests            domain                   True      65534
Backup Operators  domain                   True      65535
Audit Operators   prod                     True      65536
Administrators    prod                     True      0
Guests            prod                     True      65534
Backup Operators  prod                     True      65535
Audit Operators   staging                  True      65536
Administrators    staging                  True      0
Guests            staging                  True      65534
Backup Operators  staging                  True      65535
Audit Operators   testing                  True      65536
Administrators    testing                  True      0
Guests            testing                  True      65534
Backup Operators  testing                  True      65535

Conclusion

I showed how the FlashArray configuration might look without going into much detail about how to actually configure or test these setups; still, this article should provide a good overview of what to expect from version 6.8.7. There is plenty of information about this particular aspect of the release in the updated product documentation. Please let me know if there is any demand to deep-dive into any aspect of this feature.
🧠 Deep Dive: Configuring File Services Policies & File Systems on FlashArray

Continuing our technical walkthrough series on Pure Storage FlashArray File Services, this new video dives into the nuts and bolts of setting up policies and file systems to create your first SMB file share. If you've already followed along with the previous video on setting up networking, DNS, and Active Directory integration, this next step completes the foundation, showing exactly how to configure:

Export Policies for SMB access and permissions
Quota Policies to manage capacity limits
Audit and AutoDir Policies for visibility and governance
And finally, how to create and assign a file system for your department or team shares

The demo walks through the FlashArray UI and even steps into Windows file share management to validate access-based enumeration and permissions in action — proving just how simple and powerful FlashArray file services can be.

👉 Watch the video on Pure360 to see how easy it is to go from a blank configuration to a fully functional SMB file share environment in minutes.

-Jason
Getting started with FlashArray File Multi-Server

The previous blog post, FlashArray File Multi-Server, was a feature overview from the perspective of a system that already has that setup. Let's look at the same feature from the viewpoint of a storage admin who needs to start using it. For the purposes of this blog post, I'll be starting with an empty test array. Please let me know if there is demand for a similar post focused on a brownfield use case.

Setting up data to share

Let's create a filesystem and a managed directory.

# purefs create myfs
Name  Created
myfs  2025-06-27 06:30:53 MDT

# puredir create myfs:mydir --path dir
Name        Path  File System  Created
myfs:mydir  /dir  myfs         2025-06-27 06:31:27 MDT

So far nothing new.

Setting up a server

To create a server on FlashArray, a Local Directory Service needs to be either created during server creation or a reference to an existing one needs to be provided. What's a Local Directory Service? It's a container for Local Users and Local Groups: a new container that helps to manage users for different servers.

# pureds local ds create mylds --domain domain.my
Name   Domain
mylds  domain.my

Nothing prevents us now from creating an actual Server object.

# pureserver create myserver --local-ds mylds
Name      Dns         Directory Services  Local Directory Service  Created
myserver  management  -                   mylds                    2025-06-27 06:41:49 MDT

(Another option would be to use the "built-in" server, which is guaranteed to be there: _array_server. That is also the server containing all exports created before migration to a Multi-Server enabled release. As stated before, this post focuses on a green-field scenario, hence the creation of a new server object.)

Setting up an export

The server can now be used when creating an export.

# puredir export create --dir myfs:mydir --policy smb-simple --server myserver --export-name myexport
Name                     Export Name  Server    Directory   Path  Policy      Type  Enabled
myserver::smb::myexport  myexport     myserver  myfs:mydir  /dir  smb-simple  smb   True

One configuration object wasn't created as part of this blog post: the policy "smb-simple". That's a pre-created policy which (unless modified) only sets the protocol to SMB and accepts all clients. The name of the export has been set to "myexport", meaning that this is the string to be used by the client while mounting. The address of this export will be "${hostname}/myexport".

Setting up networking

This part is a bit tough to follow, since networking pretty much depends on the local network setup and won't be reproducible in your environment, but let's see what needs to be done in the lab setup; hopefully it is similar to what any reader would do in a simple "test" scenario.

Let's create and enable the simplest File VIF possible.

# purenetwork eth create vif vif1 --address 192.168.1.100/24 --subinterfacelist ct0.eth2 --serverlist myserver
Name  Enabled  Type  Subnet  Address        Mask           Gateway  MTU   MAC                Speed       Services  Subinterfaces  Servers
vif1  False    vif   -       192.168.1.100  255.255.255.0  -        8950  22:39:87:5e:f4:79  10.00 Gb/s  file      eth2           myserver

# purenetwork eth enable vif1
Name  Enabled  Type  Subnet  Address        Mask           Gateway  MTU   MAC                Speed       Services  Subinterfaces  Servers
vif1  True     vif   -       192.168.1.100  255.255.255.0  -        8950  22:39:87:5e:f4:79  10.00 Gb/s  file      eth2           myserver

That should do it.

Setting up Directory Services

A server on FlashArray always has a Local Directory Service, and additionally it can be configured to verify users against LDAP or Active Directory. An LDAP configuration would be set up as:

# pureds create myds ...
# pureserver setattr --ds myds myserver

Or we can opt in to joining Active Directory:

# puread account create myserver::ad-myserver --domain some.domain --computer-name myserver-computer-name

But we don't have to! Let's make this very simple and use the Local Directory Service created before. It's already used by our server, so the only thing left is to create a user (and let's join the Administrators group, because we can).

# pureds local user create pure --primary-group Administrators --password --local-ds mylds
Enter password:
Retype password:
Name  Local Directory Service  Built In  Enabled  Primary Group   Uid
pure  mylds                    False     True     Administrators  1000

Now we should have everything set up for a client to mount the exposed share.

Mounting an export (on Linux)

Let's use a Linux client, since it fits nicely with the rest of the operations and command-line examples we have so far. At this point, the share can just as easily be mounted on any Windows box (see the sketch at the end of this post), and all the configuration made on the command line can also be done in the GUI.

client # mount -v -t cifs -o 'user=pure,domain=domain.my,pass=pure,vers=3.02' //192.168.1.100/myexport /mnt
client # mount | grep mnt
//192.168.1.100/myexport on /mnt type cifs (rw,relatime,vers=3.02,sec=ntlmssp,cache=strict,username=pure,domain=domain.my,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.1.100,file_mode=0755,dir_mode=0755,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)

And now the "/mnt" directory on the client machine represents the Managed Directory "myfs:mydir" created before, and it can be used up to the permissions the user "pure" has. (And since this user is a member of the Administrators group, it can do anything.)

Conclusion

This post shows how to set up a file export on FlashArray using Servers. We can use the same FlashArray to create another server and export the same or a different managed directory, while using different network interfaces or directory services.
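As the sketch promised above, the Windows equivalent of the Linux mount would look something like this (an illustrative example using the standard net use tool; the Z: drive letter is arbitrary, and the asterisk makes Windows prompt for the password):

C:\> net use Z: \\192.168.1.100\myexport * /user:domain.my\pure

After authenticating as the local user "pure", the share is available as drive Z:.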
ActiveCluster for File

We’re proud to announce the availability of ActiveCluster for file, Everpure’s premier business continuity solution and a fundamental enabler of our Enterprise Data Cloud vision, where Service Level Agreements define what storage, network, and compute resources are assigned dynamically to application data sets, rather than hardware-to-app architectures. With ActiveCluster for file, Everpure is extending the benefits of data mobility, continuous access, and policy-driven management to file workloads.

What is ActiveCluster?

Everpure launched ActiveCluster in 2017, and it rapidly took the mission-critical, enterprise block storage world by storm. ActiveCluster enabled enterprise customers with the most demanding block workloads to deploy synchronous, always-available, always-up-to-date LUNs or volumes to hosts stretched across geographic distances. What set ActiveCluster apart from the existing solutions at the time, and even now, is how simple Everpure’s RTO-0 and RPO-0 solutions are to set up, and how flexible and adaptable to ever-changing business needs the hosting of these data sets remains after deployment on Everpure Fusion fleets. Today, we’re adding file protocol support: NFSv3, NFSv4.1, SMB 2.0, and SMB 3.0 with continuously available shares.

Realms as a new container

ActiveCluster for file utilizes a new, high-level container called a Realm to synchronously mirror both user data and the storage configuration information necessary to provide data access to authorized users on either side of the stretched file system(s). Realms are a handy way to group applications with similar Recovery Point Objectives and similar Recovery Time Objectives.

Realm Synchronous Replication

The act of synchronously mirroring both the user data and the storage configuration information across two different FlashArrays is called "stretching". Similar to how a pod is stretched across two FlashArrays, a Realm can be stretched between any pair of FlashArray systems with no more than 11 ms average round-trip time (RTT) on their array replication links. Either Fibre Channel or Ethernet array replication links can be used to replicate file data synchronously.

Figure 1. ActiveCluster for file can be deployed in different modalities.

Realms as namespaces for policies

Realms contain unique snapshot, audit-logging, replication, and export policies. These policies are only viewable by, and attachable to, storage objects within the Realm, creating a building block for hosting multiple different end customers or tenants on Fusion fleets. These policies are automatically replicated over to the other array if the Realm is stretched, reducing operator burden in failover scenarios.

To prevent split-brain scenarios (where a network partition in the array links or replication links stops communication between the pair of FlashArrays), Everpure’s fully managed Cloud Mediator service determines which remaining FlashArray controller pair can process writes and which array cannot. Unlike other business continuity solutions, ActiveCluster customers don’t have to worry about patching or maintaining the security of separate VMs acting as a mediator service to prevent split-brain scenarios.

Multiple servers supported per Realm, different IDPs allowed

Each Realm can have one or more servers configured in it, which act as protocol endpoints for clients and hosts to connect to. Each server in a Realm can have a different IP address or utilize a different Identity Provider service.
When a failover condition occurs (such as a site disaster on one side) and the clients in either data center are on the same Ethernet segment or broadcast domain, an automatic failover emits a gratuitous Reverse Address Resolution Protocol (RARP) request, mapping the new MAC address of the Ethernet interface on the surviving side to the same IP address already in use. Applications may see a small pause in reads or writes being serviced, but will not have to re-issue I/O or remount/remap shares or exports.

Managed directory quotas can also be used for any filesystem or managed directory attached to the servers in the Realm being stretched. These quota policies are automatically replicated with the user data, so the same customer experience in terms of usable space exists both before and after an automatic failover.

New Guided Setup available for ActiveCluster for file

Deploying a new ActiveCluster for file solution can take less than five minutes on already racked and powered arrays. A Guided Setup wizard is available to quickly capture the information necessary to stretch a Realm. This wizard can be started from multiple locations within the Purity GUI. ActiveCluster for file fully takes advantage of Fusion fleets and the ability to manage storage infrastructure as code, programmatically and via policy.

Realms are not tied to hardware, and can "float"

Realms with ActiveCluster for file support not only provide a 0-RTO and 0-RPO solution at the storage layer for mission-critical applications, but also provide a mechanism to transparently move the data and configuration in the Realm, non-disruptively, somewhere else within your fleet, whether that is follow-the-sun round-robin hops, where the Realm’s location changes depending on the time of day, or a move made as part of a data-center migration. Coupled with Fusion, Everpure’s intelligent control plane, ActiveCluster for file enables workloads, application data, and their configuration information to dynamically and seamlessly move to the right location, at the right time, at the right granularity. Seamless movement across greater geographic distances can be accomplished by stretching and unstretching the same Realm between different arrays, as long as the RTT latency between them is under 11 ms.

Service Level Agreements are the lingua franca of the Enterprise Data Cloud

Service Level Agreements are the natural language of business owners and are integral for companies that want to move away from managing storage arrays to managing their business data. They capture answers to questions like "How fast do you need access to this data? Does it need to be backed up or otherwise protected against site-wide failure?" SLAs are what form our vision behind the app-to-data operational model. This model takes abstract, high-level business requirements as input and then automatically configures and deploys the storage services required to meet the service level agreement just defined. A Fusion fleet manager’s perspective is one of many different application tiles and their health, not just a series of HA pairs spread out across different data centers. Data management operations, like instant backups, cloning, and movement, are applied as "verbs" to the application data set’s name or workload ID, not to a mismatched storage container whose hardware boundaries impose limits on your app team.
An intelligent, unified control plane manages and enforces SLAs across the fleet autonomously, like a modern cloud operating model, but one that can be deployed in any modality, whether on-prem, in the cloud, or hybrid. This scalable model, with Fusion’s intelligent control plane, supports ALL workloads, from modern AI workloads, containers, and high-performance computing to extremely large image or rich-media archives. The result is an Enterprise Data Cloud: discrete nodes tied loosely together, where service level definitions drive autonomous system behavior. Stop managing your storage arrays, and start managing your data.

Learn more about ActiveCluster for file

Read the support documentation for Purity 6.12.0
Test and deploy Fusion fleets and file presets
Ask your account executive or system engineer for a demo!