Tips & Tricks from Our CTO

Creating an OSNexus QuantaStor Storage Appliance on HPE Synergy with D3940 Storage Module

This post will go through an install of OSNexus QuantaStor on a Synergy 480 Gen9 node, using disk capacity from the Synergy D3940 Storage Module. This process does not result in a truly highly-available appliance, as the node itself is a single point of failure (SPoF). That said, HPE Synergy was designed with as much redundancy as possible, so the real SPoF is the motherboard of the node. Should the node fail, a replacement node could take its place to quickly resume normal operation.  Additionally, by connecting multiple Synergy nodes to external storage (external to the Frame), you CAN achieve a true, highly-available storage appliance!

Note: This post assumes the reader is familiar with HPE OneView and the configuration of the Synergy Frame has already been completed. It also assumes that the HPE Synergy D3940 Storage Module is included in your configuration. To be clear, the Storage Module is NOT required to install and use QuantaStor, but without it, or external storage, the storage capacity available to you will be limited to the internal drives in the node you install on. This post also assumes that you configure networking in the following way:

  • Connection 1: Untagged management interface (Access port)
  • Connection 2: Tagged Trunk interface (Trunk port)

You’re welcome to configure your networking differently, and if you do so please modify the steps below to reflect your changes.

The D3940 Storage Module divvies up its drives, assigning each to a single Synergy node. It CANNOT present the same drives to multiple nodes simultaneously, which is what would be needed for node failover. Other, shared-nothing technologies can allow for pooling of the storage across multiple Synergy nodes.

This post also assumes that the reader is familiar with HPE Integrated Lights-Out (iLO) technology, and as such, use of iLO will not be covered.

Also note, for this activity I’ll be setting up two individual QuantaStor appliances and managing them in a grid. This is different from a cluster, as it’s a management function rather than a storage function. This is optional.

Without further ado, let’s jump in.

Create Server Profile

The first thing we’re going to do is to create a Server Profile in HPE OneView. Here are the pertinent sections we need to take care of:

  • General Section:
  • Firmware Section:
    While technically optional, I HIGHLY recommend upgrading the firmware with the latest Synergy-specific updates.  I’ve encountered numerous issues with old firmware that didn’t jibe with more recent Synergy appliance versions.  Also, as an aside, I recommend upgrading the appliance firmware if possible.
  • Connections Section:
  • Local Storage Section:
    For now, we’re just going to attach the boot drives, not the data drives in the SAS enclosure.
  • Boot Settings:
    Note that the current version of QuantaStor seems to require the Legacy BIOS when used with Synergy.
  • Advanced:

Install QuantaStor

For installing QuantaStor you’ll be using iLO’s remote console functionality.  This activity is one of the very few for which I prefer using Internet Explorer, due to the .NET applet integration.  By default, OneView can start an iLO session to an individual node using its Single Sign-On (SSO) capability.  This keeps you from having to change the individual iLO passwords.  You can find the iLO IP addresses and the SSO links in the Server Hardware section of OneView.

In the iLO Remote Console, connect the QuantaStor ISO to the node via the Virtual Drives menu, and power on the node.  When the graphical boot screen comes up, press F11 to bring up the One-Time Boot Menu.

Then, once the Boot Menu comes up, select:
iLO Virtual USB 2: HPE iLO Virtual USB CD/DVD ROM

From here on out, it’s an easy Linux install.

Select your language:

Select your location:

Select No for Detect keyboard layout:

Select the base language for keyboard:

Select your keyboard layout:

Select eth0 for Primary network interface:

Enter IP address for eth0:

Enter the Subnet Mask for eth0:

Enter the Gateway for eth0:

Enter the DNS servers for eth0, separated by spaces:

Select Continue when the network configuration error displays.  This is due to a network driver issue in the installer environment, but has no impact on the post-install system.

Enter the Hostname:

Select Timezone:

Select Guided – use entire disk for Partitioning method:

Select the default disk:

First Boot

The installer will automatically reboot the node, and if you’re still watching iLO, you’ll see this after POST:

And then, finally, you’ll come to the login screen.  This screen lets you know that QuantaStor is running, and the web-based GUI can be accessed at the URL in the middle of the screen (redacted here).

QuantaStor Base Configuration

At this point you should jump into your favorite web browser and go to the URL provided:

Clicking the Login button without entering a password will use the default password (which is “password”).

Next you’re presented with the Configuration Workflow Manager:

Complete at least the first three items and then close the Configuration Workflow Manager.  (In a second tab I’ve completed the same steps for my second node.)

After completing those steps for all nodes, return to the browser tab for your first node, click the Storage Grid Setup tab at the bottom of the Configuration Workflow Manager window, click Create Storage Grid, give it an applicable name and click OK:

Next, click Add System, entering the IP address and credentials for each of the subsequent nodes that you’ve installed.

Now, closing the Configuration Workflow Manager window shows the main QuantaStor interface.  The interface is very intuitive.

QuantaStor Network Configuration

Now, let’s create a VLAN interface on eth1.  Right-click on eth1 and select Create VLAN Interface.  Set the VLAN ID to whatever VLAN you need to talk to on your network and add a corresponding IP address and subnet mask.  For these VLAN addresses I’m a fan of using the following convention, but your network admins will dictate what you can really do:
192.168.[VLAN ID].[same last octet of eth0]
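For example, if a node’s eth0 address ends in .15 and the VLAN ID is 20, that node would get 192.168.20.15 on its VLAN interface (illustrative addresses only, of course).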

At this point, ssh into the node(s) and ping a known address on the network.  Use qadmin as the username and password.  And, while you’re in there, it would be a good idea to change both the qadmin and root passwords:

  • Change root password: sudo passwd
  • Change qadmin password: passwd

Repeat the same process for any additional VLANs your storage appliance needs to talk to.  Also, remember to test the addresses via ssh and ping.
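As a quick sanity check, the verification over ssh might look something like this (a sketch with made-up addresses; substitute your node’s VLAN address and a known host on that VLAN):

ssh qadmin@192.168.20.15
ping -c 4 192.168.20.1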

At this point I’m done with my networking.  Here’s what my interface looks like:

QuantaStor Storage Pool Configuration

Now we need to add the disks we’ll use for capacity for the appliance.  For this we need to power the nodes down.  You can run “sudo shutdown -h now” via ssh, or you can go to the Storage Management tab in the browser interface, right-click on the node and select Shutdown Storage System.  Note that if you’re creating a grid like I am, you’ll want to shut down the node you’re connected to last.

After your nodes are shut down, go back into the Server Profiles for your node(s) in OneView and add the disks from the shared SAS module.  Note that I’ve selected HBA mode rather than RAID mode.  This is because ZFS is both a volume manager and a filesystem.  There are cases where you may want ZFS to stripe across RAID volumes, but for this activity I’ll be letting ZFS protect the data.  Also, you need to check the box to re-initialize the storage controller on the next profile update if the controller was in RAID mode to begin with.

Once the profile has been pushed to the node(s), power it/them back on.  When the GUI becomes available, log in and select the Physical Disks section in the left pane.

Notice that there seems to be a hodgepodge of disks and they aren’t very uniform.  This is due to the fact that the Synergy Storage Module uses dual-ported SAS drives and our configuration doesn’t yet account for that.  So what we’re really seeing is the two paths to the same drives.  Let’s fix this.  Click on Multipath Config in the tool ribbon:

This brings up the Multi-path Device Configurator:

Notice, next to the Add button, the HP:EH0600JDYTL entry.  This is the model string reported back from the drives, based on the installed firmware.  When you talk to a traditional storage array, like a 3PAR, the array manipulates this response so the client can properly multipath the presented LUNs (rather than individual drives).  In this case, because each drive model has a different model string, and because new drives are released all the time, it’s impossible for a storage vendor to include a complete list of models.  So, because we have dual-ported (dual-pathed) drives, we need to tell QuantaStor to look for multiple paths to individual drives and how to work with the two paths.  QuantaStor makes this really easy.  Just click the Add button, which will move the entry up to the main pane, and click OK.  Tweaking the multipath config alone doesn’t change anything yet; to see the change, we need to rescan for disks.  Right-clicking on the Physical Disks header (or right-clicking the server name in the Physical Disks pane, or clicking Scan in the tool ribbon) allows you to initiate a disk rescan:

Make sure Rescan Multipath Configuration is checked and the Storage System is the one you want to scan before clicking OK:

After the task completes, the Physical Disks pane will look much better:

Now, each entry represents both paths to an individual disk, and clicking the triangle next to the dm-uuid… entry in the Physical Disks pane on the left will show the native Linux devices associated with the paths.
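If you like to verify things from the shell as well, QuantaStor rides on top of the standard Linux multipath tooling, so a quick check over ssh is possible (output details will vary with your drive models and counts):

sudo multipath -ll

Each multipath device should show two paths, one per SAS port on the drive.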

Now that we’re talking properly to our disks, let’s create a Storage Pool.  Select the Storage Pools pane on the left, and then right-click the Storage Pools label or in the white space to bring up the context menu and select Create Storage Pool:

This brings up the Create Storage Pool configuration screen:

This tab allows you to name the pool, define your protection scheme or RAID Type, and select the disks to include in the pool.  For those readers who aren’t familiar with ZFS, I suggest you do a bit of research about the technology.  ZFS has provided some really good functionality that takes us beyond what traditional protection technologies (RAID) provide.  In this case, I’ve selected RAID50, which stripes (RAID 0) across multiple protected volumes (raidz1 vdevs, in ZFS terms; single-parity protected volumes in traditional terms).  You can define the width of the single-parity stripe using the Set Size setting.  I’ve left it at auto and selected all of the disks (12), which in this case creates a pool of 4 striped 3-disk raidz1 vdevs.  Clicking Next takes us to the Advanced tab:

Here you can control the storage technology under the covers, ZFS or XFS (and there are use cases for both), and use a profile to tune the performance against the workload you’ll be running on the storage.  ZFS supports in-line compression, which, assuming the dataset is cooperative, will positively impact your usable:raw storage ratio.  It can also positively impact performance, so for many workloads it’s good to leave it on.  Clicking OK creates the storage pool:
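Since ZFS is doing the work under the covers, you can also sanity-check the resulting layout from the shell.  This is just a quick look, not an official step; the pool name in the output will be whatever QuantaStor assigned based on the name you entered:

sudo zpool status

With the selections above you should see four raidz1 vdevs of three disks each.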

Create Network Share

Before we actually create a network share, let’s have a bit of a discussion.  QuantaStor supports both SMB (Windows shares) and NFS.  SMB is based on user access mechanisms, while NFS is based on host (computer) access mechanisms (NFSv4 uses both).  QuantaStor can connect to your Active Directory services to centralize authentication, or use local accounts.  We’ll keep things simple for this blog and not do any user management.  This creates a publicly accessible share.  Click on Network Shares in the left pane, and either in the header or in the white space, right-click and select Create Share:

This brings up the Create Network Share dialog:

You’ll notice the Share Options at the bottom.  By default the share will be publicly open for both SMB and NFS.  The User tab is where you would lock down the SMB share to specific users and groups, whether local or in AD.  We’ll just click OK here, and the share is created:

Now, we’ll try to connect from a Windows client:

And, voila!  We see the share, and can go into share01.  Something to note is that QuantaStor can create snapshots, and you can find previous versions (snapshots) in the _snaps directory.  Snapshots are beyond the scope of this blog, so suffice it to say that they’re really flexible and cool!
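For reference, the share comes up at the usual UNC path (substitute your appliance’s address for the placeholder):

\\<ip address>\share01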

Connecting to the NFS share is equally as easy, but rather than show you CLI I’ll opt to show you how to get the connection information.  First, expanding the share will show you the NFS access configurations — in this case Public:

Then, right-clicking the share and selecting Properties, or expanding the Properties pane on the right displays the Network Share Properties:

Note the Export Path is /export/share01.  This display specifically doesn’t show an IP address since, depending on your scenario, there could be multiple.  So, if mounting from a Linux machine, you would use the following command:

mount <ip address>:/export/share01 <mountpoint>

Adding options to your mount command is possible, but beyond the scope of this blog.

Create Storage Volume

Next, we’ll move on to creating an iSCSI volume.  Select the Storage Volumes pane on the left, then either in the header or the white space, right-click and select Create Storage Volume:

This brings up the Create Storage Volume dialog:

There are some good options here, but we’ll keep it simple and just click OK, creating our storage volume:

In order to have a client connect to the volume we need to create a Host entry for it. So, like the previous steps, click the Hosts entry in the left pane, and right-click to show the context menu, selecting Add Host:

And, as you might guess, this brings up a dialog where we will give the host entry a name and OS type, as well as give it the IQN (iSCSI Qualified Name):

Now that the host is created, right-clicking it and selecting Assign Volumes brings up the dialog to select volumes.  Just check the volume and click OK:

Note that you can also assign hosts to volumes from the Storage Volumes pane.  Here’s what our Hosts pane now looks like:

Now, I’ll connect to it from the Windows host that I added.  Go into the iSCSI Initiator Properties, and we’ll just do a Quick Connect.  Enter the IP address of the QuantaStor appliance in the Target field and click Quick Connect:

This should yield a success dialog:

And then we’ll go into Disk Management to be sure we’re good:
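As an aside, if you’d rather connect from a Linux initiator instead of Windows, the standard open-iscsi tooling works as you’d expect.  A minimal sketch (assuming the open-iscsi package is installed; substitute your appliance’s IP address and use the target IQN returned by the discovery step):

sudo iscsiadm -m discovery -t sendtargets -p <ip address>
sudo iscsiadm -m node -T <target iqn> -p <ip address> --login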

Conclusion

This blog only scratches the surface of what QuantaStor can do.  In addition to traditional file and block storage, it can also do object storage (with Ceph) and scale-out file (with Gluster).  All of these have use cases.  The thing to note about QuantaStor though is that whenever you deploy a specific type of storage it uses technology under the covers to specifically deliver that — it’s not a bolt-on technology to a baseline technology.  And you can tune it, if necessary!

OSNexus has done an amazing job of making storage easy to build and use, at a fraction of the cost of traditional approaches.  It’s been put through the wringer by service providers and has outperformed other solutions in HPC environments.  It shines at the norm, and excels in the extreme!