
Community Forum > HBA vs. HW RAID setup

Hi,
I'll be building my first QuantaStor box on a SuperMicro SC847E26 chassis I'm getting on loan…

My first question, as I prepare to start playing around with this setup, is which path I should take in order to maximize raw performance (read performance is my key interest).
Should I configure the drives as ZFS software RAID, treating the whole thing as a set of dumb disks, or should I go for an LSI HW RAID setup?

Would either (or both) of these setups be capable of taking advantage of the multipath aspects of the E26?

I'll have more (many more) questions later on, but I'd like to start with this basic one in order to better understand my options.

August 24, 2014 | Registered Commenter Dan Shechter

Hi Dan,
Yes, I would recommend going with the HW RAID configuration. If you connect two mini-SAS cables from your LSI 9286-8e (or a derivative) to the JBOD, it'll make dual-path connections to the storage.
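To confirm both paths are actually active once it's cabled up, something along these lines should work from the box's shell (this assumes the usual Linux tools, lsscsi and device-mapper-multipath, are installed; adjust for your setup):

    # Each dual-ported SAS drive should show up twice before multipathing kicks in
    lsscsi

    # Show the multipath topology; each drive should list two active paths
    multipath -ll
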
Here's a quick summary of the pros/cons:

Pros for hardware RAID:
- easier setup
- faster rebuilds for parity based RAID5/6/50/60
- faster write performance with the NVRAM write cache enabled (get the CacheVault module)
- best hot spare management

Pros/recommendations for ZFS software RAID:
- bit-rot protection
- required for HA configurations (use LSI HBA)
- typically used with RAID10 layout
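
On the RAID10 point, here's a rough sketch of what a striped-mirror (RAID10-style) ZFS pool looks like when the controller is in HBA/JBOD mode. QuantaStor normally drives this through the web UI, and the pool name and disk paths below are just placeholders:

    # Striped mirrors (RAID10-style): each mirror pair is a vdev, ZFS stripes across them
    zpool create tank \
      mirror /dev/disk/by-id/scsi-DISK01 /dev/disk/by-id/scsi-DISK02 \
      mirror /dev/disk/by-id/scsi-DISK03 /dev/disk/by-id/scsi-DISK04 \
      mirror /dev/disk/by-id/scsi-DISK05 /dev/disk/by-id/scsi-DISK06

    # Verify layout and health
    zpool status tank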

In configuring the HW RAID card for best performance, it'll depend on the workload. If you have many writers you'll want RAID10; if you plan to do one big ingest and the rest will be read IO, go with a parity-based RAID like RAID6 or RAID60. RAID5-based configurations are only marginally faster, and with drives as large as they are these days rebuilds take longer, so avoid RAID5 unless it is a small RAID unit (6 or fewer disks). Note that if you do a sequential IO test with a 9286/9271 you'll find that the controller maxes out at about 1.8GB/sec. With many concurrent readers that number will drop, but read-ahead and other mechanisms will help keep it high.
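
If you want to measure that sequential ceiling yourself, a quick fio run against the RAID unit is one way to do it. This is just a sketch: the device path, job count, and runtime are examples, and it's a read-only test, but double-check the --filename before running it:

    # Sequential read against the hardware RAID block device (read-only)
    fio --name=seqread --filename=/dev/sdb --rw=read --bs=1M \
        --ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 \
        --runtime=60 --time_based --group_reporting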

Let us know if that answers your questions,
Best,
-Steve

August 24, 2014 | Registered Commenter Steve

Hi Steve, and thanks for your quick comments.
I've been reading up on some SuperMicro builds and such, and I've stumbled upon this one

It describes a build based on the SC847A variant of SMC's storage line.
The build consists of 5 separate LSI 2008 controllers (1 onboard + 4 PCIe cards) and a RAIDZ3 layout (in his case).
Since he's using the A variant, he has 9 SFF-8087 ports, which allow all 36 disks to have direct SAS/SATA connections.
The 5 LSI 2008 controllers give him the required 9 SFF ports and what I assume would be ample bandwidth to the disks.
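(Rough back-of-the-envelope on my part, assuming SAS2: each SFF-8087 port is 4 lanes x 6Gb/s, roughly 2.4GB/s, so 9 ports is on the order of 20GB/s to the backplane, and each LSI 2008 controller sits on a PCIe 2.0 x8 slot at about 4GB/s, so roughly 20GB/s across the 5 of them. Either way that's far more than 36 spinning disks can push.)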

While this setup would certainly have real limitations (no multipath, etc.), it does have great appeal and seems to hit a very sweet price/performance point.
Would this be considered a good setup for OSNEXUS as well (assuming 3 x RAIDZ3 / RAIDZ2 vdevs of 12 x 4TB disks making up a rather large volume)?
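(By my math that's roughly 3 x (12 - 3) x 4TB = 108TB usable with RAIDZ3, or 3 x (12 - 2) x 4TB = 120TB with RAIDZ2, before ZFS overhead.)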

How would this compare to a 9271-based solution (I'm assuming you mean dual 9271s)? Would the above setup with a beefy CPU be able to compete with, or even beat, H/W RAID setups?

Any comments?

August 24, 2014 | Registered Commenter Dan Shechter

Yes, that'd be a good setup for QuantaStor as well. If you were to use a hardware RAID controller with the SC847A, I would go with one of the newer 7xxx series Adaptec cards like the 72405 (24 internal ports) or the 71605 with the AFM-700 cache module. That'll be much simpler to set up, and you get 1GB of DDR3 write cache as well, which will greatly improve write performance.
-Steve

August 24, 2014 | Registered Commenter Steve

>"Would a the above setup with a beefy CPU be able to compete or even beat H/W raid setups?"

It's hard to say. Dual LSI 9271s only give you 16 internal ports, so it's not a cost-effective way to go for the "A" series SMC enclosures, but they're fine for the E16/E26. I think the 3x RAIDZ2 would do very well, but I'd lean towards HW RAID for parity-based RAID as the NVRAM cache is a big deal. Note that higher CPU GHz is generally more important than core count unless you have lots and lots of clients, like a render farm.
Best,
-Steve

August 24, 2014 | Registered Commenter Steve

The setup I linked to used 5x LSI 9211-8i cards to achieve a total of 10 SAS ports, spread over as many PCIe lanes as humanly possible in that sort of setup. The 847A has 9 SAS ports, so the 10 ports are just above what is required.

Going the H/W RAID path would add the additional caches of course, as you say, but I was thinking that most ZFS builds talk about putting the $$$ to better use by buying a good ZIL device like this or that.
In addition, H/W RAID controllers don't do triple parity (RAIDZ3) AFAIK, so that would be another downside to going the H/W RAID path.

August 25, 2014 | Registered Commenter Dan Shechter

Hi Dan,
Agreed, for the type of setup you're looking at it might make more sense to just go the HBA route. BTW, we added support for RAIDZ3 to the web management interface for the upcoming 3.13 release, due out within the next few days. You noted that read performance is a key need for your workload; for that you might look at adding SSD for L2ARC first. Note also that rebuild speed is going to be lower with the RAIDZ3/HBA route, so be sure to use high quality SAS drives if you go that way.
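
If it helps, here's roughly what that layout looks like at the zpool level; QuantaStor's web UI handles this for you, and the disk names below are just placeholders for the real /dev/disk/by-id paths. Three 12-disk RAIDZ3 vdevs striped into one pool, plus an SSD as L2ARC:

    # Three 12-disk RAIDZ3 vdevs striped into one pool (disk names are placeholders)
    zpool create tank \
      raidz3 d01 d02 d03 d04 d05 d06 d07 d08 d09 d10 d11 d12 \
      raidz3 d13 d14 d15 d16 d17 d18 d19 d20 d21 d22 d23 d24 \
      raidz3 d25 d26 d27 d28 d29 d30 d31 d32 d33 d34 d35 d36

    # Add an SSD as L2ARC (read cache) for the read-heavy workload
    zpool add tank cache /dev/disk/by-id/scsi-SSD01

    zpool status tank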
Best,
-Steve

September 11, 2014 | Registered Commenter Steve