
Community Forum > CEPH or GlusterFS Redundancy

What type of redundancy does QuantaStor have when using Ceph or GlusterFS if a storage brick were to go offline? Do Ceph or GlusterFS provide brick redundancy other than replication of the entire Ceph or Gluster "cluster node"?

If you know about Nutanix, do they have some kind of redundancy like that (network-based RAID, aka RAIN, where the "N" stands for Nodes)?

January 11, 2016 | Registered Commenter Don Nguyen

Hello Don,

Yes, our Scale-out File (Gluster) and Scale-out Block (Ceph) technologies provide redundancy that protects against a hardware component failure or a complete node failure.

Our Scale-out Block (Ceph) solution uses OSDs stored on Storage Pools on the individual nodes. The OSDs are then combined into a Ceph storage pool, which can be configured with a replica count of 2 or 3 to store two or three copies of the data across the Ceph pool. This allows any component of a node, or an entire node, to fail with no impact to client access, since a usable copy of the data remains available.
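Under the hood this maps to standard Ceph pool settings. A sketch with the stock Ceph CLI, in case it helps make the replica count concrete (the pool name "qs-pool" and the PG counts are example values, not QuantaStor defaults):

```shell
# Create a replicated Ceph pool ("qs-pool" and the PG count of 128
# are illustrative values only).
ceph osd pool create qs-pool 128 128 replicated

# Keep three copies of every object across the cluster's OSDs;
# the pool stays writable as long as min_size copies are available.
ceph osd pool set qs-pool size 3
ceph osd pool set qs-pool min_size 2

# Confirm the replica count on the pool.
ceph osd pool get qs-pool size
```

With size 3 and min_size 2, any single OSD or node can fail and client I/O continues against the surviving copies while Ceph re-replicates in the background.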

The Scale-out File (Gluster) solution uses bricks stored on Storage Pools on the QuantaStor nodes, and also provides a replica count of 2 or 3, as well as erasure coding, when you create a Gluster volume out of those bricks. These features ensure that a hardware or node failure will not affect data availability or client access.
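For reference, the equivalent operation with the stock Gluster CLI looks like the sketch below (hostnames, volume names, and brick paths are all illustrative):

```shell
# Create a 3-way replicated Gluster volume from one brick per node
# (node1..node3 and the brick paths are example values).
gluster volume create vol0 replica 3 \
  node1:/export/pool1/brick node2:/export/pool1/brick node3:/export/pool1/brick
gluster volume start vol0

# Alternatively, an erasure-coded (dispersed) volume with
# 2 data fragments + 1 redundancy fragment per file:
#   gluster volume create vol1 disperse 3 redundancy 1 \
#     node1:/export/pool1/b1 node2:/export/pool1/b1 node3:/export/pool1/b1

# Show the volume layout and brick membership.
gluster volume info vol0
```

In the replica-3 case any single brick (or the node hosting it) can go offline and clients continue reading and writing against the remaining replicas.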

Note that you can create multiple Storage Pools on the various nodes, so Scale-out File and Scale-out Block can co-exist on the same hardware.

We have more information on the various QuantaStor configurations possible in our new workflows page at the link below, which you can review:

Please let us know if you have any questions.

Thank You,
Chris Golden
OSNEXUS support

January 11, 2016 | Registered Commenter Chris Golden

Thanks Chris,

I didn't get the node failure information from the article you sent; I had also read that same article prior to posting.

I think I read in one of the QS wikis that GlusterFS would be faster than Ceph for SMB/CIFS shares? Also, are these technologies suitable for VMware environments? And what are OSDs in Ceph? I did not see the acronym spelled out.

Thanks again!

January 11, 2016 | Registered Commenter Don Nguyen

Hello Don,

For CIFS access you will want to use our Scale-out File solution based on Gluster. Our Scale-out Block solution based on Ceph currently only provides block-level iSCSI/Fibre Channel access to Storage Volumes.

You can use either technology with VMware, although Scale-out File performance would be lower than what you could achieve with our Scale-out Block technology.

OSDs are how Ceph stores and manages the portion of a Ceph pool that resides on a particular QuantaStor Storage Pool. OSD stands for Object Storage Daemon.
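On a running cluster you can see the OSDs and which node each one lives on with the standard Ceph CLI; a quick sketch:

```shell
# List every OSD daemon grouped under its host, with up/down
# and in/out status for each.
ceph osd tree

# Overall cluster health, including any degraded or recovering
# placement groups after an OSD or node failure.
ceph -s
```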

The questions you are asking here concern Enterprise-level features, and you are posting to our Community Edition forum.

I recommend that you contact our Sales Engineering team for more information on our Enterprise Scale-out and HA features. Our Sales Engineers would be able to provide you with a demo of the features and discuss your needs in more detail.

Thank You,
Chris Golden
OSNEXUS Support.

January 12, 2016 | Registered Commenter Chris Golden