
Community Forum > Default ZFS Write Cache Size


What is the default ZFS write cache size in RAM for QuantaStor? This is without any SLOG/ZIL SSD device. So if my system has 128GB of RAM, what is the default ZFS Write Cache?

Also, is there a flag in QuantaStor for a 'log bias' setting, for latency versus throughput?

Thanks as always!

September 19, 2015 | Registered Commenter Don Nguyen

Hi Don,
We actually have a simple algorithm for it in a script that runs as part of the installation process. You can set it to the default for the amount of RAM using the CLI command:

sudo qs-util setzfsarcmax auto

Or you can set it to a specific percentage of the system RAM, for example, to set it to 50% of the available RAM run this:

sudo qs-util setzfsarcmax 50

It applies the settings immediately, so there's no need to reboot. The 'auto' mode sets the ARC cap to 80% of RAM for systems with more than 64GB, 75% for >32GB, 70% for >16GB, 60% for >10GB, and 50% for >6GB. Below 6GB, the auto mode makes no change.
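Those tiers can be sketched as a small shell function (this is only an illustration of the sizing rules above, not the actual qs-util script):

```shell
#!/bin/sh
# Illustrative sketch of the 'auto' ARC sizing tiers described above.
# (Not the actual qs-util script; the real implementation may differ.)
arc_pct_for_ram() {
    ram_gb=$1
    if   [ "$ram_gb" -gt 64 ]; then echo 80
    elif [ "$ram_gb" -gt 32 ]; then echo 75
    elif [ "$ram_gb" -gt 16 ]; then echo 70
    elif [ "$ram_gb" -gt 10 ]; then echo 60
    elif [ "$ram_gb" -gt 6  ]; then echo 50
    else echo 0   # below 6GB, auto mode makes no change
    fi
}

arc_pct_for_ram 128   # prints 80 -> a 128GB system gets an ~102GB ARC cap
```

So for the 128GB system in the original question, 'auto' would cap the ARC at 80% of RAM.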


October 4, 2015 | Registered Commenter Steve

I thought the ARC was a read-only cache in RAM? What about a write cache?

October 7, 2015 | Registered Commenter Don Nguyen

Hello Don,

Yes, the ARC is a read cache in system memory.

ZFS does not have a write cache as such. ZFS does have an intent log that helps combine multiple random write I/Os into larger sequential-style write queues. This reduces how often ZFS has to hit the backend data disks in the storage pool for writes.

The ZIL (ZFS Intent Log) has flush criteria of typically ~2GB of accumulated data or 30 seconds; whichever criterion is reached first triggers a flush of the data to the data disks as a transaction group (TXG), ensuring the data is safely written to disk. Sync-based I/O calls will not be acknowledged back to the client until the TXG flush has completed. You can use a high-performance, high-write-endurance SSD as a dedicated SLOG device: it mirrors any sync I/O held in the in-memory ZIL, which accelerates the sync write acknowledgement back to the client. Please note that this only benefits sync-based writes under 32KB in size. The SLOG device will only ever hold up to the in-memory ZIL capacity (again, ~2GB), so your SLOG SSD does not need to be very large.
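On the earlier 'log bias' question: in stock ZFS (outside of any QuantaStor-specific tooling), this exists as a per-dataset `logbias` property. 'latency' (the default) uses the SLOG for sync writes, while 'throughput' bypasses it in favor of the pool disks. A sketch, assuming a hypothetical pool/dataset named `tank/vols`:

```shell
# Stock ZFS per-dataset log bias ('tank/vols' is a hypothetical dataset
# name; QuantaStor may wrap or manage this property differently):
sudo zfs set logbias=throughput tank/vols   # favor pool throughput, skip the SLOG
sudo zfs get logbias tank/vols              # show the current setting
sudo zfs set logbias=latency tank/vols      # default: use the SLOG for sync writes
```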

With ZFS, if you have a use case that needs high write IOPS, you will want to ensure that your storage pool data disks can provide the IOPS performance you need. If you have a mixed-use environment, with both archive/capacity needs and high IOPS/throughput needs, please consider creating multiple storage pools with different disk configurations. For example, for a storage pool intended for virtual desktop usage, we recommend using all SSDs for the data disks in a RAID10 to maximize IOPS. For a backup/archive use case, you would want to consider a RAID50 configuration of platter disks.

Please let us know if you have any other questions.

Thank You,
Chris Golden

October 7, 2015 | Registered Commenter Chris Golden

So if I am using iSCSI, I don't have to worry about a SLOG device? How about sudden power outage without an SLOG device? Would my data be safe?

October 14, 2015 | Registered Commenter Don Nguyen

Hello Don,

Yes. Without an SLOG, QuantaStor will not acknowledge to the client that a write completed until the data has been written safely to the backend data disks. With an SLOG, sync-based writes of 32KB or smaller have somewhere safe to be journaled in the event of a power outage. This allows the client to receive an acknowledgement from the QuantaStor much sooner than it would take to write to the backend disks, and it can help accelerate random sync-based write I/O from the client until the ZIL/SLOG reaches a flush criterion (~30 seconds or ~2GB).
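For reference, the flush criteria mentioned above correspond to kernel module tunables in stock ZFS on Linux. Exact names and default values vary by ZFS release, so treat this as illustrative only:

```shell
# Inspect the TXG flush tunables on a ZFS-on-Linux system (requires the
# zfs kernel module to be loaded; defaults vary by ZFS release):
cat /sys/module/zfs/parameters/zfs_txg_timeout     # max seconds between TXG flushes
cat /sys/module/zfs/parameters/zfs_dirty_data_max  # dirty-data cap, in bytes
```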

As noted earlier, if your I/O requirement is for sustained write I/O, you will need to ensure that your backend data disks can handle that demand. The SLOG is only helpful for bursts of sync-based I/O; it is not a write tier or write cache.

Please also note that async-based I/O calls bypass the ZIL/SLOG and proceed directly to the data disks.
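Relatedly, stock ZFS exposes a per-dataset `sync` property that controls whether writes are treated as sync at all (again a sketch with a hypothetical dataset name, and not necessarily how QuantaStor manages it):

```shell
# Per-dataset sync behavior in stock ZFS ('tank/vols' is hypothetical):
sudo zfs set sync=standard tank/vols   # default: honor the client's sync flags
sudo zfs set sync=always tank/vols     # treat every write as sync (uses ZIL/SLOG)
sudo zfs get sync tank/vols            # show the current setting
```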

Please let us know if you have any other questions.

Thank You,
Chris Golden

October 14, 2015 | Registered Commenter Chris Golden