
Community Forum > What's filling up RAM?

Hi all,

One of my 3 QuantaStor nodes regularly fills its RAM up to 100%. The other 2 are fine. The node that fills up its RAM is the receiver of my backups, which consist of large files transferred via SMB. But why is this system filling its RAM to 100% while the other 2 are not? They receive the same amount of data, and one of the other nodes also receives the video files from my IP cameras.

When the RAM is at 100%, the speed of the backup jobs drops dramatically. The only workaround is to reboot this node regularly. Each node has 4 GB RAM and serves 2 HDDs of 2 TB each. I could increase the RAM, but I don't think that would solve the issue: the node would just fill up the larger RAM as well, only taking longer before I have to reboot it.

Kind Regards


October 29, 2018 | Registered Commenter Stefan Mössner


Though the currently listed minimum memory requirement is 4GB, there are many factors that determine whether this is enough. Our current recommendation is 1GB of RAM per 1TB of disk. You have 2x2TB drives (4TB total) per node with 4GB of RAM, which just meets that 1GB-per-1TB rule.

It is difficult to determine how the memory is being used without more detail about your backup configuration. This can be found by right-clicking the backup policy and choosing Modify Backup/Migration Policy. One important setting is in the Advanced Settings tab: Parallelized Backup vs. Serialized Backup. These two backup methods work differently, and each uses memory differently, so knowing which one you have configured will aid diagnosis.

Other items that can affect memory usage are the destination file share type (NFS, CIFS) as well as the underlying pool type configuration (Gluster, XFS, ZFS). All of these impact memory requirements.

Since our rule of thumb is 1GB of RAM per 1TB of disk, and you are currently at 4GB of RAM for 4TB of disk, adding 2GB or 4GB more, as you mentioned, may fix the problem.

QuantaStor boxes can be SSH'd into, allowing you to run tools such as free and top. If you can SSH in (e.g. with PuTTY) and log in to your host, run "top" and then press the capital M key. This sorts the top output by memory, with the largest process at the top of the list. If you do that while a backup is running, it might help identify what is using the memory.
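For example, the following commands give a quick snapshot of memory usage once you are logged in (a sketch assuming a standard Linux shell on the node; nothing QuantaStor-specific):

```shell
# Overall memory picture: total, used, free, and buffers/cache (in MiB).
free -m

# One non-interactive snapshot of the processes using the most resident
# memory (equivalent to pressing capital M in interactive top):
ps aux --sort=-rss | head -n 10
```

Running the second command a few times while a backup job is active should show which process is actually growing.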


November 1, 2018 | Registered Commenter Bill Saunders


As described earlier, I have 3 nodes with two 2 TB HDDs each, so each node has 4 TB of disk space and 4 GB of RAM. This meets the system requirements. I set up a scale-out NAS server with GlusterFS. Each disk on every node is configured as a separate storage pool, so there are 6 storage pools and 6 Gluster bricks. The underlying filesystem is XFS, as this is the recommended filesystem for Gluster. The shares are accessible via SMB.

The backups are mostly serialized and are written and read via SMB. During the day, the Veeam Agent backups of my systems are written to the NAS. These backups are system image files, which are very large. The nightly backup jobs copy the backup files via file sync to a Windows machine, so the data is kept outside the NAS environment in case of emergency.

I'm wondering why the RAM is never freed up again. On one of the other nodes, IP network cameras permanently write small video files via SMB to the NAS, and there's no such issue; the RAM stays at 30% to 40%. On the third node the RAM usage is at the same level, and there's no continuous SMB access. These 2 nodes also receive the backup files, because I'm using erasure coding 2+1.

Today the RAM of this one node is again filled up to 100%. The node has been running for 4 days since the last reboot. Here's the top output:

top - 10:45:50 up 4 days, 4:08, 1 user, load average: 1.35, 0.98, 1.21
Tasks: 492 total, 1 running, 491 sleeping, 0 stopped, 0 zombie
%Cpu(s): 9.6 us, 15.0 sy, 0.0 ni, 27.8 id, 40.3 wa, 0.0 hi, 7.3 si, 0.0 st

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3639 root 20 0 5449888 2.824g 0 S 10.3 73.5 301:54.98 glusterfs
3299 root 20 0 1142316 61052 0 S 4.3 1.5 83:51.31 glusterfsd
13712 backup 20 0 414464 89056 11196 S 2.3 2.2 50:53.06 smbd
1503 root 20 0 170848 9992 1916 S 1.3 0.2 31:07.01 corosync
28 root 20 0 0 0 0 S 0.7 0.0 14:30.62 kswapd0
1825 telegraf 20 0 255916 11652 1116 S 0.7 0.3 22:41.02 telegraf
2023 root 20 0 1638044 18772 6900 S 0.7 0.5 23:38.23 qs_service

top shows that the Gluster process is using a lot of RAM. Instead of rebooting the node, I stopped and restarted the Gluster volume today. But the RAM isn't freed up:

top - 11:21:08 up 4 days, 4:43, 1 user, load average: 0.10, 0.87, 1.44
Tasks: 492 total, 1 running, 491 sleeping, 0 stopped, 0 zombie
%Cpu(s): 2.0 us, 1.7 sy, 0.0 ni, 90.3 id, 6.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem: 4028380 total, 3792216 used, 236164 free, 9864 buffers
KiB Swap: 4192252 total, 2137812 used, 2054440 free. 134984 cached Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1503 root 20 0 170848 9952 1876 S 0.7 0.2 31:17.82 corosync
1565 influxdb 20 0 682132 163108 7456 S 0.7 4.0 10:15.30 influxd
2023 root 20 0 1638044 26776 14716 S 0.7 0.7 23:48.05 qs_service

As you can see, top no longer shows the Gluster process with high memory usage, but the RAM is still nearly 100% in use, and I can't see which process is filling it up now. So only a reboot of the node will help.
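One thing that may be worth checking in this situation (a sketch assuming a standard Linux kernel, not QuantaStor-specific advice): memory that stays "used" after the big process is gone is often kernel page cache or slab cache (e.g. XFS inode/dentry caches) rather than process memory, and it shows up in /proc/meminfo instead of in top's process list:

```shell
# How much of "used" memory is cache the kernel could reclaim:
grep -E '^(MemTotal|MemFree|Cached|Slab|SReclaimable):' /proc/meminfo

# MemAvailable (kernels >= 3.14) estimates how much memory is really
# obtainable for new work, counting reclaimable caches as free:
grep '^MemAvailable:' /proc/meminfo

# As a diagnostic only (run as root), reclaimable caches can be dropped;
# if "used" falls sharply afterwards, it was cache, not a process leak:
# sync && echo 3 > /proc/sys/vm/drop_caches
```

If SReclaimable and Cached account for most of the "used" memory, the node is not actually out of memory; if they don't, something else (e.g. a leaking daemon or swap pressure) is involved.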

I don't think increasing the RAM will help if the RAM is never freed up by the processes.

Kind Regards


November 2, 2018 | Registered Commenter Stefan Mössner

You're going to need 16GB or more of RAM on those systems; 4GB is not enough. QuantaStor has its core service, then Tomcat for the web UI, and then Samba and the GlusterFS client, which use a fair amount of RAM and CPU cycles.

November 5, 2018 | Registered Commenter Steve


I tested the system's behavior after increasing the RAM, but it still fills up the RAM and never frees it again, just as I expected when opening this thread on 29 Oct 2018: "I can increase the RAM but I think that this won't solve the issue. I think the node will fill up the increased RAM size, too. But it takes a longer time until I have to reboot this node."

I've now been testing QuantaStor for nearly a year, and I still can't use it as production storage because I keep finding new issues that prevent the migration of all my data to this storage system.

Kind Regards


December 13, 2018 | Registered Commenter Stefan Mössner