High memory usage

Hi, I just migrated from a direct deployment to Docker Compose (the forum guide was perfect), but I have a strange issue.
If I run `docker compose up --detach`, RAM usage quickly climbs to ~10 GB, and if I limit the memory to e.g. 8 GB, the CPU sits at 100% and the Mayan frontend is not accessible.

I run Mayan in an LXC on a Proxmox host, with about 1,000 documents.

Why is the RAM consumption so high?

PS: I see a lot of worker processes running (about 100):

 	194.01 MiB 	/opt/mayan-edms/bin/python3 /opt/mayan-edms/bin/celery -A mayan worker --hostnam ...


Docker inside LXC is a configuration we have not tested, so there might be artifacts affecting the deployment. LXC might be presenting a CPU core count that is confusing Celery's autoscale formula and causing it to launch ~100 worker processes.
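A quick way to check this theory: Celery's prefork pool defaults its concurrency to the CPU count Python detects, and inside an LXC guest that is typically the host's full core count rather than the container's limit. A minimal check you could run inside the container (nothing here is Mayan-specific):

```python
import os

# Celery's prefork pool defaults --concurrency to the number of CPUs
# the interpreter can see. Inside an LXC guest this is usually the
# Proxmox host's core count, not the container's CPU limit.
detected = os.cpu_count()
print(f"CPUs visible to this process: {detected}")

# Mayan starts several Celery worker queues, so the total number of
# forked processes scales roughly with (workers x detected cores),
# which can approach ~100 processes on a many-core host.
```

If this prints the host's core count instead of what you allotted to the LXC, that would explain the process explosion.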

OK, and is there a way to prevent this? Maybe an env setting for a maximum number of worker processes?
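Celery itself accepts a `--concurrency` flag, so perhaps overriding the worker command in a compose override file would cap it? An untested sketch (the service name `mayan_worker` is my guess, not from the official compose file, so check your actual worker service definitions):

```yaml
# docker-compose.override.yml -- hypothetical; verify the real worker
# service name and command in the official Mayan compose file.
services:
  mayan_worker:                      # assumed service name
    command:
      - /opt/mayan-edms/bin/celery
      - -A
      - mayan
      - worker
      - --concurrency=2              # hard cap on prefork child processes
```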

Migrated to a VM and now RAM usage is normal.


This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.