Whoosh search not working, initiate reindex search yields to errors

Hello all,

I had some problems with the switch to the Whoosh search backend. In the meantime I was able to solve them (solution below).

My problems were very similar to the write access errors mentioned here: [4.2.1] whoosh search not working, initiate reindex search yields to errors in log (#1096) · Issues · Mayan EDMS / Mayan EDMS · GitLab

I was able to work around the concurrency limitations of Whoosh with the following setting in Mayan's config.yml:

SEARCH_INDEXING_CHUNK_SIZE: 1

The explanation is already hidden in the mayan documentation here: Search — Mayan EDMS 4.4.3 documentation

The idea was that only one object should be written to the Whoosh database at a time, because according to the explanation in issue #1096 currently no more than one process can write to the Whoosh database at the same time.

I hope my contribution helps other Mayan users. I look forward to further experiences and contributions regarding the new Whoosh search backend.

Many greetings,
Jörg

I am so sorry, but the fix
SEARCH_INDEXING_CHUNK_SIZE: 1
does not work after all. After many, many hours of successful indexing, errors like the following appear again:

ERROR/ForkPoolWorker-####] Task mayan.apps.dynamic_search.tasks.task_index_instance[...] raised unexpected: LockError()

I suspect that a WRITELOCK from another process was not cleaned up beforehand. After that, all further write attempts to the Whoosh database fail because the lock is never released again. Before the LockErrors, the following appears in the log:

[### ERROR/MainProcess] Timed out waiting for UP message from <ForkProcess(ForkPoolWorker-####, started daemon)>
[### ERROR/MainProcess] Process 'ForkPoolWorker-####' pid:123456 exited with 'signal 9 (SIGKILL)'.

Process 123456 probably set the WRITELOCK, then got stuck and was terminated by SIGKILL. But the WRITELOCK remained, and all further write attempts fail.
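
One quick way to check this hypothesis is to stop all Mayan services and see whether lock files are still lying around on disk. A rough sketch, assuming the Whoosh index lives under the Mayan media directory (the path below is from my installation; adjust it to yours):

# stop all workers first, then look for leftover Whoosh lock files
sudo systemctl stop supervisor.service
find /home/mayan/media -type f -name '*WRITELOCK*' -ls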

How can I find out where process 123456, started by Worker B, got stuck?

Thanks for any help.

Hello all out there,

does anyone have a hint on how to look inside Worker B and how to debug the cause of the error mentioned above?

Thank you for your great work and any help :slight_smile:
Best regards, Jörg

Changing SEARCH_INDEXING_CHUNK_SIZE to 1 does not solve the concurrency issue; it only makes it worse.

This setting allows indexing many things in a single request by building a data structure of many objects. It is only used for bulk indexing tasks, especially to avoid creating a task entry for each object in the system: if you have 1M documents, this avoids 1M task entries in RabbitMQ and the associated memory usage spike. Bulk indexing is not used for search refreshes during normal use.
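
To illustrate the idea with a sketch (an illustration only, not Mayan's actual code): with a chunk size of N, the object IDs are grouped into batches and one task is queued per batch instead of one task per object.

# Illustration of how the chunk size reduces the number of queued tasks.
def chunked(ids, chunk_size):
    # Yield successive slices of at most chunk_size IDs.
    for start in range(0, len(ids), chunk_size):
        yield ids[start:start + chunk_size]

ids = list(range(1_000_000))
# chunk_size=1 would mean 1,000,000 queued tasks; chunk_size=1000 means 1,000.
print(sum(1 for _ in chunked(ids, chunk_size=1000)))  # prints 1000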

The idea was that only one object should be written to the Whoosh database at a time, because according to the explanation in issue #1096 currently no more than one process can write to the Whoosh database at the same time.

That is correct; however, Mayan already handles locking in a generalized way to prevent this and to ensure that only one task is writing to the Whoosh index at any given time. Mayan’s lock system is agnostic and used for many different things, plus it is distributed.

Raising a LockError is not fatal and is the intended protection. This is similar to an Ethernet collision (Collision domain - Wikipedia). It just means the lock system worked and the later task is going to be rescheduled and retried later.
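
As a rough sketch of that collision-and-retry behaviour (my own simplified code, not Mayan's; the function names and the lock name are placeholders): the indexing task tries to take a named lock, and if the lock is already held it backs off and retries instead of failing permanently.

import random
import time

class LockError(Exception):
    # Raised when the named lock is already held by another task.
    pass

def index_with_retry(acquire_lock, do_index, retries=5):
    # acquire_lock and do_index stand in for the real lock backend and the
    # real index update; on a collision we back off and try again later.
    for attempt in range(retries):
        try:
            lock = acquire_lock('search_index_writer', timeout=60)
        except LockError:
            time.sleep(random.uniform(1, 5) * (attempt + 1))
            continue
        try:
            do_index()
            return True
        finally:
            lock.release()
    return False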

Process 123456 probably set the WRITELOCK, then got stuck and was terminated by SIGKILL. But the WRITELOCK remained, and all further write attempts fail.

This is an interesting hypothesis. Research will be needed to determine whether a new Whoosh process is able to invalidate or take control of an existing orphaned Whoosh WRITELOCK. That should be the case, given that Whoosh works as a single process per index update, and this would be a common scenario the library is expected to handle.
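
One way to research this directly is a small script that uses the public Whoosh API against the existing index directory while no worker is running (the path and the index name below are assumptions and have to be adapted):

from whoosh import index

# Adjust to the directory that contains the Whoosh index and *_WRITELOCK files;
# if the index was created under a specific name, pass indexname= as well.
ix = index.open_dir('/home/mayan/media/whoosh')
try:
    writer = ix.writer()   # raises whoosh.index.LockError if the lock is held
    writer.cancel()        # we only wanted to test lock acquisition
    print('Writer acquired: no orphaned WRITELOCK is blocking the index.')
except index.LockError:
    print('LockError: the index is still locked even though no worker is running.')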

How can I find out where process 123456, started by Worker B, got stuck?

You need to increase the log level to DEBUG, examine the logs with docker compose logs -f, and search for the output of the process. At the debug log level, the process ID is shown in brackets.

Pass the environment variable MAYAN_LOGGING_LEVEL=DEBUG.
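
For a Docker Compose installation that boils down to something like the following sketch (it assumes the stock docker-compose.yml reads variables from the .env file, and 123456 stands for the PID from your log):

# add to the .env file next to docker-compose.yml
MAYAN_LOGGING_LEVEL=DEBUG

# recreate the containers and follow the logs, filtering by the process ID
docker compose up -d
docker compose logs -f | grep 123456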

Dear Roberto,

thank you very much for your explanations. I now understand some things better than before. I will try the debugging and report back if I find something interesting. I use a direct deployment, not a Docker environment, so I will have to adapt your debugging commands slightly, but I know what I need to do.

If I understand you correctly, the write locks are set by the Whoosh library and not by a Mayan subprocess. Is that correct?

See you soon; hopefully I will find something, because I am really looking forward to a more effective search backend.

Thank you so much for mayan :slight_smile:
Jörg

I found the following while reviewing the log files:

[2023-02-18 17:00:51 +0100] [1989658] [INFO] Handling signal: term
[2023-02-18 16:00:51 +0000] [1989690] [INFO] Worker exiting (pid: 1989690)
[2023-02-18 16:00:51 +0000] [1989691] [INFO] Worker exiting (pid: 1989691)
[2023-02-18 16:00:51 +0000] [1989693] [INFO] Worker exiting (pid: 1989693)
[2023-02-18 17:00:57 +0100] [1989658] [INFO] Shutting down: Master

What is noticeable is the time difference of one hour, even though the lines were written at the same time. Can there be a problem with daylight saving and standard time? If the write locks are time-dependent, a shift of one hour could cause the whole concept to fail.

Further, the following lines are interesting.

[2023-02-18 16:44:42 +0100] [1989221] [INFO] Booting worker with pid: 1989221
mayan.apps.task_manager.apps <1989221> [DEBUG] "check_broker_connectivity() line 33 Starting Celery broker connectivity test"
mayan.apps.task_manager.apps <1989221> [DEBUG] "check_results_backend_connectivity() line 60 Starting Celery result backend connectivity test"
mayan.apps.common.apps <1989221> [DEBUG] "ready() line 119 Initializing app: mayan.apps.lock_manager"
mayan.apps.lock_manager.apps <1989221> [DEBUG] "ready() line 24 Starting lock backend connectivity test"
mayan.apps.lock_manager.backends.base <1989221> [DEBUG] "acquire_lock() line 31 acquiring lock: _mayan_test_lock, timeout: 1"
[2023-02-18 15:45:27 +0000] [1989221] [ERROR] Exception in worker process
Traceback (most recent call last):
  File "/opt/mayan-edms/lib/python3.9/site-packages/mayan/apps/lock_manager/apps.py", line 30, in ready
    lock = lock_instance.acquire_lock(
  File "/opt/mayan-edms/lib/python3.9/site-packages/mayan/apps/lock_manager/backends/base.py", line 32, in acquire_lock
    return cls._acquire_lock(name=name, timeout=timeout)
  File "/opt/mayan-edms/lib/python3.9/site-packages/mayan/apps/lock_manager/backends/redis_lock.py", line 18, in _acquire_lock
    return RedisLock(name=name, timeout=timeout)
  File "/opt/mayan-edms/lib/python3.9/site-packages/mayan/apps/lock_manager/backends/base.py", line 48, in __init__
    return self._init(*args, **kwargs)
  File "/opt/mayan-edms/lib/python3.9/site-packages/mayan/apps/lock_manager/backends/redis_lock.py", line 73, in _init
    raise LockError
mayan.apps.lock_manager.exceptions.LockError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/mayan-edms/lib/python3.9/site-packages/gunicorn/arbiter.py", line 589, in spawn_worker
    worker.init_process()
  File "/opt/mayan-edms/lib/python3.9/site-packages/gunicorn/workers/base.py", line 134, in init_process
    self.load_wsgi()
  File "/opt/mayan-edms/lib/python3.9/site-packages/gunicorn/workers/base.py", line 146, in load_wsgi
    self.wsgi = self.app.wsgi()
  File "/opt/mayan-edms/lib/python3.9/site-packages/gunicorn/app/base.py", line 67, in wsgi
    self.callable = self.load()
  File "/opt/mayan-edms/lib/python3.9/site-packages/gunicorn/app/wsgiapp.py", line 58, in load
    return self.load_wsgiapp()
  File "/opt/mayan-edms/lib/python3.9/site-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp
    return util.import_app(self.app_uri)
  File "/opt/mayan-edms/lib/python3.9/site-packages/gunicorn/util.py", line 359, in import_app
    mod = importlib.import_module(module)
  File "/usr/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 790, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/opt/mayan-edms/lib/python3.9/site-packages/mayan/wsgi.py", line 16, in <module>
    application = get_wsgi_application()
  File "/opt/mayan-edms/lib/python3.9/site-packages/django/core/wsgi.py", line 12, in get_wsgi_application
    django.setup(set_prefix=False)
  File "/opt/mayan-edms/lib/python3.9/site-packages/django/__init__.py", line 24, in setup
    apps.populate(settings.INSTALLED_APPS)
  File "/opt/mayan-edms/lib/python3.9/site-packages/django/apps/registry.py", line 122, in populate
    app_config.ready()
  File "/opt/mayan-edms/lib/python3.9/site-packages/mayan/apps/lock_manager/apps.py", line 35, in ready
    raise RuntimeError(
RuntimeError: Error initializing the locking backend: mayan.apps.lock_manager.backends.redis_lock.RedisLock; 
[2023-02-18 15:45:27 +0000] [1989221] [INFO] Worker exiting (pid: 1989221)

Apparently the lock manager initialization fails. The last line suggests that it has something to do with the Redis server.

Error initializing the locking backend: mayan.apps.lock_manager.backends.redis_lock.RedisLock;

So next I will have a look in the Redis log to see if I can find more information there.

Correct. The files ending with WRITELOCK are created and managed by Whoosh. This is a file-level lock. Mayan manages its own high-level locking before the search update task is even launched.

What is noticeable is the time difference of one hour, even though the lines were written at the same time. Can there be a problem with daylight saving and standard time?

Good question. Are all workers running on the same computer/VM? I can’t think of any reason why processes running on the same host would have different times.

If the write locks are time-dependent, a shift of one hour could cause the whole concept to fail.

That is a valid assumption. It sounds reasonable that if the write lock was created with a time in the future, other processes with the correct time would not be able to acquire or invalidate it.

Mayan uses a backend system for the lock manager to abstract how locking is actually executed. This allows using either the database or Redis as a distributed lock provider. So a Mayan lock is a wrapper around another low-level lock, a RedisLock by default in recent versions.
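
As an illustration, the backend selection looks roughly like this in config.yml (the class path is the Redis backend visible in the traceback above; the exact setting name should be verified against the settings reference):

LOCK_MANAGER_BACKEND: mayan.apps.lock_manager.backends.redis_lock.RedisLock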

The message:

Error initializing the locking backend:

is worth investigating. Even if Mayan is eventually able to initialize the lock manager Redis backend (driver) properly, this message suggests there is a communication problem that could cause locking artifacts during runtime and might mean locking is not trustworthy in your scenario.

Dear Roberto,

thank you very much for your explanations. Once again I understand some things better than before.

Are all workers running in the same computer/VM?

Yes. I use direct deployment and nearly the latest version of mayan-edms 4.4.3.

I have collected logs over the last two days and still have to look through them to see whether there is more information about locking problems.

Thank you for your great work and any help :slight_smile:
Best regards, Jörg

I looked at my Redis “info” output and saw

# Clients
connected_clients:61
client_recent_max_input_buffer:8
client_recent_max_output_buffer:0
blocked_clients:4
tracking_clients:0
clients_in_timeout_table:4

I think the problem lies behind

blocked_clients:4
clients_in_timeout_table:4

Only Mayan uses Redis. So for some reason Redis blocks the connections. I will look into this problem.
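
To see what those blocked clients are actually doing, the client list can be dumped; the cmd= column shows the last command each connection issued (the grep pattern is only a guess at typical blocking commands):

redis-cli client list
redis-cli client list | grep -E 'cmd=(blpop|brpop|wait)'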

I have read all the generated logs and found nothing interesting to report. There was another

Error initializing the locking backend.

So I started to figure out why this error happens.
I stopped supervisor.service, apache2.service and redis-server.service. (Stopping apache2 was necessary because a Nextcloud instance uses Redis as well.) Then I started only redis-server.service and began monitoring what happens using this command:

redis-cli monitor

In a second console I started supervisor.service. But before that, I changed the Mayan configuration for supervisor from

MAYAN_WORKER_A_CONCURRENCY=""
MAYAN_WORKER_B_CONCURRENCY=""
MAYAN_WORKER_C_CONCURRENCY=""
MAYAN_WORKER_D_CONCURRENCY="--concurrency=1"

to

MAYAN_WORKER_A_CONCURRENCY="--concurrency=1"
MAYAN_WORKER_B_CONCURRENCY="--concurrency=1"
MAYAN_WORKER_C_CONCURRENCY="--concurrency=1"
MAYAN_WORKER_D_CONCURRENCY="--concurrency=1"

The idea was to make sure that only one process per worker is started. And now the error is gone. If I stop supervisor.service, all WRITELOCK files in the Whoosh directory get deleted; this wasn’t the case without this little change. I started using Mayan, started apache2.service as well, and used Nextcloud.

There was a lot of output on the “redis-cli monitor” console, but so far there have been no more errors.

Conclusion: The change in /etc/supervisor/conf.d/mayan.conf did the job, but I do not understand why.

Dear Roberto,

is the following assumption correct?

In Mayan there may be only one lock-manager!

Which process starts the lock manager? I suppose that I had two lock managers running at the same time. That would also lead to contradictory locks.

Many greetings,
Jörg

If setting the concurrency to 1 solves the issue, there is a problem somewhere: with the way Mayan is installed, with networking, or with Redis. Redis should be allowing many connections to it.

Try a clean installation of Mayan using the supported Docker Compose method. It may be that your installation method and/or changes are introducing some artifacts.

In Mayan there may be only one lock-manager!

Which process starts the lock manager? I suppose that I had two lock managers running at the same time. That would also lead to contradictory locks.

That is incorrect. There is a single lock manager app, but every instance of Mayan instantiates a copy of the lock manager (as it does with every single app), otherwise locking would not work. That is the very purpose of locking: every instance has an initialized connection to the lock system (with the same settings) and uses it to obtain a lock for a resource. Locking is a fundamental principle of concurrent programming; it does not work correctly without it.

Who provided that answer?

I have been very busy over the last few days, hence the late feedback. I apologize for the delay.

I saw in the log that two connectivity tests of the lock manager ran under two different PIDs. So I thought that this could be a problem.

I will investigate the problem further and get back to you when I can provide more detailed information.

I started with mayan when version 3.4 was current. See here: Version 3.4 — Mayan EDMS 4.4.4 documentation
I did two installations strictly using the direct deploy method and then continued all updates as instructed.

I will also double check all steps inside Direct deployment — Mayan EDMS 4.4.4 documentation

In the meantime, I have made further progress. First I checked the installation. Everything is set up correctly and I could not find any errors. Mayan also works wonderfully on both installations I maintain, as long as the database is used as search backend.

However, upon checking I noticed an error in the installation instructions, see: Direct deployment — Mayan EDMS 4.4.5 documentation. The instructions say that three Redis databases are needed. This is now outdated: according to the current instructions only two Redis databases, 1 and 2, are used. Database 0 was used for the queues in older instructions, but that part of the setup has since been replaced by RabbitMQ and AMQP.

I also watched the locking in the database and checked whether errors happen. In doing so, I could not find any obvious errors in Mayan's locking mechanism.

:grinning: :grinning: :grinning:
I found two solutions to the index build-up problem and tested them successfully:
:grinning: :grinning: :grinning:

  1. Keep triggering the reindex process until all documents are indexed. Several triggers may be necessary. Once all files are indexed, updating the Whoosh index subsequently works without errors, for example when new documents are added or existing documents are massively changed.

  2. Split Worker B into two workers, B and B2. Worker B2 only processes the queues search and search_slow. Only for Worker B2 is --concurrency=1 set in the supervisor configuration. As soon as the system has completed the initial indexing, --concurrency=1 can be removed again; after that, updating the Whoosh index works without errors, just as in solution 1.

About the cause of the problem:
I have a guess as to what might be causing the problem. The locks in Whoosh are created and managed by the Whoosh library. According to the Whoosh documentation, a lock is created as soon as a writer object is requested via the API. If several processes request a writer object at almost the same time, I could imagine that these requests block each other. There are several file locks, one for each index. If a writer is supposed to write to several indexes, it might get access to one index but not to the others. Instead of writing everything that is currently possible, it waits for the missing write permission.

Concretely, I could observe in a minimal installation with only worker B2 running with --concurrency=2 that both processes were in the queue and waited 180 seconds for the write permission to be released. Once that time was up, several outcomes were possible. If one worker got all the write permissions, the index continued to be created or written. If both workers requested write permissions from the Whoosh API at nearly the same time again, they usually sent each other back into the 180 s queue. Then again nothing happens and the index is not built up. If my assumption is correct, it also explains why both solutions found and described above work.

The simplest solution to this problem would be to make sure that the workers never request a file lock from the Whoosh API at the same time. This can be achieved very easily by time skewing using random numbers: before each Whoosh API call that creates a writer object, just wait a random amount of time, e.g. 3 s to 30 s. This should ensure that only one worker (namely the randomly fastest one) gets all the write permissions it needs.
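
A minimal sketch of that idea (my own illustration, not a patch for Mayan; the index path handling is simplified):

import random
import time

from whoosh import index

def get_writer_with_jitter(index_directory, minimum=3.0, maximum=30.0):
    # Random skew: wait 3-30 s so that competing workers rarely ask Whoosh
    # for the writer (and therefore the WRITELOCK) at the same instant.
    time.sleep(random.uniform(minimum, maximum))
    ix = index.open_dir(index_directory)
    # Whoosh still enforces its file lock; the jitter only reduces collisions.
    return ix.writer()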

Attached is my config for the supervisor daemon:

[supervisord]
environment=
    PYTHONPATH="/home/mayan/media/user_settings",
    MAYAN_ALLOWED_HOSTS='["*"]',
    MAYAN_MEDIA_ROOT="/home/mayan/media/",
    MAYAN_PYTHON_BIN_DIR=/opt/mayan-edms/bin/,
    MAYAN_GUNICORN_BIN=/opt/mayan-edms/bin/gunicorn,
    MAYAN_GUNICORN_LIMIT_REQUEST_LINE=4094,
    MAYAN_GUNICORN_MAX_REQUESTS=500,
    MAYAN_GUNICORN_REQUESTS_JITTER=50,
    MAYAN_GUNICORN_TEMPORARY_DIRECTORY="",
    MAYAN_GUNICORN_TIMEOUT=120,
    MAYAN_GUNICORN_WORKER_CLASS=sync,
    MAYAN_GUNICORN_WORKERS=3,
    MAYAN_SETTINGS_MODULE=mayan.settings.production,
    MAYAN_WORKER_A_CONCURRENCY="",
    MAYAN_WORKER_A_MAX_MEMORY_PER_CHILD="--max-memory-per-child=300000",
    MAYAN_WORKER_A_MAX_TASKS_PER_CHILD="--max-tasks-per-child=100",
    MAYAN_WORKER_B_CONCURRENCY="",
    MAYAN_WORKER_B_MAX_MEMORY_PER_CHILD="--max-memory-per-child=300000",
    MAYAN_WORKER_B_MAX_TASKS_PER_CHILD="--max-tasks-per-child=100",
    MAYAN_WORKER_B2_CONCURRENCY="--concurrency=1",
    MAYAN_WORKER_B2_MAX_MEMORY_PER_CHILD="--max-memory-per-child=300000",
    MAYAN_WORKER_B2_MAX_TASKS_PER_CHILD="--max-tasks-per-child=100",
    MAYAN_WORKER_C_CONCURRENCY="",
    MAYAN_WORKER_C_MAX_MEMORY_PER_CHILD="--max-memory-per-child=300000",
    MAYAN_WORKER_C_MAX_TASKS_PER_CHILD="--max-tasks-per-child=100",
    MAYAN_WORKER_D_CONCURRENCY="--concurrency=1",
    MAYAN_WORKER_D_MAX_MEMORY_PER_CHILD="--max-memory-per-child=300000",
    MAYAN_WORKER_D_MAX_TASKS_PER_CHILD="--max-tasks-per-child=10",
    _LAST_LINE=""

[program:mayan-edms-gunicorn]
autorestart = true
autostart = true
command = %(ENV_MAYAN_GUNICORN_BIN)s --bind 127.0.0.1:8000 --env DJANGO_SETTINGS_MODULE=%(ENV_MAYAN_SETTINGS_MODULE)s --limit-request-line %(ENV_MAYAN_GUNICORN_LIMIT_REQUEST_LINE)s --max-requests %(ENV_MAYAN_GUNICORN_MAX_REQUESTS)s --max-requests-jitter %(ENV_MAYAN_GUNICORN_REQUESTS_JITTER)s %(ENV_MAYAN_GUNICORN_TEMPORARY_DIRECTORY)s --worker-class %(ENV_MAYAN_GUNICORN_WORKER_CLASS)s --timeout %(ENV_MAYAN_GUNICORN_TIMEOUT)s --workers %(ENV_MAYAN_GUNICORN_WORKERS)s mayan.wsgi
environment =
  DJANGO_SETTINGS_MODULE=%(ENV_MAYAN_SETTINGS_MODULE)s
redirect_stderr = true
user = mayan

[program:mayan-edms-worker_a]
autorestart = true
autostart = true
command = nice -n 0 %(ENV_COMMAND)s
environment =
  COMMAND = "%(ENV_MAYAN_PYTHON_BIN_DIR)scelery -A mayan worker %(ENV_MAYAN_WORKER_A_CONCURRENCY)s --hostname=mayan-edms-worker_a.%%h --loglevel=ERROR -Ofair --queues=converter,sources_fast %(ENV_MAYAN_WORKER_A_MAX_MEMORY_PER_CHILD)s %(ENV_MAYAN_WORKER_A_MAX_TASKS_PER_CHILD)s --without-gossip --without-heartbeat",
  DJANGO_SETTINGS_MODULE=%(ENV_MAYAN_SETTINGS_MODULE)s
killasgroup = true
numprocs = 1
priority = 998
startsecs = 10
stopwaitsecs = 1
user = mayan

[program:mayan-edms-worker_b]
autorestart = true
autostart = true
command = nice -n 2 %(ENV_COMMAND)s
environment =
  COMMAND = "%(ENV_MAYAN_PYTHON_BIN_DIR)scelery -A mayan worker %(ENV_MAYAN_WORKER_B_CONCURRENCY)s --hostname=mayan-edms-worker_b.%%h --loglevel=ERROR -Ofair --queues=document_states_medium,documents,duplicates,file_caching,file_metadata,indexing,metadata,parsing,sources %(ENV_MAYAN_WORKER_B_MAX_MEMORY_PER_CHILD)s %(ENV_MAYAN_WORKER_B_MAX_TASKS_PER_CHILD)s --without-gossip --without-heartbeat",
  DJANGO_SETTINGS_MODULE=%(ENV_MAYAN_SETTINGS_MODULE)s
killasgroup = true
numprocs = 1
priority = 998
startsecs = 10
stopwaitsecs = 1
user = mayan

[program:mayan-edms-worker_b2]
autorestart = true
autostart = true
command = nice -n 2 %(ENV_COMMAND)s
environment =
  COMMAND = "%(ENV_MAYAN_PYTHON_BIN_DIR)scelery -A mayan worker %(ENV_MAYAN_WORKER_B2_CONCURRENCY)s --hostname=mayan-edms-worker_b2.%%h --loglevel=ERROR -Ofair --queues=search,search_slow %(ENV_MAYAN_WORKER_B2_MAX_MEMORY_PER_CHILD)s %(ENV_MAYAN_WORKER_B2_MAX_TASKS_PER_CHILD)s --without-gossip --without-heartbeat",
  DJANGO_SETTINGS_MODULE=%(ENV_MAYAN_SETTINGS_MODULE)s
killasgroup = true
numprocs = 1
priority = 998
startsecs = 10
stopwaitsecs = 1
user = mayan

[program:mayan-edms-worker_c]
autorestart = true
autostart = true
command = nice -n 10 %(ENV_COMMAND)s
environment =
  COMMAND = "%(ENV_MAYAN_PYTHON_BIN_DIR)scelery -A mayan worker %(ENV_MAYAN_WORKER_C_CONCURRENCY)s --hostname=mayan-edms-worker_c.%%h --loglevel=ERROR -Ofair --queues=checkouts_periodic,documents_periodic,events,mailing,signatures,sources_periodic,statistics,uploads %(ENV_MAYAN_WORKER_C_MAX_MEMORY_PER_CHILD)s %(ENV_MAYAN_WORKER_C_MAX_TASKS_PER_CHILD)s --without-gossip --without-heartbeat",
  DJANGO_SETTINGS_MODULE=%(ENV_MAYAN_SETTINGS_MODULE)s
killasgroup = true
numprocs = 1
priority = 998
startsecs = 10
stopwaitsecs = 1
user = mayan

[program:mayan-edms-worker_d]
autorestart = true
autostart = true
command = nice -n 15 %(ENV_COMMAND)s
environment =
  COMMAND = "%(ENV_MAYAN_PYTHON_BIN_DIR)scelery -A mayan worker %(ENV_MAYAN_WORKER_D_CONCURRENCY)s --hostname=mayan-edms-worker_d.%%h --loglevel=ERROR -Ofair --queues=ocr,storage_periodic,tools %(ENV_MAYAN_WORKER_D_MAX_MEMORY_PER_CHILD)s %(ENV_MAYAN_WORKER_D_MAX_TASKS_PER_CHILD)s --without-gossip --without-heartbeat",
  DJANGO_SETTINGS_MODULE=%(ENV_MAYAN_SETTINGS_MODULE)s
killasgroup = true
numprocs = 1
priority = 998
startsecs = 10
stopwaitsecs = 1
user = mayan

[program:mayan-edms-celery-beat]
autorestart = true
autostart = true
command = nice -n 1 %(ENV_COMMAND)s
environment =
  COMMAND = "%(ENV_MAYAN_PYTHON_BIN_DIR)scelery -A mayan beat --pidfile= -l ERROR",
  DJANGO_SETTINGS_MODULE=%(ENV_MAYAN_SETTINGS_MODULE)s
killasgroup = true
numprocs = 1
priority = 998
startsecs = 10
stopwaitsecs = 1
user = mayan

I hope that all my observations will help to make Mayan even better. I wish everyone success in this wonderful task.