[Old Wiki topic] FreeNAS 11.2 FreeBSD jail deployment

Community-contributed guides and tutorials on multiple topics, such as installation on other operating systems or platforms, monitoring, log aggregation, etc.
rosarior
Developer
Posts: 490
Joined: Tue Aug 21, 2018 3:28 am
Location: Puerto Rico

[Old Wiki topic] FreeNAS 11.2 FreeBSD jail deployment

Post by rosarior »

This post was copied from the retired Wiki by user request. It is community contributed and not officially supported.

Original post by user @Boybert. Thank you.
---------------------------------------

These instructions are adapted from the Mayan EDMS basic deployment instructions and were tested on a FreeNAS 11.2 Beta 3 installation. They should work with little or no modification on a bare-metal FreeBSD system.

Mayan EDMS is a Django and Python project, so familiarity with these technologies is recommended to better understand why it does some of the things it does.

Add the Mayan user

1. Navigate to Account > Users in the FreeNAS GUI
2. Click the yellow "+" button to add a user
3. Fill out the form as follows:

Username: mayan
Full name: Mayan EDMS User
User ID: 973
New Primary group: Unchecked
Primary group: wheel
Home directory: /nonexistent
Enable password login: no

Notes

* Other values can be left at their defaults
* The User ID should be an unused number; make a note of it, as you will need it again later (a command-line equivalent for bare-metal FreeBSD is sketched below)
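If you are deploying on a bare-metal FreeBSD host rather than FreeNAS, the same account can be created with pw; this is a sketch of an equivalent, not part of the original guide (on FreeNAS itself, prefer the GUI so the user persists in its configuration database):

Code: Select all

# Sketch: CLI equivalent of the GUI user form above (bare-metal FreeBSD)
pw useradd mayan -u 973 -g wheel -d /nonexistent -s /usr/sbin/nologin -c "Mayan EDMS User"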

Optional: Create a Dataset for Mayan

I chose to create a discrete dataset in my main FreeNAS storage pool to store Mayan's uploaded documents. This is ultimately the storage that's referred to later in configuration as MAYAN_MEDIA_ROOT. You can just let Mayan store its documents in the jail storage (/mnt/iocage/jails/mayan) if you'd prefer.

If you would like to store your Mayan documents in a FreeNAS dataset for simpler storage management, backups, and perhaps security configuration, follow the steps in this subsection, then watch for the notes in the later steps ("If you're following the optional FreeNAS dataset steps...") that call out the configuration changes.

Create the dataset

* Navigate to Storage > Pools in the FreeNAS GUI
* Choose Add Dataset from the overflow menu of the pool you would like to add to
* For this guide, we'll name the dataset mayan, but you can call it whatever you'd like as long as you can find it later (a shell equivalent is sketched below)
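From the FreeNAS shell, the same dataset can be created with a single zfs command; a sketch, where your_pool is a placeholder for your actual pool name:

Code: Select all

# Sketch: create the dataset from the shell instead of the GUI
zfs create your_pool/mayan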

Modify the dataset's permissions

* Choose Edit Permissions from the overflow menu of the newly-created mayan dataset and use the following values (a shell equivalent follows the list):

ACL Type: Unix
Apply User: Checked
User: mayan
Apply Group: Checked
Group: wheel
Apply Mode: Checked
Owner: Read, Write, Execute
Group: Read, Write, Execute
Other: None
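The permission form above amounts to a recursive chown and chmod; a rough shell equivalent (a sketch; adjust the path to match your pool):

Code: Select all

# Sketch: owner and group get read/write/execute, other gets nothing
chown -R mayan:wheel /mnt/your_pool/mayan
chmod -R 770 /mnt/your_pool/mayan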

Create the Jail

1. Navigate to Jails in the FreeNAS GUI
2. Click the yellow "+" button to add a jail
3. Fill out the form as follows (a CLI sketch for bare-metal FreeBSD follows the list):

Jail name: mayan
Release: 11.2-RELEASE
DHCP Autoconfigure IPv4: Checked
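On a bare-metal FreeBSD host using iocage directly, roughly the same jail can be created from the shell; a sketch (DHCP in iocage requires VNET and bpf to be enabled):

Code: Select all

# Sketch: CLI equivalent of the jail form above; dhcp=on needs vnet=on and bpf=yes
iocage create -n mayan -r 11.2-RELEASE vnet=on bpf=yes dhcp=on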

Optional: Create Mount Point

If you're using a FreeNAS dataset for storage, you'll need to share it with the jail. If you skipped the optional steps above, skip ahead to the next section.

1. Navigate to Shell in the FreeNAS GUI

Here, you are accessing your FreeNAS server's shell, not the new Mayan jail (yet)

2. Execute the following commands as root:

Code: Select all

mkdir /mnt/iocage/jails/mayan/root/mnt/storage
For this guide we've chosen /mnt/storage as the jail's mount point for the shared-in dataset. You can change this to whatever you'd like, but be sure to adjust the later references in this guide to match.

Code: Select all

chown mayan:wheel /mnt/iocage/jails/mayan/root/mnt/storage
3. Back in the FreeNAS GUI Jails interface, select Mount Points from the Mayan jail's overflow menu
4. Click the yellow "+" button to add a mount point

* In the Source field, navigate to the mayan dataset you created earlier (e.g. /mnt/your_pool/mayan)
* In the Destination field, navigate to the mount point you created in the previous step (e.g. /mnt/iocage/jails/mayan/root/mnt/storage); an iocage equivalent of steps 3 and 4 is sketched below
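Steps 3 and 4 can also be done from the shell with iocage's fstab helper; a sketch, using the example paths from this guide (the destination is relative to the jail's root):

Code: Select all

# Sketch: add the nullfs mount via iocage instead of the GUI
iocage fstab -a mayan /mnt/your_pool/mayan /mnt/storage nullfs rw 0 0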

Start and Enter the Jail

1. Navigate to Shell in the FreeNAS GUI

2. Enter the following commands, which will start the Mayan jail and attach you to its shell:

Code: Select all

iocage start mayan
iocage console mayan
Configure the Jail or FreeBSD Environment

1. Install binary dependencies:

Code: Select all

pkg update

Code: Select all

The package management tool is not yet installed on your system.
Do you want to fetch and install it now? [y/N]: y

Code: Select all

pkg install sudo nano py27-sqlite3 gcc ghostscript9-base gnupg1 graphviz libjpeg-turbo py27-filemagic png libreoffice tiff poppler-utils postgresql10-server python27 py27-pip py27-virtualenv redis sane-backends py27-supervisor tesseract p5-Image-ExifTool

Code: Select all

Proceed with this action? [y/N]: y
* Somewhere in the neighborhood of 240 packages will be installed

2. Set dependencies to run at boot:

Code: Select all

sysrc postgresql_enable=YES redis_enable=YES supervisord_enable=YES
3. Initialize the PostgreSQL server:

Code: Select all

service postgresql initdb
4. Start the database server:

Code: Select all

service postgresql start
5. Make sure Mayan EDMS can find the binaries it needs:

Code: Select all

ln -s /usr/local/bin/gpg /usr/bin/gpg
ln -s /usr/local/bin/libreoffice /usr/bin/libreoffice
ln -s /usr/local/bin/pdfinfo /usr/bin/pdfinfo
ln -s /usr/local/bin/pdftoppm /usr/bin/pdftoppm
ln -s /usr/local/bin/pdftotext /usr/bin/pdftotext
ln -s /usr/local/bin/tesseract /usr/bin/tesseract
ln -s /usr/local/bin/scanimage /usr/bin/scanimage
ln -s /usr/local/bin/exiftool /usr/bin/exiftool
6. Create a user account for the installation:

Code: Select all

adduser

Code: Select all

Username: mayan
Full name: Mayan EDMS User
Uid (Leave empty for default): 973
Login group [mayan]:
Login group is mayan. Invite mayan into other groups? []: wheel
Login class [default]:
Shell (sh csh tcsh zsh nologin) [sh]: nologin
Home directory [/home/mayan]:
Home directory permissions (Leave empty for default):
Use password-based authentication? [yes]:
Use an empty password? (yes/no) [no]: yes
Lock out the account after creation? [no]: yes
Username   : mayan
Password   : <blank>
Full Name  : Mayan EDMS User
Uid        : 973
Class      : 
Groups     : wheel 
Home       : /home/mayan
Home Mode  : 
Shell      : /usr/sbin/nologin
Locked     : yes
OK? (yes/no): yes
adduser: INFO: Successfully added (mayan) to the user database.
adduser: INFO: Account (mayan) is locked.
Add another user? (yes/no): no
Goodbye!
* Use the same User ID (Uid) you used in FreeNAS, above.

7. Create the parent directory where the project will be deployed:

Code: Select all

mkdir -p /opt
8. Create the Python virtual environment for the installation:

Code: Select all

virtualenv /opt/mayan-edms
9. Make the mayan user the owner of the installation directory:

Code: Select all

chown -R mayan:mayan /opt/mayan-edms
10. Install Mayan EDMS:

Code: Select all

sudo -u mayan /opt/mayan-edms/bin/pip install --no-cache-dir mayan-edms
11. Install the Python client for PostgreSQL and Redis:

Code: Select all

sudo -u mayan /opt/mayan-edms/bin/pip install --no-cache-dir psycopg2==2.7.3.2 redis==2.10.6
12. Create the database for the installation:

Code: Select all

sudo -u postgres psql -c "CREATE USER mayan WITH password 'mayanuserpass';"
sudo -u postgres createdb -O mayan mayan
* Replace mayanuserpass with a password of your choosing, if desired (a connection test is sketched below)
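Before moving on, you can verify the role and database with a test connection over TCP, the same way Mayan will connect; a sketch, substituting your actual password:

Code: Select all

# Sketch: confirm the mayan role can reach its database on localhost
psql "postgresql://mayan:mayanuserpass@127.0.0.1/mayan" -c "SELECT 1;"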

13. Initialize the project:

Code: Select all

sudo -u mayan MAYAN_DATABASE_ENGINE=django.db.backends.postgresql MAYAN_DATABASE_NAME=mayan MAYAN_DATABASE_PASSWORD=mayanuserpass MAYAN_DATABASE_USER=mayan MAYAN_DATABASE_HOST=127.0.0.1 MAYAN_MEDIA_ROOT=/opt/mayan-edms/media /opt/mayan-edms/bin/mayan-edms.py initialsetup
* Enter the same MAYAN_DATABASE_PASSWORD you used in the previous step
* If you're following the optional FreeNAS dataset steps, replace the MAYAN_MEDIA_ROOT path with /mnt/storage or whatever you assigned as a mount point earlier.

14. Collect the static files:

Code: Select all

sudo -u mayan MAYAN_MEDIA_ROOT=/opt/mayan-edms/media /opt/mayan-edms/bin/mayan-edms.py collectstatic --noinput
* If you're following the optional FreeNAS dataset steps, replace the MAYAN_MEDIA_ROOT path with /mnt/storage or whatever you assigned as a mount point earlier.

15. Create the supervisor file at /etc/supervisor/conf.d/mayan.conf:

Code: Select all

mkdir -p /etc/supervisor/conf.d

Code: Select all

nano /etc/supervisor/conf.d/mayan.conf

Code: Select all

[supervisord]
environment=
    MAYAN_ALLOWED_HOSTS='["*"]',  # Allow access from network hosts other than localhost
    MAYAN_CELERY_RESULT_BACKEND="redis://127.0.0.1:6379/0",
    MAYAN_BROKER_URL="redis://127.0.0.1:6379/0",
    PYTHONPATH=/opt/mayan-edms/lib/python2.7/site-packages:/opt/mayan-edms/data,
    MAYAN_MEDIA_ROOT=/opt/mayan-edms/media,
    MAYAN_DATABASE_ENGINE=django.db.backends.postgresql,
    MAYAN_DATABASE_HOST=127.0.0.1,
    MAYAN_DATABASE_NAME=mayan,
    MAYAN_DATABASE_PASSWORD=mayanuserpass,
    MAYAN_DATABASE_USER=mayan,
    MAYAN_DATABASE_CONN_MAX_AGE=60,
    DJANGO_SETTINGS_MODULE=mayan.settings.production,
    MAYAN_SIGNATURES_GPG_PATH=/usr/bin/gpg

[program:mayan-gunicorn]
autorestart = true
autostart = true
command = /opt/mayan-edms/bin/gunicorn -w 2 mayan.wsgi --max-requests 500 --max-requests-jitter 50 --worker-class gevent --bind 0.0.0.0:8000 --timeout 120
user = mayan

[program:mayan-worker-fast]
autorestart = true
autostart = true
command = nice -n 1 /opt/mayan-edms/bin/mayan-edms.py celery worker -Ofair -l ERROR -Q converter -n mayan-worker-fast.%%h --concurrency=1
killasgroup = true
numprocs = 1
priority = 998
startsecs = 10
stopwaitsecs = 1
user = mayan

[program:mayan-worker-medium]
autorestart = true
autostart = true
command = nice -n 18 /opt/mayan-edms/bin/mayan-edms.py celery worker -Ofair -l ERROR -Q checkouts_periodic,documents_periodic,indexing,metadata,sources,sources_periodic,uploads,documents -n mayan-worker-medium.%%h --concurrency=1
killasgroup = true
numprocs = 1
priority = 998
startsecs = 10
stopwaitsecs = 1
user = mayan

[program:mayan-worker-slow]
autorestart = true
autostart = true
command = nice -n 19 /opt/mayan-edms/bin/mayan-edms.py celery worker -Ofair -l ERROR -Q mailing,tools,statistics,parsing,ocr -n mayan-worker-slow.%%h --concurrency=1
killasgroup = true
numprocs = 1
priority = 998
startsecs = 10
stopwaitsecs = 1
user = mayan

[program:mayan-celery-beat]
autorestart = true
autostart = true
command = nice -n 1 /opt/mayan-edms/bin/mayan-edms.py celery beat --pidfile= -l ERROR
killasgroup = true
numprocs = 1
priority = 998
startsecs = 10
stopwaitsecs = 1
user = mayan
Be very careful that:
* There are exactly four spaces before each line of the environment variables at the top of the config
* There is a comma after each environment variable except the last
* Each line of the [program] blocks contains exactly one key = value pair (be sure there aren't any surprise carriage returns/newlines)
* If you're following the optional FreeNAS dataset steps, replace the MAYAN_MEDIA_ROOT path with /mnt/storage or whatever you assigned as a mount point earlier.

16. Edit the Supervisor config file to read this new configuration:

Code: Select all

nano /usr/local/etc/supervisord.conf
17. Edit the last two lines to read (be sure to remove the commenting semicolons):

Code: Select all

[include]
files = /etc/supervisor/conf.d/*.conf
18. Configure Redis to discard data when it runs out of memory:

Code: Select all

nano /usr/local/etc/redis.conf
19. Add these lines to the end:

Code: Select all

maxmemory-policy allkeys-lru
save ""
databases 1
20. Start the services:

Code: Select all

service redis start
service supervisord start
Connect to your Mayan EDMS instance!

1. In the FreeNAS Jails GUI, note the IP address for your Mayan jail (or run ifconfig in the jail's shell)
2. Point a web browser at that IP address on port 8000, e.g. http://192.168.1.100:8000
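If the page doesn't load, a quick check from inside the jail tells you whether gunicorn is answering at all; a sketch (fetch is part of the FreeBSD base system):

Code: Select all

# Sketch: an error here means gunicorn isn't serving, so check supervisor
fetch -qo /dev/null http://127.0.0.1:8000 && echo "gunicorn is answering"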

Troubleshooting

* Are the workers working?

Run:

Code: Select all

supervisorctl status
in the jail. Each job and its status should be listed:

Code: Select all

mayan-celery-beat                RUNNING   pid 40698, uptime 1:47:55
mayan-gunicorn                   RUNNING   pid 40699, uptime 1:47:55
mayan-worker-fast                RUNNING   pid 40697, uptime 1:47:55
mayan-worker-medium              RUNNING   pid 40696, uptime 1:47:55
mayan-worker-slow                RUNNING   pid 40695, uptime 1:47:55
* Ask Supervisor to log verbosely

Set:

Code: Select all

loglevel=debug
in /usr/local/etc/supervisord.conf.

* Check the worker logs

Navigate to /tmp in the jail, then take a look at the mayan supervisor log files.
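Supervisor auto-generates the per-program log file names, so the exact names vary; something like this (a sketch) lists and inspects them:

Code: Select all

# Sketch: the auto-generated program logs land in /tmp by default
ls /tmp | grep mayan
tail -n 50 /tmp/mayan-worker-fast*.log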

* Check the Supervisord log

Navigate to /var/log and take a look at supervisord.log
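For example:

Code: Select all

tail -n 50 /var/log/supervisord.log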

Published by wiki user Boybert.
Last edited by rosarior on Mon Dec 30, 2019 11:46 pm, edited 1 time in total.

stiphy
Posts: 2
Joined: Sun Dec 29, 2019 7:12 am

Re: [Old Wiki topic] FreeNAS 11.2 FreeBSD jail deployment

Post by stiphy »

First of all, thanks for the software. If I get it up and running correctly, I think it will work well for my needs as a document manager for PDF files at home. It is perhaps a bit overkill, but I think I can expand my use of the software as I understand it better.
Now, about getting it to run: I first wanted to get it up and running with Python 3(.6) instead of the soon-to-be-orphaned Python 2.7. There is a bug in pip for Python 3.6 that results in an error when installing the mayan-edms wheel, so I moved on to Python 3.7 (I have also tried with Python 2.7).
My system is currently FreeNAS 11.2 (Mayan is in a jail, and I am patiently waiting for the final version of 11.3 before upgrading).


I appear to be having a problem with connections to the Redis message queue.
The "/etc/supervisor/conf.d/mayan.conf" file in the instructions appears to be old or inaccurate, as it was incapable of launching the workers or celery beat. Using the following command

Code: Select all

MAYAN_DATABASE_ENGINE=django.db.backends.postgresql MAYAN_DATABASE_NAME=mayan MAYAN_DATABASE_PASSWORD=mayanuserpass MAYAN_DATABASE_USER=mayan MAYAN_DATABASE_HOST=127.0.0.1 MAYAN_MEDIA_ROOT=/opt/mayan-edms/media /opt/mayan-edms/bin/mayan-edms.py platformtemplate supervisord
I came up with the following modifications:

Code: Select all

[supervisord]
environment=
    MAYAN_ALLOWED_HOSTS='["*"]',  # Allow access from network hosts other than localhost
    MAYAN_CELERY_RESULT_BACKEND="redis://127.0.0.1:6379/1",
    MAYAN_BROKER_URL="redis://127.0.0.1:6379/0",
    PYTHONPATH=/opt/mayan-edms/lib/python3.7/site-packages:/opt/mayan-edms/data,
    MAYAN_MEDIA_ROOT=/mnt/storage,
    MAYAN_DATABASE_ENGINE=django.db.backends.postgresql,
    MAYAN_DATABASE_HOST=127.0.0.1,
    MAYAN_DATABASE_NAME=mayan,
    MAYAN_DATABASE_PASSWORD=stiphysbooks,
    MAYAN_DATABASE_USER=mayan,
    MAYAN_DATABASE_CONN_MAX_AGE=60,
    DJANGO_SETTINGS_MODULE=mayan.settings.production,
    MAYAN_SIGNATURES_GPG_PATH=/usr/bin/gpg

[program:mayan-gunicorn]
autorestart = true
autostart = true
command = /opt/mayan-edms/bin/gunicorn -w 2 mayan.wsgi --max-requests 500 --max-requests-jitter 50 --worker-class sync --bind 0.0.0.0:8000 --timeout 120
user = mayan


[program:mayan-worker-fast]
autorestart = true
autostart = true
command = nice -n 1 /opt/mayan-edms/bin/celery worker -A mayan -Ofair -l ERROR -Q document_states_fast,converter,sources_fast -n mayan-worker-fast.%%h --concurrency=1
killasgroup = true
numprocs = 1
priority = 998
startsecs = 10
stopwaitsecs = 1
user = mayan

[program:mayan-worker-medium]
autorestart = true
autostart = true
command = nice -n 18 /opt/mayan-edms/bin/celery worker -A mayan -Ofair -l ERROR -Q default,checkouts_periodic,indexing,signatures,documents_periodic,uploads,documents,file_metadata,metadata,sources,sources_periodic -n mayan-worker-medium.%%h --concurrency=1
killasgroup = true
numprocs = 1
priority = 998
startsecs = 10
stopwaitsecs = 1
user = mayan

[program:mayan-worker-slow]
autorestart = true
autostart = true
command = nice -n 19 /opt/mayan-edms/bin/celery worker -A mayan -Ofair -l ERROR -Q statistics,tools,common_periodic,parsing,document_states,mailing,ocr -n mayan-worker-slow.%%h --concurrency=1
killasgroup = true
numprocs = 1
priority = 998
startsecs = 10
stopwaitsecs = 1
user = mayan


[program:mayan-celery-beat]
autorestart = true
autostart = true
command = nice -n 1 /opt/mayan-edms/bin/celery beat -A mayan --pidfile= -l ERROR
killasgroup = true
numprocs = 1
priority = 998
startsecs = 10
stopwaitsecs = 1
user = mayan
After making these modifications, supervisorctl status showed that all 4 services were RUNNING.
This made me happy, but the inability to upload documents made me realize that things were not as they should be. The error logs for the mayan workers show:
[2019-12-29 07:55:50,737: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 61] Connection refused.
If I understand correctly, this indicates that the Redis message queue is not being used and the system is looking for RabbitMQ instead.

The celery beat error log shows:
[2019-12-29 06:53:59,173: ERROR/MainProcess] beat: Connection error: [Errno 61] Connection refused. Trying again in x seconds.
Any help in understanding and resolving this issue would be greatly appreciated.

Stephan

rssfed23
Moderator
Posts: 185
Joined: Mon Oct 14, 2019 1:18 pm
Location: United Kingdom

Re: [Old Wiki topic] FreeNAS 11.2 FreeBSD jail deployment

Post by rssfed23 »

Hi Stephan.

I don't have a FreeNAS box to test with, but I'm pretty sure I know what the problem is. The instructions above were written for 3.2.x.
When we released 3.3 and beyond, there were a lot of changes, and the supervisor config file you've posted still uses the old references.
One example: MAYAN_BROKER_URL should be MAYAN_CELERY_BROKER_URL=
That's why it's trying to connect to RabbitMQ instead of Redis: it can't read your real Redis value because the setting name changed.

If you take a look at the 3.3 release notes https://docs.mayan-edms.com/releases/3.3.html (and check newer ones as well to be safe) you'll see the value changes documented:
Celery was updated to version 4.3.0. This changes some settings:
MAYAN_BROKER_URL to MAYAN_CELERY_BROKER_URL
MAYAN_CELERY_ALWAYS_EAGER to MAYAN_CELERY_TASK_ALWAYS_EAGER


You in effect have to "upgrade" your supervisor config file to the 3.3 version.
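A quick way to apply just that rename in place; a sketch (BSD sed wants an empty suffix after -i, and restart supervisord afterwards):

Code: Select all

# Sketch: rename the 3.2-era broker setting to its 3.3+ name
sed -i '' 's/MAYAN_BROKER_URL=/MAYAN_CELERY_BROKER_URL=/' /etc/supervisor/conf.d/mayan.conf
service supervisord restart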

When we do a traditional install, we have Mayan generate the supervisor config for us to avoid this issue (copying people's pasted configs is a bad idea for precisely this reason: they change as versions change, and since this was a community contribution it wasn't updated).
Just like you ran the collectstatic step, you could run:

Code: Select all

sudo -u mayan MAYAN_DATABASE_ENGINE=django.db.backends.postgresql MAYAN_DATABASE_NAME=mayan MAYAN_DATABASE_PASSWORD=mayanuserpass MAYAN_DATABASE_USER=mayan MAYAN_DATABASE_HOST=127.0.0.1 MAYAN_MEDIA_ROOT=/opt/mayan-edms/media /opt/mayan-edms/bin/mayan-edms.py platformtemplate supervisord > /tmp/supervisorconfig.tmp
This will create /tmp/supervisorconfig.tmp with what the latest values should be. You can then compare it to the config you're actually using and see the differences; a diff sketch follows. I strongly suspect it's the broker URL value I mention above, though :)
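To make that comparison mechanical rather than visual, a plain diff works (a sketch, using the paths from the command above):

Code: Select all

diff -u /tmp/supervisorconfig.tmp /etc/supervisor/conf.d/mayan.conf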

Also check your config.yml file in /opt/mayan-edms/media to make sure it has the new values too. When you start up Mayan with a correct supervisor config it **should** migrate any old settings to the new format automatically. But it's another place Mayan may be getting the RabbitMQ broker URL from instead of the Redis one you want, so check it if things still aren't looking right after fixing your supervisor config and restarting Supervisor.

Good luck!
Rob
Please don't PM for general support; start a new thread with your issue instead.

rssfed23
Moderator
Posts: 185
Joined: Mon Oct 14, 2019 1:18 pm
Location: United Kingdom

Re: [Old Wiki topic] FreeNAS 11.2 FreeBSD jail deployment

Post by rssfed23 »

Ah sorry, I saw in your post that you already ran the platformtemplate command. That explains how you changed the worker startup command to the correct celery worker invocation instead of the old mayan-edms.py celery command.
I think when you did that you only missed the Redis value, so changing MAYAN_BROKER_URL= to MAYAN_CELERY_BROKER_URL= up at the top **should** be all that's needed.

Also, I note your PYTHONPATH may need tweaking (I'm not sure if this is different because of FreeNAS). On my environments it's PYTHONPATH="/opt/mayan-edms/media/mayan_settings".

Furthermore, I think how settings are quoted changed in 3.3 (it's in the release notes). On my environment I have:
MAYAN_ALLOWED_HOSTS="['*']",
(the double and single quotes are the other way around in yours; I don't know if this would cause any issue, though).

It's little things like that that can be very hard to catch when doing a visual check of the platform-template file against your custom file. That's why I recommend a tool like vimdiff, which makes it easy to spot the small differences (I had exactly the same issues as you when first upgrading to 3.3, since I use a custom worker config and couldn't use the automatically generated file directly).

It may also be worth removing the MAYAN_DATABASE_CONN_MAX_AGE value. Gunicorn workers can't re-use existing DB connections, because each request spawns a new microthread, so a non-zero value just leaves DB connections open for no reason; we want a new DB connection for each web request. Set it to 0 rather than 60 to avoid running out of connections (https://docs.mayan-edms.com/parts/troubleshooting.html).
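In the supervisor config from this guide, that amounts to changing one line in the environment block (a sketch):

Code: Select all

    MAYAN_DATABASE_CONN_MAX_AGE=0,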

If all else fails, my recommendation would be to use the platformtemplate-produced file as-is, changing only the DB connection user/password and the like. Test it, and if that works, start changing additional values and adding settings back in, so you know you're starting from a known-good config.
Please don't PM for general support; start a new thread with your issue instead.

boybert
Posts: 7
Joined: Fri Sep 21, 2018 4:37 pm

Re: [Old Wiki topic] FreeNAS 11.2 FreeBSD jail deployment

Post by boybert »

Hi @stiphy, FreeNAS instructions OP here. Looks like you got some good guidance from @rssfed23! I just wanted to post the contents of my working /etc/supervisor/conf.d/mayan.conf for your reference, as I'm now on a very similar setup to yours: FreeNAS 11.2-U7 (also waiting waiting waiting for 11.3) and Mayan 3.3.6, although I am having success with Python 3.6.

Maybe once you're up and running we could work together on updating this guide for the modern age; I have been meaning to take another run at it to get it current with FreeNAS and Python.

Code: Select all

[supervisord]
environment=
    PYTHONPATH="/mnt/storage/media/mayan_settings", #note that this is pointing to a mounted FreeNAS dataset, see MAYAN_MEDIA_ROOT below
    DJANGO_SETTINGS_MODULE=mayan.settings.production,
    MAYAN_MEDIA_ROOT="/mnt/storage/media", #again this is a mounted FreeNAS dataset
    MAYAN_ALLOWED_HOSTS="['*']",
    MAYAN_CELERY_RESULT_BACKEND="redis://127.0.0.1:6379/1",
    MAYAN_CELERY_BROKER_URL="redis://127.0.0.1:6379/0",
    MAYAN_DATABASES="{default: {ENGINE: django.db.backends.postgresql, HOST: 127.0.0.1, NAME: mayan, PASSWORD: #####, USER: mayan}}"

[program:mayan-gunicorn]
autorestart = true
autostart = true
command = /opt/mayan-edms/bin/gunicorn -w 2 mayan.wsgi --max-requests 500 --max-requests-jitter 50 --worker-class sync --bind 0.0.0.0:8000 --timeout 120
user = mayan


[program:mayan-worker-fast]
autorestart = true
autostart = true
command = nice -n 1 /opt/mayan-edms/bin/celery worker -A mayan -Ofair -l ERROR -Q document_states_fast,converter,sources_fast -n mayan-worker-fast.%%h --concurrency=1
killasgroup = true
numprocs = 1
priority = 998
startsecs = 10
stopwaitsecs = 1
user = mayan

[program:mayan-worker-medium]
autorestart = true
autostart = true
command = nice -n 18 /opt/mayan-edms/bin/celery worker -A mayan -Ofair -l ERROR -Q default,checkouts_periodic,indexing,signatures,documents_periodic,uploads,documents,file_metadata,metadata,sources,sources_periodic -n mayan-worker-medium.%%h --concurrency=1
killasgroup = true
numprocs = 1
priority = 998
startsecs = 10
stopwaitsecs = 1
user = mayan

[program:mayan-worker-slow]
autorestart = true
autostart = true
command = nice -n 19 /opt/mayan-edms/bin/celery worker -A mayan -Ofair -l ERROR -Q statistics,tools,common_periodic,parsing,document_states,mailing,ocr -n mayan-worker-slow.%%h --concurrency=1
killasgroup = true
numprocs = 1
priority = 998
startsecs = 10
stopwaitsecs = 1
user = mayan


[program:mayan-celery-beat]
autorestart = true
autostart = true
command = nice -n 1 /opt/mayan-edms/bin/celery beat -A mayan --pidfile= -l ERROR
killasgroup = true
numprocs = 1
priority = 998
startsecs = 10
stopwaitsecs = 1
user = mayan

stiphy
Posts: 2
Joined: Sun Dec 29, 2019 7:12 am

Re: [Old Wiki topic] FreeNAS 11.2 FreeBSD jail deployment

Post by stiphy »

Okay, thanks to rssfed23 and boybert things appear to be working. I still have two questions.

First, about the PYTHONPATH environment variable: I set it to my storage directory (a FreeNAS dataset) plus /mayan_settings, yet there is no directory called mayan_settings at that path, nor does there appear to be one created by Mayan. Should I have created and mounted another dataset? Is this normal? Leaving the variable empty makes starting supervisord impossible, so it is being used for something.

Secondly, the documents are stored in the MAYAN_MEDIA_ROOT, but where is the PostgreSQL database stored, in the jail dataset? I ask because of portability: can a new Mayan jail be created and pointed at the datasets holding the PostgreSQL database, the media, and the PYTHONPATH? Or do I need to modify the setup of the PostgreSQL database?

As you can see, I'm no expert here, just stumbling around and looking into where the errors occur to see if I can fix them, but I will help as best I can with a guide for setting up Mayan in a FreeNAS jail.

Stephan

boybert
Posts: 7
Joined: Fri Sep 21, 2018 4:37 pm

Re: [Old Wiki topic] FreeNAS 11.2 FreeBSD jail deployment

Post by boybert »

Hi @stiphy,
First, about the PYTHONPATH environment variable: I set it to my storage directory (a FreeNAS dataset) plus /mayan_settings, yet there is no directory called mayan_settings at that path, nor does there appear to be one created by Mayan. Should I have created and mounted another dataset? Is this normal? Leaving the variable empty makes starting supervisord impossible, so it is being used for something.
I believe the mayan_settings dir should have been created by the initialsetup command (step 13 in the FreeNAS guide). Perhaps you hadn't set the MAYAN_MEDIA_ROOT environment variable in that command, or specified it incorrectly? At any rate, my media/mayan_settings folder just contains an empty __init__.py file, so it should be easy enough to recreate...

Code: Select all

sudo -u mayan mkdir /path/to/media/mayan_settings
sudo -u mayan touch /path/to/media/mayan_settings/__init__.py
If that doesn't get you up and running, I wonder if you might need to re-run initialsetup (or perhaps performupgrade and preparestatic; sorry, these are kind of dark magic to me... might need someone else to weigh in)
Secondly, the documents are stored in the MAYAN_MEDIA_ROOT, but where is the PostgreSQL database stored, in the jail dataset? I ask because of portability: can a new Mayan jail be created and pointed at the datasets holding the PostgreSQL database, the media, and the PYTHONPATH? Or do I need to modify the setup of the PostgreSQL database?
Your PostgreSQL data directory is (probably) under /var/db/postgres/, where you'll find config and other files. As far as connecting to the database goes, I think you'd want to configure your PostgreSQL server to listen for connections from other hosts and point your new jail's Mayan instance at the IP address or host name of the first jail (the one running PostgreSQL); a generic sketch follows. In FreeNAS you could also share the dataset(s) containing your media directory with the new jail so they're mounted "locally". Not sure if sharing directories between Mayan instances is possible/recommended though. Haven't run more than one Mayan instance myself.
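For reference, opening PostgreSQL up to another jail usually comes down to two config edits plus a restart. This is a generic sketch, not FreeNAS-specific; the data directory shown is the usual one for the postgresql10-server package, and the subnet is a placeholder to adjust for your network:

Code: Select all

# Sketch: in /var/db/postgres/data10/postgresql.conf, listen beyond localhost
listen_addresses = '*'

# Sketch: in /var/db/postgres/data10/pg_hba.conf, allow the mayan role from your LAN
host    mayan    mayan    192.168.1.0/24    md5

# Then restart the server:
service postgresql restart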

Hope this helps! I'm really only a notch above noob myself, have sort of felt my way through this too, and have relied on the generous help of the community. Let me know if I can help further, and let us know how you get along!

rssfed23
Moderator
Posts: 185
Joined: Mon Oct 14, 2019 1:18 pm
Location: United Kingdom

Re: [Old Wiki topic] FreeNAS 11.2 FreeBSD jail deployment

Post by rssfed23 »

As far as I'm aware, the PYTHONPATH mayan_settings directory doesn't need to actually exist; it doesn't exist on mine.
The default for Mayan is PYTHONPATH="{{ MEDIA_ROOT }}/mayan_settings", so as long as it's pointing at your media root, things should work okay (but I'm no expert in this area).

You would know if it wasn't set correctly, as Mayan wouldn't start; it would fail with "unable to import Django" type errors.
From my limited knowledge, PYTHONPATH is similar to $PATH in the bash world: a set of directories where custom applications and configuration changes (if you're using .py files to configure Mayan) are searched for and can be imported using short, relative paths rather than full paths. The important bit is that the directory before /mayan_settings is your media root folder.
Not sure if sharing directories between Mayan instances is possible/recommended though. Haven't run more than one Mayan instance myself.
This is exactly how we scale Mayan up: share the media_root directory from shared storage and mount it into multiple nodes. In fact, by default you *have* to share it out if you want to run multiple celery worker VMs/nodes; otherwise, when a task (like OCR) goes to run, it might expect a file in the tmp directory that isn't there in a local-only version of the cache. Sharing the directory makes sure it is.

In my environment I don't share mayan-media across nodes; instead I change the document storage location in the config.yml file:
DOCUMENTS_STORAGE_BACKEND_ARGUMENTS:
  location: /opt/mayan-shared/document_storage

This way I can keep my documents stored on my NAS, shared over NFS, without sharing the whole mayan_media directory. The reason I don't like sharing the media directory over NFS is that, at the moment, I only need one worker node that uses it (using mountindex to serve Mayan indexes out via Samba doesn't require the media directory, so I use a separate VM for that), and the media directory is where the cache and tmp files live. I want those cached locally, because I found that serving them over NFS slowed things down, even with 4 separate worker nodes generating cache files, due to the speed of my NAS.
Please don't PM for general support; start a new thread with your issue instead.
