Mitel Forums - The Unofficial Source
Mitel Forums => Mitel Software Applications => Topic started by: devnull on February 16, 2023, 10:54:33 AM
-
Our MiCollab server (9.6.0.105-03) on MSL 11.0.99 won't "settle down". The load average constantly sits close to the number of cores we throw at the VM (currently 4 cores at 2.1GHz each, with a load average of 3.68, 3.4, 3.45). The server supports up to 200 users, most of whom use the MiCollab client (mix of legacy and next gen), and we also run the NuPoint voicemail service.
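The rough rule I'm going by is that a load average close to the core count means the box is effectively saturated, so ~3.5 on 4 cores is pretty much maxed out. Something like this on the MSL shell shows both numbers:

nproc              # logical CPUs the VM sees
cat /proc/loadavg  # 1/5/15-minute load averages straight from the kernel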
This is on VMware with the VM's hardware compatibility set to version 19. We allocated 16GB of RAM, most of which is being used for cache, and the vdisk resides on an SSD array.
Any insight would be appreciated. Thanks
-
What does 'top' say on the MiCollab server?
-
top - 10:25:29 up 1 day, 6:56, 1 user, load average: 4.84, 4.90, 4.57
Tasks: 917 total, 2 running, 913 sleeping, 0 stopped, 2 zombie
%Cpu(s): 31.6 us, 20.0 sy, 24.1 ni, 22.8 id, 0.0 wa, 0.0 hi, 1.5 si, 0.0 st
KiB Mem : 16266480 total, 622036 free, 8937160 used, 6707284 buff/cache
KiB Swap: 8388604 total, 8388604 free, 0 used. 6903356 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7114 root 25 5 1044892 38792 5196 S 116.1 0.2 1355:01 python2.7
50526 admin 20 0 432760 66584 8896 R 51.4 0.4 3:59.01 uwsgi
38709 root 20 0 162844 3216 1592 R 3.5 0.0 0:00.79 top
39123 postgres 20 0 307564 27644 25016 S 3.5 0.2 0:00.13 postgres
10999 root 20 0 3106668 521456 18764 S 2.6 3.2 25:08.25 java
10984 root 20 0 1912916 145976 16860 S 2.3 0.9 15:09.16 java
11230 root 20 0 191796 40228 17664 S 2.3 0.2 17:27.70 wdModuleEntry
56647 root 20 0 692256 49256 7656 S 2.3 0.3 6:19.08 python
2330 root 20 0 309300 186764 7504 S 1.9 1.1 13:19.91 mcu
<<trunc>>
-
interesting.
what does
ps fax | grep python
show?
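and if it's easy, something along these lines would also show whether anything besides python is chewing cycles (top 15 by current CPU, with nice level and elapsed time):

ps -eo pid,ni,pcpu,etime,args --sort=-pcpu | head -15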
-
You may need to throw more CPU at it. I have a similar-sized system running the same version, and it's happy with 8 CPU cores at 2.9GHz each. I find the recommended resources in the documentation tend to be optimistic; I've never seen a system have issues if you give it a little more than recommended.
-
ps fax | grep python
1017 ? S 1:23 | \_ /usr/bin/python /usr/bin/gunicorn --pythonpath /etc/e-smith/web/django/clientinterface -c /etc/e-smith/web/django/clientinterface/gunicorn_config.py --pid=/var/run/gunicorn-clientinterface.pid clientinterface.wsgi
58847 ? S 0:02 | \_ /usr/bin/python /usr/bin/gunicorn --pythonpath /etc/e-smith/web/django/clientinterface -c /etc/e-smith/web/django/clientinterface/gunicorn_config.py --pid=/var/run/gunicorn-clientinterface.pid clientinterface.wsgi
58850 ? S 0:01 | \_ /usr/bin/python /usr/bin/gunicorn --pythonpath /etc/e-smith/web/django/clientinterface -c /etc/e-smith/web/django/clientinterface/gunicorn_config.py --pid=/var/run/gunicorn-clientinterface.pid clientinterface.wsgi
2395 ? S 0:11 | \_ /usr/lib/tug/env/bin/python /usr/sbin/scapy_server.py --listen=localhost:11000 --wait-interval=10
56647 ? Sl 12:10 | \_ /usr/lib/tug/env/bin/python /usr/sbin/tug_eventd.py --conf /etc/tug-eventd.ini --name=MBG
1015 ? S 1:24 | \_ /usr/bin/python /usr/bin/gunicorn --pythonpath /etc/e-smith/web/django/clientdeployment_api -c /etc/e-smith/web/django/clientdeployment_api/gunicorn_config.py --pid=/var/run/gunicorn-clientdeployment_api.pid clientdeployment_api.wsgi:application
58853 ? S 0:00 | \_ /usr/bin/python /usr/bin/gunicorn --pythonpath /etc/e-smith/web/django/clientdeployment_api -c /etc/e-smith/web/django/clientdeployment_api/gunicorn_config.py --pid=/var/run/gunicorn-clientdeployment_api.pid clientdeployment_api.wsgi:application
58854 ? S 0:00 | \_ /usr/bin/python /usr/bin/gunicorn --pythonpath /etc/e-smith/web/django/clientdeployment_api -c /etc/e-smith/web/django/clientdeployment_api/gunicorn_config.py --pid=/var/run/gunicorn-clientdeployment_api.pid clientdeployment_api.wsgi:application
6889 ? SN 10:25 | \_ /usr/bin/python /usr/mas/tw/tw-masauditd --period=300
3118 ? S 0:02 | \_ python /usr/bin/denyhosts.py --config=/etc/denyhosts/denyhosts.cfg --foreground --verbose --purge
2450 ? Sl 9:39 | \_ /usr/lib/tug/env/bin/python /usr/sbin/mbgrest.py --listen=127.0.0.1:3334 --testdb --nossl --versions=v1,v2,latest
3962 pts/0 S+ 0:00 | \_ grep --color=auto python
3641 ? R 0:05 \_ /usr/lib/tug/env/bin/python /usr/sbin/tug_sysmetrics.py
6780 ? Sl 1:57 /usr/bin/python2 -s /usr/bin/fail2ban-server -s /var/run/fail2ban/fail2ban.sock -p /var/run/fail2ban/fail2ban.pid -x -b
7114 ? SNl 2465:50 python2.7 /usr/vm/python_scripts/python_server_for_graph.py
-
Clearly something has gone wonky:
7114 ? SNl 2465:50 python2.7 /usr/vm/python_scripts/python_server_for_graph.py
I'm not familiar with that service; I'll ask.
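In the meantime, if you want to capture some generic data on that PID before anything gets restarted (substitute whatever PID ps shows at the time, since it will change after a reboot), something like:

ps -o pid,ni,etime,cputime,pcpu,nlwp,cmd -p 7114   # accumulated CPU time vs elapsed time, plus thread count
ls /proc/7114/fd | wc -l                           # open file descriptors, in case it's leaking sockets
strace -f -p 7114 -o /tmp/graph_server.strace      # only if strace is installed; ctrl-c after a few seconds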
-
Quote: You may need to throw more CPU at it. I have a similar-sized system running the same version, and it's happy with 8 CPU cores at 2.9GHz each. I find the recommended resources in the documentation tend to be optimistic; I've never seen a system have issues if you give it a little more than recommended.
Thanks for the feedback. I had thought about that and am trying not to overprovision our VMs. That said, when I look at the trend for the VM over the past week, usage has steadily climbed, with peaks obviously falling during business hours. It used to idle at around 25% CPU, but just recently it has started climbing. Rebooting the VM may resolve it temporarily, but I'm wondering if there's an underlying condition. Here's the CPU usage graph from the vCenter perspective (which isn't 100% accurate):
https://imgur.com/zQ5dr5O
I also see there's an update to MiCollab waiting (pushed out yesterday), v9.6.2.9-01. I'll have to check out the release notes for that.
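I'll also try to get a trend from inside the guest rather than just vCenter; if sysstat happens to be installed on MSL (I haven't checked), something like this would give the history through the day:

sar -q   # run-queue length / load average history
sar -u   # CPU utilisation history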
-
Hi @dilkie, any update on this issue? Thanks