At the end of playbook execution, you should have an operational service:
Checking Status
===============
Cluster
-------
Cluster status can be checked with the command ``om mon``::

    Threads                                  demo1        demo2
     daemon     running
     hb#1.rx    running [::]:10000         | /            O
     hb#1.tx    running                    | /            O
     listener   running :1214
     monitor    running
     scheduler  running

    Nodes                                    demo1        demo2
     score                                 | 69           70
     load 15m                              | 0.0          0.0
     mem                                   | 15/98%:3.82g 9/98%:3.82g
     swap                                  | -            -
     state                                 |

    */svc/*                                  demo1        demo2
     slapos/svc/comp1  up       ha  1/1   | O^           S
Service
-------
Service status can be checked with the command ``om slapos/svc/comp1 print status``::

    slapos/svc/comp1                up
    `- instances
       |- demo2                     stdby up   idle
       `- demo1                     up         idle, started
          |- volume#0    ........   up         comp1-cfg
          |- disk#0      ......S.   stdby up   loop /opt/comp1.slapos.svc.hyperopenx.img
          |- disk#1      ......S.   stdby up   vg comp1.slapos.svc.hyperopenx
          |- disk#2      ......S.   stdby up   lv comp1.slapos.svc.hyperopenx/comp1
          |- disk#3      ......S.   stdby up   drbd comp1.slapos.svc.hyperopenx
          |                                    info: Primary
          |- fs#0        ........   up         ext4 /dev/drbd0@/srv/comp1.slapos.svc.hyperopenx
          |- fs#flag     ........   up         fs.flag
          |- fs:binds
          |  |- fs#1     ........   up         bind /srv/comp1.slapos.svc.hyperopenx/re6st/etc/re6stnet@/etc/re6stnet
          |  |- fs#2     ........   up         bind /srv/comp1.slapos.svc.hyperopenx/re6st/var/log/re6stnet@/var/log/re6stnet
          |  |- fs#3     ........   up         bind /srv/comp1.slapos.svc.hyperopenx/re6st/var/lib/re6stnet@/var/lib/re6stnet
          |  |- fs#4     ........   up         bind /srv/comp1.slapos.svc.hyperopenx/slapos/srv/slapgrid@/srv/slapgrid
          |  `- fs#5     ........   up         bind /srv/comp1.slapos.svc.hyperopenx/slapos/etc/opt@/etc/opt
          |- app:re6st
          |  `- app#0    ...../..   up         forking: re6st
          |- app:slapos
          |  `- app#1    ...../..   up         forking: slapos
          |- sync#i0     ...O./..   up         rsync svc config to nodes
          `- task:admin
             |- task#addpart   ...O./..   up   task.host
             |- task#chkaddip  ...O./..   up   task.host
             |- task#collect   ...O./..   up   task.host
             |- task#delpart   ...O./..   up   task.host
             `- task#software  ...O./..   up   task.host

.. note::

    Add the ``-r`` option to force an immediate resource status evaluation (``om slapos/svc/comp1 print status -r``).
Tasks
-----
SlapOS components need cron jobs to be executed. These have been integrated as OpenSVC tasks.

The tasks schedule can be displayed with ``om slapos/svc/comp1 print schedule``::

    Action              Last Run             Next Run             Config Parameter          Schedule Definition
    |- compliance_auto  -                    2023-11-10 03:48:52  DEFAULT.comp_schedule     ~00:00-06:00
    |- push_resinfo     -                    2023-11-09 14:34:16  DEFAULT.resinfo_schedule  @60
    |- status           2023-11-09 14:25:36  2023-11-09 14:35:36  DEFAULT.status_schedule   @10
    |- run              2023-11-09 14:34:10  2023-11-09 14:35:10  task#addpart.schedule     @1m
    |- run              2023-11-09 14:28:10  2023-11-09 15:28:10  task#chkaddip.schedule    @60m
    |- run              2023-11-09 14:34:10  2023-11-09 14:35:10  task#collect.schedule     @1m
    |- run              2023-11-09 14:28:10  2023-11-09 15:28:10  task#delpart.schedule     @60m
    |- run              2023-11-09 14:34:10  2023-11-09 14:35:10  task#software.schedule    @1m
    `- sync_all         2023-11-09 14:05:58  2023-11-09 15:05:58  sync#i0.schedule          @60
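Besides waiting for the scheduler, a task can be triggered on demand with the agent's ``run`` action scoped to a single resource id. A minimal sketch, assuming the ``task#software`` resource shown in the status output above (check ``om slapos/svc/comp1 run --help`` for the exact selector syntax of your agent version):

```shell
# Run one task immediately instead of waiting for its schedule;
# --rid restricts the action to the selected task resource.
om slapos/svc/comp1 run --rid task#software

# Re-evaluate resource status to see the task's last run result.
om slapos/svc/comp1 print status -r
```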
Management commands
===================
Starting service
----------------
``om slapos/svc/comp1 start``
Relocating service
------------------
``om slapos/svc/comp1 switch``
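``switch`` relocates the service instance to the peer node. A typical sequence, assuming the two-node ``demo1``/``demo2`` cluster shown earlier, is to relocate and then verify the new placement:

```shell
# Relocate the service away from its current node.
om slapos/svc/comp1 switch

# Verify placement: the started instance should now be on the peer node,
# and the former primary should show as standby.
om mon
```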
Stopping service
----------------
``om slapos/svc/comp1 stop``
Fetching service config
-----------------------
``om slapos/svc/comp1 print config``
Editing service config
----------------------
``om slapos/svc/comp1 edit config``
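``edit config`` opens the service configuration in your editor. For scripted, non-interactive changes, the OpenSVC agent also offers ``get``/``set`` keyword actions; a hedged sketch (the keyword name below is taken from the schedule table above, and the ``@5m`` value is only an illustration):

```shell
# Read a single keyword from the service configuration.
om slapos/svc/comp1 get --kw task#software.schedule

# Change it, e.g. to run the task every 5 minutes.
om slapos/svc/comp1 set --kw task#software.schedule=@5m
```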
Notes
-----
- This deployment is still a work in progress and needs to be reworked
- add more storage options
- check ipv6 routes prerequisite for slapos installer
- container implementation (lxc ? docker?)
- configure api for external management
- add more heartbeats
- ...