.. _agent.service.resources.common:

Common Properties
=================

All service resources share common properties and behaviours.

Action Requirements
*******************

to be completed

Tagging
*******

A resource can be tagged using the ``tags`` keyword. The value is a whitespace-separated list of tags. Tag names can be user-defined or hardcoded in the agent.

Custom Tags
+++++++++++

Custom tags ease service management on complex configurations, as they can be used in service and resource selector expressions.

Examples:

============= =============
Resource      Tags
============= =============
app#db        database base
app#tomcat1   appsrv base
app#tomcat2   appsrv
app#tomcat3   appsrv
app#nginx1    websrv base
app#nginx2    websrv
============= =============

::

	# stop resources tagged 'websrv'
	$ om --tag websrv stop

	# stop resources tagged 'websrv' or 'appsrv'
	$ om --tag websrv,appsrv stop

	# stop resources tagged 'websrv' and 'base'
	$ om --tag websrv+base stop

Special Tags
++++++++++++

Some tag names are reserved and have a particular meaning.

noaction
--------

This tag keeps the agent from executing state-changing actions on the resource. The agent still runs the resource status evaluations. For example, a resource mapping an ip address that the operating system activates at vm boot must be tagged ``noaction``.

encap
-----

This tag assigns the resource to the encapsulated/slave service. The agent on the master part of the service does not handle such a resource. :cmd:`svcmgr print status` highlights these resources with the ``E`` flag.

::

        $ om testzone print status
	testzone                           up                                      
	`- sol3.opensvc.com                up         frozen,          
	   |                                          idle,      
	   |                                          started    
	   |- disk#0                 ..../ n/a        testzone.raw0                
	   |- disk#1                 ..../ up         testzone.raw1                
	   |- fs#0                   ..... n/a        dir /tmp/share               
	   |- share#1                ..../ up         nfs:/tmp/share               
	   `- container#0            ..../ up         testzone                     
	      |- ip#1                ...E/ down       128.0.1.2@lo0/testzone1      
	      `- app:a1                               //                           
		 |- app#0            ...E/ n/a        true                         
		 |                                    info: not evaluated          
		 |                                    (instance not up)            
		 `- app#1            ...E/ n/a        true                         
						      info: not evaluated          
						      (instance not up)            
	
.. seealso:: :ref:`agent.service.encapsulation`

nostatus
--------

This tag prevents the resource status evaluation. The resource status is set to ``n/a``.

dedicated
---------

This tag is used by the ip.docker driver only. If set, the physical network interface card is moved into the container network namespace. This NIC is thus reserved, and should not be used by other resources and services.

Scoping
+++++++

Like any other resource parameter, tags can be scoped:

::

	[ip#1]
	type = crossbow
	ipname = 128.0.1.2
	ipdev = lo0
	ipdevext = {svcname}1
	netmask = 32
	tags = encap
	tags@sol1.opensvc.com = encap noaction
	
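To check which tags a resource effectively carries on a given node, the scoped value can be queried. A minimal sketch, assuming an agent supporting the ``get --eval`` action (OpenSVC v2 does), with the configuration above:

::

	# on sol1.opensvc.com, the node-scoped value wins
	$ om testzone get --kw ip#1.tags --eval
	encap noaction

	# on any other node, the unscoped value applies
	$ om testzone get --kw ip#1.tags --eval
	encap
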
.. seealso:: :ref:`agent-service-scoping`

Subsets
*******

to be completed

Disabled
********

A resource can be marked as disabled using the ``disable`` keyword:

::

	[container#1]
	type = docker
	image = ubuntu:14.04
	interactive = true
	tty = true
	entrypoint = /bin/bash
	disable = true
	
This makes the agent ignore any action on this resource. :cmd:`svcmgr print status` highlights disabled resources with the ``D`` flag.

::

        $ om app1.dev print status --refresh
	app1.dev                     up                                                            
	`- deb1.opensvc.com          up         idle, started  
	   |- ip#0             ..... up         192.168.1.1@lo                                     
	   `- container#1      .D... n/a        docker container app1.dev.container.1@ubuntu:14.04 
	
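A resource can also be disabled and re-enabled without editing the configuration file by hand. A minimal sketch, assuming the ``set`` and ``unset`` actions of the ``om`` command line used throughout this page:

::

	# disable the container resource
	$ om app1.dev set --kw container#1.disable=true

	# re-enable it
	$ om app1.dev unset --kw container#1.disable
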
Optional
********

A resource can be marked as optional using the ``optional`` keyword:

::

	[app#0]
	script = /bin/true
	info = true
	stop = true
	start = true
	optional = true
	
This parameter allows defining non-critical resources in the service. Service actions won't stop on errors reported by optional resources. :cmd:`svcmgr print status` highlights optional resources with the ``O`` flag.

::

        $ om mysvc1.opensvc.com print status
	mysvc1.opensvc.com                up                                                           
	`- deb1.opensvc.com               up         idle, started 
	   |- ip#1                  ..... up         128.0.1.124@lo                                    
	   |- disk#1                ..... stdby up   loop /opt/disk1.dd                                
	   |- disk#2                ..... stdby up   loop /opt/disk2.dd                                
	   |- disk#3                ..... stdby up   vg vgtest                                         
	   |- fs#1                  ..... up         ext4 /dev/vgtest/lvtest1@/opt/avn/lvtest1         
	   |- fs#2                  ..... up         ext4 /dev/vgtest/lvtest2@/opt/avn/lvtest2         
	   |- fs#3                  ..... up         ext4 /dev/disk/by-label/testfs@/opt/avn/lvtest3   
	   |- share#0               ..../ up         nfs:/opt/avn/lvtest3                              
	   |- app#0                 ..O./ n/a        true                                              
	   |                                         info: check is not set                            
	   `- sync#i0               ..O./ up         rsync svc config to drpnodes, nodes               

	
Monitoring
**********

A resource can be marked as monitored using the ``monitor`` keyword:

::

	[disk#3]
	type = vg
	name = vgtest
	standby = true
	monitor = true
	
This means the resource is **critical** for the service availability. If the resource goes down, the agent triggers the ``monitor_action``, which may crash or reboot the node, or stop the service, to force a failover. :cmd:`svcmgr print status` highlights monitored resources with the ``M`` flag.

::

        $ om mysvc1.opensvc.com print status
	mysvc1.opensvc.com                up                                                           
	`- deb1.opensvc.com               up         idle, started 
	   |- ip#1                  ..... up         128.0.1.124@lo                                    
	   |- disk#1                ..... stdby up   loop /opt/disk1.dd                                
	   |- disk#2                ..... stdby up   loop /opt/disk2.dd                                
	   |- disk#3                M.... stdby up   vg vgtest                                         
	   |- fs#1                  ..... up         ext4 /dev/vgtest/lvtest1@/opt/avn/lvtest1         
	   |- fs#2                  ..... up         ext4 /dev/vgtest/lvtest2@/opt/avn/lvtest2         
	   |- fs#3                  ..... up         ext4 /dev/disk/by-label/testfs@/opt/avn/lvtest3   
	   |- share#0               ..../ up         nfs:/opt/avn/lvtest3                              
	   |- app#0                 ..O./ n/a        true                                              
	   |                                         info: check is not set                            
	   `- sync#i0               ..O./ up         rsync svc config to drpnodes, nodes               
	
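The ``monitor_action`` itself is configurable at the service level. A hedged sketch, assuming the ``DEFAULT`` section ``monitor_action`` keyword and its ``freezestop`` value (both exist in OpenSVC v2, but check your agent's keyword reference):

::

	[DEFAULT]
	# assumption: freeze the instance then stop it,
	# letting the cluster fail over
	monitor_action = freezestop
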
.. note:: The ``restart`` parameter can be combined with the ``monitor`` setting, as explained below.

Automatic Restart
*****************

The ``restart`` parameter can be set to make the agent daemon's monitor restart the resource when it fails:

::

	[app#0]
	script = /bin/true
	info = true
	stop = true
	start = true
	optional = true
	restart = 2
	
The ``restart`` value is the number of times the daemon will attempt to restart the resource before giving up. If combined with ``monitor``, the agent will try to restart the failed resource before triggering the ``monitor_action``.

Standby Resources
*****************

Some resources must remain up even when the service instance is stopped. For example, in a 2-node failover service with a fs resource and a sync.rsync resource replicating the fs, the fs resource must be up on the passive node to receive the rsync'ed data. Otherwise, the data would be written to the mount point on the underlying filesystem. The ``standby`` keyword can be set in these cases:

::

	[disk#3]
	type = vg
	name = vgtest
	standby = true
	monitor = true
	
Possible values are ``nodes``, ``drpnodes``, ``nodes drpnodes``, or a list of node names. Resources with ``standby = true`` are started on service ``boot`` and ``start`` actions, and stopped only on the service ``shutdown`` action. :cmd:`svcmgr print status` displays the ``stdby up`` status for up standby resources, and ``stdby down`` for down standby resources.

::

        # Primary Node
        $ om mysvc1.opensvc.com print status
	mysvc1.opensvc.com                up                                                           
	`- deb1.opensvc.com               up         idle, started 
	   |- ip#1                  ..... up         128.0.1.124@lo                                    
	   |- disk#1                ..... stdby up   loop /opt/disk1.dd                                
	   |- disk#2                ..... stdby up   loop /opt/disk2.dd                                
	   |- disk#3                M.... stdby up   vg vgtest                                         
	   |- fs#1                  ..... up         ext4 /dev/vgtest/lvtest1@/opt/avn/lvtest1         
	   |- fs#2                  ..... up         ext4 /dev/vgtest/lvtest2@/opt/avn/lvtest2         
	   |- fs#3                  ..... up         ext4 /dev/disk/by-label/testfs@/opt/avn/lvtest3   
	   |- share#0               ..../ up         nfs:/opt/avn/lvtest3                              
	   |- app#0                 ..O./ n/a        true                                              
	   |                                         info: check is not set                            
	   `- sync#i0               ..O./ up         rsync svc config to drpnodes, nodes               


        # Secondary Node
        $ om mysvc1.opensvc.com print status
	mysvc1.opensvc.com                                                                           
	`- deb2.opensvc.com               warn       warn       
	   |- ip#1                  ..... down       128.0.1.124@lo                                  
	   |- disk#1                ..... stdby up   loop /opt/disk1.dd                              
	   |- disk#2                ..... stdby down loop /opt/disk2.dd                              
	   |- disk#3                M.... stdby up   vg vgtest                                       
	   |- fs#1                  ..... down       ext4 /dev/vgtest/lvtest1@/opt/avn/lvtest1       
	   |- fs#2                  ..... down       ext4 /dev/vgtest/lvtest2@/opt/avn/lvtest2       
	   |- fs#3                  ..... down       ext4 /dev/disk/by-label/testfs@/opt/avn/lvtest3 
	   |- share#0               ..../ down       nfs:/opt/avn/lvtest3                            
	   |- app#0                 ..O.. n/a        true                                            
	   |                                         info: not evaluated (instance not up)           
	   `- sync#i0               ..O./ up         rsync svc config to drpnodes, nodes             
	
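As the warning below stresses, ``standby`` only makes sense for resources that can safely be active on several nodes at once, such as node-private disks and filesystems. A hedged illustration, contrasting the private loop-backed disk from the example above with a hypothetical shared volume group (``disk#9`` is an assumed resource name):

::

	[disk#1]
	type = loop
	file = /opt/disk1.dd
	# private to each node: safe to keep up everywhere
	standby = true

	[disk#9]
	type = vg
	name = vgshared
	# shared between nodes: never set standby here,
	# concurrent activation would corrupt data
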
.. warning:: Don't set ``standby`` on shared disk resources. Activating a shared disk on multiple nodes would cause data corruption.

.. include:: agent.service.triggers.rst

.. include:: agent.service.resources.devices.rst