Pacemaker Corosync

Restoring Galera running under PCS

This is a note on how to put a broken cluster back together, written down so it is not lost if it is ever needed again.

On any controller (for this purpose all controllers are equivalent):

pcs resource disable clone_p_mysqld

Wait until all mysqld processes have stopped.
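
A simple way to confirm this on each controller (not from the original note, just a routine check):

ps -ef | grep -v grep | grep mysqld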

Make sure the clone_p_mysqld resource is stopped on all controllers:

pcs status resources

On all controllers, clear out (move away or delete) the MySQL data:

mv /var/lib/mysql/* /tmp/mysql/
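
The target directory used above must already exist; if it does not, create it first (a trivial addition, not in the original):

mkdir -p /tmp/mysql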

Choose one controller as the primary one for the restore (any controller will do); below it is called controller-x.

Restore the backup (assuming, of course, that one exists) on controller-x:

cp -R /ext-volume/mysql-backup/* /var/lib/mysql/

Fix the ownership on controller-x:

chown -R mysql:mysql /var/lib/mysql

The OCF scripts need environment variables; set them before running the mysql-wss script.
After that, start the database on the chosen controller controller-x.
Note that the first (and only the first) controller is started with the --wsrep-new-cluster parameter:

export OCF_RESOURCE_INSTANCE=p_mysqld
export OCF_ROOT=/usr/lib/ocf
export OCF_RESKEY_socket=/var/run/mysqld/mysqld.sock
export OCF_RESKEY_master_timeout=10
export OCF_RESKEY_test_passwd=`crm_resource -r p_mysqld -g test_passwd`
export OCF_RESKEY_test_user=`crm_resource -r p_mysqld -g test_user`
export OCF_RESKEY_additional_parameters="--wsrep-new-cluster"
/usr/lib/ocf/resource.d/fuel/mysql-wss start
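
Before moving on, it may be worth confirming that this node came up as the primary component of a new one-node cluster; a minimal sketch using standard Galera status variables:

mysql -e "SHOW STATUS WHERE Variable_name IN ('wsrep_cluster_status','wsrep_cluster_size','wsrep_local_state_comment');"

At this point the expected values are roughly Primary, 1, and Synced.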

Run the monitor operation on controller-x to update the Galera GTID in the Pacemaker cluster configuration.
The same environment variables are required here:

/usr/lib/ocf/resource.d/fuel/mysql-wss monitor

Then export the same variables and start MySQL on the other two controllers:

export OCF_RESOURCE_INSTANCE=p_mysqld
export OCF_ROOT=/usr/lib/ocf
export OCF_RESKEY_socket=/var/run/mysqld/mysqld.sock
export OCF_RESKEY_master_timeout=10
export OCF_RESKEY_test_passwd=`crm_resource -r p_mysqld -g test_passwd`
export OCF_RESKEY_test_user=`crm_resource -r p_mysqld -g test_user`
/usr/lib/ocf/resource.d/fuel/mysql-wss start

From any controller, enable the MySQL resource in Pacemaker:

pcs resource enable clone_p_mysqld

Make sure the resource has started on all controllers:

pcs status resources


https://clusterlabs.org/pacemaker/doc/2.1/Pacemaker_Explained/html/nodes.html#tracking-node-health

What to do when everything is down because the disks are full

How to check?

That Pacemaker considers there to be a disk problem can be seen in the /var/log/pacemaker.log log:

# grep 'health_disk.*value="red"' /var/log/pacemaker.log
May 10 02:53:21 [9160] node-X.default.ltd        cib:     info: cib_perform_op:     ++ /cib/status/node_state[@id='X']/transient_attributes[@id='X']/instance_attributes[@id='status-X']:  <nvpair id="status-X-#health_disk" name="#health_disk" value="red"/>

After the health_disk change you will see the "migrate-on-red" health strategy being applied:

May 10 02:53:21 [9164] node-X.default.ltd    pengine:     info: apply_system_health: 	Applying automated node health strategy: migrate-on-red

After that some resources cannot start. Pay attention to the following messages (a grep sketch follows the list):

  • Applying automated node health strategy: migrate-on-red
  • Resource p_haproxy:0 cannot run anywhere
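
A quick way to spot these messages (a sketch; the log path is the one used above):

# grep -E 'apply_system_health|cannot run anywhere' /var/log/pacemaker.log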

How to fix it?

Check the free space:

# df -h / /var/log /var/lib/mysql

The partitions that usually cause the problem are listed below; example cleanup commands follow the list.

  • /var/log
  • /var/lib/mysql
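
Example cleanup actions for these two partitions, purely illustrative: removing already-rotated compressed logs under /var/log, and purging old MySQL binary logs (the three-day interval is an arbitrary example; double-check what you delete):

# find /var/log -name '*.gz' -delete
# mysql -e "PURGE BINARY LOGS BEFORE NOW() - INTERVAL 3 DAY;"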

Once space has been freed, restart Pacemaker:

# service pacemaker stop 
# service pacemaker start

Checking the configuration

# crm configure show cib-bootstrap-options
property cib-bootstrap-options: \
	dc-version=1.1.12-561c4cf \
	cluster-infrastructure=corosync \
	no-quorum-policy=stop \
	cluster-recheck-interval=190s \
	stonith-enabled=false \
	start-failure-is-fatal=false \
	symmetric-cluster=false \
	last-lrm-refresh=1461091450 \
	node-health-strategy=migrate-on-red

Note the node-health-strategy=migrate-on-red setting here.
There is a Pacemaker/Corosync primitive that monitors node "health"; it can be inspected like this:

# crm configure show sysinfo_*
primitive sysinfo_node-1.default.tld ocf:pacemaker:SysInfo \
	op monitor interval=15s \
	params disks="/ /var/log /var/lib/mysql" min_disk_free=512M disk_unit=M

Three partitions are monitored (a sketch for querying the resulting health attribute follows the list):
  • /
  • /var/log
  • /var/lib/mysql
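
To query the health attribute that this primitive maintains for a node, something like the following should work (the exact crm_attribute flags are an assumption; check crm_attribute --help on your version):

# crm_attribute --query --lifetime reboot --node node-1.default.tld --name "#health_disk"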

Checking Galera replication status

SHOW GLOBAL STATUS LIKE 'wsrep_%';
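
The full wsrep_% list is long; a shorter variant that pulls out only the most telling variables (all of them are standard Galera status variables):

SHOW GLOBAL STATUS WHERE Variable_name IN ('wsrep_cluster_status','wsrep_cluster_size','wsrep_local_state_comment','wsrep_ready','wsrep_incoming_addresses');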

If the cluster "does not come back together", raise the timeouts

Problem: MySQL is not running on one of the controller nodes:

# pcs status
Clone Set: clone_p_mysql [p_mysql]
    Started: [ b05-39-controller.domain.tld b06-39-controller.domain.tld ]
    Stopped: [ b05-38-controller.domain.tld ]

Make sure MySQL is stopped on the problematic node:

# ps -ef | grep mysql
root     14878  5566  0 14:33 pts/0    00:00:00 grep --color=auto mysql

Edit the default timeout for the start operation:

# crm configure edit p_mysql

or, if the resource is named p_mysqld:

# crm configure edit p_mysqld
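
If you prefer pcs over the crm shell, the same change can most likely be made directly with pcs resource update (a sketch; verify against the pcs version in use):

# pcs resource update p_mysqld op start interval=0 timeout=1200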

Set a temporary timeout value of 1200 for p_mysql-start-0:

# pcs resource show p_mysql
   Resource: p_mysql (class=ocf provider=fuel type=mysql-wss)
    Attributes: test_user=wsrep_sst test_passwd=??? socket=/var/run/mysqld/mysqld.sock 
    Operations: monitor interval=60 timeout=55 (p_mysql-monitor-60)
                start interval=0 timeout=1200 (p_mysql-start-0)
                stop interval=0 timeout=120 (p_mysql-stop-0)

Clean up the p_mysql resource:

# crm resource cleanup p_mysql
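
While the resource restarts and the node performs its state transfer, you can optionally watch the sync state on the problematic controller (a convenience sketch, not part of the original procedure):

# watch -n 10 "mysql -e \"SHOW STATUS LIKE 'wsrep_local_state_comment';\""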

Wait 15-20 minutes for synchronization to complete, then run pcs status again to check that MySQL is back up and running on the problematic controller node:

# pcs status
Clone Set: clone_p_mysql [p_mysql]
    Started: [ b05-38-controller.domain.tld b05-39-controller.domain.tld b06-39-controller.domain.tld ]
Ensure that the cluster is synced again:
| wsrep_local_state_comment  | Synced                                                |
| wsrep_cert_index_size      | 899                                                   |
| wsrep_causal_reads         | 0                                                     |
| wsrep_incoming_addresses   | 10.128.0.133:3307,10.128.0.132:3307,10.128.0.134:3307

Reset the timeout value for p_mysql-start-0 to its default value.

Frequently used CRM/PCS commands

CRM

The most frequently used CRM commands:

  • Show cluster status: crm status
  • Show status of resource: crm resource status
  • Cleanup resource status: crm resource cleanup <resource> (example: crm resource cleanup p_neutron-dhcp-agent)
  • Start a resource: crm resource start <resource> (example: crm resource start p_neutron-dhcp-agent)
  • Stop a resource: crm resource stop <resource> (example: crm resource stop p_neutron-dhcp-agent)
  • Restart a resource: crm resource restart <resource> (example: crm resource restart p_neutron-dhcp-agent)
  • Put a resource into managed mode: crm resource manage <resource> (example: crm resource manage p_neutron-dhcp-agent)
  • Put a resource into unmanaged mode: crm resource unmanage <resource> (example: crm resource unmanage p_neutron-dhcp-agent)
  • Migrate a resource to another node: crm resource migrate <resource> [<node>] [<lifetime>] [force] (example: crm resource migrate p_neutron-dhcp-agent)

PCS

The most frequently used PCS commands:

  • Show cluster status: pcs status
  • Show status of resource: pcs status resources
  • Cleanup resource status: pcs resource cleanup <resource id> (example: pcs resource cleanup p_neutron-plugin-openvswitch-agent)
  • Put a resource into managed mode: pcs resource manage <resource id> ... [resource n]
  • Put a resource into unmanaged mode: pcs resource unmanage <resource id> ... [resource n]
  • Save the raw XML of the CIB into a file: pcs cluster cib
  • Display the full cluster config: pcs config
  • Stop cluster services / force stop: pcs cluster stop [--all] [node] [...] or pcs cluster kill (example: pcs cluster stop node-1)
  • Standby mode (the specified node is no longer able to host resources): pcs cluster standby <node> | --all and pcs cluster unstandby <node> | --all (examples: pcs cluster standby node-1, pcs cluster unstandby node-1)
  • Enable a cluster resource: pcs resource enable <resource id> [--wait[=n]] (example: pcs resource enable p_neutron-l3-agent)
  • Disable a cluster resource: pcs resource disable <resource id> [--wait[=n]] (example: pcs resource disable p_neutron-l3-agent)
  • Prevent the specified resource from running on a node: pcs resource ban <resource id> [node] [--master] (example: pcs resource ban p_neutron-plugin-openvswitch-agent node-1)
  • Remove constraints created by move and/or ban on the specified resource: pcs resource clear <resource id> [node] [--master] (example: pcs resource clear p_neutron-plugin-openvswitch-agent node-1)
  • Maintenance mode, telling the cluster to go "hands off": pcs property set maintenance-mode=[true|false] (examples: pcs property set maintenance-mode=true, pcs property set maintenance-mode=false)
  • Move a resource off its current node (and optionally onto a destination node): pcs resource move <resource id> [destination node] [--master] (example: pcs resource move p_neutron-dhcp-agent node-1)
  • Force the specified resource to start on this node, ignoring cluster recommendations, and print the output from starting it: pcs resource debug-start <resource id> [--full] (example: pcs resource debug-start p_ceilometer-agent-central)
  • Show the options of a specific resource: pcs resource show <resource id> (example: pcs resource show p_neutron-l3-agent)
  • Show the current failcount for a specified resource: pcs resource failcount show <resource id> [node] (example: pcs resource failcount show p_neutron-l3-agent)
  • Restart the metadata agent on a specific node: pcs resource ban p_neutron-metadata-agent node-1.domain.tld, then pcs resource clear p_neutron-metadata-agent node-1.domain.tld

One more document on cluster recovery

This is a brief, step-by-step how-to for restoring a cloud when 2 of 3 controllers are down (hardware malfunction, power outage, etc.). There is no quorum in Pacemaker and the cloud is down. Current state:

  • 1 controller up or healthy
  • 2 controllers down, unavailable, or not functional

Choose the 1st controller: preferably a controller with MySQL up and running, but in most cases of this kind of failure MySQL is down on all nodes.

  • node-1: alive controller (its MySQL database should not be corrupted); we will back this node up
  • node-2: down or malfunctioning controller
  • node-3: down or malfunctioning controller

Recover the cloud on the one controller

To restore node-1 as a single controller we need to:

  • Fix the Pacemaker quorum policy
  • Fix the MySQL server
  • Fix the RabbitMQ service
  • Fix the other core services (Neutron, Nova, Heat, etc.)

Pacemaker

Switch node-2 and node-3 to maintenance/unmanaged state, just to make sure they do not interfere with the restore process:

  • $ crm node maintenance node-2
  • $ crm node maintenance node-3

Check:

$ pcs cluster cib |grep maint
<nvpair id="nodes-1-maintenance" name="maintenance" value="off"/>
<nvpair id="nodes-2-maintenance" name="maintenance" value="on"/>
<nvpair id="nodes-3-maintenance" name="maintenance" value="on"/>
$ pcs status

Disable MySQL on node-1; if that node is not a donor from the Pacemaker perspective, we will need to start it manually as a donor. Also disable Zabbix (if it is installed): high load on MySQL can cause problems during replication or even break the replication process.

  • $ pcs resource disable clone_p_mysql
  • $ pcs resource disable p_zabbix-server
  • $ pcs status

Switch no-quorum-policy to ignore. We need this so that all Pacemaker services can start without quorum. Usually this step brings RabbitMQ back.

  • $ crm configure property no-quorum-policy=ignore
  • $ crm configure show
  • $ pcs cluster cib |grep no-quorum
  • $ pcs status

RabbitMQ

Restart the RabbitMQ messaging service. In some cases this is mandatory to make sure that the service is up, running, and operational.
Stop p_rabbitmq-server in Pacemaker:

$ crm resource stop p_rabbitmq-server

Ensure that RabbitMQ is down and kill it if necessary. If you are using Murano, please do not kill the Murano RabbitMQ instance: kill only the rabbit whose command line contains something like 'pluginsexpanddir "/var/lib/rabbitmq/mnesia/rabbit@node-1-plugins-expand"'.

$ ps -ef|grep rabbit
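
If the process has to be killed manually, one way to target only the main RabbitMQ instance (and not the Murano one) is to match the plugins-expand line mentioned above; an illustrative pattern only, double-check the PID before killing anything:

$ ps -ef | grep 'rabbit@node-1-plugins-expand' | grep -v grep | awk '{print $2}' | xargs -r kill -9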

Start p_rabbitmq-server via Pacemaker:

  • $ crm resource start p_rabbitmq-server
  • $ pcs status

Ensure that RabbitMQ is running:

  • $ rabbitmqctl status
  • $ rabbitmqctl cluster_status

MySQL

Ensure that no MySQL instance is running and kill it if necessary:

$ ps -ef|grep mysql

Start MySQL as the donor for the Galera cluster:

$ export OCF_RESOURCE_INSTANCE=p_mysql
$ export OCF_ROOT=/usr/lib/ocf
$ export OCF_RESKEY_socket=/var/run/mysqld/mysqld.sock
$ export OCF_RESKEY_additional_parameters="--wsrep-new-cluster"
$ /usr/lib/ocf/resource.d/fuel/mysql-wss start

Check MySQL after a couple of minutes:

$ mysql -e "show status like 'wsrep_%';"

Create a MySQL backup, just in case (if necessary):

$ mkdir /tmp/db-backup; \
 innobackupex --no-timestamp --socket=/var/run/mysqld/mysqld.sock /tmp/db-backup; \
 innobackupex --use-memory=1G --lock-wait-query-type=all --apply-log --socket=/var/run/mysqld/mysqld.sock /tmp/db-backup

You can back up only a specific list of tables, excluding Zabbix; please refer to Percona Partial Backups. For example, for a backup without Zabbix:

$ mysql -e "SELECT CONCAT(table_schema,'.',table_name) FROM information_schema.tables WHERE table_schema not like 'zabbix'; " > db-tables.txt
$ innobackupex --tables-file=db-tables.txt --socket=/var/run/mysqld/mysqld.sock /tmp/db-backup

Ceph

We have one controller with a Ceph monitor. There is no quorum and Ceph is not operational, so we need to remove the two other monitors that belong to the downed controllers.
Stop the Ceph monitor:

$ stop ceph-mon-all

Dump monitor map:

  • $ mkdir ./ceph-mon-dump
  • $ ceph-mon -i node-1 --extract-monmap ./ceph-mon-dump/monmap
  • $ cp ./ceph-mon-dump/monmap ./ceph-mon-dump/monmap.bak

Remove node-2 and node-3 from the monitor map:

  • $ monmaptool ./ceph-mon-dump/monmap --rm node-2
  • $ monmaptool ./ceph-mon-dump/monmap --rm node-3

Inject the updated monitor map containing a single monitor:

  • $ ceph-mon -i node-1 --inject-monmap ./ceph-mon-dump/monmap

Start and check Ceph:

  • $ start ceph-mon-all
  • $ ceph -s

Restart the Ceph OSDs on the ceph-osd nodes if necessary.
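
A sketch for restarting the OSDs, assuming the same kind of upstart jobs as used for the monitors above (verify the job name on your ceph-osd nodes; <osd-node> is a placeholder):

$ ssh <osd-node> stop ceph-osd-all
$ ssh <osd-node> start ceph-osd-all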

Neutron and Nova services

Now we have MySQL, RabbitMQ, and Ceph running.
Check and restore the Pacemaker services on the controller, and the Neutron and Nova services on the computes:

  • $ pcs status
  • $ . openrc
  • $ nova service-list
  • $ neutron agent-list
  • $ neutron agent-list |grep -i xxx

After that the cloud should be operational.

Node-2 restoration

Now we have:
cloud – operational, up and running.

  • node-1: single controller, up and running
  • node-2: down or malfunctioning controller; let's start this node
  • node-3: down or malfunctioning controller


Start node-2 if it is down.
To restore node-2 we follow the same sequence, except for RabbitMQ: adding a second node to the RabbitMQ cluster via Pacemaker causes a RabbitMQ service interruption of 1-5 minutes. Node-2 stays unmanaged and marked as "maintenance" in Pacemaker.

Restoring MySQL on node-2

SSH to node-2.
On node-2:
Stop MySQL via the Pacemaker OCF script if it is running:

  • $ export OCF_RESOURCE_INSTANCE=p_mysql
  • $ export OCF_ROOT=/usr/lib/ocf
  • $ export OCF_RESKEY_socket=/var/run/mysqld/mysqld.sock
  • $ /usr/lib/ocf/resource.d/fuel/mysql-wss stop

Ensure that MySQL is down and that no mysql process exists. This is mandatory. Kill the MySQL process if it is still running or stuck.
Clean up the MySQL data directory and start MySQL with the OCF script (a log-watching sketch follows the commands):

  • $ rm -R /var/lib/mysql/*
  • $ /usr/lib/ocf/resource.d/fuel/mysql-wss start
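
To follow the state transfer while the node syncs, you can simply tail the MySQL log (a trivial sketch):

$ tail -f /var/log/mysqld.log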

Within 5 minutes or so replication starts.
After some time, depending on the amount of data and the network speed, replication completes and MySQL starts.
Check /var/log/mysqld.log for details, and check the cluster status once MySQL has started as a Galera cluster member:

$ mysql -e "show status like 'wsrep_%';"

Return to node-1.

Adding node-2 to the Pacemaker cluster

Now we need to restore Pacemaker cluster management for node-2.
Make sure that RabbitMQ will not be started on the node:

  • $ pcs resource ban p_rabbitmq-server node-2
  • $ crm node ready node-2
  • $ pcs status

Adding node-2 to the Ceph map

Stop the Ceph monitor on node-2:

$ ssh node-2 stop ceph-mon-all

Re-add the monitor on node-2:

  • $ ceph mon remove node-2
  • $ ceph mon add node-2 <ip addr of node-2 from management network>
  • $ ceph-deploy --overwrite-conf mon create node-2

Check the Ceph status:

$ ceph -s

As a result you will have two monitors in the Ceph cluster.

Nova and Neutron services

Check and restore (if necessary) the Nova and Neutron services on node-2:

  • $ nova service-list
  • $ neutron agent-list
  • $ neutron agent-list |grep -i xxx

Node-3 restoration

Now we have:
cloud – operational, up and running.

  • node-1: 1st controller, up and running
  • node-2: 2nd controller, up and running
  • node-3: down or malfunctioning controller

Start node-3 if it is down.

Restoring node-3 is similar to restoring node-2, with the same exception for RabbitMQ:
adding another node to the RabbitMQ cluster via Pacemaker causes a RabbitMQ service interruption of 1-5 minutes.
Node-3 stays unmanaged and marked as "maintenance" in Pacemaker. Perform all the steps used to restore node-2.

Restoring quorum policy

Restore the quorum policy:

$ crm configure property no-quorum-policy=stop

Check the Pacemaker config and status:

  • $ crm configure show
  • $ pcs cluster cib |grep no-quorum
  • $ pcs status

Post-recovery tasks

These tasks are left for a maintenance window because some service interruption is expected.

Restoring MySQL management by Pacemaker

This step does not cause a MySQL restart. It is safe to do during business hours, but we recommend shifting it to the maintenance window.

  • $ pcs resource clear p_mysql node-2
  • $ pcs resource clear p_mysql node-3
  • $ pcs resource enable clone_p_mysql
  • $ pcs status

Starting the Zabbix monitoring service

Start the Zabbix service:

  • $ pcs resource enable p_zabbix-server

In 2-5 minutes, check the Zabbix status in the Pacemaker dashboard:

  • $ pcs status

Restoring RabbitMQ cluster management by Pacemaker

Service interruption of 1-5 minutes is expected.

Return RabbitMQ to Pacemaker management:

  • $ pcs resource clear p_rabbitmq-server node-2
  • $ pcs resource clear p_rabbitmq-server node-3

In 3-5 minutes, check the RabbitMQ cluster status with rabbitmqctl:

  • $ rabbitmqctl cluster_status

Check via Pacemaker:

  • $ pcs status p_rabbitmq-server

Ensure that the cluster name is the same on the master node from the Pacemaker perspective.