Pacemaker Corosync


Recovering Galera running under PCS

This is a note on how to recover a broken cluster, written down so it is not lost if it is needed again.

On any controller (in this respect all controllers are equivalent):

pcs resource disable clone_p_mysqld

Wait until all mysqld processes have stopped.
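A quick way to verify this on each controller (a convenience check, not part of the original note) is to make sure the following prints nothing:

ps -ef | grep mysqld | grep -v grep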

Make sure the clone_p_mysqld resource is stopped on all controllers:

pcs status resources

On all controllers, clear out (move or delete) the mysql data:

mv /var/lib/mysql/* /tmp/mysql/
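Note that the target directory has to exist; if it does not, create it first (same path as in the command above):

mkdir -p /tmp/mysql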

Pick one controller as the primary one for the recovery (any controller will do); below it is referred to as controller-x.

Restore the backup (assuming, of course, that one exists) on controller-x:

cp -R /ext-volume/mysql-backup/* /var/lib/mysql/

Fix the ownership on controller-x:

chown -R mysql:mysql /var/lib/mysql

The OCF script needs environment variables; they must be set before running the mysql-wss script.
Then start the database on the chosen controller controller-x.
Note that the first (and only the first) controller is started with the --wsrep-new-cluster parameter:

export OCF_RESOURCE_INSTANCE=p_mysqld
export OCF_ROOT=/usr/lib/ocf
export OCF_RESKEY_socket=/var/run/mysqld/mysqld.sock
export OCF_RESKEY_master_timeout=10
export OCF_RESKEY_test_passwd=`crm_resource -r p_mysqld -g test_passwd`
export OCF_RESKEY_test_user=`crm_resource -r p_mysqld -g test_user`
export OCF_RESKEY_additional_parameters="--wsrep-new-cluster"
/usr/lib/ocf/resource.d/fuel/mysql-wss start

Run the monitor operation on controller-x to update the Galera GTID in the Pacemaker cluster configuration.
The same environment variables are needed:

/usr/lib/ocf/resource.d/fuel/mysql-wss monitor

Run on the remaining two controllers:

export OCF_RESOURCE_INSTANCE=p_mysqld
export OCF_ROOT=/usr/lib/ocf
export OCF_RESKEY_socket=/var/run/mysqld/mysqld.sock
export OCF_RESKEY_master_timeout=10
export OCF_RESKEY_test_passwd=`crm_resource -r p_mysqld -g test_passwd`
export OCF_RESKEY_test_user=`crm_resource -r p_mysqld -g test_user`
/usr/lib/ocf/resource.d/fuel/mysql-wss start

From any controller, enable the mysql resource in Pacemaker:

pcs resource enable clone_p_mysqld

Make sure the resource has started on all controllers:

pcs status resources


https://clusterlabs.org/pacemaker/doc/2.1/Pacemaker_Explained/html/nodes.html#tracking-node-health

What to do when everything went down because the disks are full

How to check?

Whether Pacemaker considers the disk problematic can be seen in the log /var/log/pacemaker.log:

# grep 'health_disk.*value="red"' /var/log/pacemaker.log
May 10 02:53:21 [9160] node-X.default.ltd        cib:     info: cib_perform_op:     ++ /cib/status/node_state[@id='X']/transient_attributes[@id='X']/instance_attributes[@id='status-X']:  <nvpair id="status-X-#health_disk" name="#health_disk" value="red"/>

After the health_disk change you will see the node health strategy migrate-on-red being applied:

May 10 02:53:21 [9164] node-X.default.ltd    pengine:     info: apply_system_health: 	Applying automated node health strategy: migrate-on-red

After that some resources cannot start; pay attention to messages such as:

  • Applying automated node health strategy: migrate-on-red
  • Resource p_haproxy:0 cannot run anywhere

How to fix it?

Check the free space:

# df -h / /var/log /var/lib/mysql

The usual trouble spots are:

  • /var/log
  • /var/lib/mysql
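Freeing up space is environment-specific; a rough sketch for /var/log (the *.gz pattern and 7-day retention are assumptions, adjust to whatever is actually filling the disk):

# du -sh /var/log/* | sort -h | tail
# find /var/log -name '*.gz' -mtime +7 -delete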

Once the space has been freed, restart Pacemaker:

# service pacemaker stop 
# service pacemaker start

Checking the settings

# crm configure show cib-bootstrap-options
property cib-bootstrap-options: \
	dc-version=1.1.12-561c4cf \
	cluster-infrastructure=corosync \
	no-quorum-policy=stop \
	cluster-recheck-interval=190s \
	stonith-enabled=false \
	start-failure-is-fatal=false \
	symmetric-cluster=false \
	last-lrm-refresh=1461091450 \
	node-health-strategy=migrate-on-red

Note node-health-strategy=migrate-on-red here.
There is a Pacemaker/Corosync primitive that monitors node health; it can be inspected like this:

# crm configure show sysinfo_*
primitive sysinfo_node-1.default.tld ocf:pacemaker:SysInfo \
	op monitor interval=15s \
	params disks="/ /var/log /var/lib/mysql" min_disk_free=512M disk_unit=M
The monitored mount points (from the disks parameter above):

  • /
  • /var/log
  • /var/lib/mysql
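For reference, a similar primitive could also be created with pcs; this is only a sketch that mirrors the parameters shown above (the resource name is arbitrary):

# pcs resource create sysinfo_node-1.default.tld ocf:pacemaker:SysInfo \
    disks="/ /var/log /var/lib/mysql" min_disk_free=512M disk_unit=M \
    op monitor interval=15s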

Checking the Galera replication status

SHOW GLOBAL STATUS LIKE 'wsrep_%';
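The full list is long; a convenience one-liner (not part of the original note) that pulls out the counters that usually matter most:

mysql -e "SHOW GLOBAL STATUS WHERE Variable_name IN ('wsrep_cluster_status','wsrep_cluster_size','wsrep_local_state_comment','wsrep_ready');"

On a healthy node wsrep_cluster_status is Primary, wsrep_local_state_comment is Synced and wsrep_ready is ON; wsrep_cluster_size should match the number of controllers.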


Problem

MySQL is not running on one of the controller nodes:

  1. pcs status

Clone Set: clone_p_mysql [p_mysql]

   Started: [ b05-39-controller.domain.tld b06-39-controller.domain.tld ]
   Stopped: [ b05-38-controller.domain.tld ]

Environment: Mirantis OpenStack 7.0 and higher; MySQL, Pacemaker, Galera.

Resolution

Make sure MySQL is stopped on the problematic node:

  1. ps -ef | grep mysql

root 14878 5566 0 14:33 pts/0 00:00:00 grep --color=auto mysql

Edit the default timeout for the start operation:

  1. crm configure edit p_mysql

Note: for Mirantis OpenStack 9.0 and higher, run the following command:

  1. crm configure edit p_mysqld

Set the temporary timeout value for p_mysql-start-0 to be 1200
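Inside the editor the relevant fragment would look roughly like this (a sketch based only on the operations shown below; the other lines of the primitive stay as they are):

primitive p_mysql ocf:fuel:mysql-wss \
        op monitor interval=60 timeout=55 \
        op start interval=0 timeout=1200 \
        op stop interval=0 timeout=120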

  1. pcs resource show p_mysql
  Resource: p_mysql (class=ocf provider=fuel type=mysql-wss)
   Attributes: test_user=wsrep_sst test_passwd=??? socket=/var/run/mysqld/mysqld.sock 
   Operations: monitor interval=60 timeout=55 (p_mysql-monitor-60)
               start interval=0 timeout=1200 (p_mysql-start-0)
               stop interval=0 timeout=120 (p_mysql-stop-0)

Cleanup the p_mysql resource

  1. crm resource cleanup p_mysql

Wait 15-20 minutes for the synchronization to complete. Then run pcs status again to check that mysql is back up and running on the problematic controller node:

  1. pcs status

Clone Set: clone_p_mysql [p_mysql]

   Started: [ b05-38-controller.domain.tld b05-39-controller.domain.tld b06-39-controller.domain.tld ]

Ensure that the cluster is synced again:

| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size     | 899 |
| wsrep_causal_reads        | 0 |
| wsrep_incoming_addresses  | 10.128.0.133:3307,10.128.0.132:3307,10.128.0.134:3307 |

Reset the timeout value for p_mysql-start-0 to its default value.


Commonly used CRM/PCS commands

CRM

The most frequently used CRM commands:

  • Show cluster status: crm status
  • Show resource status: crm resource status
  • Clean up resource status: crm resource cleanup <resource>  (e.g. crm resource cleanup p_neutron-dhcp-agent)
  • Start a resource: crm resource start <resource>  (e.g. crm resource start p_neutron-dhcp-agent)
  • Stop a resource: crm resource stop <resource>  (e.g. crm resource stop p_neutron-dhcp-agent)
  • Restart a resource: crm resource restart <resource>  (e.g. crm resource restart p_neutron-dhcp-agent)
  • Put a resource into managed mode: crm resource manage <resource>  (e.g. crm resource manage p_neutron-dhcp-agent)
  • Put a resource into unmanaged mode: crm resource unmanage <resource>  (e.g. crm resource unmanage p_neutron-dhcp-agent)
  • Migrate a resource to another node: crm resource migrate <resource> [<node>] [<lifetime>] [force]  (e.g. crm resource migrate p_neutron-dhcp-agent)

PCS

Frequently used PCS commands:

  • Show cluster status: pcs status
  • Show status of resources: pcs status resources
  • Clean up resource status: pcs resource cleanup <resource id>  (e.g. pcs resource cleanup p_neutron-plugin-openvswitch-agent)
  • Put a resource into managed mode: pcs resource manage <resource id> ... [resource n]
  • Put a resource into unmanaged mode: pcs resource unmanage <resource id> ... [resource n]
  • Save the raw XML of the CIB into a file: pcs cluster cib
  • Display the full cluster config: pcs config
  • Stop cluster services / force stop: pcs cluster stop [--all] [node] [...] or pcs cluster kill  (e.g. pcs cluster stop node-1)
  • Standby mode (the specified node can no longer host resources): pcs cluster standby <node> | --all and pcs cluster unstandby <node> | --all  (e.g. pcs cluster standby node-1, pcs cluster unstandby node-1)
  • Enable a cluster resource: pcs resource enable <resource id> [--wait[=n]]  (e.g. pcs resource enable p_neutron-l3-agent)
  • Disable a cluster resource: pcs resource disable <resource id> [--wait[=n]]  (e.g. pcs resource disable p_neutron-l3-agent)
  • Prevent the specified resource from running on a node: pcs resource ban <resource id> [node] [--master]  (e.g. pcs resource ban p_neutron-plugin-openvswitch-agent node-1)
  • Remove constraints created by move and/or ban on the specified resource: pcs resource clear <resource id> [node] [--master]  (e.g. pcs resource clear p_neutron-plugin-openvswitch-agent node-1)
  • Maintenance mode (tells the cluster to go "hands off"): pcs property set maintenance-mode=true or pcs property set maintenance-mode=false
  • Move a resource off its current node (and optionally onto a destination node): pcs resource move <resource id> [destination node] [--master]  (e.g. pcs resource move p_neutron-dhcp-agent node-1)
  • Force the specified resource to start on this node, ignoring cluster recommendations, and print the output of starting it: pcs resource debug-start <resource id> [--full]  (e.g. pcs resource debug-start p_ceilometer-agent-central)
  • Show the options of a specific resource: pcs resource show <resource id>  (e.g. pcs resource show p_neutron-l3-agent)
  • Show the current failcount for the specified resource: pcs resource failcount show <resource id> [node]  (e.g. pcs resource failcount show p_neutron-l3-agent)
  • Restart metadata-agent on a specific node: pcs resource ban p_neutron-metadata-agent node-1.domain.tld followed by pcs resource clear p_neutron-metadata-agent node-1.domain.tld

Another document on cluster recovery

This is a brief step-by-step document on restoration when 2 of 3 controllers are down (hardware malfunction, power outage, etc.). There is no quorum in Pacemaker and the cloud is down. Current state:

  • 1 controller up or 1 controller healthy.
  • 2 controllers down or not available or not functional

Choose the first controller; preferably a controller with MySQL up and running, although in most such failures mysql is down on all nodes.

  • node-1 – alive controller (the mysql database should not be corrupted); we will back this node up
  • node-2 – down or malfunctioning controller
  • node-3 – down or malfunctioning controller

Recover the cloud on the one controller

To restore node-1 as a single controller we need to:

  • Fix the Pacemaker quorum policy
  • Fix the MySQL server
  • Fix the RabbitMQ service
  • Fix the other core services (Neutron, Nova, Heat, etc.)

Pacemaker

Switch node-2 and node-3 to maintenance/unmanaged state, just to make sure they do not interfere with the restore process.

  • $ crm node maintenance node-2
  • $ crm node maintenance node-3

Check:

$ pcs cluster cib |grep maint
<nvpair id="nodes-1-maintenance" name="maintenance" value="off"/>
<nvpair id="nodes-2-maintenance" name="maintenance" value="on"/>
<nvpair id="nodes-3-maintenance" name="maintenance" value="on"/>
$ pcs status

Disable mysql on node-1; if that node is not the donor from the Pacemaker perspective, we will need to start it manually as the donor. Also disable zabbix (if it is installed): high load on MySQL can cause problems during replication or even break the replication process.

  • $ pcs resource disable clone_p_mysql
  • $ pcs resource disable p_zabbix-server
  • $ pcs status

Switch no-quorum-policy to ignore. We need this to start all of Pacemaker's services without quorum. Usually this step brings RabbitMQ back.

  • $ crm configure property no-quorum-policy=ignore
  • $ crm configure show
  • $ pcs cluster cib |grep no-quorum
  • $ pcs status

RabbitMQ

Restart the rabbitmq messaging service. In some cases this is mandatory to make sure that the service is up, running and operational.
Stop p_rabbitmq-server in Pacemaker:

$ crm resource  stop p_rabbitmq-server

Ensure that rabbitmq is down and kill it if necessary. If you are using Murano, do not kill the Murano rabbit: kill only the rabbit whose command line contains something like 'plugins_expand_dir "/var/lib/rabbitmq/mnesia/rabbit@node-1-plugins-expand"':

$ ps -ef|grep rabbit
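A possible way to narrow the output down to the main instance before killing it (the node name rabbit@node-1 is an assumption, adjust it to the actual hostname):

$ ps -ef | grep beam | grep 'rabbit@node-1-plugins-expand'
$ kill <PID of that beam process>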

Start p_rabbitmq-server via Pacemaker:

  • $ crm resource start p_rabbitmq-server
  • $ pcs status

Ensure that rabbitmq is running:

  • $ rabbitmqctl status
  • $ rabbitmqctl cluster_status

MySQL

Ensure that no MySQL is running and kill it if necessary:

$ ps -ef|grep mysql
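If anything is left, a possible way to stop it is to kill mysqld_safe first so it does not respawn mysqld, then mysqld itself; use -9 only as a last resort (PIDs come from the ps output above):

$ kill <mysqld_safe PID> <mysqld PID>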

Start MySQL as the donor for the Galera cluster:

$ export OCF_RESOURCE_INSTANCE=p_mysql
$ export OCF_ROOT=/usr/lib/ocf
$ export OCF_RESKEY_socket=/var/run/mysqld/mysqld.sock
$ export OCF_RESKEY_additional_parameters="--wsrep-new-cluster"
$ /usr/lib/ocf/resource.d/fuel/mysql-wss start

Check mysql after a couple of minutes:

$ mysql -e "show status like 'wsrep_%';"
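Variables worth a first look right after a --wsrep-new-cluster start (a convenience filter, not part of the original note); on a healthy single-node cluster wsrep_cluster_status should be Primary, wsrep_cluster_size 1 and wsrep_ready ON:

$ mysql -e "show status like 'wsrep_%';" | egrep 'wsrep_cluster_status|wsrep_cluster_size|wsrep_ready'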

Create a mysql backup just in case (if necessary):

$ mkdir /tmp/db-backup; \
 innobackupex --no-timestamp --socket=/var/run/mysqld/mysqld.sock /tmp/db-backup; \
 innobackupex --use-memory=1G --lock-wait-query-type=all --apply-log --socket=/var/run/mysqld/mysqld.sock /tmp/db-backup

You can back up just a specific list of tables, e.g. excluding zabbix; please refer to the Percona documentation on partial backups. For example, for a backup without zabbix:

$ mysql -e "SELECT CONCAT(table_schema,'.',table_name) FROM information_schema.tables WHERE table_schema not like 'zabbix'; " > db-tables.txt
$ innobackupex --tables-file=db-tables.txt --socket=/var/run/mysqld/mysqld.sock /tmp/db-backup

Ceph

We have one controller with a Ceph monitor. There is no quorum and Ceph is not operational. We need to remove the two other monitors belonging to the down controllers.
Stop the ceph monitor:

$ stop ceph-mon-all

Dump monitor map:

  • $ mkdir ./ceph-mon-dump
  • $ ceph-mon -i node-1 --extract-monmap ./ceph-mon-dump/monmap
  • $ cp ./ceph-mon-dump/monmap ./ceph-mon-dump/monmap.bak

Remove node-2 and node-3 from the monitor map:

  • $ monmaptool ./ceph-mon-dump/monmap --rm node-2
  • $ monmaptool ./ceph-mon-dump/monmap --rm node-3

Inject updated monitor map with one monitor:

  • $ ceph-mon -i node-1 --inject-monmap ./ceph-mon-dump/monmap

Starting and checking ceph

  • $ start ceph-mon-all
  • $ ceph -s

Restart the Ceph OSDs on the ceph-osd nodes if necessary.
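On Upstart-based nodes (matching the stop/start ceph-mon-all commands used above) this could look roughly like the following; the job names may differ between releases and the OSD node name is a placeholder:

$ ssh <ceph-osd node> stop ceph-osd-all
$ ssh <ceph-osd node> start ceph-osd-all
$ ceph -s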

Neutron and nova services

Now we have MySQL, RabbitMQ and Ceph running.
Check and restore the Pacemaker services on the controller, and the neutron and nova services on the computes:

  • $ pcs status
  • $ . openrc
  • $ nova service-list
  • $ neutron agent-list
  • $ neutron agent-list |grep -i xxx

After that the cloud should be operational.

Node-2 restoration

Now we have:

  • cloud – operational, up and running
  • node-1 – single controller, up and running
  • node-2 – down or malfunctioning controller; let's start with this node
  • node-3 – down or malfunctioning controller

Start node-2 if it is down. Restoring node-2 follows the same sequence, with one exception for rabbitmq: adding a second node to the RabbitMQ cluster via Pacemaker causes a RabbitMQ service interruption of 1-5 minutes. Node-2 is still unmanaged and marked as "maintenance" in Pacemaker.

Restoring MySQL on node-2

ssh to node-2. On node-2, stop mysql via the pacemaker script if it is running:

$ export OCF_RESOURCE_INSTANCE=p_mysql
$ export OCF_ROOT=/usr/lib/ocf
$ export OCF_RESKEY_socket=/var/run/mysqld/mysqld.sock
$ /usr/lib/ocf/resource.d/fuel/mysql-wss stop

Ensure that mysql is down and no mysql process exists. This is mandatory. Kill the MySQL process if it is still running or stuck. Clean up the MySQL directory and start MySQL via the pacemaker script:

$ rm -R /var/lib/mysql/*
$ /usr/lib/ocf/resource.d/fuel/mysql-wss start

In about 5 minutes the replication starts. After some time, depending on the data volume and network speed, the replication completes and mysql starts. Check /var/log/mysqld.log for details. Check the mysql cluster status once mysql has started as a Galera cluster member:

$ mysql -e "show status like 'wsrep_%';"

Return to node-1.

Adding node-2 to the pacemaker cluster

Now we need to restore Pacemaker cluster management for node-2. Make sure that rabbitMQ will not be started on the node:

$ pcs resource ban p_rabbitmq-server node-2
$ crm node ready node-2
$ pcs status

Adding node-2 to the ceph map

Stop the ceph monitor on node-2:

$ ssh node-2 stop ceph-mon-all

Re-add the monitor on node-2:

$ ceph mon remove node-2
$ ceph mon add node-2 <ip addr of node-2 from management network>
$ ceph-deploy --overwrite-conf mon create node-2

Check the ceph status:

$ ceph -s

As a result you will have two monitors in the ceph cluster.

Nova and neutron services

Check and restore (if necessary) the nova and neutron services on node-2:

   $ nova service-list
   $ neutron agent-list
   $ neutron agent-list  |grep -i xxx

Node-3 restoration

Now we have:

  • cloud – operational, up and running
  • node-1 – 1st controller, up and running
  • node-2 – 2nd controller, up and running
  • node-3 – down or malfunctioning controller

Start node-3 if it is down. Restoring node-3 is similar to restoring node-2, with the same exception for rabbitmq: adding a node to the RabbitMQ cluster via Pacemaker causes a RabbitMQ service interruption of 1-5 minutes. Node-3 is still unmanaged and marked as "maintenance" in Pacemaker. Perform all the steps used to restore node-2.

Restoring the quorum policy

Restore the quorum policy:

$ crm configure property no-quorum-policy=stop

Check the pacemaker config and status:

$ crm configure show
$ pcs cluster cib |grep no-quorum
$ pcs status

Post-recovery tasks

These tasks are assigned to a maintenance window because some service interruption is expected.

Restoring mysql management by pacemaker

This step does not cause a mysql restart. It is safe to do during business hours, but we recommend shifting it to the maintenance window.

$ pcs resource clear p_mysql node-2
$ pcs resource clear p_mysql node-3
$ pcs resource enable clone_p_mysql
$ pcs status

Starting the zabbix monitoring service

Start the zabbix service:

$ pcs resource enable p_zabbix-server

In 2-5 minutes check the zabbix status in the pacemaker dashboard:

$ pcs status


Restoring rabbitmq cluster management by pacemaker

Service interruption for 1–5 min.

Return rabbitmq to pacemaker management:

$ pcs resource clear p_rabbitmq-server node-2
$ pcs resource clear p_rabbitmq-server node-3

In 3-5 minutes check the rabbitmq cluster status with rabbitmqctl:

$ rabbitmqctl cluster_status

Check via pacemaker:

$ pcs status p_rabbitmq-server

Ensure that the cluster name is the same on the master node from the pacemaker perspective.