Ceph2
A collection of Ceph recipes
Removing an OSD from the cluster
Placeholders used in the commands below:
123 - the number of the OSD to remove
node-2 - the hostname of the node that hosts this OSD
sdd - the block device
Mark the OSD out of the Ceph cluster
ceph osd out osd.123
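A quick way to confirm that the OSD is now marked out and that data is being migrated off it (the exact output format varies between Ceph releases):
ceph osd tree | grep osd.123
ceph -w
In the tree output the REWEIGHT column drops to 0 for an OSD that is out; ceph -w follows the cluster log and shows recovery progress.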
Remove the failed OSD from the CRUSH map
ceph osd crush rm osd.123
Delete the authentication keys for the OSD
ceph auth del osd.123
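The ceph osd rm step below will refuse to remove an OSD whose daemon is still running (it reports that the OSD is still up). If the daemon on node-2 is still alive, stop it first; on a systemd-based node this is typically done with the standard ceph-osd@<id> unit (assuming SSH access to node-2):
ssh node-2 systemctl stop ceph-osd@123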
Remove the OSD from the Ceph cluster
ceph osd rm osd.123
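To double-check that osd.123 is gone from both the OSD map and the auth database (the grep should return nothing once the key has been deleted):
ceph osd tree
ceph auth list | grep osd.123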
Keep in mind that whenever an OSD is unavailable, the cluster health will not be OK and Ceph will keep running recovery, which is normal behaviour in this situation.
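Recovery progress and the current health state can be watched with the standard status commands, for example:
ceph -s
ceph health detail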
Replacing a disk
List the disks (after the replacement) like this:
ceph-deploy disk list node-2
Before adding the disk to the Ceph cluster, zap (wipe) it.
Before doing so, check which device name the new disk was assigned (sdd or another letter); see the check after the command below.
ceph-deploy disk zap node-2:sdd
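One way to confirm which device name the replaced disk received, and that the zap left it without partitions (a sketch, assuming SSH access to node-2):
ssh node-2 lsblk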
Create an OSD on the disk and add it to the Ceph cluster as osd.123
ceph-deploy --overwrite-conf osd create node-2:sdd
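After the create step the new OSD should appear in the CRUSH tree as up/in, and the cluster status will show backfill/recovery while data moves onto it:
ceph osd tree
ceph -s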
Additional Information
Once an OSD is created, Ceph runs a recovery operation and starts moving placement groups from secondary OSDs to the new OSD. The recovery will take a while depending on the size of your cluster; once it completes, the cluster will be HEALTH_OK again.
When a new host or disk is added to a Ceph cluster, CRUSH starts a rebalancing operation, during which it moves data from the existing hosts/disks to the new host/disk. Rebalancing keeps all disks equally utilized, which improves cluster performance and keeps the cluster healthy.
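Per-OSD utilization can be checked to see how evenly the data ends up spread across disks, for example (available on reasonably recent releases):
ceph osd df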