Ceph2

A collection of Ceph recipes.

==Removing an OSD from the cluster==
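Before removing anything, it is worth confirming which OSD has actually failed. A minimal check (not part of the original recipe):

<PRE>
# The failed OSD is normally reported as "down" in the OSD tree
ceph osd tree

# Health warnings list the affected OSDs and placement groups explicitly
ceph health detail
</PRE>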

If 123 is the number of the OSD to replace, node-2 is the name of the node, and sdd is the name of the disk, then:

1. Mark this OSD out of the Ceph cluster:

<PRE>
ceph osd out osd.123
</PRE>

2. Remove the failed OSD from the CRUSH map:

<PRE>
ceph osd crush rm osd.123
</PRE>

3. Delete Ceph authentication keys for the OSD:

<PRE>
ceph auth del osd.123
</PRE>

4. Remove the OSD from the Ceph cluster:

<PRE>
ceph osd rm osd.123
</PRE>
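At this point osd.123 should be gone from the cluster map. A quick sanity check, assuming the same example id (a sketch, not part of the original steps):

<PRE>
# osd.123 should no longer appear in the OSD tree
ceph osd tree

# Cluster status: the osdmap should now show one OSD fewer
ceph -s
</PRE>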

Please keep in mind that whenever an OSD is unavailable your cluster health will not be OK, and the cluster will continue to perform recovery, which is normal Ceph behaviour in this situation.

5. As long as the server is hot-swappable (which it should be), just put the new drive in and perform the following tasks. The examples below use node-2; keep in mind you will need to use the name of your own node:

<PRE>
ceph-deploy disk list node-2
</PRE>
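The ceph-deploy commands above are run from the admin (deploy) node. To cross-check on the OSD node itself which device the replacement drive received, a plain block-device listing is enough (an optional extra, not from the original page):

<PRE>
# Run on node-2: the new drive should show up without partitions or a filesystem
lsblk
</PRE>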

6. Before adding the disk to the Ceph cluster, perform a disk zap. Watch the output of the disk list for the device that gets assigned; that is the device you will run the osd create command on. For example, if the disk list output contains [node-2][DEBUG ] /dev/sdd other, btrfs, then the disk zap will be the following:

<PRE>
ceph-deploy disk zap node-2:sdd
</PRE>

7. Create an OSD on the disk, and Ceph will add it as osd.123:

<PRE>
ceph-deploy --overwrite-conf osd create node-2:sdd
</PRE>
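Note that ceph-deploy is deprecated on recent Ceph releases. If your cluster manages OSDs with ceph-volume instead, a rough equivalent of the zap/create steps, run directly on the OSD node, would look like the sketch below (an approximation only; check the documentation for your release):

<PRE>
# Wipe the replacement disk, destroying any existing partitions and data on it
ceph-volume lvm zap /dev/sdd --destroy

# Create and activate a new OSD on the wiped disk
ceph-volume lvm create --data /dev/sdd
</PRE>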

==Additional Information==

Once an OSD is created, Ceph will run a recovery operation and start moving placement groups from secondary OSDs to the new OSD. Again, the recovery operation will take a while depending on the size of your cluster; once it is completed, your Ceph cluster will be HEALTH_OK.
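A couple of ways to follow the recovery until the cluster returns to HEALTH_OK (a sketch, not part of the original text):

<PRE>
# One-shot status: health plus recovery/backfill progress in the PG summary
ceph -s

# Stream cluster status and log messages continuously while recovery runs
ceph -w
</PRE>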

When a new host or disk is added to a Ceph cluster, CRUSH starts a rebalancing operation, under which it moves data from existing hosts/disks to the new host/disk. Rebalancing is performed to keep all disks equally utilized, which improves cluster performance and keeps the cluster healthy.
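To see whether the disks really end up evenly utilized once rebalancing finishes, per-OSD usage can be inspected (a small addition, not from the original page):

<PRE>
# Per-OSD utilization, weight and PG count; the "tree" variant groups output by host
ceph osd df
ceph osd df tree
</PRE>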