Repairing a failed OSD in a Ceph cluster: a worked example


Cluster setup:
1: The Ceph cluster was installed with ceph-deploy; an OSD disk failure is then simulated.


The failed OSD is repaired in two ways:

1: repair the failed OSD with ceph-deploy;

2: repair the failed OSD manually.


####### Repairing the OSD with ceph-deploy ########

1: Stop the OSD
/etc/init.d/ceph stop osd.3   
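
Note: this cluster uses the sysvinit scripts. On a systemd-managed Ceph deployment (an assumption about your install, not what is shown in this demo) the rough equivalent would be:
systemctl stop ceph-osd@3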

2: Check how the OSD disk is mounted
[root@node243 ceph]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   50G  0 disk
├─sda1   8:1    0  500M  0 part /boot
├─sda2   8:2    0    2G  0 part [SWAP]
└─sda3   8:3    0 47.5G  0 part /
sdb      8:16   0  100G  0 disk
├─sdb1   8:17   0   95G  0 part /var/lib/ceph/tmp/mnt.x4MbgI
└─sdb2   8:18   0    5G  0 part /var/lib/ceph/osd/ceph-3
sr0     11:0    1 1024M  0 rom  

3: Unmount the mounted partitions
umount /var/lib/ceph/osd/ceph-3
umount /var/lib/ceph/tmp/mnt.x4MbgI


4: Format the disk to simulate disk damage
mkfs.xfs  -f /dev/sdb

5: Check the cluster OSD status
[root@node243 ceph]# ceph osd tree
ID WEIGHT  TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
-6       0 host node01                                            
-1 0.44998 root default                                           
-2 0.09000     host ceph-deploy                                   
 0 0.09000         osd.0             up  1.00000          1.00000
-3 0.09000     host node241                                       
 1 0.09000         osd.1             up  1.00000          1.00000
-4 0.09000     host node242                                       
 2 0.09000         osd.2             up  1.00000          1.00000
-5 0.09000     host node243                                       
 3 0.09000         osd.3           down  1.00000          1.00000       <== osd.3 is now down
-7 0.09000     host node245                                       
 5 0.09000         osd.5             up  1.00000          1.00000



6: Mark the OSD out
ceph osd out osd.3      

7: Remove the OSD from the cluster
ceph osd rm osd.3  

8: Remove it from the CRUSH map
ceph osd crush rm osd.3  

9: Delete osd.3's authentication key
ceph auth del osd.3      
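
For reference, steps 6-9 can be run back to back. A minimal sketch, using the ordering most often shown in the upstream documentation (CRUSH removal and key deletion before ceph osd rm; the order above also works since the OSD is already down):
ceph osd out osd.3        # stop mapping new data to the OSD
ceph osd crush rm osd.3   # remove it from the CRUSH map
ceph auth del osd.3       # delete its cephx key
ceph osd rm osd.3         # finally remove the OSD entry itself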

10: Check the cluster OSD status again
[root@node243 ceph]# ceph osd tree
ID WEIGHT  TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
-6       0 host node01                                            
-1 0.35999 root default                                           
-2 0.09000     host ceph-deploy                                   
 0 0.09000         osd.0             up  1.00000          1.00000
-3 0.09000     host node241                                       
 1 0.09000         osd.1             up  1.00000          1.00000
-4 0.09000     host node242                                       
 2 0.09000         osd.2             up  1.00000          1.00000
-5       0     host node243                                          <== the OSD has been removed from the cluster
-7 0.09000     host node245                                       
 5 0.09000         osd.5             up  1.00000          1.00000
[root@node243 ceph]#


Start the recovery

11: Log in to the deploy/admin host (ceph-deploy)
cd /etc/ceph/

12: Prepare the disk:
ceph-deploy osd prepare node243:/dev/sdb
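
If prepare complains about leftover partitions or filesystem signatures on the disk, it is common to wipe it first from the admin host (a sketch; the host:disk syntax may vary slightly between ceph-deploy versions):
ceph-deploy disk zap node243:/dev/sdb     # destroys everything on the disk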

13: Activate the disk
ceph-deploy osd activate node243:/dev/sdb
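
Alternatively, prepare and activate can usually be combined into a single command (a sketch, same host:disk syntax as above):
ceph-deploy osd create node243:/dev/sdb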

14: Check the OSD status to verify that the OSD was added successfully
ceph osd tree
ID WEIGHT  TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
-6       0 host node01                                            
-1 0.44998 root default                                           
-2 0.09000     host ceph-deploy                                   
 0 0.09000         osd.0             up  1.00000          1.00000
-3 0.09000     host node241                                       
 1 0.09000         osd.1             up  1.00000          1.00000
-4 0.09000     host node242                                       
 2 0.09000         osd.2             up  1.00000          1.00000
-5 0.09000     host node243                                       
 3 0.09000         osd.3             up  1.00000          1.00000     # added successfully
-7 0.09000     host node245                                       
 5 0.09000         osd.5             up  1.00000          1.00000
[root@node243 ceph]#
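
Once the OSD is back up and in, the cluster backfills data onto it. A quick way to watch the recovery (standard status commands, shown here as a sketch):
ceph -s     # summary of PG states and recovery progress
ceph -w     # follow cluster events until HEALTH_OK is reached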



######### Repairing the OSD manually #########

1: Stop the OSD
/etc/init.d/ceph stop osd.3   

2: Check how the OSD disk is mounted
[root@node243 ceph]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   50G  0 disk
├─sda1   8:1    0  500M  0 part /boot
├─sda2   8:2    0    2G  0 part [SWAP]
└─sda3   8:3    0 47.5G  0 part /
sdb      8:16   0  100G  0 disk
├─sdb1   8:17   0   95G  0 part /var/lib/ceph/tmp/mnt.x4MbgI
└─sdb2   8:18   0    5G  0 part /var/lib/ceph/osd/ceph-3
sr0     11:0    1 1024M  0 rom  

3: Unmount the mounted partitions
umount /var/lib/ceph/osd/ceph-3
umount /var/lib/ceph/tmp/mnt.x4MbgI


4: Format the disk to simulate disk damage
mkfs.xfs  -f /dev/sdb

5: Check the cluster OSD status
[root@node243 ceph]# ceph osd tree
ID WEIGHT  TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
-6       0 host node01                                            
-1 0.44998 root default                                           
-2 0.09000     host ceph-deploy                                   
 0 0.09000         osd.0             up  1.00000          1.00000
-3 0.09000     host node241                                       
 1 0.09000         osd.1             up  1.00000          1.00000
-4 0.09000     host node242                                       
 2 0.09000         osd.2             up  1.00000          1.00000
-5 0.09000     host node243                                       
 3 0.09000         osd.3           down  1.00000          1.00000       <== osd.3 is now down
-7 0.09000     host node245                                       
 5 0.09000         osd.5             up  1.00000          1.00000



6: Mark the OSD out
ceph osd out osd.3      

7: Remove the OSD from the cluster
ceph osd rm osd.3  

8: Remove it from the CRUSH map
ceph osd crush rm osd.3  

9: Delete osd.3's authentication key
ceph auth del osd.3      

10: Check the cluster OSD status again
[root@node243 ceph]# ceph osd tree
ID WEIGHT  TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
-6       0 host node01                                            
-1 0.35999 root default                                           
-2 0.09000     host ceph-deploy                                   
 0 0.09000         osd.0             up  1.00000          1.00000
-3 0.09000     host node241                                       
 1 0.09000         osd.1             up  1.00000          1.00000
-4 0.09000     host node242                                       
 2 0.09000         osd.2             up  1.00000          1.00000
-5       0     host node243                                          <== the OSD has been removed from the cluster
-7 0.09000     host node245                                       
 5 0.09000         osd.5             up  1.00000          1.00000


11: Change into the OSD mount directory
[root@node243 ceph]# cd /var/lib/ceph/osd/ceph-3

12: List the mount directory; it is empty
[root@node243 ceph-3]# ls

For comparison, a healthy OSD mount directory looks like this:
[root@node242 osd]# ll ceph-2/
total 56
-rw-r--r--   1 root root  193 Aug 26 02:27 activate.monmap
-rw-r--r--   1 root root    3 Aug 26 02:27 active
-rw-r--r--   1 root root   37 Aug 26 02:27 ceph_fsid
drwxr-xr-x 226 root root 8192 Dec 17 16:53 current
-rw-r--r--   1 root root   37 Aug 26 02:27 fsid
lrwxrwxrwx   1 root root   58 Aug 26 02:27 journal -> /dev/disk/by-partuuid/6781a828-3baf-4e47-8f41-d12fa8cb0078
-rw-r--r--   1 root root   37 Aug 26 02:27 journal_uuid
-rw-------   1 root root   56 Aug 26 02:27 keyring
-rw-r--r--   1 root root   21 Aug 26 02:27 magic
-rw-r--r--   1 root root    6 Aug 26 02:27 ready
-rw-r--r--   1 root root    4 Aug 26 02:27 store_version
-rw-r--r--   1 root root   53 Aug 26 02:27 superblock
-rw-r--r--   1 root root    0 Dec 17 11:37 sysvinit
-rw-r--r--   1 root root    2 Aug 26 02:27 whoami


13: Check the new disk, then format it;
[root@node243 ~]# fdisk  -l
Disk /dev/sdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
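
The format command itself is not shown above; assuming the whole device is used as in step 4, it would be something like:
mkfs.xfs -f /dev/sdb     # wipes the disk and lays down a fresh XFS filesystem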


14: Mount the disk on the OSD data directory
mount /dev/sdb /var/lib/ceph/osd/ceph-3
mount -o remount,user_xattr /var/lib/ceph/osd/ceph-3
mount -o remount,noatime /var/lib/ceph/osd/ceph-3
Check the mount:
 mount
......
/dev/sdb on /var/lib/ceph/osd/ceph-3 type xfs (rw,noatime,attr2,inode64,noquota)
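
To make this mount persist across reboots, an /etc/fstab entry can be added (a sketch; the options mirror the mount shown above):
echo '/dev/sdb  /var/lib/ceph/osd/ceph-3  xfs  rw,noatime,inode64  0 0' >> /etc/fstab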


15: Create an OSD and obtain an OSD number
[root@node01 ~]# ceph osd create

3
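
The command returns 3 here because that ID was freed by the earlier ceph osd rm osd.3; on another cluster a different number may come back, so it can be safer to capture it in a variable (a small sketch):
OSD_ID=$(ceph osd create)     # capture the newly allocated OSD id
echo ${OSD_ID}                # use this id in the ceph-osd/auth/crush commands below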



16: Initialize the OSD data directory
[root@node01 ~]# ceph-osd -i 3 --mkfs --mkkey


17: Create the keyring file
cd /var/lib/ceph/osd/ceph-3/
[root@node243 ceph-3]# touch keyring
[root@node243 ceph-3]# ll
total 0
-rw-r--r-- 1 root root 0 Dec 21 20:15 keyring


18: Register the OSD's authentication key
ceph auth add osd.3 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-3/keyring
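
To confirm the key was registered, it can be read back from the monitors (a sketch):
ceph auth get osd.3     # should print the same key and caps as the keyring file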


19: Verify the keyring file; if it is empty, look up the key with ceph auth list and paste it into the file;
[root@node243 ceph-3]# more keyring
[osd.3]
        key = AQBk7XdWZQrnFBAAmxYgBuHYxckSX8G3GRWexQ==

20: Create a CRUSH bucket for this OSD's host (here we are only repairing the OSD, so the existing host entry does not need to change)
ceph osd crush add-bucket node243 host
Place node243 under the default root
ceph osd crush move node243 root=default



21: Place osd.3 in the node243 bucket
ceph osd crush add osd.3 1.0 host=node243

add item id 3 name 'osd.3' weight 1 at location {host=node243} to crush map
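
Note that the weight of 1.0 used here differs from the ~0.09 carried by the other OSDs (weights are normally sized to the disk's TB capacity); if desired it can be adjusted afterwards, for example:
ceph osd crush reweight osd.3 0.09     # match the weight of the other 100G OSDs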

22: Create the sysvinit marker file
touch /var/lib/ceph/osd/ceph-3/sysvinit

23: Start the OSD service
/etc/init.d/ceph start osd.3
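
A quick sanity check that the daemon actually came up (standard commands, as a sketch):
ps aux | grep ceph-osd     # an osd process for id 3 should be running
ceph osd stat              # all OSDs should report up and in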

24: Check the OSD status to verify that the OSD was added successfully
ceph osd tree
ID WEIGHT  TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
-6       0 host node01                                            
-1 0.44998 root default                                           
-2 0.09000     host ceph-deploy                                   
 0 0.09000         osd.0             up  1.00000          1.00000
-3 0.09000     host node241                                       
 1 0.09000         osd.1             up  1.00000          1.00000
-4 0.09000     host node242                                       
 2 0.09000         osd.2             up  1.00000          1.00000
-5 0.09000     host node243                                       
 3 0.09000         osd.3             up  1.00000          1.00000     # added successfully
-7 0.09000     host node245                                       
 5 0.09000         osd.5             up  1.00000          1.00000
[root@node243 ceph]#

This article comes from the "康建华" blog; please contact the author before reposting!
