7. Prepare the OSDs
1. From the admin node (as the ceph user, in the my-cluster directory), run ceph-deploy to prepare the OSDs.
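The /ceph/osd data directory should already exist on node2 and node3 before prepare runs, since ceph-deploy does not create it for you. A minimal sketch of creating it from the admin node (assuming passwordless SSH and sudo, which ceph-deploy already requires):

# run from the admin node; creates the OSD data directory on each OSD host
ssh node2 "sudo mkdir -p /ceph/osd"
ssh node3 "sudo mkdir -p /ceph/osd"

Then prepare the OSDs: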
ceph-deploy osd prepare node2:/ceph/osd node3:/ceph/osd
...
...
[node2][INFO ] checking OSD status...
[node2][DEBUG ] find the location of an executable
[node2][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node2 is now ready for osd use.
...
...
[node3][INFO ] checking OSD status...
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node3 is now ready for osd use.
2. On node2 and node3, change the ownership of the OSD data directory to the ceph user (in this release the OSD daemon runs as ceph, so activation will fail if it cannot write to the directory):
chown ceph:ceph /ceph/osd/
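A sketch of running this on both nodes at once from the admin node (again assuming passwordless SSH and sudo):

# adjust ownership of the OSD data directory on each OSD host
ssh node2 "sudo chown -R ceph:ceph /ceph/osd"
ssh node3 "sudo chown -R ceph:ceph /ceph/osd"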
3. Activate the OSDs from the admin node:
ceph-deploy osd activate node2:/ceph/osd node3:/ceph/osd
...
...
[node2][INFO ] checking OSD status...
[node2][DEBUG ] find the location of an executable
[node2][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[node2][INFO ] Running command: sudo systemctl enable ceph.target
...
...
[node3][INFO ] checking OSD status...
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[node3][INFO ] Running command: sudo systemctl enable ceph.target
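At this point the admin keyring has not been pushed to the nodes yet, but ceph-deploy mon create-initial already gathered a copy of it into the my-cluster directory, so you can check from the admin node that both OSDs registered; a sketch, run from my-cluster (assuming the ceph CLI is installed there):

# point the client at the local conf and keyring gathered by ceph-deploy
ceph -c ceph.conf -k ceph.client.admin.keyring osd stat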
4. Copy the configuration file and the admin keyring to the admin node and the Ceph nodes, so that you can run ceph commands without specifying the monitor address and keyring every time:
[ceph@admin-node my-cluster]$ ceph-deploy admin admin-node node1 node2 node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /bin/ceph-deploy admin admin-node node1 node2 node3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf :
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : ['admin-node', 'node1', 'node2', 'node3']
[ceph_deploy.cli][INFO ] func :
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to admin-node
[admin-node][DEBUG ] connection detected need for sudo
[admin-node][DEBUG ] connected to host: admin-node
[admin-node][DEBUG ] detect platform information from remote host
[admin-node][DEBUG ] detect machine type
[admin-node][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node1
[node1][DEBUG ] connection detected need for sudo
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node2
[node2][DEBUG ] connection detected need for sudo
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node3
[node3][DEBUG ] connection detected need for sudo
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
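On each node where you plan to run ceph commands as a non-root user, also make sure the pushed keyring is readable (a step from the standard quick start; without it the ceph CLI reports an authentication error):

sudo chmod +r /etc/ceph/ceph.client.admin.keyring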
5. Check the cluster status (here run as root on node2):
[root@node2 ~]# ceph -s
cluster 46ac86e8-1efe-403c-b735-587f9d76a905
health HEALTH_WARN
64 pgs degraded
64 pgs stuck degraded
64 pgs stuck unclean
64 pgs stuck undersized
64 pgs undersized
monmap e1: 1 mons at {node1=10.0.0.41:6789/0}
election epoch 3, quorum 0 node1
osdmap e9: 2 osds: 2 up, 2 in
flags sortbitwise,require_jewel_osds
pgmap v22: 64 pgs, 1 pools, 0 bytes data, 0 objects
10277 MB used, 26561 MB / 40960 MB avail
64 active+undersized+degraded
However, no matter how long you wait, the placement groups will never reach the active+clean state.
Reason:
The cluster configuration defaults to 3 replicas per pool, but we only have two OSDs right now, so the placement groups can never be fully replicated. I will add a third OSD later.
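You can confirm the replica count behind the warning by querying the default rbd pool (the single pool shown in the pgmap above); with stock settings this reports size: 3:

ceph osd pool get rbd size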
Alternatively, you can keep just two replicas by editing the configuration file.
The file lives in the my-cluster directory on the admin node:
vi ceph.conf
Under the [global] section, add:
osd pool default size = 2
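Note that osd pool default size only affects pools created after the change. If you take this route, a sketch of the follow-up steps (run from my-cluster on the admin node; the pool name rbd matches the default pool shown above): push the updated ceph.conf to every node, then shrink the existing pool explicitly.

# distribute the modified ceph.conf to all nodes
ceph-deploy --overwrite-conf config push admin-node node1 node2 node3
# the default only applies to new pools, so resize the existing one as well
ceph osd pool set rbd size 2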