Setting up Ceph on CentOS

VII. Preparing the OSDs


1. From the admin node (as the ceph user, in the my-cluster directory), run ceph-deploy to prepare the OSDs:
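
The prepare step expects the target directory to already exist on each data node. If /ceph/osd was not created in an earlier section (an assumption here), something along these lines, run from the admin node, would set it up:

ssh node2 "sudo mkdir -p /ceph/osd"
ssh node3 "sudo mkdir -p /ceph/osd"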

ceph-deploy osd prepare node2:/ceph/osd node3:/ceph/osd

...
...
[node2][INFO  ] checking OSD status...
[node2][DEBUG ] find the location of an executable
[node2][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node2 is now ready for osd use.
...
...
[node3][INFO  ] checking OSD status...
[node3][DEBUG ] find the location of an executable
[node3][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node3 is now ready for osd use.

2. Change the ownership of the /ceph/osd/ directory on node1, node2, and node3, otherwise activating the OSDs will fail with "Permission denied":

chown ceph:ceph /ceph/osd/
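
This has to be run on each of those nodes. As a sketch, assuming the ceph user has passwordless ssh and passwordless sudo on every node (the same prerequisites ceph-deploy relies on), it can be done in one pass from the admin node:

for host in node1 node2 node3; do
    ssh $host "sudo chown ceph:ceph /ceph/osd/"
done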

3. Activate the OSDs (back on the admin-node):

ceph-deploy osd activate node2:/ceph/osd node3:/ceph/osd

...
...
[node2][INFO  ] checking OSD status...
[node2][DEBUG ] find the location of an executable
[node2][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[node2][INFO  ] Running command: sudo systemctl enable ceph.target
...
...
[node3][INFO  ] checking OSD status...
[node3][DEBUG ] find the location of an executable
[node3][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[node3][INFO  ] Running command: sudo systemctl enable ceph.target
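
Once activation finishes, a quick sanity check is to list the OSD tree from a node that already has the client.admin keyring (step 4 below pushes it to every node); both OSDs should show as up:

ceph osd tree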

4. Use ceph-deploy to copy the configuration file and admin keyring to each node:

ceph-deploy admin admin-node node1 node2 node3

[ceph@admin-node my-cluster]$ ceph-deploy admin admin-node node1 node2 node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /bin/ceph-deploy admin admin-node node1 node2 node3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['admin-node', 'node1', 'node2', 'node3']
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to admin-node
[admin-node][DEBUG ] connection detected need for sudo
[admin-node][DEBUG ] connected to host: admin-node 
[admin-node][DEBUG ] detect platform information from remote host
[admin-node][DEBUG ] detect machine type
[admin-node][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node1
[node1][DEBUG ] connection detected need for sudo
[node1][DEBUG ] connected to host: node1 
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node2
[node2][DEBUG ] connection detected need for sudo
[node2][DEBUG ] connected to host: node2 
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node3
[node3][DEBUG ] connection detected need for sudo
[node3][DEBUG ] connected to host: node3 
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
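
If running ceph commands as a non-root user on a node then fails with a permission error on the keyring, the usual fix from the upstream quick start is to make the keyring readable on that node:

sudo chmod +r /etc/ceph/ceph.client.admin.keyring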

5. Check the cluster status (the output below is from node2, but any node that now has the admin keyring will do):

[root@node2 ~]# ceph -s
    cluster 46ac86e8-1efe-403c-b735-587f9d76a905
     health HEALTH_WARN
            64 pgs degraded
            64 pgs stuck degraded
            64 pgs stuck unclean
            64 pgs stuck undersized
            64 pgs undersized
     monmap e1: 1 mons at {node1=10.0.0.41:6789/0}
            election epoch 3, quorum 0 node1
     osdmap e9: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v22: 64 pgs, 1 pools, 0 bytes data, 0 objects
            10277 MB used, 26561 MB / 40960 MB avail
                  64 active+undersized+degraded

The deployment is complete once all placement groups reach the active+clean state.

However, you will find that no matter how long you wait, active+clean never appears here.

Reason:

The cluster's ceph configuration defaults to 3 replicas, but we currently have only two OSDs, so the placement groups can never be fully replicated. I will add a third OSD later.
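
You can confirm the replica count on the one existing pool. On Jewel the default pool is named rbd, which is presumably the single pool shown in the pgmap above:

ceph osd pool get rbd size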

____________________________________________________________________________________________________________________________________

Alternatively, you can choose to stay with just two OSDs by modifying the configuration file.

The configuration file is in the my-cluster directory on the admin node:

vi ceph.conf

Add the following under [global]:
osd pool default size = 2
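
Note that this default only applies to pools created after the change. To push the updated ceph.conf to the other nodes and adjust the pool that already exists, something along these lines would be needed (rbd being the Jewel default pool name assumed above):

ceph-deploy --overwrite-conf config push node1 node2 node3
ceph osd pool set rbd size 2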

Of course, you can also skip this and keep following along with me; I will add a third OSD later.

