Setting Up Ceph on CentOS: 5. Creating the Ceph Cluster

5. Creating the Ceph Cluster

On the admin node (admin-node)

1. On the admin node, create a directory to hold the configuration files and key pairs that ceph-deploy generates.

Switch to the ceph user:

su - ceph

Create the my-cluster directory:

mkdir my-cluster

Change into the directory (all of the commands below are run from inside it):

cd my-cluster

[ceph@admin-node ~]$ cd my-cluster/
[ceph@admin-node my-cluster]$ 
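Before going any further it is worth confirming that the ceph user on admin-node can reach the other nodes over passwordless SSH, since every ceph-deploy command depends on it (the "making sure passwordless SSH succeeds" line in the output further down is ceph-deploy doing the same check). A minimal check, assuming node1, node2 and node3 already resolve via /etc/hosts or ~/.ssh/config from the earlier preparation steps:

# each command should print the remote hostname without asking for a password
ssh -o BatchMode=yes node1 hostname
ssh -o BatchMode=yes node2 hostname
ssh -o BatchMode=yes node3 hostname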


2. Clean up any old configuration


2.1: Remove all Ceph data from the nodes:

ceph-deploy purgedata node1 node2 node3

If it fails with an error like the following:

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.25): /bin/ceph-deploy purgedata node1 node2 node3
[ceph_deploy.install][DEBUG ] Purging data from cluster ceph hosts node1 node2 node3
[node1][DEBUG ] connection detected need for sudo
sudo: sorry, you must have a tty to run sudo
[node1][DEBUG ] connected to host: node1 
[ceph_deploy][ERROR ] RuntimeError: remote connection got closed, ensure ``requiretty`` is disabled for node1

The fix is to comment out Defaults    requiretty on the affected nodes.

Edit the /etc/sudoers file on node1, node2 and node3 and comment out the Defaults    requiretty line:

vi /etc/sudoers

#Defaults    requiretty
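If you would rather not edit each node by hand, here is a rough sketch of making the same change from admin-node with sed (this is my own shortcut, not part of the original walkthrough; visudo is the safer way to edit sudoers, so verify the file afterwards). The -t flag forces a TTY, which is needed precisely because requiretty is still active at this point:

for n in node1 node2 node3; do
    # prepend a # to the "Defaults requiretty" line on each node
    ssh -t $n "sudo sed -i 's/^Defaults[[:space:]]*requiretty/#&/' /etc/sudoers"
done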

2.2: Run ceph-deploy forgetkeys

[ceph@admin-node my-cluster]$ ceph-deploy forgetkeys
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.25): /bin/ceph-deploy forgetkeys

Note: the following command would also remove the Ceph packages themselves:

ceph-deploy purge node1 node2 node3

(We only want to clear the configuration, not the packages, so do not run this unless something has gone badly wrong; after running it you would have to reinstall Ceph.)


3. Create the cluster

ceph-deploy new node1

[ceph@admin-node my-cluster]$ ceph-deploy new node1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.25): /bin/ceph-deploy new node1
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[node1][DEBUG ] connected to host: admin-node 
[node1][INFO  ] Running command: ssh -CT -o BatchMode=yes node1
[node1][DEBUG ] connection detected need for sudo
[node1][DEBUG ] connected to host: node1 
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[node1][INFO  ] Running command: sudo /usr/sbin/ip link show
[node1][INFO  ] Running command: sudo /usr/sbin/ip addr show
[node1][DEBUG ] IP addresses found: ['192.168.200.41', '10.0.0.41', '192.168.100.41']
[ceph_deploy.new][DEBUG ] Resolving host node1
[ceph_deploy.new][DEBUG ] Monitor node1 at 10.0.0.41
[ceph_deploy.new][DEBUG ] Monitor initial members are ['node1']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['10.0.0.41']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
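At this point ceph-deploy has written ceph.conf and ceph.mon.keyring into the my-cluster directory. A common optional tweak before deploying the monitors is to add a couple of settings to ceph.conf (the generated file contains only a [global] section, so appended lines land there). The values below are assumptions on my part, not part of the original walkthrough: two data replicas, and 10.0.0.0/24 as the public network to match the monitor address 10.0.0.41 in the output above; adjust them to your environment:

# append optional settings to the [global] section of the generated ceph.conf
cat >> ceph.conf <<'EOF'
osd pool default size = 2
public network = 10.0.0.0/24
EOF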

We do not need to run the installation command ceph-deploy install admin-node node1 node2 node3,

because Ceph was already installed separately on node1, node2 and node3 in the earlier sections.

You can still run it per node to check whether any packages are missing; it will end with an error, simply because the Ceph packages are already installed:

ceph-deploy install node1
ceph-deploy install node2
ceph-deploy install node3

The error looks like this:

[node3][DEBUG ] Complete!
[node3][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph'

Near the end it does print "Complete!", so nothing is actually wrong.
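As a final sanity check (my own addition, not part of the original run), you can confirm that the Ceph binaries really are present on every node; the exact version string depends on the repository configured earlier:

# print the installed Ceph version on each node
for n in node1 node2 node3; do
    ssh $n ceph --version
done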
