Building a Ceph Cluster: A Complete Walkthrough

    This article walks through installing and configuring a Ceph storage cluster from an admin node running ceph-deploy. The example uses six nodes, 3 monitor nodes and 3 OSD nodes, running Ceph Kraken (11.2.1).

1. Topology Overview

The cluster consists of three monitor nodes (ceph-mon-0 through ceph-mon-2, 192.168.1.10-12) and three OSD nodes (ceph-osd-0 through ceph-osd-2, 192.168.1.13-15). ceph-mon-0 doubles as the admin node that runs ceph-deploy.

2. Time Synchronization


The most important prerequisite for a cluster is synchronized clocks, so the first step is configuring time sync between the nodes. This example uses chrony as the time-sync daemon; ntp works just as well.


[root@ceph-mon-0 ~]# vi /etc/chrony.conf
server ntp6.aliyun.com iburst
allow 192.168.1.0/24
[root@ceph-mon-0 ~]# systemctl start chronyd
[root@ceph-mon-0 ~]# systemctl enable chronyd

On the other nodes:

[root@ceph-mon-1 ~]# vi /etc/chrony.conf
server 192.168.1.10 iburst
[root@ceph-mon-1 ~]# systemctl start chronyd
[root@ceph-mon-1 ~]# systemctl enable chronyd
[root@ceph-mon-1 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample 
===============================================================================
^* ceph-mon-0 10 6 377 56 -7014ns[-9275ns] +/- 131us

3. Yum Repositories Required by Ceph

Install software via yum wherever possible. The repositories below are what this setup needs; configure them on every node.

3.1. Base repo

# cat /etc/yum.repos.d/Centos_base.repo 
[base]
name=aliyun base
baseurl=https://mirrors.aliyun.com/centos/7/os/x86_64/
enabled=1
gpgcheck=0

3.2. EPEL repo

# cat /etc/yum.repos.d/epel.repo 
[epel]
name=aliyun epel
baseurl=https://mirrors.aliyun.com/epel/7Server/x86_64/
enabled=1
gpgcheck=0

3.3. Extras repo

[ceph@ceph-mon-0 ~]$ cat /etc/yum.repos.d/extras.repo
[extra]
name=centos extra
baseurl=https://mirrors.aliyun.com/centos/7/extras/x86_64/
enabled=1
gpgcheck=0

3.4. Ceph repo

# cat /etc/yum.repos.d/ceph.repo 
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-kraken/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-kraken/el7/x86_64
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-kraken/el7/SRPMS
enabled=0
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

4. Install ceph-deploy on the Admin Node

Before installing, update the system:


## Update all nodes
# yum update
## Install ceph-deploy on the admin node
[root@ceph-mon-0 yum.repos.d]# yum install ceph-deploy -y
## Install openssh-server on all nodes
# yum install openssh-server -y

5. User and Privilege Configuration on All Nodes

## Create a ceph user on every node and grant it passwordless sudo
# useradd ceph
# echo ceph | passwd ceph --stdin
# echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph
# echo 'Defaults:ceph  !requiretty' >> /etc/sudoers
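The same user setup has to be repeated on every node. A minimal sketch that prints the command sequence per remote host so it can be reviewed first; the host list is an assumption matching this guide's /etc/hosts, and each printed line would be run remotely via ssh as root:

```shell
# Remote hosts that still need the ceph user (the admin node is already done).
hosts="ceph-mon-1 ceph-mon-2 ceph-osd-0 ceph-osd-1 ceph-osd-2"
for h in $hosts; do
  # Printed only; execute remotely with: ssh root@$h "<command>"
  echo "[$h] useradd ceph && echo ceph | passwd ceph --stdin && echo 'ceph ALL = (root) NOPASSWD:ALL' > /etc/sudoers.d/ceph"
done
```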

6. Configure Local Name Resolution

If DNS is not available, configure the hosts file so the nodes can resolve one another by name.
# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.10 ceph-mon-0
192.168.1.11 ceph-mon-1
192.168.1.12 ceph-mon-2
192.168.1.13 ceph-osd-0
192.168.1.14 ceph-osd-1
192.168.1.15 ceph-osd-2
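Since the addresses are consecutive, the six entries can also be generated rather than typed by hand; a small sketch that prints the same lines:

```shell
# Print the cluster's /etc/hosts entries; once the output looks right,
# append it on each node with: ... | sudo tee -a /etc/hosts
i=10
for name in ceph-mon-0 ceph-mon-1 ceph-mon-2 ceph-osd-0 ceph-osd-1 ceph-osd-2; do
  echo "192.168.1.$i $name"
  i=$((i + 1))
done
```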

7. Configure Key-Based SSH

Clusters generally need passwordless, key-based SSH between nodes. Configure it as follows:
## Set up key-based access from the admin node to the other nodes
## In this example, from 192.168.1.10 to every other node

7.1. Set up the ceph user's key-based login

# su - ceph
[ceph@ceph-mon-0 ~]$ ssh-keygen
Accept all the defaults.
## Copy the public key to each of the other nodes
[ceph@ceph-mon-0 ~]$ ssh-copy-id -i /home/ceph/.ssh/id_rsa.pub ceph@ceph-mon-1
[ceph@ceph-mon-0 ~]$ ssh-copy-id -i /home/ceph/.ssh/id_rsa.pub ceph@ceph-mon-2
[ceph@ceph-mon-0 ~]$ ssh-copy-id -i /home/ceph/.ssh/id_rsa.pub ceph@ceph-osd-0
[ceph@ceph-mon-0 ~]$ ssh-copy-id -i /home/ceph/.ssh/id_rsa.pub ceph@ceph-osd-1
[ceph@ceph-mon-0 ~]$ ssh-copy-id -i /home/ceph/.ssh/id_rsa.pub ceph@ceph-osd-2
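The five ssh-copy-id calls follow one pattern and can be collapsed into a loop; a sketch that only prints each command as a dry run (drop the leading echo to actually copy the keys):

```shell
for node in ceph-mon-1 ceph-mon-2 ceph-osd-0 ceph-osd-1 ceph-osd-2; do
  # echo makes this a dry run; remove it to execute
  echo ssh-copy-id -i /home/ceph/.ssh/id_rsa.pub "ceph@$node"
done
```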

7.2. Configure host aliases

[ceph@ceph-mon-0 .ssh]$ cat /home/ceph/.ssh/config 
Host mon-1
Hostname ceph-mon-1
User ceph
Host mon-2
Hostname ceph-mon-2
User ceph
Host osd-0
Hostname ceph-osd-0
User ceph
Host osd-1
Hostname ceph-osd-1
User ceph
Host osd-2
Hostname ceph-osd-2
User ceph
[ceph@ceph-mon-0 .ssh]$ chmod 600 /home/ceph/.ssh/config
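The config file above is equally repetitive and can be generated from the alias list; a sketch that writes the same stanzas to stdout (redirect into /home/ceph/.ssh/config after checking the output):

```shell
# Each alias maps "Host <short>" to "Hostname ceph-<short>" with user ceph.
for h in mon-1 mon-2 osd-0 osd-1 osd-2; do
  printf 'Host %s\nHostname ceph-%s\nUser ceph\n' "$h" "$h"
done
```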

8. Create the Ceph Cluster on the Admin Node

8.1. Create a working directory on the admin node

[ceph@ceph-mon-0 ~]$ mkdir /home/ceph/ceph-cluster

8.2. Create the cluster

Syntax: ceph-deploy new {initial-monitor-node(s)}
In this example, the three monitor nodes are:
[ceph@ceph-mon-0 ceph-cluster]$ ceph-deploy new ceph-mon-0 ceph-mon-1 ceph-mon-2

8.3. Edit the ceph.conf file

[ceph@ceph-mon-0 ceph-cluster]$ vi /home/ceph/ceph-cluster/ceph.conf
## Default number of replicas per pool (one replica per OSD node here)
osd_pool_default_size = 3
## Allow pools to be deleted from the cluster
[mon]
mon_allow_pool_delete = true
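To be explicit about where each setting lands: osd_pool_default_size belongs in the [global] section that ceph-deploy generated (next to fsid and mon_host, which stay untouched), while mon_allow_pool_delete goes under a [mon] section. A sketch that prints the fragment to append, assuming the generated file contains only a [global] section, as ceph-deploy's default output does:

```shell
# The lines added to ceph.conf; review, then append with
#   printf '%s\n' "$frag" >> /home/ceph/ceph-cluster/ceph.conf
frag='osd_pool_default_size = 3

[mon]
mon_allow_pool_delete = true'
printf '%s\n' "$frag"
```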

9. Install Ceph on All Nodes

## Run from the admin node to install ceph on every cluster node
Syntax: ceph-deploy install {ceph-node} [{ceph-node} ...]

In this example:

[ceph@ceph-mon-0 ~]$ ceph-deploy install ceph-mon-0 ceph-mon-1 ceph-mon-2 ceph-osd-0 ceph-osd-1 ceph-osd-2
## The install may fail due to network issues; if so, install the packages manually on each node
# yum install ceph ceph-radosgw -y
[ceph@ceph-mon-0 ~]$ rpm -qa | grep ceph
ceph-selinux-11.2.1-0.el7.x86_64
python-cephfs-11.2.1-0.el7.x86_64
ceph-common-11.2.1-0.el7.x86_64
ceph-mds-11.2.1-0.el7.x86_64
ceph-radosgw-11.2.1-0.el7.x86_64
libcephfs2-11.2.1-0.el7.x86_64
ceph-base-11.2.1-0.el7.x86_64
ceph-mgr-11.2.1-0.el7.x86_64
ceph-11.2.1-0.el7.x86_64
ceph-deploy-1.5.38-0.noarch
ceph-mon-11.2.1-0.el7.x86_64
ceph-osd-11.2.1-0.el7.x86_64
ceph-release-1-1.el7.noarch

10. Initialize the Monitor(s) and Collect Keys

## Configure the initial monitor(s) and gather all keys:
[ceph@ceph-mon-0 ceph-cluster]$ pwd
/home/ceph/ceph-cluster
[ceph@ceph-mon-0 ceph-cluster]$ ceph-deploy mon create-initial
[ceph@ceph-mon-0 ceph-cluster]$ ls -l
total 64
-rw------- 1 ceph ceph 71 Feb 27 15:50 ceph.bootstrap-mds.keyring
-rw------- 1 ceph ceph 71 Feb 27 15:50 ceph.bootstrap-mgr.keyring
-rw------- 1 ceph ceph 71 Feb 27 15:50 ceph.bootstrap-osd.keyring
-rw------- 1 ceph ceph 71 Feb 27 15:50 ceph.bootstrap-rgw.keyring
-rw------- 1 ceph ceph 63 Feb 27 15:50 ceph.client.admin.keyring
-rw-rw-r-- 1 ceph ceph 312 Feb 26 16:52 ceph.conf
-rw-rw-r-- 1 ceph ceph 34753 Feb 27 15:50 ceph-deploy-ceph.log
-rw------- 1 ceph ceph 73 Feb 26 16:44 ceph.mon.keyring
[ceph@ceph-mon-0 ceph-cluster]$

11. Create Data Directories on the OSD Nodes

## From the admin node, log in to each osd node and create its data storage directory
[ceph@ceph-mon-0 ceph-cluster]$ ssh ceph-osd-0 
[ceph@ceph-osd-0 ~]$ sudo mkdir /var/local/osd0
[ceph@ceph-osd-0 ~]$ sudo chmod 777 -R /var/local/osd0/
[ceph@ceph-osd-0 ~]$ exit
[ceph@ceph-mon-0 ceph-cluster]$ ssh ceph-osd-1
[ceph@ceph-osd-1 ~]$ sudo mkdir /var/local/osd1
[ceph@ceph-osd-1 ~]$ sudo chmod 777 -R /var/local/osd1
[ceph@ceph-osd-1 ~]$ exit
[ceph@ceph-mon-0 ceph-cluster]$ ssh ceph-osd-2
[ceph@ceph-osd-2 ~]$ sudo mkdir /var/local/osd2
[ceph@ceph-osd-2 ~]$ sudo chmod 777 -R /var/local/osd2
[ceph@ceph-osd-2 ~]$ exit
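These three near-identical logins can also be driven from a loop on the admin node; a sketch that prints each remote command (run the printed lines, or pipe them to sh, once key-based ssh is in place):

```shell
for i in 0 1 2; do
  # Printed only; each line is a complete command to run as the ceph user
  echo "ssh ceph-osd-$i sudo mkdir -p /var/local/osd$i"
  echo "ssh ceph-osd-$i sudo chmod 777 -R /var/local/osd$i"
done
```

chmod 777 matches what this guide uses; chowning the directories to the ceph daemon user is a tighter alternative.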

12. Prepare and Activate the OSDs from the Admin Node

## Run on the admin node to prepare each osd
[ceph@ceph-mon-0 ceph-cluster]$ pwd
/home/ceph/ceph-cluster
[ceph@ceph-mon-0 ceph-cluster]$ ceph-deploy osd prepare ceph-osd-0:/var/local/osd0 ceph-osd-1:/var/local/osd1 ceph-osd-2:/var/local/osd2
## Then, from the admin node, activate each osd
[ceph@ceph-mon-0 ceph-cluster]$ pwd
/home/ceph/ceph-cluster
[ceph@ceph-mon-0 ceph-cluster]$ ceph-deploy osd activate ceph-osd-0:/var/local/osd0 ceph-osd-1:/var/local/osd1 ceph-osd-2:/var/local/osd2

13. Copy the Config File and Admin Key to All Nodes

## From the admin node, push the config file and admin key to the admin node and every Ceph node
[ceph@ceph-mon-0 ceph-cluster]$ pwd
/home/ceph/ceph-cluster
[ceph@ceph-mon-0 ceph-cluster]$ ceph-deploy admin ceph-mon-0 ceph-mon-1 ceph-mon-2 ceph-osd-0 ceph-osd-1 ceph-osd-2

14. Fix Keyring Permissions

## On every node, make ceph.client.admin.keyring readable so ceph commands work without sudo

$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring


15. Finally, Check the Cluster's Health

[ceph@ceph-mon-0 ceph-cluster]$ ceph health
HEALTH_OK

16. Done!
