Why study how to build a Ceph cluster? Hands-on deployment is the most direct way to get familiar with how its components fit together.
This lab uses three virtual machines created with VMware Workstation.
1.1、Disable the firewalld firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld &> /dev/null
setenforce 0
cat > /etc/selinux/config << EOF
SELINUX=disabled
SELINUXTYPE=targeted
EOF
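setenforce 0 switches SELinux to permissive immediately, while the config change only takes full effect after the next reboot; an optional check, not part of the original steps:
getenforce                            # expected: Permissive until the node is rebooted
grep ^SELINUX= /etc/selinux/config    # expected: SELINUX=disabled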
1.2、Configure hostname resolution
cat > /etc/hosts << EOF
10.0.0.1 ceph-node1
10.0.0.2 ceph-node2
10.0.0.3 ceph-node3
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
EOF
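Name resolution can be verified on each node before going further (an optional check, not in the original walkthrough):
getent hosts ceph-node1 ceph-node2 ceph-node3    # should print the 10.0.0.x addresses from /etc/hosts
ping -c 1 ceph-node2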
1.3、Switch all nodes to the Aliyun yum mirrors and add the EPEL repository
1.3.1. Back up CentOS-Base.repo
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
1.3.2. Download the replacement repo file
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
1.3.3. Add the EPEL repository
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
1.3.4. Write ceph.repo by hand
cat > /etc/yum.repos.d/ceph.repo << EOF
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
gpgcheck=0
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
gpgcheck=0
priority=1
EOF
1.3.5. Clean the yum cache
yum clean all
1.3.6. Rebuild the local yum cache
yum makecache
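After the cache is rebuilt, the new repositories should be visible to yum; a quick sanity check (optional):
yum repolist | grep -E 'ceph|epel'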
1.4、Configure NTP
yum install ntp ntpdate -y
ntpdate pool.ntp.org
systemctl restart ntpdate.service
systemctl restart ntpd.service
systemctl enable ntpd.service
systemctl enable ntpdate.service
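Consistent time matters for monitor quorum, so it is worth confirming that ntpd has picked a time source (an optional check; the line marked with * is the currently selected peer):
ntpq -p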
1.5、Create the Ceph cluster
1.5.1、Initialize the cluster
mkdir /etc/ceph
cd /etc/ceph
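The walkthrough assumes the ceph-deploy tool is already present on the admin node. If it is not, it can typically be installed from the ceph-noarch repository configured above (an assumed prerequisite, not shown in the original steps):
yum install -y ceph-deploy python-setuptools    # assumed prerequisite for the ceph-deploy commands below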
ceph-deploy new --cluster-network 10.0.0.0/24 --public-network 192.168.0.0/24 ceph-node1
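ceph-deploy new writes the initial ceph.conf, a monitor keyring and a deployment log into the current directory, which is why the commands are run from /etc/ceph where the ceph CLI expects its config. A quick way to confirm it ran (optional check):
ls /etc/ceph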
1.5.2、Install the Ceph binary packages
ceph-deploy install ceph-node1
[root@ceph-node1 ceph]# ceph -v
ceph version 14.2.16 (762032d6f509d5e7ee7dc008d80fe9c87086603c) nautilus (stable)
1.5.3、Initialize and create the first monitor
ceph-deploy mon create-initial
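mon create-initial also gathers the admin and bootstrap keyrings into the working directory, so the ceph CLI works from this point on; a quick check of the monitor (optional, not in the original walkthrough):
ceph mon stat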
1.5.4、Create three OSDs and add them to the cluster
fdisk /dev/sdb
n -> p -> 1 -> Enter -> Enter -> w    (create one primary partition covering the whole disk, then write the table and exit)
mkfs.xfs /dev/sdb1
ceph-deploy osd create ceph-node1 --data /dev/sdb1
fdisk /dev/sdc
n -> p -> 1 -> Enter -> Enter -> w
mkfs.xfs /dev/sdc1
ceph-deploy osd create ceph-node1 --data /dev/sdc1
fdisk /dev/sdd
n -> p -> 1 -> Enter -> Enter -> w
mkfs.xfs /dev/sdd1
ceph-deploy osd create ceph-node1 --data /dev/sdd1
ceph status
[root@ceph-node1 ceph]# ceph status
  cluster:
    id:     231326c7-fb58-40e7-92f4-627c6def0200
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 1 daemons, quorum ceph-node1 (age 10m)
    mgr: no daemons active
    osd: 3 osds: 3 up (since 43s), 3 in (since 43s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
1.6、Configure passwordless SSH
Run the following commands on ceph-node1:
ssh-keygen, then press Enter three times to accept the defaults
ssh-copy-id ceph-node1
Type yes, then enter the root password of ceph-node1
ssh-copy-id ceph-node2
Type yes, then enter the root password of ceph-node2
ssh-copy-id ceph-node3
Type yes, then enter the root password of ceph-node3
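Passwordless login can be confirmed before continuing; each command should print the remote hostname without asking for a password (an optional check):
ssh ceph-node2 hostname
ssh ceph-node3 hostname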
1.7、Deploy the Manager (this clears the "no active mgr" warning shown in the status output above)
ceph-deploy mgr create ceph-node1
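A quick way to confirm that the warning is gone and a Manager is active (optional, output not reproduced here):
ceph -s | grep -E 'health|mgr'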
2.1、Disable the firewalld firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld &> /dev/null
setenforce 0
cat > /etc/selinux/config << EOF
SELINUX=disabled
SELINUXTYPE=targeted
EOF
2.2、Configure hostname resolution
cat > /etc/hosts << EOF
10.0.0.1 ceph-node1
10.0.0.2 ceph-node2
10.0.0.3 ceph-node3
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
EOF
2.3、Switch all nodes to the Aliyun yum mirrors and add the EPEL repository
2.3.1. Back up CentOS-Base.repo
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
2.3.2. Download the replacement repo file
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
2.3.3. Add the EPEL repository
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
2.3.4. Write ceph.repo by hand
cat > /etc/yum.repos.d/ceph.repo << EOF
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
gpgcheck=0
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
gpgcheck=0
priority=1
EOF
2.3.5. Clean the yum cache
yum clean all
2.3.6. Rebuild the local yum cache
yum makecache
2.4、Configure NTP
yum install ntp ntpdate -y
ntpdate pool.ntp.org
systemctl restart ntpdate.service
systemctl restart ntpd.service
systemctl enable ntpd.service
systemctl enable ntpdate.service
2.5、Add ceph-node2 to the Ceph cluster (the ceph-deploy commands are run from ceph-node1)
2.5.1、Install the Ceph binary packages
ceph-deploy install ceph-node2
[root@ceph-node1 ceph]# ceph -v
ceph version 14.2.16 (762032d6f509d5e7ee7dc008d80fe9c87086603c) nautilus (stable)
2.5.2、Add the second monitor
ceph-deploy mon add ceph-node2
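With two monitors, quorum requires both of them (a two-monitor cluster cannot tolerate losing either one, which is why a third monitor is added later). Confirming that ceph-node2 has joined the quorum is an optional check:
ceph mon stat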
2.5.3、Create three disk partitions on ceph-node2
fdisk /dev/sdb
n -> p -> 1 -> Enter -> Enter -> w
mkfs.xfs /dev/sdb1
fdisk /dev/sdc
n -> p -> 1 -> Enter -> Enter -> w
mkfs.xfs /dev/sdc1
fdisk /dev/sdd
n -> p -> 1 -> Enter -> Enter -> w
mkfs.xfs /dev/sdd1
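The interactive fdisk and mkfs steps used on each node can also be scripted so they do not have to be typed per disk. A minimal sketch, assuming the same empty /dev/sdb, /dev/sdc and /dev/sdd devices:
for disk in /dev/sdb /dev/sdc /dev/sdd; do
  parted -s "$disk" mklabel msdos mkpart primary xfs 0% 100%   # one primary partition spanning the whole disk
  mkfs.xfs -f "${disk}1"                                       # format the new partition with XFS, as in the manual steps
done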
2.5.4、Add three OSDs from ceph-node1
ceph-deploy osd create ceph-node2 --data /dev/sdb1
ceph-deploy osd create ceph-node2 --data /dev/sdc1
ceph-deploy osd create ceph-node2 --data /dev/sdd1
ceph status
[root@ceph-node1 ceph]# ceph status
  cluster:
    id:     231326c7-fb58-40e7-92f4-627c6def0200
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 2 daemons, quorum ceph-node1,ceph-node2 (age 26m)
    mgr: no daemons active
    osd: 6 osds: 6 up (since 86s), 6 in (since 86s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
2.6、From ceph-node1, deploy a Manager on ceph-node2
ceph-deploy mgr create ceph-node2
3.1、Disable the firewalld firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld &> /dev/null
setenforce 0
cat > /etc/selinux/config << EOF
SELINUX=disabled
SELINUXTYPE=targeted
EOF
3.2、Configure hostname resolution
cat > /etc/hosts << EOF
10.0.0.1 ceph-node1
10.0.0.2 ceph-node2
10.0.0.3 ceph-node3
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
EOF
3.3、Switch all nodes to the Aliyun yum mirrors and add the EPEL repository
3.3.1. Back up CentOS-Base.repo
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
3.3.2. Download the replacement repo file
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
3.3.3. Add the EPEL repository
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
3.3.4. Write ceph.repo by hand
cat > /etc/yum.repos.d/ceph.repo << EOF
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
gpgcheck=0
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
gpgcheck=0
priority=1
EOF
3.3.5. Clean the yum cache
yum clean all
3.3.6. Rebuild the local yum cache
yum makecache
3.4、Configure NTP
yum install ntp ntpdate -y
ntpdate pool.ntp.org
systemctl restart ntpdate.service
systemctl restart ntpd.service
systemctl enable ntpd.service
systemctl enable ntpdate.service
3.5、Add ceph-node3 to the Ceph cluster (the ceph-deploy commands are run from ceph-node1)
3.5.1、Install the Ceph binary packages
ceph-deploy install ceph-node3
[root@ceph-node1 ceph]# ceph -v
ceph version 14.2.16 (762032d6f509d5e7ee7dc008d80fe9c87086603c) nautilus (stable)
3.5.2、Add the third monitor
ceph-deploy mon add ceph-node3
3.5.3、Create three disk partitions on ceph-node3
fdisk /dev/sdb
n -> p -> 1 -> Enter -> Enter -> w
mkfs.xfs /dev/sdb1
fdisk /dev/sdc
n -> p -> 1 -> Enter -> Enter -> w
mkfs.xfs /dev/sdc1
fdisk /dev/sdd
n -> p -> 1 -> Enter -> Enter -> w
mkfs.xfs /dev/sdd1
3.5.4、Add three OSDs from ceph-node1
ceph-deploy osd create ceph-node3 --data /dev/sdb1
ceph-deploy osd create ceph-node3 --data /dev/sdc1
ceph-deploy osd create ceph-node3 --data /dev/sdd1
ceph status
[root@ceph-node1 ceph]# ceph status
  cluster:
    id:     231326c7-fb58-40e7-92f4-627c6def0200
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3 (age 4m)
    mgr: no daemons active
    osd: 9 osds: 9 up (since 9s), 9 in (since 9s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
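Besides ceph status, ceph osd tree shows how the nine OSDs are distributed across the three hosts in the CRUSH map (an optional check, output omitted here):
ceph osd tree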
3.6、From ceph-node1, deploy a Manager on ceph-node3
ceph-deploy mgr create ceph-node3
[root@ceph-node1 ceph]# ceph -s
  cluster:
    id:     231326c7-fb58-40e7-92f4-627c6def0200
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3 (age 111m)
    mgr: ceph-node1(active, since 89s), standbys: ceph-node3, ceph-node2
    osd: 9 osds: 9 up (since 111m), 9 in (since 2h)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   9.1 GiB used, 171 GiB / 180 GiB avail
    pgs:
[root@ceph-node1 ceph]# ceph df
RAW STORAGE:
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 180 GiB 171 GiB 57 MiB 9.1 GiB 5.03
TOTAL 180 GiB 171 GiB 57 MiB 9.1 GiB 5.03
POOLS:
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
[root@ceph-node1 ceph]#
At this point the three-node Ceph cluster is up and running. Next comes deeper study and hands-on use; there is still plenty left to explore. See you in the next post.