Requirement: install a ceph-15.2.10 cluster
OS: CentOS 7.5
Ceph version: ceph-15.2.10
Node inm247: CPU: 24C, memory: 256GB, disk: 900GB
Node inm248: CPU: 24C, memory: 256GB, disk: 900GB
Node inm249: CPU: 24C, memory: 256GB, disk: 900GB
Node inm250: CPU: 24C, memory: 256GB, disk: 900GB
Requirement: each host provides one spare disk that is wiped/formatted and handed to the cluster for the ceph filesystem to use
Client node inm256: CPU: 24C, memory: 256GB, disk: 900GB
No. | Host | IP | Components | Role |
1 | inm247 | 10.0.0.247 | MON, MDS, MGR | primary node |
2 | inm248 | 10.0.0.248 | OSD | secondary node |
3 | inm249 | 10.0.0.249 | OSD | secondary node |
4 | inm250 | 10.0.0.250 | OSD | secondary node |
5 | inm256 | 10.0.0.256 | ceph-fuse | client |
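Before downloading anything, it is worth confirming that every cluster node really has the spare data disk. A minimal check, assuming the disk shows up as /dev/sdb as it does later in this guide:
lsblk              #the spare disk should appear with no partitions and no mount point
fdisk -l /dev/sdb  #confirm the size matches the 900GB disk planned for ceph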
Download the public rpm packages on a host that can reach the Internet; its external IP is 10.0.0.10
yum install -y yum-utils
#yum-utils bundles many yum helpers, such as the reposync download tool, installed at /usr/bin/reposync by default
Install the createrepo tool
yum install -y createrepo
#createrepo generates the metadata for a yum mirror repository
Add the Aliyun ceph repository
vi /etc/yum.repos.d/ceph.repo
[rpm-15-2-10_x86_64]
name=rpm-15-2-10_x86_64
baseurl=https://mirrors.aliyun.com/ceph/rpm-15.2.10/el7/x86_64/
gpgcheck=0
enabled=1
[rpm-15-2-10-noarch]
name=rpm-15-2-10-noarch
baseurl=https://mirrors.aliyun.com/ceph/rpm-15.2.10/el7/noarch/
gpgcheck=0
enabled=1
Clear and rebuild the yum metadata cache
yum clean all && yum makecache
On the Internet-facing server, use reposync to download the rpm packages of the mirrored repository
reposync -r rpm-15-2-10_x86_64 -p /usr/local/software/ceph/mirror
ll /usr/local/software/ceph/mirror/rpm-15-2-10_x86_64/
total 2867628
-rw-r--r--. 1 root root 3128 Mar 18 2021 ceph-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 5972756 Mar 18 2021 ceph-base-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 21318404 Mar 18 2021 ceph-common-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 2809928520 Mar 7 15:00 ceph-debuginfo-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 24564 Mar 18 2021 cephfs-java-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 569916 Mar 18 2021 ceph-fuse-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 206900 Mar 18 2021 ceph-immutable-object-cache-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 2058036 Mar 18 2021 ceph-mds-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 1469204 Mar 18 2021 ceph-mgr-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 4563084 Mar 18 2021 ceph-mon-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 17306556 Mar 18 2021 ceph-osd-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 10317556 Mar 18 2021 ceph-radosgw-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 6860 Mar 18 2021 ceph-resource-agents-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 21956 Mar 18 2021 ceph-selinux-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 48361380 Mar 18 2021 ceph-test-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 502340 Mar 18 2021 libcephfs2-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 18808 Mar 18 2021 libcephfs-devel-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 41856 Mar 18 2021 libcephfs_jni1-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 3764 Mar 18 2021 libcephfs_jni-devel-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 3730584 Mar 18 2021 librados2-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 159192 Mar 18 2021 librados-devel-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 28192 Mar 18 2021 libradospp-devel-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 377216 Mar 18 2021 libradosstriper1-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 9408 Mar 18 2021 libradosstriper-devel-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 1958008 Mar 18 2021 librbd1-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 20120 Mar 18 2021 librbd-devel-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 3304988 Mar 18 2021 librgw2-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 7104 Mar 18 2021 librgw-devel-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 37796 Mar 18 2021 python3-ceph-argparse-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 75004 Mar 18 2021 python3-ceph-common-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 161388 Mar 18 2021 python3-cephfs-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 367604 Mar 18 2021 python3-rados-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 349992 Mar 18 2021 python3-rbd-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 110524 Mar 18 2021 python3-rgw-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 4840 Mar 18 2021 rados-objclass-devel-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 71548 Mar 18 2021 rbd-fuse-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 2775416 Mar 18 2021 rbd-mirror-15.2.10-0.el7.x86_64.rpm
-rw-r--r--. 1 root root 135280 Mar 18 2021 rbd-nbd-15.2.10-0.el7.x86_64.rpm
Download the ceph noarch packages
reposync -r rpm-15-2-10-noarch -p /usr/local/software/ceph/mirror
ll /usr/local/software/ceph/mirror/rpm-15-2-10-noarch/
total 17352
-rwxrwxrwx 1 root root 55572 Mar 18 2021 cephadm-15.2.10-0.el7.noarch.rpm
-rwxrwxrwx 1 root root 275544 Apr 10 2020 ceph-deploy-1.5.29-0.noarch.rpm
-rwxrwxrwx 1 root root 275676 Apr 10 2020 ceph-deploy-1.5.30-0.noarch.rpm
-rwxrwxrwx 1 root root 275772 Apr 10 2020 ceph-deploy-1.5.31-0.noarch.rpm
-rwxrwxrwx 1 root root 276688 Apr 10 2020 ceph-deploy-1.5.32-0.noarch.rpm
-rwxrwxrwx 1 root root 276196 Apr 10 2020 ceph-deploy-1.5.33-0.noarch.rpm
-rwxrwxrwx 1 root root 288652 Apr 10 2020 ceph-deploy-1.5.34-0.noarch.rpm
-rwxrwxrwx 1 root root 290332 Apr 10 2020 ceph-deploy-1.5.35-0.noarch.rpm
-rwxrwxrwx 1 root root 290064 Apr 10 2020 ceph-deploy-1.5.36-0.noarch.rpm
-rwxrwxrwx 1 root root 288636 Apr 10 2020 ceph-deploy-1.5.37-0.noarch.rpm
-rwxrwxrwx 1 root root 290468 Apr 10 2020 ceph-deploy-1.5.38-0.noarch.rpm
-rwxrwxrwx 1 root root 290340 Apr 10 2020 ceph-deploy-1.5.39-0.noarch.rpm
-rwxrwxrwx 1 root root 287416 Apr 10 2020 ceph-deploy-2.0.0-0.noarch.rpm
-rwxrwxrwx 1 root root 292428 Apr 10 2020 ceph-deploy-2.0.1-0.noarch.rpm
-rwxrwxrwx 1 root root 21396 Mar 18 2021 ceph-grafana-dashboards-15.2.10-0.el7.noarch.rpm
-rwxrwxrwx 1 root root 83272 Apr 10 2020 ceph-medic-1.0.4-16.g60cf7e9.el7.noarch.rpm
-rwxrwxrwx 1 root root 109624 Apr 10 2020 ceph-medic-1.0.7-1.el7.noarch.rpm
-rwxrwxrwx 1 root root 144700 Mar 18 2021 ceph-mgr-cephadm-15.2.10-0.el7.noarch.rpm
-rwxrwxrwx 1 root root 3942196 Mar 18 2021 ceph-mgr-dashboard-15.2.10-0.el7.noarch.rpm
-rwxrwxrwx 1 root root 86340 Mar 18 2021 ceph-mgr-diskprediction-cloud-15.2.10-0.el7.noarch.rpm
-rwxrwxrwx 1 root root 9013440 Mar 18 2021 ceph-mgr-diskprediction-local-15.2.10-0.el7.noarch.rpm
-rwxrwxrwx 1 root root 33856 Mar 18 2021 ceph-mgr-k8sevents-15.2.10-0.el7.noarch.rpm
-rwxrwxrwx 1 root root 458920 Mar 18 2021 ceph-mgr-modules-core-15.2.10-0.el7.noarch.rpm
-rwxrwxrwx 1 root root 52584 Mar 18 2021 ceph-mgr-rook-15.2.10-0.el7.noarch.rpm
-rwxrwxrwx 1 root root 5808 Mar 18 2021 ceph-prometheus-alerts-15.2.10-0.el7.noarch.rpm
-rwxrwxrwx 1 root root 4008 Mar 18 2021 ceph-release-1-1.el7.noarch.rpm
Download the ceph dependency packages (cephDeps)
yum -y install epel-release
mkdir /usr/local/software/ceph/mirror/cephDeps
repotrack ceph ceph-deploy ceph-base ceph-common ceph-mgr ceph-mon ceph-mds ceph-osd ceph-fuse ceph-radosgw -p /usr/local/software/ceph/mirror/cephDeps
createrepo -v /usr/local/software/ceph/mirror/cephDeps
cd /usr/local/software/ceph/mirror/
tar -cvf cephDeps.tar cephDeps
tar -cvf rpm-15-2-10-noarch.tar rpm-15-2-10-noarch
tar -cvf rpm-15-2-10_x86_64.tar rpm-15-2-10_x86_64
#Install httpd on the intranet primary node inm247 (the intranet already has a usable yum repository; for how to set one up, see https://blog.csdn.net/youmatterhsp/article/details/130239769?spm=1001.2014.3001.5502)
yum install -y httpd
systemctl start httpd
systemctl enable httpd
systemctl status httpd.service
netstat -atunlp | grep httpd
#Move the three tar archives created earlier to httpd's default document root /var/www/html, as sketched below
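The transfer itself is not shown here; a minimal sketch, assuming the archives are copied from the download host 10.0.0.10 to inm247 over scp (any other transfer method works just as well):
scp /usr/local/software/ceph/mirror/*.tar root@10.0.0.247:/var/www/html/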
Generate the repository metadata on inm247; createrepo writes a repodata directory inside each repository directory, and that is where the metadata lives (cephDeps already carries the repodata generated in the previous step)
cd /var/www/html
tar -xvf /var/www/html/cephDeps.tar
tar -xvf /var/www/html/rpm-15-2-10-noarch.tar
tar -xvf /var/www/html/rpm-15-2-10_x86_64.tar
createrepo -v /var/www/html/rpm-15-2-10_x86_64
createrepo -v /var/www/html/rpm-15-2-10-noarch
vim /etc/yum.repos.d/ceph.repo
[rpm-15-2-10_x86_64]
name=rpm-15-2-10_x86_64
baseurl=http://10.0.0.247/rpm-15-2-10_x86_64/
enabled=1
gpgcheck=0
[rpm-15-2-10-noarch]
name=rpm-15-2-10-noarch
baseurl=http://10.0.0.247/rpm-15-2-10-noarch/
enabled=1
gpgcheck=0
[cephDeps]
name=cephDeps
baseurl=http://10.0.0.247/cephDeps/
enabled=1
gpgcheck=0
Clear and rebuild the yum metadata cache (run the steps from here on all four ceph cluster nodes)
yum clean all && yum makecache
Check the local mirror repositories
yum repolist rpm-15-2-10_x86_64
yum repolist rpm-15-2-10-noarch
Search the repositories for packages
yum search --showduplicates ceph
Edit the hosts file
vim /etc/hosts
10.0.0.247 inm247
10.0.0.248 inm248
10.0.0.249 inm249
10.0.0.250 inm250
Disable SELinux
vim /etc/selinux/config
#SELINUX=enforcing
SELINUX=disabled
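Editing /etc/selinux/config only takes effect after a reboot; to switch SELinux off in the running session as well:
setenforce 0 #put SELinux into permissive mode immediately
getenforce   #should report Permissive now, or Disabled after a reboot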
Disable the firewall
systemctl stop firewalld
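To keep firewalld from coming back on the next boot, it can also be disabled:
systemctl disable firewalld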
Set vm.swappiness
echo vm.swappiness = 10 >> /etc/sysctl.conf
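The sysctl.conf entry is only read at boot; to apply it to the running kernel right away:
sysctl -p                   #reload /etc/sysctl.conf
cat /proc/sys/vm/swappiness #should now print 10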
Raise the maximum number of open file handles
vim /etc/security/limits.conf
* soft nofile 102400
* hard nofile 102400
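The new limits only apply to sessions opened after the change; after logging in again they can be verified with:
ulimit -n #should print 102400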
Set up passwordless SSH login
ssh-keygen -t rsa
ssh-copy-id inm247
ssh-copy-id inm248
ssh-copy-id inm249
ssh-copy-id inm250
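A quick way to confirm passwordless login works from inm247 to every node:
for h in inm247 inm248 inm249 inm250; do ssh $h hostname; done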
Configure NTP time synchronization
yum install ntp ntpdate -y
vim /etc/ntp.conf
#comment out the following lines
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
#add
server 127.127.1.0 iburst
#start the ntp service
systemctl restart ntpd
chkconfig ntpd on
#on the other three nodes
vim /etc/ntp.conf
#comment out the following lines
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
#point the upstream time server at the local ntpd server (inm247)
server 10.0.0.247
#allow the upstream time server to actively adjust this host's clock
restrict 10.0.0.247 nomodify notrap noquery
#save and exit, then run on the command line
ntpdate -u 10.0.0.247
systemctl restart ntpd
chkconfig ntpd on
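On any of the three downstream nodes, synchronization against inm247 can be checked with:
ntpq -p #10.0.0.247 should be listed as a peer; an asterisk marks the selected time source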
Install the required tools on all four servers
yum install -y net-tools.x86_64 yum-utils python-setuptools perl
On the primary node, install ceph-deploy, ceph, and ceph-radosgw
yum install -y ceph-deploy ceph ceph-radosgw
Directory layout after installation
#inspect the ceph files
ll /etc/ceph
total 20
-rw-r--r-- 1 root root 92 Mar 18 2021 rbdmap
ll /usr/bin/ | grep ceph
-rwxr-xr-x 1 root root 45604 Mar 18 2021 ceph
-rwxr-xr-x 1 root root 190752 Mar 18 2021 ceph-authtool
-rwxr-xr-x 1 root root 8630160 Mar 18 2021 ceph-bluestore-tool
-rwxr-xr-x 1 root root 1478 Mar 18 2021 ceph-clsinfo
-rwxr-xr-x 1 root root 182384 Mar 18 2021 ceph-conf
-rwxr-xr-x 1 root root 3125 Mar 18 2021 ceph-crash
-rwxr-xr-x 1 root root 27747456 Mar 18 2021 ceph-dencoder
-rwxr-xr-x 1 root root 643 Jun 20 2018 ceph-deploy
-rwxr-xr-x 1 root root 20024 Mar 18 2021 ceph-diff-sorted
-rwxr-xr-x 1 root root 6542648 Mar 18 2021 cephfs-data-scan
-rwxr-xr-x 1 root root 6583056 Mar 18 2021 cephfs-journal-tool
-rwxr-xr-x 1 root root 6422608 Mar 18 2021 cephfs-table-tool
-rwxr-xr-x 1 root root 8273552 Mar 18 2021 ceph-kvstore-tool
-rwxr-xr-x 1 root root 5918904 Mar 18 2021 ceph-mds
-rwxr-xr-x 1 root root 4235664 Mar 18 2021 ceph-mgr
-rwxr-xr-x 1 root root 8587744 Mar 18 2021 ceph-mon
-rwxr-xr-x 1 root root 5073648 Mar 18 2021 ceph-monstore-tool
-rwxr-xr-x 1 root root 15476136 Mar 18 2021 ceph-objectstore-tool
-rwxr-xr-x 1 root root 22931040 Mar 18 2021 ceph-osd
-rwxr-xr-x 1 root root 4963008 Mar 18 2021 ceph-osdomap-tool
-rwxr-xr-x 1 root root 4172 Mar 18 2021 ceph-post-file
-rwxr-xr-x 1 root root 452 Mar 18 2021 ceph-rbdnamer
-rwxr-xr-x 1 root root 296 Mar 18 2021 ceph-run
-rwxr-xr-x 1 root root 1694528 Mar 18 2021 ceph-syn
ll /usr/bin/ | grep radosgw
-rwxr-xr-x 1 root root 7168 Mar 18 2021 radosgw
-rwxr-xr-x 1 root root 10187024 Mar 18 2021 radosgw-admin
-rwxr-xr-x 1 root root 9573720 Mar 18 2021 radosgw-es
-rwxr-xr-x 1 root root 9565672 Mar 18 2021 radosgw-object-expirer
-rwxr-xr-x 1 root root 178376 Mar 18 2021 radosgw-token
On the secondary nodes, install ceph and ceph-radosgw
yum install -y ceph ceph-radosgw
Create the ceph mon component
Run on the primary node inm247; working directory: /etc/ceph; tool: ceph-deploy
Create the ceph cluster
cd /etc/ceph
ceph-deploy new inm247 inm248 inm249 inm250
Explanation: ceph-deploy new creates a cluster. It generates ceph.conf, ceph-deploy-ceph.log, ceph.mon.keyring, and other files under /etc/ceph
ll /etc/ceph/
total 24
-rw-r--r-- 1 root root 235 Apr 25 11:25 ceph.conf
-rw-r--r-- 1 root root 11644 Apr 25 11:25 ceph-deploy-ceph.log
-rw------- 1 root root 73 Apr 25 11:25 ceph.mon.keyring
-rw-r--r-- 1 root root 92 Mar 18 2021 rbdmap
Create and initialize the mon component
ceph-deploy mon create-initial
Explanation: creates and initializes the mon daemons of the ceph cluster.
On the primary node it generates or updates the following files under /etc/ceph: ceph.bootstrap-mds.keyring, ceph.bootstrap-mgr.keyring, ceph.bootstrap-osd.keyring, ceph.bootstrap-rgw.keyring, ceph.client.admin.keyring, ceph.conf, ceph-deploy-ceph.log, ceph.mon.keyring, rbdmap.
On the secondary nodes it generates: ceph.conf, rbdmap, tmpHTFcfI
#primary node
ll /etc/ceph
total 72
-rw------- 1 root root 113 Apr 25 11:26 ceph.bootstrap-mds.keyring
-rw------- 1 root root 113 Apr 25 11:26 ceph.bootstrap-mgr.keyring
-rw------- 1 root root 113 Apr 25 11:26 ceph.bootstrap-osd.keyring
-rw------- 1 root root 113 Apr 25 11:26 ceph.bootstrap-rgw.keyring
-rw------- 1 root root 151 Apr 25 11:26 ceph.client.admin.keyring
-rw-r--r-- 1 root root 235 Apr 25 11:26 ceph.conf
-rw-r--r-- 1 root root 38321 Apr 25 11:26 ceph-deploy-ceph.log
-rw------- 1 root root 73 Apr 25 11:25 ceph.mon.keyring
-rw-r--r-- 1 root root 92 Mar 18 2021 rbdmap
#secondary node
ll /etc/ceph/
total 8
-rw-r--r-- 1 root root 299 Apr 25 14:22 ceph.conf
-rw-r--r-- 1 root root 92 Mar 18 2021 rbdmap
-rw------- 1 root root 0 Apr 25 14:22 tmpHTFcfI
Check the cluster status
Command: ceph -s
Explanation: once the cluster is installed, ceph -s shows its status, organized into the cluster, services, and data sections
ceph -s
cluster:
id: 29d5c712-b78a-49fa-abf9-55a8d489a159
health: HEALTH_OK
services:
mon: 4 daemons, quorum inmshbdslave247,inmshbdslave248,inmshbdslave249,inmshbdslave250 (age 43s)
mgr: no daemons active
osd: 0 osds: 0 up, 0 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
Create the ceph osd component
Run on the primary node inm247; working directory: /etc/ceph; tool: ceph-deploy.
Confirm the disk attached to each node
Primary node inm247 disk: /dev/sdb
Secondary node inm248 disk: /dev/sdb
Secondary node inm249 disk: /dev/sdb
Secondary node inm250 disk: /dev/sdb
Explanation: use ceph-deploy disk list to inspect each host's disks, for example as shown below.
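For example, to list the disks of all four nodes from the primary node:
ceph-deploy disk list inm247 inm248 inm249 inm250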
Wipe (zap) the attached disk on each node
ceph-deploy disk zap inm247 /dev/sdb
ceph-deploy disk zap inm248 /dev/sdb
ceph-deploy disk zap inm249 /dev/sdb
ceph-deploy disk zap inm250 /dev/sdb
Create the osds on the disks
Turn each host's /dev/sdb disk into an osd.
ceph-deploy osd create --data /dev/sdb inm247
ceph-deploy osd create --data /dev/sdb inm248
ceph-deploy osd create --data /dev/sdb inm249
ceph-deploy osd create --data /dev/sdb inm250
Check the cluster status
Command: ceph -s
ceph -s
cluster:
id: 29d5c712-b78a-49fa-abf9-55a8d489a159
health: HEALTH_WARN
no active mgr
services:
mon: 4 daemons, quorum inmshbdslave247,inmshbdslave248,inmshbdslave249,inmshbdslave250 (age 5m)
mgr: no daemons active
osd: 4 osds: 4 up (since 7s), 4 in (since 7s)
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
Create the ceph mgr component
Run on the primary node inm247; working directory: /etc/ceph; tool: ceph-deploy
ceph-deploy mgr create inm247 inm248 inm249 inm250
Explanation: create the mgr daemons, which collect cluster state and provide the dashboard used to monitor the cluster.
Check the cluster status
ceph -s
cluster:
id: 29d5c712-b78a-49fa-abf9-55a8d489a159
health: HEALTH_OK
services:
mon: 4 daemons, quorum inmshbdslave247,inmshbdslave248,inmshbdslave249,inmshbdslave250 (age 8m)
mgr: inmshbdslave247(active, since 92s), standbys: inmshbdslave248, inmshbdslave249, inmshbdslave250
osd: 4 osds: 4 up (since 2m), 4 in (since 2m)
data:
pools: 1 pools, 1 pgs
objects: 4 objects, 0 B
usage: 4.0 GiB used, 20 TiB / 20 TiB avail
pgs: 1 active+clean
Configure the dashboard
Enable the dashboard module
ceph mgr module enable dashboard #enable the dashboard module
Create a self-signed certificate
ceph dashboard create-self-signed-cert
Set the web login username and password
ceph dashboard set-login-credentials test test123456
Change the default dashboard port
The default port is 8443; change it to 18443. The mgr must be restarted for the new port to take effect.
ceph config set mgr mgr/dashboard/server_port 18443 #change the port
systemctl restart ceph-mgr.target #restart the mgr
Show the published service addresses
ceph mgr services
Log in to the ceph dashboard
https://10.0.0.247:8443 #default login address
Login addresses after the port change
https://10.0.0.247:18443
https://10.0.0.248:18443
https://10.0.0.249:18443
https://10.0.0.250:18443
#run ceph -s to see which mgr is active and therefore which address works
#username/password: test test123456
Create the ceph mds component
Run on the primary node inm247; working directory: /etc/ceph; tool: ceph-deploy.
Create the mds
ceph-deploy mds create inm247 inm248 inm249 inm250
Explanation: create the mds daemons; the mds is required to use the cephfs filesystem service. Its role: serving filesystem metadata.
Check the cluster status
cluster:
id: 29d5c712-b78a-49fa-abf9-55a8d489a159
health: HEALTH_OK
services:
mon: 4 daemons, quorum inmshbdslave247,inmshbdslave248,inmshbdslave249,inmshbdslave250 (age 11m)
mgr: inmshbdslave247(active, since 4m), standbys: inmshbdslave248, inmshbdslave249, inmshbdslave250
mds: 4 up:standby
osd: 4 osds: 4 up (since 5m), 4 in (since 5m)
data:
pools: 1 pools, 1 pgs
objects: 4 objects, 0 B
usage: 4.0 GiB used, 20 TiB / 20 TiB avail
pgs: 1 active+clean
Create the ceph rgw component
Run on the primary node inm247; working directory: /etc/ceph; tool: ceph-deploy.
Create the rgw
ceph-deploy rgw create inm247 inm248 inm249 inm250
Explanation: create the rgw daemons, which provide the object gateway.
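As a quick sanity check, ceph-deploy starts the gateways on port 7480 by default (assuming the port was not changed), so a gateway should answer a plain HTTP request:
curl http://10.0.0.247:7480 #an anonymous S3 ListAllMyBuckets XML response means the gateway is up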
Check the cluster status
cluster:
id: 29d5c712-b78a-49fa-abf9-55a8d489a159
health: HEALTH_OK
services:
mon: 4 daemons, quorum inmshbdslave247,inmshbdslave248,inmshbdslave249,inmshbdslave250 (age 12m)
mgr: inmshbdslave247(active, since 5m), standbys: inmshbdslave248, inmshbdslave249, inmshbdslave250
mds: 4 up:standby
osd: 4 osds: 4 up (since 7m), 4 in (since 7m)
rgw: 4 daemons active (inmshbdslave247, inmshbdslave248, inmshbdslave249, inmshbdslave250)
task status:
data:
pools: 5 pools, 129 pgs
objects: 193 objects, 5.9 KiB
usage: 4.0 GiB used, 20 TiB / 20 TiB avail
pgs: 129 active+clean
io:
client: 119 KiB/s rd, 0 B/s wr, 132 op/s rd, 68 op/s wr
Start and stop ceph components
Run on the primary node inm247; working directory: /etc/ceph.
Start services
systemctl start ceph.target
systemctl start ceph-mds.target
systemctl start ceph-mgr.target
systemctl start ceph-mon.target
systemctl start ceph-osd.target
systemctl start ceph-radosgw.target
Stop services
systemctl stop ceph.target
systemctl stop ceph-mds.target
systemctl stop ceph-mgr.target
systemctl stop ceph-mon.target
systemctl stop ceph-osd.target
systemctl stop ceph-radosgw.target
Restart services
systemctl restart ceph.target
systemctl restart ceph-mds.target
systemctl restart ceph-mgr.target
systemctl restart ceph-mon.target
systemctl restart ceph-osd.target
systemctl restart ceph-radosgw.target
Working with storage pools
Run on the primary node inm247; working directory: /etc/ceph.
Storage pools hold data and metadata. The default is three replicas.
Create a storage pool
ceph osd pool create hs_data 16
Explanation: create a storage pool named hs_data with 16 pgs
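One way to confirm the pool exists and to see its replica count:
ceph osd lspools               #lists all pools with their ids
ceph osd pool get hs_data size #shows the replica count (3 by default)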
Get the pool's pg_num
ceph osd pool get hs_data pg_num
Set the pool's pg_num
ceph osd pool set hs_data pg_num 18
Check pg_num again
ceph osd pool get hs_data pg_num
pg_num: 18
Check the cluster status
cluster:
id: 29d5c712-b78a-49fa-abf9-55a8d489a159
health: HEALTH_WARN
1 pool(s) have non-power-of-two pg_num
services:
mon: 4 daemons, quorum inmshbdslave247,inmshbdslave248,inmshbdslave249,inmshbdslave250 (age 16m)
mgr: inmshbdslave247(active, since 9m), standbys: inmshbdslave248, inmshbdslave249, inmshbdslave250
mds: 4 up:standby
osd: 4 osds: 4 up (since 10m), 4 in (since 10m)
rgw: 4 daemons active (inmshbdslave247, inmshbdslave248, inmshbdslave249, inmshbdslave250)
task status:
data:
pools: 6 pools, 123 pgs
objects: 193 objects, 5.9 KiB
usage: 4.1 GiB used, 20 TiB / 20 TiB avail
pgs: 123 active+clean
Delete a storage pool
Allow pool deletion
#edit the file
vi /etc/ceph/ceph.conf
#add the following under the [global] section
mon_allow_pool_delete=true
Push the configuration to every host:
ceph-deploy --overwrite-conf admin inm247 inm248 inm249 inm250
Restart ceph-mon (on all 4 hosts):
systemctl restart ceph-mon.target
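As an alternative sketch, on Octopus the same switch can usually be flipped through the centralized config database, without editing ceph.conf or restarting the mons:
ceph config set mon mon_allow_pool_delete true
ceph config get mon mon_allow_pool_delete #verify the setting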
Delete the pool
ceph osd pool delete hs_data hs_data --yes-i-really-really-mean-it
Explanation: when deleting a pool, the pool name has to be given twice.
Check the cluster status
cluster:
id: 29d5c712-b78a-49fa-abf9-55a8d489a159
health: HEALTH_OK
services:
mon: 4 daemons, quorum inmshbdslave247,inmshbdslave248,inmshbdslave249,inmshbdslave250 (age 36s)
mgr: inmshbdslave247(active, since 13m), standbys: inmshbdslave248, inmshbdslave249, inmshbdslave250
mds: 4 up:standby
osd: 4 osds: 4 up (since 14m), 4 in (since 14m)
rgw: 4 daemons active (inmshbdslave247, inmshbdslave248, inmshbdslave249, inmshbdslave250)
task status:
data:
pools: 5 pools, 105 pgs
objects: 193 objects, 5.9 KiB
usage: 4.1 GiB used, 20 TiB / 20 TiB avail
pgs: 105 active+clean
Working with the ceph filesystem
Using cephfs storage requires the mds to be deployed
Create a ceph filesystem
Run on the primary node inm247; working directory: /etc/ceph
Check that the mds is installed
ceph mds stat
4 up:standby
Explanation: check the mds status; if no mds is installed, install it first:
ceph-deploy mds create inm247 inm248 inm249 inm250
Create two storage pools
A ceph filesystem needs at least two RADOS pools: one for data and one for metadata.
#create the data pool:
ceph osd pool create test_data 60
pool 'test_data' created
#create the metadata pool:
ceph osd pool create test_metadata 30
pool 'test_metadata' created
Explanation: create the pools test_data and test_metadata
Create the ceph filesystem
ceph fs new test test_metadata test_data
Explanation: ceph fs new creates a ceph filesystem named test, backed by the pools test_metadata (metadata) and test_data (data)
List ceph filesystems
ceph fs ls
name: test, metadata pool: test_metadata, data pools: [test_data ]
Check the cluster status
cluster:
id: 29d5c712-b78a-49fa-abf9-55a8d489a159
health: HEALTH_OK
services:
mon: 4 daemons, quorum inmshbdslave247,inmshbdslave248,inmshbdslave249,inmshbdslave250 (age 5m)
mgr: inmshbdslave247(active, since 18m), standbys: inmshbdslave248, inmshbdslave249, inmshbdslave250
mds: test:1 {0=inmshbdslave250=up:active} 3 up:standby
osd: 4 osds: 4 up (since 19m), 4 in (since 19m)
rgw: 4 daemons active (inmshbdslave247, inmshbdslave248, inmshbdslave249, inmshbdslave250)
task status:
data:
pools: 7 pools, 195 pgs
objects: 215 objects, 8.1 KiB
usage: 4.1 GiB used, 20 TiB / 20 TiB avail
pgs: 195 active+clean
Mount the ceph cluster on the primary node
Run on the primary node inm247; working directory: /etc/ceph
Check the ceph.conf file
#file
/etc/ceph/ceph.conf
#content
auth_client_required = cephx #client authentication is enabled
Mount the ceph cluster's / directory on the primary node inm247
#create the mount point cephdir
mkdir /cephdir
#install the ceph-fuse tool
yum install -y ceph-fuse
#run the mount command
ceph-fuse -n client.admin -k /etc/ceph/ceph.client.admin.keyring -c /etc/ceph/ceph.conf -m 10.0.0.247,10.0.0.248,10.0.0.249,10.0.0.250 -r / /cephdir -o nonempty
#check that the mount succeeded
df -h
...
ceph-fuse 6.4T 0 6.4T 0% /cephdir
Configure the yum repository on the client node inm256 and install ceph-fuse
#configure the yum repository
vim /etc/yum.repos.d/ceph.repo
[rpm-15-2-10_x86_64]
name=rpm-15-2-10_x86_64
baseurl=http://10.0.0.247/rpm-15-2-10_x86_64/
enabled=1
gpgcheck=0
[rpm-15-2-10-noarch]
name=rpm-15-2-10-noarch
baseurl=http://10.0.0.247/rpm-15-2-10-noarch/
enabled=1
gpgcheck=0
[cephDeps]
name=cephDeps
baseurl=http://10.0.0.247/cephDeps/
enabled=1
gpgcheck=0
#install ceph-fuse
yum install -y ceph-fuse
Configure the inm256 node; working directory: /etc/ceph
Create the configuration directory
mkdir -p /etc/ceph #create the /etc/ceph directory
#copy the server's ceph.client.admin.keyring and ceph.conf files into /etc/ceph
Create the directory where the filesystem will be mounted
mkdir /localdir
Explanation: this is the local directory the remote filesystem gets mounted on
Mount the filesystem
ceph-fuse -n client.admin -k /etc/ceph/ceph.client.admin.keyring -c /etc/ceph/ceph.conf -m 10.0.0.247,10.0.0.248,10.0.0.249,10.0.0.250 -r / /localdir -o nonempty
2023-04-25T15:13:13.353+0800 7f819ef4ff80 -1 init, newargv = 0x56238d289a00 newargc=11ceph-fuse[
3210686]: starting ceph client
ceph-fuse[3210686]: starting fuse
Explanation of the ceph-fuse mount command: -n is the user name, i.e. the admin entry in ceph.client.admin.keyring; -k is the keyring file; -c is the conf file; -m lists the mon hosts (default mon port); -r / mounts the root of the ceph filesystem (the same tree the server mounted at /cephdir); /localdir is the local mount point; -o passes mount options
Check the mounted filesystem on the client
df -h
ceph-fuse 6.4T 0 6.4T 0% /localdir
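If the client mount ever needs to be removed, either of the following should work:
umount /localdir
#or, using the FUSE helper:
fusermount -u /localdir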
Verify the data
Create a directory dir under /cephdir on the server and check it from the client
On the server
cd /cephdir
mkdir dir
ll
total 1
drwxr-xr-x 2 root root 0 Apr 25 15:14 dir
Check from the client
ll /localdir/
total 1
drwxr-xr-x 2 root root 0 Apr 25 15:14 dir
Create a file test.txt under /localdir/dir on the client and check it from the server
On the client
cd /localdir/dir/
touch test.txt
Check from the server
ll /cephdir/dir/
total 0
-rw-r--r-- 1 root root 0 Apr 25 15:33 test.txt
Tear down the ceph cluster
#destroy the cluster
ceph-deploy purge inm247 inm248 inm249 inm250
ceph-deploy purgedata inm247 inm248 inm249 inm250
ceph-deploy forgetkeys
userdel ceph
ceph -s
#unmount the osd directory
#check the mounts with df -h
#the osd data is mounted at
/var/lib/ceph/osd/ceph-0
#unmount it
umount /var/lib/ceph/osd/ceph-0
#remove the related directories
#find the related directories
find / -name ceph
#remove them
rm -rf /var/lib/ceph/*
rm -rf /etc/ceph/*
rm -rf /run/ceph/*
rm -rf /var/run/ceph/*
rm -rf /var/local/osd0/*