Table of Contents
- Preparation
- Switch the time synchronization scheme
- Make sure lvm2 is installed
- Enable the rbd module
- Kernel upgrade
- Manually install Ceph
- Install the Ceph packages
- Install mon
- Install mgr
- Install osd
- Install ceph-mds
- Errors and fixes
- Miscellaneous
Preparation
Switch the time synchronization scheme
# Comment out any existing ntpdate cron jobs
crontab -l > /tmp/crontab.tmp
sed -i 's/.*ntpdate/# &/g' /tmp/crontab.tmp
crontab /tmp/crontab.tmp
rm -f /tmp/crontab.tmp
# Use chrony instead
yum -y install chrony
systemctl enable chronyd && systemctl start chronyd
timedatectl status
timedatectl set-local-rtc 0
systemctl restart rsyslog && systemctl restart crond
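To confirm that chrony is actually syncing, a quick optional check:
# Optional: list chrony's time sources and the current sync status
chronyc sources -v
chronyc tracking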
Make sure lvm2 is installed
yum -y install lvm2
Enable the rbd module
modprobe rbd
# Make the module load at every boot via the legacy /etc/sysconfig/modules mechanism
cat > /etc/rc.sysinit << EOF
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules
do
[ -x \$file ] && \$file
done
EOF
cat > /etc/sysconfig/modules/rbd.modules << EOF
modprobe rbd
EOF
chmod 755 /etc/sysconfig/modules/rbd.modules
lsmod |grep rbd
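On a systemd distribution like CentOS 7 there is also a simpler way to get the same effect: systemd-modules-load reads /etc/modules-load.d/*.conf at boot, so the rc.sysinit trick above can be replaced with a one-liner (a minimal alternative, not what the original setup used):
# Alternative: have systemd load rbd at every boot
echo rbd > /etc/modules-load.d/rbd.conf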
Kernel upgrade
# CephFS needs a kernel newer than 4.17
# Use the ml (mainline) build instead of the lt (long-term support) build
# At the time of writing, ml is at 5.5.9
rpm -Uvh https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install -y kernel-ml
# To be safe, check the boot entry order first: awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
grub2-set-default 0
reboot
uname -r
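If you script your setup, a small check like this can confirm the running kernel is new enough for CephFS (a sketch; assumes GNU sort with -V):
# Warn if the running kernel is older than 4.17
required=4.17
current=$(uname -r | cut -d- -f1)
if [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" != "$required" ]; then
    echo "kernel $current is older than $required; the CephFS kernel client may misbehave" >&2
fi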
Manually install Ceph
Install the Ceph packages
$ yum install ceph
Install mon
# Generate a uuid for the cluster fsid
$ uuidgen
c4dce24c-7ee5-4127-a7ab-89883b03b10a
# Edit the config file
$ vim /etc/ceph/ceph.conf
[global]
fsid = c4dce24c-7ee5-4127-a7ab-89883b03b10a
public network = 0.0.0.0/0
cluster network = 0.0.0.0/0
mon initial members = nau1,nau2,nau3
mon host = 192.168.15.85,192.168.15.86,192.168.15.87
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1
rbd_default_features = 1
mon_pg_warn_max_per_osd = 3000
[mds]
mds session autoclose = 30
mds session timeout = 20
# Generate ceph.mon.keyring
$ ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
# Generate ceph.client.admin.keyring
$ sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
# Generate bootstrap-osd/ceph.keyring
$ sudo ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
# Import ceph.client.admin.keyring and bootstrap-osd/ceph.keyring into ceph.mon.keyring
$ sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
$ sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
$ sudo chown ceph:ceph /tmp/ceph.mon.keyring
# Generate the initial monmap
$ monmaptool --create --add nau1 192.168.15.85 --fsid c4dce24c-7ee5-4127-a7ab-89883b03b10a /tmp/monmap
$ monmaptool --add nau2 192.168.15.86 --fsid c4dce24c-7ee5-4127-a7ab-89883b03b10a /tmp/monmap
$ monmaptool --add nau3 192.168.15.87 --fsid c4dce24c-7ee5-4127-a7ab-89883b03b10a /tmp/monmap
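Before distributing the monmap, it is worth printing it back to confirm the fsid and all three monitors are in place:
# Verify the monmap content
$ monmaptool --print /tmp/monmap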
# Run on nau1 (all of the steps above were likewise run only on nau1)
$ sudo -u ceph mkdir /var/lib/ceph/mon/ceph-nau1
$ sudo -u ceph ceph-mon --mkfs -i nau1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
$ systemctl start ceph-mon@nau1
# Copy the required files to the other nodes (same for nau3)
# If a target directory is missing, create it yourself
$ scp /tmp/monmap root@nau2:/tmp/
$ scp /etc/ceph/ceph.client.admin.keyring root@nau2:/etc/ceph/
$ scp /etc/ceph/ceph.conf root@nau2:/etc/ceph/
$ scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@nau2:/var/lib/ceph/bootstrap-osd/
$ scp /tmp/ceph.mon.keyring root@nau2:/tmp/
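Since the same five files go to both remaining nodes, a small loop saves typing (a sketch; it assumes root ssh access and that the target directories already exist, as noted above):
# Push the cluster files to nau2 and nau3 in one go
for node in nau2 nau3; do
  scp /tmp/monmap /tmp/ceph.mon.keyring root@$node:/tmp/
  scp /etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.conf root@$node:/etc/ceph/
  scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@$node:/var/lib/ceph/bootstrap-osd/
done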
# Run on nau2 (same for nau3)
$ chmod 777 /tmp/ceph.mon.keyring
$ sudo -u ceph mkdir /var/lib/ceph/mon/ceph-nau2
$ sudo -u ceph ceph-mon --mkfs -i nau2 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
$ systemctl start ceph-mon@nau2
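Note that systemctl start only lasts until the next reboot; to make the monitors come back on boot, also enable the unit on each node (with that node's own id):
# e.g. on nau2; same pattern on nau1 and nau3
$ systemctl enable ceph-mon@nau2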
# Check the cluster status from any node
$ ceph -s
  cluster:
    id:     c4dce24c-7ee5-4127-a7ab-89883b03b10a
    health: HEALTH_WARN
            3 monitors have not enabled msgr2
  services:
    mon: 3 daemons, quorum nau1,nau2,nau3 (age 50s)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
The warning says the monitors have not enabled msgr2; next we install mgr and then enable msgr2.
Install mgr
# Run on nau1
$ ceph auth get-or-create mgr.nau1 mon 'allow profile mgr' osd 'allow *' mds 'allow *'
[mgr.nau1]
key = AQD9YeBedvUgLhAAxzBUMsUiWcMkQEXA2p3+qA==
# Write this output to /var/lib/ceph/mgr/ceph-nau1/keyring
$ sudo mkdir -p /var/lib/ceph/mgr/ceph-nau1
$ cat > /var/lib/ceph/mgr/ceph-nau1/keyring << EOF
[mgr.nau1]
key = AQD9YeBedvUgLhAAxzBUMsUiWcMkQEXA2p3+qA==
EOF
$ chmod 777 /var/lib/ceph/mgr/ceph-nau1/keyring
# Also add this section to ceph.conf
$ vim /etc/ceph/ceph.conf
[global]
fsid = c4dce24c-7ee5-4127-a7ab-89883b03b10a
public network = 0.0.0.0/0
... unrelated settings omitted ...
[mgr.nau1]
key = AQD9YeBedvUgLhAAxzBUMsUiWcMkQEXA2p3+qA==
# Start ceph-mgr, then enable msgr2
$ ceph-mgr -i nau1
$ ceph mon enable-msgr2
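Running ceph-mgr by hand works, but if you prefer systemd to manage the daemon like the mon units above, the packaged unit can be used instead (a sketch following the same pattern):
# Alternative: run mgr under systemd and enable it at boot
$ systemctl start ceph-mgr@nau1
$ systemctl enable ceph-mgr@nau1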
# Check the status
$ ceph -s
  cluster:
    id:     c4dce24c-7ee5-4127-a7ab-89883b03b10a
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 2
            3 monitors have not enabled msgr2
  services:
    mon: 3 daemons, quorum nau1,nau2,nau3 (age 15m)
    mgr: nau1(active, since 2m)
    osd: 0 osds: 0 up, 0 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
As you can see, the main remaining problem is that there are no OSDs.
Install osd
# The official docs say you can use ceph-volume, but I hit too many pitfalls with it and do not recommend it; I suggest using the brute-force method below on all nodes instead
$ sudo ceph-volume lvm create --data /dev/sdb
$ ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.02930 root default
-3 0.02930 host nau1
0 hdd 0.02930 osd.0 up 1.00000 1.00000
# The brute-force way to add an OSD
# Using nau2 as the example; nau3 is the same
$ chmod 777 /var/lib/ceph/bootstrap-osd/ceph.keyring
$ cp /var/lib/ceph/bootstrap-osd/ceph.keyring /etc/ceph/ceph.client.bootstrap-osd.keyring
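One caveat: the script below runs mkfs on /dev/sdb1, so that partition must already exist. On a blank disk, something like this creates it first (a hedged sketch; it wipes /dev/sdb, so double-check the device name):
# DESTRUCTIVE: label /dev/sdb and create one partition spanning the disk
$ parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%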
# Although I give the following steps as a script, I still recommend pasting them into the terminal one line at a time
$ vim addosd.sh
#!/bin/bash
UUID=$(uuidgen)
OSD_SECRET=$(ceph-authtool --gen-print-key)
echo $OSD_SECRET
# Register the OSD with the cluster and capture the id it was assigned
ID=$(echo "{\"cephx_secret\": \"$OSD_SECRET\"}" | \
ceph osd new $UUID -i - \
-n client.bootstrap-osd -k /var/lib/ceph/bootstrap-osd/ceph.keyring)
echo $ID
# Format and mount the data partition under the OSD's data dir
mkdir -p /var/lib/ceph/osd/ceph-$ID
mkfs.xfs -f /dev/sdb1
mount /dev/sdb1 /var/lib/ceph/osd/ceph-$ID
# Create the OSD keyring and initialize the data dir
ceph-authtool --create-keyring /var/lib/ceph/osd/ceph-$ID/keyring \
--name osd.$ID --add-key $OSD_SECRET
chmod 777 /var/lib/ceph/osd/ceph-$ID/keyring
ceph-osd -i $ID --mkfs --osd-uuid $UUID
chown -R ceph:ceph /var/lib/ceph/osd/ceph-$ID
$ sh addosd.sh
# Start it and enable it at boot (the 1 in @1 is the $ID echoed by the script above)
$ systemctl enable ceph-osd@1
$ systemctl start ceph-osd@1
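One thing the script does not handle: the mount of /dev/sdb1 is not persistent, so the OSD will fail to start after a reboot. An fstab entry fixes that (a sketch; replace the 1 with your actual $ID):
# Keep the OSD data dir mounted across reboots (1 is the $ID from the script)
$ echo '/dev/sdb1 /var/lib/ceph/osd/ceph-1 xfs defaults,noatime 0 0' >> /etc/fstab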
Install ceph-mds
$ mkdir -p /var/lib/ceph/mds/ceph-nau1
$ ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-nau1/keyring --gen-key -n mds.nau1
$ chmod 777 /var/lib/ceph/mds/ceph-nau1/keyring
$ ceph auth add mds.nau1 osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-nau1/keyring
$ ceph-mds --cluster ceph -i nau1 -m nau1:6800
# If the command above reports no errors, the daemon can start; Ctrl+C it, then start it and enable it at boot as below
$ systemctl start ceph-mds@nau1
$ systemctl enable ceph-mds@nau1
# Create the cephfs
$ ceph osd pool create cephfs_data 256
$ ceph osd pool create cephfs_metadata 256
$ ceph fs new cephfs cephfs_metadata cephfs_data
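With the filesystem created, a quick way to try it from a client is the kernel driver (the reason the 4.17+ kernel mattered earlier). A sketch, using the admin credentials and an example mount point:
# Mount CephFS via the kernel client; any mon address works
$ mkdir -p /mnt/cephfs
$ mount -t ceph 192.168.15.85:6789:/ /mnt/cephfs -o name=admin,secret=$(ceph auth get-key client.admin)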
Errors and fixes
- "Error ERANGE: pg_num 800 size 2 would mean 2112 total pgs, which exceeds max 750 (mon_max_pg_per_osd 250 * num_in_osds 3)"
Add one line to the [global] section of ceph.conf:
mon_max_pg_per_osd = 2000
- " 3 monitors have not enabled msgr2"
ceph mon enable-msgr2
- How to remove an OSD that was added by mistake (a fuller sequence follows this list)
ceph osd crush remove osd.1
ceph auth del osd.1
ceph osd rm 1
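For the record, those three commands only strip the OSD out of the cluster maps; on a live cluster the usual fuller sequence drains and stops the daemon first (a sketch for osd.1):
# Fuller removal: take it out, stop the daemon, then purge it
ceph osd out osd.1
systemctl stop ceph-osd@1
ceph osd crush remove osd.1
ceph auth del osd.1
ceph osd rm 1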
Miscellaneous
Official reference for the fully manual deployment:
https://ceph.readthedocs.io/en/latest/install/manual-deployment/
or https://docs.ceph.com/docs/master/install/manual-deployment/
- Why 14.2.9?
  I had previously tested 14.2.9 with rook and found that the version met the requirements.
- Why not CentOS 8?
  Because the Aliyun mirror has no CentOS 8 builds: https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
- About the yum repo?
  To avoid pulling from Aliyun on every attempt, I copied the package list, downloaded all of the Nautilus 14.2.9 rpms, turned the directory into a repo with "createrepo .", and served it with nginx on 192.168.0.1, with "gpgcheck" set to 0 for testing; that way a failed install could be wiped and retried relatively quickly.
  You do not need to do this; the Aliyun ceph.repo below works fine.
[Ceph-SRPMS]
name=Ceph SRPMS packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS/
enabled=1
gpgcheck=0
type=rpm-md
[Ceph-aarch64]
name=Ceph aarch64 packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/aarch64/
enabled=1
gpgcheck=0
type=rpm-md
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
enabled=1
gpgcheck=0
type=rpm-md
[Ceph-x86_64]
name=Ceph x86_64 packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
enabled=1
gpgcheck=0
type=rpm-md
The ceph.repo for the repo I built locally:
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://192.168.0.1/nautilus/
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=http://192.168.0.1/nautilus/release.asc
priority=1
[Ceph]
name=Ceph packages for $basearch
baseurl=http://192.168.0.1/nautilus
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=http://192.168.0.1/nautilus/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://192.168.0.1/nautilus/
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=http://192.168.0.1/nautilus/release.asc
priority=1