Host: 192.168.2.90  Hostname: ceph0
OS: CentOS 7
Ceph version: 14.2.22-0.el7.x86_64
Disks: two disks, one of them dedicated to Ceph (OSD data)
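Before starting, it is worth confirming that the spare disk is visible and unused; lsblk is enough for that (the device name /dev/sdb used later in this guide is an assumption, check your own layout):
lsblk   # the empty, unmounted disk (assumed to be /dev/sdb here) will later be handed to ceph-volume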
hostnamectl set-hostname ceph0
cat >> /etc/hosts << EOF
192.168.2.90 ceph0
EOF
# Disable SELinux
setenforce 0
sed -ri 's#(SELINUX=).*#\1disabled#g' /etc/selinux/config
# Time synchronization
yum install -y chrony
systemctl start chronyd
systemctl enable chronyd
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
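An optional check that time synchronization is actually working:
chronyc sources
timedatectl status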
The servers need the firewall configured for Ceph; in a test environment you can simply disable the firewall instead (see the sketch after the firewall-cmd commands below).
sudo firewall-cmd --zone=public --add-service=ceph-mon
sudo firewall-cmd --zone=public --add-service=ceph-mon --permanent
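As mentioned above, on a pure test machine the firewall can be switched off entirely instead of opening individual services; a minimal sketch:
sudo systemctl stop firewalld
sudo systemctl disable firewalld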
Add a Ceph yum repository, e.g. /etc/yum.repos.d/ceph.repo, with the content below. Note the CentOS version here: CentOS 7 corresponds to el7, while the 8 series corresponds to el8.
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
gpgcheck=0
priority=1
[ceph-source]
name=cephsource
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS/
gpgcheck=0
priority=1
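After saving the repo file, an optional sanity check is to rebuild the yum metadata and confirm the new repositories show up:
yum clean all && yum makecache
yum repolist | grep -i ceph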
yum install -y ceph
After the ceph package is installed, the /etc/ceph directory is created automatically; we need to create the configuration file ceph.conf in that directory. Generate an fsid and prepare the following values:
uuidgen
fsid = {UUID} # fsid=ec15eb3e-eb66-4431-acda-428e91658560
mon initial members = {hostname}[,{hostname}] #mon initial members = ceph0
mon host = {ip-address}[,{ip-address}] #mon host = 192.168.2.90
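A minimal sketch of writing these three values into /etc/ceph/ceph.conf, using the fsid generated above (the full [global] template with the remaining options follows further below):
cat > /etc/ceph/ceph.conf << EOF
[global]
fsid = ec15eb3e-eb66-4431-acda-428e91658560
mon initial members = ceph0
mon host = 192.168.2.90
EOF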
# Generate the monitor keyring
sudo ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
# Create the client.admin keyring
sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
# Create the bootstrap-osd keyring
sudo ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
# Import the keys into the ceph.mon.keyring file
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
# Change the owner of ceph.mon.keyring to ceph
sudo chown ceph:ceph /tmp/ceph.mon.keyring
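An optional check that both keys were merged correctly; the combined keyring should now list the mon., client.admin and client.bootstrap-osd entries:
sudo ceph-authtool -l /tmp/ceph.mon.keyring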
monmaptool --create --add {hostname} {ip-address} --fsid {uuid} /tmp/monmap
# monmaptool --create --add ceph0 192.168.2.90 --fsid ec15eb3e-eb66-4431-acda-428e91658560 /tmp/monmap
sudo -u ceph mkdir /var/lib/ceph/mon/{cluster-name}-{hostname}
#sudo -u ceph mkdir /var/lib/ceph/mon/ceph-ceph0
sudo -u ceph ceph-mon [--cluster {cluster-name}] --mkfs -i {hostname} --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
#sudo -u ceph ceph-mon --mkfs -i ceph0 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
ceph.conf template:
[global]
fsid = {cluster-id}
mon initial members = {hostname}[, {hostname}]
mon host = {ip-address}[, {ip-address}]
public network = {network}[, {network}]
cluster network = {network}[, {network}]
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = {n}
osd pool default size = {n} # Write an object n times.
osd pool default min size = {n} # Allow writing n copies in a degraded state.
osd pool default pg num = {n}
osd pool default pgp num = {n}
osd crush chooseleaf type = {n}
Example:
[global]
fsid = ec15eb3e-eb66-4431-acda-428e91658560
mon initial members = ceph0
mon host = 192.168.2.90
public network = 192.168.2.0/24
cluster network = 192.168.2.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
osd pool default size = 1
osd pool default min size = 1
osd crush chooseleaf type = 0 # must be set to 0 for a single-node deployment
During actual testing an error occurred when the file did not end with a blank line; it is unclear whether this is version-related and still needs to be verified.
sudo systemctl start ceph-mon@ceph0
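Optionally enable the unit as well, so the monitor comes back after a reboot:
sudo systemctl enable ceph-mon@ceph0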
sudo ceph -s
  cluster:
    id:     ec15eb3e-eb66-4431-acda-428e91658560
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph0
    mgr: foo(active, since 0h)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   0 kB used, 0 kB / 0 kB avail
    pgs:
# Create an auth key for the mgr daemon
ceph auth get-or-create mgr.$name mon 'allow profile mgr' osd 'allow *' mds 'allow *'
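Before starting the daemon, the generated key has to be placed in the mgr data directory. A sketch assuming the cluster name ceph and $name=ceph0:
sudo -u ceph mkdir -p /var/lib/ceph/mgr/ceph-ceph0
ceph auth get-or-create mgr.ceph0 mon 'allow profile mgr' osd 'allow *' mds 'allow *' | sudo -u ceph tee /var/lib/ceph/mgr/ceph-ceph0/keyring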
# Start the mgr daemon
ceph-mgr -i $name
yum -y install ceph-mgr-dashboard
# Enable the dashboard module
ceph mgr module enable dashboard
# The dashboard requires SSL; generate a self-signed certificate first
openssl req -new -nodes -x509 \
-subj "/O=IT/CN=ceph-mgr-dashboard" -days 3650 \
-keyout dashboard.key -out dashboard.crt -extensions v3_ca
ceph dashboard set-ssl-certificate -i dashboard.crt
ceph dashboard set-ssl-certificate-key -i dashboard.key
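# Restart the dashboard module so it picks up the new certificate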
ceph mgr module disable dashboard
ceph mgr module enable dashboard
# Optionally configure the dashboard bind address and ports (otherwise the defaults are used)
ceph config set mgr mgr/dashboard/server_addr $IP
ceph config set mgr mgr/dashboard/server_port $PORT
ceph config set mgr mgr/dashboard/ssl_server_port $PORT
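For example, binding the dashboard to the host used in this guide (the port value is only an illustration):
# ceph config set mgr mgr/dashboard/server_addr 192.168.2.90
# ceph config set mgr mgr/dashboard/ssl_server_port 8443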
# Check the service URL
ceph mgr services
{
"dashboard": "https://ceph1:8443/"
}
A freshly created dashboard has no users; create one with the ac-user-create command. The command reads the password from a file, so create a password file containing the password first.
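For example (the password here is just a placeholder, choose your own):
echo -n 'YourStrongPassword' > /root/passwordfile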
ceph dashboard ac-user-create {username} -i {password-file} {rolename}
# ceph dashboard ac-user-create admin -i /root/passwordfile administrator
Here we use the quick BlueStore creation method. If there are multiple servers, the keys generated on the monitor node must first be copied to the OSD servers.
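A sketch of that copy step for a hypothetical extra OSD host named osd1 (not needed for the single-node setup in this guide):
# scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring root@osd1:/etc/ceph/
# scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@osd1:/var/lib/ceph/bootstrap-osd/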
sudo ceph-volume lvm create --data {data-path}
# sudo ceph-volume lvm create --data /dev/sdb
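An optional check that the OSD was created and is up:
sudo ceph-volume lvm list
ceph osd tree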
# Create the MDS data directory
mkdir -p /var/lib/ceph/mds/{cluster-name}-{id}
# mkdir -p /var/lib/ceph/mds/ceph-ceph0
ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-ceph0/keyring --gen-key -n mds.ceph0
ceph auth add mds.{id} osd "allow rwx" mds "allow *" mon "allow profile mds" -i /var/lib/ceph/mds/{cluster}-{id}/keyring
#ceph auth add mds.ceph0 osd "allow rwx" mds "allow *" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-ceph0/keyring
Edit ceph.conf and add an mds section:
[mds.{id}]
host = {id}
#[mds.ceph0]
#host = ceph0
ceph-mds --cluster {cluster-name} -i {id} -m {mon-hostname}:{mon-port} [-f]
#ceph-mds --cluster ceph -i ceph0 -m 192.168.2.90:6789
Check the cluster status with ceph -s.