Three hosts running CentOS 7.3; make sure their clocks are synchronized.
Configure the yum repositories and disable firewalld and SELinux (example commands below).
Each host provides three disks: /dev/sdb, /dev/sdc and /dev/sdd.
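A minimal sketch of these prerequisite steps, to be run on all three hosts (assuming chrony handles time synchronization; adapt to your environment):
# systemctl stop firewalld && systemctl disable firewalld
# setenforce 0
# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# yum install chrony -y && systemctl enable chronyd && systemctl start chronyd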
ceph01:172.16.8.43/16
ceph02:172.16.8.44/16
ceph03:172.16.8.45/16
Add static name resolution for all three hosts to /etc/hosts on each machine.
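Using the addresses above, the /etc/hosts entries on each node would look like:
172.16.8.43 ceph01
172.16.8.44 ceph02
172.16.8.45 ceph03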
Install the Ceph packages on each of the three machines:
yum install ceph ceph-radosgw -y
On the deployment node ceph01, generate an SSH key pair and copy the public key to ceph02 and ceph03:
[root@ceph01 ~]# ssh-keygen
[root@ceph01 ~]# ssh-copy-id root@ceph02
[root@ceph01 ~]# ssh-copy-id root@ceph03
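A quick optional check that passwordless login works before continuing:
[root@ceph01 ~]# ssh ceph02 hostname
[root@ceph01 ~]# ssh ceph03 hostname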
Install ceph-deploy on the deployment node:
yum install ceph-deploy -y
Check the versions:
[root@ceph01 ~]# ceph-deploy --version
1.5.25
[root@ceph01 ~]# ceph -v
ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)
Create a deployment directory on the deployment node and start the deployment:
[root@ceph01 ~]# mkdir /root/cluster
[root@ceph01 ~]# cd cluster/
[root@ceph01 cluster]# ceph-deploy new ceph01 ceph02 ceph03
[root@ceph01 cluster]# ls
ceph.conf ceph.log ceph.mon.keyring
Add a public_network entry to ceph.conf that matches your own IP configuration:
[root@ceph01 cluster]# echo public_network=172.16.0.0/16 >>ceph.conf
[root@ceph01 cluster]# echo "mon_clock_drift_allowed = 2" >> ceph.conf
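After these two additions ceph.conf should look roughly like the following (the fsid is the one this cluster reports in ceph -s below; the lines written by ceph-deploy new may vary slightly between versions):
[global]
fsid = 26473e64-48f6-40d6-a8d2-7759aa147839
mon_initial_members = ceph01, ceph02, ceph03
mon_host = 172.16.8.43,172.16.8.44,172.16.8.45
public_network=172.16.0.0/16
mon_clock_drift_allowed = 2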
If you are deploying on virtual machines, take a snapshot before deploying the monitors.
[root@ceph01 cluster]# ceph-deploy mon create-initial
Note: if the monitor deployment fails, the ceph-deploy version may be too old. Roll back to the snapshot and try reinstalling ceph-deploy from the yum repository below:
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
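One way to apply it (the file name is only a suggestion): save the stanza above as /etc/yum.repos.d/ceph.repo, then refresh the cache and update ceph-deploy:
# yum clean all && yum makecache
# yum update ceph-deploy -y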
Deploy the OSDs:
# ceph-deploy --overwrite-conf osd prepare ceph01:/dev/sdb ceph01:/dev/sdc ceph01:/dev/sdd ceph02:/dev/sdb ceph02:/dev/sdc ceph02:/dev/sdd ceph03:/dev/sdb ceph03:/dev/sdc ceph03:/dev/sdd --zap-disk
# ceph-deploy --overwrite-conf osd activate ceph01:/dev/sdb1 ceph01:/dev/sdc1 ceph01:/dev/sdd1 ceph02:/dev/sdb1 ceph02:/dev/sdc1 ceph02:/dev/sdd1 ceph03:/dev/sdb1 ceph03:/dev/sdc1 ceph03:/dev/sdd1
[root@ceph01 cluster]# ceph -s
    cluster 26473e64-48f6-40d6-a8d2-7759aa147839
     health HEALTH_WARN
            too few PGs per OSD (21 < min 30)
     monmap e1: 3 mons at {ceph01=172.16.8.43:6789/0,ceph02=172.16.8.44:6789/0,ceph03=172.16.8.45:6789/0}
            election epoch 16, quorum 0,1,2 ceph01,ceph02,ceph03
     osdmap e46: 9 osds: 9 up, 9 in
            flags sortbitwise,require_jewel_osds
      pgmap v94: 64 pgs, 1 pools, 0 bytes data, 0 objects
            972 MB used, 100304 MB / 101276 MB avail
                  64 active+clean
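The warning is simple arithmetic: the default rbd pool has 64 PGs, and with 3 replicas spread over 9 OSDs that is 64 × 3 / 9 ≈ 21 PGs per OSD, below the minimum of 30. Raising pg_num to 128 gives roughly 42 PGs per OSD, which clears this warning: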
[root@ceph01 cluster]# ceph osd pool set rbd pg_num 128
set pool 0 pg_num to 128
[root@ceph01 cluster]# ceph -s
    cluster 26473e64-48f6-40d6-a8d2-7759aa147839
     health HEALTH_WARN
            pool rbd pg_num 128 > pgp_num 64
     monmap e1: 3 mons at {ceph01=172.16.8.43:6789/0,ceph02=172.16.8.44:6789/0,ceph03=172.16.8.45:6789/0}
            election epoch 16, quorum 0,1,2 ceph01,ceph02,ceph03
     osdmap e48: 9 osds: 9 up, 9 in
            flags sortbitwise,require_jewel_osds
      pgmap v99: 128 pgs, 1 pools, 0 bytes data, 0 objects
            975 MB used, 100301 MB / 101276 MB avail
                 128 active+clean
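The remaining warning is because pgp_num still lags behind pg_num; bringing it up to match (a step not shown in the transcript above) should return the cluster to HEALTH_OK:
[root@ceph01 cluster]# ceph osd pool set rbd pgp_num 128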
At this point the cluster deployment is complete.
Pushing config changes: do not edit /etc/ceph/ceph.conf directly on individual nodes. Instead, edit the copy in the deployment directory on the deployment node (here ceph01:/root/cluster/ceph.conf). Once the cluster grows to dozens of nodes it is impractical to edit each one by hand; pushing the config is both quicker and safer.
After editing, run the following command to push the conf file to every node:
# ceph-deploy --overwrite-conf config push ceph01 ceph02 ceph03
After the push, restart the monitor service on each node.
How to start and stop the mon & osd daemons:
# ceph01 is the hostname of the node the monitor runs on
# systemctl start [email protected]
# systemctl restart [email protected]
# systemctl stop [email protected]
0 is the id of an OSD on that node; OSD ids can be listed with 'ceph osd tree':
# systemctl start/stop/restart [email protected]
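For convenience, all three monitors can be restarted from the deployment node in one go; a rough sketch, relying on the passwordless ssh set up earlier:
# for h in ceph01 ceph02 ceph03; do ssh $h "systemctl restart ceph-mon@$h"; done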
[root@ceph01 ~]# ceph osd tree
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.09627 root default
-2 0.03209     host ceph01
 0 0.01070         osd.0        up  1.00000          1.00000
 1 0.01070         osd.1        up  1.00000          1.00000
 2 0.01070         osd.2        up  1.00000          1.00000
-3 0.03209     host ceph02
 3 0.01070         osd.3        up  1.00000          1.00000
 4 0.01070         osd.4        up  1.00000          1.00000
 5 0.01070         osd.5        up  1.00000          1.00000
-4 0.03209     host ceph03
 6 0.01070         osd.6        up  1.00000          1.00000
 7 0.01070         osd.7        up  1.00000          1.00000
 8 0.01070         osd.8        up  1.00000          1.00000
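Reading the tree together with the systemctl commands above: osd.3, for example, lives on ceph02, so restarting just that daemon can be done on ceph02 directly, or over ssh from the deployment node:
# ssh ceph02 systemctl restart ceph-osd@3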