Ceph Deployment
Environment: three CentOS 7.1 hosts; kernel 3.10.0-229.el7.x86_64
IP           | Role            | Hostname
192.168.1.10 | osd, mon, admin | ceph-01
192.168.1.11 | osd, mon        | ceph-02
192.168.1.12 | osd, mon        | ceph-03
1. Configure passwordless authentication from the admin node to the OSD nodes (SSH keys)
1.1 Change the hostname (do this on all three hosts)
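On the first node, for example, the hostname change might look like this (a sketch; the other two nodes use ceph-02 and ceph-03 respectively):
#hostnamectl set-hostname ceph-01
#hostname    # verify; it should print the new name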
ceph-01
1.2 Edit the hosts file so the nodes can reach each other by hostname
1.2.1 Do the following on the admin node (to enable passwordless login to all OSD nodes)
192.168.1.10 ceph-01
192.168.1.11 ceph-02
192.168.1.12 ceph-03
1.2.2 Do the following on the other OSD and mon nodes (adjust the entries according to the plan)
192.168.1.11 ceph-02
1.2.3 Ping test (admin node)
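The output below comes from pinging each hostname from the admin node, for example:
#ping ceph-01    # press Ctrl-C after a couple of replies; repeat for ceph-02 and ceph-03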
PING ceph-01 (192.168.1.10) 56(84) bytes of data.
64 bytes from ceph-01 (192.168.1.10): icmp_seq=1 ttl=64 time=0.036 ms
64 bytes from ceph-01 (192.168.1.10): icmp_seq=2 ttl=64 time=0.028 ms
^C
--- ceph-01 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.028/0.032/0.036/0.004 ms
PING ceph-02 (192.168.1.11) 56(84) bytes of data.
64 bytes from ceph-02 (192.168.1.11): icmp_seq=1 ttl=64 time=0.248 ms
64 bytes from ceph-02 (192.168.1.11): icmp_seq=2 ttl=64 time=0.250 ms
^C
--- ceph-02 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.248/0.249/0.250/0.001 ms
PING ceph-03 (192.168.1.12) 56(84) bytes of data.
64 bytes from ceph-03 (192.168.1.12): icmp_seq=1 ttl=64 time=0.174 ms
64 bytes from ceph-03 (192.168.1.12): icmp_seq=2 ttl=64 time=0.172 ms
^C
--- ceph-03 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.172/0.173/0.174/0.001 ms
1.2.4 Add a whitelist entry (do this on all nodes; in /etc/hosts.allow, 192.168.1.10 is the admin node's IP)
sshd:192.168.1.10:allow
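One way to append this entry on every node (assuming the standard /etc/hosts.allow path):
#echo "sshd:192.168.1.10:allow" >> /etc/hosts.allow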
1.2.5 Configure the base environment: disable the firewall and set SELinux (do this on all nodes)
SELINUX=disabled
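The corresponding commands could look like the following sketch (SELINUX=disabled above is set in /etc/selinux/config and fully takes effect after a reboot):
#systemctl stop firewalld && systemctl disable firewalld
#setenforce 0    # switch SELinux to permissive for the current boot
#sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config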
1.2.6 Configure passwordless authentication
1.2.6.1 Edit the SSH configuration file (all nodes)
PermitRootLogin yes    #allow root login (this can be commented out again once deployment is finished; leaving root login open is a real risk)
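After editing /etc/ssh/sshd_config, the service has to be restarted for the change to take effect, e.g.:
#systemctl restart sshd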
1.2.6.2 Generate the key pair (admin node)
ssh-keygen    #just press Enter at every prompt
1.2.6.3 Copy the public key to each Ceph node (admin node)
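This was presumably done with ssh-copy-id (or by appending ~/.ssh/id_rsa.pub to each node's authorized_keys), for example:
#ssh-copy-id root@ceph-01
#ssh-copy-id root@ceph-02
#ssh-copy-id root@ceph-03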
Verify (ssh from the admin node to each node; no password prompt should appear):
Last login: Tue Mar 20 09:46:13 2018 from 192.168.1.10
Last login: Tue Mar 20 09:45:40 2018 from 192.168.1.10
Last login: Mon Mar 19 16:01:10 2018 from 192.168.1.10
2. Configure NTP (all nodes)
2.1 Install ntp with yum
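Assuming the configured repositories provide the package, this is simply:
#yum install -y ntp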
2.2 Back up ntp.conf and specify the NTP servers (here ceph-01 and ceph-02 are set up as the NTP servers)
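The backup might be as simple as:
#cp /etc/ntp.conf /etc/ntp.conf.bak
The edited /etc/ntp.conf then contains: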
driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1
restrict 0.0.0.0 mask 0.0.0.0 nomodify notrap
server 192.168.1.10 prefer
server 192.168.1.11
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
disable monitor
2.3 Restart the service and enable it at boot
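For example:
#systemctl restart ntpd
#systemctl enable ntpd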
2.4 Write the system time to the hardware clock
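For example:
#hwclock -w    # write the current system time to the hardware clock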
2.5 Verify
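The peer status below is the output of:
#ntpq -p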
remote refid st t when poll reach delay offset jitter
==============================================================================
*192.168.1.10 LOCAL(0) 2 u 254 1024 377 0.209 -0.475 1.544
192.168.1.11 .INIT. 16 u - 1024 0 0.000 0.000 0.000
3. Deploy the Ceph environment
3.1 Add partitions; here a single host with one added disk is used as the example
#sdb: 500 GB SSD, used here as the journal disk
#sdd: 4 TB spinning disk, used as the data disk
#for i in {b..m}; do parted -s /dev/sd${i} mklabel gpt; done
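The partitions shown in the listing below could then be created with something like the following (the 14G journal size is taken from that listing; treat the exact bounds as an assumption):
#parted -s /dev/sdb mkpart primary 1 14G      # journal partition on the SSD
#parted -s /dev/sdd mkpart primary 1 100%     # data partition spanning the whole disk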
Check with lsblk:
sdb 8:16 0 446.6G 0 disk
└─sdb1 8:17 0 14G 0 part
sdd 8:48 0 3.7T 0 disk
└─sdd1 8:49 0 3.7T 0 part
3.2 Configure the yum repository; a self-built repository is used here (all nodes)
#If you are not sure how to build one, see the previous post, "Building a local Ceph yum repository"
3.2.1 View the yum repo file
[Ceph-10.2.9]
name=Ceph-10.2.9
baseurl=http://<yum-server-IP>/yum/x86_64/ceph
gpgcheck=0
enabled=1
#The Aliyun mirror can also be used instead
3.2.2 Clear the old cache and build a new yum cache
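For example:
#yum clean all
#yum makecache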
3.3 Start the installation
3.3.1 Kernel tuning (all nodes)
3.3.2 Deploy on the admin node
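The admin node needs the ceph-deploy tool; assuming the configured repository (or a companion noarch/EPEL repo) carries the package, this is likely just:
#yum install -y ceph-deploy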
3.3.3 Deploy on the OSD nodes
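With a self-built repository, a common route is to install the ceph packages on each OSD node directly (ceph-deploy install <node> would also work, but by default it tries to configure the upstream repos):
#yum install -y ceph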
3.3.4 Initialize the mons
3.3.4.1 Step 1: these operations must be run inside the my-cluster directory
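A sketch of this step, assuming the working directory is ~/my-cluster and all three mons are declared up front (the monmap later shows three mons):
#mkdir ~/my-cluster && cd ~/my-cluster
#ceph-deploy new ceph-01 ceph-02 ceph-03
After this the directory contains: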
ceph.conf ceph-deploy-ceph.log ceph.mon.keyring
#This creates the default configuration file ceph.conf; edit it if you have additional requirements
3.3.4.2 Step 2 of the initialization
#The --overwrite-conf flag overwrites the previous configuration
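The command for this step is presumably:
#ceph-deploy --overwrite-conf mon create-initial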
3.4 Add OSDs (in the prepare command, the first partition holds the data, the second holds the journal)
The journal partition must be owned by ceph:ceph
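For example, for the journal partition used below:
#chown ceph:ceph /dev/sdb1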
Prepare the OSD
[root@ceph-01 my-cluster]# ceph-deploy --overwrite-conf osd prepare ceph-01:/dev/sdd1:/dev/sdb1
Activate the OSD
[root@ceph-01 my-cluster]# ceph-deploy --overwrite-conf osd activate ceph-01:/dev/sdd1:/dev/sdb1
Check the cluster status (ceph -s):
cluster fba764dc-998a-4acb-ac23-2d0a405e59f7
health HEALTH_OK
monmap e1: 3 mons at {ceph-01=192.168.1.10:6789/0,ceph-02=192.168.1.11:6789/0,ceph-03=192.168.1.12:6789/0}
election epoch 6, quorum 0,1,2 ceph-01,ceph-02,ceph-03
osdmap e21: 3 osds: 3 up, 3 in
flags sortbitwise,require_jewel_osds
pgmap v46: 64 pgs, 1 pools, 181 bytes data, 1 objects
100 MB used, 11170 GB / 11171 GB avail
64 active+clean
Quick disk formatting (partition first, then create the filesystem):
sgdisk -o /dev/sdd                          # wipe any existing partition table and write a fresh GPT
parted -s /dev/sdd mklabel gpt
parted -s /dev/sdd mkpart primary 1 100%
mkfs.xfs -f -i size=2048 /dev/sdd1