Deploying a Ceph Storage Cluster
Software information:
OS: CentOS Linux release 7.4.1708 (Core)
Kernel: 3.10.0-693.el7.x86_64
Ceph version: 0.94.6
Host information:
ceph210 10.220.128.210 (osd)
ceph211 10.220.128.211 (osd)
ceph212 10.220.128.212 (mon, osd)
This deployment of Ceph 0.94.6 was done without direct Internet access, so yum goes through a proxy. Configure the yum proxy as follows (yum.conf takes a single proxy URL, which is used for both http and https repositories):
# vim /etc/yum.conf
proxy=http://10.63.229.85:3128
exclude=*0.94.10* *0.94.9* *0.94.8* *0.94.7*   (pin the install to Ceph 0.94.6 by excluding the newer point releases)
1. Environment preparation
1.1 Passwordless SSH login between hosts
[root@ceph211 ~]# ssh-keygen -t rsa
[root@ceph211 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]
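Repeat ssh-copy-id for every node so each host can reach the others; a minimal loop sketch (assumes the same interactive root password on all three hosts):
# for h in ceph210 ceph211 ceph212; do ssh-copy-id -i /root/.ssh/id_rsa.pub root@$h; done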
1.2 Add hosts entries and configure hostnames
# vi /etc/hosts
10.220.128.210 ceph210
10.220.128.211 ceph211
10.220.128.212 ceph212
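The hostname itself is set on each node separately (hostnamectl is the standard systemd tool; run it on every host with that host's own name):
# hostnamectl set-hostname ceph212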
1.3 Disable the firewall and SELinux
# systemctl stop firewalld
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# setenforce 0
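To keep the firewall off across reboots as well (an extra step, not in the original notes):
# systemctl disable firewalld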
1.4 Configure the yum sources (a domestic mirror is recommended) and the ceph.repo source
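The repo definition below belongs in a file under the standard yum repo directory; the filename follows the heading above:
# vim /etc/yum.repos.d/ceph.repo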
[Ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-hammer/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-hammer/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-hammer/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
# yum clean all
# yum repolist
# yum makecache
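With the proxy, the version exclude, and the repo in place, a quick sanity check (not in the original notes) that yum resolves ceph to the pinned 0.94.6:
# yum --showduplicates list ceph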
2. Manual Ceph deployment
------ Run the following on the primary node: 10.220.128.212 (ceph212)
2.1 Install Ceph
# yum install ceph -y
# ceph -v
ceph version 0.94.6 (e832001feaf8c176593e0325c8298e3f16dfb403)
2.2 Create the Ceph configuration file
Before writing the configuration file, generate a UUID to serve as the cluster fsid:
# uuidgen
86a1432a-d9ca-4d87-a5c5-6f92c1db0961
Edit the configuration file (the settings below all belong under [global]):
# vim /etc/ceph/ceph.conf
[global]
fsid = 86a1432a-d9ca-4d87-a5c5-6f92c1db0961
mon initial members = ceph212
mon host = 10.220.128.212
public network = 10.220.128.0/24
auth cluster required = cephx
auth service required = cephx
osd journal size = 1024
filestore xattr use omap = true
osd pool default size = 2
osd pool default min size = 1
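Every node needs the same /etc/ceph/ceph.conf; once SSH is set up it can be pushed out, for example:
# scp /etc/ceph/ceph.conf ceph210:/etc/ceph/
# scp /etc/ceph/ceph.conf ceph211:/etc/ceph/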
2.3 Create the mon component
Create the required keyring:
# ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring --gen-key \
>> -n mon. --cap mon 'allow *'
Create the client.admin keyring:
# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring \
>> --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' \
>> --cap osd 'allow *' --cap mds 'allow'
Import the client.admin key into the mon keyring:
# ceph-authtool /etc/ceph/ceph.mon.keyring \
>> --import-keyring /etc/ceph/ceph.client.admin.keyring
Create the mon map:
# monmaptool --create --add ceph212 10.220.128.212 \
>> --fsid 86a1432a-d9ca-4d87-a5c5-6f92c1db0961 /etc/ceph/monmap
Initialize the mon:
# ceph-mon --mkfs -i ceph212 --monmap /etc/ceph/monmap \
>> --keyring /etc/ceph/ceph.mon.keyring
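The upstream Hammer manual-deployment guide also creates the mon data directory before the mkfs step and drops marker files afterwards so the sysvinit script will manage the daemon; the notes skip these, so they are assumed here:
# mkdir -p /var/lib/ceph/mon/ceph-ceph212   (before ceph-mon --mkfs, if the directory does not exist)
# touch /var/lib/ceph/mon/ceph-ceph212/done
# touch /var/lib/ceph/mon/ceph-ceph212/sysvinit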
Start the mon:
# service ceph start mon
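A quick check that the monitor is up and in quorum:
# ceph mon stat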
2.4 Create the osd node
Create the osd (the command returns the new OSD id; here it is 0, referenced as osd.0 below):
# uuidgen
16fe9c78-64b6-47c6-a616-f404ccdadf3a
# ceph osd create 16fe9c78-64b6-47c6-a616-f404ccdadf3a
0
Create the osd data directory:
# mkdir -p /data/osd.0
Partition the disk:
# parted -s /dev/sdb mklabel gpt                    (write a GPT partition label)
# parted -s /dev/sdb mkpart journal xfs 2048s 10G   (first partition: 10G, named journal)
# parted -s /dev/sdb mkpart data xfs 10G 100%       (second partition: the remainder, named data)
# mkfs.xfs -f /dev/sdb1                             (format as xfs)
# mkfs.xfs -f /dev/sdb2                             (format as xfs)
Mount the data partition:
# mount /dev/sdb2 /data/osd.0
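For the mount to survive a reboot it also needs an fstab entry; a sketch (the /dev/sdb2 path is assumed stable here, a by-partuuid path would be safer):
# echo '/dev/sdb2 /data/osd.0 xfs defaults 0 0' >> /etc/fstab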
Add the following to the ceph configuration file:
[osd.0]
host = ceph212
uuid = 16fe9c78-64b6-47c6-a616-f404ccdadf3a
devs = /dev/disk/by-partuuid/8fde5b17-0dee-430e-8620-357110b13c8d
osd journal = /dev/disk/by-partuuid/7124b95c-470f-4332-853f-40a7f77c82e6
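The by-partuuid links above are specific to this machine and can be read off the newly created partitions, for example:
# ls -l /dev/disk/by-partuuid/
Note that with the non-default data path used here, ceph-osd must also be told where its data directory lives (e.g. an osd data = /data/osd.$id entry in an [osd] section); the original notes presumably rely on such a setting.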
Initialize the osd data directory:
# ceph-osd -i 0 --mkfs --mkkey \
>> --osd-uuid 16fe9c78-64b6-47c6-a616-f404ccdadf3a
Add the osd key to the cluster (the id created above is 0):
# ceph auth add osd.0 osd 'allow *' mon 'allow profile osd' \
>> -i /data/osd.0/keyring
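The upstream guide also registers the new OSD in the CRUSH map before starting it; the notes omit this step, so treat it as an assumed supplement (the weight 1.0 is arbitrary):
# ceph osd crush add-bucket ceph212 host
# ceph osd crush move ceph212 root=default
# ceph osd crush add osd.0 1.0 host=ceph212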
Start the osd:
# service ceph start osd.0
Check the cluster status:
# ceph -s
[At this point, manual deployment of the Ceph cluster on this node is complete; repeat the steps in section 2.4 on ceph210 and ceph211 to bring up the remaining OSDs listed in the host table.]