Installing Ceph Jewel with BlueStore

Note: in the Jewel release, BlueStore is experimental and not recommended for production environments.

This guide covers a 3-node or 5-node Ceph installation.
OS: Ubuntu 16.04
Ceph: Jewel 10.2.7
Deployment method: ceph-deploy

Deployment server: hostadmin
Storage nodes: hostname1 hostname2 hostname3
OSD data disk: /dev/xvdb

Preparation:

mkdir -p ~/ceph-cluster
cd ~/ceph-cluster
node1=hostname1
node2=hostname2
node3=hostname3
node4=
node5=
admin=hostadmin
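For a 3-node install, $node4 and $node5 stay empty; because the variables are expanded unquoted in the ceph-deploy commands below, empty ones simply vanish from the argument list. A minimal sketch of this behavior, using the placeholder hostnames from this guide:

```shell
# Placeholder hostnames from this guide; substitute your own.
node1=hostname1; node2=hostname2; node3=hostname3
node4=; node5=                      # left empty for a 3-node cluster

# Collect only the non-empty hostnames, as unquoted expansion would.
nodes=""
for n in "$node1" "$node2" "$node3" "$node4" "$node5"; do
    [ -n "$n" ] && nodes="$nodes $n"
done
echo "deploying to:$nodes"
```
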

Purge any data left over from previous installs:

ceph-deploy purge  $node1 $node2 $node3 $node4 $node5
ceph-deploy purgedata $node1 $node2 $node3 $node4 $node5
ceph-deploy forgetkeys

Create the configuration files for the new cluster:

ceph-deploy new $node1 $node2 $node3 $node4 $node5

Edit ceph.conf and add the following:

filestore_xattr_use_omap = true
enable experimental unrecoverable data corrupting features = bluestore rocksdb
bluestore fsck on mount = true
bluestore block db size = 67108864
bluestore block wal size = 134217728
bluestore block size = 5368709120
osd objectstore = bluestore
[osd]
bluestore = true
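The bluestore block sizes above are specified in bytes. As a sanity check, they decode to a 64 MiB RocksDB partition, a 128 MiB WAL, and a 5 GiB data block:

```shell
# Byte values copied from the ceph.conf snippet above.
db=67108864
wal=134217728
block=5368709120
echo "db:    $((db    / 1024 / 1024)) MiB"
echo "wal:   $((wal   / 1024 / 1024)) MiB"
echo "block: $((block / 1024 / 1024 / 1024)) GiB"
```
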

Deployment commands:

# Install the Ceph packages
ceph-deploy install  $admin
ceph-deploy install  $node1 $node2 $node3 $node4 $node5
# Initialize the monitors
ceph-deploy mon create-initial
# Zap the OSD disks and create BlueStore OSDs on them
ceph-deploy --overwrite-conf osd create --zap-disk --bluestore $node1:/dev/xvdb $node2:/dev/xvdb $node3:/dev/xvdb $node4:/dev/xvdb $node5:/dev/xvdb

## Alternative: create FileStore OSDs on XFS instead
# ceph-deploy --overwrite-conf osd create --zap-disk \
#    $node1:/dev/xvdb $node1:/dev/xvdc \
#    $node2:/dev/xvdb $node2:/dev/xvdc \
#    $node3:/dev/xvdb $node3:/dev/xvdc \
#    $node4:/dev/xvdb $node4:/dev/xvdc \
#    $node5:/dev/xvdb $node5:/dev/xvdc 
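ceph-deploy identifies each OSD by a HOST:DISK argument (HOST:DISK:JOURNAL when a separate journal device is used). A small sketch of how such a spec splits, using shell parameter expansion on a hypothetical spec string:

```shell
# Hypothetical spec in the HOST:DISK form used above.
spec="hostname1:/dev/xvdb"
host=${spec%%:*}     # everything before the first colon
disk=${spec#*:}      # everything after it
echo "host=$host disk=$disk"
```
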

# Push the config and admin key to the nodes
ceph-deploy --overwrite-conf admin $node1 $node2 $node3 $node4 $node5
ceph-deploy --overwrite-conf admin $admin
# Make the admin keyring readable by non-root users
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
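`chmod +r` adds read permission for every class not blocked by the umask, which is what lets non-root clients read the admin keyring. A throwaway demonstration on a temp file (not the real keyring):

```shell
umask 022                 # typical default; +r is filtered through it
f=$(mktemp)
chmod 600 "$f"            # start with owner-only access, like the keyring
chmod +r "$f"             # group and other can now read it too
stat -c '%a' "$f"         # prints 644
rm -f "$f"
```
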
# Deploy RGW for S3-compatible object storage
ceph-deploy rgw create $node1 $node2 $node3 $node4 $node5
# Deploy MDS for CephFS support
ceph-deploy mds create $node1 $node2 $node3 $node4 $node5

TODO:

Multisite deployment, for multi-datacenter support
