Setting up a Ceph distributed cluster with ceph-deploy
ceph-deploy install usage: the ceph-deploy host drives the Ceph installation on every cluster node, as shown in the diagram below.
graph LR
ceph-deploy-->ceph-node1
ceph-deploy-->ceph-node2
ceph-deploy-->ceph-node3
Because of environment constraints, the ceph-deploy role is placed on ceph-node1 in this lab, so the final role layout is the diagram above with ceph-deploy co-located on ceph-node1. Every node has the same disk layout:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 40G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 39G 0 part
├─centos-root 253:0 0 37G 0 lvm /
└─centos-swap 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 200G 0 disk
sdc 8:32 0 200G 0 disk
sdd 8:48 0 200G 0 disk
sr0 11:0 1 1024M 0 rom
nvme0n1 259:0 0 99G 0 disk
├─nvme0n1p1 259:1 0 33G 0 part
├─nvme0n1p2 259:2 0 33G 0 part
└─nvme0n1p3 259:3 0 33G 0 part
Switch every node to the Aliyun base, EPEL and Ceph mimic repositories:
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all
yum makecache
yum install -y https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/ceph-release-1-1.el7.noarch.rpm
yum clean all
yum makecache
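Before installing anything, it is worth confirming that the new repositories are enabled; yum repolist is the standard check:
yum repolist enabled | grep -iE 'ceph|epel|base'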
Install chrony on every node for time synchronization:
yum install -y chrony
vim /etc/chrony.conf
server ntp1.aliyun.com iburst
systemctl enable chronyd.service
systemctl start chronyd.service
chronyc sources
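If you want to double-check that time is really syncing against ntp1.aliyun.com, chronyc tracking (standard chrony tooling) reports the current reference source and offset on each node:
chronyc tracking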
Add the cluster hostnames to /etc/hosts on every node:
vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.50 ceph-node1
192.168.10.51 ceph-node2
192.168.10.52 ceph-node3
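Before moving on, make sure every hostname resolves from every node; a minimal check (ping is just one option):
for h in ceph-node1 ceph-node2 ceph-node3; do ping -c 1 $h; done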
Disable the firewall and SELinux on every node:
systemctl disable firewalld
systemctl stop firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
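A quick way to confirm both changes took effect (getenforce will only report Disabled after a reboot):
getenforce                      # expected: Permissive (Disabled after a reboot)
systemctl is-active firewalld   # expected: inactive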
Set the hostname on each node (run the matching command on its node):
hostnamectl set-hostname --static ceph-node1
hostnamectl set-hostname --static ceph-node2
hostnamectl set-hostname --static ceph-node3
Generate an SSH key pair on ceph-node1:
ssh-keygen
Distribute the public key to every node by running the following on ceph-node1:
ssh-copy-id root@ceph-node1
ssh-copy-id root@ceph-node2
ssh-copy-id root@ceph-node3
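To confirm passwordless SSH now works from ceph-node1 to every node (hostnames as defined in /etc/hosts above):
for node in ceph-node1 ceph-node2 ceph-node3; do ssh root@$node hostname; done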
Install ceph-deploy and its dependencies on ceph-node1:
yum install -y python2-pip deltarpm
yum install -y ceph-deploy
Check the ceph-deploy version; it should be 2.0.1:
ceph-deploy --version
If you run into trouble at any point and want to start over, the following commands wipe the configuration:
ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy purgedata ceph-node1 ceph-node2 ceph-node3
ceph-deploy forgetkeys
The following command removes the Ceph packages as well:
ceph-deploy purge {ceph-node} [{ceph-node}]
For example:
ceph-deploy purge ceph-node1 ceph-node2 ceph-node3
ceph-deploy forgetkeys
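Note that after ceph-deploy purge the Ceph packages must be reinstalled before redeploying. When starting over, it also helps to remove the files ceph-deploy generated locally:
#run only inside the ceph-deploy working directory
rm ceph.*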
On ceph-node1, create a working directory to hold the cluster configuration files:
mkdir cephcluster
cd cephcluster
pwd
/root/cephcluster
Initialize the new cluster, using all three nodes as the initial monitors:
ceph-deploy new ceph-node1 ceph-node2 ceph-node3
When it finishes, the following three files have been generated:
-rw-r--r-- 1 root root 253 Jun 2 18:47 ceph.conf
-rw-r--r-- 1 root root 5150 Jun 2 18:47 ceph-deploy-ceph.log
-rw------- 1 root root 73 Jun 2 18:47 ceph.mon.keyring
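The generated ceph.conf already contains a [global] section that looks roughly like the sketch below (the fsid is a placeholder; the real value is a UUID generated by ceph-deploy):
[global]
fsid = <generated-cluster-uuid>
mon_initial_members = ceph-node1, ceph-node2, ceph-node3
mon_host = 192.168.10.50,192.168.10.51,192.168.10.52
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx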
Edit ceph.conf and add the following settings (the first block goes under the existing [global] section):
public network = 192.168.10.0/24
mon clock drift allowed = 2
mon clock drift warn backoff = 30
max open files = 131072
[mon]
#Added so that pools can be deleted later on
mon allow pool delete = true
[osd]
osd mkfs type = xfs
filestore max sync interval = 15
filestore min sync interval = 10
[client]
rbd cache = true
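These settings only exist in the local working copy for now; the steps below distribute them. If ceph.conf is changed again after the cluster is up, it can be re-pushed from the working directory with ceph-deploy (the affected daemons then need a restart):
ceph-deploy --overwrite-conf config push ceph-node1 ceph-node2 ceph-node3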
Install Ceph on the cluster nodes:
ceph-deploy install --no-adjust-repos --release mimic ceph-node1 ceph-node2 ceph-node3
#If repo auto-adjustment had not been disabled and the official upstream repos were used instead, the command above could fail in all sorts of ways.
#In that case, run yum install -y ceph ceph-radosgw on each node instead; with an unreliable network this approach is often more effective.
Configure the initial monitor(s) and gather all the keys:
ceph-deploy mon create-initial
Push the configuration file and admin keyring to every node:
ceph-deploy admin ceph-node1 ceph-node2 ceph-node3
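Every node now has /etc/ceph/ceph.conf and the admin keyring, so cluster status can be checked from any of them:
ceph -s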
Create the journals and OSDs on the three nodes.
#filestore: each data disk gets a journal partition on the NVMe device
for node in ceph-node1 ceph-node2 ceph-node3; do
  ceph-deploy osd create --filestore --data /dev/sdb --journal /dev/nvme0n1p1 $node
  ceph-deploy osd create --filestore --data /dev/sdc --journal /dev/nvme0n1p2 $node
  ceph-deploy osd create --filestore --data /dev/sdd --journal /dev/nvme0n1p3 $node
done
#bluestore: no journal partition is needed; just point ceph-deploy at the disks to initialize
for node in ceph-node1 ceph-node2 ceph-node3; do
  ceph-deploy osd create --data /dev/sdb $node
  ceph-deploy osd create --data /dev/sdc $node
done
If the disks already contain partitions or data, wipe them first with:
ceph-deploy disk zap {osd-server-name} {disk-name}
For example, to wipe sdb on ceph-node1:
ceph-deploy disk zap ceph-node1 /dev/sdb
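Once the OSDs have been created, confirm that they are all up and in:
ceph osd tree
ceph -s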
Create a metadata server so the cluster can later be used as a file server (CephFS):
ceph-deploy mds create ceph-node1
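The MDS stays in standby until a CephFS file system exists. A minimal sketch for creating one later, once the cluster is healthy (the pool names and PG counts are illustrative assumptions, not from the original text):
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 32
ceph fs new cephfs cephfs_metadata cephfs_data
ceph mds stat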
#When creating mgr daemons, specifying several nodes means elections will take place among them, but it gives high availability; specifying only one node leaves a single point of failure
ceph-deploy mgr create ceph-node1 ceph-node2 ceph-node3
Enable the dashboard module and set the login credentials:
ceph mgr module enable dashboard
ceph dashboard create-self-signed-cert
ceph dashboard set-login-credentials admin admin
ceph mgr services
{
"dashboard": "https://ceph-node1:8443/"
}
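As a final smoke test, and because rbd cache was enabled in the [client] section above, a small RBD pool and image can be created (pool name, image name and size are illustrative):
ceph osd pool create rbd 64
rbd pool init rbd
rbd create rbd/test-img --size 1G
rbd ls rbd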