Deploying a Multi-Node Ceph Cluster

Introduction to Ceph

Ceph is an open-source, petabyte-scale distributed storage system with excellent performance, reliability, and scalability. It is unique in providing object, block, and file storage from a single unified system, and it is powerful enough to transform a company's IT infrastructure and its ability to manage massive amounts of data. Ceph scales to thousands of clients accessing petabytes, or even exabytes, of data. Ceph nodes are built from commodity hardware and intelligent daemons, and a Ceph storage cluster organizes large numbers of such nodes, which communicate with one another to replicate data and redistribute it dynamically.

This article walks through deploying a Ceph cluster on four CentOS 7 virtual machines for study and experimentation.

Cluster Environment

Four virtual machines running CentOS 7:

Hostname      IP               Role
ceph-admin    192.168.134.128  deploy, mon, mds, rgw
ceph-0        192.168.134.129  osd
ceph-1        192.168.134.130  osd
ceph-2        192.168.134.131  osd

Preparation

Network Configuration (all nodes)

Set the hostname:

sudo vim /etc/sysconfig/network

NETWORKING=yes
HOSTNAME={Hostname}
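
Note that /etc/sysconfig/network is the CentOS 6 convention; on CentOS 7 the hostname is normally set with hostnamectl instead (shown here for ceph-admin as an example, adjust per node):

sudo hostnamectl set-hostname ceph-admin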

Map each IP address to its hostname:

sudo vim /etc/hosts

192.168.134.128   ceph-admin
192.168.134.129   ceph-0
192.168.134.130   ceph-1
192.168.134.131   ceph-2
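
As an optional sanity check, verify that every name resolves and that the nodes can reach each other:

for h in ceph-admin ceph-0 ceph-1 ceph-2; do ping -c 1 $h; done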

Restart the network service:

sudo systemctl restart network

Create a Ceph Deployment User (all nodes)

Create the new user:

sudo useradd -d /home/ceph-deploy -m ceph-deploy
sudo passwd ceph-deploy

Make sure the new user has passwordless sudo privileges on every Ceph node:

echo "ceph-deploy ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
sudo chmod 0440 /etc/sudoers.d/ceph

Switch to the newly created user.
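
The command itself is not shown above; with the user created as shown, it is simply:

su - ceph-deploy

All remaining steps are run as this user unless a command is prefixed with sudo.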

Configure NTP

Install and enable NTP:

sudo yum install ntp ntpdate ntp-doc
sudo systemctl start ntpd
sudo systemctl enable ntpd

Use ceph-admin as the NTP server and the other three nodes (ceph-0/1/2) as clients to keep the cluster's clocks synchronized.
On the ceph-admin node
Edit /etc/ntp.conf:

sudo vim /etc/ntp.conf
### Comment out the default servers
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
### Add the following three lines
server 127.127.1.0 # local clock
fudge 127.127.1.0 stratum 0
restrict 192.168.134.0 mask 255.255.255.0 nomodify notrap

Edit /etc/ntp/step-tickers:

sudo vim /etc/ntp/step-tickers
# List of NTP servers used by the ntpdate service.
# 0.centos.pool.ntp.org   # comment out this line
127.127.1.0               # add this line

Restart NTP with sudo systemctl restart ntpd, then check the service status:

[ceph-deploy@ceph-admin my-cluster]$ sudo ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*LOCAL(0)        .LOCL.           0 l    8   16  377    0.000    0.000   0.000
[ceph-deploy@ceph-admin my-cluster]$ sudo ntpstat
synchronised to local net at stratum 1 
   time correct to within 10 ms
   polling server every 16 s

In the ntpq -p output, the asterisk (*) in front of the peer on the last line indicates that it has been selected as the time source, i.e. the service is working normally.

On the ceph-0/1/2 nodes
Edit /etc/ntp.conf:

sudo vim /etc/ntp.conf
### Comment out the default servers
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
### Add the following line
server 192.168.134.128 # the ceph-admin node

Restart NTP with sudo systemctl restart ntpd, then check the service status:

[ceph-deploy@ceph-0 ~]$ sudo ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*ceph-admin      .LOCL.           1 u   36   64  177    0.241  -170.10 105.870
[ceph-deploy@ceph-0 ~]$ sudo ntpstat
synchronised to NTP server (192.168.134.128) at stratum 2 
   time correct to within 246 ms
   polling server every 64 s
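
If a client's clock starts out far from the server's, ntpd can take a long time to converge; an optional workaround is to force a one-time sync with ntpdate (installed above) on each client before restarting ntpd, assuming the server address used above:

sudo systemctl stop ntpd
sudo ntpdate 192.168.134.128
sudo systemctl start ntpd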

Configure SSH

Install SSH:

sudo yum install openssh-server

Generate an SSH key on the ceph-admin node and copy its public key to each Ceph node:

ssh-keygen # press Enter at every prompt to accept the defaults

ssh-copy-id ceph-deploy@ceph-0
ssh-copy-id ceph-deploy@ceph-1
ssh-copy-id ceph-deploy@ceph-2

Edit ~/.ssh/config so that ceph-deploy can log in to each Ceph node with the new username:

Host ceph-admin
   Hostname ceph-admin
   User ceph-deploy
Host ceph-0
   Hostname ceph-0
   User ceph-deploy
Host ceph-1
   Hostname ceph-1
   User ceph-deploy
Host ceph-2
   Hostname ceph-2
   User ceph-deploy

A possible error and its fix:

[ceph-deploy@ceph-admin ~]$ ssh ceph-admin
Bad owner or permissions on /home/ceph-deploy/.ssh/config
[ceph-deploy@ceph-admin ~]$ ls .ssh/config -lh
-rw-rw-r--. 1 ceph-deploy ceph-deploy 212 Jul 14 09:32 .ssh/config
# fix the permissions on config
chmod 644 ~/.ssh/config

Disable SELinux and firewalld

sudo setenforce 0 # takes effect immediately, until reboot
sudo sed -i 's/SELINUX=.*/SELINUX=permissive/' /etc/selinux/config # persists across reboots
sudo systemctl stop firewalld
sudo systemctl disable firewalld
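
Disabling the firewall is the simplest option for a lab setup. If you would rather keep firewalld running, a sketch of the alternative is to open the ports Ceph uses (6789/tcp for monitors, 6800-7300/tcp for OSDs and other daemons, and 7480/tcp if you use the rgw default port):

sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent
sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
sudo firewall-cmd --zone=public --add-port=7480/tcp --permanent
sudo firewall-cmd --reload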

Make sure your package manager has the priorities/preferences package installed and enabled:

sudo yum install yum-plugin-priorities

Deploy the Cluster

Install the Ceph Deployment Tool

On the ceph-admin node, install and enable the EPEL repository first:

sudo yum install -y yum-utils && sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && sudo yum install --nogpgcheck -y epel-release && sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && sudo rm /etc/yum.repos.d/dl.fedoraproject.org*

Add the Ceph package sources to the yum repositories:

sudo vim /etc/yum.repos.d/ceph.repo

[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=2

[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=2

[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=2

Copy ceph.repo to the other three nodes ceph-0/1/2:

sudo scp /etc/yum.repos.d/ceph.repo ceph-0:/etc/yum.repos.d/ceph.repo
sudo scp /etc/yum.repos.d/ceph.repo ceph-1:/etc/yum.repos.d/ceph.repo
sudo scp /etc/yum.repos.d/ceph.repo ceph-2:/etc/yum.repos.d/ceph.repo

Refresh the repository cache:

sudo yum makecache # very important

Update the package index and install ceph-deploy:

sudo yum update && sudo yum install ceph-deploy
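
A quick check that the tool was installed:

ceph-deploy --version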

Create the Cluster

First, create a directory on the ceph-admin node to hold the configuration files and keys that ceph-deploy generates:

mkdir my-cluster
cd my-cluster

Create the cluster:

ceph-deploy new ceph-admin

Edit ceph.conf, adding the following two lines:

osd pool default size = 3
public network = 192.168.134.0/24
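
For reference, after these two lines are added the file should look roughly like the sketch below; the fsid is generated by ceph-deploy new and will differ on your system, and the auth lines are the defaults it writes:

[global]
fsid = <generated by ceph-deploy new>
mon_initial_members = ceph-admin
mon_host = 192.168.134.128
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 3
public network = 192.168.134.0/24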

Install Ceph manually on all nodes (this is the most important step; it avoids many errors):

sudo yum install ceph ceph-radosgw
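
Once the installation finishes, it is worth confirming that every node ends up on the same release (jewel, given the repository configured above):

ceph --version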

Initialize the monitor(s):

ceph-deploy mon create-initial
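
When this completes successfully, ceph-deploy gathers the monitor and bootstrap keyrings into the working directory; a quick check:

ls -l ceph.conf ceph*.keyring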

Prepare and activate the OSDs:

ceph-deploy osd prepare ceph-0:/dev/sdb ceph-1:/dev/sdb ceph-2:/dev/sdb
ceph-deploy osd activate ceph-0:/dev/sdb1 ceph-1:/dev/sdb1 ceph-2:/dev/sdb1
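
These commands assume each OSD node has an unused second disk at /dev/sdb. If you are not sure of the device names, ceph-deploy can list the disks on each node from the admin node:

ceph-deploy disk list ceph-0 ceph-1 ceph-2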

Push the configuration and admin key to the nodes with ceph-deploy admin, then make sure ceph.client.admin.keyring has the correct permissions:

ceph-deploy admin ceph-admin ceph-0 ceph-1 ceph-2
sudo chmod 644 /etc/ceph/ceph.client.admin.keyring # on all nodes

Check the cluster's health:

[ceph-deploy@ceph-admin my-cluster]$ ceph health
HEALTH_OK
[ceph-deploy@ceph-admin my-cluster]$ ceph -s
    cluster eb684f52-4dd3-4cc1-8011-f0c5b0b4b98f
     health HEALTH_OK
     monmap e1: 1 mons at {ceph-admin=192.168.134.128:6789/0}
            election epoch 3, quorum 0 ceph-admin
     osdmap e14: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v27: 64 pgs, 1 pools, 0 bytes data, 0 objects
            100 MB used, 104 GB / 104 GB avail
                  64 active+clean
[ceph-deploy@ceph-admin my-cluster]$ ceph osd tree
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 0.10258 root default                                      
-2 0.03419     host ceph-0                                   
 0 0.03419         osd.0        up  1.00000          1.00000 
-3 0.03419     host ceph-1                                   
 1 0.03419         osd.1        up  1.00000          1.00000 
-4 0.03419     host ceph-2                                   
 2 0.03419         osd.2        up  1.00000          1.00000 
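
With the cluster reporting HEALTH_OK, an optional smoke test is to write and read back a single object with rados, using the pool shown in the pgmap above (the default rbd pool) as an example:

echo "hello ceph" > testfile.txt
rados put test-object testfile.txt --pool=rbd
rados -p rbd ls
rados -p rbd rm test-object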
