Deploying a specific Ceph version with ceph-deploy

Note:

Ceph 16 (Pacific) and later no longer support ceph-deploy; for those releases, see my other post:

 cephadm快速部署指定版本ceph集群_ggrong0213的博客-CSDN博客

1. Cluster plan:

Hostname  IP               Components
ceph1     192.168.150.120  ceph-deploy, mon, mgr, osd
ceph2     192.168.150.121  mon, mgr, osd
ceph3     192.168.150.122  mon, mgr, osd

2. Operating system:

        CentOS 7

3. Ceph version:

        Ceph 15.2.13 (Octopus)

4. System initialization (run on all three nodes):

        4.1 Configure the EPEL repository:

$ yum install epel-release -y

        4.2 Disable the firewall:

$ systemctl stop firewalld
$ systemctl disable firewalld
$ systemctl status firewalld

        4.3 Configure hostnames:

       Set a permanent static hostname (use ceph1, ceph2, or ceph3 in place of HostName on the corresponding node):

$ hostnamectl --static set-hostname HostName

      Edit the name resolution file:

$ vi /etc/hosts

# Add the following:
ceph1Ip   ceph1
ceph2Ip   ceph2
ceph3Ip   ceph3
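
        With the IPs from the cluster plan in section 1, the entries would look like this (adjust to your own environment):

192.168.150.120   ceph1
192.168.150.121   ceph2
192.168.150.122   ceph3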

        4.4 Configure time synchronization:

        Install the NTP packages:

$ yum -y install ntp ntpdate

       Back up the old configuration:

$ cd /etc && mv ntp.conf ntp.conf.bak

       Use ceph1 as the NTP server. On ceph1, create a new ntp.conf:

$ vi /etc/ntp.conf

# Add the following:
restrict 127.0.0.1
restrict ::1
restrict ceph1IP mask 255.255.255.0
server 127.127.1.0
fudge 127.127.1.0 stratum 8

        On ceph2 and ceph3, create a new ntp.conf:

$ vi /etc/ntp.conf

# Add the following:
server ceph1Ip

         Start the NTP service on ceph1:

$ systemctl start ntpd
$ systemctl enable ntpd
$ systemctl status ntpd

        On every node except ceph1, force a time sync against the server (ceph1):

$ ntpdate ceph1
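
        Optionally, you can confirm that the clients keep tracking ceph1 (not part of the original steps):

$ ntpq -p
# The entry marked with '*' is the currently selected time source; it should point to ceph1.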

        On every node except ceph1, write the time to the hardware clock so it survives a reboot:

$ hwclock -w

        On every node except ceph1, install and start the crontab tool, then add a periodic sync job:

$ yum install -y crontabs
$ chkconfig crond on
$ systemctl start crond
$ crontab -e

# Add the following line:
*/10 * * * * /usr/sbin/ntpdate ceph1Ip

        4.5 Configure passwordless SSH:

        On ceph1, generate a key pair and copy the public key to every host:

$ ssh-keygen -t rsa
$ for i in {1..3}; do ssh-copy-id ceph$i; done
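
        A quick sanity check that passwordless login works (optional, not in the original steps):

$ for i in {1..3}; do ssh ceph$i hostname; done
# Should print ceph1, ceph2 and ceph3 without asking for a password.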

        4.6 Disable SELinux:

$ setenforce 0
$ vi /etc/selinux/config

# Change SELINUX=enforcing to SELINUX=disabled
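
        Alternatively, the same change can be made non-interactively (a sketch, assuming the default SELINUX=enforcing line is present):

$ sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
$ getenforce
Permissive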

        4.7 Configure the Ceph package repository:

$ vi /etc/yum.repos.d/ceph.repo

# Add the following:
[Ceph]
name=Ceph packages for $basearch
baseurl=https://download.ceph.com/rpm-15.2.13/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
 
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-15.2.13/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
 
[ceph-source]
name=Ceph source packages
baseurl=https://download.ceph.com/rpm-15.2.13/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1

        Rebuild the yum cache:

$ yum clean all && yum makecache
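
        You can optionally confirm that version 15.2.13 is visible from the new repository:

$ yum list available ceph --showduplicates | grep 15.2.13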

5. Install Ceph:

        Install Ceph on all hosts:

$ yum -y install ceph

        Verify the installation:

$ ceph -v
ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)

6. Additionally install ceph-deploy on the admin node (ceph1, per the cluster plan):

$ yum -y install ceph-deploy

7. Deploy the mon daemons (run on the admin node ceph1):

       7.1 Create the cluster:

$ cd /etc/ceph
$ ceph-deploy new ceph1 ceph2 ceph3

        If the following error appears:

Traceback (most recent call last):
  File "/usr/bin/ceph-deploy", line 18, in <module>
    from ceph_deploy.cli import main
  File "/usr/lib/python2.7/site-packages/ceph_deploy/cli.py", line 1, in <module>
    import pkg_resources
ImportError: No module named pkg_resources

         Fix (install distribute, which provides pkg_resources):

$ yum install -y wget
$ wget https://pypi.python.org/packages/source/d/distribute/distribute-0.7.3.zip --no-check-certificate
$ yum install -y unzip
$ unzip distribute-0.7.3.zip
$ cd distribute-0.7.3
$ python setup.py install

        7.2 Configure mon_host and public_network:

$ vi /etc/ceph/ceph.conf

# Add the following:
[global]
fsid = f6b3c38c-7241-44b3-b433-52e276dd53c6
mon_initial_members = ceph1, ceph2, ceph3
mon_host = ceph1IP,ceph2IP,ceph3IP
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

public_network = <subnet of ceph1>/24

[mon]
mon_allow_pool_delete = true
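
        With the example IPs from the cluster plan, the network lines would be:

mon_host = 192.168.150.120,192.168.150.121,192.168.150.122
public_network = 192.168.150.0/24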

        7.3 Initialize the monitors and gather the keys:

$ ceph-deploy mon create-initial

        7.4 Copy "ceph.client.admin.keyring" to every node:

$ ceph-deploy --overwrite-conf admin ceph1 ceph2 ceph3

        7.5 Check that the mons were deployed successfully:

$ ceph -s

  cluster:
    id:     8b4fd85a-14c4-4498-a866-30752083647d
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 87s)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

   The mon daemons have been deployed successfully.

8. Deploy the mgr daemons:

$ ceph-deploy mgr create ceph1 ceph2 ceph3

        Check that the mgrs were deployed successfully:

$ ceph -s
  cluster:
    id:     8b4fd85a-14c4-4498-a866-30752083647d
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 2m)
    mgr: ceph1(active, since 4s), standbys: ceph3, ceph2
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

The mgr daemons have been deployed successfully.

9. Deploy the OSDs:

        Confirm the device names (sdX) of the disks on each node:

$ lsblk

NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0   10G  0 disk
sdc               8:32   0   10G  0 disk
sdd               8:48   0   10G  0 disk
sr0              11:0    1  4.4G  0 rom

        Create the OSDs:

$ ceph-deploy osd create ceph1 --data /dev/sdb
$ ceph-deploy osd create ceph1 --data /dev/sdc
$ ceph-deploy osd create ceph1 --data /dev/sdd
$ ceph-deploy osd create ceph2 --data /dev/sdb
$ ceph-deploy osd create ceph2 --data /dev/sdc
$ ceph-deploy osd create ceph2 --data /dev/sdd
$ ceph-deploy osd create ceph3 --data /dev/sdb
$ ceph-deploy osd create ceph3 --data /dev/sdc
$ ceph-deploy osd create ceph3 --data /dev/sdd
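
        The nine commands above can equivalently be written as a loop:

$ for host in ceph1 ceph2 ceph3; do
    for dev in sdb sdc sdd; do
      ceph-deploy osd create $host --data /dev/$dev
    done
  done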

        Verify that the OSDs were deployed successfully:

$ ceph -s

cluster:
    id:     8b4fd85a-14c4-4498-a866-30752083647d
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim
            Module 'restful' has failed dependency: No module named 'pecan'

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 11m)
    mgr: ceph1(active, since 9m), standbys: ceph3, ceph2
    osd: 9 osds: 9 up (since 6s), 9 in (since 6s)

  task status:

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   9.1 GiB used, 81 GiB / 90 GiB avail
    pgs:     1 active+clean

The OSDs have been deployed successfully.

10. Resolve the health warnings:

        10.1 Fix "mons are allowing insecure global_id reclaim":

        Disable the insecure mode:

$ ceph config set mon auth_allow_insecure_global_id_reclaim false

         10.2 Fix "Module 'restful' has failed dependency: No module named 'pecan'":

$ pip3 install pecan
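
        If the warning does not clear after installing pecan, restarting the mgr daemons lets them pick up the new module (a suggestion beyond the original steps, assuming the standard ceph-mgr systemd units):

# run on every mgr node
$ systemctl restart ceph-mgr.target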

        Check the cluster status again:

$ ceph -s
cluster:
    id:     8b4fd85a-14c4-4498-a866-30752083647d
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 3m)
    mgr: ceph1(active, since 3m), standbys: ceph3, ceph2
    osd: 9 osds: 9 up (since 10m), 9 in (since 10m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   9.1 GiB used, 81 GiB / 90 GiB avail
    pgs:     1 active+clean

   The cluster is now healthy.

11. Verify Ceph block storage:

         11.1 Create a storage pool (pg_num and pgp_num of 250):

$ ceph osd pool create vdbench 250 250

        11.2 Mark the pool as block storage (rbd application):

$ ceph osd pool application enable vdbench rbd

        11.3 Set the compression algorithm (zlib) and force compression on:

$ ceph osd pool set vdbench compression_algorithm zlib
$ ceph osd pool set vdbench compression_mode force

        11.4 Set the required compression ratio (compressed data is kept only if it shrinks below this ratio of the original size):

$ ceph osd pool set vdbench compression_required_ratio .99
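
        Optionally, read the settings back to confirm them:

$ ceph osd pool get vdbench compression_algorithm
$ ceph osd pool get vdbench compression_mode
$ ceph osd pool get vdbench compression_required_ratio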

       11.5 Create and map an image:

$ rbd create image1 --size 20G --pool vdbench --image-format 2 --image-feature layering
$ rbd map vdbench/image1
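
        rbd map prints the device node it creates; you can also list mappings afterwards (the dd step below assumes the device is /dev/rbd0):

$ rbd showmapped
# The device column should show /dev/rbd0 for vdbench/image1.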

        11.6 Prepare a 100 MB test file:

$ dd if=/dev/zero of=/home/compress_test bs=1M count=100

        11.7 Write the test file to the RBD device:

$ dd if=/home/compress_test of=/dev/rbd0 bs=1M count=100 oflag=direct

        11.8 Verify compression:

$ ceph df
--- RAW STORAGE ---
CLASS  SIZE    AVAIL   USED     RAW USED  %RAW USED
hdd    90 GiB  80 GiB  684 MiB   9.7 GiB      10.75
TOTAL  90 GiB  80 GiB  684 MiB   9.7 GiB      10.75

--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics   1    1      0 B        0      0 B      0     25 GiB
vdbench                 2  151  100 MiB       29  150 MiB   0.19     25 GiB
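
        For per-pool compression statistics (the USED COMPR / UNDER COMPR columns), ceph df detail gives a more direct view:

$ ceph df detail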
