[ceph] Cluster Setup (CentOS 7-1908)

Part I: Environment Preparation

1 Create the virtual machines

Host PC: 1 TB disk, 16 GB RAM.

VMware Workstation 15.

Create 4 virtual machines, all with bridged networking:

ID        hostname     IP                    node in the figure below
ceph-1    dev-ceph1    192.168.199.175/24    admin-node
ceph-2    dev-ceph2    192.168.199.164/24    node1
ceph-3    dev-ceph3    192.168.199.105/24    node2
ceph-4    dev-ceph4    192.168.199.222/24    node3

[Figure 1: four-node Ceph cluster topology (admin-node, node1, node2, node3)]

2 Configure the package repositories

Yum repos: move the stock /etc/yum.repos.d/CentOS* files into a bp directory as a backup.
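A minimal sketch of that backup step (assumption: bp is just an arbitrary backup directory under /etc/yum.repos.d):

sudo mkdir -p /etc/yum.repos.d/bp
sudo mv /etc/yum.repos.d/CentOS*.repo /etc/yum.repos.d/bp/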

Then create /etc/yum.repos.d/CentOS-Base.repo pointing at the Aliyun mirrors:

[base]
name=CentOS-$releasever - Base - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/os/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/os/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#released updates 
[updates]
name=CentOS-$releasever - Updates - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/updates/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/extras/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/centosplus/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/centosplus/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/centosplus/$basearch/
gpgcheck=1
enabled=0
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#contrib - packages by Centos Users
[contrib]
name=CentOS-$releasever - Contrib - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/contrib/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/contrib/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/contrib/$basearch/
gpgcheck=1
enabled=0
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

Ceph repo (Jewel release).

Create /etc/yum.repos.d/ceph.repo:


[Ceph-SRPMS]
name=Ceph SRPMS packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS/
enabled=1
gpgcheck=0
type=rpm-md

[Ceph-aarch64]
name=Ceph aarch64 packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/aarch64/
enabled=1
gpgcheck=0
type=rpm-md

[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
enabled=1
gpgcheck=0
type=rpm-md

[Ceph-x86_64]
name=Ceph x86_64 packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
enabled=1
gpgcheck=0
type=rpm-md
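With both repo files in place, refresh the yum metadata cache before installing anything (standard yum commands):

sudo yum clean all
sudo yum makecache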

3 Install ceph-deploy on ceph-1

sudo yum update && sudo yum install ceph-deploy

 

4 Install NTP and SSH on all hosts, and create a Ceph user

NTP:

yum install ntp ntpdate ntp-doc
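Installing the packages is only half the job; a sketch of actually enabling time sync (standard systemd/ntp commands, assuming ntpd rather than chronyd):

sudo systemctl enable ntpd
sudo systemctl start ntpd
ntpq -p    # list peers to confirm synchronization is working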

openssh-server was already installed during the earlier yum update.

Create the Ceph deployment user:

On ceph-1, ceph-2, ceph-3, and ceph-4, create the user test-ceph:

[root@dev-ceph2 yum.repos.d]# sudo useradd -d /home/test-ceph -m test-ceph
[root@dev-ceph2 yum.repos.d]# sudo passwd test-ceph
Changing password for user test-ceph.
New password: 
BAD PASSWORD: The password is shorter than 8 characters
Retype new password: 
passwd: all authentication tokens updated successfully.
[root@dev-ceph2 yum.repos.d]# 

Grant test-ceph passwordless sudo:

[root@dev-ceph4 yum.repos.d]# echo "test-ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/test-ceph
test-ceph ALL = (root) NOPASSWD:ALL
[root@dev-ceph4 yum.repos.d]# sudo chmod 0440 /etc/sudoers.d/test-ceph
[root@dev-ceph4 yum.repos.d]# 
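A quick way to confirm the sudoers entry works (run on each node):

su - test-ceph -c 'sudo whoami'    # should print "root" with no password prompt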

5 Passwordless SSH between nodes

ceph-1 is the admin node; create a ceph-admin user on it.

A Generate a key pair

[root@dev-ceph1 ~]# sudo useradd -d /home/ceph-admin -m ceph-admin
[root@dev-ceph1 ~]# sudo passwd ceph-admin
Changing password for user ceph-admin.
New password: 
BAD PASSWORD: The password is shorter than 8 characters
Retype new password: 
passwd: all authentication tokens updated successfully.
[root@dev-ceph1 ~]# su ceph-admin -l
[ceph-admin@dev-ceph1 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ceph-admin/.ssh/id_rsa): 
Created directory '/home/ceph-admin/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/ceph-admin/.ssh/id_rsa.
Your public key has been saved in /home/ceph-admin/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:xU79sJfUL/5srMzy4jylqlfa3RgmQfD2IBTbXKdQ47c ceph-admin@dev-ceph1
The key's randomart image is:
+---[RSA 2048]----+
|          +o..+ .|
|         o *.+ = |
|          *.O + o|
|         + o.B oo|
|        S . ..*E.|
|            oo+. |
|           + *.= |
|          o.*ooo+|
|        .o.oo==oo|
+----[SHA256]-----+
[ceph-admin@dev-ceph1 ~]$ 

B Copy the public key to each node

Copy to ceph-2 (note in the log below that the key is installed for the remote user test-ceph):


[ceph-admin@dev-ceph1 ~]$ ssh-copy-id -i /home/ceph-admin/.ssh/id_rsa.pub  dev-ceph2 
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/ceph-admin/.ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
test-ceph@dev-ceph2's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'dev-ceph2'"
and check to make sure that only the key(s) you wanted were added.

Copy to ceph-3 and ceph-4 in the same way; a sketch follows.
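A sketch of those two copies, with the remote user given explicitly:

ssh-copy-id -i ~/.ssh/id_rsa.pub test-ceph@dev-ceph3
ssh-copy-id -i ~/.ssh/id_rsa.pub test-ceph@dev-ceph4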

On ceph-1, add an SSH config file so ceph-deploy can log in to each node as test-ceph without specifying a username:

[ceph-admin@dev-ceph1 ~]$ cat ~/.ssh/config
Host dev-ceph2
    Hostname dev-ceph2
    User test-ceph
Host dev-ceph3
    Hostname dev-ceph3
    User test-ceph
Host dev-ceph4
    Hostname dev-ceph4
    User test-ceph
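ssh is picky about permissions on this file, and it is worth verifying that key-based login works before running ceph-deploy (a sketch):

chmod 600 ~/.ssh/config
ssh dev-ceph2 whoami    # should print "test-ceph" without asking for a password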

6 Network checks

A Set each host's NIC to start on boot

[ceph-admin@dev-ceph1 ~]$ cat /etc/sysconfig/network-scripts/ifcfg-ens33 
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="dhcp"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="64725794-0775-426d-a6a9-48386e2293e9"
DEVICE="ens33"
ONBOOT="yes"
[ceph-admin@dev-ceph1 ~]$ 

ONBOOT is already yes by default here, so nothing needs to change.

B Verify connectivity by pinging every node from every other node:

ping dev-ceph1
ping dev-ceph2
ping dev-ceph3
ping dev-ceph4

C Open port 6789

Since this is only a test setup, simply disable firewalld (as root):

systemctl stop firewalld

systemctl disable firewalld
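If you prefer not to disable the firewall entirely, opening just the Ceph ports is enough; monitors listen on 6789 and OSDs use the 6800-7300 range (a sketch using firewalld):

sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent
sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
sudo firewall-cmd --reload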

D Terminal (tty) settings on the admin node

This only matters if Defaults requiretty is set in sudoers; we never configured it, so skip this step.

E Set SELinux to permissive

Run on every node:

sudo setenforce 0
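setenforce 0 only lasts until reboot; to make the change persistent, update /etc/selinux/config as well (a sketch):

sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
getenforce    # should now report Permissive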

F Configure package priorities

sudo yum install yum-plugin-priorities
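The plugin only takes effect if it is enabled; a quick check (standard plugin config path):

cat /etc/yum/pluginconf.d/priorities.conf
# expect:
# [main]
# enabled = 1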

 

Part II: Cluster Installation

 

1 Create the cluster

On the admin node ceph-1, switch to the ceph-admin user and create a working directory:

[ceph-admin@dev-ceph1 ~]$ mkdir my-cluster
[ceph-admin@dev-ceph1 ~]$ cd my-cluster/

Running ceph-deploy new -h failed at first: Python could not find the pkg_resources module.

The fix is to install pip. Option 1: download the pip source package from PyPI, extract it, and run python setup.py install; this works but may drag in many other dependencies.

Option 2: search an RPM site for the el7 python-pip package, download it, and install it; its only dependency is the python-setuptools RPM, which the configured repos provide directly. Then run pip install distribute. A sketch follows.
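A sketch of option 2 (the python-pip RPM filename below is hypothetical; use whichever el7 build you downloaded):

sudo yum install python-setuptools                    # provides the missing pkg_resources
sudo rpm -ivh python2-pip-8.1.2-14.el7.noarch.rpm     # hypothetical downloaded RPM
sudo pip install distribute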

On ceph-1, as ceph-admin, create the MON node:

[ceph-admin@dev-ceph1 my-cluster]$ ceph-deploy new dev-ceph2
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph-admin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy new dev-ceph2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['dev-ceph2']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[dev-ceph2][DEBUG ] connected to host: dev-ceph1 
[dev-ceph2][INFO  ] Running command: ssh -CT -o BatchMode=yes dev-ceph2
[dev-ceph2][DEBUG ] connection detected need for sudo
[dev-ceph2][DEBUG ] connected to host: dev-ceph2 
[dev-ceph2][DEBUG ] detect platform information from remote host
[dev-ceph2][DEBUG ] detect machine type
[dev-ceph2][DEBUG ] find the location of an executable
[dev-ceph2][INFO  ] Running command: sudo /usr/sbin/ip link show
[dev-ceph2][INFO  ] Running command: sudo /usr/sbin/ip addr show
[dev-ceph2][DEBUG ] IP addresses found: [u'192.168.199.164']
[ceph_deploy.new][DEBUG ] Resolving host dev-ceph2
[ceph_deploy.new][DEBUG ] Monitor dev-ceph2 at 192.168.199.164
[ceph_deploy.new][DEBUG ] Monitor initial members are ['dev-ceph2']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.199.164']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[ceph-admin@dev-ceph1 my-cluster]$ 
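The generated ceph.conf in my-cluster will look roughly like this (the fsid matches the cluster id that ceph -s reports later; exact keys may vary by ceph-deploy version):

[global]
fsid = 2edcb77b-5bba-4680-bc58-9959f8e59b74
mon_initial_members = dev-ceph2
mon_host = 192.168.199.164
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx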

 

2 Install Ceph

 

On ceph-1, as ceph-admin:

[ceph-admin@dev-ceph1 my-cluster]$ ceph-deploy install dev-ceph1 dev-ceph2 dev-ceph3 dev-ceph4  --no-adjust-repos 

When it completes:

[dev-ceph4][DEBUG ] 
[dev-ceph4][DEBUG ] Complete!
[dev-ceph4][INFO  ] Running command: sudo ceph --version
[dev-ceph4][DEBUG ] ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)

The installation succeeded.

 

3 Initialize the MON

On ceph-1, as ceph-admin:

[ceph-admin@dev-ceph1 my-cluster]$ ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph-admin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : 

[dev-ceph2][INFO  ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-dev-ceph2/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpnBO5hj

Done. The keyrings are now in the working directory:

[ceph-admin@dev-ceph1 my-cluster]$ ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-rgw.keyring  ceph-deploy-ceph.log
ceph.bootstrap-mgr.keyring  ceph.client.admin.keyring   ceph.mon.keyring
ceph.bootstrap-osd.keyring  ceph.conf
[ceph-admin@dev-ceph1 my-cluster]$ 

 

4 Configure the OSD nodes

 

Log in to ceph-3 and ceph-4 and create the OSD directories:

[ceph-admin@dev-ceph1 my-cluster]$ ssh dev-ceph3
[test-ceph@dev-ceph3 ~]$ sudo mkdir /var/local/osd0
[test-ceph@dev-ceph3 ~]$ exit
logout
Connection to dev-ceph3 closed.
[ceph-admin@dev-ceph1 my-cluster]$ ssh dev-ceph4
[test-ceph@dev-ceph4 ~]$ sudo mkdir /var/local/osd1
[test-ceph@dev-ceph4 ~]$ exit
logout
Connection to dev-ceph4 closed.
[ceph-admin@dev-ceph1 my-cluster]$ 

The version installed above was 10.x (Jewel), which does not ship ceph-volume, so this ceph-deploy release cannot configure OSDs against it.

Upgrade all nodes to 14.2 (Nautilus):

[root@dev-ceph4 ~]# yum install centos-release-ceph-nautilus

then:

[root@dev-ceph4 ~]# yum install ceph
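After the upgrade, confirm the new version on each node:

ceph --version
# expect something like: ceph version 14.2.x (...) nautilus (stable)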

 

Then attach a 20 GB disk to each of ceph-3 and ceph-4 (it showed up as /dev/sdc on ceph-3 and /dev/sdb on ceph-4).
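Before handing a disk to ceph-volume, double-check the device name on each OSD node, since it can differ between hosts (as it did here):

lsblk    # the new, empty 20 GB disk should show no partitions or mountpoints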

On ceph-1, as ceph-admin:

 
[ceph-admin@dev-ceph1 my-cluster]$ ceph-deploy osd create dev-ceph3 --data /dev/sdc
 
[ceph-admin@dev-ceph1 my-cluster]$ ceph-deploy osd create dev-ceph4 --data /dev/sdb

Check the OSD status:

[ceph-admin@dev-ceph1 my-cluster]$ ceph-deploy osd list  dev-ceph3 
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph-admin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy osd list dev-ceph3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['dev-ceph3']
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[dev-ceph3][DEBUG ] connection detected need for sudo
[dev-ceph3][DEBUG ] connected to host: dev-ceph3 
[dev-ceph3][DEBUG ] detect platform information from remote host
[dev-ceph3][DEBUG ] detect machine type
[dev-ceph3][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.7.1908 Core
[ceph_deploy.osd][DEBUG ] Listing disks on dev-ceph3...
[dev-ceph3][DEBUG ] find the location of an executable
[dev-ceph3][INFO  ] Running command: sudo /usr/sbin/ceph-volume lvm list
[dev-ceph3][DEBUG ] 
[dev-ceph3][DEBUG ] 
[dev-ceph3][DEBUG ] ====== osd.0 =======
[dev-ceph3][DEBUG ] 
[dev-ceph3][DEBUG ]   [block]       /dev/ceph-2cb00355-7518-416f-861f-d9993dbaa5b3/osd-block-023b630a-2547-420d-a8ab-35f57e8df72c
[dev-ceph3][DEBUG ] 
[dev-ceph3][DEBUG ]       block device              /dev/ceph-2cb00355-7518-416f-861f-d9993dbaa5b3/osd-block-023b630a-2547-420d-a8ab-35f57e8df72c
[dev-ceph3][DEBUG ]       block uuid                Be9InV-48HO-gGwx-MawA-8Lcl-pwRS-Kws2m6
[dev-ceph3][DEBUG ]       cephx lockbox secret      
[dev-ceph3][DEBUG ]       cluster fsid              2edcb77b-5bba-4680-bc58-9959f8e59b74
[dev-ceph3][DEBUG ]       cluster name              ceph
[dev-ceph3][DEBUG ]       crush device class        None
[dev-ceph3][DEBUG ]       encrypted                 0
[dev-ceph3][DEBUG ]       osd fsid                  023b630a-2547-420d-a8ab-35f57e8df72c
[dev-ceph3][DEBUG ]       osd id                    0
[dev-ceph3][DEBUG ]       type                      block
[dev-ceph3][DEBUG ]       vdo                       0
[dev-ceph3][DEBUG ]       devices                   /dev/sdc
[ceph-admin@dev-ceph1 my-cluster]$ 

Check the cluster status on ceph-2:

[root@dev-ceph2 ~]# ceph -s
  cluster:
    id:     2edcb77b-5bba-4680-bc58-9959f8e59b74
    health: HEALTH_WARN
            crush map has legacy tunables (require firefly, min is hammer)
            no active mgr
            1 monitors have not enabled msgr2
 
  services:
    mon: 1 daemons, quorum dev-ceph2 (age 20m)
    mgr: no daemons active
    osd: 2 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     
 

Two of the warnings need attention:

crush map has legacy tunables ...: see https://github.com/rook/rook/issues/3138.
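The usual fix discussed there is to raise the tunables profile; run it from a node with the admin keyring (a sketch; assumption: all clients are new enough to support the optimal profile):

ceph osd crush tunables optimal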

no active mgr means a manager daemon must be deployed. On ceph-1, as ceph-admin:

[ceph-admin@dev-ceph1 my-cluster]$ ceph-deploy mgr create dev-ceph1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph-admin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy mgr create dev-ceph1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('dev-ceph1', 'dev-ceph1')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts dev-ceph1:dev-ceph1
[dev-ceph1][DEBUG ] connection detected need for sudo
[dev-ceph1][DEBUG ] connected to host: dev-ceph1 
[dev-ceph1][DEBUG ] detect platform information from remote host
[dev-ceph1][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.7.1908 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to dev-ceph1
[dev-ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[dev-ceph1][WARNIN] mgr keyring does not exist yet, creating one
[dev-ceph1][DEBUG ] create a keyring file
[dev-ceph1][DEBUG ] create path recursively if it doesn't exist
[dev-ceph1][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.dev-ceph1 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-dev-ceph1/keyring
[dev-ceph1][INFO  ] Running command: sudo systemctl enable ceph-mgr@dev-ceph1
[dev-ceph1][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/[email protected] to /usr/lib/systemd/system/[email protected].
[dev-ceph1][INFO  ] Running command: sudo systemctl start ceph-mgr@dev-ceph1
[dev-ceph1][INFO  ] Running command: sudo systemctl enable ceph.target

 

Then check ceph -s again on ceph-2/3/4:

[root@dev-ceph2 ~]# ceph -s
  cluster:
    id:     2edcb77b-5bba-4680-bc58-9959f8e59b74
    health: HEALTH_WARN
            crush map has legacy tunables (require firefly, min is hammer)
            Reduced data availability: 64 pgs inactive
            1 monitors have not enabled msgr2
 
  services:
    mon: 1 daemons, quorum dev-ceph2 (age 32m)
    mgr: dev-ceph1(active, since 6m)
    osd: 2 osds: 0 up, 0 in
 
  data:
    pools:   1 pools, 64 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             64 unknown
 
[root@dev-ceph2 ~]# 
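Since the osd line still shows 0 up, 0 in, the OSD daemons deserve a closer look (standard commands; run the systemctl one on the OSD host itself):

ceph osd tree                      # shows whether osd.0/osd.1 are up or down
sudo systemctl status ceph-osd@0   # on dev-ceph3, check the daemon and its logs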

The Ceph cluster should now be essentially up, though as noted above the OSDs still have to come up and in. This was my first build, so corrections are welcome; I drew on many references, mainly the official documentation.
