Ceph Cookbook Study Notes (Part 1): Building a Ceph Cluster on Virtual Infrastructure

Setting Up a Virtual Infrastructure

1. Prepare the software:

Oracle VirtualBox: an open-source virtualization package

root@ceph-cookbook# sudo apt-get install virtualbox
root@ceph-cookbook# VBoxManage -v
5.1.38_Ubuntur122592

Vagrant: a tool for creating virtual development environments

root@ceph-cookbook# sudo apt-get install vagrant
root@ceph-cookbook# vagrant -v
Vagrant 1.8.1

Git: a distributed version control system

root@ceph-cookbook# sudo apt-get install git
root@ceph-cookbook# git --version
git version 2.7.4

2. Create the virtual machines

Clone the repository locally

root@ceph-cookbook# git clone https://github.com/ksingh7/ceph-cookbook.git
root@ceph-cookbook# cd ceph-cookbook

Take a look at the Vagrantfile. It is Vagrant's configuration file and drives VirtualBox to provision the virtual machines, so the initial environment comes up with very little effort. Then start the virtual machines:

root@ceph-cookbook# vagrant up ceph-node1 ceph-node2 ceph-node3 

The launch fails: Vagrant needs to reach the Internet to fetch two box images, centos7-standard and openstack. So we download the two images manually from their links and add them to Vagrant by hand; here the two box files are placed in a dropbox directory at the same level as ceph-cookbook. The network-drive links for the two images:

openstack.box (extraction code: 4a5f)

centos7-standard.box (extraction code: fiww)
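
To confirm exactly which box names the Vagrantfile refers to (and therefore which names to pass to vagrant box add), a quick grep is enough (a hedged convenience check, not part of the book's steps):

root@ceph-cookbook# grep -n "vm.box" Vagrantfile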

Manually add the box files to Vagrant and start the virtual machines.

root@ceph-cookbook# vagrant box add openstack ../dropbox/openstack.box
root@ceph-cookbook# vagrant box add centos7-standard ../dropbox/centos7-standard.box
root@ceph-cookbook# vagrant box list
centos7-standard (virtualbox, 0)
openstack        (virtualbox, 0)
root@ceph-cookbook# vagrant up ceph-node1 ceph-node2 ceph-node3 
root@ceph-cookbook# vagrant status ceph-node1 ceph-node2 ceph-node3
Current machine states:

ceph-node1                running (virtualbox)
ceph-node2                running (virtualbox)
ceph-node3                running (virtualbox)

The username and password that Vagrant uses to provision the virtual machines are both vagrant, and the vagrant user has sudo privileges. The default root password is vagrant as well. Log in to the virtual machines and check the hostname, IP, and disk configuration.
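
For example, after vagrant ssh ceph-node1 (a quick sanity check; the interface name and disk names below are assumptions based on the layout this Vagrantfile typically provisions):

[vagrant@ceph-node1 ~]$ hostname                  # should report ceph-node1
[vagrant@ceph-node1 ~]$ ip -4 addr show enp0s8    # the private network address used by the cluster
[vagrant@ceph-node1 ~]$ lsblk                     # sdb, sdc and sdd are the spare disks reserved for OSDs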

Configure passwordless SSH from ceph-node1 to ceph-node2 and ceph-node3

root@ceph-cookbook# vagrant ssh ceph-node1
[vagrant@ceph-node1 ~]$ sudo su -
[root@ceph-node1 ~]# ssh-keygen
[root@ceph-node1 ~]# ssh-copy-id root@ceph-node2
[root@ceph-node1 ~]# ssh-copy-id root@ceph-node3
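
A quick check that key-based login now works (a small extra verification):

[root@ceph-node1 ~]# ssh root@ceph-node2 hostname    # should print ceph-node2 without prompting for a password
[root@ceph-node1 ~]# ssh root@ceph-node3 hostname    # should print ceph-node3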

Open the required ports on all virtual machines and disable SELinux

[root@ceph-node1 ~]# firewall-cmd --zone=public --add-port=6789/tcp --permanent
[root@ceph-node1 ~]# firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent
[root@ceph-node1 ~]# firewall-cmd --reload
[root@ceph-node1 ~]# firewall-cmd --zone=public --list-all
public (default, active)
  interfaces: enp0s3 enp0s8
  sources: 
  services: dhcpv6-client ssh
  ports: 6789/tcp 6800-7100/tcp
  masquerade: no
  forward-ports: 
  icmp-blocks: 
  rich rules:
[root@ceph-node1 ~]# setenforce 0
[root@ceph-node1 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@ceph-node1 ~]# cat /etc/selinux/config | grep -i =disabled
SELINUX=disabled
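
These firewall and SELinux changes are needed on ceph-node2 and ceph-node3 as well. You can repeat them by hand on each node, or push them from ceph-node1 over the passwordless SSH configured above (a hedged convenience sketch, not from the book):

[root@ceph-node1 ~]# for node in ceph-node2 ceph-node3; do \
    ssh root@$node "firewall-cmd --zone=public --add-port=6789/tcp --permanent && \
                    firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent && \
                    firewall-cmd --reload && \
                    setenforce 0 && \
                    sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config"; \
  done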

Install and configure the NTP service on all virtual machines

[root@ceph-node1 ~]# timedatectl set-timezone Asia/Shanghai    # set the timezone
[root@ceph-node1 ~]# timedatectl
      Local time: Thu 2020-06-04 14:01:57 CST
  Universal time: Thu 2020-06-04 06:01:57 UTC
        RTC time: Thu 2020-06-04 06:01:57
        Timezone: Asia/Shanghai (CST, +0800)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: n/a

[root@ceph-node1 ~]# yum install ntp ntpdate -y
[root@ceph-node1 ~]# ntpq -p
[root@ceph-node1 ~]# systemctl restart ntpd.service
[root@ceph-node1 ~]# systemctl enable ntpd.service
[root@ceph-node1 ~]# systemctl enable ntpdate.service
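
ntpd needs the same treatment on ceph-node2 and ceph-node3. Once it is running everywhere, a quick check from ceph-node1 (a hedged example) confirms that the nodes agree on the time:

[root@ceph-node1 ~]# for node in ceph-node2 ceph-node3; do \
    ssh root@$node "hostname; timedatectl | grep -E 'Local time|NTP synchronized'"; \
  done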

Add the Ceph Jewel repository on all Ceph nodes and update yum

[root@ceph-node1 ~]# rpm -Uhv http://mirrors.163.com/ceph/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[root@ceph-node1 ~]# yum update -y

The 163 (NetEase) mirror is fairly stable.
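
If yum is still pulling packages from download.ceph.com and is slow, you can point the repo file at the 163 mirror instead (a hedged tweak; it assumes the release package installed its repo as /etc/yum.repos.d/ceph.repo):

[root@ceph-node1 ~]# sed -i 's#http://download.ceph.com#http://mirrors.163.com/ceph#g' /etc/yum.repos.d/ceph.repo
[root@ceph-node1 ~]# yum clean all && yum makecache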


Installing and Configuring Ceph

1. On ceph-node1, install ceph-deploy, which we will use to create the Ceph cluster

[root@ceph-node1 ~]# yum install ceph-deploy -y

2. Create a Ceph cluster with ceph-deploy by running the following commands on ceph-node1

[root@ceph-node1 ~]# mkdir /etc/ceph; cd /etc/ceph
[root@ceph-node1 ~]# ceph-deploy new ceph-node1

The new subcommand of ceph-deploy deploys a new cluster with the default cluster name ceph, and generates the cluster configuration file and keyring files.

[root@ceph-node1 ceph]# ll
total 308
-rw------- 1 root root    113 Jun  5 08:28 ceph.bootstrap-mds.keyring
-rw------- 1 root root     71 Jun  5 08:28 ceph.bootstrap-mgr.keyring
-rw------- 1 root root    113 Jun  5 08:28 ceph.bootstrap-osd.keyring
-rw------- 1 root root    113 Jun  5 08:28 ceph.bootstrap-rgw.keyring
-rw------- 1 ceph ceph     63 Jun  5 08:28 ceph.client.admin.keyring
-rw-r--r-- 1 root root    233 Jun  5 08:28 ceph.conf
-rw-r--r-- 1 root root  49658 Jun  8 11:03 ceph-deploy-ceph.log
-rw-r--r-- 1 root root 228734 Jun  4 20:27 ceph.log
-rw------- 1 root root     73 Jun  4 18:59 ceph.mon.keyring
-rw-r--r-- 1 root root     92 Jul 10  2018 rbdmap

3. Run the following command on ceph-node1 to install the Ceph binary packages

[root@ceph-node1 ceph]# ceph-deploy install ceph-node1 ceph-node2 ceph-node3

The ceph-deploy tool first installs all the dependencies of this Ceph release. Once the command finishes, check the Ceph version and health status on every node. The install may fail on ceph-node2 or ceph-node3; if it does, simply install Ceph directly on that node (yum -y install ceph).

[root@ceph-node1 ceph]# ceph -v
ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)
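
If the remote install did fail on a node, a fallback along the lines of the note above (hedged; it assumes the Jewel repo from the previous section was added on that node as well):

[root@ceph-node1 ceph]# ssh root@ceph-node2 "yum install -y ceph && ceph -v"
[root@ceph-node1 ceph]# ssh root@ceph-node3 "yum install -y ceph && ceph -v"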

4. Create the first Ceph monitor on ceph-node1 and check the status; at this point the cluster is not healthy

[root@ceph-node1 ceph]# ceph-deploy mon create-initial
[root@ceph-node1 ceph]# ceph -s
    cluster d2d6c48f-7510-472f-8b1f-ef66faf3132d
     health HEALTH_ERR
            64 pgs are stuck inactive for more than 300 seconds
            64 pgs stuck inactive
            64 pgs stuck unclean
            no osds
     monmap e1: 1 mons at {ceph-node1=192.168.1.101:6789/0}
            election epoch 3, quorum 0 ceph-node1
     osdmap e1: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating

5. Create OSDs on ceph-node1

List all available disks, choose the disks for the Ceph OSDs, wipe their existing partition tables and contents (the disks are formatted with xfs by default), and activate the partitions.

[root@ceph-node1 ceph]# ceph-deploy disk zap ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /bin/ceph-deploy disk zap ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : zap
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('ceph-node1', '/dev/sdb', None), ('ceph-node1', '/dev/sdc', None), ('ceph-node1', '/dev/sdd', None)]
[ceph_deploy.osd][DEBUG ] zapping /dev/sdb on ceph-node1
[ceph-node1][INFO  ] Running command: /usr/sbin/ceph-disk zap /dev/sdb
[ceph-node1][INFO  ] Running command: /usr/sbin/ceph-disk zap /dev/sdc
[ceph-node1][INFO  ] Running command: /usr/sbin/ceph-disk zap /dev/sdd
[ceph-node1][DEBUG ] The operation has completed successfully.
[root@ceph-node1 ceph]# 
[root@ceph-node1 ceph]# 
[root@ceph-node1 ceph]# ceph-deploy osd create ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd
xxxxx    # a long series of output omitted
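
A quick look at the disks afterwards is a useful sanity check (hedged; the exact layout depends on ceph-disk defaults, which normally create a data partition mounted under /var/lib/ceph/osd/ plus a journal partition on each disk):

[root@ceph-node1 ceph]# lsblk
[root@ceph-node1 ceph]# df -h | grep ceph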

Check Ceph's status again and note the number of OSDs. At this stage the cluster is still not healthy: more nodes have to be added so that the cluster can replicate objects three times (the default), at which point it becomes healthy. This is done in the cluster-scaling section below.

[root@ceph-node1 ceph]# ceph -s
    cluster d2d6c48f-7510-472f-8b1f-ef66faf3132d
     health HEALTH_ERR
            64 pgs are stuck inactive for more than 300 seconds
            64 pgs degraded
            64 pgs stuck inactive
            64 pgs stuck unclean
            64 pgs undersized
            too few PGs per OSD (21 < min 30)
     monmap e1: 1 mons at {ceph-node1=192.168.1.101:6789/0}
            election epoch 3, quorum 0 ceph-node1
     osdmap e13: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v22: 64 pgs, 1 pools, 0 bytes data, 0 objects
            322 MB used, 45724 MB / 46046 MB avail
                  64 undersized+degraded+peered
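
The replica count mentioned above can be confirmed directly (a small extra check, not in the book's listing):

[root@ceph-node1 ceph]# ceph osd pool get rbd size    # should report size: 3, the default replication factor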

Scaling Up Your Ceph Cluster

At this point we have a Ceph cluster running on ceph-node1 with one MON and three OSDs. Next we scale it out by adding ceph-node2 and ceph-node3 as MON and OSD nodes. A Ceph cluster needs at least one monitor to run, but for high availability it must rely on more than one monitor, in an odd count such as 3 or 5, so that a quorum can be formed. Ceph uses the Paxos algorithm to keep the quorum consistent.

1. On ceph-node1, add the public network address to /etc/ceph/ceph.conf (here the public network is the network the nodes use to reach each other, as defined in /etc/hosts, not a network facing the public Internet)

[root@ceph-node1 ceph]# cat /etc/ceph/ceph.conf 
[global]
fsid = d2d6c48f-7510-472f-8b1f-ef66faf3132d
mon_initial_members = ceph-node1
mon_host = 192.168.1.101
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
public network = 192.168.1.0/24
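
Optionally, the updated ceph.conf can be pushed to the other nodes explicitly (a hedged extra step; the --overwrite-conf flag used below also refreshes the config during monitor creation):

[root@ceph-node1 ceph]# ceph-deploy --overwrite-conf config push ceph-node2 ceph-node3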

2. On ceph-node1, use ceph-deploy to create monitors on ceph-node2 and ceph-node3 (the listing below shows the ceph-node3 run; the command for ceph-node2 is identical)

[root@ceph-node1 ceph]# ceph-deploy --overwrite-conf mon create ceph-node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy --overwrite-conf mon create ceph-node3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph-node3']
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-node3
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-node3 ...
[ceph-node3][DEBUG ] connected to host: ceph-node3 
[ceph-node3][DEBUG ] detect platform information from remote host
[ceph-node3][DEBUG ] detect machine type
[ceph-node3][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.8.2003 Core
[ceph-node3][DEBUG ] determining if provided host has same hostname in remote
[ceph-node3][DEBUG ] get remote short hostname
[ceph-node3][DEBUG ] deploying mon to ceph-node3
[ceph-node3][DEBUG ] get remote short hostname
[ceph-node3][DEBUG ] remote hostname: ceph-node3
[ceph-node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node3][DEBUG ] create the mon path if it does not exist
[ceph-node3][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-node3/done
[ceph-node3][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph-node3/done
[ceph-node3][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph-node3.mon.keyring
[ceph-node3][DEBUG ] create the monitor keyring file
[ceph-node3][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i ceph-node3 --keyring /var/lib/ceph/tmp/ceph-ceph-node3.mon.keyring --setuser 167 --setgroup 167
[ceph-node3][DEBUG ] ceph-mon: set fsid to d2d6c48f-7510-472f-8b1f-ef66faf3132d
[ceph-node3][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-ceph-node3 for mon.ceph-node3
[ceph-node3][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph-node3.mon.keyring
[ceph-node3][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-node3][DEBUG ] create the init path if it does not exist
[ceph-node3][INFO  ] Running command: systemctl enable ceph.target
[ceph-node3][INFO  ] Running command: systemctl enable ceph-mon@ceph-node3
[ceph-node3][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@ceph-node3.service to /usr/lib/systemd/system/ceph-mon@.service.
[ceph-node3][INFO  ] Running command: systemctl start ceph-mon@ceph-node3
[ceph-node3][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-node3.asok mon_status
[ceph-node3][DEBUG ] ********************************************************************************
[ceph-node3][DEBUG ] status for monitor: mon.ceph-node3
[ceph-node3][DEBUG ] {
[ceph-node3][DEBUG ]   "election_epoch": 0, 
[ceph-node3][DEBUG ]   "extra_probe_peers": [
[ceph-node3][DEBUG ]     "192.168.1.101:6789/0"
[ceph-node3][DEBUG ]   ], 
[ceph-node3][DEBUG ]   "monmap": {
[ceph-node3][DEBUG ]     "created": "2020-06-05 08:28:42.451755", 
[ceph-node3][DEBUG ]     "epoch": 2, 
[ceph-node3][DEBUG ]     "fsid": "d2d6c48f-7510-472f-8b1f-ef66faf3132d", 
[ceph-node3][DEBUG ]     "modified": "2020-06-09 09:17:55.949886", 
[ceph-node3][DEBUG ]     "mons": [
[ceph-node3][DEBUG ]       {
[ceph-node3][DEBUG ]         "addr": "192.168.1.101:6789/0", 
[ceph-node3][DEBUG ]         "name": "ceph-node1", 
[ceph-node3][DEBUG ]         "rank": 0
[ceph-node3][DEBUG ]       }, 
[ceph-node3][DEBUG ]       {
[ceph-node3][DEBUG ]         "addr": "192.168.1.102:6789/0", 
[ceph-node3][DEBUG ]         "name": "ceph-node2", 
[ceph-node3][DEBUG ]         "rank": 1
[ceph-node3][DEBUG ]       }
[ceph-node3][DEBUG ]     ]
[ceph-node3][DEBUG ]   }, 
[ceph-node3][DEBUG ]   "name": "ceph-node3", 
[ceph-node3][DEBUG ]   "outside_quorum": [], 
[ceph-node3][DEBUG ]   "quorum": [], 
[ceph-node3][DEBUG ]   "rank": -1, 
[ceph-node3][DEBUG ]   "state": "probing", 
[ceph-node3][DEBUG ]   "sync_provider": []
[ceph-node3][DEBUG ] }
[ceph-node3][DEBUG ] ********************************************************************************
[ceph-node3][INFO  ] monitor: mon.ceph-node3 is currently at the state of probing
[ceph-node3][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-node3.asok mon_status
[ceph-node3][WARNIN] ceph-node3 is not defined in `mon initial members`
[ceph-node3][WARNIN] monitor ceph-node3 does not exist in monmap

3. Check the Ceph cluster status

[root@ceph-node1 ceph]# ceph -s
    cluster d2d6c48f-7510-472f-8b1f-ef66faf3132d
     health HEALTH_ERR
            64 pgs are stuck inactive for more than 300 seconds
            64 pgs degraded
            64 pgs stuck degraded
            64 pgs stuck inactive
            64 pgs stuck unclean
            64 pgs stuck undersized
            64 pgs undersized
            too few PGs per OSD (21 < min 30)
     monmap e3: 3 mons at {ceph-node1=192.168.1.101:6789/0,ceph-node2=192.168.1.102:6789/0,ceph-node3=192.168.1.103:6789/0}
            election epoch 6, quorum 0,1,2 ceph-node1,ceph-node2,ceph-node3
     osdmap e13: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v22: 64 pgs, 1 pools, 0 bytes data, 0 objects
            322 MB used, 45724 MB / 46046 MB avail
                  64 undersized+degraded+peered
[root@ceph-node1 ceph]# ceph mon stat
e3: 3 mons at {ceph-node1=192.168.1.101:6789/0,ceph-node2=192.168.1.102:6789/0,ceph-node3=192.168.1.103:6789/0}, election epoch 6, quorum 0,1,2 ceph-node1,ceph-node2,ceph-node3

4. On ceph-node1, run ceph-deploy disk list and disk zap for the new nodes, then run osd create against ceph-node2 and ceph-node3 to create their OSDs

[root@ceph-node1 ceph]# ceph-deploy disk list ceph-node2 ceph-node3
[root@ceph-node1 ceph]# ceph-deploy disk zap ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd
[root@ceph-node1 ceph]# ceph-deploy disk zap ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd
[root@ceph-node1 ceph]# ceph-deploy osd create ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd
[root@ceph-node1 ceph]# ceph-deploy osd create ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd

5. After adding the OSDs, we need to adjust the pg_num and pgp_num values of the rbd pool so that the cluster reaches the HEALTH_OK state

[root@ceph-node1 ceph]# ceph osd pool set rbd pg_num 256
set pool 0 pg_num to 256
[root@ceph-node1 ceph]# ceph osd pool set rbd pgp_num 256
set pool 0 pgp_num to 256
[root@ceph-node1 ceph]# ceph -s
    cluster d2d6c48f-7510-472f-8b1f-ef66faf3132d
     health HEALTH_OK
     monmap e3: 3 mons at {ceph-node1=192.168.1.101:6789/0,ceph-node2=192.168.1.102:6789/0,ceph-node3=192.168.1.103:6789/0}
            election epoch 6, quorum 0,1,2 ceph-node1,ceph-node2,ceph-node3
     osdmap e48: 9 osds: 9 up, 9 in
            flags sortbitwise,require_jewel_osds
      pgmap v151: 256 pgs, 1 pools, 0 bytes data, 0 objects
            976 MB used, 133 GB / 134 GB avail
                 256 active+clean
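
Where does 256 come from? A common rule of thumb (hedged; see the Ceph placement-group documentation for the full guidance) is:

    total PGs ≈ (number of OSDs * 100) / replica count = (9 * 100) / 3 = 300

and then pick a nearby power of two. 256 is enough here to clear the "too few PGs per OSD (21 < min 30)" warning seen earlier, since 256 PGs with 3 replicas spread over 9 OSDs gives roughly 85 PGs per OSD.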

Using the Ceph Cluster in Practice

1. Check the Ceph installation status

[root@ceph-node1 ceph]# ceph status
    cluster d2d6c48f-7510-472f-8b1f-ef66faf3132d
     health HEALTH_OK
     monmap e3: 3 mons at {ceph-node1=192.168.1.101:6789/0,ceph-node2=192.168.1.102:6789/0,ceph-node3=192.168.1.103:6789/0}
            election epoch 6, quorum 0,1,2 ceph-node1,ceph-node2,ceph-node3
     osdmap e48: 9 osds: 9 up, 9 in
            flags sortbitwise,require_jewel_osds
      pgmap v151: 256 pgs, 1 pools, 0 bytes data, 0 objects
            976 MB used, 133 GB / 134 GB avail
                 256 active+clean

2. Watch the cluster health

[root@ceph-node1 ceph]# ceph -w
    cluster d2d6c48f-7510-472f-8b1f-ef66faf3132d
     health HEALTH_OK
     monmap e3: 3 mons at {ceph-node1=192.168.1.101:6789/0,ceph-node2=192.168.1.102:6789/0,ceph-node3=192.168.1.103:6789/0}
            election epoch 6, quorum 0,1,2 ceph-node1,ceph-node2,ceph-node3
     osdmap e48: 9 osds: 9 up, 9 in
            flags sortbitwise,require_jewel_osds
      pgmap v151: 256 pgs, 1 pools, 0 bytes data, 0 objects
            976 MB used, 133 GB / 134 GB avail
                 256 active+clean

2020-06-09 09:33:19.471354 mon.0 [INF] pgmap v151: 256 pgs: 256 active+clean; 0 bytes data, 976 MB used, 133 GB / 134 GB avail

3. Check the Ceph monitor quorum status

[root@ceph-node1 ceph]# ceph quorum_status --format json-pretty

{
    "election_epoch": 6,
    "quorum": [
        0,
        1,
        2
    ],
    "quorum_names": [
        "ceph-node1",
        "ceph-node2",
        "ceph-node3"
    ],
    "quorum_leader_name": "ceph-node1",
    "monmap": {
        "epoch": 3,
        "fsid": "d2d6c48f-7510-472f-8b1f-ef66faf3132d",
        "modified": "2020-06-09 09:18:14.073662",
        "created": "2020-06-05 08:28:42.451755",
        "mons": [
            {
                "rank": 0,
                "name": "ceph-node1",
                "addr": "192.168.1.101:6789\/0"
            },
            {
                "rank": 1,
                "name": "ceph-node2",
                "addr": "192.168.1.102:6789\/0"
            },
            {
                "rank": 2,
                "name": "ceph-node3",
                "addr": "192.168.1.103:6789\/0"
            }
        ]
    }
}

4. Dump the Ceph monitor information

[root@ceph-node1 ceph]# ceph mon dump
dumped monmap epoch 3
epoch 3
fsid d2d6c48f-7510-472f-8b1f-ef66faf3132d
last_changed 2020-06-09 09:18:14.073662
created 2020-06-05 08:28:42.451755
0: 192.168.1.101:6789/0 mon.ceph-node1
1: 192.168.1.102:6789/0 mon.ceph-node2
2: 192.168.1.103:6789/0 mon.ceph-node3

5. Check the cluster usage status

[root@ceph-node1 ceph]# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED 
    134G      133G         976M          0.71 
POOLS:
    NAME     ID     USED     %USED     MAX AVAIL     OBJECTS 
    rbd      0         0         0        43418M           0 

6. Check the Ceph monitor, OSD, and PG status

[root@ceph-node1 ceph]# ceph mon stat
e3: 3 mons at {ceph-node1=192.168.1.101:6789/0,ceph-node2=192.168.1.102:6789/0,ceph-node3=192.168.1.103:6789/0}, election epoch 6, quorum 0,1,2 ceph-node1,ceph-node2,ceph-node3
[root@ceph-node1 ceph]# ceph osd stat
     osdmap e48: 9 osds: 9 up, 9 in
            flags sortbitwise,require_jewel_osds
[root@ceph-node1 ceph]# ceph pg stat
v151: 256 pgs: 256 active+clean; 0 bytes data, 976 MB used, 133 GB / 134 GB avail

7. List the placement groups (PGs)

[root@ceph-node1 ceph]# ceph pg dump

8. List the Ceph pools

[root@ceph-node1 ceph]# ceph osd lspools
0 rbd,

9. Check the CRUSH map of the OSDs

[root@ceph-node1 ceph]# ceph osd tree
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 0.13129 root default                                          
-2 0.04376     host ceph-node1                                   
 0 0.01459         osd.0            up  1.00000          1.00000 
 1 0.01459         osd.1            up  1.00000          1.00000 
 2 0.01459         osd.2            up  1.00000          1.00000 
-3 0.04376     host ceph-node2                                   
 3 0.01459         osd.3            up  1.00000          1.00000 
 4 0.01459         osd.4            up  1.00000          1.00000 
 5 0.01459         osd.5            up  1.00000          1.00000 
-4 0.04376     host ceph-node3                                   
 6 0.01459         osd.6            up  1.00000          1.00000 
 7 0.01459         osd.7            up  1.00000          1.00000 
 8 0.01459         osd.8            up  1.00000          1.00000 

10. List the cluster's authentication keys

[root@ceph-node1 ceph]# ceph auth list
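
To inspect a single key instead of the whole list, for example the admin key (a small extra example):

[root@ceph-node1 ceph]# ceph auth get client.admin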

 
