04 Ceph Cluster Deployment

Connecting a Kubernetes cluster to Ceph storage

Table of Contents

    • Connecting a Kubernetes cluster to Ceph storage
      • 1. Ceph cluster deployment environment preparation
        • 1.1 Configure the Ceph yum repository
        • 1.2 Install ceph-deploy
        • 1.3 Install the Ceph packages
        • 1.4 Create the Ceph cluster
          • 1.4.1 Create mon & mgr
          • 1.4.2 Modify the cluster configuration file (optional)
          • 1.4.3 Deploy the initial monitor
          • 1.4.4 Add two more mons
          • 1.4.5 Create the Ceph keyrings
          • 1.4.6 Distribute the Ceph keyrings
          • 1.4.7 Create the Ceph mgr
          • 1.4.8 Update the ceph.conf configuration file
        • 1.5 Add OSD nodes
          • 1.5.1 Add osd.0
        • 1.6 Install MDS
          • 1.6.3 Install the Dashboard
        • 1.7 Ceph basic usage
          • 1.7.1 Basic reference material
          • 1.7.2 Mount CephFS
            • 1.7.2.1 Enable CephFS on the Ceph cluster
            • 1.7.2.2 Mount on the client
            • 1.7.2.3 Create the secret
            • 1.7.2.4 Create the PersistentVolume
        • 1.8 Failure recovery
          • 1.8.1 Check the status
          • 1.8.2 Stop data rebalancing
          • 1.8.3 Locate the failed disk
          • 1.8.4 Unmount the failed node
          • 1.8.5 Remove the OSD from the crush map
          • 1.8.6 Delete the failed OSD's key
          • 1.8.7 Delete the failed OSD
          • 1.8.8 Re-add the OSD
          • 1.8.9 Unset the cluster flags
    • Follow-up references (cluster):

Provided by: MappleZF

Version: 1.0.0

1. Ceph cluster deployment environment preparation

1.1 Configure the Ceph yum repository

Run on all nodes

[root@k8smaster01:/etc/yum.repos.d]# vim ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-15.2.4/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
#gpgkey=https://download.ceph.com/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-15.2.4/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
#gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-15.2.4/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
#gpgkey=https://download.ceph.com/keys/release.asc
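
Note: with gpgcheck=1 but the gpgkey lines commented out, yum may be unable to verify the downloaded packages. Either set gpgcheck=0 or import the Ceph release key referenced in the commented lines, for example:

rpm --import https://download.ceph.com/keys/release.asc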

Distribute ceph.repo to every node; perform the same operation on the other nodes
[root@k8smaster01:/etc/yum.repos.d]# scp ceph.repo k8smaster02:/etc/yum.repos.d/
[root@k8smaster01:/root]#  yum clean all && yum repolist
[root@k8smaster01:/root]#  yum info ceph
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * elrepo: hkg.mirror.rackspace.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Available Packages
Name        : ceph
Arch        : x86_64
Epoch       : 2
Version     : 15.2.4
Release     : 0.el7
Size        : 3.0 k
Repo        : Ceph/x86_64
Summary     : User space components of the Ceph file system
URL         : http://ceph.com/
License     : LGPL-2.1 and LGPL-3.0 and CC-BY-SA-3.0 and GPL-2.0 and BSL-1.0 and BSD-3-Clause and MIT
Description : Ceph is a massively scalable, open-source, distributed storage system that runs
            : on commodity hardware and delivers object, block and file system storage.
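
Since every node needs the same repository file, the scp and yum steps above can be wrapped in a small loop on the admin node. A minimal sketch, assuming passwordless SSH and the hostnames used in this document:

for host in k8smaster02 k8smaster03 k8sworker01 k8sworker02 k8sworker03 k8sworker04; do
  scp /etc/yum.repos.d/ceph.repo ${host}:/etc/yum.repos.d/
  ssh ${host} "yum clean all && yum repolist"
done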



1.2 Install ceph-deploy

[root@k8smaster01:/root]# yum install python-setuptools -y
[root@k8smaster01:/root]# yum install ceph-deploy -y
[root@k8smaster01:/root]# yum install ceph ceph-radosgw -y
[root@k8smaster01:/root]# yum install deltarpm -y
[root@k8smaster01:/root]# ceph-deploy --version
2.0.1



Note: if the installation fails with Error: Package: 2:librados2-15.2.4-0.el7.x86_64 (Ceph)  Requires: liblttng-ust.so.0()(64bit)
run: yum install epel-release -y
     wget https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-7-12.noarch.rpm
     rpm -Uvh epel-release*rpm
     yum install lttng-ust -y

1.3 Install the Ceph packages


Note: if the installation fails with [Errno 14] curl#37 - "Couldn't open file /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7"
run: cd /etc/pki/rpm-gpg && wget -c https://archive.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7

[root@k8smaster01:/root]# ceph-deploy install  k8smaster01  k8sworker01 k8sworker02 k8sworker03 k8sworker04
.......(output omitted)
[k8sworker04][DEBUG ] Complete!
[k8sworker04][INFO  ] Running command: ceph --version
[k8sworker04][DEBUG ] ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable)

[root@k8smaster01:/root]# rpm -qa | grep ceph
ceph-mds-15.2.4-0.el7.x86_64
ceph-deploy-2.0.1-0.noarch
libcephfs2-15.2.4-0.el7.x86_64
python3-ceph-argparse-15.2.4-0.el7.x86_64
python3-cephfs-15.2.4-0.el7.x86_64
ceph-common-15.2.4-0.el7.x86_64
ceph-selinux-15.2.4-0.el7.x86_64
ceph-mgr-15.2.4-0.el7.x86_64
ceph-mon-15.2.4-0.el7.x86_64
ceph-radosgw-15.2.4-0.el7.x86_64
python3-ceph-common-15.2.4-0.el7.x86_64
ceph-mgr-modules-core-15.2.4-0.el7.noarch
ceph-base-15.2.4-0.el7.x86_64
ceph-osd-15.2.4-0.el7.x86_64
ceph-15.2.4-0.el7.x86_64
ceph-release-1-1.el7.noarch
16 packages in total

1.4 Create the Ceph cluster

1.4.1 Create mon & mgr

[root@k8smaster01:/root]# mkdir -p /etc/ceph && cd /etc/ceph
[root@k8smaster01:/etc/ceph]# ceph-deploy new k8smaster01
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy new k8smaster01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['k8smaster01']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[k8smaster01][DEBUG ] connected to host: k8smaster01 
[k8smaster01][DEBUG ] detect platform information from remote host
[k8smaster01][DEBUG ] detect machine type
[k8smaster01][DEBUG ] find the location of an executable
[k8smaster01][INFO  ] Running command: /usr/sbin/ip link show
[k8smaster01][INFO  ] Running command: /usr/sbin/ip addr show
[k8smaster01][DEBUG ] IP addresses found: [u'10.10.158.194', u'10.10.0.1', u'10.10.246.212', u'10.10.0.2', u'10.10.235.60', u'10.244.101.1', u'192.168.15.101', u'10.10.217.147', u'192.168.13.101', u'172.16.170.64', u'10.10.207.129', u'10.10.27.53', u'10.10.225.156', u'10.10.106.106']
[ceph_deploy.new][DEBUG ] Resolving host k8smaster01
[ceph_deploy.new][DEBUG ] Monitor k8smaster01 at 192.168.13.101
[ceph_deploy.new][DEBUG ] Monitor initial members are ['k8smaster01']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.13.101']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

[root@k8smaster01:/etc/ceph]# ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring  rbdmap

1.4.2 Modify the cluster configuration file (optional)

[root@k8smaster01:/etc/ceph]# vim /etc/ceph/ceph.conf
[global]
fsid = 61ab49ca-21d8-4b03-8237-8c05c5c5d177
mon_initial_members = k8smaster01
mon_host = 192.168.13.101
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 3
public network = 192.168.13.0/24
cluster network = 192.168.15.0/24


[root@k8smaster01:/etc/ceph]# cat /etc/ceph/ceph.mon.keyring
[mon.]
key = AQB+N2BfAAAAABAAcWcQMUSAwAxIZoguyhPxlA==
caps mon = allow *
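
As an alternative to editing ceph.conf by hand, the public and cluster networks can also be passed when the cluster is first created. A sketch using the corresponding ceph-deploy 2.0.x options:

ceph-deploy new k8smaster01 --public-network 192.168.13.0/24 --cluster-network 192.168.15.0/24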


1.4.3 Deploy the initial monitor


[root@k8smaster01:/etc/ceph]# ceph-deploy mon create k8smaster01
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mon create k8smaster01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['k8smaster01']
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts k8smaster01
[ceph_deploy.mon][DEBUG ] detecting platform for host k8smaster01 ...
[k8smaster01][DEBUG ] connected to host: k8smaster01 
[k8smaster01][DEBUG ] detect platform information from remote host
[k8smaster01][DEBUG ] detect machine type
[k8smaster01][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.8.2003 Core
[k8smaster01][DEBUG ] determining if provided host has same hostname in remote
[k8smaster01][DEBUG ] get remote short hostname
[k8smaster01][DEBUG ] deploying mon to k8smaster01
[k8smaster01][DEBUG ] get remote short hostname
[k8smaster01][DEBUG ] remote hostname: k8smaster01
[k8smaster01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[k8smaster01][DEBUG ] create the mon path if it does not exist
[k8smaster01][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-k8smaster01/done
[k8smaster01][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-k8smaster01/done
[k8smaster01][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-k8smaster01.mon.keyring
[k8smaster01][DEBUG ] create the monitor keyring file
[k8smaster01][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i k8smaster01 --keyring /var/lib/ceph/tmp/ceph-k8smaster01.mon.keyring --setuser 167 --setgroup 167
[k8smaster01][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-k8smaster01.mon.keyring
[k8smaster01][DEBUG ] create a done file to avoid re-doing the mon deployment
[k8smaster01][DEBUG ] create the init path if it does not exist
[k8smaster01][INFO  ] Running command: systemctl enable ceph.target
[k8smaster01][INFO  ] Running command: systemctl enable ceph-mon@k8smaster01
[k8smaster01][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@k8smaster01.service to /usr/lib/systemd/system/ceph-mon@.service.
[k8smaster01][INFO  ] Running command: systemctl start ceph-mon@k8smaster01
[k8smaster01][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.k8smaster01.asok mon_status
[k8smaster01][DEBUG ] ********************************************************************************
[k8smaster01][DEBUG ] status for monitor: mon.k8smaster01
[k8smaster01][DEBUG ] {
[k8smaster01][DEBUG ]   "election_epoch": 3, 
[k8smaster01][DEBUG ]   "extra_probe_peers": [], 
[k8smaster01][DEBUG ]   "feature_map": {
[k8smaster01][DEBUG ]     "mon": [
[k8smaster01][DEBUG ]       {
[k8smaster01][DEBUG ]         "features": "0x3f01cfb8ffadffff", 
[k8smaster01][DEBUG ]         "num": 1, 
[k8smaster01][DEBUG ]         "release": "luminous"
[k8smaster01][DEBUG ]       }
[k8smaster01][DEBUG ]     ]
[k8smaster01][DEBUG ]   }, 
[k8smaster01][DEBUG ]   "features": {
[k8smaster01][DEBUG ]     "quorum_con": "4540138292836696063", 
[k8smaster01][DEBUG ]     "quorum_mon": [
[k8smaster01][DEBUG ]       "kraken", 
[k8smaster01][DEBUG ]       "luminous", 
[k8smaster01][DEBUG ]       "mimic", 
[k8smaster01][DEBUG ]       "osdmap-prune", 
[k8smaster01][DEBUG ]       "nautilus", 
[k8smaster01][DEBUG ]       "octopus"
[k8smaster01][DEBUG ]     ], 
[k8smaster01][DEBUG ]     "required_con": "2449958747315978244", 
[k8smaster01][DEBUG ]     "required_mon": [
[k8smaster01][DEBUG ]       "kraken", 
[k8smaster01][DEBUG ]       "luminous", 
[k8smaster01][DEBUG ]       "mimic", 
[k8smaster01][DEBUG ]       "osdmap-prune", 
[k8smaster01][DEBUG ]       "nautilus", 
[k8smaster01][DEBUG ]       "octopus"
[k8smaster01][DEBUG ]     ]
[k8smaster01][DEBUG ]   }, 
[k8smaster01][DEBUG ]   "monmap": {
[k8smaster01][DEBUG ]     "created": "2020-09-15T03:40:57.259097Z", 
[k8smaster01][DEBUG ]     "epoch": 1, 
[k8smaster01][DEBUG ]     "features": {
[k8smaster01][DEBUG ]       "optional": [], 
[k8smaster01][DEBUG ]       "persistent": [
[k8smaster01][DEBUG ]         "kraken", 
[k8smaster01][DEBUG ]         "luminous", 
[k8smaster01][DEBUG ]         "mimic", 
[k8smaster01][DEBUG ]         "osdmap-prune", 
[k8smaster01][DEBUG ]         "nautilus", 
[k8smaster01][DEBUG ]         "octopus"
[k8smaster01][DEBUG ]       ]
[k8smaster01][DEBUG ]     }, 
[k8smaster01][DEBUG ]     "fsid": "61ab49ca-21d8-4b03-8237-8c05c5c5d177", 
[k8smaster01][DEBUG ]     "min_mon_release": 15, 
[k8smaster01][DEBUG ]     "min_mon_release_name": "octopus", 
[k8smaster01][DEBUG ]     "modified": "2020-09-15T03:40:57.259097Z", 
[k8smaster01][DEBUG ]     "mons": [
[k8smaster01][DEBUG ]       {
[k8smaster01][DEBUG ]         "addr": "192.168.13.101:6789/0", 
[k8smaster01][DEBUG ]         "name": "k8smaster01", 
[k8smaster01][DEBUG ]         "priority": 0, 
[k8smaster01][DEBUG ]         "public_addr": "192.168.13.101:6789/0", 
[k8smaster01][DEBUG ]         "public_addrs": {
[k8smaster01][DEBUG ]           "addrvec": [
[k8smaster01][DEBUG ]             {
[k8smaster01][DEBUG ]               "addr": "192.168.13.101:3300", 
[k8smaster01][DEBUG ]               "nonce": 0, 
[k8smaster01][DEBUG ]               "type": "v2"
[k8smaster01][DEBUG ]             }, 
[k8smaster01][DEBUG ]             {
[k8smaster01][DEBUG ]               "addr": "192.168.13.101:6789", 
[k8smaster01][DEBUG ]               "nonce": 0, 
[k8smaster01][DEBUG ]               "type": "v1"
[k8smaster01][DEBUG ]             }
[k8smaster01][DEBUG ]           ]
[k8smaster01][DEBUG ]         }, 
[k8smaster01][DEBUG ]         "rank": 0, 
[k8smaster01][DEBUG ]         "weight": 0
[k8smaster01][DEBUG ]       }
[k8smaster01][DEBUG ]     ]
[k8smaster01][DEBUG ]   }, 
[k8smaster01][DEBUG ]   "name": "k8smaster01", 
[k8smaster01][DEBUG ]   "outside_quorum": [], 
[k8smaster01][DEBUG ]   "quorum": [
[k8smaster01][DEBUG ]     0
[k8smaster01][DEBUG ]   ], 
[k8smaster01][DEBUG ]   "quorum_age": 2, 
[k8smaster01][DEBUG ]   "rank": 0, 
[k8smaster01][DEBUG ]   "state": "leader", 
[k8smaster01][DEBUG ]   "sync_provider": []
[k8smaster01][DEBUG ] }
[k8smaster01][DEBUG ] ********************************************************************************
[k8smaster01][INFO  ] monitor: mon.k8smaster01 is running
[k8smaster01][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.k8smaster01.asok mon_status


[root@k8smaster01:/etc/ceph]# ps -ef | grep ceph
root        2576       1  0 10:11 ?        00:00:00 /usr/bin/python3.6 /usr/bin/ceph-crash
ceph      297712       1  0 11:40 ?        00:00:00 /usr/bin/ceph-mon -f --cluster ceph --id k8smaster01 --setuser ceph --setgroup ceph
root      313777   27720  0 11:45 pts/0    00:00:00 grep --color=auto ceph
[root@k8smaster01:/etc/ceph]# netstat -anpl | grep 6789 | grep LISTEN
tcp        0      0 192.168.13.101:6789     0.0.0.0:*               LISTEN      297712/ceph-mon     
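
A quick way to confirm the monitor has formed a quorum, before any client keyrings exist, is the admin socket already used in the log above, for example:

ceph --cluster ceph --admin-daemon /var/run/ceph/ceph-mon.k8smaster01.asok mon_status | grep -E '"state"|"quorum"'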



1.4.4 Add two more mons

[root@k8smaster01:/etc/ceph]# ceph-deploy mon add k8smaster02
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mon add k8smaster02
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : add
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['k8smaster02']
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  address                       : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mon][INFO  ] ensuring configuration of new mon host: k8smaster02
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to k8smaster02
[k8smaster02][DEBUG ] connected to host: k8smaster02 
[k8smaster02][DEBUG ] detect platform information from remote host
[k8smaster02][DEBUG ] detect machine type
[k8smaster02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.mon][DEBUG ] Adding mon to cluster ceph, host k8smaster02
[ceph_deploy.mon][DEBUG ] using mon address by resolving host: 192.168.13.102
[ceph_deploy.mon][DEBUG ] detecting platform for host k8smaster02 ...
[k8smaster02][DEBUG ] connected to host: k8smaster02 
[k8smaster02][DEBUG ] detect platform information from remote host
[k8smaster02][DEBUG ] detect machine type
[k8smaster02][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.8.2003 Core
[k8smaster02][DEBUG ] determining if provided host has same hostname in remote
[k8smaster02][DEBUG ] get remote short hostname
[k8smaster02][DEBUG ] adding mon to k8smaster02
[k8smaster02][DEBUG ] get remote short hostname
[k8smaster02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[k8smaster02][DEBUG ] create the mon path if it does not exist
[k8smaster02][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-k8smaster02/done
[k8smaster02][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-k8smaster02/done
[k8smaster02][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-k8smaster02.mon.keyring
[k8smaster02][DEBUG ] create the monitor keyring file
[k8smaster02][INFO  ] Running command: ceph --cluster ceph mon getmap -o /var/lib/ceph/tmp/ceph.k8smaster02.monmap
[k8smaster02][WARNIN] got monmap epoch 1
[k8smaster02][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i k8smaster02 --monmap /var/lib/ceph/tmp/ceph.k8smaster02.monmap --keyring /var/lib/ceph/tmp/ceph-k8smaster02.mon.keyring --setuser 167 --setgroup 167
[k8smaster02][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-k8smaster02.mon.keyring
[k8smaster02][DEBUG ] create a done file to avoid re-doing the mon deployment
[k8smaster02][DEBUG ] create the init path if it does not exist
[k8smaster02][INFO  ] Running command: systemctl enable ceph.target
[k8smaster02][INFO  ] Running command: systemctl enable ceph-mon@k8smaster02
[k8smaster02][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@k8smaster02.service to /usr/lib/systemd/system/ceph-mon@.service.
[k8smaster02][INFO  ] Running command: systemctl start ceph-mon@k8smaster02
[k8smaster02][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.k8smaster02.asok mon_status
[k8smaster02][WARNIN] k8smaster02 is not defined in `mon initial members`
[k8smaster02][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.k8smaster02.asok mon_status
[k8smaster02][DEBUG ] ********************************************************************************
[k8smaster02][DEBUG ] status for monitor: mon.k8smaster02
[k8smaster02][DEBUG ] {
[k8smaster02][DEBUG ]   "election_epoch": 1, 
[k8smaster02][DEBUG ]   "extra_probe_peers": [], 
[k8smaster02][DEBUG ]   "feature_map": {
[k8smaster02][DEBUG ]     "mon": [
[k8smaster02][DEBUG ]       {
[k8smaster02][DEBUG ]         "features": "0x3f01cfb8ffadffff", 
[k8smaster02][DEBUG ]         "num": 1, 
[k8smaster02][DEBUG ]         "release": "luminous"
[k8smaster02][DEBUG ]       }
[k8smaster02][DEBUG ]     ]
[k8smaster02][DEBUG ]   }, 
[k8smaster02][DEBUG ]   "features": {
[k8smaster02][DEBUG ]     "quorum_con": "0", 
[k8smaster02][DEBUG ]     "quorum_mon": [], 
[k8smaster02][DEBUG ]     "required_con": "2449958197560098820", 
[k8smaster02][DEBUG ]     "required_mon": [
[k8smaster02][DEBUG ]       "kraken", 
[k8smaster02][DEBUG ]       "luminous", 
[k8smaster02][DEBUG ]       "mimic", 
[k8smaster02][DEBUG ]       "osdmap-prune", 
[k8smaster02][DEBUG ]       "nautilus", 
[k8smaster02][DEBUG ]       "octopus"
[k8smaster02][DEBUG ]     ]
[k8smaster02][DEBUG ]   }, 
[k8smaster02][DEBUG ]   "monmap": {
[k8smaster02][DEBUG ]     "created": "2020-09-15T03:40:57.259097Z", 
[k8smaster02][DEBUG ]     "epoch": 2, 
[k8smaster02][DEBUG ]     "features": {
[k8smaster02][DEBUG ]       "optional": [], 
[k8smaster02][DEBUG ]       "persistent": [
[k8smaster02][DEBUG ]         "kraken", 
[k8smaster02][DEBUG ]         "luminous", 
[k8smaster02][DEBUG ]         "mimic", 
[k8smaster02][DEBUG ]         "osdmap-prune", 
[k8smaster02][DEBUG ]         "nautilus", 
[k8smaster02][DEBUG ]         "octopus"
[k8smaster02][DEBUG ]       ]
[k8smaster02][DEBUG ]     }, 
[k8smaster02][DEBUG ]     "fsid": "61ab49ca-21d8-4b03-8237-8c05c5c5d177", 
[k8smaster02][DEBUG ]     "min_mon_release": 15, 
[k8smaster02][DEBUG ]     "min_mon_release_name": "octopus", 
[k8smaster02][DEBUG ]     "modified": "2020-09-15T03:49:03.577110Z", 
[k8smaster02][DEBUG ]     "mons": [
[k8smaster02][DEBUG ]       {
[k8smaster02][DEBUG ]         "addr": "192.168.13.101:6789/0", 
[k8smaster02][DEBUG ]         "name": "k8smaster01", 
[k8smaster02][DEBUG ]         "priority": 0, 
[k8smaster02][DEBUG ]         "public_addr": "192.168.13.101:6789/0", 
[k8smaster02][DEBUG ]         "public_addrs": {
[k8smaster02][DEBUG ]           "addrvec": [
[k8smaster02][DEBUG ]             {
[k8smaster02][DEBUG ]               "addr": "192.168.13.101:3300", 
[k8smaster02][DEBUG ]               "nonce": 0, 
[k8smaster02][DEBUG ]               "type": "v2"
[k8smaster02][DEBUG ]             }, 
[k8smaster02][DEBUG ]             {
[k8smaster02][DEBUG ]               "addr": "192.168.13.101:6789", 
[k8smaster02][DEBUG ]               "nonce": 0, 
[k8smaster02][DEBUG ]               "type": "v1"
[k8smaster02][DEBUG ]             }
[k8smaster02][DEBUG ]           ]
[k8smaster02][DEBUG ]         }, 
[k8smaster02][DEBUG ]         "rank": 0, 
[k8smaster02][DEBUG ]         "weight": 0
[k8smaster02][DEBUG ]       }, 
[k8smaster02][DEBUG ]       {
[k8smaster02][DEBUG ]         "addr": "192.168.13.102:6789/0", 
[k8smaster02][DEBUG ]         "name": "k8smaster02", 
[k8smaster02][DEBUG ]         "priority": 0, 
[k8smaster02][DEBUG ]         "public_addr": "192.168.13.102:6789/0", 
[k8smaster02][DEBUG ]         "public_addrs": {
[k8smaster02][DEBUG ]           "addrvec": [
[k8smaster02][DEBUG ]             {
[k8smaster02][DEBUG ]               "addr": "192.168.13.102:3300", 
[k8smaster02][DEBUG ]               "nonce": 0, 
[k8smaster02][DEBUG ]               "type": "v2"
[k8smaster02][DEBUG ]             }, 
[k8smaster02][DEBUG ]             {
[k8smaster02][DEBUG ]               "addr": "192.168.13.102:6789", 
[k8smaster02][DEBUG ]               "nonce": 0, 
[k8smaster02][DEBUG ]               "type": "v1"
[k8smaster02][DEBUG ]             }
[k8smaster02][DEBUG ]           ]
[k8smaster02][DEBUG ]         }, 
[k8smaster02][DEBUG ]         "rank": 1, 
[k8smaster02][DEBUG ]         "weight": 0
[k8smaster02][DEBUG ]       }
[k8smaster02][DEBUG ]     ]
[k8smaster02][DEBUG ]   }, 
[k8smaster02][DEBUG ]   "name": "k8smaster02", 
[k8smaster02][DEBUG ]   "outside_quorum": [], 
[k8smaster02][DEBUG ]   "quorum": [], 
[k8smaster02][DEBUG ]   "rank": 1, 
[k8smaster02][DEBUG ]   "state": "electing", 
[k8smaster02][DEBUG ]   "sync_provider": []
[k8smaster02][DEBUG ] }
[k8smaster02][DEBUG ] ********************************************************************************
[k8smaster02][INFO  ] monitor: mon.k8smaster02 is running

[root@k8smaster02:/etc/ceph]# ps -ef | grep ceph
root        8564       1  0 10:11 ?        00:00:00 /usr/bin/python3.6 /usr/bin/ceph-crash
ceph      135492       1  0 11:49 ?        00:00:00 /usr/bin/ceph-mon -f --cluster ceph --id k8smaster02 --setuser ceph --setgroup ceph
root      139906     642  0 11:52 pts/0    00:00:00 grep --color=auto ceph
[root@k8smaster02:/etc/ceph]# netstat -anpl | grep 6789 | grep LISTEN
tcp        0      0 192.168.13.102:6789     0.0.0.0:*               LISTEN      135492/ceph-mon   


[root@k8smaster01:/etc/ceph]# ceph-deploy mon add k8smaster03
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mon add k8smaster03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : add
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['k8smaster03']
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  address                       : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mon][INFO  ] ensuring configuration of new mon host: k8smaster03
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to k8smaster03
[k8smaster03][DEBUG ] connected to host: k8smaster03 
[k8smaster03][DEBUG ] detect platform information from remote host
[k8smaster03][DEBUG ] detect machine type
[k8smaster03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.mon][DEBUG ] Adding mon to cluster ceph, host k8smaster03
[ceph_deploy.mon][DEBUG ] using mon address by resolving host: 192.168.13.103
[ceph_deploy.mon][DEBUG ] detecting platform for host k8smaster03 ...
[k8smaster03][DEBUG ] connected to host: k8smaster03 
[k8smaster03][DEBUG ] detect platform information from remote host
[k8smaster03][DEBUG ] detect machine type
[k8smaster03][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.8.2003 Core
[k8smaster03][DEBUG ] determining if provided host has same hostname in remote
[k8smaster03][DEBUG ] get remote short hostname
[k8smaster03][DEBUG ] adding mon to k8smaster03
[k8smaster03][DEBUG ] get remote short hostname
[k8smaster03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[k8smaster03][DEBUG ] create the mon path if it does not exist
[k8smaster03][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-k8smaster03/done
[k8smaster03][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-k8smaster03/done
[k8smaster03][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-k8smaster03.mon.keyring
[k8smaster03][DEBUG ] create the monitor keyring file
[k8smaster03][INFO  ] Running command: ceph --cluster ceph mon getmap -o /var/lib/ceph/tmp/ceph.k8smaster03.monmap
[k8smaster03][WARNIN] got monmap epoch 2
[k8smaster03][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i k8smaster03 --monmap /var/lib/ceph/tmp/ceph.k8smaster03.monmap --keyring /var/lib/ceph/tmp/ceph-k8smaster03.mon.keyring --setuser 167 --setgroup 167
[k8smaster03][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-k8smaster03.mon.keyring
[k8smaster03][DEBUG ] create a done file to avoid re-doing the mon deployment
[k8smaster03][DEBUG ] create the init path if it does not exist
[k8smaster03][INFO  ] Running command: systemctl enable ceph.target
[k8smaster03][INFO  ] Running command: systemctl enable ceph-mon@k8smaster03
[k8smaster03][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@k8smaster03.service to /usr/lib/systemd/system/ceph-mon@.service.
[k8smaster03][INFO  ] Running command: systemctl start ceph-mon@k8smaster03
[k8smaster03][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.k8smaster03.asok mon_status
[k8smaster03][WARNIN] k8smaster03 is not defined in `mon initial members`
[k8smaster03][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.k8smaster03.asok mon_status
[k8smaster03][DEBUG ] ********************************************************************************
[k8smaster03][DEBUG ] status for monitor: mon.k8smaster03
[k8smaster03][DEBUG ] {
[k8smaster03][DEBUG ]   "election_epoch": 1, 
[k8smaster03][DEBUG ]   "extra_probe_peers": [
[k8smaster03][DEBUG ]     {
[k8smaster03][DEBUG ]       "addrvec": [
[k8smaster03][DEBUG ]         {
[k8smaster03][DEBUG ]           "addr": "192.168.13.102:3300", 
[k8smaster03][DEBUG ]           "nonce": 0, 
[k8smaster03][DEBUG ]           "type": "v2"
[k8smaster03][DEBUG ]         }, 
[k8smaster03][DEBUG ]         {
[k8smaster03][DEBUG ]           "addr": "192.168.13.102:6789", 
[k8smaster03][DEBUG ]           "nonce": 0, 
[k8smaster03][DEBUG ]           "type": "v1"
[k8smaster03][DEBUG ]         }
[k8smaster03][DEBUG ]       ]
[k8smaster03][DEBUG ]     }
[k8smaster03][DEBUG ]   ], 
[k8smaster03][DEBUG ]   "feature_map": {
[k8smaster03][DEBUG ]     "mon": [
[k8smaster03][DEBUG ]       {
[k8smaster03][DEBUG ]         "features": "0x3f01cfb8ffadffff", 
[k8smaster03][DEBUG ]         "num": 1, 
[k8smaster03][DEBUG ]         "release": "luminous"
[k8smaster03][DEBUG ]       }
[k8smaster03][DEBUG ]     ]
[k8smaster03][DEBUG ]   }, 
[k8smaster03][DEBUG ]   "features": {
[k8smaster03][DEBUG ]     "quorum_con": "0", 
[k8smaster03][DEBUG ]     "quorum_mon": [], 
[k8smaster03][DEBUG ]     "required_con": "2449958197560098820", 
[k8smaster03][DEBUG ]     "required_mon": [
[k8smaster03][DEBUG ]       "kraken", 
[k8smaster03][DEBUG ]       "luminous", 
[k8smaster03][DEBUG ]       "mimic", 
[k8smaster03][DEBUG ]       "osdmap-prune", 
[k8smaster03][DEBUG ]       "nautilus", 
[k8smaster03][DEBUG ]       "octopus"
[k8smaster03][DEBUG ]     ]
[k8smaster03][DEBUG ]   }, 
[k8smaster03][DEBUG ]   "monmap": {
[k8smaster03][DEBUG ]     "created": "2020-09-15T03:40:57.259097Z", 
[k8smaster03][DEBUG ]     "epoch": 3, 
[k8smaster03][DEBUG ]     "features": {
[k8smaster03][DEBUG ]       "optional": [], 
[k8smaster03][DEBUG ]       "persistent": [
[k8smaster03][DEBUG ]         "kraken", 
[k8smaster03][DEBUG ]         "luminous", 
[k8smaster03][DEBUG ]         "mimic", 
[k8smaster03][DEBUG ]         "osdmap-prune", 
[k8smaster03][DEBUG ]         "nautilus", 
[k8smaster03][DEBUG ]         "octopus"
[k8smaster03][DEBUG ]       ]
[k8smaster03][DEBUG ]     }, 
[k8smaster03][DEBUG ]     "fsid": "61ab49ca-21d8-4b03-8237-8c05c5c5d177", 
[k8smaster03][DEBUG ]     "min_mon_release": 15, 
[k8smaster03][DEBUG ]     "min_mon_release_name": "octopus", 
[k8smaster03][DEBUG ]     "modified": "2020-09-15T03:53:33.897666Z", 
[k8smaster03][DEBUG ]     "mons": [
[k8smaster03][DEBUG ]       {
[k8smaster03][DEBUG ]         "addr": "192.168.13.101:6789/0", 
[k8smaster03][DEBUG ]         "name": "k8smaster01", 
[k8smaster03][DEBUG ]         "priority": 0, 
[k8smaster03][DEBUG ]         "public_addr": "192.168.13.101:6789/0", 
[k8smaster03][DEBUG ]         "public_addrs": {
[k8smaster03][DEBUG ]           "addrvec": [
[k8smaster03][DEBUG ]             {
[k8smaster03][DEBUG ]               "addr": "192.168.13.101:3300", 
[k8smaster03][DEBUG ]               "nonce": 0, 
[k8smaster03][DEBUG ]               "type": "v2"
[k8smaster03][DEBUG ]             }, 
[k8smaster03][DEBUG ]             {
[k8smaster03][DEBUG ]               "addr": "192.168.13.101:6789", 
[k8smaster03][DEBUG ]               "nonce": 0, 
[k8smaster03][DEBUG ]               "type": "v1"
[k8smaster03][DEBUG ]             }
[k8smaster03][DEBUG ]           ]
[k8smaster03][DEBUG ]         }, 
[k8smaster03][DEBUG ]         "rank": 0, 
[k8smaster03][DEBUG ]         "weight": 0
[k8smaster03][DEBUG ]       }, 
[k8smaster03][DEBUG ]       {
[k8smaster03][DEBUG ]         "addr": "192.168.13.102:6789/0", 
[k8smaster03][DEBUG ]         "name": "k8smaster02", 
[k8smaster03][DEBUG ]         "priority": 0, 
[k8smaster03][DEBUG ]         "public_addr": "192.168.13.102:6789/0", 
[k8smaster03][DEBUG ]         "public_addrs": {
[k8smaster03][DEBUG ]           "addrvec": [
[k8smaster03][DEBUG ]             {
[k8smaster03][DEBUG ]               "addr": "192.168.13.102:3300", 
[k8smaster03][DEBUG ]               "nonce": 0, 
[k8smaster03][DEBUG ]               "type": "v2"
[k8smaster03][DEBUG ]             }, 
[k8smaster03][DEBUG ]             {
[k8smaster03][DEBUG ]               "addr": "192.168.13.102:6789", 
[k8smaster03][DEBUG ]               "nonce": 0, 
[k8smaster03][DEBUG ]               "type": "v1"
[k8smaster03][DEBUG ]             }
[k8smaster03][DEBUG ]           ]
[k8smaster03][DEBUG ]         }, 
[k8smaster03][DEBUG ]         "rank": 1, 
[k8smaster03][DEBUG ]         "weight": 0
[k8smaster03][DEBUG ]       }, 
[k8smaster03][DEBUG ]       {
[k8smaster03][DEBUG ]         "addr": "192.168.13.103:6789/0", 
[k8smaster03][DEBUG ]         "name": "k8smaster03", 
[k8smaster03][DEBUG ]         "priority": 0, 
[k8smaster03][DEBUG ]         "public_addr": "192.168.13.103:6789/0", 
[k8smaster03][DEBUG ]         "public_addrs": {
[k8smaster03][DEBUG ]           "addrvec": [
[k8smaster03][DEBUG ]             {
[k8smaster03][DEBUG ]               "addr": "192.168.13.103:3300", 
[k8smaster03][DEBUG ]               "nonce": 0, 
[k8smaster03][DEBUG ]               "type": "v2"
[k8smaster03][DEBUG ]             }, 
[k8smaster03][DEBUG ]             {
[k8smaster03][DEBUG ]               "addr": "192.168.13.103:6789", 
[k8smaster03][DEBUG ]               "nonce": 0, 
[k8smaster03][DEBUG ]               "type": "v1"
[k8smaster03][DEBUG ]             }
[k8smaster03][DEBUG ]           ]
[k8smaster03][DEBUG ]         }, 
[k8smaster03][DEBUG ]         "rank": 2, 
[k8smaster03][DEBUG ]         "weight": 0
[k8smaster03][DEBUG ]       }
[k8smaster03][DEBUG ]     ]
[k8smaster03][DEBUG ]   }, 
[k8smaster03][DEBUG ]   "name": "k8smaster03", 
[k8smaster03][DEBUG ]   "outside_quorum": [], 
[k8smaster03][DEBUG ]   "quorum": [], 
[k8smaster03][DEBUG ]   "rank": 2, 
[k8smaster03][DEBUG ]   "state": "electing", 
[k8smaster03][DEBUG ]   "sync_provider": []
[k8smaster03][DEBUG ] }
[k8smaster03][DEBUG ] ********************************************************************************
[k8smaster03][INFO  ] monitor: mon.k8smaster03 is running


[root@k8smaster03:/etc/ceph]#  ps -ef | grep ceph
root        2955       1  0 10:11 ?        00:00:00 /usr/bin/python3.6 /usr/bin/ceph-crash
ceph      111301       1  0 11:53 ?        00:00:00 /usr/bin/ceph-mon -f --cluster ceph --id k8smaster03 --setuser ceph --setgroup ceph
root      112936   11659  0 11:55 pts/2    00:00:00 grep --color=auto ceph
[root@k8smaster03:/etc/ceph]# netstat -anpl | grep 6789 | grep LISTEN
tcp        0      0 192.168.13.103:6789     0.0.0.0:*               LISTEN      111301/ceph-mon 

1.4.5 Create the Ceph keyrings

[root@k8smaster01:/etc/ceph]# ceph-deploy gatherkeys k8smaster01
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy gatherkeys k8smaster01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['k8smaster01']
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmpwH6AOv
[k8smaster01][DEBUG ] connected to host: k8smaster01 
[k8smaster01][DEBUG ] detect platform information from remote host
[k8smaster01][DEBUG ] detect machine type
[k8smaster01][DEBUG ] get remote short hostname
[k8smaster01][DEBUG ] fetch remote file
[k8smaster01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.k8smaster01.asok mon_status
[k8smaster01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-k8smaster01/keyring auth get client.admin
[k8smaster01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-k8smaster01/keyring auth get client.bootstrap-mds
[k8smaster01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-k8smaster01/keyring auth get client.bootstrap-mgr
[k8smaster01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-k8smaster01/keyring auth get client.bootstrap-osd
[k8smaster01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-k8smaster01/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpwH6AOv

[root@k8smaster01:/etc/ceph]# ll
total 104
-rw------- 1 root root   113 Sep 15 11:47 ceph.bootstrap-mds.keyring
-rw------- 1 root root   113 Sep 15 11:47 ceph.bootstrap-mgr.keyring
-rw------- 1 root root   113 Sep 15 11:47 ceph.bootstrap-osd.keyring
-rw------- 1 root root   113 Sep 15 11:47 ceph.bootstrap-rgw.keyring
-rw------- 1 root root   151 Sep 15 11:47 ceph.client.admin.keyring
-rw-r--r-- 1 root root   351 Sep 15 12:23 ceph.conf
-rw-r--r-- 1 root root 70922 Sep 15 12:24 ceph-deploy-ceph.log
-rw------- 1 root root    73 Sep 15 11:39 ceph.mon.keyring
-rw-r--r-- 1 root root    92 Jul  1 00:11 rbdmap

[root@k8smaster01:/etc/ceph]#  cat ceph.client.admin.keyring ceph.bootstrap-osd.keyring
[client.admin]
	key = AQDKN2BfzZxpARAA0BP6urOP9aSv7M+dByoB+A==
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"
[client.bootstrap-osd]
	key = AQDKN2BfCPRpARAAedAnkp05ug/l8V3yQAIqbA==
	caps mon = "allow profile bootstrap-osd"

Note: the admin key is stored in the ceph.client.admin.keyring file and is supplied via --keyring.

[root@k8smaster01:/etc/ceph]#  ceph --keyring ceph.client.admin.keyring -c ceph.conf -s
  cluster:
    id:     61ab49ca-21d8-4b03-8237-8c05c5c5d177
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum k8smaster01,k8smaster02,k8smaster03 (age 36m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     
[root@k8smaster01:/etc/ceph]# ceph --keyring ceph.client.admin.keyring -c ceph.conf auth get client.admin
exported keyring for client.admin
[client.admin]
	key = AQDKN2BfzZxpARAA0BP6urOP9aSv7M+dByoB+A==
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"

1.4.6 Distribute the Ceph keyrings

[root@k8smaster01:/etc/ceph]# ceph-deploy admin k8smaster01 k8smaster02 k8smaster03 k8sworker01 k8sworker02 k8sworker03 k8sworker04
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy admin k8smaster01 k8smaster02 k8smaster03 k8sworker01 k8sworker02 k8sworker03 k8sworker04
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['k8smaster01', 'k8smaster02', 'k8smaster03', 'k8sworker01', 'k8sworker02', 'k8sworker03', 'k8sworker04']
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to k8smaster01
[k8smaster01][DEBUG ] connected to host: k8smaster01 
[k8smaster01][DEBUG ] detect platform information from remote host
[k8smaster01][DEBUG ] detect machine type
[k8smaster01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to k8smaster02
[k8smaster02][DEBUG ] connected to host: k8smaster02 
[k8smaster02][DEBUG ] detect platform information from remote host
[k8smaster02][DEBUG ] detect machine type
[k8smaster02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to k8smaster03
[k8smaster03][DEBUG ] connected to host: k8smaster03 
[k8smaster03][DEBUG ] detect platform information from remote host
[k8smaster03][DEBUG ] detect machine type
[k8smaster03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to k8sworker01
[k8sworker01][DEBUG ] connected to host: k8sworker01 
[k8sworker01][DEBUG ] detect platform information from remote host
[k8sworker01][DEBUG ] detect machine type
[k8sworker01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to k8sworker02
[k8sworker02][DEBUG ] connected to host: k8sworker02 
[k8sworker02][DEBUG ] detect platform information from remote host
[k8sworker02][DEBUG ] detect machine type
[k8sworker02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to k8sworker03
[k8sworker03][DEBUG ] connected to host: k8sworker03 
[k8sworker03][DEBUG ] detect platform information from remote host
[k8sworker03][DEBUG ] detect machine type
[k8sworker03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to k8sworker04
[k8sworker04][DEBUG ] connected to host: k8sworker04 
[k8sworker04][DEBUG ] detect platform information from remote host
[k8sworker04][DEBUG ] detect machine type
[k8sworker04][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
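
A quick sanity check that the push worked is to run a cluster command on one of the target nodes, for example:

ssh k8sworker01 "ls -l /etc/ceph/ceph.client.admin.keyring && ceph -s"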

1.4.7 Create the Ceph mgr

[root@k8smaster01:/etc/ceph]# ceph-deploy mgr create k8smaster01 k8smaster02 k8smaster03
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create k8smaster01 k8smaster02 k8smaster03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('k8smaster01', 'k8smaster01'), ('k8smaster02', 'k8smaster02'), ('k8smaster03', 'k8smaster03')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts k8smaster01:k8smaster01 k8smaster02:k8smaster02 k8smaster03:k8smaster03
[k8smaster01][DEBUG ] connected to host: k8smaster01 
[k8smaster01][DEBUG ] detect platform information from remote host
[k8smaster01][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.8.2003 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to k8smaster01
[k8smaster01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[k8smaster01][WARNIN] mgr keyring does not exist yet, creating one
[k8smaster01][DEBUG ] create a keyring file
[k8smaster01][DEBUG ] create path recursively if it doesn't exist
[k8smaster01][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.k8smaster01 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-k8smaster01/keyring
[k8smaster01][INFO  ] Running command: systemctl enable ceph-mgr@k8smaster01
[k8smaster01][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@k8smaster01.service to /usr/lib/systemd/system/ceph-mgr@.service.
[k8smaster01][INFO  ] Running command: systemctl start ceph-mgr@k8smaster01
[k8smaster01][INFO  ] Running command: systemctl enable ceph.target
[k8smaster02][DEBUG ] connected to host: k8smaster02 
[k8smaster02][DEBUG ] detect platform information from remote host
[k8smaster02][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.8.2003 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to k8smaster02
[k8smaster02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[k8smaster02][WARNIN] mgr keyring does not exist yet, creating one
[k8smaster02][DEBUG ] create a keyring file
[k8smaster02][DEBUG ] create path recursively if it doesn't exist
[k8smaster02][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.k8smaster02 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-k8smaster02/keyring
[k8smaster02][INFO  ] Running command: systemctl enable ceph-mgr@k8smaster02
[k8smaster02][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@k8smaster02.service to /usr/lib/systemd/system/ceph-mgr@.service.
[k8smaster02][INFO  ] Running command: systemctl start ceph-mgr@k8smaster02
[k8smaster02][INFO  ] Running command: systemctl enable ceph.target
[k8smaster03][DEBUG ] connected to host: k8smaster03 
[k8smaster03][DEBUG ] detect platform information from remote host
[k8smaster03][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.8.2003 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to k8smaster03
[k8smaster03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[k8smaster03][WARNIN] mgr keyring does not exist yet, creating one
[k8smaster03][DEBUG ] create a keyring file
[k8smaster03][DEBUG ] create path recursively if it doesn't exist
[k8smaster03][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.k8smaster03 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-k8smaster03/keyring
[k8smaster03][INFO  ] Running command: systemctl enable ceph-mgr@k8smaster03
[k8smaster03][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@k8smaster03.service to /usr/lib/systemd/system/ceph-mgr@.service.
[k8smaster03][INFO  ] Running command: systemctl start ceph-mgr@k8smaster03
[k8smaster03][INFO  ] Running command: systemctl enable ceph.target
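
The resulting active/standby mgr layout can be checked from the admin node, for example:

ceph mgr stat
ceph -s | grep mgr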

1.4.8 Update the ceph.conf configuration file

[root@k8smaster01:/etc/ceph]# ceph -s
  cluster:
    id:     61ab49ca-21d8-4b03-8237-8c05c5c5d177
    health: HEALTH_WARN
            Module 'restful' has failed dependency: No module named 'pecan'
            Reduced data availability: 1 pg inactive
            OSD count 0 < osd_pool_default_size 3
 
  services:
    mon: 3 daemons, quorum k8smaster01,k8smaster02,k8smaster03 (age 47m)
    mgr: k8smaster01(active, since 2m), standbys: k8smaster03, k8smaster02
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             1 unknown
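
Note: the 'restful' warning above comes from the mgr's missing python 'pecan' dependency on CentOS 7. One possible fix, assuming python3-pip is available on each mgr node, is roughly:

yum install python3-pip -y
pip3 install pecan werkzeug
systemctl restart ceph-mgr@k8smaster01    # repeat for the mgr on each node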


[root@k8smaster01:/etc/ceph]#  vim ceph.conf
[global]
fsid = 61ab49ca-21d8-4b03-8237-8c05c5c5d177
mon_initial_members = k8smaster01, k8smaster02, k8smaster03
mon_host = 192.168.13.101, 192.168.13.102, 192.168.13.103
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 3         # number of data replicas, default 3
osd_pool_default_min_size = 1
public network = 192.168.13.0/24
cluster network = 192.168.15.0/24

[mon]
mon_allow_pool_delete = true       # by default pools may not be deleted; set as needed
mon_clock_drift_allowed = 0.5      # clock drift allowed between monitors, default 0.5
mon_clock_drift_warn_backoff = 10  # backoff exponent for clock-drift warnings, default 5

[osd]
osd_mkfs_type = xfs
osd_mkfs_options_xfs = -f
osd_max_write_size = 512          # maximum size of a single OSD write (MB), default 90
osd_client_message_size_cap = 2147483648        # 2048 MB; maximum client data held in memory (bytes), default 100
osd_deep_scrub_stride = 131072          # bytes read in one pass during deep scrub, default 524288
osd_op_threads = 16                # concurrent filesystem operation threads, default 2
osd_disk_threads = 4               # threads for disk-intensive work such as recovery and scrubbing, default 1
osd_map_cache_size = 1024          # OSD map cache (MB), default 500
osd_map_cache_bl_size = 128        # OSD map cache held in the OSD process memory (MB), default 50
osd_mount_options_xfs = "rw,noexec,nodev,noatime,nodiratime,nobarrier"  # Ceph OSD xfs mount options, default rw,noatime,inode64
osd_recovery_op_priority = 10      # recovery op priority, 1-63; higher values use more resources, default 10
osd_recovery_max_active = 10       # active recovery requests at any one time, default 15
osd_max_backfills = 8              # maximum backfills allowed per OSD, default 10
osd_min_pg_log_entries = 30000     # minimum PGLog entries kept when trimming the PG log, default 3000
osd_max_pg_log_entries = 100000    # maximum PGLog entries kept when trimming the PG log, default 100000
osd_mon_heartbeat_interval = 40    # interval at which an OSD pings a monitor, default 30
osd_op_log_threshold = 50          # number of ops shown per log message, default 5
filestore_max_sync_interval = 10   # maximum journal-to-data-disk sync interval (seconds), default 5
filestore_min_sync_interval = 2    # minimum journal-to-data-disk sync interval (seconds), default 0.1
filestore_queue_max_ops = 25000    # maximum queued ops accepted by the data disk, default 500
filestore_queue_max_bytes = 1048576000          # 1024 MB; maximum bytes per operation on the data disk, default 100
filestore_queue_committing_max_ops = 50000      # ops the data disk can commit at once, default 500
filestore_queue_committing_max_bytes = 10485760000   # 10240 MB; maximum bytes the data disk can commit, default 100
filestore_split_multiple = 800                  # maximum number of files in a subdirectory before it is split, default 2
filestore_merge_threshold = 40                  # minimum number of files in a subdirectory before merging back into the parent, default 10
filestore_fd_cache_size = 655350                # object file-handle cache size, default 128
filestore_omap_header_cache_size = 655350
filestore_fiemap = false                        # read/write separation, default false
filestore_ondisk_finisher_threads = 2           # default 1
filestore_apply_finisher_threads = 2            # default 1
filestore_fd_cache_random = true
filestore_op_threads = 8
max_open_files = 655350

#[mds]
#mds_cache_size = 102400000
#mds_beacon_grace = 120
#mds_session_timeout = 15
#mds_session_autoclose = 60
#mds_reconnect_timeout = 15
#mds_decay_halflife = 10




Update ceph.conf and restart the mon nodes
[root@k8smaster01:/etc/ceph]#  systemctl restart ceph-mon@k8smaster01
Note: with multiple MON nodes, edit the file once on the admin node, push it to the nodes that need the update, and then restart the mon on each of those nodes.
[root@k8smaster01:/etc/ceph]# ceph-deploy --overwrite-conf config push k8smaster02 k8smaster03 k8sworker01 k8sworker02 k8sworker03 k8sworker04

[root@k8smaster02:/etc/ceph]# systemctl restart ceph-mon@k8smaster02
[root@k8smaster03:/etc/ceph]# systemctl restart ceph-mon@k8smaster03

[root@k8smaster01:/etc/ceph]# scp -p ceph.bootstrap* ceph.mon.keyring k8smaster02:/etc/ceph/
[root@k8smaster01:/etc/ceph]# scp -p ceph.bootstrap* ceph.mon.keyring k8smaster03:/etc/ceph/
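
To confirm a pushed setting is live on a daemon after the restart, query it through the admin socket on that node, for example:

ceph daemon mon.k8smaster01 config get mon_allow_pool_delete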


1.5 Add OSD nodes

1.5.1 Add osd.0

(The whole disk is used as the block device; no separate block.db or block.wal.)
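
For reference, if dedicated devices were available for the RocksDB metadata and WAL, they could be passed with the --block-db and --block-wal options of ceph-deploy osd create. A sketch with hypothetical device names:

ceph-deploy osd create k8sworker01 --data /dev/sda --block-db /dev/nvme0n1p1 --block-wal /dev/nvme0n1p2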

1. Add the OSD on k8sworker01
[root@k8smaster01:/etc/ceph]# ceph-deploy osd create k8sworker01 --data /dev/sda
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create k8sworker01 --data /dev/sda
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : k8sworker01
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sda
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sda
[k8sworker01][DEBUG ] connected to host: k8sworker01 
[k8sworker01][DEBUG ] detect platform information from remote host
[k8sworker01][DEBUG ] detect machine type
[k8sworker01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.8.2003 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to k8sworker01
[k8sworker01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[k8sworker01][DEBUG ] find the location of an executable
[k8sworker01][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sda
[k8sworker01][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[k8sworker01][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b0aa8b09-7d0a-4cf8-bc10-d152e4263dc2
[k8sworker01][WARNIN] Running command: /usr/sbin/vgcreate --force --yes ceph-c3ee3683-008d-4cc5-8c84-98e1c69b4b79 /dev/sda
[k8sworker01][WARNIN]  stdout: Physical volume "/dev/sda" successfully created.
[k8sworker01][WARNIN]  stdout: Volume group "ceph-c3ee3683-008d-4cc5-8c84-98e1c69b4b79" successfully created
[k8sworker01][WARNIN] Running command: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-b0aa8b09-7d0a-4cf8-bc10-d152e4263dc2 ceph-c3ee3683-008d-4cc5-8c84-98e1c69b4b79
[k8sworker01][WARNIN]  stdout: Logical volume "osd-block-b0aa8b09-7d0a-4cf8-bc10-d152e4263dc2" created.
[k8sworker01][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[k8sworker01][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[k8sworker01][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-c3ee3683-008d-4cc5-8c84-98e1c69b4b79/osd-block-b0aa8b09-7d0a-4cf8-bc10-d152e4263dc2
[k8sworker01][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[k8sworker01][WARNIN] Running command: /bin/ln -s /dev/ceph-c3ee3683-008d-4cc5-8c84-98e1c69b4b79/osd-block-b0aa8b09-7d0a-4cf8-bc10-d152e4263dc2 /var/lib/ceph/osd/ceph-0/block
[k8sworker01][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[k8sworker01][WARNIN]  stderr: 2020-09-15T13:04:19.371+0800 7fd9ddd4a700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[k8sworker01][WARNIN] 2020-09-15T13:04:19.371+0800 7fd9ddd4a700 -1 AuthRegistry(0x7fd9d8058528) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[k8sworker01][WARNIN]  stderr: got monmap epoch 3
[k8sworker01][WARNIN] Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQBSS2Bfa5OpHxAAmYTQvZbsXrdcl2lmOFZpnw==
[k8sworker01][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-0/keyring
[k8sworker01][WARNIN] added entity osd.0 auth(key=AQBSS2Bfa5OpHxAAmYTQvZbsXrdcl2lmOFZpnw==)
[k8sworker01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[k8sworker01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[k8sworker01][WARNIN] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid b0aa8b09-7d0a-4cf8-bc10-d152e4263dc2 --setuser ceph --setgroup ceph
[k8sworker01][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sda
[k8sworker01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[k8sworker01][WARNIN] Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-c3ee3683-008d-4cc5-8c84-98e1c69b4b79/osd-block-b0aa8b09-7d0a-4cf8-bc10-d152e4263dc2 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
[k8sworker01][WARNIN] Running command: /bin/ln -snf /dev/ceph-c3ee3683-008d-4cc5-8c84-98e1c69b4b79/osd-block-b0aa8b09-7d0a-4cf8-bc10-d152e4263dc2 /var/lib/ceph/osd/ceph-0/block
[k8sworker01][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
[k8sworker01][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[k8sworker01][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[k8sworker01][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-0-b0aa8b09-7d0a-4cf8-bc10-d152e4263dc2
[k8sworker01][WARNIN]  stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-b0aa8b09-7d0a-4cf8-bc10-d152e4263dc2.service to /usr/lib/systemd/system/ceph-volume@.service.
[k8sworker01][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@0
[k8sworker01][WARNIN]  stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service to /usr/lib/systemd/system/ceph-osd@.service.
[k8sworker01][WARNIN] Running command: /bin/systemctl start ceph-osd@0
[k8sworker01][WARNIN] --> ceph-volume lvm activate successful for osd ID: 0
[k8sworker01][WARNIN] --> ceph-volume lvm create successful for: /dev/sda
[k8sworker01][INFO  ] checking OSD status...
[k8sworker01][DEBUG ] find the location of an executable
[k8sworker01][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host k8sworker01 is now ready for osd use.


[[email protected]:/etc/ceph]# mount | grep ceph
tmpfs on /var/lib/ceph/osd/ceph-0 type tmpfs (rw,relatime)
[[email protected]:/etc/ceph]# ll /var/lib/ceph/osd/ceph-0/
total 52
-rw-r--r-- 1 ceph ceph 469 Sep 15 13:04 activate.monmap
lrwxrwxrwx 1 ceph ceph  93 Sep 15 13:04 block -> /dev/ceph-c3ee3683-008d-4cc5-8c84-98e1c69b4b79/osd-block-b0aa8b09-7d0a-4cf8-bc10-d152e4263dc2
-rw------- 1 ceph ceph   2 Sep 15 13:04 bluefs
-rw------- 1 ceph ceph  37 Sep 15 13:04 ceph_fsid
-rw-r--r-- 1 ceph ceph  37 Sep 15 13:04 fsid
-rw------- 1 ceph ceph  55 Sep 15 13:04 keyring
-rw------- 1 ceph ceph   8 Sep 15 13:04 kv_backend
-rw------- 1 ceph ceph  21 Sep 15 13:04 magic
-rw------- 1 ceph ceph   4 Sep 15 13:04 mkfs_done
-rw------- 1 ceph ceph  41 Sep 15 13:04 osd_key
-rw------- 1 ceph ceph   6 Sep 15 13:04 ready
-rw------- 1 ceph ceph   3 Sep 15 13:04 require_osd_release
-rw------- 1 ceph ceph  10 Sep 15 13:04 type
-rw------- 1 ceph ceph   2 Sep 15 13:04 whoami
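Besides checking the tmpfs mount and the OSD data directory on the node itself, the new OSD can also be verified with ceph-volume on the node and with the cluster commands from the admin node (a quick optional check; osd.0 is the ID created above):

# On k8sworker01: show the LVM-backed OSD that ceph-volume just created
ceph-volume lvm list

# On the deploy/admin node: confirm osd.0 is registered, up and in
ceph osd tree
ceph osd stat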

2. Add the OSD on k8sworker02
[[email protected]:/etc/ceph]# ceph-deploy osd create k8sworker02 --data /dev/sda
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create k8sworker02 --data /dev/sda
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : k8sworker02
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sda
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sda
[k8sworker02][DEBUG ] connected to host: k8sworker02 
[k8sworker02][DEBUG ] detect platform information from remote host
[k8sworker02][DEBUG ] detect machine type
[k8sworker02][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.8.2003 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to k8sworker02
[k8sworker02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[k8sworker02][DEBUG ] find the location of an executable
[k8sworker02][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sda
[k8sworker02][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[k8sworker02][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 18606c59-b95f-4a5a-bf9d-5c61b4934408
[k8sworker02][WARNIN] Running command: /usr/sbin/vgcreate --force --yes ceph-d0ae129b-e502-4934-a969-a3b7b66c3500 /dev/sda
[k8sworker02][WARNIN]  stdout: Physical volume "/dev/sda" successfully created.
[k8sworker02][WARNIN]  stdout: Volume group "ceph-d0ae129b-e502-4934-a969-a3b7b66c3500" successfully created
[k8sworker02][WARNIN] Running command: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-18606c59-b95f-4a5a-bf9d-5c61b4934408 ceph-d0ae129b-e502-4934-a969-a3b7b66c3500
[k8sworker02][WARNIN]  stdout: Wiping ntfs signature on /dev/ceph-d0ae129b-e502-4934-a969-a3b7b66c3500/osd-block-18606c59-b95f-4a5a-bf9d-5c61b4934408.
[k8sworker02][WARNIN]  stdout: Logical volume "osd-block-18606c59-b95f-4a5a-bf9d-5c61b4934408" created.
[k8sworker02][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[k8sworker02][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
[k8sworker02][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-d0ae129b-e502-4934-a969-a3b7b66c3500/osd-block-18606c59-b95f-4a5a-bf9d-5c61b4934408
[k8sworker02][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[k8sworker02][WARNIN] Running command: /bin/ln -s /dev/ceph-d0ae129b-e502-4934-a969-a3b7b66c3500/osd-block-18606c59-b95f-4a5a-bf9d-5c61b4934408 /var/lib/ceph/osd/ceph-1/block
[k8sworker02][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
[k8sworker02][WARNIN]  stderr: 2020-09-15T13:31:41.141+0800 7f0e78e03700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[k8sworker02][WARNIN] 2020-09-15T13:31:41.141+0800 7f0e78e03700 -1 AuthRegistry(0x7f0e74058528) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[k8sworker02][WARNIN]  stderr: got monmap epoch 3
[k8sworker02][WARNIN] Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-1/keyring --create-keyring --name osd.1 --add-key AQC8UWBfNokZEBAAJznIOVbMhoUvy/XqKIrkSQ==
[k8sworker02][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-1/keyring
[k8sworker02][WARNIN] added entity osd.1 auth(key=AQC8UWBfNokZEBAAJznIOVbMhoUvy/XqKIrkSQ==)
[k8sworker02][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
[k8sworker02][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
[k8sworker02][WARNIN] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 18606c59-b95f-4a5a-bf9d-5c61b4934408 --setuser ceph --setgroup ceph
[k8sworker02][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sda
[k8sworker02][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
[k8sworker02][WARNIN] Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-d0ae129b-e502-4934-a969-a3b7b66c3500/osd-block-18606c59-b95f-4a5a-bf9d-5c61b4934408 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
[k8sworker02][WARNIN] Running command: /bin/ln -snf /dev/ceph-d0ae129b-e502-4934-a969-a3b7b66c3500/osd-block-18606c59-b95f-4a5a-bf9d-5c61b4934408 /var/lib/ceph/osd/ceph-1/block
[k8sworker02][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
[k8sworker02][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[k8sworker02][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
[k8sworker02][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-1-18606c59-b95f-4a5a-bf9d-5c61b4934408
[k8sworker02][WARNIN]  stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-1-18606c59-b95f-4a5a-bf9d-5c61b4934408.service to /usr/lib/systemd/system/ceph-volume@.service.
[k8sworker02][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@1
[k8sworker02][WARNIN]  stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@1.service to /usr/lib/systemd/system/ceph-osd@.service.
[k8sworker02][WARNIN] Running command: /bin/systemctl start ceph-osd@1
[k8sworker02][WARNIN] --> ceph-volume lvm activate successful for osd ID: 1
[k8sworker02][WARNIN] --> ceph-volume lvm create successful for: /dev/sda
[k8sworker02][INFO  ] checking OSD status...
[k8sworker02][DEBUG ] find the location of an executable
[k8sworker02][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host k8sworker02 is now ready for osd use.

[[email protected]:/root]# mount | grep ceph
tmpfs on /var/lib/ceph/osd/ceph-1 type tmpfs (rw,relatime)
[[email protected]:/root]# ll /var/lib/ceph/osd/ceph-1/
total 52
-rw-r--r-- 1 ceph ceph 469 Sep 15 13:31 activate.monmap
lrwxrwxrwx 1 ceph ceph  93 Sep 15 13:31 block -> /dev/ceph-d0ae129b-e502-4934-a969-a3b7b66c3500/osd-block-18606c59-b95f-4a5a-bf9d-5c61b4934408
-rw------- 1 ceph ceph   2 Sep 15 13:31 bluefs
-rw------- 1 ceph ceph  37 Sep 15 13:31 ceph_fsid
-rw-r--r-- 1 ceph ceph  37 Sep 15 13:31 fsid
-rw------- 1 ceph ceph  55 Sep 15 13:31 keyring
-rw------- 1 ceph ceph   8 Sep 15 13:31 kv_backend
-rw------- 1 ceph ceph  21 Sep 15 13:31 magic
-rw------- 1 ceph ceph   4 Sep 15 13:31 mkfs_done
-rw------- 1 ceph ceph  41 Sep 15 13:31 osd_key
-rw------- 1 ceph ceph   6 Sep 15 13:31 ready
-rw------- 1 ceph ceph   3 Sep 15 13:31 require_osd_release
-rw------- 1 ceph ceph  10 Sep 15 13:31 type
-rw------- 1 ceph ceph   2 Sep 15 13:31 whoami


3. Add the OSD on k8sworker03
[[email protected]:/etc/ceph]# ceph-deploy osd create k8sworker03 --data /dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create k8sworker03 --data /dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : k8sworker03
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdb
[k8sworker03][DEBUG ] connected to host: k8sworker03 
[k8sworker03][DEBUG ] detect platform information from remote host
[k8sworker03][DEBUG ] detect machine type
[k8sworker03][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.8.2003 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to k8sworker03
[k8sworker03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[k8sworker03][DEBUG ] find the location of an executable
[k8sworker03][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
[k8sworker03][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[k8sworker03][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 9e694065-8674-4710-a1a3-3f8864cd2e1c
[k8sworker03][WARNIN] Running command: /usr/sbin/vgcreate --force --yes ceph-d504553d-23ca-403f-b97b-d25c5376dcef /dev/sdb
[k8sworker03][WARNIN]  stdout: Physical volume "/dev/sdb" successfully created.
[k8sworker03][WARNIN]  stdout: Volume group "ceph-d504553d-23ca-403f-b97b-d25c5376dcef" successfully created
[k8sworker03][WARNIN] Running command: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-9e694065-8674-4710-a1a3-3f8864cd2e1c ceph-d504553d-23ca-403f-b97b-d25c5376dcef
[k8sworker03][WARNIN]  stdout: Wiping ntfs signature on /dev/ceph-d504553d-23ca-403f-b97b-d25c5376dcef/osd-block-9e694065-8674-4710-a1a3-3f8864cd2e1c.
[k8sworker03][WARNIN]  stdout: Logical volume "osd-block-9e694065-8674-4710-a1a3-3f8864cd2e1c" created.
[k8sworker03][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[k8sworker03][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
[k8sworker03][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-d504553d-23ca-403f-b97b-d25c5376dcef/osd-block-9e694065-8674-4710-a1a3-3f8864cd2e1c
[k8sworker03][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[k8sworker03][WARNIN] Running command: /bin/ln -s /dev/ceph-d504553d-23ca-403f-b97b-d25c5376dcef/osd-block-9e694065-8674-4710-a1a3-3f8864cd2e1c /var/lib/ceph/osd/ceph-2/block
[k8sworker03][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
[k8sworker03][WARNIN]  stderr: 2020-09-15T14:06:52.495+0800 7f892c6e3700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[k8sworker03][WARNIN] 2020-09-15T14:06:52.495+0800 7f892c6e3700 -1 AuthRegistry(0x7f8924058528) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[k8sworker03][WARNIN]  stderr: got monmap epoch 3
[k8sworker03][WARNIN] Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-2/keyring --create-keyring --name osd.2 --add-key AQD7WWBf/N7sIBAAkIrrhLC6qbRzMGHWKlfAGw==
[k8sworker03][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-2/keyring
[k8sworker03][WARNIN] added entity osd.2 auth(key=AQD7WWBf/N7sIBAAkIrrhLC6qbRzMGHWKlfAGw==)
[k8sworker03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
[k8sworker03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
[k8sworker03][WARNIN] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 9e694065-8674-4710-a1a3-3f8864cd2e1c --setuser ceph --setgroup ceph
[k8sworker03][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sdb
[k8sworker03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
[k8sworker03][WARNIN] Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-d504553d-23ca-403f-b97b-d25c5376dcef/osd-block-9e694065-8674-4710-a1a3-3f8864cd2e1c --path /var/lib/ceph/osd/ceph-2 --no-mon-config
[k8sworker03][WARNIN] Running command: /bin/ln -snf /dev/ceph-d504553d-23ca-403f-b97b-d25c5376dcef/osd-block-9e694065-8674-4710-a1a3-3f8864cd2e1c /var/lib/ceph/osd/ceph-2/block
[k8sworker03][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
[k8sworker03][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[k8sworker03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
[k8sworker03][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-2-9e694065-8674-4710-a1a3-3f8864cd2e1c
[k8sworker03][WARNIN]  stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-2-9e694065-8674-4710-a1a3-3f8864cd2e1c.service to /usr/lib/systemd/system/ceph-volume@.service.
[k8sworker03][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@2
[k8sworker03][WARNIN]  stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@2.service to /usr/lib/systemd/system/ceph-osd@.service.
[k8sworker03][WARNIN] Running command: /bin/systemctl start ceph-osd@2
[k8sworker03][WARNIN] --> ceph-volume lvm activate successful for osd ID: 2
[k8sworker03][WARNIN] --> ceph-volume lvm create successful for: /dev/sdb
[k8sworker03][INFO  ] checking OSD status...
[k8sworker03][DEBUG ] find the location of an executable
[k8sworker03][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host k8sworker03 is now ready for osd use.

[[email protected]:/root]# mount | grep ceph
tmpfs on /var/lib/ceph/osd/ceph-2 type tmpfs (rw,relatime)
[[email protected]:/root]# ll /var/lib/ceph/osd/ceph-2/
total 52
-rw-r--r-- 1 ceph ceph 469 Sep 15 14:06 activate.monmap
lrwxrwxrwx 1 ceph ceph  93 Sep 15 14:06 block -> /dev/ceph-d504553d-23ca-403f-b97b-d25c5376dcef/osd-block-9e694065-8674-4710-a1a3-3f8864cd2e1c
-rw------- 1 ceph ceph   2 Sep 15 14:06 bluefs
-rw------- 1 ceph ceph  37 Sep 15 14:06 ceph_fsid
-rw-r--r-- 1 ceph ceph  37 Sep 15 14:06 fsid
-rw------- 1 ceph ceph  55 Sep 15 14:06 keyring
-rw------- 1 ceph ceph   8 Sep 15 14:06 kv_backend
-rw------- 1 ceph ceph  21 Sep 15 14:06 magic
-rw------- 1 ceph ceph   4 Sep 15 14:06 mkfs_done
-rw------- 1 ceph ceph  41 Sep 15 14:06 osd_key
-rw------- 1 ceph ceph   6 Sep 15 14:06 ready
-rw------- 1 ceph ceph   3 Sep 15 14:06 require_osd_release
-rw------- 1 ceph ceph  10 Sep 15 14:06 type
-rw------- 1 ceph ceph   2 Sep 15 14:06 whoami



4. Add the OSD on k8sworker04
[[email protected]:/etc/ceph]# ceph-deploy osd create k8sworker04 --data /dev/sda
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create k8sworker04 --data /dev/sda
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : k8sworker04
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sda
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sda
[k8sworker04][DEBUG ] connected to host: k8sworker04 
[k8sworker04][DEBUG ] detect platform information from remote host
[k8sworker04][DEBUG ] detect machine type
[k8sworker04][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.8.2003 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to k8sworker04
[k8sworker04][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[k8sworker04][WARNIN] osd keyring does not exist yet, creating one
[k8sworker04][DEBUG ] create a keyring file
[k8sworker04][DEBUG ] find the location of an executable
[k8sworker04][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sda
[k8sworker04][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[k8sworker04][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e16853c2-153f-49a5-8389-23d5e2e90cde
[k8sworker04][WARNIN] Running command: /usr/sbin/vgcreate --force --yes ceph-a376c5fa-ba78-4e6e-a217-acef5c63b888 /dev/sda
[k8sworker04][WARNIN]  stdout: Physical volume "/dev/sda" successfully created.
[k8sworker04][WARNIN]  stdout: Volume group "ceph-a376c5fa-ba78-4e6e-a217-acef5c63b888" successfully created
[k8sworker04][WARNIN] Running command: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-e16853c2-153f-49a5-8389-23d5e2e90cde ceph-a376c5fa-ba78-4e6e-a217-acef5c63b888
[k8sworker04][WARNIN]  stdout: Wiping ntfs signature on /dev/ceph-a376c5fa-ba78-4e6e-a217-acef5c63b888/osd-block-e16853c2-153f-49a5-8389-23d5e2e90cde.
[k8sworker04][WARNIN]  stdout: Logical volume "osd-block-e16853c2-153f-49a5-8389-23d5e2e90cde" created.
[k8sworker04][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[k8sworker04][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-3
[k8sworker04][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-a376c5fa-ba78-4e6e-a217-acef5c63b888/osd-block-e16853c2-153f-49a5-8389-23d5e2e90cde
[k8sworker04][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[k8sworker04][WARNIN] Running command: /bin/ln -s /dev/ceph-a376c5fa-ba78-4e6e-a217-acef5c63b888/osd-block-e16853c2-153f-49a5-8389-23d5e2e90cde /var/lib/ceph/osd/ceph-3/block
[k8sworker04][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-3/activate.monmap
[k8sworker04][WARNIN]  stderr: 2020-09-15T14:22:00.553+0800 7f41f4c5b700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[k8sworker04][WARNIN] 2020-09-15T14:22:00.553+0800 7f41f4c5b700 -1 AuthRegistry(0x7f41f0058528) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[k8sworker04][WARNIN]  stderr: got monmap epoch 3
[k8sworker04][WARNIN] Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-3/keyring --create-keyring --name osd.3 --add-key AQCHXWBf9SguAhAAA2XTLVOYPKCJkoTauHXCSg==
[k8sworker04][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-3/keyring
[k8sworker04][WARNIN] added entity osd.3 auth(key=AQCHXWBf9SguAhAAA2XTLVOYPKCJkoTauHXCSg==)
[k8sworker04][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/keyring
[k8sworker04][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/
[k8sworker04][WARNIN] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 3 --monmap /var/lib/ceph/osd/ceph-3/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-3/ --osd-uuid e16853c2-153f-49a5-8389-23d5e2e90cde --setuser ceph --setgroup ceph
[k8sworker04][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sda
[k8sworker04][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
[k8sworker04][WARNIN] Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-a376c5fa-ba78-4e6e-a217-acef5c63b888/osd-block-e16853c2-153f-49a5-8389-23d5e2e90cde --path /var/lib/ceph/osd/ceph-3 --no-mon-config
[k8sworker04][WARNIN] Running command: /bin/ln -snf /dev/ceph-a376c5fa-ba78-4e6e-a217-acef5c63b888/osd-block-e16853c2-153f-49a5-8389-23d5e2e90cde /var/lib/ceph/osd/ceph-3/block
[k8sworker04][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block
[k8sworker04][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-1
[k8sworker04][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
[k8sworker04][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-3-e16853c2-153f-49a5-8389-23d5e2e90cde
[k8sworker04][WARNIN]  stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-3-e16853c2-153f-49a5-8389-23d5e2e90cde.service to /usr/lib/systemd/system/ceph-volume@.service.
[k8sworker04][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@3
[k8sworker04][WARNIN]  stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@3.service to /usr/lib/systemd/system/ceph-osd@.service.
[k8sworker04][WARNIN] Running command: /bin/systemctl start ceph-osd@3
[k8sworker04][WARNIN] --> ceph-volume lvm activate successful for osd ID: 3
[k8sworker04][WARNIN] --> ceph-volume lvm create successful for: /dev/sda
[k8sworker04][INFO  ] checking OSD status...
[k8sworker04][DEBUG ] find the location of an executable
[k8sworker04][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host k8sworker04 is now ready for osd use.

[[email protected]:/root]#  mount | grep ceph
tmpfs on /var/lib/ceph/osd/ceph-3 type tmpfs (rw,relatime)
[[email protected]:/root]# ll /var/lib/ceph/osd/ceph-3/
total 52
-rw-r--r-- 1 ceph ceph 469 Sep 15 14:22 activate.monmap
lrwxrwxrwx 1 ceph ceph  93 Sep 15 14:22 block -> /dev/ceph-a376c5fa-ba78-4e6e-a217-acef5c63b888/osd-block-e16853c2-153f-49a5-8389-23d5e2e90cde
-rw------- 1 ceph ceph   2 Sep 15 14:22 bluefs
-rw------- 1 ceph ceph  37 Sep 15 14:22 ceph_fsid
-rw-r--r-- 1 ceph ceph  37 Sep 15 14:22 fsid
-rw------- 1 ceph ceph  55 Sep 15 14:22 keyring
-rw------- 1 ceph ceph   8 Sep 15 14:22 kv_backend
-rw------- 1 ceph ceph  21 Sep 15 14:22 magic
-rw------- 1 ceph ceph   4 Sep 15 14:22 mkfs_done
-rw------- 1 ceph ceph  41 Sep 15 14:22 osd_key
-rw------- 1 ceph ceph   6 Sep 15 14:22 ready
-rw------- 1 ceph ceph   3 Sep 15 14:22 require_osd_release
-rw------- 1 ceph ceph  10 Sep 15 14:22 type
-rw------- 1 ceph ceph   2 Sep 15 14:22 whoami




5. Add the OSDs on k8smaster01 (/dev/sda, /dev/sdb, /dev/sdc)
[[email protected]:/etc/ceph]# ceph-deploy osd create k8smaster01 --data /dev/sda
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create k8smaster01 --data /dev/sda
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : k8smaster01
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sda
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sda
[k8smaster01][DEBUG ] connected to host: k8smaster01 
[k8smaster01][DEBUG ] detect platform information from remote host
[k8smaster01][DEBUG ] detect machine type
[k8smaster01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.8.2003 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to k8smaster01
[k8smaster01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[k8smaster01][DEBUG ] find the location of an executable
[k8smaster01][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sda
[k8smaster01][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[k8smaster01][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new c9255a62-33e5-41a0-a128-2b3b362cc2a0
[k8smaster01][WARNIN] Running command: /usr/sbin/vgcreate --force --yes ceph-f1d0829e-e545-4dfa-ae80-d26a9bc02520 /dev/sda
[k8smaster01][WARNIN]  stdout: Physical volume "/dev/sda" successfully created.
[k8smaster01][WARNIN]  stdout: Volume group "ceph-f1d0829e-e545-4dfa-ae80-d26a9bc02520" successfully created
[k8smaster01][WARNIN] Running command: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-c9255a62-33e5-41a0-a128-2b3b362cc2a0 ceph-f1d0829e-e545-4dfa-ae80-d26a9bc02520
[k8smaster01][WARNIN]  stdout: Logical volume "osd-block-c9255a62-33e5-41a0-a128-2b3b362cc2a0" created.
[k8smaster01][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[k8smaster01][WARNIN] Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-4
[k8smaster01][WARNIN] Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-f1d0829e-e545-4dfa-ae80-d26a9bc02520/osd-block-c9255a62-33e5-41a0-a128-2b3b362cc2a0
[k8smaster01][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
[k8smaster01][WARNIN] Running command: /usr/bin/ln -s /dev/ceph-f1d0829e-e545-4dfa-ae80-d26a9bc02520/osd-block-c9255a62-33e5-41a0-a128-2b3b362cc2a0 /var/lib/ceph/osd/ceph-4/block
[k8smaster01][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-4/activate.monmap
[k8smaster01][WARNIN]  stderr: 2020-09-15T17:39:41.200+0800 7fba9ea8b700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[k8smaster01][WARNIN] 2020-09-15T17:39:41.200+0800 7fba9ea8b700 -1 AuthRegistry(0x7fba98058528) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[k8smaster01][WARNIN]  stderr: got monmap epoch 3
[k8smaster01][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-4/keyring --create-keyring --name osd.4 --add-key AQDci2BfUtzlCxAA41M++UhB4wnFPhbO8P+8oQ==
[k8smaster01][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-4/keyring
[k8smaster01][WARNIN] added entity osd.4 auth(key=AQDci2BfUtzlCxAA41M++UhB4wnFPhbO8P+8oQ==)
[k8smaster01][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/keyring
[k8smaster01][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/
[k8smaster01][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 4 --monmap /var/lib/ceph/osd/ceph-4/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-4/ --osd-uuid c9255a62-33e5-41a0-a128-2b3b362cc2a0 --setuser ceph --setgroup ceph
[k8smaster01][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sda
[k8smaster01][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
[k8smaster01][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-f1d0829e-e545-4dfa-ae80-d26a9bc02520/osd-block-c9255a62-33e5-41a0-a128-2b3b362cc2a0 --path /var/lib/ceph/osd/ceph-4 --no-mon-config
[k8smaster01][WARNIN] Running command: /usr/bin/ln -snf /dev/ceph-f1d0829e-e545-4dfa-ae80-d26a9bc02520/osd-block-c9255a62-33e5-41a0-a128-2b3b362cc2a0 /var/lib/ceph/osd/ceph-4/block
[k8smaster01][WARNIN] Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block
[k8smaster01][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
[k8smaster01][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
[k8smaster01][WARNIN] Running command: /usr/bin/systemctl enable ceph-volume@lvm-4-c9255a62-33e5-41a0-a128-2b3b362cc2a0
[k8smaster01][WARNIN]  stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-4-c9255a62-33e5-41a0-a128-2b3b362cc2a0.service to /usr/lib/systemd/system/ceph-volume@.service.
[k8smaster01][WARNIN] Running command: /usr/bin/systemctl enable --runtime ceph-osd@4
[k8smaster01][WARNIN]  stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@4.service to /usr/lib/systemd/system/ceph-osd@.service.
[k8smaster01][WARNIN] Running command: /usr/bin/systemctl start ceph-osd@4
[k8smaster01][WARNIN] --> ceph-volume lvm activate successful for osd ID: 4
[k8smaster01][WARNIN] --> ceph-volume lvm create successful for: /dev/sda
[k8smaster01][INFO  ] checking OSD status...
[k8smaster01][DEBUG ] find the location of an executable
[k8smaster01][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host k8smaster01 is now ready for osd use.

[[email protected]:/etc/ceph]# mount | grep ceph
tmpfs on /var/lib/ceph/osd/ceph-4 type tmpfs (rw,relatime)
[[email protected]:/etc/ceph]# ll /var/lib/ceph/osd/ceph-4
total 52
-rw-r--r-- 1 ceph ceph 469 Sep 15 17:39 activate.monmap
lrwxrwxrwx 1 ceph ceph  93 Sep 15 17:39 block -> /dev/ceph-f1d0829e-e545-4dfa-ae80-d26a9bc02520/osd-block-c9255a62-33e5-41a0-a128-2b3b362cc2a0
-rw------- 1 ceph ceph   2 Sep 15 17:39 bluefs
-rw------- 1 ceph ceph  37 Sep 15 17:39 ceph_fsid
-rw-r--r-- 1 ceph ceph  37 Sep 15 17:39 fsid
-rw------- 1 ceph ceph  55 Sep 15 17:39 keyring
-rw------- 1 ceph ceph   8 Sep 15 17:39 kv_backend
-rw------- 1 ceph ceph  21 Sep 15 17:39 magic
-rw------- 1 ceph ceph   4 Sep 15 17:39 mkfs_done
-rw------- 1 ceph ceph  41 Sep 15 17:39 osd_key
-rw------- 1 ceph ceph   6 Sep 15 17:39 ready
-rw------- 1 ceph ceph   3 Sep 15 17:39 require_osd_release
-rw------- 1 ceph ceph  10 Sep 15 17:39 type
-rw------- 1 ceph ceph   2 Sep 15 17:39 whoami

[[email protected]:/etc/ceph]# ceph-deploy osd create k8smaster01 --data /dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create k8smaster01 --data /dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : k8smaster01
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdb
[k8smaster01][DEBUG ] connected to host: k8smaster01 
[k8smaster01][DEBUG ] detect platform information from remote host
[k8smaster01][DEBUG ] detect machine type
[k8smaster01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.8.2003 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to k8smaster01
[k8smaster01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[k8smaster01][DEBUG ] find the location of an executable
[k8smaster01][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
[k8smaster01][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[k8smaster01][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 1a96dc35-c585-488c-95bc-633d82b9a23a
[k8smaster01][WARNIN] Running command: /usr/sbin/vgcreate --force --yes ceph-291b4104-bdd4-432f-b966-44bca7227ba8 /dev/sdb
[k8smaster01][WARNIN]  stdout: Physical volume "/dev/sdb" successfully created.
[k8smaster01][WARNIN]  stdout: Volume group "ceph-291b4104-bdd4-432f-b966-44bca7227ba8" successfully created
[k8smaster01][WARNIN] Running command: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-1a96dc35-c585-488c-95bc-633d82b9a23a ceph-291b4104-bdd4-432f-b966-44bca7227ba8
[k8smaster01][WARNIN]  stdout: Logical volume "osd-block-1a96dc35-c585-488c-95bc-633d82b9a23a" created.
[k8smaster01][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[k8smaster01][WARNIN] Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-5
[k8smaster01][WARNIN] Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-291b4104-bdd4-432f-b966-44bca7227ba8/osd-block-1a96dc35-c585-488c-95bc-633d82b9a23a
[k8smaster01][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
[k8smaster01][WARNIN] Running command: /usr/bin/ln -s /dev/ceph-291b4104-bdd4-432f-b966-44bca7227ba8/osd-block-1a96dc35-c585-488c-95bc-633d82b9a23a /var/lib/ceph/osd/ceph-5/block
[k8smaster01][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-5/activate.monmap
[k8smaster01][WARNIN]  stderr: 2020-09-23T19:40:53.742+0800 7f68ddfe2700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[k8smaster01][WARNIN] 2020-09-23T19:40:53.742+0800 7f68ddfe2700 -1 AuthRegistry(0x7f68d8058528) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[k8smaster01][WARNIN]  stderr: got monmap epoch 3
[k8smaster01][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-5/keyring --create-keyring --name osd.5 --add-key AQBENGtf1JdtKxAAdDlcnJEN3SzeD3DnwxMj/Q==
[k8smaster01][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-5/keyring
[k8smaster01][WARNIN] added entity osd.5 auth(key=AQBENGtf1JdtKxAAdDlcnJEN3SzeD3DnwxMj/Q==)
[k8smaster01][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5/keyring
[k8smaster01][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5/
[k8smaster01][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 5 --monmap /var/lib/ceph/osd/ceph-5/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-5/ --osd-uuid 1a96dc35-c585-488c-95bc-633d82b9a23a --setuser ceph --setgroup ceph
[k8smaster01][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sdb
[k8smaster01][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5
[k8smaster01][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-291b4104-bdd4-432f-b966-44bca7227ba8/osd-block-1a96dc35-c585-488c-95bc-633d82b9a23a --path /var/lib/ceph/osd/ceph-5 --no-mon-config
[k8smaster01][WARNIN] Running command: /usr/bin/ln -snf /dev/ceph-291b4104-bdd4-432f-b966-44bca7227ba8/osd-block-1a96dc35-c585-488c-95bc-633d82b9a23a /var/lib/ceph/osd/ceph-5/block
[k8smaster01][WARNIN] Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-5/block
[k8smaster01][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
[k8smaster01][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5
[k8smaster01][WARNIN] Running command: /usr/bin/systemctl enable ceph-volume@lvm-5-1a96dc35-c585-488c-95bc-633d82b9a23a
[k8smaster01][WARNIN]  stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-5-1a96dc35-c585-488c-95bc-633d82b9a23a.service to /usr/lib/systemd/system/ceph-volume@.service.
[k8smaster01][WARNIN] Running command: /usr/bin/systemctl enable --runtime ceph-osd@5
[k8smaster01][WARNIN]  stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@5.service to /usr/lib/systemd/system/ceph-osd@.service.
[k8smaster01][WARNIN] Running command: /usr/bin/systemctl start ceph-osd@5
[k8smaster01][WARNIN] --> ceph-volume lvm activate successful for osd ID: 5
[k8smaster01][WARNIN] --> ceph-volume lvm create successful for: /dev/sdb
[k8smaster01][INFO  ] checking OSD status...
[k8smaster01][DEBUG ] find the location of an executable
[k8smaster01][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host k8smaster01 is now ready for osd use.
[[email protected]:/etc/ceph]# mount | grep ceph
192.168.13.101:6789,192.168.13.102:6789,192.168.13.103:6789:/ on /mnt/cephfs type ceph (rw,noatime,name=admin,secret=,acl)
tmpfs on /var/lib/ceph/osd/ceph-4 type tmpfs (rw,relatime)
tmpfs on /var/lib/ceph/osd/ceph-5 type tmpfs (rw,relatime)
[[email protected]:/etc/ceph]# ll /var/lib/ceph/osd/ceph-5
total 52
-rw-r--r-- 1 ceph ceph 469 Sep 23 19:40 activate.monmap
lrwxrwxrwx 1 ceph ceph  93 Sep 23 19:40 block -> /dev/ceph-291b4104-bdd4-432f-b966-44bca7227ba8/osd-block-1a96dc35-c585-488c-95bc-633d82b9a23a
-rw------- 1 ceph ceph   2 Sep 23 19:40 bluefs
-rw------- 1 ceph ceph  37 Sep 23 19:40 ceph_fsid
-rw-r--r-- 1 ceph ceph  37 Sep 23 19:40 fsid
-rw------- 1 ceph ceph  55 Sep 23 19:40 keyring
-rw------- 1 ceph ceph   8 Sep 23 19:40 kv_backend
-rw------- 1 ceph ceph  21 Sep 23 19:40 magic
-rw------- 1 ceph ceph   4 Sep 23 19:40 mkfs_done
-rw------- 1 ceph ceph  41 Sep 23 19:40 osd_key
-rw------- 1 ceph ceph   6 Sep 23 19:40 ready
-rw------- 1 ceph ceph   3 Sep 23 19:40 require_osd_release
-rw------- 1 ceph ceph  10 Sep 23 19:40 type
-rw------- 1 ceph ceph   2 Sep 23 19:40 whoami

[[email protected]:/etc/ceph]# ceph-deploy osd create k8smaster01 --data /dev/sdc
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create k8smaster01 --data /dev/sdc
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : k8smaster01
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sdc
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdc
[k8smaster01][DEBUG ] connected to host: k8smaster01 
[k8smaster01][DEBUG ] detect platform information from remote host
[k8smaster01][DEBUG ] detect machine type
[k8smaster01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.8.2003 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to k8smaster01
[k8smaster01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[k8smaster01][DEBUG ] find the location of an executable
[k8smaster01][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdc
[k8smaster01][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[k8smaster01][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 5ac3661f-cc6e-4ae3-906c-17d074929d83
[k8smaster01][WARNIN] Running command: /usr/sbin/vgcreate --force --yes ceph-0979d24b-e9a4-4e01-9912-08a83706ae15 /dev/sdc
[k8smaster01][WARNIN]  stdout: Physical volume "/dev/sdc" successfully created.
[k8smaster01][WARNIN]  stdout: Volume group "ceph-0979d24b-e9a4-4e01-9912-08a83706ae15" successfully created
[k8smaster01][WARNIN] Running command: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-5ac3661f-cc6e-4ae3-906c-17d074929d83 ceph-0979d24b-e9a4-4e01-9912-08a83706ae15
[k8smaster01][WARNIN]  stdout: Logical volume "osd-block-5ac3661f-cc6e-4ae3-906c-17d074929d83" created.
[k8smaster01][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[k8smaster01][WARNIN] Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-6
[k8smaster01][WARNIN] Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-0979d24b-e9a4-4e01-9912-08a83706ae15/osd-block-5ac3661f-cc6e-4ae3-906c-17d074929d83
[k8smaster01][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /dev/dm-3
[k8smaster01][WARNIN] Running command: /usr/bin/ln -s /dev/ceph-0979d24b-e9a4-4e01-9912-08a83706ae15/osd-block-5ac3661f-cc6e-4ae3-906c-17d074929d83 /var/lib/ceph/osd/ceph-6/block
[k8smaster01][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-6/activate.monmap
[k8smaster01][WARNIN]  stderr: 2020-09-23T19:42:36.801+0800 7fc3f0653700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[k8smaster01][WARNIN] 2020-09-23T19:42:36.801+0800 7fc3f0653700 -1 AuthRegistry(0x7fc3e8058528) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[k8smaster01][WARNIN]  stderr: got monmap epoch 3
[k8smaster01][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-6/keyring --create-keyring --name osd.6 --add-key AQCrNGtfLD7CMBAAzqvCddh8twu7HPohxwdCQg==
[k8smaster01][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-6/keyring
[k8smaster01][WARNIN] added entity osd.6 auth(key=AQCrNGtfLD7CMBAAzqvCddh8twu7HPohxwdCQg==)
[k8smaster01][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-6/keyring
[k8smaster01][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-6/
[k8smaster01][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 6 --monmap /var/lib/ceph/osd/ceph-6/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-6/ --osd-uuid 5ac3661f-cc6e-4ae3-906c-17d074929d83 --setuser ceph --setgroup ceph
[k8smaster01][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sdc
[k8smaster01][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-6
[k8smaster01][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-0979d24b-e9a4-4e01-9912-08a83706ae15/osd-block-5ac3661f-cc6e-4ae3-906c-17d074929d83 --path /var/lib/ceph/osd/ceph-6 --no-mon-config
[k8smaster01][WARNIN] Running command: /usr/bin/ln -snf /dev/ceph-0979d24b-e9a4-4e01-9912-08a83706ae15/osd-block-5ac3661f-cc6e-4ae3-906c-17d074929d83 /var/lib/ceph/osd/ceph-6/block
[k8smaster01][WARNIN] Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-6/block
[k8smaster01][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /dev/dm-3
[k8smaster01][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-6
[k8smaster01][WARNIN] Running command: /usr/bin/systemctl enable ceph-volume@lvm-6-5ac3661f-cc6e-4ae3-906c-17d074929d83
[k8smaster01][WARNIN]  stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-6-5ac3661f-cc6e-4ae3-906c-17d074929d83.service to /usr/lib/systemd/system/ceph-volume@.service.
[k8smaster01][WARNIN] Running command: /usr/bin/systemctl enable --runtime ceph-osd@6
[k8smaster01][WARNIN]  stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@6.service to /usr/lib/systemd/system/ceph-osd@.service.
[k8smaster01][WARNIN] Running command: /usr/bin/systemctl start ceph-osd@6
[k8smaster01][WARNIN] --> ceph-volume lvm activate successful for osd ID: 6
[k8smaster01][WARNIN] --> ceph-volume lvm create successful for: /dev/sdc
[k8smaster01][INFO  ] checking OSD status...
[k8smaster01][DEBUG ] find the location of an executable
[k8smaster01][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host k8smaster01 is now ready for osd use.
[[email protected]:/etc/ceph]# mount | grep ceph
192.168.13.101:6789,192.168.13.102:6789,192.168.13.103:6789:/ on /mnt/cephfs type ceph (rw,noatime,name=admin,secret=,acl)
tmpfs on /var/lib/ceph/osd/ceph-4 type tmpfs (rw,relatime)
tmpfs on /var/lib/ceph/osd/ceph-5 type tmpfs (rw,relatime)
tmpfs on /var/lib/ceph/osd/ceph-6 type tmpfs (rw,relatime)
[[email protected]:/etc/ceph]# ll /var/lib/ceph/osd/ceph-6
total 52
-rw-r--r-- 1 ceph ceph 469 Sep 23 19:42 activate.monmap
lrwxrwxrwx 1 ceph ceph  93 Sep 23 19:42 block -> /dev/ceph-0979d24b-e9a4-4e01-9912-08a83706ae15/osd-block-5ac3661f-cc6e-4ae3-906c-17d074929d83
-rw------- 1 ceph ceph   2 Sep 23 19:42 bluefs
-rw------- 1 ceph ceph  37 Sep 23 19:42 ceph_fsid
-rw-r--r-- 1 ceph ceph  37 Sep 23 19:42 fsid
-rw------- 1 ceph ceph  55 Sep 23 19:42 keyring
-rw------- 1 ceph ceph   8 Sep 23 19:42 kv_backend
-rw------- 1 ceph ceph  21 Sep 23 19:42 magic
-rw------- 1 ceph ceph   4 Sep 23 19:42 mkfs_done
-rw------- 1 ceph ceph  41 Sep 23 19:42 osd_key
-rw------- 1 ceph ceph   6 Sep 23 19:42 ready
-rw------- 1 ceph ceph   3 Sep 23 19:42 require_osd_release
-rw------- 1 ceph ceph  10 Sep 23 19:42 type
-rw------- 1 ceph ceph   2 Sep 23 19:42 whoami


[[email protected]:/etc/ceph]# ceph -s
  cluster:
    id:     61ab49ca-21d8-4b03-8237-8c05c5c5d177
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum k8smaster01,k8smaster02,k8smaster03 (age 14m)
    mgr: k8smaster03(active, since 14m), standbys: k8smaster01, k8smaster02
    mds: cephfs:1 {0=k8smaster02=up:active} 2 up:standby
    osd: 7 osds: 7 up (since 7m), 7 in (since 7m)
 
  data:
    pools:   3 pools, 65 pgs
    objects: 5.48k objects, 5.8 GiB
    usage:   26 GiB used, 10 TiB / 10 TiB avail
    pgs:     65 active+clean
 
  io:
    client:   391 KiB/s wr, 0 op/s rd, 1 op/s wr

 
// View the OSD tree
[[email protected]:/etc/ceph]# ceph osd tree
ID   CLASS  WEIGHT    TYPE NAME             STATUS  REWEIGHT  PRI-AFF
 -1         10.18839  root default                                   
-11          6.54959      host k8smaster01                           
  4    hdd   2.18320          osd.4             up   1.00000  1.00000
  5    hdd   2.18320          osd.5             up   1.00000  1.00000
  6    hdd   2.18320          osd.6             up   1.00000  1.00000
 -3          0.90970      host k8sworker01                           
  0    hdd   0.90970          osd.0             up   1.00000  1.00000
 -5          0.90970      host k8sworker02                           
  1    hdd   0.90970          osd.1             up   1.00000  1.00000
 -7          0.90970      host k8sworker03                           
  2    hdd   0.90970          osd.2             up   1.00000  1.00000
 -9          0.90970      host k8sworker04                           
  3    hdd   0.90970          osd.3             up   1.00000  1.00000
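
For a per-OSD view of capacity, weight, and PG count on top of the tree layout above, the following optional check can be run from any admin node:

ceph osd df tree    # utilization, weight and PG count per OSD, grouped by the CRUSH hierarchy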



Supplement 1
If the cluster health check shows:
health: HEALTH_WARN
            Module 'restful' has failed dependency: No module named 'pecan'
run the following:
pip3 install pecan werkzeug

then reboot the system.
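
If a full reboot is not convenient, restarting the mgr daemons after installing the dependency is usually enough for the module to pick it up (a sketch, assuming the standard ceph-mgr systemd units and that pip3 installs into the Python 3 used by ceph-mgr):

# Restart the mgr daemons so the 'restful' module re-imports its dependencies
systemctl restart ceph-mgr.target

# The dependency warning should clear once the active mgr comes back
ceph health detail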

Supplement 2:
If the following error occurs:
[ERROR ] RuntimeError: command returned non-zero exit status: 2
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sda
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs

run the following to wipe the disk's partition data, then retry the OSD creation:
[[email protected]:/etc/ceph]# yum install gdisk -y
[[email protected]:/etc/ceph]# sgdisk --zap-all /dev/sda
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
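
If the failure is caused by leftover filesystem, partition, or LVM signatures on the disk, the device can also be cleaned up before retrying (a sketch, assuming the disk holds no data that must be preserved; the wipe commands run on the host that owns the disk, the retry on the deploy node):

wipefs --all /dev/sda                                 # clear filesystem/RAID/partition-table signatures
ceph-volume lvm zap /dev/sda --destroy                # remove any leftover Ceph LVM volumes on the device
ceph-deploy osd create k8smaster01 --data /dev/sda    # retry the OSD creation from the deploy node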

1.6 Install MDS

1.6.1 Install the required environment

pip3 install pyjwt
pip install python3-authjwt
python3 -m pip install cherrypy
python3 -m pip install routes
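
A quick sanity check that the modules installed above are importable by Python 3 (note that the pyjwt package is imported as jwt; this assumes the pip commands above completed without error):

python3 -c "import jwt, cherrypy, routes; print('python deps for mgr modules OK')"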

1.6.2 deploy MDS


[[email protected]:/etc/ceph]#  ceph-deploy mds create k8smaster01 k8smaster02 k8smaster03
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mds create k8smaster01 k8smaster02 k8smaster03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  mds                           : [('k8smaster01', 'k8smaster01'), ('k8smaster02', 'k8smaster02'), ('k8smaster03', 'k8smaster03')]
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts k8smaster01:k8smaster01 k8smaster02:k8smaster02 k8smaster03:k8smaster03
[k8smaster01][DEBUG ] connected to host: k8smaster01 
[k8smaster01][DEBUG ] detect platform information from remote host
[k8smaster01][DEBUG ] detect machine type
[ceph_deploy.mds][INFO  ] Distro info: CentOS Linux 7.8.2003 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to k8smaster01
[k8smaster01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[k8smaster01][WARNIN] mds keyring does not exist yet, creating one
[k8smaster01][DEBUG ] create a keyring file
[k8smaster01][DEBUG ] create path if it doesn't exist
[k8smaster01][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.k8smaster01 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-k8smaster01/keyring
[k8smaster01][INFO  ] Running command: systemctl enable ceph-mds@k8smaster01
[k8smaster01][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/ceph-mds@k8smaster01.service to /usr/lib/systemd/system/ceph-mds@.service.
[k8smaster01][INFO  ] Running command: systemctl start ceph-mds@k8smaster01
[k8smaster01][INFO  ] Running command: systemctl enable ceph.target
[k8smaster02][DEBUG ] connected to host: k8smaster02 
[k8smaster02][DEBUG ] detect platform information from remote host
[k8smaster02][DEBUG ] detect machine type
[ceph_deploy.mds][INFO  ] Distro info: CentOS Linux 7.8.2003 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to k8smaster02
[k8smaster02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[k8smaster02][WARNIN] mds keyring does not exist yet, creating one
[k8smaster02][DEBUG ] create a keyring file
[k8smaster02][DEBUG ] create path if it doesn't exist
[k8smaster02][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.k8smaster02 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-k8smaster02/keyring
[k8smaster02][INFO  ] Running command: systemctl enable ceph-mds@k8smaster02
[k8smaster02][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/ceph-mds@k8smaster02.service to /usr/lib/systemd/system/ceph-mds@.service.
[k8smaster02][INFO  ] Running command: systemctl start ceph-mds@k8smaster02
[k8smaster02][INFO  ] Running command: systemctl enable ceph.target
[k8smaster03][DEBUG ] connected to host: k8smaster03 
[k8smaster03][DEBUG ] detect platform information from remote host
[k8smaster03][DEBUG ] detect machine type
[ceph_deploy.mds][INFO  ] Distro info: CentOS Linux 7.8.2003 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to k8smaster03
[k8smaster03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[k8smaster03][WARNIN] mds keyring does not exist yet, creating one
[k8smaster03][DEBUG ] create a keyring file
[k8smaster03][DEBUG ] create path if it doesn't exist
[k8smaster03][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.k8smaster03 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-k8smaster03/keyring
[k8smaster03][INFO  ] Running command: systemctl enable ceph-mds@k8smaster03
[k8smaster03][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/ceph-mds@k8smaster03.service to /usr/lib/systemd/system/ceph-mds@.service.
[k8smaster03][INFO  ] Running command: systemctl start ceph-mds@k8smaster03
[k8smaster03][INFO  ] Running command: systemctl enable ceph.target

1.6.3 Install Dashboard

Not deployed for now.

1.7 Basic Ceph usage

1.7.1 Basic reference

Ceph stores data in pools by default. A pool is a logical grouping of a number of PGs; the objects in each PG are mapped to different OSDs, so a pool is spread across the whole cluster.
Different kinds of data can all be written into a single pool, but that makes it hard to separate and manage data per client, so a dedicated pool is usually created for each client.
With fewer than 5 OSDs, set pg_num to 128.
With 5 to 10 OSDs, set pg_num to 512.
With 10 to 50 OSDs, set pg_num to 4096.
With more than 50 OSDs, use pgcalc to work out the value (the rule of thumb behind it is sketched below).
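A minimal shell sketch of that rule of thumb (pg_num ≈ OSDs * 100 / replicas, rounded up to the next power of two); the 3-replica value is only an assumption for illustration:

OSDS=$(ceph osd ls | wc -l)          # count the OSDs in the cluster
REPLICAS=3                           # assumed pool replica count
RAW=$(( OSDS * 100 / REPLICAS ))
PG_NUM=1; while [ "$PG_NUM" -lt "$RAW" ]; do PG_NUM=$(( PG_NUM * 2 )); done
echo "suggested pg_num: $PG_NUM"     # e.g. 7 OSDs, 3 replicas -> 233 -> 256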
# 128 is the number of PGs and PGPs for the pool
[root@k8smaster01:/etc/ceph]# ceph osd pool create cephpools 128 128
pool 'cephpools' created
List all pools in the current cluster
[root@k8smaster01:/etc/ceph]# ceph osd pool ls
device_health_metrics
cephpools
or
[root@k8smaster01:/etc/ceph]# rados lspools
device_health_metrics
cephpools

//Upload a file
[root@k8smaster01:/etc/ceph]# echo Mapple > cephtest01.txt
[root@k8smaster01:/etc/ceph]# rados put cephtest01 ./cephtest01.txt --pool=cephpools
//List the objects in the pool
[root@k8smaster01:/etc/ceph]# rados ls --pool=cephpools
cephtest01
//Fetch the object
[root@k8smaster01:/etc/ceph]# rados get cephtest01 out_cephtest01 -p cephpools
[root@k8smaster01:/etc/ceph]# cat ./out_cephtest01 
Mapple
//Delete the object and list the pool again
[root@k8smaster01:/etc/ceph]# rados rm cephtest01 -p cephpools
[root@k8smaster01:/etc/ceph]# rados ls --pool=cephpools
//Show how an object is mapped inside the Ceph cluster
[root@k8smaster01:/etc/ceph]# ceph osd map cephpools cephtest01
osdmap e59 pool 'cephpools' (2) object 'cephtest01' -> pg 2.45fcc521 (2.21) -> up ([0,2,1], p0) acting ([0,2,1], p0)
Note: pg 2.45fcc521 (2.21) means PG 21 of pool number 2.
Note: up — the pool uses 3-way replication; the three numbers are the IDs of the OSDs that store this object, and p0 means osd.0 is the primary OSD.
#acting is read the same way


Troubleshooting note: deleting a pool (here because far too many PGs were chosen when the pool was created)
[root@k8smaster01:/etc/ceph]# ceph osd lspools
1 device_health_metrics
2 cephpools
[root@k8smaster01:/etc/ceph]# ceph osd pool delete cephpools
This fails with an error like the following:
Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored in pool cephpools.  If you are *ABSOLUTELY CERTAIN* that is what you want, pass the pool name *twice*, followed by --yes-i-really-really-mean-it.
[root@k8smaster01:/etc/ceph]# ceph osd pool delete cephpools cephpools --yes-i-really-really-mean-it
pool 'cephpools' removed
[root@k8smaster01:/etc/ceph]# ceph osd lspools
1 device_health_metrics

[root@k8smaster01:/etc/ceph]# ceph osd pool ls detail
[root@k8smaster01:/etc/ceph]# rados df
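
On recent releases the monitors can additionally refuse pool deletion outright. The transcript above succeeded without it, but if the delete command is still rejected even after passing the pool name twice, the following (hedged) switch is the usual missing piece:

ceph config set mon mon_allow_pool_delete true    # temporarily allow pool deletion
ceph osd pool delete cephpools cephpools --yes-i-really-really-mean-it
ceph config set mon mon_allow_pool_delete false   # turn the safety back on afterwards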


Operations reference
Common Ceph operations:

1.ceph -s  show the overall current cluster status
2.ceph -w  watch the cluster status in real time
3.ceph df  show storage space usage
4.ceph osd tree  show OSD status
5.ceph osd dump  show pool settings, OSD settings and various states
--------pool operations-------
6.ceph osd lspools  list pools
7.ceph osd pool create images 100   create a pool named images with 100 PGs  (pg_num = (number of OSDs * 100) / replica count, rounded up to a power of two)
8.ceph  osd  pool  set  images  size  3    #set the replica count of pool images to 3
ceph osd pool set rbd pg_num 2048
ceph osd pool set rbd  pgp_num 2048  set the pool's PG/PGP count
9.ceph  osd  pool  set-quota  images  max_objects  10000   #set the pool's maximum object count
10.ceph  osd  pool  delete  images  images  --yes-i-really-really-mean-it  delete the images pool
11.ceph osd pool set {rbdpool} min_size 1  set the minimum number of replicas required for I/O (together with 12. this adjusts min_size and size)
12.ceph osd pool set {rbdpool} size 2
13.ceph osd pool set {rbdpool} target_max_bytes 100000000000000  cap the pool at roughly 100 TB of storage
14.ceph osd pool set-quota {rbdpool} max_objects 10000  set a quota on the pool; the cluster warns before the limit is reached and stops accepting writes once it is hit
15.ceph osd pool rename {current-pool-name} {new-pool-name} rename a pool
16.rados df show pool statistics
17.ceph osd pool get {pool-name} {key} get a pool parameter
18.ceph osd dump |grep '<parameter>'
19.To set a pool option value, run (a short usage sketch follows this list):
ceph osd pool set {pool-name} {key} {value}
Common options:
size: number of object replicas in the pool (see "setting the number of object replicas"). Replicated pools only.
min_size: minimum number of replicas required for I/O (see "setting the number of object replicas"). Replicated pools only.
pg_num: effective number of PGs used when calculating data placement. Can only be set larger than the current PG count.
pgp_num: effective number of PGPs used when calculating data placement. Must be less than or equal to the pool's PG count.
crush_ruleset: the CRUSH rule set used by the pool.
hashpspool: set/unset the HASHPSPOOL flag on the pool.
target_max_bytes: Ceph flushes or evicts objects when the max_bytes threshold is reached.
target_max_objects: Ceph flushes or evicts objects when the max_objects threshold is reached.
scrub_min_interval: minimum interval in seconds between pool scrubs while load is low. If 0, the osd_scrub_min_interval value from the config file is used.
scrub_max_interval: maximum interval in seconds between pool scrubs regardless of cluster load. If 0, the osd_scrub_max_interval value from the config file is used.
deep_scrub_interval: interval in seconds between deep scrubs of the pool. If 0, the osd_deep_scrub_interval value from the config file is used.
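
A short usage sketch of the get/set pattern above, using the {rbdpool} placeholder from the list (substitute a real pool name):

ceph osd pool get rbdpool size          # show the current replica count
ceph osd pool set rbdpool min_size 2    # require at least 2 replicas for I/O to proceed
ceph osd pool get rbdpool min_size      # verify the change
ceph osd dump | grep rbdpool            # the same values also appear in the osd dump output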

9.10 Tunables
1. Adjust the tunables on an existing cluster. Note that this may cause some data migration (possibly as much as 10%). It is the recommended approach, but on a production cluster be aware of the performance impact of the change. This command enables the optimal tunables:
ceph osd crush tunables optimal  (set the cluster tunables to the optimal profile)
If the switch does not go smoothly (e.g. the load is too high) and it was done only recently, or there are client compatibility issues (older cephfs kernel drivers or rbd clients, or librados clients older than bobtail), you can switch back with:
ceph osd crush tunables legacy  (revert the cluster tunables to the legacy defaults)
2. The warning can also be silenced without changing CRUSH at all, by adding the following to the [mon] section of ceph.conf:
mon warn on legacy crush tunables = false
For the change to take effect, restart all monitors, or run:
ceph tell mon.\* injectargs --no-mon-warn-on-legacy-crush-tunables

--------object storage-----------
11.rados  put  test-object-1  a.txt  --pool=data create an object
12.rados -p   data ls list the objects in the pool
13.ceph osd map data  test-object-1 find where the object is stored
14.rados rm test-object-1 --pool=data delete an object
15.rados ls -p poolname  list all objects in poolname


---------view the CRUSH map--------
16.ceph osd getcrushmap -o crush.out
17.crushtool -d crush.out -o crush.txt
18.ceph osd crush dump
19.ceph osd crush rule dump   //dump the CRUSH rules

OSD maintenance mode
ceph osd set noout
/etc/init.d/ceph stop osd.{osd-num}
ceph osd crush rule rm {rule name}   remove a rule from CRUSH

----------ceph injectargs during maintenance--------
#How often an Ceph OSD Daemon pings its peers (in seconds).
#default is 6
ceph tell osd.* injectargs '--osd-heartbeat-interval 10'

#The elapsed time when a Ceph OSD Daemon hasn't shown a heartbeat that the Ceph Storage Cluster considers it down.
#default is 20
ceph tell osd.* injectargs '--osd-heartbeat-grace 30'
-----------after maintenance is finished------------------------
/etc/init.d/ceph start osd.{osd-num}
ceph osd unset noout
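
The /etc/init.d/ceph scripts in this checklist come from SysV-init era releases; on the systemd-based hosts used in this deployment, a hedged equivalent of the same maintenance flow (osd.2 is just an example ID) is:

ceph osd set noout                 # keep CRUSH from rebalancing while the OSD is down
systemctl stop ceph-osd@2          # stop osd.2 on the node that hosts it
# ... perform the maintenance work ...
systemctl start ceph-osd@2
ceph osd unset noout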

------------radosgw operations--------------------(the RADOS layer itself has no per-user access control; access control is provided on top by radosgw, and with s3cmd a user can only see information belonging to their own account)
Create a user
#sudo  radosgw-admin  user  create  --uid=newtouchstep  --display-name=newtouchstep [email protected]
Modify a user
#sudo  radosgw-admin user  modify  --uid=newtouchstep  --display-name=newtouchstep [email protected]
Show user information
#sudo  radosgw-admin  user  info  --uid=newtouchone
Delete a user
#sudo  radosgw-admin    user  rm  --uid=newtouchone  #can only be deleted when the user has no data
#sudo  radosgw-admin   user  rm  --uid=newtouchone   --purge-data #delete the user together with their data
Suspend a user
#sudo  radosgw-admin   user  suspend   --uid=newtouchone
Re-enable a user
#sudo  radosgw-admin  user  enable   --uid=newtouchone
Check a user
#sudo  radosgw-admin    user  check   --uid=newtouchone
List buckets
#sudo  radosgw-admin  bucket  list    list all buckets
List the objects of a given bucket
#sudo  radosgw-admin  bucket  list  --bucket=images
Bucket statistics
#sudo  radosgw-admin  bucket  stats     #statistics for all buckets
#sudo  radosgw-admin  bucket  stats  --bucket=images  #statistics for a specific bucket
Delete a bucket
delete the bucket only (the objects are kept; re-creating the bucket restores access to them)
#sudo  radosgw-admin  bucket  rm  --bucket=images
Delete the bucket and its objects at the same time
#sudo  radosgw-admin  bucket  rm  --bucket=images   --purge-objects
Check a bucket
#sudo  radosgw-admin  bucket  check
Delete an object
#sudo  radosgw-admin  object  rm  --bucket=attach  --object=fanbingbing.jpg
Set a quota on a bucket
#sudo  radosgw-admin  quota  set  --max-objects=200   --max-size=10000000000 --quota-scope=bucket  --bucket=images
#sudo  radosgw-admin  quota  enable   --quota-scope=bucket   --bucket=images
#sudo  radosgw-admin  quota  disable   --quota-scope=bucket  --bucket=images
Set a quota on a user account
#sudo  radosgw-admin  quota  set  --max-objects=2  --max-size=100000  --quota-scope=user --uid=newtouchstep
#sudo  radosgw-admin  quota  enable   --quota-scope=user  --uid=newtouchstep
#sudo  radosgw-admin  quota  disable   --quota-scope=user  --uid=newtouchstep

radosgw-admin metadata get bucket:maoweitest02  show detailed bucket information, including its ID, placement, creation time and owner

#add capabilities to a user, allowing it to read/write users and usage
radosgw-admin caps add  --uid=admin --caps="users=*"
radosgw-admin caps add --uid=admin --caps="usage=read,write"

---------------using /etc/init.d/ceph (stops/starts all Ceph-related processes on the node, including mon, mds and osd)-------------
/etc/init.d/ceph start|stop|restart   start/stop/restart Ceph on a single node
/etc/init.d/ceph -a start|stop|restart  start/stop/restart Ceph on all nodes
service ceph start|stop|restart   manage Ceph through the service wrapper
service ceph start -a start、stop、restart
/etc/init.d/ceph start osd.2
service ceph start osd.2

-----------monitor status-------------------------------
ceph mon stat   show mon status
ceph mon dump   show the mon map
ceph quorum_status    check the mon quorum/election state
ceph mon remove test-55  remove a mon node
ceph quorum_status -f json-pretty  show the monitor election status in pretty JSON

-----------OSD operations-------------------------------------
ceph osd df  list per-OSD usage in detail, including size, weight and utilisation
ceph osd down 0  mark osd.0 down (only marks it down)
ceph osd  in 0   mark osd.0 in
ceph osd rm 0 remove an OSD from the cluster
ceph osd crush rm  osd.0 remove an OSD from the CRUSH map
ceph osd crush set osd.1  0.5 host=node241 set the CRUSH weight of osd.1 to 0.5
ceph osd out osd.3  osd.3's reweight becomes 0 and no new data is placed on it, but the OSD process and device remain
ceph osd in osd.3   bring an evicted OSD back into the cluster
ceph osd pause      pause all OSDs in the cluster; the cluster stops accepting I/O
ceph osd unpause     resume accepting I/O
ceph osd set nodown  set the nodown flag so OSDs are not marked down; works around OSDs flapping on an unstable network
ceph osd unset nodown clear the flag
ceph osd lost {id} [--yes-i-really-mean-it]  mark an OSD as lost; may cause permanent data loss, use with extreme care!

------------rados commands--------------------------------
rados is the way to talk to the Ceph object store directly
rados lspools <====>ceph osd pool ls list the pools in the cluster
rados mkpool test_pool create a pool
rados create test_object -p test_pool create an (empty) object
rados -p test_pool ls list all objects in test_pool
rados rm test_object  -p test_pool delete a given object from a pool
rados rmpool test_pool test_pool --yes-i-really-really-mean-it delete the pool and all data in it
rados -p test_pool put test_object xx.txt  upload an object into test_pool
rados bench 600 write -t 100 -b 4096 -p test_pool  run a rados write benchmark (use seq or rand instead of write for read tests)
ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gateway.asok config set debug_rgw 20/20  change the radosgw log level

------------PG operations---------------------------------------
ceph pg dump <===> ceph pg ls show the PG mappings
ceph pg map 2.06 show the mapping of a single PG
ceph pg stat  show PG status
ceph pg 2.06 query show detailed information for one PG
ceph pg scrub {pg-id} scrub a PG
ceph pg dump_stuck unclean  list PGs stuck in the given state
ceph pg dump_stuck inactive
ceph pg dump_stuck stale
ceph pg dump --format plain (plain text) dump statistics for every PG in the cluster
          --format json  (JSON format)
To query a specific PG, use:
ceph pg {poolnum}.{pg-id} query
To find stuck placement groups, run:
ceph pg dump_stuck [unclean|inactive|stale|undersized|degraded]

ceph tell mon.FOO injectargs --debug_mon 10/10   set the mon log level
Show a monitor's configuration
ceph daemon mon.FOO config show
or:
ceph daemon mon.FOO config get 'OPTION_NAME'

1.7.2 Mounting CephFS

1.7.2.1 Enable CephFS on the cluster

# Create the data pool with the specified pg and pgp counts
[[email protected]:/etc/ceph]# ceph osd pool create cephfs_data 512 512
pool 'cephfs_data' created
[[email protected]:/etc/ceph]# ceph osd lspools
1 device_health_metrics
4 cephfs_data
# Create the metadata pool
[[email protected]:/etc/ceph]# ceph osd pool create cephfs_metadata 512 512
pool 'cephfs_metadata' created
# Create the CephFS filesystem from the two pools
[[email protected]:/etc/ceph]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 5 and data pool 4
# Check the filesystem
[[email protected]:/etc/ceph]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
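
A quick, hedged verification that an MDS has picked up the new filesystem before any client mounts it:

ceph mds stat             # expect something like cephfs:1 {0=k8smaster02=up:active} 2 up:standby
ceph fs status cephfs     # pools, MDS ranks and usage for the new filesystem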

1.7.2.2 Mount on the clients


mkdir -p /mnt/cephfs

[[email protected]:/etc/ceph]# cat ceph.client.admin.keyring 
[client.admin]
	key = AQDKN2BfzZxpARAA0BP6urOP9aSv7M+dByoB+A==
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"


[[email protected]:/etc/ceph]# mount -t ceph 192.168.13.101:6789,192.168.13.102:6789,192.168.13.103:6789:/ /mnt/cephfs -o name=admin,secret=AQDKN2BfzZxpARAA0BP6urOP9aSv7M+dByoB+A==

[[email protected]:/root]#  mount -t ceph 192.168.13.101:6789,192.168.13.102:6789,192.168.13.103:6789:/ /mnt/cephfs -o name=admin,secret=AQDKN2BfzZxpARAA0BP6urOP9aSv7M+dByoB+A==

[[email protected]:/root]#  mount -t ceph 192.168.13.101:6789,192.168.13.102:6789,192.168.13.103:6789:/ /mnt/cephfs -o name=admin,secret=AQDKN2BfzZxpARAA0BP6urOP9aSv7M+dByoB+A==


df -TH



Configure automatic mounting at boot via /etc/fstab
[[email protected]:/etc/ceph]# vim /etc/fstab
192.168.13.101:6789,192.168.13.102:6789,192.168.13.103:6789:/ /mnt/cephfs       ceph    name=admin,secret=AQDKN2BfzZxpARAA0BP6urOP9aSv7M+dByoB+A==,noatime,_netdev      0       2
Likewise on the other nodes
[[email protected]:/etc/ceph]# vim /etc/fstab
[[email protected]:/etc/ceph]# vim /etc/fstab
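
As a hedged alternative, the admin key can be kept out of the mount command line and out of /etc/fstab by storing it in a root-only file and pointing the mount at it with secretfile= (the path /etc/ceph/admin.secret is an assumption; any root-readable path works):

ceph auth get-key client.admin > /etc/ceph/admin.secret     # the bare key only, no [client.admin] header
chmod 600 /etc/ceph/admin.secret
mount -t ceph 192.168.13.101:6789,192.168.13.102:6789,192.168.13.103:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret
# matching /etc/fstab entry:
# 192.168.13.101:6789,192.168.13.102:6789,192.168.13.103:6789:/ /mnt/cephfs ceph name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev 0 2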


1.7.2.3 Create the secret

//Get the admin key and base64-encode it
[[email protected]:/etc/ceph]# ceph auth get-key client.admin | base64
QVFES04yQmZ6WnhwQVJBQTBCUDZ1ck9QOWFTdjdNK2RCeW9CK0E9PQ==

[[email protected]:/data/yaml/ceph-volume]# vim ceph-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFES04yQmZ6WnhwQVJBQTBCUDZ1ck9QOWFTdjdNK2RCeW9CK0E9PQ==


[[email protected]:/data/yaml/ceph-volume]# kubectl apply -f ceph-secret.yaml 
secret/ceph-secret created
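
The same secret can also be created without hand-editing YAML; a hedged one-liner (kubectl base64-encodes the value itself, and the resulting data field is still named key, as the PV definitions below expect):

kubectl create secret generic ceph-secret \
    --from-literal=key="$(ceph auth get-key client.admin)"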

1.7.2.4 Create the PersistentVolumes

[[email protected]:/mnt/cephfs]# mkdir cephfs-pv0{01..20}
[[email protected]:/mnt/cephfs]# ls
cephfs-pv001  cephfs-pv003  cephfs-pv005  cephfs-pv007  cephfs-pv009  cephfs-pv011  cephfs-pv013  cephfs-pv015  cephfs-pv017  cephfs-pv019
cephfs-pv002  cephfs-pv004  cephfs-pv006  cephfs-pv008  cephfs-pv010  cephfs-pv012  cephfs-pv014  cephfs-pv016  cephfs-pv018  cephfs-pv020



[[email protected]:/data/yaml/ceph-volume]# vim pv-ceph.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-pv001
  labels:
    pv: cephfs-pv
spec:
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 30Gi
  volumeMode: Filesystem
  cephfs:
    monitors:
    - 192.168.13.101:6789
    - 192.168.13.102:6789
    - 192.168.13.103:6789
    path: /cephfs-pv001
    readOnly: false
    user: admin
    secretRef:
      name: ceph-secret
  persistentVolumeReclaimPolicy: Delete
---
...... (a total of 20 PV definitions are created this time; a generation sketch follows below)
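
A minimal shell sketch of how the remaining manifests could be generated. This is an assumption about the process, not the author's actual script; it simply reuses the field values shown above for cephfs-pv001..cephfs-pv020:

: > pv-ceph.yaml                      # start with an empty manifest file
for i in $(seq -w 1 20); do           # 01 .. 20 -> cephfs-pv001 .. cephfs-pv020
cat >> pv-ceph.yaml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-pv0${i}
  labels:
    pv: cephfs-pv
spec:
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 30Gi
  volumeMode: Filesystem
  cephfs:
    monitors:
    - 192.168.13.101:6789
    - 192.168.13.102:6789
    - 192.168.13.103:6789
    path: /cephfs-pv0${i}
    readOnly: false
    user: admin
    secretRef:
      name: ceph-secret
  persistentVolumeReclaimPolicy: Delete
---
EOF
done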



[[email protected]:/data/yaml/ceph-volume]# kubectl apply -f pv-ceph.yaml 
persistentvolume/cephfs-pv001 created
...... (20 PVs created in total)


[[email protected]:/data/yaml/ceph-volume]# kubectl get pv
NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
cephfs-pv001   30Gi       RWO,RWX        Delete           Available                                   56s
cephfs-pv002   30Gi       RWO,RWX        Delete           Available                                   56s
cephfs-pv003   30Gi       RWO,RWX        Delete           Available                                   56s
cephfs-pv004   30Gi       RWO,RWX        Delete           Available                                   56s
cephfs-pv005   30Gi       RWO,RWX        Delete           Available                                   56s
cephfs-pv006   30Gi       RWO,RWX        Delete           Available                                   56s
cephfs-pv007   30Gi       RWO,RWX        Delete           Available                                   56s
cephfs-pv008   30Gi       RWO,RWX        Delete           Available                                   56s
cephfs-pv009   30Gi       RWO,RWX        Delete           Available                                   56s
cephfs-pv010   30Gi       RWO,RWX        Delete           Available                                   56s
cephfs-pv011   30Gi       RWO,RWX        Delete           Available                                   56s
cephfs-pv012   30Gi       RWO,RWX        Delete           Available                                   56s
cephfs-pv013   30Gi       RWO,RWX        Delete           Available                                   56s
cephfs-pv014   30Gi       RWO,RWX        Delete           Available                                   56s
cephfs-pv015   30Gi       RWO,RWX        Delete           Available                                   56s
cephfs-pv016   30Gi       RWO,RWX        Delete           Available                                   56s
cephfs-pv017   30Gi       RWO,RWX        Delete           Available                                   56s
cephfs-pv018   30Gi       RWO,RWX        Delete           Available                                   56s
cephfs-pv019   30Gi       RWO,RWX        Delete           Available                                   56s
cephfs-pv020   30Gi       RWO,RWX        Delete           Available                                   56s
k8sdashboard   35Gi       RWX            Recycle          Available           slow                   4d5h

1.8 Failure recovery

1.8.1 Check the status

One OSD is down

[[email protected]:/etc/ceph]# ceph -s
  cluster:
    id:     61ab49ca-21d8-4b03-8237-8c05c5c5d177
    health: HEALTH_WARN
            1 osds down
            1 host (1 osds) down
 
  services:
    mon: 3 daemons, quorum k8smaster01,k8smaster02,k8smaster03 (age 17m)
    mgr: k8smaster03(active, since 44m), standbys: k8smaster02, k8smaster01
    mds: cephfs:1 {0=k8smaster02=up:active} 2 up:standby
    osd: 5 osds: 4 up (since 45m), 5 in (since 7s); 59 remapped pgs
 
  data:
    pools:   3 pools, 65 pgs
    objects: 5.26k objects, 5.0 GiB
    usage:   20 GiB used, 3.6 TiB / 3.6 TiB avail
    pgs:     4762/15771 objects misplaced (30.195%)
             59 active+clean+remapped
             6  active+clean
 
  io:
    client:   341 B/s wr, 0 op/s rd, 0 op/s wr
 


--------------------------------------------------->
[[email protected]:/etc/ceph]# ceph osd tree
ID   CLASS  WEIGHT   TYPE NAME             STATUS  REWEIGHT  PRI-AFF
 -1         5.82199  root default                                   
-11         2.18320      host k8smaster01                           
  4    hdd  2.18320          osd.4           down         0  1.00000
 -3         0.90970      host k8sworker01                           
  0    hdd  0.90970          osd.0             up   1.00000  1.00000
 -5         0.90970      host k8sworker02                           
  1    hdd  0.90970          osd.1             up   1.00000  1.00000
 -7         0.90970      host k8sworker03                           
  2    hdd  0.90970          osd.2             up   1.00000  1.00000
 -9         0.90970      host k8sworker04                           
  3    hdd  0.90970          osd.3             up   1.00000  1.00000


------------------------------------------------>
[[email protected]:/etc/ceph]# systemctl status ceph-osd@4.service
● ceph-osd@4.service - Ceph object storage daemon osd.4
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

Sep 23 19:06:36 k8smaster01.host.com systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:15] Unknown lvalue 'LockPersonality' in section 'Service'
Sep 23 19:06:36 k8smaster01.host.com systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:16] Unknown lvalue 'MemoryDenyWriteExecute' in section 'Service'
Sep 23 19:06:36 k8smaster01.host.com systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:19] Unknown lvalue 'ProtectControlGroups' in section 'Service'
Sep 23 19:06:36 k8smaster01.host.com systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:21] Unknown lvalue 'ProtectKernelModules' in section 'Service'
Sep 23 19:06:36 k8smaster01.host.com systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:23] Unknown lvalue 'ProtectKernelTunables' in section 'Service'
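
Before tearing the OSD down, a hedged first-line check can confirm whether the daemon merely needs a restart or the backing disk has actually failed (/dev/sda is only an assumed device name for osd.4 here):

systemctl restart ceph-osd@4                 # try to bring osd.4 back up
systemctl status ceph-osd@4
journalctl -u ceph-osd@4 --no-pager -n 50    # daemon log; repeated I/O errors point at a dead drive
dmesg | grep -iE 'sda|i/o error'             # kernel messages for the backing device
smartctl -H /dev/sda                         # overall SMART health, if smartmontools is installed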

1.8.2 Stop data rebalancing

[[email protected]:/etc/ceph]#  for i in noout nobackfill norecover noscrub nodeep-scrub;do ceph osd set $i;done
noout is set
nobackfill is set
norecover is set
noscrub is set
nodeep-scrub is set
[[email protected]:/etc/ceph]# ceph -s
  cluster:
    id:     61ab49ca-21d8-4b03-8237-8c05c5c5d177
    health: HEALTH_WARN
            noout,nobackfill,norecover,noscrub,nodeep-scrub flag(s) set
            1 osds down
            1 host (1 osds) down
 
  services:
    mon: 3 daemons, quorum k8smaster01,k8smaster02,k8smaster03 (age 24m)
    mgr: k8smaster03(active, since 52m), standbys: k8smaster02, k8smaster01
    mds: cephfs:1 {0=k8smaster02=up:active} 2 up:standby
    osd: 5 osds: 4 up (since 52m), 5 in (since 7m); 59 remapped pgs
         flags noout,nobackfill,norecover,noscrub,nodeep-scrub
 
  data:
    pools:   3 pools, 65 pgs
    objects: 5.30k objects, 5.1 GiB
    usage:   20 GiB used, 3.6 TiB / 3.6 TiB avail
    pgs:     4799/15897 objects misplaced (30.188%)
             59 active+clean+remapped
             6  active+clean
 
  io:
    client:   227 KiB/s wr, 0 op/s rd, 0 op/s wr

1.8.3 Locate the failed disk

[[email protected]:/etc/ceph]# ceph osd tree | grep -i down
  4    hdd  2.18320          osd.4           down   1.00000  1.00000

1.8.4 Unmount the failed OSD's data directory

[[email protected]:/etc/ceph]# umount /var/lib/ceph/osd/ceph-4

1.8.5 Remove the OSD from the CRUSH map

[[email protected]:/etc/ceph]# ceph osd crush remove osd.4
removed item id 4 name 'osd.4' from crush map

1.8.6 Delete the failed OSD's key

[[email protected]:/etc/ceph]# ceph auth del osd.4
updated

1.8.7 Delete the failed OSD

[[email protected]:/etc/ceph]# ceph osd rm 4
removed osd.4

------------------------------------------------>
[[email protected]:/etc/ceph]# ceph osd tree
ID   CLASS  WEIGHT   TYPE NAME             STATUS  REWEIGHT  PRI-AFF
 -1         3.63879  root default                                   
-11               0      host k8smaster01                           
 -3         0.90970      host k8sworker01                           
  0    hdd  0.90970          osd.0             up   1.00000  1.00000
 -5         0.90970      host k8sworker02                           
  1    hdd  0.90970          osd.1             up   1.00000  1.00000
 -7         0.90970      host k8sworker03                           
  2    hdd  0.90970          osd.2             up   1.00000  1.00000
 -9         0.90970      host k8sworker04                           
  3    hdd  0.90970          osd.3             up   1.00000  1.00000

1.8.8 Re-add the OSD

[[email protected]:/etc/ceph]# ceph-deploy osd create k8smaster01 --data /dev/sda

1.8.9 Clear the cluster flags

PS: wait until the new OSD has been added to the CRUSH map, then clear the cluster flags that were set earlier

for i in noout nobackfill norecover noscrub nodeep-scrub;do ceph osd unset $i;done

Check the status

[[email protected]:/etc/ceph]# ceph -s
  cluster:
    id:     61ab49ca-21d8-4b03-8237-8c05c5c5d177
    health: HEALTH_WARN
            Degraded data redundancy: 2766/16377 objects degraded (16.890%), 22 pgs degraded, 33 pgs undersized
 
  services:
    mon: 3 daemons, quorum k8smaster01,k8smaster02,k8smaster03 (age 9m)
    mgr: k8smaster03(active, since 9m), standbys: k8smaster01, k8smaster02
    mds: cephfs:1 {0=k8smaster02=up:active} 2 up:standby
    osd: 7 osds: 7 up (since 2m), 7 in (since 2m); 59 remapped pgs
 
  data:
    pools:   3 pools, 65 pgs
    objects: 5.46k objects, 5.7 GiB
    usage:   26 GiB used, 10 TiB / 10 TiB avail
    pgs:     2766/16377 objects degraded (16.890%)
             4219/16377 objects misplaced (25.762%)
             17 active+recovering+undersized+remapped
             17 active+remapped+backfill_wait
             10 active+recovery_wait+undersized+degraded+remapped
             5  active+recovering+undersized+degraded+remapped
             5  active+recovery_wait+degraded+remapped
             2  active+clean
             2  active+recovering
             2  active+recovery_wait+remapped
             2  active+recovery_wait+degraded
             1  active+recovering+remapped
             1  active+remapped+backfilling
             1  active+recovery_wait+undersized+remapped
 
  io:
    client:   266 KiB/s wr, 0 op/s rd, 1 op/s wr
    recovery: 5.5 MiB/s, 64 keys/s, 31 objects/s
 
  progress:
    Rebalancing after osd.5 marked in (4m)
      [............................] 
    Rebalancing after osd.6 marked in (2m)
      [............................] (remaining: 7h)


--------------------------------------------------------------------->
[[email protected]:/etc/ceph]# ceph -s
  cluster:
    id:     61ab49ca-21d8-4b03-8237-8c05c5c5d177
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum k8smaster01,k8smaster02,k8smaster03 (age 14m)
    mgr: k8smaster03(active, since 14m), standbys: k8smaster01, k8smaster02
    mds: cephfs:1 {0=k8smaster02=up:active} 2 up:standby
    osd: 7 osds: 7 up (since 7m), 7 in (since 7m)
 
  data:
    pools:   3 pools, 65 pgs
    objects: 5.48k objects, 5.8 GiB
    usage:   26 GiB used, 10 TiB / 10 TiB avail
    pgs:     64 active+clean
             1  active+clean+wait
 
  io:
    client:   208 KiB/s wr, 0 op/s rd, 1 op/s wr
    recovery: 3.9 MiB/s, 2 objects/s

Follow-up references (cluster):

01 Kubernetes binary deployment
02 Kubernetes auxiliary environment setup
03 K8S cluster network ACL rules
04 Ceph cluster deployment
05 Deploy zookeeper and kafka clusters
06 Deploy the logging system
07 Deploy Influxdb-telegraf
08 Deploy jenkins
09 Deploy k3s and Helm-Rancher
10 Deploy maven

 

Reposted from https://blog.csdn.net/weixin_43667733/article/details/117267185?ops_request_misc=%257B%2522request%255Fid%2522%253A%2522162834815016780264045472%2522%252C%2522scm%2522%253A%252220140713.130102334.pc%255Fall.%2522%257D&request_id=162834815016780264045472&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~all~first_rank_v2~rank_v29-17-117267185.first_rank_v2_pc_rank_v29&utm_term=k8s+ceph+%E7%94%9F%E4%BA%A7%E7%8E%AF%E5%A2%83&spm=1018.2226.3001.4187
