Hostname | Role | IP | NAT |
---|---|---|---|
ceph01 | mon+osd | 192.168.1.189 | 192.168.100.189 |
cephadmin | ceph-deploy+client | 192.168.1.190 | 192.168.100.190 |
ceph02 | mon+osd | 192.168.1.191 | 192.168.100.191 |
ceph03 | mon+osd | 192.168.1.192 | 192.168.100.192 |
ceph04 | mon+osd | 192.168.1.193 | 192.168.100.193 |
The default CentOS yum repositories are not always domestic (China) mirrors, so online installation and updates via yum can be slow. In that case, switch the yum repository to a domestic mirror.
yum install wget -y
Aliyun mirror:
cd /etc/yum.repos.d && \
mv CentOS-Base.repo CentOS-Base.repo.bak && \
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum makecache
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org && \
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm && \
yum --disablerepo=\* --enablerepo=elrepo-kernel repolist && \
yum --disablerepo=\* --enablerepo=elrepo-kernel install kernel-ml.x86_64 -y
yum remove kernel-tools-libs.x86_64 kernel-tools.x86_64 -y && \
yum --disablerepo=\* --enablerepo=elrepo-kernel install kernel-ml-tools.x86_64 -y
awk -F \' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg && \
grub2-editenv list && \
grub2-set-default 0
reboot
uname -r
yum update -y && yum install epel-release -y
hostnamectl set-hostname ceph01 # 192.168.1.189
hostnamectl set-hostname cephadmin # 192.168.1.190
hostnamectl set-hostname ceph02 # 192.168.1.191
hostnamectl set-hostname ceph03 # 192.168.1.192
hostnamectl set-hostname ceph04 # 192.168.1.193
sudo vim /etc/hosts
# contents as follows
192.168.1.189 ceph01
192.168.1.190 cephadmin
192.168.1.191 ceph02
192.168.1.192 ceph03
192.168.1.193 ceph04
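Optionally, verify that every entry resolves and answers before continuing (a minimal sketch, run from any node, assuming the hosts file above is already in place):
```shell
# ping each cluster host once to confirm name resolution and reachability
for host in cephadmin ceph01 ceph02 ceph03 ceph04; do
  ping -c 1 -W 2 "$host" >/dev/null && echo "$host OK" || echo "$host UNREACHABLE"
done
```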
Create the user (run on all nodes)
useradd -d /home/admin_ceph -m admin_ceph
echo "Xuexi123" | passwd admin_ceph --stdin
echo "admin_ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/admin_ceph
sudo chmod 0440 /etc/sudoers.d/admin_ceph
Set up passwordless SSH (run only on the cephadmin node)
su - admin_ceph
ssh-keygen
ssh-copy-id admin_ceph@ceph01
ssh-copy-id admin_ceph@ceph02
ssh-copy-id admin_ceph@ceph03
ssh-copy-id admin_ceph@ceph04
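Before running ceph-deploy it is worth confirming that the key really works without a password prompt (a small sketch run on cephadmin):
```shell
# each line should print the remote hostname without asking for a password
for host in ceph01 ceph02 ceph03 ceph04; do
  ssh -o BatchMode=yes admin_ceph@"$host" hostname
done
```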
sudo cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
sudo yum install ntp -y
sudo systemctl enable ntpd && \
sudo systemctl start ntpd && \
sudo ntpstat
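To make sure the clocks are actually in sync across nodes (monitors are sensitive to clock skew), a quick comparison sketch from cephadmin, assuming the passwordless SSH set up above:
```shell
# print the epoch time of every node; the values should differ by at most a few seconds
for host in ceph01 ceph02 ceph03 ceph04; do
  echo "$host: $(ssh admin_ceph@"$host" date +%s)"
done
echo "local: $(date +%s)"
```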
Configure the Tsinghua Ceph mirror repository
cat > /etc/yum.repos.d/ceph.repo<<'EOF'
[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=https://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1
EOF
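The same ceph.repo needs to exist on every node, since the cluster will later be installed with --no-adjust-repos. One way to distribute it from cephadmin (a sketch, assuming the admin_ceph user and passwordless sudo configured above):
```shell
# copy the repo file to the other nodes and rebuild their yum cache
for host in ceph01 ceph02 ceph03 ceph04; do
  scp /etc/yum.repos.d/ceph.repo admin_ceph@"$host":/tmp/ceph.repo
  ssh admin_ceph@"$host" "sudo mv /tmp/ceph.repo /etc/yum.repos.d/ceph.repo && sudo yum makecache"
done
```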
Install ceph-deploy (only on cephadmin)
sudo yum install ceph-deploy -y
Install epel-release on all nodes
sudo yum install epel-release -y
# switch to the admin_ceph user
su - admin_ceph
mkdir ceph-cluster
cd ceph-cluster
Create the cluster
ceph-deploy new {initial-monitor-node(s)}
For example:
ceph-deploy new cephadmin ceph01 ceph02 ceph03 ceph04
The output looks like this:
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/admin/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /bin/ceph-deploy new cephadmin ceph01 ceph02 ceph03
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] func : <function new at 0x7f8a22d452a8>
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f8a22d60ef0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] ssh_copykey : True
[ceph_deploy.cli][INFO ] mon : ['cephadmin', 'ceph01', 'ceph02', 'ceph03']
[ceph_deploy.cli][INFO ] public_network : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster_network : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[cephadmin][DEBUG ] connection detected need for sudo
[cephadmin][DEBUG ] connected to host: cephadmin
[cephadmin][DEBUG ] detect platform information from remote host
[cephadmin][DEBUG ] detect machine type
[cephadmin][DEBUG ] find the location of an executable
[cephadmin][INFO ] Running command: sudo /usr/sbin/ip link show
[cephadmin][INFO ] Running command: sudo /usr/sbin/ip addr show
[cephadmin][DEBUG ] IP addresses found: [u'192.168.124.1', u'192.168.3.189', u'192.168.122.189']
[ceph_deploy.new][DEBUG ] Resolving host cephadmin
[ceph_deploy.new][DEBUG ] Monitor cephadmin at 192.168.3.189
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[ceph01][DEBUG ] connected to host: cephadmin
[ceph01][INFO ] Running command: ssh -CT -o BatchMode=yes ceph01
[ceph01][DEBUG ] connection detected need for sudo
[ceph01][DEBUG ] connected to host: ceph01
[ceph01][DEBUG ] detect platform information from remote host
[ceph01][DEBUG ] detect machine type
[ceph01][DEBUG ] find the location of an executable
[ceph01][INFO ] Running command: sudo /usr/sbin/ip link show
[ceph01][INFO ] Running command: sudo /usr/sbin/ip addr show
[ceph01][DEBUG ] IP addresses found: [u'192.168.122.190', u'192.168.124.1', u'192.168.3.190']
[ceph_deploy.new][DEBUG ] Resolving host ceph01
[ceph_deploy.new][DEBUG ] Monitor ceph01 at 192.168.3.190
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[ceph02][DEBUG ] connected to host: cephadmin
[ceph02][INFO ] Running command: ssh -CT -o BatchMode=yes ceph02
[ceph02][DEBUG ] connection detected need for sudo
[ceph02][DEBUG ] connected to host: ceph02
[ceph02][DEBUG ] detect platform information from remote host
[ceph02][DEBUG ] detect machine type
[ceph02][DEBUG ] find the location of an executable
[ceph02][INFO ] Running command: sudo /usr/sbin/ip link show
[ceph02][INFO ] Running command: sudo /usr/sbin/ip addr show
[ceph02][DEBUG ] IP addresses found: [u'192.168.122.191', u'192.168.3.191', u'192.168.124.1']
[ceph_deploy.new][DEBUG ] Resolving host ceph02
[ceph_deploy.new][DEBUG ] Monitor ceph02 at 192.168.3.191
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[ceph03][DEBUG ] connected to host: cephadmin
[ceph03][INFO ] Running command: ssh -CT -o BatchMode=yes ceph03
[ceph03][DEBUG ] connection detected need for sudo
[ceph03][DEBUG ] connected to host: ceph03
[ceph03][DEBUG ] detect platform information from remote host
[ceph03][DEBUG ] detect machine type
[ceph03][DEBUG ] find the location of an executable
[ceph03][INFO ] Running command: sudo /usr/sbin/ip link show
[ceph03][INFO ] Running command: sudo /usr/sbin/ip addr show
[ceph03][DEBUG ] IP addresses found: [u'192.168.3.192', u'192.168.124.1', u'192.168.122.192']
[ceph_deploy.new][DEBUG ] Resolving host ceph03
[ceph_deploy.new][DEBUG ] Monitor ceph03 at 192.168.3.192
[ceph_deploy.new][DEBUG ] Monitor initial members are ['cephadmin', 'ceph01', 'ceph02', 'ceph03']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.3.189', '192.168.3.190', '192.168.3.191', '192.168.3.192']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
Edit ceph.conf
vim /home/admin_ceph/ceph-cluster/ceph.conf
# add the following
public network = 192.168.1.0/24
cluster network = 192.168.100.0/24
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 128
osd pool default pgp num = 128
osd pool default crush rule = 0
osd crush chooseleaf type = 1
max open files = 131072
ms bind ipv6 = false
[mon]
mon clock drift allowed = 10
mon clock drift warn backoff = 30
mon osd full ratio = .95
mon osd nearfull ratio = .85
mon osd down out interval = 600
mon osd report timeout = 300
mon allow pool delete = true
[osd]
osd recovery max active = 3
osd max backfills = 5
osd max scrubs = 2
osd mkfs type = xfs
osd mkfs options xfs = -f -i size=1024
osd mount options xfs = rw,noatime,inode64,logbsize=256k,delaylog
filestore max sync interval = 5
osd op threads = 2
ceph-deploy install --no-adjust-repos cephadmin ceph01 ceph02 ceph03 ceph04
--no-adjust-repos uses the repositories already configured locally and does not write the official upstream repo files.
Deploy the initial monitors and gather the keys
ceph-deploy mon create-initial
After this step, the following keyrings appear in the current directory:
ls -al /home/admin_ceph/ceph-cluster
drwxrwxr-x 2 admin admin 4096 10月 27 10:46 .
drwx------ 7 admin admin 177 10月 27 10:36 ..
-rw------- 1 admin admin 113 10月 27 10:46 ceph.bootstrap-mds.keyring
-rw------- 1 admin admin 113 10月 27 10:46 ceph.bootstrap-mgr.keyring
-rw------- 1 admin admin 113 10月 27 10:46 ceph.bootstrap-osd.keyring
-rw------- 1 admin admin 113 10月 27 10:46 ceph.bootstrap-rgw.keyring
-rw------- 1 admin admin 151 10月 27 10:46 ceph.client.admin.keyring
-rw-rw-r-- 1 admin admin 1107 10月 27 10:36 ceph.conf
-rw-rw-r-- 1 admin admin 237600 10月 27 10:46 ceph-deploy-ceph.log
-rw------- 1 admin admin 73 10月 27 10:20 ceph.mon.keyring
Copy the configuration file and key to every cluster node
The configuration file is the generated ceph.conf, and the key is ceph.client.admin.keyring, the default key a Ceph client uses when connecting to the cluster. Here we copy both to all nodes:
ceph-deploy admin cephadmin ceph01 ceph02 ceph03 ceph04
# The Luminous (L) release of `Ceph` introduced the `manager daemon`; the following command deploys a `Manager` daemon on each node
[admin@node1 my-cluster]$ ceph-deploy mgr create cephadmin ceph01 ceph02 ceph03 ceph04
Run the following commands on cephadmin:
# Usage: ceph-deploy osd create --data {device} {ceph-node}
ceph-deploy osd create --data /dev/sdb cephadmin && \
ceph-deploy osd create --data /dev/sdc cephadmin && \
ceph-deploy osd create --data /dev/sdd cephadmin && \
ceph-deploy osd create --data /dev/sdb ceph01 && \
ceph-deploy osd create --data /dev/sdc ceph01 && \
ceph-deploy osd create --data /dev/sde ceph01 && \
ceph-deploy osd create --data /dev/sda ceph01 && \
ceph-deploy osd create --data /dev/sdd ceph02 && \
ceph-deploy osd create --data /dev/sdb ceph02 && \
ceph-deploy osd create --data /dev/sde ceph02 && \
ceph-deploy osd create --data /dev/sdc ceph02 && \
ceph-deploy osd create --data /dev/sdd ceph03 && \
ceph-deploy osd create --data /dev/sde ceph03 && \
ceph-deploy osd create --data /dev/sdc ceph03 && \
ceph-deploy osd create --data /dev/sda ceph03 && \
ceph-deploy osd create --data /dev/sdd ceph04 && \
ceph-deploy osd create --data /dev/sdb ceph04 && \
ceph-deploy osd create --data /dev/sdc ceph04
sudo ceph health
sudo ceph -s
By default ceph.client.admin.keyring has mode 600 and is owned by root:root, so running ceph commands directly as the admin_ceph user on a cluster node fails with a message that /etc/ceph/ceph.client.admin.keyring cannot be found, because of insufficient permissions.
sudo ceph does not have this problem, but to run ceph directly it is convenient to set the permission to 644. Run the following as the admin_ceph user on the cluster nodes:
sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
ceph -s
sudo ceph osd tree
Install ceph-mgr-dashboard on the mgr nodes (all machines)
yum install ceph-mgr-dashboard -y
Option 1: command line
ceph mgr module enable dashboard
Option 2: configuration file
vim /home/admin_ceph/ceph-cluster/ceph.conf
# contents as follows
[mon]
mgr initial modules = dashboard
# push the config to all nodes
ceph-deploy --overwrite-conf config push cephadmin ceph01 ceph02 ceph03 ceph04
# restart the mgr daemon on each node (run from cephadmin)
for host in cephadmin ceph01 ceph02 ceph03 ceph04; do ssh "$host" sudo systemctl restart ceph-mgr@"$host"; done
By default, all HTTP connections to the dashboard are secured with SSL/TLS.
Method 1
# To get the dashboard up and running quickly, generate and install a self-signed certificate with the built-in command (run with root privileges):
[root@node1 my-cluster]# ceph dashboard create-self-signed-cert
# Create a user with the administrator role:
[root@node1 my-cluster]# ceph dashboard set-login-credentials admin Shanghai711
# Check the ceph-mgr services:
[root@node1 my-cluster]# ceph mgr services
{
"dashboard": "https://cephadmin:8443/"
}
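A quick reachability test of the URL reported above (sketch; -k skips verification because the certificate is self-signed):
```shell
# expect an HTTP status code such as 200 from the dashboard front page
curl -k -s -o /dev/null -w "%{http_code}\n" https://cephadmin:8443/
```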
Method 2
ceph config-key set mgr/dashboard/server_port 11030 # set the dashboard port to 11030
ceph config-key set mgr/dashboard/server_addr 192.168.1.190 # set the bind address
ceph config set mgr mgr/dashboard/ssl false # internal network only, so disable SSL
# restart the dashboard module
ceph mgr module disable dashboard
ceph mgr module enable dashboard
ceph dashboard set-login-credentials admin Shanghai711 # set the username and password
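After re-enabling the module, confirm that the dashboard is now served on the configured address and port (a sketch):
```shell
# the dashboard URL should now point at 192.168.1.190:11030 over plain HTTP
ceph mgr services
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.1.190:11030/
```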
If you deployed Argonaut or Bobtail with ceph-deploy, Ceph can run as a service (sysvinit can also be used).
Start all daemons
To start your Ceph cluster, run ceph with the start command. Syntax:
sudo service ceph [options] [start|restart] [daemonType|daemonID]
For example:
sudo service ceph -a start
Once the cluster is running you can monitor it with the ceph tool; typical monitoring includes checking OSD status, monitor status, placement group status and metadata server status.
Interactive mode
To run ceph in interactive mode, run ceph with no arguments:
ceph
ceph> health
ceph> status
ceph> quorum_status
ceph> mon_status
Check cluster health
After starting the cluster, and before reading or writing data, check its health with the command below:
ceph health
When the cluster first comes up you may see health warnings such as HEALTH_WARN XXX num placement groups stale; wait a moment and check again. Once the cluster is ready, ceph health returns a message like HEALTH_OK, and you can start using the cluster.
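If you want to block until the cluster settles, a small polling sketch:
```shell
# poll cluster health every 10 seconds until it reports HEALTH_OK
until ceph health | grep -q HEALTH_OK; do
  echo "waiting: $(ceph health)"
  sleep 10
done
echo "cluster is healthy"
```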
Watch the cluster
To watch events happening inside the cluster, open a new terminal and run:
ceph -w
Ceph prints each event as it happens; for example, a small cluster with one monitor and two OSDs would print monitor, OSD and placement group state changes.
# check cluster status
ceph status
ceph -s
ceph> status
# check OSD status
ceph osd stat
ceph osd dump
ceph osd tree
# check monitor status
ceph mon stat
ceph mon dump
ceph quorum_status
# check MDS status
ceph mds stat
ceph mds dump
When a pool is created, Ceph creates the specified number of placement groups. While it creates one or more placement groups it shows creating; once created, the OSDs in each placement group's Acting Set peer with one another; when peering completes, the placement group state becomes active+clean, meaning Ceph clients can write data to the placement group.
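For example, after creating a pool you can watch the placement group summary until every PG reaches active+clean (a sketch):
```shell
# ceph pg stat prints a one-line summary such as "128 pgs: 128 active+clean; ..."
watch -n 5 ceph pg stat
```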
# list pools
ceph osd lspools
# defaults for newly created pools (set in ceph.conf)
osd pool default pg num = 100
osd pool default pgp num = 100
# create a pool
ceph osd pool create cssc711 4096
#################
Choosing pg_num is mandatory, because it cannot be calculated automatically. Commonly used values:
Fewer than 5 OSDs: set pg_num to 128
5 to 10 OSDs: set pg_num to 512
10 to 50 OSDs: set pg_num to 4096
More than 50 OSDs: you need to understand the trade-offs and work out pg_num yourself
The pgcalc tool can help with the calculation
#######################
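The pgcalc rule of thumb targets roughly (number of OSDs × 100) / replica count, rounded up to a power of two. A small sketch of that calculation with the 18 OSDs and 3 replicas used in this cluster:
```shell
# rough pg_num suggestion: (osds * 100) / replicas, rounded up to the next power of 2
osds=18; replicas=3
raw=$(( osds * 100 / replicas ))
pg=1; while [ "$pg" -lt "$raw" ]; do pg=$(( pg * 2 )); done
echo "suggested pg_num: $pg"   # 18 OSDs / 3 replicas -> 1024
```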
# delete a pool
ceph osd pool delete test test --yes-i-really-really-mean-it
# rename a pool
ceph osd pool rename {current-pool-name} {new-pool-name}
# view pool usage statistics
rados df # show usage statistics for the pools
# take a snapshot of a pool
ceph osd pool mksnap {pool-name} {snap-name}
# remove a pool snapshot
ceph osd pool rmsnap {pool-name} {snap-name}
# adjust pool option values
ceph osd pool set {pool-name} {key} {value} # http://docs.ceph.org.cn/rados/operations/pools/
Expanding the Ceph cluster (new server)
Before:
Hostname | Role | IP | NAT |
---|---|---|---|
CephAdmin | ceph-deploy+client | 192.168.3.189 | 192.168.122.189 |
ceph01 | mon+osd | 192.168.3.190 | 192.168.122.190 |
ceph02 | mon+osd | 192.168.3.191 | 192.168.122.191 |
ceph03 | mon+osd | 192.168.3.192 | 192.168.122.192 |
After adding the new node:
Hostname | Role | IP | NAT |
---|---|---|---|
CephAdmin | ceph-deploy+client | 192.168.3.189 | 192.168.122.189 |
ceph01 | mon+osd | 192.168.3.190 | 192.168.122.190 |
ceph02 | mon+osd | 192.168.3.191 | 192.168.122.191 |
ceph03 | mon+osd | 192.168.3.192 | 192.168.122.192 |
ceph04 | mon+osd | 192.168.3.193 | 192.168.122.193 |
In production you usually do not want data backfill to start the moment a new node joins the cluster, because it hurts cluster performance. To avoid that, set the following flags first:
ceph osd set noin # set the noin flag
ceph osd set nobackfill # set the nobackfill (no data backfill) flag
During off-peak hours, unset these flags and the cluster starts rebalancing:
ceph osd unset noin # unset the noin flag
ceph osd unset nobackfill # unset the nobackfill flag
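The flags currently set on the cluster can be checked at any time (sketch):
```shell
# the osdmap flags line lists noin/nobackfill while they are set
ceph osd dump | grep flags
ceph health detail
```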
Edit the hosts file on all nodes and add the new node: 192.168.3.193 ceph04
vim /etc/hosts
# contents as follows
192.168.3.189 cephadmin
192.168.3.190 ceph01
192.168.3.191 ceph02
192.168.3.192 ceph03
192.168.3.193 ceph04
Set the hostname of ceph04 and its hosts file
hostnamectl set-hostname ceph04
Create the user
useradd -d /home/admin -m admin
echo "123456" | passwd admin --stdin
echo "admin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/admin
sudo chmod 0440 /etc/sudoers.d/admin
Set up passwordless SSH (run only on the cephadmin node)
su - admin
ssh-copy-id admin@ceph04
Set the timezone and time sync (run on the new machine)
sudo cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
sudo yum install ntp -y
sudo systemctl enable ntpd
sudo systemctl start ntpd
sudo ntpstat
Configure the Tsinghua Ceph mirror repository
cat > /etc/yum.repos.d/ceph.repo<<'EOF'
[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=https://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1
EOF
Install ceph and ceph-radosgw on the ceph04 node
yum install ceph ceph-radosgw -y
Edit the ceph.conf file on the **cephadmin** node
vim /home/admin/my-cluster/ceph.conf
# modify as follows
mon_initial_members = cephadmin, ceph01, ceph02, ceph03, ceph04 # ceph04 added
Add the monitor to the existing cluster
ceph-deploy --overwrite-conf mon add ceph04 --address 192.168.3.193
Add an RGW
ceph-deploy --overwrite-conf rgw create ceph04
Add an MGR
ceph-deploy --overwrite-conf mgr create ceph04
Check ceph.conf
cat /home/admin/my-cluster/ceph.conf
# contents as follows
[global]
fsid = 7218408f-9951-49d7-9acc-857f63369a84
mon_initial_members = cephadmin, ceph01, ceph02, ceph03, ceph04
mon_host = 192.168.3.189,192.168.3.190,192.168.3.191,192.168.3.192,192.168.3.193
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 192.168.3.0/24
cluster network = 192.168.122.0/24
On the admin node, copy the configuration file and admin key to the Ceph nodes
ceph-deploy --overwrite-conf admin ceph01 ceph02 ceph03 ceph04
Create an OSD: add /dev/sdb of the new node ceph04 to the cluster
ceph-deploy osd create --data /dev/sdb ceph04
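Once the noin/nobackfill flags are unset, the new OSD takes its weight in the CRUSH map and data starts backfilling; progress can be followed with (sketch):
```shell
# the new OSD should appear under host ceph04; ceph -s shows backfill/recovery progress
ceph osd tree
ceph osd df tree
ceph -s
```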
After the Ceph cluster is deployed, how do you store data in it? Ceph provides three interfaces: block storage (RBD), file storage (CephFS) and object storage (RGW).
Here we create an RBD block in the cluster for users. To use Ceph you first need a pool. A pool is Ceph's abstraction for data storage; it consists of a number of PGs (placement groups) and PGPs, and the PG count, normally a power of 2, is specified at creation time. First create a pool:
1. Create a pool named cssc711 with 1024 PGs/PGPs
```shell
ceph osd pool create cssc711 1024 1024
```
2. Inspect the pool: list the cluster's pools with lspools, and check pg_num, pgp_num and the replica count (size)
# list pools
ceph osd lspools
# check the pg and pgp counts
ceph osd pool get cssc711 pg_num
ceph osd pool get cssc711 pgp_num
# check size; the default is 3 replicas
ceph osd pool get cssc711 size
3. Now that the pool is created, an RBD block can be created inside it with the rbd command, for example a 30G block device:
rbd create -p cssc711 --image vm_images.img --size 30G
This creates an RBD image named vm_images.img with a size of 30G. The list of RBD images and their details can be viewed with ls and info:
# list RBD images
rbd -p cssc711 ls
# show RBD details: the image is striped into 4M objects whose names are prefixed with rbd_data.10b96b8b4567
rbd -p cssc711 info vm_images.img
4. The RBD block is now created. If Ceph is already integrated with a virtualization platform you can create a VM and write data to the disk. rbd also provides a map tool that maps an RBD block onto the local host, which greatly simplifies use. The exclusive-lock, object-map, fast-diff and deep-flatten features are not supported by rbd map, so they must be disabled first, otherwise rbd map will report an error.
# disable the default features
rbd -p cssc711 --image vm_images.img feature disable deep-flatten && \
rbd -p cssc711 --image vm_images.img feature disable fast-diff && \
rbd -p cssc711 --image vm_images.img feature disable object-map && \
rbd -p cssc711 --image vm_images.img feature disable exclusive-lock
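rbd accepts several feature names in a single call, so the four commands above can also be collapsed into one (a sketch using the same pool and image):
```shell
# disable all four incompatible features in one command
rbd feature disable cssc711/vm_images.img deep-flatten fast-diff object-map exclusive-lock
```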
# verify the feature settings
rbd -p cssc711 info vm_images.img
# map the RBD block locally; after the map, the block device appears as /dev/rbd0
rbd map -p cssc711 --image vm_images.img
ls -l /dev/rbd0
5. The RBD block device is now mapped to /dev/rbd0 locally, so it can be formatted and used.
# the RBD block device mappings on this machine can also be listed with rbd device list
[root@node-1 ~]# ls -l /dev/rbd0
# the device can be used like a local disk, so it can be formatted
[root@node-1 ~]# mkfs.xfs /dev/rbd0
[root@node-1 ~]# blkid /dev/rbd0
# mount the device
mkdir /mnt/vm_images_rbd
mount /dev/rbd0 /mnt/vm_images_rbd
df -h /mnt/vm_images_rbd
cd /mnt/vm_images_rbd
echo "This is VM images" > README.md