【Cloud Native | Kubernetes Series】-- Ceph Authentication and RBD

1. Ceph Authentication Mechanism

Ceph uses the cephx protocol to authenticate clients.
cephx provides authenticated and authorized access to the data stored in Ceph: every request to access Ceph, and every request sent to the MONs, must pass cephx authentication. cephx can be disabled on the MON nodes, but once it is disabled any access is allowed, so data security can no longer be guaranteed.
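
Whether cephx is enforced is controlled by a few settings in ceph.conf. A minimal sketch of that section is shown below; the values are the defaults, and switching them to none is what disables authentication:

[global]
auth_cluster_required = cephx    # authentication between Ceph daemons
auth_service_required = cephx    # daemons require clients to authenticate
auth_client_required = cephx     # clients require daemons to authenticate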

1.1 Authorization Flow

Every MON node can authenticate clients and distribute keys, so with multiple MONs there is no single point of failure or authentication bottleneck.
A MON returns a data structure used for authentication that contains a session key for obtaining Ceph services. The session key is encrypted with the client's secret key, which is configured on the client beforehand and stored in /etc/ceph/ceph.client.admin.keyring.
The client then uses the session key to request the services it needs from the MON, and the MON issues a ticket that the client presents to the daemons that actually handle data, such as the OSDs, to prove its identity. MONs and OSDs share the same secret, so an OSD trusts any ticket issued by a MON (while the ticket is still valid).

cephx can only be used for authentication between Ceph components.
cephx handles only authentication and authorization; it does not encrypt the data in transit.

Authentication process:

  1. The client, carrying its own key, sends an authentication request to a MON (mon_host in /etc/ceph/ceph.conf).
  2. The MON validates the client's key, generates a session key, encrypts it with the client's key, and returns the session key to the client.
  3. The client decrypts the session key with its key and uses the session key to request a ticket from the MON.
  4. The MON verifies the session key sent by the client, generates a ticket, encrypts it, and returns it to the client.
  5. The client decrypts the ticket and contacts the MDS; the MDS returns the metadata, giving the client the data path (CephFS only; this step is skipped for plain OSD access).
  6. The client presents the ticket to the OSD.
  7. The OSD verifies the ticket with the MON and, once it passes, accepts the client's access.
  8. The client splits the file into 4 MB chunks; the object ID is obtained from ino plus ono.
  9. The object ID is hashed and bitwise-ANDed with the PG mask to work out which PG the object belongs to, and the CRUSH algorithm then maps that PG to OSDs (an example command for inspecting this mapping follows the list).
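
The PG and OSD mapping described in steps 8-9 can be inspected from the command line with ceph osd map; the object name below is only a placeholder, not an object from this cluster:

# Show which PG and which OSDs a given object maps to
$ ceph osd map mypool some-object-name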

1.3 Ceph Users

A user is either an individual (a Ceph administrator) or a system actor (MON/OSD/MDS).
By creating users you control which users or actors can access the Ceph storage cluster, which pools they can access, and which data within those pools.
Ceph supports several user types, but all manageable users are of type client.
The reason for distinguishing user types is that system components such as MON/OSD/MDS also use the cephx protocol, yet they are not clients.
The user type and user name are separated by a dot, in the format TYPE.ID, for example client.admin.

$ cat /etc/ceph/ceph.client.admin.keyring 
[client.admin]
	key = AQAAxiFjNoK5FRAAy8DUqFsOoCd2H0m9Q1SuIQ==
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"

1.4 Viewing Authorization

Use ceph auth get to view a user's capabilities:

cephadmin@ceph-mgr01:~/ceph-cluster$ ceph auth get osd.10
[osd.10]
	key = AQBtqCJjlkiTLBAAJKyQGtQqageBL7naKmNTAg==
	caps mgr = "allow profile osd"
	caps mon = "allow profile osd"
	caps osd = "allow *"
exported keyring for osd.10
cephadmin@ceph-mgr01:~/ceph-cluster$ ceph auth get client.admin
[client.admin]
	key = AQAAxiFjNoK5FRAAy8DUqFsOoCd2H0m9Q1SuIQ==
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"
exported keyring for client.admin

1.5 Ceph Authorization

Ceph uses capabilities (caps for short) to describe the scope or level of authorization a user has against MONs, OSDs, or MDSs.

Syntax: daemon-type 'allow caps' [...]

Common capability types used in [...] are:

Permission             Meaning
r                      Read permission
w                      Write permission
x                      Permission to call class methods (execute)
*                      Read, write, and execute permission, plus permission to run administrative commands
class-read             Permission to call class read methods; a subset of x
class-write            Permission to call class write methods; a subset of x
profile osd            Lets a user connect to other OSDs or MONs as an OSD, so OSDs can handle replication heartbeat traffic and status reporting
profile mds            Lets a user connect to other MDSs or monitors as an MDS
profile bootstrap-osd  Lets a user bootstrap an OSD; used by deployment tools when deploying and initializing a Ceph cluster
profile bootstrap-mds  Lets a user bootstrap a metadata server; grants the deployment tool permission to add keys when bootstrapping an MDS

MON capabilities
Include r/w/x and allow profile <name> (covering the cluster maps maintained by the MON).
For example:

mon 'allow rwx'
mon 'allow profile osd'

OSD capabilities
Include r/w/x, class-read, class-write, and profile osd. In addition, OSD capabilities can be restricted to a specific pool and namespace:

osd 'allow capability' [pool=poolname] [namespace=namespace-name]

MDS capabilities
Only allow is needed; either allow or an empty value means access is permitted:

mds 'allow'

1.6 Ceph User Management

The user management features let a Ceph cluster administrator create, update, and delete users directly in the cluster. When managing users you may need to distribute keys to clients so they can be added to a keyring file such as /etc/ceph/ceph.client.admin.keyring. A keyring file can hold the credentials of one or more users, and any node that has the file gains access to Ceph with the permissions of any account it contains.

1.6.1 Listing Users

ceph auth list lists all users.

The output can be written to a file for safekeeping:

cephadmin@ceph-mgr01:~/ceph-cluster$ ceph auth list -o 123.key
installed auth entries:

1.6.2 Adding Users

Adding a user creates the user name (TYPE.ID), a secret key, and all of the capabilities included in the command used to create the user. The user authenticates to the Ceph storage cluster with its key, and its capabilities grant it read, write, or execute access on MONs, OSDs, or MDSs.

  1. ceph auth add

The standard usage for adding a user:

## Create the user
$ ceph auth add client.ehelp mon 'allow r' osd 'allow rwx pool=mypool'
added key for client.ehelp
## Verify the capabilities
$ ceph auth get client.ehelp
[client.ehelp]
	key = AQD65CdjHdv8LBAAvJFbiQYBx9k4O+aiWK/jvw==
	caps mon = "allow r"
	caps osd = "allow rwx pool=mypool"
exported keyring for client.ehelp

## Export the user's keyring
$ ceph auth get client.ehelp -o ceph.client.ehelp.keyring
exported keyring for client.ehelp
$ cat ceph.client.ehelp.keyring
[client.ehelp]
	key = AQD65CdjHdv8LBAAvJFbiQYBx9k4O+aiWK/jvw==
	caps mon = "allow r"
	caps osd = "allow rwx pool=mypool"
  2. ceph auth get-or-create

This command creates a user and returns the user name and key in keyring-file format. If the user already exists, it simply returns the existing user name and key in keyring-file format.

## Create the user
$ ceph auth get-or-create client.ctbs mon 'allow r' osd 'allow rwx pool=mypool'
[client.ctbs]
	key = AQBG8idj8ulgKBAAbbMeiO9lV+QAFybQumaFeA==
	
## Verify the user
$ ceph auth get client.ctbs
[client.ctbs]
	key = AQBG8idj8ulgKBAAbbMeiO9lV+QAFybQumaFeA==
	caps mon = "allow r"
	caps osd = "allow rwx pool=mypool"
exported keyring for client.ctbs
## Export
$ ceph auth get-or-create client.ctbs mon 'allow r' osd 'allow rwx pool=mypool' -o ceph.client.ctbs.keyring

$ cat ceph.client.ctbs.keyring
[client.ctbs]
	key = AQBG8idj8ulgKBAAbbMeiO9lV+QAFybQumaFeA==
  3. ceph auth get-or-create-key

This command returns only the user's key, which is useful for clients that need nothing but the key, such as libvirt:

$ ceph auth get-or-create-key client.vmware mon 'allow r' osd 'allow rwx pool=mypool'
AQCn9ydjHLobERAA5BKeawsRqjVa3fXNflbghw==
  4. ceph auth print-key

Print only a given user's key:

$ ceph auth print-key client.ctbs
AQBG8idj8ulgKBAAbbMeiO9lV+QAFybQumaFeA==

1.6.3 Modifying User Capabilities

  1. View the user's current capabilities
$ ceph auth get client.ctbs
[client.ctbs]
	key = AQBG8idj8ulgKBAAbbMeiO9lV+QAFybQumaFeA==
	caps mon = "allow r"
	caps osd = "allow rwx pool=mypool"
exported keyring for client.ctbs
  2. Modify the capabilities

Granting capabilities to the user again overwrites the previous grant and takes effect immediately:

$ ceph auth caps client.ctbs mon 'allow rw' osd "allow rwx pool=mypool"
updated caps for client.ctbs
$ ceph auth get client.ctbs
[client.ctbs]
	key = AQBG8idj8ulgKBAAbbMeiO9lV+QAFybQumaFeA==
	caps mon = "allow rw"
	caps osd = "allow rwx pool=mypool"
exported keyring for client.ctbs
  3. Delete a user

Use ceph auth del <username> to delete a user:

$ ceph auth get client.vmware
[client.vmware]
	key = AQCn9ydjHLobERAA5BKeawsRqjVa3fXNflbghw==
	caps mon = "allow r"
	caps osd = "allow rwx pool=mypool"
exported keyring for client.vmware
$ ceph auth del client.vmware
updated
$ ceph auth get client.vmware
Error ENOENT: failed to find client.vmware in keyring

When a client accesses the Ceph cluster, Ceph looks for the following four keyring files:

# 1. keyring for a single user
/etc/ceph/<$cluster name>.<user $type>.<user $id>.keyring
# 2. keyring containing multiple users
/etc/ceph/cluster.keyring
# 3. keyring for multiple users when no cluster name is defined
/etc/ceph/keyring
# 4. compiled binary keyring
/etc/ceph/keyring.bin
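
When a node only holds one of the non-default keyring files above, the user and keyring can also be specified explicitly on the command line. For example, reusing the client.ehelp user created earlier (the keyring path is assumed):

# Run a command as client.ehelp with an explicit keyring
$ ceph -n client.ehelp --keyring /etc/ceph/ceph.client.ehelp.keyring -s
# --id is the same as -n without the "client." prefix
$ rados --id ehelp --keyring /etc/ceph/ceph.client.ehelp.keyring -p mypool ls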

1.6.4 Backing Up Keyrings

  1. Back up
# Create the backup keyring file
$ ceph-authtool --create-keyring ceph.client.user.keyring
creating ceph.client.user.keyring
# Export the user's entry into the keyring file
$ ceph auth get client.ctbs -o ceph.client.user.keyring
exported keyring for client.ctbs
$ cat ceph.client.user.keyring
[client.ctbs]
	key = AQBG8idj8ulgKBAAbbMeiO9lV+QAFybQumaFeA==
	caps mon = "allow rw"
	caps osd = "allow rwx pool=mypool"
  2. Restore

The difference between restoring and recreating a user: the capabilities can be set to the same values, but keys are randomly generated, so recreating the user with create produces a new key, while importing the backup keeps the old one.

# Accidentally delete the user
$ ceph auth del client.ctbs
updated
$ ceph auth get client.ctbs 
Error ENOENT: failed to find client.ctbs in keyring
# Restore it
$ ceph auth import -i ceph.client.user.keyring 
imported keyring
$ ceph auth get client.ctbs 
[client.ctbs]
	key = AQBG8idj8ulgKBAAbbMeiO9lV+QAFybQumaFeA==
	caps mon = "allow rw"
	caps osd = "allow rwx pool=mypool"
exported keyring for client.ctbs
  3. Back up the keyrings of the whole cluster
# Create the keyring file
$ ceph-authtool --create-keyring cluster.keyring
creating cluster.keyring
# Import one user's keyring
$ ceph-authtool cluster.keyring --import-keyring ceph.client.admin.keyring
importing contents of ceph.client.admin.keyring into cluster.keyring
$ cat cluster.keyring
[client.admin]
	key = AQAAxiFjNoK5FRAAy8DUqFsOoCd2H0m9Q1SuIQ==
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"
# Append another user's keyring
$ ceph-authtool cluster.keyring --import-keyring ceph.client.ctbs.keyring
importing contents of ceph.client.ctbs.keyring into cluster.keyring
$ cat cluster.keyring
[client.admin]
	key = AQAAxiFjNoK5FRAAy8DUqFsOoCd2H0m9Q1SuIQ==
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"
[client.ctbs]
	key = AQBG8idj8ulgKBAAbbMeiO9lV+QAFybQumaFeA==

## Save all users to a file
$ ceph auth ls -o cluster.keyring 
installed auth entries:

2. Using Ceph RBD

2.1 RBD

Ceph can provide RadosGW, RBD, and CephFS at the same time.
RBD (RADOS Block Device) is a commonly used storage type. An RBD block device can be mapped and mounted like a disk, and supports snapshots, multiple replicas, cloning, and consistency. Its data is striped across multiple OSDs on different hosts in the Ceph cluster.
Striping is a technique that automatically balances I/O load across multiple physical disks: a contiguous piece of data is split into smaller chunks that are stored on different disks. Multiple processes can then access different parts of the data at the same time without contending for a single disk, and sequential access gets the maximum possible I/O parallelism, which yields better performance.
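
rbd exposes these striping parameters when an image is created. The following is only a sketch with made-up values (the image name is hypothetical and the pool is the existing mypool):

# Stripe each object set across 16 stripe units of 64 KiB
$ rbd create striped-img --size 10G --pool mypool --stripe-unit 65536 --stripe-count 16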

2.2 Creating a Storage Pool

Use ceph osd pool create to create a pool.

# Create the pool
$ ceph osd pool create rbd-data 32 32
pool 'rbd-data' created
$ ceph osd pool ls
device_health_metrics
mypool
.rgw.root
default.rgw.log
default.rgw.control
default.rgw.meta
cephfs-metadata
cephfs-data
rbd-data
# Enable the rbd application on the pool
$ ceph osd pool application enable rbd-data rbd
enabled application 'rbd' on pool 'rbd-data'
# Initialize the pool for RBD
$ rbd pool init -p rbd-data

2.3 Creating Images

By default the CentOS kernel only supports the layering feature (a workaround sketch follows the listing below).

# Create the images
$ rbd create data-img1 --size 2G --pool rbd-data
$ rbd create data-img2 --size 1G --pool rbd-data
# List the images in the pool
$ rbd ls --pool rbd-data -l
NAME       SIZE   PARENT  FMT  PROT  LOCK
data-img1  2 GiB            2            
data-img2  1 GiB            2            
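
If mapping later fails on such a kernel because of the extra default features, two common workarounds (shown here with hypothetical image names, not the ones used in this article) are to create the image with only layering enabled, or to disable the unsupported features on an existing image:

# Create a new image with only the layering feature
$ rbd create data-img3 --size 1G --pool rbd-data --image-feature layering
# Disable the features an older kernel cannot handle on an existing image
$ rbd feature disable rbd-data/data-img4 object-map fast-diff deep-flatten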

2.4 Viewing Image Details

$ rbd --image data-img1 --pool rbd-data info
rbd image 'data-img1':
	size 2 GiB in 512 objects
	order 22 (4 MiB objects)		## object size: 2^22 / 1024 / 1024 = 4 MiB
	snapshot_count: 0			# number of snapshots
	id: 8dc3f0937a0d			# image ID
	block_name_prefix: rbd_data.8dc3f0937a0d  # object name prefix + image ID
	format: 2	# image format version
	features: layering, exclusive-lock, object-map, fast-diff, deep-flatten # enabled features
	op_features: 
	flags: 
	create_timestamp: Mon Sep 19 14:07:14 2022
	access_timestamp: Mon Sep 19 14:07:14 2022
	modify_timestamp: Mon Sep 19 14:07:14 2022
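
The block_name_prefix above is what the backing RADOS objects are named after, so once data has been written to the image the objects can be listed directly (a sketch; the prefix is taken from the output above):

# List the RADOS objects backing data-img1 (they only appear after data is written)
$ rados -p rbd-data ls | grep rbd_data.8dc3f0937a0d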

2.5 Mounting RBD on a Client

2.5.1 Mounting with the admin Account

  1. Install the ceph-common environment

Configure the apt source list and the Ceph repository:

$ cat > /etc/apt/sources.list <<EOF
deb https://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse

deb https://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse

deb https://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse

# deb https://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
# deb-src https://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse

deb https://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src https://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
EOF
$ wget -q -O- 'https://mirrors.aliyun.com/ceph/keys/release.asc' | sudo apt-key add -
$ sudo apt-add-repository 'deb https://mirrors.aliyun.com/ceph/debian-octopus/ buster main'
$ sudo apt update
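
With the repository in place, the client tooling itself still has to be installed; presumably:

$ sudo apt install -y ceph-common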
  2. Copy the admin keyring to the client
$ scp ceph.conf ceph.client.admin.keyring [email protected]:/etc/ceph/
[email protected]'s password: 
ceph.conf         100%  329   597.6KB/s   00:00    
ceph.client.admin.keyring         100%  151   173.0KB/s   00:00 
  3. Map the image on the client
# Map the image
$ rbd -p rbd-data map data-img1
/dev/rbd1
$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop1    7:1    0 70.3M  1 loop /snap/lxd/21029
loop2    7:2    0   48M  1 loop /snap/snapd/16778
loop3    7:3    0 55.6M  1 loop /snap/core18/2560
loop4    7:4    0   47M  1 loop /snap/snapd/16292
loop5    7:5    0   62M  1 loop /snap/core20/1611
loop6    7:6    0 67.8M  1 loop /snap/lxd/22753
loop7    7:7    0 63.2M  1 loop /snap/core20/1623
loop8    7:8    0 55.6M  1 loop /snap/core18/2566
sda      8:0    0  120G  0 disk 
├─sda1   8:1    0    1M  0 part 
└─sda2   8:2    0  120G  0 part /
sr0     11:0    1 1024M  0 rom  
rbd0   252:0    0    2G  0 disk /data
rbd1   252:16   0    2G  0 disk 
# Format it
root@skywalking-ui:~# mkfs.xfs /dev/rbd1
meta-data=/dev/rbd1              isize=512    agcount=8, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
# Mount it
root@skywalking-ui:~# mount /dev/rbd1 /u01
root@skywalking-ui:~# df -Th
Filesystem     Type      Size  Used Avail Use% Mounted on
udev           devtmpfs  936M     0  936M   0% /dev
tmpfs          tmpfs     196M  1.4M  195M   1% /run
/dev/sda2      xfs       120G  8.6G  112G   8% /
tmpfs          tmpfs     980M     0  980M   0% /dev/shm
tmpfs          tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs          tmpfs     980M     0  980M   0% /sys/fs/cgroup
/dev/loop1     squashfs   71M   71M     0 100% /snap/lxd/21029
tmpfs          tmpfs     196M     0  196M   0% /run/user/0
/dev/loop3     squashfs   56M   56M     0 100% /snap/core18/2560
/dev/loop4     squashfs   47M   47M     0 100% /snap/snapd/16292
/dev/loop5     squashfs   62M   62M     0 100% /snap/core20/1611
/dev/loop6     squashfs   68M   68M     0 100% /snap/lxd/22753
/dev/loop2     squashfs   48M   48M     0 100% /snap/snapd/16778
/dev/loop7     squashfs   64M   64M     0 100% /snap/core20/1623
/dev/rbd0      xfs       2.0G   47M  2.0G   3% /data
/dev/loop8     squashfs   56M   56M     0 100% /snap/core18/2566
/dev/rbd1      xfs       2.0G   47M  2.0G   3% /u01	

2.5.2 Mounting with a Regular User

  1. Create a regular account
# Create the account
$ ceph auth add client.skywalking mon 'allow r' osd 'allow rwx pool=rbd-data'
added key for client.skywalking
# Export the account's keyring
$ ceph auth get client.skywalking -o ceph.client.skywalking.keyring
exported keyring for client.skywalking
  2. Copy the keyring and ceph.conf to the remote host
$ scp ceph.client.skywalking.keyring ceph.conf [email protected]:/etc/ceph/
[email protected]'s password: 
ceph.client.skywalking.keyring     100%  128   189.0KB/s   00:00    
ceph.conf     100%  329   248.3KB/s   00:00 
  3. Mount
# Move the admin account's keyring out of the way
$ mv /etc/ceph/ceph.client.admin.keyring /root/
$ ls /etc/ceph/
ceph.client.skywalking.keyring  ceph.conf  rbdmap

# Inspect the cluster as the skywalking user
$ ceph --user skywalking -s
  cluster:
    id:     86c42734-37fc-4091-b543-be6ff23e5134
    health: HEALTH_WARN
            1 pool(s) do not have an application enabled
 
  services:
    mon: 3 daemons, quorum ceph-mon01,ceph-mon02,ceph-mon03 (age 7h)
    mgr: ceph-mgr01(active, since 4d)
    mds: 1/1 daemons up
    osd: 16 osds: 16 up (since 7h), 16 in (since 4d)
    rgw: 1 daemon active (1 hosts, 1 zones)
 
  data:
    volumes: 1/1 healthy
    pools:   9 pools, 265 pgs
    objects: 248 objects, 29 MiB
    usage:   578 MiB used, 15 GiB / 16 GiB avail
    pgs:     265 active+clean
# Map the RBD image
# rbd --user skywalking -p rbd-data map data-img2
/dev/rbd0
# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0    7:0    0   62M  1 loop /snap/core20/1611
loop1    7:1    0 70.3M  1 loop /snap/lxd/21029
loop2    7:2    0   48M  1 loop /snap/snapd/16778
loop3    7:3    0   47M  1 loop /snap/snapd/16292
loop4    7:4    0 67.8M  1 loop /snap/lxd/22753
loop5    7:5    0 55.6M  1 loop /snap/core18/2566
loop6    7:6    0 55.6M  1 loop /snap/core18/2560
loop7    7:7    0 63.2M  1 loop /snap/core20/1623
sda      8:0    0  120G  0 disk 
├─sda1   8:1    0    1M  0 part 
└─sda2   8:2    0  120G  0 part /
sr0     11:0    1 1024M  0 rom  
rbd0   252:0    0    1G  0 disk 
# mount /dev/rbd0 /data
# df -Th
Filesystem     Type      Size  Used Avail Use% Mounted on
udev           devtmpfs  932M     0  932M   0% /dev
tmpfs          tmpfs     196M  1.2M  195M   1% /run
/dev/sda2      xfs       120G  8.6G  112G   8% /
tmpfs          tmpfs     977M     0  977M   0% /dev/shm
tmpfs          tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs          tmpfs     977M     0  977M   0% /sys/fs/cgroup
/dev/loop1     squashfs   71M   71M     0 100% /snap/lxd/21029
/dev/loop2     squashfs   48M   48M     0 100% /snap/snapd/16778
/dev/loop3     squashfs   47M   47M     0 100% /snap/snapd/16292
/dev/loop0     squashfs   62M   62M     0 100% /snap/core20/1611
/dev/loop4     squashfs   68M   68M     0 100% /snap/lxd/22753
/dev/loop5     squashfs   56M   56M     0 100% /snap/core18/2566
/dev/loop6     squashfs   56M   56M     0 100% /snap/core18/2560
/dev/loop7     squashfs   64M   64M     0 100% /snap/core20/1623
tmpfs          tmpfs     196M     0  196M   0% /run/user/0
/dev/rbd0      xfs      1014M   40M  975M   4% /data

2.5.3 Resizing an Image

$ rbd ls --pool rbd-data -l
NAME       SIZE   PARENT  FMT  PROT  LOCK
data-img1  2 GiB            2        excl
data-img2  1 GiB            2        excl
$ rbd resize --pool rbd-data --image data-img2 --size 3G
Resizing image: 100% complete...done.
$  rbd ls --pool rbd-data -l
NAME       SIZE   PARENT  FMT  PROT  LOCK
data-img1  2 GiB            2        excl
data-img2  3 GiB            2        excl
# Back on the client
# df -Th|grep /data
/dev/rbd0      xfs       2.0G   47M  2.0G   3% /data
# lsblk /dev/rbd0
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
rbd0 252:0    0   3G  0 disk /data
# fdisk -l /dev/rbd0
Disk /dev/rbd0: 3 GiB, 3221225472 bytes, 6291456 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
# Grow the filesystem: use xfs_growfs for xfs, resize2fs for ext4
# xfs_growfs /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=8, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=16     swidth=16 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=16 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 524288 to 786432
# df -Th|grep /data
/dev/rbd0      xfs       3.0G   55M  3.0G   2% /data
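
For comparison, had the image been formatted with ext4, the equivalent online grow would presumably be:

# resize2fs grows a mounted ext4 filesystem to fill the already-resized device
# resize2fs /dev/rbd0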

2.5.4 Mounting at Boot

echo "rbd --user skywalking -p rbd-data map data-img2" >> /etc/rc.local
echo "mount /dev/rbd0 /data" >> /etc/rc.local
chmod a+x /etc/rc.local

Check the mapping status:

# rbd showmapped
id  pool      namespace  image      snap  device   
0   rbd-data             data-img2  -     /dev/rbd0
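
Instead of rc.local, the ceph-common package also ships an rbdmap service that maps the images listed in /etc/ceph/rbdmap at boot. A sketch of that approach, with options assumed rather than taken from this cluster:

# /etc/ceph/rbdmap: one image per line, with the user and keyring to map it with
rbd-data/data-img2 id=skywalking,keyring=/etc/ceph/ceph.client.skywalking.keyring
# /etc/fstab: mount the udev-created device; _netdev defers it until the network is up
/dev/rbd/rbd-data/data-img2 /data xfs defaults,noatime,_netdev 0 0
# enable the service
# systemctl enable rbdmap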

2.5.5 Unmounting and Unmapping

# umount /data
# rbd --user skywalking -p rbd-data unmap data-img2
# rbd showmapped

2.5.6 Deleting an RBD Image

$ rbd rm --pool rbd-data --image data-img2
Removing image: 100% complete...done.

2.6 Image Snapshots

Subcommand               Meaning
rbd snap create (add)    Create a snapshot
snap limit set           Set the maximum number of snapshots for an image
snap limit clear         Clear the snapshot limit on an image
snap list (ls)           List all snapshots
snap protect             Protect a snapshot from being deleted
snap purge               Delete all unprotected snapshots
snap remove (rm)         Delete a single snapshot
snap rename              Rename a snapshot
snap rollback (revert)   Roll back the image to a snapshot
snap unprotect           Allow a snapshot to be deleted (remove protection)
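
Of the subcommands above, protect and unprotect deserve a quick illustration; the snapshot name is the one created in the next subsection, so treat this as a sketch:

# Protect a snapshot so it cannot be removed (also a prerequisite for cloning it)
$ rbd snap protect rbd-data/data-img2@data-img2-20220919
# Remove the protection again
$ rbd snap unprotect rbd-data/data-img2@data-img2-20220919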

2.6.1 Creating a Snapshot

# rbd snap create --pool rbd-data --image data-img2 --snap data-img2-20220919
Creating snap: 100% complete...done.

2.6.2 Listing Snapshots

# rbd snap ls --pool rbd-data --image data-img2
SNAPID  NAME                SIZE   PROTECTED  TIMESTAMP               
     4  data-img2-20220919  2 GiB             Mon Sep 19 16:40:16 2022

2.6.3 Restoring from a Snapshot

# Simulate an accidental deletion
root@skywalking-ui:~# tail -n2 /data/passwd 
statd:x:114:65534::/var/lib/nfs:/usr/sbin/nologin
ceph:x:64045:64045:Ceph storage service:/var/lib/ceph:/usr/sbin/nologin
root@skywalking-ui:~# rm -f /data/passwd
# Start the restore
# Unmount /data
root@skywalking-ui:~# umount /data 
# Unmap rbd0
root@skywalking-ui:~# rbd unmap /dev/rbd0
root@skywalking-ui:~# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0    7:0    0 55.6M  1 loop /snap/core18/2560
loop1    7:1    0 63.2M  1 loop /snap/core20/1623
loop2    7:2    0   62M  1 loop /snap/core20/1611
loop3    7:3    0 55.6M  1 loop /snap/core18/2566
loop4    7:4    0 70.3M  1 loop /snap/lxd/21029
loop5    7:5    0   47M  1 loop /snap/snapd/16292
loop6    7:6    0 67.8M  1 loop /snap/lxd/22753
loop7    7:7    0   48M  1 loop /snap/snapd/16778
sda      8:0    0  120G  0 disk 
├─sda1   8:1    0    1M  0 part 
└─sda2   8:2    0  120G  0 part /
sr0     11:0    1 1024M  0 rom  
## Roll back the snapshot on a Ceph node
# rbd snap rollback --pool rbd-data --image data-img2 --snap data-img2-20220919
Rolling back to snapshot: 100% complete...done.
## Restore complete; remap and remount
# rbd --user skywalking -p rbd-data map data-img2
/dev/rbd0
# mount /dev/rbd0 /data
# tail -n 2 /data/passwd 
statd:x:114:65534::/var/lib/nfs:/usr/sbin/nologin
ceph:x:64045:64045:Ceph storage service:/var/lib/ceph:/usr/sbin/nologin
