Ceph Block Storage

Ceph storage offers three access methods: block storage (RBD), file storage (CephFS), and object storage (RGW).

What is block storage:

Block storage presents a raw block device that a client can partition, format, and mount like a local disk. A Ceph block device is also called a RADOS Block Device (RBD).

 

Block storage cluster:

 

Hands-on steps:

 

List the storage pools:

[root@node1 ~]# ceph osd lspools

0 rbd,
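
If the cluster had no suitable pool, one could be created first. A minimal sketch (the pool name mypool and PG count 128 are illustrative, not from the original lab):

[root@node1 ~]# ceph osd pool create mypool 128

Per-pool capacity and usage can be checked with ceph df.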

 

 

Create an image and list images:

[root@node1 ~]# rbd create demo-image --image-feature layering --size 10G

[root@node1 ~]# rbd list 

demo-image
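
An image can also be created in an explicit pool using a pool/image spec. A hedged sketch (the image name demo-image2 is hypothetical):

[root@node1 ~]# rbd create rbd/demo-image2 --image-feature layering --size 5G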

 

Inspect the image details:

[root@node1 ~]# rbd info demo-image

rbd image 'demo-image':

size 10240 MB in 2560 objects

order 22 (4096 kB objects)

block_name_prefix: rbd_data.1035238e1f29

format: 2

features: layering

flags: 
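
As a sanity check on these numbers: order 22 means each object is 2^22 bytes = 4 MB, and 10240 MB / 4 MB = 2560 objects, matching the output above.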

 

Dynamically resize an image (the following commands operate on a second image named image, assumed to already exist in the rbd pool):

1) Shrink the image (warning: shrinking below the size of any filesystem already on it will destroy data):

[root@node1 ~]# rbd resize  --size 7G image --allow-shrink

Resizing image: 100% complete...done.

[root@node1 ~]# rbd info image

rbd image 'image':

size 7168 MB in 1792 objects

order 22 (4096 kB objects)

block_name_prefix: rbd_data.1030238e1f29

format: 2

features: layering

flags: 

 

 

2) Expand the image:

[root@node1 ~]# rbd resize  --size 15G image

Resizing image: 100% complete...done.

[root@node1 ~]# rbd info image

rbd image 'image':

size 15360 MB in 3840 objects

order 22 (4096 kB objects)

block_name_prefix: rbd_data.1030238e1f29

format: 2

features: layering

flags: 
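
Note that growing the image does not grow a filesystem that already lives on it. If the image carries XFS and is mounted (for example at /mnt, as in the client steps below), the filesystem can be grown online roughly like this:

[root@client ~]# xfs_growfs /mnt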

 

 

Map the block device to a local device file and format it:

[root@node1 ~]# rbd map demo-image

/dev/rbd0

 

[root@node1 ~]# mkfs.xfs  /dev/rbd0

 

 

I. Client access via KRBD

1) The client must install the ceph-common package, copy the cluster configuration file (otherwise it cannot find the cluster), and copy the admin keyring (otherwise it has no permission to connect):

 

[root@client ~]# yum -y install  ceph-common

 

[root@client ~]# scp 192.168.4.11:/etc/ceph/ceph.conf   /etc/ceph/

 

[root@client ~]# scp 192.168.4.11:/etc/ceph/ceph.client.admin.keyring   /etc/ceph/
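
With the config and keyring in place, connectivity can be verified from the client, e.g.:

[root@client ~]# ceph -s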

 

 

2) Map the image as a local disk:

[root@client ~]# rbd map image

/dev/rbd0

 

[root@client ~]# lsblk 

..........

rbd0          251:0    0   15G  0 disk 

 

[root@client ~]# rbd showmapped

id pool image snap device    

0  rbd  image -    /dev/rbd0 
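
The mapping is handled by the kernel's rbd module (hence KRBD); one can confirm it is loaded with:

[root@client ~]# lsmod | grep rbd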

 

3) Format and mount the device on the client:

[root@client ~]# mkfs.xfs  /dev/rbd0

[root@client ~]# mount /dev/rbd0  /mnt/

[root@client ~]# echo "test"  > /mnt/test.txt    // create a test file

 

II. Create image snapshots

1) List snapshots:

[root@node1 ~]# rbd snap ls image

 

2) Create a snapshot:

[root@node1 ~]# rbd snap create  image --snap image-snap1

[root@node1 ~]# rbd snap ls image

SNAPID NAME            SIZE

     4 image-snap1 15360 MB

 

Note: snapshots use copy-on-write (COW), so even a snapshot of a large amount of data is created quickly.

A snapshot is a copy-on-write copy: data blocks are only duplicated when the originals are about to be modified.

 

3) Delete the test file:

[root@client ~]# rm -rf /mnt/test.txt 

 

4) Roll back the snapshot (unmount first so the rollback does not run underneath a mounted filesystem):

[root@client ~]# ls   /mnt

[root@client ~]# umount /mnt

[root@client ~]# rbd snap rollback image --snap image-snap1

Rolling back to snapshot: 100% complete...done.

[root@client ~]# mount /dev/rbd0   /mnt/

[root@client ~]# ls /mnt/

test.txt

 

 

III. Create a clone from a snapshot

1) Protect the snapshot and clone it:

[root@client ~]# rbd snap protect image --snap image-snap1

 

[root@client ~]# rbd snap rm image  --snap image-snap1  // deletion fails because the snapshot is protected

rbd: snapshot 'image-snap1' is protected from removal.

2018-08-09 17:01:28.978977 7f1272bd0d80 -1 librbd::Operations: snapshot is protected
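
The clone command itself is missing from the transcript. Presumably the clone named image-clone, inspected below, was created from the protected snapshot roughly like this (a hedged reconstruction, not from the original lab):

[root@client ~]# rbd clone rbd/image@image-snap1 rbd/image-clone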

 

2) Inspect the relationship between the clone and its parent image:

[root@client ~]# rbd info image-clone

rbd image 'image-clone':

size 15360 MB in 3840 objects

order 22 (4096 kB objects)

block_name_prefix: rbd_data.103c3d1b58ba

format: 2

features: layering

flags: 

parent: rbd/image@image-snap1

overlap: 15360 MB
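
To list the clones that depend on a snapshot, rbd children can be used:

[root@client ~]# rbd children rbd/image@image-snap1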

  1. Much of a clone's data is still served from the parent snapshot chain.
  2. For the clone to work independently, all the data in the parent snapshot must be copied into the clone (a "flatten"), which is time-consuming, as shown below:

[root@client ~]# rbd flatten image-clone

Image flatten: 100% complete...done.

 

[root@client ~]# rbd info image-clone

rbd image 'image-clone':

size 15360 MB in 3840 objects

order 22 (4096 kB objects)

block_name_prefix: rbd_data.103c3d1b58ba

format: 2

features: layering

flags: 

 

Note: the parent snapshot information is gone!
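
Since the flattened clone no longer depends on the snapshot, the snapshot can now be unprotected and removed; a minimal sketch:

[root@client ~]# rbd snap unprotect image --snap image-snap1

[root@client ~]# rbd snap rm image --snap image-snap1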

 

IV. Unmap the disk on the client:

1) Unmount the filesystem:

[root@client ~]# umount /mnt/

 

2) Unmap the RBD disk on the client:
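
The command is omitted in the transcript; on the client it would presumably be:

[root@client ~]# rbd unmap /dev/rbd0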

 

3) Unmap the demo image on node1:

[root@node1 ~]# rbd unmap /dev/rbd/rbd/demo-image
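
To confirm nothing is still mapped, rbd showmapped can be run again:

[root@node1 ~]# rbd showmapped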