Ceph Block Storage Deployment

7.3.2 Deploying and Using Ceph Block Storage

1. Installing the Client Operating System

(1) Basic Virtual Machine Setup

In VMware, create a virtual machine running CentOS-7-x86_64-DVD-1908 with a 20 GB disk, and name the virtual machine client, as shown in Figure 7-6.



                  Figure 7-6 Virtual machine configuration

(2) Virtual Machine Network Setup

Set the hostname of the virtual machine to client. Configure the network in NAT mode with IP address 192.168.100.100, netmask 255.255.255.0, default gateway 192.168.100.2, and DNS server 192.168.100.2, so that the virtual machine can access the Internet.
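
A minimal sketch of this configuration, assuming the NIC is named ens33 (adjust to the actual interface name on your system):

[root@client ~]# hostnamectl set-hostname client
[root@client ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
DEVICE=ens33
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.100.100
NETMASK=255.255.255.0
GATEWAY=192.168.100.2
DNS1=192.168.100.2
[root@client ~]# systemctl restart network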

2. Configuring the Ceph Client

(1) Configuring the Hosts File

Edit the /etc/hosts file on the ceph-1 node, add an entry for the client node, and save the file.

[root@ceph-1 ~]# vi /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.100.100 client

192.168.100.101 ceph-1

192.168.100.102 ceph-2

192.168.100.103 ceph-3

(2) Configuring Passwordless Login

On the ceph-1 node, copy the public key to the client node to enable passwordless SSH login.

[root@ceph-1 ~]# ssh-copy-id root@client

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"

The authenticity of host 'client (192.168.100.100)' can't be established.

ECDSA key fingerprint is SHA256:XpUmb4kHGXWKkaTj44vITJDwBBApSk6Yo8ZunIEW010.

ECDSA key fingerprint is MD5:85:d3:f8:41:59:c4:a4:b4:c3:e9:71:2c:3b:45:0b:29.

Are you sure you want to continue connecting (yes/no)? yes

/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

root@client's password:


Number of key(s) added: 1


Now try logging into the machine, with:   "ssh 'root@client'"

and check to make sure that only the key(s) you wanted were added.
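
Passwordless login can be verified by running a remote command from ceph-1; it should complete without prompting for a password:

[root@ceph-1 ~]# ssh root@client hostname
client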

(3) Creating the YUM Repository File

On the client node, move the existing repository configuration files aside, then upload the new yum repository file and verify it.

[root@client ~]# mkdir /opt/bak

[root@client ~]# cd /etc/yum.repos.d

[root@client yum.repos.d]# mv * /opt/bak

Copy CentOS7-Base-163.repo into /etc/yum.repos.d via SFTP:

[root@client yum.repos.d]# ls

CentOS7-Base-163.repo

[root@client yum.repos.d]# yum clean all

[root@client yum.repos.d]# yum makecache


(4) Configuring the Ceph YUM Repository

On the client node, add the Ceph repository configuration file and verify it.

[root@client yum.repos.d]# vi ceph.repo

[Ceph]

name=Ceph packages for $basearch

baseurl=http://mirrors.163.com/ceph/rpm-nautilus/el7/$basearch

enabled=1

gpgcheck=1

type=rpm-md

gpgkey=https://download.ceph.com/keys/release.asc

priority=1


[Ceph-noarch]

name=Ceph noarch packages

baseurl=http://mirrors.163.com/ceph/rpm-nautilus/el7/noarch

enabled=1

gpgcheck=1

type=rpm-md

gpgkey=https://download.ceph.com/keys/release.asc

priority=1


[ceph-source]

name=Ceph source packages

baseurl=http://mirrors.163.com/ceph/rpm-nautilus/el7/SRPMS

enabled=1

gpgcheck=1

type=rpm-md

gpgkey=https://download.ceph.com/keys/release.asc

priority=1


[root@client ~]# yum clean all

[root@client ~]# yum makecache
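
Note that the priority=1 entries in ceph.repo take effect only if the yum priorities plugin is installed; if it is missing, it can be added first:

[root@client ~]# yum install -y yum-plugin-priorities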

(5) Installing the Ceph Packages

On the ceph-1 node, use the ceph-deploy tool to install the Ceph packages on the client node, specifying nautilus as the release to install.

[root@ceph-1 ~]# cd /opt/osd

[root@ceph-1 osd]# ceph-deploy install --release=nautilus client

[client][DEBUG ] Dependency Updated:

[client][DEBUG ]   cryptsetup-libs.x86_64 0:1.6.7-1.el7

[client][DEBUG ]

[client][DEBUG ] Complete!

[client][INFO  ] Running command: ceph --version

[client][DEBUG ] ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43)

(6) Pushing the Configuration File to the Client

On the ceph-1 node, push the Ceph configuration file to the client node.

[root@ceph-1 osd]# ceph-deploy config push client

(7) Creating a Ceph User

On the ceph-1 node, create a Ceph user named client.rbd with read access to the monitors and read/write access to the rbd pool.

[root@ceph-1 osd]# ceph auth get-or-create client.rbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd'

[client.rbd]

        key = AQDVodBdQMfmHxAAWIrWq8LH3RPbYyt83VBi1A==

(8) Distributing the User Keyring

On the ceph-1 node, install the key for the client.rbd user on the client node.

[root@ceph-1 osd]# ceph auth get-or-create client.rbd | ssh root@client tee /etc/ceph/ceph.client.rbd.keyring

[client.rbd]

        key = AQDVodBdQMfmHxAAWIrWq8LH3RPbYyt83VBi1A==

(9) Creating the keyring File

Create the keyring file on the client node.

[root@client ~]# cat /etc/ceph/ceph.client.rbd.keyring >> /etc/ceph/keyring

(10) Checking the Cluster Status

Check the status of the Ceph cluster from the client node by supplying the user name and key.

[root@client ~]# ceph -s --name client.rbd

 cluster:

   id:    68ecba50-862d-482e-afe2-f95961ec3323

   health: HEALTH_OK


 services:

   mon: 3 daemons, quorum ceph-1,ceph-2,ceph-3 (age 21m)

   mgr: ceph-1(active, since 21m)

   osd: 3 osds: 3 up (since 21m), 3 in (since 23h)


 data:

   pools:   0 pools, 0 pgs

   objects: 0 objects, 0 B

   usage:   3.0 GiB used, 294 GiB / 297 GiB avail

   pgs:

3. Creating and Using a Ceph Block Device

The sequence for creating a block device is: on the client node, use the rbd create command to create a block device image, then map the image to a block device with rbd map, and finally format and mount the resulting /dev/rbd0. After that it can be used like any ordinary block device.
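
In outline, the whole round trip looks like this (a condensed sketch that formats the entire device; the walkthrough below partitions it first, and also disables some image features that the CentOS 7 kernel does not support):

[root@client ~]# rbd create rbd0 --size 10240 --name client.rbd
[root@client ~]# rbd map --image rbd0 --name client.rbd
[root@client ~]# mkfs -t xfs /dev/rbd0
[root@client ~]# mount /dev/rbd0 /mnt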

(1) Viewing the Ceph Pools

View the Ceph pools on the ceph-1 node.

[root@ceph-1 osd]# ceph osd lspools

The empty output shows that there are no pools yet.

(2) Creating the rbd Pool

Create the rbd pool on the ceph-1 node.

[root@ceph-1 osd]# ceph osd pool create rbd 128

pool 'rbd' created
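
The 128 passed to ceph osd pool create is the pool's placement group (PG) count. A common rule of thumb is (number of OSDs × 100) / replica count, rounded up to the next power of two; with 3 OSDs and the default replica count of 3, 3 × 100 / 3 = 100, which rounds up to 128.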

(3) Enabling Block Storage on the Pool

On the ceph-1 node, enable the rbd (block storage) application on the rbd pool, and verify.

[root@ceph-1 osd]# ceph osd pool application enable rbd rbd

enabled application 'rbd' on pool 'rbd'

Command usage:

osd pool application enable <poolname> <app> {--yes-i-really-mean-it} :  enable use of an application [cephfs,rbd,rgw] on pool

The rbd pool is now visible:

[root@ceph-1 osd]# ceph osd lspools

1 rbd

(4) Creating a Block Device

On the client node, create a 10 GB Ceph block device image named rbd0 (the --size option is given in megabytes, so 10240 MB = 10 GB).

[root@client ~]# rbd create rbd0 --size 10240 --name client.rbd

(5) Checking the Block Device

List the rbd image just created; there are three ways to list rbd images:

[root@client ~]# rbd ls --name client.rbd

rbd0

[root@client ~]# rbd ls -p rbd --name client.rbd

rbd0

[root@client ~]# rbd list --name client.rbd

rbd0

Check the detailed information of the rbd image:

[root@client ~]# rbd info --image rbd0 --name client.rbd

rbd image 'rbd0':

        size 10 GiB in 2560 objects

        order 22 (4 MiB objects)

        snapshot_count: 0

        id: 375afa8aa381

        block_name_prefix: rbd_data.375afa8aa381

        format: 2

        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten

        op_features:

        flags:

        create_timestamp: Sat Nov 16 20:38:45 2019

        access_timestamp: Sat Nov 16 20:38:45 2019

        modify_timestamp: Sat Nov 16 20:38:45 2019

(6) Configuring Image Features

On the ceph-1 node, disable some of the rbd0 image's features. The CentOS 7 kernel's rbd client does not support object-map, fast-diff, or deep-flatten, and mapping the image would otherwise fail with an "RBD image feature set mismatch" error.

[root@ceph-1 osd]# rbd feature disable rbd0 object-map fast-diff deep-flatten
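
To confirm the change, the remaining features can be listed (with the defaults used above, layering and exclusive-lock should be left):

[root@ceph-1 osd]# rbd info rbd0 | grep -w features
        features: layering, exclusive-lock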

(7) Mapping the Image

On the client node, use the rbd map command to map the rbd image to a device node under /dev.

[root@client ~]# rbd map --image rbd0 --name client.rbd

/dev/rbd0

(8) Checking the Mapped Device

Use the following command to check the mapped block device.

[root@client ~]# rbd showmapped --name client.rbd

id pool namespace image snap device

0  rbd            rbd0  -    /dev/rbd0

The output confirms that the mapping is in place.

(9) Setting Up the Block Device

Partition the rbd block device with fdisk, format the resulting partitions, and finally mount them at dedicated directories.

[root@client ~]# fdisk /dev/rbd0

Welcome to fdisk (util-linux 2.23.2).


Changes will remain in memory only, until you decide to write them.

Be careful before using the write command.


Device does not contain a recognized partition table

Building a new DOS disklabel with disk identifier 0x6b1adeb6.


Command (m for help): n

Partition type:

  p   primary (0 primary, 0 extended, 4 free)

  e   extended

Select (default p): p

Partition number (1-4, default 1): 1

First sector (8192-20971519, default 8192):

Using default value 8192

Last sector, +sectors or +size{K,M,G} (8192-20971519, default 20971519): +5G

Partition 1 of type Linux and of size 5 GiB is set


Command (m for help): n

Partition type:

  p   primary (1 primary, 0 extended, 3 free)

  e   extended

Select (default p): p

Partition number (2-4, default 2): 2

First sector (10493952-20971519, default 10493952):

Using default value 10493952

Last sector, +sectors or +size{K,M,G} (10493952-20971519, default 20971519):

Using default value 20971519

Partition 2 of type Linux and of size 5 GiB is set


Command (m for help): w

The partition table has been altered!


Calling ioctl() to re-read partition table.

Syncing disks.

[root@client ~]# fdisk -l /dev/rbd0


Disk /dev/rbd0: 10.7 GB, 10737418240 bytes, 20971520 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes

Disk label type: dos

Disk identifier: 0x6b1adeb6


    Device Boot      Start         End      Blocks  Id  System

/dev/rbd0p1            8192    10493951    5242880   83  Linux

/dev/rbd0p2        10493952    20971519    5238784   83  Linux

[root@client ~]# mkfs -t xfs /dev/rbd0p1

meta-data=/dev/rbd0p1            isize=512    agcount=8, agsize=163840 blks

         =                       sectsz=512   attr=2, projid32bit=1

         =                       crc=1        finobt=0, sparse=0

data     =                       bsize=4096   blocks=1310720, imaxpct=25

         =                       sunit=1024   swidth=1024 blks

naming   =version 2              bsize=4096   ascii-ci=0 ftype=1

log      =internal log           bsize=4096   blocks=2560, version=2

         =                       sectsz=512   sunit=8 blks, lazy-count=1

realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@client ~]# mkfs -t ext4 /dev/rbd0p2

mke2fs 1.42.9 (28-Dec-2013)

Discarding device blocks: done                           

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

Stride=1024 blocks, Stripe width=1024 blocks

327680 inodes, 1309696 blocks

65484 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=1342177280

40 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

        32768, 98304, 163840, 229376, 294912, 819200, 884736


Allocating group tables: done                            

Writing inode tables: done                           

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done


[root@client ~]# mkdir /media/xfs

[root@client ~]# mkdir /media/ext4

[root@client ~]# mount -t xfs /dev/rbd0p1 /media/xfs

[root@client ~]# mount -t ext4 /dev/rbd0p2 /media/ext4

[root@client ~]# mount | grep rbd

/dev/rbd0p1 on /media/xfs type xfs (rw,relatime,seclabel,attr2,inode64,sunit=8192,swidth=8192,noquota)

/dev/rbd0p2 on /media/ext4 type ext4 (rw,relatime,seclabel,stripe=1024,data=ordered)
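
Note that rbd mappings and their mounts do not survive a reboot by default. One common approach is the rbdmap service shipped with Ceph; a minimal sketch, reusing the client.rbd user and keyring created above:

[root@client ~]# vi /etc/ceph/rbdmap
rbd/rbd0 id=rbd,keyring=/etc/ceph/ceph.client.rbd.keyring
[root@client ~]# systemctl enable rbdmap.service

The matching filesystems can then be listed in /etc/fstab with the noauto option, so that they are mounted only after the image has been mapped.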

(10) Expanding the Image

Ceph block device images are thin provisioned: they consume physical space only once you start writing data to them. Each image does, however, have a maximum capacity, set by the --size option. To increase (or decrease) the maximum size of a Ceph block device image, run the command below. In this exercise, the rbd image created earlier is grown to 20 GB and then verified.

[root@client ~]# rbd resize --image rbd0 --size 20480 --name client.rbd

Resizing image: 100% complete...done.

[root@client ~]# rbd info --image rbd0 --name client.rbd

rbd image 'rbd0':

        size 20 GiB in 5120 objects

        order 22 (4 MiB objects)

        snapshot_count: 0

        id: 375afa8aa381

        block_name_prefix: rbd_data.375afa8aa381

        format: 2

        features: layering, exclusive-lock

        op_features:

        flags:

        create_timestamp: Sat Nov 16 20:38:45 2019

        access_timestamp: Sat Nov 16 20:38:45 2019

        modify_timestamp: Sat Nov 16 20:38:45 2019
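
If the image held a single filesystem spanning the whole device, the extra 10 GB could be claimed in place after the resize (with xfs_growfs for XFS or resize2fs for ext4). Because /dev/rbd0 was partitioned above, the new space is instead used for a third partition.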


fdisk can now be used to create a further partition on /dev/rbd0:

[root@client ~]# fdisk /dev/rbd0                     

Welcome to fdisk (util-linux 2.23.2).


Changes will remain in memory only, until you decide to write them.

Be careful before using the write command.



Command (m for help): n

Partition type:

  p   primary (2 primary, 0 extended, 2 free)

  e   extended

Select (default p): p

Partition number (3,4, default 3): 3

First sector (20971520-41943039, default 20971520):

Using default value 20971520

Last sector, +sectors or +size{K,M,G} (20971520-41943039, default 41943039):

Using default value 41943039

Partition 3 of type Linux and of size 10 GiB is set


Command (m for help): w

The partition table has been altered!


Calling ioctl() to re-read partition table.


WARNING: Re-reading the partition table failed with error 16: Device or resource busy.

The kernel still uses the old table. The new table will be used at

the next reboot or after you run partprobe(8) or kpartx(8)

Syncing disks.

[root@client ~]# partprobe -s

/dev/sda: msdos partitions 1 2

Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only.

/dev/sr0: msdos partitions 2

/dev/rbd0: msdos partitions 1 2 3

[root@client ~]# fdisk -l /dev/rbd0


Disk /dev/rbd0: 21.5 GB, 21474836480 bytes, 41943040 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes

Disk label type: dos

Disk identifier: 0x6b1adeb6


    Device Boot      Start         End     Blocks   Id  System

/dev/rbd0p1            8192    10493951    5242880   83  Linux

/dev/rbd0p2        10493952   20971519    5238784   83  Linux

/dev/rbd0p3        20971520    41943039   10485760   83  Linux
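
The new partition can then be formatted and mounted just like the first two; a brief sketch (the mount point /media/data is only an example):

[root@client ~]# mkfs -t xfs /dev/rbd0p3
[root@client ~]# mkdir /media/data
[root@client ~]# mount -t xfs /dev/rbd0p3 /media/data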
