Environment preparation:
1. Create a Ceph client (environment check)
Set up a virtual machine, or use a physical machine, as the Ceph client for testing; the operating system must be Linux.
Notes:
1. The Linux kernel has supported Ceph since version 2.6.32.
2. Kernel version 2.6.34 or later is recommended.
3. The rbd kernel module must be loadable: modprobe rbd
[root@localhost ~]# uname -r
3.10.0-957.el7.x86_64
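Before going further, it may be worth confirming that the rbd kernel module actually loads on this kernel (a minimal check using standard commands):
modprobe rbd              # load the RBD kernel module
lsmod | grep rbd          # rbd (and libceph) should appear if the module loaded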
2. Configure the yum repositories
If wget is not installed: yum -y install wget
a) Remove the default repos (the upstream mirrors are slow):
rm -rf /etc/yum.repos.d/*.repo
b) Download the Aliyun base repo:
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
c) Download the Aliyun EPEL repo:
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
d) Add the Ceph repo:
vi /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph x86_64 packages
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/x86_64/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
[ceph-deploy]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
Or use the Aliyun mirror instead:
[ceph]
name=Ceph x86_64 packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/x86_64/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
[ceph-deploy]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.aliyun.com/ceph/keys/release.asc
e) Install the EPEL repository
Run:
sudo yum install epel-release -y
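After rewriting the repo files, it can help to rebuild the yum metadata cache so the new Ceph and EPEL repos are picked up (standard yum commands):
yum clean all
yum makecache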
3. Install Ceph
1)# yum -y install ceph    (the ceph.repo configured above already pins the luminous release, so no extra release flag is needed)
2)#ceph -v
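To double-check what was installed, the package list can also be queried (standard rpm usage; exact package names may vary slightly by release). ceph -v should report a 12.2.x (luminous) version string if the repo above was used:
rpm -qa | grep -i -E 'ceph|rados|rbd'     # installed Ceph-related packages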
3) List the RBD images (this can be done on any Ceph node):
# rbd ls
foo
4) View detailed information about an RBD image (this can be done on any Ceph node):
[root@localhost ceph]# rbd --image foo info
rbd image 'foo':
size 4096 MB in 1024 objects
order 22 (4096 kB objects)
block_name_prefix: rb.0.12177.238e1f29
format: 1
RBD-Client installation
Configure the block device (client node)
Note that the client node also needs Ceph installed; otherwise rbd fails with "sudo: rbd: command not found".
Also note that the client node must be able to authenticate against the Ceph cluster: it needs the /etc/ceph/ceph.client.admin.keyring file, otherwise it cannot talk to the cluster and reports "ceph monclient(hunting) error missing keyring cannot use cephx for authentication".
1. Configure /etc/hosts
192.168.110.3 node1
192.168.110.4 node2
192.168.110.5 node3
192.168.110.6 node4
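For example, the entries can be appended to /etc/hosts in one step (the IPs and hostnames below are the ones used in this example environment):
cat >> /etc/hosts << 'EOF'
192.168.110.3 node1
192.168.110.4 node2
192.168.110.5 node3
192.168.110.6 node4
EOF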
2. Copy ceph.client.admin.keyring and ceph.conf from another cluster node (the /etc/ceph directory on the primary node) into /etc/ceph on the client. For example, on node4:
scp node1:/etc/ceph/ceph.client.admin.keyring /etc/ceph
scp node1:/etc/ceph/ceph.conf /etc/ceph
[root@localhost ceph]# ls
ceph.client.admin.keyring ceph.conf rbdmap
3. The CephX authentication system
Before running any rbd command, use ceph -s to make sure the cluster is healthy (only proceed when it reports health: HEALTH_OK):
[root@node1 ceph]# ceph -s
cluster:
id: 4fde6dd1-7e32-4f07-8f6f-9bb4577a041a
health: HEALTH_OK
services:
mon: 3 daemons, quorum node1,node2,node3
mgr: node1(active), standbys: node2, node3
osd: 3 osds: 3 up, 3 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0B
usage: 3.01GiB used, 57.0GiB / 60.0GiB avail
pgs:
4. List the Ceph pools
#ceph osd lspools
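Two related standard subcommands can also help here: ceph df shows per-pool capacity and usage, and ceph osd pool ls detail shows each pool's settings such as size and pg_num.
# ceph df
# ceph osd pool ls detail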
5. Create a pool
Pool creation syntax:
ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated] [crush-ruleset-name] [expected-num-objects]
ceph osd pool create {pool-name} {pg-num} {pgp-num} erasure [erasure-code-profile] [crush-ruleset-name] [expected_num_objects]
ceph osd pool create rbd 50
# 50 is the pg_num; choosing a pg_num value is mandatory. The official documentation recommends:
With fewer than 5 OSDs, set pg_num to 128.
With 5 to 10 OSDs, set pg_num to 512.
With 10 to 50 OSDs, set pg_num to 4096.
With more than 50 OSDs, you need to understand the trade-offs and calculate pg_num yourself (a rough calculation sketch follows below).
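The rule of thumb behind these numbers is roughly (number of OSDs x 100) / replica count, rounded up to the next power of two. A minimal sketch of that calculation in shell (OSDS and SIZE are placeholder values for your own cluster):
OSDS=3; SIZE=3                                   # e.g. 3 OSDs with 3 replicas
RAW=$(( OSDS * 100 / SIZE ))                     # rule-of-thumb target
PG=1; while [ $PG -lt $RAW ]; do PG=$(( PG * 2 )); done
echo "suggested pg_num: $PG"                     # prints 128 for this example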
a) Create the pool
ceph osd pool create rbd 50
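On Luminous, a freshly created pool should normally also be tagged for RBD use, otherwise ceph -s may warn that an application is not enabled on the pool. Either of the following standard commands does that:
ceph osd pool application enable rbd rbd     # tag pool 'rbd' for the rbd application
# or, for RBD pools specifically:
rbd pool init rbd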
b) Create a block device from the client
sudo rbd create foo --size 4096 [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring]
foo is the image name
--size is the image size in MB
-m is a monitor hostname or IP
-k is the path to the client keyring file, which can be located with ls /etc/ceph
Example: rbd create foo --size 4096 -m node1 -k /etc/ceph/ceph.client.admin.keyring
For example, to create a 100 MB RBD block device:
rbd create foo --size 100 --name client.admin -k /etc/ceph/ceph.client.admin.keyring
####################################################################################
Note: on the client, check /etc/ceph:
[root@localhost ceph]# ls
ceph.client.admin.keyring ceph.conf rbdmap #note: the client.admin in the keyring file name is what goes after --name
6. Check that the image was created:
[root@localhost ceph]# rbd list -k /etc/ceph/ceph.client.admin.keyring
foo
7. Map the image as a block device (map the RBD image to a disk on the client)
sudo rbd map foo --name client.admin [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring]
For example:
[root@localhost ceph]# rbd map foo --name client.admin -m node1 -k /etc/ceph/ceph.client.admin.keyring
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable foo object-map fast-diff deep-flatten".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
Troubleshooting:
[root@localhost ceph]# rbd info foo
rbd image 'foo':
size 100MiB in 25 objects
order 22 (4MiB objects)
block_name_prefix: rbd_data.5e736b8b4567
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
flags:
create_timestamp: Fri Aug 16 21:20:25 2019
Explanation: in the features line, my OS kernel only supports layering; the other features are unsupported, so they have to be disabled.
# (Alternatively, the image could have been created with only the layering feature enabled in the first place: rbd create foo --size 100 --image-feature layering --name client.admin -k /etc/ceph/ceph.client.admin.keyring)
Fix:
[root@localhost ceph]# rbd feature disable foo exclusive-lock object-map fast-diff deep-flatten
[root@localhost ceph]# rbd info foo
rbd image 'foo':
size 100MiB in 25 objects
order 22 (4MiB objects)
block_name_prefix: rbd_data.5e736b8b4567
format: 2
features: layering
flags:
create_timestamp: Fri Aug 16 21:20:25 2019
[root@localhost ceph]# rbd map foo --name client.admin -m node1 -k /etc/ceph/ceph.client.admin.keyring
/dev/rbd0
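If many images will be used with an older kernel, the per-image feature-disable step can be avoided by lowering the default feature set in ceph.conf; a minimal sketch, using the numeric value 1 (layering only):
[client]
rbd default features = 1     # 1 = layering only; new images are then mappable by older kernels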
8. Check the device mapping
[root@localhost ceph]# rbd showmapped
id pool image snap device
0 rbd foo - /dev/rbd0
9. Format the disk: create a filesystem on /dev/rbd0
# accept the defaults if prompted; -m0 reserves no blocks for the super user
[root@localhost ceph]# mkfs.ext4 -m0 /dev/rbd0
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=4096 blocks, Stripe width=4096 blocks
25688 inodes, 102400 blocks
0 blocks (0.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=33685504
13 block groups
8192 blocks per group, 8192 fragments per group
1976 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729
Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
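If XFS is preferred over ext4, the device could instead be formatted with mkfs.xfs (assumes the xfsprogs package is installed):
# mkfs.xfs /dev/rbd0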
10. Mount the filesystem, write a file, and check
#sudo mkdir /cephAll
#sudo mount /dev/rbd0 /cephAll/
#cd /cephAll
#sudo vi helloCeph.txt
[root@localhost cephAll]# ls
helloCeph.txt lost+found
[root@localhost cephAll]# df -h
df: `/mnt/hgfs': Protocol error
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 8.5G 3.0G 5.1G 37% /
tmpfs 488M 72K 488M 1% /dev/shm
/dev/sda1 283M 75M 189M 29% /boot
/dev/rbd0 3.9G 8.0M 3.8G 1% /cephAll
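To make the mapping and mount persist across reboots, one common approach (a sketch based on the rbdmap man page; verify paths and service names on your distribution) is to list the image in /etc/ceph/rbdmap, enable the rbdmap service, and add a noauto entry to /etc/fstab using the /dev/rbd/<pool>/<image> symlink created by udev:
# /etc/ceph/rbdmap
rbd/foo id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

# enable the service that maps the images listed in /etc/ceph/rbdmap at boot
systemctl enable rbdmap

# /etc/fstab
/dev/rbd/rbd/foo  /cephAll  ext4  noauto  0 0
To undo everything manually, unmount /cephAll and run rbd unmap /dev/rbd0.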