According to the 2018 Ceph community user survey, the number of users accessing Ceph through the CephFS client has slightly surpassed RGW. The user data is shown below:
Although CephFS performance is not great, it already largely meets the needs of file backup and document management. CephFS also supports mounting from multiple nodes (for example, in Kubernetes deployments), which has raised demand among Ceph users with modest performance requirements. The CephFS use-case survey results are shown below:
The same 2018 user survey also counted how CephFS is accessed:
The kernel cephfs client performs slightly better than ceph-fuse while being just as easy to use, so it sees somewhat higher adoption.
1. Deploying the MDS (metadata server):
Use ceph-deploy to deploy the Ceph MDS. It depends on the configuration and keyring files generated by the earlier ceph-deploy run, so change into the directory used for that deployment and execute: ceph-deploy mds create [hostname]:[mdsname]
~/ceph-cluster$ ceph-deploy mds create node1:mds-1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/yjiang2/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.38): /usr/bin/ceph-deploy mds create node1:mds-1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf :
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func :
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] mds : [('node1', 'mds-1')]
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts node1:mds-1
[node1][DEBUG ] connection detected need for sudo
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[ceph_deploy.mds][INFO ] Distro info: CentOS Linux 7.6.1810 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to node1
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node1][WARNIN] mds keyring does not exist yet, creating one
[node1][DEBUG ] create a keyring file
[node1][DEBUG ] create path if it doesn't exist
[node1][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.mds-1 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-mds-1/keyring
[node1][INFO ] Running command: sudo systemctl enable ceph-mds@mds-1
[node1][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/[email protected] to /usr/lib/systemd/system/[email protected].
[node1][INFO ] Running command: sudo systemctl start ceph-mds@mds-1
[node1][INFO ] Running command: sudo systemctl enable ceph.target
On the ceph-deploy admin node, check the MDS status with ceph mds stat:
$ sudo ceph mds stat
test_cephfs1-1/1/1 up {0=mds-1=up:active}
The output shows that filesystem test_cephfs1 has one active MDS: rank 0 is served by mds-1 in the up:active state. Alternatively, check the mds daemon directly on the metadata server node:
$ ps -axu| grep mds
ceph 94389 0.1 1.3 336904 20716 ? Ssl 03:36 0:00 /usr/bin/ceph-mds -f --cluster ceph --id mds-1 --setuser ceph --setgroup ceph
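Since ceph-deploy registered the daemon with systemd (see the systemctl enable line in the log above), you can also query its state through systemd; a quick check, assuming the same daemon id, not part of the original transcript:
$ sudo systemctl status ceph-mds@mds-1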
2. Create the metadata and data pools that CephFS requires, using: ceph osd pool create [poolname] [pgnum] [pgpnum]
$ sudo ceph osd pool create cephfs_data 100 100  # create the data pool
pool 'cephfs_data' created
$ sudo ceph osd pool create cephfs_metadata 100 100
Error ERANGE: pg_num 100 size 3 would mean 792 total pgs, which exceeds max 750 (mon_max_pg_per_osd 250 * num_in_osds 3)
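The arithmetic behind this error (assuming all pools use the default replica size 3; the 164 existing PGs is inferred from the reported totals):
250 mon_max_pg_per_osd × 3 num_in_osds = 750 allowed PG placements
(164 existing PGs + 100 requested) × 3 replicas = 792 placements > 750
Retrying with pg_num 64 keeps the total under the cap: (164 + 64) × 3 = 684 ≤ 750.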
yjiang2@admin-node:~/ceph-cluster$ sudo ceph osd pool create cephfs_metadata 64 64  # create the metadata pool with fewer PGs
pool 'cephfs_metadata' created
3. Create the CephFS filesystem, using: ceph fs new [fsname] [metadata poolname] [data poolname]
$ sudo ceph fs new test_cephfs1 cephfs_metadata cephfs_data
new fs with metadata pool 7 and data pool 6
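To double-check which pools the new filesystem is wired to, you can list all filesystems; a verification step not shown in the original transcript:
$ sudo ceph fs ls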
4. Mounting
A client can mount CephFS in two ways: through the kernel cephfs module or through ceph-fuse. The architecture is shown below:
Because the ceph-fuse I/O path is longer (every request must cross from the kernel back out to the userspace FUSE daemon), its performance is somewhat worse than the kernel module approach.
4.1 Mounting with the kernel cephfs module:
Mount the CephFS filesystem with mount.ceph or mount -t ceph. The command format is: mount.ceph [monitor hostname]:[port, default 6789]:/ [client mount point]
If it fails with mount error 22 = Invalid argument, refer to the article on resolving mount error 22 = Invalid argument.
The mount command:
$ sudo mount.ceph node1:6789:/ ~/client_cephfs_mnt/ -o name=admin,secretfile=~/admin.keyring
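The mount -t ceph form mentioned above is equivalent, since it invokes the same mount.ceph helper; a sketch assuming the same monitor, mount point, and secret file as the command above:
$ sudo mount -t ceph node1:6789:/ /home/yjiang2/client_cephfs_mnt/ -o name=admin,secretfile=/home/yjiang2/admin.keyring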
After the mount succeeds, verify it with df:
$ df -h |grep ceph
192.168.122.157:6789:/ 7.1G 0 7.1G 0% /home/yjiang2/client_cephfs_mnt
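A quick smoke test to confirm that reads and writes go through the new mount (the file name here is just an illustration):
$ echo hello | sudo tee ~/client_cephfs_mnt/smoke_test.txt > /dev/null
$ cat ~/client_cephfs_mnt/smoke_test.txt  # should print "hello"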
4.2 Mounting with ceph-fuse:
First install the ceph-fuse tool; after mounting, ceph-fuse shows up in df:
$ sudo apt install ceph-fuse  # install ceph-fuse
$ sudo ceph-fuse -m node1:6789 /home/yjiang2/client_cephfs_mnt/
ceph-fuse[25517]: starting ceph client
2019-07-16 15:16:03.935532 7f0d77c18500 -1 init, newargv = 0x5566c89d2280 newargc=9
ceph-fuse[25517]: starting fuse
yjiang2@admin-node:~/ceph-cluster$ df -h |grep ceph
ceph-fuse 7.1G 0 7.1G 0% /home/yjiang2/client_cephfs_mnt
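When done, either kind of mount can be detached with umount; for the ceph-fuse mount, fusermount -u also works. A sketch assuming the mount point above:
$ sudo umount /home/yjiang2/client_cephfs_mnt  # works for both mount types
$ sudo fusermount -u /home/yjiang2/client_cephfs_mnt  # FUSE-specific alternative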
Official CephFS documentation
Preparation work before deploying CephFS