Ceph Block Device and CephFS Quickstart

This article follows on from the previous post on deploying a Ceph cluster from scratch.

Block Device Quickstart

A Ceph block device is also referred to as an RBD, or RADOS block device.

Create a block device pool

To create a block device pool, first create a pool on the admin node and then initialize it for use by RBD.

[root@ceph-admin ceph-cluster]# ceph osd pool create mytest 100 100	# the first 100 is pg_num (the pool's placement-group count); the second is pgp_num (the number of PGs used for data placement)
pool 'mytest' created
[root@ceph-admin ceph-cluster]# rbd pool init mytest
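
Optionally, the new pool can be verified from the admin node. This is a quick check, not part of the original walkthrough:

[root@ceph-admin ceph-cluster]# ceph osd lspools        # the list should now include "mytest"
[root@ceph-admin ceph-cluster]# ceph osd pool ls detail # shows pg_num/pgp_num and, after "rbd pool init", the enabled rbd application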

Configure a block device

1. Create a block device image

[root@ceph-admin ceph-cluster]# rbd create foo --size 4096 --pool mytest --image-feature layering
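
Optionally, the image can be listed and inspected to confirm its size and features (a verification step, not shown in the original output):

[root@ceph-admin ceph-cluster]# rbd ls --pool mytest    # should list "foo"
[root@ceph-admin ceph-cluster]# rbd info mytest/foo     # shows the 4 GiB size and the layering feature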

2. Map the image to a block device

[root@ceph-admin ceph-cluster]# rbd  map foo --name client.admin --pool mytest
/dev/rbd0
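
To see which RBD images are currently mapped on this host (an optional check):

[root@ceph-admin ceph-cluster]# rbd showmapped          # lists the pool, image and the /dev/rbdX device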

3. Create a filesystem on the block device

[root@ceph-admin ceph-cluster]# mkfs.xfs  /dev/rbd/mytest/foo 
meta-data=/dev/rbd/mytest/foo    isize=512    agcount=8, agsize=131072 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1048576, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

4. Mount the filesystem

[root@ceph-admin ceph-cluster]# mkdir /mnt/ceph-block-device
[root@ceph-admin ceph-cluster]# mount /dev/rbd/mytest/foo /mnt/ceph-block-device
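
The block device can now be used like any local disk. A short sketch of the corresponding verification and teardown steps, which are not part of the original walkthrough:

[root@ceph-admin ceph-cluster]# df -h /mnt/ceph-block-device    # confirm the mount
[root@ceph-admin ceph-cluster]# umount /mnt/ceph-block-device   # unmount when finished
[root@ceph-admin ceph-cluster]# rbd unmap /dev/rbd0             # release the kernel mapping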

CephFS Quickstart

Deploy a metadata server

All metadata operations in CephFS go through a metadata server, so at least one MDS is required. Deploy the metadata server on the ceph-admin node:

[root@ceph-admin ceph-cluster]# ceph-deploy mds create ceph-admin
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mds create ceph-admin
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : 
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : 
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  mds                           : [('ceph-admin', 'ceph-admin')]
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts ceph-admin:ceph-admin
[ceph-admin][DEBUG ] connected to host: ceph-admin 
[ceph-admin][DEBUG ] detect platform information from remote host
[ceph-admin][DEBUG ] detect machine type
[ceph_deploy.mds][INFO  ] Distro info: CentOS Linux 7.8.2003 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to ceph-admin
[ceph-admin][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-admin][WARNIN] mds keyring does not exist yet, creating one
[ceph-admin][DEBUG ] create a keyring file
[ceph-admin][DEBUG ] create path if it doesn't exist
[ceph-admin][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.ceph-admin osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-ceph-admin/keyring
[ceph-admin][INFO  ] Running command: systemctl enable ceph-mds@ceph-admin
[ceph-admin][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/[email protected] to /usr/lib/systemd/system/[email protected].
[ceph-admin][INFO  ] Running command: systemctl start ceph-mds@ceph-admin
[ceph-admin][INFO  ] Running command: systemctl enable ceph.target
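
The MDS state can then be checked from the admin node (an optional check; the daemon stays in standby until a filesystem is created):

[root@ceph-admin ceph-cluster]# ceph mds stat           # should report one MDS, still up:standby at this point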

Create a file system

A CephFS filesystem requires two pools, one for metadata and one for data:

[root@ceph-admin ceph-cluster]# ceph osd pool create cephfs_data 32
pool 'cephfs_data' created
[root@ceph-admin ceph-cluster]# ceph osd pool create cephfs_meta 32
pool 'cephfs_meta' created
[root@ceph-admin ceph-cluster]# ceph fs new mycephfs cephfs_meta cephfs_data
new fs with metadata pool 3 and data pool 2
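
Optionally, confirm the filesystem and that the MDS has picked it up and become active:

[root@ceph-admin ceph-cluster]# ceph fs ls              # lists mycephfs with its metadata and data pools
[root@ceph-admin ceph-cluster]# ceph mds stat           # the MDS should now be up:active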

Mount the filesystem

Using the kernel driver

mount -t ceph {mon-ip}:{mon-port}:{path-to-be-mounted} {mount-point} -o name={user-name},secret={secret}

{mon-ip}:{mon-port}: the address of a monitor, e.g. 10.10.128.174:6789

{path-to-be-mounted}: the path inside CephFS to mount (/ for the whole filesystem)

{mount-point}: the local directory to mount CephFS on

{user-name}: the CephX user name to authenticate as when mounting

{secret}: that user's secret key; both the user name and key can be found in /etc/ceph/xxx.keyring

[root@ceph-node1 ceph]# mkdir /mnt/mycephfs
[root@ceph-node1 ceph]# mount -t ceph 10.10.128.174:6789:/ /mnt/mycephfs  -o name=admin,secret=AQAFZeNeeLGKDxAAzBBLA23ebdk+7t/FaclHYQ==

[root@ceph-node1 ceph]# df -Th
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        50G  1.6G   49G    4% /
devtmpfs                devtmpfs  2.9G     0  2.9G    0% /dev
tmpfs                   tmpfs     2.9G     0  2.9G    0% /dev/shm
tmpfs                   tmpfs     2.9G   33M  2.8G    2% /run
tmpfs                   tmpfs     2.9G     0  2.9G    0% /sys/fs/cgroup
/dev/vda1               xfs      1014M  142M  873M   14% /boot
/dev/mapper/centos-home xfs        47G   33M   47G    1% /home
tmpfs                   tmpfs     577M     0  577M    0% /run/user/0
tmpfs                   tmpfs     2.9G   52K  2.9G    1% /var/lib/ceph/osd/ceph-1
10.10.128.174:6789:/    ceph       18G     0   18G    0% /mnt/mycephfs
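
Note that passing the key directly on the command line leaves it in the shell history. mount.ceph also accepts a secretfile option; a sketch, assuming the admin key has been saved by itself into a file such as /etc/ceph/admin.secret (a path chosen here for illustration):

[root@ceph-node1 ceph]# mount -t ceph 10.10.128.174:6789:/ /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/admin.secret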

Using FUSE (Filesystem in Userspace)

Install the ceph-fuse tool and create a mount point:

[root@ceph-node1 ceph]# yum install ceph-fuse -y
[root@ceph-node1 ceph]# mkdir /mnt/mycephfs2

[root@ceph-node1 ceph]# ceph-fuse -h
usage: ceph-fuse [-n client.username] [-m mon-ip-addr:mon-port] <mount point> [OPTIONS]
  --client_mountpoint/-r <sub_directory>
                    use sub_directory as the mounted root, rather than the full Ceph tree.
                    
[root@ceph-node1 ceph]# ceph-fuse -m 10.10.128.174:6789 /mnt/mycephfs2/ 
ceph-fuse[25485]: starting ceph client2020-06-13 17:13:57.445 7fd711122c00 -1 init, newargv = 0x559c2250bf20 newargc=7

ceph-fuse[25485]: starting fuse

[root@ceph-node1 ceph]# df -Th
Filesystem              Type            Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs              50G  1.6G   49G    4% /
devtmpfs                devtmpfs        2.9G     0  2.9G    0% /dev
tmpfs                   tmpfs           2.9G     0  2.9G    0% /dev/shm
tmpfs                   tmpfs           2.9G   33M  2.8G    2% /run
tmpfs                   tmpfs           2.9G     0  2.9G    0% /sys/fs/cgroup
/dev/vda1               xfs            1014M  142M  873M   14% /boot
/dev/mapper/centos-home xfs              47G   33M   47G    1% /home
tmpfs                   tmpfs           577M     0  577M    0% /run/user/0
tmpfs                   tmpfs           2.9G   52K  2.9G    1% /var/lib/ceph/osd/ceph-1
10.10.128.174:6789:/    ceph             18G     0   18G    0% /mnt/mycephfs
ceph-fuse               fuse.ceph-fuse   18G     0   18G    0% /mnt/mycephfs2
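
To unmount the FUSE client, or to remount only a subdirectory of the CephFS tree using the -r option shown in the help output above, a sketch would be (/dir1 is a hypothetical directory that must already exist in the filesystem):

[root@ceph-node1 ceph]# umount /mnt/mycephfs2           # fusermount -u /mnt/mycephfs2 also works
[root@ceph-node1 ceph]# ceph-fuse -m 10.10.128.174:6789 -r /dir1 /mnt/mycephfs2/   # mount only /dir1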
