Ceph File Storage (CephFS)


(1) Deploy CephFS (create an MDS)

[ceph-admin@c720181 my-cluster]$ ceph-deploy mds create c720182

Note: review the output below to see which commands were executed and the keyring that was generated.

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph-admin/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /bin/ceph-deploy mds create c720182

[ceph_deploy.cli][INFO  ] ceph-deploy options:

[ceph_deploy.cli][INFO  ]  username                      : None

[ceph_deploy.cli][INFO  ]  verbose                       : False

[ceph_deploy.cli][INFO  ]  overwrite_conf                : False

[ceph_deploy.cli][INFO  ]  subcommand                    : create

[ceph_deploy.cli][INFO  ]  quiet                         : False

[ceph_deploy.cli][INFO  ]  cd_conf                       :

[ceph_deploy.cli][INFO  ]  cluster                       : ceph

[ceph_deploy.cli][INFO  ]  func                          :

[ceph_deploy.cli][INFO  ]  ceph_conf                     : None

[ceph_deploy.cli][INFO  ]  mds                           : [('c720182', 'c720182')]

[ceph_deploy.cli][INFO  ]  default_release               : False

[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts c720182:c720182

[c720182][DEBUG ] connection detected need for sudo

[c720182][DEBUG ] connected to host: c720182

[c720182][DEBUG ] detect platform information from remote host

[c720182][DEBUG ] detect machine type

[ceph_deploy.mds][INFO  ] Distro info: CentOS Linux 7.6.1810 Core

[ceph_deploy.mds][DEBUG ] remote host will use systemd

[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to c720182

[c720182][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[c720182][WARNIN] mds keyring does not exist yet, creating one

[c720182][DEBUG ] create a keyring file

[c720182][DEBUG ] create path if it doesn't exist

[c720182][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.c720182 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-c720182/keyring

[c720182][INFO  ] Running command: sudo systemctl enable ceph-mds@c720182

[c720182][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/[email protected] to /usr/lib/systemd/system/[email protected].

[c720182][INFO  ] Running command: sudo systemctl start ceph-mds@c720182

[c720182][INFO  ] Running command: sudo systemctl enable ceph.target

[ceph-admin@c720181 my-cluster]$

[ceph-admin@c720181 my-cluster]$ ceph -s

  cluster:

    id:     a4088ae8-c818-40d6-ab40-8f40c5bedeee

    health: HEALTH_WARN

            application not enabled on 1 pool(s)

 

 

  services:

    mon: 3 daemons, quorum c720181,c720182,c720183

    mgr: c720181(active), standbys: c720183, c720182

    osd: 3 osds: 3 up, 3 in

    rgw: 3 daemons active

 

  data:

    pools:   18 pools, 156 pgs

    objects: 222 objects, 1.62KiB

    usage:   3.06GiB used, 56.9GiB / 60.0GiB avail

    pgs:     156 active+clean
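The HEALTH_WARN above ("application not enabled on 1 pool(s)") is unrelated to the MDS; it simply means one of the existing pools (most likely the rbd pool created in the block-storage section, which is an assumption here) was never tagged with an application. A minimal sketch for clearing it:

# show which pool triggers the warning
ceph health detail
# tag the pool with its application (assumption: it is the rbd pool)
ceph osd pool application enable rbd rbd
# health should return to HEALTH_OK
ceph -s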

 

 

(2) Create the data and metadata pools for CephFS

[ceph-admin@c720181 my-cluster]$ ceph osd pool create cephfs_data 50

[ceph-admin@c720181 my-cluster]$ ceph osd pool create cephfs_metadate 30

[ceph-admin@c720181 my-cluster]$ ceph fs new cephfs cephfs_metadate cephfs_data

new fs with metadata pool 35 and data pool 36
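The PG counts above (50 and 30) are just example values. A quick way to confirm what was actually applied and, if necessary, to grow a pool later (in this release pg_num can only be increased, never decreased):

ceph osd pool get cephfs_data pg_num
ceph osd pool get cephfs_metadate pg_num
# example only: raise the data pool to 64 PGs if the cluster grows
# ceph osd pool set cephfs_data pg_num 64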

 

(3) View the resulting state

[ceph-admin@c720181 my-cluster]$ ceph mds stat

cephfs-1/1/1 up  {0=c720182=up:active}

[ceph-admin@c720181 my-cluster]$ ceph osd pool ls

default.rgw.control

default.rgw.meta

default.rgw.log

rbd

.rgw

.rgw.root

.rgw.control

.rgw.gc

.rgw.buckets

.rgw.buckets.index

.rgw.buckets.extra

.log

.intent-log

.usage

.users

.users.email

.users.swift

.users.uid

cephfs_metadate

cephfs_data

[ceph-admin@c720181 my-cluster]$ ceph fs ls

name: cephfs, metadata pool: cephfs_metadate, data pools: [cephfs_data ]
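For a more detailed view (active MDS, per-pool usage), ceph fs status can also be run; a minimal example:

ceph fs status cephfs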

 

(4) Create a client user (optional, since a keyring was already generated when the MDS was deployed)

[ceph-admin@c720181 my-cluster]$ ceph auth get-or-create client.cephfs mon 'allow r' mds 'allow r,allow rw path=/' osd 'allow rw pool=cephfs_data' -o ceph.client.cephfs.keyring
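To double-check the capabilities that were granted (or to re-export the keyring later), the user can be inspected on the admin node:

ceph auth get client.cephfs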

 

(5) Copy the user keyring to the client, here c720184

[ceph-admin@c720181 my-cluster]$ cat ceph.client.cephfs.keyring

[client.cephfs]

key = AQBXllpd6s2IHRAAk8iiDAHRiCdc9LlKVTB74w==

[ceph-admin@c720181 my-cluster]$ scp ceph.client.cephfs.keyring c720184:/etc/ceph/

scp: /etc/ceph//ceph.client.cephfs.keyring: Permission denied

[ceph-admin@c720181 my-cluster]$ sudo scp ceph.client.cephfs.keyring c720184:/etc/ceph/

ceph.client.cephfs.keyring                                              100%   64     2.4KB/s   00:00   
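As an alternative to scp, the bare key needed later for the kernel mount's secretfile= option can be pushed to the client in one step. This is only a sketch and assumes passwordless root SSH from the admin node to c720184:

ceph auth get-key client.cephfs | ssh root@c720184 "tee /etc/ceph/cephfskey"
ssh root@c720184 "chmod 600 /etc/ceph/cephfskey"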

 

 ===========================================================================

I. Mount CephFS using the kernel driver

Note: native support for Ceph was added to the Linux kernel in version 2.6.34 and later.

(1) Create the mount directory

[root@c720184 ceph]# mkdir /mnt/cephfs

 

(2) Mount

[ceph-admin@c720181 my-cluster]$ ceph auth get-key client.cephfs

AQBXllpd6s2IHRAAk8iiDAHRiCdc9LlKVTB74w==  // run on the Ceph admin node to obtain the key; the keyring was already copied to /etc/ceph/ceph.client.cephfs.keyring on the client in the previous step, so it can also be read locally

[root@c720184 ceph]# mount -t ceph c720182:6789:/ /mnt/cephfs -o name=cephfs,secret=AQBXllpd6s2IHRAAk8iiDAHRiCdc9LlKVTB74w==       // name is the cephx user name (with the leading "client." dropped)

[root@c720184 ceph]# df -h

Filesystem               Size  Used Avail Use% Mounted on

/dev/mapper/centos-root   17G  1.5G   16G   9% /

devtmpfs                 908M     0  908M   0% /dev

tmpfs                    920M     0  920M   0% /dev/shm

tmpfs                    920M  8.5M  911M   1% /run

tmpfs                    920M     0  920M   0% /sys/fs/cgroup

/dev/sda1               1014M  145M  870M  15% /boot

tmpfs                    184M     0  184M   0% /run/user/0

/dev/rbd1                 10G  133M  9.9G   2% /mnt/ceph-disk1

192.168.20.182:6789:/     18G     0   18G   0% /mnt/cephfs

 

Because the command above puts the key on the command line, which is not very secure, you can mount with a secret file instead:

[root@c720184 ceph]# mount -t ceph c720182:6789:/ /mnt/cephfs -o name=cephfs,secretfile=/etc/ceph/ceph.client.cephfs.keyring

secret is not valid base64: Invalid argument.

adding ceph secret key to kernel failed: Invalid argument.

failed to parse ceph_options

 

The error above occurs because the secret file has the wrong format: the kernel client expects a file containing only the base64 key. Make a copy and edit it:

[root@c720184 ceph]# cp ceph.client.cephfs.keyring cephfskey

Edit it as follows:

[root@c720184 ceph]# cat cephfskey

AQBXllpd6s2IHRAAk8iiDAHRiCdc9LlKVTB74w==

Mount again:

[root@c720184 ceph]# mount -t ceph c720182:6789:/ /mnt/cephfs -o name=cephfs,secretfile=/etc/ceph/cephfskey

[root@c720184 ceph]# df -h

Filesystem               Size  Used Avail Use% Mounted on

/dev/mapper/centos-root   17G  1.5G   16G   9% /

devtmpfs                 908M     0  908M   0% /dev

tmpfs                    920M     0  920M   0% /dev/shm

tmpfs                    920M  8.5M  911M   1% /run

tmpfs                    920M     0  920M   0% /sys/fs/cgroup

/dev/sda1               1014M  145M  870M  15% /boot

tmpfs                    184M     0  184M   0% /run/user/0

/dev/rbd1                 10G  133M  9.9G   2% /mnt/ceph-disk1

192.168.20.182:6789:/     18G     0   18G   0% /mnt/cephfs
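Instead of copying the keyring and editing it by hand, the secret file can also be generated directly from it; a small sketch, assuming the keyring sits at /etc/ceph/ceph.client.cephfs.keyring on the client:

awk '/key = / {print $3}' /etc/ceph/ceph.client.cephfs.keyring > /etc/ceph/cephfskey
chmod 600 /etc/ceph/cephfskey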

(3) Configure mounting at boot

[root@c720184 ceph]# vim /etc/fstab

# /etc/fstab

# Created by anaconda on Tue Jul  9 07:05:16 2019

#

# Accessible filesystems, by reference, are maintained under '/dev/disk'

# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info

#

/dev/mapper/centos-root /                       xfs     defaults        0 0

UUID=9a641a11-b1ab-4ffe-9ed6-584d3bd308a7 /boot                   xfs     defaults        0 0

/dev/mapper/centos-swap swap                    swap    defaults        0 0

c720182:6789:/ /mnt/cephfs ceph name=cephfs,secretfile=/etc/ceph/cephfskey,_netdev,noatime 0 0

 

(4) Test by unmounting and remounting

[root@c720184 ceph]# umount /mnt/cephfs

[root@c720184 ceph]# mount /mnt/cephfs

[root@c720184 ceph]# df -h

Filesystem               Size  Used Avail Use% Mounted on

/dev/mapper/centos-root   17G  1.5G   16G   9% /

devtmpfs                 908M     0  908M   0% /dev

tmpfs                    920M     0  920M   0% /dev/shm

tmpfs                    920M  8.5M  911M   1% /run

tmpfs                    920M     0  920M   0% /sys/fs/cgroup

/dev/sda1               1014M  145M  870M  15% /boot

tmpfs                    184M     0  184M   0% /run/user/0

/dev/rbd1                 10G  133M  9.9G   2% /mnt/ceph-disk1

192.168.20.182:6789:/     18G     0   18G   0% /mnt/cephfs

(5) Write test

[root@c720184 ceph]# dd if=/dev/zero of=/mnt/cephfs/file1 bs=1M count=1024

1024+0 records in

1024+0 records out

1073741824 bytes (1.1 GB) copied, 34.0426 s, 31.5 MB/s
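A quick sanity check that the write really landed on CephFS (plain shell, nothing Ceph-specific):

ls -lh /mnt/cephfs/file1
df -h /mnt/cephfs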

 

II. Mount with the ceph-fuse client (the FUSE client is recommended: the reported capacity is more accurate, because the kernel mount shows the total capacity including the metadata pool, while the FUSE client reports only the data pool capacity)

Note: CephFS is supported natively by the Linux kernel, but if the host runs an older kernel, or there are application dependencies, the FUSE client can always be used to mount CephFS.

 

(1) Install the ceph-fuse client

rpm -qa | grep -i ceph-fuse    # check whether ceph-fuse is already installed

yum install -y ceph-fuse
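ceph-fuse comes from the Ceph yum repository, so the same repo used for the other Ceph packages must be reachable from the client. A quick verification after installation:

rpm -q ceph-fuse
ceph-fuse --version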

 

(2) Mount

[root@c720184 yum.repos.d]# ceph-fuse --keyring /etc/ceph/ceph.client.cephfs.keyring --name client.cephfs -m c720182:6789 /mnt/cephfs

Note: here the keyring ceph.client.cephfs.keyring is used in its original format, unlike the kernel mount; it does not need any modification:

cat /etc/ceph/ceph.client.cephfs.keyring

[client.cephfs]

key = AQBXllpd6s2IHRAAk8iiDAHRiCdc9LlKVTB74w==

 

2019-08-19 21:09:48.274554 7f9c058790c0 -1 init, newargv = 0x560fdf46e8a0 newargc=9

ceph-fuse[16151]: starting ceph client

ceph-fuse[16151]: starting fuse

 

Note: ceph-fuse reads the monitor addresses from /etc/ceph/ceph.conf on the client (the keyring itself contains only the key), so the fstab entry does not need to specify an address.

(3) Mount automatically at boot

[root@c720184 yum.repos.d]# vim /etc/fstab

# /etc/fstab

# Created by anaconda on Tue Jul  9 07:05:16 2019

#

# Accessible filesystems, by reference, are maintained under '/dev/disk'

# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info

#

/dev/mapper/centos-root /                       xfs     defaults        0 0

UUID=9a641a11-b1ab-4ffe-9ed6-584d3bd308a7 /boot                   xfs     defaults        0 0

/dev/mapper/centos-swap swap                    swap    defaults        0 0

#c720182:6789:/ /mnt/cephfs ceph name=cephfs,secretfile=/etc/ceph/cephfskey,_netdev,noatime 0 0

id=cephfs,keyring=/etc/ceph/ceph.client.cephfs.keyring /mnt/cephfs fuse.ceph defaults,_netdev 0 0

 

(4) Test the automatic mount

[root@c720184 yum.repos.d]# umount /mnt/cephfs

[root@c720184 yum.repos.d]# mount /mnt/cephfs

2019-08-19 21:20:10.970017 7f217d1fa0c0 -1 init, newargv = 0x558304b7e0e0 newargc=11

ceph-fuse[16306]: starting ceph client

ceph-fuse[16306]: starting fuse

[root@c720184 yum.repos.d]# df -h

Filesystem               Size  Used Avail Use% Mounted on

/dev/mapper/centos-root   17G  1.5G   16G   9% /

devtmpfs                 908M     0  908M   0% /dev

tmpfs                    920M     0  920M   0% /dev/shm

tmpfs                    920M  8.5M  911M   1% /run

tmpfs                    920M     0  920M   0% /sys/fs/cgroup

/dev/sda1               1014M  145M  870M  15% /boot

tmpfs                    184M     0  184M   0% /run/user/0

/dev/rbd1                 10G  133M  9.9G   2% /mnt/ceph-disk1

ceph-fuse                 18G  1.0G   17G   6% /mnt/cephfs

 

 ===========================================================================

Exporting CephFS through an NFS server

Network File System (NFS) is one of the most popular shared-file protocols and can be used by every Unix-based system. Unix-based clients that do not understand CephFS can still access the Ceph file system over NFS. To do this we need an NFS server that can re-export CephFS as an NFS share. NFS-Ganesha is an NFS server that runs in user space and supports the CephFS File System Abstraction Layer (FSAL) via libcephfs.

 

# To build and install nfs-ganesha, see this post: https://www.cnblogs.com/flytor/p/11430490.html

#yum install -y nfs-utils

# Start the rpcbind service required by NFS

systemctl start rpcbind

systemctl enable rpcbind
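rpcbind is needed here because the export below uses NFSv3. A quick check that the RPC services are registered:

systemctl status rpcbind
rpcinfo -p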

 

# Edit the configuration file

vim /etc/ganesha/ganesha.conf

......

EXPORT
{
  Export_ID = 1;
  Path = "/";
  Pseudo = "/";
  Access_Type = RW;
  SecType = "none";
  NFS_Protocols = "3";
  Squash = No_Root_Squash;
  Transport_Protocols = TCP;
  FSAL {
      Name = CEPH;
  }
}

LOG {
        ## Default log level for all components
        Default_Log_Level = WARN;

        ## Configure per-component log levels.
        Components {
                FSAL = INFO;
                NFS4 = EVENT;
        }

        ## Where to log
        Facility {
                name = FILE;
                destination = "/var/log/ganesha.log";
                enable = active;
        }
}
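For the CEPH FSAL to reach the cluster, the Ganesha host also needs a readable /etc/ceph/ceph.conf and a cephx keyring; by default it authenticates as client.admin. The FSAL block can name a specific user instead. This is a hedged sketch only (User_Id and Secret_Access_Key are FSAL_CEPH options; the values shown reuse this cluster's client.cephfs user and key and should be adjusted to your own):

  FSAL {
      Name = CEPH;
      # optional: authenticate as a dedicated cephx user instead of client.admin
      User_Id = "cephfs";
      Secret_Access_Key = "AQBXllpd6s2IHRAAk8iiDAHRiCdc9LlKVTB74w==";
  }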

# Start the NFS-Ganesha daemon with this configuration file

ganesha.nfsd -f /etc/ganesha/ganesha.conf -L /var/log/ganesha.log -N NIV_DEBUG

[root@c720182 ~]# showmount -e
Export list for c720182:
/ (everyone)

 

# Mount on the client

yum install -y nfs-utils

mkdir /mnt/cephnfs

[root@client ~]# mount -o rw,noatime c720182:/ /mnt/cephnfs/
[root@client ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/cl_-root   50G  1.7G   49G   4% /
devtmpfs              910M     0  910M   0% /dev
tmpfs                 920M     0  920M   0% /dev/shm
tmpfs                 920M   17M  904M   2% /run
tmpfs                 920M     0  920M   0% /sys/fs/cgroup
/dev/sda1            1014M  184M  831M  19% /boot
/dev/mapper/cl_-home  196G   33M  195G   1% /home
tmpfs                 184M     0  184M   0% /run/user/0
ceph-fuse              54G     0   54G   0% /mnt/cephfs
c720182:/              54G     0   54G   0% /mnt/cephnfs
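To make the NFS mount persistent as well, an ordinary NFS entry can be added to /etc/fstab on the client; a minimal sketch (vers=3 matches the NFS_Protocols = "3" setting in ganesha.conf above):

c720182:/ /mnt/cephnfs nfs vers=3,noatime,_netdev 0 0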

 

Reprinted from: https://www.cnblogs.com/flytor/p/11380033.html
