Ceph Object Storage Deployment


1. Install the client operating system

(1) Basic virtual machine setup

In VMware, create a virtual machine running CentOS-7-x86_64-DVD-1908 with a 20 GB disk, and name the virtual machine client, as shown in Figure 7-8.


                  Figure 7-8 Virtual machine configuration

(2) Virtual machine network setup

Set the virtual machine's hostname to client. Configure the network in NAT mode with IP address 192.168.100.100, netmask 255.255.255.0, default gateway 192.168.100.2, and DNS server 192.168.100.2, so that the virtual machine can access the Internet.

2. Configure Ceph object storage

(1) Install the Ceph object gateway package on the ceph-1 node

Ceph object storage is served by the Ceph object gateway daemon (radosgw), so before using object storage we must first install and configure the RGW gateway.

Ceph RGW supports several web servers as its FastCGI front end, such as Nginx and Apache2. Starting with the Ceph Hammer release, deployments made with ceph-deploy use the embedded civetweb server as the front end by default; the alternatives differ only in how they are configured. Here we install and configure RGW with the default civetweb front end.

[root@ceph-1 ~]# cd /opt/osd

[root@ceph-1 osd]# ceph-deploy rgw create ceph-1

(2) Edit the pool list file

Create the file /root/pool containing the following pool names, one per line (the script in the next step reads this file):

.rgw

.rgw.root

.rgw.control

.rgw.gc

.rgw.buckets

.rgw.buckets.index

.rgw.buckets.extra

.log

.intent-log

.usage

.users

.users.email

.users.swift

.users.uid

(3) Write the script that creates and configures the pools

Here a script creates all of the pools needed by object storage in one step.

[root@ceph-1 osd]# vi /root/create_pool.sh

#!/bin/bash


PG_NUM=8

PGP_NUM=8

SIZE=3


for i in `cat /root/pool`

        do

        ceph osd pool create $i $PG_NUM

        ceph osd pool set $i size $SIZE

        done


for i in `cat /root/pool`

        do

        ceph osd pool set $i pgp_num $PGP_NUM

        done
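For reference only (a rule-of-thumb sketch, not part of the original procedure): the fixed PG_NUM=8 suits this small lab cluster. A common guideline targets roughly 100 placement groups per OSD, divided by the replica size and the number of pools, rounded up to the next power of two:

```shell
#!/bin/bash
# Rule-of-thumb PG sizing sketch (an illustrative assumption, not the
# authors' derivation): target ~100 PGs per OSD, divide by replica size
# and pool count, then round up to the next power of two.
pg_count() {
    local osds=$1 size=$2 pools=$3
    local target=$(( osds * 100 / size / pools ))
    local pg=1
    while (( pg < target )); do pg=$(( pg * 2 )); done
    echo "$pg"
}

# 3 OSDs, 3 replicas, 14 pools: (3*100)/3/14 = 7, rounded up to 8,
# which matches the PG_NUM=8 used in the script above.
pg_count 3 3 14
```

At larger scale the same guideline gives larger values (e.g. 12 OSDs with the same pools would suggest 32), which is why production clusters do not reuse lab PG counts.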

(4) Run the script to create all pools used by object storage

[root@ceph-1 osd]# chmod +x /root/create_pool.sh

[root@ceph-1 osd]# /root/create_pool.sh

pool '.rgw' created

set pool 5 size to 3

pool '.rgw.root' already exists

set pool 1 size to 3

pool '.rgw.control' created

set pool 6 size to 3

pool '.rgw.gc' created

set pool 7 size to 3

pool '.rgw.buckets' created

set pool 8 size to 3

pool '.rgw.buckets.index' created

set pool 9 size to 3

pool '.rgw.buckets.extra' created

set pool 10 size to 3

pool '.log' created

set pool 11 size to 3

pool '.intent-log' created

set pool 12 size to 3

pool '.usage' created

set pool 13 size to 3

pool '.users' created

set pool 14 size to 3

pool '.users.email' created

set pool 15 size to 3

pool '.users.swift' created

set pool 16 size to 3

pool '.users.uid' created

set pool 17 size to 3

set pool 5 pgp_num to 8

set pool 1 pgp_num to 8

set pool 6 pgp_num to 8

set pool 7 pgp_num to 8

set pool 8 pgp_num to 8

set pool 9 pgp_num to 8

set pool 10 pgp_num to 8

set pool 11 pgp_num to 8

set pool 12 pgp_num to 8

set pool 13 pgp_num to 8

set pool 14 pgp_num to 8

set pool 15 pgp_num to 8

set pool 16 pgp_num to 8

set pool 17 pgp_num to 8

(5) Verify access to the Ceph cluster

After creating the required pools with the script, test the Ceph cluster so that errors do not surface later in the experiment.

[root@ceph-1 osd]# cp /var/lib/ceph/radosgw/ceph-rgw.ceph-1/keyring /etc/ceph/ceph.client.rgw.ceph-1.keyring

[root@ceph-1 osd]# ceph -s -k /var/lib/ceph/radosgw/ceph-rgw.ceph-1/keyring --name client.rgw.ceph-1

 cluster:

   id:    68ecba50-862d-482e-afe2-f95961ec3323

   health: HEALTH_OK


 services:

   mon: 3 daemons, quorum ceph-1,ceph-2,ceph-3 (age 21m)

   mgr: ceph-1(active, since 21m)

   osd: 3 osds: 3 up (since 21m), 3 in (since 7d)

   rgw: 1 daemon active (ceph-1)


 data:

   pools:   17 pools, 136 pgs

   objects: 187 objects, 1.2 KiB

   usage:   3.0 GiB used, 294 GiB / 297 GiB avail

   pgs:     136 active+clean

3. Accessing Ceph object storage with the S3 API

(1) Create a radosgw user on the ceph-1 node

[root@ceph-1 osd]# radosgw-admin user create --uid=radosgw --display-name="radosgw"

The output is as follows:

{

   "user_id": "radosgw",

   "display_name": "radosgw",

   "email": "",

   "suspended": 0,

   "max_buckets": 1000,

   "subusers": [],

   "keys": [

        {

            "user":"radosgw",

            "access_key": "XTAA1VRTXIKIEH89GUBG",

            "secret_key": "P8i5dC6jeOpVnlK9qKYYN2enLjFcz0fPZne9sxVE"

        }

   ],

   "swift_keys": [],

   "caps": [],

   "op_mask": "read, write, delete",

   "default_placement": "",

   "default_storage_class": "",

   "placement_tags": [],

   "bucket_quota": {

        "enabled": false,

        "check_on_raw": false,

        "max_size": -1,

        "max_size_kb": 0,

        "max_objects": -1

   },

   "user_quota": {

        "enabled": false,

        "check_on_raw": false,

        "max_size": -1,

        "max_size_kb": 0,

        "max_objects": -1

   },

   "temp_url_keys": [],

   "type": "rgw",

   "mfa_ids": []

}
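For the later steps, only the access_key and secret_key from this output matter. If you want to capture them in a script, one approach is a quick sed extraction over the JSON. This is only a sketch: the inline JSON below is a placeholder for the real radosgw-admin output, and jq would be more robust if it is installed.

```shell
#!/bin/bash
# Sketch: pull the S3 keys out of radosgw-admin style JSON. The inline
# string stands in for `radosgw-admin user info --uid=radosgw` output;
# the key values here are placeholders, not real credentials.
json='{"keys":[{"user":"radosgw","access_key":"AKEXAMPLE","secret_key":"SKEXAMPLE"}]}'

access_key=$(printf '%s' "$json" | sed -n 's/.*"access_key": *"\([^"]*\)".*/\1/p')
secret_key=$(printf '%s' "$json" | sed -n 's/.*"secret_key": *"\([^"]*\)".*/\1/p')

echo "$access_key $secret_key"
```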

(2) Install bind on the client node

[root@client ~]# mkdir /opt/bak

[root@client ~]# cd /etc/yum.repos.d

[root@client yum.repos.d]# mv * /opt/bak

Copy CentOS7-Base-163.repo into the client node's /etc/yum.repos.d directory via SFTP.

[root@client yum.repos.d]# ls

CentOS7-Base-163.repo

[root@client yum.repos.d]# yum clean all

[root@client yum.repos.d]# yum makecache

[root@client yum.repos.d]# yum -y install bind

(3) Edit the bind main configuration file

[root@client ~]# vi /etc/named.conf

Modify the following settings:

listen-on port 53 { 127.0.0.1; 192.168.100.100; };

allow-query     { localhost; 192.168.100.0/24; };

Add the following zone:

zone "lab.net" IN {

        type master;

        file "db.lab.net";

        allow-update { none; };

};

(4) Edit the zone file for the lab.net domain

[root@client ~]# vi /var/named/db.lab.net

@ 86400 IN SOA lab.net. root.lab.net. (

        20191120   ; serial

        10800      ; refresh

        3600       ; retry

        3600000    ; expire

        86400 )    ; minimum TTL

@ 86400 IN NS lab.net.

@ 86400 IN A 192.168.100.101

* 86400 IN CNAME @

(5) Check the configuration files

[root@client ~]# named-checkconf /etc/named.conf

[root@client ~]# named-checkzone lab.net /var/named/db.lab.net

zone lab.net/IN: loaded serial 20191120

OK

(6) Start the bind service

[root@client ~]# systemctl start named

[root@client ~]# systemctl enable named

Created symlink from /etc/systemd/system/multi-user.target.wants/named.service to /usr/lib/systemd/system/named.service.

(7) Edit the NIC configuration file

In the configuration file, point the DNS server at client's own IP address.

[root@client ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens32

DNS1=192.168.100.100

(8) Edit /etc/resolv.conf

In this file too, point the DNS server at client's own IP address.

[root@client ~]# vi /etc/resolv.conf

nameserver 192.168.100.100

(9) Install nslookup and test the DNS configuration

[root@client ~]# yum -y install bind-utils

[root@client ~]# nslookup

> ceph-1.lab.net

Server:         192.168.100.100

Address:        192.168.100.100#53


ceph-1.lab.net  canonical name = lab.net.

Name:   lab.net

Address: 192.168.100.101

> exit

(10) Install s3cmd

Visit https://s3tools.org/download and download s3cmd version 2.0.2.

[root@client ~]# ls

anaconda-ks.cfg  s3cmd-2.0.2.zip

[root@client ~]# yum -y install unzip python-dateutil

[root@client ~]# unzip s3cmd-2.0.2.zip

……

(11) Configure s3cmd

[root@client ~]# cd s3cmd-2.0.2

[root@client s3cmd-2.0.2]# ./s3cmd --configure


Enter new values or accept defaults in brackets with Enter.

Refer to user manual for detailed description of all options.


Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.

Access Key: XTAA1VRTXIKIEH89GUBG    (enter the access_key shown on the ceph-1 node)

Secret Key: P8i5dC6jeOpVnlK9qKYYN2enLjFcz0fPZne9sxVE    (enter the secret_key shown on the ceph-1 node)

Default Region [US]: (press Enter)


Use "s3.amazonaws.com"for S3 Endpoint and not modify it to the target Amazon S3.

S3 Endpoint [s3.amazonaws.com]: ceph-1.lab.net:7480


Use "%(bucket)s.s3.amazonaws.com"to the target Amazon S3. "%(bucket)s" and "%(location)s"vars can be used

if the target S3 system supports dns based buckets.

DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: %(bucket)s.ceph-1.lab.net:7480


Encryption password is used to protect your files from reading

by unauthorized persons while in transfer to S3

Encryption password: (press Enter)

Path to GPG program [/usr/bin/gpg]: (press Enter)


When using secure HTTPS protocol all communication with Amazon S3

servers is protected from 3rd party eavesdropping. This method is

slower than plain HTTP, and can only be proxied with Python 2.7 or newer

Use HTTPS protocol [Yes]: no


On some networks all internet access must go through a HTTP proxy.

Try setting it here if you can't connect to S3 directly

HTTP Proxy server name: (press Enter)


New settings:

 Access Key: XTAA1VRTXIKIEH89GUBG

 Secret Key: P8i5dC6jeOpVnlK9qKYYN2enLjFcz0fPZne9sxVE

 Default Region: US

 S3 Endpoint: ceph-1.lab.net:7480

 DNS-style bucket+hostname:port template for accessing a bucket: %(bucket)s.ceph-1.lab.net:7480

 Encryption password:

 Path to GPG program: /usr/bin/gpg

 Use HTTPS protocol: False

 HTTP Proxy server name:

 HTTP Proxy server port: 0


Test access with supplied credentials? [Y/n] n


Save settings? [y/N] y

Configuration saved to '/root/.s3cfg'
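Two of the values entered above deserve a note: the endpoint ceph-1.lab.net:7480 points at RGW's civetweb port, and the DNS-style template tells s3cmd to address each bucket as a hostname prefix, which is exactly what the wildcard DNS record set up earlier resolves. A small illustrative sketch of how a %(bucket)s template expands (the helper function is an assumption for demonstration; s3cmd performs this substitution internally in Python):

```shell
#!/bin/bash
# Illustrative expansion of s3cmd's %(bucket)s host template. This helper
# is a demonstration only, not part of s3cmd.
bucket_host() {
    local template=$1 bucket=$2
    printf '%s\n' "${template//%(bucket)s/$bucket}"
}

bucket_host '%(bucket)s.ceph-1.lab.net:7480' mybucket
```

The result, mybucket.ceph-1.lab.net, matches the wildcard record `* 86400 IN CNAME @` in the zone file, so any bucket name resolves to the gateway's address.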

(12) List buckets

[root@client s3cmd-2.0.2]# ./s3cmd ls

(13) Create a bucket named bucket

[root@client s3cmd-2.0.2]# ./s3cmd mb s3://bucket

Bucket 's3://bucket/' created

[root@client s3cmd-2.0.2]# ./s3cmd ls

2019-11-23 07:45  s3://bucket

(14) Upload a file to the bucket

[root@client s3cmd-2.0.2]# ./s3cmd put /etc/hosts s3://bucket

WARNING: Module python-magic is not available. Guessing MIME types based on file extensions.

upload: '/etc/hosts' -> 's3://bucket/hosts'  [1 of 1]

 158 of 158  100% in    1s   107.77 B/s done

[root@client s3cmd-2.0.2]# ./s3cmd ls s3://bucket

2019-11-23 07:46       158  s3://bucket/hosts
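Although we chose plain HTTP, each s3cmd request is still authenticated: the client signs a canonical string with the secret key using HMAC-SHA1 and sends the base64-encoded result (AWS Signature V2, which RGW accepts). A self-contained sketch of just the signing step using openssl; the inputs in the example call are a standard HMAC test vector, not real credentials:

```shell
#!/bin/bash
# Sketch of the AWS Signature V2 HMAC step that S3 clients perform per
# request: base64(HMAC-SHA1(secret_key, string_to_sign)).
sign_v2() {
    local secret=$1 string_to_sign=$2
    printf '%s' "$string_to_sign" \
        | openssl dgst -sha1 -hmac "$secret" -binary \
        | openssl base64
}

# Example with a well-known HMAC-SHA1 test vector (placeholder inputs):
sign_v2 'key' 'The quick brown fox jumps over the lazy dog'
```

In a real request, string_to_sign is built from the HTTP verb, date, and resource path, and the resulting signature goes into the Authorization header.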

4. Accessing Ceph object storage with the Swift API

(1) Create a Swift user

Accessing the object gateway through Swift requires a Swift user, so we create radosgw:swift as a subuser. On the ceph-1 node, create the subuser radosgw:swift under the radosgw user.

[root@ceph-1 osd]# radosgw-admin subuser create --uid=radosgw --subuser=radosgw:swift --display-name="radosgw-sub" --access=full

The output is as follows:

{

   "user_id": "radosgw",

   "display_name": "radosgw",

   "email": "",

   "suspended": 0,

   "max_buckets": 1000,

   "subusers": [

        {

            "id":"radosgw:swift",

            "permissions":"full-control"

        }

   ],

   "keys": [

        {

            "user":"radosgw",

            "access_key":"XTAA1VRTXIKIEH89GUBG",

            "secret_key": "P8i5dC6jeOpVnlK9qKYYN2enLjFcz0fPZne9sxVE"

        }

   ],

   "swift_keys": [

        {

            "user":"radosgw:swift",

            "secret_key": "8uocgTs9CO3tWN8oSc2MDmGPodeotcKUr4454i37"

        }

   ],

   "caps": [],

   "op_mask": "read, write, delete",

   "default_placement": "",

   "default_storage_class": "",

   "placement_tags": [],

   "bucket_quota": {

        "enabled": false,

        "check_on_raw": false,

        "max_size": -1,

        "max_size_kb": 0,

        "max_objects": -1

   },

   "user_quota": {

        "enabled": false,

        "check_on_raw": false,

        "max_size": -1,

        "max_size_kb": 0,

       "max_objects": -1

   },

   "temp_url_keys": [],

   "type": "rgw",

   "mfa_ids": []

}

Note: from the returned JSON, remember the secret_key under swift_keys; it is needed below when testing access through the Swift interface.

(2) Install the swift client on the client node

[root@client ~]# yum -y install python-setuptools

[root@client ~]# easy_install pip

[root@client ~]# pip install --upgrade setuptools

[root@client ~]# pip install --upgrade python-swiftclient

(3) List containers (buckets) with swift

[root@client ~]# swift -A http://192.168.100.101:7480/auth/1.0 -U radosgw:swift -K 8uocgTs9CO3tWN8oSc2MDmGPodeotcKUr4454i37 list

bucket

Note: 192.168.100.101 is the IP address of the node running the object gateway (ceph-1 in this deployment). The default port is 7480; if you have changed the port number, adjust the URL accordingly. The key passed with -K is the secret_key from the output above.

(4) Create a container named container

[root@client ~]# swift -A http://192.168.100.101:7480/auth/1.0 -U radosgw:swift -K 8uocgTs9CO3tWN8oSc2MDmGPodeotcKUr4454i37 post container

[root@client ~]# swift -A http://192.168.100.101:7480/auth/1.0 -U radosgw:swift -K 8uocgTs9CO3tWN8oSc2MDmGPodeotcKUr4454i37 list

bucket

container

(5) Upload the anaconda-ks.cfg file to the container

[root@client ~]# swift -A http://192.168.100.101:7480/auth/1.0 -U radosgw:swift -K 8uocgTs9CO3tWN8oSc2MDmGPodeotcKUr4454i37 upload container anaconda-ks.cfg

anaconda-ks.cfg

[root@client ~]# swift -A http://192.168.100.101:7480/auth/1.0 -U radosgw:swift -K 8uocgTs9CO3tWN8oSc2MDmGPodeotcKUr4454i37 list container

anaconda-ks.cfg
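The -A/-U/-K options drive Swift's legacy v1 authentication: the client first calls the /auth/1.0 endpoint with X-Auth-User and X-Auth-Key headers and receives a token for subsequent requests. A dry-run sketch that only prints the equivalent curl command without contacting the gateway (the helper function is illustrative, not part of the swift client):

```shell
#!/bin/bash
# Dry-run sketch: build (but do not execute) the curl equivalent of the
# swift client's v1 auth handshake against RGW's /auth/1.0 endpoint.
swift_auth_cmd() {
    local endpoint=$1 user=$2 key=$3
    printf 'curl -si -H "X-Auth-User: %s" -H "X-Auth-Key: %s" %s/auth/1.0\n' \
        "$user" "$key" "$endpoint"
}

# Placeholder secret; substitute the swift_keys secret_key from earlier.
swift_auth_cmd http://192.168.100.101:7480 radosgw:swift SECRET_KEY_HERE
```

Running the printed command returns X-Auth-Token and X-Storage-Url headers, which the swift client then uses for the list, post, and upload calls shown above.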

(6) Change the port

If we want to change the default port 7480 to some other value, Ceph supports that as well: change the port in the Ceph configuration file, then restart the Ceph object gateway. For example, to change the port to 80:

Edit the Ceph configuration file:

vi /etc/ceph/ceph.conf

Add a section for the gateway instance. The section name must match the RGW instance created earlier (here rgw.ceph-1):

[client.rgw.ceph-1]

rgw_frontends = "civetweb port=80"

Then restart the object gateway:

systemctl restart ceph-radosgw@rgw.ceph-1.service
