After getting a Ceph cluster up and running, I was honestly at a loss about how to actually use it. Offering three storage interfaces at once (object, file, and block) sounded almost too good to be true. My project's requirements pointed to the object storage interface, which raised the next question: how do you configure object storage over IPv6?
```
[root@ceph001 ~]# ceph -s
    cluster 2818c750-8724-4a70-bb26-f01af7f6067f
     health HEALTH_OK
     monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}
            election epoch 1, quorum 0 ceph001
     osdmap e17: 3 osds: 3 up, 3 in
      pgmap v26: 128 pgs, 1 pools, 0 bytes data, 0 objects
            101676 kB used, 284 GB / 284 GB avail
                 128 active+clean
```
For how the cluster itself was built, see my earlier tutorial on configuring a single-node Ceph cluster over IPv6.
Since the Firefly (v0.80) release, the gateway daemon embeds the Civetweb web server, so there is no longer any need to install a separate web server or configure FastCGI, which greatly simplifies installing and configuring the Ceph Object Gateway. This tutorial uses Civetweb as well.
Install ceph-radosgw
```
[root@ceph001 cluster]# yum install ceph-radosgw
Loaded plugins: fastestmirror, langpacks
base                                            | 3.6 kB  00:00:00
ceph                                            | 2.9 kB  00:00:00
ceph-noarch                                     | 2.9 kB  00:00:00
epel                                            | 4.3 kB  00:00:00
extras                                          | 3.4 kB  00:00:00
updates                                         | 3.4 kB  00:00:00
(1/9): ceph-noarch/primary_db                   | 5.4 kB  00:00:00
(2/9): base/x86_64/group_gz                     | 155 kB  00:00:00
(3/9): epel/x86_64/group_gz                     | 170 kB  00:00:01
(4/9): ceph/primary_db                          | 160 kB  00:00:01
(5/9): extras/x86_64/primary_db                 | 166 kB  00:00:00
(6/9): epel/x86_64/updateinfo                   | 673 kB  00:00:01
(7/9): epel/x86_64/primary_db                   | 4.3 MB  00:00:17
(8/9): base/x86_64/primary_db                   | 5.3 MB  00:00:20
(9/9): updates/x86_64/primary_db                | 9.1 MB  00:00:27
Determining fastest mirrors
Resolving Dependencies
--> Running transaction check
---> Package ceph-radosgw.x86_64 1:0.94.9-0.el7 will be installed
--> Processing Dependency: mailcap for package: 1:ceph-radosgw-0.94.9-0.el7.x86_64
--> Processing Dependency: libfcgi.so.0()(64bit) for package: 1:ceph-radosgw-0.94.9-0.el7.x86_64
--> Running transaction check
---> Package fcgi.x86_64 0:2.4.0-25.el7 will be installed
---> Package mailcap.noarch 0:2.1.41-2.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=======================================================================================
 Package              Arch          Version               Repository        Size
=======================================================================================
Installing:
 ceph-radosgw         x86_64        1:0.94.9-0.el7        ceph             2.3 M
Installing for dependencies:
 fcgi                 x86_64        2.4.0-25.el7          epel              47 k
 mailcap              noarch        2.1.41-2.el7          base              31 k

Transaction Summary
=======================================================================================
Install  1 Package (+2 Dependent packages)

Total download size: 2.4 M
Installed size: 8.6 M
Is this ok [y/d/N]: y
Downloading packages:
(1/3): mailcap-2.1.41-2.el7.noarch.rpm          |  31 kB  00:00:00
(2/3): fcgi-2.4.0-25.el7.x86_64.rpm             |  47 kB  00:00:00
(3/3): ceph-radosgw-0.94.9-0.el7.x86_64.rpm     | 2.3 MB  00:00:02
---------------------------------------------------------------------------------------
Total                                  867 kB/s | 2.4 MB  00:02
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : fcgi-2.4.0-25.el7.x86_64                 1/3
  Installing : mailcap-2.1.41-2.el7.noarch              2/3
  Installing : 1:ceph-radosgw-0.94.9-0.el7.x86_64       3/3
  Verifying  : mailcap-2.1.41-2.el7.noarch              1/3
  Verifying  : 1:ceph-radosgw-0.94.9-0.el7.x86_64       2/3
  Verifying  : fcgi-2.4.0-25.el7.x86_64                 3/3

Installed:
  ceph-radosgw.x86_64 1:0.94.9-0.el7

Dependency Installed:
  fcgi.x86_64 0:2.4.0-25.el7          mailcap.noarch 0:2.1.41-2.el7

Complete!
```
Set up the object gateway admin node
```
[root@ceph001 cluster]# ceph-deploy admin ceph001
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/bin/ceph-deploy admin ceph001
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       :
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph001']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7fe09b38f410>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph001
[ceph001][DEBUG ] connection detected need for sudo
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
```
Next, create an RGW instance on the node:
```
[root@ceph001 cluster]# ceph-deploy rgw create ceph001
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/bin/ceph-deploy rgw create ceph001
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  rgw                           : [('ceph001', 'rgw.ceph001')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       :
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function rgw at 0x29e7230>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts ceph001:rgw.ceph001
[ceph001][DEBUG ] connection detected need for sudo
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.rgw][DEBUG ] remote host will use sysvinit
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to ceph001
[ceph001][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph001][DEBUG ] create path recursively if it doesn't exist
[ceph001][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.ceph001 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.ceph001/keyring
[ceph001][INFO  ] Running command: sudo service ceph-radosgw start
[ceph001][DEBUG ] Reloading systemd:                            [  OK  ]
[ceph001][DEBUG ] Starting ceph-radosgw (via systemctl):        [  OK  ]
[ceph001][INFO  ] Running command: sudo systemctl enable ceph-radosgw
[ceph001][WARNIN] ceph-radosgw.service is not a native service, redirecting to /sbin/chkconfig.
[ceph001][WARNIN] Executing /sbin/chkconfig ceph-radosgw on
[ceph_deploy.rgw][INFO  ] The Ceph Object Gateway (RGW) is now running on host ceph001 and default port 7480
```
The ceph-radosgw daemon's embedded Civetweb web server listens on port 7480 by default, which you can confirm with:
```
[root@ceph001 ceph]# netstat -nlp | grep 7480
tcp        0      0 0.0.0.0:7480        0.0.0.0:*        LISTEN      3537/radosgw
```
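As an optional sanity check, an unauthenticated request to that port should come back with RGW's empty bucket listing for the anonymous user, for example:

```sh
# The gateway speaks plain HTTP on 7480; any HTTP client will do here.
curl http://127.0.0.1:7480/
```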
As the netstat output above shows, the frontend binds the IPv4 wildcard by default. To serve IPv6 on port 80 instead, add a gateway section to ceph.conf and push the updated config to the node:
```
[root@ceph001 cluster]# vim ceph.conf

[global]
fsid = 2818c750-8724-4a70-bb26-f01af7f6067f
ms_bind_ipv6 = true
mon_initial_members = ceph001
mon_host = [2001:250:4402:2001:20c:29ff:fe25:8888]
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 1

# IPv6 gateway configuration
[client.rgw.ceph001]
rgw_frontends = "civetweb port=[::]:80"

[root@ceph001 cluster]# ceph-deploy --overwrite-conf config push ceph001
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/bin/ceph-deploy --overwrite-conf config push ceph001
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : push
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       :
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph001']
[ceph_deploy.cli][INFO  ]  func                          : <function config at 0x7fdc072652a8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.config][DEBUG ] Pushing config to ceph001
[ceph001][DEBUG ] connection detected need for sudo
[ceph001][DEBUG ] connected to host: ceph001
[ceph001][DEBUG ] detect platform information from remote host
[ceph001][DEBUG ] detect machine type
[ceph001][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
```
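An aside on port=[::]:80: this binds Civetweb to the IPv6 wildcard address. Whether such a socket also accepts IPv4 clients (as IPv4-mapped addresses) depends on the system's dual-stack behavior, and possibly on the Civetweb build; on Linux the default is governed by the sysctl below. This is background, not a required step:

```sh
# 0 (the usual Linux default) lets an IPv6 wildcard socket also accept
# IPv4 connections as IPv4-mapped addresses; 1 restricts it to IPv6 only.
sysctl net.ipv6.bindv6only
```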
Restart the service so the new frontend takes effect, and check the ceph-radosgw daemon again.
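The deploy log above shows that this Hammer release manages the gateway through a sysvinit-style wrapper, so the restart should be along these lines:

```sh
[root@ceph001 ceph]# service ceph-radosgw restart
```

After the restart, the daemon listens on the IPv6 wildcard on port 80: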
```
[root@ceph001 ceph]# netstat -nlp | grep 80
tcp6       0      0 :::80        :::*        LISTEN      3540/radosgw
```
You can now reach the IPv6 object gateway from a browser (http://[2001:250:4402:2001:20c:29ff:fe25:8888]/).
The browser should display RGW's bucket listing for the anonymous user.
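The same check works from the shell. The -g flag keeps curl from interpreting the brackets of the IPv6 literal as glob characters:

```sh
curl -g "http://[2001:250:4402:2001:20c:29ff:fe25:8888]/"
```

The response should be RGW's stock anonymous listing, along these lines (exact formatting may vary by version):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner>
  <Buckets></Buckets>
</ListAllMyBucketsResult>
```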
Congratulations, the IPv6 gateway configuration is complete. For more detail, see the official Ceph Object Gateway documentation.
Create a user (S3 interface)
```
[root@ceph001 ~]# radosgw-admin user create --uid=lemon --display-name="柠檬" --email=[email protected]
{
    "user_id": "lemon",
    "display_name": "柠檬",
    "email": "[email protected]",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "lemon",
            "access_key": "29YAB6D3BVRBQQDFVLHI",
            "secret_key": "QVPTxEvZHxQJhNdR58tZCfsgyP37jOKBKiPg1TaU"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "temp_url_keys": []
}
```
The generated access_key and secret_key work with any client that speaks the S3 API. For more administration options, see the official Admin Guide.
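If you ever need to look the keys up again, radosgw-admin can re-print the same JSON at any time:

```sh
# Dumps the user's metadata, including the access/secret key pair.
radosgw-admin user info --uid=lemon
```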
Verifying the IPv6 deployment with the CloudBerry Explorer for Amazon S3 client
Download the test client, CloudBerry Explorer for Amazon S3 for Windows.

Launch the client once the installation finishes.

Add the newly created Ceph object store as an account:

Click File -> Edit Accounts in the menu bar.

Click Add -> S3 Compatible.

Fill in the connection parameters: a display name, the service point (here, the gateway address [2001:250:4402:2001:20c:29ff:fe25:8888]), and the access and secret keys generated above, then click Test Connection.

If the test succeeds, a "connection success" dialog pops up.
Usage
Select the account you just added (ceph001 here). From there you can create buckets and upload and download files through the client; I won't walk through every operation.
On the Ceph side, ceph -w shows client activity in real time. Here, for example, the client is writing data:
```
[root@ceph001 ~]# ceph -w
    cluster 2818c750-8724-4a70-bb26-f01af7f6067f
     health HEALTH_OK
     monmap e1: 1 mons at {ceph001=[2001:250:4402:2001:20c:29ff:fe25:8888]:6789/0}
            election epoch 1, quorum 0 ceph001
     osdmap e54: 3 osds: 3 up, 3 in
      pgmap v312: 200 pgs, 10 pools, 425 MB data, 164 objects
            826 MB used, 284 GB / 284 GB avail
                 200 active+clean
  client io 11630 kB/s wr, 25 op/s

2016-11-10 10:30:23.606053 mon.0 [INF] pgmap v312: 200 pgs: 200 active+clean; 425 MB data, 826 MB used, 284 GB / 284 GB avail; 11630 kB/s wr, 25 op/s
2016-11-10 10:30:27.197373 mon.0 [INF] pgmap v313: 200 pgs: 200 active+clean; 443 MB data, 866 MB used, 284 GB / 284 GB avail; 7826 kB/s wr, 17 op/s
2016-11-10 10:30:28.810358 mon.0 [INF] pgmap v314: 200 pgs: 200 active+clean; 457 MB data, 910 MB used, 283 GB / 284 GB avail; 5914 kB/s wr, 12 op/s
2016-11-10 10:30:31.830126 mon.0 [INF] pgmap v315: 200 pgs: 200 active+clean; 466 MB data, 927 MB used, 283 GB / 284 GB avail; 5015 kB/s wr, 11 op/s
2016-11-10 10:30:32.918332 mon.0 [INF] pgmap v316: 200 pgs: 200 active+clean; 502 MB data, 1011 MB used, 283 GB / 284 GB avail; 10214 kB/s wr, 22 op/s
2016-11-10 10:30:37.113515 mon.0 [INF] pgmap v317: 200 pgs: 200 active+clean; 521 MB data, 1037 MB used, 283 GB / 284 GB avail; 11093 kB/s wr, 24 op/s
2016-11-10 10:30:38.256587 mon.0 [INF] pgmap v318: 200 pgs: 200 active+clean; 563 MB data, 1082 MB used, 283 GB / 284 GB avail; 11669 kB/s wr, 25 op/s
2016-11-10 10:30:42.089761 mon.0 [INF] pgmap v319: 200 pgs: 200 active+clean; 580 MB data, 1106 MB used, 283 GB / 284 GB avail; 11572 kB/s wr, 25 op/s
2016-11-10 10:30:43.099061 mon.0 [INF] pgmap v320: 200 pgs: 200 active+clean; 609 MB data, 1162 MB used, 283 GB / 284 GB avail; 9575 kB/s wr, 21 op/s
2016-11-10 10:30:47.423680 mon.0 [INF] pgmap v321: 200 pgs: 200 active+clean; 628 MB data, 1184 MB used, 283 GB / 284 GB avail; 9104 kB/s wr, 19 op/s
2016-11-10 10:30:48.938458 mon.0 [INF] pgmap v322: 200 pgs: 200 active+clean; 652 MB data, 1212 MB used, 283 GB / 284 GB avail; 7914 kB/s wr, 17 op/s
2016-11-10 10:30:49.948222 mon.0 [INF] pgmap v323: 200 pgs: 200 active+clean; 660 MB data, 1216 MB used, 283 GB / 284 GB avail; 13007 kB/s wr, 28 op/s
2016-11-10 10:30:52.843301 mon.0 [INF] pgmap v324: 200 pgs: 200 active+clean; 668 MB data, 1238 MB used, 283 GB / 284 GB avail; 3975 kB/s wr, 8 op/s
2016-11-10 10:30:54.968022 mon.0 [INF] pgmap v325: 200 pgs: 200 active+clean; 714 MB data, 1278 MB used, 283 GB / 284 GB avail; 12919 kB/s wr, 28 op/s
2016-11-10 10:30:58.521788 mon.0 [INF] pgmap v326: 200 pgs: 200 active+clean; 730 MB data, 1290 MB used, 283 GB / 284 GB avail; 12325 kB/s wr, 27 op/s
2016-11-10 10:30:59.558175 mon.0 [INF] pgmap v327: 200 pgs: 200 active+clean; 764 MB data, 1334 MB used, 283 GB / 284 GB avail; 9621 kB/s wr, 20 op/s
2016-11-10 10:31:03.218629 mon.0 [INF] pgmap v328: 200 pgs: 200 active+clean; 776 MB data, 1354 MB used, 283 GB / 284 GB avail; 8880 kB/s wr, 19 op/s
2016-11-10 10:31:04.234516 mon.0 [INF] pgmap v329: 200 pgs: 200 active+clean; 816 MB data, 1382 MB used, 283 GB / 284 GB avail; 11345 kB/s wr, 24 op/s
2016-11-10 10:31:08.422236 mon.0 [INF] pgmap v330: 200 pgs: 200 active+clean; 820 MB data, 1398 MB used, 283 GB / 284 GB avail; 8706 kB/s wr, 19 op/s
2016-11-10 10:31:09.870466 mon.0 [INF] pgmap v331: 200 pgs: 200 active+clean; 856 MB data, 1463 MB used, 283 GB / 284 GB avail; 7887 kB/s wr, 17 op/s
2016-11-10 10:31:14.003109 mon.0 [INF] pgmap v332: 200 pgs: 200 active+clean; 884 MB data, 1491 MB used, 283 GB / 284 GB avail; 12958 kB/s wr, 28 op/s
...
```
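Incidentally, note that the pgmap line now reports 10 pools where the freshly built cluster had 1: RGW creates its metadata and data pools on demand the first time they are needed. You can list them with:

```sh
# The gateway's pools (names like .rgw and .rgw.buckets) appear here.
rados lspools
```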
If you would rather build your own S3 client, you can call the S3 API directly; see the official S3 API reference.
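To give a flavor of what that involves, here is a minimal hand-rolled request sketch (assuming AWS signature v2, which this Hammer-era gateway accepts; the keys and endpoint are the ones from this post):

```sh
#!/bin/sh
# Sketch: GET / as user "lemon" with an AWS-v2 signature.
# String to sign: VERB\nContent-MD5\nContent-Type\nDate\nCanonicalizedResource
ACCESS_KEY=29YAB6D3BVRBQQDFVLHI
SECRET_KEY=QVPTxEvZHxQJhNdR58tZCfsgyP37jOKBKiPg1TaU
HOST='[2001:250:4402:2001:20c:29ff:fe25:8888]'
RESOURCE='/'
DATE=$(date -R)
SIGNATURE=$(printf 'GET\n\n\n%s\n%s' "$DATE" "$RESOURCE" \
    | openssl sha1 -hmac "$SECRET_KEY" -binary | base64)
# -g stops curl from globbing the brackets in the IPv6 literal.
curl -g -H "Date: $DATE" \
     -H "Authorization: AWS $ACCESS_KEY:$SIGNATURE" \
     "http://$HOST$RESOURCE"
```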
I hope this helps.