(1) Node requirements
==》Minimum hardware requirements per node role
Role        Resource           Minimum                              Recommended
-----------------------------------------------------------------------------------------------------------------
ceph-osd    RAM                ~500MB per daemon                    ~1GB RAM per 1TB of storage per daemon
            Volume Storage     1x storage drive per daemon          >1TB storage drive per daemon
            Journal Storage    5GB (default)                        SSD, >1GB per 1TB of storage per daemon
Network 2x 1GB Ethernet NICs 2x10GB Ethernet NICs
-----------------------------------------------------------------------------------------------------------------
ceph-mon RAM 1 GB per daemon 2 GB per daemon
Disk Space 10 GB per daemon >20 GB per daemon
Network 2x 1GB Ethernet NICs 2x10GB Ethernet NICs
-----------------------------------------------------------------------------------------------------------------
ceph-mds RAM 1 GB minimum per daemon >2GB per daemon
Disk Space 1 MB per daemon >1MB per daemon
Network 2x 1GB Ethernet NICs 2x10GB Ethernet NICs
==》OS environment
CentOS7
kernel: 3.10.0-229.el7.x86_64
==》Lab environment
a) 1x PC (RAM > 6GB, disk > 100GB)
b) VirtualBox
c) CentOS 7.1 (3.10.0-229.el7.x86_64) ISO image
==》Basic environment setup: node layout
Hostname      Role           OS                                        Disk
=====================================================================================================
a) admnode deploy-node CentOS7.1(3.10.0-229.el7.x86_64)
b) node1 mon,osd CentOS7.1(3.10.0-229.el7.x86_64) Disk(/dev/sdb capacity:10G)
c) node2 osd CentOS7.1(3.10.0-229.el7.x86_64) Disk(/dev/sdb capacity:10G)
d) node3 osd CentOS7.1(3.10.0-229.el7.x86_64) Disk(/dev/sdb capacity:10G)
(2) Configure proxy access to the Internet on the nodes
a) Edit /etc/yum.conf and add the following:
proxy=http://<proxyserver's IP>:port/
proxy_username=<G08's username>
proxy_password=<G08's password>
b) Set a global HTTP proxy. For a non-root user, add the following lines to /etc/environment; for the root user, create a new XXX.sh file under /etc/profile.d/ and add export https_proxy=XXXXX there (a sketch is given after this block):
http_proxy=http://username:password@proxyserver:port/
https_proxy=http://username:password@proxyserver:port/
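For the root-user case, a minimal sketch of the /etc/profile.d approach; the file name proxy.sh and the username:password@proxyserver:port values are placeholders, not taken from the original:
# sudo vim /etc/profile.d/proxy.sh
export http_proxy=http://username:password@proxyserver:port/
export https_proxy=http://username:password@proxyserver:port/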
(3) In-cluster yum repository configuration (Note: if you configure this step, do NOT also configure step (2), the proxy setup.)
3.1 Yum repository configuration on the server side
a) Install vsftpd on admnode via yum or rpm
# yum install vsftpd
Start the vsftpd service
# systemctl start vsftpd
Stop the firewall
# service iptables stop
Disable SELinux enforcement
# setenforce 0
Make sure no proxy is configured in /etc/yum.conf; comment out the three proxy lines below
####proxy=http://<proxyserver's IP>:port/
####proxy_username=<G08's username>
####proxy_password=<G08's password>
Make sure no HTTP proxy or any other proxy service is set in the shell (e.g. the yum.conf proxy, the http_proxy variable, etc.)
# unset http_proxy
Use a browser to verify that the FTP service works.
Enter ftp://ip/pub/ in the browser address bar on both the server and client machines; the corresponding directory listing should be displayed.
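The same check can also be done from the command line; a sketch assuming the FTP server IP is 10.167.221.108 (the example IP used in the repo files below):
# curl ftp://10.167.221.108/pub/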
b) Copy the Ceph packages.
Copy all required rpm packages into the /var/ftp/pub/<self-content> directory.
c) Create the yum repository.
Install the createrepo tool
# yum install createrepo
Generate the repodata metadata for the yum repository
# createrepo /var/ftp/pub
d) Configure the local yum repository.
Create a new *.repo file under /etc/yum.repos.d/ (e.g. local_yum.repo) with the following content:
[local_yum]                        # repository id
name=local_yum                     # description
baseurl=ftp://10.167.221.108/pub/  # repository URL; use the IP of the FTP server
enabled=1                          # whether this repo is enabled: 0 = disabled, 1 = enabled
gpgcheck=0                         # GPG key check: 0 = disabled, 1 = enabled
Disable the default repositories (Note: usually only CentOS-Base.repo needs to be renamed, but if any other repo is active, append .bak to it as well)
# cd /etc/yum.repos.d/
# mv CentOS-Base.repo CentOS-Base.repo.bak
Update the repository metadata on the server so that clients pick up the changed rpm packages.
# yum clean all
# createrepo --update /var/ftp/pub/
# createrepo /var/ftp/pub/
e) List the configured repositories and install packages from the local yum repository
#yum repolist all
# yum install <local-yum-Software-package-name>
3.2 Yum client configuration
a) Preparation.
Stop the firewall
# service iptables stop
Disable SELinux enforcement
# setenforce 0
Make sure no HTTP proxy or any other proxy service is set in the shell
# unset http_proxy
Use a browser to verify that the FTP service works: enter ftp://ip/pub/ in the browser address bar; the corresponding directory listing should be displayed.
b) Configure the cluster yum repository on the client.
Create a new *.repo file under /etc/yum.repos.d/ (e.g. local_yum.repo) with the following content:
[local_yum]                        # repository id
name=local_yum                     # description
baseurl=ftp://10.167.221.108/pub/  # repository URL; use the IP of the FTP server
enabled=1                          # whether this repo is enabled: 0 = disabled, 1 = enabled
gpgcheck=0                         # GPG key check: 0 = disabled, 1 = enabled
Disable the default repositories (Note: usually only CentOS-Base.repo needs to be renamed, but if any other repo is active, append .bak to it as well)
# cd /etc/yum.repos.d/
# mv CentOS-Base.repo CentOS-Base.repo.bak
c) List the configured repositories and install packages from the local yum repository
#yum repolist all
# yum install <local-yum-Software-package-name>
(b) Enter the command
# sudo vim /etc/yum.repos.d/ceph.repo
and paste in the following (Note: to install the hammer release instead, replace "rpm-infernalis" below with "rpm-hammer")
[ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-infernalis/el7/$basearch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-infernalis/el7/noarch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-infernalis/el7/SRPMS
enabled=0
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
(c) Install ceph-deploy:
# sudo yum install ceph-deploy
(2) Install NTP:
# sudo yum install ntp ntpdate ntp-doc
(3) Install the SSH server
# sudo yum install openssh-server
(4) Create a Ceph deploy user: pick your own user name and substitute it for {username}.
# sudo useradd -d /home/{username} -m {username}
# sudo passwd {username}
Example:
# sudo useradd -d /home/cephadmin -m cephadmin
# sudo passwd cephadmin
(5) Make sure the created {username} has sudo privileges
# echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
# sudo chmod 0440 /etc/sudoers.d/{username}
Example:
# echo "cephadmin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephadmin
# sudo chmod 0440 /etc/sudoers.d/cephadmin
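To confirm that the sudoers entry took effect, a quick check (shown here with the example user cephadmin):
# sudo -l -U cephadmin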
(6) Enable password-less SSH
a) Generate SSH keys as the {username} user created above (do not use root); run the command and press Enter at every prompt:
# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/cephadmin/.ssh/id_rsa):
Created directory '/home/cephadmin/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/cephadmin/.ssh/id_rsa.
Your public key has been saved in /home/cephadmin/.ssh/id_rsa.pub.
The key fingerprint is:
1c:f1:23:84:3d:60:81:c2:75:a0:e3:6d:93:03:66:92 [email protected]
The key's randomart image is:
+--[ RSA 2048]----+
| . .oo==o |
| .o..o..oo |
|E *. o.o |
| = + . . o . |
| . * S |
| . o |
| |
| |
| |
+-----------------+
b) Copy the generated key to all the other nodes:
# ssh-copy-id {username}@<node1's IP>
# ssh-copy-id {username}@<node2's IP>
# ssh-copy-id {username}@<node3's IP>
Example:
# ssh-copy-id [email protected]
# ssh-copy-id [email protected]
(7) Enable Networking On Bootup
Navigate to /etc/sysconfig/network-scripts and ensure that the ifcfg-{iface} file has ONBOOT set to yes.
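A quick way to check this from the shell; a sketch assuming the interface is named enp0s3 (adjust to your actual ifcfg-* file name):
# grep ONBOOT /etc/sysconfig/network-scripts/ifcfg-enp0s3
ONBOOT=yes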
(8) Ensure connectivity using ping with short hostnames (hostname -s). Hostnames should resolve to a network IP address, not to the loopback IP address.
a) Set the hostname with the following command (e.g. to admnode). (Note: restart the network service for the change to take effect)
# vim /etc/hostname
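On CentOS 7 the hostname can also be set with hostnamectl instead of editing the file directly; a sketch using the example hostname admnode:
# sudo hostnamectl set-hostname admnode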
b) Map every node's hostname to its IP address on all nodes. (Note: restart the network service for the change to take effect)
# vim /etc/hosts
Append the following at the end:
<admnode's IP> admnode
<node1's IP> node1
<node2's IP> node2
Example:
10.167.225.111 admnode
10.167.225.114 node1
10.167.225.116 node2
c) Make sure all hosts can reach each other by pinging the hostname, for example:
[root@localhost etc]# ping node1
PING node1 (10.167.225.114) 56(84) bytes of data.
64 bytes from node1 (10.167.225.114): icmp_seq=1 ttl=64 time=0.442 ms
64 bytes from node1 (10.167.225.114): icmp_seq=2 ttl=64 time=0.453 ms
64 bytes from node1 (10.167.225.114): icmp_seq=3 ttl=64 time=0.417 ms
...
(9) Open the required port in the firewall
# sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent
(10) Make sure ceph-deploy can connect to and deploy the other Ceph nodes (otherwise deployment may fail with an error such as: [ceph_deploy.osd][ERROR ] remote connection got closed, ensure ``requiretty`` is disabled for node2)
Run the following command
# sudo visudo
Find the Defaults requiretty setting shown below
#
# Disable "ssh hostname sudo <cmd>", because it will show the password in clear.
# You have to run "ssh -t hostname sudo <cmd>".
#
Defaults requiretty
将"Defaults requiretty"修改为"Defaults:ceph !requiretty"
(11) 设置SELinux,将enforcing 改成permissive
# sudo setenforce 0
# vim /etc/selinux/config
Change "SELINUX=enforcing" to "SELINUX=permissive"
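The same change can be made non-interactively with sed; a sketch (the path is the standard /etc/selinux/config):
# sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config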
(12) Install yum-plugin-priorities
# sudo yum install yum-plugin-priorities
Make sure /etc/yum/pluginconf.d/priorities.conf contains:
[main]
enabled = 1
(13) Stop the firewall
# sudo systemctl stop firewalld
(14) Install redhat-lsb (this package provides /lib/lsb/init-functions)
# yum install redhat-lsb
Basic cluster deployment: node layout
Hostname      Role           Disk
================================================================
a) admnode deploy-node
b) node1 mon1 Disk(/dev/sdb capacity:10G)
c) node2 osd.0 Disk(/dev/sdb capacity:10G)
d) node3 osd.1 Disk(/dev/sdb capacity:10G)
(1) On the admin node, switch to the custom cephadmin user (avoid calling ceph-deploy via sudo or as root)
(2) As the cephadmin user, create a ceph-cluster directory to hold the files generated by ceph-deploy.
# mkdir ceph-cluster
# cd ceph-cluster
(3) Create a Cluster
a) In the ceph-cluster directory, use ceph-deploy to create the cluster with the initial monitor node(s).
# ceph-deploy new {initial-monitor-node(s)}
Example:
# ceph-deploy new node1
b) Change the default number of replicas (osd pool default size) from 3 to 2.
Edit the ceph.conf file in the ceph-cluster directory and append the following after the [global] section (a sketch for pushing the updated file to the nodes follows these settings):
osd pool default size = 2
osd pool default min size = 2
osd pool default pg num = 512
osd pool default pgp num = 512
osd crush chooseleaf type = 1
[osd]
osd journal size = 1024
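If ceph.conf is changed again after the initial deployment, the updated file can be pushed to the nodes with ceph-deploy; a sketch assuming the node names used in this guide (--overwrite-conf replaces the copy already on the nodes):
# ceph-deploy --overwrite-conf config push admnode node1 node2 node3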
(4) Install Ceph on every node (note: due to the company network restrictions, there is no Internet access when using bridged mode)
# ceph-deploy install {ceph-node}[{ceph-node} ...]
Example:
# ceph-deploy install admnode node1 node2
Start all Ceph services (Note: newer releases use ceph.target instead of ceph.service)
# sudo systemctl start ceph.target
(5) On the admin node, initialize one node (or several nodes) as monitor(s)
Initialize the monitor(s).
# ceph-deploy mon create-initial
(6) Add two OSDs
a) List the disk information on a cluster node, e.g. /dev/sdb
# ceph-deploy disk list <node hostname>
b) Prepare the OSDs
# ceph-deploy osd prepare node2:/dev/sdb node3:/dev/sdb
c) Activate the OSDs (Note: during prepare, ceph-deploy automatically formats the disk into a /dev/sdb1 data partition and a /dev/sdb2 journal partition; activation uses the data partition /dev/sdb1, not the whole /dev/sdb)
# ceph-deploy osd activate node2:/dev/sdb1 node3:/dev/sdb1
Note: if OSD activation fails, or an OSD's state is down, see the links below (and the status commands sketched after them):
http://docs.ceph.com/docs/master/rados/operations/monitoring-osd-pg/
http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/#osd-not-running
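When troubleshooting, the OSD states can also be checked directly from any node that has the admin keyring; these are standard ceph CLI calls:
# ceph osd tree
# ceph osd stat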
d) Push the configuration file and admin key to the admin node and all the other Ceph nodes, so that ceph CLI commands (such as ceph -s) can be run on any node instead of only on the monitor node.
# ceph-deploy admin admnode node1 node2 node3
e) Make sure ceph.client.admin.keyring has sufficient read permission
# sudo chmod +r /etc/ceph/ceph.client.admin.keyring
f) If the OSDs were activated successfully, running ceph -s or ceph -w on the mon node shows the pgs in the active+clean state, as below:
[root@node1 etc]# ceph -w
cluster 62d61946-b429-4802-b7a7-12289121a022
health HEALTH_OK
monmap e1: 1 mons at {node1=10.167.225.137:6789/0}
election epoch 2, quorum 0 node2
osdmap e9: 2 osds: 2 up, 2 in
pgmap v15: 64 pgs, 1 pools, 0 bytes data, 0 objects
67916 kB used, 18343 MB / 18409 MB avail
64 active+clean
2016-03-08 20:12:00.436008 mon.0 [INF] pgmap v15: 64 pgs: 64 active+clean; 0 bytes data, 67916 kB used, 18343 MB / 18409 MB avail
Full environment deployment: node layout
Hostname      Role                Disk
================================================================
a) admnode deploy-node
b) node1 mon1,osd.2,mds Disk(/dev/sdb capacity:10G)
c) node2 osd.0,mon2 Disk(/dev/sdb capacity:10G)
d) node3 osd.1,mon3 Disk(/dev/sdb capacity:10G)
(1) Add another OSD on node1
# ceph-deploy osd prepare node1:/dev/sdb
# ceph-deploy osd activate node1:/dev/sdb1
After the commands succeed, the cluster status is as follows:
[root@node1 etc]# ceph -w
cluster 62d61946-b429-4802-b7a7-12289121a022
health HEALTH_OK
monmap e1: 1 mons at {node1=10.167.225.137:6789/0}
election epoch 2, quorum 0 node2
osdmap e13: 3 osds: 3 up, 3 in
pgmap v23: 64 pgs, 1 pools, 0 bytes data, 0 objects
102032 kB used, 27515 MB / 27614 MB avail
64 active+clean
2016-03-08 21:21:29.930307 mon.0 [INF] pgmap v23: 64 pgs: 64 active+clean; 0 bytes data, 102032 kB used, 27515 MB / 27614 MB avail
(2) Add an MDS on node1 (a metadata server is required in order to use CephFS)
# ceph-deploy mds create node1
(3) Add an RGW instance (needed for the Ceph Object Gateway component)
# ceph-deploy rgw create node1
(4) Add more Monitors. To satisfy the quorum requirement, the number of monitor nodes should be odd, so add 2 more MON nodes. The MON nodes also need their clocks synchronized.
4.1 Configure time synchronization between the MON nodes (admnode acts as the NTP server; since there is no external network access, it serves its local clock to the NTP clients).
a) Configure a LAN NTP server on the admnode node (using the local clock).
a.1) Edit /etc/ntp.conf and comment out the four "server 0|1|2|3.centos.pool.ntp.org iburst" lines.
Add the two lines "server 127.127.1.0" and "fudge 127.127.1.0 stratum 8":
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 127.127.1.0
fudge 127.127.1.0 stratum 8
a.2) Start the ntpd service on the admin node
# sudo systemctl restart ntpd
a.3) Check the ntpd status
# ntpstat
synchronised to local net at stratum 6
time correct to within 12 ms
polling server every 64 s
# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
*LOCAL(0) .LOCL. 5 l 3 64 377 0.000 0.000 0.000
b) On node1, node2, and node3 (the nodes that will run the Monitor service), configure NTP to synchronize time with the NTP server.
b.1) Make sure the ntpd service is stopped
# sudo systemctl stop ntpd
b.2) Synchronize with the NTP server once using the ntpdate command, making sure the offset is within 1000s.
# sudo ntpdate <admnode's IP or hostname>
9 Mar 16:59:26 ntpdate[31491]: adjust time server 10.167.225.136 offset -0.000357 sec
b.3) Edit /etc/ntp.conf and comment out the four "server 0|1|2|3.centos.pool.ntp.org iburst" lines.
Add the NTP server (the admnode node) by IP: "server 10.167.225.136"
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 10.167.225.136
b.4) Start the ntpd service
# sudo systemctl start ntpd
b.5) Check the ntpd status
# ntpstat
synchronised to NTP server (10.167.225.136) at stratum 7
time correct to within 7949 ms
polling server every 64 s
# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
*admnode LOCAL(0) 6 u 6 64 1 0.223 -0.301 0.000
4.2 Add two MONs to the cluster
a) Add the new Monitor nodes
# ceph-deploy mon add node2
# ceph-deploy mon add node3
b) After the nodes are added successfully, the cluster status looks like this:
# ceph -s
cluster 62d61946-b429-4802-b7a7-12289121a022
health HEALTH_OK
monmap e3: 3 mons at {node1=10.167.225.137:6789/0,node2=10.167.225.138:6789/0,node3=10.167.225.141:6789/0}
election epoch 8, quorum 0,1,2 node2,node3,node4
osdmap e21: 3 osds: 3 up, 3 in
pgmap v46: 64 pgs, 1 pools, 0 bytes data, 0 objects
101 MB used, 27513 MB / 27614 MB avail
64 active+clean
c) Check the quorum status
# ceph quorum_status --format json-pretty
The output is as follows:
{
"election_epoch": 8,
"quorum": [
0,
1,
2
],
"quorum_names": [
"node1",
"node2",
"node3"
],
"quorum_leader_name": "node2",
"monmap": {
"epoch": 3,
"fsid": "62d61946-b429-4802-b7a7-12289121a022",
"modified": "2016-03-09 17:50:29.370831",
"created": "0.000000",
"mons": [
{
"rank": 0,
"name": "node1",
"addr": "10.167.225.137:6789\/0"
},
{
"rank": 1,
"name": "node2",
"addr": "10.167.225.138:6789\/0"
},
{
"rank": 2,
"name": "node3",
"addr": "10.167.225.141:6789\/0"
}
]
}
}
(1) Prerequisites:
a) The cluster has been set up successfully
b) The cluster state is active+clean.
c) Node layout; admnode is also used as the client-node
Hostname      Role                Disk
================================================================
a) admnode deploy-node,client-node
b) node1 mon1,osd.2,mds Disk(/dev/sdb capacity:10G)
c) node2 osd.0,mon2 Disk(/dev/sdb capacity:10G)
d) node3 osd.1,mon3 Disk(/dev/sdb capacity:10G)
(2) Usage (see http://docs.ceph.com/docs/master/start/quick-rbd/)
a) On the client-node, create a Block Device Image, using the default rbd pool (list the pools with ceph osd lspools)
# ceph osd lspools
0 rbd,
# rbd create --size 1024 blockDevImg
# rbd ls rbd
blockDevImg
# rbd info blockDevImg
rbd image 'blockDevImg':
size 1024 MB in 256 objects
order 22 (4096 kB objects)
block_name_prefix: rb.0.1041.74b0dc51
format: 1
b) On the client-node, map the image to a block device.
# sudo rbd map blockDevImg --name client.admin
/dev/rbd0
c) Use the block device to create a file system on the client-node.
# sudo mkfs.ext4 -m0 /dev/rbd/rbd/blockDevImg
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1024 blocks, Stripe width=1024 blocks
65536 inodes, 262144 blocks
0 blocks (0.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
d) Mount the file system
# sudo mkdir /mnt/ceph-block-device
# sudo mount /dev/rbd/rbd/blockDevImg /mnt/ceph-block-device
# cd /mnt/ceph-block-device
# mount
...
/dev/rbd0 on /mnt/ceph-block-device type ext4 (rw,relatime,seclabel,stripe=1024,data=ordered)
(1) Prerequisites:
a) The cluster has been set up successfully
b) The cluster state is active+clean.
c) Node layout; admnode is also used as the client-node, and the following operations are run on the client-node.
Hostname      Role                Disk
================================================================
a) admnode deploy-node,client-node
b) node1 mon1,osd.2,mds Disk(/dev/sdb capacity:10G)
c) node2 osd.0,mon2 Disk(/dev/sdb capacity:10G)
d) node3 osd.1,mon3 Disk(/dev/sdb capacity:10G)
(2) Usage (see http://docs.ceph.com/docs/master/start/quick-cephfs/#create-a-secret-file)
a) Create two pools (a metadata pool and a data pool)
Command: ceph osd pool create <creating_pool_name> <pg_num>
Parameters: creating_pool_name : name of the pool to create
            pg_num : number of Placement Groups
# ceph osd pool create cephfs_data 512
pool 'cephfs_data' created
# ceph osd pool create cephfs_metadatea 512
pool 'cephfs_metadatea' created
# ceph osd lspools
0 rbd,1 cephfs_data,2 cephfs_metadatea,
b) Create a Filesystem
Command: ceph fs new <fs_name> <metadata_pool_name> <data_pool_name>
Parameters: fs_name : file system name
            metadata_pool_name : metadata pool's name
            data_pool_name : data pool's name
# ceph fs new cephfs cephfs_metadatea cephfs_data
new fs with metadata pool 2 and data pool 1
c) Once the file system is created, the MDS(s) enter the active state
# ceph mds stat
e5: 1/1/1 up {0=node1=up:active}
d) Create the secret file on the admin node admnode
# cat ceph.client.admin.keyring
[client.admin]
key = AQDrv95WLfajLhAAmUyN/wCoq6cxS9xOYfy9Zw==
Create a new admin.secret file under /etc/ceph/ and paste in the key value AQDrv95WLfajLhAAmUyN/wCoq6cxS9xOYfy9Zw==
# vim /etc/ceph/admin.secret
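Instead of copying the key by hand, it can be written out directly; a sketch assuming the client.admin key and the admin.secret path used above:
# ceph auth get-key client.admin | sudo tee /etc/ceph/admin.secret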
Create a mycephfs mount-point directory
#sudo mkdir /mnt/mycephfs
e) Mount Ceph FS with the kernel driver (see http://docs.ceph.com/docs/master/man/8/mount.ceph/ for details)
sudo mount -t ceph <Monitor's IP or monitor host name>:<Ceph port, default 6789>:/ <mountpoint> -o name=<RADOS user to authenticate as when using cephx>,secretfile=<path to file containing the secret key to use with cephx>
# sudo mount -t ceph 10.167.225.137:6789:/ /mnt/mycephfs/ -o name=admin,secretfile=/etc/ceph/admin.secret
The mount command now lists an additional mount point of type ceph:
# mount
...
/dev/rbd0 on /mnt/ceph-block-device type ext4 (rw,relatime,seclabel,stripe=1024,data=ordered)
10.167.225.137:6789:/ on /mnt/mycephfs type ceph (rw,relatime,name=admin,secret=<hidden>,nodcache)
(1) Prerequisites:
a) The cluster state is active+clean. Node layout; admnode is also used as the client-node
Hostname      Role                Disk
================================================================
a) admnode deploy-node,client-node
b) node1 mon1,osd.2,mds Disk(/dev/sdb capacity:10G)
c) node2 osd.0,mon2 Disk(/dev/sdb capacity:10G)
d) node3 osd.1,mon3 Disk(/dev/sdb capacity:10G)
b) Make sure port 7480 is not in use and is not blocked by the firewall (how to open it is described at http://docs.ceph.com/docs/master/start/quick-start-preflight/; a command sketch is also given after this list)
c) The client node has the Ceph Object Gateway package installed; if not, install it with
# ceph-deploy install --rgw <client-node> [<client-node> ...]
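If firewalld is running, port 7480 can be opened in the same way as port 6789 in step (9) above; a sketch:
# sudo firewall-cmd --zone=public --add-port=7480/tcp --permanent
# sudo firewall-cmd --reload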
(2) Usage
a) On the deploy node, create an RGW instance on the client node
# ceph-deploy rgw create <client-node's-host-name>
...
[node1][WARNIN] D-Bus, udev, scripted systemctl call, ...).
[ceph_deploy.rgw][INFO ] The Ceph Object Gateway (RGW) is now running on host node1 and default port 7480
Check port 7480 with the following command to confirm that the Ceph Object Gateway (RGW) service is running
# sudo netstat -tlunp | grep 7480
tcp 0 0 0.0.0.0:7480 0.0.0.0:* LISTEN 10399/radosgw
b) Verify the RGW service by opening the following URL in a browser on the client node; if it responds, the RGW service is working.
# http://<ip-of-client-node or client node's host name>:7480
The page shows XML configuration information similar to the following:
<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult>
<Owner>
<ID>anonymous</ID>
<DisplayName></DisplayName>
</Owner>
<Buckets>
</Buckets>
</ListAllMyBucketsResult>