Setting up the environment
Environment overview
- Install VMware under Windows 7 and create a centos7.2-mini virtual machine from the CentOS-7.2 image; after installation, lowering the memory to 512 MB is enough if the machine is only used for practice.
- Create three virtual machines as linked clones of the VM above.
Set the hostnames
[root@localhost ~]
[root@localhost ~]
[root@localhost ~]
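On CentOS 7 the usual way to do this is hostnamectl; a minimal sketch for the three nodes used in the rest of this walkthrough:
hostnamectl set-hostname ceph6    # run on the first node
hostnamectl set-hostname ceph7    # run on the second node
hostnamectl set-hostname ceph8    # run on the third node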
Configure name resolution
# On all nodes; then verify that the nodes can reach each other
[root@ceph8 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.103.139 ceph6
192.168.103.140 ceph7
192.168.103.138 ceph8
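A quick connectivity check from each node, using the names just added to /etc/hosts (a sketch):
ping -c 3 ceph6
ping -c 3 ceph7
ping -c 3 ceph8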
Open the required firewall ports
# On all nodes
[root@ceph6 ~]# firewall-cmd --zone=public --add-port=6789/tcp --permanent
success
[root@ceph6 ~]# firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent
success
[root@ceph6 ~]# firewall-cmd --reload
success
[root@ceph6 ~]# firewall-cmd --zone=public --list-all
public (default, active)
interfaces: eno16777736
sources:
services: dhcpv6-client ssh
ports: 6789/tcp 6800-7100/tcp
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
Disable SELinux
[root@ceph6 ~]
[root@ceph6 ~]
[root@ceph6 ~]
SELINUX=disabled
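A sketch of how SELinux is typically disabled on CentOS 7: turn enforcement off for the running system and make the change permanent in /etc/selinux/config (the grep at the end should print the SELINUX=disabled line shown above):
setenforce 0                                                    # stop enforcing immediately
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config    # persist across reboots
grep '^SELINUX=' /etc/selinux/config                            # SELINUX=disabled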
Configure network access (yum proxy)
# On all nodes; on the corporate development network the proxy has to be configured separately
[root@ceph6 ~]# cat /etc/yum.conf
[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=5
bugtracker_url=http://bugs.centos.org/set_project.php?project_id=23&ref=http://bugs.centos.org/bug_report_page.php?category=yum
distroverpkg=centos-release
Proxy=http://dev-proxy.oa.com:8080/
[root@ceph6 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.14.87.100 dev-proxy.oa.com
Set a global proxy
# On all nodes; the corporate network is restricted, so a global proxy is required. Reboot the host afterwards (poweroff, then power it back on).
http_proxy=dev-proxy.oa.com:8080
https_proxy=dev-proxy.oa.com:8080
export http_proxy https_proxy
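One common way to make these variables global is a profile script sourced by every login shell; the file name below is an assumption:
# /etc/profile.d/proxy.sh  (assumed location)
http_proxy=dev-proxy.oa.com:8080
https_proxy=dev-proxy.oa.com:8080
export http_proxy https_proxy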
Install and configure the NTP service
[root@ceph6 ~]
server 10.14.0.131
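A sketch of installing NTP and pointing it at the internal time server from the server line above (stock CentOS 7 package and service names):
yum install -y ntp
echo "server 10.14.0.131" >> /etc/ntp.conf    # internal time server
systemctl enable ntpd
systemctl start ntpd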
Download RPMs with wget through the proxy
[root@ceph6 ~]
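One way to let wget reach the repositories through the corporate proxy is /etc/wgetrc; a sketch that reuses the dev-proxy.oa.com proxy configured above:
# append to /etc/wgetrc
use_proxy = on
http_proxy = http://dev-proxy.oa.com:8080/
https_proxy = http://dev-proxy.oa.com:8080/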
Installing and deploying Ceph
Install the deployment tool
[root@ceph6 ~]
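ceph-deploy is the deployment tool used throughout the rest of this walkthrough; a sketch of installing it with yum, assuming a repository (ceph-noarch or EPEL) that provides the package:
yum install -y ceph-deploy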
Initialize the monitor
[root@ceph6 ~]
[root@ceph6 ~]
[root@ceph6 ceph]
[root@ceph6 ceph]
ceph.conf ceph.log ceph.mon.keyring
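A sketch of the usual ceph-deploy steps that produce the three files listed above; the working directory /etc/ceph is an assumption:
mkdir -p /etc/ceph && cd /etc/ceph    # working directory for ceph-deploy
ceph-deploy new ceph6                 # writes ceph.conf and ceph.mon.keyring for the first monitor
ls                                    # ceph.conf  ceph.log  ceph.mon.keyring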
Passwordless SSH login
[root@ceph6 ~]
[root@ceph6 ~]
[root@ceph6 ~]
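A sketch of setting up key-based SSH from ceph6 to the other nodes so that ceph-deploy can log in without a password:
ssh-keygen -t rsa        # accept the defaults, empty passphrase
ssh-copy-id root@ceph7
ssh-copy-id root@ceph8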
Remote installation with ceph-deploy
[root@ceph6 ~]# ceph-deploy install ceph6 ceph7 ceph8
[ceph6][DEBUG ]
[ceph6][DEBUG ] Complete!
[ceph6][DEBUG ] Configure Yum priorities to include obsoletes
[ceph6][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[ceph6][INFO ] Running command: rpm --import https://download.ceph.com/keys/release.asc
[ceph6][WARNIN] curl: (7) Failed to connect to 2607:f298:6050:51f3:f816:3eff:fe71:9135: Network is unreachable
[ceph6][WARNIN] error: https://download.ceph.com/keys/release.asc: import read failed(2).
[ceph6][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: rpm --import https://download.ceph.com/keys/release.asc
Install each node separately
[root@ceph7 ~]
[root@ceph8 ~]
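Because ceph-deploy cannot import the release key through the proxy (see the error above), the Ceph packages can instead be installed locally on every node; a sketch, assuming a reachable Ceph yum repository is already configured:
# run on ceph6, ceph7 and ceph8
yum install -y ceph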
Building the cluster
Create the OSDs
[root@ceph6 ceph]
[root@ceph6 ceph]
[root@ceph6 ceph]
[root@ceph6 ceph]
[root@ceph6 ceph]
[root@ceph6 ceph]
[root@ceph6 ceph]
[root@ceph6 ceph]
cluster 8ea4fa79-3b6b-4de3-8cfb-a0922d6827c5
health HEALTH_WARN
too few PGs per OSD (21 < min 30)
monmap e1: 1 mons at {ceph6=192.168.103.139:6789/0}
election epoch 3, quorum 0 ceph6
osdmap e43: 9 osds: 9 up, 9 in
flags sortbitwise
pgmap v95: 64 pgs, 1 pools, 0 bytes data, 0 objects
307 MB used, 134 GB / 134 GB avail
64 active+clean
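The nine OSDs in the status above (three per node) were created with ceph-deploy; a minimal sketch of one way to do it, assuming each VM has three spare disks named sdb, sdc and sdd (the disk names are assumptions):
ceph-deploy osd create ceph6:sdb ceph6:sdc ceph6:sdd
ceph-deploy osd create ceph7:sdb ceph7:sdc ceph7:sdd
ceph-deploy osd create ceph8:sdb ceph8:sdc ceph8:sdd
ceph -s    # prints the cluster status shown above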
[root@ceph6 ceph]
[root@ceph6 ceph]
[root@ceph6 ceph]
cluster 8ea4fa79-3b6b-4de3-8cfb-a0922d6827c5
health HEALTH_OK
monmap e1: 1 mons at {ceph6=192.168.103.139:6789/0}
election epoch 3, quorum 0 ceph6
osdmap e48: 9 osds: 9 up, 9 in
flags sortbitwise
pgmap v130: 128 pgs, 1 pools, 0 bytes data, 0 objects
309 MB used, 134 GB / 134 GB avail
128 active+clean
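The earlier HEALTH_WARN ("too few PGs per OSD") is cleared by raising the placement-group count of the only pool, rbd, from 64 to 128, which gives the HEALTH_OK status just shown; a sketch of the usual commands:
ceph osd pool set rbd pg_num 128
ceph osd pool set rbd pgp_num 128
ceph -s    # 128 active+clean, HEALTH_OK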
Deploy an odd number of monitors
[root@ceph6 ceph]# cat ceph.conf
[global]
fsid = 8ea4fa79-3b6b-4de3-8cfb-a0922d6827c5
mon_initial_members = ceph6,ceph7,ceph8
mon_host = 192.168.103.139,192.168.103.140,192.168.103.138
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 192.168.103.0/24
[root@ceph6 ceph]
[root@ceph6 ceph]
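With ceph.conf extended as above (all three nodes in mon_initial_members plus a public network entry), the two extra monitors can be added from ceph6; a sketch:
ceph-deploy --overwrite-conf config push ceph7 ceph8    # distribute the updated ceph.conf
ceph-deploy mon add ceph7
ceph-deploy mon add ceph8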
Working with the Ceph cluster
Check the Ceph status
[root@ceph6 ceph]# ceph status
cluster 8ea4fa79-3b6b-4de3-8cfb-a0922d6827c5
health HEALTH_OK
monmap e3: 3 mons at {ceph6=192.168.103.139:6789/0,ceph7=192.168.103.140:6789/0,ceph8=192.168.103.138:6789/0}
election epoch 8, quorum 0,1,2 ceph8,ceph6,ceph7
osdmap e48: 9 osds: 9 up, 9 in
flags sortbitwise
pgmap v130: 128 pgs, 1 pools, 0 bytes data, 0 objects
309 MB used, 134 GB / 134 GB avail
128 active+clean
[root@ceph6 ceph]# ceph -s
cluster 8ea4fa79-3b6b-4de3-8cfb-a0922d6827c5
health HEALTH_OK
monmap e3: 3 mons at {ceph6=192.168.103.139:6789/0,ceph7=192.168.103.140:6789/0,ceph8=192.168.103.138:6789/0}
election epoch 8, quorum 0,1,2 ceph8,ceph6,ceph7
osdmap e48: 9 osds: 9 up, 9 in
flags sortbitwise
pgmap v130: 128 pgs, 1 pools, 0 bytes data, 0 objects
309 MB used, 134 GB / 134 GB avail
128 active+clean
Watch the cluster health status
[root@ceph6 ceph]# ceph -w
cluster 8ea4fa79-3b6b-4de3-8cfb-a0922d6827c5
health HEALTH_OK
monmap e3: 3 mons at {ceph6=192.168.103.139:6789/0,ceph7=192.168.103.140:6789/0,ceph8=192.168.103.138:6789/0}
election epoch 8, quorum 0,1,2 ceph8,ceph6,ceph7
osdmap e48: 9 osds: 9 up, 9 in
flags sortbitwise
pgmap v130: 128 pgs, 1 pools, 0 bytes data, 0 objects
309 MB used, 134 GB / 134 GB avail
128 active+clean
2016-11-09 10:01:39.675477 mon.0 [INF] from='client.4174 :/0' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.bootstrap-mds", "caps": ["mon", "allow profile bootstrap-mds"]}]: dispatch
Check the Ceph monitor quorum status
[root@ceph6 ceph]# ceph quorum_status --format json-pretty
{
"election_epoch": 8,
"quorum": [
0,
1,
2
],
"quorum_names": [
"ceph8",
"ceph6",
"ceph7"
],
"quorum_leader_name": "ceph8",
"monmap": {
"epoch": 3,
"fsid": "8ea4fa79-3b6b-4de3-8cfb-a0922d6827c5",
"modified": "2016-11-09 10:01:31.732730",
"created": "2016-11-08 22:36:48.791105",
"mons": [
{
"rank": 0,
"name": "ceph8",
"addr": "192.168.103.138:6789\/0"
},
{
"rank": 1,
"name": "ceph6",
"addr": "192.168.103.139:6789\/0"
},
{
"rank": 2,
"name": "ceph7",
"addr": "192.168.103.140:6789\/0"
}
]
}
}
Dump the Ceph monitor information
[root@ceph6 ceph]# ceph mon dump
dumped monmap epoch 3
epoch 3
fsid 8ea4fa79-3b6b-4de3-8cfb-a0922d6827c5
last_changed 2016-11-09 10:01:31.732730
created 2016-11-08 22:36:48.791105
0: 192.168.103.138:6789/0 mon.ceph8
1: 192.168.103.139:6789/0 mon.ceph6
2: 192.168.103.140:6789/0 mon.ceph7
Check cluster usage
[root@ceph6 ceph]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
134G 134G 309M 0.22
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
rbd 0 0 0 45942M 0
Check the Ceph monitor, OSD, and PG (placement group) status
[root@ceph6 ceph]# ceph mon stat
e3: 3 mons at {ceph6=192.168.103.139:6789/0,ceph7=192.168.103.140:6789/0,ceph8=192.168.103.138:6789/0}, election epoch 8, quorum 0,1,2 ceph8,ceph6,ceph7
[root@ceph6 ceph]# ceph osd stat
osdmap e48: 9 osds: 9 up, 9 in
flags sortbitwise
[root@ceph6 ceph]# ceph pg stat
v130: 128 pgs: 128 active+clean; 0 bytes data, 309 MB used, 134 GB / 134 GB avail
List the PGs
[root@ceph6 ceph]# ceph pg dump
List the Ceph pools
[root@ceph6 ceph]# ceph osd lspools
0 rbd,
Check the CRUSH map of the OSDs
[root@ceph6 ceph]# ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.13129 root default
-2 0.04376 host ceph6
0 0.01459 osd.0 up 1.00000 1.00000
1 0.01459 osd.1 up 1.00000 1.00000
2 0.01459 osd.2 up 1.00000 1.00000
-3 0.04376 host ceph7
3 0.01459 osd.3 up 1.00000 1.00000
4 0.01459 osd.4 up 1.00000 1.00000
5 0.01459 osd.5 up 1.00000 1.00000
-4 0.04376 host ceph8
6 0.01459 osd.6 up 1.00000 1.00000
7 0.01459 osd.7 up 1.00000 1.00000
8 0.01459 osd.8 up 1.00000 1.00000
List the cluster's authentication keys
[root@ceph6 ceph]# ceph auth list