Building a Ceph Cluster with Docker

Setting up a Ceph cluster with Docker on CentOS 7.4

Docker image

docker pull ceph/daemon:latest-luminous

Note that the image tag must be pinned to latest-luminous. Other Docker-based walkthroughs online all use latest, but those documents were written too long ago.

Starting the mon service

#docker run -d --net=host --name=mon -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph -e MON_IP=192.168.0.21 -e CEPH_PUBLIC_NETWORK=192.168.0.0/24 ceph/daemon:latest-luminous mon

192.168.0.21 is the IP of the physical machine or cloud host the mon is deployed on, and 192.168.0.0/24 is its network.

/etc/ceph and /var/lib/ceph are where the configuration and data live inside the container; they are bind-mounted to the same paths on the host.

Check the container status:

[root@ceph2 etc]# docker ps
CONTAINER ID   IMAGE                         COMMAND                  CREATED         STATUS         PORTS   NAMES
2ec681cdcd27   ceph/daemon:latest-luminous   "/opt/ceph-contain..."   2 minutes ago   Up 2 minutes           mon

[root@ceph2 etc]# docker exec mon ceph -s
  cluster:
    id:     471883b4-d13c-4d7c-bb39-fe2e77680b23
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph2
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   0B used, 0B / 0B avail
    pgs:

This verifies that the mon service is running.

A pitfall here: make sure to run the luminous tag of the ceph container image. Otherwise, starting the mon does not generate the ceph.keyring file under /var/lib/ceph/bootstrap-osd, and the osd later fails authentication because it cannot find that file.

Starting the osd

Make sure the physical machine or cloud host already has disk devices attached:

[root@ceph2 etc]# fdisk -l

Disk /dev/vda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000a5a8a

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048    83886079    41942016   83  Linux

Disk /dev/vdb: 32.2 GB, 32212254720 bytes, 62914560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/vdc: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/vdd: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: 8B756319-14C6-4074-86C7-03625EF2FD01

#         Start          End    Size  Type            Name
 1         2048       206847    100M  Ceph OSD        ceph data
 2      3483648    209715166   98.3G  unknown         ceph block
 3       206848      2303999      1G  unknown         ceph block.db
 4      2304000      3483647    576M  unknown         ceph block.wal

Note the /dev/vdb, /dev/vdc and /dev/vdd disks here; format them in advance if possible:

[root@ceph2 etc]# mkfs.ext4 /dev/vdd
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
6553600 inodes, 26214400 blocks
1310720 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2174746624
800 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This article deploys a single-node osd for testing only, so set the osd replica count to 1 and restart the mon container.

Add the following to /etc/ceph/ceph.conf:

osd pool default size = 1
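As a sketch, the relevant part of /etc/ceph/ceph.conf might look like the following after the edit. The generated file also contains keys written by the mon container (fsid, mon host, networks), which are elided here; adding "osd pool default min size" as well is an extra assumption, not from this article, that keeps a single-replica pool writable.

```ini
[global]
# ... keys generated by the mon container (fsid, mon host, networks) ...

# Single-node test cluster: store only one copy of each object and
# allow I/O with a single replica. Do not use these values in production.
osd pool default size = 1
osd pool default min size = 1
```

After saving, restart the mon container (docker restart mon); the new defaults apply to pools created afterwards.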

Start the osd:

#docker run -d --net=host --name=osd --privileged=true --pid=host -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph -v /dev/:/dev/ -e OSD_TYPE=disk -e OSD_FORCE_ZAP=1 -e OSD_DEVICE=/dev/vdd ceph/daemon:latest-luminous osd_directory

Note that the /dev/vdd device used in the command is not mounted anywhere.

Checking osd status

[root@ceph2 ceph]# docker ps
CONTAINER ID   IMAGE                         COMMAND                  CREATED              STATUS              PORTS   NAMES
a4b6896b0f07   ceph/daemon:latest-luminous   "/opt/ceph-contain..."   About a minute ago   Up About a minute           osd
5650908f9b11   ceph/daemon:latest-luminous   "/opt/ceph-contain..."   9 minutes ago        Up 9 minutes                mon

[root@ceph2 ceph]# docker exec mon ceph -s
  cluster:
    id:     4d40f5a8-d291-45b4-b635-492f8b01e9bc
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 1 daemons, quorum ceph2
    mgr: no daemons active
    osd: 1 osds: 1 up, 1 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   0B used, 0B / 0B avail
    pgs:

Starting the mgr

#docker run -d --net=host --name=mgr -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph ceph/daemon:latest-luminous mgr

Status check:

[root@ceph2 ceph]# docker ps
CONTAINER ID   IMAGE                         COMMAND                  CREATED          STATUS          PORTS   NAMES
a42d8cfa41ab   ceph/daemon:latest-luminous   "/opt/ceph-contain..."   3 seconds ago    Up 2 seconds            mgr
a4b6896b0f07   ceph/daemon:latest-luminous   "/opt/ceph-contain..."   2 minutes ago    Up 2 minutes            osd
5650908f9b11   ceph/daemon:latest-luminous   "/opt/ceph-contain..."   10 minutes ago   Up 10 minutes           mon

[root@ceph2 ceph]# docker exec mon ceph -s
  cluster:
    id:     4d40f5a8-d291-45b4-b635-492f8b01e9bc
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph2
    mgr: ceph2(active)
    osd: 1 osds: 1 up, 1 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   4.56GiB used, 35.4GiB / 40.0GiB avail
    pgs:

Starting the dashboard

[root@ceph2 ceph]# docker exec mon ceph mgr module enable dashboard

[root@ceph2 ceph]# docker exec mon ceph mgr dump

{
    "epoch": 42,
    "active_gid": 4130,
    "active_name": "ceph2",
    "active_addr": "192.168.0.21:6804/120",
    "available": true,
    "standbys": [],
    "modules": [
        "balancer",
        "dashboard",
        "restful",
        "status"
    ],
    "available_modules": [
        "balancer",
        "dashboard",
        "influx",
        "localpool",
        "prometheus",
        "restful",
        "selftest",
        "status",
        "zabbix"
    ],
    "services": {
        "dashboard": "http://ceph2:7000/"
    }
}

Creating a pool

# docker exec mon ceph osd pool create swimmingpool 30
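The trailing 30 is the pg_num for the new pool. A common rule of thumb (an approximation, not from this article) is to target about 100 PGs per OSD, divide by the replica count, and round up to a power of two. A small helper to compute that might look like this; suggest_pg_num is a hypothetical name:

```shell
# Hypothetical helper: suggest a pg_num from the OSD count and replica size,
# using the (total PGs ~= osds * 100 / replicas) rule of thumb,
# rounded up to the next power of two.
suggest_pg_num() {
    osds=$1
    replicas=$2
    target=$(( osds * 100 / replicas ))
    pg=1
    while [ "$pg" -lt "$target" ]; do
        pg=$(( pg * 2 ))
    done
    echo "$pg"
}

suggest_pg_num 1 1    # single OSD, size=1 as in this article -> 128
suggest_pg_num 12 3   # e.g. 12 OSDs, size=3 -> 512
```

For a one-OSD test cluster, small values like the 30 used above are also perfectly workable.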

After creation, the cluster status shows warnings:

2019-04-14 13:24:04.843747 [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 30 pgs inactive)

2019-04-14 12:55:18.250747 [WRN] Health check failed: Degraded data redundancy: 30 pgs undersized (PG_DEGRADED)

These come from pg and pool configuration issues.

Apply the configuration:

# docker exec mon ceph osd pool set swimmingpool min_size 1

A new problem appears:

2019-04-14 13:25:04.801042 [WRN] Health check failed: Degraded data redundancy: 30 pgs undersized (PG_DEGRADED)

The cause: after peering completes, the PG detects that some PG replica contains inconsistent objects (needing sync/repair), or the current acting set is smaller than the pool's replica count.

About Degraded

Degraded: as described above, a PG (with the default replica count) has three replicas, each stored on a different OSD. Under normal conditions the PG is in the active+clean state; if, say, replica osd.4 goes down, the PG becomes degraded.

In other words, after a failure such as an OSD going down, Ceph marks all PGs on that OSD as Degraded.

A degraded cluster can still read and write data normally; a degraded PG is a minor ailment, not a serious problem.

Undersized means the number of surviving PG replicas (here 2) is below the pool's replica count (3), so the PG is marked accordingly. This indicates insufficient surviving replicas and is also not a serious problem.

Creating a block device

#docker exec mon rbd create --size 1024 swimmingpool/bar
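Note that rbd's --size argument is given in megabytes, so swimmingpool/bar above is a 1 GiB image. A quick arithmetic check:

```shell
# rbd create --size takes megabytes, so "--size 1024" above is 1 GiB.
size_mib=1024
size_bytes=$(( size_mib * 1024 * 1024 ))
echo "$size_bytes"    # 1073741824 bytes = 1 GiB
```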

Issues encountered

1. application not enabled on 1 pool(s) (POOL_APP_NOT_ENABLED)

Solution:

#docker exec mon ceph osd pool application enable swimmingpool rbd

enabled application 'rbd' on pool 'swimmingpool'
