Quickly Deploying a Ceph Test Cluster with Docker

Docker makes it quick to deploy a small-scale Ceph cluster, which is handy for development and testing.
The installation steps below are run in a Linux shell. Assuming you have a single machine with a Linux distribution (e.g. Ubuntu) and Docker installed, you can set up Ceph as follows:

# Run as root, or as a user with sudo privileges
# Note: the registry mirror https://registry.docker-cn.com is recommended
# 1. Configure the Docker registry mirror
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": [
    "https://registry.docker-cn.com"
  ]
}
EOF
# Restart Docker so the configuration takes effect
systemctl restart docker
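# (optional) confirm the mirror is now listed under "Registry Mirrors":
docker info | grep -iA1 mirror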
# Three images are needed: ceph/mon, ceph/osd, ceph/radosgw
# If you cannot pull them, try my repackaged copies instead: moxiaomomo/ceph-mon, moxiaomomo/ceph-osd, moxiaomomo/ceph-radosgw:
docker pull moxiaomomo/ceph-mon
docker pull moxiaomomo/ceph-osd
docker pull moxiaomomo/ceph-radosgw
# After pulling, retag them with the official image names, e.g.:
docker tag moxiaomomo/ceph-mon:latest ceph/mon:latest
docker tag moxiaomomo/ceph-osd:latest ceph/osd:latest
docker tag moxiaomomo/ceph-radosgw:latest ceph/radosgw:latest
# 2. Create a dedicated bridge network for Ceph
docker network create --driver bridge --subnet 172.20.0.0/16 ceph-network
docker network inspect ceph-network
# 3. Remove any old Ceph-related containers
docker rm -f $(docker ps -a | grep ceph | awk '{print $1}')
# 4. Clean up old Ceph-related directories and files, if any exist
rm -rf /www/ceph /var/lib/ceph/  /www/osd/
# 5. Create the directories to be mounted as volumes and set their ownership
#    (64045 is the uid/gid of the ceph user inside these images)
mkdir -p /www/ceph /var/lib/ceph/osd /www/osd/
chown -R 64045:64045 /var/lib/ceph/osd/
chown -R 64045:64045 /www/osd/
# 6. Create the monitor node
docker run -itd --name monnode --network ceph-network --ip 172.20.0.10 -e MON_NAME=monnode -e MON_IP=172.20.0.10 -v /www/ceph:/etc/ceph ceph/mon
# 7. Register three OSD ids on the monitor node (each call allocates the next free id)
docker exec monnode ceph osd create
docker exec monnode ceph osd create
docker exec monnode ceph osd create
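# (optional) verify that ids 0, 1 and 2 were allocated:
docker exec monnode ceph osd ls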
# 8. Create the OSD nodes
docker run -itd --name osdnode0 --network ceph-network -e CLUSTER=ceph -e WEIGHT=1.0 -e MON_NAME=monnode -e MON_IP=172.20.0.10 -v /www/ceph:/etc/ceph -v /www/osd/0:/var/lib/ceph/osd/ceph-0 ceph/osd 
docker run -itd --name osdnode1 --network ceph-network -e CLUSTER=ceph -e WEIGHT=1.0 -e MON_NAME=monnode -e MON_IP=172.20.0.10 -v /www/ceph:/etc/ceph -v /www/osd/1:/var/lib/ceph/osd/ceph-1 ceph/osd
docker run -itd --name osdnode2 --network ceph-network -e CLUSTER=ceph -e WEIGHT=1.0 -e MON_NAME=monnode -e MON_IP=172.20.0.10 -v /www/ceph:/etc/ceph -v /www/osd/2:/var/lib/ceph/osd/ceph-2 ceph/osd
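# (optional) check that the three OSDs have registered and come up:
docker exec monnode ceph osd tree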
# 9. Add two more monitor nodes to form a cluster
docker run -itd --name monnode_1 --network ceph-network --ip 172.20.0.11 -e MON_NAME=monnode_1 -e MON_IP=172.20.0.11 -v /www/ceph:/etc/ceph ceph/mon
docker run -itd --name monnode_2 --network ceph-network --ip 172.20.0.12 -e MON_NAME=monnode_2 -e MON_IP=172.20.0.12 -v /www/ceph:/etc/ceph ceph/mon
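# (optional) the three mons should now reach quorum:
docker exec monnode ceph mon stat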
# 10. Create the gateway (radosgw) node
docker run -itd --name gwnode --network ceph-network --ip 172.20.0.9 -p 9080:80 -e RGW_NAME=gwnode -v /www/ceph:/etc/ceph ceph/radosgw
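# (optional) once radosgw is up, an anonymous request against the mapped
# port should return an S3-style XML bucket listing:
curl -s http://localhost:9080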
# 11. Check the Ceph cluster status
sleep 10 && docker exec monnode ceph -s

If everything worked, the cluster status should look something like this:

[root@ali-xiaomo001 ~]# sleep 10 && docker exec monnode ceph -s
    cluster 522aa30c-22c1-459d-873e-9749dd359692
     health HEALTH_OK
     monmap e3: 3 mons at {monnode=172.20.0.10:6789/0,monnode_1=172.20.0.11:6789/0,monnode_2=172.20.0.12:6789/0}
            election epoch 6, quorum 0,1,2 monnode,monnode_1,monnode_2
     osdmap e20: 3 osds: 3 up, 3 in
            flags sortbitwise
      pgmap v24: 112 pgs, 7 pools, 848 bytes data, 170 objects
            71662 MB used, 43309 MB / 117 GB avail
                 112 active+clean
  client io 61440 B/s rd, 579 B/s wr, 150 op/s
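
As a quick smoke test, you can write an object through RADOS and read it back. A minimal sketch, assuming the rados CLI is available inside the mon container and that the default rbd pool exists in this Ceph release (check with ceph osd lspools if unsure):

docker exec monnode sh -c 'echo hello-ceph > /tmp/obj.txt && rados -p rbd put obj1 /tmp/obj.txt'
docker exec monnode rados -p rbd ls
docker exec monnode sh -c 'rados -p rbd get obj1 /tmp/out.txt && cat /tmp/out.txt'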

Create a client user that applications will use to access Ceph:

[root@ali-xiaomo001 ~]# docker exec -it gwnode radosgw-admin user create --uid=user1 --display-name=user1
{
    "user_id": "user1",
    "display_name": "user1",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "user1",
            "access_key": "MJUAEX5BTLT3QXK5YBxx",
            "secret_key": "yz6BPkqgcyuD0U3zjbpDKoDchbk62RGPK7qaNvxx"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "temp_url_keys": []
}

The access_key and secret_key above can be used in an S3 API client configuration to access the Ceph storage.
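
For example, a minimal smoke test with s3cmd (an assumption of this sketch: s3cmd is installed on the host; substitute the access_key/secret_key printed by radosgw-admin above):

# reuse the connection options; the keys below are the ones printed above
S3OPTS="--access_key=MJUAEX5BTLT3QXK5YBxx --secret_key=yz6BPkqgcyuD0U3zjbpDKoDchbk62RGPK7qaNvxx --host=localhost:9080 --host-bucket=localhost:9080 --no-ssl"
s3cmd $S3OPTS mb s3://testbucket   # create a bucket
s3cmd $S3OPTS ls                   # list buckets

Setting --host-bucket to the same value as --host forces path-style requests, avoiding virtual-host-style bucket names that will not resolve against localhost.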
