Deploying a virtual Ceph cluster on CentOS 7

1 Resources and version information

cpu: Intel(R) Core(TM) i7-6500U CPU @ 2.50GHz // 4 cores
mem: total 7686 MB, swap 7935 MB
os: Linux promote.cache-dns.local 3.10.0-957.el7.x86_64
ceph: rh-luminous

2 Ceph overview

  1. Distributed storage
  2. Ceph's layered architecture
  3. Minimal deployment: one admin node, one mon node, and two osd nodes

3 Environment preparation

  1. Set up the KVM virtualization environment (see the earlier article on building a KVM-based virtual machine environment)
  • Image selection

CentOS-7-x86_64-Minimal-1810.iso

  • Resource allocation

mem:1G
disk:50G
cpu:1core

  • VM name

ceph

  • Virtual network

NAT:default

4 Installing tools on the Ceph node

The steps in this section are performed inside the VM created in the previous section.

  1. Install common network tools
yum install net-tools -y
  2. Switch the network to a static IP
ifconfig
netstat -rn

The result looks like the following:

[Figure CEPH_Node_01.png: output of ifconfig and netstat -rn]

Persist these settings in the configuration files.
Configure DNS:

echo "NETWORKING=yes" >> /etc/sysconfig/network
echo "DNS1=114.114.114.114" >> /etc/sysconfig/network
echo "DNS2=8.8.8.8" >> /etc/sysconfig/network

Configure the static IP:

vi /etc/sysconfig/network-scripts/ifcfg-eth0

Change the following settings and leave the others unchanged:

#BOOTPROTO="dhcp"  // comment out this line
BOOTPROTO="static"
NM_CONTROLLED=no
IPADDR=192.168.122.122 // keeping the IP the VM already has is also fine
NETMASK=255.255.255.0
GATEWAY=192.168.122.1

Add host names:

echo "192.168.122.122 node" >> /etc/hosts 
echo "192.168.122.123 node1" >> /etc/hosts 
echo "192.168.122.124 node2" >> /etc/hosts 
echo "192.168.122.125 node3" >> /etc/hosts

Restart the network service:

service network restart

NOTE: if you are connected to this VM over SSH, the connection will drop; close the terminal and reconnect.

  3. Yum setup
  • Install third-party repository management tools
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install -y yum-plugin-priorities
yum install -y yum-utils 
  • Repository configuration
    Create the Ceph repo file and open it for editing:
touch /etc/yum.repos.d/ceph.repo
vi /etc/yum.repos.d/ceph.repo

Write the following content into the file:

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

Update the packages from the configured repositories:

yum update -y
  4. Time synchronization
yum install -y ntp ntpdate ntp-doc
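To actually keep the node clocks in sync, the ntpd service can also be enabled and verified, for example:

systemctl enable ntpd
systemctl start ntpd
ntpq -p    # lists the time sources ntpd is using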
  5. Firewall and SELinux (allow Ceph traffic through firewalld and set SELinux to permissive)
firewall-cmd --zone=public --add-service=ceph-mon --permanent
firewall-cmd --zone=public --add-service=ceph --permanent
firewall-cmd --reload
iptables -A INPUT -i eth0 -p tcp -s 192.168.122.0/24 -d 192.168.122.254 -j ACCEPT
iptables-save 
sudo setenforce 0
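If the ceph and ceph-mon firewalld service definitions are missing on a minimal install, the equivalent ports can be opened directly instead (the mon listens on 6789/tcp, and OSD/MGR daemons use the 6800-7300/tcp range); setenforce 0 is also only temporary, so SELinux can be kept permissive across reboots as well. A sketch:

firewall-cmd --zone=public --add-port=6789/tcp --permanent
firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
firewall-cmd --reload
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config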
  6. Ceph deployment user
    Create the user:
useradd -g root -m -d /home/cephD cephD
passwd cephD

Grant passwordless sudo:

echo "cephD ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/cephD
chmod 0440 /etc/sudoers.d/cephD
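A quick check (run as root) that the passwordless sudo entry works; the expected output is "root" with no password prompt:

su - cephD -c 'sudo whoami'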

5 Cloning the Ceph nodes

The steps in this section are performed on the host machine.

  1. Get root privileges
sudo su
  2. Shut down the ceph VM
virsh shutdown ceph
  3. Clone the ceph-1, ceph-2, and ceph-3 nodes
virt-clone -o ceph -n ceph-1 -f /home/data/ceph-1.qcow2
virt-clone -o ceph -n ceph-2 -f /home/data/ceph-2.qcow2
virt-clone -o ceph -n ceph-3 -f /home/data/ceph-3.qcow2

Note: ceph is the admin node, ceph-1 is the mon node, and ceph-2 and ceph-3 are the osd nodes.

  4. Attach data disks
  • Create the disk images
qemu-img create -f qcow2 /home/data/osd1.qcow2 50g
qemu-img create -f qcow2 /home/data/osd2.qcow2 50g
qemu-img create -f qcow2 /home/data/osd3.qcow2 50g
  • Edit the VM definition to attach a disk (using ceph-2 as an example)
virsh edit ceph-2

Add a disk element like the following under the domain's <devices> node (a minimal example; it assumes the osd2.qcow2 image created above and a virtio target of vdb, which matches the /dev/vdb device used when creating the OSDs later):

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/home/data/osd2.qcow2'/>
  <target dev='vdb' bus='virtio'/>
</disk>
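Alternatively, the disk can be attached without editing the XML by hand (a sketch; it assumes ceph-2 is shut off, and --config records the disk in the persistent domain definition so it appears on the next boot):

virsh attach-disk ceph-2 /home/data/osd2.qcow2 vdb --driver qemu --subdriver qcow2 --config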
  5. Start the VMs
virsh start ceph
virsh start ceph-1
virsh start ceph-2
virsh start ceph-3

Check the VM status:

virsh list --all

[Figure CEPH_Node_02.png: virsh list output]
  6. Change the cloned VMs' IP addresses
    After cloning, every clone still carries the original static IP, so the VMs conflict with each other on the same subnet; each IP must be changed manually (using ceph-1 as an example):
virt-viewer -c qemu:///system ceph-1

In the console, log in as root and edit the eth0 configuration:

vi /etc/sysconfig/network-scripts/ifcfg-eth0

Change IPADDR so it does not conflict with the other VMs:
IPADDR=192.168.122.123
Restart the network service:

service network restart

Change the IPs of ceph-2 and ceph-3 to 192.168.122.124 and 192.168.122.125 in the same way.
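A quick reachability check from the host once all four nodes are up (assumes the addresses above):

for ip in 192.168.122.122 192.168.122.123 192.168.122.124 192.168.122.125; do
    ping -c 1 -W 1 $ip >/dev/null && echo "$ip up" || echo "$ip down"
done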

6 Deploying the Ceph cluster with ceph-deploy

The steps in this section are performed on the ceph VM (the admin node).

  1. Install ceph-deploy
yum install -y ceph-deploy
  2. Set up passwordless SSH from the deployment user to the other nodes
su - cephD

Generate an SSH key pair without a passphrase (press [enter] at every prompt):

ssh-keygen
[Figure CEPH_Node_03.png: ssh-keygen output]

Copy the key to the other nodes to establish trust:

ssh-copy-id cephD@node1
ssh-copy-id cephD@node2
ssh-copy-id cephD@node3
cd ~;
touch ~/.ssh/config;
vi ~/.ssh/config

Add the following content:

Host node1
    Hostname node1
    User cephD
Host node2
    Hostname node2
    User cephD
Host node3
    Hostname node3
    User cephD
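A quick check that passwordless SSH now works for the cephD user (the node names come from /etc/hosts):

ssh node1 hostname
ssh node2 hostname
ssh node3 hostname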
  3. Create the cluster
cd ~;
mkdir my-cluster;cd my-cluster;
ceph-deploy new node1

The result is as follows:


[Figure CEPH_Node_04.png: ceph-deploy new output]

Set the default pool replica count to 2 and declare the public network (the resulting ceph.conf is shown after these commands):

echo "osd pool default size = 2" >> ceph.conf
echo "public_network = 192.168.122.0/24" >> ceph.conf
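After these two appends, the generated ceph.conf should look roughly like this (the fsid is created by ceph-deploy new and will differ in your environment):

[global]
fsid = <generated-uuid>
mon_initial_members = node1
mon_host = 192.168.122.123
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
public_network = 192.168.122.0/24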
  4. Install Ceph on all nodes
ceph-deploy install --release luminous node node1 node2 node3
[Figure CEPH_Node_05.png: ceph-deploy install output]
  5. Initialize the ceph-mon service
ceph-deploy mon create-initial
[Figure CEPH_Node_06.png: ceph-deploy mon create-initial output]
  6. Copy the admin configuration and keyring to every node
ceph-deploy admin node node1 node2 node3
  7. Deploy the manager daemon (ceph-mgr)
ceph-deploy mgr create node1 

NOTE: how are mgr and mon related? See item 7 in Section 10 (Problems & solutions).

  8. Add the OSDs
ceph-deploy osd create --data /dev/vdb node2
ceph-deploy osd create --data /dev/vdb node3
[Figure CEPH_Node_07.png: ceph-deploy osd create output]
  9. Check the cluster status
ssh node1 sudo ceph health
ssh node2 sudo ceph health
ssh node3 sudo ceph health
[Figure CEPH_Node_08.png: ceph health output]
ssh node1 sudo ceph -s
[Figure CEPH_Node_09.png: ceph -s output]
  10. Expand the cluster
  • Add metadata server (MDS) nodes
ceph-deploy mds create node1
ceph-deploy mds create node2
  • Add more ceph-mon daemons
ceph-deploy mon add node2 
ceph-deploy mon add node3

NOTE: at this point all three nodes (node1, node2, node3) should be running ceph-mon.

  • Add more manager (ceph-mgr) nodes
ceph-deploy mgr create node2 node3
  • Add RGW (RADOS Gateway) instances
ceph-deploy rgw create node1
ceph-deploy rgw create node2
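Each RGW instance listens on port 7480 by default (civetweb), so a simple check from the admin node could be:

curl http://node1:7480
curl http://node2:7480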
  11. Pool operations
ceph osd pool create mytest 8  // create a pool with 8 placement groups
ceph osd pool rm mytest mytest --yes-i-really-really-mean-it  // delete (the pool name must be given twice to confirm)
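Pool deletion is also refused unless the monitors allow it; one way to enable that at runtime (a sketch, run with the admin keyring) is:

ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'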
  12. Object operations
[cephD@node my-cluster]$ rados put test-object-1 ceph.log --pool=mytest
[cephD@node my-cluster]$ rados -p mytest ls
test-object-1
[cephD@node my-cluster]$ ceph osd map mytest test-object-1
osdmap e26 pool 'mytest' (5) object 'test-object-1' -> pg 5.74dc35e2 (5.2) -> up ([1,0], p1) acting ([1,0], p1)
[cephD@node my-cluster]$ rados rm test-object-1 --pool=mytest
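To read an object back before removing it, a small follow-up example (fetched.log is just an arbitrary local file name):

[cephD@node my-cluster]$ rados -p mytest get test-object-1 fetched.log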

7 Deploying the Ceph cluster with ceph-ansible

The steps in this section are performed on the ceph host as the cephD user.

  1. Preparation
  • Tear down the ceph-deploy cluster
cd ~/my-cluster;
ceph-deploy purge node node1 node2 node3
ceph-deploy purgedata node node1 node2 node3
ceph-deploy forgetkeys
rm ceph.*
  • Install python-pip
cd ~;
sudo yum update -y;
sudo yum install -y python-pip;
  2. Install ceph-ansible
  • Install ansible 2.6.4
sudo yum install -y PyYAML
sudo yum install -y python-jinja2
sudo yum install -y python-paramiko
sudo yum install -y python-six
sudo yum install -y python2-cryptography
sudo yum install -y sshpass
wget https://releases.ansible.com/ansible/rpm/release/epel-7-x86_64/ansible-2.6.4-1.el7.ans.noarch.rpm
sudo rpm -ivh ansible-2.6.4-1.el7.ans.noarch.rpm
ansible --version
[Figure CEPH_ANSIBLE_02.png: ansible --version output]
  • Download ceph-ansible
cd ~;
sudo yum install -y git;
git clone https://github.com/ceph/ceph-ansible.git
cd ceph-ansible;
git branch -a|grep stable

The result is as follows:


[Figure CEPH_ANSIBLE_01.png: git branch output]
  • Release matrix
    ceph-ansible branch   Ceph release         Ansible version
    stable-3.0            jewel and luminous   2.4
    stable-3.1            luminous and mimic   2.4
    stable-3.2            luminous and mimic   2.6
    master                luminous and mimic   2.7
  • Check out stable-3.2 and install its Python dependencies
git checkout stable-3.2
sudo pip install -r requirements.txt
sudo pip install --upgrade pip
  3. Configure the Ansible inventory (the expected file content is shown after these commands)
sudo chmod 0660 /etc/ansible/hosts 
sudo echo "[mons]">>/etc/ansible/hosts
sudo echo "node1">>/etc/ansible/hosts
sudo echo "node2">>/etc/ansible/hosts
sudo echo "[osds]">>/etc/ansible/hosts
sudo echo "node2">>/etc/ansible/hosts
sudo echo "node3">>/etc/ansible/hosts
sudo echo "[mgrs]">>/etc/ansible/hosts
sudo echo "node1">>/etc/ansible/hosts
sudo echo "node2">>/etc/ansible/hosts
sudo echo "node3">>/etc/ansible/hosts
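After these appends, the section added to the end of /etc/ansible/hosts should read:

[mons]
node1
node2
[osds]
node2
node3
[mgrs]
node1
node2
node3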
  4. Configure the playbook
cp site.yml.sample site.yml
  5. Configure the Ceph deployment variables
cp group_vars/all.yml.sample group_vars/all.yml
vi group_vars/all.yml
------
###########
# INSTALL #
###########
ceph_origin: repository
ceph_repository: community
ceph_stable_release: luminous
ceph_stable_repo: "{{ ceph_mirror }}/rpm-{{ ceph_stable_release }}/el7/x86_64"
......
monitor_interface: eth0
......
public_network: 192.168.122.0/24
osd_objectstore: filestore
devices:
  - '/dev/vdb'
osd_scenario: collocated
------
  6. Run the deployment
ansible-playbook site.yml -vv
ceph -s

NOTE: -vv makes ansible print more detail, which helps when errors occur.

PLAY RECAP ********************************************************************************************************************************************************************************************************
node1                      : ok=165  changed=26   unreachable=0    failed=0   
node2                      : ok=248  changed=35   unreachable=0    failed=0   
node3                      : ok=176  changed=26   unreachable=0    failed=0   


INSTALLER STATUS **************************************************************************************************************************************************************************************************
Install Ceph Monitor        : Complete (0:07:34)
Install Ceph Manager        : Complete (0:07:58)
Install Ceph OSD            : Complete (0:01:09)

Wednesday 27 March 2019  02:50:32 -0400 (0:00:00.065)       0:17:19.385 ******* 
=============================================================================== 
ceph-common : install redhat ceph packages --------------------------------------------------------------------------------------------------------------------------------------------------------------- 274.13s
/home/cephD/ceph-ansible/roles/ceph-common/tasks/installs/install_redhat_packages.yml:16 -------------------------------------------------------------------------------------------------------------------------
ceph-common : install redhat ceph packages --------------------------------------------------------------------------------------------------------------------------------------------------------------- 230.22s
/home/cephD/ceph-ansible/roles/ceph-common/tasks/installs/install_redhat_packages.yml:16 -------------------------------------------------------------------------------------------------------------------------
ceph-common : install centos dependencies ---------------------------------------------------------------------------------------------------------------------------------------------------------------- 104.34s
/home/cephD/ceph-ansible/roles/ceph-common/tasks/installs/install_redhat_packages.yml:9 --------------------------------------------------------------------------------------------------------------------------
ceph-common : install centos dependencies ----------------------------------------------------------------------------------------------------------------------------------------------------------------- 93.92s
/home/cephD/ceph-ansible/roles/ceph-common/tasks/installs/install_redhat_packages.yml:9 --------------------------------------------------------------------------------------------------------------------------
ceph-mgr : install ceph-mgr package on RedHat or SUSE ----------------------------------------------------------------------------------------------------------------------------------------------------- 78.47s
/home/cephD/ceph-ansible/roles/ceph-mgr/tasks/pre_requisite.yml:2 ------------------------------------------------------------------------------------------------------------------------------------------------
ceph-mon : create ceph mgr keyring(s) when mon is not containerized --------------------------------------------------------------------------------------------------------------------------------------- 18.35s
/home/cephD/ceph-ansible/roles/ceph-mon/tasks/ceph_keys.yml:61 ---------------------------------------------------------------------------------------------------------------------------------------------------
ceph-osd : manually prepare ceph "filestore" non-containerized osd disk(s) with collocated osd data and journal ------------------------------------------------------------------------------------------- 12.11s
/home/cephD/ceph-ansible/roles/ceph-osd/tasks/scenarios/collocated.yml:53 ----------------------------------------------------------------------------------------------------------------------------------------
ceph-osd : activate osd(s) when device is a disk ----------------------------------------------------------------------------------------------------------------------------------------------------------- 9.93s
/home/cephD/ceph-ansible/roles/ceph-osd/tasks/activate_osds.yml:5 ------------------------------------------------------------------------------------------------------------------------------------------------
ceph-config : generate ceph configuration file: ceph.conf -------------------------------------------------------------------------------------------------------------------------------------------------- 7.68s
/home/cephD/ceph-ansible/roles/ceph-config/tasks/main.yml:77 -----------------------------------------------------------------------------------------------------------------------------------------------------
ceph-mon : collect admin and bootstrap keys ---------------------------------------------------------------------------------------------------------------------------------------------------------------- 6.42s
/home/cephD/ceph-ansible/roles/ceph-mon/tasks/ceph_keys.yml:2 ----------------------------------------------------------------------------------------------------------------------------------------------------
ceph-mon : create monitor initial keyring ------------------------------------------------------------------------------------------------------------------------------------------------------------------ 5.64s
/home/cephD/ceph-ansible/roles/ceph-mon/tasks/deploy_monitors.yml:22 ---------------------------------------------------------------------------------------------------------------------------------------------
ceph-mgr : disable ceph mgr enabled modules ---------------------------------------------------------------------------------------------------------------------------------------------------------------- 5.45s
/home/cephD/ceph-ansible/roles/ceph-mgr/tasks/main.yml:32 --------------------------------------------------------------------------------------------------------------------------------------------------------
ceph-config : generate ceph configuration file: ceph.conf -------------------------------------------------------------------------------------------------------------------------------------------------- 4.88s
/home/cephD/ceph-ansible/roles/ceph-config/tasks/main.yml:77 -----------------------------------------------------------------------------------------------------------------------------------------------------
ceph-common : configure red hat ceph community repository stable key --------------------------------------------------------------------------------------------------------------------------------------- 4.35s
/home/cephD/ceph-ansible/roles/ceph-common/tasks/installs/redhat_community_repository.yml:2 ----------------------------------------------------------------------------------------------------------------------
ceph-common : configure red hat ceph community repository stable key --------------------------------------------------------------------------------------------------------------------------------------- 4.07s
/home/cephD/ceph-ansible/roles/ceph-common/tasks/installs/redhat_community_repository.yml:2 ----------------------------------------------------------------------------------------------------------------------
ceph-config : create ceph initial directories -------------------------------------------------------------------------------------------------------------------------------------------------------------- 4.06s
/home/cephD/ceph-ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml:18 ---------------------------------------------------------------------------------------------------------------------------------
ceph-common : purge yum cache ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 3.59s
/home/cephD/ceph-ansible/roles/ceph-common/tasks/installs/configure_redhat_repository_installation.yml:23 --------------------------------------------------------------------------------------------------------
ceph-common : configure red hat ceph community repository stable key --------------------------------------------------------------------------------------------------------------------------------------- 3.27s
/home/cephD/ceph-ansible/roles/ceph-common/tasks/installs/redhat_community_repository.yml:2 ----------------------------------------------------------------------------------------------------------------------
ceph-config : create ceph initial directories -------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.27s
/home/cephD/ceph-ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml:18 ---------------------------------------------------------------------------------------------------------------------------------
ceph-config : create ceph initial directories -------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.12s
/home/cephD/ceph-ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml:18 ---------------------------------------------------------------------------------------------------------------------------------

Check the cluster status:

cephD@node ceph-ansible (stable-3.2) $ ssh node1 sudo ceph -s
  cluster:
    id:     bb653ada-5753-4672-9d3b-b5e92846b897
    health: HEALTH_OK
 
  services:
    mon: 2 daemons, quorum node1,node2
    mgr: node2(active), standbys: node3, node1
    osd: 2 osds: 2 up, 2 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   214MiB used, 89.7GiB / 90.0GiB avail
    pgs:     

For further operations, refer to Section 6 (Deploying the Ceph cluster with ceph-deploy), from step 7 onward.

NOTE: this deployment has no dedicated ceph-admin node, so Ceph is not installed on node; all ceph commands must be run on node1:
ssh node1

8 Offline deployment

The steps in this chapter are performed on the ceph host as the cephD user.

  1. Build a local repository
    See the earlier article "CentOS7搭建本地仓库--CEPH" on building a local CentOS 7 repository for Ceph.
  2. Deploy with ceph-ansible
    Follow Section 7 (Deploying the Ceph cluster with ceph-ansible).
  3. Differences from Section 7
  • Installing the Python dependencies
sudo pip install -r /home/cephD/ceph-ansible/requirements.txt --find-links=http://192.168.232.129/repo/python/deps/ --trusted-host 192.168.232.129
  • Ceph deployment variables
cp group_vars/all.yml.sample group_vars/all.yml
vi group_vars/all.yml
------
###########
# INSTALL #
###########
ceph_origin: repository
ceph_repository: custom
ceph_stable_release: luminous
ceph_stable_repo: "http://192.168.232.129/repo/ceph/luminous/"
......
monitor_interface: eth0
......
public_network: 192.168.122.0/24
osd_objectstore: filestore
devices:
  - '/dev/sdb'
osd_scenario: collocated
------
  4. Reminder
    The cephD deployment-user setup (Section 4, step 6) is still required when deploying with ceph-ansible.

9 Operating the cluster

  1. Start all daemons
sudo systemctl start ceph.target
  2. Stop all daemons
sudo systemctl stop ceph\*.service ceph\*.target
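Individual daemon types can also be managed through their own systemd targets and instance units (a sketch; the instance names depend on the host names and OSD ids in your cluster):

sudo systemctl restart ceph-mon.target     # all mon daemons on this host
sudo systemctl status ceph-mon@node1       # the mon instance named after its host
sudo systemctl restart ceph-osd@0          # the OSD with id 0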

10 Problems & solutions

  1. [ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph'
    Solution: wait about 20 minutes and run the command again (sometimes, due to network conditions, yum install -y ceph ceph-radosgw takes longer than 300s and ceph-deploy times out).
  2. [node1][WARNIN] Another app is currently holding the yum lock; waiting for it to exit...
    Solution: wait, or find the process holding the lock with [ps -ef|grep yum], kill it, and then re-run the yum command.
  3. Installation is very slow
    Solution: the nodes do not all have to be installed in one command; parallel installation has been tested to work, for example:
ceph-deploy install --release luminous node &
ceph-deploy install --release luminous node1 &
ceph-deploy install --release luminous node2 &
ceph-deploy install --release luminous node3 &
  4. auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring ---- ceph quorum_status --format json-pretty
    Solution: copy the keyrings from the ceph-deploy working directory (~/my-cluster) to /etc/ceph and fix the ownership:
sudo cp * /etc/ceph/
sudo chown cephD:root /etc/ceph/*
  5. [ceph_deploy.rgw][ERROR ] RuntimeError: config file /etc/ceph/ceph.conf exists with different content; ...
    Solution: re-run with --overwrite-conf:
ceph-deploy --overwrite-conf rgw create node1
  6. [ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
    Solution: add the public network to ceph.conf and push the configuration to all nodes:
echo "public_network = 192.168.122.0/24" >> ceph.conf
ceph-deploy --overwrite-conf config push node node1 node2 node3
  7. What is the difference between mgr and mon?
    Before the Luminous release, the mgr functionality was carried inside the mon daemon; starting with the L release it was split out into a separate ceph-mgr process.
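As an illustration of the split, optional modules such as the dashboard run inside the mgr; a quick look from node1 might be (a sketch, assuming the cluster deployed above):

ssh node1 sudo ceph mgr module ls
ssh node1 sudo ceph mgr module enable dashboard
ssh node1 sudo ceph mgr services    # shows the endpoint(s) the enabled modules expose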

11 References

http://docs.ceph.com/ceph-ansible/master/
http://docs.ceph.com/docs/master/start/
