Manual Ceph Cluster Deployment on Loongson (LoongArch architecture, Ceph Nautilus; manual deployment of the MON, OSD, MGR, and Dashboard services)

Basic Environment

CPU: Loongson 3C5000L ×4
Disks: one 256 GB NVMe drive for the operating system, plus three 2 TB HDDs used as Ceph storage disks.
OS version: Loongnix-server 8.3
Ceph version: Ceph 14.2.21 (Nautilus)

Deployment Goals

Perform a manual deployment test of a typical (minimal) Ceph cluster architecture.
The cluster should provide block, object, and file storage services, with Dashboard management support.

Ceph Cluster Roles

Hostname   IP addresses                  Ceph roles
ceph01     10.40.65.156 / 10.40.65.148   mon×1, mgr×1, osd×3
ceph02     10.40.65.175 / 10.40.65.132   mon×1, mgr×1, osd×3
ceph03     10.40.65.129 / 10.40.65.154   mon×1, mgr×1, osd×3

Each host has two IP addresses; both are mapped to the hostname in the /etc/hosts file configured below.

Dashboard access URL: https://10.40.65.148:8443

Environment Pre-configuration

  1. Set the hostnames

    Run on node 1:

    hostnamectl set-hostname ceph01
    bash
    

    Run on node 2:

    hostnamectl set-hostname ceph02
    bash
    

    Run on node 3:

    hostnamectl set-hostname ceph03
    bash
    
  2. Disable the firewall and SELinux (run on all nodes)

    systemctl disable --now firewalld
    setenforce 0
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
    
  3. Configure time synchronization (run on all nodes)

    yum install -y chrony
    sed -i '/^pool/d' /etc/chrony.conf
    sed -i '/^server/d' /etc/chrony.conf
    echo "pool ntp.aliyun.com iburst" >> /etc/chrony.conf
    systemctl restart chronyd.service && systemctl enable chronyd.service
    timedatectl status
    chronyc sources
    
  4. Set up SSH key trust between the nodes (run on ceph01)

    ssh-keygen -t rsa
    ssh-copy-id [email protected]
    ssh-copy-id [email protected]
    
  5. Configure the hosts file and sync it to the other cluster nodes (run on ceph01)

    cat >> /etc/hosts <<EOF
    10.40.65.156 ceph01
    10.40.65.175 ceph02
    10.40.65.129 ceph03
    
    10.40.65.148 ceph01
    10.40.65.132 ceph02
    10.40.65.154 ceph03
    EOF
    
    scp /etc/hosts root@ceph02:/etc/hosts
    scp /etc/hosts root@ceph03:/etc/hosts
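
    As a quick sanity check (a minimal sketch, assuming the hosts entries above are now in place), ping each node by name:

    for h in ceph01 ceph02 ceph03; do
        ping -c 1 -W 1 "$h" > /dev/null && echo "$h reachable"
    done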
    
  6. Install the Ceph Nautilus repository (run on all nodes)

    yum install -y loongnix-release-ceph-nautilus
    

Deploying the Ceph MON Service

  1. Install the ceph-mon package (run on all nodes)

    yum install -y ceph-mon
    
  2. Initialize the MON service (run on ceph01)

    Generate a UUID to serve as the cluster fsid

    uuidgen
    > 9bf24809-220b-4910-b384-c1f06ea80728
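
    Optionally (a sketch, assuming a bash shell), capture the UUID in a variable instead of copying it by hand; because the heredoc delimiter below is unquoted, writing fsid = ${FSID} inside the cat <<EOF block would expand it:

    # Generate the cluster fsid once and reuse it
    FSID=$(uuidgen)
    echo "Using fsid: ${FSID}"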
    

    Create the Ceph configuration file

    cat > /etc/ceph/ceph.conf <<EOF
    [global]
    fsid = 9bf24809-220b-4910-b384-c1f06ea80728
    mon_initial_members = ceph01,ceph02,ceph03
    mon_host = 10.40.65.156,10.40.65.175,10.40.65.129
    public_network = 10.40.65.0/24
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
    osd_journal_size = 1024
    osd_pool_default_size = 3
    osd_pool_default_min_size = 2
    osd_pool_default_pg_num = 64
    osd_pool_default_pgp_num = 64
    osd_crush_chooseleaf_type = 1
    EOF
    

    Create the cluster monitor keyring.

    ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
    

    Create keys for the client.admin and client.bootstrap-osd users, and import them into the monitor keyring.

    ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
    ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
    
    ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
    ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
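
    To confirm that both keys were imported, list the contents of the temporary monitor keyring:

    # Should show the mon., client.admin, and client.bootstrap-osd entries
    ceph-authtool -l /tmp/ceph.mon.keyring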
    

    Generate the monitor map from the hostnames, host IP addresses, and fsid.

    monmaptool --create --add ceph01 10.40.65.156 --add ceph02 10.40.65.175 --add ceph03 10.40.65.129 --fsid 9bf24809-220b-4910-b384-c1f06ea80728 /tmp/monmap
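
    The generated map can be inspected before use; the output shows the fsid and the three monitor entries:

    monmaptool --print /tmp/monmap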
    

    Initialize and start the monitor service

    sudo -u ceph mkdir /var/lib/ceph/mon/ceph-ceph01
    chown ceph.ceph -R /var/lib/ceph /etc/ceph /tmp/ceph.mon.keyring /tmp/monmap
    sudo -u ceph ceph-mon --mkfs -i ceph01 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
    ls /var/lib/ceph/mon/ceph-ceph01/
    
    systemctl start ceph-mon@ceph01
    systemctl enable ceph-mon@ceph01
    systemctl status ceph-mon@ceph01
    
  3. Sync the configuration file, keyrings, and monmap to the other nodes (run on ceph01)

    Copy ceph.client.admin.keyring, the client.bootstrap-osd keyring, ceph.mon.keyring, the monitor map, and ceph.conf to the other two nodes.

    scp /etc/ceph/ceph.client.admin.keyring root@ceph02:/etc/ceph/
    scp /etc/ceph/ceph.client.admin.keyring root@ceph03:/etc/ceph/
    
    scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@ceph02:/var/lib/ceph/bootstrap-osd/
    scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@ceph03:/var/lib/ceph/bootstrap-osd/
    
    scp /tmp/ceph.mon.keyring root@ceph02:/tmp/
    scp /tmp/ceph.mon.keyring root@ceph03:/tmp/
    
    scp /tmp/monmap root@ceph02:/tmp/
    scp /tmp/monmap root@ceph03:/tmp/
    
    scp /etc/ceph/ceph.conf root@ceph02:/etc/ceph/
    scp /etc/ceph/ceph.conf root@ceph03:/etc/ceph/
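
    The same sync can be written as a compact loop (a sketch, assuming the SSH trust configured earlier):

    for node in ceph02 ceph03; do
        scp /etc/ceph/ceph.client.admin.keyring "root@${node}:/etc/ceph/"
        scp /var/lib/ceph/bootstrap-osd/ceph.keyring "root@${node}:/var/lib/ceph/bootstrap-osd/"
        scp /tmp/ceph.mon.keyring /tmp/monmap "root@${node}:/tmp/"
        scp /etc/ceph/ceph.conf "root@${node}:/etc/ceph/"
    done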
    
  4. Start the monitor service on the remaining nodes (run on ceph02)

    sudo -u ceph mkdir /var/lib/ceph/mon/ceph-ceph02
    chown ceph.ceph -R /var/lib/ceph /etc/ceph /tmp/ceph.mon.keyring /tmp/monmap
    sudo -u ceph ceph-mon --mkfs -i ceph02 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
    ls /var/lib/ceph/mon/ceph-ceph02/
    
    systemctl start ceph-mon@ceph02
    systemctl enable ceph-mon@ceph02
    systemctl status ceph-mon@ceph02
    
  5. Start the monitor service on the remaining nodes (run on ceph03)

    sudo -u ceph mkdir /var/lib/ceph/mon/ceph-ceph03
    chown ceph.ceph -R /var/lib/ceph /etc/ceph /tmp/ceph.mon.keyring /tmp/monmap
    sudo -u ceph ceph-mon --mkfs -i ceph03 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
    ls /var/lib/ceph/mon/ceph-ceph03/
    
    systemctl start ceph-mon@ceph03
    systemctl enable ceph-mon@ceph03
    systemctl status ceph-mon@ceph03
    
  6. Check the cluster status (run on any node)
    Query the cluster status with ceph -s; the services section should show all three MON daemons running.

    ceph -s
    
    >   cluster:
    >     id:     9bf24809-220b-4910-b384-c1f06ea80728
    >     health: HEALTH_OK
    >
    >   services:
    >     mon: 3 daemons, quorum ceph01,ceph02,ceph03 (age 2s)
    >     mgr: no daemons active
    >     osd: 0 osds: 0 up, 0 in
    >
    >   data:
    >     pools:   0 pools, 0 pgs
    >     objects: 0 objects, 0 B
    >     usage:   0 B used, 0 B / 0 B avail
    >     pgs:
    

    Clearing HEALTH_WARN alerts

    Warning: mons are allowing insecure global_id reclaim.
    Fix: ceph config set mon auth_allow_insecure_global_id_reclaim false

    Warning: monitors have not enabled msgr2.
    Fix: ceph mon enable-msgr2
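
    After enabling msgr2, each monitor should advertise both a v2 (port 3300) and a v1 (port 6789) address, which can be verified with:

    # Each mon line should show [v2:<ip>:3300/0,v1:<ip>:6789/0]
    ceph mon dump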

Deploying the Ceph OSD Service (automated creation with ceph-volume)

  1. Install the ceph-osd package (run on all nodes)
    yum install -y ceph-osd
    
  2. Initialize the OSD services (run on all nodes)
    Identify the disk device names with fdisk or a similar tool, then create the OSDs automatically with ceph-volume.
    ceph-volume lvm create --data /dev/sda
    ceph-volume lvm create --data /dev/sdb
    ceph-volume lvm create --data /dev/sdc
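
    If a disk still carries partitions or LVM metadata from an earlier deployment, ceph-volume will refuse it; such a disk can be wiped first (destructive, double-check the device name). The OSDs created on a host can then be listed:

    # DESTRUCTIVE: wipe leftover data and LVM metadata from a reused disk
    ceph-volume lvm zap /dev/sda --destroy

    # Show the OSDs ceph-volume created on this host
    ceph-volume lvm list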
    
  3. Check the cluster status (run on any node)
    Query the OSD layout with ceph osd tree; all nine OSDs should be shown as up.
    ceph osd tree
    > ID CLASS WEIGHT   TYPE NAME       STATUS REWEIGHT PRI-AFF
    > -1       16.36908 root default
    > -3        5.45636     host ceph01
    >  0   hdd  1.81879         osd.0       up  1.00000 1.00000
    >  1   hdd  1.81879         osd.1       up  1.00000 1.00000
    >  2   hdd  1.81879         osd.2       up  1.00000 1.00000
    > -5        5.45636     host ceph02
    >  3   hdd  1.81879         osd.3       up  1.00000 1.00000
    >  4   hdd  1.81879         osd.4       up  1.00000 1.00000
    >  5   hdd  1.81879         osd.5       up  1.00000 1.00000
    > -7        5.45636     host ceph03
    >  6   hdd  1.81879         osd.6       up  1.00000 1.00000
    >  7   hdd  1.81879         osd.7       up  1.00000 1.00000
    >  8   hdd  1.81879         osd.8       up  1.00000 1.00000
    

Deploying the Ceph MGR Service and Enabling the Dashboard

  1. Install the ceph-mgr package (run on all nodes)

    yum install -y ceph-mgr
    
  2. Initialize and start the active MGR service (run on ceph01)

    mkdir -p /var/lib/ceph/mgr/ceph-ceph01
    chown ceph.ceph -R /var/lib/ceph
    ceph-authtool --create-keyring /etc/ceph/ceph.mgr.ceph01.keyring --gen-key -n mgr.ceph01 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
    ceph auth import -i /etc/ceph/ceph.mgr.ceph01.keyring
    ceph auth get-or-create mgr.ceph01 -o /var/lib/ceph/mgr/ceph-ceph01/keyring
    
    systemctl start ceph-mgr@ceph01
    systemctl enable ceph-mgr@ceph01
    systemctl status ceph-mgr@ceph01
    
  3. Initialize and start a standby MGR service (run on ceph02)

    mkdir -p /var/lib/ceph/mgr/ceph-ceph02
    chown ceph.ceph -R /var/lib/ceph
    ceph-authtool --create-keyring /etc/ceph/ceph.mgr.ceph02.keyring --gen-key -n mgr.ceph02 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
    ceph auth import -i /etc/ceph/ceph.mgr.ceph02.keyring
    ceph auth get-or-create mgr.ceph02 -o /var/lib/ceph/mgr/ceph-ceph02/keyring
    
    systemctl start ceph-mgr@ceph02
    systemctl enable ceph-mgr@ceph02
    systemctl status ceph-mgr@ceph02
    
  4. Initialize and start a standby MGR service (run on ceph03)

    mkdir -p /var/lib/ceph/mgr/ceph-ceph03
    chown ceph.ceph -R /var/lib/ceph
    ceph-authtool --create-keyring /etc/ceph/ceph.mgr.ceph03.keyring --gen-key -n mgr.ceph03 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
    ceph auth import -i /etc/ceph/ceph.mgr.ceph03.keyring
    ceph auth get-or-create mgr.ceph03 -o /var/lib/ceph/mgr/ceph-ceph03/keyring
    
    systemctl start ceph-mgr@ceph03
    systemctl enable ceph-mgr@ceph03
    systemctl status ceph-mgr@ceph03
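
    Steps 2 through 4 differ only in the node name; the same work can be expressed once and run on each node (a sketch, assuming the MGR id matches the short hostname):

    NAME=$(hostname -s)   # ceph01 / ceph02 / ceph03
    mkdir -p /var/lib/ceph/mgr/ceph-${NAME}
    ceph-authtool --create-keyring /etc/ceph/ceph.mgr.${NAME}.keyring --gen-key -n mgr.${NAME} --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
    ceph auth import -i /etc/ceph/ceph.mgr.${NAME}.keyring
    ceph auth get-or-create mgr.${NAME} -o /var/lib/ceph/mgr/ceph-${NAME}/keyring
    chown -R ceph:ceph /var/lib/ceph
    systemctl enable --now ceph-mgr@${NAME}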
    
  5. Check the cluster status (run on any node)
    Query the cluster status with ceph -s; the services section should show one active MGR and two standbys.

    ceph -s
    > cluster:
    >   id:     9bf24809-220b-4910-b384-c1f06ea80728
    >   health: HEALTH_OK
    >
    > services:
    >   mon: 3 daemons, quorum ceph03,ceph02,ceph01 (age 12d)
    >   mgr: ceph01(active, since 3w), standbys: ceph02, ceph03
    >   osd: 9 osds: 9 up (since 89s), 9 in (since 89s)
    >
    > data:
    >   pools:   0 pools, 0 pgs
    >   objects: 0 objects, 0 B
    >   usage:   9.0 GiB used, 16 TiB / 16 TiB avail
    >   pgs:
    
  6. Enable Dashboard access (run on any node)
    Enable the mgr dashboard module

    ceph mgr module enable dashboard
    

    Generate and install a self-signed certificate

    ceph dashboard create-self-signed-cert
    

    Configure the dashboard

    ceph config set mgr mgr/dashboard/server_addr 10.40.65.148
    ceph config set mgr mgr/dashboard/server_port 8080
    ceph config set mgr mgr/dashboard/ssl_server_port 8443
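
    The dashboard reads these settings when the module starts; if the module was already enabled, restarting it applies them (a hedged extra step that is commonly needed on Nautilus):

    ceph mgr module disable dashboard
    ceph mgr module enable dashboard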
    

    Create a dashboard login user and password

    echo '123456' > password.txt
    ceph dashboard ac-user-create admin administrator -i password.txt
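
    The new account can be verified with the matching show command:

    # Should report user admin with the administrator role
    ceph dashboard ac-user-show admin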
    

    Check the service access URL

    ceph mgr services
    > {
    >     "dashboard": "https://ceph01:8443/"
    > }
    

    Access the Ceph Dashboard in a web browser; the username/password is admin/123456

    https://10.40.65.148:8443
    


Client Access Configuration (RBD permissions) (run on the client)

  1. Install the Ceph client package

    yum install -y ceph-common
    
  2. Copy the configuration file and admin keyring from the server into /etc/ceph/ (the client must be able to resolve ceph01, e.g. via an /etc/hosts entry)

    scp root@ceph01:/etc/ceph/ceph.conf /etc/ceph/
    scp root@ceph01:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
    
  3. Test the connection to the cluster

    ceph -s
    > cluster:
    >   id:     9bf24809-220b-4910-b384-c1f06ea80728
    >   health: HEALTH_OK
    >
    > services:
    >   mon: 3 daemons, quorum ceph03,ceph02,ceph01 (age 12d)
    >   mgr: ceph01(active, since 3w), standbys: ceph02, ceph03
    >   osd: 9 osds: 9 up (since 89s), 9 in (since 89s)
    >
    > data:
    >   pools:   0 pools, 0 pgs
    >   objects: 0 objects, 0 B
    >   usage:   9.0 GiB used, 16 TiB / 16 TiB avail
    >   pgs:
    

Block Device Service Test (run on the client)

  1. Create a pool for block devices
    ceph osd pool create test-pool 128
    ceph osd pool application enable test-pool rbd
    ceph osd lspools
    
  2. Create a block device image
    rbd create --size 1T disk01 --pool test-pool
    
    rbd ls test-pool -l
    rbd info test-pool/disk01
    
  3. Disable image features unsupported by the client kernel (available features depend on the kernel version)
    rbd feature disable test-pool/disk01 exclusive-lock object-map fast-diff deep-flatten
    
  4. Map the block device and create a filesystem on it
    rbd map test-pool/disk01
    mkfs.xfs /dev/rbd0
    rbd showmapped
    
  5. Mount and use the block device
    mkdir /home/loongson/ceph_disk01
    mount /dev/rbd0 /home/loongson/ceph_disk01
    cd /home/loongson/ceph_disk01
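
    To re-map the image automatically at boot, the rbdmap service shipped with ceph-common can be used (a sketch, assuming the admin keyring copied during the client setup above):

    # Register the image with rbdmap and enable the boot-time mapping service
    echo "test-pool/disk01 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring" >> /etc/ceph/rbdmap
    systemctl enable rbdmap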
    
