Expanding a Ceph Cluster: Adding a New Node (Server)

1. Modify the hosts file on every node, adding the new node 192.168.0.33 node4:

[root@vm30 cluster]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain
192.168.0.30 node1
192.168.0.31 node2
192.168.0.32 node3
192.168.0.33 node4
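
If the file is only edited on one node, a small scp loop can distribute it to the others (a sketch; it assumes node1 can reach the other nodes over SSH, and node4 will still prompt for a password until its key is copied in step 3):

[root@node1 ~]# for h in node2 node3 node4; do scp /etc/hosts $h:/etc/hosts; done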

2. Add the Ceph yum repository on node4:

[root@node4 yum.repos.d]# cat ceph.repo 
[noarch]
name=ceph noarch
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=0
[x86_64]
name=ceph x86_64
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64
enabled=1
gpgcheck=0
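
After saving the repo file, refreshing the yum metadata on node4 ensures the new repository is picked up (standard yum commands):

[root@node4 yum.repos.d]# yum clean all && yum makecache
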
3. Copy the SSH key of node1 (the ceph-deploy node) to node4:
[root@node1 ~]# ssh-copy-id node4
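
A quick sanity check that passwordless login now works (the command should simply print node4):

[root@node1 ~]# ssh node4 hostname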

4. Install ceph and ceph-radosgw on node4:

[root@node4 ~]# yum install ceph ceph-radosgw -y
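
Once the install finishes, confirming the package version on node4 helps catch a repository mismatch before any daemons are created (it should report a nautilus build, matching the repo above):

[root@node4 ~]# ceph --version
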
5. Add a monitor on node4 to the existing cluster:

[root@node1 cluster]# ceph-deploy --overwrite-conf mon add node4 --address 192.168.0.33
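
To confirm the new monitor joined the quorum, check the monitor map from any node (it should now list four monitors, including node4):

[root@node1 cluster]# ceph mon stat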

Expand the RGW (object gateway):

[root@node1 cluster]# ceph-deploy --overwrite-conf rgw create node4
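
Assuming the default Nautilus RGW port 7480 has not been changed, a simple curl against the new gateway should return a short XML response once the rgw.node4 instance is up:

[root@node1 cluster]# curl http://node4:7480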

Expand the MGR:

[root@node1 cluster]# ceph-deploy --overwrite-conf mgr create node4
[root@node1 cluster]# cat ceph.conf 
[global]
fsid = 2ed0f25e-e385-4718-bea3-d9c0de33ae02
mon_initial_members = node1, node2, node3, node4
mon_host = 192.168.0.30,192.168.0.31,192.168.0.32,192.168.0.33
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 192.168.0.0/24
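
Note that mon_initial_members and mon_host in ceph.conf already list node4. If this file was edited by hand in the cluster directory, it can be pushed out so every node has the same copy (ceph-deploy's config push subcommand; --overwrite-conf is needed when the remote file differs):

[root@node1 cluster]# ceph-deploy --overwrite-conf config push node1 node2 node3 node4
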
6. From the admin node, push the configuration file and admin key to the new Ceph node:

[root@node1 ~]# ceph-deploy admin node4
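
Once the key has been pushed, node4 can talk to the cluster directly; running ceph -s there is an easy way to verify it:

[root@node4 ~]# ceph -s
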
7. Create the OSDs: add the new node node4's sdb and sdc disks to the cluster:

[root@node1 ~]# ceph-deploy osd create --data /dev/sdb node4
[root@node1 ~]# ceph-deploy osd create --data /dev/sdc node4
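
Two optional checks: ceph-deploy can list the disks it sees on node4 (useful before running the create commands above), and ceph osd df shows per-OSD utilization while data rebalances onto the new OSDs:

[root@node1 cluster]# ceph-deploy disk list node4
[root@node1 cluster]# ceph osd df
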
8. Check the cluster status:
[root@node1 cluster]# ceph -s
  cluster:
    id:     2ed0f25e-e385-4718-bea3-d9c0de33ae02
    health: HEALTH_WARN
            application not enabled on 1 pool(s)
            clock skew detected on mon.node2, mon.node3, mon.node4
 
  services:
    mon: 4 daemons, quorum node1,node2,node3,node4 (age 2m)
    mgr: node1(active, since 98m), standbys: node3, node2,node4
    osd: 8 osds: 8 up (since 32m), 8 in (since 38m)
    rgw: 4 daemons active (node1, node2, node3,node4)
 
  task status:
 
  data:
    pools:   5 pools, 192 pgs
    objects: 240 objects, 137 MiB
    usage:   8.5 GiB used, 151 GiB / 160 GiB avail
    pgs:     192 active+clean
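
The two HEALTH_WARN items above are common after an expansion and are not caused by node4 itself. The clock skew warning usually means the new node's clock is not yet synchronized; enabling chrony (or ntpd) on all nodes normally clears it. The "application not enabled" warning is resolved by tagging the affected pool with its application; ceph health detail names the pool, and <pool-name> plus the application type below are placeholders to fill in for your cluster:

[root@node4 ~]# systemctl enable --now chronyd
[root@node1 cluster]# ceph health detail
[root@node1 cluster]# ceph osd pool application enable <pool-name> rgw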
9. View the new OSDs:
[root@node1 cluster]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME     STATUS REWEIGHT PRI-AFF 
-1       0.15588 root default                          
-3       0.03897     host node1                         
 0   hdd 0.01949         osd.0     up  1.00000 1.00000 
 1   hdd 0.01949         osd.1     up  1.00000 1.00000 
-5       0.03897     host node2                         
 2   hdd 0.01949         osd.2     up  1.00000 1.00000 
 3   hdd 0.01949         osd.3     up  1.00000 1.00000 
-7       0.03897     host node3                         
 4   hdd 0.01949         osd.4     up  1.00000 1.00000 
 5   hdd 0.01949         osd.5     up  1.00000 1.00000 
-9       0.03897     host node4                         
 6   hdd 0.01949         osd.6     up  1.00000 1.00000 
 7   hdd 0.01949         osd.7     up  1.00000 1.00000
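
As a final check, ceph osd df tree shows utilization per OSD grouped under each host, which makes it easy to confirm that data is rebalancing onto osd.6 and osd.7 on node4:

[root@node1 cluster]# ceph osd df tree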
