1 Backing up etcd data
etcdctl backup --data-dir /var/lib/etcd/default.etcd --backup-dir /root/etcdback
2 etcd backup script
#!/bin/bash
# Daily etcd v2 backup: snapshot the data dir, archive it, prune copies older than 7 days
date_time=$(date +%Y%m%d)
etcdctl backup --data-dir /var/lib/etcd/default.etcd --backup-dir /root/etcd71-${date_time}.etcd
tar cvzf /root/etcd71-${date_time}.tar.gz -C /root etcd71-${date_time}.etcd
find /root -maxdepth 1 -name '*.etcd' -ctime +7 -exec rm -r {} \;
find /root -maxdepth 1 -name '*.gz' -ctime +7 -exec rm -f {} \;
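The naming and rotation logic of the script above can be factored into small helpers; a minimal sketch, assuming the same `etcd71` prefix and backup directory layout (function names are my own, not from the original script):

```shell
#!/bin/sh
# Build the dated archive name used by the backup script (prefix assumed from above)
backup_archive_name() {
    printf 'etcd71-%s.tar.gz' "$(date +%Y%m%d)"
}

# Prune backup directories/archives older than a given number of days under a directory
prune_backups() {
    dir=$1
    days=$2
    find "$dir" -maxdepth 1 \( -name '*.etcd' -o -name '*.gz' \) -ctime +"$days" -exec rm -rf {} \;
}

backup_archive_name
```

Using `find <dir> -maxdepth 1 -name '...'` instead of a shell glob (`find /root/*.etcd`) avoids an error when no backup matches the pattern yet.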
3 V3 backup
# mkdir -p /var/lib/etcd_backup/
# ETCDCTL_API=3 etcdctl snapshot save /var/lib/etcd_backup/etcd_$(date "+%Y%m%d%H%M%S").db
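Before relying on a v3 snapshot, it is worth checking it: `etcdctl snapshot status` reports the hash, revision, and key count of the snapshot file (the filename below is illustrative):

```
# ETCDCTL_API=3 etcdctl snapshot status /var/lib/etcd_backup/etcd_20180107172459.db --write-out=table
```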
4 Restoring etcd data (cluster unavailable, disaster recovery)
The following describes how to quickly restore an etcd cluster when the entire cluster is unavailable.
1. First stop the kube-apiserver service on the master nodes:
systemctl stop kube-apiserver
Make sure kube-apiserver has stopped; the following command should return 0:
# ps -ef | grep kube-apiserver | grep -v grep | wc -l
0
2. Stop the etcd service on every node in the cluster:
systemctl stop etcd
# ps -ef | grep etcd | grep -v grep | wc -l
0
Make sure etcd has stopped.
3. Move aside the data directory of every etcd instance:
mv /var/lib/etcd/data.etcd /var/lib/etcd/data.etcd_bak
Then restore the data on each node. First copy the snapshot to every etcd node; assume the backup is stored at /var/lib/etcd_backup/backup_20180107172459.db:
scp /var/lib/etcd_backup/backup_20180107172459.db root@etcd01:/var/lib/etcd_backup/
scp /var/lib/etcd_backup/backup_20180107172459.db root@etcd02:/var/lib/etcd_backup/
scp /var/lib/etcd_backup/backup_20180107172459.db root@etcd03:/var/lib/etcd_backup/
scp /var/lib/etcd_backup/backup_20180107172459.db root@etcd04:/var/lib/etcd_backup/
scp /var/lib/etcd_backup/backup_20180107172459.db root@etcd05:/var/lib/etcd_backup/
Run the restore command on every etcd instance that needs to be restored:
ETCDCTL_API=3 etcdctl snapshot restore <snapshot file> --name=<node name> --data-dir=<data dir> --initial-cluster=<initial cluster> --initial-cluster-token=<cluster token> --initial-advertise-peer-urls=<peer URL>
Note: snapshot restore works offline on the snapshot file, so no TLS flags are needed here; under ETCDCTL_API=3 the TLS flags would in any case be --cacert/--cert/--key rather than the v2-style --ca-file/--cert-file/--key-file.
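The --initial-cluster value must list every member in name=peer-URL form and be identical on all nodes. A sketch of composing it in shell (the node names and IPs are placeholders):

```shell
#!/bin/sh
# Compose the --initial-cluster string from name=ip pairs (hosts are hypothetical)
cluster=""
for pair in etcd01=10.0.0.1 etcd02=10.0.0.2 etcd03=10.0.0.3; do
    name=${pair%%=*}
    ip=${pair#*=}
    cluster="${cluster}${cluster:+,}${name}=http://${ip}:2380"
done
echo "$cluster"
# etcd01=http://10.0.0.1:2380,etcd02=http://10.0.0.2:2380,etcd03=http://10.0.0.3:2380
```

Each node then runs the restore with its own --name and this shared --initial-cluster string.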
4. Start all etcd instances of the cluster at the same time:
systemctl start etcd
5. Check the etcd cluster members and health:
etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem member list
etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem cluster-health
6. Start the kube-apiserver service on all master nodes:
# systemctl start kube-apiserver
# systemctl status kube-apiserver
Removing an etcd node
As with a problem we ran into: to replace Ceph-backed machines with machines using local SATA disks, the etcd instances deployed on Ceph first had to be removed from the cluster, and then new etcd instances added.
1. View the etcd cluster member list:
etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem member list
2. Remove the specific etcd instance using its member ID:
etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem member remove <member_id>
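The member ID is the first column of `member list` output. A sketch of extracting it with awk; the sample line imitates the v2 `member list` format, and the ID and addresses are made up:

```shell
#!/bin/sh
# Extract the member ID for a given node name from `etcdctl member list` output.
# The sample line imitates the v2 output format; ID and addresses are hypothetical.
sample='8e9e05c52164694d: name=etcd02 peerURLs=http://10.0.0.2:2380 clientURLs=http://10.0.0.2:2379 isLeader=false'
member_id=$(printf '%s\n' "$sample" | awk -F': ' '/name=etcd02 /{print $1}')
echo "$member_id"
```

In practice you would pipe the real `etcdctl ... member list` output into the same awk filter instead of the sample line.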
3. Stop the removed etcd instance:
# systemctl stop etcd
# yum remove -y etcd-xxxx
4. Verify that the etcd instance has been removed from the cluster:
etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem member list
Adding a new etcd node
Run the following on an existing etcd node to add the new node to the cluster.
# etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem member add <etcd_name> http://<etcd_node_address>:2380
ETCD_NAME=etcd01
ETCD_INITIAL_CLUSTER="etcd01=http://ip1:2380,etcd02=http://ip2:2380,etcd03=http://ip3:2380,etcd04=http://ip4:2380,etcd05=http://ip5:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"
Note:
- etcd_name: the ETCD_NAME value from the etcd.conf configuration file
- etcd_node_address: the ETCD_LISTEN_PEER_URLS value from the etcd.conf configuration file
The new etcd node has now been added to the existing cluster. Edit the new node's configuration file /etc/etcd/etcd.conf: set ETCD_INITIAL_CLUSTER to the value printed above and add the other related settings.
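A sketch of what the new node's /etc/etcd/etcd.conf might contain after `member add` (the node name and addresses are placeholders; the essential point is ETCD_INITIAL_CLUSTER_STATE="existing" rather than "new"):

```
ETCD_NAME=etcd06
ETCD_LISTEN_PEER_URLS="http://<new_ip>:2380"
ETCD_LISTEN_CLIENT_URLS="http://<new_ip>:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://<new_ip>:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://<new_ip>:2379"
ETCD_INITIAL_CLUSTER="<the ETCD_INITIAL_CLUSTER value printed by member add>"
ETCD_INITIAL_CLUSTER_STATE="existing"
```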
Start the new etcd node:
systemctl start etcd
Also add the new node to the ETCD_INITIAL_CLUSTER setting of the existing etcd nodes.
Updating an etcd node
ETCDCTL_API=3 etcdctl member update <member_id> --peer-urls=http://<new_peer_address>:2380