Kubernetes 1.26.1 External etcd Deployment: A Step-by-Step Guide

Contents

Preparation before deploying etcd
Machine information
Upgrading the kernel
System configuration
Deploying the container runtime Containerd
Installing the crictl client
Prerequisites for enabling IPVS
Installing kubeadm, kubelet and kubectl
Installing and deploying etcd
1. Configure kubelet as the service manager for etcd
2. Create configuration files for kubeadm
3. Generate the certificate authority
4. Create certificates for each member
5. Copy certificates and kubeadm configs
6. Verify the expected files exist on all nodes
7. Create the static Pod manifests
8. Optional: check cluster health
9. Using the ctr command
10. Downloading etcdctl
11. Common problems
12. Checking etcd service status
13. Certificate renewal
14. Configuring a private registry for containerd



Preparation before deploying etcd

Machine information

Hostname                    IP             Kernel                       OS version                            Specs
l-shahe-k8s-etcd1.ops.prod  10.120.174.14  5.4.231-1.el7.elrepo.x86_64  CentOS Linux release 7.9.2009 (Core)  12C 64G 500G SSD
l-shahe-k8s-etcd2.ops.prod  10.120.175.5   5.4.231-1.el7.elrepo.x86_64  CentOS Linux release 7.9.2009 (Core)  12C 64G 500G SSD
l-shahe-k8s-etcd3.ops.prod  10.120.175.36  5.4.231-1.el7.elrepo.x86_64  CentOS Linux release 7.9.2009 (Core)  12C 64G 500G SSD

Upgrading the kernel

GitHub - containerd/containerd: An open and reliable container runtime. https://github.com/containerd/containerd


Run the following commands.

Download kernel 5.4.231: https://download.csdn.net/download/weixin_43798031/87497156?spm=1001.2014.3001.5503

yum update -y
mkdir -p  /opt/k8s-install/  && cd /opt/k8s-install/
wget 10.60.127.202:19999/kernel.tar.gz
tar xf kernel.tar.gz
 
rpm -qa | grep kernel | grep 3.10 | xargs  yum -y remove
yum -y install *.rpm
grep menuentry /boot/grub2/grub.cfg
grub2-set-default 'CentOS Linux (5.4.231-1.el7.elrepo.x86_64) 7 (Core)'
grub2-mkconfig -o /boot/grub2/grub.cfg
 
# A reboot is required after switching kernels
sed -i 's/,nobarrier//g' /etc/fstab && reboot
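Once the machine comes back up, it is worth confirming the box actually booted into the new kernel before moving on:

```shell
# Verify the running kernel after the reboot; on these hosts it should
# report 5.4.231-1.el7.elrepo.x86_64
uname -r

# Confirm the nobarrier mount option is gone from /etc/fstab
grep nobarrier /etc/fstab || echo "no nobarrier entries left"
```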

System configuration

# Disable SELinux
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
 
# Disable the swap partition
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
 
# Create the /etc/modules-load.d/containerd.conf configuration file
cat << EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
 
# Set the required sysctl parameters; they persist across reboots
cat << EOF > /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces=28633
EOF
# Apply the sysctl parameters without rebooting
sudo sysctl --system
# Confirm that the br_netfilter and overlay modules are loaded:
lsmod | grep br_netfilter
lsmod | grep overlay
 
# Confirm that net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables
# and net.ipv4.ip_forward are set to 1 in your sysctl configuration:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

Deploying the container runtime Containerd

Install the container runtime Containerd on every server node.
Download the Containerd binary package:

wget https://github.com/containerd/containerd/releases/download/v1.6.14/cri-containerd-cni-1.6.14-linux-amd64.tar.gz
The cri-containerd-cni-1.6.14-linux-amd64.tar.gz archive is already laid out following the officially recommended directory structure; it contains the systemd unit file, containerd itself, and the CNI deployment files.
 
Extract it into the system root directory /:
tar -zxvf cri-containerd-cni-1.6.14-linux-amd64.tar.gz -C /
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
 
systemctl daemon-reload
systemctl enable --now containerd

Installing the crictl client

Download: https://download.csdn.net/download/weixin_43798031/87497561?spm=1001.2014.3001.5503

# Download the package
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.0/crictl-v1.24.0-linux-amd64.tar.gz
 
# Extract
tar -zxvf crictl-v1.24.0-linux-amd64.tar.gz -C /usr/local/bin
 
# Write the crictl configuration file (the usual settings for a
# containerd runtime endpoint):
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

If you run into errors when starting containers at this point, regenerate the containerd configuration:

mv /etc/containerd/config.toml /etc/containerd/config.bak
containerd config default > /etc/containerd/config.toml
systemctl daemon-reload
systemctl restart containerd

Prerequisites for enabling IPVS

IPVS has been merged into the mainline kernel, so the only prerequisite for enabling IPVS mode in kube-proxy is loading the following kernel modules:

Run the following script on every server node:

# The heredoc body is the standard module list for kube-proxy in IPVS mode
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack

Installing kubeadm, kubelet and kubectl

  • kubeadm: the command to bootstrap the cluster.

  • kubelet: runs on every node in the cluster and starts Pods and containers.

  • kubectl: the command-line tool for talking to the cluster.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# To match the 1.26.1 release used in this guide, pin the versions instead:
# sudo yum install -y kubelet-1.26.1 kubeadm-1.26.1 kubectl-1.26.1 --disableexcludes=kubernetes
sudo systemctl enable --now kubelet

Installing and deploying etcd

1. Configure kubelet as the service manager for etcd

Note: you must run this step on every host that will run etcd. Do not run it on the Kubernetes master or worker nodes, or those nodes will fail to join the cluster.

Reason: when etcd is deployed externally, its containers are managed by kubelet, but kubelet itself is supervised by systemd.

Since etcd is created first, you must override the kubelet unit file supplied by kubeadm by creating a new drop-in file with higher priority.

mkdir -p /etc/systemd/system/kubelet.service.d/
cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
[Service]
ExecStart=
# Replace "systemd" below with the cgroup driver of your container runtime;
# the kubelet default is "cgroupfs". No change is needed if you use systemd.
# If needed, replace the value of "--container-runtime-endpoint" with that of a different container runtime.
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd --container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock
Restart=always
EOF
Restart kubelet and check its status to make sure it is running:
systemctl daemon-reload && systemctl restart kubelet  && systemctl status kubelet

2. Create configuration files for kubeadm

Use the following script to generate one kubeadm configuration file for every host that will run an etcd member.

/opt/k8s-install/etcd-init.sh

# Replace HOST0, HOST1 and HOST2 with the IPs of your hosts
export HOST0=10.120.174.14
export HOST1=10.120.175.5
export HOST2=10.120.175.36

# Replace NAME0, NAME1 and NAME2 with the hostnames of your hosts
export NAME0="l-shahe-k8s-etcd1.ops.prod"
export NAME1="l-shahe-k8s-etcd2.ops.prod"
export NAME2="l-shahe-k8s-etcd3.ops.prod"

# Create temporary directories to store the files that will be distributed to the other hosts
mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/

HOSTS=(${HOST0} ${HOST1} ${HOST2})
NAMES=(${NAME0} ${NAME1} ${NAME2})

for i in "${!HOSTS[@]}"; do
HOST=${HOSTS[$i]}
NAME=${NAMES[$i]}
cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
---
apiVersion: "kubeadm.k8s.io/v1beta3"
kind: InitConfiguration
nodeRegistration:
    name: ${NAME}
localAPIEndpoint:
    advertiseAddress: ${HOST}
---
apiVersion: "kubeadm.k8s.io/v1beta3"
kind: ClusterConfiguration
etcd:
    local:
        serverCertSANs:
        - "${HOST}"
        dataDir: "/ssd1/etcd"
        peerCertSANs:
        - "${HOST}"
        extraArgs:
            quota-backend-bytes: "8589934592"
            max-snapshots: "5"
            auto-compaction-retention: "1"
            max-wals: "8"
            initial-cluster: ${NAMES[0]}=https://${HOSTS[0]}:2380,${NAMES[1]}=https://${HOSTS[1]}:2380,${NAMES[2]}=https://${HOSTS[2]}:2380
            initial-cluster-state: new
            name: ${NAME}
            listen-peer-urls: https://${HOST}:2380
            listen-client-urls: https://${HOST}:2379
            advertise-client-urls: https://${HOST}:2379
            initial-advertise-peer-urls: https://${HOST}:2380
EOF
done

After running the script, a kubeadmcfg.yaml file is generated under /tmp for each host.
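A quick sanity check that the script above produced one config per member (IPs as used throughout this guide):

```shell
# Each member should have exactly one generated kubeadm config under /tmp
for h in 10.120.174.14 10.120.175.5 10.120.175.36; do
  if [ -f "/tmp/$h/kubeadmcfg.yaml" ]; then
    echo "$h: kubeadmcfg.yaml present"
  else
    echo "$h: kubeadmcfg.yaml MISSING - re-run the script"
  fi
done
```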

3. Generate the certificate authority

If you already have a CA, the only action needed is to copy the CA's crt and key files to /etc/kubernetes/pki/etcd/ca.crt and /etc/kubernetes/pki/etcd/ca.key. After copying these files, proceed to the next step, "Create certificates for each member".

If you do not yet have a CA, run this command on $HOST0 (where you generated the configuration files for kubeadm):

kubeadm init phase certs etcd-ca

This creates the following two files:
/etc/kubernetes/pki/etcd/ca.crt
/etc/kubernetes/pki/etcd/ca.key
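To double-check what was generated, you can inspect the CA's subject and validity window with openssl (kubeadm-generated CAs are valid for 10 years by default):

```shell
# Show the subject and validity window of the freshly generated etcd CA;
# the file only exists on the host where kubeadm was run
openssl x509 -in /etc/kubernetes/pki/etcd/ca.crt -noout -subject -dates \
  || echo "ca.crt not present on this host"
```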

4. Create certificates for each member

HOST0=10.120.174.14
HOST1=10.120.175.5
HOST2=10.120.175.36
 
kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer  --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client  --config=/tmp/${HOST2}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST2}/.
# Clean up certificates that must not be reused
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
kubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST1}/
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete
 
kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
# No need to move the certs: they are for HOST0
 
# Clean up certificates that should not be copied off this host
find /tmp/${HOST2} -name ca.key -type f -delete
find /tmp/${HOST1} -name ca.key -type f -delete

5. Copy certificates and kubeadm configs

USER=root
HOST=${HOST1}
scp -r /tmp/${HOST}/* ${USER}@${HOST}:
ssh ${USER}@${HOST}
USER@HOST $ sudo -Es
root@HOST $ chown -R root:root pki
root@HOST $ mv pki /etc/kubernetes/
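The same copy steps must also be run for ${HOST2}. A minimal sketch that loops over both remote members, assuming passwordless root SSH and the HOST1/HOST2 variables exported earlier:

```shell
# Copy the per-host certs and kubeadm config to each remote member, then
# fix ownership and move the pki directory into place over SSH
USER=root
for HOST in ${HOST1} ${HOST2}; do
  scp -r /tmp/${HOST}/* ${USER}@${HOST}: \
    && ssh ${USER}@${HOST} "chown -R root:root pki && mv pki /etc/kubernetes/" \
    || echo "copy to ${HOST} failed; transfer /tmp/${HOST} manually"
done
```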

6. Verify the expected files exist on all nodes

Node 10.120.174.14:

$ tree /tmp/10.120.174.14/
/tmp/10.120.174.14/
└── kubeadmcfg.yaml

0 directories, 1 file

$ tree /etc/kubernetes/pki/
/etc/kubernetes/pki/
├── apiserver-etcd-client.crt
├── apiserver-etcd-client.key
└── etcd
    ├── ca.crt
    ├── ca.key
    ├── healthcheck-client.crt
    ├── healthcheck-client.key
    ├── peer.crt
    ├── peer.key
    ├── server.crt
    └── server.key

1 directory, 10 files

Node 10.120.175.5:

$ tree /opt/k8s-install/10.120.175.5
/opt/k8s-install/10.120.175.5
└── kubeadmcfg.yaml

0 directories, 1 file

$ tree /etc/kubernetes/pki/
/etc/kubernetes/pki/
├── apiserver-etcd-client.crt
├── apiserver-etcd-client.key
└── etcd
    ├── ca.crt
    ├── healthcheck-client.crt
    ├── healthcheck-client.key
    ├── peer.crt
    ├── peer.key
    ├── server.crt
    └── server.key

1 directory, 9 files

Node 10.120.175.36:

$ tree /opt/k8s-install/10.120.175.36
/opt/k8s-install/10.120.175.36
└── kubeadmcfg.yaml

0 directories, 1 file

$ tree /etc/kubernetes/pki/
/etc/kubernetes/pki/
├── apiserver-etcd-client.crt
├── apiserver-etcd-client.key
└── etcd
    ├── ca.crt
    ├── healthcheck-client.crt
    ├── healthcheck-client.key
    ├── peer.crt
    ├── peer.key
    ├── server.crt
    └── server.key

1 directory, 9 files

7. Create the static Pod manifests

Now that the certificates and configs are in place, it is time to create the manifests. On each host, run the kubeadm command to generate the static manifest for etcd.

root@HOST0 $ kubeadm init phase etcd local --config=/tmp/10.120.174.14/kubeadmcfg.yaml                
root@HOST1 $ kubeadm init phase etcd local --config=/opt/k8s-install/10.120.175.5/kubeadmcfg.yaml
root@HOST2 $ kubeadm init phase etcd local --config=/opt/k8s-install/10.120.175.36/kubeadmcfg.yaml 
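Once the command succeeds on a host, the kubelet picks the manifest up automatically. A minimal post-check sketch, assuming the default static Pod path /etc/kubernetes/manifests (the `check_etcd_manifest` helper name is mine, not part of kubeadm):

```shell
# Hypothetical helper: confirm kubeadm wrote the etcd static Pod
# manifest where the kubelet expects it (default path assumed).
check_etcd_manifest() {
  local dir="${1:-/etc/kubernetes/manifests}"
  if [ -f "$dir/etcd.yaml" ]; then
    echo "etcd manifest present in $dir"
  else
    echo "etcd manifest missing in $dir" >&2
    return 1
  fi
}
```

Run `check_etcd_manifest` on each of the three etcd hosts after the corresponding `kubeadm init phase etcd local` call.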

8. Optional: check cluster health

If etcdctl is not available on the host, you can run the tool from inside a container image, invoking the container runtime directly (for example with crictl run) rather than going through Kubernetes.

ETCDCTL_API=3 etcdctl \
--cert /etc/kubernetes/pki/etcd/peer.crt \
--key /etc/kubernetes/pki/etcd/peer.key \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://${HOST0}:2379 endpoint health
...
https://[HOST0 IP]:2379 is healthy: successfully committed proposal: took = 16.283339ms
https://[HOST1 IP]:2379 is healthy: successfully committed proposal: took = 19.44402ms
https://[HOST2 IP]:2379 is healthy: successfully committed proposal: took = 35.926451ms
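To check all three members in one pass, you can count the healthy lines in the combined output. A small sketch, using an inline sample that mirrors the output above (the sample text is illustrative, not captured from this cluster):

```shell
# Sample `endpoint health` output for the three members (illustrative).
health_output='https://10.120.174.14:2379 is healthy: successfully committed proposal: took = 16.283339ms
https://10.120.175.5:2379 is healthy: successfully committed proposal: took = 19.44402ms
https://10.120.175.36:2379 is healthy: successfully committed proposal: took = 35.926451ms'

# Count endpoints reporting healthy; expect one line per member.
healthy=$(printf '%s\n' "$health_output" | grep -c 'is healthy')
if [ "$healthy" -eq 3 ]; then
  echo "all 3 members healthy"
else
  echo "only $healthy of 3 members healthy" >&2
fi
```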

Check cluster status:
etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --endpoints=https://10.120.174.14:2379 endpoint status --cluster --write-out=table

List members:
etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --endpoints=https://10.120.174.14:2379 member list --write-out=table
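When you later need to run `member remove`, it expects the hex member ID from the first column of the table. A sketch of extracting the IDs from table output (the table below is a made-up sample, not data from this cluster):

```shell
# Illustrative `member list --write-out=table` output (not real cluster data).
member_table='+------------------+---------+-------+
|        ID        | STATUS  | NAME  |
+------------------+---------+-------+
| 8e9e05c52164694d | started | etcd1 |
+------------------+---------+-------+'

# Print every 16-hex-digit ID found in the first table column.
printf '%s\n' "$member_table" | awk -F'|' '{
  id = $2
  gsub(/ /, "", id)
  if (id ~ /^[0-9a-f]+$/ && length(id) == 16) print id
}'
# → 8e9e05c52164694d
```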

If nothing is listening on port 2379, the etcd Pod failed to start; check the kubelet logs to troubleshoot.

In my case the cause was missing images: download the images, retag them, and then update the image field in /etc/kubernetes/manifests/etcd.yaml accordingly.

9. Using the ctr command

Import images:
ctr -n k8s.io images import etcd.tar   
ctr -n k8s.io images import pause.tar 


Export images:
ctr -n k8s.io images export etcd.tar registry.k8s.io/etcd:3.5.6-0 


Tag images:
ctr  -n k8s.io images tag  registry.k8s.io/etcd:3.5.6-0  harbor.int.yidian-inc.com/kubernetes-1.26.1/etcd:3.5.6-0  
ctr  -n k8s.io images tag  registry.k8s.io/pause:3.6  harbor.int.yidian-inc.com/kubernetes-1.26.1/pause:3.6


Push images to Harbor:
ctr -n k8s.io  images push harbor.int.yidian-inc.com/kubernetes-1.26.1/pause:3.6 
ctr -n k8s.io  images push harbor.int.yidian-inc.com/kubernetes-1.26.1/etcd:3.5.6-0

10. Download etcdctl

Common commands

List members:
etcdctl member list

List keys only:
etcdctl get / --prefix --keys-only

Delete every key in etcd (use with extreme caution):
etcdctl del "" --prefix

Delete a key:
etcdctl del ${path}


Add a member to the cluster:
etcdctl member add etcd-10.139.6.124 --peer-urls=http://10.139.6.124:4001

Remove a member from the cluster:
etcdctl member remove ${ID}

Specify endpoints:
etcdctl --endpoints=http://10.136.45.18:2379 member list --write-out=table

Add members:
etcdctl --endpoints=http://10.139.3.22:3379 member add eventetcd-10.139.6.124 --peer-urls=http://10.139.6.124:5001
etcdctl --endpoints=http://10.139.3.22:3379 member add eventetcd-10.139.6.222 --peer-urls=http://10.139.6.222:5001

Access with certificates:
etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --endpoints=https://10.120.2.7:2379 member list --write-out=table

Add a member (TLS):
etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --endpoints=https://10.120.2.7:2379 member add 120-4-7-sh-1037-b10.yidian.com --peer-urls=https://10.120.4.7:2380

Notes on the --initial-cluster-state flag:
It indicates whether the member is joining a brand-new cluster, and takes one of two values, new or existing. With existing, the member tries to contact the other members on startup.
Use new when bootstrapping the cluster for the first time; use existing while the cluster is already running, for example when a failed member is being restored.
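In the static Pod setup above, this flag ends up among the etcd arguments in /etc/kubernetes/manifests/etcd.yaml. A trimmed, illustrative fragment; the member names and the exact flag set are assumptions, only the flags relevant here are shown, and the IPs follow the three-host example:

```yaml
# /etc/kubernetes/manifests/etcd.yaml (fragment, illustrative)
spec:
  containers:
  - name: etcd
    command:
    - etcd
    - --name=infra0
    - --initial-cluster=infra0=https://10.120.174.14:2380,infra1=https://10.120.175.5:2380,infra2=https://10.120.175.36:2380
    # new when bootstrapping; existing when rejoining a running cluster
    - --initial-cluster-state=new
```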

11. Common issues

1) Upgrading the libseccomp system package

Remove the old version:
yum -y remove libseccomp-2.3.1-4.el7.x86_64

Download the newer package:
wget http://rpmfind.net/linux/centos/8-stream/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm

Install it:
rpm -ivh libseccomp-2.5.1-1.el8.x86_64.rpm

2) kubelet fails to start containers

mv /var/run/containerd/containerd.sock.ttrpc /var/run/containerd/containerd.sock

12. Check the etcd service status

root@HOST0 $ netstat -nlpt
Active Internet connections (only servers)                                                                                                                                                                                                 
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name                                                                                                                                           
tcp        0      0 10.120.174.14:22        0.0.0.0:*               LISTEN      1038/sshd                                                                                                                                                  
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1404/master                                                                                                                                                
tcp        0      0 0.0.0.0:15867           0.0.0.0:*               LISTEN      1015711/ydbot-clien                                                                                                                                        
tcp        0      0 0.0.0.0:18624           0.0.0.0:*               LISTEN      1015711/ydbot-clien                                                                                                                                        
tcp        0      0 127.0.0.1:25345         0.0.0.0:*               LISTEN      912920/containerd                                                                                                                                          
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      834062/kubelet                                                                                                                                             
tcp        0      0 127.0.0.1:10250         0.0.0.0:*               LISTEN      834062/kubelet                                                                                                                                             
tcp        0      0 10.120.174.14:2379      0.0.0.0:*               LISTEN      854765/etcd                                                                                                                                                
tcp        0      0 10.120.174.14:2380      0.0.0.0:*               LISTEN      854765/etcd                                                                                                                                                
tcp        0      0 127.0.0.1:2381          0.0.0.0:*               LISTEN      854765/etcd                                                                                                                                                
tcp        0      0 127.0.0.1:10255         0.0.0.0:*               LISTEN      834062/kubelet                                                                                                                                             
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      745/rpcbind                                                                                                                                                
root@HOST0 $ crictl ps
CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                                                                                            
08ddd339c3b18       fce326961ae2d       2 days ago          Running             etcd                6                   d941444ee7455       etcd-l-shahe-k8s-etcd1.ops.prod                                      

13. Certificate renewal

Renew the Kubernetes certificates with this script:
https://github.com/yuyicai/update-kube-cert

Check that the certificate validity periods took effect:
for i in /etc/kubernetes/pki/etcd/*.crt; do echo $i; openssl x509 -noout -dates -in $i; done
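To turn the notAfter lines from `openssl x509 -noout -dates` into something alert-friendly, a small sketch (the `days_left` helper is my own, and it assumes GNU date for the -d parsing):

```shell
# Print how many whole days remain until a cert expires, given a
# "notAfter=..." line as produced by `openssl x509 -noout -dates`.
days_left() {
  local not_after="${1#notAfter=}"
  local exp now
  exp=$(date -d "$not_after" +%s) || return 1
  now=$(date +%s)
  echo $(( (exp - now) / 86400 ))
}

days_left "notAfter=Jan  1 00:00:00 2099 GMT"
```

Combine it with the loop above to flag any certificate closer than, say, 30 days to expiry.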

14. containerd private registry configuration

    [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor-sh.yidian-inc.com"]
          endpoint = ["https://harbor-sh.yidian-inc.com"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor.int.yidian-inc.com"]
          endpoint = ["https://harbor.int.yidian-inc.com"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker2.yidian.com:5000"]
          endpoint = ["https://docker2.yidian.com:5000"]
      [plugins."io.containerd.grpc.v1.cri".registry.configs]
        [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor-sh.yidian-inc.com".tls]
          insecure_skip_verify = false
          ca_file = "/etc/containerd/cert/harbor-sh-ca.crt"
        [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor-sh.yidian-inc.com".auth]
          username = "admin"
          password = "Ops12345"
        [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.int.yidian-inc.com".tls]
          insecure_skip_verify = false
          ca_file = "/etc/containerd/cert/harbor.int.yidian-inc.com-ca.crt"
        [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.int.yidian-inc.com".auth]
          username = "admin"
          password = "Ops12345"
        [plugins."io.containerd.grpc.v1.cri".registry.configs."docker2.yidian.com:5000".tls]
          insecure_skip_verify = false
          ca_file = "/etc/containerd/cert/docker2.crt"
