Installing and Uninstalling Kubernetes

Kubernetes Installation

I. Environment Preparation

1. Environment versions used

  • OS: Anolis 7.9 x64, Anolis 8.4 x64
  • Docker: 1.31.1
  • Kubernetes: 1.25.0

2. Components to install

  • Docker: the container runtime
  • kubelet: runs on every Node and is responsible for starting containers and Pods
  • kubeadm: bootstraps and initializes the cluster
  • kubectl: the Kubernetes command-line tool, used to deploy and manage applications and to CRUD all kinds of resources

3. Environment settings

1) Disable the firewall

#Disable the firewall on boot
systemctl disable firewalld.service
#Stop the firewall now
systemctl stop firewalld.service
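
If disabling the firewall outright is not acceptable in your environment, a hedged alternative is to keep firewalld running and open only the ports Kubernetes needs (port list per the upstream documentation; adjust to your topology):

#control-plane node: API server, etcd, kubelet, controller-manager, scheduler
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10257/tcp
firewall-cmd --permanent --add-port=10259/tcp
#worker nodes additionally need the NodePort range
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --reload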

2) Set each node's hostname

#run the matching command on the corresponding machine
hostnamectl --static set-hostname node133
hostnamectl --static set-hostname node132
hostnamectl --static set-hostname node129

3) Add hostname/IP entries for name resolution

Open the /etc/hosts file and append the following entries:

192.168.175.133 node133
192.168.175.132 node132
192.168.175.129 node129
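
A quick sanity check that every name resolves to the address configured above:

#each line should print the IP/hostname pair from /etc/hosts
for h in node129 node132 node133; do getent hosts "$h"; done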

4) Disable the swap partition, otherwise kubelet will fail to start; alternatively, set the kubelet startup flag --fail-swap-on to false to skip the swap check

a. Option 1: disable swap

Run:

#Disable all swap devices
swapoff -a
#Back up the fstab file
cp /etc/fstab /etc/fstab.backup
#Comment out the swap entry in fstab
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
#Compare the new file with the backup
diff /etc/fstab /etc/fstab.backup

Output:

11c11
< #/dev/mapper/ao-swap     swap                    swap    defaults        0 0
---
> /dev/mapper/ao-swap     swap                    swap    defaults        0 0
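
To confirm that swap is really off:

#Swap should show 0B, and swapon should print nothing
free -h
swapon --show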

b. Option 2: disable the swap check

This option is not available on older versions.
Configure the /etc/sysconfig/kubelet file by running the following command to modify KUBELET_EXTRA_ARGS:

sed -i 's/KUBELET_EXTRA_ARGS=/KUBELET_EXTRA_ARGS="--fail-swap-on=false"/' /etc/sysconfig/kubelet
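
Verify that the change took effect:

grep KUBELET_EXTRA_ARGS /etc/sysconfig/kubelet
#expected output: KUBELET_EXTRA_ARGS="--fail-swap-on=false"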

5) Disable SELinux, otherwise kubelet may report Permission denied when mounting directories

Run:

#Disable SELinux temporarily
setenforce 0
#Back up the config file
cp /etc/selinux/config /etc/selinux/config.backup
#Set SELINUX=disabled
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
#Compare the new file with the backup
diff /etc/selinux/config /etc/selinux/config.backup

Check the SELinux status (note that SELINUX=disabled only takes full effect after a reboot; immediately after setenforce 0 the mode is permissive):

sestatus -v

Output:

SELinux status:                 disabled

Or check with:

getenforce

Output:

Disabled

II. Installation

1. Install Docker

See the separate article on installing Docker on Anolis OS 7.9.

2. Install the kubelet, kubeadm, and kubectl components

1) Create the repo file

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
EOF
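
Before installing, you can refresh the metadata and confirm the repo serves the kubeadm packages (the exact version list depends on the mirror's state):

yum makecache
yum list kubeadm --showduplicates | tail -n 5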

2) Run the installation command and start the components

yum install -y kubelet.x86_64 kubeadm.x86_64 kubectl.x86_64
systemctl enable kubelet && systemctl start kubelet

Excerpt of the output:

Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : libnetfilter_cthelper-1.0.0-11.an7.x86_64                                           1/10 
  Installing : cri-tools-1.24.2-0.x86_64                                                           2/10 
  Installing : socat-1.7.3.2-2.2.an7.x86_64                                                        3/10 
  Installing : libnetfilter_queue-1.0.2-2.an7.x86_64                                               4/10 
  Installing : libnetfilter_cttimeout-1.0.0-7.an7.x86_64                                           5/10 
  Installing : conntrack-tools-1.4.4-7.an7.x86_64                                                  6/10 
  Installing : kubelet-1.25.0-0.x86_64                                                             7/10 
  Installing : kubernetes-cni-0.8.7-0.x86_64                                                       8/10 
  Installing : kubectl-1.25.0-0.x86_64                                                             9/10 
  Installing : kubeadm-1.25.0-0.x86_64                                                            10/10 
  Verifying  : kubernetes-cni-0.8.7-0.x86_64                                                       1/10 
  Verifying  : kubeadm-1.25.0-0.x86_64                                                             2/10 
  Verifying  : kubectl-1.25.0-0.x86_64                                                             3/10 
  Verifying  : conntrack-tools-1.4.4-7.an7.x86_64                                                  4/10 
  Verifying  : libnetfilter_cttimeout-1.0.0-7.an7.x86_64                                           5/10 
  Verifying  : libnetfilter_queue-1.0.2-2.an7.x86_64                                               6/10 
  Verifying  : socat-1.7.3.2-2.2.an7.x86_64                                                        7/10 
  Verifying  : kubelet-1.25.0-0.x86_64                                                             8/10 
  Verifying  : cri-tools-1.24.2-0.x86_64                                                           9/10 
  Verifying  : libnetfilter_cthelper-1.0.0-11.an7.x86_64                                          10/10 

Installed:
  kubeadm.x86_64 0:1.25.0-0         kubectl.x86_64 0:1.25.0-0         kubelet.x86_64 0:1.25.0-0        

Installed as dependencies:
  conntrack-tools.x86_64 0:1.4.4-7.an7                cri-tools.x86_64 0:1.24.2-0                        
  kubernetes-cni.x86_64 0:0.8.7-0                     libnetfilter_cthelper.x86_64 0:1.0.0-11.an7        
  libnetfilter_cttimeout.x86_64 0:1.0.0-7.an7         libnetfilter_queue.x86_64 0:1.0.2-2.an7            
  socat.x86_64 0:1.7.3.2-2.2.an7                     

Complete!
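
After the transaction completes, confirm that the installed tool versions match the target release:

kubeadm version -o short
kubectl version --client
kubelet --version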

3. Download the cluster images

Pull the images. Note that the image list below is left over from an older v1.13.1 deployment and does not match Kubernetes 1.25.0; because the kubeadm init command in the next step passes --image-repository, kubeadm pulls the matching images automatically, so this manual step is optional:

docker pull mirrorgooglecontainers/kube-apiserver:v1.13.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.1
docker pull mirrorgooglecontainers/kube-proxy:v1.13.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.6
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
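
Instead of hard-coding image tags, you can ask kubeadm itself which images the target release needs and pre-pull them from the mirror used in the next step; a sketch using the standard kubeadm config images subcommands:

#list the images required for v1.25.0
kubeadm config images list --kubernetes-version=v1.25.0 --image-repository=registry.aliyuncs.com/google_containers
#pre-pull them (append --cri-socket unix:///var/run/cri-dockerd.sock when using cri-dockerd)
kubeadm config images pull --kubernetes-version=v1.25.0 --image-repository=registry.aliyuncs.com/google_containers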

4. Initialize the cluster

Run the following command on the master node:

kubeadm init --kubernetes-version=v1.25.0 --apiserver-advertise-address=192.168.175.132 --pod-network-cidr=10.244.0.0/16 --image-repository=registry.aliyuncs.com/google_containers  --service-cidr=10.96.0.0/12 --cri-socket unix:///var/run/cri-dockerd.sock

Parameter reference:

  • --kubernetes-version: the Kubernetes version to deploy; you can check it with kubectl version.
  • --apiserver-advertise-address: the IP address the API server advertises it is listening on, i.e. which of the Master's network interfaces to use for cluster communication. If unset, kubeadm picks the interface that holds the default gateway. This should normally be the Master node's cluster-internal IP; 0.0.0.0 means all available addresses on the node. Optional.
  • --pod-network-cidr: the Pod network range, i.e. the internal IP range available to Pods on the nodes. The proper value depends on the network add-on in use; this article uses the classic flannel solution. Flannel's conventional default is 10.244.0.0/16, while Calico's default is 192.168.0.0/16.
  • --image-repository: the image registry, k8s.gcr.io by default, which may be unreachable from mainland China; here the Aliyun mirror is used instead. The alternative Aliyun address registry.cn-hangzhou.aliyuncs.com/google_containers also works.
  • --control-plane-endpoint: required for multi-master setups; a fixed address (IP or DNS name) for the control plane, used as the API Server address in the kubeconfig files of cluster administrators and cluster components. Do not use it for a single-master control plane. Note: kubeadm cannot later convert a single control-plane cluster created without --control-plane-endpoint into a highly available cluster.
  • --service-cidr: the Service network range, 10.96.0.0/12 by default; usually only Flannel-style network add-ons need this set manually.
  • --service-dns-domain: the cluster domain name, cluster.local by default; it is resolved automatically by the cluster DNS service.
  • --token-ttl: lifetime of the shared bootstrap token, 24h0m0s by default; 0 means it never expires. To limit the damage of a token leaked through insecure storage, setting an expiry is recommended. If the token has expired and you want to join more nodes, recreate it and print the join command with kubeadm token create --print-join-command.
  • --ignore-preflight-errors: a list of checks whose errors are downgraded to warnings, e.g. 'IsPrivilegedUser,Swap'; the value 'all' ignores errors from every check.
  • --upload-certs: upload the control-plane certificates to the kubeadm-certs Secret.
  • --cri-socket: path of the CRI socket to connect to. If empty, kubeadm tries to auto-detect it; set it only when multiple CRIs are installed or the socket path is non-standard. Since v1.24 it must point at the right runtime: containerd uses --cri-socket unix:///run/containerd/containerd.sock, Docker (via cri-dockerd) uses --cri-socket unix:///var/run/cri-dockerd.sock, and CRI-O uses --cri-socket unix:///var/run/crio/crio.sock. Note: CRI-O and containerd manage containers differently, so their image stores are not interchangeable.

The installation output is as follows:

[init] Using Kubernetes version: v1.25.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node132] and IPs [10.96.0.1 192.168.175.132]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node132] and IPs [192.168.175.132 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node132] and IPs [192.168.175.132 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 8.005059 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node132 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node132 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 2ylbib.w5fj73vowdcbsuw8
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

Follow the prompt and run:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

If you are root, you can instead run:

export KUBECONFIG=/etc/kubernetes/admin.conf
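
The export only lasts for the current shell session; to make it permanent for root, append it to the shell profile:

echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /root/.bashrc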

Once done, you can view node information with:

kubectl get nodes

The output shows:

NAME      STATUS     ROLES           AGE     VERSION
node132   NotReady   control-plane   9m37s   v1.25.2

The command below joins a node to the cluster. Its parameters are printed when node132 finishes initializing; they are unique to each cluster, so copy the ones generated for you.

kubeadm join 192.168.175.132:6443 --token 13os7w.3o4fjxqsso52pnsy \
        --discovery-token-ca-cert-hash sha256:74135921ae251cb5a78411efe6f6ca40644aebd58a7fd1a0a2a1b9729c2b0038

Note that because current versions no longer support Docker by default, if the server runs Docker you must append --cri-socket unix:///var/run/cri-dockerd.sock to the join command. Also, the token is valid for 24 hours by default and cannot be used after it expires; create a new one as follows:

kubeadm token create --print-join-command

When generating several tokens within a short time, it is advisable to delete the old one after creating a new one.

List tokens with:

kubeadm token list

Delete a token with:

#the token id here is 1x7hzz.nvk06y4k7tisn5p8
kubeadm token delete 1x7hzz.nvk06y4k7tisn5p8

To tear down the cluster, run:

kubectl drain <node name> --delete-emptydir-data --force --ignore-daemonsets
kubectl delete node <node name>

After the nodes have been removed, reset the cluster with:

kubeadm reset

Possible problems and solutions

If initializing Kubernetes fails with an error like the following:

[root@node132 sysconfig]# kubeadm init --kubernetes-version=v1.25.0 --apiserver-advertise-address 192.168.175.132 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.25.0
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR CRI]: container runtime is not running: output: time="2022-09-13T09:40:42+08:00" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/containerd/containerd.sock: connect: no such file or directory\""
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

The cause: the default CRI for this Kubernetes version is containerd, but containerd is not installed on this system, hence the error. Kubernetes removed dockershim support in v1.24, and Docker Engine does not implement the CRI specification by default, so the two can no longer be integrated directly. You need cri-dockerd (packages: https://github.com/Mirantis/cri-dockerd/tags), a shim that gives Docker Engine a CRI-compliant interface so that Kubernetes can drive Docker through the CRI.

a. Upload the package to the server and unpack it:

tar zxf cri-dockerd-0.2.5.amd64.tgz

b. Copy the binary into a system directory and make it executable:

cp cri-dockerd/cri-dockerd /usr/bin/
chmod +x /usr/bin/cri-dockerd
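
As a quick sanity check, the binary should report its version (assuming this build supports the usual --version flag):

cri-dockerd --version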

c. Create the cri-dockerd systemd unit files

Reference unit templates are available at https://github.com/Mirantis/cri-dockerd/tree/master/packaging/systemd.

Create the service file:

cat > /usr/lib/systemd/system/cri-docker.service <<EOF
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

Note the --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7 argument appended to /usr/bin/cri-dockerd; without it the default registry k8s.gcr.io would be used to pull the pause image, and the installation would fail.

Create the socket file:

cat > /usr/lib/systemd/system/cri-docker.socket <<EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=root

[Install]
WantedBy=sockets.target
EOF
d. Modify the kubelet configuration file

vim /etc/sysconfig/kubelet

Add a new configuration line:

KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock"

Finally, reload systemd, enable and start the cri-docker service, and check its status:

systemctl daemon-reload
systemctl enable --now cri-docker
systemctl status cri-docker

If initializing Kubernetes or joining the cluster fails with:

[root@node129 etc]# kubeadm join 192.168.175.132:6443 --token rfcjaw.8t9vovillsxb5gh0 --discovery-token-ca-cert-hash sha256:74135921ae251cb5a78411efe6f6ca40644aebd58a7fd1a0a2a1b9729c2b0038 
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR CRI]: container runtime is not running: output: E1017 23:09:02.456350    4315 remote_runtime.go:948] "Status from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
time="2022-10-17T23:09:02+08:00" level=fatal msg="getting status of runtime: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

This is usually caused by configuration conflicts between the Docker installation set up by the user and the containerd that ships with the server.
In that case, back up /etc/containerd/config.toml and then delete the original file:

cp /etc/containerd/config.toml /etc/containerd/config.toml.bak
rm /etc/containerd/config.toml

Then restart the service:

systemctl restart containerd

5. Install a Pod network add-on

A Pod network is a prerequisite for Pods to communicate with each other. Kubernetes supports many network solutions; here we choose the classic flannel.

1) Download the add-on manifest and apply it

#Download the manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

At present https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml is not reachable from mainland China without a proxy. Either download https://github.com/flannel-io/flannel/blob/master/Documentation/kube-flannel.yml from an accessible GitHub mirror, or find the manifest contents on a domestic site, create an empty kube-flannel.yml file, and paste them in. A sample manifest follows:

cat > kube-flannel.yml <<EOF
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        # image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        # image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        # image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
EOF

Then apply the manifest:

#Apply the manifest
kubectl apply -f kube-flannel.yml

After applying, run ifconfig and check whether a new interface whose name starts with flannel has appeared, for example:

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::6c3e:9aff:fe53:41ea  prefixlen 64  scopeid 0x20<link>
        ether 6e:3e:9a:53:41:ea  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 27 overruns 0  carrier 0  collisions 0

2) Check that all nodes are ready

#List the nodes
kubectl get nodes

Output:

NAME      STATUS   ROLES           AGE    VERSION
node129   Ready    <none>          90s     v1.25.2
node132   Ready    control-plane   7d1h    v1.25.2
node133   Ready    <none>          6m42s   v1.25.2
#List Pods in all namespaces
kubectl get pods --all-namespaces -o wide

Output:

NAMESPACE      NAME                              READY   STATUS    RESTARTS        AGE     IP                NODE      NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-lj48s             1/1     Running   0               3m27s   192.168.175.129   node129   <none>           <none>
kube-flannel   kube-flannel-ds-pjt7f             1/1     Running   3 (103m ago)    7d      192.168.175.132   node132   <none>           <none>
kube-flannel   kube-flannel-ds-wmrqr             1/1     Running   0               8m39s   192.168.175.133   node133   <none>           <none>
kube-system    coredns-c676cc86f-65kdr           1/1     Running   2 (5d23h ago)   7d1h    10.244.0.21       node132   <none>           <none>
kube-system    coredns-c676cc86f-x6pwz           1/1     Running   2 (5d23h ago)   7d1h    10.244.0.20       node132   <none>           <none>
kube-system    etcd-node132                      1/1     Running   2 (5d23h ago)   7d1h    192.168.175.132   node132   <none>           <none>
kube-system    kube-apiserver-node132            1/1     Running   2 (5d23h ago)   7d1h    192.168.175.132   node132   <none>           <none>
kube-system    kube-controller-manager-node132   1/1     Running   2 (5d23h ago)   7d1h    192.168.175.132   node132   <none>           <none>
kube-system    kube-proxy-5vkrd                  1/1     Running   0               3m27s   192.168.175.129   node129   <none>           <none>
kube-system    kube-proxy-6dfn5                  1/1     Running   2 (5d23h ago)   7d1h    192.168.175.132   node132   <none>           <none>
kube-system    kube-proxy-q7z2r                  1/1     Running   0               8m39s   192.168.175.133   node133   <none>           <none>
kube-system    kube-scheduler-node132            1/1     Running   2 (5d23h ago)   7d1h    192.168.175.132   node132   <none>           <none>

Kubernetes Uninstallation

I. Machine Environment

  • OS: Anolis 7.9 x64, Anolis 8.4 x64
  • Docker: 1.31.1
  • Kubernetes: 1.25.0

II. Uninstallation Steps

Run the following commands:

# Clean up the node's local state
kubeadm reset -f
# Remove all kube-related packages (quoted so the shell does not expand the glob)
yum remove -y 'kube*'
# Unload the ipip kernel module
modprobe -r ipip
# List the currently loaded kernel modules
lsmod
# Delete the remaining files
rm -rf /etc/kubernetes/
rm -rf /etc/cni
rm -rf /var/lib/etcd
# Clean the yum cache
yum clean all
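
To confirm the removal is complete, a hedged final check:

#no kube-related packages should remain
rpm -qa | grep -iE 'kube|cri-tools'
#these directories should no longer exist
ls -d /etc/kubernetes /etc/cni /var/lib/etcd 2>/dev/null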
