Deploying Kubernetes 1.26: one master, two worker nodes

Table of Contents

Part 1: Basic environment

Part 2: Base configuration (all nodes)

Part 3: Install containerd (all nodes)

Part 4: Deploy Kubernetes with kubeadm

Part 5: Initialize the Kubernetes cluster

Part 6: Join worker nodes to the cluster

Part 7: Troubleshooting and extras


Part 1: Basic environment

1) Host environment

Hostname      IP address        OS version
k8s-master    192.168.44.141    CentOS 7.9
k8s-node01    192.168.44.144    CentOS 7.9
k8s-node02    192.168.44.143    CentOS 7.9

2) Software versions

Name               Version

containerd.io      1.6.21
kubernetes         1.26.0

Part 2: Base configuration (all nodes)

1. Configure the required yum repositories

cd /etc/yum.repos.d/
mkdir bak ; mv *.repo bak/
cat /etc/yum.repos.d/Centos-7.repo    # create this file with the following content (the Aliyun CentOS 7 mirror repo)
[base]
name=CentOS-$releasever - Base - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/os/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/os/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#released updates 
[updates]
name=CentOS-$releasever - Updates - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/updates/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/extras/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/centosplus/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/centosplus/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/centosplus/$basearch/
gpgcheck=1
enabled=0
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#contrib - packages by Centos Users
[contrib]
name=CentOS-$releasever - Contrib - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/contrib/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/contrib/$basearch/
        http://mirrors.cloud.aliyuncs.com/centos/$releasever/contrib/$basearch/
gpgcheck=1
enabled=0
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

cat epel.repo 
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch/debug
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/7/SRPMS
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1


# Add the Kubernetes repository
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF


cat docker-ce.repo 
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/source/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg



cat  epel-testing.repo
[epel-testing]
name=Extra Packages for Enterprise Linux 7 - Testing - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/testing/7/$basearch
metalink=https://mirrors.fedoraproject.org/metalink?repo=testing-epel7&arch=$basearch
failovermethod=priority
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

[epel-testing-debuginfo]
name=Extra Packages for Enterprise Linux 7 - Testing - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/testing/7/$basearch/debug
metalink=https://mirrors.fedoraproject.org/metalink?repo=testing-debug-epel7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

[epel-testing-source]
name=Extra Packages for Enterprise Linux 7 - Testing - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/testing/7/SRPMS
metalink=https://mirrors.fedoraproject.org/metalink?repo=testing-source-epel7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

2. Set hostnames (run on all three hosts and record them in /etc/hosts)

1) Set the hostname on each node:

hostnamectl set-hostname k8s-master && bash

hostnamectl set-hostname k8s-node01 && bash

hostnamectl set-hostname k8s-node02 && bash

2) On the master node, add the worker entries to the hosts file:

cat /etc/hosts
192.168.44.141 k8s-master
192.168.44.144 k8s-node01
192.168.44.143 k8s-node02

3) Distribute the hosts file to the two worker nodes in a loop:

root@k8s-master(192.168.44.141)~>for i in 144 143; do scp /etc/hosts 192.168.44.$i:/etc/ ; done

 3. Install the NTP service (all nodes)

yum install chrony ntpdate -y
sed "s/^server/#server/g" /etc/chrony.conf
echo 'server tiger.sina.com.cn iburst' >> /etc/chrony.conf
echo 'server ntp1.aliyun.com iburst' >> /etc/chrony.conf
systemctl enable chronyd ; systemctl start chronyd
ntpdate tiger.sina.com.cn
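
To confirm that time synchronization is actually working, a quick check (standard chrony commands; the output naturally differs per host):

chronyc sources -v    # the selected NTP source is marked with '*'
chronyc tracking      # shows the current offset from the reference clock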

 4. Disable SELinux and firewalld (all nodes)

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
systemctl disable firewalld; systemctl stop firewalld

A reboot of the host is recommended after these changes.
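
If you prefer not to reboot immediately, the SELinux change can be applied to the running system and both services verified; this is an optional extra check, not part of the original steps:

setenforce 0                      # permissive for the current boot; the config change takes full effect after a reboot
getenforce                        # Permissive (or Disabled after a reboot)
systemctl is-active firewalld     # should print: inactive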

 5. Disable swap and adjust parameters (all nodes)

Since Kubernetes 1.8, the kubelet requires swap to be disabled; with the default configuration it will not start otherwise. Disable swap as follows:

Turn off swap:
swapoff -a

Comment out the swap entry:
vim /etc/fstab

After disabling it, verify with the free command that swap is off (a non-interactive sketch follows after this step).

Adjust the swappiness parameter:
vim /etc/sysctl.d/99-kubernetes-cri.conf
...
vm.swappiness = 0

sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf
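
A non-interactive sketch of the same steps, handy when preparing several nodes (the sed pattern assumes the swap entry in /etc/fstab is not already commented out):

swapoff -a
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab    # comment out the swap mount
free -m                                                               # the Swap row should show 0 across the board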

6. Load kernel modules (all nodes)

cat << EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF 

Run the following commands to load them immediately:

modprobe overlay
modprobe br_netfilter

7. Configure kernel parameters (all nodes)

cat << EOF > /etc/sysctl.d/99-kubernetes-cri.conf
vm.swappiness = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces=28633
EOF

Run the following command to apply them:

sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf
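
To verify that the modules are loaded and the parameters are in effect:

lsmod | grep -e overlay -e br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward vm.swappiness    # expect 1, 1 and 0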

 8. Configure IPVS support (all nodes)

IPVS has been merged into the mainline kernel, so enabling IPVS mode for kube-proxy only requires loading the following kernel modules first:

ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4


Run the following on all three servers:

cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF


chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4


Next, make sure the ipset package is installed on every node. To make it easier to inspect IPVS proxy rules, it is also worth installing the management tool ipvsadm.

yum install -y ipset ipvsadm
Note: if these prerequisites are not met, kube-proxy will fall back to iptables mode even when its configuration enables ipvs mode.
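
The /etc/sysconfig/modules/ipvs.modules approach above is a legacy convention and whether it is executed at boot depends on the distribution. As a systemd-native alternative (an extra suggestion, not part of the original steps), the same modules can be declared in modules-load.d:

cat << EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
systemctl restart systemd-modules-load
lsmod | grep -e ip_vs -e nf_conntrack_ipv4    # note: on kernels >= 4.19 the module is nf_conntrack instead of nf_conntrack_ipv4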
 

 

Part 3: Install containerd (all nodes)

Install containerd. First check the available containerd.io package:
yum info containerd.io
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Installed Packages
Name        : containerd.io
Arch        : x86_64
Version     : 1.6.21    # version number
Release     : 3.1.el7
Size        : 114 M
Repo        : installed
From repo   : docker-ce-stable
Summary     : An industry-standard container runtime
URL         : https://containerd.io
License     : ASL 2.0
Description : containerd is an industry-standard container runtime with an emphasis on
            : simplicity, robustness and portability. It is available as a daemon for Linux
            : and Windows, which can manage the complete container lifecycle of its host
            : system: image transfer and storage, container execution and supervision,
            : low-level storage and network attachments, etc.


yum install -y containerd.io
 

 1. Configure containerd

cd /etc/containerd/
mv config.toml config.toml.orig
containerd config default > config.toml

In the generated configuration file /etc/containerd/config.toml, change:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true	# change false to true


Then modify the following in /etc/containerd/config.toml:

[plugins."io.containerd.grpc.v1.cri"]
  ...
  # sandbox_image = "k8s.gcr.io/pause:3.6"
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"	#这里一定要注意,要根据下载到本地 pause镜像的版本来进行修改,否则初始化会过不去。
  
  
  Add an acceleration mirror for image downloads:

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
          [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://docker.m.daocloud.io"]
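
The two single-line changes above can also be scripted. A sketch with sed (the multi-line registry mirror block still has to be added by hand):

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml    # verify both edits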
		  
Enable containerd at boot and start it:
systemctl enable containerd ; systemctl start containerd

2. Check version information

ctr version

Client:
  Version:  1.6.21
  Revision: 3dce8eb055cbb6872793272b4f20ed16117344f8
  Go version: go1.19.9

Server:
  Version:  1.6.21
  Revision: 3dce8eb055cbb6872793272b4f20ed16117344f8
  UUID: f0b52f6f-aea7-4c9b-a277-e46289e23b40


runc -v

runc version 1.1.7
commit: v1.1.7-0-g860f061
spec: 1.0.2-dev
go: go1.19.9
libseccomp: 2.3.1

Part 4: Deploy Kubernetes with kubeadm

1. Install the packages

yum install -y kubeadm-1.26.0 kubelet-1.26.0 kubectl-1.26.0

Enable the kubelet service at boot on every node:

systemctl enable kubelet
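
The kubeadm package pulls in cri-tools, so the crictl client is normally installed as well. Pointing it at the containerd socket makes it easier to inspect images and containers later; this is an optional extra, and the socket path assumes the default containerd installation:

cat << EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
crictl info | head    # should return runtime status JSON without connection errors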

2. Configure kubectl auto-completion

yum -y install bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

3. Generate the kubeadm init configuration file

kubeadm config print init-defaults > kubeadm-init.yml

Edit the generated init file; every line with a comment below must be modified or added accordingly.

cat kubeadm-init.yml

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 0s    # change the token TTL so it never expires
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.199.101    # change to the k8s-master node's IP address
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers    # switch to a domestic (China) mirror registry
kind: ClusterConfiguration
kubernetesVersion: 1.26.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16    # specify the Pod network CIDR
---
# declare systemd as the cgroup driver
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
failSwapOn: false
---
# enable ipvs mode for kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

 4. Check /etc/kubernetes/manifests/kube-controller-manager.yaml (generated by kubeadm init); if the controller-manager is not configured to allocate a Pod CIDR to each node, add the two flags marked below:

cat  /etc/kubernetes/manifests/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key  
    - --use-service-account-credentials=true
    - --allocate-node-cidrs=true   # add this line, otherwise the controller-manager reports that no Pod CIDR is allocated
    - --cluster-cidr=10.244.0.0/16 # add this line for the same reason; it must match podSubnet
    image: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.26.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10257
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10257
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      name: flexvolume-dir
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/controller-manager.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      type: DirectoryOrCreate
    name: flexvolume-dir
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/kubernetes/controller-manager.conf
      type: FileOrCreate
    name: kubeconfig
status: {}

5. List and pull the required images

kubeadm config images list --config=kubeadm-init.yml

registry.aliyuncs.com/google_containers/kube-apiserver:v1.26.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.26.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.26.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.26.0
registry.aliyuncs.com/google_containers/pause:3.9
registry.aliyuncs.com/google_containers/etcd:3.5.6-0
registry.aliyuncs.com/google_containers/coredns:v1.9.3
Note: confirm again that registry.aliyuncs.com/google_containers/pause:3.9 is exactly the pause image and tag that must be set in /etc/containerd/config.toml above.

...
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
...

After modifying it, remember to restart containerd; this must be done on all nodes:

systemctl restart containerd

Pull the required images according to the configuration file:

kubeadm config images pull --config=kubeadm-init.yml

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.26.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.26.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.26.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.26.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.6-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.9.3

Part 5: Initialize the Kubernetes cluster

# run the initialization and record the output in the kubeadm-init.log file
kubeadm init --config=kubeadm-init.yml | tee kubeadm-init.log

cat kubeadm-init.log
The log output is explained below, keyed by the [ ] tag at the start of each line.

[init]: declares that cluster initialization is starting;
[preflight]: pre-initialization checks;
[certs]: generation of the various certificates;
[kubeconfig]: generation of the kubeconfig files;
[kubelet-start]: generation of the kubelet configuration file "/var/lib/kubelet/config.yaml";
[control-plane]: creation of the static Pods for the apiserver, controller-manager and scheduler from the YAML files in /etc/kubernetes/manifests, also known as the control plane;
[bootstrap-token]: generation of the bootstrap token; record it, it is used later with kubeadm join when adding nodes to the cluster;
[addons]: installation of the basic add-ons CoreDNS and kube-proxy.

When initialization completes, it prompts you to run the following commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

The log above also prints the command for joining worker nodes to the cluster:

kubeadm join 192.168.199.101:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:fff9cb2c367c904d31994abb5a460b2e16305a4b687744ad1f31af06c02d37d7
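
If this join command is ever lost, it can be regenerated on the master at any time (with ttl: 0s the original token never expires, but a fresh one works just as well):

kubeadm token list                            # show existing bootstrap tokens
kubeadm token create --print-join-command     # print a ready-to-use kubeadm join command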

1. Check the cluster health

Check whether the cluster is healthy:

kubectl get cs

Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}
scheduler            Healthy   ok
The cluster is healthy. If errors occur instead, reset with kubeadm reset, reboot the host, and run the initialization again.
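
Since the KubeProxyConfiguration above requested ipvs mode, it is worth confirming that it took effect once the kube-proxy Pods are up. A quick check, assuming kube-proxy's default metrics address of 127.0.0.1:10249 (an extra step, not part of the original write-up):

curl -s 127.0.0.1:10249/proxyMode ; echo    # should print: ipvs
ipvsadm -Ln                                 # IPVS virtual servers for the 10.96.0.0/12 service network should be listed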

2. Install the Flannel network plugin (the manifest is long; a matching version can also be downloaded online)

Edit kube-flannel.yml:

cat kube-flannel.yml
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.1.2
        imagePullPolicy: IfNotPresent
       #image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.21.5
        imagePullPolicy: IfNotPresent
       #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.21.5
        imagePullPolicy: IfNotPresent
       #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

3. Apply the Flannel manifest

kubectl apply -f  kube-flannel.yml
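
After applying the manifest, the Flannel Pods should reach Running and the nodes should turn Ready (Pod names and timings differ per cluster):

kubectl -n kube-flannel get pods -o wide    # one kube-flannel-ds Pod per node, STATUS Running
kubectl get nodes                           # the master should now report Ready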

Part 6: Join worker nodes to the cluster

1. This part is executed only on k8s-node01 and k8s-node02!

Install the packages on the worker nodes:
yum install -y kubeadm-1.26.0 kubelet-1.26.0

Join the cluster.
Tip: before joining, make sure the modifications in /etc/containerd/config.toml are correct; the simplest way is to copy the configuration file directly from the k8s-master node:

scp k8s-master:/etc/containerd/config.toml /etc/containerd/config.toml 

After the configuration is in place, restart the service:
systemctl restart containerd

Enable kubelet at boot:
systemctl enable kubelet
Tip: without this, joining the cluster produces the warning:
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'

Join the cluster; this command is printed in the log right after kubeadm init succeeds on the k8s-master node.
kubeadm join 192.168.199.101:6443 --token abcdef.0123456789abcdef \
         --discovery-token-ca-cert-hash sha256:fff9cb2c367c904d31994abb5a460b2e16305a4b687744ad1f31af06c02d37d7

If you see the success message above, the node has joined the cluster. Tip: if joining fails, reset with kubeadm reset and try again.

Check the cluster by running the following on k8s-master:
kubectl get nodes -o wide

NAME         STATUS   ROLES           AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
k8s-master   Ready    control-plane   40m     v1.26.0   192.168.199.101   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   containerd://1.6.21
k8s-node01   Ready    <none>          4m15s   v1.26.0   192.168.199.102   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   containerd://1.6.21

k8s-node01 has joined successfully; repeat the same steps for k8s-node02.


2. Cluster status

Node status:
kubectl get nodes -o wide

NAME         STATUS   ROLES           AGE    VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
k8s-master   Ready    control-plane   49m    v1.26.0   192.168.199.101   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   containerd://1.6.21
k8s-node01   Ready    <none>          12m    v1.26.0   192.168.199.102   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   containerd://1.6.21
k8s-node02   Ready    <none>          2m9s   v1.26.0   192.168.199.103   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   containerd://1.6.21

Part 7: Troubleshooting and extras

If kubectl get node shows incorrect node names, the following procedure can be used.

Because the names in kubectl get node are wrong, the nodes have to be removed and the cluster cleaned up and re-deployed:
[root@k8s-master1 ~]# kubectl delete node k8s-node-n1
node "k8s-node-n1" deleted
[root@k8s-master1 ~]# kubectl delete node k8s-node-n2

Check the CSRs:
kubectl get csr
The connection to the server 192.168.32.29:6443 was refused - did you specify the right host or port?
Errors already start to appear at this point.

Now clean up all of the cluster's configuration:
kubeadm reset

Manually delete the config file and flush iptables:
rm -rf $HOME/.kube/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
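
kubeadm reset does not remove everything; depending on what was deployed, the following additional cleanup is often needed as well (optional, not part of the original write-up):

rm -rf /etc/cni/net.d      # leftover CNI configuration from Flannel
ipvsadm --clear            # flush IPVS rules created by kube-proxy
rm -rf /var/lib/cni/       # CNI runtime state, if present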

Edit kubeadm-init.yml and first set the name under nodeRegistration to the master's new hostname:

nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  name: k8s-master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master

After the change, re-run the initialization:
kubeadm init --config=kubeadm-init.yml | tee kubeadm-init.log
The following commands set up kubectl access to the cluster for a regular user (root can run them as well):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Re-deploy the kube-flannel network plugin:
kubectl create -f kube-flannel.yml


Finally, re-join the worker nodes to the cluster and the recovery is complete.
