k8s Cluster Deployment

The three machines are: master1 (172.16.22.27), master2 (172.16.22.28), master3 (172.16.22.29)

The kubeadm Approach

1. Install a Container Runtime

The mainstream runtimes today are containerd, Docker, and CRI-O; you only need to pick one, and containerd is recommended. The runtime must be installed on every machine, master and node alike.

Install Docker

For Docker usage details, see https://juejin.cn/post/6873395391112019982

Install containerd

  • Prerequisites for installation and configuration
# load the kernel modules containerd needs
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# required sysctl params; these persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sudo sysctl --system
  • Installation and configuration
# install
yum install -y containerd.io

# configure
containerd config default > /etc/containerd/config.toml # generate the default config file
# in /etc/containerd/config.toml, set the cgroup driver to systemd:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

# restart
systemctl restart containerd

Note: make sure kubelet's cgroup driver matches containerd's, otherwise k8s initialization will fail. Edit /etc/sysconfig/kubelet and set --cgroup-driver=cgroupfs or --cgroup-driver=systemd accordingly.
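A quick sanity check that the two sides agree (a sketch using the default paths from above; the kubelet files only exist after step 5):

# containerd side: should print SystemdCgroup = true
grep SystemdCgroup /etc/containerd/config.toml

# kubelet side: look for the cgroup driver in its flags/config (paths may vary)
grep -i cgroup /etc/sysconfig/kubelet /var/lib/kubelet/config.yaml 2>/dev/null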

2. Install Keepalived

The control plane achieves high availability (HA) with keepalived plus a VIP.
For the detailed configuration see: https://www.jianshu.com/p/8e8d318876c7
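For orientation, a minimal sketch of the keepalived setup; the NIC name (eth0), priority, and auth_pass are assumptions to adapt per machine, and 172.16.22.200 is the VIP used as controlPlaneEndpoint later:

cat > /etc/keepalived/keepalived.conf << 'EOF'
vrrp_instance VI_1 {
    state MASTER              # BACKUP on master2/master3
    interface eth0            # assumed NIC name, adjust to yours
    virtual_router_id 51
    priority 100              # lower on the backups, e.g. 90 and 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s-vip     # assumed shared secret
    }
    virtual_ipaddress {
        172.16.22.200         # the control-plane VIP
    }
}
EOF
systemctl enable --now keepalived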

3. Install etcd

Kubernetes uses etcd as its underlying store. An HA cluster has two topology options: external etcd and stacked mode. The external etcd approach is recommended.

  • External etcd
    The distributed etcd data store runs on dedicated nodes, separate from the control-plane nodes.
    Each etcd member communicates with the kube-apiserver on every node; this topology decouples the control plane from the etcd members.


    [Figure: external etcd topology]
  • Stacked mode
    The default HA topology. Each control-plane node runs a local etcd member that talks only to that node's kube-apiserver; the same holds for the local kube-controller-manager and kube-scheduler instances. This mode is simple to set up, but it tightly couples the etcd members to the control plane.


    [Figure: stacked etcd topology]

For installation details see https://www.jianshu.com/p/bc060063450e
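Once the external etcd cluster is up, it is worth verifying it is healthy before pointing kubeadm at it. A sketch, assuming the endpoints and certificate paths used in the ClusterConfiguration below:

ETCDCTL_API=3 etcdctl \
  --endpoints=https://172.16.22.27:2379,https://172.16.22.28:2379,https://172.16.22.29:2379 \
  --cacert=/etc/etcd/pki/ca.crt \
  --cert=/etc/etcd/pki/etcd.crt \
  --key=/etc/etcd/pki/etcd.key \
  endpoint health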

4. Configure the Aliyun Kubernetes yum Repo

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

5. Install kubeadm, kubelet, and kubectl

kubeadm, kubelet, and kubectl are collectively known as the k8s "three musketeers":

  • kubeadm: the tool that bootstraps a k8s cluster; see kubeadm -h for the full command set
  • kubelet: the daemon that runs on every node
  • kubectl: the cluster command-line tool; see kubectl -h
# needed on both control-plane and worker nodes
yum install -y kubeadm-1.20.5 kubectl-1.20.5 kubelet-1.20.5
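Then enable kubelet so it starts on boot; it will crash-loop until kubeadm init/join hands it a configuration, which is expected:

systemctl enable --now kubelet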

6. Initialize the Cluster

First, perform the following steps on the master1 machine.

# generate the default init configuration file
kubeadm config print init-defaults > /etc/kubeadm-config.yaml

Customize the components through this configuration:

# InitConfiguration
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
....
  criSocket: "/run/containerd/containerd.sock" # tell kubeadm to talk to containerd
localAPIEndpoint:
#advertiseAddress: "172.16.22.27" # IP the API server advertises; may be omitted to auto-detect
  bindPort: 6443 # default port

# ClusterConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: "v1.20.5" # version to install
controlPlaneEndpoint: "172.16.22.200:6443" # load-balanced IP or DNS name + port for the control plane; the keepalived VIP goes here
certificatesDir: "/etc/kubernetes/pki"
imageRepository: "registry.aliyuncs.com/google_containers" # image registry to pull from
clusterName: "kubernetes" # name of the cluster
dns:
   type: CoreDNS # DNS add-on
etcd:
   external:
       endpoints:
        - https://172.16.22.27:2379
        - https://172.16.22.28:2379
        - https://172.16.22.29:2379
       caFile: /etc/etcd/pki/ca.crt
       certFile: /etc/etcd/pki/etcd.crt
       keyFile: /etc/etcd/pki/etcd.key
networking:
  serviceSubnet: "10.96.0.0/16" # subnet used by Kubernetes Services
  podSubnet: "10.244.0.0/16" # subnet used by Pods; must match the flannel network (see step 7)
  dnsDomain: "cluster.local"
scheduler: {}
controllerManager: {}

# KubeletConfiguration
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd # kubelet's cgroup driver; must match containerd (see step 1)

The full parameter reference is at https://kubernetes.io/zh/docs/reference/config-api/kubeadm-config.v1beta2/

# run the initialization
kubeadm init --config /etc/kubeadm-config.yaml

The --upload-certs flag uploads the certificates shared by all control-plane instances to the cluster (encrypted into the kubeadm-certs Secret). If you prefer to do it by hand, copy the certificates to the other control-plane nodes yourself (see the copy script in step 7 below).
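If you take the automatic route, the flag is simply appended to the same init command:

kubeadm init --config /etc/kubeadm-config.yaml --upload-certs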

The output looks similar to this:

...
You can now join any number of control-plane nodes by running the following command on each as root:
kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use kubeadm init phase upload-certs to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:
  kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866
# be patient, this takes a while
# To make kubectl work for a non-root user, run the following commands (they are also part of the kubeadm init output):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the `root` user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
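At this point a quick check should show master1 registered (it stays NotReady, and coredns stays Pending, until the network plugin is installed in step 7):

kubectl get nodes
kubectl get pods -n kube-system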

To re-upload the certificates and generate a new decryption key, use the following command on a control-plane node that has already joined the cluster:

kubeadm init phase upload-certs --upload-certs

7. Install the flannel Network Plugin

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml # adjust the manifest's parameters first if needed
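If your podSubnet differs from flannel's default 10.244.0.0/16, download the manifest and edit the Network field before applying; a sketch:

curl -fsSLO https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
grep -n '"Network"' kube-flannel.yml   # must match podSubnet in /etc/kubeadm-config.yaml
kubectl apply -f kube-flannel.yml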

Copy the certificates to the remaining masters:

for node in master2 master3; do
  ssh $node "mkdir -p ~/.kube/ /etc/kubernetes/pki/"   # make sure the target dirs exist
  scp /etc/kubernetes/pki/ca.crt $node:/etc/kubernetes/pki/ca.crt
  scp /etc/kubernetes/pki/ca.key $node:/etc/kubernetes/pki/ca.key
  scp /etc/kubernetes/pki/sa.key $node:/etc/kubernetes/pki/sa.key
  scp /etc/kubernetes/pki/sa.pub $node:/etc/kubernetes/pki/sa.pub
  scp /etc/kubernetes/pki/front-proxy-ca.crt $node:/etc/kubernetes/pki/front-proxy-ca.crt
  scp /etc/kubernetes/pki/front-proxy-ca.key $node:/etc/kubernetes/pki/front-proxy-ca.key
  scp /etc/kubernetes/admin.conf $node:/etc/kubernetes/admin.conf
  scp /etc/kubernetes/admin.conf $node:~/.kube/config
done

8. Join the Other Masters

kubeadm join 172.16.22.200:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --control-plane # token and hash come from the kubeadm init output

9. Join the Worker Nodes

# join a worker node to the cluster
kubeadm join 172.16.22.200:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

# regenerate the token (e.g. after it expires) and print the full join command
kubeadm token create --print-join-command
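After joining, confirm from master1 (or any machine holding admin.conf) that every node registered:

kubectl get nodes -o wide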

Install the Dashboard (optional)

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
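The dashboard login needs a token. A common sketch is to create an admin ServiceAccount; the account name dashboard-admin here is my own choice:

kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kubernetes-dashboard:dashboard-admin
# print the login token (on k8s 1.20 the secret is still auto-created)
kubectl -n kubernetes-dashboard describe secret \
  $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')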

Uninstall

kubeadm reset -f            # tear down everything kubeadm set up on this node
modprobe -r ipip            # unload the ipip kernel module
lsmod                       # verify the module is gone
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd
yum clean all
yum remove kube*
ip link set cni0 down       # take the CNI bridge down
ip link delete cni0         # and remove it

Certificate Renewal

1. Check certificate expiration

kubeadm certs check-expiration 

2. Regenerate the certificates

kubeadm certs renew all --config /etc/kubeadm-config.yaml

Output:

[Figure: output of kubeadm certs renew]

3. Reboot the machine, or restart the kube-apiserver, kube-controller-manager, and kube-scheduler services (simply delete their pods; they are recreated automatically).
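Since these run as static pods, deleting the mirror pods is enough; kubelet recreates them from /etc/kubernetes/manifests. A sketch for master1 (the pod names carry the node's hostname):

kubectl -n kube-system delete pod \
  kube-apiserver-master1 kube-controller-manager-master1 kube-scheduler-master1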

For versions before v1.15, see https://github.com/yuyicai/update-kube-cert/

Common Issues

  • After a k8s master machine reboots, kubelet fails to start: swap gets re-enabled by the reboot (see the snippet after this list for a permanent fix)
  • "cni0" already has an IP address different from x.x.x.x
    删除cni0会自动重建,
    ifconfig cni0 down
    ip link delete cni0
  • A cloud host cannot use its public IP
    kubeadm begins initializing the master node, but etcd fails to start because its config file is wrong and must be edited. In "/etc/kubernetes/manifests/etcd.yaml", change --listen-client-urls and --listen-peer-urls to the private IP: the public IP is not configured on the NIC, so etcd cannot listen on it.
  • flannel installation fails: Error registering network: failed to acquire lease
    Make sure the pod-network-cidr used when initializing the master matches the network in kube-flannel.yml;
    the default is 10.244.0.0/16
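To permanently fix the swap issue from the first item above (a sketch; check your /etc/fstab before relying on the sed pattern):

swapoff -a                             # turn swap off now
sed -i '/ swap / s/^/#/' /etc/fstab    # comment out swap entries so it stays off after reboot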
