Kubernetes Study Notes: Deployment

Over the past six months I have been dipping into Kubernetes whenever I had time, but both the time and the knowledge were fragmented, so I decided to organize what I have learned and write it down systematically. This is the first post, and it also sets up a base environment for the related studies that follow. Every step below was actually tested, no tricks; a newcomer can follow it as-is to build a cluster for self-study.

Prerequisites for the steps below:

  1. A working proxy to the outside internet, to save time. (I am not bothering with domestic mirrors; fiddling with them is not the point of this exercise.)
  2. Three VMs created in VMware (two will do if resources are tight): 2 vCPUs, 6 GB RAM, one NIC in NAT mode, CentOS 7 minimal install; yum takes care of the rest.

I. Preparation

1. Set the hostnames

On the master node:

hostnamectl set-hostname kubemaster

On the worker nodes:

hostnamectl set-hostname kubenode1
hostnamectl set-hostname kubenode2
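
Optionally, confirm on each node that the static hostname took effect:

hostnamectl status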

2. Configure static IPs

On each node (master, node1, node2), adjust the network configuration to match your environment:

vi /etc/sysconfig/network-scripts/ifcfg-ens33

An example:

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
IPADDR=192.168.97.140
NETMASK=255.255.255.0
GATEWAY=192.168.97.2
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
DEVICE=ens33
ONBOOT=yes
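
After editing, restart networking so the static address takes effect (this assumes the classic network service of CentOS 7; if your install manages the NIC through NetworkManager only, restart that instead). If name resolution later fails, you may also need a DNS1= entry in the file above:

systemctl restart network
ip addr show ens33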

3. Map hostnames to IP addresses

Apply the following configuration on every node:

vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.97.140 kubemaster
192.168.97.141 kubenode1
192.168.97.142 kubenode2
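
Optionally, verify that the names resolve from each node:

ping -c 1 kubenode1
ping -c 1 kubenode2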

4. Ensure iptables sees bridged traffic

4.1 Confirm the br_netfilter module is loaded

Check whether the system has loaded the br_netfilter module:

[root@kubemaster ~]# lsmod | grep br_netfilter
[root@kubemaster ~]#

Load the br_netfilter module and check again:

[root@kubemaster ~]# modprobe br_netfilter
[root@kubemaster ~]# lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  2 br_netfilter,ebtable_broute
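
modprobe does not persist across reboots. To load the module automatically at boot, one common approach (a sketch; the file name k8s.conf is arbitrary) is:

cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF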

4.2 Set net.bridge.bridge-nf-call-iptables correctly

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
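
To confirm both values are now 1:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables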

5. Disable the firewall

If the firewall is enabled on the hosts, the ports required by the various Kubernetes components must be opened; see the tables below. For simplicity, disable the firewall on every node:

systemctl stop firewalld
systemctl disable firewalld
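
Optionally confirm that firewalld is stopped and will stay off after a reboot:

systemctl is-active firewalld    # expect: inactive
systemctl is-enabled firewalld   # expect: disabled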

Control-plane node(s)

Protocol   Direction   Port Range    Purpose                    Used By
TCP        Inbound     6443*         Kubernetes API server      All
TCP        Inbound     2379-2380     etcd server client API     kube-apiserver, etcd
TCP        Inbound     10250         kubelet API                Self, Control plane
TCP        Inbound     10251         kube-scheduler             Self
TCP        Inbound     10252         kube-controller-manager    Self

Worker node(s)

Protocol   Direction   Port Range    Purpose               Used By
TCP        Inbound     10250         kubelet API           Self, Control plane
TCP        Inbound     30000-32767   NodePort Services†    All

6. Install Docker CE

6.1 Install the required packages

yum install -y yum-utils device-mapper-persistent-data lvm2

6.2 Add the Docker yum repository

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

6.3 Install Docker CE

yum update -y && yum install -y \
  containerd.io-1.2.13 \
  docker-ce-19.03.11 \
  docker-ce-cli-19.03.11
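
Afterwards, the pinned versions can be confirmed with:

docker --version
containerd --version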

6.4 Create the Docker config directory

mkdir /etc/docker

6.5 Configure the Docker daemon

cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

6.6 Create the Docker service drop-in directory

mkdir -p /etc/systemd/system/docker.service.d

6.7 Restart Docker

systemctl daemon-reload
systemctl restart docker

6.8 Enable Docker at boot

systemctl enable docker
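
Because kubelet expects the systemd cgroup driver configured in daemon.json above, it is worth confirming Docker actually picked it up:

docker info | grep -i 'cgroup driver'    # expect: systemd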

7. Disable the swap partition

Turn off swap for the running system:

swapoff -a

Disable the automatic swap mount by commenting it out in /etc/fstab:

vi /etc/fstab
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=46237b6d-cb52-4eb9-866d-18ecf3b2fe52 /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
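
Verify that swap is fully off:

free -h    # the Swap line should read 0B across the board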

8. Install kubeadm, kubelet, and kubectl

Configure the yum repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

Set SELinux to permissive mode:

# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
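
A quick check that SELinux is now permissive:

getenforce    # expect: Permissive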

Install kubelet, kubeadm, and kubectl:

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
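
kubelet will restart in a crash loop until kubeadm init (or join) hands it a configuration; that is expected at this stage. The installed versions can be checked with:

kubeadm version -o short
kubectl version --client --short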

II. Creating a Cluster with kubeadm

1. Set the OS proxy (all nodes)

export https_proxy=http://192.168.97.1:10080
export http_proxy=http://192.168.97.1:10080
export no_proxy=127.0.0.1,localhost,192.168.97.0/24,10.96.0.0/12,10.122.0.0/16

2. Set the Docker proxy (all nodes)

vi /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.97.1:10080"
Environment="HTTPS_PROXY=http://192.168.97.1:10080"
Environment="NO_PROXY=127.0.0.1,localhost,192.168.97.0/24,10.96.0.0/12,10.122.0.0/16"

Reload and restart the Docker service:

systemctl daemon-reload
systemctl restart docker
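
To confirm Docker picked up the proxy settings:

systemctl show --property=Environment docker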

3. Initialize the master node

kubeadm init --kubernetes-version=1.20.0 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.122.0.0/16

4. Configure kubectl

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
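
kubectl should now reach the API server; the master will typically report NotReady until a CNI plugin is installed in the next step:

kubectl get nodes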

5. Install a CNI plugin (Calico)

Flannel is too simplistic and does not support Network Policy, so Calico is used here, deployed with the Kubernetes API datastore option, which suits clusters of 50 nodes or fewer.

5.1 Download the Calico manifest

curl https://docs.projectcalico.org/manifests/calico.yaml -O
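
One caveat worth checking before applying: depending on the Calico version, the manifest ships with a default pod CIDR (often 192.168.0.0/16, commented out as CALICO_IPV4POOL_CIDR) that does not match the --pod-network-cidr passed to kubeadm init above. If so, edit calico.yaml and set it explicitly, for example:

- name: CALICO_IPV4POOL_CIDR
  value: "10.122.0.0/16"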

5.2 Install Calico

kubectl apply -f calico.yaml

5.3 Verify the Calico status

kubectl get pods --all-namespaces

NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-66956989f4-qnpl7   0/1     Running   0          2m23s
kube-system   calico-node-26ppf                          1/1     Running   0          2m23s
kube-system   coredns-74ff55c5b-nqz2w                    1/1     Running   0          10h
kube-system   coredns-74ff55c5b-qpbfs                    1/1     Running   0          10h
kube-system   etcd-kubemaster                            1/1     Running   0          10h
kube-system   kube-apiserver-kubemaster                  1/1     Running   0          10h
kube-system   kube-controller-manager-kubemaster         1/1     Running   0          10h
kube-system   kube-proxy-l2lz5                           1/1     Running   0          10h
kube-system   kube-scheduler-kubemaster                  1/1     Running   0          10h

5.4 Join the worker nodes to the cluster

On each worker node, run the command that the master printed after a successful kubeadm init:

kubeadm join 192.168.97.140:6443 --token 4r07pr.pzfvc5r3169d9a2x \
    --discovery-token-ca-cert-hash sha256:6425c39be726c438d4c06610091d3f0cd1312f8f440fbaa27d9efc299efb6d7d

A newly joined node needs some time to pull and initialize images, so there will be a delay between running the command and the node becoming Ready.
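
If the token from kubeadm init has expired (tokens last 24 hours by default), generate a fresh join command on the master:

kubeadm token create --print-join-command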

6. Verify the cluster status

kubectl get pods -n kube-system -o wide

NAME                                       READY   STATUS    RESTARTS   AGE    IP               NODE         NOMINATED NODE   READINESS GATES
calico-kube-controllers-66956989f4-qnpl7   1/1     Running   0          25m    10.122.141.3     kubemaster   <none>           <none>
calico-node-26ppf                          1/1     Running   0          25m    192.168.97.140   kubemaster   <none>           <none>
calico-node-qtp7m                          1/1     Running   0          5m1s   192.168.97.142   kubenode2    <none>           <none>
calico-node-vmv7f                          1/1     Running   0          10m    192.168.97.141   kubenode1    <none>           <none>
coredns-74ff55c5b-nqz2w                    1/1     Running   0          11h    10.122.141.2     kubemaster   <none>           <none>
coredns-74ff55c5b-qpbfs                    1/1     Running   0          11h    10.122.141.1     kubemaster   <none>           <none>
etcd-kubemaster                            1/1     Running   0          11h    192.168.97.140   kubemaster   <none>           <none>
kube-apiserver-kubemaster                  1/1     Running   0          11h    192.168.97.140   kubemaster   <none>           <none>
kube-controller-manager-kubemaster         1/1     Running   0          11h    192.168.97.140   kubemaster   <none>           <none>
kube-proxy-l2lz5                           1/1     Running   0          11h    192.168.97.140   kubemaster   <none>           <none>
kube-proxy-qp4bh                           1/1     Running   0          5m1s   192.168.97.142   kubenode2    <none>           <none>
kube-proxy-thngx                           1/1     Running   0          10m    192.168.97.141   kubenode1    <none>           <none>
kube-scheduler-kubemaster                  1/1     Running   0          11h    192.168.97.140   kubemaster   <none>           <none>

kubectl get nodes

NAME         STATUS   ROLES                  AGE     VERSION
kubemaster   Ready    control-plane,master   11h     v1.20.2
kubenode1    Ready    <none>                 11m     v1.20.2
kubenode2    Ready    <none>                 5m56s   v1.20.2
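
As a final smoke test (a hypothetical example; any image will do), schedule a workload and watch it land on a worker:

kubectl create deployment nginx --image=nginx
kubectl get pods -o wide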
