1: Overview of k8s 1.27.x
Kubernetes 1.27 was officially released on April 11, 2023 (Pacific Time). It arrived four months after the previous release and is the first release of 2023.
The release team tracked 60 enhancements in this version, considerably more than in previous releases. Of these, 13 features graduated to stable, 29 existing features were improved and promoted to Beta, and another 18 are Alpha-level features, most of them brand new.
1.1: What's new in k8s 1.27.x
Image registry switched from k8s.gcr.io to registry.k8s.io
KEP-1847: StatefulSet PVC auto-deletion graduates to Beta
KEP-3453: Improved kube-proxy iptables-mode performance in large clusters
KEP-2831 and KEP-647: Tracing for the API server and kubelet graduates to Beta
KEP-3077: Contextual logging
KEP-1287: In-place vertical scaling of Pod resources
KEP-3386: Kubelet event-driven PLEG promoted to Beta
KEP-3476: Volume Group snapshots Alpha (API)
KEP-3838 and KEP-3521: Pod scheduling readiness enhancements
KEP-3243: Scheduling optimizations during Deployment rolling updates
KEP-2876: Validating CRDs with the Common Expression Language (CEL)
KEP-2258: Node log query (see the sketch after this list)
KEP-3659: kubectl apply --prune redesign
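Of these, KEP-2258 (node log query) can be tried directly from the command line: the API server proxies log queries to a node's kubelet. A minimal sketch, assuming the alpha NodeLogQuery feature gate is enabled on the kubelet of a node named node01:
# kubectl get --raw "/api/v1/nodes/node01/proxy/logs/?query=kubelet"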
Deployment environment:
No. | OS and version | Notes |
---|---|---|
1 | CentOS 7.9 | |
CPU | Memory | Disk | Role | Hostname |
---|---|---|---|---|
2C | 2G | 50GB | master | master01 |
2C | 2G | 50GB | worker (node) | node01 |
2C | 2G | 50GB | worker (node) | node02 |
This deployment uses three hosts to build the kubernetes cluster: one master node, named master01, and two worker nodes, named node01 and node02.
On the master node
# hostnamectl set-hostname master01
On the worker01 node
# hostnamectl set-hostname node01
On the worker02 node
# hostnamectl set-hostname node02
Configure the hosts file on all cluster hosts.
# cat >> /etc/hosts << EOF
192.168.1.10 master01
192.168.1.11 node01
192.168.1.12 node02
EOF
# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.10 master01
192.168.1.11 node01
192.168.1.12 node02
Run on all hosts.
Disable the existing firewalld firewall
# systemctl disable firewalld
# systemctl stop firewalld
# firewall-cmd --state
not running
Run on all hosts. Changing the SELinux configuration requires an OS reboot to take full effect.
# setenforce 0
# sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
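To check the current mode after the change (it should report Permissive now, and Disabled after the reboot):
# getenforce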
Run on all hosts. On a minimal OS installation, the ntpdate package must be installed first.
# yum -y install ntpdate
# crontab -l
0 */1 * * * /usr/sbin/ntpdate time1.aliyun.com
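If the entry is not present yet, one way to add it non-interactively (a sketch that appends to root's existing crontab):
# (crontab -l 2>/dev/null; echo "0 */1 * * * /usr/sbin/ntpdate time1.aliyun.com") | crontab -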
Run on all hosts.
Import the elrepo GPG key
# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
Install the elrepo YUM repository
# yum -y install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
Install the kernel-ml package; ml is the mainline (latest stable) kernel, while lt is the long-term support kernel
# yum --enablerepo="elrepo-kernel" -y install kernel-ml.x86_64
Set the default grub2 boot entry to 0
# grub2-set-default 0
Regenerate the grub2 configuration file
# grub2-mkconfig -o /boot/grub2/grub.cfg
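To confirm that entry 0 is the newly installed kernel, the grub2 menu entries can be listed first (a quick sketch that numbers the menuentry titles):
# awk -F\' '$1=="menuentry " {print i++ " : " $2}' /boot/grub2/grub.cfg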
After the update, reboot so that the upgraded kernel takes effect.
# reboot
After rebooting, verify that the running kernel matches the updated version
# uname -r
Run on all hosts.
Create the bridge-filtering and kernel-forwarding configuration file
# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
Load the br_netfilter module
# modprobe br_netfilter
Check that the module is loaded
# lsmod | grep br_netfilter
br_netfilter 22256 0
bridge 151336 1 br_netfilter
Apply the bridge-filtering and kernel-forwarding configuration file
# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
Run on all hosts.
Install ipset and ipvsadm
# yum -y install ipset ipvsadm
Configure how the ipvs modules are loaded
Add the modules that need to be loaded. The heredoc below is a commonly used list for kube-proxy's IPVS mode (nf_conntrack is used rather than nf_conntrack_ipv4 on newer kernels such as the 6.x kernel installed earlier):
# cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
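Verify that the modules are loaded:
# lsmod | grep -e ip_vs -e nf_conntrack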
Permanently disabling the swap partition requires modifying /etc/fstab and rebooting the OS; if you do not reboot right away, swap can be turned off temporarily with swapoff -a
# swapoff -a
# sed -i 's/.*swap.*/#&/g' /etc/fstab
# cat /etc/fstab
......
# /dev/mapper/centos-swap swap swap defaults 0 0
The sed command above commented out the swap line by adding a # at the beginning of the line
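To confirm swap is off (after the reboot, or after swapoff -a):
# free -m    # the Swap line should show 0 total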
Use the Alibaba Cloud open-source software mirror site.
# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# yum -y install docker-ce
# systemctl enable --now docker
/etc/docker/daemon.json does not exist by default and must be created
Add the following content to /etc/docker/daemon.json
# cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://84bkfzte.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://84bkfzte.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
# systemctl daemon-reload
# systemctl restart docker
# docker info
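In the docker info output, confirm that the cgroup driver change took effect:
# docker info | grep -i "cgroup driver"
Cgroup Driver: systemd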
Run on all hosts.
Download the cri-dockerd package; note that downloading from GitHub may require a proxy
# wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.1/cri-dockerd-0.3.1-3.el7.x86_64.rpm
Install cri-dockerd
# rpm -ivh cri-dockerd-0.3.1-3.el7.x86_64.rpm
Change the pause image address to a domestic registry; otherwise the kubelet cannot pull the image and fails to start
# vi /usr/lib/systemd/system/cri-docker.service
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7
Start cri-dockerd
# systemctl daemon-reload
# systemctl enable cri-docker && systemctl start cri-docker
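Verify that cri-dockerd is running and that the CRI socket exists:
# systemctl is-active cri-docker
active
# ls /var/run/cri-dockerd.sock
/var/run/cri-dockerd.sock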
 | kubeadm | kubelet | kubectl |
---|---|---|---|
Version | 1.27.4 | 1.27.4 | 1.27.4 |
Install location | All cluster hosts | All cluster hosts | All cluster hosts |
Purpose | Initializes and manages the cluster | Receives api-server instructions and manages the Pod lifecycle | Command-line tool for managing cluster applications |
Install on all cluster nodes
# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install
# yum install -y kubelet-1.27.4 kubeadm-1.27.4 kubectl-1.27.4
To keep the cgroup driver used by docker consistent with the one used by the kubelet, it is recommended to modify the following file.
# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
Enable kubelet at boot; since no configuration file has been generated yet, it will start automatically after cluster initialization
# systemctl enable kubelet && systemctl restart kubelet
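Optionally confirm the installed versions on each host:
# kubeadm version -o short
# kubelet --version
# kubectl version --client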
Run on the master node
Note: change apiserver-advertise-address to your own IP; do not change pod-network-cidr, because it matches Calico's default address range:
[root@k8s-master ~]# kubeadm init \
--apiserver-advertise-address=192.168.1.10 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.27.4 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--cri-socket=unix:///var/run/cri-dockerd.sock \
--ignore-preflight-errors=all
Initialization output
[init] Using Kubernetes version: v1.27.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0415 17:50:39.742407 3689 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.0, falling back to the nearest etcd version (3.5.7-0)
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.1.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.1.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
W0415 17:51:04.317762 3689 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.0, falling back to the nearest etcd version (3.5.7-0)
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.002359 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 6t01k9.671ufvohi6l6fu7g
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.10:6443 --token 6t01k9.671ufvohi6l6fu7g \
--discovery-token-ca-cert-hash sha256:56d66ba010a67f0668f301984204f8e3f0c189bd4cba9ff20ce2289aabf24259
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master ~]# ls /root/.kube/
cache config
Run on all worker nodes
To print the join command, run the following on the master
# kubeadm token create --print-join-command
kubeadm join 192.168.1.10:6443 --token i3ny5z.bsb1et1pq3lqhj3q --discovery-token-ca-cert-hash sha256:dfd931f4611f565b15776a1640dabc6720b3439812fda367ce856916e8494853
Note: --cri-socket=unix:///var/run/cri-dockerd.sock must be appended at the end
# kubeadm join 192.168.1.10:6443 --token gkamdx.tt7grt6dc4vw8352 \
--discovery-token-ca-cert-hash sha256:d543c10aaf935bcbcb03da989055f2054e6fc35af7bb3f3acf35b957a4e761c8 --cri-socket=unix:///var/run/cri-dockerd.sock
Run on the master node
Download the calico manifest file; note that a proxy may be required
# wget https://projectcalico.docs.tigera.io/archive/v3.25/manifests/calico.yaml
Apply the manifest file to create calico
# kubectl apply -f calico.yaml
Check the coredns pods in the kube-system namespace; a Running status indicates networking is working.
[root@k8s-master1 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6c99c8747f-rvzds 1/1 Running 0 4m13s
calico-node-f7b9l 1/1 Running 0 4m13s
coredns-7bdc4cb885-8z2fz 1/1 Running 0 18m
coredns-7bdc4cb885-gmpd7 1/1 Running 0 18m
etcd-k8s-master 1/1 Running 0 19m
kube-apiserver-k8s-master 1/1 Running 0 19m
kube-controller-manager-k8s-master 1/1 Running 0 19m
kube-proxy-hs5sg 1/1 Running 0 18m
kube-scheduler-k8s-master 1/1 Running 0 19m
List all nodes
[root@k8s-master ~]# kubectl get nodes -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master   Ready   control-plane   19m   v1.27.0   192.168.1.10   &lt;none&gt;   CentOS Linux 7 (Core)   6.2.11-1.el7.elrepo.x86_64   docker://23.0.3
k8s-node1    Ready   &lt;none&gt;          19m   v1.27.0   192.168.1.11   &lt;none&gt;   CentOS Linux 7 (Core)   6.2.11-1.el7.elrepo.x86_64   docker://23.0.3
k8s-node2    Ready   &lt;none&gt;          19m   v1.27.0   192.168.1.12   &lt;none&gt;   CentOS Linux 7 (Core)   6.2.11-1.el7.elrepo.x86_64   docker://23.0.3
Check cluster health
[root@k8s-master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
Dashboard is a web-based Kubernetes user interface. You can use it to deploy containerized applications to a Kubernetes cluster, troubleshoot them, and manage cluster resources. Dashboard gives you an overview of applications running in the cluster and lets you create or modify Kubernetes resources (such as Deployments, Jobs, DaemonSets, and so on). That said, the graphical interface is generally most useful for developers or people less familiar with Kubernetes.
Download: see the Kubernetes-Dashboard project
mkdir -p /etc/kubernetes/dashboard
cd /etc/kubernetes/dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
#Modify the Service section of recommended.yaml; a NodePort can be used for temporary access
-----------------------------------------------------------------------------------
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort   #add the NodePort type for easier access
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32641   #pin the nodePort so the port does not change when the cluster restarts
  selector:
    k8s-app: kubernetes-dashboard
[root@master01]# kubectl apply -f recommended.yaml
-------------------output------------------------------------------------
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@master01 dashboard]# kubectl -n kubernetes-dashboard get pods
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-5cb4f4bb9c-dmkrb 1/1 Running 0 176m
kubernetes-dashboard-6967859bff-9n966 1/1 Running 0 176m
-----------------------------------------------------------------------------------
[root@master01 tools]# kubectl -n kubernetes-dashboard get service kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard   NodePort   10.106.206.69   &lt;none&gt;   443:32641/TCP   71m
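For reference, if recommended.yaml had been applied without editing, the same NodePort change could be made afterwards with a patch (a sketch of an equivalent approach; Service ports are merged by their port number):
# kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort","ports":[{"port":443,"targetPort":8443,"nodePort":32641}]}}'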
Access the console from a browser
Open https://192.168.1.10:32641 to reach the console
Note: if Chrome reports an insecure connection and the advanced option does not let you proceed, type thisisunsafe anywhere on the page
There are several ways to log in; only the token method is used here
1. Create an administrator service account
mkdir -p /etc/kubernetes/dashboard
cd /etc/kubernetes/dashboard
vim admin-user.yaml
First create a service account named admin-user in the kubernetes-dashboard namespace:
#vim admin-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
# kubectl apply -f admin-user.yaml
By default, kubeadm already created the cluster-admin ClusterRole when the cluster was set up, so we can bind to it directly
#cd /etc/kubernetes/dashboard
#vim admin-user-role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
#Apply the configuration
#kubectl apply -f admin-user-role-binding.yaml
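For reference, the same account and binding can also be created imperatively instead of via YAML (an equivalent sketch):
# kubectl -n kubernetes-dashboard create serviceaccount admin-user
# kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:admin-user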
Now create a token for the admin-user account so it can be used to log in to the dashboard:
cd /etc/kubernetes/dashboard
kubectl -n kubernetes-dashboard create token admin-user
-------------------------------------------------------
(token output omitted)
Copy the token into the Token field on the login page at https://192.168.1.10:32641 (either the master address or a node address works) and log in
The Dashboard token expiration time can be set with the token-ttl argument. There are three ways to change it: via YAML, by editing the deployment directly, or through the Kubernetes Dashboard UI.
Here we edit the deployment directly with kubectl
#kubectl edit deployment kubernetes-dashboard -n kubernetes-dashboard
------------------------------------------------
spec:
  containers:
  - args:
    - --auto-generate-certificates
    - --token-ttl=43200
    - --namespace=kubernetes-dashboard
    image: kubernetesui/dashboard:v2.7.0
    imagePullPolicy: Always
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /
        port: 8443
        scheme: HTTPS
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 30
    name: kubernetes-dashboard
    ports:
Parameter | Default | Description |
---|---|---|
token-ttl | 15 minutes | Expiration time, in seconds, of JWE tokens generated by the dashboard. Default: 15 minutes; 0 means the token never expires. |
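Note that token-ttl only affects tokens generated by the dashboard itself. The lifetime of a login token from kubectl create token is requested separately, e.g. (the API server may cap the maximum duration):
# kubectl -n kubernetes-dashboard create token admin-user --duration=12h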
1. Problem
[root@master01 ~]# sysctl -p /etc/sysctl.d/k8s.conf
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
Solution:
# modprobe br_netfilter
# ls /proc/sys/net/bridge
bridge-nf-call-arptables bridge-nf-call-iptables bridge-nf-filter-vlan-tagged
bridge-nf-call-ip6tables bridge-nf-filter-pppoe-tagged bridge-nf-pass-vlan-input-dev
# sysctl -p
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
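modprobe does not persist across reboots. To have br_netfilter loaded automatically at boot, one option is systemd's modules-load mechanism (a sketch; the file name is arbitrary):
# echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf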
[root@master01 ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@master01 ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@master01 ~]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-85b98978db-2lsms 0/1 ImagePullBackOff 0 32s
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      &lt;none&gt;        443/TCP        21h
service/nginx        NodePort    10.96.20.147   &lt;none&gt;        80:30536/TCP   8s
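Once the nginx pod reaches Running, verify access through the NodePort (30536 in this output; yours may differ):
# curl -I http://192.168.1.10:30536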