文档:https://kuboard.cn/learning/
kubernetes 从 1.24 版本开始移除了 dockershim,不再内置支持 docker,转而使用 containerd 等符合 CRI 标准的容器运行时
集群编排面临的问题如下:
1.跨主机通信问题?
2.多容器跨主机部署?
3.容器发布,升级,回滚?
4.容器挂掉后,如何自动拉起服务?
5.当现有容器资源不足时,是否可以自动扩容?
6.能够实现容器的健康检查,若不健康是否能自动恢复?
7.如何将容器调度到集群特定节点?
8.将容器从一个节点驱逐,下线节点如何操作?
9.集群如何扩容?
10.集群如何监控?
11.集群日志如何收集?
…
早期的容器编排工具:docker inc 的 swarm、Apache mesos 的 marathon、Google 的 Kubernetes(简称K8S)。
2014年: Google 开源容器编排工具 kubernetes,项目立项
2015年7月 发布kubernetes 1.0, 加入cncf基金会 孵化
2016年: 发布 1.2 等版本,kubernetes 在容器编排之争中战胜了 docker swarm 和 mesos marathon 两个对手
2017年 1.5 -1.9
2018年: k8s 成为 cncf 基金会的毕业项目,发布 1.10、1.11、1.12、1.13
2019年: 1.14,1.15,1.16,1.17
2020年: 1.18,1.19,1.20
2021年: 1.21,1.22,1.23
2022年: 1.24,1.25
cncf: Cloud Native Computing Foundation(云原生计算基金会),kubernetes 是其孵化并毕业的项目。
kubernetes(简称k8s): 名字源自希腊语,意为"舵手、领航者",定位于容器编排领域。它沉淀了谷歌约15年的容器使用经验(内部的 borg 容器管理平台),由谷歌使用 golang 参照 borg 重新实现并开源。
推荐阅读:
https://kubernetes.io/releases/patch-releases/#1-22
https://github.com/kubernetes/kubernetes
https://kubernetes.io/releases/release/
温馨提示:
k8s 这个缩写是有来由的:k 和 s 之间省略了 8 个字母(ubernete),类似的写法还有"i18n"("国际化"的简称,全称为"internationalization",i 和 n 之间有 18 个字母)。
Kubernetes最初源于谷歌内部的**Borg**,提供了面向应用的容器集群部署和管理系统。Kubernetes 的目标旨在消除编排物理/虚拟计算,网络和存储基础设施的负担,并使应用程序运营商和开发人员完全将重点放在以容器为中心的原语上进行自助运营。Kubernetes 也提供稳定、兼容的基础(平台),用于构建定制化的workflows 和更高级的自动化任务。
Kubernetes 具备完善的集群管理能力,包括多层次的安全防护和准入机制、多租户应用支撑能力、透明的服务注册和服务发现机制、内建负载均衡器、故障发现和自我修复能力、服务滚动升级和在线扩容、可扩展的资源自动调度机制、多粒度的资源配额管理能力。
Kubernetes 还提供完善的管理工具,涵盖开发、部署测试、运维监控等各个环节。
K8s是用来对docker容器进行管理和编排的工具,其是一个基于docker构建的调度服务,提供资源调度、均衡容灾、服务注册、动态扩容等功能套件,其作用如下所示:
(1)数据卷:pod中容器之间数据共享,可以使用数据卷
(2)应用程序健康检查:容器内服务可能发生异常导致服务不可用,可以使用健康检查策略保证应用的健壮性。
(3)复制应用程序实例:控制器维护着pod的副本数量,保证一个pod或者一组同类的pod数量始终可用。
(4)弹性伸缩:根据设定的指标(CPU利用率等)动态的自动缩放pod数
(5)负载均衡:为一组pod副本分配一个私有的集群IP地址,负载均衡地把请求转发到后端容器。在集群内部,其他pod可通过这个Cluster IP访问该组pod。
(6)滚动更新:更新服务不中断,一次更新一个pod,而不是同时删除整个服务
(7)服务编排:通过文件描述部署服务,使得程序部署更高效。
(8)资源监控:Node节点组件集成cAdvisor资源收集工具,可通过Heapster汇总整个集群节点资源数据,然后存储到InfluxDB时序数据库,再由Grafana展示
(9)提供认证和授权:支持属性访问控制、角色访问控制等认证授权策略。
机器名 | IP地址 | 配置 |
---|---|---|
k8s151.com | 10.0.0.151 | 2C4G |
k8s152.com | 10.0.0.152 | 1C2G |
k8s153.com | 10.0.0.153 | 1C2G |
k8s154.com | 10.0.0.154 | 1C2G |
K8S有两大角色,一个是Master节点,一个是Worker节点。
其中Master负责管理和调度集群资源,Worker负责提供集群资源。
在一个高可用的集群当中,他俩一般由多个节点构成,这些节点可以是虚拟机也可以是物理机。
Worker节点上运行的最小调度单元叫Pod,可以把Pod粗略类比成K8S云平台提供的"虚拟机",Pod里头住的是应用容器,比如Docker容器。
大部分情况Pod里只包含一个容器;有时候也包含多个容器,其中一个是主容器,其他是辅助容器。
为了加深理解,这里做个简单类比。Master好比指挥调度员工干活的主管,Worker好比部门当中实际干活的人。
K8S主要解决集群资源调度的问题:当有应用发布请求过来时,K8S会根据集群资源的空闲状态,把Pod合理地调度到资源空闲的Worker节点上;另外,K8S还负责监控集群,当集群中有节点或者Pod挂掉时,它会重新协调并拉起Pod,保证应用的高可用,这个能力也叫自愈;此外,K8S还要管理集群的网络,保证Pod和服务之间可以互通互联。
Master节点主要负责管理和控制,包含Etcd、Apiserver、controller-manager、Scheduler等组件,作用分别如下:
Scheduler:
负责Pod的调度,根据资源需求、节点负载、亲和性等策略,为新创建的Pod选择合适的Node。
Controller Manager:
负责维护集群的期望状态,内部运行着副本控制器、节点控制器、命名空间控制器等各类控制器。
Etcd:
分布式键值数据库,保存整个集群的状态数据(各类资源对象、配置信息等)。
API Server:
集群的统一入口,所有组件都通过它读写集群状态,同时提供认证、授权、准入控制等机制。
除了Master节点,还有一群 Node 节点(计算节点),Node节点包含kubelet、kube-proxy等组件,作用分别如下:
kubelet:
可以理解为Master在工作节点上的Agent,管理本机运行容器的生命周期,比如创建容器、为Pod挂载数据卷、下载secret、获取容器和节点的状态等工作。kubelet会将每一个Pod转换成一组容器。(主要负责监视指派到它所在Node上的Pod,包括创建、修改、监控、删除等)
kube-proxy:
运行在每个Node上,维护节点上的网络转发规则(iptables/ipvs),为Service实现集群内部的负载均衡和转发。
Cloud Controller Manager:
用在云平台上的kube-controller-manager组件。如果我们直接在物理机上部署的话,可以不使用该组件。
Pod:
调度和部署的最小单位,一个Pod中可以运行多个容器。
docker/rocket(rkt,已停止支持):
容器引擎,用于运行容器。
参考链接:
https://kubernetes.io/zh/docs/concepts/overview/components/
Kubernetes设计理念和功能其实就是一个类似Linux的分层架构,如下图所示:
各层说明如下:
核心层:Kubernetes最核心的功能,对外提供API构建高层的应用,对内提供插件式应用执行环境
应用层:部署(无状态应用、有状态应用、批处理任务、集群应用等)和路由(服务发现、DNS解析等)
管理层:系统度量(如基础设施、容器和网络的度量),自动化(如自动扩展、动态Provision等)以及策略管理(RBAC、Quota、PSP、NetworkPolicy等)
接口层:kubectl命令行工具、客户端SDK以及集群联邦
生态系统:在接口层之上的庞大容器集群管理调度的生态系统,可以划分为两个范畴
Kubernetes外部:日志、监控、配置管理、CI、CD、Workflow、FaaS、OTS应用、ChatOps等
Kubernetes内部:CRI、CNI、CVI、镜像仓库、Cloud Provider、集群自身的配置和管理等
总体架构
yum安装:
优点:
安装,配置很简单,适合新手学习。
缺点:
不能定制安装。
kind安装:
kind让你能够在本地计算机上运行Kubernetes。 kind要求你安装并配置好Docker。
推荐阅读:
https://kind.sigs.k8s.io/docs/user/quick-start/
minikube部署:
minikube是一个工具, 能让你在本地运行Kubernetes。
minikube在你本地的个人计算机(包括 Windows、macOS 和 Linux PC)运行一个单节点的Kubernetes集群,以便你来尝试 Kubernetes 或者开展每天的开发工作。因此很适合开发人员体验K8S。
推荐阅读:
https://minikube.sigs.k8s.io/docs/start/
kubeadm:
你可以使用kubeadm工具来创建和管理Kubernetes集群,适合在生产环境部署。
该工具能够执行必要的动作并用一种用户友好的方式启动一个可用的、安全的集群。
推荐阅读:
https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/
https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
二进制部署:
安装步骤比较繁琐,但可以更加了解细节,适合运维人员在生产环境中使用。
源码编译安装:
难度最大,请做好各种故障排查的心理准备。不过对于做K8S二次开发的人员来说,这应该不算太难。
kubectl使得你可以对Kubernetes集群运行命令。 你可以使用kubectl来部署应用、监测和管理集群资源以及查看日志。
推荐阅读:
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands
推荐阅读:
https://kubernetes.io/zh/docs/setup/best-practices/cluster-large/
在集群机器上都要操作
参考链接:
https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
swapoff -a && sysctl -w vm.swappiness=0 #临时禁用交换分区
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab #永久禁用交换分区
cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Sep 27 23:52:08 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=731074a1-55a3-436a-8c7d-9b6e4f5acba9 / xfs defaults 0 0
UUID=f98df919-9080-4b39-a153-b4c8a68ff8a0 /boot xfs defaults 0 0
#UUID=ee7f7341-64b9-4f47-9aaf-9d0615b65b4d swap swap defaults 0 0
ifconfig eth0 | grep ether | awk '{print $2}'
cat /sys/class/dmi/id/product_uuid
温馨提示:
一般来讲,硬件设备会拥有唯一的地址,但是有些虚拟机的地址可能会重复。
Kubernetes使用这些值来唯一确定集群中的节点。 如果这些值在每个节点上不唯一,可能会导致安装失败。
简而言之,就是检查你的k8s集群各节点是否互通,可以使用ping命令来测试。
cat <
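温馨提示:上面这条 cat 命令在整理时被截断了。按照 kubeadm 官方安装文档,这一步通常是加载 br_netfilter 模块并允许 iptables 检查桥接流量,下面给出一个常见写法,仅供参考(具体内容请以官方文档为准):
cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
modprobe br_netfilter
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system    # 重新加载所有 sysctl 配置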
参考链接: https://kubernetes.io/zh/docs/reference/ports-and-protocols/
ss -ntl
参考链接:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.15.md#unchanged
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum list docker-ce --showduplicates
yum -y install docker-ce-18.09.9 docker-ce-cli-18.09.9
yum -y install bash-completion
source /usr/share/bash-completion/bash_completion
mkdir docker-rpm-18-09 && find /var/cache/yum -name "*.rpm" |xargs mv -t docker-rpm-18-09/
cat > /etc/docker/daemon.json <<'EOF'
{
"registry-mirrors": ["https://hzow8dfk.mirror.aliyuncs.com"],
"insecure-registries": ["k8s151.oldboyedu.com:5000"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl enable --now docker && systemctl restart docker
systemctl disable --now firewalld
echo 1 >/proc/sys/net/ipv4/ip_forward #/proc/sys/net/ipv4/ip_forward 文件内容为0表示禁止数据包转发,1表示允许,这里将其修改为1
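上面的 echo 只是临时打开转发,重启后会失效。如果需要永久生效,常见做法是把配置写入 sysctl 配置文件再加载,例如(仅作示例):
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf    # 写入配置文件,实现持久化
sysctl -p    # 立即加载生效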
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
grep ^SELINUX= /etc/selinux/config
cat >> /etc/hosts <<'EOF'
10.0.0.151 k8s151.com
10.0.0.152 k8s152.com
10.0.0.153 k8s153.com
10.0.0.154 k8s154.com
EOF
cat /etc/hosts
[root@k8s151 ~]$ docker run -dp 5000:5000 --restart always --name oldboyedu-registry registry:2
[root@docker151 ~]$ cat > /etc/docker/daemon.json <<'EOF'
{
"registry-mirrors": ["https://hzow8dfk.mirror.aliyuncs.com"],
"insecure-registries": ["k8s151.oldboyedu.com:5000"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@docker101 ~]$ systemctl restart docker
http://10.0.0.151:5000/v2/_catalog
重置节点
如果练习的过程中遇到太多的问题,想要重新操作一遍,可以重置节点
在所有节点执行
kubeadm reset #重置节点
rm -rf /etc/cni/net.d
#删除之前安装的k8s版本,如之前安装的是1.22.0,然后重新下载安装k8s,按照之前的流程执行(如果不用还版本,就不用执行这个)
yum -y remove kubeadm-1.22.0 kubelet-1.22.0 kubectl-1.22.0
你需要在每台机器上安装以下的软件包:
kubeadm:
用来初始化集群的指令。
kubelet:
在集群中的每个节点上用来启动Pod和容器等。
kubectl:
用来与集群通信的命令行工具。
kubeadm不能帮你安装或者管理kubelet或kubectl,所以你需要确保它们与通过kubeadm安装的控制平面(master)的版本相匹配。 如果不这样做,则存在发生版本偏差的风险,可能会导致一些预料之外的错误和问题。
然而,控制平面与kubelet之间相差一个次要版本是被支持的,但kubelet的版本不可以超过"API SERVER"的版本。例如,1.7.0版本的kubelet可以完全兼容1.8.0版本的"API SERVER",反之则不可以。
cat > /etc/yum.repos.d/kubernetes.repo <
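补充:上面这条 heredoc 的内容被截断了。以阿里云镜像站为例,kubernetes.repo 的一个常见写法如下(仅供参考,gpgcheck 等选项请按自己的环境调整):
cat > /etc/yum.repos.d/kubernetes.repo <<'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF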
yum -y install kubeadm-1.20.0 kubelet-1.20.0 kubectl-1.20.0
yum -y list kubeadm --showduplicates | sort -r
systemctl enable --now kubelet && systemctl status kubelet
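安装完成后,可以用下面几条命令核对 kubeadm、kubelet、kubectl 三者版本是否一致(示例):
kubeadm version
kubelet --version
kubectl version --client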
mkdir k8s-rpm && find /var/cache/yum -name "*.rpm" | xargs mv -t k8s-rpm
参考链接:
https://kubernetes.io/zh/docs/tasks/tools/install-kubectl-linux/
建议:执行完上面的步骤之后,要把所有机器都拍摄快照,以防止出现问题之后,可以快速回滚到基础配置
只在151这台master机器上操作
把k8s集群看做一个工厂:通过初始化master节点,产生了一个厂长,这个厂长就是master;一个厂区有很多大仓库,每个仓库就是一个node;厂长在每个仓库设立一个代理人,代理厂长管理仓库,这个代理人就是kubelet,各仓库通过kubeadm join加入厂长的阵容,归厂长管理。一个仓库里可能有多个员工,员工就是pod。厂长想联系到每个员工怎么办呢?网络组件Flannel帮厂长解决了这个问题:flannel给每个pod分配了一个内部号码(IP),厂长就可以联系到他们了。
[root@k8s151 ~]$ kubeadm init --kubernetes-version=v1.20.0 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.254.0.0/16
....
kubeadm join 10.0.0.151:6443 --token 7enaea.6bk11es4yzx03vu5 \
--discovery-token-ca-cert-hash sha256:72953dbe7918dc60edae342bd3096f85ccd4b0ceb410ac9ee28f38fc5b5d0ba1
相关参数说明:
--apiserver-advertise-address: 指定apiserver的地址,用于其他节点连接用的内网地址。
--kubernetes-version: 指定K8S master组件的版本号。
--image-repository: 指定下载k8s master组件的镜像仓库地址。
--pod-network-cidr: 指定Pod的网段地址。
--service-cidr: 指定SVC的网段
使用kubeadm初始化集群时,可能会出现如下的输出信息:
[init]
使用初始化的K8S版本。
[preflight]
主要是做安装K8S集群的前置工作,比如下载镜像,这个时间取决于你的网速。
[certs]
生成证书文件,默认存储在"/etc/kubernetes/pki"目录哟。
[kubeconfig]
生成K8S集群的默认配置文件,默认存储在"/etc/kubernetes"目录哟。
[kubelet-start]
启动kubelet,
环境变量默认写入:"/var/lib/kubelet/kubeadm-flags.env"
配置文件默认写入:"/var/lib/kubelet/config.yaml"
[control-plane]
使用静态的目录,默认的资源清单存放在:"/etc/kubernetes/manifests"。
此过程会创建静态Pod,包括"kube-apiserver","kube-controller-manager"和"kube-scheduler"
[etcd]
创建etcd的静态Pod,默认的资源清单存放在:"/etc/kubernetes/manifests"
[wait-control-plane]
等待kubelet从资源清单目录"/etc/kubernetes/manifests"启动静态Pod。
[apiclient]
等待所有的master组件正常运行。
[upload-config]
创建名为"kubeadm-config"的ConfigMap在"kube-system"名称空间中。
[kubelet]
创建名为"kubelet-config-1.22"的ConfigMap在"kube-system"名称空间中,其中包含集群中kubelet的配置
[upload-certs]
跳过此阶段,详情请参考"--upload-certs"
[mark-control-plane]
标记控制平面,包括打标签和打污点,目的是为了标记master节点。
[bootstrap-token]
创建token口令,例如:"kbkgsa.fc97518diw8bdqid"。
如下图所示,这个口令将来在加入集群节点时很有用,而且对于RBAC控制也很有用处哟。
[kubelet-finalize]
更新kubelet的证书文件信息
[addons]
添加附加组件,例如:"CoreDNS"和"kube-proxy"
初始化常遇到的问题
有时候我们第一次初始化失败后再次初始化,会提示有些文件已经存在,或者端口被占用。那是因为之前的初始化已经产生了某些文件,再次初始化前要把之前残留的文件清理掉。
[root@k8s151 ~]$ mkdir -p $HOME/.kube
[root@k8s151 ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s151 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s151 ~]$ kubectl get no
NAME STATUS ROLES AGE VERSION
k8s151 NotReady master 4m34s v1.15.12
[root@k8s151 ~]$ kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
[root@k8s151 ~]$ kubectl get no,cs
NAME STATUS ROLES AGE VERSION
node/k8s151 NotReady master 5m47s v1.15.12
NAME STATUS MESSAGE ERROR
componentstatus/scheduler Healthy ok
componentstatus/controller-manager Healthy ok
componentstatus/etcd-0 Healthy {"health":"true"}
如果kubectl get cs显示scheduler、controller-manager状态为Unhealthy(connection refused),可以这样处理:
[root@k8s151 ~]$ vim /etc/kubernetes/manifests/kube-scheduler.yaml
...
# - --port=0    #这一行注释掉,然后重启就可以
[root@k8s151 ~]$ vim /etc/kubernetes/manifests/kube-controller-manager.yaml
......
# - --port=0    #找到这个port=0并注释
[root@k8s151 ~]$ systemctl restart kubelet
除了151这个master机器,其他的worker节点都要加入集群(154这个暂时不做)
在152和153上执行
kubeadm join 10.0.0.151:6443 --token a3fndb.oitexxftm0xpx90j \
--discovery-token-ca-cert-hash sha256:ef830cffbeb332ae329af30a9257fd7584e034d524bc5cb2cd6e9bcfe6307f10
节点都加入之后,查看一下
[root@k8s151 ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s151.oldboyedu.com NotReady master 10m v1.18.0
k8s152.oldboyedu.com NotReady 60s v1.18.0
k8s153.oldboyedu.com NotReady 38s v1.18.0
如何重新加入节点(当加入节点出现问题后执行)
swapoff -a
kubeadm reset -y    #重置节点,当你加入节点或者master节点想重新初始化,可以先执行此命令
rm /etc/cni/net.d/* -f
systemctl daemon-reload
systemctl restart kubelet
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
Flannel可以用于Kubernetes底层网络的实现,主要作用是协助Kubernetes给每一个Node上的Pod分配互不冲突的IP地址
参考链接:
https://kubernetes.io/zh/docs/concepts/cluster-administration/addons/
https://github.com/flannel-io/flannel/blob/master/Documentation/kubernetes.md
在线初始化(不推荐)
[root@k8s151 /app/tools]$ kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
#因为在线安装kube-flannel.yaml里面的镜像是国外的,所以很多时候我们都会下载的很慢,或者甚至失败
本地安装,先把kube-flannel.yml、flannel和flannel-cni-plugin镜像文件上传到/app/tools
[root@k8s151 /app/tools]$ docker load -i flannel-cni-plugin-v1.1.2.tar.gz
[root@k8s151 /app/tools]$ docker load -i flannel-v0.21.2.tar.gz
#注意,导入这两个镜像,要在所有节点上都操作
[root@k8s151 /app/tools]$ kubectl apply -f kube-flannel.yml
[root@k8s151 /app/tools]$
验证flannel插件是否部署成功(要等一会儿,直到节点的STATUS变为Ready):
[root@k8s151 ~]$ kubectl get nodes #可以看到,我们的版本是v1.20.0
NAME STATUS ROLES AGE VERSION
k8s151.oldboyedu.com Ready control-plane,master 14m v1.20.0
k8s152.oldboyedu.com Ready 9m41s v1.20.0
k8s153.oldboyedu.com Ready 9m38s v1.20.0
查看flannel是否部署成功,如果不成功,节点状态就可能不会是Ready
[root@k8s151 ~]$ kubectl get pods -A -o wide | grep flannel
kube-system kube-flannel-ds-amd64-5gbnq 1/1 Running 0 4m43s 10.0.0.153 k8s153
kube-system kube-flannel-ds-amd64-8qmwf 1/1 Running 0 4m43s 10.0.0.210 k8s151
kube-system kube-flannel-ds-amd64-xvrxb 1/1 Running 0 4m43s 10.0.0.152 k8s152
[root@k8s151 ~]$ kubectl get ds -A
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system kube-flannel-ds 0 0 0 0 0 beta.kubernetes.io/arch=amd64 2m52s
kube-system kube-proxy 3 3 3 3 3 beta.kubernetes.io/os=linux 26m
查看kube-flannel-ds的日志
这个很重要,当你的pod出问题之后,要通过这个命令查看pod的错误日志
格式:
kubectl -n <名称空间> logs <pod名称>
案例: kubectl logs -n kube-flannel kube-flannel-ds-jwnwm
当初始化出现问题,可以重新初始化:
https://blog.csdn.net/one2threexm/article/details/107735228
rm -rf /etc/kubernetes/*
kubeadm reset
rm -rf /etc/kubernetes/*
rm -rf ~/.kube/*
rm -rf /var/lib/etcd/*
kubeadm reset -f
所有节点都执行(配置kubectl命令自动补全):
echo "source <(kubectl completion bash)" >> ~/.bashrc && source ~/.bashrc
(1)测试网络是否正常
[root@k8s151 ~]$ kubectl run oldboyedu-nginx --image=nginx:1.20.1 --replicas=3
[root@k8s151 ~]$ kubectl run oldboyedu-linux --image=alpine --replicas=3 -- sleep 300
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/oldboyedu-linux created
(2)观察是否是runing状态:
[root@k8s151 ~]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
oldboyedu-linux 1/1 Running 0 4m29s
oldboyedu-nginx 1/1 Running 0 29s
(3)测试网络是否互通
建议使用alpine镜像的Pod进行测试即可。
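一个简单的测试思路(示例,Pod 名称和 IP 请以 kubectl get pods -o wide 的实际输出为准):先查出 nginx Pod 的 IP,再进入 alpine 的 Pod 中 ping 或 wget 该 IP:
kubectl get pods -o wide
kubectl exec -it <alpine的pod名称> -- ping -c 3 <nginx的pod的IP>
kubectl exec -it <alpine的pod名称> -- wget -q -O - <nginx的pod的IP>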
查看pod详细信息(describe)
- 格式:kubectl describe pod <pod名称>
案例: kubectl describe pod oldboyedu-linux
(1)修改终端颜色
[root@k8s151 ~]$ cat >> ~/.bashrc <<'EOF'
PS1='[\[\e[34;1m\]\u@\[\e[0m\]\[\e[32;1m\]\h\[\e[0m\]\[\e[31;1m\] \w\[\e[0m\]]$ '
EOF
[root@k8s151 ~]$ source ~/.bashrc
(2)内存回收小妙招
echo 3 > /proc/sys/vm/drop_caches
温馨提示:
drop_caches的值可以是0-3之间的数字,代表不同的含义
0:不释放(系统默认值)
1:释放页缓存
2:释放dentries和inodes
3:释放所有缓存
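补充一个小示例:释放缓存前一般建议先执行 sync,把内存中的脏页写回磁盘,避免数据丢失:
sync    # 先把脏页刷回磁盘
echo 3 > /proc/sys/vm/drop_caches    # 再释放所有缓存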
敬请期待…
参考链接:
https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/
https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability/
常见问题排查
特别重要
当你在k8s中遇到问题,第一反应应该先去看看到底哪里出错了,如pod的status不是Running状态,但是为什么不是,哪里错了,你只有使用describe查看,当然了,也不只是只有pod才可以使用describe,像svc这些都可以
格式:kubectl describe <资源类型> <资源名称>
如果指定名称空间的话:kubectl -n <名称空间> describe <资源类型> <资源名称>
案例:
比如使用查看pod的具体情况
kubectl describe pod pod的名字,然后如果有错误,一般都在最后几行有报错信息
查看svc的情况
kubectl describe svc svc的名称
格式:kubectl logs [-f] [-p] POD [-c CONTAINER]
有些情况我们使用describe查看的时候是正常的,但是呢就是有问题,我们要通过这个logs查看
-c, --container="": 容器名
-f, --follow[=false]: 指定是否持续输出日志
--interactive[=true]: 如果为true,当需要时提示用户进行输入。默认为true
--limit-bytes=0: 输出日志的最大字节数。默认无限制
-p, --previous[=false]: 如果为true,输出pod中曾经运行过,但目前已终止的容器的日志
--since=0: 仅返回相对时间范围,如5s、2m或3h,之内的日志。默认返回所有日志。只能同时使用since和since-time中的一种
--since-time="": 仅返回指定时间(RFC3339格式)之后的日志。默认返回所有日志。只能同时使用since和since-time中的一种
--tail=-1: 要显示的最新的日志条数。默认为-1,显示所有的日志
--timestamps[=false]: 在日志中包含时间戳、
1)查看名称为nginx的pod的日志
kubectl logs nginx
2)查看名称为web-m的pod中,名称为java的容器上一次终止运行时的日志(-p 查看已终止容器的日志,-c 指定容器名)
kubectl logs -p -c java web-m
3)持续输出pod web-m中的容器java的日志
kubectl logs -f -c java web-m
4)仅输出pod nginx中最近的20条日志
kubectl logs --tail=20 nginx
5)输出pod nginx中最近一小时内产生的所有日志
kubectl logs --since=1h nginx
6)查看指定pod的日志
kubectl logs <pod名称>
kubectl logs -f <pod名称> #类似tail -f的方式实时查看(tail -f 实时查看日志文件)
7)查看指定pod中指定容器的日志
kubectl logs <pod名称> -c <容器名称>
一次性查看:
kubectl logs pod_name -c container_name -n namespace
tail -f方式实时查看:
kubectl logs -f pod_name -n namespace
Pod是Kubernetes集群中最小部署单元,一个Pod由一个容器或多个容器组成,这些容器可以共享网络,存储等资源等。
Pod有以下特点:
(1)一个Pod可以理解成一个应用实例,提供服务;
(2)Pod中容器始终部署在同一个Node上;
(3)Pod中容器共享网络,存储资源;
(4)Kubernetes集群是直接管理Pod,而不是容器;
SHORTNAMES:简称
APIGROUP:所属的api组
注意: 不同的k8s版本,资源使用的版本号apiVersion可能不同,所以我们在以后的工作中要注意这一点,可以使用kubectl api-resources查看
[root@k8s151 /k8s-manifests/pods]$ kubectl api-resources
NAME                     SHORTNAMES   APIGROUP   NAMESPACED   KIND
....
configmaps               cm                      true         ConfigMap
endpoints                ep                      true         Endpoints
events                   ev                      true         Event
namespaces               ns                      false        Namespace
nodes                    no                      false        Node
pods                     po                      true         Pod
replicationcontrollers   rc                      true         ReplicationController
secrets                                          true         Secret
daemonsets               ds           apps       true         DaemonSet
deployments              deploy       apps       true         Deployment
replicasets              rs           apps       true         ReplicaSet

可以查看api组对应的版本号,根据这两个去写apiVersion
[root@k8s151 /k8s-manifests/pods]$ kubectl api-versions
......
apps/v1
batch/v1
batch/v1beta1
v1
.....
[root@k8s151 ~]$ mkdir -p /k8s-manifests/pods
[root@k8s151 /k8s-manifests/pods]$ cat > 01-pod-nginx.yaml <<'EOF'
# 部署的资源类型
kind: Pod
# API的版本号
apiVersion: v1
# 元数据信息
metadata:
# 资源的名称
name: oldboyedu-linux80-web
# 自定义Pod资源的配置
spec:
  # 定义容器相关信息
containers:
# 定义容器的名称
- name: linux80-web
# 定义容器基于哪个镜像启动
image: nginx:1.18
EOF
[root@k8s151 /k8s-manifests/podes]$ kubectl create -f 01-pod-nginx.yaml
pod/oldboyedu-linux80-web created
[root@k8s151 ~]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
oldboyedu-linux80-web 0/1 ContainerCreating 0 6s
[root@k8s151 ~]$ kubectl get pods -o wide # 主要查看IP地址
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
oldboyedu-linux80-web 0/1 ContainerCreating 0 17s k8s152.oldboyedu.com
相关字段说明:
NAME
代表的是资源的名称。
READY
代表资源是否就绪。比如 0/1 ,表示一个Pod内有一个容器,而且这个容器还未运行成功。
STATUS
代表容器的运行状态。
RESTARTS
代表Pod内容器被重启的次数。
AGE
代表Pod资源运行的时间。
IP
代表Pod的IP地址。
NODE
代表Pod被调度到哪个节点。
其他:
"NOMINATED NODE和"READINESS GATES"暂时先忽略哈。
[root@k8s151 /k8s-manifests/podes]$ cat > 02-pod-nginx-alpine.yaml <<'EOF'
# 部署的资源类型
kind: Pod
# API的版本号
apiVersion: v1
# 元数据信息
metadata:
# 资源的名称
name: oldboyedu-linux80-nginx-alpine
# 自定义Pod资源的配置
spec:
# 定义容器相关信息
containers:
# 定义容器的名称
- name: linux80-web
# 定义容器基于哪个镜像启动
image: nginx:1.18
- name: linux80-alpine
image: alpine
# 为终端分配一个标准输入,目的是为了阻塞容器,阻塞容器后,容器将不退出。
stdin: true
EOF
[root@k8s151 /k8s-manifests/podes]$ kubectl apply -f 02-pod-nginx-alpine.yaml
[root@k8s151 /k8s-manifests/podes]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
oldboyedu-linux80-nginx-alpine 2/2 Running 0 4m16
[root@k8s151 /k8s-manifests/podes]$ kubectl get pods -o wide # 主要查看IP地址(可以看到,这个节点在153机器上)
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
oldboyedu-linux80-nginx-alpine 2/2 Running 0 4m39s 10.244.2.5 k8s153.oldboyedu.com
#我们可以看到,如上面的结果,我们创建了一个pod类型的资源,然后分配到了k8s153.oldboyedu.com这个节点上
在153机器上运行docker命令,可以看到创建了两个容器
[root@k8s153 ~]$ docker ps -a | grep oldboyedu-linux80-nginx-alpine
我们进入到容器里面,可以看到Pod里运行的就是普通的nginx docker容器
[root@k8s153 ~]$ docker exec -it 3fcdeaff8089 /bin/bash
root@oldboyedu-linux80-nginx-alpine:/# nginx -v
nginx version: nginx/1.18.0
STATUS | 描述 |
---|---|
Pending(悬决) | Pod 已被 Kubernetes 系统接受,但有一个或者多个容器尚未创建亦未运行。此阶段包括等待 Pod 被调度的时间和通过网络下载镜像的时间。 |
Running(运行中) | Pod 已经绑定到了某个节点,Pod 中所有的容器都已被创建。至少有一个容器仍在运行,或者正处于启动或重启状态。 |
Succeeded(成功) | Pod 中的所有容器都已成功终止,并且不会再重启。 |
Failed(失败) | Pod 中的所有容器都已终止,并且至少有一个容器是因为失败终止。也就是说,容器以非 0 状态退出或者被系统终止。 |
Unknown(未知) | 因为某些原因无法取得 Pod 的状态。这种情况通常是因为与 Pod 所在主机通信失败。 |
Pod的阶段如上表所示。 Pod遵循一个预定义的生命周期,起始于 Pending 阶段,如果至少其中有一个主要容器正常启动,则进入Running,之后取决于Pod中是否有容器以失败状态结束而进入Succeeded或者Failed阶段。
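如果只想查看某个 Pod 当前所处的阶段,可以用 jsonpath 直接取 status.phase 字段,例如(Pod 名称请替换为实际值):
kubectl get pod <pod名称> -o jsonpath='{.status.phase}'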
Pod内容器的状态主要有以下三种:
Waiting (等待)
如果容器并不处在 Running 或 Terminated 状态之一,它就处在 Waiting 状态。 处于 Waiting 状态的容器仍在运行它完成启动所需要的操作:例如,从某个容器镜像 仓库拉取容器镜像,或者向容器应用 Secret 数据等等。 当你使用 kubectl 来查询包含 Waiting 状态的容器的 Pod 时,你也会看到一个 Reason 字段,其中给出了容器处于等待状态的原因。
Running(运行中)
Running 状态表明容器正在执行状态并且没有问题发生。 如果配置了 postStart 回调,那么该回调已经执行且已完成。 如果你使用 kubectl 来查询包含 Running 状态的容器的 Pod 时,你也会看到关于容器进入Running状态的信息。
Terminated(已终止)
处于 Terminated 状态的容器已经开始执行并且或者正常结束或者因为某些原因失败。 如果你使用kubectl来查询包含 Terminated 状态的容器的 Pod 时,你会看到 容器进入此状态的原因、退出代码以及容器执行期间的起止时间。
如果容器配置了 preStop 回调,则该回调会在容器进入 Terminated 状态之前执行。
推荐阅读:
https://kubernetes.io/zh/docs/concepts/workloads/pods/pod-lifecycle/
https://kubernetes.io/zh/docs/concepts/containers/container-lifecycle-hooks/
[root@k8s151 /k8s-manifests/podes]$ kubectl exec oldboyedu-linux80-nginx-alpine -- nginx -t
[root@k8s151 /k8s-manifests/podes]$ kubectl exec po/oldboyedu-linux80-nginx-alpine -- nginx -t
和docker连接容器类似
kubectl exec po/oldboyedu-linux80-web -it -- bash
kubectl exec oldboyedu-linux80-nginx-alpine -c linux80-alpine -it -- sh # 连接到指定容器。
测试见视频。
核心命令: wget 127.0.0.1:80 # 在alpine镜像的容器中执行该命令即可。
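补充说明:同一个 Pod 内的容器共享网络名称空间,所以 alpine 容器用 127.0.0.1 就能访问到旁边 nginx 容器的 80 端口。一个可直接执行的示例(基于上文创建的 oldboyedu-linux80-nginx-alpine 这个 Pod):
kubectl exec oldboyedu-linux80-nginx-alpine -c linux80-alpine -- wget -q -O - 127.0.0.1:80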
温馨提示:
使用"kubectl api-resources"可以查看集群的所有资源,各字段解释如下:
NAME
资源的名称。
SHORTNAMES :资源名称的简写形式。
APIGROUP :资源属于哪个API组。
NAMESPACED :是否属于某个名称空间。
KIND :资源的类型。
查看Pod,nodes的信息 (重点掌握)
[root@k8s151 ~]$ kubectl get po,no
NAME READY STATUS RESTARTS AGE
pod/oldboyedu-linux80-nginx-alpine 2/2 Running 0 3h59m
pod/oldboyedu-linux81-homework 1/1 Running 0 3m9s
NAME STATUS ROLES AGE VERSION
node/k8s151.0ldboyedu.com Ready master 21d v1.18.0
node/k8s152.oldboyedu.com Ready 21d v1.18.0
node/k8s153.oldboyedu.com Ready 21d v1.18.0
查看Pod被调度的节点及IP地址等信息。 (重点掌握)
[root@k8s151 /k8s-manifests/podes]$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
oldboyedu-linux80-nginx-alpine 2/2 Running 0 3h59m 10.244.2.5 k8s153.oldboyedu.com
oldboyedu-linux81-homework 1/1 Running 0 3m44s 10.244.1.11 k8s152.oldboyedu.com
查看Pod资源创建时的yaml文件,若用户未定义,此处会显示默认字段。(了解即可)
kubectl get pods -o yaml
查看Pod及其标签信息。 (了解即可)
kubectl get pods --show-labels
自定义列名称输出: (了解即可)
案例一:
[root@k8s151 ~]$ kubectl get pod -o custom-columns=CONTAINER:.spec.containers[0].name
案例二:
[root@k8s151 ~]$ kubectl get pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image
案例三:
[root@k8s151 ~]$ kubectl get pods oldboyedu-linux80-nginx-alpine -o custom-columns=oldboyedu-container-name:.spec.containers[0].name,oldboyedu-stdin:.spec.containers[1].stdin,oldboyedu-image01:.spec.containers[0].image,oldboyedu-image02:.spec.containers[1].image
参考链接:
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
https://kubernetes.io/docs/reference/kubectl/#custom-columns
基于文件删除资源。(重点掌握)
[root@k8s151 ~]$ kubectl delete -f 02-pod-nginx-alpine.yaml
基于Pods的名称进行删除。(重点掌握)
[root@k8s151 ~]$ kubectl delete pods oldboyedu-linux80-web
基于标签进行删除,即匹配标签名称为"apps=myweb"的所有Pod类型。(了解即可)
[root@k8s151 ~]$ kubectl delete pods -l apps=myweb
删除所有的Pod信息。(了解即可)
[root@k8s151 ~]$ kubectl delete pod --all
apply命令是一个高级命令,若资源不存在,则会创建,若资源存在,可以用于更新。(重点掌握)
[root@k8s151 ~]$ kubectl apply -f 01-pod-nginx.yaml
将指定文件的资源进行创建,不能多次执行。该命令逐渐被apply命令所替代使用。(了解即可)
[root@k8s151 ~]$ kubectl create -f 01-pod-nginx.yaml
优点:标签写在资源清单中,可以持久化。
缺点:修改标签需要编辑配置文件并重新应用(apply)。
[root@k8s151 /k8s-manifests/pods]$ cat > 03-pod-nginx-labels.yaml <<'EOF'
#API版本号
apiVersion: v1
#部署的资源类型
kind: Pod
#元数据信息
metadata:
#资源的名称
name: oldboyedu-linux81-labels
#为资源配置标签,KEY和VALUES都是用户自定义的
labels:
apps: web
class: linux81
school: oldboyedu
address: shahe
#自定义Pod资源的配置
spec:
#定义容器相关信息
containers:
#定义容器名称
- name: linux81-web
image: nginx:1.18
EOF
[root@k8s151 /k8s-manifests/podes]$ kubectl create -f 03-pod-nginx-labels.yaml
pod/oldboyedu-linux81-labels created
[root@k8s151 /k8s-manifests/podes]$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
oldboyedu-linux80-nginx-alpine 2/2 Running 0 5h46m class=linux81,school=oldboyedu
oldboyedu-linux81-homework 1/1 Running 1 109m
oldboyedu-linux81-labels 1/1 Running 0 9s address=shahe,apps=web,class=linux81,school=oldboyedu
【临时测试】
- 优点:不需要修改配置文件,立即生效
- 缺点:无法持久化,临时的
[root@k8s151 /k8s-manifests/podes]$ kubectl label pods oldboyedu-linux81-labels school=oldboyedu class=linux81
[root@k8s151 /k8s-manifests/podes]$ kubectl label -f 01-pod-nginx.yaml address=ShaHe
pod/oldboyedu-linux80-web labeled
[root@k8s151 /k8s-manifests/podes]$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
oldboyedu-linux80-nginx-alpine 2/2 Running 0 6h8m class=linux81,school=oldboyedu
oldboyedu-linux80-web 1/1 Running 0 16m address=ShaHe
oldboyedu-linux81-homework 1/1 Running 2 132m
oldboyedu-linux81-labels 1/1 Running 0 22m address=shahe,apps=web,class=linux81,school=oldboyedu
案例:移除oldboyedu-linux80-web的 address标签
[root@k8s151 /k8s-manifests/podes]$ kubectl label pods oldboyedu-linux80-web address-
pod/oldboyedu-linux80-web labeled
[root@k8s151 /k8s-manifests/podes]$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
oldboyedu-linux80-nginx-alpine 2/2 Running 0 6h9m class=linux81,school=oldboyedu
oldboyedu-linux80-web 1/1 Running 0 17m
oldboyedu-linux81-homework 1/1 Running 2 133m
oldboyedu-linux81-labels 1/1 Running 0 23m address=shahe,apps=web,class=linux81,school=oldboyedu
特别重要
当你在k8s中遇到问题,第一反应应该先去看看到底哪里出错了,如pod的status不是Running状态,但是为什么不是,哪里错了,你只有使用describe查看,当然了,也不只是只有pod才可以使用describe,像svc这些都可以
格式:
kubectl describe <资源类型> <名称>
[root@k8s151 /k8s-manifests/podes]$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
oldboyedu-linux80-nginx-alpine 2/2 Running 0 6h9m class=linux81,school=oldboyedu
oldboyedu-linux80-web 1/1 Running 0 17m
oldboyedu-linux81-homework 1/1 Running 2 133m
oldboyedu-linux81-labels 1/1 Running 0 23m address=shahe,apps=web,class=linux81,school=oldboyedu
[root@k8s151 /k8s-manifests/podes]$ kubectl delete pods --all
pod "oldboyedu-linux80-nginx-alpine" deleted
pod "oldboyedu-linux80-web" deleted
pod "oldboyedu-linux81-homework" deleted
pod "oldboyedu-linux81-labels" deleted
[root@k8s151 /k8s-manifests/podes]$ kubectl apply -f 02-pod-nginx-alpine.yaml
pod/oldboyedu-linux80-nginx-alpine created
[root@k8s151 /k8s-manifests/podes]$ kubectl describe pods/oldboyedu-linux80-web
Name: oldboyedu-linux80-web #资源名称
Namespace: default #名称空间
Priority: 0
Node: k8s152.oldboyedu.com/10.0.0.152 #调度节点
Start Time: Sat, 19 Nov 2022 17:33:09 +0800
Labels: #标签
Annotations: #注解信息
Status: Running #状态
IP: 10.244.1.14 #ip
IPs:
IP: 10.244.1.14
Containers:
linux80-web: #容器名称
Container ID: docker://defc257ef21874dc68d190f4d1803245374bde3f0b60dbc18e65589a839b28d7 #容器id
Image: nginx:1.18 #镜像
Image ID: docker-pullable://nginx@sha256:e90ac5331fe095cea01b121a3627174b2e33e06e83720e9a934c7b8ccc9c55a0
Port: #端口
Host Port: #主机端口
State: Running #状态
Started: Sat, 19 Nov 2022 17:33:11 +0800
Ready: True #是否是就绪状态
Restart Count: 0
Environment: #环境变量
Mounts: #挂载信息
/var/run/secrets/kubernetes.io/serviceaccount from default-token-c5lh2 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-c5lh2:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-c5lh2
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled default-scheduler Successfully assigned default/oldboyedu-linux80-nginx-alpine to k8s152.oldboyedu.com #成功调度152节点
Normal Pulled 69s kubelet, k8s152.oldboyedu.com Container image "nginx:1.18" already present on machine #拉取镜像,本地已经有拉取成功的
Normal Created 69s kubelet, k8s152.oldboyedu.com Created container linux80-web #创建linux80-web容器
Normal Started 68s kubelet, k8s152.oldboyedu.com Started container linux80-web #启动linux80-web容器
Normal Pulling 68s kubelet, k8s152.oldboyedu.com Pulling image "alpine" #拉取镜像
Normal Pulled 53s kubelet, k8s152.oldboyedu.com Successfully pulled image "alpine"
Normal Created 52s kubelet, k8s152.oldboyedu.com Created container linux80-alpine #创建容器
Normal Started 52s kubelet, k8s152.oldboyedu.com Started container linux80-alpine #启动
[root@k8s151 /k8s-manifests/podes]$ kubectl describe pods oldboyedu-linux80-nginx-alpine
[root@k8s151 /k8s-manifests/podes]$ kubectl describe -f 02-pod-nginx-alpine.yaml
.....
[root@k8s151 /k8s-manifests/podes]$ kubectl label -f 02-pod-nginx-alpine.yaml apps=myweb
pod/oldboyedu-linux80-nginx-alpine labeled
[root@k8s151 /k8s-manifests/podes]$ kubectl describe po -l apps=myweb
....
kubectl logs [-f] [-p] POD [-c CONTAINER]
[root@k8s151 /k8s-manifests/podes]$ kubectl delete pods --all
pod "oldboyedu-linux80-nginx-alpine" deleted
[root@k8s151 /k8s-manifests/podes]$ kubectl apply -f 01-pod-nginx.yaml
pod/oldboyedu-linux80-web created
[root@k8s151 /k8s-manifests/podes]$ kubectl apply -f 02-pod-nginx-alpine.yaml
pod/oldboyedu-linux80-nginx-alpine created
查看pod日志
oldboyedu-linux80-web中只有一个容器linux80-web
[root@k8s151 /k8s-manifests/podes]$ kubectl logs oldboyedu-linux80-web
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
oldboyedu-linux80-nginx-alpine中有两个容器,linux80-web和linux80-alpine(没有指定容器名称,报错了)
[root@k8s151 /k8s-manifests/podes]$ kubectl logs -f oldboyedu-linux80-nginx-alpine
error: a container name must be specified for pod oldboyedu-linux80-nginx-alpine, choose one of: [linux80-web linux80-alpine]
[root@k8s151 /k8s-manifests/podes]$ kubectl logs oldboyedu-linux80-nginx-alpine linux80-alpine
error: a container name must be specified for pod oldboyedu-linux80-nginx-alpine, choose one of: [linux80-web linux80-alpine]
使用"-c"选项一般多用于一个Pod内有多个容器的场景。
[root@k8s151 /k8s-manifests/podes]$ kubectl logs -f oldboyedu-linux80-nginx-alpine -c linux80-alpine
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.244.1.16 oldboyedu-linux80-nginx-alpine
[root@k8s151 /k8s-manifests/podes]$ kubectl logs -f oldboyedu-linux80-nginx-alpine -c linux80-web --timestamps --since=10m (了解即可)
--since=10m
表示查看当前时间最近10分钟的日志信息。
--timestamps
查看时间戳。
把本地/etc文件复制到oldboyedu-linux80-nginx-alpine中的linux80-alpine容器中(如果不加-c指定,会复制到第一个容器中)
[root@k8s151 /k8s-manifests/podes]$ kubectl cp /etc/ oldboyedu-linux80-nginx-alpine:/oldboyedu-etc-2022 -c linux80-alpine
将本地的目录拷贝到Pod中指定的容器相关的路径。
连接linux80-alpine容器,查看复制结果
[root@k8s151 /k8s-manifests/podes]$ kubectl exec oldboyedu-linux80-nginx-alpine -c linux80-alpine -it -- sh
/ #
/ # cd /opt/
/opt # ls
etc
/opt #
[root@k8s151 /k8s-manifests/podes]$ kubectl -n oldboyedu-linux80 cp /etc/ oldboyedu-linux80-nginx-alpine:/opt
将本地的目录拷贝到指定名称空间的容器,若不指定名称空间,默认拷贝到default名称空间的Pod哟~(了解)
温馨提示:
(1)如果想要从Pod中拷贝数据到本地,要求容器镜像中存在"tar"二进制文件。如果"tar"不存在,"kubectl cp"将失败。
(2)我曾尝试在Pod里手动创建一个tar包,并尝试将该tar包拷贝出来,发现最终还是失败了,测试的版本是1.15.12。
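补充一个替代思路(示例,目标路径 /opt/hosts-copy 只是演示用的名字):如果镜像里没有 tar,对于单个文件可以退而求其次,用 kubectl exec 配合 cat 和重定向把文件"拷"出来:
kubectl exec oldboyedu-linux80-nginx-alpine -c linux80-alpine -- cat /etc/hosts > /opt/hosts-copy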
复制文件:把oldboyedu-linux80-nginx-alpine下的linux80-alpine中的/etc文件,复制到本地/opt/里面,并重命名为01
[root@k8s151 /k8s-manifests/podes]$ kubectl cp oldboyedu-linux80-nginx-alpine:/etc -c linux80-alpine /opt/01
[root@k8s151 /k8s-manifests/podes]$ ll /opt/01/
total 96
-rw-r--r-- 1 root root 7 Nov 20 12:35 alpine-release
drwxr-xr-x 4 root root 88 Nov 20 12:35 apk
drwxr-xr-x 2 root root 6 Nov 20 12:35 conf.d
drwxr-xr-x 2 root root 18 Nov 20 12:35 crontabs
-rw-r--r-- 1 root root 89 Nov 20 12:35 fstab
-rw-r--r-- 1 root root 697 Nov 20 12:35 group
-rw-r--r-- 1 root root 31 Nov 20 12:35 hostname
-rw-r--r-- 1 root root 226 Nov 20 12:35 hosts
.....
把/etc/hosts文件,复制到本地/opt里面,并重命名为02
[root@k8s151 /k8s-manifests/podes]$ kubectl cp oldboyedu-linux80-nginx-alpine:/etc/hosts -c linux80-alpine /opt/02
[root@k8s151 /k8s-manifests/podes]$ cat /opt/02
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.244.1.16 oldboyedu-linux80-nginx-alpine
一个pod中有多个容器,默认进入第一个
[root@k8s151 /k8s-manifests/podes]$ kubectl exec oldboyedu-linux80-nginx-alpine -it -- sh
Defaulting container name to linux80-web.
Use 'kubectl describe pod/oldboyedu-linux80-nginx-alpine -n default' to see all of the containers in this pod.
#
加-c指定容器
[root@k8s151 /k8s-manifests/podes]$ kubectl exec oldboyedu-linux80-nginx-alpine -c linux80-alpine -it -- sh
/ #
在pod为oldboyedu-linux80-web中执行命令cat /etc/hosts命令
[root@k8s151 /k8s-manifests/podes]$ kubectl exec oldboyedu-linux80-web -- cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.244.2.6 oldboyedu-linux80-web
一个pod中有多个容器,默认查看第一个
[root@k8s151 /k8s-manifests/podes]$ kubectl exec oldboyedu-linux80-nginx-alpine -- cat /etc/hosts
Defaulting container name to linux80-web.
Use 'kubectl describe pod/oldboyedu-linux80-nginx-alpine -n default' to see all of the containers in this pod.
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.244.1.16 oldboyedu-linux80-nginx-alpine
指定oldboyedu-linux80-nginx-alpine中的linux80-alpine
[root@k8s151 /k8s-manifests/podes]$ kubectl exec oldboyedu-linux80-nginx-alpine -c linux80-alpine -- cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.244.1.16 oldboyedu-linux80-nginx-alpine
[root@k8s151 /k8s-manifests/podes]$ cat > 05-pod-command-args.yaml <<'EOF'
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-linux80-command-args
labels:
apps: myweb
spec:
containers:
- name: linux80-web
image: nginx:1.18
# command会覆盖镜像的ENTRYPOINT
command:
- "tail"
# args会覆盖镜像的CMD指令
args:
- "-f"
- "/etc/hosts"
EOF
[root@k8s151 /k8s-manifests/podes]$ cat 03-pods-limits.yaml
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-resources-limits
labels:
apps: myweb
spec:
containers:
- name: linux80-web
image: nginx:1.18
# 配置容器的资源限制
resources:
# 设置资源的上限
limits:
# 配置内存限制
memory: "200Mi"
        # 配置CPU的限制,CPU的换算公式: 1core = 1000m
cpu: "500m"
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-resources-limits-requests
labels:
apps: myweb
spec:
containers:
- name: linux80-web
image: nginx:1.18
# 配置容器的资源限制
resources:
      # 设置资源的上限
limits:
# 配置内存限制
memory: "200Mi"
        # 配置CPU的限制,CPU的换算公式: 1core = 1000m
cpu: "500m"
# 配置容器期望的资源,如果所有节点不符合期望的资源,则无法完成调度
requests:
memory: "100Mi"
cpu: "500m"
面试题: 请说一下如何限制Pod的使用资源,分别说一下limit和request的作用是什么?
Request: 容器使用的最小资源需求,作为容器调度时资源分配的判断依据。只有当节点上可分配资源量>=容器资源请求数时,才允许将容器调度到该节点。但Request参数不限制容器的最大可使用资源。
Limit: 容器能使用资源的最大值,设置为0表示对使用的资源不做限制。
Request能够保证Pod有足够的资源来运行,而Limit则是防止某个Pod无限制地使用资源,导致其他Pod崩溃。两者之间必须满足关系: 0<=Request<=Limit<=Infinity (如果Limit为0表示不对资源进行限制,这时可以小于Request)
(1)编写资源清单
[root@k8s151 /k8s-manifests/pods]$ cat > 07-pods-resources-stress.yaml <<'EOF'
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-linux80-stress
labels:
apps: myweb
spec:
containers:
- name: linux80-web
image: jasonyin2020/oldboyedu-linux-tools:v0.1
resources:
limits:
memory: "1Gi"
cpu: "500m"
requests:
memory: "100Mi"
cpu: "200m"
command:
- "tail"
args:
- "-f"
- "/etc/hosts"
EOF
(2)创建Pod
[root@k8s151 /k8s-manifests/podes]$ kubectl apply -f 07-pods-resources-stress.yaml
(3)执行压力测试命令
[root@k8s151 /k8s-manifests/podes]$ kubectl exec oldboyedu-linux80-stress -- stress --cpu 8 --io 4 --vm 7 --vm-bytes 128M --timeout 10m --vm-keep
(4)观察Pod使用资源状态
如上图所示。
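由于截图缺失,这里补充两种常见的观察方式(示例):如果集群安装了 metrics-server,可以用 kubectl top 查看;否则可以到 Pod 所在节点上用 docker stats 观察容器的实际资源占用。
kubectl top pod oldboyedu-linux80-stress    # 需要集群已部署 metrics-server
docker stats    # 在 Pod 所在的节点上执行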
[root@k8s151 /k8s-manifests/podes]$ cat > 04-pods-imagePullPolicy.yaml <<'EOF'
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-linux80-imagepullpolicy-02
labels:
apps: myweb
spec:
# 将Pod调度到指定到节点名称
# 注意,节点名称不能乱写,必须是在"kubectl get nodes"指令中存在.
nodeName: k8s153.oldboyedu.com
hostNetwork: true #共享宿主机网络
containers:
- name: myweb
image: nginx:1.18
resources:
limits:
memory: "1Gi"
cpu: "500m"
requests:
memory: "100Mi"
cpu: "500m"
EOF
[root@k8s151 /k8s-manifests/podes]$ kubectl apply -f 04-pods-imagePullPolicy.yaml
pod/oldboyedu-linux80-imagepullpolicy-02 created
[root@k8s151 /k8s-manifests/podes]$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
oldboyedu-linux80-imagepullpolicy-02 1/1 Running 0 31s 10.0.0.153 k8s153.oldboyedu.com
oldboyedu-linux80-nginx-alpine 2/2 Running 0 19h 10.244.1.16 k8s152.oldboyedu.com
oldboyedu-linux80-web 1/1 Running 0 19h 10.244.2.6 k8s153.oldboyedu.com
http://10.0.0.153/
推送镜像到仓库
[root@k8s153 ~]$ docker run nginx:1.18
[root@k8s153 ~]$ docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0da5feb95874 nginx:1.18 "/docker-entrypoint.…" 12 seconds ago Up 8 seconds 80/tcp optimistic_banach
把新启动的nginx容器提交到仓库
[root@k8s152 ~]$ docker exec -it optimistic_banach bash
root@0da5feb95874:/# echo 2222222222222 > /usr/share/nginx/html/index.html
#optimistic_banach是之前基于nginx:1.18启动的容器的名字,现在把这个容器提交为镜像,并命名为k8s151.oldboyedu.com:5000/myweb:v0.1
[root@k8s153 ~]$ docker commit optimistic_banach k8s151.oldboyedu.com:5000/myweb:v0.1
sha256:7657735b87ebf45a5f9d48dbfdc37856db9fab14b06d17b7817d6026148fa77c
查看提交的镜像
[root@k8s153 ~]$ docker images | grep k8s151.oldboyedu.com
k8s151.oldboyedu.com:5000/myweb v0.1 b85fe5f2830b 9 minutes ago 133MB
推送到仓库
[root@k8s153 ~]$ docker push k8s151.oldboyedu.com:5000/myweb
在网址http://10.0.0.151:5000/v2/_catalog可以看到
{"repositories":["myweb"]}
指定镜像的下载策略,其值为: Always, Never, IfNotPresent
Always:
总是去拉取最新的镜像。当镜像标签是":latest"或者省略标签时,这是默认值;其他情况下默认值是IfNotPresent。
如果本地镜像存在同名称的tag,其会取出该镜像的RepoDigests(镜像摘要)和远程仓库的RepoDigests进行比较
若比较结果相同,则直接使用本地缓存镜像,若比较结果不同,则会拉取远程仓库最新的镜像
Never:
如果本地有镜像,则尝试启动容器;
如果本地没有镜像,则永远不会去拉取尝试镜像。
IfNotPresent:
如果本地有镜像,则尝试启动容器,并不会去拉取镜像。
如果本地没有镜像,则会去拉取镜像。
(1)编写资源清单
[root@k8s151 /k8s-manifests/podes]$ cat > 04-pods-imagePullPolicy.yaml <<'EOF'
kind: Pod
apiVersion: v1
metadata:
#名字不能有大写
name: imagepullpolicy-ifnotpresent
labels:
apps: myweb
spec:
# 将Pod调度到指定到节点名称
# 注意,节点名称不能乱写,必须是在"kubectl get nodes"指令中存在.
nodeName: k8s152.oldboyedu.com
hostNetwork: true #共享宿主机网络
containers:
- name: myweb
image: nginx:1.18
imagePullPolicy: Always #镜像的下载策略
EOF
(2)常见资源清单
[root@k8s151 /k8s-manifests/podes]$ kubectl apply -f 04-pods-imagePullPolicy.yaml
(3)验证
[root@k8s151 /k8s-manifests/podes]$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
imagepullpolicy-ifnotpresent 1/1 Running 0 85s 10.244.2.11 k8s152.oldboyedu.com <none> <none>
[root@k8s152 ~]$ docker run -it --name "nginx-test" --rm nginx:1.18 bash
root@e3c4006be96f:/# cd /usr/share/nginx/html/
root@e3c4006be96f:/usr/share/nginx/html# ls
50x.html index.html
root@e3c4006be96f:/usr/share/nginx/html# echo "oldboyedu 1111" > index.html
查看刚才启动的测试容器
[root@k8s152 ~]$ docker ps |grep nginx-test
94118ecf543a nginx:1.18 "/docker-entrypoint.…" 59 seconds ago Up 57 seconds 80/tcp nginx-test
[root@k8s152 ~]$ docker commit 94118ecf543a myweb:v0.1
sha256:19141695fd72f4e48bae58911c18d3fc8aefb16b36d3d5d9a052df32684c59f5
[root@k8s152 ~]$ docker images |grep myweb
myweb v0.1 19141695fd72 43 seconds ago 133MB
启动一个新的nginx容器
[root@k8s152 ~]$ docker run -d --name myweb nginx:1.18
0da5feb95874e976acb280b215db03703a1578dd1ea5cfb713a31f4aea8f44c3
[root@k8s152 ~]$ docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0da5feb95874 nginx:1.18 "/docker-entrypoint.…" 12 seconds ago Up 8 seconds 80/tcp optimistic_banach
把新启动的nginx容器提交到仓库
[root@k8s152 ~]$ docker exec -it myweb bash
root@0da5feb95874:/# echo 2222222222222 > /usr/share/nginx/html/index.html
[root@k8s153 ~]$ docker commit myweb k8s151.oldboyedu.com:5000/myweb:v0.1
sha256:7657735b87ebf45a5f9d48dbfdc37856db9fab14b06d17b7817d6026148fa77c
查看提交的镜像
[root@k8s153 ~]$ docker images | grep k8s151.oldboyedu.com
k8s151.oldboyedu.com:5000/myweb v0.1 b85fe5f2830b 9 minutes ago 133MB
推送到仓库
[root@k8s153 ~]$ docker push k8s151.oldboyedu.com:5000/myweb:v0.1
在网址http://10.0.0.151:5000/v2/_catalog可以看到
{"repositories":["myweb"]}
[root@k8s151 /k8s-manifests/podes]$ cat > 04-pods-imagePullPolicy.yaml <<'EOF'
kind: Pod
apiVersion: v1
metadata:
#名字不能有大写
name: imagepullpolicy-ifnotpresent-004
labels:
apps: myweb
spec:
hostNetwork: true #共享宿主机网络
# 将Pod调度到指定到节点名称
# 注意,节点名称不能乱写,必须是在"kubectl get nodes"指令中存在.
nodeName: k8s152.oldboyedu.com
containers:
- name: myweb
image: k8s151.oldboyedu.com:5000/myweb:v0.1
imagePullPolicy: IfNotPresent #镜像的下载策略
EOF
[root@k8s151 /k8s-manifests/podes]$ kubectl apply -f 04-pods-imagePullPolicy.yaml
Note:
在生产环境中部署容器时,你应该避免使用 :latest 标签,因为这使得正在运行的镜像的版本难以追踪,并且难以正确地回滚。
相反,应指定一个有意义的标签,如 v1.42.0。
删除镜像
1)删除元数据信息
[root@k8s151 /k8s-manifests/podes]$ docker exec oldboyedu-registry rm -rf /var/lib/registry/docker/registry/v2/repositories/nginx
2)回收数据
[root@k8s151 /k8s-manifests/podes]$ docker exec oldboyedu-registry registry garbage-collect /etc/docker/registry/config.yml
参考链接:
https://kubernetes.io/zh/docs/concepts/containers/images/
http://10.0.0.152/
(1)编写资源清单
[root@k8s151 /k8s-manifests/podes]$ cat > 05-pods-env.yaml <<'EOF'
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-pods-env
labels:
apps: myweb
spec:
containers:
- name: myweb
image: k8s151.oldboyedu.com:5000/myweb:v0.1
# 向容器传递环境变量.
env:
# 指定环境变量的名称.
- name: SCHOOL
# 指定环境变量的值.
value: oldboyedu
- name: CLASS
value: linux81
- name: test01
image: nginx:1.18
command:
- tail
- -f
- /etc/hosts
env:
- name: ADDRESS
value: 河南驻马店
EOF
(2)创建资源
[root@k8s151 /k8s-manifests/podes]$ kubectl apply -f 05-pods-env.yaml
(3)验证
[root@k8s151 /k8s-manifests/podes]$ kubectl exec oldboyedu-pods-env -c myweb -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=oldboyedu-pods-env-01
SCHOOL=oldboyedu
CLASS=linux81
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.254.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.254.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.254.0.1
KUBERNETES_SERVICE_HOST=10.254.0.1
KUBERNETES_SERVICE_PORT=443
NGINX_VERSION=1.18.0
NJS_VERSION=0.4.4
PKG_RELEASE=2~buster
HOME=/root
[root@k8s151 /k8s-manifests/podes]$ kubectl exec oldboyedu-pods-env -c test01 -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=oldboyedu-pods-env-01
SCHOOL=oldboyedu
CLASS=linux81
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.254.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.254.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.254.0.1
KUBERNETES_SERVICE_HOST=10.254.0.1
KUBERNETES_SERVICE_PORT=443
NGINX_VERSION=1.18.0
NJS_VERSION=0.4.4
PKG_RELEASE=2~buster
HOME=/root
[root@k8s151 /k8s-manifests/podes]$ kubectl exec oldboyedu-pods-env-01 -c test01 -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=oldboyedu-pods-env-01
ADDRESS=河南驻马店
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.254.0.1
KUBERNETES_SERVICE_HOST=10.254.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.254.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.254.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
NGINX_VERSION=1.18.0
NJS_VERSION=0.4.4
PKG_RELEASE=2~buster
HOME=/root
容器中的文件在磁盘上是临时存放的,这给容器中运行比较重要的应用程序带来一些问题。
• 问题1:当容器升级或者崩溃时,kubelet会重建容器,容器内文件会丢失
• 问题2:一个Pod中运行多个容器时需要共享文件。
Kubernetes 卷(Volume)这一抽象概念能够解决这两个问题。
容器部署过程中一般有三种数据:
(1)启动时需要的初始数据,比如配置文件,比如: wordpress,zabbix等等。
(2)启动过程中产生的临时数据,该数据需要多个容器间共享,比如: nginx + filebeat;
(3)启动容器过程中产生的持久化数据,比如:mysql。
综上所述,数据卷的作用就是为了解决上面三种情况产生的数据进行持久化的方案。
参考链接:
https://kubernetes.io/zh/docs/concepts/storage/volumes/
emptyDir类型的Volume在Pod分配到Node上时被创建,Kubernetes会在Node上自动分配一个目录,因此无需指定宿主机Node上对应的目录文件。 这个目录的初始内容为空,当Pod从Node上移除时,emptyDir中的数据会被永久删除。
什么是emptyDir:
是一个临时存储卷,与Pod的生命周期绑定到一起,如果Pod被删除了,这意味着数据也被随之删除。
emptyDir作用:
(1)可以实现持久化;
(2)同一个Pod的多个容器可以实现数据共享
,多个不同的Pod之间不能进行数据通信
;
(3)随着Pod的生命周期而存在,当我们删除Pod时,其数据也会被随之删除;
emptyDir的应用场景:
(1)临时缓存空间,比如基于磁盘的归并排序;
(2)为较耗时计算任务提供检查点,以便任务能方便的从崩溃前状态恢复执行;
(3)存储Web访问日志及错误日志等信息;
emptyDir优缺点:
优点:
(1)可以实现同一个Pod内多个容器之间数据共享;
(2)当Pod内的某个容器被强制删除时,数据并不会丢失,因为Pod没有删除;
缺点:
(1)当Pod被删除时,数据也会被随之删除;
(2)不同的Pod之间无法实现数据共享;
参考链接:
https://kubernetes.io/docs/concepts/storage/volumes#emptydir
温馨提示:
1)启动pods后,使用emptyDir其数据存储在"/var/lib/kubelet/pods"路径下对应的POD_ID目录哟!
/var/lib/kubelet/pods/${POD_ID}/volumes/kubernetes.io~empty-dir/
2)可以尝试验证上一步找到的目录,并探讨为什么Pod删除其数据会被随之删除的真正原因,见视频。
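一个验证思路(示例,Pod 名称请替换为实际值):先取出 Pod 的 UID,再到 Pod 所在节点上查看对应目录:
kubectl get pod <pod名称> -o jsonpath='{.metadata.uid}'
ls /var/lib/kubelet/pods/<上一步得到的UID>/volumes/kubernetes.io~empty-dir/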
先删除所有Pod,以防止已启动的nginx占用80端口,导致新的容器无法启动
[root@k8s151 /k8s-manifests/pods]$ kubectl delete pod --all
[root@k8s151 /k8s-manifests/pods]$ cat > 06-pods-volume-emptyDir.yaml <<'EOF'
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-pods-emptydir
labels:
apps: myweb
spec:
volumes:
- name: data01
emptyDir: {}
- name: data02
emptyDir: {}
containers:
- name: myweb
image: k8s151.oldboyedu.com:5000/myweb:v0.1
volumeMounts:
- name: data01
      mountPath: /oldboyedu-linux81 #把存储卷data01挂载到容器内的/oldboyedu-linux81目录
- name: test01
image: nginx:1.18
command:
- tail
- -f
- /etc/hosts
volumeMounts:
- name: data02
      mountPath: /data-linux81 #把存储卷data02挂载到容器内的/data-linux81目录
EOF
[root@k8s151 /k8s-manifests/pods]$ kubectl apply -f 06-pods-volume-emptyDir.yaml
pod/oldboyedu-pods-emptydir-001 created
进到pod的容器中,没有指定容器名,则进入到第一个容器,即myweb
[root@k8s151 /k8s-manifests/pods]$ kubectl exec oldboyedu-pods-emptydir -it -- bash
Defaulting container name to myweb.
Use 'kubectl describe pod/oldboyedu-pods-emptydir-001 -n default' to see all of the containers in this pod.
root@oldboyedu-pods-emptydir-001:/# cat /usr/share/nginx/html/index.html
2222222222222 #可以看到nginx首页显示的是222
root@oldboyedu-pods-emptydir-001:/# echo AAAAAAAAAAA > /usr/share/nginx/html/index.html #把nignx默认显示改为AAA...
可以看到容器部署在152节点
[root@k8s151 /k8s-manifests/pods]$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
oldboyedu-pods-emptydir 2/2 Running 0 6m41s 10.244.1.23 k8s152.oldboyedu.com
在152节点删除myweb容器(删除后,kubelet会立即重新拉起该容器)
[root@k8s152 ~]$ docker ps | grep myweb
0f91e4e67302 b85fe5f2830b "/docker-entrypoint.…" 3 minutes ago Up 3 minutes k8s_myweb_oldboyedu-pods-emptydir_default_6b012714-7a95-42c1-9fd4-7f5361b2c423_0
[root@k8s152 ~]$ docker rm -f 0f91e4e67302
36efbb657af2
然后验证curl访问nginx,看到nginx的首页恢复为修改之前的222(并没有持久化,容器重启之后,修改的数据并没有保存)
[root@k8s151 /k8s-manifests/pods]$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
oldboyedu-pods-emptydir 2/2 Running 0 6m41s 10.244.1.23 k8s152.oldboyedu.com
#之所以修改的数据没有保存,是因为我们修改的不是挂载目录 /oldboyedu-linux81
[root@k8s151 /k8s-manifests/pods]$ curl 10.244.1.23
2222222222222
启动pods后,使用emptyDir其数据存储在"/var/lib/kubelet/pods"路径下对应的POD_ID目录,/var/lib/kubelet/pods/${POD_ID}/volumes/kubernetes.io~empty-dir/
[root@k8s153 ~]$ ll /var/lib/kubelet/pods/
total 0
drwxr-x--- 5 root root 71 Oct 29 11:30 6cc3a87e-627b-40b3-8f64-c26da3378a86
drwxr-x--- 5 root root 71 Oct 29 11:30 be543b8b-f713-459f-be14-5faa4709bd4b
drwxr-x--- 5 root root 71 Nov 22 14:20 ebde8b2e-87ec-4fe2-a928-99d0e6d1ba21
可以看到修改的内容,在本地目录中已经同步
[root@k8s153 ~]$ cat /var/lib/kubelet/pods/ebde8b2e-87ec-4fe2-a928-99d0e6d1ba21/volumes/kubernetes.io~empty-dir/data01/index.html
AAAAAAAAAAA
[root@k8s151 /k8s-manifests/podes]$ cat > 06-pods-volume-emptyDir.yaml <<'EOF'
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-pods-emptydir
labels:
apps: myweb
spec:
volumes:
- name: data01
emptyDir: {}
- name: data02
emptyDir: {}
containers:
- name: myweb
image: k8s151.oldboyedu.com:5000/myweb:v0.1
volumeMounts:
- name: data01
#挂载/usr/share/nginx/html,实现数据共享,这样修改此文件内的东西,就会实现持久化
mountPath: /usr/share/nginx/html
- name: test01
image: nginx:1.18
command:
- tail
- -f
- /etc/hosts
volumeMounts:
- name: data01
mountPath: /data-linux81
EOF
再次重复上面的步骤,发现删除相应容器后,修改的内容并没有被重置(因为我们是在/usr/share/nginx/html下修改数据,而这个目录挂载到了data01存储卷,实现了数据共享,所以数据得以持久化)
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-linux80-volume-001
labels:
apps: myweb
spec:
nodeName: k8s202.oldboyedu.com
volumes:
- emptyDir: {}
name: data01
- name: data02
emptyDir: {}
- name: data03
emptyDir: {}
containers:
- name: linux80-web
image: k8s201.oldboyedu.com:5000/nginx:1.20.1
volumeMounts:
- name: data01
mountPath: /oldboyedu-linux80-data
- name: data02
mountPath: /oldboyedu-linux80-data002
- name: data03
mountPath: /oldboyedu-linux80-data003
- name: linux80-linux
image: k8s201.oldboyedu.com:5000/alpine
command:
- tail
- -f
- /etc/hosts
stdin: true
volumeMounts:
- name: data01
mountPath: /oldboyedu-linux-data-001
- name: data02
mountPath: /oldboyedu-linux-data-002
- name: data03
mountPath: /oldboyedu-linux-data-003
注意:如果需要用到仓库里面的镜像,首先要把镜像推送到仓库。另外很多示例都使用nginx镜像,要注意nginx的端口占用问题。
hostPath数据卷:
挂载Node文件系统(Pod所在节点)上文件或者目录到Pod中的容器。如果Pod删除了,宿主机的数据并不会被删除。
应用场景:
Pod中容器需要访问宿主机文件。
hostPath优缺点:
优点:
(1) 可以实现同一个Pod不同容器之间的数据共享;
(2) 可以实现同一个Node节点不同Pod之间的数据共享;
缺点:
无法满足跨节点Pod之间的数据共享。
推荐阅读:
https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
[root@k8s151 /k8s-manifests/podes]$ cat > 07-pods-volume-hostPath.yaml <<'EOF'
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-volume-hostpath-01
labels:
apps: myweb
spec:
nodeName: k8s153.oldboyedu.com
# 声明存储卷类型和名称
volumes:
- name: data01
emptyDir: {}
- name: data02
hostPath:
# 指定宿主机的路径,如果源数据是文件
path: /data-linux81
containers:
- name: myweb
image: k8s151.oldboyedu.com:5000/myweb:v0.1
volumeMounts:
- name: data02
mountPath: /usr/share/nginx/html
---
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-volume-hostpath-02
labels:
apps: myweb
spec:
nodeName: k8s153.oldboyedu.com
volumes:
- name: data01
hostPath:
path: /data-linux81
containers:
- name: test01
image: nginx:1.18
volumeMounts:
- name: data01
mountPath: /usr/share/nginx/html
EOF
[root@k8s151 /k8s-manifests/podes]$ kubectl apply -f 07-pods-volume-hostPath.yaml
pod/oldboyedu-volume-hostpath-01 created
pod/oldboyedu-volume-hostpath-02 created
[root@k8s151 /k8s-manifests/podes]$ kubectl get pods -o wide |grep host
oldboyedu-volume-hostpath-01 1/1 Running 0 38s 10.244.2.24 k8s153.oldboyedu.com <none> <none>
oldboyedu-volume-hostpath-02 1/1 Running 0 38s 10.244.2.23 k8s153.oldboyedu.com <none> <none>
因为容器都在153节点
[root@k8s153 ~]$ mkdir -p /data-linux81 && cd /data-linux81
[root@k8s153 /data-linux81]$ echo "oldboyedu linux81" > index.html
因为共享了同一个宿主机目录,所以
[root@k8s151 /k8s-manifests/podes]$ curl 10.244.2.42
oldboyedu linux81
[root@k8s151 /k8s-manifests/podes]$ curl 10.244.2.41
oldboyedu linux81
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-volume-hostpath-01
labels:
apps: myweb
spec:
nodeName: k8s152.oldboyedu.com
# 声明存储卷类型和名称
volumes:
- name: data01
hostPath:
# 指定宿主机的路径,如果源数据是文件
path: /data-linux81
containers:
- name: myweb
image: k8s151.oldboyedu.com:5000/myweb:v0.1
command: ["sleep","3600"]
volumeMounts:
- name: data01
# 挂载点也必须是文件
mountPath: /usr/share/nginx/html
# 以只读的方式挂载,默认值是false。
# readOnly: true
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-linux80-volume-hostpath-03
labels:
apps: myweb
spec:
nodeName: k8s153.oldboyedu.com
volumes:
- name: data01
hostPath:
# 如果源数据是目录
path: /data-linux81
containers:
- name: linux80-web
image: k8s151.oldboyedu.com:5000/myweb:v0.1
command: ["sleep","3600"]
volumeMounts:
- name: data01
# 对应的挂载点应该也是目录
mountPath: /etc/nginx/
# readOnly: true
NFS数据卷:
提供对NFS挂载支持,可以自动将NFS共享路径挂载到Pod中。
NFS:
英文全称为"Network File System"(网络文件系统),是由SUN公司研制的UNIX表示层协议(presentation layer protocol),能使使用者访问网络上别处的文件就像在使用自己的计算机一样。
NFS是一个主流的文件共享服务器,但存在单点故障,我们需要对数据进行备份哟,如果有必要可以使用分布式文件系统哈。
推荐阅读:
https://kubernetes.io/docs/concepts/storage/volumes/#nfs
(1)所有节点安装nfs相关软件包
yum -y install nfs-utils
(2)k8s151节点设置共享目录
mkdir -pv /oldboyedu/data/kubernetes
cat > /etc/exports <<'EOF'
/oldboyedu/data/kubernetes *(rw,no_root_squash)
EOF
(3)配置nfs服务开机自启动
systemctl enable --now nfs
(4)服务端检查NFS挂载信息,如上图所示。
exportfs
(5)客户端节点手动挂载测试(152或153节点)
mount -t nfs k8s151.oldboyedu.com:/oldboyedu/data/kubernetes /mnt/
umount /mnt
[root@k8s151 /oldboyedu/data/kubernetes]$ cat > 08-pods-volume-nfs.yaml <<'EOF'
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-volume-nfs-01
labels:
apps: myweb
spec:
nodeName: k8s152.oldboyedu.com
# 声明存储卷类型和名称
volumes:
- name: data01
#指定nfs的存储卷
nfs:
#指定nfs的服务器
server: 10.0.0.151
#指定nfs服务的存储路径
path: /oldboyedu/data/kubernetes
containers:
- name: myweb
image: k8s151.oldboyedu.com:5000/myweb:v0.1
volumeMounts:
- name: data01
mountPath: /usr/share/nginx/html
---
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-volume-nfs-02
labels:
apps: myweb
spec:
nodeName: k8s153.oldboyedu.com
volumes:
- name: data01
nfs:
server: 10.0.0.151
path: /oldboyedu/data/kubernetes
containers:
- name: test01
image: nginx:1.18
volumeMounts:
- name: data01
mountPath: /usr/share/nginx/html
EOF
[root@k8s151 /oldboyedu/data/kubernetes]$ kubectl apply -f 08-pods-volume-nfs.yaml
pod/oldboyedu-volume-nfs-01 created
pod/oldboyedu-volume-nfs-02 created
[root@k8s151 /oldboyedu/data/kubernetes]$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
oldboyedu-volume-nfs-01 1/1 Running 0 37s 10.244.1.27 k8s152.oldboyedu.com <none> <none>
oldboyedu-volume-nfs-02 1/1 Running 0 36s 10.244.2.43 k8s153.oldboyedu.com <none> <none>
[root@k8s151 /oldboyedu/data/kubernetes]$ echo "AAAAAAAA" >/oldboyedu/data/kubernetes/index.html
#提示,有时候出现错误,可以把pod删了,再试试
[root@k8s151 /oldboyedu/data/kubernetes]$ curl 10.244.2.44
AAAAAAAAA
[root@k8s151 /oldboyedu/data/kubernetes]$ curl 10.244.1.28
AAAAAAAAA
cat 15-pod-volume-nfs-02.yaml
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-linux80-volume-nfs-02
labels:
apps: myweb
spec:
nodeName: k8s152.oldboyedu.com
volumes:
- name: myweb
# 配置NFS挂载
nfs:
server: 10.0.0.151
path: /oldboyedu/data/kubernetes
containers:
- name: linux80-web
image: k8s151.oldboyedu.com:5000/myweb:v0.1
volumeMounts:
- name: myweb
mountPath: /usr/share/nginx/html
configmap数据会存储在etcd数据库,其应用场景主要在于应用程序配置。
configMap支持的数据类型:
(1) 键值对;
(2) 多行数据;
Pod使用configmap资源有两种常见的方式:
(1) 变量注入;
(2) 数据卷挂载
推荐阅读:
https://kubernetes.io/docs/concepts/storage/volumes/#configmap
https://kubernetes.io/docs/concepts/configuration/configmap/
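下文主要演示"数据卷挂载"这种方式,这里先补充一个"变量注入"的最小示例(草稿:其中 Pod 名称 demo-cm-env 是随意取的,引用的 ConfigMap oldboyedu-cm-linux81 及其 school、class 两个 key 会在下文创建):
kind: Pod
apiVersion: v1
metadata:
  name: demo-cm-env
spec:
  containers:
  - name: myweb
    image: nginx:1.18
    env:
    # 从ConfigMap的指定key注入环境变量
    - name: SCHOOL
      valueFrom:
        configMapKeyRef:
          name: oldboyedu-cm-linux81
          key: school
    - name: CLASS
      valueFrom:
        configMapKeyRef:
          name: oldboyedu-cm-linux81
          key: class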
[root@k8s151 ~]$ mkdir -p /k8s-manifests/configMap
[root@k8s151 /k8s-manifests/configMap]$ cat > 01.cm-configMap.yaml <<"EOF"
apiVersion: v1
kind: ConfigMap
metadata:
name: oldboyedu-cm-linux81
data:
# 单行数据
school: "清华"
class: "云计算"
# 多行数据
student.txt: |
姓名: 张三
年龄: 25
民族: 汉
EOF
[root@k8s151 /k8s-manifests/configMap]$ kubectl apply -f 01.cm-configMap.yaml
configmap/oldboyedu-cm-linux81 created
查找configmaps的简称
[root@k8s151 /oldboyedu/data/kubernetes]$ kubectl api-resources |grep configmaps
configmaps cm true ConfigMap
[root@k8s151 /k8s-manifests/configMap]$ kubectl get configmaps
NAME DATA AGE
oldboyedu-cm-linux81 3 33s
可以使用configMap的简称代替
[root@k8s151 /k8s-manifests/configMap]$ kubectl get cm
NAME DATA AGE
oldboyedu-cm-linux81 3 33s
[root@k8s151 /k8s-manifests/configMap]$ kubectl get cm -o yaml
apiVersion: v1
items:
- apiVersion: v1
data:
class: 云计算
school: 清华
student.txt: |
姓名: 张三
年龄: 25
民族: 汉
kind: ConfigMap
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","data":{"class":"云计算","school":"清华","student.txt":"姓名: 张三\n年龄: 25\n民族: 汉\n"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"oldboyedu-cm-linux81","namespace":"default"}}
creationTimestamp: "2022-11-23T02:26:08Z"
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:data:
.: {}
f:class: {}
f:school: {}
f:student.txt: {}
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
manager: kubectl
operation: Update
time: "2022-11-23T02:26:08Z"
name: oldboyedu-cm-linux81
namespace: default
resourceVersion: "1059853"
selfLink: /api/v1/namespaces/default/configmaps/oldboyedu-cm-linux81
uid: f2a2e989-bae1-4c34-9751-22a495b51fc1
kind: List
metadata:
resourceVersion: ""
selfLink: ""
[root@k8s151 /k8s-manifests/configMap]$ kubectl describe cm oldboyedu-cm-linux81
Name: oldboyedu-cm-linux81
Namespace: default
Labels:
Annotations:
Data
====
school:
----
清华
student.txt:
----
姓名: 张三
年龄: 25
民族: 汉
class:
----
云计算
Events:
使用cm的资源
[root@k8s151 /k8s-manifests/configMap]$ cat > 01-pods-myweb-configMap.yaml <<'EOF'
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-volumes-cm-02
labels:
apps: myweb
spec:
nodeName: k8s152.oldboyedu.com
volumes:
- name: data01
# 定义数据卷类型是configMap.
configMap:
# 引用configMap的名称.
name: oldboyedu-cm-linux81
# 引用configMap的具体的Key相关信息.
items:
# 指定configmap的key名称,该名称必须在cm资源中存在.
- key: student.txt
# 可以暂时理解为挂载到容器的文件名称.
path: oldboyedu-student.txt
- key: class
path: oldboyedu-class
- key: school
path: oldboyedu-school
containers:
- name: myweb
image: k8s151.oldboyedu.com:5000/myweb:v0.1
volumeMounts:
- name: data01
mountPath: /oldboyedu-configMap/
EOF
[root@k8s151 /k8s-manifests/podes]$ kubectl apply -f 01-pods-myweb-configMap.yaml
pod/oldboyedu-volumes-cm-01 created
[root@k8s151 /k8s-manifests/podes]$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cm-school-01 0/1 ContainerCreating 0 39s <none> k8s152.oldboyedu.com <none> <none>
[root@k8s151 /k8s-manifests/podes]$ kubectl exec -it oldboyedu-volumes-cm-01 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
root@oldboyedu-volumes-cm-01:/# ls /oldboyedu-configMap/
oldboyedu-class oldboyedu-school oldboyedu-student.txt
root@oldboyedu-volumes-cm-01:/# cat /oldboyedu-configMap/oldboyedu-student.txt
姓名: 张三
年龄: 25
民族: 汉
[root@k8s151 /k8s-manifests/configMap]$ cat > 02-cm-configMap.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
name: linux81-config
data:
# 单行数据
name: "清华"
class: "云计算"
# 多行数据
student.txt: |
姓名: 张三
年龄: 25
民族: 汉
my.cnf: |
host: 10.0.0.201
port: 13306
socket: /tmp/mysql.sock
username: root
password: oldboyedu
redis.conf: |
host: 10.0.0.293
port: 6379
requirepass: oldboyedu
nginx.conf: |
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
# include /usr/local/nginx/conf/conf.d/*.conf;
server {
listen 81;
root /usr/local/nginx/html/bird/;
server_name game01.oldboyedu.com;
}
server {
listen 82;
root /usr/local/nginx/html/pinshu/;
server_name game02.oldboyedu.com;
}
server {
listen 83;
root /usr/local/nginx/html/tanke/;
server_name game03.oldboyedu.com;
}
server {
listen 84;
root /usr/local/nginx/html/pingtai/;
server_name game04.oldboyedu.com;
}
server {
listen 85;
root /usr/local/nginx/html/chengbao/;
server_name game05.oldboyedu.com;
}
}
EOF
[root@k8s151 /k8s-manifests/configMap]$ kubectl apply -f 02-cm-configMap.yaml
configmap/linux81-config created
[root@k8s151 /k8s-manifests/podes]$ cat > 09-pods-conf-configMap.yaml <<'EOF'
kind: Pod
apiVersion: v1
metadata:
name: volumes-cm-conf-01
spec:
nodeName: k8s152.oldboyedu.com
volumes:
- name: oldboyedu-myweb
configMap:
name: linux81-config
items:
- key: student.txt
path: oldboyedu-student.txt
- key: class
path: oldboyedu-class
- name: oldboyedu-nginx
configMap:
name: linux81-config
items:
- key: nginx.conf
path: nginx.conf
containers:
- name: myweb
image: k8s151.oldboyedu.com:5000/myweb:v0.1
volumeMounts:
- name: oldboyedu-myweb
mountPath: /oldboyedu-configMap/
- name: oldboyedu-nginx
mountPath: /oldboyedu-nginx/
EOF
[root@k8s151 /k8s-manifests/podes]$ kubectl apply -f 09-pods-conf-configMap.yaml
pod/volumes-cm-conf-01 created
[root@k8s151 /k8s-manifests/podes]$ kubectl exec -it volumes-cm-conf-01 -- bash
pod/volumes-cm-conf-02 created
[root@k8s151 /k8s-manifests/podes]$ kubectl exec -it volumes-cm-conf-01 -- bash
root@volumes-cm-conf-01:/oldboyedu-nginx# ls /oldboyedu-nginx/
nginx.conf
root@volumes-cm-conf-01:/oldboyedu-nginx# ls /oldboyedu-configMap/
oldboyedu-class oldboyedu-student.txt
[root@k8s151 /k8s-manifests/podes]$ cat > 09-pods-env-configMap.yaml <<'EOF'
kind: Pod
apiVersion: v1
metadata:
name: volumes-cm-env-01
spec:
nodeName: k8s152.oldboyedu.com
volumes:
- name: oldboyedu-myweb
configMap:
name: linux81-config
items:
- key: student.txt
path: oldboyedu-student.txt
- key: class
path: oldboyedu-class
- name: oldboyedu-nginx
configMap:
name: linux81-config
items:
- key: nginx.conf
path: nginx.conf
containers:
- name: myweb
image: k8s151.oldboyedu.com:5000/myweb:v0.1
env:
#自定义变量名称
- name: CLASS
#自定义值
value: linux81
- name: MYSQL_CONFIG
# 指定从哪里取值
valueFrom:
# 指定从configMap去引用数据
configMapKeyRef:
# 指定configMap的名称(从指定cm中读取数据)
name: linux81-config
# 指定configmap的key,即引用哪条数据!
key: my.cnf
volumeMounts:
- name: oldboyedu-myweb
mountPath: /oldboyedu-configMap/
- name: oldboyedu-nginx
mountPath: /oldboyedu-nginx/
EOF
[root@k8s151 /k8s-manifests/podes]$ kubectl apply -f 09-pods-env-configMap.yaml
pod/volumes-cm-conf-01 created
[root@k8s151 /k8s-manifests/podes]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
volumes-cm-conf-01 1/1 Running 0 47m
volumes-cm-env-01 1/1 Running 0 10s
[root@k8s151 /k8s-manifests/podes]$ kubectl exec -it volumes-cm-env-01 -- bash
root@volumes-cm-env-01:/# env
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=volumes-cm-env-01
PWD=/
PKG_RELEASE=2~buster
HOME=/root
KUBERNETES_PORT_443_TCP=tcp://10.254.0.1:443
NJS_VERSION=0.4.4
TERM=xterm
CLASS=linux81 #自定义的变量和值
SHLVL=1
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.254.0.1
KUBERNETES_SERVICE_HOST=10.254.0.1
KUBERNETES_PORT=tcp://10.254.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
MYSQL_CONFIG=host: 10.0.0.201 #从cm中引用的配置
port: 13306
socket: /tmp/mysql.sock
username: root
password: oldboyedu
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
NGINX_VERSION=1.18.0
_=/usr/bin/env
与ConfigMap类似,区别在于secret存储敏感数据,所有的数据都需要经过base64进行编码。
使用secret主要存储的是凭据信息。
参考链接:
https://kubernetes.io/zh/docs/concepts/configuration/secret/#secret-types
[root@k8s151 /k8s-manifests/secret]$ cat > 01.linux81-secret.yaml <<'EOF'
apiVersion: v1
kind: Secret
metadata:
name: db-user-passwd
# Opaque类型是用户自定义类型.
type: Opaque
data:
# 定义两条数据,其值必须是base64编码后的数据,否则创建会报错哟~
username: YWRtaW4K
password: b2xkYm95ZWR1Cg==
EOF
[root@k8s151 /k8s-manifests/podes]$ kubectl apply -f 01.linux81-secret.yaml
secret/db-user-passwd created
[root@k8s151 /k8s-manifests/podes]$ kubectl get secrets
NAME TYPE DATA AGE
db-user-passwd Opaque 2 44s
default-token-c5lh2 kubernetes.io/service-account-token 3 25d #存服务账号令牌的
[root@k8s151 /k8s-manifests/podes]$ kubectl describe secrets db-user-passwd
Name: db-user-passwd
Namespace: default
Labels:
Annotations:
Type: Opaque
Data
====
password: 10 bytes
username: 6 bytes
[root@k8s151 /k8s-manifests/podes]$ kubectl get secrets db-user-passwd -o yaml
apiVersion: v1
data:
password: b2xkYm95ZWR1Cg==
username: YWRtaW4K
kind: Secret
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","data":{"password":"b2xkYm95ZWR1Cg==","username":"YWRtaW4K"},"kind":"Secret","metadata":{"annotations":{},"name":"db-user-passwd","namespace":"default"},"type":"Opaque"}
creationTimestamp: "2022-11-23T12:25:03Z"
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:data:
.: {}
f:password: {}
f:username: {}
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:type: {}
manager: kubectl
operation: Update
time: "2022-11-23T12:25:03Z"
name: db-user-passwd
namespace: default
resourceVersion: "1146120"
selfLink: /api/v1/namespaces/default/secrets/db-user-passwd
uid: b49b7664-dc8c-4265-b08c-82dd9de35b6e
type: Opaque
解码
[root@k8s151 /k8s-manifests/podes]$ echo YWRtaW4K |base64 -d
admin
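补充:清单中使用的base64编码值可以这样生成(echo默认会在末尾追加换行符,所以实际编码的是"admin\n"和"oldboyedu\n",结果与上面清单中的值一致):
[root@k8s151 ~]$ echo admin | base64
YWRtaW4K
[root@k8s151 ~]$ echo oldboyedu | base64
b2xkYm95ZWR1Cg==
如果不希望把换行符也编码进去,可以使用"echo -n admin | base64"。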
[root@k8s151 /k8s-manifests/pods]$ cat > 12-pods-secrets.yaml << 'EOF'
kind: Pod
apiVersion: v1
metadata:
name: linux80-volume-secret
labels:
apps: myweb
spec:
nodeName: k8s152.oldboyedu.com
volumes:
- name: myweb
# 定义数据卷类型是secret
secret:
# 引用secret的名称.
secretName: db-user-passwd
# 引用secret具体的Key相关信息.
items:
# 指定secret的key名称,该名称必须在secret资源中存在.
- key: username
# 可以暂时理解为挂载到容器的名称.
path: username.txt
- key: password
path: password.txt
containers:
- name: linux80-web
image: k8s151.oldboyedu.com:5000/myweb:v0.1
volumeMounts:
- name: myweb
mountPath: /oldboyedu-linux80/
EOF
[root@k8s151 /k8s-manifests/pods]$ kubectl apply -f 12-pods-secrets.yaml
pod/linux80-volume-secret created
[root@k8s151 /k8s-manifests/pods]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
linux80-volume-secret 1/1 Running 0 10s
[root@k8s151 /k8s-manifests/pods]$ kubectl exec -it linux80-volume-secret -- bash
root@linux80-volume-secret:/# ls /oldboyedu-linux80/
password.txt username.txt
root@linux80-volume-secret:/# cat /oldboyedu-linux80/password.txt
oldboyedu #可以看到,内容已经自动做了base64解码
[root@k8s151 /k8s-manifests/pods]$ cat > 12.pods-env-secrets.yaml << 'EOF'
kind: Pod
apiVersion: v1
metadata:
name: linux80-env-secret-demo
labels:
apps: myweb
spec:
nodeName: k8s152.oldboyedu.com
containers:
- name: linux80-web
image: k8s151.oldboyedu.com:5000/myweb:v0.1
env:
- name: oldboyedu-linux80-username
# 指定从哪里取值
valueFrom:
# 指定从secret去引用数据
secretKeyRef:
# 指定secret的名称
name: db-user-passwd
# 指定secret的key,即引用哪条数据!
key: username
- name: oldboyedu-linux80-password
valueFrom:
secretKeyRef:
name: db-user-passwd
key: password
EOF
[root@k8s151 /k8s-manifests/pods]$ kubectl apply -f 12.pods-env-secrets.yaml
pod/linux80-env-secret-demo created
[root@k8s151 /k8s-manifests/pods]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
linux80-volume-secret 1/1 Running 0 11m
linux80-env-secret-demo 1/1 Running 0 18s
[root@k8s151 /k8s-manifests/pods]$ kubectl exec -it linux80-env-secret-demo -- bash
root@linux80-env-secret-demo:/# env
oldboyedu-linux80-username=admin #从secret引用的账号和密码,已自动解码
oldboyedu-linux80-password=oldboyedu
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=linux80-env-secret-demo
PWD=/
PKG_RELEASE=2~buster
HOME=/root
KUBERNETES_PORT_443_TCP=tcp://10.254.0.1:443
NJS_VERSION=0.4.4
TERM=xterm
SHLVL=1
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.254.0.1
KUBERNETES_SERVICE_HOST=10.254.0.1
KUBERNETES_PORT=tcp://10.254.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
NGINX_VERSION=1.18.0
_=/usr/bin/env
root@linux80-env-secret-demo:/#
subPath的使用方法一共有两种:
(1) 同一个pod中多容器挂载同一个卷时提供隔离;
(2) 将configMap和secret作为文件挂载到容器中而不覆盖挂载目录下的文件;
#configmap挂载方式
[root@k8s151 /k8s-manifests/pods]$ cat > 09-pods-nginxConf-configMap.yaml <<'EOF'
kind: Pod
apiVersion: v1
metadata:
name: cm-nginx-conf-01
labels:
apps: myweb
spec:
nodeName: k8s152.oldboyedu.com
volumes:
- name: data01
configMap:
name: linux81-config
items:
- key: nginx.conf
path: nginx.conf
containers:
- name: test01
image: k8s151.oldboyedu.com:5000/myweb:v0.1
command: ['tail','-f','/etc/hosts']
volumeMounts:
- name: data01
mountPath: /etc/nginx/
EOF
[root@k8s151 /k8s-manifests/pods]$ kubectl apply -f 09-pods-nginxConf-configMap.yaml
pod/cm-nginx-conf-01 created
[root@k8s151 /k8s-manifests/pods]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cm-nginx-conf-01 1/1 Running 0 16s
#可以看到,/etc/nginx中原有的其他文件都看不到了(目录被整个挂载覆盖),因为我们挂载的卷里只有nginx.conf
[root@k8s151 /k8s-manifests/pods]$ kubectl exec -it cm-nginx-conf-01 -- bash
root@cm-nginx-conf-01:/# ls /etc/nginx/
nginx.conf
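下面是subPath第一种用法的参考案例:同一个Pod内的两个容器挂载同一个emptyDir卷,但各自使用不同的subPath,从而相互隔离数据(该案例中的节点名k8s202与镜像仓库k8s201沿用了另一套实验环境的命名,仅供参考):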
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-linux80-subpath
labels:
apps: myweb
spec:
nodeName: k8s202.oldboyedu.com
volumes:
- name: data01
emptyDir: {}
containers:
- name: linux80-web
image: k8s201.oldboyedu.com:5000/nginx:1.20.1
volumeMounts:
- name: data01
mountPath: /oldboyedu-linux80-data
# 当挂载相同的存储卷时,如果subPath的值相同则共享数据,若不同,则隔离两个容器的数据。
subPath: "oldboyedu-linux80-c1"
- name: linux80-alpine
image: k8s201.oldboyedu.com:5000/alpine
command: ["sleep","600"]
volumeMounts:
- name: data01
mountPath: /oldboyedu-linux-data-001
subPath: "oldboyedu-linux80-c2"
[root@k8s151 /k8s-manifests/pods]$ cat > 09-pods-nginxConf-configMap.yaml <<'EOF'
kind: Pod
apiVersion: v1
metadata:
name: cm-nginx-conf-03
labels:
apps: myweb
spec:
nodeName: k8s152.oldboyedu.com
volumes:
- name: data01
configMap:
name: linux81-config
items:
- key: nginx.conf
path: nginx.conf #cm的值
containers:
- name: test01
image: k8s151.oldboyedu.com:5000/myweb:v0.1
command: ['tail','-f','/etc/hosts']
volumeMounts:
- name: data01
mountPath: /etc/nginx/nginx.conf
#当subPath的值与CM的path相同时,mountPath的路径为文件,反之为文件夹
subPath: nginx.conf
EOF
[root@k8s151 /k8s-manifests/podes]$ kubectl apply -f 09-pods-nginxConf-configMap.yaml
pod/cm-nginx-conf-03 created
[root@k8s151 /k8s-manifests/podes]$ kubectl exec -it cm-nginx-conf-03 -- bash
root@cm-nginx-conf-03:/# ls /etc/nginx/
conf.d fastcgi_params koi-utf koi-win mime.types modules nginx.conf scgi_params uwsgi_params win-utf
root@cm-nginx-conf-03:/# ll /etc/nginx/
bash: ll: command not found
root@cm-nginx-conf-03:/# ls -l /etc/nginx/
total 36
drwxr-xr-x 1 root root 26 Nov 22 03:30 conf.d
-rw-r--r-- 1 root root 1007 Apr 21 2020 fastcgi_params
-rw-r--r-- 1 root root 2837 Apr 21 2020 koi-utf
-rw-r--r-- 1 root root 2223 Apr 21 2020 koi-win
-rw-r--r-- 1 root root 5231 Apr 21 2020 mime.types
lrwxrwxrwx 1 root root 22 Oct 29 2020 modules -> /usr/lib/nginx/modules
-rw-r--r-- 1 root root 955 Nov 23 07:38 nginx.conf
-rw-r--r-- 1 root root 636 Apr 21 2020 scgi_params
-rw-r--r-- 1 root root 664 Apr 21 2020 uwsgi_params
-rw-r--r-- 1 root root 3610 Apr 21 2020 win-utf
root@cm-nginx-conf-03:/# cat /etc/nginx/nginx.conf
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
# include /usr/local/nginx/conf/conf.d/*.conf;
server {
listen 81;
root /usr/local/nginx/html/bird/;
server_name game01.oldboyedu.com;
}
server {
listen 82;
root /usr/local/nginx/html/pinshu/;
server_name game02.oldboyedu.com;
}
server {
listen 83;
root /usr/local/nginx/html/tanke/;
server_name game03.oldboyedu.com;
}
server {
listen 84;
root /usr/local/nginx/html/pingtai/;
server_name game04.oldboyedu.com;
}
server {
listen 85;
root /usr/local/nginx/html/chengbao/;
server_name game05.oldboyedu.com;
}
}
#可以从上面的结果看到,使用subPath后,只有nginx.conf被替换成了cm中的内容,目录下其他的文件并没有被覆盖掉
(1)编写资源清单
cat > 21-pod-label.yaml <<'EOF'
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-linux80-label
labels:
school: oldboyedu
class: linux80
address: shahe_oldboyedu
spec:
containers:
- name: linux80-web
image: k8s151.oldboyedu.com:5000/myweb:v0.1
EOF
(2)创建资源清单
kubectl apply -f 21-pod-label.yaml
(3)查看标签
kubectl get -f 21-pod-label.yaml --show-labels
kubectl get po -l school=oldboyedu --show-labels
(4)修改标签
见视频。使用apply应用修改的labels字段即可。
kubectl label pods oldboyedu-linux80-label address=zhumadian --overwrite
(1)一次性打多个标签
kubectl label -f 21-pod-label.yaml title=linux price=6666 brand=k8s
kubectl label po oldboyedu-linux80-label title=linux price=6666 brand=k8s
(2)一次性移除多个标签
kubectl label -f 21-pod-label.yaml title- price- brand-
kubectl label po oldboyedu-linux80-label title- price- brand-
(3)修改标签
kubectl label -f 21-pod-label.yaml --overwrite school=oldboyedu2022
名称空间是用来隔离K8S集群的资源。我们通常使用名称空间对企业业务进行逻辑上划分。
温馨提示:
(1)在同一个名称空间下,同一种资源类型不能出现重名;
(2)在不同的名称空间下,相同的资源类型可以出现同名;
查看现有的名称空间
kubectl get namespaces
kubectl get ns #简称
创建命名空间
[root@k8s151 /k8s-manifests/podes]$ kubectl create namespace oldboyedu-linux
namespace/oldboyedu-linux created
查看kube-system名称空间的所有pod,cm,secret等信息。
若创建/查看资源时,未使用"-n"选项显式指定名称空间,则默认使用"default"名称空间。
kubectl get pods,cm,secret -n kube-system
查看所有名称空间的pod,cm,secret等信息。
kubectl get pods,cm,secret -A
基于命令行的方式创建名称空间。
kubectl create namespace oldboyedu-linux80
基于命令行的方式删除名称空间.
kubectl delete namespaces oldboyedu-linux80
[root@k8s151 /k8s-manifests/podes]$ kubectl describe ns oldboyedu-linux
Name: oldboyedu-linux
Labels:
Annotations:
Status: Active
No resource quota.
No LimitRange resource.
温馨提示:
(1)删除名称空间时,该名称空间下的所有资源都会被随之删除哟~
(2)判断K8S集群资源是否支持名称空间,可以看"kubectl api-resources"输出中的NAMESPACED字段(示例见下);
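例如,可以用--namespaced参数过滤出支持或不支持名称空间的资源类型:
# 列出支持名称空间的资源类型
kubectl api-resources --namespaced=true
# 列出不支持名称空间的资源类型(如node,pv等集群级资源)
kubectl api-resources --namespaced=false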
[root@k8s151 /k8s-manifests/pods]$ cat > 10.ns-linux.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
name: oldboyedu-linux80
---
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-linux80-pod-ns
namespace: oldboyedu-linux80
labels:
school: oldboyedu
class: linux80
spec:
containers:
- name: linux80-web
image: k8s151.oldboyedu.com:5000/myweb:v0.1
---
apiVersion: v1
kind: ConfigMap
metadata:
name: oldboyedu-nginx
namespace: oldboyedu-linux80
data:
nginx.conf: |
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
# include /usr/local/nginx/conf/conf.d/*.conf;
server {
listen 81;
root /usr/local/nginx/html/bird/;
server_name game01.oldboyedu.com;
}
server {
listen 82;
root /usr/local/nginx/html/pinshu/;
server_name game02.oldboyedu.com;
}
server {
listen 83;
root /usr/local/nginx/html/tanke/;
server_name game03.oldboyedu.com;
}
server {
listen 84;
root /usr/local/nginx/html/pingtai/;
server_name game04.oldboyedu.com;
}
server {
listen 85;
root /usr/local/nginx/html/chengbao/;
server_name game05.oldboyedu.com;
}
}
---
apiVersion: v1
kind: Secret
metadata:
name: db-user-passwd
namespace: oldboyedu-linux80
type: Opaque
data:
username: YWRtaW4K
password: b2xkYm95ZWR1Cg==
EOF
[root@k8s151 /k8s-manifests/pods]$ kubectl apply -f 10.ns-linux.yaml
namespace/oldboyedu-linux80 created
pod/oldboyedu-linux80-pod-ns created
configmap/oldboyedu-nginx created
secret/db-user-passwd created
不指定命名空间时,只显示默认(default)名称空间的资源
[root@k8s151 /k8s-manifests/pods]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cm-nginx-conf-03 1/1 Running 0 3h45m
指定命名空间
[root@k8s151 /k8s-manifests/pods]$ kubectl get pods -n oldboyedu-linux80
NAME READY STATUS RESTARTS AGE
oldboyedu-linux80-pod-ns 1/1 Running 0 47s
Pod的spec中包含一个restartPolicy字段,其可能取值包括 Always、OnFailure和Never。默认值是Always。
Always:
容器退出时,始终重启容器(即创建新容器),默认策略。
Never:
容器退出时,不重启容器(即不创建新容器)。
OnFailure:
当容器异常退出时(退出码非0,例如被kill -9杀死时退出码为137),重启容器(即创建新容器)。
当容器正常退出(docker stop,退出码为0)不重启容器。
当Pod中的容器退出时,kubelet会按指数回退方式计算重启的延迟(10s、20s、40s、...),其最长延迟为5分钟。 一旦某容器执行了 10分钟并且没有出现问题,kubelet对该容器的重启回退计时器执行重置操作。
参考案例:
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-linux80-restartpolicy-01
labels:
class: linux80
school: oldboyedu
hobby: linux80
spec:
# 指定重启策略
# Never:
# 容器退出时,不重启容器(即不创建新容器)。
# Always:
# 容器退出时,始终重启容器(即创建新的容器),默认策略。
# OnFailure:
# 当容器异常退出时,重启容器(即创建新的容器)。
# restartPolicy: Never
# restartPolicy: Always
restartPolicy: OnFailure
containers:
- name: linux80-web
image: nginx:1.18
推荐阅读:
https://kubernetes.io/zh/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy
常用的探针(Probe):
livenessProbe:
健康状态检查,周期性检查服务是否存活,检查结果失败,将"重启"容器(删除原容器并重新创建新容器)。
如果容器没有提供健康状态检查,则默认状态为Success。
readinessProbe:
可用性检查,周期性检查服务是否可用,从而判断容器是否就绪。
若检测Pod服务不可用,则会将Pod从svc的ep列表中移除。
若检测Pod服务可用,则会将Pod重新添加到svc的ep列表中。
如果容器没有提供可用性检查,则默认状态为Success。
startupProbe: (1.16+之后的版本才支持)
如果提供了启动探针,则所有其他探针都会被禁用,直到此探针成功为止。
如果启动探测失败,kubelet将杀死容器,而容器依其重启策略进行重启。
如果容器没有提供启动探测,则默认状态为 Success。(下文liveness探针示例之后附有一个startupProbe的参考写法)
探针(Probe)检测Pod服务方法:
exec:
执行一段命令,根据命令的退出码判断执行结果:退出码为0表示成功,非0表示失败,类似于用"echo $?"查看上一条命令的返回值。
httpGet:
发起HTTP请求,根据返回的状态码来判断服务是否正常。
200: 请求成功(httpGet探针将2xx、3xx状态码都视为成功,其余均视为失败)
301: 永久跳转
302: 临时跳转
401: 认证失败
403: 权限被拒绝
404: 文件找不到
413: 请求实体过大(如上传文件过大)
500: 服务器内部错误
502: 网关错误(后端返回了无效响应)
504: 后端应用网关响应超时
…
tcpSocket:
测试某个TCP端口是否能够连接,类似于telnet,nc等测试工具。
参考链接:
https://kubernetes.io/zh/docs/concepts/workloads/pods/pod-lifecycle/#types-of-probe
[root@k8s151 /k8s-manifests/pods]$ cat > 16-pro-livenessProbe-exec.yaml <<'EOF'
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-linux80-exec-001
labels:
apps: myweb
spec:
containers:
- name: linux81-exec
image: nginx:1.18
command:
- /bin/bash
- -c
- touch /tmp/oldboyedu-linux81-healthy; sleep 5; rm -f /tmp/oldboyedu-linux81-healthy; sleep 600
# 健康状态检查,周期性检查服务是否存活,检查结果失败,将重启容器。
livenessProbe:
# 使用exec的方式去做健康检查
exec:
# 自定义检查的命令
command:
- cat
- /tmp/oldboyedu-linux81-healthy
# 检测服务失败次数的累加值,默认值是3次,最小值是1。当检测服务成功后,该值会被重置!
failureThreshold: 3
# 指定多久之后进行健康状态检查,即此时间段内检测服务失败并不会对failureThreshold进行计数。
initialDelaySeconds: 15
# 指定探针检测的频率,默认是10s,最小值为1.
periodSeconds: 1
# 检测服务成功次数的累加值,默认值为1次,最小值1.
successThreshold: 1
# 一次检测周期超时的秒数,默认值是1秒,最小值为1.
timeoutSeconds: 1
EOF
[root@k8s151 /k8s-manifests/pods]$ kubectl apply -f 16-pro-livenessProbe-exec.yaml
pod/oldboyedu-linux80-exec-001 created
[root@k8s151 /k8s-manifests/pods]$ cat > 17-pro-livenessProbe-httpGet.yaml <<'EOF'
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-linux81-httpget-001
labels:
apps: myweb
spec:
containers:
- name: linux81-httpget
image: nginx:1.18
# 健康状态检查,周期性检查服务是否存活,检查结果失败,将重启容器。
livenessProbe:
# 使用httpGet的方式去做健康检查
httpGet:
# 指定访问的端口号
port: 80
# 检测指定的访问路径
path: /index.html
# 检测服务失败次数的累加值,默认值是3次,最小值是1。当检测服务成功后,该值会被重置!
failureThreshold: 3
# 指定多久之后进行健康状态检查,即此时间段内检测服务失败并不会对failureThreshold进行计数。
initialDelaySeconds: 15
# 指定探针检测的频率,默认是10s,最小值为1.
periodSeconds: 1
# 检测服务成功次数的累加值,默认值为1次,最小值1.
successThreshold: 1
# 一次检测周期超时的秒数,默认值是1秒,最小值为1.
timeoutSeconds: 1
EOF
[root@k8s151 /k8s-manifests/pods]$ cat > 17-pro-livenessProbe-tcpSocket.yaml <<'EOF'
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-linux81-tcpsocket-001
labels:
apps: myweb
spec:
containers:
- name: linux80-tcpsocket
image: nginx:1.18
command:
- /bin/bash
- -c
- nginx ; sleep 10; nginx -s stop ; sleep 600
# 健康状态检查,周期性检查服务是否存活,检查结果失败,将重启容器。
livenessProbe:
# 使用tcpSocket的方式去做健康检查
tcpSocket:
port: 80
# 检测服务失败次数的累加值,默认值是3次,最小值是1。当检测服务成功后,该值会被重置!
failureThreshold: 3
# 指定多久之后进行健康状态检查,即此时间段内检测服务失败并不会对failureThreshold进行计数。
initialDelaySeconds: 15
# 指定探针检测的频率,默认是10s,最小值为1.
periodSeconds: 1
# 检测服务成功次数的累加值,默认值为1次,最小值1.
successThreshold: 1
# 一次检测周期超时的秒数,默认值是1秒,最小值为1.
timeoutSeconds: 1
EOF
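上面演示了livenessProbe的三种检测方式。startupProbe的写法与其基本一致,这里给出一个参考示例(需要K8S 1.16及以上版本;Pod名称、镜像和检测路径均为假设,仅作示意):在startupProbe成功之前,livenessProbe和readinessProbe都不会生效,适合保护启动较慢的应用。
kind: Pod
apiVersion: v1
metadata:
  name: oldboyedu-linux81-startupprobe-001
spec:
  containers:
  - name: linux81-startup
    image: nginx:1.18
    # 启动探针:最多允许 30 * 10 = 300秒的启动时间,超过则按重启策略重启容器
    startupProbe:
      httpGet:
        port: 80
        path: /index.html
      failureThreshold: 30
      periodSeconds: 10
    # 启动探针成功后,存活探针才开始周期性检查
    livenessProbe:
      httpGet:
        port: 80
        path: /index.html
      periodSeconds: 5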
一个新Pod创建后,Service就能立即选择到它,并会把请求转发给Pod,那问题就来了,通常一个Pod启动是需要时间的,如果Pod还没准备好(可能需要时间来加载配置或数据,或者可能需要执行一个预热程序之类),这时把请求转给Pod的话,Pod也无法处理,造成请求失败
Kubernetes解决这个问题的方法就是给Pod加一个业务就绪探针Readiness Probe,当检测到Pod就绪后,才允许Service将请求转给Pod。
**readinessProbe:** 指示容器是否已经准备好接收请求(是否启动完成并就绪)。在就绪探针的初始延迟时间(initialDelaySeconds)之内,就绪状态默认为Failure;待容器启动成功且探针探测成功后,状态变更为Success。如果未配置就绪探针,则默认状态为Success。
只有状态为Success的Pod,才会被纳入其所属Service的Endpoints列表中,Service收到请求后才有可能把请求分发给它。
Readiness Probe的效果正是借助Endpoints实现的:当Pod还未就绪时,将其IP:Port从Endpoints中移除,待Pod就绪后再重新加入Endpoints。
Probe执行容器中的命令并检查命令退出的状态码,如果状态码为0则说明已经就绪
[root@k8s151 /k8s-manifests/pods]$ cat > 18-pro-readinessProbe-exec.yaml <<'EOF'
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-linux81-readinessprobe-exec-001
labels:
apps: myweb
spec:
containers:
- name: linux80-exec
image: nginx:1.18
command:
- /bin/bash
- -c
- touch /tmp/oldboyedu-linux80-healthy; sleep 5; rm -f /tmp/oldboyedu-linux80-healthy; sleep 600
# 可用性检查,周期性检查服务是否可用,从而判断容器是否就绪.
readinessProbe:
# 使用exec的方式去做健康检查
exec:
# 自定义检查的命令
command:
- cat
- /tmp/oldboyedu-linux80-healthy
# 检测服务失败次数的累加值,默认值是3次,最小值是1。当检测服务成功后,该值会被重置!
failureThreshold: 3
# 指定多久之后进行可用性检查,在此之前,Pod始终处于未就绪状态。
initialDelaySeconds: 15
# 指定探针检测的频率,默认是10s,最小值为1.
periodSeconds: 1
# 检测服务成功次数的累加值,默认值为1次,最小值1.
successThreshold: 1
# 一次检测周期超时的秒数,默认值是1秒,最小值为1.
timeoutSeconds: 1
EOF
往容器的IP:Port发送HTTP GET请求,如果Probe收到2xx或3xx,说明已经就绪。
[root@k8s151 /k8s-manifests/pods]$ cat > 19-pro-readinessProbe-httpGet.yaml <<'EOF'
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-linux80-readinessprobe-httpget-001
labels:
apps: myweb
spec:
containers:
- name: linux81-exec
image: nginx:1.18
# 可用性检查,周期性检查服务是否可用,从而判断容器是否就绪.
readinessProbe:
# 使用httpGet的方式去做健康检查
httpGet:
# 指定访问的端口号
port: 80
# 检测指定的访问路径
path: /index.html
# 检测服务失败次数的累加值,默认值是3次,最小值是1。当检测服务成功后,该值会被重置!
failureThreshold: 3
# 指定多久之后进行可用性检查,在此之前,Pod始终处于未就绪状态。
initialDelaySeconds: 15
# 指定探针检测的频率,默认是10s,最小值为1.
periodSeconds: 3
# 检测服务成功次数的累加值,默认值为1次,最小值1.
successThreshold: 1
# 一次检测周期超时的秒数,默认值是1秒,最小值为1.
timeoutSeconds: 1
EOF
尝试与容器建立TCP连接,如果能建立连接说明已经就绪。
[root@k8s151 /k8s-manifests/pods]$ cat > 20-pro-readinessProbe-tcpSocket.yaml <<'EOF'
kind: Pod
apiVersion: v1
metadata:
name: oldboyedu-linux80-readinessprobe-tcpsocket-001
labels:
apps: myweb
spec:
containers:
- name: linux81-tcpsocket
image: nginx:1.18
command:
- /bin/bash
- -c
- sleep 25; nginx -g "daemon off;"
# 可用性检查,周期性检查服务是否可用,从而判断容器是否就绪.
readinessProbe:
# 使用tcpSocket的方式去做健康检查
tcpSocket:
port: 80
# 检测服务失败次数的累加值,默认值是3次,最小值是1。当检测服务成功后,该值会被重置!
failureThreshold: 3
# 指定多久之后进行可用性检查,在此之前,Pod始终处于未就绪状态。
initialDelaySeconds: 15
# 指定探针检测的频率,默认是10s,最小值为1.
periodSeconds: 1
# 检测服务成功次数的累加值,默认值为1次,最小值1.
successThreshold: 1
# 一次检测周期超时的秒数,默认值是1秒,最小值为1.
timeoutSeconds: 1
EOF
apiVersion: apps/v1
kind: Deployment
metadata:
name: oldboyedu-linux81-deploy-nginx-001
spec:
replicas: 5
selector:
matchLabels:
apps: oldboyedu-web
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 2
maxUnavailable: 1
template:
metadata:
name: linux80-pod
labels:
apps: oldboyedu-web
spec:
containers:
- name: linux81-web
image: nginx:1.20.1
livenessProbe:
httpGet:
port: 80
path: /index.html
failureThreshold: 3
initialDelaySeconds: 15
periodSeconds: 1
successThreshold: 1
timeoutSeconds: 1
readinessProbe:
httpGet:
port: 80
path: /oldboyedu-linux81.html
failureThreshold: 3
initialDelaySeconds: 15
periodSeconds: 3
successThreshold: 1
timeoutSeconds: 1
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-nginx-probe-svc-001
spec:
replicas: 5
selector:
matchLabels:
apps: oldboyedu-web
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 2
maxUnavailable: 1
template:
metadata:
name: linux81-pod
labels:
apps: oldboyedu-web
spec:
containers:
- name: linux81-web
image: nginx:1.20.1
livenessProbe:
httpGet:
port: 80
path: /index.html
failureThreshold: 3
initialDelaySeconds: 15
periodSeconds: 1
successThreshold: 1
timeoutSeconds: 1
readinessProbe:
httpGet:
port: 80
path: /oldboyedu-linux81.html
failureThreshold: 3
initialDelaySeconds: 15
periodSeconds: 3
successThreshold: 1
timeoutSeconds: 1
---
apiVersion: v1
kind: Service
metadata:
name: oldboyedu-linux81-nginx-svc
spec:
type: ClusterIP
selector:
apps: oldboyedu-web
ports:
- port: 80
protocol: TCP
targetPort: 80
温馨提示:
(1)当livenessProbe和readinessProbe检查成功时,Pod才会被关联到SVC的endpoints列表中哟;
(2)当readinessProbe检查失败时,SVC当endpoints列表会自动剔除未就绪的Pod哟;
(3)可以使用脚本实时测试访问页面状态,参考脚本如下:(下面的"10.254.74.162"是svc的VIP地址哟)
for i in `seq 1000`; do curl 10.254.74.162/oldboyedu-linux81.html; sleep 1; done;
vim /var/lib/kubelet/config.yaml
...
staticPodPath: /etc/kubernetes/manifests
温馨提示(列表之后附有一个静态Pod清单的参考示例):
(1)静态Pod是由kubelet启动时通过"staticPodPath"配置参数指定路径
(2)静态Pod创建的Pod名称会自动加上kubelet节点的主机名,比如"-k8s201.oldboyedu.com"
(3)静态Pod的创建并不依赖API-Server,而是直接基于kubelet所在节点来启动Pod;
(4)静态Pod的删除只需要将其从staticPodPath指定的路径移除即可;
(5)静态Pod路径仅对Pod资源类型有效,其他类型资源将不被创建哟;
(6)咱们的kubeadm部署方式就是基于静态Pod部署的哟;
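参考示例:下面是一个最小的静态Pod清单(文件名和镜像仅为示意),将其放到kubelet配置的staticPodPath目录(如/etc/kubernetes/manifests)后,kubelet会直接拉起该Pod;将文件移出该目录即可删除:
apiVersion: v1
kind: Pod
metadata:
  # 实际运行时Pod名称会自动追加所在节点的主机名,例如 static-web-k8s152.oldboyedu.com
  name: static-web
  labels:
    app: static-web
spec:
  containers:
  - name: web
    image: nginx:1.20.1
    ports:
    - containerPort: 80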
Replication Controller简称RC,RC是Kubernetes系统中的核心概念之一,简单来说,**RC可以保证在任意时间运行Pod的副本数量,能够保证Pod总是可用的。**如果实际Pod数量比指定的多那就结束掉多余的,如果实际数量比指定的少就新启动一些Pod,当Pod失败、被删除或者挂掉后,RC都会去自动创建新的Pod来保证副本数量,所以即使只有一个Pod,我们也应该使用RC来管理我们的Pod。可以说,通过ReplicationController,Kubernetes实现了集群的高可用性。
replicationcontrollers控制器简称"rc",可以保证指定数量的Pod始终存活,rc通过标签选择器来关联Pod(它能够保证Pod持续运行,并且在任何时候都有指定数量的Pod副本,在此基础上提供一些高级特性,比如滚动升级和弹性伸缩)
[root@k8s151 /k8s-manifests/rc]$ cat > 13.linux81-rc.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
name: oldboyedu-linux80-rc
labels:
school: oldboyedu
class: linux80
spec:
# 指定创建Pod的副本数量,默认值为1.
replicas: 3
# 定义标签选择器,rc资源基于标签选择器关联对应的Pod哟~
selector:
class: linux80
school: oldboyedu
# 定义Pod资源创建的模板
template:
metadata:
name: linux80-pod
labels:
school: oldboyedu
class: linux80
spec:
containers:
- name: linux80-web
image: nginx:1.18
EOF
[root@k8s151 /k8s-manifests/rc]$ kubectl apply -f 13.linux81-rc.yaml
可以看到rc拉起了3个Pod
[root@k8s151 /k8s-manifests/rc]$ kubectl get pods | grep rc
NAME READY STATUS RESTARTS AGE
oldboyedu-linux80-rc-4hwgk 1/1 Running 0 116s
oldboyedu-linux80-rc-fp6qf 1/1 Running 0 116s
oldboyedu-linux80-rc-qdpzm 1/1 Running 0 117s
删除一个
[root@k8s151 /k8s-manifests/rc]$ kubectl delete pods oldboyedu-linux80-rc-4hwgk
再次查看,之前的....4hwgk已经消失,又新拉起一个fpt7t
[root@k8s151 /k8s-manifests/rc]$ kubectl get pods | grep rc
oldboyedu-linux80-rc-fp6qf 1/1 Running 0 3m16s
oldboyedu-linux80-rc-fpt7t 1/1 Running 0 7s
oldboyedu-linux80-rc-qdpzm 1/1 Running 0 3m17s
[root@k8s151 /k8s-manifests/rc]$ kubectl get rc
NAME DESIRED CURRENT READY AGE
oldboyedu-linux80-rc 5 5 5 87m
删除指定rc
[root@k8s151 /k8s-manifests/rc]$ kubectl delete rc/oldboyedu-linux80-rc
replicationcontroller "oldboyedu-linux80-rc" deleted
删除rc
[root@k8s151 /k8s-manifests/rc]$ kubectl delete rc --all
replicationcontroller "oldboyedu-linux80-rc" deleted
ReplicaSet控制器简称rs资源。也是用于控制Pod的副本数量,我们即将学习的Deployment资源底层使用的就是该资源。
- 是一个控制器,用于维护 同一个Pod 的数量。
- 被Deployment取代了,只要了解就行
ReplicaSet如何管理Pod?与rc类似,rs通过标签选择器(spec.selector.matchLabels)关联带有对应标签的Pod,并让其数量始终维持在replicas指定的值。
[root@k8s151 /k8s-manifests/pods]$ mkdir /k8s-manifests/deploy
[root@k8s151 /k8s-manifests/pods]$ cat > 01.linux-rs.yaml << 'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: oldboyedu-linux80-rs
labels:
school: oldboyedu
class: linux81
spec:
# 指定创建Pod的副本数量,默认值为1.
replicas: 3
# 定义标签选择器,rs资源基于标签选择器关联对应的Pod哟~
selector:
matchLabels:
apps: oldboyedu-web
# 定义Pod资源创建的模板
template:
metadata:
name: linux80-pod
labels:
apps: oldboyedu-web
spec:
containers:
- name: linux80-web
image: nginx:1.18
EOF
[root@k8s151 /k8s-manifests/deploy]$ kubectl apply -f 01.linux-rs.yaml
replicaset.apps/oldboyedu-linux80-rs created
[root@k8s151 /k8s-manifests/deploy]$ kubectl get rs
NAME DESIRED CURRENT READY AGE
oldboyedu-linux80-rs 3 3 1 20s
可以看到,一共拉起了3个pod
[root@k8s151 /k8s-manifests/deploy]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
oldboyedu-linux80-rs-4gnr9 1/1 Running 0 116s
oldboyedu-linux80-rs-6xthn 1/1 Running 0 116s
oldboyedu-linux80-rs-lxhnx 1/1 Running 0 116s
[root@k8s151 /k8s-manifests/deploy]$ kubectl edit -f 01.linux-rs.yaml #修改配置,或者直接修改yaml
Deployment是用于部署服务的资源,是最常用的控制器,有以下几个功能:
(1)管理RS,通过RS资源创建Pod;
(2)具有上线部署,副本设置,滚动升级,回滚等功能;
(3)提供声明式更新,即可以使用apply命令进行更新镜像版本之类的;
Deployment应用场景: 部署服务,例如网站,API,微服务等。
[root@k8s151 /k8s-manifests/deploy]$ kubectl explain Deployment
KIND: Deployment #Deployment的KIND
VERSION: apps/v1 #Deployment的apiVersion
......
apiVersion: apps/v1
kind: Deployment
metadata:
name: oldboyedu-linux81-deploy-web
labels:
school: oldboyedu
class: linux81
spec:
# 指定创建Pod的副本数量,默认值为1.
replicas: 3
# 定义标签选择器,rs资源基于标签选择器关联对应的Pod哟~
selector:
matchLabels:
apps: oldboyedu-web
# 定义Pod资源创建的模板
template:
metadata:
name: linux80-pod
labels:
apps: oldboyedu-web
address: ChangPing-ShaHe
spec:
containers:
- name: linux80-web
image: nginx:1.18
# image: nginx:1.20
(原文此处为插图,原图地址: http://qn.jicext.cn/img-202304141755523.png )
升级轮次 | 旧版本Pod数 | 新版本Pod数 | 条件 |
---|---|---|---|
第一轮升级 | 4 | 3 | maxSurge=2,maxUnavailable=1,replicas=5,Pod总数最多7个 |
第二轮升级 | 4-3=1 | 3+2=5 | 新Pod就绪后,等量关闭旧Pod |
第三轮清理 | 0 | 5 | 升级完成,清理剩余旧Pod |
详解:在升级的过程中,如果把所有旧Pod同时关闭再升级,这段时间内服务就无法对外提供。所以我们一般使用滚动升级:保留一部分旧版本继续提供服务,先升级一部分,升级好的Pod就绪后开始提供服务,再逐步升级剩下的。主要通过maxSurge和maxUnavailable控制,以下面的案例为例:
replicas: 5
maxSurge: 2,在期望副本数之外最多可以多启动的Pod数量,也就是说升级期间最多可以同时存在7个Pod。
maxUnavailable: 1,升级过程中最大不可用的Pod数量。一共有5个副本,最多只能有1个不可用,即至少要有4个副本能正常提供服务。
第一轮升级:
旧的:4个(保持服务)
新的:3个(因为最多可以同时存在7个)
第二轮升级:
旧的:1个(上一轮新起的3个就绪后,就可以关闭3个旧的,等于还剩1个旧的)
新的:5个(再新建2个,达到期望副本数)
第三轮清理:
新的5个全部就绪后,关闭最后1个旧Pod,升级完成。
[root@k8s151 /k8s-manifests/deploy]$ cat > 02-nginx-update.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
name: linux81-deploy-web-strategy
labels:
school: oldboyedu
class: linux81
spec:
# 指定创建Pod的副本数量,默认值为1.
replicas: 5
# 定义标签选择器,rs资源基于标签选择器关联对应的Pod哟~
selector:
matchLabels:
apps: oldboyedu-web
# 定义升级策略
strategy:
# 升级的类型,"Recreate" or "RollingUpdate"
# Recreate:
# 先停止所有的Pod运行,然后在批量创建更新。
# 生产环节中不推荐使用这种策略,因为升级过程中用户将无法访问服务!
# RollingUpdate:
# 滚动更新,即先实现部分更新,逐步替换原有的pod,是默认策略。
type: RollingUpdate
# 自定义滚动更新的策略
rollingUpdate:
# 在原有Pod的副本基础上,多启动Pod的数量,也可以用百分比,比如20%
maxSurge: 2
# 在升级过程中最大不可访问的Pod数量,就是你至少得保留旧版本的数据继续提供服务。
maxUnavailable: 1
# 定义Pod资源创建的模板
template:
metadata:
name: linux80-pod
labels:
apps: oldboyedu-web
spec:
containers:
- name: linux80-web
image: nginx:1.18
# image: nginx:1.20
EOF
[root@k8s151 /k8s-manifests/deploy]$ kubectl apply -f 02-nginx-update.yaml
deployment.apps/linux81-deploy-web-strategy created
[root@k8s151 /k8s-manifests/deploy]$ kubectl get Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
linux81-deploy-web-strategy 5/5 5 5 4m17s
[root@k8s151 /k8s-manifests/deploy]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
linux81-deploy-web-strategy-7f9dbdfbfc-7sdbg 1/1 Running 0 3m9s
linux81-deploy-web-strategy-7f9dbdfbfc-dffsd 1/1 Running 0 3m9s
linux81-deploy-web-strategy-7f9dbdfbfc-f7dhd 1/1 Running 0 3m9s
linux81-deploy-web-strategy-7f9dbdfbfc-l8wgq 1/1 Running 0 3m9s
linux81-deploy-web-strategy-7f9dbdfbfc-n2tt9 1/1 Running 0 3m9s
修改image为1.20,即把nginx从1.18升级到1.20
.....
spec:
containers:
- name: linux80-web
#image: nginx:1.18
image: nginx:1.20
....
再次查看效果
[root@k8s151 /k8s-manifests/deploy]$ kubectl apply -f 02-nginx-update.yaml
可以看到,新启了3个
[root@k8s151 /k8s-manifests/deploy]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
linux81-deploy-web-strategy-6bf9d5b95d-4dgzr 0/1 ContainerCreating 0 9s
linux81-deploy-web-strategy-6bf9d5b95d-8qvzw 0/1 ContainerCreating 0 9s
linux81-deploy-web-strategy-6bf9d5b95d-ctvnm 0/1 ContainerCreating 0 9s
linux81-deploy-web-strategy-7f9dbdfbfc-7sdbg 1/1 Running 0 6m20s
linux81-deploy-web-strategy-7f9dbdfbfc-dffsd 1/1 Running 0 6m20s
linux81-deploy-web-strategy-7f9dbdfbfc-f7dhd 1/1 Running 0 6m20s
linux81-deploy-web-strategy-7f9dbdfbfc-n2tt9 1/1 Running 0 6m20s
然后继续运行之后,再次启动2个
[root@k8s151 /k8s-manifests/deploy]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
linux81-deploy-web-strategy-6bf9d5b95d-4dgzr 1/1 Running 0 42s
linux81-deploy-web-strategy-6bf9d5b95d-7s76k 1/1 Running 0 11s
linux81-deploy-web-strategy-6bf9d5b95d-8qvzw 1/1 Running 0 42s
linux81-deploy-web-strategy-6bf9d5b95d-ckpmr 1/1 Running 0 18s
linux81-deploy-web-strategy-6bf9d5b95d-ctvnm 1/1 Running 0 42s
linux81-deploy-web-strategy-7f9dbdfbfc-dffsd 0/1 Terminating 0 6m53s
linux81-deploy-web-strategy-7f9dbdfbfc-f7dhd 0/1 Terminating 0 6m53s
最终全部更新完毕(中间的过程就是如上面所讲)
[root@k8s151 /k8s-manifests/deploy]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
linux81-deploy-web-strategy-6bf9d5b95d-4dgzr 1/1 Running 0 76s
linux81-deploy-web-strategy-6bf9d5b95d-7s76k 1/1 Running 0 45s
linux81-deploy-web-strategy-6bf9d5b95d-8qvzw 1/1 Running 0 76s
linux81-deploy-web-strategy-6bf9d5b95d-ckpmr 1/1 Running 0 52s
linux81-deploy-web-strategy-6bf9d5b95d-ctvnm 1/1 Running 0 76s
选择任意pod进入,查看nginx版本
[root@k8s151 /k8s-manifests/deploy]$ kubectl exec -it linux81-deploy-web-strategy-6bf9d5b95d-4dgzr -- bash
root@linux81-deploy-web-strategy-6bf9d5b95d-4dgzr:/# nginx -v
nginx version: nginx/1.20.2
为了方便查看效果,可以把其他的pod删除
删除pod: kubectl delete pods/pod名称
删除rc: kubectl delete rc/rc名称
删除rs: kubectl delete rs/rs名称
删除所有: kubectl delete all --all
cat > 01-deploy-redis.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-leader
labels:
app: redis
role: leader
tier: backend
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
role: leader
tier: backend
spec:
containers:
- name: leader
image: "docker.io/redis:6.0.5"
resources:
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
name: redis-leader
labels:
app: redis
role: leader
tier: backend
spec:
type: NodePort
ports:
- port: 6379
targetPort: 6379
nodePort: 30080
selector:
app: redis
role: leader
tier: backend
EOF
(1)编写资源清单
先推送需要用到的镜像到仓库
docker pull mysql:5.7
docker tag mysql:5.7 k8s151.oldboyedu.com:5000/mysql:5.7
docker push k8s151.oldboyedu.com:5000/mysql:5.7
docker pull wordpress:latest
docker tag wordpress:latest k8s151.oldboyedu.com:5000/wordpress:latest
docker push k8s151.oldboyedu.com:5000/wordpress:latest
[root@k8s151 ~]$ mkdir -p /k8s-manifests/wordpress/v1 /k8s-manifests/wordpress/v2 /k8s-manifests/wordpress/v3
[root@k8s151 /k8s-manifests/wordpress/v1]$ cat > 01-deploy-wordpresss.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
name: wordpress
spec:
replicas: 1
selector:
matchLabels:
app: wordpress
template:
metadata:
labels:
app: wordpress
spec:
containers:
- name: mysql
image: k8s151.oldboyedu.com:5000/mysql:5.7
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: oldboyedu #mysql的root密码
- name: MYSQL_DATABASE
value: wordpress #mysql的库
- name: MYSQL_USER
value: wordpress #mysql的账号
- name: MYSQL_PASSWORD
value: wordpress #mysql的wordpress账号的密码
- name: wordpress
image: k8s151.oldboyedu.com:5000/wordpress:latest
ports:
- containerPort: 80
env:
- name: WORDPRESS_DB_HOST
value: 127.0.0.1
- name: WORDPRESS_DB_USER
value: wordpress #连接mysql的账号
- name: WORDPRESS_DB_PASSWORD
value: wordpress # 账号对应的密码
EOF
[root@k8s151 /k8s-manifests/wordpress/v1]$ kubectl apply -f 01-deploy-wordpresss.yaml
#等待状态变为Running
[root@k8s151 /k8s-manifests/wordpress/v1]$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
wordpress-645d955988-4jwpl 2/2 Running 0 6m39s 10.244.2.50 k8s153.oldboyedu.com <none> <none>
http://10.0.0.151:5000/v2/_catalog
(2) 暴露服务
[root@k8s151 /k8s-manifests/wordpress/v1]$ kubectl expose -f 01-deploy-wordpresss.yaml --type=NodePort
[root@k8s151 /k8s-manifests/deploy/wordpress/v1]$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.254.0.1 443/TCP 3h35m
oldboyedu-linux80-nginx-svc ClusterIP 10.254.41.138 8888/TCP 28m
oldboyedu-linux80-nginx-svc-nodeport NodePort 10.254.230.35 8888:30080/TCP 9m9s
web2 ClusterIP 10.254.196.54 80/TCP,12345/TCP 28m
wordpress NodePort 10.254.113.63 3306:32360/TCP,80:31432/TCP 12s
查看svc的详情,命令格式 kubectl describe svc
[root@k8s151 /k8s-manifests/wordpress/v1]$ kubectl describe svc wordpress
Name: wordpress
Namespace: default
Labels:
Annotations:
Selector: app=wordpress
Type: NodePort
IP: 10.254.209.88
Port: port-1 3306/TCP
TargetPort: 3306/TCP
NodePort: port-1 31360/TCP
Endpoints: 10.244.2.50:3306
Port: port-2 80/TCP
TargetPort: 80/TCP
NodePort: port-2 31786/TCP #80端口对应的NodePort,31786为对外暴露的端口
Endpoints: 10.244.2.50:80
Session Affinity: None
External Traffic Policy: Cluster
Events:
(3)测试
我们要想在集群外访问集群内的pod,需要使用svc的NodePort
http://10.0.0.151:31786
尝试扩容到2个副本并访问webUI:由于每个Pod里都各自跑了一套MySQL,两个副本的数据互不相通,这就暴露出了数据库和应用耦合在同一个Pod里的问题。
(原文此处为插图,原图地址: http://qn.jicext.cn/img-202304141756279.png )
先删除
kubectl delete all --all
(1) 拆分mysql
[root@k8s151 /k8s-manifests/wordpress/v2]$ cat > deploy-mysql.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
name: oldboyedu-mysql
spec:
replicas: 1
selector:
matchLabels:
app: oldboyedu-mysql
template:
metadata:
labels:
app: oldboyedu-mysql
spec:
containers:
- name: oldboyedu-mysql
image: k8s151.oldboyedu.com:5000/mysql:5.7
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: oldboyedu
- name: MYSQL_DATABASE
value: wordpress
- name: MYSQL_USER
value: wordpress
- name: MYSQL_PASSWORD
value: wordpress
EOF
kubectl apply -f deploy-mysql.yaml
[root@k8s151 /k8s-manifests/wordpress/v2]$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
oldboyedu-mysql-b7b8f9bb9-82hsk 1/1 Running 0 17s 10.244.2.52 k8s153.oldboyedu.com <none> <none>
(2) 拆分wordpress
[root@k8s151 /k8s-manifests/wordpress/v2]$ cat > deploy-wordpresss.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
name: oldboyedu-wordpress
spec:
replicas: 3
selector:
matchLabels:
app: oldboyedu-wordpress
template:
metadata:
labels:
app: oldboyedu-wordpress
spec:
containers:
- name: oldboyedu-wordpress
image: k8s151.oldboyedu.com:5000/wordpress:latest
ports:
- containerPort: 80
env:
- name: WORDPRESS_DB_HOST
# 写mysql的Pod的IP地址哟~
value: 10.244.2.52
- name: WORDPRESS_DB_USER
value: wordpress
- name: WORDPRESS_DB_PASSWORD
value: wordpress
EOF
kubectl apply -f deploy-wordpresss.yaml
(3) 暴露workpress服务
kubectl expose deployment oldboyedu-wordpress --type=NodePort
[root@k8s151 /k8s-manifests/wordpress/v2]$ kubectl describe svc oldboyedu-wordpress
Name: oldboyedu-wordpress
Namespace: default
Labels:
Annotations:
Selector: app=oldboyedu-wordpress
Type: NodePort
IP: 10.254.124.120
Port: 80/TCP
TargetPort: 80/TCP
NodePort: 32128/TCP
Endpoints: 10.244.1.238:80,10.244.2.53:80,10.244.2.54:80
Session Affinity: None
External Traffic Policy: Cluster
Events:
任选一个节点,创建一个新文章
可以看到,任意节点创建的文章,可以在3个节点都能看到,实现了共用一个数据库
先清空
kubectl delete all --all
(1) 部署mysql服务
[root@k8s151 /k8s-manifests/wordpress/v3]$ cat > 01-deploy-mysql.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-mysql
spec:
replicas: 1
selector:
matchLabels:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
volumes: #nfs,如果忘记了可以去看3.5
- name: data
nfs:
server: 10.0.0.151
path: /oldboyedu/data/kubernetes/mysql #容器内/var/lib/mysql的数据最终会落到NFS服务端(10.0.0.151)的这个目录下
containers:
- name: mysql
image: k8s151.oldboyedu.com:5000/mysql:5.7
volumeMounts:
- name: data
mountPath: /var/lib/mysql
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: oldboyedu
- name: MYSQL_DATABASE
value: wordpress
- name: MYSQL_USER
value: wordpress
- name: MYSQL_PASSWORD
value: wordpress
---
apiVersion: v1
kind: Service
metadata:
name: linux81-db
spec:
clusterIP: 10.254.2.111 #自定义clusterIP
selector:
app: mysql
ports:
- port: 3306
protocol: TCP
targetPort: 3306
name: db
EOF
(2) 部署wordpress服务
[root@k8s151 /k8s-manifests/wordpress/v3]$ cat > 02-deploy-wordpresss.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
name: wordpress
spec:
replicas: 3
selector:
matchLabels:
apps: wordpress
template:
metadata:
labels:
apps: wordpress
spec:
containers:
- name: wordpress
image: k8s151.oldboyedu.com:5000/wordpress:latest
ports:
- containerPort: 80
env:
- name: WORDPRESS_DB_HOST
value: linux81-db #填mysql对应的svc名称,通过集群DNS会解析为该svc的ClusterIP,再转发到后端Pod
- name: WORDPRESS_DB_USER
value: wordpress
- name: WORDPRESS_DB_PASSWORD
value: wordpress
---
apiVersion: v1
kind: Service
metadata:
name: linux81-web
labels:
apps: wordpress
class: linux81
spec:
type: NodePort
selector:
apps: wordpress
ports:
- port: 8888
targetPort: 80
nodePort: 30088
name: web
EOF
(3) 创建服务
kubectl apply -f .
(4) 验证数据是否丢失
[root@k8s151 /k8s-manifests/wordpress/v3]$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.254.0.1 443/TCP 95m
linux81-db ClusterIP 10.254.2.111 3306/TCP 3m41s
linux81-web NodePort 10.254.132.74 8888:30088/TCP 3m41s
kubectl delete pod --all
删除所有Pod后,Deployment会自动重新拉起新的Pod;由于MySQL的数据存放在NFS上,重新访问博客时之前创建的文章仍然存在,说明数据没有丢失。
(5) 回收资源
kubectl delete -f .
rm -rf /oldboyedu/data/kubernetes/*
创建deployment:
kubectl create deployment oldboyedu-linux81-2022 --image=nginx:1.20.1
删除deployment:
kubectl delete deployment oldboyedu-linux81-2022
修改deployment:
1)资源清单配置文件修改[交互式]
kubectl edit deployments oldboyedu-linux81-2022
2)修改容器的镜像[非交互式]
kubectl set image deploy oldboyedu-linux81-2022 nginx=nginx:1.14
一次性任务,Pod完成作业后并不重启容器,其重启策略需为"restartPolicy: Never"或"OnFailure"。
任务执行完成后,容器退出,Pod停留在Completed状态,不会被重新拉起。
cat > job.yaml <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
name: oldboyedu-linux81-pi
spec:
template:
spec:
containers:
- name: pi
image: perl:latest
# 它计算π到2000个位置并打印出来。大约需要 10 秒才能完成。
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
restartPolicy: Never
# 指定标记此作业失败之前的重试次数。默认值为6
backoffLimit: 4
EOF
[root@k8s151 /k8s-manifests/deploy]$ kubectl apply -f job.yaml
job.batch/oldboyedu-linux81-pi created
[root@k8s151 /k8s-manifests/deploy]$ kubectl get pods |grep pi
oldboyedu-linux81-pi-c9jwc 0/1 Completed 0 2m29s
[root@k8s151 /k8s-manifests/deploy]$ kubectl logs oldboyedu-linux81-pi-c9jwc
3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066.....
参考链接:
https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/job/
周期性任务,CronJob的底层逻辑是按照调度周期不断创建Job来实现周期性任务的。
[root@k8s151 /k8s-manifests]$ mkdir -p /k8s-manifests/cronJob
[root@k8s151 /k8s-manifests/cronJob]$ cat > cronjob.yaml <<'EOF'
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: oldboyedu-hello
spec:
# 定义调度格式,参考链接:https://en.wikipedia.org/wiki/Cron
# ┌───────────── 分钟 (0 - 59)
# │ ┌───────────── 小时 (0 - 23)
# │ │ ┌───────────── 月的某天 (1 - 31)
# │ │ │ ┌───────────── 月份 (1 - 12)
# │ │ │ │ ┌───────────── 周的某天 (0 - 6)(周日到周六;在某些系统上,7 也是星期日)
# │ │ │ │ │ 或者是 sun,mon,tue,wed,thu,fri,sat
# │ │ │ │ │
# │ │ │ │ │
# * * * * *
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox:1.28
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- date; echo Hello from the oldboyedu linux81 Kubernetes cluster
restartPolicy: OnFailure
EOF
[root@k8s151 /k8s-manifests]$ kubectl delete all --all
[root@k8s151 /k8s-manifests/cronJob]$ kubectl apply -f cronjob.yaml
[root@k8s151 /k8s-manifests/cronJob]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
oldboyedu-hello-1669379820-zpcnr 0/1 Completed 0 38s
[root@k8s151 /k8s-manifests/cronJob]$ kubectl get cronjobs.batch,job
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
cronjob.batch/oldboyedu-hello * * * * * False 0 34s 2m24s
NAME COMPLETIONS DURATION AGE
job.batch/oldboyedu-hello-1669379820 1/1 19s 91s
job.batch/oldboyedu-hello-1669379880 1/1 1s 30s
参考链接:
https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/cron-jobs/
DaemonSet 确保全部(或者某些)节点上运行一个 Pod 的副本。 当有节点加入集群时, 也会为他们新增一个 Pod 。 当有节点从集群移除时,这些 Pod 也会被回收。删除 DaemonSet 将会删除它创建的所有 Pod。
DaemonSet的一些典型用法:
(1)在每个节点上运行集群守护进程(flannel等)
(2)在每个节点上运行日志收集守护进程(flume,filebeat,fluentd等)
(3)在每个节点上运行监控守护进程(zabbix agent,node_exportor等)
温馨提示:
(1)当有新节点加入集群时,也会为新节点新增一个Pod;
(2)当有节点从集群移除时,这些Pod也会被回收;
(3)删除DaemonSet将会删除它创建的所有Pod;
(4)如果节点被打了污点的话,且DaemonSet中未定义污点容忍,则Pod并不会被调度到该节点上;(“flannel案例”)
cat > daemonset.yaml << 'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: oldboyedu-linux80-fluentd-elasticsearch
labels:
k8s-app: fluentd-logging
spec:
selector:
matchLabels:
name: fluentd-elasticsearch
template:
metadata:
labels:
name: fluentd-elasticsearch
spec:
containers:
- name: fluentd-elasticsearch
# image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
image: k8s151.oldboyedu.com:5000/fluentd_elasticsearch/fluentd:v2.5.2
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
EOF
此外,系统会自动为这些DaemonSet Pod添加 node.kubernetes.io/unschedulable:NoSchedule 容忍度,因此在调度DaemonSet Pod时,默认调度器会忽略节点的unschedulable状态。
参考链接:
https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/daemonset/
应用场景包括
1、稳定的持久化存储,即Pod重新调度后还是能访问到相同的持久化数据,基于PVC来实现
2、稳定的网络标志,即Pod重新调度后其PodName和HostName不变,基于Headless Service(即没有Cluster IP的Service)来实现
3、有序部署,有序扩展,即Pod是有顺序的,在部署或者扩展的时候要依据定义的顺序依次进行(即从0到N-1,在下一个Pod运行之前的所有Pod必须都是Running和Ready状态),基于init containers来实现
4、有序收缩,有序删除(即从N-1到0)
以Nginx的为例,当任意一个Nginx挂掉,其处理的逻辑是相同的,即仅需重新创建一个Pod副本即可,这类服务我们称之为无状态服务。
以MySQL主从同步为例,master,slave两个库任意一个库挂掉,其处理逻辑是不相同的,这类服务我们称之为有状态服务。
有状态服务面临的难题:
(1)启动/停止顺序;
(2)pod实例的数据是独立存储;
(3)需要固定的IP地址或者主机名;
StatefulSet一般用于有状态服务,StatefulSets对于需要满足以下一个或多个需求的应用程序很有价值。
(1)稳定唯一的网络标识符。
(2)稳定独立持久的存储。
(3)有序优雅的部署和缩放。
(4)有序自动的滚动更新。
稳定的网络标识:
其本质对应的是一个service资源,只不过这个service没有定义VIP,我们称之为headless service,即"无头服务"。
通过"headless service"来维护Pod的网络身份,会为每个Pod分配一个数字编号并且按照编号顺序部署。
综上所述,无头服务(“headless service”)要求满足以下两点:
(1) 将svc资源的clusterIP字段设置为None,即"clusterIP: None";
(2) 将sts资源的serviceName字段声明为无头服务的名称;
独享存储:
Statefulset的存储卷使用VolumeClaimTemplate创建,称为"存储卷申请模板"。
当sts资源使用VolumeClaimTemplate创建一个PVC时,同样也会为每个Pod分配并创建唯一的pvc编号,每个pvc绑定对应pv,从而保证每个Pod都有独立的存储。
(1)编写无头服务配置文件
[root@k8s151 ~]$ mkdir -p /k8s-manifests/statefulset && cd /k8s-manifests/statefulset
[root@k8s151 /k8s-manifests/statefulset]$
cat > 01-headless-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
name: linux81-headless
spec:
ports:
- port: 80
name: web
# 将clusterIP字段设置为None表示为一个无头服务,即svc将不会分配VIP。
clusterIP: None
selector:
app: nginx
EOF
(2)创建无头服务
kubectl apply -f 01-headless-svc.yaml
kubectl get svc |grep headless
(3)编写statefulset资源配置文件
[root@k8s151 /k8s-manifests/statefulset]$ cat > 02-statefulset-headless-network.yaml <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: linux81-web
spec:
selector:
matchLabels:
app: nginx
# 声明无头服务
serviceName: linux81-headless
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.20.1
ports:
- containerPort: 80
EOF
(4) 创建statefulset资源
kubectl apply -f 02-statefulset-headless-network.yaml
kubectl get pods -o wide |grep linux81-web
(5) 使用响应式API创建测试Pod
kubectl run -it dns-test --rm --image=alpine -- sh
#
/ # for i in `seq 0 2`;do ping linux81-web-${i}.linux81-headless.default.svc.cluster.local -c 3;done
删掉pod后,ip发生变化
kubectl delete pods linux81-web-0 linux81-web-1 linux81-web-2
(6)删除sts资源便于后续测试
kubectl delete -f .
(1)创建headless
此步骤略,直接服用上一步骤的svc即可。
(2)创建sts资源和svc资源
[root@k8s151 /k8s-manifests/statefulset]$ cat > 03-statefulset-headless-volumeClaimTemplates.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
name: linux81-headless
spec:
ports:
- port: 80
name: web
# 将clusterIP字段设置为None表示为一个无头服务,即svc将不会分配VIP。
clusterIP: None
selector:
app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: linux81-web
spec:
selector:
matchLabels:
app: nginx
# 声明无头服务
serviceName: linux81-headless
replicas: 3
# 卷申请模板,会为每个Pod去创建唯一的pvc并与之关联哟!
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
# 声明咱们自定义的动态存储类,即sc资源。
storageClassName: "nfs-pvc-sc"
resources:
requests:
storage: 2Gi
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.20.1
ports:
- containerPort: 80
volumeMounts:
- name: data
mountPath: /usr/share/nginx/html
---
apiVersion: v1
kind: Service
metadata:
name: oldboyedu-linux81-sts
spec:
selector:
app: nginx
ports:
- port: 80
targetPort: 80
EOF
(3) 测试SVC访问(需要链接到Pod逐个修改nginx首页文件)
kubectl exec -it linux81-web-0 -- bash
root@linux81-web-0:/# echo AAAAAAAAAA > /usr/share/nginx/html/index.html
kubectl exec -it linux81-web-1 -- bash
root@linux81-web-1:/# echo BBBBBBBBBB > /usr/share/nginx/html/index.html
kubectl exec -it linux81-web-2 -- bash
root@linux81-web-2:/# echo CCCCCCCCC > /usr/share/nginx/html/index.html
kubectl get svc | grep oldboyedu-linux81-sts
oldboyedu-linux81-sts ClusterIP 10.254.110.71 80/TCP 12m
for i in `seq 100`;do sleep 1; curl 10.254.110.71; done
污点通常情况下是作用在worker节点上,其可以影响Pod的调度。
污点的语法格式如下:
key[=value]:effect
相关字段说明:
key:
字母或数字开头,可以包含字母、数字、连字符(-)、点(.)和下划线(_),最多253个字符。
也可以以DNS子域前缀和单个"/"开头
value:
该值是可选的。如果给定,它必须以字母或数字开头,可以包含字母、数字、连字符、点和下划线,最多63个字符。
effect:
effect必须是NoSchedule、PreferNoSchedule或NoExecute。
NoSchedule:
该节点不再接收新的Pod调度,但不会驱赶已经调度到该节点的Pod。
PreferNoSchedule:
该节点可以接受调度,但会尽可能将Pod调度到其他节点,换句话说,让该节点的调度优先级降低啦。
NoExecute:
该节点不再接收新的Pod调度,与此同时,会立刻驱逐已经调度到该节点的Pod。
(1) 创建污点
kubectl taint node k8s153.oldboyedu.com school=oldboyedu:NoSchedule
(2) 查看污点
kubectl describe nodes k8s153.oldboyedu.com | grep Taints
(3)测试污点
cat > 01-deploy-nginx.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
name: oldboyedu-linux80-deploy
spec:
replicas: 5
selector:
matchLabels:
apps: oldboyedu-web
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 2
maxUnavailable: 1
template:
metadata:
name: linux80-pod
labels:
apps: oldboyedu-web
spec:
containers:
- name: linux80-web
image: k8s151.oldboyedu.com:5000/myweb:v0.1
EOF
先删除所有已创建的资源
[root@k8s151 /k8s-manifests/deploy]$ kubectl delete all --all
[root@k8s151 /k8s-manifests/deploy]$ kubectl apply -f 01-deploy-nginx.yaml
[root@k8s151 /k8s-manifests/deploy]$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
oldboyedu-linux80-deploy-dcc8c4ccf-46jwq 1/1 Running 0 9s 10.244.1.168 k8s152.oldboyedu.com
oldboyedu-linux80-deploy-dcc8c4ccf-7k52t 1/1 Running 0 9s 10.244.2.30 k8s153.oldboyedu.com
oldboyedu-linux80-deploy-dcc8c4ccf-lr7df 1/1 Running 0 9s 10.244.1.169 k8s152.oldboyedu.com
oldboyedu-linux80-deploy-dcc8c4ccf-qprg8 1/1 Running 0 9s 10.244.1.170 k8s152.oldboyedu.com
oldboyedu-linux80-deploy-dcc8c4ccf-tk4bt 1/1 Running 0 9s 10.244.2.29 k8s153.oldboyedu.com
给153节点创建污点
[root@k8s151 /k8s-manifests/deploy]$ kubectl taint node k8s153.oldboyedu.com school=oldboyedu:NoSchedule
node/k8s153.oldboyedu.com tainted
修改01-deploy-nginx.yaml中的副本数为15
[root@k8s151 /k8s-manifests/deploy]$ kubectl edit deployments/oldboyedu-linux80-deploy
....
spec:
replicas: 15
....
153设置污点为NoSchedule后,可以看到新扩容的Pod不再调度到153,但之前已经调度到153的Pod也不会被驱逐
[root@k8s151 /k8s-manifests/deploy]$ kubectl get pods -o wide
同样的步骤,分别去尝试其他两种
(4)清除污点
kubectl taint node k8s153.oldboyedu.com school-
(1)创建污点
kubectl taint node k8s153.oldboyedu.com school=oldboyedu:PreferNoSchedule
(2)查看污点
kubectl describe nodes k8s153.oldboyedu.com | grep Taints
(3)测试污点
cat > 01-deploy-nginx.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: oldboyedu-linux80-deploy
spec:
replicas: 5
selector:
matchLabels:
apps: oldboyedu-web
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 2
maxUnavailable: 1
template:
metadata:
name: linux80-pod
labels:
apps: oldboyedu-web
spec:
containers:
- name: linux80-web
image: k8s201.oldboyedu.com:5000/nginx:1.20.1
EOF
(4)清除污点
kubectl taint node k8s153.oldboyedu.com school-
(1)创建污点
kubectl taint node k8s153.oldboyedu.com school=oldboyedu:NoExecute
(2)查看污点
kubectl describe nodes k8s153.oldboyedu.com | grep Taints
(3)测试污点
cat > 01-deploy-nginx.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
name: oldboyedu-linux80-deploy
spec:
replicas: 5
selector:
matchLabels:
apps: oldboyedu-web
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 2
maxUnavailable: 1
template:
metadata:
name: linux80-pod
labels:
apps: oldboyedu-web
spec:
containers:
- name: linux80-web
image: k8s151.oldboyedu.com:5000/myweb:v0.1
EOF
(4) 清除污点
kubectl taint node k8s153.oldboyedu.com school-
(1)所有节点创建污点
kubectl taint node k8s151.oldboyedu.com school=oldboyedu:NoSchedule
kubectl taint node k8s152.oldboyedu.com school=oldboyedu:PreferNoSchedule
kubectl taint node k8s153.oldboyedu.com school=oldboyedu:NoExecute
(2)容忍污点
cat > 07-deploy-nginx-tolerations.yaml << 'EOF'
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: oldboyedu-linux80-tolerations
spec:
replicas: 10
selector:
matchLabels:
apps: oldboyedu-web
template:
metadata:
name: linux80-pod
labels:
apps: oldboyedu-web
spec:
nodeName: k8s153.oldboyedu.com
# school=oldboyedu:NoSchedule
tolerations:
# 容忍污点的key
- key: school
# 容忍污点的value,指定key对应的具体值,当operator: Exists时,value必须为空,默认值就为空。
value: oldboyedu
# operator表示key和value的关系,有效值为Exists和Equal,默认值为Equal。
# Equal:
# 表示key=value。
# Exists:
# 表示存在key,匹配所有的值。此时value的值必须为空。
# operator: Exists
# effect表示容忍污点的类型,若不定义,则匹配所有的污点类型。
# 若指定,则允许的值为: NoSchedule, PreferNoSchedule,NoExecute。
# 在k8s 1.15.12版本测试中发现,NoExecute这种影响度并不生效,尽管我们配置了这种影响度也不会容忍这类污点。
effect: NoSchedule
# effect: NoExecute
# tolerationSeconds仅对"effect: NoExecute"测试生效,可以设置驱逐Pod的超时时间,默认是永不驱逐。
# 在k8s 1.15.12版本测试中发现,当我们搭配nodeName调度到"effect: NoExecute"节点时,尽管Pod可以调度到该节点,但状态依旧处于"Pending"状态。
# tolerationSeconds: 15
containers:
- name: linux80-web
image: k8s151.oldboyedu.com:5000/myweb:v0.1
EOF
节点亲和性(nodeAffinity):
用于控制Pod调度到哪些worker节点上,以及不能部署在哪些机器上。
Pod亲和性(podAffinity):
Pod可以和哪些Pod部署在同一个拓扑域。
Pod反亲和性(podAntiAffinity):
Pod可以和哪些Pod部署在不同一个拓扑域。
(1)所有worker节点打标签
查看标签
kubectl get nodes --show-labels
给所有节点打标签
kubectl label nodes --all class=linux81
给指定节点设置标签
kubectl label nodes k8s153.oldboyedu.com class=linux81
删除标签
kubectl label nodes k8s152.oldboyedu.com school-
(2)创建资源清单
kubectl label nodes --all school-
cat > 23-affinity-nodeAffinity.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
name: oldboyedu-linux81-affinity-nodeaffinity
spec:
replicas: 20
selector:
matchLabels:
apps: oldboyedu-web
template:
metadata:
name: linux81-pod
labels:
apps: oldboyedu-web
spec:
#污点容忍
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
operator: Exists
affinity:
# 节点亲和性
nodeAffinity:
# 软限制策略:不一定要满足, 但会优先满足,相当于提高了调度的优先级
preferredDuringSchedulingIgnoredDuringExecution:
# 与相应权重关联的节点选择器项。
- preference:
# 匹配表达式
matchExpressions:
# 匹配标签的名称
- key: school
# 匹配标签的值
values:
- oldboyedu
- yitiantian
- laonanhai
# operator声明key和values之间的关系,有效值为: In,NotIn, Exists, DoesNotExist. Gt, Lt.
# In
# 包含
# NotIn
# 不包含
# Exists
# 存在
# DoesNotExist
# 不存在
# Gt
# 大于
# Lt
# 小于
operator: In
# 声明权重1-100
weight: 80
# 硬限制策略,必须满足
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: class
values:
- linux81
operator: NotIn
containers:
- name: linux81-web
image: nginx:1.20.1
EOF
给151设置class=linux81标签,此时不满足硬限制
kubectl label nodes k8s151.oldboyedu.com class=linux81
给153分配有80的权重
kubectl label nodes k8s153.oldboyedu.com school=yitiantian
kubectl apply -f 23-affinity-nodeAffinity.yaml
可以看到151因为不符合硬限制(151带有class=linux81标签,命中了NotIn),所以没有Pod被调度到151
153符合软限制(school=yitiantian,权重是80),所以大部分Pod都分配到了153节点
例如某些应用 "必须/或者尽量" 跑在具有SSD存储的节点上,有些彼此相关的Pod应用应该跑在同一个节点上
[root@k8s151 /k8s-manifests/pods]$ cat > 24-affinity-podAffinity.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
name: oldboyedu-linux80-affinity-podaffinity
spec:
replicas: 20
selector:
matchLabels:
apps: oldboyedu-web
template:
metadata:
name: linux80-pod
labels:
apps: oldboyedu-web
spec:
#污点容忍
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
operator: Exists
affinity:
# 定义Pod的亲和性
podAffinity:
# 定义软限制
# preferredDuringSchedulingIgnoredDuringExecution:
# - weight: 1
# # 指定Pod的亲和性
# podAffinityTerm:
# topologyKey: school
# 定义硬限制
requiredDuringSchedulingIgnoredDuringExecution:
# 指定的拓扑域为"kubernetes.io/hostname"时:
#就会发现所有的Pod被调度到同一个节点,这是因为各个node节点该key对应的value都不相同(每个节点的hostname都不一样)。
# 指定的拓扑域为"beta.kubernetes.io/os":
# 就会发现所有的Pod被调度到不同的节点,这是因为所有Node节点该key对应的value都相同。
# 综上所述,根本原因在于"kubernetes.io/hostname"和"beta.kubernetes.io/os"这两个key对应的value是否相同。
- topologyKey: kubernetes.io/hostname
# - topologyKey: beta.kubernetes.io/os
#注意:上面的topologyKey拓扑域并不能立刻确定Pod应该调度到哪个节点。
#因为可能选择较多(即节点的key相同value不相同的情况),所以需要借助pod的标签选择器进行再次确认!
labelSelector:
matchExpressions:
# 这个key值需要和Pod的标签key值进行匹配哟(这个key是pod标签)~
- key: apps
#注意,如果Pod出现了key值相同,但value不相同的标签,这个时候不建议使用Exists,
#而是建议设置白名单,即采用"operator: In"的方式进行匹配,当然此时values不能为空
# values:
#- linux80
# 定义key和values之间的关系,有效值为: In, NotIn, Exists and DoesNotExist.
operator: Exists
containers:
- name: linux81-web
image: nginx:1.20.1
EOF
kubectl label nodes k8s153.oldboyedu.com school=yitiantian
测试:topologyKey拓扑域为kubernetes.io/hostname(hostname是主机名,所以每个Node都不同)
第一次:设置副本数为1(replicas: 1)
先自己修改副本数
[root@k8s151 /k8s-manifests/pods]$ kubectl apply -f 24-affinity-podAffinity.yaml
可以看到分配到153节点
[root@k8s151 /k8s-manifests/pods]$ kubectl get pods -o wide
再次修改副本数为6
[root@k8s151 /k8s-manifests/pods]$ kubectl apply -f 24-affinity-podAffinity.yaml
执行后发现,再次启动的Pod全部被分配到153上面:因为第一次启动时Pod被调度到了k8s153(即拓扑域kubernetes.io/hostname=k8s153.oldboyedu.com),再次分配时,根据亲和性原理,就会继续分配到这个节点上
[root@k8s151 /k8s-manifests/pods]$ kubectl get pods -o wide
[root@k8s151 /k8s-manifests/pods]$ cat > 24-affinity-podAffinity.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
name: oldboyedu-linux80-affinity-podaffinity
spec:
replicas: 20
selector:
matchLabels:
apps: oldboyedu-web
template:
metadata:
name: linux80-pod
labels:
apps: oldboyedu-web
spec:
#污点容忍
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
operator: Exists
affinity:
podAffinity:
# 定义硬限制
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: school
# - topologyKey: beta.kubernetes.io/os
#注意:上面的topologyKey拓扑域并不能立刻确定Pod应该调度到哪个节点。
#因为可能选择较多(即节点的key相同value不相同的情况),所以需要借助pod的标签选择器进行再次确认!
labelSelector:
matchExpressions:
# 这个key值需要和Pod的标签key值进行匹配哟(这个key是pod标签)~
- key: apps
#注意,如果Pod出现了key值相同,但value不相同的标签,这个时候不建议使用Exists,
#而是建议设置白名单,即采用"operator: In"的方式进行匹配,当然此时values不能为空
# values:
#- linux80
# 定义key和values之间的关系,有效值为: In, NotIn, Exists and DoesNotExist.
operator: Exists
containers:
- name: linux81-web
image: nginx:1.20.1
EOF
把topologyKey拓扑域的值设置为school时,可能会出现节点school的值不同,这个时候,就要用到pod的标签选择器了(如下,153的school=yitiantian)
kubectl label nodes k8s152.oldboyedu.com school=oldboyedu
kubectl label nodes k8s153.oldboyedu.com school=yitiantian
使用场景:
当前有两个机房( beijing,shanghai),需要部署一个nginx产品,副本为两个,为了保证机房容灾高可用场景,需要在两个机房分别部署一个副本
[root@k8s151 /k8s-manifests/pods]$ cat > 25-affinity-podAntiAffinity.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
name: linux81-affinity-podantiaffinity
spec:
replicas: 10
selector:
matchLabels:
apps: oldboyedu-web
template:
metadata:
name: linux80-pod
labels:
apps: oldboyedu-web
spec:
#污点容忍
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
operator: Exists
affinity:
# 定义Pod的反亲和性
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: kubernetes.io/hostname
labelSelector:
matchExpressions:
- key: apps
values:
- oldboyedu-web
operator: In
containers:
- name: linux81-web
image: nginx:1.20.1
EOF
第一次测试:设置副本数为2(replicas: 2)
副本数自己修改
[root@k8s151 /k8s-manifests/pods]$ kubectl apply -f 25-affinity-podAntiAffinity.yaml
可以看到分配到两个节点上
[root@k8s151 /k8s-manifests/pods]$ kubectl get pods -o wide
第二次尝试:设置副本数为3 (replicas: 3)
[root@k8s151 /k8s-manifests/pods]$ kubectl apply -f 25-affinity-podAntiAffinity.yaml
可以看到之前分配的两个节点,没有被再次分配
[root@k8s151 /k8s-manifests/pods]$ kubectl get pods -o wide
第三次尝试:设置副本数为10(replicas: 10)
[root@k8s151 /k8s-manifests/pods]$ kubectl apply -f 25-affinity-podAntiAffinity.yaml
发现当151、152、153都各分配了一个Pod后,剩余副本不会再被调度(处于Pending),这就是反亲和性:当topologyKey取值相同的节点(同一个拓扑域)上已经存在带有该标签的Pod后,就不会再往其中调度新的Pod
[root@k8s151 /k8s-manifests/pods]$ kubectl get pods -o wide
nodeName
nodeSelector
[root@k8s151 /k8s-manifests/pods]$ kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s151.oldboyedu.com Ready master 5d23h v1.18.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s151.oldboyedu.com,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s152.oldboyedu.com   Ready    <none>   5d23h   v1.18.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s152.oldboyedu.com,kubernetes.io/os=linux
k8s153.oldboyedu.com   Ready    <none>   5d23h   v1.18.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s153.oldboyedu.com,kubernetes.io/os=linux
给所有节点打标签
[root@k8s151 /k8s-manifests/pods]$ kubectl label nodes --all school=oldboyedu
删除152节点的标签
[root@k8s151 /k8s-manifests/pods]$ kubectl label nodes k8s152.oldboyedu.com school-
#结合上面的标签,不会部署在152节点
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: oldboyedu-linux81-ds
  labels:
    school: oldboyedu
    class: linux81
spec:
  selector:
    matchLabels:
      apps: ds-web
  template:
    metadata:
      name: linux81-ds
      labels:
        apps: ds-web
    spec:
      nodeSelector:        # 指定节点的标签
        school: oldboyedu
      containers:
      - name: ds-web-linux81
        image: nginx:1.20.1
1、防止pod失联,准确找到提供同一服务的pod(服务发现)
2、定义一组Pod的访问策略(负载均衡)
即访问流程应该是
在前端访问后端时,如果直接绑定pod的ip,当pod进行更新时,pod的ip也会变化,前端无法动态的感知后端的变化。同时,还面临多个pod副本如何对外统一访问的问题
这里,就需要用到前面的服务发现与负载均衡的概念,
Service在很多情况下只是一个概念,真正起作用的其实是kube-proxy服务进程,每个Node节点上都运行了一个kube-proxy的服务
进程。当创建Service的时候会通过API Server向etcd写入创建的Service的信息,而kube-proxy会基于监听的机制发现这种Service的变化,然后它会将最新的Service信息转换为对应的访问规则。
service主要解决Pod的动态变化,提供统一的访问入口。
service有以下两个作用:
(1) 通过标签去关联一组Pod,以实现服务发现的功能;
(2) 基于iptables或者ipvs实现负载均衡的功能;
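一个简单的验证思路示意:Service是靠标签选择器关联Pod的,创建Service之前可以先用同样的标签过滤Pod,确认选择器能选中目标(这里的标签apps=oldboyedu-web沿用本文其他示例,属于演示用的假设值):
kubectl get pods -l apps=oldboyedu-web -o wide
如果该命令能列出预期的Pod,说明Service的selector写成同样的标签即可正确关联。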
修改端口范围
编辑
kube-apiserver.yaml
文件vim /etc/kubernetes/manifests/kube-apiserver.yaml
找到 --service-cluster-ip-range 这一行,在这一行的下一行增加 如下内容
- --service-node-port-range=1-65535
[root@k8s151 ~]$ vim /etc/kubernetes/manifests/kube-apiserver.yaml
[root@k8s151 ~]$ systemctl restart kubelet
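下面是一个验证参数是否生效的示意(kube-apiserver是静态Pod,Pod名称带有节点主机名,这里的名称为假设值,请以实际为准):
kubectl -n kube-system get pod kube-apiserver-k8s151.oldboyedu.com -o yaml | grep service-node-port-range
能过滤出刚才添加的参数,说明kubelet已按新的端口范围重新拉起kube-apiserver。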
Service为一组Pod提供负载均衡能力
查看service与他绑定的pod
[root@k8s151 ~]$ kubectl get ep
NAME ENDPOINTS AGE
kubernetes 10.0.0.151:6443 9h
nfstest-test   <none>            4h41m
注意:service与pod绑定是标签绑定,所以标签一定要正确,查看pod标签的命令为
[root@k8s151 ~]$ kubectl get pods --show-labels
假如这个pod中有两个端口需要访问,一个80一个12345,这时候就需要使用到service的多端口功能
[root@k8s151 /k8s-manifests/svc]$ cat > 03-service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
labels:
app: nginx
name: web2
spec:
ports:
- port: 80
name: http
protocol: TCP
targetPort: 80
- port: 12345
name: metrics
protocol: TCP
targetPort: 12345
selector:
app: nginx
EOF
[root@k8s151 /k8s-manifests/svc]$ kubectl apply -f 03-service.yaml
创建成功,有两端口监听
[root@k8s151 /k8s-manifests/svc]$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
web2   ClusterIP   10.254.145.29   <none>        80/TCP,12345/TCP   13s
创建service的时候,自动创建了一个Endpoint对象,但是值是<none>,因为这个service没有关联pod
[root@k8s151 /k8s-manifests/apps]$ kubectl get ep
NAME ENDPOINTS AGE
web2   <none>      101m
Service常用的类型有三种:ClusterIP、NodePort、LoadBalancer。
ClusterIP:
默认类型,分配一个稳定的IP地址,即VIP,只能在集群内部访问。
k8s集群中不同项目(服务)之间互相调用一般使用ClusterIP。
在集群外部无法访问,在集群内部,可以通过访问service的端口(port: 8888),经过kube-proxy流入到pod的targetPort(targetPort: 80)上,最后进入容器。
先删除所有
kubectl delete all --all
kubectl apply -f 01-deploy-nginx.yaml #启动这个,供下面使用
[root@k8s151 /k8s-manifests/svc]$ mkdir /k8s-manifests/svc
[root@k8s151 /k8s-manifests/svc]$ cat > 01-svc-dep.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
name: svc-deploy-web
labels:
school: oldboyedu
class: linux81
spec:
# 指定创建Pod的副本数量,默认值为1.
replicas: 3
# 定义标签选择器,rs资源基于标签选择器关联对应的Pod哟~
selector:
matchLabels:
apps: oldboyedu-web
# 定义Pod资源创建的模板
template:
metadata:
name: linux80-pod
labels:
apps: oldboyedu-web
spec:
containers:
- name: linux80-svc-web
image: nginx:1.18
EOF
[root@k8s151 /k8s-manifests/svc]$
cat > 01-svc-expose-nginx.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
name: oldboyedu-linux80-nginx-svc
spec:
# 声明Service的类型,主要有:ClusterIP, NodePort, LoadBalancer.
# ClusterIP:
# k8s集群内部使用的类型,仅供K8S集群内部访问,外部无法访问。默认值。(可以自定义ClusterIP,但是要在集群初始化时指定的Service网段内,如果不写,默认会自动分配一个)
# NodePort:
# 在ClusterIP基础之上,监听了所有的Node节点的端口号,可供K8s集群外部访问。
# LoadBalancer:
# 适合在公有云上对k8s集群外部暴露应用。
type: ClusterIP
# 声明标签选择器,即该svc资源关联哪些Pod,并将其加入到ep列表。
selector:
apps: oldboyedu-web #此处关联的是pod
# 声明Pod的端口的关系映射
ports:
# 指定service的端口
- port: 8888
# 指定协议,仅支持 "TCP", "UDP","SCTP",默认为"TCP"
protocol: TCP
#pod的端口
targetPort: 80
name: nginx
EOF
[root@k8s151 /k8s-manifests/svc]$ kubectl apply -f 01-svc-expose-nginx.yaml
service/oldboyedu-linux80-nginx-svc created
[root@k8s151 /k8s-manifests/svc]$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes                    ClusterIP   10.254.0.1      <none>        443/TCP    11h
oldboyedu-linux80-nginx-svc   ClusterIP   10.254.32.162   <none>        8888/TCP   17s
[root@k8s151 /k8s-manifests/svc]$ kubectl describe svc oldboyedu-linux80-nginx-svc
Name: oldboyedu-linux80-nginx-svc
Namespace: default
Labels:            <none>
Annotations:       <none>
Selector:          apps=oldboyedu-web
Type: ClusterIP
IP: 10.254.32.162
Port: nginx 8888/TCP
TargetPort: 80/TCP
Endpoints: 10.244.1.226:80,10.244.1.227:80,10.244.1.228:80 + 2 more... #包含nginx80端口的机器
Session Affinity: None
Events:            <none>
通过访问10.254.32.162
[root@k8s151 /k8s-manifests/svc]$ curl -I 10.254.32.162:8888
HTTP/1.1 200 OK
Server: nginx/1.18.0
Date: Sat, 26 Nov 2022 02:34:00 GMT
Content-Type: text/html
Content-Length: 14
Last-Modified: Fri, 25 Nov 2022 10:39:38 GMT
Connection: keep-alive
ETag: "63809b6a-e"
Accept-Ranges: bytes
在每个节点上启用一个端口来暴露服务,可以在集群 外部访问。也会分配一个稳定内部集群IP地址。
访问地址:<任意NodeIP>:<NodePort>
默认NodePort端口范围:30000-32767
从上图我们可以看到,通过
nodeIP或主机ip:nodePort
可以从外部访问到某个service(port:8888),再经过kube-proxy流入到pod的targetPort(targetPort:80)上,最后进入容器
cat > 02-svc-expose-nginx-NodePort.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
name: oldboyedu-linux80-nginx-svc-nodeport
spec:
# 声明Service的类型,主要有:ClusterIP, NodePort, LoadBalancer.
# ClusterIP:
# k8s集群内部使用的类型,仅供K8S集群内部访问,外部无法访问。默认值。
# NodePort:
# 在ClusterIP基础之上,监听了所有的Node节点的端口号,可供K8s集群外部访问。
# LoadBalancer:
# 适合在公有云上对k8s集群外部暴露应用。
type: NodePort
# 声明标签选择器,即该svc资源关联哪些Pod,并将其加入到ep列表。
selector:
apps: oldboyedu-web #关联已经存在的pod的标签
# 声明Pod的端口的关系映射
ports:
# 指定service的端口
- port: 8888
# 指定协议,仅支持 "TCP", "UDP","SCTP",默认为"TCP"
protocol: TCP
#pod的端口
targetPort: 80
# 监听node节点的端口号,外部机器可访问的端口
nodePort: 30080
EOF
[root@k8s151 /k8s-manifests/svc]$ kubectl apply -f 02-svc-expose-nginx-NodePort.yaml
service/oldboyedu-linux80-nginx-svc-nodeport created
[root@k8s151 /k8s-manifests/svc]$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
oldboyedu-linux80-nginx-svc-nodeport   NodePort   10.254.153.153   <none>        8888:30080/TCP   68s
http://10.0.0.152:30080/
LoadBalancer:与NodePort类似,在每个节点上启用一个端口来暴露服务。除此之外,Kubernetes会请求底层云平台(例如阿里云、腾讯云、AWS等)上的负载均衡器,将每个Node([NodeIP]:[NodePort])作为后端添加进去。
kubectl delete all --all
[root@k8s151 /k8s-manifests/deploy]$ kubectl apply -f 01-deploy-nginx.yaml
deployment.apps/oldboyedu-linux80-deploy created
方式1:基于yaml文件
[root@k8s151 /k8s-manifests/deploy]$ kubectl expose -f 01-deploy-nginx.yaml --port=8888 --protocol=TCP --target-port=80 --name=linux81-svc-web --type=NodePort
service/linux81-svc-web exposed
[root@k8s151 /k8s-manifests/deploy]$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes        ClusterIP   10.254.0.1     <none>        443/TCP          19m
linux81-svc-web   NodePort    10.254.75.21   <none>        8888:30410/TCP   28s
方式2:基于类型
kubectl expose <类型> --port #其他可省略
[root@k8s151 /k8s-manifests/svc]$ kubectl expose deployment oldboyedu-linux80-deploy --port=80
service/oldboyedu-linux80-deploy exposed
[root@k8s151 /k8s-manifests/svc]$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes                 ClusterIP   10.254.0.1       <none>        443/TCP          40m
linux81-svc-web            NodePort    10.254.202.252   <none>        8888:30410/TCP   18m
oldboyedu-linux80-deploy   ClusterIP   10.254.147.99    <none>        80/TCP           17s
http://10.0.0.152:30410/
Endpoints 是一组实际服务的端点集合。一个 Endpoint 是一个可被访问的服务端点,即一个状态为 running 的 pod 的可访问端点。一般 Pod 都不是一个独立存在,所以一组 Pod 的端点合在一起称为 EndPoints。只有被 Service Selector 匹配选中并且状态为 Running 的才会被加入到和 Service 同名的 Endpoints 中。
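一个简单的对照示意(svc名称沿用前文创建的oldboyedu-linux80-nginx-svc,若环境中名称不同请自行替换):
kubectl get endpoints oldboyedu-linux80-nginx-svc
kubectl get pods -l apps=oldboyedu-web -o wide
对比两条命令的输出可以看到,Endpoints列表里的地址正是被selector选中且处于Running状态的Pod IP。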
K8S容器访问外部资源
K8S宿主机网络允许访问外网时,Pod是可以直接访问到公网资源的。但是比如数据库、消息队列类资源可能只对宿主机内网开放,这时Pod将无法直接访问到这些资源。解决这个问题的常用方法,
就是将外部IP地址和服务引入到k8s集群内部,由service作为一个代理来达到能够访问外部服务的目的。
endpoint是k8s集群中的一个资源对象,存储在etcd里面,用来记录一个service对应的所有pod的访问地址。只有当service配置了selector时,endpoint controller才会自动创建对应的endpoint对象;若service定义中没有selector字段,service被创建时,endpoint controller不会自动创建endpoint,需要手动创建。
(1) 在k8s集群外部部署一个mysql服务,此处我就是用docker来快速部署
[root@k8s152 ~]$ docker pull mysql:8.0
[root@k8s152 ~]$ docker tag mysql:8.0 k8s151.oldboyedu.com:5000/mysql:8.0
[root@k8s152 ~]$ docker push k8s151.oldboyedu.com:5000/mysql:8.0
[root@k8s154 ~]$ cat > /etc/docker/daemon.json <<'EOF'
{
"insecure-registries": ["k8s151.oldboyedu.com:5000"],
"registry-mirrors": ["https://hzow8dfk.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@k8s154 ~]$ systemctl enable --now docker && systemctl restart docker
[root@k8s154 ~]$ docker run --name oldboyedu-mysql -de MYSQL_ROOT_PASSWORD=root123456 --network host k8s151.oldboyedu.com:5000/mysql:8.0 --default-authentication-plugin=mysql_native_password --character-set-server=utf8 --collation-server=utf8_bin
(2) 创建用户,可以暂时先不给权限,观察能否访问。(可以先跳过本步骤)
Grant <权限> on 表名[(列名)] to 用户 With grant option
[root@k8s154 ~]$ docker exec -it oldboyedu-mysql bash
root@k8s154:/# mysql -uroot -proot123456
mysql> CREATE DATABASE oldboyedu;
mysql> CREATE USER linux81 IDENTIFIED WITH mysql_native_password BY 'oldboyedu';
mysql> GRANT ALL ON oldboyedu.* TO linux81;
mysql> show grants for linux81;
+--------------------------------------------------------+
| Grants for linux81@% |
+--------------------------------------------------------+
| GRANT USAGE ON *.* TO `linux81`@`%` |
| GRANT ALL PRIVILEGES ON `oldboyedu`.* TO `linux81`@`%` |
+--------------------------------------------------------+
2 rows in set (0.00 sec)
mysql> use oldboyedu;
mysql> create table student (id INT PRIMARY KEY AUTO_INCREMENT,name VARCHAR(255) NOT NULL, age INT DEFAULT 0);
mysql> desc student;
+-------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------+--------------+------+-----+---------+----------------+
| id | int | NO | PRI | NULL | auto_increment |
| name | varchar(255) | NO | | NULL | |
| age | int | YES | | 0 | |
+-------+--------------+------+-----+---------+----------------+
3 rows in set (0.00 sec)
mysql> insert into student(name,age) values ('libai','28'),('dufu','45');
mysql> select * from student;
+----+-------+------+
| id | name | age |
+----+-------+------+
| 1 | libai | 28 |
| 2 | dufu | 45 |
+----+-------+------+
2 rows in set (0.00 sec)
(3) 创建ep资源
[root@k8s151 ~]$ mkdir /k8s-manifests/ep
[root@k8s151 /k8s-manifests/ep]$ cat > 01-mysql-endpoints.yaml <<'EOF'
apiVersion: v1
kind: Endpoints
metadata:
name: oldboyedu-mysql80
subsets: #自定义endpoints
- addresses:
- ip: 10.0.0.154
ports:
- port: 3306
EOF
[root@k8s151 /k8s-manifests/ep]$ kubectl apply -f 01-mysql-endpoints.yaml
endpoints/oldboyedu-mysql80 created
[root@k8s151 /k8s-manifests/ep]$ kubectl get ep
NAME ENDPOINTS AGE
kubernetes 10.0.0.151:6443 3h3m
oldboyedu-mysql80 10.0.0.154:3306 4m59s
(4) 创建svc资源
service 和endpoint的名称相同, 且在一个命名空间下面
[root@k8s151 /k8s-manifests/ep]$ cat > 02-mysql-service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
name: oldboyedu-mysql80
spec:
ports:
- port: 3306
EOF
[root@k8s151 /k8s-manifests/ep]$ kubectl apply -f 02-mysql-service.yaml
service/oldboyedu-mysql80 created
[root@k8s151 /k8s-manifests/ep]$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes          ClusterIP   10.254.0.1       <none>        443/TCP    8h
oldboyedu-mysql80   ClusterIP   10.254.233.133   <none>        3306/TCP   45s
可以看到service跟endpoint成功关联在一起了,表明外部服务已成功接入到k8s里面了
[root@k8s151 /k8s-manifests/ep]$ kubectl describe svc oldboyedu-mysql80
Name: oldboyedu-mysql80
Namespace: default
Labels:            <none>
Annotations:       <none>
Selector:          <none>
Type: ClusterIP
IP: 10.254.233.133
Port:              <unset>  3306/TCP
TargetPort: 3306/TCP
Endpoints: 10.0.0.154:3306
Session Affinity: None
Events:            <none>
[root@k8s151 /k8s-manifests/ep]$ cat > 03-deploy-mysql80.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
name: linux81-deploy-db
labels:
school: oldboyedu
class: linux81
spec:
replicas: 1
selector:
matchLabels:
apps: db
template:
metadata:
name: linux80-pod
labels:
apps: db
spec:
containers:
- name: linxu81-db
image: k8s151.oldboyedu.com:5000/mysql:8.0
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: root123456
EOF
[root@k8s151 /k8s-manifests/ep]$ kubectl apply -f 03-deploy-mysql80.yaml
deployment.apps/linux81-deploy-db created
[root@k8s151 /k8s-manifests/ep]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
linux81-deploy-db-5c94bd6f9b-dqzgv 1/1 Running 0 84s
[root@k8s151 /k8s-manifests/ep]$ kubectl exec linux81-deploy-db-5c94bd6f9b-dqzgv -it -- bash
root@linux81-deploy-db-5c94bd6f9b-dqzgv:/# mysql -h oldboyedu-mysql80 -ulinux81 -poldboyedu
mysql> select * from oldboyedu.student;
+----+-------+------+
| id | name | age |
+----+-------+------+
| 1 | libai | 28 |
| 2 | dufu | 45 |
+----+-------+------+
2 rows in set (0.01 sec)
(5) 创建 wordpress
cat > 03-deploy-wordpress.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
name: oldboyedu-wordpress
spec:
replicas: 3
selector:
matchLabels:
apps: oldboyedu-wordpress
template:
metadata:
labels:
apps: oldboyedu-wordpress
spec:
containers:
- name: oldboyedu-wordpress
image: k8s151.oldboyedu.com:5000/wordpress:latest
ports:
- containerPort: 80
env:
- name: WORDPRESS_DB_HOST
value: oldboyedu-mysql80
- name: WORDPRESS_DB_USER
value: root
- name: WORDPRESS_DB_PASSWORD
value: root123456
---
apiVersion: v1
kind: Service
metadata:
name: oldboyedu-wordpress
spec:
type: NodePort
selector:
apps: oldboyedu-wordpress
ports:
- port: 80
targetPort: 80
nodePort: 30088
EOF
总结
通过创建一个无 Selector的Service,再创建一个同名的Endpoint。这样就实现了访问集群内的ClusterIP和端口就访问到指定外部资源了。
Service、Endpoint 和 Pod 的关系
通俗一点去理解:当我们创建service的时候,这个service就会关联一个endpoint对象(关联简单理解为配对),这个endpoint是一个动态列表;当通过nodePort访问service的时候,会先在集群内部的DNS查询service的IP地址,流量被发送到该IP地址后,会被Service转发到其中的某一个Pod。
自动关联体系内 Pod 服务(下图):
验证service和Endpoint的关系
[root@k8s151 /k8s-manifests/svc]$ cat > 03-service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
labels:
app: nginx
name: web2
spec:
ports:
- port: 80
name: http
protocol: TCP
targetPort: 80
- port: 12345
name: metrics
protocol: TCP
targetPort: 12345
selector:
app: nginx
EOF
[root@k8s151 /k8s-manifests/svc]$ kubectl apply -f 03-service.yaml
创建成功,有两端口监听
[root@k8s151 /k8s-manifests/svc]$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
web2   ClusterIP   10.254.145.29   <none>        80/TCP,12345/TCP   13s
创建service的时候,自动创建了一个Endpoint对象,但是值是<none>,因为这个service没有关联pod
[root@k8s151 /k8s-manifests/apps]$ kubectl get ep
NAME ENDPOINTS AGE
web2   <none>      101m
在kubernets集群的每个节点上都运行着kube-proxy进程,负责实现Kubernetes中service组件的虚拟IP服务。目前kube-proxy有如下三种工作模式:
- User space 模式:该模式下,Kube-proxy充当了一个四层Load balancer的角色
- iptables 模式:kube-proxy只负责维护iptables规则,流量由内核中的iptables规则直接转发到后端Pod,不再经过kube-proxy进程本身
- IPVS 模式:基于内核IPVS实现转发,规则查找效率更高,支持rr、lc等多种调度算法,适合大规模集群
Kube-proxy负责将发送到Cluster IP的请求转发到后端的Pod上
详解:https://blog.csdn.net/qq_36733838/article/details/128308672
kube-proxy & service必要说明
说到kube-proxy,就不得不提到k8s中的service,下面对它们俩做简单说明:
在集群内部,通过访问 Cluster IP:Port 就能访问到对应的service下的Pod;kube-proxy组件的作用就是为访问service的请求(包括k8s集群外部用户的访问)提供转发路由。
kube-proxy目前常用的工作模式有两种,即**iptables**和ipvs,当集群规模较大时,推荐使用 ipvs。
[root@k8s151 ~]$ kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
.....
kube-system kube-proxy-797kb 1/1 Running 0 2d2h
kube-system kube-proxy-8w59n 1/1 Running 2 2d2h
kube-system kube-proxy-jtgtc 1/1 Running 0 2d2h
....
kubectl -n kube-system get pods kube-proxy-797kb -o yaml
[root@k8s151 ~]$ kubectl -n kube-system get cm kube-proxy
NAME DATA AGE
kube-proxy 2 2d4h
[root@k8s151 ~]$ kubectl -n kube-system describe cm kube-proxy
kubectl -n kube-system logs -f kube-proxy-ttrms
yum -y install conntrack-tools ipvsadm.x86_64
cat > /etc/sysconfig/modules/ipvs.modules <<'EOF'
#!/bin/bash
# 以下为ipvs模式常用的内核模块
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
(1)如上图所示,仅需修改工作模式为ipvs即可。切记,一定要保存退出!
[root@k8s151 ~]$ kubectl -n kube-system edit cm kube-proxy
(2)验证是否修改成功
[root@k8s151 ~]$ kubectl -n kube-system describe cm kube-proxy | grep mode
mode: "ipvs"
[root@k8s151 ~]$ kubectl get pods -A | grep kube-proxy |awk '{print $2}' | xargs kubectl -n kube-system delete pods
温馨提示:
在实际工作中,如果修改了kube-proxy服务时,若删除Pod,请逐个删除,不要批量删除哟!
(1) 查看日志
[root@k8s151 ~]$ kubectl -n kube-system get pods |grep kube-proxy
kube-proxy-75bk8 1/1 Running 0 7s
kube-proxy-hk6vb 1/1 Running 0 16s
kube-proxy-lmwb9 1/1 Running 0 17s
[root@k8s151 ~]$ kubectl -n kube-system logs -f kube-proxy-75bk8
I1128 09:39:25.921379 1 server_others.go:170] Using ipvs Proxier.
W1128 09:39:25.921682 1 proxier.go:401] IPVS scheduler not specified, use rr by default
I1128 09:39:25.921881 1 server.go:534] Version: v1.15.12
I1128 09:39:25.926188 1 conntrack.go:52] Setting nf_conntrack_max to 262144
I1128 09:39:25.926340 1 config.go:96] Starting endpoints config controller
I1128 09:39:25.926366 1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I1128 09:39:25.926439 1 config.go:187] Starting service config controller
I1128 09:39:25.926467 1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I1128 09:39:26.028891 1 controller_utils.go:1036] Caches are synced for service config controller
I1128 09:39:26.028892 1 controller_utils.go:1036] Caches are synced for endpoints config controller
出现以下情况:
- 这个是kube-proxy 1.18版本自身问题,我们选择降低kube-proxy的版本为1.15.12
https://blog.csdn.net/qq_36733838/article/details/128084264
[root@k8s151 ~]$ kubectl edit ds kube-proxy -n kube-system
# 可以通过 /image: 查找到镜像行
image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.15.12   # 只要把这行的版本改一下就可以
[root@k8s151 ~]$ kubectl get pods -A | grep kube-proxy | awk '{print $2}' | xargs kubectl -n kube-system delete pods
(2)测试服务是否正常访问
[root@k8s151 ~]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
linux81-deploy-db-5c94bd6f9b-5dqp4 1/1 Running 1 114m
[root@k8s151 ~]$ kubectl exec -it linux81-deploy-db-5c94bd6f9b-5dqp4 -- bash
root@linux81-deploy-db-5c94bd6f9b-5dqp4:/# mysql -h oldboyedu-mysql80 -ulinux81 -poldboyedu #可以正常连接到154机器的mysql
mysql>
(3) 验证ipvs的工作模式,如下图所示。
kubectl -n oldboyedu-homework get svc # 这个是我们留的作业6。
ipvsadm -ln | grep 10.254.183.82 -A 5
想要使用ingress,必须要创建一个控制器,可以是traefik或者nginx;如果两个同时使用,要注意80端口会起冲突
Ingress 工作原理
1.ingress controller通过和kubernetes api交互,动态的去感知集群中ingress规则变化,
2.然后读取它,按照自定义的规则,规则就是写明了哪个域名对应哪个service,生成一段nginx配置,
3.再写到nginx-ingress-control的pod里,这个Ingress controller的pod里运行着一个Nginx服务,控制器会把生成的nginx配置写入/etc/nginx.conf文件中,
4.然后reload一下使配置生效,以此实现域名配置的动态更新(可参考下面的验证示意)。
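一个查看动态生成配置的验证示意(以后文部署的nginx-ingress-controller为例,Pod名称为示意值,请先用第一条命令查出实际名称):
kubectl -n ingress-nginx get pods | grep nginx-ingress-controller
kubectl -n ingress-nginx exec nginx-ingress-controller-xxxxx -- grep server_name /etc/nginx/nginx.conf
创建或修改Ingress规则后再次执行,可以看到对应域名的server_name被写入配置并通过reload生效。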
通过 Ingress 用户可以实现使用 traefik 等开源的反向代理负载均衡器实现对外暴露服务
使用svc的NodePort类型暴露端口存在以下问题:
(1)随着服务的增多,占用端口会越来越多;
(2)当一个端口被多个服务使用的时候就力不从心了,比如将下面的域名都映射到 80或443端口时就暴露问题了;
game01.oldboyedu.com
game02.oldboyedu.com
game03.oldboyedu.com
使用 Ingress 时一般会有三个组件:
Ingress:
k8s中的抽象资源,给管理员提供暴露服务的入口定义方法,换句话说,就是编写规则。
Ingress 简单理解就是个规则定义;比如说某个域名对应某个 service,即当某个域名的请求进来时转发给某个 service;这个规则将与 Ingress Controller 结合,然后 Ingress Controller 将其动态写入到负载均衡器配置中,从而实现整体的服务发现和负载均衡
Ingress Controller:
根据ingress生成具体路由规则,并借助svc实现Pod的负载均衡。
ingress Controller 实质上可以理解为是个监视器,Ingress Controller 通过不断地跟 kubernetes API 打交道,实时的感知后端 service、pod 等变化,比如新增和减少 pod,service 增加与减少等;当得到这些变化信息后,Ingress Controller 再结合下文的 Ingress 生成配置,然后更新反向代理负载均衡器,并刷新其配置,达到服务发现的作用
反向代理负载均衡器
反向代理负载均衡器很简单,说白了就是 nginx、apache 什么的;在集群中反向代理负载均衡器可以自由部署,可以使用 Replication Controller、Deployment、DaemonSet 等
traefik与nginx一样,是一款优秀的反向代理工具,或者叫Edge Router。
DaemonSet和Deployment有什么区别?
Deployment 部署的副本 Pod 会分布在各个 Node 上,每个 Node 都可能运行好几个副本。DaemonSet 的不同之处在于:每个 Node 上最多只能运行一个副本,在所有节点或者是匹配的节点上都部署一个pod。
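一个直观的对照示意:kubeadm部署的集群里,kube-proxy本身就是以DaemonSet方式运行的,可以看到其期望/就绪副本数与节点数一致,且每个节点上各有一个Pod:
kubectl -n kube-system get ds kube-proxy
kubectl -n kube-system get pods -o wide | grep kube-proxy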
先上传k8s-manifests.tar到服务器
[root@k8s151 /app/tools]$ docker load -i tarefik-1.7.34.tar.gz #也可以使用 docker pull traefik:v1.7.34 拉取镜像
[root@k8s151 /app/tools]$ docker tag traefik:v1.7.34 k8s151.oldboyedu.com:5000/traefik:v1.7.34
[root@k8s151 /app/tools]$ docker push k8s151.oldboyedu.com:5000/traefik:v1.7.34
[root@k8s151 ~]$ mkdir /k8s-manifests/ingress /k8s-manifests/ingress/http
#这个我们一般不用修改,直接拿来用就可以,只要知道这个的用途就可以
[root@k8s151 /k8s-manifests/ingress]$ cat > 01-rabc.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: traefik-ingress-controller
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: traefik-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
namespace: kube-system
EOF
[root@k8s151 /k8s-manifests/ingress]$ cat > 02-traefik.yaml <<'EOF'
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: traefik-ingress-controller
namespace: kube-system
labels:
k8s-app: traefik-ingress-lb
spec:
selector:
matchLabels:
k8s-app: traefik-ingress-lb
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
operator: Exists
serviceAccountName: traefik-ingress-controller #使用rbac创建的对象
hostNetwork: true
containers:
- image: k8s151.oldboyedu.com:5000/traefik:v1.7.34
imagePullPolicy: IfNotPresent
name: traefik-ingress-lb
ports:
- name: http
containerPort: 80
hostPort: 80
- name: admin
containerPort: 8080
args:
- --api
- --kubernetes
- --logLevel=INFO
EOF
[root@k8s151 /k8s-manifests/ingress]$ kubectl apply -f .
查看traefik是否部署成功
[root@k8s151 /k8s-manifests/ingress]$ kubectl -n kube-system get pods -o wide |grep traefik
Path Type:
ImplementationSpecific:对于这种路径类型,匹配方法取决于 IngressClass。 具体实现可以将其作为单独的 pathType 处理或者与 Prefix 或 Exact 类型作相同处理。
Exact:精确匹配 URL 路径,且区分大小写。
Prefix:基于以 / 分隔的 URL 路径前缀匹配。匹配区分大小写,并且按路径中以 / 分隔的元素逐个比较;如果请求路径以路径 p 的全部元素为前缀(逐个元素都相同),则请求与路径 p 匹配。
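下面是一个对比两种路径类型的最小规则片段示意(host和service名称借用后文示例中的取值,仅用于说明匹配差异,不是完整的Ingress清单):
paths:
- path: /demo
  pathType: Exact      # 仅匹配 /demo 本身,不匹配 /demo/ 或 /demo/a
  backend:
    service:
      name: games-kod
      port:
        number: 80
- path: /static
  pathType: Prefix     # 匹配 /static、/static/、/static/js 等以该前缀开头的路径
  backend:
    service:
      name: games-kod
      port:
        number: 80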
先上传images.tar.gz
[root@k8s151 /app/tools]$ tar xf images.tar.gz
批量导入镜像
[root@k8s151 /app/tools/images]$ for i in `ls *.tar.gz`; do docker load -i $i; done;
上传镜像到仓库
[root@k8s151 /app/tools/images]$ docker tag kod:v5.0 k8s151.oldboyedu.com:5000/homework/kod:v5.0
[root@k8s151 /app/tools/images]$ docker push k8s151.oldboyedu.com:5000/homework/kod:v5.0
[root@k8s151 /app/tools/images]$ docker tag pingtai:v5.0 k8s151.oldboyedu.com:5000/homework/pingtai:v5.0
[root@k8s151 /app/tools/images]$ docker push k8s151.oldboyedu.com:5000/homework/pingtai:v5.0
[root@k8s151 /app/tools/images/http]$ cat > 01.homework.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
name: oldboyedu-homework-kod
labels:
school: oldboyedu
homework: kod
spec:
replicas: 3
selector:
matchLabels:
apps: kod
template:
metadata:
name: games-pod
labels:
apps: kod
spec:
containers:
- name: linux81-kod
image: k8s151.oldboyedu.com:5000/homework/kod:v5.0
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: oldboyedu-homework-pingtai
labels:
school: oldboyedu
homework: pingtai
spec:
replicas: 3
selector:
matchLabels:
apps: pingtai
template:
metadata:
name: games-pingtai
labels:
apps: pingtai
spec:
containers:
- name: linux81-pingtaiu
image: k8s151.oldboyedu.com:5000/homework/pingtai:v5.0
---
apiVersion: v1
kind: Service
metadata:
name: games-kod
spec:
type: ClusterIP
selector:
apps: kod
ports:
- port: 80
protocol: TCP
targetPort: 85
---
apiVersion: v1
kind: Service
metadata:
name: games-pingtai
spec:
type: ClusterIP
selector:
apps: pingtai
ports:
- port: 80
protocol: TCP
targetPort: 82
EOF
[root@k8s151 /k8s-manifests/ingress/http]$ cat > 02-ingress-http.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: traefik-myweb
annotations:
kubernetes.io/ingress.class: traefik # 指定让这个 Ingress 通过 ingress-traefik 来处理,如果不添加则不会被ingress控制器所监控到!
spec:
rules:
- host: kod.oldboyedu.com #访问的主机名
http:
paths:
- path: /
pathType: Prefix #路径类型
backend:
service:
name: games-kod #绑定service
port:
number: 80
- host: pingtai.oldboyedu.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: games-pingtai
port:
number: 80
EOF
ingress绑定service之后,就可以直接通过域名去访问servcie的pods了
[root@k8s151 /k8s-manifests/ingress/http]$ kubectl apply -f .
[root@k8s151 /k8s-manifests/ingress/http]$ kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
traefik-myweb   <none>   kod.oldboyedu.com,pingtai.oldboyedu.com             80      36s
[root@k8s151 /k8s-manifests/ingress/http]$ kubectl describe ingresses traefik-myweb
我们在公司里面,可能安装的k8s的版本不同,我们要注意,不同的版本要使用不同的ingress-nginx版本
https://github.com/kubernetes/ingress-nginx/
先确定自己的K8s版本,可以从下面看到,我们使用的是1.20.0版本,所以应该使用的ingress-nginx版本是1.0.0-1.2.1(其要求的nginx版本为1.19.9+);如果使用基于1.18版本nginx的旧镜像,可能会出问题,这个要注意
[root@k8s151 ~]$ kubectl get node
NAME STATUS ROLES AGE VERSION
k8s151.oldboyedu.com Ready control-plane,master 2d7h v1.20.0
k8s152.oldboyedu.com   Ready    <none>                 2d7h   v1.20.0
k8s153.oldboyedu.com   Ready    <none>                 2d7h   v1.22.0
[root@k8s151 /app/tools]$ docker pull netonline/defaultbackend:1.4
[root@k8s151 /app/tools]$ docker tag netonline/defaultbackend:1.4 k8s151.oldboyedu.com:5000/defaultbackend:1.4
[root@k8s151 /app/tools]$ docker push k8s151.oldboyedu.com:5000/defaultbackend:1.4
我们使用这个0.20版本的也可以
[root@k8s151 /app/tools]$ docker load -i nginx-ingress-controller.0.20.0.tar
[root@k8s151 /app/tools]$ docker tag quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0 k8s151.oldboyedu.com:5000/nginx-ingress-controller:0.20.0
[root@k8s151 /app/tools]$ docker push k8s151.oldboyedu.com:5000/nginx-ingress-controller:0.20.0
[root@k8s151 /k8s-manifests/ingress/nginx]$ cat > 01-nginx-ingress-controller.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
name: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: default-http-backend
labels:
app.kubernetes.io/name: default-http-backend
app.kubernetes.io/part-of: ingress-nginx
namespace: ingress-nginx
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: default-http-backend
app.kubernetes.io/part-of: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/name: default-http-backend
app.kubernetes.io/part-of: ingress-nginx
spec:
terminationGracePeriodSeconds: 60
containers:
- name: default-http-backend
# Any image is permissible as long as:
# 1. It serves a 404 page at /
# 2. It serves 200 on a /healthz endpoint
# image: netonline/defaultbackend:1.4
image: k8s151.oldboyedu.com:5000/defaultbackend:1.4
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
ports:
- containerPort: 8080
resources:
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
name: default-http-backend
namespace: ingress-nginx
labels:
app.kubernetes.io/name: default-http-backend
app.kubernetes.io/part-of: ingress-nginx
spec:
ports:
- port: 80 #暴露出来的端口是80
targetPort: 8080 #容器的端口是8080
selector:
app.kubernetes.io/name: default-http-backend
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-configuration
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tcp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: udp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
#ServiceAccount用作安全上下文来限制特定自动化操作的权限
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: nginx-ingress-clusterrole
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: nginx-ingress-role
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<ingress-controller-leader>-<nginx>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-nginx"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: nginx-ingress-role-nisa-binding
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress-role
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: nginx-ingress-clusterrole-nisa-binding
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: nginx-ingress-controller
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
spec:
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
operator: Exists
serviceAccountName: nginx-ingress-serviceaccount
hostNetwork: true
containers:
- name: nginx-ingress-controller
# image: quay.mirrors.ustc.edu.cn/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0
image: k8s151.oldboyedu.com:5000/nginx-ingress-controller:0.20.0
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
# www-data -> 33
runAsUser: 33
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
---
kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
type: NodePort
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
EOF
defaultBackend:
可以设置不满足任何规则的请求应该处理的方式,如果没有指定 Rules,则必须指定 defaultBackend。如果 defaultBackend 没有设置,不满足任何规则的请求的处理方式,将会由 Ingress Controller 决定。(比如你访问www.test.com,并没有这个域名,这个时候如果有defaultBackend,就可以定制返回的错误信息,比如“你访问的网址不存在”)
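下面给出一个defaultBackend写法的最小示意(networking.k8s.io/v1格式;其中default-http-backend与games-kod均为借用前后文的名称作演示,实际使用时backend指向的Service必须与Ingress位于同一命名空间):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-with-default-backend
spec:
  defaultBackend:            # 所有不匹配任何rules的请求都交给这个Service处理
    service:
      name: default-http-backend
      port:
        number: 80
  rules:
  - host: kod.oldboyedu.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: games-kod
            port:
              number: 80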
rbac:
ingress controller需要读取service、endpoint、节点状态等k8s信息,所以需要通过rbac进行授权
Service
最后这个service的作用是通过NodePort占用宿主机端口,从而实现在集群外访问。同样地,使用 hostNetwork: true 也可以,但是更推荐使用Service。
创建svc
[root@k8s151 /k8s-manifests/ingress/nginx]$ cat > 02-nginx-svc.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-web
labels:
school: oldboyedu
homework: kod
spec:
replicas: 3
selector:
matchLabels:
apps: kod
template:
metadata:
name: games-kod
labels:
apps: kod
spec:
containers:
- name: linux81-kod
image: k8s151.oldboyedu.com:5000/homework/kod:v5.0
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: oldboyedu-homework-pingtai
labels:
school: oldboyedu
homework: pingtai
spec:
replicas: 3
selector:
matchLabels:
apps: pingtai
template:
metadata:
name: games-pingtai
labels:
apps: pingtai
spec:
containers:
- name: linux81-pingtaiu
image: k8s151.oldboyedu.com:5000/homework/pingtai:v5.0
---
apiVersion: v1
kind: Service
metadata:
name: games-kod
spec:
type: ClusterIP
selector:
apps: kod
ports:
- port: 80
protocol: TCP
targetPort: 85
---
apiVersion: v1
kind: Service
metadata:
name: games-pingtai
spec:
type: ClusterIP
selector:
apps: pingtai
ports:
- port: 80
protocol: TCP
targetPort: 82
EOF
#查看ingress-nginx-controller是否运行
[root@k8s151 ~]$ kubectl -n ingress-nginx get pods
NAME READY STATUS RESTARTS AGE
default-http-backend-67c475f6b5-qgzjk 1/1 Running 0 8h
nginx-ingress-controller-8p7kr 1/1 Running 0 8h
nginx-ingress-controller-c86gn 1/1 Running 0 8h
nginx-ingress-controller-n5mk5 1/1 Running 1 8h
创建Ingress
[root@k8s151 /k8s-manifests/ingress/nginx]$ cat > 03-ingress.yaml <<'EOF'
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-myweb
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: kod.oldboyedu.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: games-kod
port:
number: 80
- host: pingtai.oldboyedu.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: games-pingtai
port:
number: 80
EOF
kubectl apply -f .
kubectl get pods #查看项目节点是否运行
[root@k8s151 /k8s-manifests/ingress/nginx]$ kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
nginx-myweb   <none>   kod.oldboyedu.com,pingtai.oldboyedu.com             80      9m3s
kubectl describe ing nginx-myweb #查看ingress详情
kubectl get svc #查看service的情况
vim /etc/kubernetes/manifests/kube-apiserver.yaml
...
spec:
containers:
- command:
- kube-apiserver
- --service-node-port-range=3000-50000 # 进行添加这一行即可
...
apiVersion: v1
kind: Service
metadata:
name: oldboyedu-linux80-nginx-svc-nodeport
spec:
type: NodePort
selector:
apps: oldboyedu-web
ports:
- port: 8888
protocol: TCP
targetPort: 80
nodePort: 8080
为什么需要coreDNS?
kubernetes中的pod基于service域名解析后,再由负载均衡分发到service后端的各个pod中;如果没有DNS解析,就无法通过名称找到各个服务对应的service。
coreDNS的作用就是将svc的名称解析为ClusterIP。
早期使用的skyDNS组件,需要单独部署,在k8s 1.9版本中,我们就可以直接使用kubeadm方式安装CoreDNS组件。
从k8s 1.12开始,CoreDNS就成为kubernetes默认的DNS服务器,而kubeadm对CoreDNS的支持时间则更早。
推荐阅读:
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns/coredns
显示/var/lib/kubelet/config.yaml中与集群DNS相关的第15~17行
[root@k8s151 /k8s-manifests/pods]$ sed -n '15,17p' /var/lib/kubelet/config.yaml
clusterDNS:
- 10.254.0.10
clusterDomain: cluster.local
[root@k8s151 /k8s-manifests/pods]$ kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default       kubernetes   ClusterIP   10.254.0.1    <none>        443/TCP                  38h
kube-system   kube-dns     ClusterIP   10.254.0.10   <none>        53/UDP,53/TCP,9153/TCP   7d15h
[root@k8s151 /k8s-manifests/pods]$ kubectl describe svc kube-dns -n kube-system
Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: prometheus.io/port: 9153
prometheus.io/scrape: true
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.254.0.10
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 10.244.0.4:53,10.244.1.18:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 10.244.0.4:53,10.244.1.18:53
Port: metrics 9153/TCP
TargetPort: 9153/TCP
Endpoints: 10.244.0.4:9153,10.244.1.18:9153
Session Affinity: None
Events:            <none>
DNS 如何解析,依赖容器内的resolv文件的配置
容器内
[root@k8s151 /k8s-manifests/pods]$ kubectl apply -f 01-pod-nginx.yaml
pod/oldboyedu-linux80-web created
[root@k8s151 /k8s-manifests/pods]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
oldboyedu-linux80-web 1/1 Running 0 21s
[root@k8s151 /k8s-manifests/pods]$ kubectl exec -it oldboyedu-linux80-web -- bash
root@oldboyedu-linux80-web:/# cat /etc/resolv.conf
nameserver 10.254.0.10
search default.svc.cluster.local svc.cluster.local cluster.local oldboyedu.com
options ndots:5
resolv.conf的关键字主要有4个,分别为:
nameserver:定义DNS服务器的IP地址
domain:定义本地域名
search:定义域名的搜索列表
sortlist:对返回的域名进行排序
注意:这里最主要的就是nameserver关键字,如果没有指定nameserver就找不到DNS服务,其它关键字是可选的。
ndots:5 :如果查询的域名包含的点"."不到5个,会被视为非完全限定名称,DNS查找时会依次拼接search列表中的后缀进行查询;如果查询的域名包含的点数大于等于5,则默认直接作为绝对域名(完全限定名称)进行查询。
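一个观察search列表补全效果的示意(假设容器内带有nslookup命令;kubernetes这个svc在default命名空间下默认存在):
root@oldboyedu-linux80-web:/# nslookup kubernetes
# "kubernetes"中点的个数小于5,解析器会按search列表依次拼接后缀尝试,
# 最终命中 kubernetes.default.svc.cluster.local,返回其ClusterIP 10.254.0.1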
k8s中service的A记录格式:<service-name>.<namespace>.svc.cluster.local
参考案例:
[root@k8s151 /k8s-manifests/pods]$ kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default       kubernetes   ClusterIP   10.254.0.1    <none>        443/TCP                  38h
kube-system   kube-dns     ClusterIP   10.254.0.10   <none>        53/UDP,53/TCP,9153/TCP   7d15h
在容器内执行
ping kube-dns.kube-system.svc.cluster.local # kube-dns是service name,kube-system是命名空间namespace
温馨提示:
(1)如果部署时直接写svc的名称,不写名称空间,则默认的名称空间为其引用资源的名称空间;
(2)kubeadm部署时,无需手动配置CoreDNS组件(默认在kube-system已创建),二进制部署时,需要手动安装该组件;
cat > deploy-wordpress.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: oldboyedu-mysql
spec:
replicas: 1
selector:
matchLabels:
app: oldboyedu-mysql
template:
metadata:
labels:
app: oldboyedu-mysql
spec:
volumes:
- name: data
persistentVolumeClaim:
claimName: oldboyedu-linux81-tomcat
containers:
- name: oldboyedu-mysql
image: k8s151.oldboyedu.com:5000/mysql:5.7
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: 123456
volumeMounts:
- name: data
mountPath: /var/lib/mysql
#- name: MYSQL_DATABASE
# value: wordpress
#- name: MYSQL_USER
# value: wordpress
#- name: MYSQL_PASSWORD
# value: wordpress
---
apiVersion: v1
kind: Service
metadata:
name: oldboyedu-mysql
spec:
clusterIP: 10.254.131.222
selector:
app: oldboyedu-mysql
ports:
- port: 3306
targetPort: 3306
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: oldboyedu-wordpress
spec:
replicas: 3
selector:
matchLabels:
apps: oldboyedu-wordpress
template:
metadata:
labels:
apps: oldboyedu-wordpress
spec:
containers:
- name: oldboyedu-wordpress
image: k8s151.oldboyedu.com:5000/wordpress:latest
ports:
- containerPort: 80
env:
- name: WORDPRESS_DB_HOST
# 注意,这里的主机名我使用的是mysql服务的svc名称,
# 这依赖于coreDNS附加组件进行解析哟,因此,咱们要保证coreDNS组件是正常工作的
value: oldboyedu-mysql
- name: WORDPRESS_DB_USER
value: wordpress
- name: WORDPRESS_DB_PASSWORD
value: wordpress
---
apiVersion: v1
kind: Service
metadata:
name: oldboyedu-wordpress
spec:
type: NodePort
selector:
apps: oldboyedu-wordpress
ports:
- port: 80
targetPort: 80
nodePort: 30088
EOF
先上传镜像文件压缩包到/app/tools
[root@k8s151 /app/tools]$ docker load < tomcat-app-v1.tar.gz
4dcab49015d4: Loading layer [==================================================>] 130.9MB/130.9MB
....
[root@k8s151 /app/tools]$ docker images |grep tomcat-app
k8s151.oldboyedu.com:5000/tomcat-app v1 00beaa1d956d 6 years ago 358MB
[root@k8s151 ~]$ docker push k8s151.oldboyedu.com:5000/tomcat-app:v1
[root@k8s151 /k8s-manifests/tomcat]$ cat > deploy-tomcat.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: oldboyedu-linux81-tomcat
spec:
storageClassName: nfs-pvc-sc #使用的是我们创建的nfs管理的类
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
spec:
replicas: 1
selector:
matchLabels:
app: oldboyedu-mysql
template:
metadata:
labels:
app: oldboyedu-mysql
spec:
volumes:
- name: data
persistentVolumeClaim:
claimName: oldboyedu-linux81-tomcat
containers:
- name: mysql
image: k8s151.oldboyedu.com:5000/mysql:5.7
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: '123456'
volumeMounts:
- name: data
mountPath: /var/lib/mysql
---
apiVersion: v1
kind: Service
metadata:
name: oldboyedu-mysql
spec:
selector:
app: oldboyedu-mysql
ports:
- port: 3306
targetPort: 3306
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: oldboyedu-tomcat-app
spec:
replicas: 1
selector:
matchLabels:
app: oldboyedu-tomcat-app
template:
metadata:
labels:
app: oldboyedu-tomcat-app
spec:
containers:
- name: myweb
# image: jasonyin2020/tomcat-app:v1
image: k8s151.oldboyedu.com:5000/tomcat-app:v1
resources:
limits:
cpu: "100m"
requests:
cpu: "100m"
ports:
- containerPort: 8080
env:
- name: MYSQL_SERVICE_HOST
value: oldboyedu-mysql
- name: MYSQL_SERVICE_PORT
value: '3306'
---
apiVersion: v1
kind: Service
metadata:
name: oldboyedu-tomcat-app
spec:
selector:
app: oldboyedu-tomcat-app
ports:
- port: 8080
targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: linux81-tomcat
spec:
rules:
- host: tomcat.oldboyedu.com
http:
paths:
- backend:
serviceName: oldboyedu-tomcat-app
servicePort: 8080
EOF
[root@k8s151 /k8s-manifests/tomcat]$ kubectl apply -f deploy-tomcat.yaml
[root@k8s151 /k8s-manifests/tomcat]$ kubectl get ing
[root@k8s151 /k8s-manifests/tomcat]$ kubectl describe ingress linux81-tomcat
方式一:
直接使用alpine镜像去ping您想测试的SVC名称,观察能否解析成对应的VIP即可。
方式二:
yum -y install bind-utils
[root@k8s151 /k8s-manifests/tomcat]$ dig @10.254.0.10 oldboyedu-tomcat-app.default.svc.cluster.local +short
10.254.43.44 #解析出oldboyedu-tomcat-app对应的ClusterIP
Dashboard是K8S集群管理的一个GUI的WebUI实现,它是一个k8s附加组件,所以需要单独部署。
我们可以以图形化的方式创建k8s资源。
GitHub地址:
https://github.com/kubernetes/dashboard#kubernetes-dashboard
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.15.md#unchanged
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
vim kubernetes-dashboard.yaml
...
spec:
...
template:
...
spec:
containers:
- name: kubernetes-dashboard
# 修改咱们自己的镜像即可
image: k8s201.oldboyedu.com:5000/kubernetes-dashboard-amd64:v1.10.1
kind: Service
...
spec:
# 类型改为NodePort
type: NodePort
ports:
- port: 443
targetPort: 8443
# 指定NodePort的端口。
nodePort: 8443
selector:
k8s-app: kubernetes-dashboard
kubectl apply -f kubernetes-dashboard.yaml
https://10.0.0.203:8443/
kubectl describe sa -n kube-system | grep kubernetes-dashboard | grep Tokens
kubectl describe secrets kubernetes-dashboard-token-ls4zt -n kube-system
如下图所示。
cat > oldboyedu-dashboard-rbac.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
# 创建一个名为"oldboyedu"的账户
name: oldboyedu
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
# 既然绑定的是集群角色,那么类型也应该为"ClusterRole",而不是"Role"哟~
kind: ClusterRole
# 关于集群角色可以使用"kubectl get clusterrole | grep admin"进行过滤哟~
name: cluster-admin
subjects:
- kind: ServiceAccount
# 此处要注意哈,绑定的要和我们上面的服务账户一致哟~
name: oldboyedu
namespace: kube-system
EOF
kubectl apply -f oldboyedu-dashboard-rbac.yaml
kubectl describe serviceaccounts -n kube-system oldboyedu | grep Tokens
kubectl -n kube-system describe secrets oldboyedu-token-gns4h
如上图所示。
温馨提示:
如下图所示,由于咱们创建的ServiceAccount绑定的角色为"cluster-admin"这个角色,因此oldboyedu用户的token是可以访问集群的所有资源的哟~
cat > oldboyedu-generate-context-conf.sh <<'EOF'
#!/bin/bash
# auther: Jason Yin
# 获取secret的名称
SECRET_NAME=`kubectl get secrets -n kube-system | grep oldboyedu | awk {'print $1'}`
# 指定API SERVER的地址
API_SERVER=k8s201.oldboyedu.com:6443
# 指定kubeconfig配置文件的路径名称
KUBECONFIG_NAME=/root/oldboyedu-k8s-dashboard-admin.conf
# 获取oldboyedu用户的token
OLDBOYEDU_TOCKEN=`kubectl get secrets -n kube-system $SECRET_NAME -o jsonpath={.data.token} | base64 -d`
# 在kubeconfig配置文件中设置群集项
kubectl config set-cluster oldboyedu-k8s-dashboard-cluster --server=$API_SERVER --kubeconfig=$KUBECONFIG_NAME
# 在kubeconfig中设置用户项
kubectl config set-credentials oldboyedu-k8s-dashboard-user --token=$OLDBOYEDU_TOCKEN --kubeconfig=$KUBECONFIG_NAME
# 配置上下文,即绑定用户和集群的上下文关系,可以将多个集群和用户进行绑定哟~
kubectl config set-context oldboyedu-admin --cluster=oldboyedu-k8s-dashboard-cluster --user=oldboyedu-k8s-dashboard-user --kubeconfig=$KUBECONFIG_NAME
# 配置当前使用的上下文
kubectl config use-context oldboyedu-admin --kubeconfig=$KUBECONFIG_NAME
EOF
sz oldboyedu-k8s-dashboard-admin.conf
如下图所示,我们可以访问任意的Pod,当然也可以直接进入到有终端的容器哟
Metrics Server从kubelets收集资源指标,并通过Metrics API将它们暴露在Kubernetes apiserver中,以供HPA(Horizontal Pod Autoscaler)和VPA(Vertical Pod Autoscaler)使用。
Metrics API也可以通过kubectl top访问,从而更容易调试自动缩放管道。
参考链接:
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/metrics-server
https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/
https://github.com/kubernetes-sigs/metrics-server
可以从资料里面复制 /k8s-manifests/add-on/metric-server
[root@k8s151 /app/tools]$ docker load -i metrics-server-0.3.6.tar.gz
[root@k8s151 /app/tools]$ docker push k8s151.oldboyedu.com:5000/metrics-server-amd64:v0.3.3
[root@k8s151 /app/tools]$ docker push k8s151.oldboyedu.com:5000/addon-resizer:1.8.5
[root@k8s151 /app/tools]$ mkdir -p /k8s-manifests/add-on/metric-server && cd /k8s-manifests/add-on/metric-server
[root@k8s151 /k8s-manifests/add-on/metric-server]$
[root@k8s151 /k8s-manifests/add-on/metric-server]$ kubectl apply -f .
[root@k8s151 /k8s-manifests/add-on/metric-server]$ kubectl get pods -n kube-system
[root@k8s151 /k8s-manifests/add-on/metric-server]$ kubectl top nodes
cd /root/manifests/add-on/metrics-server/deploy && kubectl apply -f .
kubectl get pods -n kube-system -o wide | grep metrics
kubectl get pods -A -o wide |grep flannel #查询flannel节点
kubectl describe no |grep Taints #查看node污点
kubectl taint node k8s151.oldboyedu.com node-role.kubernetes.io/master- #取消污点
kubectl top nodes
kubectl top pods -A
HPA全称是Horizontal Pod Autoscaler,中文意思是POD水平自动伸缩.
可以基于 CPU 利用率自动扩缩 ReplicationController、Deployment、ReplicaSet 和 StatefulSet 中的 Pod 数量。
除了 CPU 利用率,内存占用外,也可以基于其他应程序提供的自定义度量指标来执行自动扩缩
Pod 自动扩缩不适用于无法扩缩的对象,比如 DaemonSet。
Pod 水平自动扩缩特性由 Kubernetes API 资源和控制器实现。资源决定了控制器的行为。
控制器会周期性的调整副本控制器或 Deployment 中的副本数量,以使得 Pod 的平均 CPU 利用率与用户所设定的目标值匹配
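控制器的计算方式大致为:
期望副本数 = ceil( 当前副本数 × 当前指标值 / 目标指标值 )
例如当前有2个副本,平均CPU利用率为50%,目标值为25%,则期望副本数 = ceil(2 × 50 / 25) = 4,HPA会在不超过maxReplicas的前提下把副本数扩到4。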
[root@k8s151 /metric-server]$ cat > deploy-tomcat.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: oldboyedu-linux81-tomcat
spec:
storageClassName: nfs-pvc-sc #这个是我们之前创建的nginx动态存储,在pvc,所以这里可以直接拿来用
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
spec:
replicas: 1
selector:
matchLabels:
app: oldboyedu-mysql
template:
metadata:
labels:
app: oldboyedu-mysql
spec:
volumes:
- name: data
persistentVolumeClaim:
claimName: oldboyedu-linux81-tomcat
containers:
- name: mysql
image: k8s151.oldboyedu.com:5000/mysql:5.7
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: '123456'
volumeMounts:
- name: data
mountPath: /var/lib/mysql
---
apiVersion: v1
kind: Service
metadata:
name: oldboyedu-mysql
spec:
selector:
app: oldboyedu-mysql
ports:
- port: 3306
targetPort: 3306
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: oldboyedu-tomcat-app
spec:
replicas: 1
selector:
matchLabels:
app: oldboyedu-tomcat-app
template:
metadata:
labels:
app: oldboyedu-tomcat-app
spec:
containers:
- name: myweb
# image: jasonyin2020/tomcat-app:v1
image: k8s151.oldboyedu.com:5000/tomcat-app:v1
resources:
limits:
cpu: "100m"
requests:
cpu: "100m"
ports:
- containerPort: 8080
env:
- name: MYSQL_SERVICE_HOST
value: oldboyedu-mysql
- name: MYSQL_SERVICE_PORT
value: '3306'
---
apiVersion: v1
kind: Service
metadata:
name: oldboyedu-tomcat-app
spec:
selector:
app: oldboyedu-tomcat-app
ports:
- port: 8080
targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: linux81-tomcat
spec:
rules:
- host: tomcat.oldboyedu.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: oldboyedu-tomcat-app
port:
number: 8080
EOF
[root@k8s151 /metric-server]$ kubectl apply -f deploy-tomcat.yaml
kubectl autoscale deployment oldboyedu-tomcat-app --max=10 --min=2 --cpu-percent=10
kubectl get hpa
相关参数说明:
--max:
指定最大的Pod数量,如果指定的数量越大,则弹性伸缩的资源创建的就越多,对服务器资源会进行消耗。
--min:
指定最小的Pod数量。
--cpu-percent:
指定CPU的百分比。
温馨提示:
(1)测试时建议修改为CPU使用百分比为10%,生产环境建议设置成75%。
(2)测试时最大Pod数量建议为5个即可,生产环境根据需求而定,通常情况下,10是一个不错的建议;
通过yaml进行配置
apiVersion: apps/v1
kind: Deployment
metadata:
  # 注意:资源名称只能使用小写字母、数字和"-",因此这里用hpa-test
  name: hpa-test
spec:
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: apache
        image: nginx:1.18
        ports:
        - containerPort: 80
        resources:           # 计算此容器所需的资源
          requests:          # 描述所需的最小计算资源量
            cpu: 200m        # 计量单位m为千分之一核,200m即0.2核
      restartPolicy: Always
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler   # HPA资源类型
metadata:
  name: myweb                   # HPA名称
spec:
  minReplicas: 1                # 最小保留Pod数量
  maxReplicas: 3                # 最大保留Pod数量
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-test              # 监视的deployment名称
  targetCPUUtilizationPercentage: 50   # CPU度量值(目标平均CPU利用率百分比)
yum -y install httpd-tools
ab -c 1000 -n 2000000 http://tomcat.oldboyedu.com/demo/
相关参数说明:
-n:
指定总共压测的次数。
-c:
每次压测发起的并发请求数。
如果想要使用"kubectl top"指令,请一定要让master节点部署CNI插件哟。
解决问题方案:
kubectl taint node k8s201.oldboyedu.com node-role.kubernetes.io/master-
PersistentVolume(PV) 是集群中由管理员配置的一段网络存储。
集群中的资源就像一个节点是一个集群资源,可以从远程的 NFS 或分布式对象存储系统中创建得来(PV 存储空间大小、访问方式)。PV 就是从存储设备中的空间创建出一个存储资源
PV 是集群中的资源,PVC 是对这些资源的请求,也是对资源的索引检查。
PV 和 PVC 之间的相互作用遵循这个生命周期:
Provisioning(配置) —> Binding(绑定) —> Using(使用) —> Releasing(释放) —> Recycling(回收)
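一个观察状态流转的小示意(<pvc名称>为占位符,请替换为实际名称):
kubectl get pv,pvc        # 观察STATUS列:PV从Available变为Bound
kubectl delete pvc <pvc名称>
# 删除PVC后,Retain策略的PV会变为Released(需手动回收);
# Delete策略的PV会由对应卷插件尝试删除,不支持时会显示Failed(后文实验中即可看到)。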
Volume Plugin | ReadWriteOnce | ReadOnlyMany | ReadWriteMany | ReadWriteOncePod |
---|---|---|---|---|
AWSElasticBlockStore | ✓ | - | - | - |
AzureFile | ✓ | ✓ | ✓ | - |
AzureDisk | ✓ | - | - | - |
CephFS | ✓ | ✓ | ✓ | - |
Cinder | ✓ | - | - | - |
CSI | depends on the driver | depends on the driver | depends on the driver | depends on the driver |
FC | ✓ | ✓ | - | - |
FlexVolume | ✓ | ✓ | depends on the driver | - |
Flocker | ✓ | - | - | - |
GCEPersistentDisk | ✓ | ✓ | - | - |
Glusterfs | ✓ | ✓ | ✓ | - |
HostPath | ✓ | - | - | - |
iSCSI | ✓ | ✓ | - | - |
Quobyte | ✓ | ✓ | ✓ | - |
NFS | ✓ | ✓ | ✓ | - |
RBD | ✓ | ✓ | - | - |
VsphereVolume | ✓ | - | - (works when Pods are collocated) | - |
PortworxVolume | ✓ | - | ✓ | - |
StorageOS | ✓ | - | - | - |
Capacity(存储能力):
目前只支持存储空间的设置,就是我们这里的 storage=1Gi,不过未来可能会加入 IOPS、吞吐量等指标的配置。
AccessModes(访问模式):
AccessModes 是用来对 PV 进行访问模式的设置,用于描述用户应用对存储资源的访问权限,访问权限包括下面几种方式:
[root@k8s151 /k8s-manifests]$ mkdir /k8s-manifests/pv
[root@k8s151 /k8s-manifests/pv]$ cat > manual-pv.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
name: oldboyedu-linux81-pv01
labels:
person: wanyan
school: oldboyedu
spec:
# 声明PV的访问模式,常用的有"ReadWriteOnce","ReadOnlyMany"和"ReadWriteMany":
# ReadWriteOnce:(简称:"RWO")
# 只允许单个worker节点读写存储卷,但是该节点的多个Pod是可以同时访问该存储卷的。
# ReadOnlyMany:(简称:"ROX")
# 允许多个worker节点进行只读存储卷。
# ReadWriteMany:(简称:"RWX")
# 允许多个worker节点进行读写存储卷。
# ReadWriteOncePod:(简称:"RWOP")
# 该卷可以通过单个Pod以读写方式装入。
# 如果您想确保整个集群中只有一个pod可以读取或写入PVC,请使用ReadWriteOncePod访问模式。
# 这仅适用于CSI卷和Kubernetes版本1.22+。
accessModes:
- ReadWriteMany
# 声明存储卷的类型为nfs
nfs:
path: /oldboyedu/data/kubernetes/pv/linux81/pv001
server: 10.0.0.151
# 指定存储卷的回收策略,常用的有"Retain"和"Delete"
# Retain:
# "保留回收"策略允许手动回收资源。
# 删除PersistentVolumeClaim时,PersistentVolume仍然存在,并且该卷被视为"已释放"。
# 在管理员手动回收资源之前,使用该策略其他Pod将无法直接使用。
# Delete:
# 对于支持删除回收策略的卷插件,k8s将删除pv及其对应的数据卷数据。
# Recycle:
# 对于"回收利用"策略官方已弃用。相反,推荐的方法是使用动态资源调配。
# 如果基础卷插件支持,回收回收策略将对卷执行基本清理(rm -rf /thevolume/*),并使其再次可用于新的声明。
persistentVolumeReclaimPolicy: Retain
# 声明存储的容量
capacity:
storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: oldboyedu-linux81-pv02
labels:
person: zhaojiaxin
school: oldboyedu
spec:
accessModes:
- ReadWriteMany
nfs:
path: /oldboyedu/data/kubernetes/pv/linux81/pv002
server: 10.0.0.151
persistentVolumeReclaimPolicy: Delete
capacity:
storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: oldboyedu-linux81-pv03
labels:
person: heyingnan
school: oldboyedu
spec:
accessModes:
- ReadWriteMany
nfs:
path: /oldboyedu/data/kubernetes/pv/linux81/pv003
server: 10.0.0.151
persistentVolumeReclaimPolicy: Recycle
capacity:
storage: 10Gi
EOF
[root@k8s151 /k8s-manifests/pv]$ kubectl apply -f manual-pv.yaml
persistentvolume/oldboyedu-linux81-pv01 created
persistentvolume/oldboyedu-linux81-pv02 created
persistentvolume/oldboyedu-linux81-pv03 created
[root@k8s151 /k8s-manifests/pv]$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
oldboyedu-linux81-pv01 2Gi RWX Retain Available 30s
oldboyedu-linux81-pv02 5Gi RWX Delete Available 30s
oldboyedu-linux81-pv03 10Gi RWX Recycle Available 30s
[root@k8s151 /k8s-manifests/pv]$ mkdir -pv /oldboyedu/data/kubernetes/pv/linux81/pv00{1..3}
[root@k8s151 /k8s-manifests/pv]$ ll -R /oldboyedu/data/kubernetes/pv/linux81
参考链接:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming
[root@k8s151 /k8s-manifests/pvc]$ cat > manual-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: oldboyedu-linux81-pvc
spec:
# 声明资源的访问模式
accessModes:
- ReadWriteMany
# 声明资源的使用量
resources:
limits:
storage: 4Gi #最大申请的空间
requests:
storage: 3Gi #定义要申请的最小空间
EOF
kubectl apply -f manual-pvc.yaml
kubectl get pvc #可以看到绑定的是pv02
[root@k8s151 /k8s-manifests/wordpress]$ cp -r v3 v4
[root@k8s151 /k8s-manifests/wordpress/v4]$ cat > 01-deploy-mysql.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-mysql
spec:
replicas: 1
selector:
matchLabels:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
      volumes: # now backed by the PVC instead of mounting NFS directly; the old nfs block (see 3.5) is left commented out below
- name: data
# nfs:
# server: 10.0.0.151
# path: /oldboyedu/data/kubernetes
persistentVolumeClaim:
claimName: oldboyedu-linux81-pvc
containers:
- name: mysql
image: k8s151.oldboyedu.com:5000/mysql:5.7
volumeMounts:
- name: data
mountPath: /var/lib/mysql
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: oldboyedu
- name: MYSQL_DATABASE
value: wordpress
- name: MYSQL_USER
value: wordpress
- name: MYSQL_PASSWORD
value: wordpress
---
apiVersion: v1
kind: Service
metadata:
name: linux81-db
spec:
clusterIP: 10.254.111.111 #自定义clusterIP
selector:
app: mysql
ports:
- port: 3306
protocol: TCP
targetPort: 3306
name: db
EOF
[root@k8s151 /k8s-manifests/wordpress/v4]$ cat > 02-deploy-wordpress.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
name: wordpress
spec:
replicas: 3
selector:
matchLabels:
apps: wordpress
template:
metadata:
labels:
apps: wordpress
spec:
containers:
- name: wordpress
image: k8s151.oldboyedu.com:5000/wordpress:latest
ports:
- containerPort: 80
env:
- name: WORDPRESS_DB_HOST
value: linux81-db
- name: WORDPRESS_DB_USER
value: wordpress
- name: WORDPRESS_DB_PASSWORD
value: wordpress
---
apiVersion: v1
kind: Service
metadata:
name: linux81-web
labels:
apps: wordpress
class: linux81
spec:
type: NodePort
selector:
apps: wordpress
ports:
- port: 8888
targetPort: 80
nodePort: 30088
name: web
EOF
[root@k8s151 /k8s-manifests/wordpress/v4]$ kubectl apply -f .
[root@k8s151 /k8s-manifests/wordpress/v4]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
deploy-mysql-67cd9f6b88-s4255 1/1 Running 0 6m58s
wordpress-7778cdccdf-2rjkz 1/1 Running 0 6m58s
wordpress-7778cdccdf-4dfrc 1/1 Running 0 6m58s
wordpress-7778cdccdf-b5ng7 1/1 Running 0 6m58s
[root@k8s151 /k8s-manifests/wordpress/v4]$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
oldboyedu-linux81-pv01 2Gi RWX Retain Available 66m
oldboyedu-linux81-pv02 5Gi RWX Delete Failed default/oldboyedu-linux80-pvc 66m
oldboyedu-linux81-pv03 10Gi RWX Recycle Bound default/oldboyedu-linux81-pvc 66m
The PVC is bound to oldboyedu-linux81-pv03: pv02 is skipped because it is still in the Failed state left over from an earlier claim (default/oldboyedu-linux80-pvc).
[root@k8s151 /k8s-manifests/wordpress/v4]$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
oldboyedu-linux81-pvc Bound oldboyedu-linux81-pv03 10Gi RWX 114m
发现只有被绑定的pv003里面才有文件
[root@k8s151 /k8s-manifests/wordpress/v4]$ ll /oldboyedu/data/kubernetes/pv/linux81/pv003
total 188484
-rw-r----- 1 polkitd input 56 Nov 29 17:36 auto.cnf
-rw------- 1 polkitd input 1676 Nov 29 17:36 ca-key.pem
-rw-r--r-- 1 polkitd input 1112 Nov 29 17:36 ca.pem
-rw-r--r-- 1 polkitd input 1112 Nov 29 17:36 client-cert.pem
-rw------- 1 polkitd input 1676 Nov 29 17:36 client-key.pem
-rw-r----- 1 polkitd input 1352 Nov 29 17:38 ib_buffer_pool
-rw-r----- 1 polkitd input 79691776 Nov 29 17:38 ibdata1
-rw-r----- 1 polkitd input 50331648 Nov 29 17:38 ib_logfile0
-rw-r----- 1 polkitd input 50331648 Nov 29 17:36 ib_logfile1
-rw-r----- 1 polkitd input 12582912 Nov 29 17:42 ibtmp1
drwxr-x--- 2 polkitd input 4096 Nov 29 17:36 mysql
drwxr-x--- 2 polkitd input 8192 Nov 29 17:36 performance_schema
-rw------- 1 polkitd input 1676 Nov 29 17:36 private_key.pem
-rw-r--r-- 1 polkitd input 452 Nov 29 17:36 public_key.pem
-rw-r--r-- 1 polkitd input 1112 Nov 29 17:36 server-cert.pem
-rw------- 1 polkitd input 1676 Nov 29 17:36 server-key.pem
drwxr-x--- 2 polkitd input 8192 Nov 29 17:36 sys
drwxr-x--- 2 polkitd input 20 Nov 29 17:38 wordpress
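With the database files in place and all Pods Running, the site should answer on the NodePort published above; a quick check from any host that can reach a node (10.0.0.151 used here):
curl -I http://10.0.0.151:30088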
Retain:
    The "Retain" policy allows manual reclamation. When the PVC is deleted, the PV still exists and the volume is considered "Released".
    Until an administrator reclaims it manually, no other Pod can reuse the volume.
温馨提示:
    (1) Tested on k8s 1.15.12: deleting the PVC does not remove the data on the NFS volume, and the PV is not deleted either.
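For a Retain PV stuck in the Released state, one common way to make it claimable again, after cleaning up (or deliberately keeping) the old data on the NFS path, is to drop the stale claimRef; a sketch against pv01:
kubectl patch pv oldboyedu-linux81-pv01 -p '{"spec":{"claimRef":null}}'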
Delete:
    For volume plugins that support the Delete reclaim policy, k8s deletes the PV together with the data on the backing volume. This is easiest to observe with a dynamic StorageClass (sc).
    Volumes such as AWS EBS, GCE PD, Azure Disk or OpenStack Cinder are removed along with the PV.
温馨提示:
    (1) Tested on k8s 1.15.12 without an sc: deleting the PVC does not remove the data on the NFS volume.
    (2) Tested on k8s 1.15.12 with an sc: the deletion of the data can be observed.
Recycle:
    The "Recycle" policy is deprecated; dynamic provisioning is the recommended replacement.
    If the underlying volume plugin supports it, Recycle performs a basic scrub (rm -rf /thevolume/*) on the volume and makes it available again for a new claim.
    温馨提示: tested on k8s 1.15.12, deleting the PVC does remove the data on the NFS volume.
kubectl patch pv oldboyedu-linux81-pv03 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'   # change pv03's reclaim policy from Recycle to Retain on the live object
参考链接:
https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/
温馨提示:
    Changes made with ad-hoc commands like the patch above are essentially temporary: once the resource is deleted and recreated, it will again follow whatever is written in the manifest.
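So to make the Retain change for pv03 permanent, also update the manifest and re-apply it; a sketch:
# in manual-pv.yaml, under oldboyedu-linux81-pv03:
#   persistentVolumeReclaimPolicy: Retain    # was Recycle
kubectl apply -f /k8s-manifests/pv/manual-pv.yaml
kubectl get pv oldboyedu-linux81-pv03 -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'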
Volume Plugin | Internal Provisioner | Config Example |
---|---|---|
AWSElasticBlockStore | ✓ | AWS EBS |
AzureFile | ✓ | Azure File |
AzureDisk | ✓ | Azure Disk |
CephFS | - | - |
Cinder | ✓ | OpenStack Cinder |
FC | - | - |
FlexVolume | - | - |
Flocker | ✓ | - |
GCEPersistentDisk | ✓ | GCE PD |
Glusterfs | ✓ | Glusterfs |
iSCSI | - | - |
Quobyte | ✓ | Quobyte |
NFS | - | NFS |
RBD | ✓ | Ceph RBD |
VsphereVolume | ✓ | vSphere |
PortworxVolume | ✓ | Portworx Volume |
ScaleIO | ✓ | ScaleIO |
StorageOS | ✓ | StorageOS |
Local | - | Local |
https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner
[root@k8s151 ~]$ mkdir -p /k8s-manifests/sc /oldboyedu/data/kubernetes/sc && cd /k8s-manifests/sc
[root@k8s151 /k8s-manifests/sc]$ git clone https://gitee.com/yinzhengjie/k8s-external-storage.git
或
[root@k8s151 /k8s-manifests/sc]$ git clone https://gitee.com/llllkkkkhhhh/k8s-external-storage.git
StorageClass详解:https://blog.csdn.net/weixin_41947378/article/details/111509849
Here the StorageClass wraps the NFS share behind a provisioner: once the class is defined, a PVC only needs to reference it and an NFS-backed volume is provisioned automatically.
Prerequisite for dynamic NFS storage: an NFS server must already be deployed (see section 10.4.2 under "四 Pod基础管理").
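Before deploying the provisioner it is worth confirming that the sc directory is reachable over NFS from a worker node; a quick check, assuming the export from the earlier NFS section covers /oldboyedu/data/kubernetes:
[root@k8s152 ~]$ mount -t nfs 10.0.0.151:/oldboyedu/data/kubernetes/sc /mnt && ls /mnt && umount /mnt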
[root@k8s151 ~]$ cd /k8s-manifests/sc/k8s-external-storage/nfs-client/deploy
[root@k8s151 /k8s-manifests/sc/k8s-external-storage/nfs-client/deploy]$ cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-pvc-sc            # name of the StorageClass (change to suit your environment)
provisioner: nfstest/test     # must match the PROVISIONER_NAME env value in the provisioner deployment (renamed here only for demonstration)
parameters:
  # archiveOnDelete: "false"  # false: the backing directory is removed when the PVC is deleted
  archiveOnDelete: "true"     # true: the data is kept and the directory is renamed with an archived- prefix when the PVC is deleted
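Optionally, the class can be marked as the cluster default so that PVCs without an explicit class use it automatically (standard annotation, applied here to the nfs-pvc-sc name used above):
kubectl patch storageclass nfs-pvc-sc -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'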
The deployment below is what actually points the provisioner at the NFS server address and shared path:
[root@k8s151 /k8s-manifests/sc/k8s-external-storage/nfs-client/deploy]$
vim deployment.yaml
...
spec:
...
template:
...
spec:
...
containers:
- name: nfs-client-provisioner
#image: quay.io/external_storage/nfs-client-provisioner:latest
image: k8s151.oldboyedu.com:5000/nfs-client-provisioner #镜像改成从我们仓库拉取,这样速度会更快
...
env:
            - name: PROVISIONER_NAME
              value: nfstest/test    # the provisioner name; must match the "provisioner" field in class.yaml
# 指定NFS服务器地址
- name: NFS_SERVER
value: 10.0.0.151
# 指定NFS的共享路径
- name: NFS_PATH
value: /oldboyedu/data/kubernetes/sc
volumes:
- name: nfs-client-root
# 配置NFS共享
nfs:
server: 10.0.0.151
path: /oldboyedu/data/kubernetes/sc
创建StorageClass对象
[root@k8s151 /k8s-manifests/sc/k8s-external-storage/nfs-client/deploy]$ kubectl apply -f class.yaml
[root@k8s151 /k8s-manifests/sc/k8s-external-storage/nfs-client/deploy]$ kubectl get sc #或kubectl get StorageClass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-pvc-sc nfstest/test Delete Immediate false 5s
Create the ServiceAccount and the RBAC permissions it needs
[root@k8s151 /deploy]$ kubectl apply -f rbac.yaml
[root@k8s151 /k8s-manifests/sc/k8s-external-storage/nfs-client/deploy]$ kubectl get sa #或 kubectl get ServiceAccount
NAME SECRETS AGE
default 1 4h22m
nfs-client-provisioner 1 25s
创建pods
[root@k8s151 /k8s-manifests/sc/k8s-external-storage/nfs-client/deploy]$ kubectl apply -f deployment.yaml
查看nfs的pod是否部署成功
[root@k8s151 ~]$ kubectl get pods |grep nfs
nfs-client-provisioner-5948df8d48-c2dkm 1/1 Running 0 2m31s
If the nfs-client-provisioner image cannot be pulled, the Pod stays in the ContainerCreating state. Mirror the image into the local registry first:
https://hub.docker.com/r/vbouchaud/nfs-client-provisioner
docker pull vbouchaud/nfs-client-provisioner
docker tag vbouchaud/nfs-client-provisioner k8s151.oldboyedu.com:5000/nfs-client-provisioner
docker push k8s151.oldboyedu.com:5000/nfs-client-provisioner
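If the Pod stays in ContainerCreating, the pull error is visible in its events; assuming the deployment keeps the upstream app=nfs-client-provisioner label:
kubectl describe pod -l app=nfs-client-provisioner | grep -A 10 Events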
[root@k8s151 /k8s-manifests/sc/k8s-external-storage/nfs-client/deploy]$ cat > test-claim.yaml <<'EOF'
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim
annotations:
volume.beta.kubernetes.io/storage-class: "nfs-pvc-sc" #直接引用创建好的storage-class
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Mi
EOF
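The volume.beta.kubernetes.io/storage-class annotation still works here but has long been superseded; on newer clusters the same claim is normally written with spec.storageClassName, an equivalent sketch:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: nfs-pvc-sc
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Mi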
[root@k8s151 /k8s-manifests/sc/k8s-external-storage/nfs-client/deploy]$ kubectl apply -f class.yaml
storageclass.storage.k8s.io/nfs-pvc-sc created
[root@k8s151 /k8s-manifests/sc/k8s-external-storage/nfs-client/deploy]$ kubectl apply -f test-claim.yaml
persistentvolumeclaim/test-claim created
[root@k8s151 /k8s-manifests/sc/k8s-external-storage/nfs-client/deploy]$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-claim Bou