Official docs: https://v1-25.docs.kubernetes.io/zh-cn/docs/concepts/services-networking/service/
Service
An abstract way to expose an application running on a set of Pods as a network service.
With Kubernetes you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses, gives a set of Pods a single DNS name, and can load-balance across them.
A Service can be seen as the external access point for a group of Pods that provide the same service. With Services, applications get service discovery and load balancing with little effort.
By default a Service only provides layer-4 load balancing; it has no layer-7 capability (that can be added with Ingress).
Service types (the first three expose in-cluster resources to clients; ExternalName maps an internal name to an external one):
ClusterIP: the default; a virtual IP automatically assigned by Kubernetes, reachable only from inside the cluster.
NodePort: exposes the Service on a port of every node; a request to any NodeIP:nodePort is routed to the ClusterIP.
LoadBalancer: builds on NodePort and uses the cloud provider to create an external load balancer that forwards requests to NodeIP:NodePort; only usable on cloud platforms.
ExternalName: forwards the service to a specified domain name via a DNS CNAME record (set with spec.externalName). [Cluster-internal access to the outside: external resources are reached through an internal name.]
Services discover Pods through labels.
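A quick way to see which Pods a Service has picked up is to list its Endpoints; each entry is a PodIP:targetPort matched by the label selector (a sketch, using the myapp Service created below):
[root@k8s2 service]# kubectl get endpoints myapp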
Create a test example
[root@k8s2 service]# vim myapp.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  replicas: 6
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:v1
        name: myapp
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp
  type: ClusterIP
[root@k8s2 service]# kubectl apply -f myapp.yml
[root@k8s2 service]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   4d1h
myapp        ClusterIP   10.107.249.53   <none>        80/TCP    7s
By default, traffic is scheduled with iptables
[root@server2 service]# iptables -t nat -nL | grep :80
A Service is implemented jointly by the kube-proxy component and iptables.
When kube-proxy handles Services with iptables, it must maintain a large number of iptables rules on the host; with many Pods, the constant rule refreshes consume a lot of CPU.
With Services in IPVS mode, a Kubernetes cluster can support far more Pods.
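A rough way to see how much iptables state kube-proxy maintains on a node (illustrative; the count grows with the number of Services and Endpoints):
[root@server2 ~]# iptables-save -t nat | wc -l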
Check IPVS before enabling IPVS mode
[root@server2 ~]# lsmod | grep ip ## shows whether the ipvs modules are in use or kube-proxy is still on iptables
ip6_udp_tunnel 12755 1 vxlan
ip_vs_sh 12688 0
ip_vs_wrr 12697 0
ip_vs_rr 12600 0
ip_vs 145497 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
Deploy IPVS mode
[root@server2 ~]# yum install -y ipvsadm ## install ipvsadm
[root@server2 ~]# kubectl get pod -n kube-system | grep kube-proxy ## check before the change
[root@server2 ~]# kubectl -n kube-system edit cm kube-proxy
...
mode: "ipvs" ##进入修改mode为ipvs
重启pod:
[root@server2 ~]# kubectl -n kube-system get pod|grep kube-proxy | awk '{system("kubectl -n kube-system delete pod "$1"")}'
##更新kube-proxy pod(删除后自动生成)
[root@server2 ~]# kubectl get pod -n kube-system | grep kube-proxy ##部署之后查看是否发生变化
[root@server2 ~]# ipvsadm -ln
# In IPVS mode, after a Service is created, kube-proxy adds a virtual NIC, kube-ipvs0, on the host and assigns the Service IP to it
Test: round-robin scheduling
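A minimal round-robin check from inside the cluster, assuming the myapp image serves /hostname.html (as the later transcripts show) and using the ClusterIP from the output above:
[root@k8s2 service]# for i in $(seq 1 6); do curl -s 10.107.249.53/hostname.html; done ## the pod names should rotate across the 6 replicas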
ClusterIP mode is reachable only from inside the cluster
[root@k8s2 service]# vim myapp.yml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp
  type: ClusterIP ## service type
[root@server2 service]# kubectl apply -f myapp.yml
Test with dig
[root@server2 service]# dig -t A myapp.default.svc.cluster.local. @10.96.0.10
A headless Service is not assigned a VIP; instead, DNS records resolve directly to the IPs of the backing Pods.
DNS name format: <servicename>.<namespace>.svc.cluster.local
[root@k8s2 service]# vim myapp.yml
...
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp
  type: ClusterIP ## unchanged
  clusterIP: None ## headless: no VIP is allocated
[root@k8s2 service]# kubectl delete svc myapp
[root@k8s2 service]# kubectl apply -f myapp.yml
Headless mode does not allocate a VIP
A headless Service is reached by its name; resolution is provided by the in-cluster DNS
[root@server2 service]# dig -t A myapp.default.svc.cluster.local. @10.96.0.10
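With a headless Service, the answer section should contain one A record per ready Pod instead of a single VIP; the addresses below are illustrative:
myapp.default.svc.cluster.local. 30 IN A 10.244.1.12
myapp.default.svc.cluster.local. 30 IN A 10.244.2.8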
[root@k8s2 service]# vim myapp.yml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp
  type: NodePort ## changed: expose via NodePort
[root@k8s2 service]# kubectl apply -f myapp.yml
[root@k8s2 service]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        4d1h
myapp        NodePort    10.107.249.53   <none>        80:32199/TCP   12m
NodePort binds a port on every cluster node; one port corresponds to one service
[root@server1 harbor]# curl 192.168.117.12:32543
Hello MyApp | Version: v1 | Pod Name
[root@server1 harbor]# curl 192.168.117.13:32543
Hello MyApp | Version: v1 | Pod Name
[root@server1 harbor]# curl 192.168.117.14:32543
Hello MyApp | Version: v1 | Pod Name
[root@k8s2 service]# vim myapp.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - image: myapp:v1
        name: myapp
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 33333 ## outside the default range; rejected below
  selector:
    app: myapp
  type: NodePort ## changed
[root@k8s2 service]# kubectl apply -f myapp.yml
The Service "myapp" is invalid: spec.ports[0].nodePort: Invalid value: 33333: provided port is not in the valid range. The range of valid ports is 30000-32767
The default NodePort range is 30000-32767; anything outside it is rejected
[root@k8s2 service]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
Add the following parameter; the port range can be customized
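The parameter is the --service-node-port-range flag, appended to the command list of the static pod manifest (the range below is illustrative):
- --service-node-port-range=30000-50000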
Check whether the change took effect:
[root@server2 ~]# kubectl -n kube-system describe pod kube-apiserver-server2
After the edit, the api-server restarts automatically; wait until it is back up before operating on the cluster.
Note: if the parameter is wrong, the api-server cannot restart and the cluster becomes unmanageable; re-edit the file to fix it. The api-server is the cluster's only external control interface: while it is down, kubectl cannot issue new commands, but applications that are already running are unaffected, because Kubernetes components are not strongly dependent on it.
[root@k8s2 service]# vim myapp.yml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp
  type: LoadBalancer ## changed
[root@k8s2 service]# kubectl apply -f myapp.yml
By default no external IP can be assigned (EXTERNAL-IP stays pending)
[root@k8s2 service]# kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP        4d1h
myapp        LoadBalancer   10.107.23.134   <pending>     80:32537/TCP   4s
LoadBalancer mode targets cloud platforms; in a bare-metal environment, install metallb for support (see 8. metallb: assigning IPs, below)
The third way to grant external access is ExternalName: point the Service directly at an external address
1. ExternalName
Suited to application-migration scenarios
[root@server2 service]# vim externalname.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: www.westos.org ## target domain
[root@server2 service]# kubectl apply -f externalname.yaml
[root@server2 ~]# kubectl get svc
NAME    TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)   AGE
exsvc   ExternalName   <none>       www.baidu.com   <none>    7s
[root@server2 ~]# dig -t A exsvc.default.svc.cluster.local. @10.96.0.10 ## the node must have Internet access for the name to resolve
2. Assign a public IP directly (generally not recommended)
[root@server2 ~]# vim damo.yml
[root@server2 ~]# cat damo.yml
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  #clusterIP: None
  #type: NodePort
  #type: LoadBalancer
  externalIPs: ## the public IP to assign
  - 172.25.13.100
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v2
[root@server2 ~]# kubectl apply -f damo.yml
service/myservice configured
deployment.apps/demo2 unchanged
[root@server2 ~]# kubectl get svc ## the assigned external IP is 172.25.13.100
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP     PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>          443/TCP   2d20h
myservice    ClusterIP   10.97.125.97   172.25.13.100   80/TCP    5m7s
[root@westos Desktop]# curl 172.25.13.100/hostname.html ## access 172.25.13.100 from the physical host
[root@server2 ~]# kubectl get services kube-dns --namespace=kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   2d19h
[root@server2 ~]# kubectl attach demo -it ## attach if the demo pod already exists
[root@server2 ~]# kubectl run demo --image=busyboxplus -it ## otherwise create it
[root@server2 ~]# yum install bind-utils -y ## install the dig tool
[root@server2 ~]# dig myservice.default.svc.cluster.local. @10.96.0.10 ## test with dig
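Inside the busyboxplus pod, the short Service name resolves as well, because the Pod's /etc/resolv.conf carries the default.svc.cluster.local search domain (a sketch):
/ # nslookup myservice ## should return the ClusterIP 10.97.125.97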
metallb provides load-balancer functionality in self-hosted bare-metal environments; without a cloud provider, it exposes Services to the surrounding network so other systems can reach them.
It emulates a cloud environment on bare metal and hands out IPs.
Its job is to provide LoadBalancer-type Service support in the Kubernetes-native way, out of the box.
Official docs: https://metallb.universe.tf/installation/
### This step is required only when mode is "ipvs"; it is not needed with iptables
[root@k8s2 service]# kubectl edit configmap -n kube-system kube-proxy
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"      ## as set earlier
ipvs:
  strictARP: true ## metallb requires strict ARP in ipvs mode
[root@k8s2 service]# kubectl -n kube-system get pod|grep kube-proxy | awk '{system("kubectl -n kube-system delete pod "$1"")}'
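To confirm the edit took effect once the new kube-proxy pods are up (a quick sanity check):
[root@k8s2 service]# kubectl -n kube-system get cm kube-proxy -o yaml | grep strictARP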
Download the deployment manifest
[root@k8s2 metallb]# wget https://raw.githubusercontent.com/metallb/metallb/v0.13.9/config/manifests/metallb-native.yaml
Change the image paths in the file to match the harbor registry layout
[root@k8s2 metallb]# vim metallb-native.yaml
...
image: metallb/speaker:v0.13.9
image: metallb/controller:v0.13.9
Push the images to harbor
[root@k8s1 ~]# docker pull quay.io/metallb/controller:v0.13.9
[root@k8s1 ~]# docker pull quay.io/metallb/speaker:v0.13.9
[root@k8s1 ~]# docker tag quay.io/metallb/controller:v0.13.9 reg.westos.org/metallb/controller:v0.13.9
[root@k8s1 ~]# docker tag quay.io/metallb/speaker:v0.13.9 reg.westos.org/metallb/speaker:v0.13.9
[root@k8s1 ~]# docker push reg.westos.org/metallb/controller:v0.13.9
[root@k8s1 ~]# docker push reg.westos.org/metallb/speaker:v0.13.9
Deploy the service
[root@k8s2 metallb]# kubectl apply -f metallb-native.yaml
[root@k8s2 metallb]# kubectl -n metallb-system get pod
NAME READY STATUS RESTARTS AGE
controller-74f844c699-gz9pt 1/1 Running 0 51s ## the controller
speaker-crr2r 1/1 Running 0 51s
speaker-kcv84 1/1 Running 0 51s
speaker-zxc6j 1/1 Running 0 51s
Configure the address pool to allocate from
[root@k8s2 metallb]# vim config.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.117.100-192.168.117.110 # change to a range in your local network that nobody else is using
---
apiVersion: metallb.io/v1beta1 ## advertise the pool's addresses
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec: ## bind the address pool; this lab has only one pool
  ipAddressPools:
  - first-pool
[root@k8s2 metallb]# kubectl apply -f config.yaml
[root@k8s2 metallb]# kubectl get svc ## the LoadBalancer's EXTERNAL-IP has changed from pending to an assigned address
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>            443/TCP        4d1h
myapp        LoadBalancer   10.107.23.134   192.168.117.100   80:32537/TCP   19m
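The assigned address should now answer from outside the cluster (illustrative, using the EXTERNAL-IP above):
[root@k8s1 ~]# curl 192.168.117.100
Hello MyApp | Version: v1 | Pod Name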
Official docs: https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal-clusters
Ingress exposes HTTP and HTTPS routes from outside the cluster to Services inside it; traffic routing is controlled by rules defined on the Ingress resource.
An Ingress controller is responsible for fulfilling the Ingress, usually through a load balancer.
Ingress does not expose arbitrary ports or protocols; to expose anything other than HTTP and HTTPS to the Internet, you typically use a Service of type NodePort or LoadBalancer.
The Ingress service in Kubernetes is a global load-balancing service set up to proxy different backend Services.
Ingress consists of two parts: the Ingress controller and the Ingress resource itself.
The Ingress controller provides the proxying that your Ingress objects describe; the common reverse-proxy projects, such as Nginx, HAProxy, Envoy and Traefik, all maintain Ingress controllers for Kubernetes.
Access path: client -> ingress -> service -> pod; the Ingress reverse-proxies the in-cluster Services.
The controller in this lab is nginx.
Use a DaemonSet combined with a nodeSelector to deploy the ingress-controller on particular nodes, then use hostNetwork to connect the pod directly to the node's network, so the service is reachable on the host's ports 80/443 (a field sketch follows these trade-offs).
The upside is the simplest possible request path, with better performance than NodePort mode.
The downside is that, because it uses the node's network and ports directly, each node can run only one ingress-controller pod.
Well suited to high-concurrency production environments.
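A minimal sketch of the fields that implement this pattern; the lab below deploys the stock deploy.yaml instead, and the label names here are assumptions:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      hostNetwork: true ## share the node's network namespace; the controller listens on the host's 80/443
      nodeSelector:
        ingress: "true" ## run only on nodes labeled ingress=true
      containers:
      - name: controller
        image: reg.westos.org/ingress-nginx/controller:v1.6.4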
Download the deployment manifest
[root@k8s2 ingress]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.6.4/deploy/static/provider/baremetal/deploy.yaml
Find the images on Docker Hub: https://hub.docker.com/
Push the images to harbor
[root@k8s1 harbor]# docker pull dyrnq/ingress-nginx-controller:v1.6.4
[root@k8s1 harbor]# docker pull dyrnq/kube-webhook-certgen:v20220916-gd32f8c343
[root@k8s1 harbor]# docker tag dyrnq/ingress-nginx-controller:v1.6.4 reg.westos.org/ingress-nginx/controller:v1.6.4
[root@k8s1 harbor]# docker tag dyrnq/kube-webhook-certgen:v20220916-gd32f8c343 reg.westos.org/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343
[root@k8s1 harbor]# docker push reg.westos.org/ingress-nginx/controller:v1.6.4
[root@k8s1 harbor]# docker push reg.westos.org/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343
Change the 3 image paths to point at the harbor registry
[root@k8s2 ingress]# vim deploy.yaml
[root@k8s2 ingress]# kubectl apply -f deploy.yaml
[root@k8s2 ingress]# kubectl -n ingress-nginx get pod
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-dg4k8 0/1 Completed 0 29m
ingress-nginx-admission-patch-25c6g 0/1 Completed 1 29m
ingress-nginx-controller-7f6f95bb7c-zvgfr 1/1 Running 0 29m ## the controller
[root@k8s2 ingress]# kubectl -n ingress-nginx get svc
NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.97.24.180     <none>        80:42082/TCP,443:39966/TCP   48s
ingress-nginx-controller-admission   ClusterIP   10.102.247.246   <none>        443/TCP                      48s
Switch the Service to LoadBalancer mode
[root@k8s2 ingress]# kubectl -n ingress-nginx edit svc ingress-nginx-controller
type: LoadBalancer
[root@k8s2 ingress]# kubectl -n ingress-nginx get svc
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.97.24.180     192.168.56.100   80:42082/TCP,443:39966/TCP   92s
ingress-nginx-controller-admission   ClusterIP      10.102.247.246   <none>           443/TCP                      92s
Either of the two documentation sites above works as a reference for the examples.
Create the Ingress rule
[root@k8s2 ingress]# vim ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: nginx ## must match the IngressClass in the cluster
  rules:
  - http:
      paths:
      - path: / ## the site root
        pathType: Prefix ## path matching mode
        backend: ## the backend: requests to the Ingress actually reach the cluster's myapp Service
          service:
            name: myapp
            port:
              number: 80
[root@k8s2 ingress]# kubectl apply -f ingress.yml
The Ingress must be in the same namespace as the Service it forwards to
Test
[root@k8s1 harbor]# curl 192.168.56.100
Hello MyApp | Version: v1 | Pod Name
Clean up
[root@k8s2 ingress]# kubectl delete -f myapp.yml ## left over from the earlier experiment; myapp.yml is the internal-access ClusterIP version
[root@k8s2 ingress]# kubectl delete -f ingress.yml
Official docs: https://v1-25.docs.kubernetes.io/zh-cn/docs/concepts/services-networking/ingress/
Create the Services
[root@k8s2 ingress]# vim myapp-v1.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp-v1
  name: myapp-v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp-v1
  template:
    metadata:
      labels:
        app: myapp-v1
    spec:
      containers:
      - image: myapp:v1
        name: myapp-v1
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp-v1
  name: myapp-v1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp-v1
  type: ClusterIP
[root@k8s2 ingress]# kubectl apply -f myapp-v1.yml
[root@k8s2 ingress]# vim myapp-v2.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp-v2
  name: myapp-v2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp-v2
  template:
    metadata:
      labels:
        app: myapp-v2
    spec:
      containers:
      - image: myapp:v2
        name: myapp-v2
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp-v2
  name: myapp-v2
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: myapp-v2
  type: ClusterIP
[root@k8s2 ingress]# kubectl apply -f myapp-v2.yml
[root@k8s2 ingress]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   5d20h
myapp-v1     ClusterIP   10.97.0.186      <none>        80/TCP    13m
myapp-v2     ClusterIP   10.107.186.198   <none>        80/TCP    13m
Create the Ingress
[root@k8s2 ingress]# vim ingress1.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: / ## rewrite requests to the root path
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.westos.org
    http:
      paths:
      - path: /v1
        pathType: Prefix
        backend:
          service:
            name: myapp-v1
            port:
              number: 80
      - path: /v2
        pathType: Prefix
        backend:
          service:
            name: myapp-v2
            port:
              number: 80
[root@k8s2 ingress]# kubectl apply -f ingress1.yml
Test
[root@k8s1 ~]# vim /etc/hosts
...
192.168.56.100 myapp.westos.org myapp1.westos.org myapp2.westos.org
[root@k8s1 harbor]# curl myapp.westos.org/v1
Hello MyApp | Version: v1 | Pod Name
[root@k8s1 harbor]# curl myapp.westos.org/v2
Hello MyApp | Version: v2 | Pod Name
Note: v1 and v2 are told apart by the Ingress: different paths route to different backend Services, but the path itself is not carried through to the backend web server, where it does not actually exist. This approach suits redirecting to different business domains inside the cluster; the same effect can also be achieved with different hostnames.
Clean up
[root@k8s2 ingress]# kubectl delete -f ingress1.yml
[root@k8s2 ingress]# vim ingress2.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp1.westos.org
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-v1
            port:
              number: 80
  - host: myapp2.westos.org
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-v2
            port:
              number: 80
[root@k8s2 ingress]# kubectl apply -f ingress2.yml
Test
[root@k8s1 harbor]# curl myapp1.westos.org
Hello MyApp | Version: v1 | Pod Name
[root@k8s1 harbor]# curl myapp2.westos.org
Hello MyApp | Version: v2 | Pod Name
Clean up
[root@k8s2 ingress]# kubectl delete -f ingress2.yml
Create a certificate
[root@k8s2 ingress]# openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
[root@k8s2 ingress]# kubectl create secret tls tls-secret --key tls.key --cert tls.crt
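A quick check that the secret carries both halves of the key pair:
[root@k8s2 ingress]# kubectl describe secret tls-secret ## Data should list tls.crt and tls.key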
[root@k8s2 ingress]# vim ingress3.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-tls
spec:
  tls:
  - hosts:
    - myapp.westos.org ## domains covered by TLS
    secretName: tls-secret ## the secret created above
  ingressClassName: nginx
  rules:
  - host: myapp.westos.org
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-v1
            port:
              number: 80
[root@k8s2 ingress]# kubectl apply -f ingress3.yml
Test
[root@k8s1 ~]# curl -k https://myapp.westos.org ## -k skips certificate verification
Hello MyApp | Version: v1 | Pod Name
Create the authentication file
[root@k8s2 ingress]# yum install -y httpd-tools
[root@k8s2 ingress]# htpasswd -c auth gong
New password: ***
Re-type new password: ***
Adding password for user gong
[root@k8s2 ingress]# cat auth
gong:$apr1$n3vTWwt2$cpjPPSjieF95IhWfOkILN/
[root@k8s2 ingress]# kubectl create secret generic basic-auth --from-file=auth
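The secret stores the htpasswd file base64-encoded under the key auth; to inspect it (a sketch):
[root@k8s2 ingress]# kubectl get secret basic-auth -o jsonpath='{.data.auth}' | base64 -d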
[root@k8s2 ingress]# vim ingress3.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-tls
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic ## authentication type
    nginx.ingress.kubernetes.io/auth-secret: basic-auth ## the secret holding the credentials
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - gong' ## realm message shown in the login prompt
spec:
  tls:
  - hosts:
    - myapp.westos.org
    secretName: tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp.westos.org
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-v1
            port:
              number: 80
[root@k8s2 ingress]# kubectl apply -f ingress3.yml
Test: the browser will pop up a login dialog
[root@k8s1 ~]# curl -k https://myapp.westos.org -u gong:westos
Hello MyApp | Version: v1 | Pod Name
Example 1:
[root@k8s2 ingress]# cat ingress3.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-tls
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - gong'
    nginx.ingress.kubernetes.io/app-root: /hostname.html ## redirect the root to this path; the path must actually exist on the backend
spec:
  tls:
  - hosts:
    - myapp.westos.org
    secretName: tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp.westos.org
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-v1
            port:
              number: 80
[root@k8s2 ingress]# kubectl apply -f ingress3.yml
Example 2:
[root@k8s2 ingress]# vim ingress3.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-tls
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - gong'
    #nginx.ingress.kubernetes.io/app-root: /hostname.html
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
  - hosts:
    - myapp.westos.org
    secretName: tls-secret
  ingressClassName: nginx
  rules:
  - host: myapp.westos.org
    http:
      paths:
      - path: / ## plain domain access still works
        pathType: Prefix
        backend:
          service:
            name: myapp-v1
            port:
              number: 80
      - path: /westos(/|$)(.*) ## the westos prefix is required to hit this rule; nginx rewrites the path, the backend has no such directory
        pathType: Prefix
        backend:
          service:
            name: myapp-v1
            port:
              number: 80
[root@k8s2 ingress]# kubectl apply -f ingress3.yml
Test:
Access directly, without any prefix:
[root@k8s1 ~]# curl -k https://myapp.westos.org -u gong:westos
Hello MyApp | Version: v1 | Pod Name
With the westos prefix, triggering the rewrite rule:
[root@k8s1 ~]# curl -k https://myapp.westos.org/westos/hostname.html -u gong:westos
myapp-v1-bf4fc4b85-x2spw
[root@k8s1 ~]# curl -k https://myapp.westos.org/hostname.html -u gong:westos ## without the westos prefix
Hello MyApp | Version: v1 | Pod Name
Clean up
[root@k8s2 ingress]# kubectl delete -f ingress3.yml
The difference between canary and blue-green releases: https://baijiahao.baidu.com/s?id=1764154544113604593&wfr=spider&for=pc
Advanced traffic management with ingress
Receive normal traffic:
[root@k8s2 ingress]# vim ingress4.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-v1-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.westos.org
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: myapp-v1
            port:
              number: 80
[root@k8s2 ingress]# kubectl apply -f ingress4.yml
[root@k8s2 ingress]# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
myapp-v1-ingress nginx myapp.westos.org 80 2s
Receive traffic that carries a specific header:
[root@k8s2 ingress]# vim ingress5.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true" ## enable canary
    nginx.ingress.kubernetes.io/canary-by-header: stage ## header key to match; key and value are user-defined
    nginx.ingress.kubernetes.io/canary-by-header-value: gray ## header value to match
  name: myapp-v2-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.westos.org
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: myapp-v2
            port:
              number: 80
[root@k8s2 ingress]# kubectl apply -f ingress5.yml
Test
[root@k8s1 ~]# curl myapp.westos.org
Hello MyApp | Version: v1 | Pod Name
Requests whose headers carry the stage: gray pair:
[root@k8s1 ~]# curl -H "stage: gray" myapp.westos.org
Hello MyApp | Version: v2 | Pod Name
[root@k8s2 ingress]# vim ingress5.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    #nginx.ingress.kubernetes.io/canary-by-header: stage
    #nginx.ingress.kubernetes.io/canary-by-header-value: gray
    nginx.ingress.kubernetes.io/canary-weight: "10" ## send 10% of requests to v2
    nginx.ingress.kubernetes.io/canary-weight-total: "100" ## out of a total weight of 100
  name: myapp-v2-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.westos.org
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: myapp-v2
            port:
              number: 80
[root@k8s2 ingress]# kubectl apply -f ingress5.yml
Test
[root@k8s1 ~]# vim ingress.sh
#!/bin/bash
# fire 100 requests and count how many land on v1 vs v2
v1=0
v2=0
for (( i=0; i<100; i++ ))
do
    response=`curl -s myapp.westos.org | grep -c v1`
    v1=`expr $v1 + $response`
    v2=`expr $v2 + 1 - $response`
done
echo "v1:$v1, v2:$v2"
[root@k8s1 ~]# sh ingress.sh
v1:90, v2:10
Clean up
[root@k8s2 ingress]# kubectl delete -f ingress5.yml
[root@k8s2 ingress]# vim ingress6.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  name: rewrite-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.westos.org
    http:
      paths:
      - path: /user/(.*) ## rewrite and route to v1
        pathType: Prefix
        backend:
          service:
            name: myapp-v1
            port:
              number: 80
      - path: /order/(.*) ## rewrite and route to v2
        pathType: Prefix
        backend:
          service:
            name: myapp-v2
            port:
              number: 80
[root@k8s2 ingress]# kubectl apply -f ingress6.yml
Test
[root@k8s1 ~]# curl myapp.westos.org
Hello MyApp | Version: v1 | Pod Name
[root@k8s1 ~]# curl myapp.westos.org/user/hostname.html ## test v1 (/user)
myapp-v1-bf4fc4b85-9m555
[root@k8s1 ~]# curl myapp.westos.org/order/hostname.html ## test v2 (/order)
myapp-v2-66bc7954cb-w7pgb
Clean up
[root@k8s2 ingress]# kubectl delete -f ingress6.yml