Table of Contents
1. Introduction to Services
2. Enabling kube-proxy's IPVS mode
3. Creating a Service (NodePort)
4. The DNS add-on Service
5. Rolling updates of pods
6. Creating a Service (LoadBalancer)
7. Creating a Service (ExternalName)
8. Ingress
1) Ingress configuration
2) Ingress TLS configuration
3) Ingress authentication
4) Ingress path rewriting
9. k8s networking
1) The flannel network
2) The calico network plugin
3) Allowing only specified pods to access a service
4) Allowing only a specified namespace to access a service
A Service can be seen as the external access point for a group of Pods that provide the same service. With a Service, applications get service discovery and load balancing.
By default a Service only provides layer-4 load balancing; it has no layer-7 capability (that can be added with Ingress).
Service types:
ClusterIP: the default; a virtual IP assigned automatically by k8s, reachable only inside the cluster.
NodePort: exposes the Service on a port of each Node; a request to any NodeIP:nodePort is routed to the ClusterIP.
LoadBalancer: builds on NodePort and uses the cloud provider to create an external load balancer that forwards requests to NodeIP:NodePort; only usable on cloud servers.
ExternalName: forwards the service to a specified domain name via a DNS CNAME record (set with spec.externalName).
A Service is implemented jointly by the kube-proxy component and iptables.
When kube-proxy implements Services with iptables, it has to install a large number of iptables rules on the host; with many Pods, constantly refreshing these rules consumes a lot of CPU.
Services in IPVS mode let a k8s cluster support Pods at a much larger scale.
Make sure the image registry is reachable; a local registry is used here.
[root@node22 yaml]# kubectl expose deployment deployment-example --port=80 --target-port=80
service/deployment-example exposed    # port exposed
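For reference, the expose command above generates roughly the following Service object (a sketch; the selector comes from the deployment's pod labels, which are app=nginx here):

apiVersion: v1
kind: Service
metadata:
  name: deployment-example
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80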
[root@node22 yaml]# kubectl get svc
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
deployment-example   ClusterIP   10.110.214.213   <none>        80/TCP    25s
kubernetes           ClusterIP   10.96.0.1        <none>        443/TCP   17h
[root@node22 yaml]# kubectl describe svc deployment-example
Name: deployment-example
Namespace: default
Labels:            <none>
Annotations:       <none>
Selector: app=nginx
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.110.214.213
IPs: 10.110.214.213
Port:              <unset>  80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.1.84:80,10.244.1.85:80,10.244.2.12:80
Session Affinity: None
Events:            <none>
[root@node22 yaml]# curl 10.110.214.213
Hello MyApp | Version: v1 | Pod Name
[root@node22 yaml]# curl 10.110.214.213/hostname.html
deployment-example-6666f57846-bbnvs
[root@node22 yaml]# curl 10.110.214.213/hostname.html
deployment-example-6666f57846-lnl9h
[root@node22 yaml]# curl 10.110.214.213/hostname.html
deployment-example-6666f57846-lnl9h
[root@node22 yaml]# curl 10.110.214.213/hostname.html    # requests round-robin across the pods
deployment-example-6666f57846-fq86m
Install the ipvsadm package on every node. After installation:
[root@node22 yaml]# yum install -y ipvsadm.x86_64
[root@node22 yaml]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
[root@node22 yaml]# kubectl get svc
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
deployment-example   ClusterIP   10.110.214.213   <none>        80/TCP    10m
kubernetes           ClusterIP   10.96.0.1        <none>        443/TCP   17h
[root@node22 yaml]# curl 10.110.214.213
Hello MyApp | Version: v1 | Pod Name
[root@node22 yaml]# curl 10.110.214.213/hostname.html
deployment-example-6666f57846-bbnvs
[root@node22 yaml]# curl 10.110.214.213/hostname.html
deployment-example-6666f57846-bbnvs
[root@node22 yaml]# curl 10.110.214.213/hostname.html
deployment-example-6666f57846-bbnvs
[root@node22 yaml]# curl 10.110.214.213/hostname.html
deployment-example-6666f57846-lnl9h
[root@node22 ~]# kubectl -n kube-system get cm    # list the ConfigMaps
NAME DATA AGE
coredns 1 17h
extension-apiserver-authentication 6 17h
kube-proxy 2 17h
kube-root-ca.crt 1 17h
kubeadm-config 1 17h
kubelet-config-1.23 1 17h
[root@node22 ~]# kubectl -n kube-system edit cm kube-proxy    # edit the config and set the mode to ipvs; when left empty, iptables is the default
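The relevant fragment of the kube-proxy ConfigMap looks like this; only the mode field changes, everything else stays as generated:

config.conf:
  ...
  mode: "ipvs"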
After the change the configuration must be reloaded. Since kube-proxy is managed by a DaemonSet, it is enough to delete the existing pods; the replacements read the new configuration when they are recreated.
kube-proxy then uses the Linux IPVS module to schedule a Service's pods round-robin (rr).
[root@node22 ~]# kubectl get pod -n kube-system |grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'    # reload
pod "kube-proxy-5qtrm" deleted
pod "kube-proxy-g6mp9" deleted
pod "kube-proxy-rmvxl" deleted
[root@node22 ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-7b56f6bc55-2pwnh 1/1 Running 1 (15h ago) 17h
coredns-7b56f6bc55-g458w 1/1 Running 1 (15h ago) 17h
etcd-node22 1/1 Running 1 (15h ago) 17h
kube-apiserver-node22 1/1 Running 1 (15h ago) 17h
kube-controller-manager-node22 1/1 Running 4 (7m44s ago) 17h
kube-proxy-6rsvk 1/1 Running 0 48s
kube-proxy-9z246 1/1 Running 0 57s
kube-proxy-c8ggx 1/1 Running 0 36s
kube-scheduler-node22 1/1 Running 2 17h
[root@node22 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.96.0.1:443 rr
-> 192.168.0.22:6443 Masq 1 0 0
TCP 10.96.0.10:53 rr
-> 10.244.0.4:53 Masq 1 0 0
-> 10.244.0.5:53 Masq 1 0 0
TCP 10.96.0.10:9153 rr
-> 10.244.0.4:9153 Masq 1 0 0
-> 10.244.0.5:9153 Masq 1 0 0
TCP 10.110.214.213:80 rr
-> 10.244.1.84:80 Masq 1 0 0
-> 10.244.1.85:80 Masq 1 0 0
-> 10.244.2.12:80 Masq 1 0 0
UDP 10.96.0.10:53 rr
-> 10.244.0.4:53 Masq 1 0 0
-> 10.244.0.5:53 Masq 1 0 0
[root@node22 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-example-6666f57846-bbnvs   1/1   Running   0   29m   10.244.2.12   node44   <none>   <none>
deployment-example-6666f57846-fq86m   1/1   Running   0   29m   10.244.1.84   node33   <none>   <none>
deployment-example-6666f57846-lnl9h   1/1   Running   0   29m   10.244.1.85   node33   <none>   <none>
In IPVS mode, after a Service is created kube-proxy adds a dummy interface, kube-ipvs0, on the host and assigns the service IPs to it.
[root@node22 ~]# ip addr show kube-ipvs0
9: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
link/ether 8e:0a:c9:a1:6b:3b brd ff:ff:ff:ff:ff:ff
inet 10.96.0.1/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.96.0.10/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.110.214.213/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
The first way to reach a Service from outside the cluster is NodePort, which binds a port on every node for external access.
Everything above used the ClusterIP type; now change the manifest:
[root@node22 yaml]# vim clusterip-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: nginx
  type: ClusterIP
  #clusterIP: None
[root@node22 yaml]# kubectl apply -f clusterip-svc.yaml
service/web-service created
[root@node22 yaml]# kubectl get svc
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP   17h
web-service   ClusterIP   10.111.108.78   <none>        80/TCP    8s
[root@node22 yaml]# kubectl describe svc web-service
Name: web-service
Namespace: default
Labels:            <none>
Annotations:       <none>
Selector: app=nginx
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.111.108.78
IPs: 10.111.108.78
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.1.84:80,10.244.1.85:80,10.244.2.12:80
Session Affinity: None
Events:            <none>
[root@node22 yaml]# vim nodeport-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: nginx
  type: NodePort
  #clusterIP: None
[root@node22 yaml]# kubectl apply -f nodeport-svc.yaml
service/web-service configured
[root@node22 yaml]# kubectl get svc
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP        17h
web-service   NodePort    10.111.108.78   <none>        80:30510/TCP   4m1s
Kubernetes ships a DNS add-on as a Service.
Inside the cluster, services can be addressed directly by DNS records, with no VIP needed.
[root@node22 yaml]# vim clusterip-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: nginx
  #type: ClusterIP
  clusterIP: None
[root@node22 yaml]# kubectl apply -f clusterip-svc.yaml
[root@node22 yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
deployment-example-6666f57846-bbnvs 1/1 Running 0 45m
deployment-example-6666f57846-fq86m 1/1 Running 0 45m
deployment-example-6666f57846-lnl9h 1/1 Running 0 45m
[root@node22 yaml]# kubectl get svc
NAME          TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes    ClusterIP   10.96.0.1    <none>        443/TCP   17h
web-service   ClusterIP   None         <none>        80/TCP    41s
No VIP is allocated this time (a headless service), but the service can still be reached through the DNS records that resolve to the pod addresses;
[root@node22 yaml]# dig -t A web-service.default.svc.cluster.local. @10.96.0.10
; <<>> DiG 9.9.4-RedHat-9.9.4-72.el7 <<>> -t A web-service.default.svc.cluster.local. @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40346
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;web-service.default.svc.cluster.local. IN A
;; ANSWER SECTION:
web-service.default.svc.cluster.local. 30 IN A 10.244.1.85
web-service.default.svc.cluster.local. 30 IN A 10.244.2.12
web-service.default.svc.cluster.local. 30 IN A 10.244.1.84
;; Query time: 294 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Thu Aug 25 15:18:26 CST 2022
;; MSG SIZE rcvd: 225
With the headless service above, the DNS records track pod IP changes automatically through a rolling update;
[root@node22 yaml]# vim deployment.yaml    # update the pods to v2
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: myapp:v2
[root@node22 yaml]# kubectl apply -f deployment.yaml
deployment.apps/deployment-example configured
[root@node22 yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
deployment-example-77cd76f9c5-jwb5d 1/1 Running 0 44s
deployment-example-77cd76f9c5-nl6ft 1/1 Running 0 39s
deployment-example-77cd76f9c5-v4txt 1/1 Running 0 47s
[root@node22 yaml]# dig -t A web-service.default.svc.cluster.local. @10.96.0.10
; <<>> DiG 9.9.4-RedHat-9.9.4-72.el7 <<>> -t A web-service.default.svc.cluster.local. @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44644
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;web-service.default.svc.cluster.local. IN A
;; ANSWER SECTION:
web-service.default.svc.cluster.local. 30 IN A 10.244.1.87
web-service.default.svc.cluster.local. 30 IN A 10.244.1.88
web-service.default.svc.cluster.local. 30 IN A 10.244.2.15
;; Query time: 10 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Thu Aug 25 15:21:55 CST 2022
;; MSG SIZE rcvd: 225
NodePort: widening the allowed port range
[root@node22 yaml]# vim nodeport-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 33333
  selector:
    app: nginx
  type: NodePort
  #clusterIP: None
[root@node22 yaml]# kubectl apply -f nodeport-svc.yaml
The Service "web-service" is invalid: spec.ports[0].nodePort: Invalid value: 33333: provided port is not in the valid range. The range of valid ports is 30000-32767
[root@node22 yaml]# cd /etc/kubernetes/
[root@node22 kubernetes]# ls
admin.conf controller-manager.conf kubelet.conf manifests pki scheduler.conf
[root@node22 kubernetes]# cd manifests/
[root@node22 manifests]# vim kube-apiserver.yaml
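The edit itself is not shown; the usual change is to add the --service-node-port-range flag to the command section of the static pod manifest (the upper bound here is an assumption, it just has to cover 33333):

    - --service-node-port-range=30000-40000

kubelet notices the manifest change and restarts the kube-apiserver static pod automatically, which is why the pods are simply listed afterwards to confirm the cluster is back.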
[root@node22 manifests]# kubectl get pod
NAME READY STATUS RESTARTS AGE
deployment-example-77cd76f9c5-jwb5d 1/1 Running 0 55m
deployment-example-77cd76f9c5-nl6ft 1/1 Running 0 55m
deployment-example-77cd76f9c5-v4txt 1/1 Running 0 55m
[root@node22 yaml]# kubectl apply -f nodeport-svc.yaml
service/web-service configured
[root@node22 yaml]# kubectl get svc
NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes    ClusterIP   10.96.0.1      <none>        443/TCP        18h
web-service   NodePort    10.100.2.221   <none>        80:33333/TCP   53m
The second way to reach a Service from outside is meant for Kubernetes services on a public cloud: a Service of type LoadBalancer.
[root@node22 yaml]# vim loadbalancer-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 33333
  selector:
    app: nginx
  type: LoadBalancer
[root@node22 yaml]# kubectl apply -f loadbalancer-svc.yaml
service/web-service configured
[root@node22 yaml]# kubectl get svc
NAME          TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes    ClusterIP      10.96.0.1      <none>        443/TCP        19h
web-service   LoadBalancer   10.100.2.221   <pending>     80:33333/TCP   61m
On top of NodePort, this asks the cloud for an external IP. Without a cloud provider behind the cluster the EXTERNAL-IP stays <pending>; in a cloud environment the provider's driver allocates an IP for external access.
On bare metal, who hands out that IP? MetalLB.
Official site: https://metallb.universe.tf/installation/
Set ipvs mode (MetalLB's installation docs also require strictARP: true in the kube-proxy ConfigMap when kube-proxy runs in ipvs mode):
[root@node22 yaml]# kubectl edit configmaps -n kube-system kube-proxy
configmap/kube-proxy edited
[root@node22 yaml]# kubectl get pod -n kube-system |grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-6rsvk" deleted
pod "kube-proxy-9z246" deleted
pod "kube-proxy-c8ggx" deleted
Deployment: first download the resource manifest,
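The download itself is not shown; for the v0.13.4 images used below it would be along these lines (the URL is an assumption based on the MetalLB release layout):

[root@node22 yaml]# wget https://raw.githubusercontent.com/metallb/metallb/v0.13.4/config/manifests/metallb-native.yaml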
[root@node22 yaml]# kubectl apply -f metallb-native.yaml
[root@node22 yaml]# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/deployment-example-77cd76f9c5-jwb5d 1/1 Running 0 85m
pod/deployment-example-77cd76f9c5-nl6ft 1/1 Running 0 85m
pod/deployment-example-77cd76f9c5-v4txt 1/1 Running 0 85m
[root@node22 yaml]# kubectl -n metallb-system get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
controller-5c97f5f498-cb6fb   1/1   Running   0   3m56s   10.244.1.89    node33   <none>   <none>
speaker-2mlfr                 1/1   Running   0   3m56s   192.168.0.33   node33   <none>   <none>
speaker-jkh2b                 1/1   Running   0   3m57s   192.168.0.22   node22   <none>   <none>
speaker-s66q5                 1/1   Running   0   3m56s   192.168.0.44   node44   <none>   <none>
The images referenced in the manifest must be pulled in advance and pushed to the private registry;
[root@node11 harbor]# docker pull quay.io/metallb/speaker:v0.13.4
[root@node11 harbor]# docker pull quay.io/metallb/controller:v0.13.4
[root@node11 harbor]# docker tag quay.io/metallb/controller:v0.13.4 reg.westos.org/metallb/controller:v0.13.4
[root@node11 harbor]# docker push reg.westos.org/metallb/controller:v0.13.4
[root@node11 harbor]# docker tag quay.io/metallb/speaker:v0.13.4 reg.westos.org/metallb/speaker:v0.13.4
[root@node11 harbor]# docker push reg.westos.org/metallb/speaker:v0.13.4
Once everything above is running, write a file that sets the IP range MetalLB may allocate:
[root@node22 yaml]# vim config.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 192.168.0.111-192.168.0.200
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:
  ipAddressPools:
  - default
[root@node22 yaml]# kubectl apply -f config.yaml
ipaddresspool.metallb.io/default created
[root@node22 yaml]# kubectl -n metallb-system get ipaddresspools.metallb.io
NAME AGE
default 103s
[root@node22 yaml]# kubectl get svc
NAME          TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
kubernetes    ClusterIP      10.96.0.1      <none>          443/TCP        19h
web-service   LoadBalancer   10.100.2.221   192.168.0.111   80:33333/TCP   91m
The VIP has been allocated and is reachable directly from outside:
[root@localhost ~]# ping 192.168.0.111
PING 192.168.0.111 (192.168.0.111) 56(84) bytes of data.
64 bytes from 192.168.0.111: icmp_seq=1 ttl=64 time=0.500 ms
64 bytes from 192.168.0.111: icmp_seq=2 ttl=64 time=0.258 ms
--- 192.168.0.111 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.258/0.361/0.500/0.102 ms
[root@localhost ~]# curl 192.168.0.111
Hello MyApp | Version: v2 | Pod Name
[root@localhost ~]# curl 192.168.0.111/hostname.html
deployment-example-77cd76f9c5-nl6ft
[root@localhost ~]# curl 192.168.0.111/hostname.html
deployment-example-77cd76f9c5-v4txt
[root@localhost ~]# curl 192.168.0.111/hostname.html
deployment-example-77cd76f9c5-8qzfr
The third way in from outside is ExternalName, which resolves a name rather than allocating an IP. It is commonly used for resources managed outside the cluster.
[root@node22 yaml]# vim ex-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: test.westos.org
[root@node22 yaml]# kubectl apply -f ex-svc.yaml
service/my-service created
[root@node22 yaml]# kubectl get svc
NAME          TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)        AGE
kubernetes    ClusterIP      10.96.0.1       <none>            443/TCP        21h
my-service    ExternalName   <none>          test.westos.org   <none>         13s
web-service   LoadBalancer   10.102.58.152   192.168.0.111     80:33333/TCP   120m
[root@node22 yaml]# dig -t A my-service.default.svc.cluster.local. @10.96.0.10
; <<>> DiG 9.9.4-RedHat-9.9.4-72.el7 <<>> -t A my-service.default.svc.cluster.local. @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 39540
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;my-service.default.svc.cluster.local. IN A
;; ANSWER SECTION:
my-service.default.svc.cluster.local. 30 IN CNAME test.westos.org.
;; Query time: 1044 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Thu Aug 25 19:00:58 CST 2022
;; MSG SIZE rcvd: 130
Suppose the external resource's domain name changes:
[root@node22 yaml]# vim ex-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: www.westos.org
[root@node22 yaml]# kubectl apply -f ex-svc.yaml
service/my-service configured
[root@node22 yaml]# dig -t A my-service.default.svc.cluster.local. @10.96.0.10
; <<>> DiG 9.9.4-RedHat-9.9.4-72.el7 <<>> -t A my-service.default.svc.cluster.local. @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20138
;; flags: qr aa rd; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;my-service.default.svc.cluster.local. IN A
;; ANSWER SECTION:
my-service.default.svc.cluster.local. 30 IN CNAME www.westos.org.
www.westos.org. 30 IN CNAME westos.gotoip4.com.
westos.gotoip4.com. 30 IN CNAME ssl13.my.800cdn.com.
ssl13.my.800cdn.com. 30 IN A 211.149.140.190
;; Query time: 108 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Thu Aug 25 19:03:11 CST 2022
So even when the external domain changes, the svc stays the same: inside the cluster everything can point at the svc address, and only the mapping needs updating, not the consumers. Beyond allocating an address this way, a Service can also be given a public IP directly.
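That direct assignment is the spec.externalIPs field; a minimal sketch (the address is an assumption and must actually route to a node):

apiVersion: v1
kind: Service
metadata:
  name: ext-ip-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
  externalIPs:
  - 192.168.0.100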
An Ingress in Kubernetes is a cluster-wide load-balancing service set up to proxy for different backend Services.
Ingress consists of two parts: the Ingress controller and the Ingress resources.
The Ingress controller provides the actual proxying according to the Ingress objects you define. The common reverse proxies such as Nginx, HAProxy, Envoy and Traefik all maintain dedicated Ingress controllers for Kubernetes.
官网:https://kubernetes.github.io/ingress-nginx/
Apply the ingress controller manifest:
[root@node22 ~]# mkdir ingress
[root@node22 ~]# cd ingress/
[root@node22 ingress]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/baremetal/deploy.yaml
The manifest requires two images; here pre-packaged images are loaded directly and pushed to the private registry.
[root@node11 ~]# docker load -i ingress-nginx-v1.3.0.tar
[root@node11 ~]# docker push reg.westos.org/ingress-nginx/controller:v1.3.0
[root@node11 ~]# docker push reg.westos.org/ingress-nginx/kube-webhook-certgen:v1.1.1
Then point the image references in the file at the private registry and deploy:
[root@node22 ingress]# vim deploy.yaml
433 image: ingress-nginx/controller:v1.3.0
530 image: ingress-nginx/kube-webhook-certgen:v1.1.1
579 image: ingress-nginx/kube-webhook-certgen:v1.1.1
[root@node22 ingress]# kubectl apply -f deploy.yaml
With ingress deployed, inspect it: there is a NodePort service for access from outside the cluster and a ClusterIP service (admission) for access inside it;
[root@node22 ingress]# kubectl -n ingress-nginx get all
NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-admission-create-skn49 0/1 ContainerCreating 0 37s
pod/ingress-nginx-admission-patch-lft5n 0/1 ContainerCreating 0 37s
pod/ingress-nginx-controller-bffcff6df-ztspw 0/1 ContainerCreating 0 37s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller             NodePort    10.111.97.28    <none>   80:55508/TCP,443:61940/TCP   41s
service/ingress-nginx-controller-admission   ClusterIP   10.109.143.52   <none>   443/TCP                      41s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 0/1 1 0 41s
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-nginx-controller-bffcff6df 1 1 0 41s
NAME COMPLETIONS DURATION AGE
job.batch/ingress-nginx-admission-create 0/1 37s 40s
job.batch/ingress-nginx-admission-patch 0/1 37s 37s
[root@node22 ingress]# kubectl -n ingress-nginx get pod
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-skn49 0/1 Completed 0 6m6s
ingress-nginx-admission-patch-lft5n 0/1 Completed 1 6m6s
ingress-nginx-controller-bffcff6df-ztspw 1/1 Running 0 6m6s
[root@node22 ingress]# kubectl -n ingress-nginx describe svc ingress-nginx-controller
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.3.0
Annotations:               <none>
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.111.97.28
IPs: 10.111.97.28
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 55508/TCP
Endpoints: 10.244.1.93:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 61940/TCP
Endpoints: 10.244.1.93:443
Session Affinity: None
External Traffic Policy: Cluster
Events:                    <none>
[root@node22 ingress]# kubectl -n ingress-nginx get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller             NodePort    10.111.97.28    <none>   80:55508/TCP,443:61940/TCP   7m21s
ingress-nginx-controller-admission   ClusterIP   10.109.143.52   <none>   443/TCP                      7m21s
[root@localhost ~]# curl 192.168.0.22:55508
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
To get a real response there must be a usable svc behind the ingress; clean up the leftovers first:
[root@node22 ingress]# kubectl delete svc web-service
service "web-service" deleted
[root@node22 ingress]# kubectl delete svc my-service
service "my-service" deleted
[root@node22 ingress]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   38h
[root@node22 ingress]# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
deployment-example 3/3 3 3 20h
[root@node22 ingress]# kubectl delete deployments.apps deployment-example
deployment.apps "deployment-example" deleted
[root@node22 ingress]# kubectl delete pod deployment-example-77cd76f9c5-jwb5d --force
[root@node22 ingress]# kubectl -n ingress-nginx edit svc ingress-nginx-controller
[root@node22 ingress]# kubectl -n ingress-nginx get svc    # the controller now has an external address
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.111.97.28    192.168.0.111   80:55508/TCP,443:61940/TCP   129m
ingress-nginx-controller-admission   ClusterIP      10.109.143.52   <none>          443/TCP                      129m
[root@localhost ~]# curl 192.168.0.111    # reachable from outside without a port now
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
[root@node22 ingress]# vim ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
[root@node22 ingress]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   40h
[root@node22 ingress]# kubectl get pod
No resources found in default namespace.
[root@node22 ingress]# cd
[root@node22 ~]# cd yaml/
Create a deployment and a matching labeled svc for the ingress to point at:
[root@node22 yaml]# cp deployment.yaml ../ingress/
[root@node22 yaml]# cat clusterip-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 80
  selector:
    app: nginx
  type: ClusterIP
[root@node22 yaml]# cd
[root@node22 ~]# cd ingress/
[root@node22 ingress]# vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: myapp:v1
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: nginx
  type: ClusterIP
[root@node22 ingress]# kubectl apply -f deployment.yaml
deployment.apps/myapp-1 created
service/web-service created
[root@node22 ingress]# kubectl get pod
NAME READY STATUS RESTARTS AGE
myapp-1-6666f57846-5n8v7 1/1 Running 0 3m2s
myapp-1-6666f57846-hvh4k 1/1 Running 0 3m2s
myapp-1-6666f57846-vx556 1/1 Running 0 3m2s
[root@node22 ingress]# kubectl get svc
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP   40h
web-service   ClusterIP   10.109.238.119   <none>        80/TCP    17s
[root@node22 ingress]# kubectl describe svc web-service
Name: web-service
Namespace: default
Labels:            <none>
Annotations:       <none>
Selector: app=nginx
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.109.238.119
IPs: 10.109.238.119
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.1.94:80,10.244.1.95:80,10.244.1.96:80
Session Affinity: None
Events:            <none>
[root@node22 ingress]# kubectl apply -f ingress.yaml
ingress.networking.k8s.io/my-ingress created
[root@localhost ~]# curl 192.168.0.111
Hello MyApp | Version: v1 | Pod Name
[root@localhost ~]# curl 192.168.0.111/hostname.html
myapp-1-6666f57846-vx556
[root@localhost ~]# curl 192.168.0.111/hostname.html
myapp-1-6666f57846-hvh4k
[root@localhost ~]# curl 192.168.0.111/hostname.html
myapp-1-6666f57846-5n8v7
Now create a second svc:
[root@node22 ingress]# kubectl delete -f ingress.yaml
ingress.networking.k8s.io "my-ingress" deleted
[root@node22 ingress]# cp deployment.yaml deployment-2.yaml
[root@node22 ingress]# vim deployment-2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v2
---
apiVersion: v1
kind: Service
metadata:
  name: my-svc
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: myapp
  type: ClusterIP
[root@node22 ingress]# kubectl apply -f deployment-2.yaml
deployment.apps/myapp-2 created
service/my-svc created
[root@node22 ingress]# kubectl get svc
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP   2d21h
my-svc        ClusterIP   10.108.185.37    <none>        80/TCP    39s
web-service   ClusterIP   10.109.238.119   <none>        80/TCP    28h
Now map different domain names to different svcs.
First add name resolution on the client:
[root@localhost ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.6 localhost
192.168.0.11 node11
192.168.0.22 node22
192.168.0.33 node33
192.168.0.111 www1.westos.org www2.westos.org
[root@node22 ingress]# vim ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: www1.westos.org
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: web-service
            port:
              number: 80
  - host: www2.westos.org
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: my-svc
            port:
              number: 80
[root@node22 ingress]# kubectl apply -f ingress.yaml
ingress.networking.k8s.io/name-virtual-host-ingress created
Result:
[root@localhost ~]# curl www1.westos.org
Hello MyApp | Version: v1 | Pod Name
[root@localhost ~]# curl www2.westos.org
Hello MyApp | Version: v2 | Pod Name
Before enabling TLS, generate a key and a self-signed certificate:
[root@node22 ingress]# openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
Generating a 2048 bit RSA private key
......................................................................+++
.............................................+++
writing new private key to 'tls.key'
-----
[root@node22 ingress]# kubectl create secret tls tls-secret --key tls.key --cert tls.crt
secret/tls-secret created
[root@node22 ingress]# kubectl get secret
NAME TYPE DATA AGE
default-token-pf6bb kubernetes.io/service-account-token 3 2d21h
tls-secret kubernetes.io/tls 2 19s
Enable TLS for the site www1.westos.org:
[root@node22 ingress]# vim ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - www1.westos.org
    secretName: tls-secret
  rules:
  - host: www1.westos.org
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: web-service
            port:
              number: 80
  - host: www2.westos.org
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: my-svc
            port:
              number: 80
[root@node22 ingress]# kubectl apply -f ingress.yaml
ingress.networking.k8s.io/name-virtual-host-ingress configured
[root@node22 ingress]# kubectl describe ingress name-virtual-host-ingress
Name: name-virtual-host-ingress
Labels:           <none>
Namespace:        default
Address:          192.168.0.33
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
tls-secret terminates www1.westos.org
Rules:
Host Path Backends
---- ---- --------
www1.westos.org
/ web-service:80 (10.244.1.104:80,10.244.1.105:80,10.244.1.106:80)
www2.westos.org
/ my-svc:80 (10.244.1.107:80,10.244.1.108:80,10.244.2.16:80)
Annotations:      <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 16s (x3 over 11m) nginx-ingress-controller Scheduled for sync
Test: port 443 is now enabled, so plain HTTP requests to www1 get redirected to 443 (if 443 were not open, the TLS-only site simply could not be reached); www2 has no TLS and answers directly.
[root@localhost ~]# curl www1.westos.org
<html>
<head><title>308 Permanent Redirect</title></head>
<body>
<center><h1>308 Permanent Redirect</h1></center>
<hr><center>nginx</center>
</body>
</html>
[root@localhost ~]# curl www1.westos.org -I
HTTP/1.1 308 Permanent Redirect
Date: Sat, 27 Aug 2022 10:51:13 GMT
Content-Type: text/html
Content-Length: 164
Connection: keep-alive
Location: https://www1.westos.org
[root@node22 ingress]# htpasswd -c auth zcx
New password:
Re-type new password:
Adding password for user zcx
[root@node22 ingress]# cat auth
zcx:$apr1$MzjNevL6$yFiqXCH.mF04VsgWWNcsj1
[root@node22 ingress]# kubectl create secret generic basic-auth --from-file=auth
secret/basic-auth created
[root@node22 ingress]# kubectl get pod
NAME READY STATUS RESTARTS AGE
myapp-1-6666f57846-5n8v7 1/1 Running 2 (41m ago) 29h
myapp-1-6666f57846-hvh4k 1/1 Running 2 (41m ago) 29h
myapp-1-6666f57846-vx556 1/1 Running 2 (41m ago) 29h
myapp-2-57c78c68df-dmckj 1/1 Running 0 24m
myapp-2-57c78c68df-dn2m7 1/1 Running 0 24m
myapp-2-57c78c68df-fwb9x 1/1 Running 0 24m
Edit the file to enable authentication:
[root@node22 ingress]# vim ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
  annotations:
    # type of authentication
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the secret that contains the user/password definitions
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # message to display with an appropriate context why the authentication is required
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - zcx'
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - www1.westos.org
    secretName: tls-secret
  rules:
  - host: www1.westos.org
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: web-service
            port:
              number: 80
  - host: www2.westos.org
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: my-svc
            port:
              number: 80
[root@node22 ingress]# kubectl apply -f ingress.yaml
ingress.networking.k8s.io/name-virtual-host-ingress configured
[root@node22 ingress]# kubectl describe ingress name-virtual-host-ingress
Name: name-virtual-host-ingress
Labels:           <none>
Namespace:        default
Address:          192.168.0.33
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
tls-secret terminates www1.westos.org
Rules:
Host Path Backends
---- ---- --------
www1.westos.org
/ web-service:80 (10.244.1.104:80,10.244.1.105:80,10.244.1.106:80)
www2.westos.org
/ my-svc:80 (10.244.1.107:80,10.244.1.108:80,10.244.2.16:80)
Annotations: nginx.ingress.kubernetes.io/auth-realm: Authentication Required - zcx
nginx.ingress.kubernetes.io/auth-secret: basic-auth
nginx.ingress.kubernetes.io/auth-type: basic
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 17s (x4 over 17m) nginx-ingress-controller Scheduled for sync
Verify:
[root@localhost ~]# curl -k https://www1.westos.org
Hello MyApp | Version: v1 | Pod Name
[root@localhost ~]# curl -k https://www1.westos.org
<html>
<head><title>401 Authorization Required</title></head>
<body>
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx</center>
</body>
</html>
[root@localhost ~]# curl -k https://www1.westos.org -uzcx:lee
Hello MyApp | Version: v1 | Pod Name
[root@node22 ingress]# vim ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
  annotations:
    # type of authentication
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the secret that contains the user/password definitions
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # message to display with an appropriate context why the authentication is required
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - zcx'
    nginx.ingress.kubernetes.io/app-root: /hostname.html
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - www1.westos.org
    secretName: tls-secret
  rules:
  - host: www1.westos.org
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: web-service
            port:
              number: 80
  - host: www2.westos.org
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: my-svc
            port:
              number: 80
[root@node22 ingress]# kubectl apply -f ingress.yaml
ingress.networking.k8s.io/name-virtual-host-ingress configured
Accessing it now redirects / to the app root:
[root@localhost ~]# curl -k https://www1.westos.org -uzcx:lee -I
HTTP/1.1 302 Moved Temporarily
Date: Sat, 27 Aug 2022 11:00:11 GMT
Content-Type: text/html
Content-Length: 138
Connection: keep-alive
Location: https://www1.westos.org/hostname.html
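app-root only redirects requests for /. For general path rewriting, ingress-nginx provides the rewrite-target annotation; a sketch reusing the host and service above (the /something/ prefix is an assumption for illustration):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  ingressClassName: nginx
  rules:
  - host: www1.westos.org
    http:
      paths:
      - path: /something/(.*)
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80

A request for www1.westos.org/something/hostname.html would then reach the backend as /hostname.html.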
Use a DaemonSet combined with a nodeSelector to deploy the ingress-controller onto specific nodes, then use hostNetwork to share the node's network namespace, so the service is reachable directly on the host's ports 80/443:
• Pros: the simplest possible request path, with better performance than NodePort mode.
• Cons: because it occupies the host's network and ports directly, each node can run only one ingress-controller pod.
• Well suited to high-concurrency production environments.
[root@node22 ingress]# kubectl label nodes node44 ingress=nginx    # label the node
node/node44 labeled
[root@node22 ingress]# kubectl get node --show-labels    # check the labels
NAME     STATUS   ROLES                  AGE     VERSION    LABELS
node22   Ready    control-plane,master   2d22h   v1.23.10   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node22,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
node33   Ready    <none>                 2d22h   v1.23.10   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node33,kubernetes.io/os=linux
node44   Ready    <none>                 2d21h   v1.23.10   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ingress=nginx,kubernetes.io/arch=amd64,kubernetes.io/hostname=node44,kubernetes.io/os=linux
[root@node22 ingress]# kubectl delete -f deploy.yaml    # clean up the previous deployment
namespace "ingress-nginx" deleted
serviceaccount "ingress-nginx" deleted
serviceaccount "ingress-nginx-admission" deleted
role.rbac.authorization.k8s.io "ingress-nginx" deleted
role.rbac.authorization.k8s.io "ingress-nginx-admission" deleted
clusterrole.rbac.authorization.k8s.io "ingress-nginx" deleted
clusterrole.rbac.authorization.k8s.io "ingress-nginx-admission" deleted
rolebinding.rbac.authorization.k8s.io "ingress-nginx" deleted
rolebinding.rbac.authorization.k8s.io "ingress-nginx-admission" deleted
clusterrolebinding.rbac.authorization.k8s.io "ingress-nginx" deleted
clusterrolebinding.rbac.authorization.k8s.io "ingress-nginx-admission" deleted
configmap "ingress-nginx-controller" deleted
service "ingress-nginx-controller" deleted
service "ingress-nginx-controller-admission" deleted
deployment.apps "ingress-nginx-controller" deleted
job.batch "ingress-nginx-admission-create" deleted
job.batch "ingress-nginx-admission-patch" deleted
ingressclass.networking.k8s.io "nginx" deleted
validatingwebhookconfiguration.admissionregistration.k8s.io "ingress-nginx-admission" deleted
[root@node22 ingress]# vim deploy.yaml
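The edits to deploy.yaml are not shown; a sketch of the usual changes to the controller workload for this pattern (everything else stays as generated):

kind: DaemonSet            # was: Deployment
...
spec:
  template:
    spec:
      hostNetwork: true    # bind 80/443 on the node directly
      nodeSelector:
        ingress: nginx     # the label applied to node44 above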
[root@node22 ingress]# kubectl apply -f deploy.yaml
[root@node22 ingress]# kubectl -n ingress-nginx get pod
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-ljtbf 0/1 Completed 0 3m13s
ingress-nginx-admission-patch-2bnmm 0/1 Completed 0 3m13s
ingress-nginx-controller-56ddfffd4b-2jxmz 0/1 Running 0 3m13s
[root@node22 ingress]# kubectl -n ingress-nginx describe svc ingress-nginx-controller
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.3.0
Annotations:               <none>
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.101.189.208
IPs: 10.101.189.208
Port: http 80/TCP
TargetPort: http/TCP
Endpoints: 192.168.0.44:80
Port: https 443/TCP
TargetPort: https/TCP
Endpoints: 192.168.0.44:443
Session Affinity: None
Events:                    <none>
[root@node22 ingress]# vim ingress.yaml
[root@node22 ingress]# kubectl apply -f ingress.yaml
ingress.networking.k8s.io/name-virtual-host-ingress created
k8s attaches network plugins through the CNI interface; currently popular plugins include flannel and calico.
CNI plugin configuration lives at: # cat /etc/cni/net.d/10-flannel.conflist
The approaches the plugins use:
• Virtual bridge and virtual NICs: multiple containers share a virtual NIC for communication.
• Multiplexing: MacVLAN, multiple containers share one physical NIC.
• Hardware switching: SR-IOV, one physical NIC exposing multiple virtual interfaces; the best performance.
Traffic paths (a sketch of the flannel conflist follows this list):
• Containers inside one pod talk over lo.
• Pod to pod:
• Pods on the same node are switched through the cni bridge.
• Pods on different nodes need the network plugin.
• Pod to service: via iptables or ipvs. ipvs cannot fully replace iptables, because ipvs only load-balances and cannot do NAT.
• Pod to the outside world: iptables MASQUERADE.
• Service to clients outside the cluster: ingress, nodeport, loadbalancer.
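For reference, the flannel conflist mentioned above usually looks like this (a sketch of the standard file, not copied from this cluster):

{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}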
1. Communication structure
Everything so far used flannel. How flannel's vxlan mode communicates across hosts:
• VXLAN, Virtual Extensible LAN, is a network virtualization technique supported natively by Linux. VXLAN does all encapsulation and decapsulation in kernel space, building an overlay network through a "tunnel" mechanism.
• VTEP: VXLAN Tunnel End Point. In flannel the default VNI is 1, which is why every host's VTEP device is named flannel.1.
• cni0: a bridge device. Every pod created gets a veth pair: one end is eth0 inside the pod, the other is a port on the cni0 bridge.
• flannel.1: a TUN device (virtual NIC) that handles vxlan packets (encapsulation and decapsulation). Pod traffic between nodes leaves through this overlay device as a tunnel to the peer.
• flanneld: flannel runs flanneld on every host as an agent. It obtains a small subnet for its host from the cluster's network address space, and all container IPs on that host are allocated from it. flanneld also watches the k8s cluster datastore and supplies flannel.1 with the MAC, IP and other network data needed for encapsulation.
How a packet travels (commands for inspecting these pieces follow this list):
• A container sends an IP packet through its veth pair to the cni bridge, which routes it to the local flannel.1 device.
• VTEP devices communicate with layer-2 frames: the source VTEP takes the original IP packet, prepends the destination VTEP's MAC address, and wraps it into an inner Ethernet frame for the destination VTEP.
• The inner frame cannot travel across the hosts' layer-2 network as-is; the Linux kernel must encapsulate it once more into an ordinary host frame and carry it out through the host's eth0.
• Linux prepends a VXLAN header to the inner frame. The header carries the VNI, the key flag a VTEP uses to decide whether a frame is addressed to it.
• flannel.1 knows the peer flannel.1's MAC address, but not which host it lives on. In the Linux kernel, forwarding decisions for such devices come from the FDB (forwarding database); the FDB entries for flannel.1 are maintained by the flanneld process.
• The kernel then adds the outer layer-2 header, filling in the target node's MAC address, taken from the host's ARP table.
• flannel.1 can now send the frame out of eth0, across the host network, to the target node's eth0. The target kernel sees the VXLAN header with VNI 1, decapsulates the packet, hands the inner frame to its own flannel.1 device according to the VNI, which unwraps it and routes it over the cni bridge to the target container.
• flannel supports several backends:
• vxlan: packet encapsulation, the default.
• DirectRouting: direct routing; uses vxlan across subnets and host-gw within a subnet.
• host-gw: host gateway. Good performance, but layer-2 only; it cannot cross networks, and with huge numbers of pods it invites broadcast storms. Not generally recommended.
• UDP: poor performance, not recommended.
Configure flannel:
[root@node22 ingress]# vim deploy.yaml
[root@node22 ingress]# kubectl apply -f deploy.yaml
[root@node22 ~]# kubectl -n kube-flannel edit cm kube-flannel-cfg
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "host-gw"
  }
}
[root@node22 ~]# kubectl -n kube-flannel delete pod --all
pod "kube-flannel-ds-6vx7v" deleted
pod "kube-flannel-ds-gq22n" deleted
pod "kube-flannel-ds-hwsf2" deleted
[root@node22 ~]# kubectl -n kube-flannel get pod
NAME READY STATUS RESTARTS AGE
kube-flannel-ds-bgxbz 1/1 Running 0 16s
kube-flannel-ds-bwzrg 1/1 Running 0 14s
kube-flannel-ds-kstxl 1/1 Running 0 18s
[root@node22 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.0.1 0.0.0.0 UG 100 0 0 ens33
10.244.0.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
10.244.1.0 10.244.1.0 255.255.255.0 UG 0 0 0 flannel.1
10.244.2.0 10.244.2.0 255.255.255.0 UG 0 0 0 flannel.1
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.0.0 0.0.0.0 255.255.255.0 U 100 0 0 ens33
Traffic to the local pod network goes straight through cni0; the subnet hosted on node22 is reached via eth0 to 192.168.0.22, and the node33 subnet via eth0 to 192.168.0.33.
Host-gateway routes are installed on every node; the prerequisite for this mode is that all nodes sit in the same VLAN.
Official docs:
https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises
About calico:
flannel provides the network connectivity; calico's distinguishing feature is isolation between pods.
Routes are distributed over BGP, though topology computation and convergence for very large numbers of endpoints takes time and compute resources.
Pure layer-3 forwarding, with no NAT or overlay in the path, gives the best forwarding efficiency.
Calico only requires layer-3 reachability. Its minimal dependencies let it fit VM, container, white-box, and mixed environments alike.
Install calico. Before installing, clean up the previous plugin to avoid conflicts between the two;
[root@node22 ~]# kubectl delete -f kube-flannel.yml
namespace "kube-flannel" deleted
clusterrole.rbac.authorization.k8s.io "flannel" deleted
clusterrolebinding.rbac.authorization.k8s.io "flannel" deleted
serviceaccount "flannel" deleted
configmap "kube-flannel-cfg" deleted
daemonset.apps "kube-flannel-ds" deleted
[root@node22 ~]# kubectl get ns
NAME STATUS AGE
default Active 3d1h
ingress-nginx Active 17m
kube-node-lease Active 3d1h
kube-public Active 3d1h
kube-system Active 3d1h
metallb-system Active 2d5h
[root@node22 ~]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default myapp-1-6666f57846-5n8v7 1/1 Running 2 (4h14m ago) 32h
default myapp-1-6666f57846-hvh4k 1/1 Running 2 (4h14m ago) 32h
default myapp-1-6666f57846-vx556 1/1 Running 2 (4h14m ago) 32h
default myapp-2-57c78c68df-dmckj 1/1 Running 0 3h57m
default myapp-2-57c78c68df-dn2m7 1/1 Running 0 3h57m
default myapp-2-57c78c68df-fwb9x 1/1 Running 0 3h57m
ingress-nginx ingress-nginx-admission-create-ppqq4 0/1 Completed 0 17m
ingress-nginx ingress-nginx-admission-patch-nw9sz 0/1 Completed 1 17m
ingress-nginx ingress-nginx-controller-5bbfbbb9c7-29cvs 1/1 Running 0 17m
kube-system coredns-7b56f6bc55-2pwnh 1/1 Running 2 (4h18m ago) 3d1h
kube-system coredns-7b56f6bc55-g458w 1/1 Running 2 (4h18m ago) 3d1h
kube-system etcd-node22 1/1 Running 2 (4h18m ago) 3d1h
kube-system kube-apiserver-node22 1/1 Running 1 (4h18m ago) 2d6h
kube-system kube-controller-manager-node22 1/1 Running 9 (4h18m ago) 3d1h
kube-system kube-proxy-8qc8h 1/1 Running 2 (4h14m ago) 2d5h
kube-system kube-proxy-cscgp 1/1 Running 1 (4h18m ago) 2d5h
kube-system kube-proxy-zh89l 1/1 Running 1 (4h16m ago) 2d5h
kube-system kube-scheduler-node22 1/1 Running 8 (4h18m ago) 3d1h
metallb-system controller-5c97f5f498-cb6fb 1/1 Running 3 (4h14m ago) 2d5h
metallb-system speaker-2mlfr 1/1 Running 3 (4h14m ago) 2d5h
metallb-system speaker-jkh2b 1/1 Running 2 (4h18m ago) 2d5h
metallb-system speaker-s66q5 1/1 Running 1 (4h16m ago) 2d5h
[root@node22 ~]# cd /etc/cni
[root@node22 cni]# cd net.d/
[root@node22 net.d]# ls
10-flannel.conflist
[root@node22 net.d]# rm -rf 10-flannel.conflist    # delete the config file on every node
[root@node33 net.d]# rm -rf 10-flannel.conflist
[root@node44 net.d]# rm -rf 10-flannel.conflist
Use arp -an to inspect the hosts' MAC address tables.
1) Deploying the calico plugin
Use the calico plugin:
[root@node22 ~]# mkdir calico/
[root@node22 ~]# cd calico/
[root@node22 calico]# wget https://raw.githubusercontent.com/projectcalico/calico/v3.24.0/manifests/calico.yaml
Push the images to the private registry:
[root@node11 ~]# docker load -i calico-v3.24.0.tar
[root@node11 ~]# docker push reg.westos.org/calico/kube-controllers:v3.24.0
[root@node11 ~]# docker push reg.westos.org/calico/cni:v3.24.0
[root@node11 ~]# docker push reg.westos.org/calico/node:v3.24.0
After the manifest is applied, the calico components come up and service traffic keeps working;
[root@node22 calico]# kubectl apply -f calico.yaml
[root@node22 calico]# kubectl get svc
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP   3d2h
my-svc        ClusterIP   10.108.185.37    <none>        80/TCP    5h47m
web-service   ClusterIP   10.109.238.119   <none>        80/TCP    34h
[root@node22 calico]# curl 10.108.185.37
Hello MyApp | Version: v2 | Pod Name
[root@node22 calico]# curl 10.108.185.37/hostname.html
myapp-2-57c78c68df-dmckj
[root@node22 calico]# curl 10.108.185.37/hostname.html
myapp-2-57c78c68df-fwb9x
2) The calico network architecture
Felix: watches the etcd datastore for events. When a user creates a pod, Felix sets up its NIC, IP and MAC, then writes an entry in the kernel routing table stating that this IP belongs to that interface. Likewise, if the user specifies an isolation policy, Felix renders it into ACLs to enforce the isolation.
BIRD: a standard routing daemon. It learns from the kernel which IP routes have changed and propagates them to all the other hosts via the standard BGP protocol, so the outside world knows where each IP lives and routes traffic there.
3) Network policy
The NetworkPolicy model controls the ingress and egress rules of pods within a namespace;
Docs: https://kubernetes.io/zh/docs/concepts/services-networking/network-policies/
Scenarios (a sketch of scenario 3 follows; scenarios 1 and 4 are walked through below):
1. Restrict access to a specified service.
3. Forbid all pod-to-pod traffic inside a namespace.
4. Forbid other namespaces from accessing a service.
6. Allow external access to a service.
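For scenario 3, the canonical default-deny policy from the Kubernetes docs (the namespace is an assumption):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress

An empty podSelector selects every pod in the namespace, and since no ingress rules are listed, all inbound traffic to them is denied.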
[root@node22 calico]# kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
myapp-1-6666f57846-5n8v7 1/1 Running 2 (6h17m ago) 34h app=nginx,pod-template-hash=6666f57846
myapp-1-6666f57846-hvh4k 1/1 Running 2 (6h17m ago) 34h app=nginx,pod-template-hash=6666f57846
myapp-1-6666f57846-vx556 1/1 Running 2 (6h17m ago) 34h app=nginx,pod-template-hash=6666f57846
myapp-2-57c78c68df-dmckj 1/1 Running 0 6h1m app=myapp,pod-template-hash=57c78c68df
myapp-2-57c78c68df-dn2m7 1/1 Running 0 6h1m app=myapp,pod-template-hash=57c78c68df
myapp-2-57c78c68df-fwb9x 1/1 Running 0 6h1m app=myapp,pod-template-hash=57c78c68df
[root@node22 calico]# kubectl delete deployments.apps myapp-2
deployment.apps "myapp-2" deleted
[root@node22 calico]# kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
myapp-1-6666f57846-5n8v7 1/1 Running 2 (6h22m ago) 34h app=nginx,pod-template-hash=6666f57846
myapp-1-6666f57846-hvh4k 1/1 Running 2 (6h22m ago) 34h app=nginx,pod-template-hash=6666f57846
myapp-1-6666f57846-vx556 1/1 Running 2 (6h22m ago) 34h app=nginx,pod-template-hash=6666f57846
[root@node22 calico]# kubectl get pod
NAME READY STATUS RESTARTS AGE
myapp-1-6666f57846-5n8v7 1/1 Running 2 (6h22m ago) 34h
myapp-1-6666f57846-hvh4k 1/1 Running 2 (6h22m ago) 34h
myapp-1-6666f57846-vx556 1/1 Running 2 (6h22m ago) 34h
[root@node22 calico]# vim networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:        # only pods matching this selector may connect
        matchLabels:
          role: test
[root@node22 calico]# kubectl delete pod myapp-1-6666f57846-5n8v7
pod "myapp-1-6666f57846-5n8v7" deleted
[root@node22 calico]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myapp-1-6666f57846-9htdh   1/1   Running   0   8s   10.244.144.66   node33   <none>   <none>
[root@node22 calico]# kubectl run demo --image=busyboxplus -it
If you don't see a command prompt, try pressing enter.
/ # curl 10.244.144.66
Hello MyApp | Version: v1 | Pod Name
/ # ping 10.244.144.66
PING 10.244.144.66 (10.244.144.66): 56 data bytes
64 bytes from 10.244.144.66: seq=0 ttl=63 time=0.102 ms
64 bytes from 10.244.144.66: seq=1 ttl=63 time=0.054 ms
^C
--- 10.244.144.66 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.054/0.078/0.102 ms
/ #
Once the policy is in force, only connections that match it are allowed.
[root@node22 calico]# kubectl apply -f networkpolicy.yaml
networkpolicy.networking.k8s.io/test-network-policy created
[root@node22 calico]# kubectl describe networkpolicies test-network-policy
Name: test-network-policy
Namespace: default
Created on: 2022-08-28 01:01:08 +0800 CST
Labels:       <none>
Annotations:  <none>
Spec:
PodSelector: app=nginx
Allowing ingress traffic:
To Port: (traffic allowed to all ports)
From:
PodSelector: role=test
Not affecting egress traffic
Policy Types: Ingress
[root@node22 calico]# kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
demo 1/1 Running 1 (14m ago) 15m run=demo
myapp-1-6666f57846-9htdh 1/1 Running 0 16m app=nginx,pod-template-hash=6666f57846
[root@node22 calico]# kubectl attach demo -it    # access is now blocked
If you don't see a command prompt, try pressing enter.
/ # curl 10.244.144.66
^C
/ #
[root@node22 calico]# kubectl label pod demo role=test    # give the pod the role=test label
pod/demo labeled
[root@node22 calico]# kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
demo 1/1 Running 2 (2m12s ago) 23m role=test,run=demo
myapp-1-6666f57846-9htdh 1/1 Running 0 23m app=nginx,pod-template-hash=6666f57846
[root@node22 calico]# kubectl attach demo -it    # once it matches the selector, access works again
If you don't see a command prompt, try pressing enter.
/ # curl 10.244.144.66
Hello MyApp | Version: v1 | Pod Name
/ #
[root@node22 calico]# kubectl create namespace test
namespace/test created
[root@node22 calico]# kubectl run demo --image=busyboxplus -it -n test    # a pod in another namespace cannot access it
If you don't see a command prompt, try pressing enter.
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue
    link/ether f6:23:e7:b4:ee:e0 brd ff:ff:ff:ff:ff:ff
inet 10.244.144.68/32 scope global eth0
valid_lft forever preferred_lft forever
/ # ping 10.244.144.66
PING 10.244.144.66 (10.244.144.66): 56 data bytes
^C
--- 10.244.144.66 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
/ #
[root@node22 calico]# kubectl get pod --show-labels -n test
NAME READY STATUS RESTARTS AGE LABELS
demo 1/1 Running 1 (21s ago) 80s run=demo
[root@node22 calico]# kubectl label namespaces test role=myproject    # label the namespace
[root@node22 calico]# vim networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:   # namespaces labeled role: myproject OR pods labeled role: test may connect
        matchLabels:
          role: myproject
    - podSelector:
        matchLabels:
          role: test
[root@node22 calico]# kubectl apply -f networkpolicy.yaml    # apply the change
networkpolicy.networking.k8s.io/test-network-policy configured
[root@node22 calico]# kubectl -n test attach demo -it    # the namespace matches, so access works
If you don't see a command prompt, try pressing enter.
/ # curl 10.244.144.66
Hello MyApp | Version: v1 | Pod Name
/ # ping 10.244.144.66
PING 10.244.144.66 (10.244.144.66): 56 data bytes
64 bytes from 10.244.144.66: seq=0 ttl=63 time=0.134 ms
64 bytes from 10.244.144.66: seq=1 ttl=63 time=0.073 ms
^C
--- 10.244.144.66 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.073/0.103/0.134 ms
/ #
[root@node22 calico]# vim networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:   # no dash before podSelector below: both selectors must match,
        matchLabels:       # so only pods labeled role: test inside namespaces labeled
          role: myproject  # role: myproject may connect
      podSelector:
        matchLabels:
          role: test
[root@node22 calico]# kubectl apply -f networkpolicy.yaml    # apply the change
networkpolicy.networking.k8s.io/test-network-policy configured
[root@node22 calico]# kubectl run demo --image=busyboxplus -it -n test    # the pod has no role=test label yet, so no access
If you don't see a command prompt, try pressing enter.
/ # ping 10.244.144.66
PING 10.244.144.66 (10.244.144.66): 56 data bytes
^C
--- 10.244.144.66 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
/ #
[root@node22 calico]# kubectl -n test label pod demo role=test    # label the pod
pod/demo labeled
[root@node22 calico]# kubectl -n test attach demo -it    # both selectors match now, so access works
If you don't see a command prompt, try pressing enter.
/ # curl 10.244.144.66
Hello MyApp | Version: v1 | Pod Name
/ # ping 10.244.144.66
PING 10.244.144.66 (10.244.144.66): 56 data bytes
64 bytes from 10.244.144.66: seq=0 ttl=63 time=0.134 ms
64 bytes from 10.244.144.66: seq=1 ttl=63 time=0.073 ms
^C
--- 10.244.144.66 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.073/0.103/0.134 ms
/ #