1. Inside the cluster: a service-discovery mechanism that constantly tracks pod changes and updates the pod objects in the Endpoints, built for the fact that pod IP addresses keep changing.
2. Outside the cluster: works like a load balancer; traffic is forwarded by IP + port (layer 4), with no URL (HTTP/HTTPS) forwarding, and requests are passed on to the pods.
NodePort: container port ---- Service port ---- nodePort. Once a NodePort is set, the same port (in the range 30000-32767) is opened on every node.
Access is node IP + a port in the 30000-32767 range, which gives load balancing across nodes (see the NodePort sketch after this list).
LoadBalancer: a Service type used on cloud platforms; the cloud platform provisions the load-balancer IP address.
ExternalName: maps the Service to a domain name.
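For reference, a minimal NodePort Service sketch (the name, labels, and nodePort value here are illustrative, not taken from the lab below):
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport            # hypothetical name, for illustration only
spec:
  type: NodePort
  selector:
    app: web                    # assumes pods labeled app: web
  ports:
    - port: 80                  # Service port inside the cluster
      targetPort: 80            # container port
      nodePort: 30080           # opened on every node; must fall in 30000-32767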
Ingress: maps by domain name; URL (HTTP/HTTPS) requests are forwarded to a Service, and the Service then forwards them to the individual pods.
With only one or a small number of public IPs or load balancers, Ingress can expose many HTTP services to the outside; it is a layer-7 reverse proxy.
It is the "service of Services": a set of rules, keyed by domain name and URL path, that forwards one or more kinds of request to Services.
Traffic path: layer-7 proxy first ---- then the layer-4 proxy ---- then the pod.
ingress ---- service ---- nginx (pod)
Ingress is an API object configured through YAML files. Its job is to define the rules for how requests are forwarded to Services; it is essentially a configuration template.
Ingress exposes in-cluster Services over HTTP and HTTPS, giving a Service an external URL plus load balancing and SSL/TLS (HTTPS), i.e. domain-name-based load balancing.
ingress-controller: the program that actually implements the reverse proxying and load balancing; it parses the rules defined in Ingress objects and forwards requests according to that configuration.
The ingress-controller is not a built-in k8s component; "ingress-controller" is a generic term.
nginx ingress controller and traefik are both ingress-controllers, and both are open source.
What Ingress does:
1. Defines routing rules for external traffic.
2. Defines how services are exposed: host name, access path, and other options.
3. Load balancing (performed by the ingress-controller).
The ingress-controller runs as pods inside the cluster.
wget https://gitee.com/mirrors/ingress-nginx/raw/nginx-0.30.0/deploy/static/mandatory.yaml
1. Deployment + LoadBalancer mode: the ingress is deployed on a public cloud. The ingress Service config carries type: LoadBalancer; the public cloud platform then creates a load balancer for that LoadBalancer Service and binds a public IP to it. Pointing a domain name at this public IP exposes the cluster to the outside.
2. DaemonSet + hostNetwork + nodeSelector mode:
DaemonSet: creates one pod on every (selected) node.
hostNetwork: the pod shares the node host's network namespace; containers use the node host's IP + port directly and can access the host's network resources directly.
nodeSelector: selects the deployment nodes by label, i.e. the nodes where the nginx-ingress-controller will run.
Drawback: it uses the node host's network and ports directly, and only one ingress-controller pod can run per node. On the other hand this mode best suits high-concurrency production environments and gives the best performance.
Edit the downloaded mandatory.yaml:
.............
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  #replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      nodeSelector:
        test1: "true"
        # kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
.............
Upload the image tarball (drop it into /opt) and load it:
tar -xf ingree.contro-0.30.0.tar.gz
docker load -i ingree.contro-0.30.0.tar.gz
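A typical next step is to apply the edited manifest and check where the controller pod landed (sketch):
kubectl apply -f mandatory.yaml
kubectl get pods -n ingress-nginx -o wide    # shows which node the nginx-ingress-controller pod is running on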
Application YAML (PVC + Deployment + Service + Ingress):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client-storageclass
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.22
          volumeMounts:
            - name: nfs-pvc
              mountPath: /usr/share/nginx/html
      volumes:
        - name: nfs-pvc
          persistentVolumeClaim:
            claimName: nfs-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-app-svc
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-app-ingress
spec:
  rules:
    - host: www.test1.com
      http:
        paths:
          - path: /
            pathType: Prefix
            # Prefix match: / matches www.test1.com/ and any sub-path, e.g. /www, /www/www1/www2
            backend:
              service:
                name: nginx-app-svc
                port:
                  number: 80
Label the node:
kubectl label nodes node02 test1=true
Add a hosts mapping for node02:
20.0.0.70 master01
20.0.0.71 node01
20.0.0.72 node02 www.test1.com
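Because the controller runs with hostNetwork, it listens on port 80 of node02 directly, so a quick test from the mapped machine is simply (sketch):
curl http://www.test1.com        # resolves to 20.0.0.72 through the hosts entry above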
3. Deployment + NodePort mode
Traffic still goes through the nginx-ingress-controller:
the host/Ingress configuration locates the pod ---- controller --- request sent to the pod
NodePort --- controller --- ingress ---- service --- pod
Exposing through a NodePort is the simplest method, but NodePort adds an extra layer of NAT (address translation), which costs some performance under heavy concurrency; it is mostly used for internal access.
Edit the config files.
Download the YAML that creates the NodePort Service:
wget https://gitee.com/mirrors/ingress-nginx/raw/nginx-0.30.0/deploy/static/provider/baremetal/service-nodeport.yaml
Use the original, unmodified mandatory.yaml.
Application YAML (note the changed Service labels):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc1
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client-storageclass
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app1
  labels:
    app: nginx2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx2
  template:
    metadata:
      labels:
        app: nginx2
    spec:
      containers:
        - name: nginx
          image: nginx:1.22
          volumeMounts:
            - name: nfs-pvc
              mountPath: /usr/share/nginx/html
      volumes:
        - name: nfs-pvc
          persistentVolumeClaim:
            claimName: nfs-pvc1
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-app-svc1
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: nginx2
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-app-ingress
spec:
  rules:
    - host: www.test2.com
      http:
        paths:
          - path: /
            pathType: Prefix
            # Prefix match: / matches the site root and any sub-path, e.g. /www, /www/www1/www2
            backend:
              service:
                name: nginx-app-svc1
                port:
                  number: 80
The downloaded service-nodeport.yaml:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
# Applying this YAML creates a Service in the ingress-nginx namespace; every request handled by the controller
# enters through this Service's NodePort and is then forwarded on to the pods behind the user-defined Service.
Hosts mapping:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
20.0.0.70 master01
20.0.0.71 node01 www.test2.com
20.0.0.72 node02 www.test1.com
Ingress-based setup: a single Ingress entry point can serve several different hosts.
pod1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment1
  labels:
    test: nginx1
spec:
  replicas: 1
  selector:
    matchLabels:
      test: nginx1
  template:
    metadata:
      labels:
        test: nginx1
    spec:
      containers:
        - name: nginx1
          image: nginx:1.22
---
apiVersion: v1
kind: Service
metadata:
  name: svc-1
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    test: nginx1
pod2
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment2
  labels:
    test: nginx2
spec:
  replicas: 1
  selector:
    matchLabels:
      test: nginx2
  template:
    metadata:
      labels:
        test: nginx2
    spec:
      containers:
        - name: nginx2
          image: nginx:1.22
---
apiVersion: v1
kind: Service
metadata:
  name: svc-2
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    test: nginx2
pod-ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress1
spec:
  rules:
    - host: www1.test.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: svc-1
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress2
spec:
  rules:
    - host: www2.test.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: svc-2
                port:
                  number: 80
Hosts mapping:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
20.0.0.70 master01
20.0.0.71 node01 www2.test.com
20.0.0.72 node02 www1.test.com
Check the externally exposed ports (the controller Service):
kubectl get svc -o wide -n ingress-nginx
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
ingress-nginx   NodePort   10.96.23.193   <none>        80:30150/TCP,443:30780/TCP   42m   app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx
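Using the NodePort shown above (port 80 is mapped to 30150), the two hosts can be tested from a machine with the hosts entries above (sketch; the port value comes from this particular output):
curl http://www1.test.com:30150
curl http://www2.test.com:30150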
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
req — request to generate a certificate
-x509 — generate a self-signed X.509 certificate
-sha256 — use the SHA-256 hash algorithm
-nodes — do not encrypt the generated private key
-days 365 — the certificate is valid for 365 days
-newkey rsa:2048 — generate a new 2048-bit RSA key pair
Create the TLS secret:
kubectl create secret tls tls-secret --key tls.key --cert tls.crt
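The secret can be inspected before it is referenced by the Ingress (sketch):
kubectl get secret tls-secret
kubectl describe secret tls-secret     # should list tls.crt and tls.key data entries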
mkdir https
HTTPS example YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-https
  labels:
    app: https
spec:
  replicas: 3
  selector:
    matchLabels:
      app: https
  template:
    metadata:
      labels:
        app: https
    spec:
      containers:
        - name: nginx
          image: nginx:1.22
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: https
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress-https
spec:
  tls:
    - hosts:
        - www.123cc.com
      secretName: tls-secret
      # The TLS config lives in the Ingress: request -- ingress-controller -- Ingress rule --- forwarded to the Service
      # During proxying the key pair is verified first, and only then is the request forwarded to the Service's pods
  rules:
    - host: www.123cc.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-svc
                port:
                  number: 80
Hosts mapping:
www.123cc.com
curl -k https://www.123cc.com:32674
mkdir basic-auth
cd basic-auth/
yum -y install httpd
htpasswd -c auth xiaobu
kubectl create secret generic basic-auth --from-file=auth
kubectl describe secrets basic-auth
Name:         basic-auth
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
auth:  45 bytes
Basic-auth Ingress YAML:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-auth
  annotations:
    # annotations that enable the authentication module
    nginx.ingress.kubernetes.io/auth-type: basic
    # set the auth type to basic (the built-in basic-auth module)
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # reference the secret holding the credentials in the Ingress
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - xiaobu'
    # message shown in the authentication prompt window
spec:
  rules:
    - host: www.xiaobu.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-svc
                port:
                  number: 80
Hosts mapping:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
20.0.0.70 master01 www.xiaobu.com
20.0.0.71 node01 www.test2.com
20.0.0.72 node02 www.test1.com www.123cc.com
20.0.0.73 hub.test.com www.test1.com www.test2.com
Access:
www.xiaobu.com:31328
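From the command line, the basic-auth credentials created with htpasswd have to be supplied (sketch; replace <password> with whatever was set for user xiaobu):
curl -u xiaobu:<password> http://www.xiaobu.com:31328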
Rewrite (redirect) example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-rewrite
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: https://www.123cc.com:32674
    # visiting this host will be redirected to the specified page
spec:
  rules:
    - host: www.xiaokai.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-svc
                port:
                  number: 80
Access:
www.xiaokai.com:31328
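The rewrite can be checked by following the redirect (sketch; -L follows the jump to https://www.123cc.com:32674, -k skips certificate verification for the self-signed cert):
curl -Lk http://www.xiaokai.com:31328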
traefik ingress controller: traefik is an HTTP reverse proxy and load balancer created to make deploying microservices faster.
traefik was designed to talk to the k8s API in real time, detect changes in backend Services and pods, and update and reload its configuration automatically.
(nginx inside the pod: ports 80 and 8081)
DaemonSet mode
Characteristics / advantages: one traefik pod runs on every node, so it is node-aware and can automatically discover and update container configuration, with no manual reload needed.
Disadvantages: resource usage; in a large cluster the DaemonSet runs a traefik instance on every node, even nodes that don't run many containers, and it cannot be scaled up or down.
Suited to externally-facing clusters: external-facing workloads change frequently, and a DaemonSet is better at automatically discovering service and configuration changes.
Deployment mode:
Advantages: centralized control; a small number of instances can handle the whole cluster's traffic, and it is easier to upgrade and maintain.
Disadvantages: the Deployment's load balancing is not spread evenly across every node;
it needs manual updates and cannot detect configuration changes inside the containers.
Suited to internally-facing clusters: internal workloads are relatively stable, with fewer updates and changes, which fits a Deployment.
Label traffic-type: internal — internal-facing services
Label traffic-type: external — external-facing services (a labeling sketch follows)
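A hedged sketch of how that split might be applied with labels (service names reused from the examples above; the label key/values follow the note above):
kubectl label svc svc-1 traffic-type=internal            # internal-facing service
kubectl label svc nginx-app-svc traffic-type=external    # external-facing service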
nginx-ingress vs traefik-ingress:
They work on the same principle: both are layer-7 proxies, both can update configuration dynamically, and both can auto-discover services.
traefik-ingress reloads faster and more conveniently on automatic updates.
traefik's concurrency capacity is only about 60% of nginx-ingress's.
Download the YAML files:
wget https://gitee.com/mirrors/traefik/raw/v1.7/examples/k8s/traefik-deployment.yaml
wget https://gitee.com/mirrors/traefik/raw/v1.7/examples/k8s/traefik-rbac.yaml
wget https://gitee.com/mirrors/traefik/raw/v1.7/examples/k8s/traefik-ds.yaml
wget https://gitee.com/mirrors/traefik/raw/v1.7/examples/k8s/ui.yaml
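A typical apply order (sketch; assumes the example manifests keep their default kube-system namespace): RBAC first, then either the DaemonSet or the Deployment variant, then the dashboard UI.
kubectl apply -f traefik-rbac.yaml
kubectl apply -f traefik-ds.yaml          # or traefik-deployment.yaml for Deployment mode
kubectl apply -f ui.yaml
kubectl get pods -n kube-system | grep traefik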
Deployment-based example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-traefik
  labels:
    nginx: traefik
spec:
  replicas: 3
  selector:
    matchLabels:
      nginx: traefik
  template:
    metadata:
      labels:
        nginx: traefik
    spec:
      containers:
        - name: nginx
          image: nginx:1.22
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-traefik-svc1
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    nginx: traefik
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-traefik-test1
spec:
  rules:
    - host: www.bu.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-traefik-svc1
                port:
                  number: 80
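To try the example, save the YAML above to a file, apply it, and point a hosts entry for www.bu.com at a node running traefik (sketch; the filename is hypothetical):
kubectl apply -f nginx-traefik.yaml
kubectl get ingress nginx-traefik-test1
curl http://www.bu.com        # assumes traefik is reachable on port 80 of the mapped node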