I. Deploying ingress on Kubernetes v1.19.1
1. Introduction to ingress
There are currently only three ways for a K8s cluster to expose services externally:
LoadBalancer
NodePort
Ingress
The first two are quick to learn and easy to use, so they are not covered here.
This article focuses on ingress, which consists of two parts:
a. The ingress controller: turns newly added Ingress resources into Nginx configuration and makes that configuration take effect.
b. The Ingress resource: abstracts the Nginx configuration into an Ingress object; exposing a new service only requires writing a new Ingress yaml file.
There are currently two main ingress controllers: one based on Nginx and one based on Traefik.
The Traefik-based controller currently supports the HTTP and HTTPS protocols. Because we are more familiar with Nginx and also need TCP load balancing, we chose the Nginx-based ingress controller.
The Nginx-based controller, however, comes from two different maintainers: the Kubernetes community's ingress-nginx and the NGINX company's nginx-ingress.
Based on activity and followers on GitHub, we chose the Kubernetes community's ingress-nginx.
The Kubernetes community ingress is on GitHub at: https://github.com/kubernetes/ingress-nginx
The NGINX community ingress is on GitHub at: https://github.com/nginxinc/kubernetes-ingress
2. How ingress works
Ingress works as follows:
Step 1: The ingress controller interacts with the Kubernetes API to dynamically watch for changes to Ingress rules in the cluster, reads them, and forwards traffic to the corresponding Service in the cluster according to the defined rules.
Step 2: Each Ingress rule declares which domain name maps to which Service in the cluster; using the Nginx configuration template inside the ingress-controller, the controller generates a matching piece of Nginx configuration.
Step 3: The controller writes that configuration into the Nginx configuration file inside the ingress-controller pod (which runs an Nginx server) and reloads Nginx so the configuration takes effect. This is how per-domain routing and dynamic updates are achieved.
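To see the result of this process once the controller is running, one option (a sketch; the pod name is only an example, substitute the one reported by kubectl get pod -n ingress-nginx) is to dump the configuration rendered inside the controller pod:
# Print the nginx configuration the controller has rendered; each Ingress host
# appears as a server block whose locations proxy to the backing Service's endpoints.
kubectl exec -n ingress-nginx nginx-ingress-controller-664ddd8c4d-flp29 -- \
    cat /etc/nginx/nginx.conf | grep -B 2 -A 10 "server_name"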
3. Problems ingress solves
a. Dynamic service configuration
In a traditional setup, adding a new service means adding a reverse-proxy entry at the traffic entry point that points at the new Kubernetes service. With Ingress, you only configure the service once: when it starts it is automatically registered with Ingress, and no extra work is needed.
b. Fewer unnecessary exposed ports
Anyone who has set up Kubernetes knows that the first step is usually to turn off the firewall, mainly because many services are mapped out via NodePort; that punches a lot of holes in the hosts and is neither secure nor elegant. Ingress avoids this: apart from the Ingress service itself, which may need to be exposed, no other service has to use NodePort.
4. Deploying ingress (as a Deployment)
We use the nginx-0.29.0 release; the source manifest on GitHub is:
https://github.com/kubernetes/ingress-nginx/blob/nginx-0.29.0/deploy/static/mandatory.yaml
a. Prepare the configuration file mandatory.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.29.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

---
apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
    - min:
        memory: 90Mi
        cpu: 100m
      type: Container
b. Overview of the configuration files and setting the ingress network mode
namespace.yaml
Creates a dedicated namespace, ingress-nginx.
configmap.yaml
A ConfigMap stores common configuration variables, much like a configuration file, letting you manage the environment variables used by different modules of a distributed system in a single object. The difference from an ordinary configuration file is that a ConfigMap lives in the cluster "environment" and supports all the usual Kubernetes operations on objects.
From a data point of view, a ConfigMap is just a set of key/value pairs holding information read by Pods or other resource objects (such as an RC). The design is similar in spirit to a Secret; the main difference is that a ConfigMap is not meant for sensitive data, only plain text.
A ConfigMap can hold environment-variable values as well as whole configuration files.
When a pod is created it can be bound to a ConfigMap, and the application inside the pod can then reference the ConfigMap's values directly; in effect, the ConfigMap packages configuration for the application and its runtime environment.
Pods typically use a ConfigMap to set environment variable values, set command-line arguments, or create configuration files, as the sketch below illustrates.
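A minimal sketch of both consumption styles; the names used here (demo-config, demo-pod, LOG_LEVEL, app.conf) are hypothetical and not part of this deployment:
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config              # hypothetical name, for illustration only
data:
  LOG_LEVEL: "info"              # consumed below as an environment variable
  app.conf: |                    # consumed below as a configuration file
    listen_port = 8080
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: busybox:1.32
      command: ["sh", "-c", "env | grep LOG_LEVEL; cat /etc/app/app.conf; sleep 3600"]
      env:
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:     # inject a single key as an environment variable
              name: demo-config
              key: LOG_LEVEL
      volumeMounts:
        - name: config-volume
          mountPath: /etc/app    # app.conf appears as /etc/app/app.conf
  volumes:
    - name: config-volume
      configMap:
        name: demo-config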
default-backend.yaml
If the requested domain name does not exist, the request is forwarded to the default-http-backend Service, which simply returns a 404.
rbac.yaml
Handles RBAC authorization for Ingress; it creates the ServiceAccount, ClusterRole, Role, RoleBinding and ClusterRoleBinding that Ingress uses.
with-rbac.yaml
The core of Ingress: it creates the ingress-controller. As mentioned earlier, the ingress-controller's job is to turn newly added Ingress resources into Nginx configuration.
Purpose of each file
configmap.yaml: provides a ConfigMap so the nginx configuration can be updated online
default-backend.yaml: provides a default backend that serves the 404 error page
namespace.yaml: creates a dedicated namespace, ingress-nginx
rbac.yaml: creates the roles and rolebindings used for RBAC
tcp-services-configmap.yaml: ConfigMap for configuring L4 (TCP) load balancing; see the sketch after this list
udp-services-configmap.yaml: ConfigMap for configuring L4 (UDP) load balancing
with-rbac.yaml: the nginx-ingress-controller component with RBAC applied
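Since TCP load balancing was one reason for choosing this controller, here is a sketch of how the tcp-services ConfigMap is typically filled in; the MySQL service and port below are made up for illustration. Each key is a port the controller should listen on, and the value has the form <namespace>/<service name>:<service port>.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # "<port exposed on the controller>": "<namespace>/<service name>:<service port>"
  "3306": "default/mysql:3306"   # hypothetical MySQL service, for illustration only
When the controller runs with hostNetwork (as configured below), it will then listen on this port directly on the node where it is scheduled.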
c. Modify the configuration file
Set hostNetwork: true
Because ingress uses ports 80/443 on the physical host, it has to run in hostNetwork mode:
    spec:
      hostNetwork: true          # expose the service via the host's network namespace
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      # schedule onto nodes carrying the matching label
      nodeSelector:
        isIngress: "true"
      containers:
        - name: nginx-ingress-controller
          image: harbor.jinyuyun.top/nginx/nginx-ingress-controller:0.29.0   # private registry image
          args:
Label the node that should run the controller:
[root@master ~]# kubectl label node node01 isIngress="true"
node/node01 labeled
Tag the image into the private registry as:
harbor.jinyuyun.top/nginx/nginx-ingress-controller:0.29.0
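A sketch of how the image can be mirrored, assuming the docker CLI is available and you are already logged in to harbor.jinyuyun.top:
# pull the upstream image, retag it for the private registry, and push it
docker pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.29.0
docker tag quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.29.0 \
    harbor.jinyuyun.top/nginx/nginx-ingress-controller:0.29.0
docker push harbor.jinyuyun.top/nginx/nginx-ingress-controller:0.29.0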
Deploy ingress-nginx:
[root@master01 ingress-nginx]# kubectl apply -f mandatory.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
limitrange/ingress-nginx created
# check the deployment status
[root@master01 ingress-nginx]# kubectl get deployment -n ingress-nginx
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-ingress-controller 1/1 1 1 4h52m
# check the pod status
[root@master01 ingress-nginx]# kubectl get pod -n ingress-nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ingress-controller-664ddd8c4d-flp29 1/1 Running 0 4h19m 192.168.1.213 node01 <none> <none>
Create a Service for the nginx-ingress-controller:
[root@master01 ingress-nginx]# cat ingress-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
      nodePort: 30443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
# apply the svc
[root@master01 ingress-nginx]# kubectl apply -f ingress-service.yaml
# check the svc status
[root@master01 ingress-nginx]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx NodePort 10.111.102.6 <none> 80:30080/TCP,443:30443/TCP 93m
II. Test case
Create an nginx deployment and svc to test with; they already exist here, so they are not described in detail (a minimal sketch follows the output below).
[root@master01 ~]# kubectl get all -n jyy-test |grep nginx
service/nginx-service NodePort 10.100.139.122 <none> 80:30081/TCP 5h42m
[root@master01 ~]# kubectl get all -n jyy-test |grep web
pod/web-d8b759c55-f5vfm 1/1 Running 0 5h42m
pod/web-d8b759c55-h6qcc 1/1 Running 0 5h42m
pod/web-d8b759c55-rbd76 1/1 Running 0 5h42m
deployment.apps/web 3/3 3 3 5h42m
replicaset.apps/web-d8b759c55 3 3 3 5h42m
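For reference, a minimal sketch of what this test workload could look like; the names, namespace, replica count and ports match the output above, while the image and the app: web label are assumptions:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: jyy-test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web                  # assumed label key/value
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.19     # assumed image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: jyy-test
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30081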
Create the Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: jyy-test                      # namespace of the backend service
spec:
  rules:
    - host: jyy.k8s.xyz                    # externally accessed domain name
      http:
        paths:
          - path: /
            backend:
              serviceName: nginx-service   # name of the backend service
              servicePort: 80              # port of the backend service
[root@master01 ~]# kubectl get ingress -n jyy-test -o wide
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
nginx-ingress <none> jyy.k8s.xyz 10.111.102.6 80 5h44m
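The warning is printed because the manifest above still uses the deprecated networking.k8s.io/v1beta1 API. On v1.19 the same rule can also be written against networking.k8s.io/v1, roughly as follows (a sketch; pathType and the nested backend.service fields are required in v1):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: jyy-test
spec:
  rules:
    - host: jyy.k8s.xyz
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80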
Bind the host name in /etc/hosts and test connectivity:
[root@master01 ~]# echo "192.168.1.211 jyy.k8s.xyz" >> /etc/hosts
[root@master01 ~]# curl jyy.k8s.xyz:30080
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx</center>
</body>
</html>
On a Windows client, add "192.168.1.211 jyy.k8s.xyz" to C:\Windows\System32\drivers\etc\hosts to test from a browser.
The ingress deployment is complete. (The 403 response above is returned by the backend nginx itself, which shows that the request was routed through the ingress to nginx-service.)