An earlier post summarized the three ways Kubernetes exposes services. Here we look at how to expose a Service object to external traffic through ingress-nginx.
Ingress consists of two parts: the Ingress resource and the Ingress Controller.
Ingress exposes services by opening a NodePort on every node for external access, plus a Service that fronts the Ingress Controller pods. External requests hit that Service, which forwards them to the Ingress Controller pods; the controller then routes each request, according to the mapping rules in the Ingress resources, to the pods behind the Service that actually provides the application.
An Ingress resource is a set of URL forwarding rules that route requests for a given domain to our Service objects, much like an Nginx configuration. As a refresher, here is a plain Nginx forwarding config:
server {
    listen       80;
    server_name  localhost;

    location / {
        #root   html;
        #index  index.html index.htm;
        root   /usr/local/nginx/html/xinlin-blogs;
        index  calmlog-index.html;
    }

    location ~ ^/countgame {
        proxy_pass http://121.42.162.203:8082;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Note: the config above forwards requests beginning with [ IP:80/countgame ] to 121.42.162.203:8082. The machine running Nginx has a domain pointed at it, so an external request to http://xinzhihuo.top/countgame/xxx is forwarded as http://121.42.162.203:8082/countgame/xxx.
An Ingress resource is a forwarding rule of just this kind. Once an Ingress resource object is created, the Ingress Controller picks it up dynamically and updates its own forwarding configuration.
Notes:
- spec.rules: the list of forwarding rules for this Ingress resource. Traffic matching a rule is routed by that rule; traffic matching no rule is sent to the default backend defined by backend.
- backend: the default backend, serving requests that match no rule. An Ingress resource must define at least one of backend or rules; this field lets the load balancer designate a single, global default backend.
- spec.rules.host: currently may not be an IP address, nor an IP:Port pair. Leaving the field empty matches all hostnames.
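To illustrate the fields above, a minimal Ingress resource might look like the following sketch (the service names and hostname here are placeholders, not from this deployment; the extensions/v1beta1 API matches the k8s 1.18 setup used later in this post):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  backend:                      # global default: requests matching no rule land here
    serviceName: default-svc    # placeholder Service name
    servicePort: 80
  rules:
    - host: demo.example.com    # leaving host empty would match all hostnames
      http:
        paths:
          - path: /app
            backend:
              serviceName: app-svc   # placeholder Service name
              servicePort: 8080
```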
The Ingress Controller is not part of the Controller Manager. It is one or more standalone Pod resources, typically running an application with layer-7 proxying and scheduling capability (think of the familiar Nginx, though it need not be Nginx), e.g. NGINX, HAProxy, Traefik, or Envoy.
Ingress resource creations and updates are picked up dynamically by the Ingress Controller. But when stateless pods are killed, added, or removed, the pod IPs inevitably change, so how does the Ingress Controller keep up with those changes?
The answer is that the Ingress Controller still relies on a Service's selector to classify and track all matching backend pods. When the situation above occurs, the change is reflected in the Ingress resource (note that the Ingress resource and the Ingress Controller are two distinct concepts), and the Ingress resource in turn injects the change into the Ingress Controller's configuration file, which triggers the controller to reload its config, all without manual intervention.
Now that we understand Ingress, the most important part is how to actually use it to expose a service.
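In other words, the backend Service carries no traffic itself here; its label selector is what keeps the controller's view of pod endpoints current. A sketch, with an assumed label (the Service name matches the countgame Service referenced later in this post):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: countgame
spec:
  selector:
    app: countgame   # assumed label: pods carrying it are tracked as endpoints,
                     # so when pods come and go the endpoint list updates and
                     # the controller regenerates its nginx config
  ports:
    - port: 8082
      targetPort: 8082
```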
Notes:
1. This YAML file is long. In brief, it creates an ingress-nginx namespace, a nginx-ingress-controller Deployment, and, at the end, the ingress-nginx Service with its port mappings.
2. My Kubernetes version is 1.18.0.
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          image: registry.cn-qingdao.aliyuncs.com/kubernetes_xingej/defaultbackend-amd64:1.5 # recommended: pull this image on the nodes ahead of time
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            # cpu/memory were adjusted here; the minimum allowed may differ
            # between clusters -- the deployment failure message will tell you
            limits:
              cpu: 100m
              memory: 100Mi
            requests:
              cpu: 100m
              memory: 100Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app: default-http-backend
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: default-http-backend
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: suisrc/ingress-nginx:0.30.0 # recommended: pull this image on the nodes ahead of time
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
---
apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
    - min:
        memory: 90Mi
        cpu: 100m
      type: Container
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      # HTTP
      nodePort: 80
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
      # HTTPS
      nodePort: 443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
[root@localhost ingress-nginx]# kubectl apply -f deploy.yaml
Note: this step will likely fail with an error that port 80 is an invalid nodePort. Changing nodePort: 80 and nodePort: 443 in the ingress-nginx Service at the end of deploy.yaml to ports in the 30000-32767 range fixes it. I still wanted to expose port 80, though, so I handled it differently; the workaround for this error is summarized at the end of this post.
Right after applying the YAML, the pods' STATUS may show ContainerCreating; pulling the images takes a little while, so just wait.
Now hit a node IP on port 80: it returns a 404, which means port 80 is already exposed to external access. The 404 is expected, because the ingress-nginx service has no backend rules yet.
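If you take the quick fix, the ports section of the ingress-nginx Service would look roughly like this (30080 and 30443 are arbitrary picks from the allowed range):

```yaml
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 30080   # any free port in 30000-32767
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
      nodePort: 30443   # any free port in 30000-32767
```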
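As a sketch of that check from the command line (the node IP is the one used later in this post; -i prints the status line):

```shell
curl -i http://192.168.137.112/
# expected: HTTP/1.1 404 Not Found, served by the default backend
```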
[root@localhost ingress-nginx]# vi ingress-countgame.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-countgame
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: countgame.xinlin.com # map this domain to a node IP in the hosts file of the machine you browse from
      http:
        paths:
          - path:
            backend:
              serviceName: countgame
              servicePort: 8082
[root@localhost ingress-nginx]# kubectl apply -f ingress-countgame.yaml
Note:
Once applied, the rule is automatically injected into the ingress-controller, i.e. the nginx configuration is generated.
[root@localhost ingress-nginx]# kubectl -n ingress-nginx get pods
NAME                                        READY   STATUS    RESTARTS   AGE
default-http-backend-59c5fc7f59-vxz9g       1/1     Running   3          5d3h
nginx-ingress-controller-5cbc696d9f-t59hl   1/1     Running   3          5d3h
nginx-ingress-controller-5cbc696d9f-xcs9j   1/1     Running   3          5d3h
[root@localhost ingress-nginx]# kubectl exec -n ingress-nginx -it nginx-ingress-controller-5cbc696d9f-t59hl -- /bin/bash
bash-5.0$ cat nginx.conf
Note:
kubectl exec -n ingress-nginx -it pod-name -- /bin/bash opens a shell inside the pod; once inside, run cat nginx.conf to inspect the generated configuration.
You can see that nginx.conf inside the Ingress Controller pod already contains the forwarding rule from the Ingress resource we just created.
Note: to leave the pod's shell, type exit and press Enter.
Finally, let's test whether the domain works directly in a browser. Our dev environment has no DNS entry for this domain, so we add it to the hosts file:
192.168.137.112 countgame.xinlin.com
Then open it in a browser: the service is now reachable by external requests through Ingress.
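If editing the hosts file is inconvenient, the same check can be done from the command line by setting the Host header explicitly (node IP and domain as above):

```shell
curl -H "Host: countgame.xinlin.com" http://192.168.137.112/countgame/
```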
The nodePort error from earlier occurs because Kubernetes restricts node ports to the range 30000-32767 by default.
Edit the kube-apiserver.yaml file:
vi /etc/kubernetes/manifests/kube-apiserver.yaml
Append the --service-node-port-range flag at the end of spec.containers.command, as follows:
- --service-node-port-range=1-65535
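In context, the command list then looks roughly like this (other flags elided):

```yaml
spec:
  containers:
    - command:
        - kube-apiserver
        # ...existing flags...
        - --service-node-port-range=1-65535
```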
Finally, restart the kubelet:
systemctl daemon-reload
systemctl restart kubelet
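Once the apiserver comes back up, deploy.yaml can be re-applied and the service checked; nodePorts 80 and 443 should now be accepted:

```shell
kubectl apply -f deploy.yaml
kubectl -n ingress-nginx get svc ingress-nginx   # PORT(S) should show 80:80/TCP,443:443/TCP
```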
References:
- 烟雨浮华, 《Kubernetes学习之路(十五)之Ingress和Ingress Controller》
- 风口向上, 《kubernetes v1.18的ingress-nginx 0.30.0最新版本部署》
- 白日梦患者Mr.廖, 《nodePort: Invalid value valid ports is 30000-32767》