15.1. Internal Service Discovery
We can reach the service a set of Pods provides through the ClusterIP (VIP) generated by a Service, but that leaves one question: how does a client learn the VIP of a given application in the first place? Suppose we have two applications, an api application and a db application, both managed by Deployments and both exposing a port through a Service. The api application needs to connect to db, but all it knows is the db application's name and the name of its Service; it has no way of knowing the VIP address in advance.
Querying the apiserver
We know that a Service's backend Endpoints can be queried directly from the apiserver, so the simplest approach is to look them up there. For the occasional special-purpose application, querying the apiserver for a Service's Endpoints and using them directly is fine. But if every application had to query its dependencies at startup, it would not only complicate the applications, it would also couple them tightly to Kubernetes itself, which is not a general solution.
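For a one-off case, the lookup can even be done with kubectl (a minimal sketch; db is just the example Service name from above):
# Ask the apiserver for the ClusterIP of the db Service
kubectl get svc db -o jsonpath='{.spec.clusterIP}'
# Or list the backend Pod IPs recorded in its Endpoints object
kubectl get endpoints db -o jsonpath='{.subsets[*].addresses[*].ip}'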
15.1.1. Environment Variables
To solve this problem, earlier versions of Kubernetes used environment variables: when each Pod starts, it is injected with environment variables carrying the IP and port of every Service, so the application inside the Pod can read them to discover the addresses of the services it depends on. This approach is simple to use, but it has a major limitation: a dependent Service must already exist before the Pod starts, otherwise its address is never injected into the environment.
Create the file test-nginx.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  labels:
    k8s-app: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    name: nginx-service
spec:
  ports:
  - port: 5000
    targetPort: 80
  selector:
    app: nginx
Create the Deployment and Service:
kubectl create -f test-nginx.yaml
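You can check the ClusterIP that was allocated to the Service:
kubectl get svc nginx-service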
Create the file test-pod.yaml to check whether the environment variables inside a new Pod include the nginx-service information from above:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-service-pod
    image: busybox
    command: ["/bin/sh", "-c", "env"]
Create the Pod, then view its log output:
kubectl create -f test-pod.yaml
kubectl logs test-pod
NGINX_DEPLOYMENT_PORT_80_TCP_ADDR=10.96.19.47
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
HOSTNAME=test-pod
NGINX_DEPLOYMENT_PORT_80_TCP_PORT=80
NGINX_DEPLOYMENT_PORT_80_TCP_PROTO=tcp
SHLVL=1
PHP_APACHE_PORT_80_TCP=tcp://10.101.227.21:80
HOME=/root
NGINX_PORT_80_TCP=tcp://10.106.145.100:80
NGINX_SERVICE_PORT_5000_TCP_ADDR=10.108.187.196
NGINX_DEPLOYMENT_PORT_80_TCP=tcp://10.96.19.47:80
NGINX_SERVICE_PORT_5000_TCP_PORT=5000
NGINX_SERVICE_PORT_5000_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
PHP_APACHE_SERVICE_HOST=10.101.227.21
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
NGINX_SERVICE_HOST=10.106.145.100
NGINX_SERVICE_SERVICE_HOST=10.108.187.196
NGINX_SERVICE_PORT_5000_TCP=tcp://10.108.187.196:5000
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
NGINX_DEPLOYMENT_SERVICE_HOST=10.96.19.47
PHP_APACHE_PORT=tcp://10.101.227.21:80
PHP_APACHE_SERVICE_PORT=80
NGINX_PORT=tcp://10.106.145.100:80
NGINX_SERVICE_SERVICE_PORT=5000
NGINX_SERVICE_PORT=80
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_HOST=10.96.0.1
PHP_APACHE_PORT_80_TCP_ADDR=10.101.227.21
PWD=/
NGINX_PORT_80_TCP_ADDR=10.106.145.100
NGINX_DEPLOYMENT_SERVICE_PORT=80
NGINX_DEPLOYMENT_PORT=tcp://10.96.19.47:80
PHP_APACHE_PORT_80_TCP_PORT=80
PHP_APACHE_PORT_80_TCP_PROTO=tcp
NGINX_PORT_80_TCP_PORT=80
NGINX_PORT_80_TCP_PROTO=tcp
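An application can consume these variables directly; for example, a minimal sketch of reaching the nginx Service from inside such a Pod, using only the injected variables (wget is available in the busybox image):
# Fetch the nginx index page via the injected VIP and port of nginx-service
wget -qO- http://${NGINX_SERVICE_SERVICE_HOST}:${NGINX_SERVICE_SERVICE_PORT}/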
However, if nginx-service had not yet been started by the time this Pod started, none of this information would appear in its environment variables. We could use an initContainer or a similar mechanism to ensure nginx-service is up before starting the Pod, but that adds complexity to Pod startup, so it is not the best approach.
Given the limitations of environment variables, we need something smarter. The ideal scheme is easy to imagine: use the Service's name directly, since the name never changes, and stop caring about the allocated ClusterIP, which is not guaranteed to stay fixed. If the translation from Service name to ClusterIP happened automatically, the problem would be solved, and this is exactly what Kubernetes's DNS-based service discovery provides.
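As a quick preview of the DNS scheme (a minimal sketch, assuming a cluster DNS add-on such as CoreDNS or kube-dns is running), the Service name resolves to its ClusterIP from inside any Pod, regardless of creation order; the fully qualified form is nginx-service.default.svc.cluster.local:
# Resolve the Service name from inside a throwaway busybox Pod
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup nginx-service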
15.2. External Service Discovery: Ingress
15.2.1. Installing ingress-nginx
Official repository:
https://github.com/kubernetes/ingress-nginx
Download the yaml manifest:
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
Pull the controller image (the tag must match the Deployment below):
docker pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1
Label the master node so the controller can be scheduled onto it:
kubectl label node master app=ingress
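Verify the label before continuing:
kubectl get nodes --show-labels | grep app=ingress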
Edit the yaml file and add hostNetwork, a nodeSelector, and a toleration for the master taint to the controller's Pod spec:
spec:
  serviceAccountName: nginx-ingress-serviceaccount
  hostNetwork: true
  nodeSelector:
    app: ingress
  tolerations:
    - key: "node-role.kubernetes.io/master"
      operator: "Exists"
      effect: "NoSchedule"
After the additions, the full file is:
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      nodeSelector:
        app: ingress
      tolerations:
        - key: "node-role.kubernetes.io/master"
          operator: "Exists"
          effect: "NoSchedule"
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10

---
Create the ingress controller:
[root@master ~]# kubectl apply -f mandatory.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
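Confirm the controller Pod lands on the labeled master node and becomes Ready (Pod names will differ):
kubectl get pods -n ingress-nginx -o wide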
15.2.2. Testing
Create the test file nginx-ingress.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: nginx.tk8s.com
    http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80
Create the resources:
kubectl apply -f nginx-ingress.yaml
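Verify that the Ingress was created and its host rule registered:
kubectl get ingress nginx-ingress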
Edit /etc/hosts on the client machine, pointing the host from the Ingress rule (nginx.tk8s.com) at the node running the controller:
10.211.55.22 nginx.tk8s.com
Then visit http://nginx.tk8s.com in a browser; the ingress controller routes the request to the nginx Service.
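Alternatively, you can test without touching /etc/hosts by supplying the Host header directly (assuming the controller is reachable at 10.211.55.22 as above):
curl -H "Host: nginx.tk8s.com" http://10.211.55.22/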