I need a load balancer: MetalLB
MetalLB official installation docs: https://metallb.universe.tf/installation/
While learning you find that not every workload should be exposed through a layer-7 HTTP API; we also need a load balancer for plain TCP services.
The Kubernetes documentation on exposing services describes three Service types: LoadBalancer, ClusterIP, and NodePort.
The differences between the three:
LoadBalancer assigns an external load-balancer IP and forwards traffic to ip:port, with essentially no restriction on the port.
ClusterIP is for service-to-service access inside the cluster; to expose it externally you additionally need an Ingress to route traffic.
NodePort is limited to ports 30000-32767 by default; the range can be widened via the kube-apiserver flag --service-node-port-range=1-65535,
but this way of exposing services is not advisable, because it exposes node IP + port directly. It is fine for testing.
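As a minimal sketch of the difference, the three flavors are selected purely by the Service's type field (the name, selector, and ports here are hypothetical examples, not from this cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc              # hypothetical example service
spec:
  type: LoadBalancer          # change to ClusterIP or NodePort for the other two flavors
  selector:
    app: demo
  ports:
  - port: 80                  # with MetalLB, served on the allocated external IP
    targetPort: 8080
```

With type: LoadBalancer and MetalLB installed, EXTERNAL-IP gets filled in from the configured address pool; with type: NodePort the same backend would instead be reached as nodeIP:3xxxx on every node.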
Whether you want to build your own Ingress Controller or map arbitrary ports freely, you need a LoadBalancer implementation; here we use MetalLB.
Remove the master taint
The reason for removing the master taint here is simply that memory is tight: to get the workloads running at all, they have to run on the master too. By default the master does not act as a worker.
To let master nodes schedule pods:
kubectl taint nodes --all node-role.kubernetes.io/master-
To restore the default (master does not schedule pods), name the node explicitly:
kubectl taint nodes k8s-master node-role.kubernetes.io/master=:NoSchedule
Concretely:
[root@k8s-master etcd]#
[root@k8s-master etcd]# kubectl taint nodes k8s-master node-role.kubernetes.io/master-
node/k8s-master untainted
[root@k8s-master etcd]# kubectl taint nodes k8s-node1 node-role.kubernetes.io/master-
node/k8s-node1 untainted
[root@k8s-master etcd]#
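An alternative to untainting the whole master, when only a few pods need to land there, is to let those pods tolerate the taint. A sketch of the pod-spec fragment (the dashboard-metrics-scraper deployment later in this post uses the same pattern):

```yaml
# Fragment of a pod template spec; add under spec.template.spec of a
# Deployment/DaemonSet to allow scheduling onto a tainted master node.
tolerations:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
```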
Setting up MetalLB
1. Config file: config.yaml
The key part is addresses below: an IP range MetalLB may hand out.
config.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: my-ip-space
      protocol: layer2
      addresses:
      - 192.168.10.133-192.168.10.134
To change the IP range later, just edit the ConfigMap and apply it again; MetalLB picks up the update automatically.
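If one range is not enough, the MetalLB v0.7 config format also accepts several pools and CIDR notation. A sketch where the second pool's name and range are hypothetical:

```yaml
address-pools:
- name: my-ip-space
  protocol: layer2
  addresses:
  - 192.168.10.133-192.168.10.134
- name: extra-pool             # hypothetical second pool
  protocol: layer2
  addresses:
  - 192.168.10.140/32          # CIDR notation also works
```

A Service can then request a specific pool via the metallb.universe.tf/address-pool annotation; by default any pool may be used.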
Create the namespace metallb-system:
[root@k8s-master metallb]#
[root@k8s-master metallb]# kubectl create ns metallb-system
namespace/metallb-system created
[root@k8s-master metallb]#
2. Installation manifest: metallb.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: metallb-system
  name: controller
  labels:
    app: metallb
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: metallb-system
  name: speaker
  labels:
    app: metallb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metallb-system:controller
  labels:
    app: metallb
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
  resources: ["services/status"]
  verbs: ["update"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metallb-system:speaker
  labels:
    app: metallb
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: metallb-system
  name: config-watcher
  labels:
    app: metallb
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metallb-system:controller
  labels:
    app: metallb
subjects:
- kind: ServiceAccount
  name: controller
  namespace: metallb-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metallb-system:controller
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metallb-system:speaker
  labels:
    app: metallb
subjects:
- kind: ServiceAccount
  name: speaker
  namespace: metallb-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metallb-system:speaker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: metallb-system
  name: config-watcher
  labels:
    app: metallb
subjects:
- kind: ServiceAccount
  name: controller
- kind: ServiceAccount
  name: speaker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: config-watcher
---
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  namespace: metallb-system
  name: speaker
  labels:
    app: metallb
    component: speaker
spec:
  selector:
    matchLabels:
      app: metallb
      component: speaker
  template:
    metadata:
      labels:
        app: metallb
        component: speaker
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "7472"
    spec:
      serviceAccountName: speaker
      terminationGracePeriodSeconds: 0
      hostNetwork: true
      containers:
      - name: speaker
        image: metallb/speaker:v0.7.3
        imagePullPolicy: IfNotPresent
        args:
        - --port=7472
        - --config=config
        env:
        - name: METALLB_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        ports:
        - name: monitoring
          containerPort: 7472
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop:
            - all
            add:
            - net_raw
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  namespace: metallb-system
  name: controller
  labels:
    app: metallb
    component: controller
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: metallb
      component: controller
  template:
    metadata:
      labels:
        app: metallb
        component: controller
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "7472"
    spec:
      serviceAccountName: controller
      terminationGracePeriodSeconds: 0
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534  # nobody
      containers:
      - name: controller
        image: metallb/controller:v0.7.3
        imagePullPolicy: IfNotPresent
        args:
        - --port=7472
        - --config=config
        ports:
        - name: monitoring
          containerPort: 7472
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - all
          readOnlyRootFilesystem: true
3. Deploy
[root@k8s-master metallb]# vim metallb.yaml
[root@k8s-master metallb]# kubectl apply -f config.yaml
configmap/config created
[root@k8s-master metallb]# kubectl apply -f metallb.yaml
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
daemonset.apps/speaker created
deployment.apps/controller created
[root@k8s-master metallb]#
Check the result:
[root@k8s-master metallb]# kubectl get po -n metallb-system -owide
NAME                          READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
controller-7cc9c87cfb-fn6wh   1/1     Running   0          92s   10.244.1.85      k8s-node1    <none>           <none>
speaker-np4qr                 1/1     Running   0          92s   192.168.10.133   k8s-master   <none>           <none>
speaker-w2wb7                 1/1     Running   0          92s   192.168.10.134   k8s-node1    <none>           <none>
[root@k8s-master metallb]#
Example: deploy the Dashboard web UI with a LoadBalancer Service. Most other write-ups use NodePort, which really is not advisable.
Official deployment docs: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
The manifest below is modified from the official one: mainly the RBAC permissions, and the Service type is changed to LoadBalancer on port 7443. Note that the kubernetes-dashboard namespace must exist before applying (kubectl create ns kubernetes-dashboard).
dashboard.yaml contents (apply with kubectl apply -f dashboard.yaml):
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
  - port: 7443
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: LoadBalancer
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
  verbs: ["get", "update", "delete"]
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster", "dashboard-metrics-scraper"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
  verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: kubernetesui/dashboard:v2.0.0-beta4
        imagePullPolicy: Always
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
        - --auto-generate-certificates
        - --namespace=kubernetes-dashboard
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
  - port: 8000
    targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      containers:
      - name: dashboard-metrics-scraper
        image: kubernetesui/metrics-scraper:v1.0.1
        ports:
        - containerPort: 8000
          protocol: TCP
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: 8000
          initialDelaySeconds: 30
          timeoutSeconds: 30
        volumeMounts:
        - mountPath: /tmp
          name: tmp-volume
      serviceAccountName: kubernetes-dashboard
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      volumes:
      - name: tmp-volume
        emptyDir: {}
The result is below: the dashboard UI can now be reached at https://192.168.10.133:7443 (the 7443:32765/TCP column shows that a LoadBalancer Service still allocates a backing NodePort).
[root@k8s-master k8s]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
dashboard-metrics-scraper   ClusterIP      10.108.208.131   <none>           8000/TCP         6m38s
kubernetes-dashboard        LoadBalancer   10.100.184.70    192.168.10.133   7443:32765/TCP   6m38s
[root@k8s-master k8s]#
For now the login page has to be opened in Firefox, because other browsers reject the self-signed certificate; once Ingress terminates TLS later on, this restriction goes away.
Open: https://192.168.10.133:7443
Choose "Accept the Risk and Continue"; the login form then asks for a token.
Getting the login token
[root@k8s-node1 .kube]# kubectl get secret -n kubernetes-dashboard
NAME TYPE DATA AGE
default-tls-cert kubernetes.io/tls 2 18m
default-token-n8kvc kubernetes.io/service-account-token 3 44m
kubernetes-dashboard-certs Opaque 0 44m
kubernetes-dashboard-csrf Opaque 1 44m
kubernetes-dashboard-key-holder Opaque 2 44m
kubernetes-dashboard-token-t8bz5 kubernetes.io/service-account-token 3 44m
[root@k8s-node1 .kube]# kubectl describe secret kubernetes-dashboard-token-t8bz5 -n kubernetes-dashboard
Name: kubernetes-dashboard-token-t8bz5
Namespace: kubernetes-dashboard
Labels:       <none>
Annotations: kubernetes.io/service-account.name: kubernetes-dashboard
kubernetes.io/service-account.uid: f96e760d-fe04-11e9-96d3-000c29a4e4b2
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi10OGJ6NSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImY5NmU3NjBkLWZlMDQtMTFlOS05NmQzLTAwMGMyOWE0ZTRiMiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.hyKSFITlSWyZn-TmJimQiWhUOsO38yNGnq-k3IREBhxWRjwx-Y-OjkCd1RUaRgW-ocGqYHrKqXWMsv1_Nv9UR1-CIAfdNzFwkb_RTf2UVIB6C098WizSTJeUzodUsGJDPh9QhWnSrZIFbOkKxzjll2mFEnhvnbmZil_VNYRo-Oi0rGLcKdChCkfq7RWinZL4xGlH8g3xbktuFGHSxrxHVr7If5yhSms82qD4WA5ePiJbDRIZHdBUQJM53VprG9CrzRFLMmWYOPlnf5CnSoQWbT9zgDGRGMCU04rZXRKRbvGw1pGbVHK2PKSmesddw_iVJDfRBA5o-MzOgozunsl7JQ
[root@k8s-node1 .kube]#
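The describe step above can also be scripted. A sketch, assuming the secret name from the listing: kubectl stores the token base64-encoded in the secret's data, so the jsonpath lookup (which needs a live cluster) is piped through base64 -d. The decode step itself is shown running on a dummy value:

```shell
# One-liner against a live cluster (secret name taken from the listing above):
#   kubectl -n kubernetes-dashboard get secret kubernetes-dashboard-token-t8bz5 \
#     -o jsonpath='{.data.token}' | base64 -d
# The stored field is base64; decoding works like this (dummy value for illustration):
echo 'dG9rZW4tdmFsdWU=' | base64 -d   # prints: token-value
```

Paste the decoded token into the dashboard login form.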
Login successful: