Work Notes - Installing kube-prometheus offline on k8s and creating HPA autoscaling

This covers installing the official kube-prometheus on k8s 1.19. Due to version constraints, the newest usable release is release-0.7; see the compatibility matrix at https://github.com/prometheus-operator/kube-prometheus. Several pitfalls were hit along the way, so they are recorded here.

Component versions

Component            Version       Notes
k8s                  v1.19.16      already installed, metrics-server included
kube-prometheus      release-0.7   https://github.com/prometheus-operator/kube-prometheus/archive/refs/tags/v0.7.0.tar.gz
prometheus-adapter   2.17.1        https://github.com/prometheus-community/helm-charts/releases/download/prometheus-adapter-2.17.1/prometheus-adapter-2.17.1.tgz
helm                 v3.5.4        already installed
centos               7.6

1. Installing kube-prometheus

1.1 Organizing the yaml files

# Download the package listed in the Notes column of the table above
# Extract it and sort the manifests into folders
tar -xzvf kube-prometheus-0.7.0.tar.gz
cd kube-prometheus-0.7.0/manifests/
# Create the folders
mkdir -p node-exporter alertmanager grafana kube-state-metrics prometheus serviceMonitor adapter
# Move the yaml files into their respective folders
mv *-serviceMonitor* serviceMonitor/
mv grafana-* grafana/
mv kube-state-metrics-* kube-state-metrics/
mv alertmanager-* alertmanager/
mv node-exporter-* node-exporter/
mv prometheus-adapter* adapter/
mv prometheus-* prometheus/

The final structure looks like this:

[root@master-node-137 manifests]# tree .
.
├── adapter
│   ├── prometheus-adapter-apiService.yaml
│   ├── prometheus-adapter-clusterRoleAggregatedMetricsReader.yaml
│   ├── prometheus-adapter-clusterRoleBindingDelegator.yaml
│   ├── prometheus-adapter-clusterRoleBinding.yaml
│   ├── prometheus-adapter-clusterRoleServerResources.yaml
│   ├── prometheus-adapter-clusterRole.yaml
│   ├── prometheus-adapter-configMap.yaml
│   ├── prometheus-adapter-deployment.yaml
│   ├── prometheus-adapter-roleBindingAuthReader.yaml
│   ├── prometheus-adapter-serviceAccount.yaml
│   └── prometheus-adapter-service.yaml
├── alertmanager
│   ├── alertmanager-alertmanager.yaml
│   ├── alertmanager-secret.yaml
│   ├── alertmanager-serviceAccount.yaml
│   └── alertmanager-service.yaml
├── grafana
│   ├── grafana-dashboardDatasources.yaml
│   ├── grafana-dashboardDefinitions.yaml
│   ├── grafana-dashboardSources.yaml
│   ├── grafana-deployment.yaml
│   ├── grafana-serviceAccount.yaml
│   └── grafana-service.yaml
├── kube-state-metrics
│   ├── kube-state-metrics-clusterRoleBinding.yaml
│   ├── kube-state-metrics-clusterRole.yaml
│   ├── kube-state-metrics-deployment.yaml
│   ├── kube-state-metrics-serviceAccount.yaml
│   └── kube-state-metrics-service.yaml
├── node-exporter
│   ├── node-exporter-clusterRoleBinding.yaml
│   ├── node-exporter-clusterRole.yaml
│   ├── node-exporter-daemonset.yaml
│   ├── node-exporter-serviceAccount.yaml
│   └── node-exporter-service.yaml
├── prometheus
│   ├── prometheus-clusterRoleBinding.yaml
│   ├── prometheus-clusterRole.yaml
│   ├── prometheus-prometheus.yaml
│   ├── prometheus-roleBindingConfig.yaml
│   ├── prometheus-roleBindingSpecificNamespaces.yaml
│   ├── prometheus-roleConfig.yaml
│   ├── prometheus-roleSpecificNamespaces.yaml
│   ├── prometheus-rules.yaml
│   ├── prometheus-serviceAccount.yaml
│   └── prometheus-service.yaml
├── serviceMonitor
│   ├── alertmanager-serviceMonitor.yaml
│   ├── grafana-serviceMonitor.yaml
│   ├── kube-state-metrics-serviceMonitor.yaml
│   ├── node-exporter-serviceMonitor.yaml
│   ├── prometheus-adapter-serviceMonitor.yaml
│   ├── prometheus-operator-serviceMonitor.yaml
│   ├── prometheus-serviceMonitorApiserver.yaml
│   ├── prometheus-serviceMonitorCoreDNS.yaml
│   ├── prometheus-serviceMonitorKubeControllerManager.yaml
│   ├── prometheus-serviceMonitorKubelet.yaml
│   ├── prometheus-serviceMonitorKubeScheduler.yaml
│   └── prometheus-serviceMonitor.yaml
└── setup
    ├── 0namespace-namespace.yaml
    ├── prometheus-operator-0alertmanagerConfigCustomResourceDefinition.yaml
    ├── prometheus-operator-0alertmanagerCustomResourceDefinition.yaml
    ├── prometheus-operator-0podmonitorCustomResourceDefinition.yaml
    ├── prometheus-operator-0probeCustomResourceDefinition.yaml
    ├── prometheus-operator-0prometheusCustomResourceDefinition.yaml
    ├── prometheus-operator-0prometheusruleCustomResourceDefinition.yaml
    ├── prometheus-operator-0servicemonitorCustomResourceDefinition.yaml
    ├── prometheus-operator-0thanosrulerCustomResourceDefinition.yaml
    ├── prometheus-operator-clusterRoleBinding.yaml
    ├── prometheus-operator-clusterRole.yaml
    ├── prometheus-operator-deployment.yaml
    ├── prometheus-operator-serviceAccount.yaml
    └── prometheus-operator-service.yaml

1.2 Configuring persistent storage

Prometheus mounts its data with an emptyDir by default. Data in an emptyDir shares the Pod's lifecycle, so if the Pod dies the data is gone; that is why the old data disappears after the Pod is rebuilt. We therefore switch it to persistent storage. This article assumes Longhorn is already installed and that longhorn is the default StorageClass.

1.2.1 Query the current StorageClass name

# Query the current StorageClass name
kubectl get sc
# The output looks similar to this
[root@master-node-137 manifests]# kubectl get sc
NAME                 PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
longhorn (default)   driver.longhorn.io   Delete          Immediate           true                   60d

1.2.2 Configure Prometheus persistence

The Prometheus custom resource is run by the operator as a StatefulSet, so the StorageClass can be configured directly in its spec. Append the persistence configuration at the end of the yaml below:

# Append to the end of manifests/prometheus/prometheus-prometheus.yaml
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}
  version: v2.22.1
  retention: 3d
  storage:
    volumeClaimTemplate:
      spec:
        storageClassName: longhorn
        resources:
          requests:
            storage: 5Gi
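
Once the prometheus/ manifests are applied in section 1.4, you can confirm that the volumeClaimTemplate produced bound PVCs; a quick check, assuming the default monitoring namespace:

kubectl get pvc -n monitoring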

1.2.3 Configure Grafana persistence (not verified; this Grafana is not used in this article)

# File manifests/grafana/grafana-pvc.yaml (new file)
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: grafana
  namespace: monitoring  # --- set the namespace to monitoring
spec:
  storageClassName: longhorn # --- specify the StorageClass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

# manifests/grafana/grafana-deployment.yaml
      serviceAccountName: grafana
      volumes:
      - name: grafana-storage       # new persistence configuration
        persistentVolumeClaim:
          claimName: grafana        # set to the name of the PVC created above
#      - emptyDir: {}               # comment out the old emptyDir volume
#        name: grafana-storage
      - name: grafana-datasources
        secret:
          secretName: grafana-datasources

1.3 Adjusting the Service port settings

1.3.1 Modify the Prometheus Service

Change the Prometheus Service type to NodePort and set the NodePort to 30010:

# manifests/prometheus/prometheus-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    prometheus: k8s
  name: prometheus-k8s
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    port: 9090
    targetPort: web
    nodePort: 30010
  selector:
    app: prometheus
    prometheus: k8s
  sessionAffinity: ClientIP

1.3.2 Modify the Grafana Service (not verified; this Grafana is not used)

# manifests/grafana/grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: http
    port: 3000
    targetPort: http
    nodePort: 32102
  selector:
    app: grafana

1.4 Installing prometheus-operator and the other components

In an offline environment there is one extra step: pre-pull the required images, push them to a local registry, and change the image addresses in the yaml files accordingly (a pull/retag/push sketch follows the file list below).

# Files and line numbers containing image references that need to be changed:
adapter/prometheus-adapter-deployment.yaml:28:        image: directxman12/k8s-prometheus-adapter:v0.8.2
node-exporter/node-exporter-daemonset.yaml:28:        image: quay.io/prometheus/node-exporter:v1.0.1
node-exporter/node-exporter-daemonset.yaml:60:        image: quay.io/brancz/kube-rbac-proxy:v0.8.0
alertmanager/alertmanager-alertmanager.yaml:9:  image: quay.io/prometheus/alertmanager:v0.21.0
kube-state-metrics/kube-state-metrics-deployment.yaml:26:        image: quay.io/coreos/kube-state-metrics:v1.9.7
kube-state-metrics/kube-state-metrics-deployment.yaml:33:        image: quay.io/brancz/kube-rbac-proxy:v0.8.0
kube-state-metrics/kube-state-metrics-deployment.yaml:47:        image: quay.io/brancz/kube-rbac-proxy:v0.8.0
grafana/grafana-deployment.yaml:22:        image: grafana/grafana:7.3.4
setup/prometheus-operator-deployment.yaml:26:        - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1
setup/prometheus-operator-deployment.yaml:27:        image: quay.io/prometheus-operator/prometheus-operator:v0.44.1
setup/prometheus-operator-deployment.yaml:46:        image: quay.io/brancz/kube-rbac-proxy:v0.8.0
prometheus/prometheus-prometheus.yaml:14:  image: quay.io/prometheus/prometheus:v2.22.1
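
A sketch of how an image can be mirrored into a local registry and a manifest pointed at it; the registry address harbor.test.lo:5000 matches the one used later in this article and is an assumption for your environment (use docker save / docker load in between if the registry is only reachable from the offline side):

# On a machine with internet access, pull, retag and push each image in the list above, e.g.:
docker pull quay.io/prometheus/prometheus:v2.22.1
docker tag quay.io/prometheus/prometheus:v2.22.1 harbor.test.lo:5000/prometheus/prometheus:v2.22.1
docker push harbor.test.lo:5000/prometheus/prometheus:v2.22.1
# Then rewrite the image reference in the corresponding yaml, e.g.:
sed -i 's#quay.io/prometheus/prometheus:v2.22.1#harbor.test.lo:5000/prometheus/prometheus:v2.22.1#' prometheus/prometheus-prometheus.yaml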

The installation steps are as follows:

cd kube-prometheus-0.7.0/manifests
kubectl apply -f setup/
# Check the Pods; wait until they are created before moving on:
kubectl get pods -n monitoring

# Then install the remaining components:
kubectl apply -f adapter/
kubectl apply -f alertmanager/
kubectl apply -f node-exporter/
kubectl apply -f kube-state-metrics/
kubectl apply -f grafana/
kubectl apply -f prometheus/
kubectl apply -f serviceMonitor/

# Check Pod status and wait until everything is Running:
kubectl get pods -n monitoring

Verification

Open http://<node-ip>:30010/targets and check that every target is healthy. At this point kube-prometheus has been installed on k8s successfully.
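
If you are unsure which address to use, the NodePort can be read straight from the Service:

kubectl get svc prometheus-k8s -n monitoring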

2. Creating HPA autoscaling

Kubernetes defines three monitoring metric APIs: Resource Metrics, Custom Metrics and External Metrics. Core (resource) metrics only cover node and pod CPU, memory and the like. In most cases core metrics are enough for HPA, but to scale on custom metrics such as request QPS or 5xx error counts, the custom metrics pipeline is needed. In Kubernetes, custom metrics are usually provided by Prometheus and aggregated into the apiserver via k8s-prometheus-adapter, giving the same effect as the core metrics pipeline (metrics-server). The kubectl queries after the list below illustrate the three API groups.

  • Resource Metrics are collected by metrics-server: measurements are gathered from the Kubelet, cAdvisor and so on, then served by metrics-server to consumers such as the Dashboard and the HPA controller.
  • Custom Metrics are implemented via Prometheus: the Prometheus Adapter exposes the custom.metrics.k8s.io API, through which any metric scraped by Prometheus can be used.
  • External Metrics target cloud scenarios, for example scaling based on the maximum connection count of an SLB.
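
Each of the three API groups can be inspected through the aggregated apiserver; a few illustrative queries (custom.metrics.k8s.io only responds once the adapter from section 2.2.3 is installed, and external.metrics.k8s.io only if an external metrics provider exists):

# Resource Metrics, served by metrics-server
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes | python -m json.tool
# Custom Metrics, served by prometheus-adapter
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
# External Metrics, served by a cloud-specific provider
kubectl get --raw /apis/external.metrics.k8s.io/v1beta1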

2.1 Autoscaling on Resource Metrics

2.1.1 Create an nginx deployment to simulate application autoscaling

# Confirm that metrics-server is installed and working
kubectl top nodes

# Create the hpa namespace
kubectl create ns hpa

# Create the nginx deployment; only 3m of CPU is requested
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
   name: myngx
   namespace: hpa
spec:
   type: NodePort
   ports:
   - name: myngx
     nodePort: 30080
     port: 3080
     targetPort: 80
   selector:
     app: myngx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myngx
  namespace: hpa
spec:
  replicas: 10
  strategy:
    rollingUpdate:
      maxSurge: 40%
      maxUnavailable: 40%
    type: RollingUpdate
  selector:
    matchLabels:
      app: myngx
  template:
    metadata:
      labels:
        app: myngx
    spec:
      containers:
      - image: harbor.test.lo:5000/dezhu/nginx:1.18.0
        name: myngx
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 3m
EOF

# Check the pods
kubectl get po -n hpa

2.1.2 Create the autoscaler

  • --min=1 : minimum of 1 pod
  • --max=20 : maximum of 20 pods
  • --cpu-percent=5 : scale out when CPU utilization exceeds 5%
# Create the autoscaler
kubectl autoscale deployment myngx --min=1 --max=20 --cpu-percent=5 -n hpa
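
# For reference, the command above is roughly equivalent to applying an
# autoscaling/v1 HPA manifest like this sketch (dry-run shown so it does not
# collide with the HPA already created):
cat << EOF | kubectl apply --dry-run=client -f -
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myngx
  namespace: hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myngx
  minReplicas: 1
  maxReplicas: 20
  targetCPUUtilizationPercentage: 5
EOF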

# Install the load-testing tool
yum install -y httpd-tools

# Send 10000 requests in total with a concurrency of 1000
ab -c 1000 -n 10000 http://10.99.73.137:30080/

# Watch the pod count climb toward 20, then fall back to 1 after the load stops
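# e.g. watch the HPA status and the pod count in a second terminal:
kubectl get hpa myngx -n hpa -w
kubectl get po -n hpa -w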

# Delete the hpa
kubectl delete hpa myngx -n hpa

2.2 Autoscaling on Custom Metrics (simulated with an application's custom metric)

2.2.1 Install the test application (an nginx chart bundled with nginx-exporter)

# Download the bitnami nginx chart (on a machine with access to the bitnami helm repo)
helm pull bitnami/nginx --version 13.2.11

# The offline images to pull in advance are:
bitnami/nginx:1.16.1-debian-10-r63
bitnami/nginx-exporter:0.11.0-debian-11-r12

# Install nginx into the dev namespace; adjust the namespace as needed
helm install nginx ./nginx --namespace dev --create-namespace \
--set global.imageRegistry=harbor.test.lo:5000 \
--set image.tag=1.16.1-debian-10-r63 \
--set service.type=NodePort \
--set service.nodePorts.http="30081" \
--set resources.requests.cpu=3m \
--set metrics.enabled=true \
--set metrics.serviceMonitor.enabled=true \
--set metrics.serviceMonitor.namespace=monitoring
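
After the install you can confirm the release, the pods in dev, and that the ServiceMonitor landed in the monitoring namespace; a quick check:

helm list -n dev
kubectl get pods,svc -n dev
kubectl get servicemonitor -n monitoring | grep nginx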

2.2.2 Confirm that Prometheus can access resources in the target namespace (dev); create a Role/RoleBinding if needed

# In the dev namespace, create a Role that grants read access to the namespace's
# API objects and bind it to the prometheus-k8s ServiceAccount, as sketched below.
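
A minimal sketch of such a Role/RoleBinding; the rules mirror the per-namespace role kube-prometheus ships for Prometheus, and prometheus-k8s in the monitoring namespace is the standard kube-prometheus ServiceAccount:

cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: prometheus-k8s
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prometheus-k8s
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: prometheus-k8s
subjects:
- kind: ServiceAccount
  name: prometheus-k8s
  namespace: monitoring
EOF

Afterwards, open http://<node-ip>:30010/targets; the nginx target should appear, and the query 'nginx_http_requests_total' should return data, confirming that Prometheus is scraping the nginx metrics.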

2.2.3 Install the upstream prometheus-adapter helm chart

The prometheus-adapter bundled with kube-prometheus does not provide the v1beta1.custom.metrics.k8s.io API (check with kubectl get apiservices), so it has to be reinstalled here.
The default Prometheus address is http://prometheus-k8s.monitoring.svc.cluster.local:9090/.
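
The chart's default rules already expose counters ending in _total as rate-based custom metrics, which is why nginx_http_requests_total later shows up as nginx_http_requests. If the rules need tailoring, the chart also accepts custom rules through its values; a minimal sketch, assuming a 2m rate window, which could be passed with an extra -f values-adapter.yaml:

# values-adapter.yaml (optional)
rules:
  custom:
  - seriesQuery: 'nginx_http_requests_total{namespace!="",pod!=""}'
    resources:
      overrides:
        namespace: {resource: "namespace"}
        pod: {resource: "pod"}
    name:
      matches: "^(.*)_total$"
      as: "${1}"
    metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'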

# First remove the adapter that ships with kube-prometheus
kubectl delete -f ./adapter/

# Pre-pull the offline image directxman12/k8s-prometheus-adapter:v0.8.2
helm install prometheus-adapter ./prometheus-adapter --namespace monitoring --create-namespace \
--set prometheus.url=http://prometheus-k8s.monitoring.svc.cluster.local \
--set image.repository=harbor.test.lo:5000/directxman12/k8s-prometheus-adapter \
--set image.tag=v0.8.2

# Verify: check that the custom metrics API is available
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1

# Verify: check that the nginx metrics can be queried; adjust the namespace and pod name as needed
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/dev/pods/nginx-946d5cc6d-hgw97/nginx_http_requests" | python -m json.tool

# Example output:
{
    "apiVersion": "custom.metrics.k8s.io/v1beta1",
    "items": [
        {
            "describedObject": {
                "apiVersion": "/v1",
                "kind": "Pod",
                "name": "nginx-946d5cc6d-hgw97",
                "namespace": "dev"
            },
            "metricName": "nginx_http_requests",
            "selector": null,
            "timestamp": "2022-12-08T02:55:59Z",
            "value": "266m"
        }
    ],
    "kind": "MetricValueList",
    "metadata": {
        "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/dev/pods/nginx-946d5cc6d-hgw97/nginx_http_requests"
    }
}


2.2.4 Create an HPA for the deployment and verify it

# Prepare the HPA manifest and apply it; a sketch is shown below, followed by a
# load test against http://<node-ip>:30081/ to verify the scaling.
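
A minimal HPA sketch against the custom metric; the target of 10 requests per second per pod and the autoscaling/v2beta2 API (available on k8s 1.19) are assumptions, and the scale target name nginx matches the helm release created in 2.2.1. Note that in the MetricValueList above, a value of 266m is Kubernetes quantity notation for 0.266 requests per second.

cat << EOF | kubectl apply -f -
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
  namespace: dev
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: nginx_http_requests
      target:
        type: AverageValue
        averageValue: "10"
EOF

# Generate load against the nginx NodePort (set to 30081 in 2.2.1) and watch the HPA react:
ab -c 1000 -n 10000 http://<node-ip>:30081/
kubectl get hpa nginx -n dev -w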
