OpenShift 4: HPA Based on Custom Application Metrics - http_requests

Background

  • For some applications, the metric that drives dynamic scaling is not just CPU/memory; sometimes a custom metric such as http_requests is needed.
  • In the current OpenShift 4 release (OCP 4.3), exposing custom application metrics for HPA is still a Technology Preview feature.
  • That does not stop us from implementing it ourselves: we can build it directly on top of the Prometheus Operator from the OpenShift 4 OperatorHub.

Implementation

  1. Create a namespace to run the Prometheus Operator
oc new-project ns1
  2. Deploy the Prometheus Operator through the OpenShift 4 UI
  • Deploy the Prometheus Operator from the UI
    • UI -> Administrator page -> OperatorHub -> Prometheus Operator -> Install
      • Make sure it is installed into our target namespace -> ns1
  • Deploy a Prometheus instance
    • UI -> Administrator page -> Installed Operators -> Prometheus Operator -> Create Instance
      • The default yaml offered by the UI is fine as-is (a rough sketch of the resulting CR follows this list)
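For reference, the CR created from the UI default looks roughly like the sketch below (a sketch only; the exact defaults vary by operator version). The key points are that serviceMonitorSelector picks up the ServiceMonitor we create later, and serviceAccountName matches the prometheus-k8s ServiceAccount referenced in a RoleBinding further down.
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  namespace: ns1
  labels:
    prometheus: prometheus
spec:
  replicas: 2
  serviceAccountName: prometheus-k8s
  serviceMonitorSelector: {}  # empty selector: match all ServiceMonitors in this namespace
  ruleSelector: {}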
  3. Create a route for the Prometheus instance
  • This is mainly so that, once the test application is deployed later, we can check in the Prometheus UI whether the setup works
    • Watch the Targets page there and run queries on the Graph page (see below for how to look up the route host)
oc expose svc prometheus-operated -n ns1
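Once the route exists, look up its hostname and open the Prometheus UI in a browser:
# Look up the route host for the Prometheus UI
oc get route prometheus-operated -n ns1 -o jsonpath='{.spec.host}'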
  4. Create the RBAC and resource objects that this Prometheus setup needs
  • Find the image used by the prometheus-adapter that ships with OpenShift 4; we reuse it for our custom adapter
oc get -n openshift-monitoring deploy/prometheus-adapter -o jsonpath="{..image}"
  • Use the yaml below to create the corresponding RBAC and objects; be sure to replace the image path with the one obtained in the previous step.
    • The yaml can be adapted to other environments; the main points to change are the namespace and the ConfigMap
      • The ConfigMap defines how metrics are fetched from Prometheus, how they map onto Kubernetes resources, and how they are exposed through the API
cat << EOF > deploy.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: custom-metrics-apiserver
  namespace: ns1
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-metrics-server-resources
rules:
- apiGroups:
  - custom.metrics.k8s.io
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-metrics-resource-reader
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  - pods
  - services
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: custom-metrics:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: custom-metrics-apiserver
  namespace: ns1
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: custom-metrics-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: custom-metrics-apiserver
  namespace: ns1
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: custom-metrics-resource-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: custom-metrics-resource-reader
subjects:
- kind: ServiceAccount
  name: custom-metrics-apiserver
  namespace: ns1
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hpa-controller-custom-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: custom-metrics-server-resources
subjects:
- kind: ServiceAccount
  name: horizontal-pod-autoscaler
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: adapter-config
  namespace: ns1
data:
  config.yaml: |
    rules:
    - seriesQuery: '{__name__=~"^container_.*",container_name!="POD",namespace!="",pod_name!=""}'
      seriesFilters: []
      resources:
        overrides:
          namespace:
            resource: namespace
          pod_name:
            resource: pod
      name:
        matches: ^container_(.*)_seconds_total$
        as: ""
      metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[1m])) by (<<.GroupBy>>)
    - seriesQuery: '{__name__=~"^container_.*",container_name!="POD",namespace!="",pod_name!=""}'
      seriesFilters:
      - isNot: ^container_.*_seconds_total$
      resources:
        overrides:
          namespace:
            resource: namespace
          pod_name:
            resource: pod
      name:
        matches: ^container_(.*)_total$
        as: ""
      metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[1m])) by (<<.GroupBy>>)
    - seriesQuery: '{__name__=~"^container_.*",container_name!="POD",namespace!="",pod_name!=""}'
      seriesFilters:
      - isNot: ^container_.*_total$
      resources:
        overrides:
          namespace:
            resource: namespace
          pod_name:
            resource: pod
      name:
        matches: ^container_(.*)$
        as: ""
      metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}) by (<<.GroupBy>>)
    - seriesQuery: '{namespace!="",__name__!~"^container_.*"}'
      seriesFilters:
      - isNot: .*_total$
      resources:
        template: <<.Resource>>
      name:
        matches: ""
        as: ""
      metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)
    - seriesQuery: '{namespace!="",__name__!~"^container_.*"}'
      seriesFilters:
      - isNot: .*_seconds_total
      resources:
        template: <<.Resource>>
      name:
        matches: ^(.*)_total$
        as: ""
      metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>)
    - seriesQuery: '{namespace!="",__name__!~"^container_.*"}'
      seriesFilters: []
      resources:
        template: <<.Resource>>
      name:
        matches: ^(.*)_seconds_total$
        as: ""
      metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>)
    resourceRules:
      cpu:
        containerQuery: sum(rate(container_cpu_usage_seconds_total{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>)
        nodeQuery: sum(rate(container_cpu_usage_seconds_total{<<.LabelMatchers>>, id='/'}[1m])) by (<<.GroupBy>>)
        resources:
          overrides:
            instance:
              resource: node
            namespace:
              resource: namespace
            pod_name:
              resource: pod
        containerLabel: container_name
      memory:
        containerQuery: sum(container_memory_working_set_bytes{<<.LabelMatchers>>}) by (<<.GroupBy>>)
        nodeQuery: sum(container_memory_working_set_bytes{<<.LabelMatchers>>,id='/'}) by (<<.GroupBy>>)
        resources:
          overrides:
            instance:
              resource: node
            namespace:
              resource: namespace
            pod_name:
              resource: pod
        containerLabel: container_name
      window: 1m
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.openshift.io/serving-cert-secret-name: prometheus-adapter-tls
  labels:
    name: prometheus-adapter
  name: prometheus-adapter
  namespace: ns1
spec:
  ports:
  - name: https
    port: 443
    targetPort: 6443
  selector:
    app: prometheus-adapter
  type: ClusterIP
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
spec:
  service:
    name: prometheus-adapter
    namespace: ns1
  group: custom.metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: prometheus-adapter
  name: prometheus-adapter
  namespace: ns1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-adapter
  template:
    metadata:
      labels:
        app: prometheus-adapter
      name: prometheus-adapter
    spec:
      serviceAccountName: custom-metrics-apiserver
      containers:
      - name: prometheus-adapter
        image: <replace with the image obtained in the previous step>
        args:
        - --secure-port=6443
        - --tls-cert-file=/var/run/serving-cert/tls.crt
        - --tls-private-key-file=/var/run/serving-cert/tls.key
        - --logtostderr=true
        - --prometheus-url=http://prometheus-operated.ns1.svc:9090/
        - --metrics-relist-interval=1m
        - --v=4
        - --config=/etc/adapter/config.yaml
        ports:
        - containerPort: 6443
        volumeMounts:
        - mountPath: /var/run/serving-cert
          name: volume-serving-cert
          readOnly: true
        - mountPath: /etc/adapter/
          name: config
          readOnly: true
        - mountPath: /tmp
          name: tmp-vol
      volumes:
      - name: volume-serving-cert
        secret:
          secretName: prometheus-adapter-tls
      - name: config
        configMap:
          name: adapter-config
      - name: tmp-vol
        emptyDir: {}
---
EOF

# Create the objects
oc apply -f deploy.yaml
  • Verify the objects created above, for example the APIService
oc get apiservice v1beta1.custom.metrics.k8s.io
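A few more sanity checks (jq is assumed to be available on the client):
# The adapter pod should be Running
oc get pods -n ns1 -l app=prometheus-adapter

# The APIService should report Available=True
oc get apiservice v1beta1.custom.metrics.k8s.io -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'

# The custom metrics API should answer; the resource list may stay empty until applications expose metrics
oc get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq .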

At this point the deployment work is essentially done; all that is left is to validate http_requests-based HPA with a test application.

  5. Deploy a test application
  • We use a new namespace for the application
    • Simply create everything from the yaml below, including the my-new-hpa namespace itself
cat << EOF > prometheus-example-app.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-new-hpa
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: prometheus-example-app
  name: prometheus-example-app
  namespace: my-new-hpa
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-example-app
  template:
    metadata:
      labels:
        app: prometheus-example-app
    spec:
      containers:
      - image: quay.io/brancz/prometheus-example-app:v0.2.0
        imagePullPolicy: IfNotPresent
        name: prometheus-example-app
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: prometheus-example-app
  name: prometheus-example-app
  namespace: my-new-hpa
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
    name: web
  selector:
    app: prometheus-example-app
  type: ClusterIP
EOF

# Deploy the application
oc apply -f prometheus-example-app.yaml
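Before wiring the application into Prometheus, it is worth confirming that it really serves the metric; this image exposes /metrics on port 8080:
# Port-forward to the service and look for the counter
oc port-forward -n my-new-hpa svc/prometheus-example-app 8080:8080 &
curl -s http://localhost:8080/metrics | grep http_requests_total
kill %1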
  • Create a ServiceMonitor
# Create the yaml file
cat << EOF > example-app-service-monitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: prometheus-example-monitor
  name: prometheus-example-monitor
  namespace: ns1
spec:
  endpoints:
  - interval: 30s
    port: web
    scheme: http
  namespaceSelector:
    matchNames:
    - my-new-hpa    
  selector:
    matchLabels:
      app: prometheus-example-app
EOF

# Create the ServiceMonitor
oc apply -f example-app-service-monitor.yaml
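The object can be confirmed right away, but the corresponding entry on the Prometheus Targets page will only turn healthy after the RoleBinding in the next step grants Prometheus access to the namespace:
oc get servicemonitor prometheus-example-monitor -n ns1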
  • Grant Prometheus access to the new namespace
echo "---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-new-hpa-rolebinding
  namespace: my-new-hpa
subjects:
  - kind: ServiceAccount
    name: prometheus-k8s
    namespace: ns1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view" | oc create -f -
  • Query the http_requests metric in the Prometheus UI, or verify it through the API
# Prometheus UI
http_requests_total{job="prometheus-example-app"}
# OpenShift API
oc get --raw /apis/custom.metrics.k8s.io/v1beta1/ | jq -r '.resources[] | select(.name | contains("pods/http"))'
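To see the per-pod values the HPA will consume, the metric can also be queried directly (standard custom.metrics.k8s.io path layout):
# Per-pod values of http_requests in the application namespace
oc get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/my-new-hpa/pods/*/http_requests" | jq .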
  • Create the HPA for our application (targetAverageValue: 300m below means an average of 0.3 requests per second per pod)
echo "---
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: pod-autoscale-custom
  namespace: my-new-hpa
spec:
  scaleTargetRef:
    kind: Deployment
    name: prometheus-example-app
    ## apps.openshift.io/v1 has no Deployment kind, so we use the extensions/v1beta1 API here
    apiVersion: extensions/v1beta1
  minReplicas: 1
  maxReplicas: 4
  metrics:
    - type: Pods
      pods:
        metricName: http_requests
        targetAverageValue: 300m" | oc create -f -
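Verify the HPA; the metric column may show <unknown> for the first minute or two while the adapter relists the series:
oc get hpa pod-autoscale-custom -n my-new-hpa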
  • Put load on the application and watch whether the number of instances scales out as http_requests pressure rises
oc expose service prometheus-example-app -n my-new-hpa

AUTOSCALE_ROUTE=$(oc get route prometheus-example-app -n my-new-hpa -o jsonpath='{ .spec.host}')

while true; do curl http://$AUTOSCALE_ROUTE; sleep .5; done

oc describe hpa pod-autoscale-custom -n my-new-hpa
oc get pods -n my-new-hpa

We expect the pod count to scale out to 4 as http_requests increases, and to fall back to 1 pod a while after the load stops.
