k8s Spring Boot production practice (HA deployment, QPS-based autoscaling, Prometheus monitoring)

Everything below comes from hands-on practice and is running in production.

Overview

1. k8s Spring Boot Eureka: HA configuration, deployment, tuning
2. k8s Spring Boot microservices: HA configuration, deployment, tuning
3. Add the Prometheus dependency to Spring Boot and expose metrics
4. Install prometheus-operator and prometheus-adapter (custom metrics) via Helm
5. Configure Prometheus and the adapter to compute application QPS
6. Configure the k8s HPA (HorizontalPodAutoscaler)
7. Display the monitoring data in Grafana

1. k8s Spring Boot Eureka: HA configuration, deployment, tuning

In short: to cluster Eureka, the instances must register with each other. Deploying them as a StatefulSet makes this easy, because every pod gets a stable DNS name.
Eureka server configuration:

eureka:
  server:
    enable-self-preservation: false      # evict dead instances promptly instead of preserving them
    eviction-interval-timer-in-ms: 5000  # run the eviction task every 5s
  client:
    fetch-registry: true
    register-with-eureka: true           # the servers register with each other
    serviceUrl:
      defaultZone: ${EUREKA_URL}
  instance:
    prefer-ip-address: true
    lease-renewal-interval-in-seconds: 3      # heartbeat every 3s
    lease-expiration-duration-in-seconds: 10  # expire an instance after 10s without a heartbeat

EUREKA_URL lists every instance address, so a single value works for all servers:

EUREKA_URL="http://eureka-0.eureka.default.svc.cluster.local:8761/eureka/,http://eureka-1.eureka.default.svc.cluster.local:8761/eureka/"
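
The manifests below inject this value through a ConfigMap named app-configmap. The original post does not show it; a minimal sketch (key name assumed from the envFrom reference) would be:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-configmap          # referenced by envFrom in the StatefulSet below
  namespace: default
data:
  EUREKA_URL: "http://eureka-0.eureka.default.svc.cluster.local:8761/eureka/,http://eureka-1.eureka.default.svc.cluster.local:8761/eureka/"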

Eureka StatefulSet configuration:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: eureka
  name: eureka
  namespace: default
spec:
  replicas: 2
  serviceName: eureka
  podManagementPolicy: "Parallel"
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
      annotations:   # lets Prometheus scrape this pod; requires the micrometer-registry-prometheus dependency
        prometheus.io/path: "/actuator/prometheus"
        prometheus.io/port: "8761"
        prometheus.io/scrape: "true"
    spec:
      containers:
      - image: eureka:latest              # image name
        imagePullPolicy: IfNotPresent
        name: eureka
        envFrom:
        - configMapRef:
            name: app-configmap           # environment variables
        env:
        - name: MEMORY
          value: "800"
        resources:
          limits:
            cpu: "0"
            memory: 1000Mi
          requests:
            cpu: "0"
            memory: 1000Mi
        readinessProbe:
          tcpSocket:
            port: 8761
          failureThreshold: 1
          initialDelaySeconds: 20
        livenessProbe:
          tcpSocket:
            port: 8761
          failureThreshold: 1
          initialDelaySeconds: 60
        volumeMounts:
        - mountPath: /logs
          name: log-vol
      volumes:
      - name: log-vol
        nfs:
          path: /logs/eureka          # logs from all replicas collected in one place
          server: 127.0.0.1           # NFS server address
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: registry-harbor
      restartPolicy: Always
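
serviceName: eureka requires a matching headless Service, or the per-pod DNS names like eureka-0.eureka.default.svc.cluster.local used above will not resolve. The post omits it; a minimal sketch:

apiVersion: v1
kind: Service
metadata:
  name: eureka
  namespace: default
  labels:
    app: eureka
spec:
  clusterIP: None   # headless: per-pod DNS records instead of a virtual IP
  ports:
  - name: http-8761
    port: 8761
  selector:
    app: eureka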

2. k8s Spring Boot microservices: HA configuration, deployment, tuning

A design question here: what address should a client use to register with Eureka?
a. Plain Deployment: by default the container hostname is registered, but k8s provides no DNS for Deployment pod hostnames, so other services cannot resolve it.
b. StatefulSet: pods register under their pod names, which do resolve. Seemingly perfect, until you add autoscaling: on scale-down the registration is not removed from the Eureka server immediately, so for a while requests keep landing on the destroyed pod and fail. (Fine if you never use HPA.)
c. Register the Service name: requests are distributed through the Service's endpoints. A destroyed pod's IP is removed from the endpoints immediately, so scale-down never routes traffic to a dead pod. On scale-up a request could reach a pod that has not fully started, but a readiness probe solves that.
Option c is the final choice.
Client configuration below. HOSTNAME is the k8s Service name, so the Service name is what gets registered with Eureka:

# service-registration settings
eureka:
  client:
    serviceUrl:
      defaultZone: ${EUREKA_URL}
  instance:
    prefer-ip-address: true
    instance-id: ${HOSTNAME}:${server.port}
    ip-address: ${HOSTNAME}   # register the Service name in place of the pod IP

Deployment and Service configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: client 
  name: client 
  namespace: default
spec:
  replicas: 2
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: client 
  template:
    metadata:
      labels:
        app: client 
      annotations:
        prometheus.io/path: "/actuator/prometheus"
        prometheus.io/port: "8024"
        prometheus.io/scrape: "true"
    spec:
      containers:
      - image: client:latest
        imagePullPolicy: IfNotPresent
        name: client 
        envFrom:
        - configMapRef:
            name: client-configmap
        env:
        - name: MEMORY
          value: "1300"
        - name: HOSTNAME
          value: client 
        resources:
          limits:
            cpu: "0"
            memory: 1500Mi
          requests:
            cpu: "0"
            memory: 1500Mi
        readinessProbe:
          tcpSocket:
            port: 8024
          failureThreshold: 1
          initialDelaySeconds: 60
        livenessProbe:
          tcpSocket:
            port: 8024
          failureThreshold: 1
          initialDelaySeconds: 90
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: registry-harbor
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: client 
  namespace: default
  labels:
    app: client  
spec:
  ports:
  - name: http-8024
    port: 8024
    protocol: TCP
    targetPort: 8024
  selector:
    app: client  
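
To verify that the Service only fronts ready pods (membership is gated by the readiness probe), compare the endpoint list with the pod list:

kubectl get endpoints client -n default   # only ready pod IPs are listed
kubectl get pods -l app=client -n default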

3. Add the Prometheus dependency to Spring Boot and expose metrics

With HA deployment done, the next step is autoscaling, which needs QPS data. Prometheus collects it; the configuration follows.

Add the pom dependency:

        
        <dependency>
            <groupId>io.micrometer</groupId>
            <artifactId>micrometer-registry-prometheus</artifactId>
        </dependency>

Add the configuration:

management:
  endpoints:
    web:
      exposure:
        include: '*'
  metrics:
    tags:
      application: ${spring.application.name}
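
Exposing every endpoint with include: '*' is convenient but broad. If only the scrape endpoint and health checks are needed, a narrower sketch:

management:
  endpoints:
    web:
      exposure:
        include: 'health,info,prometheus'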

Add a bean to the main class:

import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.actuate.autoconfigure.metrics.MeterRegistryCustomizer;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    // tag every metric with the application name so instances can be told apart
    @Bean
    MeterRegistryCustomizer<MeterRegistry> configurer(
            @Value("${spring.application.name}") String applicationName) {
        return registry -> registry.config().commonTags("application", applicationName);
    }
}

Once configured, hit http://localhost:8024/actuator/prometheus; it returns data like the following (note http_server_requests_seconds_count in particular, it is used later):

# HELP jvm_memory_max_bytes The maximum amount of memory in bytes that can be used for memory management
# TYPE jvm_memory_max_bytes gauge
jvm_memory_max_bytes{application="portal-eureka",area="heap",id="Par Survivor Space",} 2.7918336E7
jvm_memory_max_bytes{application="portal-eureka",area="nonheap",id="Compressed Class Space",} 1.073741824E9
jvm_memory_max_bytes{application="portal-eureka",area="heap",id="Tenured Gen",} 5.59284224E8
jvm_memory_max_bytes{application="portal-eureka",area="nonheap",id="Metaspace",} -1.0
jvm_memory_max_bytes{application="portal-eureka",area="heap",id="Par Eden Space",} 2.23739904E8
jvm_memory_max_bytes{application="portal-eureka",area="nonheap",id="Code Cache",} 2.5165824E8
# HELP logback_events_total Number of error level events that made it to the logs
# TYPE logback_events_total counter
logback_events_total{application="portal-eureka",level="info",} 804.0
http_server_requests_seconds_count{application="portal-eureka",exception="None",method="POST",outcome="SUCCESS",status="204",uri="root",} 8.0
http_server_requests_seconds_sum{application="portal-eureka",exception="None",method="POST",outcome="SUCCESS",status="204",uri="root",} 0.29997471
http_server_requests_seconds_count{application="portal-eureka",exception="None",method="GET",outcome="SUCCESS",status="200",uri="/actuator/prometheus",} 114.0
...

4. Install prometheus-operator and prometheus-adapter (custom metrics) via Helm

a. Fetch prometheus-operator:

helm fetch stable/prometheus-operator --version 8.13.0

b. Unpack:

tar -zxvf prometheus-operator-8.13.0.tgz

c. Edit values.yaml to add a pod scrape job (by default the operator chart has no kubernetes-pods scrape config):

vim prometheus-operator/values.yaml

Add (under prometheus.prometheusSpec in this chart):

    additionalScrapeConfigs: 
      - job_name: 'kubernetes-pods'
        honor_labels: false
        kubernetes_sd_configs:
        - role: pod
        tls_config:
          insecure_skip_verify: true
        relabel_configs:
        - source_labels: [__meta_kubernetes_namespace]
          action: replace
          target_label: namespace
        - source_labels: [__meta_kubernetes_pod_name]
          action: replace
          target_label: pod
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
          action: replace
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
          target_label: __address__
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
          action: replace
          target_label: __scheme__
          regex: (.+)

d. Install:

helm install -n prometheus-operator prometheus-operator --namespace monitor  # Helm 2: -n sets the release name, the last argument is the unpacked chart directory

e. Expose the port:

kubectl edit svc prometheus-operator-prometheus -nmonitor  # add externalIPs or switch to NodePort
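
Alternatively, port-forward and confirm the new kubernetes-pods job appears on the targets page (service name per the chart defaults above):

kubectl -n monitor port-forward svc/prometheus-operator-prometheus 9090:9090
# then open http://localhost:9090/targets and look for the kubernetes-pods job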

f. Install stable/prometheus-adapter:

helm fetch stable/prometheus-adapter --version 2.3.1

Unpack:

tar -zxvf prometheus-adapter-2.3.1.tgz

Point the adapter at the Prometheus server:

vim prometheus-adapter/values.yaml  # set prometheus.url to http://prometheus-operator-prometheus.monitor.svc

Install:

helm install -n prometheus-adapter prometheus-adapter --namespace monitor

Check the API:

kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
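
The raw response is a long JSON resource list; to see just the metric names (assuming jq is installed):

kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq -r '.resources[].name'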

5. Configure Prometheus and the adapter to compute application QPS

a. Add an adapter rule:

kubectl edit cm -nmonitor prometheus-adapter

Add a rule that converts http_server_requests_seconds_count into a per-second rate over a 5-minute window; the exposed metric name becomes http_server_requests_seconds (the _count suffix is stripped):

    - seriesQuery: '{namespace!="",__name__!~"^container_.*"}'
      seriesFilters:
      - isNot: .*_seconds_total
      resources:
        template: <<.Resource>>
      name:
        matches: ^(.*)_count$
        as: ""
      metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>}[5m])) by (<<.GroupBy>>)
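
For a pod-scoped query in namespace default, that templated metricsQuery expands to PromQL roughly like this (pod name hypothetical):

sum(rate(http_server_requests_seconds_count{namespace="default",pod="client-abc123"}[5m])) by (pod)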

b. Restart the adapter:

kubectl -nmonitor delete pod prometheus-adapter-...

c. Query the custom QPS metric:

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_server_requests_seconds"

6. Configure the k8s HPA (HorizontalPodAutoscaler)

Straight to the configuration; the target is an average of 50 requests/s per pod:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: client
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: client
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_server_requests_seconds
      target:
        type: AverageValue
        averageValue: 50

Check that the HPA picks up the metric:

kubectl get hpa
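
Once the metric is flowing, the output looks roughly like this (numbers illustrative); <unknown>/50 in TARGETS means the adapter is not serving the metric yet:

NAME     REFERENCE           TARGETS   MINPODS   MAXPODS   REPLICAS
client   Deployment/client   12/50     2         10        2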

7. Display the monitoring data in Grafana

Query (PromQL) to chart application QPS, excluding the /actuator/prometheus scrape endpoint itself:

sum(rate(http_server_requests_seconds_count{job="kubernetes-pods",uri!="/actuator/prometheus"}[5m])) by (application)
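
A per-URI breakdown is the same query additionally grouped by uri:

sum(rate(http_server_requests_seconds_count{job="kubernetes-pods",uri!="/actuator/prometheus"}[5m])) by (application, uri)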

Dashboard templates: https://grafana.com/grafana/dashboards?dataSource=prometheus&direction=asc&orderBy=name&search=spring

The Spring Boot 2.1 Statistics template is recommended; import it in Grafana by its dashboard id or by pasting the JSON.

