Data model: Prometheus stores all data as time series: streams of timestamped values belonging to the same metric name and the same set of labels (key/value pairs).
Metrics and labels: every time series is uniquely identified by its metric name and a set of labels (key/value pairs); labels give Prometheus its multi-dimensional data model.
Instances and jobs: in Prometheus, an endpoint that can be scraped is called an instance, usually corresponding to a single process. A collection of instances with the same purpose (for example, processes replicated for scalability or availability) is called a job.
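As a concrete illustration of the model above, a single time series is written as a metric name plus a label set; the label values here are examples, not taken from a real cluster:

```
node_cpu_seconds_total{cpu="0", mode="idle", instance="192.168.1.63:9100", job="kubernetes-node"}
```

Every distinct combination of label values yields a separate time series under the same metric name.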
Prometheus server
Retrieval scrapes metrics from active targets.
Storage persists the collected samples to disk.
PromQL is the query language module provided by Prometheus.
Exporters
Prometheus supports many exporters. An exporter collects metrics and exposes them for the Prometheus server to scrape; any program that provides monitoring data to the Prometheus server can be called an exporter.
Client Library
Client libraries instrument application code. When Prometheus scrapes an instance's HTTP endpoint, the client library returns the current state of all tracked metrics to the Prometheus server.
Alertmanager
After receiving alerts from the Prometheus server, Alertmanager deduplicates, groups, and routes them to the configured receivers. Common receivers include email, WeChat, DingTalk, and Slack.
Grafana
Monitoring dashboards; visualizes the monitoring data.
Pushgateway
Targets can push their data to the Pushgateway, and the Prometheus server then scrapes the Pushgateway.

# Prometheus + Grafana Setup
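As an illustration of the push side (the hostname is a placeholder; the URL path follows the standard Pushgateway push API), a target can push a sample like this:

```
echo "some_metric 3.14" | curl --data-binary @- http://<pushgateway-host>:9091/metrics/job/some_job
```

The pushed sample then appears on the Pushgateway's /metrics endpoint, where Prometheus scrapes it like any other target.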
Based on Kubernetes v1.23.1; for other versions, adjust resource apiVersions and configuration accordingly.
kubectl create ns monitor-sa # create the namespace
docker load -i node-exporter.tar.gz # load the image
cat node-exporter.yaml # the YAML file
Deploying node-exporter as a DaemonSet puts one Pod on every node to collect data. node-exporter.yaml is as follows:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitor-sa
  labels:
    name: node-exporter
spec:
  selector:
    matchLabels:
      name: node-exporter
  template:
    metadata:
      labels:
        name: node-exporter
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true          # share the host's network, PID, and IPC namespaces
      containers:
      - name: node-exporter
        image: prom/node-exporter:v0.16.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9100    # the container exposes port 9100
        resources:
          requests:
            cpu: 0.15
        securityContext:
          privileged: true       # run in privileged mode
        args:
        - --path.procfs
        - /host/proc
        - --path.sysfs
        - /host/sys
        - --collector.filesystem.ignored-mount-points
        - '^/(sys|proc|dev|host|etc)($|/)'
        volumeMounts:            # mount host directories to collect host metrics
        - name: dev
          mountPath: /host/dev
        - name: proc
          mountPath: /host/proc
        - name: sys
          mountPath: /host/sys
        - name: rootfs
          mountPath: /rootfs
      tolerations:               # tolerate the default master taint so Pods can schedule there
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
      volumes:                   # volume definitions
      - name: proc
        hostPath:
          path: /proc
      - name: dev
        hostPath:
          path: /dev
      - name: sys
        hostPath:
          path: /sys
      - name: rootfs
        hostPath:
          path: /
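The --collector.filesystem.ignored-mount-points regular expression can be sanity-checked locally; any mount point under /sys, /proc, /dev, /host, or /etc matches it and is skipped by the filesystem collector, while other paths are kept:

```shell
# Same regex as the DaemonSet argument; grep -E uses the same extended-regex syntax
regex='^/(sys|proc|dev|host|etc)($|/)'
echo "/proc/sys/fs" | grep -Eq "$regex" && echo ignored   # matches -> ignored
echo "/var/lib/docker" | grep -Eq "$regex" || echo kept   # no match -> kept
```

Note that the argument in the YAML above must not carry an extra layer of double quotes, or node-exporter would treat the quote characters as part of the pattern.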
kubectl apply -f node-exporter.yaml # apply the YAML file
kubectl get pods -n monitor-sa # check that the Pods are running
curl http://<node-ip>:9100/metrics | grep node_cpu_seconds
kubectl create serviceaccount monitor -n monitor-sa # create the service account
kubectl create clusterrolebinding monitor-clusterrolebinding -n monitor-sa --clusterrole=cluster-admin --serviceaccount=monitor-sa:monitor # grant it permissions
mkdir /data # create the data directory on the node that will run the Prometheus server
chmod 777 /data # open up the directory permissions
Configure Prometheus through a ConfigMap; the YAML file is as follows:
cat prometheus-cfg.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus-config
  namespace: monitor-sa
data:
  prometheus.yml: |
    global:                        # global settings
      scrape_interval: 15s         # how often to scrape
      scrape_timeout: 10s          # scrape timeout
      evaluation_interval: 1m      # how often to evaluate rules (should be longer than the scrape interval, or a single failure can fire multiple alerts)
    scrape_configs:                # scrape configuration (static configs or service discovery)
    - job_name: 'kubernetes-node'  # each job is one scrape task
      kubernetes_sd_configs:       # Kubernetes service discovery
      - role: node                 # discover nodes through the HTTP port exposed by the kubelet
      relabel_configs:             # relabeling rules
      - source_labels: [__address__]   # source label: the target address
        regex: '(.*):10250'
        replacement: '${1}:9100'
        target_label: __address__
        action: replace            # rewrites the matched ip:10250 to ip:9100
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)   # labels matching this expression are kept
    - job_name: 'kubernetes-node-cadvisor'   # scrape cAdvisor: the kubelet's /metrics/cadvisor endpoint reports container resource usage
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    - job_name: 'kubernetes-apiserver'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
kubectl apply -f prometheus-cfg.yaml # apply the ConfigMap
Deploy Prometheus via a Deployment. Replace the nodeName value with the hostname of the node that will run the Prometheus server (the same node where /data was just created). The YAML file is as follows:
cat prometheus-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-server
  namespace: monitor-sa
  labels:
    app: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
      component: server
  template:
    metadata:
      labels:
        app: prometheus
        component: server
      annotations:
        prometheus.io/scrape: 'false'  # this Pod is not discovered and monitored by Prometheus; other Pods can set this annotation to 'true' to be picked up automatically via service discovery
    spec:
      nodeName: s2
      serviceAccountName: monitor      # use this service account so the container may read cluster data
      containers:
      - name: prometheus               # container name
        image: prom/prometheus:v2.27.1 # image
        imagePullPolicy: IfNotPresent  # image pull policy
        command:                       # command run at container start
        - prometheus
        - --config.file=/etc/prometheus/prometheus.yml
        - --storage.tsdb.path=/prometheus   # where TSDB data is stored
        - --storage.tsdb.retention=720h     # how long old data is kept
        - --web.enable-lifecycle            # enable hot reload
        ports:                         # exposed ports
        - containerPort: 9090
          protocol: TCP                # protocol
        volumeMounts:                  # volumes mounted into the container
        - mountPath: /etc/prometheus   # mount point
          name: prometheus-config      # which volume (matches the volumes section below)
        - mountPath: /prometheus/
          name: prometheus-storage-volume
      volumes:                         # volume definitions
      - name: prometheus-config        # name
        configMap:                     # data comes from a ConfigMap
          name: prometheus-config      # the ConfigMap's name
      - name: prometheus-storage-volume
        hostPath:
          path: /data
          type: Directory
kubectl apply -f prometheus-deploy.yaml # apply the Deployment
kubectl get pods -n monitor-sa # check that the Deployment is running
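Because the container starts with --web.enable-lifecycle, configuration changes can later be picked up without restarting the Pod by POSTing to Prometheus's standard /-/reload endpoint (the address below is a placeholder for the Pod or Service address):

```
curl -X POST http://<prometheus-address>:9090/-/reload
```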
cat prometheus-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: monitor-sa
  labels:
    app: prometheus
spec:
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    protocol: TCP
  selector:
    app: prometheus
    component: server
kubectl apply -f prometheus-svc.yaml # apply the Service
kubectl get svc -n monitor-sa # find the NodePort mapped on the host
Open the Prometheus web UI in a browser:
http://<node-ip>:<node-port>/graph
If typing a query into the search bar and clicking Execute returns data, the Prometheus server is working, as shown below:
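Two queries worth pasting into the search bar one at a time: the first lists the raw CPU counters collected by the node-exporter deployed above, and the second is standard PromQL that turns them into a per-second idle-CPU rate over the last five minutes:

```promql
node_cpu_seconds_total
rate(node_cpu_seconds_total{mode="idle"}[5m])
```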
On the node that will run Grafana, create the data directory and open up its permissions:
mkdir /var/lib/grafana/ -p
chmod 777 /var/lib/grafana/
Deploy Grafana via a Deployment. Replace the nodeName value with the hostname of the node that will run Grafana (the same node where the directory was just created). The YAML file is as follows:
cat grafana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: grafana
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      nodeName: s2   # which node to schedule onto
      containers:
      - name: grafana
        image: grafana/grafana:7.5.7
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        - mountPath: /var/lib/grafana/
          name: lib
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
        # The following env variables are required to make Grafana accessible via
        # the kubernetes api-server proxy. On production clusters, we recommend
        # removing these env variables, setup auth for grafana, and expose the grafana
        # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
      - name: lib
        hostPath:
          path: /var/lib/grafana/
          type: DirectoryOrCreate
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana
  type: NodePort
kubectl apply -f grafana.yaml # apply the YAML file
kubectl get pods -n kube-system | grep monitor # check that the Pod is Running
kubectl get svc -n kube-system | grep grafana # look up the front-end Service
Open Grafana in a browser:
http://<node-ip>:<node-port>
On the Grafana page, click to add a data source.
Choose Prometheus and fill in the configuration screen as follows:
When done, click Save & Test at the bottom left; "Data source is working" means Grafana has successfully connected to the Prometheus data source.
Download a dashboard template from:
https://grafana.com/dashboards?dataSource=prometheus&search=kubernetes
After downloading, import it: click the + on the left, click Import, click Upload json file, select the template you just downloaded, choose the data source, and click Import; the monitoring data appears.
Create the RBAC objects; the YAML file is as follows:
cat kube-state-metrics-rbac.yaml
---
apiVersion: v1                 # API version
kind: ServiceAccount           # resource type: service account
metadata:                      # metadata
  name: kube-state-metrics     # name
  namespace: kube-system       # namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole              # resource type: cluster role
metadata:
  name: kube-state-metrics
rules:
- apiGroups: [""]
  resources: ["nodes", "pods", "services", "resourcequotas", "replicationcontrollers", "limitranges", "persistentvolumeclaims", "persistentvolumes", "namespaces", "endpoints"]
  verbs: ["list", "watch"]
- apiGroups: ["extensions"]
  resources: ["daemonsets", "deployments", "replicasets"]
  verbs: ["list", "watch"]
- apiGroups: ["apps"]
  resources: ["statefulsets"]
  verbs: ["list", "watch"]
- apiGroups: ["batch"]
  resources: ["cronjobs", "jobs"]
  verbs: ["list", "watch"]
- apiGroups: ["autoscaling"]
  resources: ["horizontalpodautoscalers"]
  verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-state-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-state-metrics
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
  namespace: kube-system
kubectl apply -f kube-state-metrics-rbac.yaml # apply the YAML file
Install kube-state-metrics via a Deployment; the YAML file is as follows:
cat kube-state-metrics-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kube-state-metrics
  template:
    metadata:
      labels:
        app: kube-state-metrics
    spec:
      serviceAccountName: kube-state-metrics
      containers:
      - name: kube-state-metrics
        image: quay.io/coreos/kube-state-metrics:v1.9.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
kubectl apply -f kube-state-metrics-deploy.yaml # apply the YAML file
kubectl get pods -n kube-system -l app=kube-state-metrics # check that it deployed
cat kube-state-metrics-svc.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  name: kube-state-metrics
  namespace: kube-system
  labels:
    app: kube-state-metrics
spec:
  ports:
  - name: kube-state-metrics
    port: 8080
    protocol: TCP
  selector:
    app: kube-state-metrics
kubectl apply -f kube-state-metrics-svc.yaml
kubectl get svc -n kube-system | grep kube-state-metrics # check the Service
kube-state-metrics is now set up; import a Kubernetes cluster-state dashboard template into Grafana to see the data.
Some template download links: https://download.csdn.net/download/weixin_45826416/85018153
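The kube-state-metrics target can also be queried directly in the Prometheus web UI; for example, this standard kube-state-metrics metric lists Pods by lifecycle phase:

```promql
kube_pod_status_phase{phase="Running"}
```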
Checking node-exporter: curl http://<node-ip>:9100/metrics
Possible problems: abnormal Pod status (image, node resources, tolerations), port conflicts.
Checking the Prometheus server: query a keyword in the web UI and see whether results come back.
Possible problems: service-account permissions, data-directory permissions, Pod status, port conflicts.
Checking Grafana: is data displayed after importing a template?
Possible problems: directory permissions, data-source settings, clock synchronization, the JSON template itself.
Using a 163 mailbox as an example: log in to your mailbox and click "Settings" → "POP3/SMTP/IMAP".
Enable the IMAP/SMTP service and send the verification code as prompted; on success an "authorization password" is shown. Save it: the alerting configuration below needs it.
Manage the Alertmanager configuration through a ConfigMap; alertmanager-cm.yaml is as follows:
kind: ConfigMap
apiVersion: v1
metadata:
  name: alertmanager
  namespace: monitor-sa
data:
  alertmanager.yml: |-             # Alertmanager configuration file
    global:
      resolve_timeout: 1m
      smtp_smarthost: 'smtp.163.com:25'       # sender's SMTP server
      smtp_from: '1814553****@163.com'        # sender's address
      smtp_auth_username: '1814553****'       # sender's account name (not the full address)
      smtp_auth_password: 'LAYXLXRZGFUBWOMZ'  # sender's authorization password (obtained above)
      smtp_require_tls: false
    route:                          # alert routing policy
      group_by: [alertname]         # which label to group by
      group_wait: 10s               # wait time for a group (alerts in the same group within 10s are sent together)
      group_interval: 10s           # interval between two groups of alerts
      repeat_interval: 10m          # interval before a still-firing alert is repeated
      receiver: default-receiver    # which receiver to use
    receivers:
    - name: 'default-receiver'      # receiver name (matches the route above)
      email_configs:                # email receiver settings
      - to: '[email protected]'   # the address that receives the alerts
        send_resolved: true         # also notify when an alert is resolved
kubectl apply -f alertmanager-cm.yaml # apply the configuration
Manage the Prometheus configuration through a ConfigMap. Adjust the IP addresses in the static configs for the Kubernetes components to your environment. The file is long, so only representative parts are explained below; download and adapt the full YAML yourself.
File download: prometheus-alertmanager-cfg.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus-config
  namespace: monitor-sa
data:
  prometheus.yml: |
    rule_files:
    - /etc/prometheus/rules.yml    # alerting rules file
    alerting:                      # where alerts are sent
      alertmanagers:
      - static_configs:
        - targets: ["localhost:9093"]
    global:
      scrape_interval: 15s
      scrape_timeout: 10s
      evaluation_interval: 1m
    scrape_configs:
    - job_name: 'kubernetes-node'  # monitor k8s nodes via service discovery
      kubernetes_sd_configs:
      - role: node
      relabel_configs:             # relabeling rules
      - source_labels: [__address__]
        regex: '(.*):10250'
        replacement: '${1}:9100'
        target_label: __address__
        action: replace
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
    ...
    ...
  rules.yml: |                     # alerting rules
    groups:                        # rule groups
    - name: example                # group name
      rules:                       # rule definitions
      - alert: KubeProxyCpuUsageOver80Percent   # alert name (must be a valid identifier)
        expr: rate(process_cpu_seconds_total{job=~"kubernetes-kube-proxy"}[1m]) * 100 > 80   # expression (written in PromQL)
        for: 2s                    # how long the expression must hold before firing
        labels:
          severity: warning        # severity level
        annotations:               # message shown when the alert fires
          description: "CPU usage of the {{$labels.job}} component on {{$labels.instance}} exceeds 80%"
kubectl delete -f prometheus-cfg.yaml # remove the old configuration
kubectl apply -f prometheus-alertmanager-cfg.yaml # apply the new configuration
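The rules.yml format above generalizes to other alerts. As an illustrative sketch (the alert name and 80% threshold are made up; node_filesystem_avail_bytes and node_filesystem_size_bytes are standard node-exporter metrics), a disk-usage rule could be added to the same group:

```yaml
# Hypothetical extra rule for rules.yml: fire when a filesystem is over 80% full
- alert: NodeFilesystemAlmostFull
  expr: (1 - node_filesystem_avail_bytes / node_filesystem_size_bytes) * 100 > 80
  for: 5m
  labels:
    severity: warning
  annotations:
    description: "Filesystem {{$labels.mountpoint}} on {{$labels.instance}} is over 80% full"
```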
Delete the previously installed Prometheus first, because Prometheus and Alertmanager will now be packaged into the same Pod.
kubectl delete -f prometheus-deploy.yaml
Create an etcd-certs secret, which the Prometheus deployment needs:
kubectl -n monitor-sa create secret generic etcd-certs --from-file=/etc/kubernetes/pki/etcd/server.key --from-file=/etc/kubernetes/pki/etcd/server.crt --from-file=/etc/kubernetes/pki/etcd/ca.crt
Deploy Prometheus and Alertmanager via a Deployment. Adjust the nodeName value for your environment. prometheus-alertmanager-deploy.yaml is as follows:
---
apiVersion: apps/v1              # API version
kind: Deployment                 # resource type
metadata:                        # metadata
  name: prometheus-server        # Deployment name
  namespace: monitor-sa          # namespace
  labels:
    app: prometheus              # Deployment labels
spec:
  replicas: 1                    # number of replicas
  selector:
    matchLabels:                 # select Pods with these labels
      app: prometheus
      component: server
  template:                      # Pod template
    metadata:
      labels:                    # Pod labels, matching the selector above
        app: prometheus
        component: server
      annotations:               # Pod annotations
        prometheus.io/scrape: 'false'   # not discovered by Prometheus
    spec:
      nodeName: s2               # schedule onto the node named s2 (adjust for your environment)
      serviceAccountName: monitor   # service account
      containers:                # container list
      - name: prometheus
        image: prom/prometheus:v2.2.1
        imagePullPolicy: IfNotPresent
        command:
        - "/bin/prometheus"
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus"
        - "--storage.tsdb.retention=24h"
        - "--web.enable-lifecycle"
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/prometheus
          name: prometheus-config
        - mountPath: /prometheus/
          name: prometheus-storage-volume
        - name: k8s-certs
          mountPath: /var/run/secrets/kubernetes.io/k8s-certs/etcd/
      - name: alertmanager
        image: prom/alertmanager:v0.14.0
        imagePullPolicy: IfNotPresent
        args:
        - "--config.file=/etc/alertmanager/alertmanager.yml"
        - "--log.level=debug"
        ports:
        - containerPort: 9093
          protocol: TCP
          name: alertmanager
        volumeMounts:
        - name: alertmanager-config
          mountPath: /etc/alertmanager
        - name: alertmanager-storage
          mountPath: /alertmanager
        - name: localtime
          mountPath: /etc/localtime   # mount the host timezone into the container (avoids skewed time-series timestamps)
      volumes:
      - name: prometheus-config
        configMap:
          name: prometheus-config
      - name: prometheus-storage-volume
        hostPath:
          path: /data
          type: Directory
      - name: k8s-certs
        secret:
          secretName: etcd-certs
      - name: alertmanager-config
        configMap:
          name: alertmanager
      - name: alertmanager-storage
        hostPath:
          path: /data/alertmanager
          type: DirectoryOrCreate
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
Create the Pod:
kubectl apply -f prometheus-alertmanager-deploy.yaml
Check whether Prometheus deployed successfully:
kubectl get pods -n monitor-sa | grep prometheus
If the Pod status shows Running, Prometheus deployed successfully. alertmanager-svc.yaml is as follows:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: prometheus
    kubernetes.io/cluster-service: 'true'
  name: alertmanager
  namespace: monitor-sa
spec:
  ports:
  - name: alertmanager
    nodePort: 30066
    port: 9093
    protocol: TCP
    targetPort: 9093
  selector:
    app: prometheus
  sessionAffinity: None
  type: NodePort
kubectl apply -f alertmanager-svc.yaml # apply it
kubectl get svc -n monitor-sa # check the Service
The output shows the NodePort mapped on the host; access the service at ip:port.
Note: run the following carefully in production; it can break the cluster.
Open the Prometheus web UI and click Status -> Targets; you will see the following:
kubernetes-controller-manager and kubernetes-schedule both fail to connect to their ports; fix this as follows:
vim /etc/kubernetes/manifests/kube-scheduler.yaml
Make the following changes:
Change --bind-address=127.0.0.1 to --bind-address=192.168.1.63
Under the httpGet: fields, change the host from 127.0.0.1 to 192.168.1.63
Delete --port=0
(192.168.1.63 is the IP of the Kubernetes control-plane node.)
Restart the kubelet on each node:
systemctl restart kubelet
kube-proxy's metrics port 10249 listens on 127.0.0.1 by default and needs to bind to the node address instead. Modify it as follows; in production it is safer to make this change when installing Kubernetes:
kubectl edit configmap kube-proxy -n kube-system
Change the metricsBindAddress line to metricsBindAddress: 0.0.0.0:10249
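After the edit, the relevant fragment of the kube-proxy ConfigMap's config.conf (a KubeProxyConfiguration object; other keys omitted) reads:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
metricsBindAddress: 0.0.0.0:10249   # was 127.0.0.1; bind the metrics endpoint to all addresses
```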
Then restart the kube-proxy Pods:
kubectl get pods -n kube-system | grep kube-proxy | awk '{print $1}' | xargs kubectl delete pods -n kube-system
Register a WeChat Work account at https://work.weixin.qq.com/.
Go to application management and create an application named "wechat". After creation you will see the following:
Record the AgentId and Secret; they are needed in the configuration below.
Change the alert receiver in alertmanager-cm.yaml:
vim alertmanager-cm.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: alertmanager
  namespace: monitor-sa
data:
  alertmanager.yml: |-             # Alertmanager configuration file
    global:
      resolve_timeout: 1m
      smtp_smarthost: 'smtp.163.com:25'       # sender's SMTP server
      smtp_from: '1814553****@163.com'        # sender's address
      smtp_auth_username: '1814553****'       # sender's account name (not the full address)
      smtp_auth_password: 'LAYXLXRZGFUBWOMZ'  # sender's authorization password (obtained above)
      smtp_require_tls: false
    route:                          # alert routing policy
      group_by: [alertname]         # which label to group by
      group_wait: 10s               # wait time for a group (alerts in the same group within 10s are sent together)
      group_interval: 10s           # interval between two groups of alerts
      repeat_interval: 10m          # interval before a still-firing alert is repeated
      receiver: prometheus          # which receiver to use
    receivers:
    - name: 'prometheus'            # receiver name (matches the route above)
      wechat_configs:               # WeChat receiver settings
      - corp_id: "wwa82df90a693*****"   # enterprise ID
        to_user: '@all'             # send the alert to everyone
        agent_id: 1000002           # the AgentId recorded above
        api_secret: "xPte8Jw6g1P****"   # the Secret recorded above
After changing any Prometheus configuration file, apply it with kubectl in this order:
kubectl delete -f alertmanager-cm.yaml # remove the old Alertmanager configuration
kubectl apply -f alertmanager-cm.yaml # apply the new one
kubectl delete -f prometheus-alertmanager-cfg.yaml # remove the old Prometheus configuration
kubectl apply -f prometheus-alertmanager-cfg.yaml # apply the new one
kubectl delete -f prometheus-alertmanager-deploy.yaml # remove the old Deployment
kubectl apply -f prometheus-alertmanager-deploy.yaml # create the new Deployment
First you need a DingTalk account and a DingTalk group.
In the desktop DingTalk client, create a custom robot in the group; see the documentation at:
https://ding-doc.dingtalk.com/doc#/serverapi2/qf2nxq
https://developers.dingtalk.com/document/app/custom-robot-access
My steps:
Group settings -> Group assistant -> Add robot -> Custom -> Add
View the robot's webhook and token:
Open the newly created "test" robot to enter its settings screen, which shows the following:
On the Kubernetes control-plane node, download the plugin: https://download.csdn.net/download/weixin_45826416/85309639
tar zxvf prometheus-webhook-dingtalk-0.3.0.linux-amd64.tar.gz
cd prometheus-webhook-dingtalk-0.3.0.linux-amd64
Start the DingTalk alerting plugin:
nohup ./prometheus-webhook-dingtalk --web.listen-address="0.0.0.0:8060" --ding.profile="cluster1=https://oapi.dingtalk.com/robot/send?access_token=******" &
# cluster1 is the profile name; the value is the DingTalk webhook you just copied
Back up the original alertmanager-cm.yaml:
cp alertmanager-cm.yaml alertmanager-cm.yaml.bak
Edit alertmanager-cm.yaml so it reads:
kind: ConfigMap
apiVersion: v1
metadata:
  name: alertmanager
  namespace: monitor-sa
data:
  alertmanager.yml: |-
    global:
      resolve_timeout: 1m
      smtp_smarthost: 'smtp.163.com:25'       # sender's SMTP server
      smtp_from: '1814553****@163.com'        # sender's address
      smtp_auth_username: '1814553****'       # sender's account name (not the full address)
      smtp_auth_password: 'LAYXLXRZGFUBWOMZ'  # sender's authorization password (obtained above)
      smtp_require_tls: false
    route:                          # alert routing policy
      group_by: [alertname]         # which label to group by
      group_wait: 10s               # wait time for a group (alerts in the same group within 10s are sent together)
      group_interval: 10s           # interval between two groups of alerts
      repeat_interval: 10m          # interval before a still-firing alert is repeated
      receiver: dingding            # which receiver to use
    receivers:
    - name: 'dingding'              # matches the receiver set above (a mismatch keeps the Pod from starting)
      webhook_configs:
      - url: 'http://192.168.1.63:8060/dingtalk/cluster1/send'
        send_resolved: true         # also notify when an alert is resolved
After changing any Prometheus configuration file, apply it with kubectl in this order:
kubectl delete -f alertmanager-cm.yaml # remove the old Alertmanager configuration
kubectl apply -f alertmanager-cm.yaml # apply the new one
kubectl delete -f prometheus-alertmanager-cfg.yaml # remove the old Prometheus configuration
kubectl apply -f prometheus-alertmanager-cfg.yaml # apply the new one
kubectl delete -f prometheus-alertmanager-deploy.yaml # remove the old Deployment
kubectl apply -f prometheus-alertmanager-deploy.yaml # create the new Deployment