This article introduces Prometheus monitoring and shows how to monitor a Kubernetes cluster with node-exporter, Prometheus and Grafana. The idea is similar to the ELK/EFK stack: node-exporter collects metrics on each node and exposes them for Prometheus to scrape, Prometheus scrapes and stores the data, and Grafana presents the data to users as graphs in a web UI.
Before we start, it is worth understanding what Prometheus is.
Prometheus is an open-source monitoring and alerting system with a built-in time series database (TSDB), originally developed at SoundCloud. Since 2012 it has been adopted by many companies and organizations, has a very active developer and user community, and is now an independent open-source project. In 2016 Prometheus joined the CNCF (Cloud Native Computing Foundation) as the second project hosted by the foundation after Kubernetes. Its design draws on Google's internal monitoring systems, so it fits naturally with Kubernetes, which also originated at Google. Compared with an InfluxDB-based solution it performs better and ships with alerting built in. It is designed for large clusters and uses a pull model for data collection: an application only needs to expose a metrics endpoint and tell Prometheus about it. The diagram below shows the Prometheus architecture.
[Figure: Prometheus architecture diagram]
Prometheus is an open-source monitoring and alerting framework. All collected monitoring data is stored as metrics in its built-in time series database (TSDB): streams of timestamped values that share the same metric name and label set. Besides the stored time series, Prometheus can also produce temporary, derived time series as query results. It consists of several components.
The Prometheus components are almost all written in Go, which makes them easy to build and deploy; they have no special dependencies and basically run independently.
Note:
Because scraped data may occasionally be lost, Prometheus is not suitable for cases where the collected data must be 100% accurate. For recording time series data, however, it has significant query advantages, and it is a good fit for microservice architectures.
Data stored in Prometheus is a set of time series, each uniquely identified by a metric name and a set of labels (key/value pairs); different label sets are different time series.
Sample: an actual point of a time series, consisting of a float64 value and a millisecond-precision timestamp (metric + timestamp + sample value).
Metric name: has semantic meaning and describes what is measured, e.g. http_requests_total is the total number of HTTP requests. Metric names consist of ASCII letters, digits, underscores and colons, and must match the regular expression [a-zA-Z_:][a-zA-Z0-9_:]*.
Labels: give a time series additional dimensions of identification. For example http_requests_total{method="GET"} is the subset of HTTP requests that are GETs; with method="POST" it is a different time series. Label names consist of ASCII letters, digits and underscores, and must match the regular expression [a-zA-Z_][a-zA-Z0-9_]*.
Format: metric_name{label_name=label_value, ...}, for example http_requests_total{method="POST",endpoint="/api/tracks"}.
The Prometheus client libraries provide four main metric types:
Counter: a cumulative metric. Typical uses include:
number of requests
number of tasks completed
number of errors
...
For example:
querying promhttp_metric_handler_requests_total{code="200",instance="localhost:9090",job="prometheus"}
returns 8; querying again 10 seconds later returns 14.
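Counters are rarely read as raw totals; they are usually wrapped in rate() to get a per-second increase over a time window. A minimal sketch using the same self-monitoring metric as above (the 5m window is an arbitrary choice):
rate(promhttp_metric_handler_requests_total{code="200",instance="localhost:9090",job="prometheus"}[5m])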
Gauge: a regular metric. Typical uses include:
temperature
number of running goroutines
Its value can go up or down arbitrarily.
For example:
go_goroutines{instance="localhost:9090",job="prometheus"}
returns 147, and 10 seconds later returns 124.
Note:
goroutines are Go's lightweight, concurrently running functions; go_goroutines counts how many are currently running.
Note:
histogram: a bar chart showing how observed values are distributed across buckets.
Histogram: can be thought of as a bar chart of observations. Typical uses include:
request durations
response sizes
It samples observations, groups them into buckets and counts them, so the results can be aggregated.
For example:
querying go_gc_duration_seconds_sum{instance="localhost:9090",job="prometheus"}
returns a result like the following:
[Figure: Histogram metric query result]
Summary: similar to a Histogram. Typical uses include:
request durations
response sizes
It provides a count and a sum of the observed values.
It also provides precomputed quantiles, so results can be broken down by percentile.
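To make the difference concrete, here is a sketch of the series the two types expose; the metric names are standard Prometheus self-metrics, the sample values are made up:
# Histogram: per-bucket counts plus _sum and _count
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query",le="0.1"} 120
prometheus_http_request_duration_seconds_bucket{handler="/api/v1/query",le="+Inf"} 133
prometheus_http_request_duration_seconds_sum{handler="/api/v1/query"} 4.2
prometheus_http_request_duration_seconds_count{handler="/api/v1/query"} 133
# Summary: precomputed quantiles plus _sum and _count
go_gc_duration_seconds{quantile="0.5"} 0.000145
go_gc_duration_seconds_sum 0.12
go_gc_duration_seconds_count 870
# A percentile can be derived from a Histogram at query time, e.g. the 95th percentile:
histogram_quantile(0.95, rate(prometheus_http_request_duration_seconds_bucket[5m]))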
instance and job
instance:
a single scrape target, usually corresponding to one process.
job:
a collection of instances of the same type (mainly for scalability and reliability), for example several node exporter instances grouped under one job.
Note:
scrape: to pull metrics from a target.
When Prometheus scrapes a target, it automatically attaches some labels to the scraped time series so they can be told apart,
for example: instance and job.
The following real metrics illustrate the concepts above:
Example metrics
[Figure: example metrics]
As shown, these three metrics share the same name and are distinguished only by their handler label, so they are different time series.
These metrics only ever increase, so they are Counters, and all of them carry the instance and job labels.
Prometheus is written in Go and the compiled binaries have no third-party dependencies. Users only need to download the package for their platform, unpack it and add a basic configuration to start Prometheus Server.
Non-Docker users can find the latest Prometheus Server release at https://prometheus.io/download/:
export VERSION=2.14.0
curl -LO https://github.com/prometheus/prometheus/releases/download/v$VERSION/prometheus-$VERSION.linux-amd64.tar.gz
Unpack it and, if desired, add the Prometheus binaries to the system PATH:
tar -xzf prometheus-${VERSION}.linux-amd64.tar.gz -C /usr/local/
cd /usr/local/prometheus-${VERSION}.linux-amd64
After unpacking, the directory contains the default Prometheus configuration file prometheus.yml:
# my global config
global:
scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
# scrape_timeout is set to the global default (10s).
# Alertmanager configuration
alerting:
alertmanagers:
- static_configs:
- targets:
# - alertmanager:9093
# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
# - "first_rules.yml"
# - "second_rules.yml"
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
# The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
- job_name: 'prometheus'
# metrics_path defaults to '/metrics'
# scheme defaults to 'http'.
static_configs:
- targets: ['localhost:9090']
scrape_interval:
the sampling interval: with this setting every monitored target is scraped every 15 seconds. This is the data-collection frequency of Prometheus.
evaluation_interval:
the evaluation frequency for monitoring rules,
i.e. how often Prometheus evaluates its recording and alerting rules.
For example, if we define a rule that alerts when memory usage exceeds 70%, Prometheus will evaluate that rule every 15 seconds to check the memory usage. A minimal version of such a rule file is sketched below.
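As a minimal sketch of such a rule (file name, threshold and labels are assumptions; the file would be listed under rule_files in prometheus.yml):
# rules.yml
groups:
- name: example
  rules:
  - alert: HighMemoryUsage
    expr: (node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes) / node_memory_MemTotal_bytes * 100 > 70
    for: 2m
    labels:
      severity: warning
    annotations:
      summary: "{{ $labels.instance }} memory usage is above 70%"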
Alertmanager:
the Prometheus component used to manage and send alerts.
We will not cover Alertmanager in detail yet; it is not needed for now (Grafana 4.0 and later can send alerts by itself, and we will come back to Alertmanager later).
From here on we reach the important part of the Prometheus configuration: defining the scrape targets.
scrape_configs:
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090','localhost:9100']
First a job name is defined, then the monitored targets; the default configuration already contains the local Prometheus instance itself.
Further nodes to be monitored can be added here; a small example of such a configuration follows the notes below:
targets:
several targets can be listed side by side, separated by commas, as host:port pairs.
port:
usually the port of an exporter; here 9100 is in fact the default port of node_exporter.
With this, Prometheus can identify the monitored nodes from the configuration file and continuously scrape them.
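For example, a job that scrapes node_exporter on several machines could be written like this (the host names are placeholders):
  - job_name: 'node'
    static_configs:
      - targets: ['node01:9100', 'node02:9100', 'node03:9100']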
At this point a basic Prometheus setup is in place.
As a time series database, Prometheus stores the scraped data as files on the local disk. The default storage path is data/, so we create that directory first:
mkdir -p data
The local storage path can also be changed with the --storage.tsdb.path="data/" flag.
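For example, starting Prometheus with both the configuration file and the storage path given explicitly (both values shown are the defaults):
./prometheus --config.file=prometheus.yml --storage.tsdb.path="data/"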
Start the Prometheus server; by default it loads the prometheus.yml file in the current directory:
./prometheus
If everything is working you should see output like the following:
level=info ts=2018-10-23T14:55:14.499484Z caller=main.go:554 msg="Starting TSDB ..."
level=info ts=2018-10-23T14:55:14.499531Z caller=web.go:397 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2018-10-23T14:55:14.507999Z caller=main.go:564 msg="TSDB started"
level=info ts=2018-10-23T14:55:14.508068Z caller=main.go:624 msg="Loading configuration file" filename=prometheus.yml
level=info ts=2018-10-23T14:55:14.509509Z caller=main.go:650 msg="Completed loading of configuration file" filename=prometheus.yml
level=info ts=2018-10-23T14:55:14.509537Z caller=main.go:523 msg="Server is ready to receive web requests."
Docker users can start Prometheus Server directly from the official image:
docker run -p 9090:9090 -v /etc/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
Once started, the Prometheus UI is available at http://localhost:9090:
[Figure: Prometheus UI graph query]
Collecting host metrics with Node Exporter
In the Prometheus architecture, Prometheus Server does not monitor specific targets directly; its main tasks are collecting, storing and serving the data for queries. To monitor something such as host CPU usage we therefore need an Exporter. Prometheus periodically pulls monitoring samples from the HTTP endpoint the Exporter exposes (usually /metrics).
Exporter is a fairly open concept: it can be a standalone program running alongside the monitored target, or it can be built directly into the target. All that matters is that it exposes monitoring samples to Prometheus in the standard format.
To collect host-level metrics such as CPU, memory and disk usage, we can use Node Exporter.
Node Exporter is also written in Go and has no third-party dependencies; just download, unpack and run. The latest node exporter binaries are available at https://prometheus.io/download/.
curl -OL https://github.com/prometheus/node_exporter/releases/download/v0.18.1/node_exporter-0.18.1.linux-amd64.tar.gz
tar -xzf node_exporter-0.18.1.linux-amd64.tar.gz -C /usr/local/
Run node exporter:
cd /usr/local/node_exporter-0.18.1.linux-amd64
cp node_exporter /usr/local/bin/
node_exporter
Once it has started you should see output like:
INFO[0000] Listening on :9100 source="node_exporter.go:76"
Open http://localhost:9100/ to see the Node Exporter home page:
[Figure: Node Exporter home page]
Open http://localhost:9100/metrics to see all of the host metrics the node exporter currently exposes, for example:
[Figure: host monitoring metrics]
Each metric is preceded by a block of information like this:
# HELP node_cpu Seconds the cpus spent in each mode.
# TYPE node_cpu counter
node_cpu{cpu="cpu0",mode="idle"} 362812.7890625
# HELP node_load1 1m load average.
# TYPE node_load1 gauge
node_load1 3.0703125
HELP explains what the metric means and TYPE states its data type. In the example above, the comment for node_cpu tells us that this metric is the total time cpu0 has spent in the idle mode; CPU time only ever increases, and the type line confirms that node_cpu is a counter, which matches its meaning. node_load1, on the other hand, reflects the host's load average over the last minute; load rises and falls as system resources are used, so the value can increase or decrease, and the comment shows its type is gauge, again matching its meaning.
Depending on the host's operating system, you may see additional metrics on this page.
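A quick way to inspect one metric family from the command line, for example the load averages:
curl -s http://localhost:9100/metrics | grep "^node_load"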
To let Prometheus Server scrape this node exporter, modify the Prometheus configuration: edit prometheus.yml and add the following under scrape_configs:
scrape_configs:
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
# scrape node exporter metrics
- job_name: 'node'
static_configs:
- targets: ['localhost:9100']
Restart Prometheus Server.
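Before restarting it is worth validating the edited file with promtool, which ships in the Prometheus tarball; instead of a full restart, Prometheus can also reload its configuration on SIGHUP. A sketch:
./promtool check config prometheus.yml
kill -HUP $(pidof prometheus)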
Open http://localhost:9090 to reach Prometheus Server. Enter "up" and click Execute; you should see a result like this:
[Figure: Expression Browser]
If Prometheus can fetch data from the node exporter, the result will contain:
up{instance="localhost:9090",job="prometheus"} 1
up{instance="localhost:9100",job="node"} 1
where "1" means the target is healthy and "0" means it is not.
The Prometheus UI is Prometheus's built-in visual management interface. It gives users an easy view of the current configuration and the state of the monitoring jobs, and through the Graph panel they can run PromQL queries against the monitoring data in real time:
Generate some CPU load first:
cat /dev/urandom | md5sum
[Figure: Graph query]
Switch to the Graph panel to query specific metrics with PromQL expressions. For example, to look at how the host load changes, use the keyword node_load1.
This returns the host-load samples Prometheus has collected, shown in time order as a chart of the load over time:
[Figure: host load over time]
PromQL is Prometheus's own powerful query language. Besides querying by metric name, it provides a large number of built-in functions for further processing of time series. For example, the rate() function computes the per-second rate of change of samples over a time window, so it lets us approximate CPU utilisation from the accumulated CPU time:
rate(node_cpu_seconds_total[2m])
[Figure: CPU usage per CPU and mode]
To ignore which individual CPU the samples came from, drop the cpu label with the without modifier and aggregate:
avg without(cpu) (rate(node_cpu_seconds_total[2m]))
[Figure: CPU usage per mode]
To get the overall CPU utilisation, take the idle fraction and subtract it from 1:
1 - avg without(cpu) (rate(node_cpu_seconds_total{mode="idle"}[2m]))
[Figure: overall CPU usage]
PromQL makes it very convenient to query, filter, aggregate and otherwise compute over the data. With these expressions, metrics are no longer isolated values but can be combined into statements that express real business meaning.
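As one more example of such a composed expression, an approximate per-instance CPU utilisation percentage can be written as (a sketch):
100 * (1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[2m])))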
The Prometheus UI is good for quickly validating PromQL and for ad-hoc visualisation, but in most cases a monitoring system also needs long-lived dashboards. For that we can use a third-party visualisation tool such as Grafana, an open-source visualisation platform with full Prometheus support.
docker run -d -p 3000:3000 grafana/grafana
Open http://localhost:3000 to reach Grafana; the default credentials are admin/admin. The home page shows the getting-started guide: install, add a data source, create a dashboard, invite members, and install apps and plugins:
[Figure: Grafana getting-started guide]
Here we add Prometheus as the default data source: choose Prometheus as the data source type and set the Prometheus URL. If the configuration is correct, clicking "Add" shows a success message:
[Figure: adding Prometheus as a data source]
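If Grafana is managed declaratively rather than through the UI, the same data source can also be defined with a provisioning file; a minimal sketch, assuming Grafana 5 or later and that Prometheus is reachable at localhost:9090:
# /etc/grafana/provisioning/datasources/prometheus.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true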
With the data source in place we can create our visualisation dashboards. Grafana fully supports PromQL: add a dashboard, add a "Graph" panel to it, and under the panel's "Metrics" tab enter the PromQL query for the data to visualise:
[Figure: the first dashboard panel]
Click save and the first dashboard is created. Being open source, Grafana encourages users to share dashboards; https://grafana.com/dashboards hosts a large number of ready-to-use dashboards:
[Figure: community dashboards]
Grafana dashboards are shared as JSON; download and import these JSON files to use the predefined dashboards directly:
[Figure: an imported node exporter dashboard]
In the previous section we added the following to prometheus.yml so that Prometheus could scrape the metrics exposed by the node exporter.
scrape_configs:
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
- job_name: 'node'
static_configs:
- targets: ['localhost:9100']
When we need to collect different kinds of metrics (for example hosts, MySQL, Nginx), we just run the corresponding exporter and tell Prometheus Server the address of each exporter instance. In Prometheus, every HTTP endpoint that exposes monitoring samples is called an instance; the node exporter running on the current host, for example, is one instance.
A group of instances that serve the same collection purpose, or multiple replicas of the same collection process, are managed together as a job, e.g.:
* job: node
* instance 2: 1.2.3.4:9100
* instance 4: 5.6.7.8:9100
So far each job has used static configuration (static_configs) to define its targets. Besides statically configuring the instances of each job, Prometheus also integrates with DNS, Consul, EC2, Kubernetes and others to discover instances automatically and scrape them. A file-based discovery example is sketched below.
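As a simple illustration of discovery without static_configs, targets can be kept in an external file and picked up via file-based service discovery; a sketch (the file names are assumptions):
  - job_name: 'node'
    file_sd_configs:
      - files: ['targets/node-*.json']
        refresh_interval: 1m
# targets/node-01.json
[{"targets": ["1.2.3.4:9100", "5.6.7.8:9100"], "labels": {"env": "prod"}}]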
Besides querying the state of all instances with the "up" expression, the Targets page of the Prometheus UI shows all current scrape jobs and the state of every instance in each job:
[Figure: target list and status]
We can also open http://192.168.33.10:9090/targets to view all current jobs and the instances of each job directly in the Prometheus UI.
[Figure: Targets status]
Now we start the actual deployment. This assumes kube-dns or CoreDNS is already deployed in your Kubernetes cluster.
Operating system: CentOS Linux 7.5 64bit
Master node IP: 10.40.0.151/24
Node01 node IP: 10.40.0.152/24
Node02 node IP: 10.40.0.153/24
Pull the images required for monitoring on all nodes:
docker pull prom/node-exporter
docker pull prom/prometheus
docker pull grafana/grafana
mkdir k8s-prometheus
cat node-exporter.yaml
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: node-exporter
namespace: kube-system
labels:
k8s-app: node-exporter
spec:
template:
metadata:
labels:
k8s-app: node-exporter
spec:
containers:
- image: prom/node-exporter
name: node-exporter
ports:
- containerPort: 9100
protocol: TCP
name: http
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: node-exporter
name: node-exporter
namespace: kube-system
spec:
ports:
- name: http
port: 9100
nodePort: 31672
protocol: TCP
type: NodePort
selector:
k8s-app: node-exporter
kubectl create -f node-exporter.yaml
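After creating the DaemonSet, a quick way to confirm that one pod runs per node and that the NodePort answers (the node IP is a placeholder for one of your nodes):
kubectl get pods -n kube-system -l k8s-app=node-exporter -o wide
curl -s http://<node-ip>:31672/metrics | head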
mkdir prometheus
cat rbac-setup.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: prometheus
rules:
- apiGroups: [""]
resources:
- nodes
- nodes/proxy
- services
- endpoints
- pods
verbs: ["get", "list", "watch"]
- apiGroups:
- extensions
resources:
- ingresses
verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: prometheus
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: prometheus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: prometheus
subjects:
- kind: ServiceAccount
name: prometheus
namespace: kube-system
kubectl create -f prometheus/rbac-setup.yaml
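The permissions granted by this ClusterRoleBinding can be spot-checked with kubectl auth can-i, impersonating the service account; both commands below should answer "yes":
kubectl auth can-i list pods --as=system:serviceaccount:kube-system:prometheus
kubectl auth can-i list nodes --as=system:serviceaccount:kube-system:prometheus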
cat configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus-config
namespace: kube-system
data:
prometheus.yml: |
global:
scrape_interval: 15s
evaluation_interval: 15s
scrape_configs:
- job_name: 'kubernetes-apiservers'
kubernetes_sd_configs:
- role: endpoints
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: default;kubernetes;https
- job_name: 'kubernetes-nodes'
kubernetes_sd_configs:
- role: node
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
- job_name: 'kubernetes-cadvisor'
kubernetes_sd_configs:
- role: node
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
- job_name: 'kubernetes-service-endpoints'
kubernetes_sd_configs:
- role: endpoints
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
action: replace
target_label: __scheme__
regex: (https?)
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
action: replace
target_label: __address__
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
action: replace
target_label: kubernetes_name
- job_name: 'kubernetes-services'
kubernetes_sd_configs:
- role: service
metrics_path: /probe
params:
module: [http_2xx]
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
action: keep
regex: true
- source_labels: [__address__]
target_label: __param_target
- target_label: __address__
replacement: blackbox-exporter.example.com:9115
- source_labels: [__param_target]
target_label: instance
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
target_label: kubernetes_name
- job_name: 'kubernetes-ingresses'
kubernetes_sd_configs:
- role: ingress
relabel_configs:
- source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
action: keep
regex: true
- source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
regex: (.+);(.+);(.+)
replacement: ${1}://${2}${3}
target_label: __param_target
- target_label: __address__
replacement: blackbox-exporter.example.com:9115
- source_labels: [__param_target]
target_label: instance
- action: labelmap
regex: __meta_kubernetes_ingress_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_ingress_name]
target_label: kubernetes_name
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: kubernetes_pod_name
kubectl create -f prometheus/configmap.yaml
cat prometheus.deploy.yml
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
labels:
name: prometheus-deployment
name: prometheus
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
app: prometheus
template:
metadata:
labels:
app: prometheus
spec:
containers:
- image: prom/prometheus
name: prometheus
command:
- "/bin/prometheus"
args:
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus"
- "--storage.tsdb.retention=24h"
ports:
- containerPort: 9090
protocol: TCP
volumeMounts:
- mountPath: "/prometheus"
name: data
- mountPath: "/etc/prometheus"
name: config-volume
resources:
requests:
cpu: 100m
memory: 100Mi
limits:
cpu: 500m
memory: 2500Mi
serviceAccountName: prometheus
volumes:
- name: data
emptyDir: {}
- name: config-volume
configMap:
name: prometheus-config
kubectl create -f prometheus/prometheus.deploy.yml
cat prometheus.svc.yml
---
kind: Service
apiVersion: v1
metadata:
labels:
app: prometheus
name: prometheus
namespace: kube-system
spec:
type: NodePort
ports:
- port: 9090
targetPort: 9090
nodePort: 30003
selector:
app: prometheus
kubectl create -f prometheus/prometheus.svc.yml
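A quick check that the Prometheus pod and Service are up, and that the UI answers on the NodePort (the node IP is a placeholder); the /-/ready endpoint should report that Prometheus is ready:
kubectl get pods,svc -n kube-system | grep prometheus
curl -s http://<node-ip>:30003/-/ready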
mkdir grafana
cat grafana-deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: grafana-core
namespace: kube-system
labels:
app: grafana
component: core
spec:
replicas: 1
template:
metadata:
labels:
app: grafana
component: core
spec:
containers:
- image: grafana/grafana
name: grafana-core
imagePullPolicy: IfNotPresent
# env:
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 100m
memory: 100Mi
env:
# The following env variables set up basic auth with the default admin user and admin password.
- name: GF_AUTH_BASIC_ENABLED
value: "true"
- name: GF_AUTH_ANONYMOUS_ENABLED
value: "false"
# - name: GF_AUTH_ANONYMOUS_ORG_ROLE
# value: Admin
# does not really work, because of template variables in exported dashboards:
# - name: GF_DASHBOARDS_JSON_ENABLED
# value: "true"
readinessProbe:
httpGet:
path: /login
port: 3000
# initialDelaySeconds: 30
# timeoutSeconds: 1
volumeMounts:
- name: grafana-persistent-storage
mountPath: /var/lib/grafana/ # since Grafana 5.1 the data path changed from /var to /var/lib/grafana/
volumes:
- name: grafana-persistent-storage
emptyDir: {}
kubectl create -f grafana/grafana-deploy.yaml
cat grafana-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: grafana
namespace: kube-system
labels:
app: grafana
component: core
spec:
type: NodePort
ports:
- port: 3000
targetPort: 3000
nodePort: 30006
selector:
app: grafana
component: core
kubectl create -f grafana/grafana-svc.yaml
cat grafana-ing.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: grafana
namespace: kube-system
spec:
rules:
- host: k8s.grafana
http:
paths:
- path: /
backend:
serviceName: grafana
servicePort: 3000
kubectl create -f grafana/grafana-ing.yaml
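If an ingress controller is deployed in the cluster, the rule can be tested by sending a request with the expected Host header to the controller's address (a sketch; the controller IP is a placeholder):
curl -H "Host: k8s.grafana" http://<ingress-controller-ip>/login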
[root@k8s-node01 k8s-prometheus-bak1]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
prom/prometheus latest 7317640d555e 12 days ago 130MB
grafana/grafana latest 05d1bcf30d16 2 weeks ago 207MB
mirrorgooglecontainers/metrics-server-amd64 v0.3.6 9dd718864ce6 5 weeks ago 39.9MB
calico/node v3.3.7 3c0076aa43ee 3 months ago 75.3MB
calico/cni v3.3.7 1eea0201c5e0 3 months ago 75.4MB
k8s.gcr.io/kube-proxy v1.15.3 232b5c793146 3 months ago 82.4MB
k8s.gcr.io/kube-apiserver v1.15.3 5eb2d3fc7a44 3 months ago 207MB
k8s.gcr.io/kube-scheduler v1.15.3 703f9c69a5d5 3 months ago 81.1MB
k8s.gcr.io/kube-controller-manager v1.15.3 e77c31de5547 3 months ago 159MB
prom/node-exporter latest e5a616e4b9cf 5 months ago 22.9MB
quay.io/coreos/flannel v0.11.0-amd64 ff281650a721 9 months ago 52.6MB
k8s.gcr.io/coredns 1.3.1 eb516548c180 10 months ago 40.3MB
fluent/fluentd latest 9406ff63f205 11 months ago 38.3MB
k8s.gcr.io/etcd 3.3.10 2c4adeb21b4f 11 months ago 258MB
jcdemo/flaskapp latest 4f7a2cc79052 13 months ago 88.7MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 23 months ago 742kB
quay.io/coreos/flannel v0.9.1 2b736d06ca4c 2 years ago 51.3MB
lizhenliang/nfs-client-provisioner v2.0.0 9c93b4cfcfcc 2 years ago 52.8MB
siriuszg/hpa-example latest 978f0e9e0991 3 years ago 481MB
pilchard/hpa-example latest 1ef959421baf 4 years ago 481MB
[root@k8s-node01 k8s-prometheus]# ll
total 4
drwxr-xr-x 2 root root 81 Nov 24 18:54 grafana
-rw-r--r-- 1 root root 668 Nov 24 18:54 node-exporter.yaml
drwxr-xr-x 2 root root 106 Nov 24 18:54 prometheus
[root@k8s-node01 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
canal-6kc7l 3/3 Running 3 2d9h
canal-rmq57 3/3 Running 0 2d9h
canal-wv4hj 3/3 Running 0 2d9h
coredns-5c98db65d4-ddb98 1/1 Running 9 12d
coredns-5c98db65d4-rr2q8 0/1 Running 9 12d
etcd-k8s-node01 1/1 Running 8 18m
grafana-core-6cc587dcd9-m4z2n 1/1 Running 0 168m
kube-apiserver-k8s-node01 1/1 Running 8 12d
kube-controller-manager-k8s-node01 1/1 Running 6 12d
kube-flannel-ds-amd64-n9lp6 1/1 Running 0 10d
kube-flannel-ds-amd64-rcd7d 1/1 Running 3 12d
kube-flannel-ds-amd64-rtz5h 1/1 Running 1 12d
kube-proxy-2ghgz 1/1 Running 3 12d
kube-proxy-nz2c2 1/1 Running 0 10d
kube-proxy-r84jn 1/1 Running 1 12d
kube-scheduler-k8s-node01 1/1 Running 6 12d
kubernetes-dashboard-5c7687cf8-2kmqr 1/1 Running 4 12d
metrics-server-77cd5b5b96-g8lfk 1/1 Running 0 9d
node-exporter-hqwgr 1/1 Running 0 169m
node-exporter-s8ccx 1/1 Running 0 169m
prometheus-5999955985-l28bb 1/1 Running 0 13m
[root@k8s-node01 ~]# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
grafana                NodePort    10.106.112.45    <none>        3000:30006/TCP           170m
kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   12d
kubernetes-dashboard   ClusterIP   10.104.209.181   <none>        443/TCP                  12d
metrics-server         ClusterIP   10.98.87.36      <none>        443/TCP                  9d
node-exporter          NodePort    10.101.160.228   <none>        9100:31672/TCP           170m
prometheus             NodePort    10.110.48.143    <none>        9090:30003/TCP           15m
http://192.168.152.137:31672/metrics
[Figure: node exporter metrics exposed on the NodePort]
The Prometheus NodePort is 30003; opening http://192.168.152.137:30003/targets shows that Prometheus has successfully connected to the Kubernetes apiserver.
[Figure: Prometheus targets page]
Access Grafana through its NodePort; the default username and password are both admin.
[Figure: Grafana login and data source configuration]
You can import dashboard template 315 directly by entering its ID online, or download the corresponding JSON template and import it locally; the template is available at https://grafana.com/dashboards/315
[Figure: importing dashboard template 315]
Once the template has loaded, select the Prometheus data source instance.
[Figure: selecting the Prometheus data source]
Done: the monitoring dashboards are now visible.
[Figure: Kubernetes cluster monitoring dashboard]
Importing an existing dashboard like this is the recommended approach.
With Prometheus deployed as above, the default scrape jobs are kubernetes-apiservers, kubernetes-nodes, kubernetes-service-endpoints (CoreDNS, kube-state-metrics) and so on, all defined in prometheus/configmap.yaml. Other components such as kube-scheduler, kube-controller-manager, the kubelet, kube-proxy and etcd have to be added by hand.
Add the scrape configuration for the kubernetes-schedule, kubernetes-control-manager, kubernetes-kubelet and kubernetes-kube-proxy jobs manually in Prometheus; these connections do not use certificates.
For these components no certificate is needed; plain ip:port works, and the default path is /metrics.
Make sure the metrics of the four components can be fetched as shown below.
Scheduler metrics endpoint (the Scheduler listens on port 10251 by default):
# curl 172.16.60.241:10251/metrics
# curl 172.16.60.242:10251/metrics
# curl 172.16.60.243:10251/metrics
Controller-manager metrics endpoint (the ControllerManager listens on port 10252 by default):
# curl 172.16.60.241:10252/metrics
# curl 172.16.60.242:10252/metrics
# curl 172.16.60.243:10252/metrics
Kubelet metrics endpoint (the kubelet read-only port has no authentication and defaults to 10255; it is enabled as soon as the port is configured. 10250 is the kubelet https port and 10248 the healthz port):
# curl 172.16.60.244:10255/metrics
# curl 172.16.60.245:10255/metrics
# curl 172.16.60.246:10255/metrics
kube-proxy metrics endpoint (kube-proxy listens on port 10249 by default):
# curl 172.16.60.244:10249/metrics
# curl 172.16.60.245:10249/metrics
# curl 172.16.60.246:10249/metrics
The Prometheus configuration for these four components is therefore:
[root@k8s-master01 ~]# cd /opt/k8s/work/k8s-prometheus-grafana/prometheus/
[root@k8s-master01 prometheus]# vim configmap.yaml
.........
.........
      - job_name: 'kubernetes-schedule'          # job name
        scrape_interval: 5s                      # scrape interval for this job, overriding the global setting
        static_configs:
          - targets: ['172.16.60.241:10251','172.16.60.242:10251','172.16.60.243:10251']
      - job_name: 'kubernetes-control-manager'
        scrape_interval: 5s
        static_configs:
          - targets: ['172.16.60.241:10252','172.16.60.242:10252','172.16.60.243:10252']
      - job_name: 'kubernetes-kubelet'
        scrape_interval: 5s
        static_configs:
          - targets: ['172.16.60.244:10255','172.16.60.245:10255','172.16.60.246:10255']
      - job_name: 'kubernetes-kube-proxy'
        scrape_interval: 5s
        static_configs:
          - targets: ['172.16.60.244:10249','172.16.60.245:10249','172.16.60.246:10249']
Then apply the updated ConfigMap:
[root@k8s-master01 prometheus]# kubectl apply -f configmap.yaml
Then restart the pod, simply by deleting it: the pod is only removed from the node it is currently scheduled on, and the scheduler recreates it (possibly on another node), so a deleted pod comes back automatically.
[root@k8s-master01 prometheus]# kubectl get pods -n kube-system|grep "prometheus"
prometheus-6b96dcbd87-lwwv7        1/1     Running   0     44h
[root@k8s-master01 prometheus]# kubectl delete pods/prometheus-6b96dcbd87-lwwv7 -n kube-system
pod "prometheus-6b96dcbd87-lwwv7" deleted
Checking again after the deletion shows that the pod has been recreated, possibly on a different node; this can be treated as a pod restart.
[root@k8s-master01 prometheus]# kubectl get pods -n kube-system|grep "prometheus"
prometheus-6b96dcbd87-c2n59        1/1     Running   0     22s
Next, add the etcd job to Prometheus; unlike the four jobs above, etcd requires certificate authentication. Just as the default kubernetes-apiservers job in configmap.yaml maps the apiserver certificate and token into the container, the etcd certificates are stored in a secret, mounted into the Prometheus pod, and referenced from a kubernetes-etcd job. The full procedure for this cluster is walked through below.
For an etcd cluster, client certificate authentication over https is normally enabled for security, so for Prometheus to reach the etcd metrics we have to supply the matching certificates for verification.
Since this demo environment was set up with kubeadm, we can use kubectl to find the certificate paths etcd was started with:
[root@k8s-node01 ~]# kubectl get pods -n kube-system | grep etcd
etcd-k8s-node01 1/1 Running 2 104d
[root@k8s-node01 ~]# kubectl get pods -n kube-system etcd-k8s-node01 -o yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
kubernetes.io/config.hash: 577996eeaa262f41eb65d286b822f36c
kubernetes.io/config.mirror: 577996eeaa262f41eb65d286b822f36c
kubernetes.io/config.seen: "2019-11-24T19:35:12.404831627+08:00"
kubernetes.io/config.source: file
creationTimestamp: "2019-11-24T11:37:14Z"
labels:
component: etcd
tier: control-plane
name: etcd-k8s-node01
namespace: kube-system
resourceVersion: "535651"
selfLink: /api/v1/namespaces/kube-system/pods/etcd-k8s-node01
uid: 869238c6-fff7-4299-95a3-da17fb112776
spec:
containers:
- command:
- etcd
- --advertise-client-urls=https://192.168.152.137:2379
- --cert-file=/etc/kubernetes/pki/etcd/server.crt
- --client-cert-auth=true
- --data-dir=/var/lib/etcd
- --initial-advertise-peer-urls=https://192.168.152.137:2380
- --initial-cluster=k8s-node01=https://192.168.152.137:2380
- --key-file=/etc/kubernetes/pki/etcd/server.key
- --listen-client-urls=https://127.0.0.1:2379,https://192.168.152.137:2379
- --listen-peer-urls=https://192.168.152.137:2380
- --name=k8s-node01
- --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
- --peer-client-cert-auth=true
- --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
- --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
- --snapshot-count=10000
- --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
image: k8s.gcr.io/etcd:3.3.10
imagePullPolicy: IfNotPresent
livenessProbe:
exec:
command:
- /bin/sh
- -ec
- ETCDCTL_API=3 etcdctl --endpoints=https://0.0.0.0:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
--cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
get foo
failureThreshold: 8
initialDelaySeconds: 15
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 15
name: etcd
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/etcd
name: etcd-data
- mountPath: /etc/kubernetes/pki/etcd
name: etcd-certs
dnsPolicy: ClusterFirst
enableServiceLinks: true
hostNetwork: true
nodeName: k8s-node01
priority: 2000000000
priorityClassName: system-cluster-critical
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
operator: Exists
volumes:
- hostPath:
path: /etc/kubernetes/pki/etcd
type: DirectoryOrCreate
name: etcd-certs
- hostPath:
path: /var/lib/etcd
type: DirectoryOrCreate
name: etcd-data
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2019-11-24T11:55:09Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2019-11-24T12:05:33Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2019-11-24T12:05:33Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2019-11-24T11:55:09Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://26c1a6ee621766c176e14258b393f9f52c367d8d015974e3be5a98ddefadbe8b
image: k8s.gcr.io/etcd:3.3.10
imageID: docker://sha256:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d
lastState:
terminated:
containerID: docker://2d2f6160241a16152940ca12d76162e30bb293618045a8821d57a4dfaddd28fb
exitCode: 0
finishedAt: "2019-11-24T12:15:05Z"
reason: Completed
startedAt: "2019-11-24T12:13:36Z"
name: etcd
ready: true
restartCount: 16
state:
running:
startedAt: "2019-11-24T12:17:00Z"
hostIP: 192.168.152.137
phase: Running
podIP: 192.168.152.137
qosClass: BestEffort
startTime: "2019-11-24T11:55:09Z"
As shown above, the etcd certificates live under /etc/kubernetes/pki/etcd on the node, so we first store the required certificates in the cluster as a secret (run on the node where etcd runs):
[root@k8s-node01 ~]# kubectl -n kube-system create secret generic etcd-certs --from-file=/etc/kubernetes/pki/etcd/healthcheck-client.crt --from-file=/etc/kubernetes/pki/etcd/healthcheck-client.key --from-file=/etc/kubernetes/pki/etcd/ca.crt
secret/etcd-certs created
[root@k8s-master01 prometheus]# kubectl get secret -n kube-system|grep etcd-certs
etcd-certs Opaque 3 82s
[root@k8s-node01 ~]# systemctl restart kubelet.service
[root@k8s-node01 ~]# kubectl describe secret etcd-certs -n kube-system
Name: etcd-certs
Namespace: kube-system
Labels:       <none>
Annotations:  <none>
Type: Opaque
Data
====
ca.crt: 1017 bytes
healthcheck-client.crt: 1094 bytes
healthcheck-client.key: 1675 bytes
Modify prometheus.deploy.yaml to add the secret, i.e. mount the "etcd-certs" secret created above into the deployment through a volume:
[root@k8s-master01 prometheus]# cat prometheus.deploy.yaml
........
spec:
containers:
- image: prom/prometheus:v2.0.0
name: prometheus
command:
- "/bin/prometheus"
args:
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus"
- "--storage.tsdb.retention=24h"
ports:
- containerPort: 9090
protocol: TCP
volumeMounts:
- mountPath: "/prometheus"
name: data
- mountPath: "/etc/prometheus"
name: config-volume
- name: k8s-certs # add these three lines: they map the secret contents into the container under /var/run/secrets/kubernetes.io/k8s-certs/etcd/ (the directory is created automatically inside the container)
mountPath: /var/run/secrets/kubernetes.io/k8s-certs/etcd/
readOnly: true
resources:
requests:
cpu: 100m
memory: 100Mi
limits:
cpu: 500m
memory: 2500Mi
serviceAccountName: prometheus
volumes:
- name: data
emptyDir: {}
- name: config-volume
configMap:
name: prometheus-config
- name: k8s-certs # add these three lines
secret:
secretName: etcd-certs
Modify the Prometheus configmap.yaml to add the etcd scrape configuration (a .yaml or a .yml extension both work; it makes no difference):
[root@k8s-master01 prometheus]# vim configmap.yaml
.........
- job_name: 'kubernetes-etcd'
scheme: https
tls_config:
# insecure_skip_verify: true
ca_file: /var/run/secrets/kubernetes.io/k8s-certs/etcd/ca.crt
cert_file: /var/run/secrets/kubernetes.io/k8s-certs/etcd/healthcheck-client.crt
key_file: /var/run/secrets/kubernetes.io/k8s-certs/etcd/healthcheck-client.key
scrape_interval: 5s
static_configs:
- targets: ['192.168.152.137:2379']
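Before reloading Prometheus, the same certificates can be tested directly against etcd from the node that holds them, which confirms that the /metrics endpoint and the client certificate work:
curl --cacert /etc/kubernetes/pki/etcd/ca.crt \
     --cert /etc/kubernetes/pki/etcd/healthcheck-client.crt \
     --key /etc/kubernetes/pki/etcd/healthcheck-client.key \
     https://192.168.152.137:2379/metrics | head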
[root@k8s-node01 prometheus]# kubectl delete -f .
configmap "prometheus-config" deleted
deployment.apps "prometheus" deleted
service "prometheus" deleted
clusterrole.rbac.authorization.k8s.io "prometheus" deleted
serviceaccount "prometheus" deleted
clusterrolebinding.rbac.authorization.k8s.io "prometheus" deleted
[root@k8s-node01 prometheus]# kubectl apply -f .
configmap/prometheus-config created
deployment.apps/prometheus created
service/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus created
Check whether the secret is correctly mounted inside the Prometheus pod (make sure the volume mount of the secret really took effect):
[root@k8s-node01 prometheus]# kubectl describe pods/prometheus-76fb9bc788-lbf57 -n kube-system
............
Mounts:
/etc/prometheus from config-volume (rw)
/prometheus from data (rw)
/var/run/secrets/kubernetes.io/k8s-certs/etcd/ from k8s-certs (ro)
/var/run/secrets/kubernetes.io/serviceaccount from prometheus-token-mbvhb (ro)
[root@k8s-node01 ~]# kubectl exec -it -n kube-system prometheus-5999955985-l28bb -- /bin/sh
/prometheus $ ls /var/run/secrets/kubernetes.io/k8s-certs/etcd/
ca.crt healthcheck-client.crt healthcheck-client.key
At this point Prometheus is successfully scraping the Kubernetes etcd cluster, which can be verified in the Prometheus UI.
[Figure: kubernetes-etcd target status in the Prometheus UI]
AlertManager receives alerts from Prometheus, processes them and forwards them to the right users; depending on the need, notifications can be sent by e-mail, SMS, DingTalk (via prometheus-webhook-dingtalk), and so on. Alertmanager and Prometheus are two separate parts: the Prometheus server sends alerts to Alertmanager according to its alerting rules, and Alertmanager then applies silencing, inhibition and aggregation before sending notifications via e-mail, PagerDuty, HipChat and similar channels.
Install and configure Alertmanager.
Point Prometheus at Alertmanager (in Prometheus 2.x this is done through the alerting section of prometheus.yml rather than a command-line flag).
Create alerting rules in Prometheus.
Alertmanager handles alerts sent by clients such as the Prometheus server: it deduplicates, groups and routes them to the correct receiver, for example e-mail or Slack. It also supports silences and inhibition.
Grouping means that when a problem occurs Alertmanager sends a single notification even though the system may generate hundreds or thousands of alerts at once, which is especially useful during larger outages. For example, with dozens or hundreds of service instances running, a network failure may cut half of them off from the database; if the alerting rules fire one alert per instance, hundreds of alerts reach Alertmanager, but the user really wants a single alert page that still makes clear which instances are affected. Alertmanager can therefore be configured to group alerts and send one compact notification. Grouping, alert timing, and the receivers are configured through a routing tree in the configuration file.
Inhibition means suppressing notifications for alerts that are consequences of another alert that has already fired. For example, if an alert fires saying an entire cluster is unreachable, Alertmanager can be configured to ignore all other alerts triggered by that outage, preventing hundreds or thousands of unrelated notifications. Inhibition is configured in the Alertmanager configuration file.
Silences are a simple way to mute alerts for a given period. A silence is configured with matchers, just like the routing tree; incoming alerts are checked against these matchers, and if they match, no notifications are sent for them. Silences are configured in the Alertmanager web UI.
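Besides the web UI, silences can also be managed with the amtool command-line client that ships with Alertmanager; a sketch (alert name, duration and URL are assumptions):
amtool silence add alertname=NodeMemoryUsage --comment="planned maintenance" --duration=2h --alertmanager.url=http://localhost:9093
amtool silence query --alertmanager.url=http://localhost:9093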
Prometheus collects metrics from its targets on the scrape_interval cycle (1m by default); scrape_interval can be set globally or per scrape job. The collected data is persisted in local storage.
Prometheus evaluates its alerting rules on a separate cycle, evaluation_interval (also 1m by default); evaluation_interval only has a global value. On each evaluation the alert state is updated. An alert can be in one of three states:
inactive: the threshold has not been crossed; the alert is neither pending nor firing;
pending: the threshold has been crossed, but not yet for the configured duration;
firing: the threshold has been crossed for at least the configured duration.
With a Prometheus Operator deployment, prometheus and alertmanager are installed together. Here the prometheus container was installed first and the alertmanager container is added separately, so the two are deployed independently.
[root@k8s-node01 ~]# cd k8s-prometheus/prometheus/
[root@k8s-node01 prometheus]# cat alertmanager-conf.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: alert-config
namespace: kube-system
data:
config.yml: |-
global:
#how long to wait before declaring an alert resolved when it is no longer reported
resolve_timeout: 5m
smtp_smarthost: 'smtp.163.com:25'
smtp_from: '[email protected]'
smtp_auth_username: '[email protected]'
smtp_auth_password: 'kevin123@#$12'
smtp_hello: '163.com'
smtp_require_tls: false
#the root route that every alert enters; it defines how alerts are dispatched
route:
#labels used to regroup incoming alerts; e.g. alerts that share cluster=A and alertname=LatencyHigh are aggregated into one group
group_by: ['alertname', 'cluster']
#after a new alert group is created, wait at least group_wait before sending the first notification, so that several alerts of the same group can be collected and sent together
group_wait: 30s
#after the first notification, wait group_interval before sending notifications for new alerts added to the group
group_interval: 5m
#if an alert has already been sent successfully, wait repeat_interval before resending it; not enabled here
#repeat_interval: 5m
#the default receiver: alerts that match no route are sent to it
receiver: default
#all of the attributes above are inherited by child routes and can be overridden per route
routes:
- receiver: email
group_wait: 10s
match:
team: node
receivers:
- name: 'default'
email_configs:
- to: '102******@qq.com'
send_resolved: true
- to: 'wang*****@sina.cn'
send_resolved: true
- name: 'email'
email_configs:
- to: '87486*****@163.com'
send_resolved: true
- to: 'wang*******@163.com'
send_resolved: true
The alertmanager-conf.yaml file above configures the sender and the receivers of the e-mail alerts: the sender is the smtp_from address and the receivers are the addresses listed under email_configs (masked in this example).
[root@k8s-master01 prometheus]# kubectl create -f alertmanager-conf.yaml
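Before (or after) loading the ConfigMap, the Alertmanager configuration can be checked locally with amtool, assuming the binary is available on the workstation:
amtool check-config config.yml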
Next configure configmap.yaml and add the alerting rules. As a test rule, we alert whenever memory usage exceeds 1%, so that the alert fires easily:
[root@k8s-master01 prometheus]# vim configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus-config
namespace: kube-system
data:
prometheus.yml: |
global:
scrape_interval: 15s
evaluation_interval: 15s
alerting: #add the following four lines
alertmanagers:
- static_configs:
- targets: ["localhost:9093"]
rule_files: #add the following two lines
- /etc/prometheus/rules.yml
scrape_configs:
- job_name: 'kubernetes-schedule'
scrape_interval: 5s
static_configs:
- targets: ['192.168.152.137:10251']
.......
.......
rules.yml: | #append the following lines at the end
groups:
- name: alert-rule
rules:
- alert: NodeMemoryUsage
expr: (node_memory_MemTotal_bytes - (node_memory_MemFree_bytes + node_memory_Buffers_bytes + node_memory_Cached_bytes)) / node_memory_MemTotal_bytes * 100 > 1
for: 1m
labels:
team: admin
annotations:
description: "{{$labels.instance}}: Memory usage is above 1% (current value is: {{ $value }}%)"
value: "{{ $value }}%"
threshold: "1%"
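If the rule definitions are also kept in a local rules.yml file, they can be validated with promtool before the ConfigMap is updated:
promtool check rules rules.yml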
[root@k8s-master01 prometheus]# vim prometheus.deploy.yml
.......
spec:
containers:
- image: prom/alertmanager #add the following content: the alertmanager container definition
name: alertmanager
imagePullPolicy: IfNotPresent
args:
- "--config.file=/etc/alertmanager/config.yml"
- "--storage.path=/alertmanager/data"
ports:
- containerPort: 9093
name: http
volumeMounts:
- mountPath: "/etc/alertmanager"
name: alertcfg
resources:
requests:
cpu: 100m
memory: 256Mi
limits:
cpu: 100m
memory: 256Mi #the added configuration ends here
.......
volumes:
- name: alertcfg #add the following three lines
configMap:
name: alert-config
.......
[root@k8s-node01 prometheus]# kubectl delete -f .
configmap "prometheus-config" deleted
deployment.apps "prometheus" deleted
service "prometheus" deleted
clusterrole.rbac.authorization.k8s.io "prometheus" deleted
serviceaccount "prometheus" deleted
clusterrolebinding.rbac.authorization.k8s.io "prometheus" deleted
[root@k8s-node01 prometheus]# kubectl apply -f .
configmap/prometheus-config created
deployment.apps/prometheus created
service/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus created
[root@k8s-master01 prometheus]# kubectl get pods -n kube-system|grep prometheus
prometheus-8697656888-2vwbw 2/2 Running 0 5s
[root@k8s-master01 prometheus]# vim prometheus.svc.yml
---
kind: Service
apiVersion: v1
metadata:
labels:
app: prometheus
name: prometheus
namespace: kube-system
spec:
type: NodePort
ports:
- port: 9090
targetPort: 9090
nodePort: 30003
name: prom #add the following five lines
- port: 9093
targetPort: 9093
nodePort: 30013
name: alert
selector:
app: prometheus
[root@k8s-master01 prometheus]# kubectl apply -f prometheus.svc.yml
[root@k8s-master01 prometheus]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5b969f4c88-pd5js 1/1 Running 0 30d
grafana-core-5f7c6c786b-x8prc 1/1 Running 0 17d
kube-state-metrics-5dd55c764d-nnsdv 2/2 Running 0 23d
kubernetes-dashboard-7976c5cb9c-4jpzb 1/1 Running 0 16d
metrics-server-54997795d9-rczmc 1/1 Running 0 24d
node-exporter-t65bn 1/1 Running 0 3m20s
node-exporter-tsdbc 1/1 Running 0 3m20s
node-exporter-zmb68 1/1 Running 0 3m20s
prometheus-8697656888-7kxwg 2/2 Running 0 11m
The pod prometheus-8697656888-7kxwg now runs two containers (2/2): the prometheus container and the alertmanager container configured in prometheus.deploy.yml.
[root@k8s-master01 prometheus]# kubectl exec -ti prometheus-8697656888-7kxwg -n kube-system -c prometheus /bin/sh
/prometheus #
[root@k8s-master01 prometheus]# kubectl exec -ti prometheus-8697656888-7kxwg -n kube-system -c alertmanager /bin/sh
/etc/alertmanager #
[root@k8s-master01 prometheus]# kubectl get svc -n kube-system
NAME                            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
grafana                         NodePort    10.254.95.120    <none>        3000:31821/TCP                  17d
kube-dns                        ClusterIP   10.254.0.2       <none>        53/UDP,53/TCP,9153/TCP          30d
kube-state-metrics              NodePort    10.254.228.212   <none>        8080:30978/TCP,8081:30872/TCP   23d
kubernetes-dashboard-external   NodePort    10.254.223.104   <none>        9090:30090/TCP                  16d
metrics-server                  ClusterIP   10.254.135.197   <none>        443/TCP                         24d
node-exporter                   NodePort    10.254.168.172   <none>        9100:31672/TCP                  11m
prometheus                      NodePort    10.254.241.170   <none>        9090:30003/TCP,9093:30013/TCP   10d
Now open http://172.16.60.245:30003/alerts to see the alerting rules configured in Prometheus.
[Figure: Prometheus Alerts page]
Click on the alert above to expand its details:
[Figure: expanded alert details]
The alert e-mail received looks like this:
[Figure: alert e-mail notification]
Open port 30013 to reach the Alertmanager UI, where silences and other status can be seen.