Official documentation
Download the latest release
wget https://github.com/prometheus/prometheus/releases/download/v2.19.2/prometheus-2.19.2.linux-amd64.tar.gz
tar xzvf prometheus-2.19.2.linux-amd64.tar.gz
cd prometheus-2.19.2.linux-amd64
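As a quick sanity check after extracting, you can print the version of the bundled binaries (a minimal sketch; the commands are run from inside the extracted directory):
# Verify the downloaded binaries run and report the expected version
./prometheus --version
./promtool --version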
Configuration
[sysadmin@VM_201_13_centos prometheus-2.19.2.linux-amd64]$ ls
console_libraries consoles LICENSE NOTICE prometheus prometheus.yml promtool tsdb
Prometheus collects metrics from monitored targets by scraping their metrics HTTP endpoints. Since Prometheus exposes its own data in the same way, it can also scrape and monitor its own health.
While a Prometheus server that collects only data about itself is not very useful, it makes a good starting example.
Save the following as the Prometheus configuration file prometheus.yml:
(In practice, this file is already included in the extracted package, so no changes are needed.)
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s

    static_configs:
      - targets: ['localhost:9090']
For other configuration options, see the configuration documentation.
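Optionally, the bundled promtool can validate the configuration before you start the server (a sketch, assuming prometheus.yml sits in the current directory):
# Validate the configuration file syntax
./promtool check config prometheus.yml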
Start Prometheus
[sysadmin@VM_201_13_centos prometheus-2.19.2.linux-amd64]$ ls
console_libraries consoles LICENSE NOTICE prometheus prometheus.yml promtool tsdb
[sysadmin@VM_201_13_centos prometheus-2.19.2.linux-amd64]$ ./prometheus --config.file=prometheus.yml
level=info ts=2020-07-02T12:28:39.465Z caller=main.go:302 msg="No time or size retention was set so using the default time retention" duration=15d
level=info ts=2020-07-02T12:28:39.465Z caller=main.go:337 msg="Starting Prometheus" version="(version=2.19.2, branch=HEAD, revision=c448ada63d83002e9c1d2c9f84e09f55a61f0ff7)"
level=info ts=2020-07-02T12:28:39.465Z caller=main.go:338 build_context="(go=go1.14.4, user=root@dd72efe1549d, date=20200626-09:02:20)"
level=info ts=2020-07-02T12:28:39.465Z caller=main.go:339 host_details="(Linux 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 VM_201_13_centos (none))"
level=info ts=2020-07-02T12:28:39.465Z caller=main.go:340 fd_limits="(soft=100001, hard=100002)"
level=info ts=2020-07-02T12:28:39.465Z caller=main.go:341 vm_limits="(soft=unlimited, hard=unlimited)"
level=info ts=2020-07-02T12:28:39.468Z caller=main.go:678 msg="Starting TSDB ..."
level=info ts=2020-07-02T12:28:39.471Z caller=head.go:645 component=tsdb msg="Replaying WAL and on-disk memory mappable chunks if any, this may take a while"
level=info ts=2020-07-02T12:28:39.471Z caller=web.go:524 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2020-07-02T12:28:39.472Z caller=head.go:706 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
level=info ts=2020-07-02T12:28:39.472Z caller=head.go:709 component=tsdb msg="WAL replay completed" duration=975.63µs
level=info ts=2020-07-02T12:28:39.473Z caller=main.go:694 fs_type=EXT4_SUPER_MAGIC
level=info ts=2020-07-02T12:28:39.473Z caller=main.go:695 msg="TSDB started"
level=info ts=2020-07-02T12:28:39.473Z caller=main.go:799 msg="Loading configuration file" filename=prometheus.yml
level=info ts=2020-07-02T12:28:39.474Z caller=main.go:827 msg="Completed loading of configuration file" filename=prometheus.yml
level=info ts=2020-07-02T12:28:39.474Z caller=main.go:646 msg="Server is ready to receive web requests."
Open a new bash shell and check the service and listening port:
[sysadmin@VM_201_13_centos ~]$ netstat -anpl |grep prometheus
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 127.0.0.1:44570 127.0.0.1:9090 ESTABLISHED 8214/./prometheus
tcp6 0 0 :::9090 :::* LISTEN 8214/./prometheus
tcp6 0 0 ::1:58018 ::1:9090 ESTABLISHED 8214/./prometheus
tcp6 0 0 127.0.0.1:9090 127.0.0.1:44570 ESTABLISHED 8214/./prometheus
tcp6 0 0 ::1:9090 ::1:58018 ESTABLISHED 8214/./prometheus
Test access at http://ip:9090/
Test access at http://ip:9090/metrics
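From the command line you can also confirm the server is up with curl (a sketch; /-/healthy and /-/ready are Prometheus's built-in health endpoints, and the host is assumed to be localhost):
# Liveness and readiness checks
curl -s http://localhost:9090/-/healthy
curl -s http://localhost:9090/-/ready
# Peek at the first few self-exported metrics
curl -s http://localhost:9090/metrics | head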
Using the expression browser
Let us look at the data Prometheus has collected about itself. To use Prometheus's built-in expression browser, navigate to http://localhost:9090/graph and choose the "Console" view within the "Graph" tab.
As you can see at localhost:9090/metrics, one metric that Prometheus exports about itself is prometheus_target_interval_length_seconds
(the actual amount of time between target scrapes). Enter it into the expression console and click "Execute":
prometheus_target_interval_length_seconds
This returns a number of different time series (along with the latest value recorded for each), all with the metric name prometheus_target_interval_length_seconds but with different labels. These labels designate different latency percentiles and target group intervals.
If we are only interested in the 99th percentile latencies, we can use this query:
prometheus_target_interval_length_seconds{quantile="0.99"}
To count the number of returned time series, you can query:
count(prometheus_target_interval_length_seconds)
For more about the expression language, see the expression language documentation.
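The same queries can also be sent to Prometheus's HTTP API instead of the expression browser. A sketch of the quantile-filtered query above via curl (assuming the server runs on localhost:9090):
# Instant query against the HTTP API; the response is JSON
curl -s 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=prometheus_target_interval_length_seconds{quantile="0.99"}'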
Using the graphing interface
To graph expressions, navigate to http://localhost:9090/graph and use the "Graph" tab.
For example, enter the following expression to graph the per-second rate of chunks being created in the demo Prometheus:
rate(prometheus_tsdb_head_chunks_created_total[1m])
Experiment with the graph range parameters and other settings.
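The graph view corresponds to a range query over the HTTP API. A hedged sketch of the equivalent curl call (the start/end timestamps and step are placeholders you would adjust):
# Range query: the same rate expression evaluated every 15s over a time window
curl -s 'http://localhost:9090/api/v1/query_range' \
  --data-urlencode 'query=rate(prometheus_tsdb_head_chunks_created_total[1m])' \
  --data-urlencode 'start=2020-07-02T12:30:00Z' \
  --data-urlencode 'end=2020-07-02T12:45:00Z' \
  --data-urlencode 'step=15s'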
Starting up some sample targets
Let us add a few sample targets for Prometheus to scrape.
The Node Exporter is used as the example target; for details on how to use it, see these instructions.
tar -xzvf node_exporter-*.*.tar.gz
cd node_exporter-*.*
# Start 3 example targets in separate terminals:
./node_exporter --web.listen-address 0.0.0.0:8080
./node_exporter --web.listen-address 0.0.0.0:8081
./node_exporter --web.listen-address 0.0.0.0:8082
The sample targets now listen on http://ip:8080/metrics, http://ip:8081/metrics, and http://ip:8082/metrics.
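If you prefer not to keep three terminals open, the same three exporters can be started in the background from a single shell (a convenience sketch, not part of the original instructions):
# Start the three sample exporters in the background, logging to separate files
for port in 8080 8081 8082; do
  nohup ./node_exporter --web.listen-address 0.0.0.0:$port > node_exporter_$port.log 2>&1 &
done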
Configure Prometheus to monitor the sample targets
Now we will configure Prometheus to scrape these sample targets. Let us group all three endpoints into one job called node. We will imagine that the first two targets are production instances, while the third one is a canary instance. To distinguish them in Prometheus, we add several groups of endpoints to this single job and attach an extra label to each group. In this example, we add the group="production" label to the first group of targets and group="canary" to the second.
To achieve this, add the following job definition to the scrape_configs section in prometheus.yml and restart your Prometheus instance (a sketch for reloading without a restart follows the snippet):
scrape_configs:
  - job_name: 'node'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s

    static_configs:
      - targets: ['localhost:8080', 'localhost:8081']
        labels:
          group: 'production'

      - targets: ['localhost:8082']
        labels:
          group: 'canary'
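Instead of a full restart, a running Prometheus can also reload its configuration in place (a sketch; it assumes a single prometheus process started with the command shown earlier and owned by the current user):
# Send SIGHUP to make Prometheus re-read prometheus.yml without restarting
kill -HUP $(pgrep -f './prometheus --config.file=prometheus.yml')
# Or, only if Prometheus was started with --web.enable-lifecycle:
# curl -X POST http://localhost:9090/-/reload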
Go back to the expression browser and verify that Prometheus now has information about these three sample targets, for example by querying
node_cpu_seconds_total
Configure rules for aggregating scraped data into new time series
Though not a problem in this example, queries that aggregate over thousands of time series can get slow when computed ad-hoc. To make this more efficient, Prometheus allows you to prerecord expressions into new time series via configured recording rules. Let us say we want to record the per-second rate of cpu time (node_cpu_seconds_total), averaged over a 5-minute window and preserving the job, instance and mode dimensions. We can write this query as:
avg by (job, instance, mode) (rate(node_cpu_seconds_total[5m]))
Try graphing this expression.
To record the value of this expression in a new metric called job_instance_mode:node_cpu_seconds:avg_rate5m, save the following recording rule as prometheus.rules.yml:
groups:
  - name: cpu-node
    rules:
      - record: job_instance_mode:node_cpu_seconds:avg_rate5m
        expr: avg by (job, instance, mode) (rate(node_cpu_seconds_total[5m]))
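The rule file can be validated with promtool before wiring it into the main configuration (a sketch, assuming the file sits next to the binaries):
# Validate the recording rule syntax
./promtool check rules prometheus.rules.yml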
To make Prometheus use this rule, add a rule_files statement to prometheus.yml. Here is a full example:
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s # Evaluate rules every 15 seconds.

  # Attach these extra labels to all timeseries collected by this Prometheus instance.
  external_labels:
    monitor: 'codelab-monitor'

rule_files:
  - 'prometheus.rules.yml'

scrape_configs:
  - job_name: 'prometheus'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s

    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s

    static_configs:
      - targets: ['localhost:8080', 'localhost:8081']
        labels:
          group: 'production'

      - targets: ['localhost:8082']
        labels:
          group: 'canary'
Restart Prometheus with the new configuration file and verify that the new metric job_instance_mode:node_cpu_seconds:avg_rate5m can now be queried through the expression browser.
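As a final check, the recorded metric can also be queried over the HTTP API (a sketch; an empty result usually just means the rule has not been evaluated yet, since rules run on the evaluation_interval):
# The recorded series should appear shortly after the first rule evaluation
curl -s 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=job_instance_mode:node_cpu_seconds:avg_rate5m'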