Prometheus Study Notes (1)

Prometheus configuration documentation

(GitHub link)
https://github.com/prometheus/prometheus/blob/master/docs/configuration/configuration.md
Studying the configuration fields

global:
  # How frequently to scrape targets by default.
  [ scrape_interval: <duration> | default = 1m ]

  # How long until a scrape request times out.
  [ scrape_timeout: <duration> | default = 10s ]  # scrape timeout

  # How frequently to evaluate rules.
  [ evaluation_interval: <duration> | default = 1m ]

  # The labels to add to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:  # labels for external systems
    [ <labelname>: <labelvalue> ... ]

# Rule files specifies a list of globs. Rules and alerts are read from
# all matching files.
rule_files:  # rules are read from files matching these globs
  [ - <filepath_glob> ... ]

# A list of scrape configurations.
scrape_configs:  # scrape configuration
  [ - <scrape_config> ... ]

# Alerting specifies settings related to the Alertmanager.
alerting:  # alerting configuration
  alert_relabel_configs:
    [ - <relabel_config> ... ]
  alertmanagers:  # Alertmanager instances
    [ - <alertmanager_config> ... ]

# Settings related to the remote write feature.
remote_write:  # remote write settings
  [ - <remote_write> ... ]

# Settings related to the remote read feature.
remote_read:  # remote read settings
  [ - <remote_read> ... ]
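To make the skeleton above concrete, a minimal working prometheus.yml might look like this (all values here are illustrative, not part of the reference above):

```yaml
global:
  scrape_interval: 15s      # scrape every 15s instead of the 1m default
  evaluation_interval: 15s  # evaluate rules every 15s
  external_labels:
    cluster: demo           # attached when talking to external systems

rule_files:
  - "rules/*.yml"           # globs, relative to the config file

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ["localhost:9090"]  # Prometheus scraping itself
```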

Some of the fields above are still a bit unclear, but they come up less often in everyday use.

The main focus is on understanding scrape_config.

scrape_config

# The job name assigned to scraped metrics by default.
job_name: <job_name>  # the job's name; shown as its own entry in the Prometheus UI

# How frequently to scrape targets from this job.
[ scrape_interval: <duration> | default = <global_config.scrape_interval> ]  # falls back to the global value if unset

# Per-scrape timeout when scraping this job.
[ scrape_timeout: <duration> | default = <global_config.scrape_timeout> ]  # same fallback behavior as above

# The HTTP resource path on which to fetch metrics from targets.
[ metrics_path: <path> | default = /metrics ]  # HTTP path to scrape; defaults to /metrics

# honor_labels controls how Prometheus handles conflicts between labels that are
# already present in scraped data and labels that Prometheus would attach
# server-side ("job" and "instance" labels, manually configured target
# labels, and labels generated by service discovery implementations).
#
# If honor_labels is set to "true", label conflicts are resolved by keeping label
# values from the scraped data and ignoring the conflicting server-side labels.
#
# If honor_labels is set to "false", label conflicts are resolved by renaming
# conflicting labels in the scraped data to "exported_<original-label>" (for
# example "exported_instance", "exported_job") and then attaching server-side
# labels. This is useful for use cases such as federation, where all labels
# specified in the target should be preserved.
#
# Note that any globally configured "external_labels" are unaffected by this
# setting. In communication with external systems, they are always applied only
# when a time series does not have a given label yet and are ignored otherwise.
[ honor_labels: <boolean> | default = false ]  # controls which side wins on label conflicts; see the comment above

# Configures the protocol scheme used for requests.
[ scheme: <scheme> | default = http ]  # http or https

# Optional HTTP URL parameters.
params:
  [ <string>: [ <string>, ... ] ]  # extra URL query parameters for the scrape request

# Sets the `Authorization` header on every scrape request with the
# configured username and password.
# password and password_file are mutually exclusive.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Sets the `Authorization` header on every scrape request with
# the configured bearer token. It is mutually exclusive with `bearer_token_file`.
[ bearer_token: <secret> ]

# Sets the `Authorization` header on every scrape request with the bearer token
# read from the configured file. It is mutually exclusive with `bearer_token`.
[ bearer_token_file: /path/to/bearer/token/file ]
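As a sketch of the authentication options above (the hostnames, credentials, and file paths are made up; only one Authorization mechanism should be configured per job, since they are mutually exclusive):

```yaml
scrape_configs:
  - job_name: secured-basic
    scheme: https
    basic_auth:
      username: prom                          # hypothetical credentials
      password_file: /etc/prometheus/secret   # password read from a file
    static_configs:
      - targets: ["app.example.com:443"]

  - job_name: secured-bearer
    # bearer_token and bearer_token_file are mutually exclusive
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    static_configs:
      - targets: ["10.0.0.5:8080"]
```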

# Configures the scrape request's TLS settings.
tls_config:
  [ <tls_config> ]

# Optional proxy URL.
[ proxy_url: <string> ]

# List of Azure service discovery configurations.
azure_sd_configs:
  [ - <azure_sd_config> ... ]

# List of Consul service discovery configurations.
consul_sd_configs:
  [ - <consul_sd_config> ... ]

# List of DNS service discovery configurations.
dns_sd_configs:
  [ - <dns_sd_config> ... ]

# List of EC2 service discovery configurations.
ec2_sd_configs:
  [ - <ec2_sd_config> ... ]

# List of OpenStack service discovery configurations.
openstack_sd_configs:
  [ - <openstack_sd_config> ... ]

# List of file service discovery configurations.
file_sd_configs:
  [ - <file_sd_config> ... ]

# List of GCE service discovery configurations.
gce_sd_configs:
  [ - <gce_sd_config> ... ]

# List of Kubernetes service discovery configurations.
kubernetes_sd_configs:
  [ - <kubernetes_sd_config> ... ]

# List of Marathon service discovery configurations.
marathon_sd_configs:
  [ - <marathon_sd_config> ... ]

# List of AirBnB's Nerve service discovery configurations.
nerve_sd_configs:
  [ - <nerve_sd_config> ... ]

# List of Zookeeper Serverset service discovery configurations.
serverset_sd_configs:
  [ - <serverset_sd_config> ... ]

# List of Triton service discovery configurations.
triton_sd_configs:
  [ - <triton_sd_config> ... ]

# List of labeled statically configured targets for this job.
static_configs:
  [ - <static_config> ... ]

# List of target relabel configurations.
relabel_configs:  # rules for relabeling targets
  [ - <relabel_config> ... ]

# List of metric relabel configurations.
metric_relabel_configs:  # rules for relabeling scraped metrics
  [ - <relabel_config> ... ]

# Per-scrape limit on number of scraped samples that will be accepted.
# If more than this number of samples are present after metric relabelling
# the entire scrape will be treated as failed. 0 means no limit.
[ sample_limit: <int> | default = 0 ]
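Putting the scrape_config fields together, a hypothetical job with static targets, relabeling, and a sample limit might look like this (addresses and label names are made up):

```yaml
scrape_configs:
  - job_name: node-exporter
    scrape_interval: 30s          # overrides the global default
    metrics_path: /metrics
    sample_limit: 10000           # fail the scrape if more samples remain after relabeling
    static_configs:
      - targets: ["10.0.0.1:9100", "10.0.0.2:9100"]
        labels:
          env: prod               # extra label attached to both targets
    relabel_configs:
      - source_labels: [__address__]
        regex: "([^:]+):.*"
        target_label: host        # derive a "host" label from the address
    metric_relabel_configs:
      - source_labels: [__name__]
        regex: "go_gc_.*"
        action: drop              # drop noisy Go GC metrics after scraping
```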

<kubernetes_sd_config>

Kubernetes SD configurations allow retrieving scrape targets from Kubernetes' REST API and always stay synchronized with the cluster state.


One of the following role types can be configured to discover targets:


node

The node role discovers one target per cluster node, with the address defaulting to the Kubelet's HTTP port. The target address defaults to the first existing address of the Kubernetes node object in the address type order of NodeInternalIP, NodeExternalIP, NodeLegacyHostIP, and NodeHostName.


Available meta labels:


__meta_kubernetes_node_name: The name of the node object.

__meta_kubernetes_node_label_<labelname>: Each label from the node object.

__meta_kubernetes_node_annotation_<annotationname>: Each annotation from the node object.

__meta_kubernetes_node_address_<address_type>: The first address for each node address type, if it exists.

In addition, the node's instance label will be set to the node name as retrieved from the API server.
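A minimal sketch of a node-role scrape job, using labelmap to copy every Kubernetes node label onto the target (the job name is arbitrary):

```yaml
scrape_configs:
  - job_name: kubernetes-nodes
    kubernetes_sd_configs:
      - role: node
    relabel_configs:
      # copy all Kubernetes node labels onto the scraped target
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
```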


service

The service role discovers one target per service port for each service. This is generally useful for blackbox monitoring of a service. The address will be set to the Kubernetes DNS name of the service and the respective service port.


Available meta labels:


__meta_kubernetes_namespace: The namespace of the service object.

__meta_kubernetes_service_name: The name of the service object.

__meta_kubernetes_service_label_<labelname>: Each label from the service object.

__meta_kubernetes_service_annotation_<annotationname>: Each annotation from the service object.

__meta_kubernetes_service_port_name: Name of the service port for the target.

__meta_kubernetes_service_port_number: Number of the service port for the target.

__meta_kubernetes_service_port_protocol: Protocol of the service port for the target.
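A minimal service-role job might carry the service name and namespace over as regular labels (the job name is illustrative):

```yaml
scrape_configs:
  - job_name: kubernetes-services
    kubernetes_sd_configs:
      - role: service
    relabel_configs:
      # keep the namespace and service name as regular labels on each target
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: service
```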

pod

The pod role discovers all pods and exposes their containers as targets. For each declared port of a container, a single target is generated. If a container has no specified ports, a port-free target per container is created, and a port can be added manually via relabeling.


Available meta labels:


__meta_kubernetes_namespace: The namespace of the pod object.

__meta_kubernetes_pod_name: The name of the pod object.

__meta_kubernetes_pod_ip: The pod IP of the pod object.

__meta_kubernetes_pod_label_<labelname>: Each label from the pod object.

__meta_kubernetes_pod_annotation_<annotationname>: Each annotation from the pod object.

__meta_kubernetes_pod_container_name: Name of the container the target address points to.

__meta_kubernetes_pod_container_port_name: Name of the container port.

__meta_kubernetes_pod_container_port_number: Number of the container port.

__meta_kubernetes_pod_container_port_protocol: Protocol of the container port.

__meta_kubernetes_pod_ready: Set to true or false for the pod's ready state.

__meta_kubernetes_pod_node_name: The name of the node the pod is scheduled onto.

__meta_kubernetes_pod_host_ip: The current host IP of the pod object.

__meta_kubernetes_pod_uid: The UID of the pod object.

__meta_kubernetes_pod_controller_kind: Object kind of the pod controller.

__meta_kubernetes_pod_controller_name: Name of the pod controller.
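A common pattern with the pod role is to scrape only pods carrying an opt-in annotation. Note that prometheus.io/scrape is a widespread convention, not something built into Prometheus:

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # only keep pods annotated with prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # carry namespace and pod name over as regular labels
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod
```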

endpoints (the relationship between services and endpoints)

The endpoints role discovers targets from the listed endpoints of a service. For each endpoint address, one target is discovered per port. If the endpoint is backed by a pod, all additional container ports of the pod that are not bound to an endpoint port are also discovered as targets.


Available meta labels:


__meta_kubernetes_namespace: The namespace of the endpoints object.

__meta_kubernetes_endpoints_name: The name of the endpoints object.

For all targets discovered directly from the endpoints list (those not additionally inferred from underlying pods), the following labels are attached:

__meta_kubernetes_endpoint_ready: Set to true or false for the endpoint's ready state.

__meta_kubernetes_endpoint_port_name: Name of the endpoint port.

__meta_kubernetes_endpoint_port_protocol: Protocol of the endpoint port.

__meta_kubernetes_endpoint_address_target_kind: Kind of the endpoint address target.

__meta_kubernetes_endpoint_address_target_name: Name of the endpoint address target.

If the endpoints belong to a service, all labels of the role: service discovery are attached.

For all targets backed by a pod, all labels of the role: pod discovery are attached.
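The endpoints role is how the Kubernetes API server itself is commonly scraped; a sketch assuming the usual in-cluster service-account paths:

```yaml
scrape_configs:
  - job_name: kubernetes-apiserver
    kubernetes_sd_configs:
      - role: endpoints
    scheme: https
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    relabel_configs:
      # keep only the default/kubernetes endpoints on the port named "https"
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
```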

ingress

The ingress role discovers a target for each path of each ingress. This is generally useful for blackbox monitoring of an ingress. The address will be set to the host specified in the ingress spec.


Available meta labels:


__meta_kubernetes_namespace: The namespace of the ingress object.

__meta_kubernetes_ingress_name: The name of the ingress object.

__meta_kubernetes_ingress_label_<labelname>: Each label from the ingress object.

__meta_kubernetes_ingress_annotation_<annotationname>: Each annotation from the ingress object.

__meta_kubernetes_ingress_scheme: Protocol scheme of the ingress; https if TLS config is set, defaults to http.

__meta_kubernetes_ingress_path: Path from the ingress spec.
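With the ingress role, the meta labels above are typically combined via relabeling into a probe URL for a blackbox-style exporter (the exporter address and metrics_path here are hypothetical):

```yaml
scrape_configs:
  - job_name: kubernetes-ingresses
    kubernetes_sd_configs:
      - role: ingress
    metrics_path: /probe              # assumes a blackbox-exporter-style probe endpoint
    relabel_configs:
      # build the probe target URL from the ingress scheme, host, and path
      - source_labels: [__meta_kubernetes_ingress_scheme, __address__, __meta_kubernetes_ingress_path]
        regex: (.+);(.+);(.+)
        replacement: ${1}://${2}${3}
        target_label: __param_target
      # point the scrape itself at the blackbox exporter (hypothetical address)
      - target_label: __address__
        replacement: blackbox-exporter.monitoring:9115
```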

There is an official example Prometheus configuration; let's work through it together next.
