Monitoring a Kubernetes Cluster with Prometheus

Table of Contents

Prometheus Advantages

How Prometheus Works

Prometheus Core Components

Prometheus Architecture in Practice

Grafana Overview

Grafana Features

1. Environment Preparation

1.1 Host Initialization

1.2 Deploy the Docker Environment

2. Deploy the Kubernetes Cluster

2.1 Component Overview

2.2 Configure the Alibaba Cloud YUM Repository

2.3 Install kubelet, kubeadm, and kubectl

2.4 Configure init-config.yaml

2.5 Install the Master Node

2.6 Install the Worker Nodes

2.7 Install Flannel

2.8 Deploy a Test Application

3. Deploy the Prometheus Monitoring Platform

3.1 Prepare the Prometheus YAML Files

3.2 Deploy Prometheus

4. Deploy the Grafana Service

4.1 Deploy the Grafana YAML Files

4.2 Configure the Grafana Data Source


Prometheus is an open-source monitoring and alerting system and time-series database (TSDB) originally developed at SoundCloud. It is written in Go and is an open-source counterpart of Google's BorgMon monitoring system. In 2016 the Cloud Native Computing Foundation (CNCF), founded by Google under the Linux Foundation, accepted Prometheus as its second hosted project. Compared with Heapster (a Kubernetes sub-project used to collect cluster performance data), Prometheus is more complete and comprehensive, and its performance is sufficient to support clusters of tens of thousands of nodes.

Prometheus Advantages

  1. A multi-dimensional data model.
  2. A flexible query language (PromQL).
  3. No reliance on distributed storage; a single server node is autonomous.
  4. Time-series data is collected over HTTP using a pull model (see the configuration sketch after this list).
  5. Time-series data can also be pushed through an intermediate gateway.
  6. Scrape targets are discovered through service discovery or static configuration.
  7. Support for many kinds of graphs and dashboards, for example Grafana.
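As a sketch of points 4 and 6, a standalone Prometheus can be pointed at a statically configured target with a configuration file like the one below. The job name and target address are placeholders, not part of the cluster built later in this article; promtool, which ships with Prometheus, can validate the file.

# Minimal, illustrative prometheus.yml: pull metrics from one statically configured target
cat << 'EOF' > /tmp/prometheus-minimal.yml
global:
  scrape_interval: 15s              # how often Prometheus pulls metrics
scrape_configs:
  - job_name: 'node'                # placeholder job name
    static_configs:
      - targets: ['192.168.50.51:9100']   # placeholder node_exporter endpoint
EOF
# Validate the file with the promtool utility bundled with Prometheus
promtool check config /tmp/prometheus-minimal.yml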

How Prometheus Works

  1. The Prometheus daemon periodically scrapes metrics from its targets; each target must expose an HTTP endpoint for Prometheus to scrape. Targets can be specified through the configuration file, text files, Zookeeper, Consul, DNS SRV lookups, and other service registration and discovery mechanisms. Prometheus monitors with a PULL model: the server pulls data directly from targets, or indirectly via an intermediate gateway that targets push to.
  2. Prometheus stores all scraped data locally, cleans and aggregates it according to rules, and writes the results into new time series.
  3. Prometheus exposes the collected data through PromQL and other APIs for visualization. It supports many charting options, such as Grafana, the bundled Promdash, and its own template engine; it also provides an HTTP query API for custom output (a query example follows this list).
  4. Pushgateway accepts metrics pushed actively by clients, while Prometheus simply scrapes the gateway on its regular schedule.
  5. Alertmanager is a component independent of Prometheus; it works with Prometheus query expressions and provides very flexible alerting.
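As a sketch of the HTTP query API mentioned in step 3 (the Prometheus address is a placeholder; adjust it to wherever your server listens):

# Instant query: current value of the built-in "up" metric for every scrape target
curl 'http://<prometheus-host>:9090/api/v1/query?query=up'
# Range query: the same metric over the last 5 minutes at a 15-second resolution
curl "http://<prometheus-host>:9090/api/v1/query_range?query=up&start=$(date -d '-5 min' +%s)&end=$(date +%s)&step=15"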

Prometheus Core Components

    • Prometheus Server: responsible for data collection and storage, and provides the PromQL query language.

    • Alertmanager: the alert manager, used to handle alerting.

    • node_exporter: used to monitor server CPU, memory, disk, I/O, and other host metrics (a quick check follows below).
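If node_exporter is already running on a host, a quick check of its endpoint looks like this (it listens on port 9100 by default):

# Fetch a few CPU time series straight from node_exporter's /metrics endpoint
curl -s http://localhost:9100/metrics | grep '^node_cpu_seconds_total' | head -n 5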

Prometheus Architecture in Practice

[Figure 1: Prometheus architecture diagram]

    • Pushgateway receives metric data pushed by clients; the main Prometheus server then scrapes it at the configured interval, as sketched below.
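A hedged sketch of pushing a metric to a Pushgateway with curl; the gateway address, job name, and metric are placeholders, and no Pushgateway is deployed later in this article:

# Push one sample to the Pushgateway; Prometheus then scrapes it from the gateway
echo "deploy_duration_seconds 42" | \
  curl --data-binary @- http://<pushgateway-host>:9091/metrics/job/example_job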

Grafana Overview

Grafana is a visualization dashboard with attractive charts and layouts and a full-featured metrics dashboard and graph editor. It supports Graphite, Zabbix, InfluxDB, Prometheus, and OpenTSDB as data sources.

Grafana Features

  1. Grafana is a visualization dashboard with attractive charts and layouts and a full-featured metrics dashboard and graph editor, supporting Graphite, Zabbix, InfluxDB, Prometheus, and OpenTSDB as data sources.
  2. Grafana supports many different time-series storage backends (data sources), each with its own query editor. Officially supported data sources include Graphite, InfluxDB, OpenTSDB, Prometheus, Elasticsearch, and CloudWatch. Query languages and capabilities differ noticeably between data sources. Data from multiple data sources can be combined on a single dashboard, but each panel is bound to a specific data source belonging to a specific organization.
  3. Alerting in Grafana lets you attach rules to dashboard panels. When a dashboard is saved, Grafana extracts the alert rules into a separate alert-rule store and schedules them for evaluation. Alert notifications can be pushed to mobile devices through DingTalk, email, and other channels, although Grafana currently supports alerts only on graph panels.
  4. Grafana annotates charts with rich events from different data sources; hovering over an event shows the full event metadata and tags.
  5. Grafana's ad-hoc filters allow new key/value filters to be created on the fly, which are automatically applied to all queries that use that data source.

1. Environment Preparation

| OS | IP Address | Hostname | Components |
| --- | --- | --- | --- |
| CentOS 7.5 | 192.168.50.53 | k8s-master | kubeadm, kubelet, kubectl, docker-ce |
| CentOS 7.5 | 192.168.50.51 | k8s-node01 | kubeadm, kubelet, kubectl, docker-ce |
| CentOS 7.5 | 192.168.50.50 | k8s-node02 | kubeadm, kubelet, kubectl, docker-ce |

Note: each host should have at least 2 CPU cores and 2 GB of memory.

1.1 Host Initialization

Disable the firewall and SELinux on all hosts

[root@localhost ~]#  setenforce 0

[root@localhost ~]#  iptables -F

[root@localhost ~]#  systemctl stop firewalld

[root@localhost ~]#  systemctl disable firewalld

Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.

Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

[root@localhost ~]# systemctl stop NetworkManager

[root@localhost ~]# systemctl disable NetworkManager

Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.

Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.

Removed symlink /etc/systemd/system/network-online.target.wants/NetworkManager-wait-online.service.

[root@localhost ~]#  sed -i '/^SELINUX=/s/enforcing/disabled/' /etc/selinux/config

Configure the hostname and the hosts file bindings; each host gets its own hostname

[root@localhost ~]#  hostname k8s-master

[root@localhost ~]#  bash

[root@k8s-master ~]#  cat << EOF >> /etc/hosts

> 192.168.50.53 k8s-master

> 192.168.50.51 k8s-node01

> 192.168.50.50 k8s-node02

> EOF

Basic host initialization (all hosts)

[root@k8s-master ~]# yum -y install vim wget net-tools lrzsz

[root@k8s-master ~]# swapoff -a

[root@k8s-master ~]# sed -i '/swap/s/^/#/' /etc/fstab

[root@k8s-master ~]#  cat << EOF >> /etc/sysctl.conf

> net.bridge.bridge-nf-call-ip6tables = 1

> net.bridge.bridge-nf-call-iptables = 1

> EOF

[root@k8s-master ~]# modprobe br_netfilter

[root@k8s-master ~]# sysctl -p

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

1.2 Deploy the Docker Environment

Deploy Docker on all three hosts, since Kubernetes orchestrates containers through Docker.

[root@k8s-master ~]#  wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

[root@k8s-master ~]#  yum install -y yum-utils device-mapper-persistent-data lvm2

When installing Docker via YUM, the Alibaba Cloud YUM repository is recommended.

[root@k8s-master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Loaded plugins: fastestmirror

adding repo from: https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

grabbing file https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo

repo saved to /etc/yum.repos.d/docker-ce.repo

[root@k8s-master ~]#  yum clean all && yum makecache fast

[root@k8s-master ~]# yum -y install docker-ce

[root@k8s-master ~]#  systemctl start docker

[root@k8s-master ~]# systemctl enable docker

Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

Configure a registry mirror accelerator (on all hosts)

[root@k8s-master ~]#  cat << END > /etc/docker/daemon.json

> {

>         "registry-mirrors":[ "https://nyakyfun.mirror.aliyuncs.com" ]

> }

> END

[root@k8s-master ~]# systemctl daemon-reload

[root@k8s-master ~]#  systemctl restart docker
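To confirm the mirror from daemon.json took effect after the restart, one quick check (the exact docker info layout varies by Docker version):

# The configured registry mirror should appear near the end of the docker info output
docker info | grep -A 1 "Registry Mirrors"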

2. Deploy the Kubernetes Cluster

2.1 Component Overview

All three nodes need the following three components installed:

  1. kubeadm: the installation tool; all components run as containers
  2. kubectl: the client tool for talking to the Kubernetes API
  3. kubelet: runs on every node and is responsible for starting containers

2.2 Configure the Alibaba Cloud YUM Repository

When installing Kubernetes via YUM, the Alibaba Cloud YUM repository is recommended.

[root@k8s-master ~]#  cat << EOF > /etc/yum.repos.d/kubernetes.repo

> [kubernetes]

> name=Kubernetes

> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

> enabled=1

> gpgcheck=1

> repo_gpgcheck=1

> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg

>        https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

> EOF

2.3 Install kubelet, kubeadm, and kubectl

Configure on all hosts

[root@k8s-master ~]#  yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0

[root@k8s-master ~]# systemctl enable kubelet

Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

2.4 Configure init-config.yaml

Kubeadm provides many configuration options. The kubeadm configuration is stored in a ConfigMap inside the Kubernetes cluster, and it can also be written to a configuration file, which makes complex option sets easier to manage. The configuration content is written to a file with the kubeadm config command.

Run this on the master node (the master is 192.168.50.53); generate the default init-config.yaml file with:

[root@k8s-master ~]# kubeadm config print init-defaults > init-config.yaml

init-config.yaml contents

[root@k8s-master ~]# vim init-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2

bootstrapTokens:

- groups:

  - system:bootstrappers:kubeadm:default-node-token

  token: abcdef.0123456789abcdef

  ttl: 24h0m0s

  usages:

  - signing

  - authentication

kind: InitConfiguration

localAPIEndpoint:

  advertiseAddress: 192.168.50.53

  bindPort: 6443

nodeRegistration:

  criSocket: /var/run/dockershim.sock

  name: k8s-master

  taints:

  - effect: NoSchedule

    key: node-role.kubernetes.io/master

---

apiServer:

  timeoutForControlPlane: 4m0s

apiVersion: kubeadm.k8s.io/v1beta2

certificatesDir: /etc/kubernetes/pki

clusterName: kubernetes

controllerManager: {}

dns:

  type: CoreDNS

etcd:

  local:

    dataDir: /var/lib/etcd

imageRepository: registry.aliyuncs.com/google_containers

kind: ClusterConfiguration

kubernetesVersion: v1.20.0

networking:

  dnsDomain: cluster.local

  serviceSubnet: 10.96.0.0/12

  podSubnet: 10.244.0.0/16

scheduler: {}
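Optionally, the edited file can be sanity-checked before the real initialization by running only kubeadm's preflight phase; this does not initialize anything, though the exact checks and warnings printed differ between environments and kubeadm versions:

# Run kubeadm's preflight checks against the edited configuration
kubeadm init phase preflight --config=init-config.yaml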

2.5 Install the Master Node

Pull the required images

[root@k8s-master ~]#  kubeadm config images list --config init-config.yaml

registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.0

registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.0

registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.0

registry.aliyuncs.com/google_containers/kube-proxy:v1.20.0

registry.aliyuncs.com/google_containers/pause:3.2

registry.aliyuncs.com/google_containers/etcd:3.4.13-0

registry.aliyuncs.com/google_containers/coredns:1.7.0

[root@k8s-master ~]# ls | while read line

> do

> docker load < $line

> done

archive/tar: invalid tar header

archive/tar: invalid tar header

[root@k8s-master ~]#  kubeadm config images pull --config=init-config.yaml

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.0

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.0

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.0

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.20.0

[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.2

[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.13-0

[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.7.0

Install the master node (initialize Kubernetes)

Follow the instructions printed at the end of the init output.

By default kubectl looks for its config file in the .kube directory under the invoking user's home directory. Here, the admin.conf generated during the [kubeconfig] step of initialization is copied to .kube/config.

[root@k8s-master ~]# kubeadm init --config=init-config.yaml

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.50.53:6443 --token abcdef.0123456789abcdef \

    --discovery-token-ca-cert-hash sha256:86256c4dcbc093cfe3f794d250fb83912dc9c4e3263c96726b93fabff87c29bb

[root@k8s-master ~]#   mkdir -p $HOME/.kube

[root@k8s-master ~]#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[root@k8s-master ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config

A kubeadm-initialized installation does not include a network plugin, so the cluster has no networking right after init: on the k8s-master node the nodes show a "NotReady" status, and the CoreDNS Pods cannot serve requests.

2.6 Install the Worker Nodes

Use the join command from the master installation output

[root@k8s-node01 ~]# kubeadm join 192.168.50.53:6443 --token abcdef.0123456789abcdef \

>     --discovery-token-ca-cert-hash sha256:86256c4dcbc093cfe3f794d250fb83912dc9c4e3263c96726b93fabff87c29bb

[preflight] Running pre-flight checks

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.5. Latest validated version: 19.03

[preflight] Reading configuration from the cluster...

[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Starting the kubelet

[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.

* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node02 ~]# kubeadm join 192.168.50.53:6443 --token abcdef.0123456789abcdef \

>     --discovery-token-ca-cert-hash sha256:86256c4dcbc093cfe3f794d250fb83912dc9c4e3263c96726b93fabff87c29bb

[preflight] Running pre-flight checks

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.5. Latest validated version: 19.03

[preflight] Reading configuration from the cluster...

[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Starting the kubelet

[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.

* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-master ~]# kubectl get nodes

NAME         STATUS     ROLES                  AGE   VERSION

k8s-master   NotReady   control-plane,master   51s   v1.20.0

k8s-node01   NotReady   <none>                 20s   v1.20.0

k8s-node02   NotReady   <none>                 16s   v1.20.0

As mentioned earlier, k8s-master was initialized without any network configuration, so it cannot communicate with the worker nodes and all nodes show "NotReady". However, the nodes added with kubeadm join are already visible on k8s-master.

2.7 Install Flannel

The master node is NotReady because no network plugin has been installed, so connectivity between the nodes and the master is not yet functional. The most popular Kubernetes network plugins are Flannel, Calico, Canal, and Weave; Flannel is used here.

Upload flannel_v0.12.0-amd64.tar and cni-plugins-linux-amd64-v0.8.6.tgz to all hosts

[root@k8s-master ~]# docker load < flannel_v0.12.0-amd64.tar

256a7af3acb1: Loading layer  5.844MB/5.844MB

d572e5d9d39b: Loading layer  10.37MB/10.37MB

57c10be5852f: Loading layer  2.249MB/2.249MB

7412f8eefb77: Loading layer  35.26MB/35.26MB

05116c9ff7bf: Loading layer   5.12kB/5.12kB

Loaded image: quay.io/coreos/flannel:v0.12.0-amd64

[root@k8s-master ~]# tar xf cni-plugins-linux-amd64-v0.8.6.tgz

[root@k8s-master ~]# cp flannel /opt/cni/bin

Upload kube-flannel.yml to the master

Configure on the master host:

[root@k8s-master ~]# kubectl apply -f kube-flannel.yml

[root@k8s-master ~]# kubectl get nodes

NAME         STATUS   ROLES                  AGE   VERSION

k8s-master   Ready    control-plane,master   16m   v1.20.0

k8s-node01   Ready    <none>                 15m   v1.20.0

k8s-node02   Ready    <none>                 15m   v1.20.0

[root@k8s-master ~]# kubectl get pods -n kube-system

NAME                                 READY   STATUS    RESTARTS   AGE

coredns-7f89b7bc75-hbh2b             1/1     Running   0          16m

coredns-7f89b7bc75-vng6t             1/1     Running   0          16m

etcd-k8s-master                      1/1     Running   0          16m

kube-apiserver-k8s-master            1/1     Running   0          16m

kube-controller-manager-k8s-master   1/1     Running   0          16m

kube-flannel-ds-amd64-8k4g6          1/1     Running   0          49s

kube-flannel-ds-amd64-g9hgl          1/1     Running   0          49s

kube-flannel-ds-amd64-n555s          1/1     Running   0          49s

kube-proxy-cdhhw                     1/1     Running   0          16m

kube-proxy-crv94                     1/1     Running   0          16m

kube-proxy-rvtwx                     1/1     Running   0          16m

kube-scheduler-k8s-master            1/1     Running   0          16m

All nodes are now in the Ready state.

2.8 Deploy a Test Application

Pull the test image on all node hosts

[root@~]# docker pull nginx

Create a Pod in the Kubernetes cluster to verify that it runs correctly.

[root@k8s-master ~]# mkdir demo

 

[root@k8s-master ~]# cd demo/

[root@k8s-master demo]# vim nginx-deployment.yaml

apiVersion: apps/v1

kind: Deployment

metadata:

  name: nginx-deployment

  labels:

    app: nginx

spec:

  replicas: 3

  selector:

    matchLabels:

      app: nginx

  template:

    metadata:

      labels:

        app: nginx

    spec:

      containers:

      - name: nginx

        image: nginx:1.19.6

        ports:

        - containerPort: 80

After writing the Deployment manifest, apply it with kubectl create; kubectl get pods then shows that the Pod resources have been created automatically.

[root@k8s-master demo]#  kubectl create -f nginx-deployment.yaml

deployment.apps/nginx-deployment created

[root@k8s-master demo]#  kubectl get pods

NAME                                READY   STATUS    RESTARTS   AGE

nginx-deployment-76ccf9dd9d-2tzp8   1/1     Running   0          12m

nginx-deployment-76ccf9dd9d-lcrbj   1/1     Running   0          12m

nginx-deployment-76ccf9dd9d-nttfp   1/1     Running   0          12m

[root@k8s-master demo]# kubectl get pods -o wide

NAME                                READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES

nginx-deployment-76ccf9dd9d-2tzp8   1/1     Running   0          12m   10.244.1.5   k8s-node01   <none>           <none>

nginx-deployment-76ccf9dd9d-lcrbj   1/1     Running   0          12m   10.244.1.4   k8s-node01   <none>           <none>

nginx-deployment-76ccf9dd9d-nttfp   1/1     Running   0          12m   10.244.1.6   k8s-node01   <none>           <none>

Create the Service manifest

The nginx-service manifest defines a Service named nginx-service with the label selector app: nginx and type NodePort, so that external traffic can reach the containers. The ports section lists the exposed ports: the externally exposed port is 80, and the container port is also 80.

[root@k8s-master demo]# vim nginx-service.yaml

kind: Service

apiVersion: v1

metadata:

  name: nginx-service

spec:

  selector:

    app: nginx

  type: NodePort

  ports:

  - protocol: TCP

    port: 80

    targetPort: 80

[root@k8s-master demo]# kubectl create -f nginx-service.yaml

service/nginx-service created

[root@k8s-master demo]# kubectl get svc

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE

kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP        42m

nginx-service   NodePort    10.107.198.156   <none>        80:32206/TCP   116s

Access nginx in a browser: http://192.168.50.53:32206

[Figure 2]
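The same NodePort can also be verified from the command line; 32206 is the port that was randomly assigned above and will differ on other deployments:

# Request only the response headers of the nginx landing page through the NodePort
curl -I http://192.168.50.53:32206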

3. Deploy the Prometheus Monitoring Platform

3.1 Prepare the Prometheus YAML Files

Create a pgmonitor directory under /opt on the master node

[root@k8s-master demo]# mkdir /opt/pgmonitor

[root@k8s-master demo]# cd /opt/pgmonitor/

Upload the downloaded YAML bundle to /opt/pgmonitor and unzip it

[root@k8s-master pgmonitor]# unzip k8s-prometheus-grafana-master.zip

Archive:  k8s-prometheus-grafana-master.zip

   creating: k8s-prometheus-grafana-master/

  inflating: k8s-prometheus-grafana-master/.DS_Store  

  inflating: __MACOSX/k8s-prometheus-grafana-master/._.DS_Store  

   creating: k8s-prometheus-grafana-master/grafana/

   creating: k8s-prometheus-grafana-master/prometheus/

  inflating: k8s-prometheus-grafana-master/node-exporter.yaml  

  inflating: __MACOSX/k8s-prometheus-grafana-master/._node-exporter.yaml  

  inflating: k8s-prometheus-grafana-master/grafana/grafana-ing.yaml  

  inflating: __MACOSX/k8s-prometheus-grafana-master/grafana/._grafana-ing.yaml  

  inflating: k8s-prometheus-grafana-master/grafana/grafana-deploy.yaml  

  inflating: __MACOSX/k8s-prometheus-grafana-master/grafana/._grafana-deploy.yaml  

  inflating: k8s-prometheus-grafana-master/grafana/grafana-svc.yaml  

  inflating: __MACOSX/k8s-prometheus-grafana-master/grafana/._grafana-svc.yaml  

  inflating: k8s-prometheus-grafana-master/prometheus/prometheus.deploy.yml  

  inflating: __MACOSX/k8s-prometheus-grafana-master/prometheus/._prometheus.deploy.yml  

  inflating: k8s-prometheus-grafana-master/prometheus/prometheus.svc.yml  

  inflating: __MACOSX/k8s-prometheus-grafana-master/prometheus/._prometheus.svc.yml  

  inflating: k8s-prometheus-grafana-master/prometheus/configmap.yaml  

  inflating: __MACOSX/k8s-prometheus-grafana-master/prometheus/._configmap.yaml  

  inflating: k8s-prometheus-grafana-master/prometheus/rbac-setup.yaml  

  inflating: __MACOSX/k8s-prometheus-grafana-master/prometheus/._rbac-setup.yaml

3.2 Deploy Prometheus

Deploy the node-exporter DaemonSet

[root@k8s-master pgmonitor]# cd k8s-prometheus-grafana-master

[root@k8s-master k8s-prometheus-grafana-master]# kubectl create -f node-exporter.yaml

daemonset.apps/node-exporter created

service/node-exporter created
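To confirm that node-exporter is actually serving metrics, its Service's NodePort can be looked up and queried; a sketch (in this deployment the Service ends up in kube-system with NodePort 31672, as the svc listing in section 4.2 shows):

# Find the NodePort assigned to node-exporter, then fetch a few metrics through it
kubectl get svc node-exporter -n kube-system
curl -s http://192.168.50.53:31672/metrics | head -n 5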

Deploy the remaining YAML files

Change into the /opt/pgmonitor/k8s-prometheus-grafana-master/prometheus directory

[root@k8s-master k8s-prometheus-grafana-master]# cd prometheus/

Deploy the RBAC configuration

[root@k8s-master prometheus]# kubectl create -f rbac-setup.yaml

clusterrole.rbac.authorization.k8s.io/prometheus created

serviceaccount/prometheus created

clusterrolebinding.rbac.authorization.k8s.io/prometheus created

Deploy configmap.yaml

[root@k8s-master prometheus]# kubectl create -f configmap.yaml

configmap/prometheus-config created
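The configmap.yaml in the bundle carries Prometheus's scrape configuration. Its exact contents are specific to the downloaded archive; purely as an illustration, a ConfigMap that drives Kubernetes service discovery typically looks roughly like the sketch below (the names and jobs are assumptions, not the bundle's actual file):

# Illustrative only: a minimal Prometheus ConfigMap using Kubernetes service discovery
cat << 'EOF' > prometheus-config-example.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config-example   # placeholder name, not the bundle's ConfigMap
  namespace: kube-system
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: 'kubernetes-nodes'
        kubernetes_sd_configs:
          - role: node              # discover every node through the Kubernetes API
      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
          - role: pod               # discover all pods; relabeling usually filters these
EOF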

Deploy prometheus.deploy.yml

[root@k8s-master prometheus]#  kubectl create -f prometheus.deploy.yml

deployment.apps/prometheus created

Deploy prometheus.svc.yml

[root@k8s-master prometheus]#  kubectl create -f prometheus.svc.yml

service/prometheus created

[root@k8s-master prometheus]# ll

total 20

-rw-rw-r--. 1 root root 5631 Jan  7 2019 configmap.yaml

-rw-rw-r--. 1 root root 1114 Nov 17 2020 prometheus.deploy.yml

-rw-rw-r--. 1 root root  237 Jan  7 2019 prometheus.svc.yml

-rw-rw-r--. 1 root root  716 Jan  7 2019 rbac-setup.yaml

Check the Prometheus status

[root@k8s-master prometheus]# kubectl get pods -n kube-system

NAME                                 READY   STATUS    RESTARTS   AGE

coredns-7f89b7bc75-hbh2b             1/1     Running   0          52m

coredns-7f89b7bc75-vng6t             1/1     Running   0          52m

etcd-k8s-master                      1/1     Running   0          52m

kube-apiserver-k8s-master            1/1     Running   0          52m

kube-controller-manager-k8s-master   1/1     Running   0          52m

kube-flannel-ds-amd64-8k4g6          1/1     Running   0          36m

kube-flannel-ds-amd64-g9hgl          1/1     Running   0          36m

kube-flannel-ds-amd64-n555s          1/1     Running   0          36m

kube-proxy-cdhhw                     1/1     Running   0          51m

kube-proxy-crv94                     1/1     Running   0          51m

kube-proxy-rvtwx                     1/1     Running   0          52m

kube-scheduler-k8s-master            1/1     Running   0          52m

node-exporter-hl5xn                  1/1     Running   0          5m25s

node-exporter-s26nq                  1/1     Running   0          5m25s

prometheus-68546b8d9-q45wc           1/1     Running   0          71s
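With the Pod running, Prometheus itself can be queried to confirm that targets are being scraped. One hedged way is a temporary port-forward of the prometheus Service (it lives in kube-system here; its NodePort, 30003, is shown in section 4.2 and could be used instead):

# Forward the in-cluster prometheus Service to the local machine in the background
kubectl -n kube-system port-forward svc/prometheus 9090:9090 &
# List the discovered scrape targets and run a trivial instant query
curl -s http://localhost:9090/api/v1/targets | head -c 300
curl -s 'http://localhost:9090/api/v1/query?query=up'
# Stop the temporary port-forward when done
kill %1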

4. Deploy the Grafana Service

4.1 Deploy the Grafana YAML Files

Change into the /opt/pgmonitor/k8s-prometheus-grafana-master/grafana directory

[root@k8s-master prometheus]# cd ../grafana/

Deploy grafana-deploy.yaml

[root@k8s-master grafana]# kubectl create -f grafana-deploy.yaml

deployment.apps/grafana-core created

Deploy grafana-svc.yaml

[root@k8s-master grafana]# kubectl create -f grafana-svc.yaml

service/grafana created

Deploy grafana-ing.yaml

[root@k8s-master grafana]# kubectl create -f grafana-ing.yaml

Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress

ingress.extensions/grafana created

Check the Grafana status

[root@k8s-master grafana]# kubectl get pods -n kube-system

NAME                                 READY   STATUS              RESTARTS   AGE

coredns-7f89b7bc75-hbh2b             1/1     Running             0          54m

coredns-7f89b7bc75-vng6t             1/1     Running             0          54m

etcd-k8s-master                      1/1     Running             0          54m

grafana-core-6d6fb7566-r6cs8         0/1     ContainerCreating   0          87s

kube-apiserver-k8s-master            1/1     Running             0          54m

kube-controller-manager-k8s-master   1/1     Running             0          54m

kube-flannel-ds-amd64-8k4g6          1/1     Running             0          38m

kube-flannel-ds-amd64-g9hgl          1/1     Running             0          38m

kube-flannel-ds-amd64-n555s          1/1     Running             0          38m

kube-proxy-cdhhw                     1/1     Running             0          53m

kube-proxy-crv94                     1/1     Running             0          53m

kube-proxy-rvtwx                     1/1     Running             0          54m

kube-scheduler-k8s-master            1/1     Running             0          54m

node-exporter-hl5xn                  1/1     Running             0          7m23s

node-exporter-s26nq                  1/1     Running             0          7m23s

prometheus-68546b8d9-q45wc           1/1     Running             0          3m9s

4.2 Configure the Grafana Data Source

Check the Grafana port

[root@k8s-master grafana]# kubectl get svc -n kube-system

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE

grafana         NodePort    10.100.142.110   <none>        3000:30428/TCP           2m6s

kube-dns        ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   55m

node-exporter   NodePort    10.103.195.106   <none>        9100:31672/TCP           8m19s

prometheus      NodePort    10.105.216.233   <none>        9090:30003/TCP           3m57s

Access Grafana in a browser at http://[masterIP]:[grafana port], here http://192.168.50.53:30428.

The default username and password are admin/admin.

[Figure 3]

Set the data source

[Figure 4]

prometheus      NodePort    10.105.216.233   <none>        9090:30003/TCP           3m57s

[Figure 5]
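When filling in the data source form, the URL can point either at the NodePort shown above (http://192.168.50.53:30003) or at the in-cluster Service. As a sketch, the same data source can also be created through Grafana's HTTP API; the in-cluster DNS name below assumes Grafana and Prometheus both run in kube-system, which is the case in this deployment:

# Create the Prometheus data source via Grafana's API instead of the web form
curl -u admin:admin -H 'Content-Type: application/json' \
  -X POST http://192.168.50.53:30428/api/datasources \
  -d '{"name":"prometheus","type":"prometheus","url":"http://prometheus.kube-system.svc.cluster.local:9090","access":"proxy","isDefault":true}'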

Set the dashboard template for displaying the data

[Figure 6]

Enter 315 (a grafana.com dashboard ID) and move the cursor out of the field; after a moment the next page appears.

[Figures 7-10]

At this point, the Kubernetes platform is being monitored with Prometheus and Grafana.
