Lightweight open source shippers for log, metric, and network data from Elastic, known as Beats, are deployed in the same Kubernetes cluster as the guestbook.
The Beats collect, parse, and index the data into Elasticsearch so that you can view and analyze the resulting operational information in Kibana.
This example consists of the following components:
- Elasticsearch and Kibana
- Filebeat
- Metricbeat
- Packetbeat
Table of Contents
[TOC]
Add a cluster role binding
Create a cluster-level role binding so that you can deploy kube-state-metrics and the Beats at the cluster level (in kube-system):
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole=cluster-admin \
--user=<your email>
Install kube-state-metrics
Kubernetes kube-state-metrics is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects. Metricbeat reports these metrics. Add kube-state-metrics to the Kubernetes cluster that the guestbook is running in.
Check to see if kube-state-metrics is running
kubectl get pods --namespace=kube-system | grep kube-state
Install kube-state-metrics if needed
git clone https://github.com/kubernetes/kube-state-metrics.git kube-state-metrics
kubectl create -f kube-state-metrics/kubernetes
kubectl get pods --namespace=kube-system | grep kube-state
Verify that kube-state-metrics is running and ready
kubectl get pods -n kube-system -l k8s-app=kube-state-metrics
Output:
NAME READY STATUS RESTARTS AGE
kube-state-metrics-89d656bf8-vdthm 2/2 Running 0 21s
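If you want to see the raw metrics that kube-state-metrics exposes, an optional check is to port-forward to its Service and query the metrics endpoint. The Service name and port below assume the standard kube-state-metrics manifests:
kubectl port-forward -n kube-system svc/kube-state-metrics 8080:8080 &
# kube_deployment_status_replicas is one of the Deployment metrics Metricbeat will report
curl -s http://localhost:8080/metrics | grep kube_deployment_status_replicas | head
kill %1   # stop the port-forward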
Clone the Elastic examples GitHub repo
git clone https://github.com/elastic/examples.git
The rest of the commands reference files in the examples/beats-k8s-send-anywhere directory, so change to that directory:
cd examples/beats-k8s-send-anywhere
Create a Kubernetes Secret
A Kubernetes Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in an image; putting it in a Secret object allows for more control over how it is used, and reduces the risk of accidental exposure.
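For orientation, here is a minimal sketch of what a Secret manifest looks like; the name and key below are hypothetical, and in this tutorial the secret is instead created from files with kubectl:
apiVersion: v1
kind: Secret
metadata:
  name: example-credentials   # hypothetical name, not used by this tutorial
  namespace: kube-system
type: Opaque
stringData:                   # plain-text input; Kubernetes stores it base64-encoded under data
  ELASTICSEARCH_USERNAME: elastic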
Note: There are two sets of steps here: one for self managed Elasticsearch and Kibana (running on your servers or using the Elastic Helm Charts), and a second, separate set for the managed Elasticsearch Service in Elastic Cloud. Only create the secret for the type of Elasticsearch and Kibana system that you will use for this tutorial.
Self managed
Set the credentials
There are four files to edit to create a k8s secret when you are connecting to self managed Elasticsearch and Kibana (self managed is effectively anything other than the managed Elasticsearch Service in Elastic Cloud).
The files are:
ELASTICSEARCH_HOSTS
ELASTICSEARCH_PASSWORD
ELASTICSEARCH_USERNAME
KIBANA_HOST
Set these with the information for your Elasticsearch cluster and your Kibana host. Here are some examples:
ELASTICSEARCH_HOSTS
1. A nodeGroup from the Elastic Elasticsearch Helm Chart:
   ["http://elasticsearch-master.default.svc.cluster.local:9200"]
2. A single Elasticsearch node running on a Mac where your Beats are running in Docker for Mac:
   ["http://host.docker.internal:9200"]
3. Two Elasticsearch nodes running in VMs or on physical hardware:
   ["http://host1.example.com:9200", "http://host2.example.com:9200"]
Edit ELASTICSEARCH_HOSTS
vi ELASTICSEARCH_HOSTS
ELASTICSEARCH_PASSWORD
Just the password; no whitespace, quotes, or <>:
Edit ELASTICSEARCH_PASSWORD
vi ELASTICSEARCH_PASSWORD
ELASTICSEARCH_USERNAME
Just the username; no whitespace, quotes, or <>:
Edit ELASTICSEARCH_USERNAME
vi ELASTICSEARCH_USERNAME
KIBANA_HOST
1. The Kibana instance from the Elastic Kibana Helm Chart. The subdomain default refers to the default namespace. If you have deployed the Helm Chart using a different namespace, then your subdomain will be different:
"kibana-kibana.default.svc.cluster.local:5601"
2. A Kibana instance running on a Mac where your Beats are running in Docker for Mac:
"host.docker.internal:5601"
3. A Kibana instance running in a VM or on physical hardware:
"host1.example.com:5601"
Edit KIBANA_HOST
vi KIBANA_HOST
Create a Kubernetes secret
This command creates a secret in the Kubernetes system-level namespace (kube-system) based on the files you just edited:
kubectl create secret generic dynamic-logging \
--from-file=./ELASTICSEARCH_HOSTS \
--from-file=./ELASTICSEARCH_PASSWORD \
--from-file=./ELASTICSEARCH_USERNAME \
--from-file=./KIBANA_HOST \
--namespace=kube-system
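You can verify that the secret exists and contains the four expected keys; describe prints the key names and sizes without revealing the values:
kubectl describe secret dynamic-logging -n kube-system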
Managed service
This tab is for the Elasticsearch Service in Elastic Cloud only. If you have already created a secret for a self managed Elasticsearch and Kibana deployment, then continue with Deploy the Beats.
Set the credentials
There are two files to edit to create a k8s secret when you are connecting to the managed Elasticsearch Service in Elastic Cloud. The files are:
ELASTIC_CLOUD_AUTH
ELASTIC_CLOUD_ID
Set these with the information provided to you from the Elasticsearch Service console when you created the deployment.
Here are some examples:
ELASTIC_CLOUD_ID
evk8s:ABC123def456ghi789jkl123mno456pqr789stu123vwx456yza789bcd012efg345hijj678klm901nop345zEwOTJjMTc5YWQ0YzQ5OThlN2U5MjAwYTg4NTIzZQ==
ELASTIC_CLOUD_AUTH
Just the username, a colon (:), and the password, no whitespace or quotes:
elastic:VFxJJf9Tjwer90wnfTghsn8w
Edit the required files:
vi ELASTIC_CLOUD_ID
vi ELASTIC_CLOUD_AUTH
Create a Kubernetes secret
This command creates a secret in the Kubernetes system-level namespace (kube-system) based on the files you just edited:
kubectl create secret generic dynamic-logging \
--from-file=./ELASTIC_CLOUD_ID \
--from-file=./ELASTIC_CLOUD_AUTH \
--namespace=kube-system
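As with the self managed variant, you can confirm that a key round-trips correctly; note that base64 is only an encoding, not encryption:
kubectl get secret dynamic-logging -n kube-system -o jsonpath='{.data.ELASTIC_CLOUD_ID}' | base64 --decode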
Deploy the Beats
Manifest files are provided for each Beat. These manifest files use the secret created earlier to configure the Beats to connect to your Elasticsearch and Kibana servers.
About Filebeat
Filebeat collects logs from the Kubernetes nodes and from the containers in each pod running on those nodes. Filebeat is deployed as a DaemonSet.
Filebeat can autodiscover applications running in your Kubernetes cluster. At startup, Filebeat scans existing containers and launches the proper configurations for them, then watches for new start/stop events.
Here is the autodiscover configuration that enables Filebeat to locate and parse Redis logs from the Redis containers deployed with the guestbook application.
This configuration is in the file filebeat-kubernetes.yaml:
- condition.contains:
    kubernetes.labels.app: redis
  config:
    - module: redis
      log:
        input:
          type: docker
          containers.ids:
            - ${data.kubernetes.container.id}
      slowlog:
        enabled: true
        var.hosts: ["${data.host}:${data.port}"]
This configures Filebeat to apply the Filebeat redis module when a container is detected with a label app containing the string redis. The redis module has the ability to collect the log stream from the container by using the docker input type (reading the file on the Kubernetes node associated with the STDOUT stream from this Redis container). Additionally, the module can collect Redis slowlog entries by connecting to the proper pod host and port, which is provided in the container metadata.
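Because the condition matches on the app label, a quick sanity check is to confirm that the guestbook's Redis pods actually carry a label containing redis:
kubectl get pods -l app=redis --show-labels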
Deploy Filebeat:
kubectl create -f filebeat-kubernetes.yaml
Verify
kubectl get pods -n kube-system -l k8s-app=filebeat-dynamic
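If the pods are Running but you want to confirm that Filebeat connected to Elasticsearch, tailing the DaemonSet logs is a quick check (the label is the same one used in the verify command above):
kubectl logs -n kube-system -l k8s-app=filebeat-dynamic --tail=20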
About Metricbeat
Metricbeat autodiscover is configured in the same way as Filebeat. Here is the Metricbeat autodiscover configuration for the Redis containers.
This configuration is in the file metricbeat-kubernetes.yaml:
- condition.equals:
    kubernetes.labels.tier: backend
  config:
    - module: redis
      metricsets: ["info", "keyspace"]
      period: 10s
      # Redis hosts
      hosts: ["${data.host}:${data.port}"]
This configures Metricbeat to apply the Metricbeat redis module when a container is detected with a label tier equal to the string backend. The redis module has the ability to collect the info and keyspace metrics from the container by connecting to the proper pod host and port, which is provided in the container metadata.
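As with Filebeat, you can confirm that this condition will match by listing the pods that carry the tier label the configuration keys on:
kubectl get pods -l tier=backend --show-labels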
Deploy Metricbeat
kubectl create -f metricbeat-kubernetes.yaml
Verify
kubectl get pods -n kube-system -l k8s-app=metricbeat
About Packetbeat
Packetbeat configuration is different from Filebeat and Metricbeat. Rather than specifying patterns to match against container labels, the configuration is based on the protocols and port numbers involved. Shown below is a subset of the port numbers.
Note: If you are running a service on a non-standard port, add that port number to the appropriate type in packetbeat-kubernetes.yaml and delete / create the Packetbeat DaemonSet (see the sketch after the configuration below).
packetbeat.interfaces.device: any

packetbeat.protocols:
- type: dns
  ports: [53]
  include_authorities: true
  include_additionals: true

- type: http
  ports: [80, 8000, 8080, 9200]

- type: mysql
  ports: [3306]

- type: redis
  ports: [6379]

packetbeat.flows:
  timeout: 30s
  period: 10s
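For example, if you were running an additional HTTP service on a non-standard port such as 8081 (a hypothetical port, not one used by the guestbook), you would extend the http entry in packetbeat-kubernetes.yaml:
- type: http
  ports: [80, 8000, 8080, 9200, 8081]
and then recreate the DaemonSet:
kubectl delete -f packetbeat-kubernetes.yaml
kubectl create -f packetbeat-kubernetes.yaml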
Deploy Packetbeat
kubectl create -f packetbeat-kubernetes.yaml
Verify
kubectl get pods -n kube-system -l k8s-app=packetbeat-dynamic
View in Kibana
Open Kibana in your browser and then open the Dashboard application. In the search bar, type Kubernetes and click on the Metricbeat dashboard for Kubernetes. This dashboard reports on the state of your nodes, deployments, and so on.
Search for Packetbeat on the Dashboard page, and view the Packetbeat overview.
Similarly, view dashboards for Apache and Redis. You will see dashboards for logs and metrics for each. The Apache Metricbeat dashboard will be blank.
Look at the Apache Filebeat dashboard and scroll to the bottom to view the Apache error logs. This will tell you why there are no metrics available for Apache.
To enable Metricbeat to retrieve the Apache metrics, enable server-status by adding a ConfigMap that includes a mod-status configuration file, and re-deploy the guestbook.
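As a rough sketch of what that ConfigMap might contain (the resource name and file name here are assumptions, not part of this tutorial's manifests), a minimal mod-status configuration exposes the endpoint that Metricbeat's apache module reads; it would still need to be mounted into the Apache configuration directory of the frontend containers when you re-deploy the guestbook:
apiVersion: v1
kind: ConfigMap
metadata:
  name: apache-server-status   # hypothetical name
data:
  status.conf: |
    # Expose /server-status so the Metricbeat apache module can scrape it
    <Location "/server-status">
      SetHandler server-status
    </Location>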
Scale your deployments and see new pods being monitored
List the existing deployments:
kubectl get deployments
The output:
NAME READY UP-TO-DATE AVAILABLE AGE
frontend 3/3 3 3 3h27m
redis-master 1/1 1 1 3h27m
redis-slave 2/2 2 2 3h27m
Scale the frontend down to two pods:
kubectl scale --replicas=2 deployment/frontend
The output:
deployment.extensions/frontend scaled
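If you want to return to the tutorial's original state after observing the scale-down, scale the frontend back to three replicas the same way:
kubectl scale --replicas=3 deployment/frontend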
View the changes in Kibana
See the screenshot below; add the indicated filters and then add the columns to the view. You can see the ScalingReplicaSet entry that is marked; following the list of events from there to the top shows the image being pulled, the volumes being mounted, the pod starting, and so on.
Kibana Discover
Cleaning up
Deleting the Deployments and Services also deletes any running Pods. Use labels to delete multiple resources with one command.
Run the following commands to delete all Pods, Deployments, and Services.
kubectl delete deployment -l app=redis
kubectl delete service -l app=redis
kubectl delete deployment -l app=guestbook
kubectl delete service -l app=guestbook
kubectl delete -f filebeat-kubernetes.yaml
kubectl delete -f metricbeat-kubernetes.yaml
kubectl delete -f packetbeat-kubernetes.yaml
kubectl delete secret dynamic-logging -n kube-system
Query the list of Pods to verify that no Pods are running:
kubectl get pods
The response should be this:
No resources found.
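If you also want to remove the components installed at the beginning of this tutorial, and nothing else in your cluster depends on them, delete kube-state-metrics and the cluster role binding you created:
kubectl delete -f kube-state-metrics/kubernetes
kubectl delete clusterrolebinding cluster-admin-binding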