Let's start with the YAML manifests:
apiVersion: v1
kind: ConfigMap
metadata:
  name: k8s-logs-filebeat-config
  namespace: my-namespace
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      enabled: true
      symlinks: true
      paths:
        - /var/log/containers/mylog-*.log
      multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}\s[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}'
      multiline.negate: true
      multiline.match: after
      multiline.timeout: 10s
    processors:
      - drop_fields:
          fields: ["host", "ecs", "log", "agent", "input"]
          ignore_missing: false
    output.logstash:
      hosts: ["192.168.1.2:5044"]
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: k8s-logs
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      project: k8s
      app: filebeat
  template:
    metadata:
      labels:
        project: k8s
        app: filebeat
    spec:
      containers:
      - name: filebeat
        imagePullPolicy: IfNotPresent
        image: elastic/filebeat:7.13.4
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: filebeat-config
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
        - name: k8s-docker
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: k8s-pods
          mountPath: /var/log/pods
          readOnly: true
        - name: k8s-logs
          mountPath: /var/log/containers
          readOnly: true
      volumes:
      - name: k8s-docker
        hostPath:
          path: /var/lib/docker/containers
      - name: k8s-pods
        hostPath:
          path: /var/log/pods
      - name: k8s-logs
        hostPath:
          path: /var/log/containers
      - name: filebeat-config
        configMap:
          name: k8s-logs-filebeat-config
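A quick way to roll this out and check it (assuming the two manifests above are saved together as filebeat.yaml; the filename is arbitrary):

# Create the ConfigMap and DaemonSet
kubectl apply -f filebeat.yaml
# One filebeat pod should be Running on each node
kubectl -n my-namespace get pods -l app=filebeat
# Tail the agent's own output to confirm it connects to Logstash
kubectl -n my-namespace logs -f ds/k8s-logs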
A few details of the manifest are worth calling out:

multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}\s[0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}'

This matches the date-and-time stamp that begins every entry, e.g. 2021-08-02 20:30:30.451. Lines that do not start with such a timestamp are appended to the previous event, so multi-line stack traces are shipped as a single message.

securityContext:
  runAsUser: 0

Runs the Filebeat container as root so it can read the log directories mounted from the host.

"-c", "/etc/filebeat.yml"

Loads the configuration file mounted from the ConfigMap.

Next, Logstash. Create the pipeline in pipelines.yml:

- pipeline.id: mypipeline
  pipeline.workers: 8
  pipeline.batch.size: 1000
  path.config: "/home/elk/app/logstash-7.13.4/config/my.config"

pipeline.workers is the number of worker threads and pipeline.batch.size the number of events handled per batch; set both according to your actual load. The pipeline itself is defined in my.config:
input {
  beats {
    codec => json
    ssl => false
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:datetime}\|%{WORD:loglevel}\|%{IPORHOST:ipaddress}\|%{DATA:appname}\|%{NUMBER:pid}\|%{DATA:method}\|%{GREEDYDATA:loginfo}" }
  }
  # Drop unparseable events before the tags field is removed below,
  # otherwise the _grokparsefailure check could never match.
  if "_grokparsefailure" in [tags] {
    drop {}
  }
  mutate {
    remove_field => ["message", "stream", "tags", "@timestamp", "time", "@version", "path", "host"]
  }
  date {
    match => [ "datetime", "ISO8601" ]
    target => "@timestamp"
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.1.2:9200"]
    index => "%{[appname]}-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "mypassword"
  }
}
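Before wiring everything together, the pipeline can be sanity-checked and then started (assuming the Logstash install path used in pipelines.yml above):

cd /home/elk/app/logstash-7.13.4
# Validate my.config without starting the pipeline
bin/logstash -f config/my.config --config.test_and_exit
# Started with no -f flag, Logstash loads every pipeline defined in config/pipelines.yml
bin/logstash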
That pipe-delimited log format comes from the application side, a Spring Boot logback configuration:

<conversionRule conversionWord="ip" converterClass="com.enjoy.common.log.LogIpConfig" />
<springProperty scope="context" name="appName" source="spring.application.name" defaultValue="myapp"/>
<property name="ELK_LOG_PATTERN"
          value="%d{yyyy-MM-dd HH:mm:ss.SSS}|%level|%ip|${appName}|${PID:- }|%class:%method|%msg%n" />
Each pipe-separated field maps one-to-one onto the grok pattern, so "%{TIMESTAMP_ISO8601:datetime}\|%{WORD:loglevel}\|%{IPORHOST:ipaddress}\|%{DATA:appname}\|%{NUMBER:pid}\|%{DATA:method}\|%{GREEDYDATA:loginfo}" parses it (the | separators are escaped as \| because | means alternation in the underlying regex). This turns every log line into a structured JSON document.
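For example, a line produced by that pattern (all values hypothetical) and the fields grok extracts from it:

2021-08-02 20:30:30.451|INFO|192.168.1.10|myapp|12345|com.enjoy.demo.UserService:getUser|user 42 loaded

{
  "datetime": "2021-08-02 20:30:30.451",
  "loglevel": "INFO",
  "ipaddress": "192.168.1.10",
  "appname": "myapp",
  "pid": "12345",
  "method": "com.enjoy.demo.UserService:getUser",
  "loginfo": "user 42 loaded"
}

Moving on to Elasticsearch (elasticsearch.yml):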
node.name: elk-1
network.host: 192.168.1.2
http.port: 9200
discovery.seed_hosts: ["192.168.1.2"]
cluster.initial_master_nodes: ["elk-1"]
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
This sets up a simple single-node Elasticsearch protected by username and password. After it starts, set the built-in users' passwords interactively:
elasticsearch-setup-passwords interactive
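To confirm that authentication works, query the cluster with the password just set (hypothetical value, as above):

curl -u elastic:mypassword http://192.168.1.2:9200/_cluster/health?pretty

Kibana then connects to the secured cluster through kibana.yml: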
server.port: 7788
server.host: "192.168.1.2"
elasticsearch.hosts: ["http://192.168.1.2:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "mypassword"
Left menu -> Stack Management -> Index Management. Once Logstash starts shipping events, the corresponding indices show up here.
Under Index Management, choose Index patterns, then Create index pattern, and enter a wildcard as the Index pattern name. Here we use the appname prefix, which matches that application's indices across all dates; select @timestamp as the time field so logs can be filtered by date.
Left menu -> Discover: browse the logs by appname and date.
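In Discover, a KQL query narrows the results further; for example, to show only error-level entries of one app (hypothetical values):

appname : "myapp" and loglevel : "ERROR"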