Table of Contents
1. Installing and configuring Logstash
(1) Installation
(2) Test pipelines
(3) Configuration
Server

| Software | Hostname | IP address | OS version | Spec |
|---|---|---|---|---|
| Logstash | Elk | 10.3.145.14 | CentOS 7.5.1804 | 2 cores / 4 GB |
Software version: logstash-7.13.2.tar.gz

Like Elasticsearch, Logstash requires a JDK to run. To save resources, Logstash is installed on the 10.3.145.14 node in this setup.
[root@elk ~]# tar zxf /usr/local/package/logstash-7.13.2.tar.gz -C /usr/local/
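Before wiring up any pipeline, it is worth confirming that the unpacked distribution starts at all; the binary can report its version (assuming the install path above):

```
[root@elk ~]# /usr/local/logstash-7.13.2/bin/logstash --version
```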
Standard input => standard output
1. Start Logstash.
2. Once Logstash is up, type data directly into the terminal.
3. Logstash processes each line and prints the result straight back.
input {
  stdin {}
}
output {
  stdout {
    codec => rubydebug
  }
}
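To try this pipeline quickly, the same input/output can also be passed inline with the `-e` flag instead of a config file (press Ctrl+C to stop):

```
[root@elk ~]# cd /usr/local/logstash-7.13.2
[root@elk logstash-7.13.2]# bin/logstash -e 'input { stdin {} } output { stdout { codec => rubydebug } }'
```

Each line you type is echoed back as a structured event with `@timestamp`, `@version`, `host`, and `message` fields.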
Standard input => standard output and ES cluster
1. Start Logstash.
2. Type data directly into the terminal.
3. Logstash processes the data, prints it back, and also stores it in the ES cluster.
input {
  stdin {}
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["10.3.145.14","10.3.145.56","10.3.145.57"]
    index => 'logstash-debug-%{+YYYY-MM-dd}'
  }
}
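The `%{+YYYY-MM-dd}` sprintf reference in the index name is resolved from each event's `@timestamp`, so events are written to one index per day. A quick shell sketch of the name an event processed today would land in:

```shell
# The sprintf reference %{+YYYY-MM-dd} resolves to the event's date,
# producing a daily index such as logstash-debug-2021-06-25.
index="logstash-debug-$(date +%Y-%m-%d)"
echo "$index"
```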
Port input => field matching => standard output and ES cluster
1. Logs are sent to Logstash over TCP port 8888.
2. Grok applies a regular expression match to the data.
3. The processed data is printed to the terminal and stored in ES.
input {
  tcp {
    port => 8888
  }
}
filter {
  grok {
    match => {"message" => "%{DATA:key} %{NUMBER:value:int}"}
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["10.3.145.14","10.3.145.56","10.3.145.57"]
    index => 'logstash-debug-%{+YYYY-MM-dd}'
  }
}
# yum install -y nc
# free -m |awk 'NF==2{print $1,$3}' |nc logstash_ip 8888
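The pipe above sends lines over TCP 8888; any line of the "text, space, number" shape is what the `%{DATA:key} %{NUMBER:value:int}` grok pattern expects. A rough shell approximation of that pattern, using a made-up sample line:

```shell
# Hypothetical input line of the "key value" shape the grok pattern targets.
line='Mem: 1204'

# %{DATA:key} %{NUMBER:value:int} ~ "anything, a space, then digits at the end";
# grep -E prints the line only if it matches.
echo "$line" | grep -E '^.+ [0-9]+$'   # prints: Mem: 1204
```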
File input => field matching and date-format conversion => ES cluster
1. Local log files are read directly into Logstash.
2. The logs are processed and stored in ES.
input {
  file {
    type => "nginx-log"
    path => "/var/log/nginx/error.log"
    start_position => "beginning" # read the file from the start on the first run
    # sincedb_path => "custom path" # records the read offset; defaults to data/plugins/inputs/file/.sincedb*
  }
}
filter {
  grok {
    # [ and ] are regex metacharacters and must be escaped in the pattern
    match => { "message" => '%{DATESTAMP:date} \[%{WORD:level}\] %{DATA:msg} client: %{IPV4:cip},%{DATA}"%{DATA:url}"%{DATA}"%{IPV4:host}"'}
  }
  date {
    # parse the "date" field captured by grok above (nginx error-log format)
    match => [ "date", "yyyy/MM/dd HH:mm:ss" ]
  }
}
output {
  if [type] == "nginx-log" {
    elasticsearch {
      hosts => ["192.168.249.139:9200","192.168.249.149:9200","192.168.249.159:9200"]
      index => 'logstash-audit_log-%{+YYYY-MM-dd}'
    }
  }
}
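As a sanity check on the grok pattern, the sketch below pulls the timestamp and level out of an nginx error-log line with plain shell tools (the log line itself is invented for illustration):

```shell
# Hypothetical nginx error-log line of the shape the grok pattern targets.
line='2021/06/25 10:30:00 [error] 1234#0: *5 open() failed, client: 10.3.145.20, request: "GET /x HTTP/1.1", host: "10.3.145.14"'

# DATESTAMP captures the leading "yyyy/MM/dd HH:mm:ss"; WORD inside \[...\]
# captures the level. Rough shell equivalents of those two captures:
ts=$(echo "$line" | awk '{print $1, $2}')
level=$(echo "$line" | sed -E 's/^[^[]*\[([a-z]+)\].*$/\1/')
echo "$ts $level"   # prints: 2021/06/25 10:30:00 error
```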
Filebeat => field matching => standard output and ES
input {
  beats {
    port => 5000
  }
}
filter {
  grok {
    match => {"message" => "%{IPV4:cip}"}
  }
}
output {
  elasticsearch {
    hosts => ["192.168.249.139:9200","192.168.249.149:9200","192.168.249.159:9200"]
    index => 'test-%{+YYYY-MM-dd}'
  }
  stdout { codec => rubydebug }
}
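For this pipeline to receive anything, Filebeat on the client side must point its output at the beats port opened above. A minimal `filebeat.yml` fragment to that effect (the log path is an example, not from this setup):

```
filebeat.inputs:
- type: log
  paths:
    - /var/log/nginx/access.log
output.logstash:
  hosts: ["10.3.145.14:5000"]
```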
Create a directory to hold all of the input, filter, and output configuration files.
[root@elk ~]# mkdir -p /usr/local/logstash-7.13.2/etc/conf.d
[root@elk ~]# vim /usr/local/logstash-7.13.2/etc/conf.d/input.conf
input {
  kafka {
    type => "audit_log"
    codec => "json"
    topics => "nginx"
    decorate_events => true
    bootstrap_servers => "10.3.145.41:9092, 10.3.145.42:9092, 10.3.145.43:9092"
  }
}
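Before starting Logstash, it can help to confirm that messages are actually arriving on the `nginx` topic. The stock Kafka console consumer can do that (the Kafka install path here is an assumption):

```
[root@elk ~]# /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 10.3.145.41:9092 --topic nginx --from-beginning
```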
[root@elk ~]# vim /usr/local/logstash-7.13.2/etc/conf.d/filter.conf
filter {
  json { # if the original log is in JSON format, it must be parsed with the json plugin
    source => "message"
    target => "nginx" # name of the field that will hold the parsed object
  }
}
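Assuming nginx emits its log as JSON (the sample below is invented), the filter parses the string in `message` and nests the result under the `nginx` field:

```
# incoming message field (a JSON string):
{"remote_addr":"10.3.145.20","status":200,"request":"GET / HTTP/1.1"}

# after the json filter, the event carries the parsed object:
"nginx" => {
  "remote_addr" => "10.3.145.20",
  "status"      => 200,
  "request"     => "GET / HTTP/1.1"
}
```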
[root@elk ~]# vim /usr/local/logstash-7.13.2/etc/conf.d/output.conf
output {
  if [type] == "audit_log" {
    elasticsearch {
      hosts => ["10.3.145.14","10.3.145.56","10.3.145.57"]
      index => 'logstash-audit_log-%{+YYYY-MM-dd}'
    }
  }
}
(3) Startup
[root@elk ~]# cd /usr/local/logstash-7.13.2
[root@elk logstash-7.13.2]# nohup bin/logstash -f etc/conf.d/ --config.reload.automatic &
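Once it is up, Logstash serves a monitoring API on port 9600 by default, which gives a quick liveness check (it returns node info as JSON):

```
[root@elk logstash-7.13.2]# curl -s http://localhost:9600
```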