ELK Cluster + Filebeat + Kafka

  • Overview
  • Configure Filebeat
    • Edit filebeat.yml as the wender user
    • Start Filebeat
  • Configure Kafka
    • Start ZooKeeper
    • Start Kafka and create the topics
  • Configure Logstash
    • Create logstash-msg-a.conf with the following content:
    • Create logstash-msg-b.conf with the following content:
    • Create logstash-msg-c.conf with the following content:
    • Start Logstash
  • Configure Elasticsearch
    • Fix the ES bootstrap check failure
    • Start ES
  • Configure Kibana
    • Start, stop, and access Kibana
  • Component startup order

Overview

The Linux host's IP is 10.10.20.26, running CentOS Linux release 7.6.1810 (Core); the current user is wender and the JDK version is 1.8.0_121. The component versions are:
filebeat-6.4.3-linux-x86_64
elasticsearch-6.4.3
logstash-6.4.3
kibana-6.4.3-linux-x86_64
kafka_2.12-2.3.0

Filebeat watches three sets of log files and ships any changes to Kafka; Logstash consumes the messages from the Kafka topics and forwards them to Elasticsearch; Kibana reads the logs from Elasticsearch and displays them in the browser.

Directory permissions:

chown -R wender /home/wender/app/
chown -R wender /home/wender/app/jdk1.8.0_121

Configure Filebeat

Edit filebeat.yml as the wender user

su wender
vi filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  # question bank logs
  paths:
    - /var/logs/zbg/xapi/xapi/*.json
  tail_files: true
  fields:
    logtopic: elk-msg-a
    
- type: log
  enabled: true
  # study center logs
  paths:
    - /var/logs/study/study/*.json
  tail_files: true
  fields:
    logtopic: elk-msg-b

- type: log
  enabled: true
  # academic affairs logs
  paths:
    - /var/logs/zbg/zbg/*.json
  tail_files: true
  fields:
    logtopic: elk-msg-c

output.kafka:
  enabled: true
  hosts: ["10.10.20.26:9092"]
  
  # route each event to the Kafka topic named by its fields.logtopic value
  topic: '%{[fields.logtopic]}'
  partition.round_robin:
    reachable_only: false

  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000

# shipper name; this is a top-level Filebeat setting rather than an output.kafka option
name: 10.10.20.26_filebeat
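
Before starting, Filebeat can check the config and the Kafka output itself. A quick sanity check, assuming the config above is saved as filebeat.cluster.kafka.yml (the file name used by the start command below):

cd /home/wender/app/filebeat-6.4.3-linux-x86_64
## validate the config file syntax
./filebeat test config -c filebeat.cluster.kafka.yml
## verify the Kafka output is reachable (requires Kafka to already be running)
./filebeat test output -c filebeat.cluster.kafka.yml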

Start Filebeat

su wender
nohup /home/wender/app/filebeat-6.4.3-linux-x86_64/filebeat -c /home/wender/app/filebeat-6.4.3-linux-x86_64/filebeat.cluster.kafka.yml &
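
To confirm the process came up, check for it and tail its log (the path below is Filebeat's default log location under the install directory, assuming logging was not customized):

ps -ef | grep [f]ilebeat
tail -f /home/wender/app/filebeat-6.4.3-linux-x86_64/logs/filebeat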

Configure Kafka

Start ZooKeeper

## start ZooKeeper as root
su
/home/wender/app/kafka_2.12-2.3.0/bin/zookeeper-server-start.sh /home/wender/app/kafka_2.12-2.3.0/config/zookeeper.properties >/dev/null 2>&1 &
## or run the following command instead
nohup /home/wender/app/kafka_2.12-2.3.0/bin/zookeeper-server-start.sh /home/wender/app/kafka_2.12-2.3.0/config/zookeeper.properties &
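
To verify ZooKeeper is accepting connections, the bundled shell can run a single command and exit (the zookeeper znode should be listed; broker entries appear only after Kafka starts):

/home/wender/app/kafka_2.12-2.3.0/bin/zookeeper-shell.sh 10.10.20.26:2181 ls /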

Start Kafka and create the topics

##start kafka
/home/wender/app/kafka_2.12-2.3.0/bin/kafka-server-start.sh /home/wender/app/kafka_2.12-2.3.0/config/server.properties >/dev/null 2>&1 &

##create topic
/home/wender/app/kafka_2.12-2.3.0/bin/kafka-topics.sh --create --zookeeper 10.10.20.26:2181 --replication-factor 1 --partitions 1 --topic elk-msg-a
/home/wender/app/kafka_2.12-2.3.0/bin/kafka-topics.sh --create --zookeeper 10.10.20.26:2181 --replication-factor 1 --partitions 1 --topic elk-msg-b
/home/wender/app/kafka_2.12-2.3.0/bin/kafka-topics.sh --create --zookeeper 10.10.20.26:2181 --replication-factor 1 --partitions 1 --topic elk-msg-c
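
To confirm each topic was created with the expected partition and replication settings:

##describe topic
/home/wender/app/kafka_2.12-2.3.0/bin/kafka-topics.sh --describe --zookeeper 10.10.20.26:2181 --topic elk-msg-a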

##the following commands are common Kafka operations, recorded here for reference; they are not used in this ELK setup
##get all topics
/home/wender/app/kafka_2.12-2.3.0/bin/kafka-topics.sh --list --zookeeper localhost:2181

##delete topic
/home/wender/app/kafka_2.12-2.3.0/bin/kafka-topics.sh --delete --zookeeper 10.10.20.26:2181 --topic elk-msg-c
/home/wender/app/kafka_2.12-2.3.0/bin/zookeeper-shell.sh 10.10.20.26:2181
ls /brokers/topics
rmr /brokers/topics/elk-msg-c
quit

##consume message on topic
/home/wender/app/kafka_2.12-2.3.0/bin/kafka-console-consumer.sh --bootstrap-server 10.10.20.26:9092 --topic elk-msg-a --from-beginning
/home/wender/app/kafka_2.12-2.3.0/bin/kafka-console-consumer.sh --bootstrap-server 10.10.20.26:9092 --topic elk-msg-b --from-beginning
/home/wender/app/kafka_2.12-2.3.0/bin/kafka-console-consumer.sh --bootstrap-server 10.10.20.26:9092 --topic elk-msg-c --from-beginning

Configure Logstash

Create logstash-msg-a.conf with the following content:

cat /home/wender/app/logstash-6.4.3/config/conf.d/conf.d_kafka/logstash-msg-a.conf

input {
    kafka {
        bootstrap_servers => "10.10.20.26:9092"
        topics => ["elk-msg-a"]
        auto_offset_reset => "earliest"
        codec => json {
            charset => "UTF-8"
        }
        client_id => "client-msg-a"
        group_id => "group-msg-a"
    }
    # additional inputs can be appended here
}

filter {
    # parse the message field as JSON
    json {
        source => "message"
    }

    mutate {
        # remove the raw message field
        remove_field => ["message"]
    }
}

output {
    # index the processed logs into ES
    if [fields][logtopic] == "elk-msg-a" {
        elasticsearch {
            hosts => "10.10.20.26:9200"
            index => "elk-msg-a"
            codec => "json"
        }
    }
} 
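
The file can be syntax-checked before use; -t (short for --config.test_and_exit) parses the config and exits without starting the pipeline:

/home/wender/app/logstash-6.4.3/bin/logstash -t -f /home/wender/app/logstash-6.4.3/config/conf.d/conf.d_kafka/logstash-msg-a.conf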

Create logstash-msg-b.conf with the following content:

cat /home/wender/app/logstash-6.4.3/config/conf.d/conf.d_kafka/logstash-msg-b.conf

input {
    kafka {
        bootstrap_servers => "10.10.20.26:9092"
        topics => ["elk-msg-b"]
        auto_offset_reset => "earliest"
        codec => json {
            charset => "UTF-8"
        }
        client_id => "client-msg-b"
        group_id => "group-msg-b"
    }
    # additional inputs can be appended here
}

filter {
    # parse the message field as JSON
    json {
        source => "message"
    }

    mutate {
        # remove the raw message field
        remove_field => ["message"]
    }
}

output {
    # index the processed logs into ES
    if [fields][logtopic] == "elk-msg-b" {
        elasticsearch {
            hosts => "10.10.20.26:9200"
            index => "elk-msg-b"
            codec => "json"
        }
    }
}

Create logstash-msg-c.conf with the following content:

cat /home/wender/app/logstash-6.4.3/config/conf.d/conf.d_kafka/logstash-msg-c.conf

input {
    kafka {
        bootstrap_servers => "10.10.20.26:9092"
        topics => ["elk-msg-c"]
        auto_offset_reset => "earliest"
        codec => json {
            charset => "UTF-8"
        }
        client_id => "client-msg-c"
        group_id => "group-msg-c"
    }
    # additional inputs can be appended here
}

filter {
    # parse the message field as JSON
    json {
        source => "message"
    }

    mutate {
        # remove the raw message field
        remove_field => ["message"]
    }
}

output {
    # index the processed logs into ES
    if [fields][logtopic] == "elk-msg-c" {
        elasticsearch {
            hosts => "10.10.20.26:9200"
            index => "elk-msg-c"
            codec => "json"
        }
    }
}
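
The three files differ only in topic, client/group ids, and index name. As an alternative, a single pipeline could serve all three topics; a minimal sketch, assuming every event keeps the fields.logtopic value set by Filebeat so it can drive the index name:

input {
    kafka {
        bootstrap_servers => "10.10.20.26:9092"
        topics => ["elk-msg-a", "elk-msg-b", "elk-msg-c"]
        auto_offset_reset => "earliest"
        codec => json {
            charset => "UTF-8"
        }
        group_id => "group-msg-all"
    }
}

filter {
    json {
        source => "message"
    }
    mutate {
        remove_field => ["message"]
    }
}

output {
    elasticsearch {
        hosts => "10.10.20.26:9200"
        # the index name comes from the event itself
        index => "%{[fields][logtopic]}"
        codec => "json"
    }
}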

Start Logstash

#start logstash
vi /home/wender/app/logstash-6.4.3/bin/logstash
export JAVA_CMD="/home/wender/app/jdk1.8.0_121/bin"  # change to your server's JDK location
export JAVA_HOME="/home/wender/app/jdk1.8.0_121/"    # change to your server's JDK location

# load every config file under /home/wender/app/logstash-6.4.3/config/conf.d/conf.d_kafka/; the trailing / is required so the path is treated as a directory
su wender
nohup /home/wender/app/logstash-6.4.3/bin/logstash -f /home/wender/app/logstash-6.4.3/config/conf.d/conf.d_kafka/ &
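
Once running, the Logstash monitoring API (enabled by default on port 9600, bound to localhost) confirms the node is up:

curl -s http://localhost:9600/?pretty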

Configure Elasticsearch

Fix the ES bootstrap check failure

[2020-05-07T13:25:01,579][ERROR][o.e.b.Bootstrap          ] [node-26] node validation exception
[1] bootstrap checks failed
[1]: max file descriptors [65535] for elasticsearch process is too low, increase to at least [65536]
1. Edit /etc/security/limits.conf:
*    hard    nofile    65536
*    soft    nofile    65536
2. Edit the following files (create them if they do not exist):
[root@master config]# cat /etc/pam.d/sshd
session    required   /lib64/security/pam_limits.so
[root@master config]# cat /etc/pam.d/common-session
session required /lib64/security/pam_limits.so
3. Add one line to /etc/profile:
ulimit -n 65536
4. Restart the sshd service (on CentOS 7 use systemctl; service forwards to it):
systemctl restart sshd
service sshd restart
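
After logging in again as the wender user, verify the new limit took effect:

ulimit -n
##expected output: 65536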

Start ES

chown -R wender /home/wender/app/elasticsearch-6.4.3
su wender
/home/wender/app/elasticsearch-6.4.3/bin/elasticsearch -d
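
Give ES a few seconds to boot, then verify it responds (this assumes ES is bound to 10.10.20.26, as the Kibana config below implies):

curl http://10.10.20.26:9200
##once logs are flowing, the elk-msg-* indices should appear
curl http://10.10.20.26:9200/_cat/indices?v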

Configure Kibana

vi /home/wender/app/kibana-6.4.3-linux-x86_64/config/kibana.yml
server.port: 5601
server.host: "10.10.20.26"
server.name: "wender"
elasticsearch.url: "http://10.10.20.26:9200" # Elasticsearch address
kibana.index: ".kibana"

Start, stop, and access Kibana

# start
su wender
cd /home/wender/app/kibana-6.4.3-linux-x86_64/bin
./kibana
# or
nohup ./kibana > /dev/null 2>&1 &

# stop
fuser -n tcp 5601   # find the process listening on port 5601
kill -9 <PID>       # kill the PID returned by the previous command

# access URL
http://10.10.20.26:5601

Component startup order

1. ZooKeeper
2. Kafka
3. Filebeat
4. Elasticsearch
5. Logstash
6. Kibana
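
With all components up, a simple end-to-end smoke test is to append a JSON line to one of the watched files and follow it through the pipeline (the file name smoke.json and the msg field are invented for this test; any file matching the watched glob works):

echo '{"level":"INFO","msg":"elk smoke test"}' >> /var/logs/zbg/xapi/xapi/smoke.json
##the line should appear on the Kafka topic...
/home/wender/app/kafka_2.12-2.3.0/bin/kafka-console-consumer.sh --bootstrap-server 10.10.20.26:9092 --topic elk-msg-a
##...and shortly after in the ES index
curl 'http://10.10.20.26:9200/elk-msg-a/_search?q=msg:smoke&pretty'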


