This article is just an operational record; for how these components actually work, see the official Elasticsearch documentation.
Requires CentOS 7.0 or later.
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.6.1-x86_64.rpm
rpm -ivh filebeat-7.6.1-x86_64.rpm
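After installing the RPM, a quick sanity check confirms the binary is on the PATH, and enabling the unit makes Filebeat start on boot (both are standard Filebeat/systemd commands):

```shell
# Confirm the installed version matches the RPM we downloaded
filebeat version
# Start Filebeat automatically at boot
systemctl enable filebeat
```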
docker pull wurstmeister/zookeeper:latest
docker pull wurstmeister/kafka:latest
docker pull logstash:7.6.1
docker pull elasticsearch:7.6.1
docker pull kibana:7.6.1
docker pull nginx:latest
#zookeeper port:2181
docker run -d --name zookeeper -p 2181:2181 --log-opt max-size=100m wurstmeister/zookeeper:latest
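Before starting Kafka, it is worth confirming ZooKeeper answers. One way is the four-letter "ruok" command (this assumes nc is installed on the host and that the ZooKeeper version in the image still allows four-letter words; replace the IP with your own):

```shell
# A healthy ZooKeeper replies "imok"
echo ruok | nc 192.168.1.128 2181
```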
#kafka port:9092
docker run -d --name kafka --publish 9092:9092 \
--link zookeeper \
--env KAFKA_ZOOKEEPER_CONNECT=192.168.1.128:2181 \
--env KAFKA_ADVERTISED_HOST_NAME=192.168.1.128 \
--env KAFKA_ADVERTISED_PORT=9092 \
--volume /etc/localtime:/etc/localtime \
wurstmeister/kafka:latest
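Once both containers are up, the Kafka command-line tools bundled in the wurstmeister image can verify the broker, for example by creating the topic Filebeat will publish to (it would also be auto-created on first write, so this step is optional):

```shell
# Create the 'filebeat' topic up front, then list topics to confirm
docker exec kafka kafka-topics.sh --create \
  --zookeeper 192.168.1.128:2181 \
  --replication-factor 1 --partitions 1 --topic filebeat
docker exec kafka kafka-topics.sh --zookeeper 192.168.1.128:2181 --list
```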
A kernel parameter must be raised first, otherwise Elasticsearch will fail to start because it cannot allocate enough memory-mapped areas:
sysctl -w vm.max_map_count=262144
sysctl -w takes effect immediately; to persist the setting across reboots, also add vm.max_map_count=262144 to /etc/sysctl.conf and reload:
sysctl -p
docker run -d --name es -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -e "cluster.name=elasticsearch" -e "ES_JAVA_OPTS=-Xms512m -Xmx512m" --log-opt max-size=100m -d elasticsearch:7.6.1
docker run -d -p 5601:5601 --name kibana -e ELASTICSEARCH_HOSTS=http://192.168.1.128:9200 --log-opt max-size=100m kibana:7.6.1
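Before wiring up Logstash, check that Elasticsearch is answering (Kibana may take a minute longer to come up at http://192.168.1.128:5601):

```shell
# Expect a JSON body with cluster_name "elasticsearch" and status green or yellow
curl http://192.168.1.128:9200/_cluster/health?pretty
```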
Create a logstash.conf file under /opt/elk/:
#Input: consume the messages Filebeat publishes to Kafka
input {
  kafka {
    bootstrap_servers => "192.168.1.128:9092"
    topics => ["filebeat"]
    group_id => "test-consumer-group"
    codec => "json"
    consumer_threads => 1
    decorate_events => true
  }
}
#Output to Elasticsearch, creating a separate daily index per log topic
output {
  stdout { codec => rubydebug }
  if [fields][log_topic] == "sys_log" {
    elasticsearch {
      hosts => ["192.168.1.128:9200"]
      index => "sys_log-%{+YYYY.MM.dd}"
    }
  }
  if [fields][log_topic] == "docker_log" {
    elasticsearch {
      hosts => ["192.168.1.128:9200"]
      index => "docker_log-%{+YYYY.MM.dd}"
    }
  }
  if [fields][log_topic] == "nginx_log" {
    elasticsearch {
      hosts => ["192.168.1.128:9200"]
      index => "nginx_log-%{+YYYY.MM.dd}"
    }
  }
}
Create logstash.yml under /opt/elk/, replacing the IP with your own:
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://192.168.1.128:9200" ]
docker run -d --name=logstash \
-v /opt/elk/logstash.yml:/usr/share/logstash/config/logstash.yml \
-v /opt/elk/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
--log-opt max-size=100m \
logstash:7.6.1
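Because the pipeline also writes every event to stdout with the rubydebug codec, the container's own log is the easiest place to confirm events are flowing once Filebeat starts shipping data:

```shell
# Follow Logstash output; decoded events appear here as they arrive from Kafka
docker logs -f logstash
```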
# Log input configuration
filebeat.inputs:
#- type: log
#  enabled: true
#  paths: # log file locations to monitor
#    - /var/lib/docker/containers/*.log
#  fields: # custom field used to route logs in Logstash
#    log_topic: docker_log
#- type: log
#  enabled: true
#  paths:
#    - /var/log/*/*.log
#  fields:
#    log_topic: sys_log
- type: log
  enabled: true
  paths:
    - /opt/nginx/logs/*.log
  fields:
    log_topic: nginx_log
# Output configuration: ship logs straight to Logstash (5044 is the Logstash Beats port)
#output.logstash:
#  hosts: ['192.168.30.23:5044']
# Output configuration: buffer log data through Kafka
output.kafka:
  enabled: true
  hosts: ["192.168.1.128:9092"]
  topic: 'filebeat'
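Before starting the service, Filebeat can validate the configuration file and probe the configured Kafka output (both subcommands exist in Filebeat 7.x):

```shell
# Check /etc/filebeat/filebeat.yml syntax, then test connectivity to the output
filebeat test config
filebeat test output
```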
systemctl start filebeat
If no log data shows up after starting, run systemctl status filebeat -l to inspect recent Filebeat output.
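To confirm that Filebeat is actually publishing, you can tail the topic with the console consumer shipped inside the Kafka container:

```shell
# Print a few of the JSON events Filebeat has written to the 'filebeat' topic
docker exec kafka kafka-console-consumer.sh \
  --bootstrap-server 192.168.1.128:9092 \
  --topic filebeat --from-beginning --max-messages 5
```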
docker run -p 8888:80 --name nginx \
-v /opt/nginx/logs:/var/log/nginx \
-d nginx:latest
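With everything running, a quick end-to-end smoke test: hit nginx once to generate an access-log line, then look for the daily nginx_log index in Elasticsearch (allow a few seconds for the pipeline to flush):

```shell
# Generate a request, then check that a nginx_log-* index was created
curl -s http://192.168.1.128:8888/ > /dev/null
sleep 10
curl 'http://192.168.1.128:9200/_cat/indices/nginx_log-*?v'
```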