JDK installation (jdk1.8.0_191)
Add the following to .bashrc:
export JAVA_HOME=~/jdk1.8.0_191/
export PATH=$JAVA_HOME/bin:$PATH
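After editing .bashrc, reload it and verify the JDK is picked up (a quick sanity check; the path assumes the archive was unpacked into the home directory):
source ~/.bashrc
java -version    # expect: java version "1.8.0_191"
echo $JAVA_HOME  # should print the JDK directory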
filebeat
Installed on the application servers. It is lightweight and uses almost no resources; here it is used to collect nginx/apache logs.
Install it via yum.
Configuration (cat /etc/filebeat/filebeat.yml):
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - [nginx log path]
  fields:
    format: COMMONAPACHELOG
    index_name: [name used to identify the index in ES]
#================================ Outputs =====================================
output.kafka:
  enabled: true
  hosts: ['192.168.3.24:9092','192.168.3.218:9092','192.168.3.219:9092']
  max_retries: 5
  timeout: 300
  topic: "filebeat"
The section below enables monitoring:
xpack.monitoring:
  enabled: true
  elasticsearch:
    hosts: ["http://ip1:9200", "http://ip2:9200"]
Related commands:
rm /var/lib/filebeat/registry    # reset the registry so filebeat re-reads the logs from the beginning
/etc/init.d/filebeat restart
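filebeat can also validate the configuration and the Kafka output before a restart (standard filebeat 6.x subcommands):
filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml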
kafka: see other articles for installation details. Three machines are used here, each running both zookeeper and kafka.
kafka_2.11-1.0.0 is used because the CentOS version is relatively old.
Configuration files:
kafka_2.11-1.0.0/config/zookeeper.properties
dataDir=/home/elk/kafka_2.11-1.0.0/data
dataLogDir=/home/elk/kafka_2.11-1.0.0/logs
# the port at which the clients will connect
clientPort=2181
# raise the per-ip limit on the number of connections
maxClientCnxns=100
# tickTime is the base time unit in ms; initLimit and syncLimit are measured in ticks
tickTime=2000
initLimit=10
syncLimit=5
# server.<id>=<host>:<peer port>:<leader election port>; <id> must match the node's myid file
server.24=ip1:2888:3888
server.218=ip2:2888:3888
server.219=ip3:2888:3888
kafka_2.11-1.0.0/config/server.properties
# broker.id is required and must be unique within the cluster
broker.id=0
# listen on all interfaces so the broker is reachable from outside
listeners=PLAINTEXT://0.0.0.0:9092
# address advertised to clients; use this node's own IP
advertised.listeners=PLAINTEXT://ip1:9092
zookeeper.connect=ip24:2181,ip218:2181,ip219:2181
Common commands:
./bin/zookeeper-server-start.sh config/zookeeper.properties    # start zookeeper
./bin/kafka-server-start.sh config/server.properties           # start kafka
Under the data directory, create a file named myid whose content is this node's id (24, 218, or 219), matching the server.<id> entries above; see the sketch below.
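A minimal per-node sketch, plus a sanity check that filebeat's events are arriving (IPs and paths are the examples from above; run the echo on each node with its own id):
echo 24 > /home/elk/kafka_2.11-1.0.0/data/myid    # 218 / 219 on the other nodes
# create the topic filebeat writes to, then watch it for incoming events
./bin/kafka-topics.sh --create --zookeeper ip1:2181 --replication-factor 3 --partitions 3 --topic filebeat
./bin/kafka-console-consumer.sh --bootstrap-server 192.168.3.24:9092 --topic filebeat --from-beginning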
kibana installation:
Note that the version must match elasticsearch. Start it with:
nohup ./bin/kibana > nohup.out &
For usage instructions, search online.
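Out of the box kibana only listens on localhost and targets a local ES, so config/kibana.yml usually needs two settings (a minimal sketch using the 6.x setting names; the IP is one of the ES nodes listed below):
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.3.181:9200"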
logstash
logstash-6.4.2 (requires the JDK installed above)
Pipeline configuration:
input {
  kafka {
    bootstrap_servers => "192.168.3.24:9092,192.168.3.218:9092,192.168.3.219:9092"
    topics => ["filebeat"]
    group_id => "test-consumer-group"
    codec => "json"
    consumer_threads => 3
    decorate_events => true
  }
  #beats {
  #  port => "5044"
  #}
}
filter {
  if [fields][format] == "COMMONAPACHELOG" {
    grok {
      match => { "message" => "%{COMMONAPACHELOG}" }
    }
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
      target => "@timestamp"
    }
  } else if [fields][format] == "COMBINEDAPACHELOG" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
      target => "@timestamp"
    }
  } else if [fields][format] == "nginx" {
    grok {
      match => { "message" => "%{IPORHOST:clientip} %{USER:ident} %{NOTSPACE:auth} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes}|-) \"%{NOTSPACE:referrer}\" \"%{GREEDYDATA:agent}\" \"%{DATA:x_forword_for}\"" }
    }
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
      target => "@timestamp"
    }
  }
}
output {
  elasticsearch {
    #hosts => ["localhost:9200"]
    hosts => ["192.168.3.181:9200","192.168.3.182:9200","192.168.3.128:9200","192.168.3.124:9200","192.168.3.178:9200","192.168.3.246:9200","192.168.3.245:9200","192.168.3.150:9200"]
    index => "%{[fields][index_name]}-%{+YYYY.MM.dd}"
  }
  #stdout { codec => rubydebug }
}
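The pipeline file can be syntax-checked and then run in the foreground for a first test (standard logstash flags; the file name pipeline.conf is an assumption):
./bin/logstash -f pipeline.conf --config.test_and_exit    # validate the config only
./bin/logstash -f pipeline.conf                           # run in the foreground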
logstash.yml
node.name: 218-0
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: password
xpack.monitoring.collection.pipeline.details.enabled: false
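Once the config tests clean, logstash can be started in the background the same way as kibana (pipeline file name as assumed above):
nohup ./bin/logstash -f pipeline.conf > nohup.out &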
elasticsearch-6.4.2
elasticsearch.yml
cluster.name: my-application
node.name: node-124
bootstrap.memory_lock: false
bootstrap.system_call_filter: false   # required on older kernels (e.g. CentOS 6) that lack seccomp support
discovery.zen.ping.unicast.hosts: ["ip1", "ip2", …]
network.host: 0.0.0.0
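After the nodes join, cluster state and the indices created by logstash can be checked through the standard REST endpoints:
curl 'http://localhost:9200/_cluster/health?pretty'    # status should be green (or yellow on a fresh cluster)
curl 'http://localhost:9200/_cat/indices?v'            # the <index_name>-YYYY.MM.dd indices should appear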