Because behavior can differ between versions, let me first pin down the versions and the deployment environment, so that you can follow the steps below one by one without surprises.
elasticsearch 5.5.1
logstash 5.6.1
kibana 5.5.1
All three services are deployed on a single server here. If your traffic is heavy, consider splitting Elasticsearch out into its own cluster.
For brevity, Elasticsearch is abbreviated as es below.
es + kibana run in Docker for ease of deployment.
After installing Docker on Linux, run:
docker pull docker.elastic.co/elasticsearch/elasticsearch:5.5.1
docker pull docker.elastic.co/kibana/kibana:5.5.1
logstash is installed from the official release tarball:
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.6.1.tar.gz
tar -zxvf logstash-5.6.1.tar.gz -C /usr/share
cd /usr/share/logstash-5.6.1
./bin/logstash -e 'input { stdin {} } output { stdout {} }'
Type a line and press Enter; if the event is echoed back, the installation works.
Now try starting es and kibana.
Start elasticsearch:
docker run -p 9200:9200 -e "http.host=0.0.0.0" -e "transport.host=127.0.0.1" --name my-elastic -d docker.elastic.co/elasticsearch/elasticsearch:5.5.1
Start kibana:
docker run -e "ELASTICSEARCH_URL=http://localhost:9200" --name my-kibana --network host -d docker.elastic.co/kibana/kibana:5.5.1
(--network host makes the container share the host's network stack, so a -p mapping would be ignored: kibana listens on the host's port 5601 directly, and localhost:9200 reaches es through the port published in the previous step.)
If everything went smoothly you should now be able to open Kibana at http://ip:5601, where ip is the address of the server you deployed to.
Of course there is no data yet. Next, deploy filebeat and metricbeat on the application server to collect data.
1. Download the two archives on the application server:
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.6.1-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-5.6.1-linux-x86_64.tar.gz
2. Unpack them. Extract each archive separately; tar -zxvf *.tar.gz only works when the glob matches a single file:
for f in ./*.tar.gz; do tar -zxvf "$f"; done
3. Configure filebeat, taking the nginx access log as the example:
cd filebeat-5.6.1-linux-x86_64
vim nginx.yml
Paste the following into the file:
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/nginx/access.log
  # document_type becomes the event's "type" field, used below in Logstash
  # to group and filter the logs
  document_type: nginx_access
  fields:
    level: debug
  # watch the file from its end and ship each newly appended line as one event
  tail_files: true
# tags ride along with every event; project name and server ip are free-form
tags: ["myserver", "101.110.56.78"]
output.logstash:
  # ip address of the logstash server
  hosts: ["11.142.42.77:5044"]
tags is what Logstash can use for conditional handling when it receives the events (the shipper section from 1.x Beats is gone in 5.x; tags now sits at the top level).
output.logstash points at the Logstash server's ip address and port.
Start filebeat:
./filebeat -e -c ./nginx.yml -d "publish"
Configure logstash
Go into the logstash install directory:
vim logstash.yml
(The file name is arbitrary here; this is a pipeline config, not Logstash's own logstash.yml settings file.) Fill it with:
input {
    beats {
        port => 5044
    }
}
filter {
    if [type] == "nginx_access" {
        ruby {
            init => "@kname = ['remote_addr','remote_user','time_local','request','status','body_bytes_sent','http_referer','http_user_agent','http_x_forwarded_for']"
            code => "event.append(LogStash::Event.new(Hash[@kname.zip(event.get('message').split(' | '))]))"
        }
        if [request] {
            ruby {
                init => "@kname = ['method','uri','verb']"
                code => "event.append(LogStash::Event.new(Hash[@kname.zip(event.get('request').split(' '))]))"
            }
            if [uri] {
                ruby {
                    init => "@kname = ['url_path','url_args']"
                    code => "event.append(LogStash::Event.new(Hash[@kname.zip(event.get('uri').split('?'))]))"
                }
                kv {
                    prefix => "url_"
                    source => "url_args"
                    field_split => "& "
                    remove_field => [ "url_args","uri","request" ]
                }
            }
        }
        mutate {
            convert => [ "body_bytes_sent" , "integer" ]
        }
        date {
            # nginx time_local is 24-hour, so HH, not hh
            match => [ "time_local", "dd/MMM/yyyy:HH:mm:ss Z" ]
            locale => "en"
        }
    }
}
output {
    if [type] == "nginx_access" {
        stdout {
            codec => rubydebug
        }
        elasticsearch {
            hosts => ["localhost:9200"]
            index => "nginx_access_%{+YYYY.MM.dd}"
            user => "elastic"
            password => "changeme"
        }
        if [status] != "200" {
            exec {
                # array elements are referenced as %{[tags][0]}, not %{tags[0]}
                command => "sh /root/sh/alarm.sh '%{[tags][0]} hit an error, please handle it promptly. server ip: %{[tags][1]} request: %{request} status: %{status} time: %{time_local}'"
            }
        }
    }
    else {
        stdout {
            codec => rubydebug
        }
        elasticsearch {
            hosts => ["localhost:9200"]
            index => "metricbeat-%{+YYYY.MM.dd}"
            user => "elastic"
            password => "changeme"
        }
    }
}
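The %{+YYYY.MM.dd} in the index names above is Logstash's date sprintf: it stamps each event into an index named after the event's date (rendered in UTC), giving you one index per day. A quick sketch of today's resulting name in shell:

```shell
# Render nginx_access_%{+YYYY.MM.dd} the way Logstash would
# (Logstash formats the date in UTC, hence date -u).
idx="nginx_access_$(date -u +%Y.%m.%d)"
echo "$idx"
```

Daily indices make retention easy: old days can be dropped by deleting whole indices instead of purging documents.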
Things to note:
1. nginx.conf needs a matching log format so that Logstash can parse the lines:
log_format main "$remote_addr | $remote_user | $time_local | $request | $status | $body_bytes_sent | $http_referer | $http_user_agent | $http_x_forwarded_for";
access_log /var/log/nginx/access.log main;
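The " | " delimiter in this log_format is exactly what the ruby filter's split(' | ') keys on. A quick local check with a made-up sample line (all field values are hypothetical) confirms that field 5 is the status code:

```shell
# A hypothetical access-log line in the pipe-delimited "main" format above
line='203.0.113.7 | - | 07/Oct/2017:12:30:01 +0800 | GET /index.html?a=1 HTTP/1.1 | 200 | 612 | - | curl/7.29.0 | -'
# Split on " | " just like the Logstash ruby filter does
status=$(printf '%s\n' "$line" | awk -F' \\| ' '{print $5}')
echo "status=$status"   # prints status=200
```

The pipe delimiter is chosen because it does not occur in typical request lines or user agents, so a plain split recovers the fields without a full grok pattern.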
2. The exec output is a separate plugin; install it before starting Logstash:
./bin/logstash-plugin install logstash-output-exec
3. alarm.sh is your own notification script.
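The body of alarm.sh is not included in this write-up. As a hypothetical placeholder, a minimal version that just records the alert message to a local log file (swap the printf target for a real notifier such as mail or a webhook) could look like:

```shell
#!/bin/sh
# Hypothetical alarm.sh: append the alert message passed in by the
# Logstash exec output to a local log file, prefixed with a timestamp.
msg="$*"
printf '%s ALARM: %s\n' "$(date '+%F %T')" "$msg" >> /tmp/alarm.log
```

The Logstash exec output passes the whole rendered message as the script's arguments, which "$*" collects into one string.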
Start logstash:
./bin/logstash -f ./logstash.yml