Reference site: http://kibana.logstash.es/content/
I. Installing elasticsearch
1. First download the elasticsearch, kibana, logstash, and redis packages:
wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.3.tar.gz
wget https://download.elastic.co/kibana/kibana/kibana-4.1.8-linux-x64.tar.gz
wget http://download.redis.io/releases/redis-3.0.7.tar.gz
wget https://download.elastic.co/logstash/logstash/logstash-1.5.5.tar.gz
2. Install elasticsearch:
tar -zxvf elasticsearch-1.7.3.tar.gz
Edit the config file: vim config/elasticsearch.yml
cluster.name: elk-test
node.name: "elk-node1"
path.logs: /usr/local/elasticsearch/logs
Adjust the kernel parameter: vm.max_map_count=262144 (must be changed for production)
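The change above can be made permanent via sysctl; a minimal sketch, assuming standard Linux paths and root privileges:

```shell
# Apply immediately without a reboot (requires root)
sysctl -w vm.max_map_count=262144
# Persist the setting across reboots
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
```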
3. Start it:
/usr/local/elasticsearch/bin/elasticsearch &
curl 127.0.0.1:9200  check the es status information
4. ES service-wrapper plugin:
wget https://github.com/elastic/elasticsearch-servicewrapper/archive/master.zip
mv elasticsearch-servicewrapper-master/service /usr/local/elasticsearch/bin/  put it into the es bin directory
/usr/local/elasticsearch/bin/service/elasticsearch install  once installed, es can be started via init.d
II. Using elasticsearch
1. Install the marvel management plugin:
/usr/local/elasticsearch/bin/plugin -i elasticsearch/marvel/latest  install the plugin
http://172.16.1.210:9200/_plugin/marvel/  open the management plugin; marvel is a paid product, so click the free-trial option first
2. Insert data:
In the marvel UI, click Dashboards/sense in the top-right corner.
Index a document (this auto-creates the index) and note the auto-generated ID in the response:
POST index-demo/test
{
"user":"wmj",
"msg":"hello world!"
}
Fetch the document with GET:
GET index-demo/test/AVVx0dpGfWPOVuhqIoN7
GET index-demo/test/AVVx0dpGfWPOVuhqIoN7/_source
Run a full-text search:
GET index-demo/test/_search?q=hello
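The `?q=` parameter is shorthand for a query_string query; a hypothetical equivalent using the full request-body form (same result, just more explicit):

```
GET index-demo/test/_search
{
  "query": {
    "query_string": { "query": "hello" }
  }
}
```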
3. Install the ES cluster-management plugin:
/usr/local/elasticsearch/bin/plugin -i mobz/elasticsearch-head  install the head cluster-management plugin
http://172.16.1.210:9200/_plugin/head/  open the cluster-management plugin
4. Monitor cluster health via a URL:
curl -XGET 172.16.1.210:9200/_cluster/health?pretty
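For use in a monitoring script, the status field can be pulled out of the response with plain sed; a minimal sketch (the sample JSON below is illustrative, not captured from a live cluster):

```shell
# Abbreviated, made-up /_cluster/health response body
health='{"cluster_name":"elk-test","status":"green","number_of_nodes":2}'
# Extract the value of "status" (green / yellow / red)
status=$(printf '%s' "$health" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')
echo "$status"   # → green
```

In a real script, replace the hard-coded sample with `health=$(curl -s 172.16.1.210:9200/_cluster/health)`.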
5. If you are not using multicast discovery, change the following settings:
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["host1", "host2:port"]
6. Chinese-language online guide to ES:
http://es.xiaoleilu.com/
III. logstash:
1. Two installation methods:
wget https://download.elastic.co/logstash/logstash/logstash-1.5.5.tar.gz  unpack and install
https://www.elastic.co/guide/en/logstash/1.5/package-repositories.html  instructions for installing via yum
2. Start in stdin/stdout mode:
./bin/logstash -e 'input { stdin{} } output{ stdout{} }'  whatever you type is echoed to the screen
3. Start with records written to ES:
./bin/logstash -e 'input { stdin{} } output{ elasticsearch{ host =>"172.16.1.210" protocol =>"http"} }'  whatever you type is indexed into ES
4. Start logstash from a config file:
vim /etc/logstash.conf:
input {
  file {
    path => "/var/log/messages"
  }
}
output {
  file {
    path => "/tmp/%{+YYYY-MM-dd}-messages.gz"
    gzip => true
  }
  elasticsearch {
    host => "172.16.1.211"
    protocol => "http"
    index => "system-messages-%{+YYYY.MM.dd}"
  }
}
Input is read from the messages file; one copy is written under /tmp/ gzip-compressed, and one copy goes into ES.
./logstash -f /etc/logstash.conf  start logstash with this config file
https://www.elastic.co/guide/en/logstash/1.5/output-plugins.html  official documentation on writing the config file
5. Production ELK logstash configuration:
a. Write the logs into redis:
input {
  file {
    path => "/var/log/messages"
  }
}
output {
  redis {
    data_type => "list"       # write as a redis list
    key => "system-messages"  # name of the key
    host => "172.16.1.211"
    port => "6379"
    db => "1"                 # in production, write each log type to its own db
  }
}
PS: you can connect to redis and run select 1, keys *, and LLEN system-messages to check that data is being written.
b. On the redis server, use logstash to pull the data out of redis and store it in ES.
input {
  redis {
    data_type => "list"
    key => "system-messages"
    host => "172.16.1.211"
    port => "6379"
    db => "1"
  }
}
output {
  elasticsearch {
    host => "172.16.1.210"
    protocol => "http"
    index => "system-redis-messages-%{+YYYY.MM.dd}"
  }
}
6. In production, have nginx emit its logs as JSON and collect them with logstash.
a. Configure nginx.conf to output logs as JSON:
Inside the http block:
log_format logstash_json '{ "@timestamp": "$time_iso8601", '
'"host": "$server_addr", '
'"client": "$remote_addr", '
'"size": $body_bytes_sent, '
'"response_time": $request_time, '
'"domain": "$host", '
'"url": "$uri", '
'"referer": "$http_referer", '
'"agent": "$http_user_agent", '
'"status":"$status"}';
access_log /var/log/nginx/access_json.log logstash_json;
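Since $body_bytes_sent and $request_time are emitted unquoted, a malformed value would break the JSON, so it is worth checking that a produced line actually parses; a sketch with a made-up sample line (assumes python3 is on the PATH):

```shell
# One sample line in the logstash_json format above (all values made up)
line='{ "@timestamp": "2016-06-01T12:00:00+08:00", "host": "172.16.1.210", "client": "1.2.3.4", "size": 512, "response_time": 0.003, "domain": "example.com", "url": "/", "referer": "-", "agent": "curl/7.29.0", "status":"200"}'
# json.tool exits non-zero if the line is not valid JSON
printf '%s\n' "$line" | python3 -m json.tool >/dev/null && echo "valid JSON"
```

In practice, run the check against a real line, e.g. `tail -1 /var/log/nginx/access_json.log | python3 -m json.tool`.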
b. Generate test data with the ab command:
ab -n1000 -c10 http://172.16.1.210:81/
c. Configure logstash to collect the nginx data and write it into redis.
input {
  file {
    path => "/var/log/nginx/access_json.log"
    codec => "json"
  }
}
output {
  redis {
    data_type => "list"
    key => "nginx-access-log"
    host => "172.16.1.211"
    port => "6379"
    db => "2"
  }
}
d. Write the redis data into es via logstash.
input {
  redis {
    data_type => "list"
    key => "nginx-access-log"
    host => "172.16.1.211"
    port => "6379"
    db => "2"
  }
}
output {
  elasticsearch {
    host => "172.16.1.210"
    protocol => "http"
    index => "logstash-nginx-redis-messages-%{+YYYY.MM.dd}"
  }
}
ps: prefix the index name written to es with logstash-; otherwise the default logstash index template is not applied and the field types come out wrong.
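The %{+YYYY.MM.dd} suffix is a logstash sprintf date pattern, expanded per event from the event's timestamp; purely to illustrate the shape of the resulting daily index names, `date` produces the same form:

```shell
# Illustration only — logstash itself expands %{+YYYY.MM.dd} from each event
date +"logstash-nginx-redis-messages-%Y.%m.%d"
# e.g. logstash-nginx-redis-messages-2016.06.08
```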
e. Process the nginx logs with geoip to add geographic location information.
filter {
  if [type] == "gigold-nginx-access-log" {
    geoip {
      source => "clientip"
      database => "/etc/logstash/GeoLiteCity.dat"
      fields => ["city_name", "country_name", "real_region_name", "ip"]
    }
  }
  if [type] == "lehome-nginx-access-log" {
    geoip {
      source => "xff"
      database => "/etc/logstash/GeoLiteCity.dat"
      fields => ["city_name", "country_name", "real_region_name", "ip"]
    }
  }
  mutate {
    convert => ["status", "integer"]
  }
}
IV. KIBANA:
1. Install kibana and point it at es:
tar -zxvf kibana-4.1.8-linux-x64.tar.gz
vim config/kibana.yml:
elasticsearch_url: "http://172.16.1.210:9200"  this is the only setting that needs changing
2. Start and access kibana:
nohup ./bin/kibana &
http://172.16.1.210:5601  access URL
3. Initial setup:
Index name or pattern: [logstash-nginx-redis-messages-]YYYY.MM.DD
4. kibana search syntax:
status:200 OR status:404  find documents whose status is 200 or 404
status:[400 TO 499]  find documents whose status is between 400 and 499