Introduction: the ELK stack refers to the combination of ElasticSearch, Logstash and Kibana. Together these three tools form a log analysis and monitoring suite.
Logstash: collects log data on the client side.
ElasticSearch (es): organizes and stores the data collected by Logstash and builds indexes; the log entries are kept in those indexes. With multiple nodes, es can run as a cluster; an election is held and one node becomes the master.
Kibana: a web interface that makes log analysis easier.
This article deploys the ELK stack on a single CentOS machine.
The specific versions used are:
Operating system: CentOS 6.4;
JDK: 1.8.0;
Logstash: 1.4.2;
ElasticSearch: 1.6.0;
Kibana: 4.1.2;
So that the HTTP services can be accessed normally, turn off the firewall:
# service iptables stop
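To keep the firewall disabled after a reboot as well (optional, specific to CentOS 6's SysV service tools):
# chkconfig iptables off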
I. Install ElasticSearch (abbreviated es):
1. ElasticSearch and Logstash depend on the JDK, so install the JDK first:
# yum -y install java-1.8.0-openjdk*
# java -version
2. By default ElasticSearch serves HTTP on port 9200 and uses TCP port 9300 for communication between nodes (see the note after the install commands below if these need changing).
Download ElasticSearch:
# mkdir -p /soft
# cd /soft
# sudo wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.6.0.tar.gz
# sudo tar zxvf elasticsearch-1.6.0.tar.gz -C /usr/local/
# ln -s /usr/local/elasticsearch-1.6.0 /usr/local/elasticsearch
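If the default ports conflict with other services on the machine, they can be changed in the es configuration file; a minimal sketch using the standard settings (the defaults normally need no change):
# vi /usr/local/elasticsearch/config/elasticsearch.yml
http.port: 9200
transport.tcp.port: 9300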
3. Start the ElasticSearch service:
# /usr/local/elasticsearch/bin/elasticsearch -d
# netstat -nptul    # check ports 9200 and 9300 (it takes a moment after startup before they appear)
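If the ports never show up, running ElasticSearch in the foreground (simply omit -d) prints the startup log directly to the terminal, which makes errors easy to spot:
# /usr/local/elasticsearch/bin/elasticsearch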
4. Test whether the ElasticSearch service is working; an HTTP 200 status is expected:
# curl -X GET http://10.0.0.51:9200
{
"status" : 200,
"name" : "Immortus",
"cluster_name" : "elasticsearch",
"version" : {
"number" : "1.6.0",
"build_hash" : "cdd3ac4dde4f69524ec0a14de3828cb95bbb86d0",
"build_timestamp" : "2015-06-09T13:36:34Z",
"build_snapshot" : false,
"lucene_version" : "4.10.4"
},
"tagline" : "You Know, for Search"
5. Install es plugins
The elasticsearch-kopf plugin can be used to browse and query the data in Elasticsearch.
The elasticsearch-marvel plugin provides cluster monitoring.
The elasticsearch-head plugin can also browse and query the data in Elasticsearch, much like kopf.
# /usr/local/elasticsearch/bin/plugin -i lmenezes/elasticsearch-kopf
# /usr/local/elasticsearch/bin/plugin -i elasticsearch/marvel/latest
# /usr/local/elasticsearch/bin/plugin -i mobz/elasticsearch-head
6. Access the plugins:
http://10.0.0.51:9200/_plugin/marvel
http://10.0.0.51:9200/_plugin/head/
http://10.0.0.51:9200/_plugin/kopf
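To confirm the plugins were installed, the plugin script can list them (a quick check, assuming the --list option of the bundled plugin script):
# /usr/local/elasticsearch/bin/plugin --list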
II. Install Logstash
1. Download and extract Logstash:
# wget https://download.elasticsearch.org/logstash/logstash/logstash-1.4.2.tar.gz
# tar zxvf logstash-1.4.2.tar.gz -C /usr/local/
# ln -s /usr/local/logstash-1.4.2 /usr/local/logstash
2. Do a quick test of the Logstash service; the input should be echoed back as a simple log line:
# /usr/local/logstash/bin/logstash -e 'input { stdin { } } output { stdout {} }'
this is a test    # type the test input (wait a moment)
2016-05-23T23:31:14.525+0000 0.0.0.0 this is a test    # the test input is echoed back as a log line
3. Create a Logstash configuration file and test the service again; this time the input should be printed as a structured log event:
# mkdir -p /usr/local/logstash/etc
# vim /usr/local/logstash/etc/hello_search.conf
input {
  stdin {
    type => "human"
  }
  file {
    type => "messages"
    path => ["/var/log/messages"]
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    host => "10.0.0.51"   # IP of the es server
    port => 9300
  }
}
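Before starting, the configuration can optionally be syntax-checked first (a sketch, assuming this Logstash release supports the --configtest flag):
# /usr/local/logstash/bin/logstash -f /usr/local/logstash/etc/hello_search.conf --configtest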
Start Logstash:
# /usr/local/logstash/bin/logstash -f /usr/local/logstash/etc/hello_search.conf
test    # type the input (wait a moment)
{
"message" => "test", #输出内容
"@version" => "1",
"@timestamp" => "2016-05-23T23:41:17.098Z",
"type" => "human",
"host" => "0.0.0.0"
III. Install Kibana
1. Download and extract Kibana:
# wget https://download.elastic.co/kibana/kibana/kibana-4.1.2-linux-x64.tar.gz
# tar zxvf kibana-4.1.2-linux-x64.tar.gz -C /usr/local/
# ln -s /usr/local/kibana-4.1.2-linux-x64 /usr/local/kibana
2. Edit the Kibana configuration file:
# vi /usr/local/kibana/config/kibana.yml
elasticsearch_url: "http://10.0.0.51:9200"    # fill in the IP used to reach the es server on this machine
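The same file also controls the listening port and bind address if those need adjusting (shown here in the flat key format used by this Kibana 4.1 config file; verify against the shipped kibana.yml):
port: 5601
host: "0.0.0.0"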
3. Start Kibana:
# nohup /usr/local/kibana/bin/kibana &
# netstat -npult | grep 5601    # check port 5601
4. Access Kibana: http://10.0.0.51:5601
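If the page cannot be reached from a browser, a quick check from the server itself helps narrow the problem down (a minimal sketch):
# curl -I http://10.0.0.51:5601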