Preparation
To avoid errors during installation, it is best to use the same version for every component; this article uses 6.2.2 throughout.
First, a quick word about ELK: ELK refers to the combination of Elasticsearch, Logstash, and Kibana.
Download
Download address:
https://www.elastic.co/downloads
Pick the version that suits your environment, then download it.
Elasticsearch
1. Extract
I downloaded the archive and uploaded it to /opt/elk (create the directory if it does not exist).
Extract it into the current directory:
[root@hadoop01 elasticsearch-6.2.2]# cd /opt/elk/
[root@hadoop01 elk]# tar -xvf elasticsearch-6.2.2.tar.gz
2. Modify the configuration file
[root@hadoop01 elasticsearch-6.2.2]# vim ./config/elasticsearch.yml
Modify or add the following settings in the file, adjusting them to your actual needs.
# Cluster name; change it to match your deployment. With discovery enabled, ES uses this name to find the other cluster members.
cluster.name: skynet_es_cluster
node.name: skynet_es_cluster_dev1
# Data directory
path.data: /data/elk/data
# Log directory
path.logs: /data/elk/logs
# Change the listen address so that other machines can also reach ES
network.host: 0.0.0.0
# Default HTTP port
http.port: 9200
discovery.zen.ping.unicast.hosts: ["172.18.5.111", "172.18.5.112"]
# discovery.zen.minimum_master_nodes: 3
# Enable CORS so that _site-style plugins can access ES
http.cors.enabled: true
http.cors.allow-origin: "*"
# CentOS 6 does not support SecComp, while ES (since 5.2.0) defaults bootstrap.system_call_filter to true and runs the check; the check fails and ES refuses to start, so disable it here.
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
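After editing, it helps to double-check which settings are actually active (i.e. not commented out). A minimal sketch; the temp file below stands in for the real config so the commands run anywhere, and on the actual host you would point CONF at /opt/elk/elasticsearch-6.2.2/config/elasticsearch.yml instead:

```shell
# List only the active (non-comment, non-blank) lines of the config.
# A sample file is written to a temp path so this snippet is self-contained.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
# cluster settings
cluster.name: skynet_es_cluster
node.name: skynet_es_cluster_dev1
http.port: 9200
EOF
grep -Ev '^[[:space:]]*(#|$)' "$CONF"
```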
3. Adjust system resource limits
This step ensures there are enough system resources to start ES.
Two ways to apply the settings are shown here: run the commands below directly, or edit each file by hand.
(1) Append directly with commands
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 131072" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 131072">> /etc/security/limits.conf
echo "vm.max_map_count = 655360" >>/etc/sysctl.conf
Set per-user resource limits:
vi /etc/security/limits.d/20-nproc.conf
Add an entry for the elasticsearch user:
elasticsearch soft nproc 65536
Note: the echo commands above append to the files, so do not run them more than once; you can also open the files in vi and edit them by hand. The limits.conf entries take effect on the next login, and the vm.max_map_count setting needs "sysctl -p" (or a reboot) to apply.
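Once you have logged back in, a quick read-only check confirms the values; the targets named in the comments are the ones set above, and an unconfigured machine will simply show its defaults:

```shell
# Read-only verification of the resource limits. Targets from the steps
# above: nofile 65536 (soft), nproc 65536 (soft), vm.max_map_count 655360.
ulimit -n                       # soft open-file limit for this shell
ulimit -u                       # soft max-user-processes limit
cat /proc/sys/vm/max_map_count  # kernel mmap-count limit
```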
4. Create a startup user and set permissions
Elasticsearch cannot be started as root, so create a dedicated user to run it:
groupadd elasticsearch    # create the elasticsearch group
useradd elasticsearch -g elasticsearch    # create the elasticsearch user in that group
mkdir -pv /data/elk/{data,logs}    # create the data and log directories
# Change the owner of the files
chown -R elasticsearch:elasticsearch /data/elk/
chown -R elasticsearch:elasticsearch /opt/elk/elasticsearch-6.2.2/
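A read-only way to confirm the ownership change took effect; the temp directory below is only so the snippet runs anywhere, and on the real host you would run stat against /data/elk and the install directory:

```shell
# Print owner:group for a directory. On the real host:
#   stat -c '%U:%G %n' /data/elk /opt/elk/elasticsearch-6.2.2
# Both should report elasticsearch:elasticsearch after the chown above.
d=$(mktemp -d)
stat -c '%U:%G %n' "$d"
```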
5. Start ES
Start the Elasticsearch service as the elasticsearch user.
Switch to the elasticsearch user:
[root@hadoop01 elasticsearch-6.2.2]# su elasticsearch
[elasticsearch@hadoop01 elasticsearch-6.2.2]$ ./bin/elasticsearch    # append -d to run in the background
6. Test
[root@hadoop01 elasticsearch-6.2.2]# curl http://10.25.0.165:9200/
{
  "name" : "skynet_es_cluster_dev1",
  "cluster_name" : "skynet_es_cluster",
  "cluster_uuid" : "pTYGn8IfQi6B7YKZ_sYBdQ",
  "version" : {
    "number" : "6.2.2",
    "build_hash" : "10b1edd",
    "build_date" : "2018-02-16T19:01:30.685723Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
[root@hadoop01 elasticsearch-6.2.2]#
If you get a response like this, the node is reachable and the installation works.
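In a script you may want to assert on the response instead of eyeballing it. A minimal sketch: the RESPONSE value is inlined from the sample output above so the snippet runs without a live cluster; on a real host you would replace it with RESPONSE=$(curl -s http://10.25.0.165:9200/):

```shell
# Check that the node reports the expected cluster name.
# RESPONSE is inlined sample data; on a live host use:
#   RESPONSE=$(curl -s http://10.25.0.165:9200/)
RESPONSE='{"name" : "skynet_es_cluster_dev1", "cluster_name" : "skynet_es_cluster"}'
if echo "$RESPONSE" | grep -q '"cluster_name" : "skynet_es_cluster"'; then
  echo "ES is up and reports the expected cluster"
else
  echo "unexpected response" >&2
fi
```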
Logstash
1. Extract
As before, the archive has been uploaded to the /opt/elk/ directory.
Extract it into the current directory:
[root@hadoop01 elasticsearch-6.2.2]# cd /opt/elk/
[root@hadoop01 elk]# tar -xvf logstash-6.2.2.tar.gz
2. Test
Test one: quick start with stdin and stdout as the input and output, and no filter.
[root@hadoop01 elk]# cd /opt/elk/logstash-6.2.2/
[root@hadoop01 logstash-6.2.2]# ./bin/logstash -e 'input { stdin {} } output { stdout {} }'
Sending Logstash's logs to /usr/local/logstash/logstash-5.4.1/logs which is now configured via log4j2.properties
[2017-06-17T13:37:13,449][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/local/logstash/logstash-5.4.1/data/queue"}
[2017-06-17T13:37:13,467][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"dcfdb85f-9728-46b2-91ca-78a0d6245fba", :path=>"/usr/local/logstash/logstash-5.4.1/data/uuid"}
[2017-06-17T13:37:13,579][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
[2017-06-17T13:37:13,612][INFO ][logstash.pipeline ] Pipeline main started
The stdin plugin is now waiting for input:
[2017-06-17T13:37:13,650][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
The console now waits for input; type any text:
hello world
Logstash's output appears below:
2017-06-17T05:37:29.401Z chenlei.master hello world
Test two: the same as test one, but with a codec added for formatted output.
[root@hadoop01 logstash-6.2.2]# ./bin/logstash -e 'input{stdin{}} output{stdout{codec=>rubydebug}}'
Sending Logstash's logs to /usr/local/logstash/logstash-5.4.1/logs which is now configured via log4j2.properties
[2017-06-17T14:01:50,325][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
[2017-06-17T14:01:50,356][INFO ][logstash.pipeline ] Pipeline main started
The stdin plugin is now waiting for input:
[2017-06-17T14:01:50,406][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
The console now waits for input; type any text:
hello world
Logstash's output appears below:
{
    "@timestamp" => 2017-06-17T06:02:19.189Z,
      "@version" => "1",
          "host" => "chenlei.master",
       "message" => "hello world"
}
3. Use Elasticsearch and Logstash together to collect log data
Make sure both Elasticsearch and Logstash start normally (start Elasticsearch first, then Logstash).
[root@hadoop01 logstash-6.2.2]# cat config/logstash-test.conf
input { stdin { } }
output {
    elasticsearch { hosts => "10.25.0.165:9200" }    # elasticsearch server address
    stdout { codec => rubydebug }
}
[root@hadoop01 logstash-6.2.2]#
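Once this stdin-to-Elasticsearch pipeline works, a filter block can be slotted between input and output. A sketch only; the grok pattern below is illustrative and not part of the original setup:

```
input { stdin { } }
filter {
  # extract the first word of each line into its own field (illustrative)
  grok { match => { "message" => "%{WORD:first_word}" } }
}
output {
  elasticsearch { hosts => "10.25.0.165:9200" }
  stdout { codec => rubydebug }
}
```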
Start the service with the following command:
[root@hadoop01 logstash-6.2.2]# ./bin/logstash -f ./config/logstash-test.conf
We can send a request with curl to check whether ES has received the data:
[root@hadoop01 logstash-6.2.2]# curl 'http://10.25.0.165:9200/_search?pretty'
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : 0.0,
    "hits" : [ ]
  }
}
[root@hadoop01 logstash-6.2.2]#
The empty result above only confirms the query path works; once you type lines into the running Logstash console, they should show up in the hits. At this point you have successfully used Elasticsearch and Logstash to collect log data.
Kibana
1. Extract
As before, the archive has been uploaded to the /opt/elk/ directory.
Extract it into the current directory:
[root@hadoop01 elasticsearch-6.2.2]# cd /opt/elk/
[root@hadoop01 elk]# tar -xvf kibana-6.2.2-linux-x86_64.tar.gz
2. Configure Kibana
Edit the kibana.yml configuration file:
[root@hadoop01 kibana-6.2.2-linux-x86_64]# vim ./config/kibana.yml
Modify the following parameters:
server.port: 5601    # default port 5601
server.host: "10.25.0.165"    # address the Kibana server binds to
elasticsearch.url: "http://10.25.0.165:9200"    # URL of the elasticsearch instance
kibana.index: ".kibana"
3. Start
Start command:
[root@hadoop01 kibana-6.2.2-linux-x86_64]# ./bin/kibana
4. Test
Visit http://10.25.0.165:5601
If the Kibana landing page loads, the installation succeeded.