Generate logs -> Collect logs -> Store logs -> Display logs -> View logs
#Environment
elk-node1:192.168.100.10 #master machine
elk-node2:192.168.100.20 #slave machine
A two-node ELK master/slave deployment.
#Hostname resolution
[root@elk-node1 ~]# vim /etc/hosts
192.168.100.10 elk-node1
192.168.100.20 elk-node2
[root@elk-node1 ~]# scp /etc/hosts 192.168.100.20:/etc/
#Base environment setup: perform on both elk-node1 and elk-node2
#Download and install the GPG key
[root@elk-node1 ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
#Add the YUM repository
[root@elk-node1 ~]# vim /etc/yum.repos.d/elk.repo
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
#gpgcheck=1 verifies package signatures against the key at gpgkey; enabled=1 activates the repository
#Install elasticsearch
[root@elk-node1 ~]# yum -y install elasticsearch redis nginx java #redis as the data buffer, nginx as the frontend, java as the runtime
#Verify the java environment
[root@elk-node1 ~]# java -version
openjdk version "1.8.0_292"
OpenJDK Runtime Environment (build 1.8.0_292-b10)
OpenJDK 64-Bit Server VM (build 25.292-b10, mixed mode)
elk-node1 192.168.100.10
#elk-node1
#Create a custom data storage directory
[root@elk-node1 ~]# mkdir -p /data/es-data
[root@elk-node1 ~]# chown -R elasticsearch.elasticsearch /data/
#Modify the configuration file
[root@elk-node1 ~]# vim /etc/elasticsearch/elasticsearch.yml
...
cluster.name: huanqiu #cluster name
...
node.name: elk-node1 #node name; should match the hostname
...
path.data: /data/es-data #data directory
...
path.logs: /var/log/elasticsearch/ #log directory
...
bootstrap.mlockall: true #lock memory so it is not swapped out
...
network.host: 0.0.0.0 #listen on all addresses
...
http.port: 9200 #HTTP port
...
# discovery.zen.ping.unicast.hosts: ["host1", "host2"]
discovery.zen.ping.multicast.enabled: false #disable multicast discovery
discovery.zen.ping.unicast.hosts: ["192.168.100.10","192.168.100.20"] #unicast discovery; the cluster consists of .10 and .20
[root@elk-node1 ~]# systemctl enable elasticsearch --now
[root@elk-node1 ~]# netstat -antp | egrep "9200|9300"
Browser test
http://192.168.100.10:9200/
[root@elk-node1 ~]# curl -i -XGET 'http://192.168.100.10:9200/_count?pretty' -d '{"query":{"match_all":{}}}'
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 95
{
  "count" : 0,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  }
}
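Note that the query body must be valid JSON: a bare `match_all` key without quotes is rejected by Elasticsearch. A quick way to sanity-check a body locally before sending it, assuming `python3` is available:

```shell
# Pretty-print the query body; python3 -m json.tool exits non-zero on invalid JSON.
echo '{"query":{"match_all":{}}}' | python3 -m json.tool
```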
elk-node2 192.168.100.20
[root@elk-node2 ~]# mkdir -p /data/es-data
[root@elk-node2 ~]# chown -R elasticsearch.elasticsearch /data/
[root@elk-node2 ~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: huanqiu #cluster name (must be identical on every node in the cluster)
node.name: elk-node2 #node name; matches the hostname
path.data: /data/es-data #data directory
path.logs: /var/log/elasticsearch/ #log directory
bootstrap.mlockall: true #lock memory
network.host: 0.0.0.0 #listen on all addresses
http.port: 9200 #HTTP port
discovery.zen.ping.multicast.enabled: false #disable multicast discovery
discovery.zen.ping.unicast.hosts: ["192.168.100.10","192.168.100.20"] #unicast discovery; the cluster consists of .10 and .20
[root@elk-node2 ~]# systemctl enable elasticsearch --now
[root@elk-node2 ~]# netstat -antp | egrep "9200|9300"
Browser test
http://192.168.100.20:9200/
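Beyond loading port 9200 in a browser, the standard `_cluster/health` endpoint (`curl -s 'http://192.168.100.10:9200/_cluster/health?pretty'`) confirms whether both nodes actually joined the cluster. A sketch of the check, run here against an illustrative sample response rather than a live cluster:

```shell
# Pull the two fields that matter from a cluster-health response.
# Against the real cluster, replace the here-doc with:
#   curl -s 'http://192.168.100.10:9200/_cluster/health'
cat <<'EOF' | python3 -c 'import json,sys; h=json.load(sys.stdin); print(h["status"], h["number_of_nodes"])'
{"cluster_name":"huanqiu","status":"green","number_of_nodes":2}
EOF
```

A healthy two-node cluster prints `green 2`; `yellow` usually means replica shards are unassigned because the second node has not joined yet.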
elk-node1 192.168.100.10
#Install on both elk-node1 and elk-node2
#head: the most practical web UI for viewing elasticsearch cluster state
[root@elk-node1 ~]# /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
[root@elk-node1 ~]# chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/plugins/
[root@elk-node1 ~]# systemctl restart elasticsearch
#kopf: manage and monitor the elasticsearch cluster through a web UI
[root@elk-node1 ~]# /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf
[root@elk-node1 ~]# systemctl restart elasticsearch
elk-node2 192.168.100.20
[root@elk-node2 ~]# /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
[root@elk-node2 ~]# chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/plugins/
[root@elk-node2 ~]# /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf
[root@elk-node2 ~]# systemctl restart elasticsearch
Test both nodes the same way
http://192.168.100.20:9200/_plugin/head/
http://192.168.100.20:9200/_plugin/kopf/
Collecting logs: Logstash is deployed on the application servers
elk-node1 192.168.100.10
#Here elk-node1 and elk-node2 double as the application servers; a separate deployment differs only in the address events are shipped to
#Download and install the GPG key
[root@elk-node1 ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
#Add the YUM repository
[root@elk-node1 ~]# vim /etc/yum.repos.d/logstash.repo
[logstash-2.1]
name=Logstash repository for 2.1.x packages
baseurl=http://packages.elastic.co/logstash/2.1/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
#Install logstash
[root@elk-node1 ~]# yum -y install logstash
elk-node2 192.168.100.20
[root@elk-node2 ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
[root@elk-node2 ~]# vim /etc/yum.repos.d/logstash.repo
[logstash-2.1]
name=Logstash repository for 2.1.x packages
baseurl=http://packages.elastic.co/logstash/2.1/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
[root@elk-node2 ~]# yum -y install logstash
Command-line one-liners
#Basic input and output
[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input {stdin{}} output {stdout {}}'
# -e runs the config given on the command line; input{}/output{} are the input and output sections; stdin{}/stdout{} read from and write to the terminal
#Use the rubydebug codec for detailed, structured output
[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input {stdin{}} output {stdout {codec => rubydebug}}'
#Write the input into elasticsearch
[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input {stdin{}} output { elasticsearch { hosts => ["192.168.100.10:9200"] } }'
Settings: Default filter workers: 1
Logstash startup completed
123 #type 123 as a test
http://192.168.100.20:9200/_plugin/head/
Events collected by logstash are handed to elasticsearch for storage.
#Write to elasticsearch and print a copy to the screen at the same time
[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input {stdin{}} output { stdout { codec => rubydebug {} } elasticsearch { hosts => ["192.168.100.10:9200"] } }'
A simple collection setup using a config file
[root@elk-node1 ~]# vim /etc/logstash/conf.d/01-logstash.conf #write the config file
input {
  stdin {
  }
}
output {
  elasticsearch {
    hosts => ["192.168.100.10:9200"]  #send events to elasticsearch
  }
  stdout {
    codec => rubydebug                #and echo them to the screen
  }
}
#Run
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/01-logstash.conf
hahahaha
{
       "message" => "hahahaha",
      "@version" => "1",
    "@timestamp" => "2021-06-14T09:23:44.864Z",
          "host" => "elk-node1"
}
http://192.168.100.20:9200/_plugin/head/
Collecting system logs
[root@elk-node1 ~]# vim /etc/logstash/conf.d/systemlog.conf
input {
  file {
    path => "/var/log/messages"
    type => "system"
    start_position => "beginning"  #where to start reading: from the first line of the file
  }
}
output {
  elasticsearch {
    hosts => ["192.168.100.10:9200"]
    index => "system-%{+YYYY.MM.dd}" #index name, set explicitly, with a daily date suffix
  }
}
#Run
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/systemlog.conf
http://192.168.100.20:9200/_plugin/head/
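The `%{+YYYY.MM.dd}` part of the index setting is a date pattern that Logstash expands per event (in UTC), so each day's logs land in their own index. Today's index name can be previewed with:

```shell
# Preview the daily index name produced by index => "system-%{+YYYY.MM.dd}"
# (Logstash evaluates the date pattern in UTC, hence date -u).
echo "system-$(date -u +%Y.%m.%d)"
```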
Module summary
Kibana provides a friendly web interface
#Install kibana; it can also live on a dedicated server
[root@elk-node1 ~]# cd /usr/local/src/ #the usual directory for source installs
[root@elk-node1 src]# wget https://download.elastic.co/kibana/kibana/kibana-4.3.1-linux-x64.tar.gz
[root@elk-node1 src]# tar -xvzf kibana-4.3.1-linux-x64.tar.gz #extract in place
[root@elk-node1 src]# mv kibana-4.3.1-linux-x64 /usr/local/ #move to /usr/local
[root@elk-node1 src]# ln -s /usr/local/kibana-4.3.1-linux-x64/ /usr/local/kibana
#Edit the configuration file
[root@elk-node1 ~]# cd /usr/local/kibana/config/
[root@elk-node1 config]# cp kibana.yml kibana.yml.bak #back up the config file
[root@elk-node1 config]# vim kibana.yml
...
server.port: 5601 #listening port
...
server.host: "0.0.0.0" #listen on all addresses
...
elasticsearch.url: "http://localhost:9200" #address of the elasticsearch instance kibana queries
...
kibana.index: ".kibana" #index in which kibana stores its own settings
...
#Kibana runs in the foreground, so either open a second shell or run it inside screen
[root@elk-node1 config]# yum -y install screen
[root@elk-node1 config]# screen
[root@elk-node1 config]# /usr/local/kibana/bin/kibana #stays in the foreground inside the screen session
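As an alternative to screen, a minimal systemd unit keeps Kibana running in the background and across reboots. This is a sketch based on the install path used above (`/usr/local/kibana`), not part of the original setup:

```ini
# /etc/systemd/system/kibana.service -- hypothetical unit, adjust paths as needed
[Unit]
Description=Kibana
After=network.target

[Service]
ExecStart=/usr/local/kibana/bin/kibana
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After writing the unit, `systemctl daemon-reload && systemctl enable kibana --now` replaces the screen session.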
http://192.168.100.10:5601/
Enter the index pattern name > Create
Click Discover at the top to view the entries
To see the log text, click "Discover" -> "message", then click "add" next to it
A collection of commonly used Elasticsearch plugins