Table of Contents
I. elasticsearch
1. Cluster deployment
2. cerebro deployment
3. elasticsearch-head plugin deployment
4. elasticsearch cluster node roles
II. logstash
1. Deployment
2. elasticsearch output plugin
3. file input plugin
4. file output plugin
5. syslog plugin
6. multiline filter plugin
7. grok filter
III. Kibana data visualization
1. Deployment
2. Custom visualizations
(1) Website traffic
(2) Top-visitor ranking
(3) Create a dashboard for large-screen display
IV. ES cluster monitoring
1. Enable xpack authentication
2. metricbeat monitoring
3. filebeat log collection
Elasticsearch is an open-source distributed search and analytics engine built on top of Apache Lucene, a full-text search engine library.
Elasticsearch is more than Lucene, and more than just a full-text search engine:
Core modules
Elasticsearch use cases:
Official site: https://www.elastic.co/cn/
Host   | IP            | Role
docker | 192.168.67.10 | cerebro/elasticsearch-head
elk1   | 192.168.67.31 | elasticsearch
elk2   | 192.168.67.32 | elasticsearch
elk3   | 192.168.67.33 | elasticsearch
elk4   | 192.168.67.34 | logstash
elk5   | 192.168.67.35 | kibana
Install the software
rpm -ivh elasticsearch-7.6.1-x86_64.rpm
Edit the configuration file /etc/elasticsearch/elasticsearch.yml
cluster.name: my-es
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["elk1", "elk2", "elk3"]
cluster.initial_master_nodes: ["elk1", "elk2", "elk3"]
System settings
vim /etc/security/limits.conf
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
elasticsearch - nofile 65535
elasticsearch - nproc 4096
vim /usr/lib/systemd/system/elasticsearch.service
[Service]
...
LimitMEMLOCK=infinity
systemctl daemon-reload
swapoff -a
vim /etc/fstab
#/dev/mapper/rhel-swap swap swap defaults 0 0
systemctl daemon-reload
systemctl enable --now elasticsearch
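Once all three nodes have been set up this way, the cluster state can be checked over the REST API (a quick sanity check, assuming the nodes answer on port 9200):
curl -s 'http://192.168.67.31:9200/_cluster/health?pretty'
# "status" should be green and "number_of_nodes" should read 3 once elk1-3 have joined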
Install dependencies
yum install -y nodejs-9.11.2-1nodesource.x86_64.rpm
tar xf phantomjs-2.1.1-linux-x86_64.tar.bz2
cd phantomjs-2.1.1-linux-x86_64/
cd bin/
mv phantomjs /usr/local/bin/
phantomjs
Install the plugin
rpm -ivh nodejs-9.11.2-1nodesource.x86_64.rpm
unzip elasticsearch-head-master.zip
cd elasticsearch-head-master/
npm install --registry=https://registry.npm.taobao.org
vim _site/app.js
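The change usually made in _site/app.js is the default connection address, which points at http://localhost:9200; a sketch of the edited line (the exact location varies between versions, and 192.168.67.31 is simply one of the cluster nodes):
// point elasticsearch-head at a node of the cluster instead of localhost
this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.67.31:9200";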
Note: node.master: true does not by itself make a node the master; the actual master is elected from among the master-eligible nodes.
Listing several data paths can lead to uneven data writes; it is recommended to specify a single data path and, if needed, back it with a RAID 0 array rather than expensive SSDs.
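A hypothetical multi-path configuration of the kind that warning refers to would look like this in elasticsearch.yml:
# hypothetical paths; each shard lives on a single path, so disk usage can become uneven
path.data: ["/data1", "/data2", "/data3"]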
vim /etc/elasticsearch/elasticsearch.yml
node.master: true
node.data: false
node.ingest: true
node.ml: false
These settings can be combined as needed; at least one node in the cluster must keep node.ingest: true.
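As an example of another combination (a sketch only), a dedicated data node would invert the first two settings:
node.master: false   # not master-eligible
node.data: true      # stores and serves data
node.ingest: true
node.ml: false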
If elasticsearch fails to restart after the role change, the node still holds data that must be cleaned up or migrated to other nodes first.
Check the result:
The cluster can be viewed through the different plugins (cerebro / elasticsearch-head).
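The roles can also be read from the command line without any plugin (assuming the cluster answers on port 9200):
curl -s 'http://192.168.67.31:9200/_cat/nodes?v'
# the node.role column shows the role letters, e.g. m = master-eligible, d = data, i = ingest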
cd /etc/logstash/conf.d
vim test.conf
input {
  stdin { }
}

output {
  stdout {}
  elasticsearch {
    hosts => "192.168.67.31:9200"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
vim fileput.conf
input {
  file {
    path => "/var/log/messages"
    start_position => "beginning"
  }
}

output {
  stdout {}
  elasticsearch {
    hosts => "192.168.67.31:9200"
    index => "syslog-%{+YYYY.MM.dd}"
  }
}
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/fileput.conf
The .sincedb file records how far each file has been read, so data is not read twice.
cd /usr/share/logstash/data/plugins/inputs/file/
A sincedb file has six fields per record.
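Each tracked file gets one line; a sample entry (values made up) with the six fields labelled:
# inode  device-major  device-minor  byte-offset  last-active-timestamp  last-known-path
16810344 0 64 1102393 1618232123.4321 /var/log/messages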
Delete it and the file will be read again from the beginning.
vim file.conf
input {
  stdin { }
}

output {
  file {
    path => "/tmp/logstash.txt"
    codec => line { format => "custom format: %{message}"}
  }
}
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/file.conf
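After typing a few test lines on stdin, the result can be verified; with the format string above, every event becomes one line in the output file (sample only, the content depends on what was typed):
cat /tmp/logstash.txt
custom format: hello world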
vim syslog.conf
input {
  syslog {}
}

output {
  stdout {}
  elasticsearch {
    hosts => "192.168.67.31:9200"
    index => "rsyslog-%{+YYYY.MM.dd}"
  }
}
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/syslog.conf
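With no options, the syslog input listens on port 514 (TCP and UDP). To feed it, another host can forward its logs by adding a rule like the following to /etc/rsyslog.conf and restarting rsyslog (a sketch; 192.168.67.34 is the logstash node, elk4):
# @@ forwards over TCP; a single @ would use UDP
*.* @@192.168.67.34:514
systemctl restart rsyslog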
The multiline filter can merge several log lines into a single event.
cd /var/log/elasticsearch
scp my-es.log elk4:/var/log/
Run the following on elk4:
vim multiline.conf
input {
  file {
    path => "/var/log/my-es.log"
    start_position => "beginning"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }
  }
}

output {
  stdout {}
  elasticsearch {
    hosts => "192.168.67.31:9200"
    index => "myeslog-%{+YYYY.MM.dd}"
  }
}
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/multiline.conf
Install httpd (on the logstash node, elk4)
yum install -y httpd
systemctl enable --now httpd
echo www.westos.org > /var/www/html/index.html
Access the site to generate access-log entries:
ab -c1 -n 300 http://192.168.67.34/index.html
Write the pipeline configuration:
vim grok.conf
input {
  file {
    path => "/var/log/httpd/access_log"
    start_position => "beginning"
  }
}

filter {
  grok {
    match => { "message" => "%{HTTPD_COMBINEDLOG}" }
  }
}

output {
  stdout {}
  elasticsearch {
    hosts => "192.168.67.31:9200"
    index => "apachelog-%{+YYYY.MM.dd}"
  }
}
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/grok.conf
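With the HTTPD_COMBINEDLOG pattern, each access_log line is split into named fields; a trimmed example of one parsed event as printed on stdout (values depend on the actual requests):
{
    "clientip" => "192.168.67.10",
        "verb" => "GET",
     "request" => "/index.html",
    "response" => "200",
       "bytes" => "15"
}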
Start the service
systemctl enable --now kibana
netstat -antlp |grep :5601
Access the Kibana web UI at http://192.168.67.35:5601.
Create an index pattern.
Beforehand, run ab -c1 -n 500 http://192.168.67.34/index.html from each node to generate some traffic.
Save the visualization.
Add the two visualizations created above to the dashboard.