I. Deployment environment
10.0.1.11 elasticsearch、kibana
10.0.1.12 elasticsearch、logstash
10.0.1.13 elasticsearch、logstash
10.0.1.14 redis、nginx、filebeat
II. Deploying the Elastic Stack
1. Deploy elasticsearch and kibana on 10.0.1.11
1.1 Download and install the deb packages
dpkg -i elasticsearch-7.11.1-amd64.deb
dpkg -i kibana-7.11.1-amd64.deb
1.2 Edit the elasticsearch configuration file
vim /etc/elasticsearch/elasticsearch.yml
cluster.name: magedu-elastic-cluster
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 10.0.1.11
discovery.seed_hosts: ["10.0.1.11", "10.0.1.12", "10.0.1.13"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
1.3 Start the elasticsearch service
systemctl start elasticsearch.service
systemctl enable elasticsearch.service
1.4 Elasticsearch log file
vim /var/log/elasticsearch/magedu-elastic-cluster.log
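Once the service is running, the node can also be checked over the REST API; a quick sanity check (assuming the default HTTP port 9200 and no authentication enabled):
curl http://10.0.1.11:9200                          # the node should answer with its name and version
curl http://10.0.1.11:9200/_cluster/health?pretty   # reports cluster status (green/yellow/red) and node count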
1.5 Edit the kibana configuration file
vim /etc/kibana/kibana.yml
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://10.0.1.13:9200"]
i18n.locale: "zh-CN"
1.6 Start kibana
systemctl start kibana
systemctl enable kibana.service
1.7 Kibana log file
vim /var/log/kibana/kibana.log
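If kibana comes up cleanly it listens on port 5601; a quick check (assuming the default port):
ss -tnlp | grep 5601             # the kibana process should be listening
curl -I http://10.0.1.11:5601    # should return an HTTP response once kibana is ready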
2. Deploy elasticsearch on 10.0.1.12
2.1 Download and install the deb package
dpkg -i elasticsearch-7.11.1-amd64.deb
2.2 Edit the elasticsearch configuration file
vim /etc/elasticsearch/elasticsearch.yml
cluster.name: magedu-elastic-cluster
node.name: node-2
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 10.0.1.12
discovery.seed_hosts: ["10.0.1.11", "10.0.1.12", "10.0.1.13"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
2.3 Start the elasticsearch service
systemctl start elasticsearch.service
systemctl enable elasticsearch.service
2.4 Elasticsearch log file
vim /var/log/elasticsearch/magedu-elastic-cluster.log
3. Deploy elasticsearch on 10.0.1.13
3.1 Download and install the deb package
dpkg -i elasticsearch-7.11.1-amd64.deb
3.2 Edit the elasticsearch configuration file
vim /etc/elasticsearch/elasticsearch.yml
cluster.name: magedu-elastic-cluster
node.name: node-3
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 10.0.1.13
discovery.seed_hosts: ["10.0.1.11", "10.0.1.12", "10.0.1.13"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
3.3 Start the elasticsearch service
systemctl start elasticsearch.service
systemctl enable elasticsearch.service
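With all three nodes started, every node name should now show up in the cluster; a quick check against any member (default port assumed):
curl http://10.0.1.11:9200/_cat/nodes?v    # node-1, node-2 and node-3 should all be listed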
4. Deploy logstash on 10.0.1.12 and 10.0.1.13
4.1 Install logstash
dpkg -i logstash-7.11.1-amd64.deb
4.2 Collect logs
- Option 1: logstash collects the logs directly and ships them to elasticsearch
# logstash configuration for collecting the logs directly
cat /etc/logstash/conf.d/magedu-log.conf
input {
  file {
    path => "/var/log/syslog"        # path of the log file to collect
    type => "systemlog"              # unique event type
    start_position => "beginning"    # where to start reading on the first run
    stat_interval => "3"             # interval (seconds) between file checks
  }
  file {
    path => "/var/log/auth.log"
    type => "securelog"
    start_position => "beginning"
    stat_interval => "3"
  }
}
output {
  if [type] == "systemlog" {
    elasticsearch {
      hosts => ["10.0.1.13:9200"]
      index => "system-log-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "securelog" {
    elasticsearch {
      hosts => ["10.0.1.13:9200"]
      index => "secure-log-%{+YYYY.MM.dd}"
    }
  }
}
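Before starting the service, the pipeline syntax can be validated with the logstash binary shipped by the deb package (default install path assumed):
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/magedu-log.conf --config.test_and_exit    # parses the config and exits without processing events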
- Option 2: filebeat collects the logs and forwards them to logstash, which writes them into redis; a second logstash instance then pulls the logs out of redis and outputs them to elasticsearch, and kibana finally visualizes them
Logstash configuration on 10.0.1.13, receiving the data from filebeat
vim /etc/logstash/conf.d/filebeat-logstash-redis.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
}
output {
  # stdout { codec => "rubydebug" }
  if [fields][type] == "nginx_access_log" {
    redis {
      host => ["10.0.1.14"]
      port => "6379"
      password => "123456"
      db => 1
      data_type => "list"
      key => "nginx_access_log"
      codec => "json"
    }
  }
  if [fields][type] == "nginx_error_log" {
    redis {
      host => ["10.0.1.14"]
      port => "6379"
      password => "123456"
      db => 1
      data_type => "list"
      key => "nginx_error_log"
    }
  }
}
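After this logstash instance and filebeat are both running, the two lists in redis db 1 should start growing; a quick check with redis-cli (password and db taken from the redis configuration in step 5.2):
redis-cli -h 10.0.1.14 -a 123456 -n 1 LLEN nginx_access_log    # number of queued access-log events
redis-cli -h 10.0.1.14 -a 123456 -n 1 LLEN nginx_error_log     # number of queued error-log events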
Logstash configuration on 10.0.1.12, reading from redis and writing to elasticsearch
cat /etc/logstash/conf.d/redis-logstash-elastic.conf
input {
  redis {
    host => ["10.0.1.14"]
    port => "6379"
    password => "123456"
    db => 1
    data_type => "list"
    key => "nginx_access_log"
    codec => "json"
  }
  redis {
    host => ["10.0.1.14"]
    port => "6379"
    password => "123456"
    db => 1
    data_type => "list"
    key => "nginx_error_log"
  }
}
output {
  # stdout { codec => "rubydebug" }
  if [fields][type] == "nginx_access_log" {
    elasticsearch {
      hosts => ["10.0.1.12:9200"]
      codec => "json"
      index => "nginx_access_log_1.14_%{+YYYY.MM.dd}"
    }
  }
  if [fields][type] == "nginx_error_log" {
    elasticsearch {
      hosts => ["10.0.1.12:9200"]
      index => "nginx_error_log_1.14_%{+YYYY.MM.dd}"
    }
  }
}
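Once this pipeline is running, the daily nginx indices should appear in the cluster; a quick check (default HTTP port assumed):
curl http://10.0.1.12:9200/_cat/indices?v | grep nginx    # one access-log and one error-log index per day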
4.3 Grant the logstash user read permission on syslog and auth.log
setfacl -m u:logstash:r /var/log/syslog
setfacl -m u:logstash:r /var/log/auth.log
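The ACLs can be confirmed with getfacl; each file should show a user:logstash:r-- entry:
getfacl /var/log/syslog /var/log/auth.log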
4.4 Start the logstash service
systemctl start logstash.service
systemctl enable logstash.service
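If a pipeline fails to start, the deb package writes the logstash log under /var/log/logstash by default; following it is the quickest way to spot configuration errors:
tail -f /var/log/logstash/logstash-plain.log    # watch for pipeline-started messages or config errors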
5. Configure the server at 10.0.1.14
5.1 Install redis, filebeat and nginx
apt install redis nginx
dpkg -i filebeat-7.11.1-amd64.deb
5.2 Edit the redis configuration file
# change the listen address, port and password
vim /etc/redis/redis.conf
bind 0.0.0.0
port 6379
requirepass 123456
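Once redis has been started (step 5.5), the new settings can be verified with redis-cli:
redis-cli -h 10.0.1.14 -p 6379 -a 123456 ping    # should answer PONG if the bind address, port and password are in effect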
5.3 Edit the nginx configuration and switch the access log to JSON format
##
# Logging Settings
##
log_format access_json '{"@timestamp":"$time_iso8601",'
                       '"host":"$server_addr",'
                       '"clientip":"$remote_addr",'
                       '"size":$body_bytes_sent,'
                       '"responsetime":"$request_time",'
                       '"upstreamtime":"$upstream_response_time",'
                       '"upstreamhost":"$upstream_addr",'
                       '"http_host":"$host",'
                       '"url":"$uri",'
                       '"domain":"$host",'
                       '"xff":"$http_x_forwarded_for",'
                       '"referer":"$http_referer",'
                       '"status":"$status"}';
access_log /var/log/nginx/access.log access_json;
error_log /var/log/nginx/error.log;
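After nginx is started (step 5.5), the configuration syntax and the JSON output can be verified:
nginx -t                                 # validate the configuration syntax
curl -s http://127.0.0.1/ >/dev/null     # generate one request
tail -n 1 /var/log/nginx/access.log      # the last line should be a single JSON object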
5.4 Edit the filebeat configuration file
Only the lines below need to be changed: they configure the collection of access.log and error.log separately, and the fields entry is what logstash later uses to tell the two streams apart.
cat /etc/filebeat/filebeat.yml
# ============================== Filebeat inputs ===============================
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  fields:
    type: nginx_access_log
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  fields:
    type: nginx_error_log
# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["10.0.1.13:5044"]    # send the output to port 5044, where logstash listens on 10.0.1.13
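filebeat ships a built-in self-test that validates the configuration file and the connection to the logstash output; running it before enabling the service saves a round of log reading:
filebeat test config    # parses /etc/filebeat/filebeat.yml
filebeat test output    # attempts a connection to 10.0.1.13:5044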
5.5 Start all services
systemctl start nginx redis filebeat
systemctl enable nginx redis filebeat
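As an end-to-end check, generate a few requests against nginx and confirm that documents arrive in elasticsearch; kibana index patterns can then be created for nginx_access_log_* and nginx_error_log_*:
for i in $(seq 1 10); do curl -s http://127.0.0.1/ >/dev/null; done    # generate traffic on 10.0.1.14
curl http://10.0.1.11:9200/_cat/indices?v | grep nginx                 # the document count should increase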