Building a Highly Available Distributed Logging Cluster with Elasticsearch + Kibana + Logstash + Filebeat (Part 2): Installing the Kibana + Logstash + Filebeat Cluster

The ELK stack offers a complete, mature solution for log storage and analysis. This article covers setting up the Kibana + Logstash + Filebeat part of the cluster.

The previous article covered building the Elasticsearch cluster; this one continues with the Kibana, Logstash, and Filebeat installation.

Building a Highly Available Distributed Logging Cluster with Elasticsearch + Kibana + Logstash + Filebeat (Part 1): Installing the Elasticsearch Cluster

Building a Highly Available Distributed Logging Cluster with Elasticsearch + Kibana + Logstash + Filebeat (Part 2): Installing the Kibana + Logstash + Filebeat Cluster (this article)

Download Kibana (on node 120 only)


wget https://artifacts.elastic.co/downloads/kibana/kibana-7.1.0-linux-x86_64.tar.gz

Download Logstash (on nodes 120/130/140)

wget https://artifacts.elastic.co/downloads/logstash/logstash-7.1.0.tar.gz

Download Filebeat (on nodes 120/130/140)

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.1.0-linux-x86_64.tar.gz
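
Before extracting anything, it is worth verifying each archive: Elastic publishes a `.sha512` checksum file alongside each artifact. The sketch below shows the verification mechanics on a locally created dummy file; the commented lines show the assumed form of the real check for the Kibana archive.

```shell
# Real check, per archive (Elastic serves a .sha512 next to each download):
#   wget https://artifacts.elastic.co/downloads/kibana/kibana-7.1.0-linux-x86_64.tar.gz.sha512
#   sha512sum -c kibana-7.1.0-linux-x86_64.tar.gz.sha512
# Demonstrated here on a dummy file so the mechanics are visible offline:
echo "dummy archive" > demo.tar.gz
sha512sum demo.tar.gz > demo.tar.gz.sha512
sha512sum -c demo.tar.gz.sha512   # prints "demo.tar.gz: OK" on success
```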

Extract the archives to the install directory (nodes 120/130/140; the Kibana step on node 120 only)

tar -zxvf kibana-7.1.0-linux-x86_64.tar.gz -C /usr/local # node 120 only

tar -zxvf logstash-7.1.0.tar.gz -C /usr/local

tar -zxvf filebeat-7.1.0-linux-x86_64.tar.gz -C /usr/local

cd /usr/local

Rename the installation directories (nodes 120/130/140)

mv kibana-7.1.0-linux-x86_64 kibana # node 120 only; the extracted directory carries the platform suffix

mv logstash-7.1.0 logstash

mv filebeat-7.1.0-linux-x86_64 filebeat

Create the Kibana log directory (on node 120)

mkdir /opt/ELK/kibana

Edit the Kibana configuration (on node 120)

vim /usr/local/kibana/config/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://192.168.77.120:9200","http://192.168.77.130:9200"]
logging.dest: /opt/ELK/kibana/kibana.log
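
Before starting Kibana, it helps to confirm that every address in `elasticsearch.hosts` actually answers. A minimal reachability sketch (the IPs are this guide's cluster; `-m 2` caps each probe at two seconds):

```shell
# Probe each configured Elasticsearch host; prints one status line per host.
for h in 192.168.77.120:9200 192.168.77.130:9200; do
  if curl -fsS -m 2 "http://$h" >/dev/null 2>&1; then
    echo "$h reachable"
  else
    echo "$h unreachable"
  fi
done
```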

Edit the Logstash pipeline configuration (on nodes 120/130/140)

vim /usr/local/logstash/config/logstash-sample.conf
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  beats {
    port => 5044
  }
}

filter {

  if [fields][logtype] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  } else {
    # Note: the multiline filter plugin is not bundled with Logstash 7.x by
    # default; install it first (bin/logstash-plugin install logstash-filter-multiline)
    # or join multiline events in Filebeat / a multiline codec instead.
    multiline {
      pattern => '^\s*({")'
      negate => true
      what => "previous"
    }

    grok {
      match => { "message" => "%{DATA:timestamp}\|%{IP:springHost}\|\|"}
    }

    date {
      match => [ "timestamp", "yyyy-MM-dd-HH:mm:ss" ]
      locale => "zh-CN"   # "cn" is not a valid locale tag; the pattern here is numeric, so locale is optional
    }
    
    geoip {
      source => "springHost"
    }

    json {
      source => "message"
      target => "content"
      remove_field=>["tags", "beat"]
    }
  }

}

output {

  if [fields][logtype] == "account-log" {
     elasticsearch {
        hosts => ["http://192.168.77.120:9200","http://192.168.77.130:9200"]
        index => "account-%{+YYYY.MM.dd}"
        #user => "elastic"
        #password => "changeme"
     }
  }

  if [fields][logtype] == "product-log" {
     elasticsearch {
        hosts => ["http://192.168.77.120:9200","http://192.168.77.130:9200"]
        index => "product-%{+YYYY.MM.dd}"
        #user => "elastic"
        #password => "changeme"
     }
  }

  if [fields][logtype] == "insurance-log" {
     elasticsearch {
        hosts => ["http://192.168.77.120:9200","http://192.168.77.130:9200"]
        index => "insurance-%{+YYYY.MM.dd}"
        #user => "elastic"
        #password => "changeme"
      }
   }
  
  if [fields][logtype] == "syslog" {
    elasticsearch {
      hosts => ["http://192.168.77.120:9200","http://192.168.77.130:9200"]
      index => "filebeat-%{+YYYY.MM.dd}"
    }
  }
}
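
The pipeline file can be syntax-checked without starting Logstash: `/usr/local/logstash/bin/logstash -f /usr/local/logstash/config/logstash-sample.conf --config.test_and_exit`. The syslog branch above only fires on lines shaped like classic syslog; the sketch below approximates the grok pattern with plain `grep -E`, useful for eyeballing whether sample lines will match (a rough approximation, not the grok library itself):

```shell
# Rough grep approximation of:
# %{SYSLOGTIMESTAMP} %{SYSLOGHOST} %{DATA}(\[%{POSINT}\])?: %{GREEDYDATA}
pattern='^[A-Z][a-z]{2} +[0-9]{1,2} [0-9]{2}:[0-9]{2}:[0-9]{2} [^ ]+ [^:]+(\[[0-9]+\])?: .*'
line='May  9 10:15:01 web01 CRON[1234]: (root) CMD (run-parts /etc/cron.hourly)'
if echo "$line" | grep -Eq "$pattern"; then
  echo "matches syslog shape"    # this sample line matches
else
  echo "does not match"
fi
```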

Edit the Filebeat configuration (on nodes 120/130/140)

vim /usr/local/filebeat/filebeat.yml
filebeat.inputs:
  # Note: the deprecated document_type option was removed in Filebeat 6.0+;
  # routing in Logstash relies on the custom fields.logtype value instead.
  - type: tcp
    max_message_size: 10MiB
    host: "0.0.0.0:45001"
    enabled: true
    fields:
      logtype: account-log
  - type: tcp
    max_message_size: 10MiB
    host: "0.0.0.0:45002"
    enabled: true
    fields:
      logtype: product-log
  - type: tcp
    max_message_size: 10MiB
    host: "0.0.0.0:45003"
    enabled: true
    fields:
      logtype: insurance-log
  - type: log
    enabled: true
    paths:
      - /var/log/messages
    fields:
      logtype: syslog

output.logstash:
  # The Logstash hosts
  hosts: ["192.168.77.120:5044","192.168.77.130:5044","192.168.77.140:5044"]
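
Once Filebeat is running, each `tcp` input accepts newline-delimited lines, so any client can smoke-test a port. The helper below is a hypothetical convenience using bash's `/dev/tcp` redirection (plain `nc <host> <port>` works just as well); run the commented call from a machine that can reach the cluster.

```shell
# Push one line at a filebeat tcp input; prints "sent" or "cannot reach".
send_test_line() {
  local host=$1 port=$2
  if printf 'filebeat smoke test\n' 2>/dev/null > "/dev/tcp/$host/$port"; then
    echo "sent to $host:$port"
  else
    echo "cannot reach $host:$port"
  fi
}
# Example (from a node): send_test_line 192.168.77.120 45001   # account-log input
```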
  

Start the services (nodes 120/130/140; Kibana on node 120 only)

1. Start Kibana (on node 120 only)

nohup /usr/local/kibana/bin/kibana &

2. Start Logstash

nohup /usr/local/logstash/bin/logstash -f /usr/local/logstash/config/logstash-sample.conf &

3. Start Filebeat (run from its home directory so filebeat.yml is found)

cd /usr/local/filebeat && nohup ./filebeat &
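
A quick way to confirm everything came up on a node is to check the expected listening ports (5601 is Kibana, node 120 only; 5044 is the Logstash beats input; 45001-45003 are the Filebeat tcp inputs):

```shell
# Print one status line per expected port; "NOT listening" flags a failed start.
for p in 5601 5044 45001 45002 45003; do
  if ss -ltn 2>/dev/null | grep -q ":$p "; then
    echo "port $p listening"
  else
    echo "port $p NOT listening"
  fi
done
```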

Management UI / client configuration

1. Kibana web UI address

http://192.168.77.120:5601/app/infra

2. Filebeat client connection endpoints (IP:port)

192.168.77.120:45001,192.168.77.130:45001,192.168.77.140:45001
192.168.77.120:45002,192.168.77.130:45002,192.168.77.140:45002
192.168.77.120:45003,192.168.77.130:45003,192.168.77.140:45003

This completes the Kibana + Logstash + Filebeat cluster setup.
