Collecting system logs and nginx logs with ELK

Background

To meet the company's security compliance requirements, we need to collect the system logs and nginx logs of our services and make them available for query by government security departments. We chose ELK (Elasticsearch, Logstash, Kibana) as the implementation tooling.

Log samples

The API call logs consist of two parts:
ys-web service
Log location:
/usr/local/hikvision/tomcat6/logs/shipin7-api.log
Log format:
2018-08-16 12:29:13,438 INFO  API - execution(UserController.login(..)), args: Object[][{AuthParam{sessionId='null', clientType='1', osVersion='11.4.1', clientVersion='3.7.4.216926', netType='WIFI', clientNo='null', areaId=null, lang='null'},com.hikvision.shipin7.web.api.Request.LoginReq@5379dd6a[[email protected],password=ed6e04ded0662de03a6ac1f8458f9b87,imageCode=,featureCode=348aace2f2499847c6632d6c4702b06f,smCode=,cuName=aVBob25l,oAuth=,redirect=0],org.apache.catalina.connector.RequestFacade@45ee0daa}], result: ResultCode: 1013, exception: false, cost: 4ms
execution identifies the invoked method
clientType is the client type
ResultCode is the returned error code
cost is the time the call took
Example:
User registration:
2018-08-17 00:32:03,305 INFO  API - execution(UserController.registerUser(..)), args: Object[][{AuthParam{sessionId='null', clientType='1', osVersion='11.3.1', clientVersion='3.7.4.216926', netType='WIFI', clientNo='null', areaId=314, lang='null'},com.hikvision.shipin7.web.api.Request.RegistReq@6c580d12[userName=evka123,password=aed9bcc552285190730380f2b2539280,userType=0,contact=,domicile=,phoneNumber=13474799490,email=,smsCode=2416,companyAddress=,fixedPhone=,enableFeatureCode=1,featureCode=6eba1cd7ce1b0946dafca23cd4a2dd3e,cuName=aVBob25l,oAuth=,oAuthId=,oAuthAccToken=,referrals=,areaId=314,regType=1],org.apache.catalina.connector.RequestFacade@625e41d2}], result: ResultCode: 0, exception: false, cost: 409ms
 
api-gateway service
Log location:
/usr/local/hikvision/tomcat8/logs/api-access.log
Log format:
2018-08-17 09:11:33.541 [http-nio-8080-exec-76] DEBUG ASYNC - uri=/api/p2p/devices,method=POST, client=109.175.38.189, userId=fa069bfcd3f44b23b8f0c15958ede501, headers={}, payload=[deviceSerials=744838280%2C506006791&clientType=55&osVersion=8.0.0&clientVersion=3.5.2.0727&netType=UMTS&clientNo=google_play&sessionId=b7bffdfb210b4e9a897e4fcf27febe62&areaId=110&lang=2], status=200, meta=[resultCode=0, resultDes=请求成功],cost=5ms, productId=product-userdevice, hostIp=10.201.12.145, idc=MASTER
uri is the request path
method is the request method
client is the client IP
userId is the ID of the requesting user; it may be empty, meaning the request does not require login or the session has expired
payload holds the request parameters, which differ per request
status is the response status code
meta holds the request result
resultCode is the returned error code
cost is the time the request took
productId identifies the service that handled the request
Examples:
Channel list:
2018-08-17 09:11:33.341 [http-nio-8080-exec-43] DEBUG ASYNC - uri=/api/camera/list,method=POST, client=5.165.8.118, userId=046a4cd8f1334e73ac59d8748a2ff5c2, headers={}, payload=[sessionId=77ff23227bfd4af581d9f137d2f2dcf9&clientType=9&clientVersion=1&groupId=-1&pageStart=0&pageSize=40], status=200, meta=[resultCode=0, resultDes=请求成功],cost=103ms, productId=product-userdevice, hostIp=10.201.12.145, idc=MASTER
Device list (paged):
2018-08-17 09:11:33.311 [http-nio-8080-exec-32] DEBUG ASYNC - uri=/api/device/pagelist,method=POST, client=109.175.38.189, userId=fa069bfcd3f44b23b8f0c15958ede501, headers={}, payload=[userType=0&pyronix=1&filterType=&pageStart=0&pageSize=10&kms=true&areaId=110&clientNo=google_play&clientType=55&clientVersion=3.5.2.0727&netType=UMTS&osVersion=8.0.0&sessionId=b7bffdfb210b4e9a897e4fcf27febe62], status=200, meta=[resultCode=0, resultDes=请求成功],cost=126ms, productId=product-userdevice, hostIp=10.201.12.145, idc=MASTER
Device list:
2018-08-17 09:11:31.330 [http-nio-8080-exec-52] DEBUG ASYNC - uri=/api/device/list,method=POST, client=94.254.241.13, userId=de72e3e5f8204d5a94291a6f07ce37c3, headers={}, payload=[sessionId=1a476d3c9e2247f4b29e7fe35f449939&clientType=3&osVersion=6.0&clientVersion=3.6.0.0320&netType=LTE&clientNo=web_site], status=200, meta=[resultCode=0, resultDes=请求成功],cost=177ms, productId=product-userdevice, hostIp=10.201.12.145, idc=MASTER
Channel info:
2018-08-17 09:11:33.269 [http-nio-8080-exec-67] DEBUG ASYNC - uri=/api/camera/infos,method=POST, client=2.30.236.74, userId=4af0879a5b7941b7a69053fb3b30547e, headers={}, payload=[areaId=142&clientType=54&clientVersion=3.5.2.180727&netType=WIFI&osVersion=11.4.1&serials=113689098&sessionId=cf3c8c543b394f27896cf3f66221bbbb], status=200, meta=[resultCode=0, resultDes=请求成功],cost=65ms, productId=product-userdevice, hostIp=10.201.12.145, idc=MASTER
Device search:
2018-08-17 09:11:32.921 [http-nio-8080-exec-13] DEBUG ASYNC - uri=/api/device/search,method=POST, client=151.45.181.218, userId=e2ec310743024f4eabd326bbf9192818, headers={}, payload=[sessionId=8caaf6904b9f4d36b874389be3ea0835&clientType=3&osVersion=8.0.0&clientVersion=3.7.4.0808&netType=WIFI&clientNo=web_site&deviceSerialNo=558F5C2EE], status=200, meta=[resultCode=2000, resultDes=null],cost=80ms, productId=product-userdevice, hostIp=10.201.12.145, idc=MASTER

Implementation

With ELK, essentially no custom development is required; the difficulty is concentrated in the Logstash parsing.

Software versions
  1. filebeat-6.2.3

  2. logstash-5.6.2 (requires 4 GB RAM, 4 vCPUs, and JDK 1.8)

  3. elasticsearch-2.4.6 (with the head, license, marvel, and sense plugins)

  4. kafka_2.11-0.10.0.1 (JDK 1.8), cluster mode

  5. zookeeper-3.4.6
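
The overall data flow, as implemented by the configs below: Filebeat on each host ships raw log lines into Kafka (one topic per log type), Logstash consumes the topics and parses the lines with grok, and the parsed events are indexed into Elasticsearch for query through Kibana.

Filebeat -> Kafka -> Logstash -> Elasticsearch -> Kibana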

Configuration
Elasticsearch configuration

Edit /home/es/elasticsearch-2.4.6/bin/elasticsearch and add export ES_HEAP_SIZE=4g near the top of the script:

#
#
# Optionally, exact memory values can be set using the following values, note,
# they can still be set using the `ES_JAVA_OPTS`. Sample format include "512m", and "10g".
export ES_HEAP_SIZE=4g
#   ES_HEAP_SIZE -- Sets both the minimum and maximum memory to allocate (recommended)
#
# As a convenience, a fragment of shell is sourced in order to set one or
# more of these variables. This so-called `include' can be placed in a
# number of locations and will be searched for in order. The lowest
# priority search path is the same directory as the startup script, and
# since this is the location of the sample in the project tree, it should
# almost work Out Of The Box.

编辑 /home/es/elasticsearch-2.4.6/config/elasticsearch.yml

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please see the documentation for further information on configuration options:
# 
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: bigdata
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node  # hostname of this machine
#
# Add custom attributes to the node:
#
# node.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /data1/esdata
#
# Path to log files:
#
path.logs: /home/es/log
node.master: true
node.data: false
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
# bootstrap.memory_lock: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 10.97.202.9 # IP of this host
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
# For more information, see the documentation at:
# 
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["ip1", "ip2","ip3"]   # (IPs of the three node hosts; master-eligible nodes must list all three)
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
# discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
# 
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
# gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
# 
#
# ---------------------------------- Various -----------------------------------
#
# Disable starting multiple nodes on a single system:
#
# node.max_local_storage_nodes: 1
#
# Require explicit names when deleting indices:
#
# action.destructive_requires_name: true
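
This file sets up a dedicated master (node.master: true, node.data: false). On the data nodes the same file is reused with the roles flipped; a sketch, assuming a dedicated-master topology:

node.master: false
node.data: true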

Kafka configuration

After installing Kafka, create the topics that Filebeat will publish to:

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 4 --topic message-log

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 4 --topic secure-log

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 4 --topic cron-log

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 4 --topic audit-log

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 4 --topic sudo-log

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 4 --topic nginx-log

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 4 --topic application-log
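
To confirm the topics exist and check their partition layout, list and describe them:

bin/kafka-topics.sh --list --zookeeper localhost:2181

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic nginx-log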

Filebeat configuration
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
#=========================== Filebeat prospectors =============================
filebeat.prospectors:
# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.
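# Each prospector below attaches two custom fields: log_topics selects the
# Kafka topic in output.kafka, and log_type selects the parsing branch in the
# Logstash pipeline (syslog.conf).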
- type: log
  enabled: true
  paths:
    - /var/log/messages
  fields:
    hostname: ${HOSTNAME}
    log_topics: message-log
    log_type: message
- type: log
  enabled: true
  paths:
    - /var/log/secure
  fields:
    hostname: ${HOSTNAME}
    log_topics: secure-log
    log_type: secure
  
- type: log
  enabled: true
  paths:
    - /var/log/audit.log
  fields:
    hostname: ${HOSTNAME}
    log_topics: audit-log
    log_type: audit
- type: log
  enabled: true
  paths:
    - /var/log/cron
  fields:
    hostname: ${HOSTNAME}
    log_topics: cron-log
    log_type: cron
- type: log
  enabled: true
  paths:
    - /var/log/sudo
  fields:
    hostname: ${HOSTNAME}
    log_topics: sudo-log
    log_type: sudo
- type: log
  enabled: true
  paths:
    - /usr/local/hikvision/nginx/logs/access.log
  fields:
    hostname: ${HOSTNAME}
    log_topics: nginx-log
    log_type: nginx
- type: log
  enabled: true
  paths:
    - /usr/local/hikvision/tomcat6/logs/shipin7-api.log
  fields:
    hostname: ${HOSTNAME}
    log_topics: application-log
    log_type: shipin7-api
- type: log
  enabled: true
  paths:
    - /usr/local/hikvision/tomcat8/logs/api-access.log
  fields:
    hostname: ${HOSTNAME}
    log_topics: application-log
    log_type: api-access
  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']
  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']
  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']
  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1
  ### Multiline options
  # Mutiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation
  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[
  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false
  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after
 
#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s
#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false
#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging
 
#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"
#============================= Elastic Cloud ==================================
# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `:`.
#cloud.auth:
#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.
#---------------------------------------------kafka output---------------------------
#
output.kafka:
## initial brokers for reading cluster metadata
   hosts: ["localhost:9092"]
#
## message topic selection + partitioning
   topic: '%{[fields][log_topics]}'
   partition.round_robin:
      reachable_only: false
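## each event is rendered as "<fields>`` <raw line>": the fields map first, a
## literal `` delimiter, then the original message; syslog.conf splits on the
## delimiter and JSON-decodes the fields half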
   codec.format:
      string: '%{[fields]}`` %{[message]}'
   required_acks: 1
   compression: gzip
   max_message_bytes: 1000000
#partition.round_robin:
#reachable_only: false
#
#codec.format:
#    string: '%{[fields][log_type]} %{[message]}'
#    required_acks: 1
#    compression: gzip
#    max_message_bytes: 1000000
#
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
 # hosts: ["localhost:9200"]
  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"
#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.
# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false
# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:

Start Filebeat in the background: nohup ./filebeat -e -c filebeat.yml >/dev/null 2>&1 &
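
To verify events are actually reaching Kafka before wiring up Logstash, consume one of the topics from the beginning:

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic nginx-log --from-beginning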

Logstash configuration

After installing Logstash, create syslog.conf under /usr/local/hikvision/logstash/config:

input{
  kafka{
    bootstrap_servers => ["bigdata1.kafka.ez.aws:9092,bigdata2.kafka.ez.aws:9092,bigdata3.kafka.ez.aws:9092"]
    group_id => "logstash"
    max_poll_records => "1000"
    topics => ["message-log","secure-log","cron-log","audit-log","nginx-log","sudo-log","application-log"]
    client_id => "logstash"
    }
}
filter {
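  # each Kafka message arrives as "<fields>`` <original line>" (built by the
  # Filebeat codec.format); the grok below splits it into add_info and restmsg,
  # and the json filter lifts the Filebeat fields (hostname, log_type, ...) to
  # the top level of the event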
  grok
     {
     patterns_dir => ["/usr/local/hikvision/logstash/overseas_patterns"]
     match => { "message" => "%{REST:add_info}`` %{REST:restmsg}"}
     }
     json {
     source => "add_info"
     }
   if [log_type] == "audit" {
      grok
       {
        patterns_dir => ["/usr/local/hikvision/logstash/overseas_patterns"]
        match => { "restmsg" => "%{AuditTIME:operationtime}" }
        remove_field => [ "message","@version","log_topics","add_info"]
       }
      date
       {
         match => ["operationtime", "yyyy-MM-dd_HH:mm:ss"]
         target => "@timestamp"
         locale => "en"
       }
       mutate
       {
        remove_field => ["operationtime"]
       }
    }else if [log_type] == "nginx" {
       grok
       {
        patterns_dir => ["/usr/local/hikvision/logstash/overseas_patterns"]
        match => { "restmsg" => "%{IPV4:remote_addr|-} - (?:%{USERNAME:user}|-) \[%{HTTPDATE:log_timestamp}\] \"(%{WORD:request_method}|-) (%{URIPATH1:url}|-|) (HTTP/%{NUMBER:httpversion})\" (%{STATUS:http_status}) (?:%{BASE10NUM:body_bytes_sent}|-) \"(?:%{GREEDYDATA:http_referrer}|-)\" \"(%{GREEDYDATA:user_agent}|-)\"" }
        remove_field => [ "message","@version","log_topics","add_info"]
       }
       date
       {
          match => ["log_timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
          target => "@timestamp"
          locale => "en"
       }
       mutate
       {
        remove_field => ["log_topics","add_info","httpversion","log_timestamp"]
       }    
    }else if [log_type] == "shipin7-api"{
       grok
       {
         patterns_dir => ["/usr/local/hikvision/logstash/overseas_patterns"]
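         # captures: log level, logger, method, sessionId, clientType, result
         # text, and cost (cost is converted to an integer below)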
         match => { "restmsg" => "%{AppTIME:request_time}\s(?\w*)\s\s(?\w*)\s-\s.*\((?\w*.\w*)\(..\)\),\s.*sessionId=\'(?\w*)\'.*clientType=\'(?\w*)\'.*result\:\s(?[A-Za-z0-9_ :\-]*),.*cost\:\s(?\d+)ms" }
         remove_field => [ "message","@version","log_topics","add_info"]
       }
       date
       {
          match => ["request_time", "yyyy-MM-dd HH:mm:ss,SSS"]
          target => "@timestamp"
          locale => "en"
        }
       mutate {
        convert => ["cost", "integer"]
        }    
    }else if [log_type] == "api-access"{
       grok
       {
         patterns_dir => ["/usr/local/hikvision/logstash/overseas_patterns"]
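          # capture names mirror the key before each value in the log line
          # (uri, method, client, userId, payload, status, resultCode,
          # resultDes, cost, productId)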
         match => { "restmsg" => "%{ApiTIME:request_time}.*uri=(?[A-Za-z0-9_/]+),.*method=(?\w*),.*client=(?[0-9.]+),.*userId=(?\w*),.*payload=\[(?[^\[\]]+)\],.*status=(?\w*),.*resultCode=(?[A-Za-z0-9_\-]*),.*resultDes=(?[^\]]+)\],.*cost=(?\d+)ms,.*productId=(?[A-Za-z0-9_-]+)," }
         remove_field => [ "message","@version","log_topics","add_info"]
        }
       date
       {
          match => ["request_time", "yyyy-MM-dd HH:mm:ss.SSS"]
          target => "@timestamp"
          locale => "en"
        }
       mutate {
        convert => ["cost", "integer"]
        }
    }else if [log_type] == "dclog-log"{
       grok
       {
         patterns_dir => ["/usr/local/hikvision/logstash/overseas_patterns"]
         match => { "restmsg" => "%{AppTIME:operationtime}" }
         remove_field => [ "message","@version","log_topics","add_info"]
        }
       date
       {
          match => ["operationtime", "yyyy-MM-dd HH:mm:ss.SSS"]
          target => "@timestamp"
          locale => "en"
        }
       mutate {
        convert => ["cost", "integer"]
        }
    }
    else {
       grok
       {
         patterns_dir => ["/usr/local/hikvision/logstash/overseas_patterns"]
         match => { "restmsg" => "%{TIME:operationtime}" }
         remove_field => [ "message","@version","log_topics","add_info"]
       }
       date
       {
         match => ["operationtime", "MMM dd HH:mm:ss","MMM  d HH:mm:ss"]
         target => "@timestamp"
         locale => "en"
        }
       mutate {
          remove_field => ["operationtime"]
       }
    }
 }
output {
 if [log_type] == "nginx" {
# stdout { codec => rubydebug }
 elasticsearch{
    hosts => ["10.200.14.115:9200","10.200.14.220:9200","10.200.15.163:9200"]
    index => "%{log_type}%{+YYYYMMdd}"
    manage_template => true
    template_name => "nginx"
    template_overwrite => true
    template => "/usr/local/hikvision/logstash/nginx.json"
    }
  }else if [log_type] == "shipin7-api"
  {
 #  stdout { codec => rubydebug }
   elasticsearch{
     hosts => ["10.200.14.115:9200","10.200.14.220:9200","10.200.15.163:9200"]
     index => "%{log_type}%{+YYYYMMdd}"
     manage_template => true
     template_name => "shipin7-api"
     template_overwrite => true
     template => "/usr/local/hikvision/logstash/shipin7-api.json"
    }
  }else if [log_type] == "api-access"
  {
 #  stdout { codec => rubydebug }
   elasticsearch{
     hosts => ["10.200.14.115:9200","10.200.14.220:9200","10.200.15.163:9200"]
     index => "%{log_type}%{+YYYYMMdd}"
     manage_template => true
     template_name => "api-access"
     template_overwrite => true
     template => "/usr/local/hikvision/logstash/api-access.json"
    }
  }else if [log_type] == "dclog-log"
  {
 #  stdout { codec => rubydebug }
   elasticsearch{
     hosts => ["10.200.14.115:9200","10.200.14.220:9200","10.200.15.163:9200"]
     index => "system-%{log_type}%{+YYYYMM}"
     manage_template => true
     template_name => "dclog-log"
     template_overwrite => true
     template => "/usr/local/hikvision/logstash/system.json"
    }
  }else {
 #  stdout { codec => rubydebug }
   elasticsearch{
    hosts => ["10.200.14.115:9200","10.200.14.220:9200","10.200.15.163:9200"]
    index => "system-%{log_type}%{+YYYYMM}"
    manage_template => true
    template_name => "system"
    template_overwrite => true
    template => "/usr/local/hikvision/logstash/system.json"
    }
  }
}

Create the file /usr/local/hikvision/logstash/overseas_patterns with the following content:

AuditTIME  ^[0-9]{4}-[0-9]{2}-[0-9]{2}_[0-9]{2}\:[0-9]{2}\:[0-9]{2}
AppTIME [0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}\:[0-9]{2}\:[0-9]{2},[0-9]{3}
ApiTIME [0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}\:[0-9]{2}\:[0-9]{2}[.][0-9]{3}
TIME ^[A-Za-z]{3} [ 0-9]{2} [0-9]{2}\:[0-9]{2}\:[0-9]{2}
REST .*
URIPARM1 [A-Za-z0-9$.+!*'|(){},~@#%&/=:;^\\_<>`?\-\[\]]*
URIPATH1 (?:/[\\A-Za-z0-9$.+!*'(){},~:;=@#% \[\]_<>^\-&?]*)+
STATUS ([0-9.]{0,3}[, ]{0,2})+
USERNAME [a-zA-Z0-9._-]+
HTTPDATE %{MONTHDAY}/%{MONTH}/%{YEAR}:%{NGTIME} %{INT}
WORD \b\w+\b
BASE10NUM (?<![0-9.+-])(?>[+-]?(?:(?:[0-9]+(?:\.[0-9]+)?)|(?:\.[0-9]+)))
GREEDYDATA .*
NUMBER (?:%{BASE10NUM})
IPV4 (?<![0-9])(?:(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2}))(?![0-9])
MONTH \b(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\b
MONTHNUM (?:0?[1-9]|1[0-2])
MONTHDAY (?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9])
DAY (?:Mon(?:day)?|Tue(?:sday)?|Wed(?:nesday)?|Thu(?:rsday)?|Fri(?:day)?|Sat(?:urday)?|Sun(?:day)?)
YEAR (?>\d\d){1,2}
HOUR (?:2[0123]|[01]?[0-9])
MINUTE (?:[0-5][0-9])
SECOND (?:(?:[0-5][0-9]|60)(?:[:.,][0-9]+)?)
NGTIME (?!<[0-9])%{HOUR}:%{MINUTE}(?::%{SECOND})(?![0-9])
INT (?:[+-]?(?:[0-9]+))
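
To debug a single pattern, a throwaway pipeline that reads from stdin and prints the parsed event is handy (a minimal sketch; save it as test.conf, run bin/logstash -f test.conf, then paste a sample log line):

input { stdin {} }
filter {
  grok {
    patterns_dir => ["/usr/local/hikvision/logstash/overseas_patterns"]
    match => { "message" => "%{ApiTIME:request_time}" }
  }
}
output { stdout { codec => rubydebug } }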

Create the file /usr/local/hikvision/logstash/system.json with the following content:

{
  "template": "*",
  "settings": {
    "index.number_of_shards": 10,
    "number_of_replicas": 1
  },
  "mappings": {
    "logs": {
      "properties": {
        "hostname": {
          "type": "string",
          "include_in_all": false,
          "index": "not_analyzed"
        },
        "log_type": {
          "type": "string",
          "include_in_all": false
        },
        "@timestamp": {
          "type": "string",
          "include_in_all": false,
          "index": "not_analyzed"
        },
        "restmsg": {
          "type": "string",
          "include_in_all": false,
          "index": "not_analyzed"
        }
      }
    }
  }
}
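
Logstash installs these templates at startup (manage_template => true); once it is running, a template can be verified against any ES node:

curl 'http://10.200.14.115:9200/_template/system?pretty'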

Create the file /usr/local/hikvision/logstash/nginx.json with the following content:

{
  "template": "*",
  "settings": {
    "index.number_of_shards": 10,
    "number_of_replicas": 1
  },
  "mappings": {
    "_default_": {
      "_source": {
        "enabled": false
      }
    },
    "logs": {
      "_source": {
        "enabled": true
      },
      "properties": {
        "remote_addr": {
          "type": "string",
          "include_in_all": false,
          "index": "not_analyzed"
        },
        "restmsg": {
          "type": "string",
          "include_in_all": false,
          "index": "no"
        },
        "body_bytes_sent": {
          "type": "string",
          "include_in_all": false,
          "index": "not_analyzed"
        },
        "request_method": {
          "type": "string",
          "include_in_all": false,
          "index": "not_analyzed"
        },
        "url": {
          "type": "string",
          "include_in_all": false,
          "index": "not_analyzed"
        },
        "hostname": {
          "type": "string",
          "include_in_all": false,
          "index": "not_analyzed"
        },
        "log_type": {
          "type": "string",
          "include_in_all": false
        },
        "@timestamp": {
          "type": "string",
          "include_in_all": false,
          "index": "not_analyzed"
        },
        "http_referrer": {
          "type": "string",
          "include_in_all": false,
          "index": "not_analyzed"
        },
        "http_status": {
          "type": "string",
          "include_in_all": false,
          "index": "not_analyzed"
        },
        "user": {
          "type": "string",
          "include_in_all": false,
          "index": "not_analyzed"
        },
        "user_agent": {
          "type": "string",
          "include_in_all": false,
          "index": "not_analyzed"
        }
      }
    }
  }
}

Create the file /usr/local/hikvision/logstash/shipin7-api.json with the following content:

{
  "template": "*",
  "settings": {
    "index.number_of_shards": 10,
    "number_of_replicas": 1
  },
  "mappings": {
    "logs": {
      "properties": {
        "hostname": {
          "type": "string",
          "include_in_all": false,
          "index": "not_analyzed"
        },
        "log_type": {
          "type": "string",
          "include_in_all": false
        },
        "@timestamp": {
          "type": "string",
          "include_in_all": false,
          "index": "not_analyzed"
        },
        "restmsg": {
          "type": "string",
          "include_in_all": false,
          "index": "no"
        }
      }
    }
  }
}

Create the file /usr/local/hikvision/logstash/api-access.json with the following content:

{
  "template": "*",
  "settings": {
    "index.number_of_shards": 10,
    "number_of_replicas": 1
  },
  "mappings": {
    "logs": {
      "properties": {
        "hostname": {
          "type": "string",
          "include_in_all": false,
          "index": "not_analyzed"
        },
        "log_type": {
          "type": "string",
          "include_in_all": false
        },
        "@timestamp": {
          "type": "string",
          "include_in_all": false,
          "index": "not_analyzed"
        },
        "restmsg": {
          "type": "string",
          "include_in_all": false,
          "index": "no"
        }
      }
    }
  }
}
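
Before starting Logstash, the pipeline syntax can be validated (Logstash 5.x):

bin/logstash -f config/syslog.conf --config.test_and_exit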

Start Logstash in the background: nohup bin/logstash -f config/syslog.conf >/dev/null 2>&1 &

Kibana configuration

Edit the kibana.yml configuration file and add the following:

# IP of this host
server.host: "10.200.14.116"
# URL of an Elasticsearch node in the cluster
elasticsearch.url: "http://10.200.14.115:9200"
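
To confirm the indices are being written before opening Kibana, query the _cat API on any ES node:

curl 'http://10.200.14.115:9200/_cat/indices?v'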

From the bin directory of the Kibana installation, run ./kibana &, then create index patterns in Kibana (e.g. nginx*, api-access*, system-*) to query the collected logs.
