ELK Log Reporting

(1) ELK overview:

Elasticsearch + Logstash + Kibana (ELK) is an open-source log management stack.
Logstash: collects and processes logs and ships them on for storage
Elasticsearch: indexes the logs and provides search and analysis
Kibana: visualizes the logs

Workflow:

[Figure 1: ELK workflow]

Nginx log format analysis

log_format  main  '$host $remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent $upstream_response_time "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for" "$uid_got" "$uid_set" "$http_x_tencent_ua" "$upstream_addr"';

What each field in the Nginx log format means:

$host                    domain name or IP the client requested
$remote_addr             client IP address
$remote_user             client user name (from basic auth, if any)
[$time_local]            local time when the request was received
$request                 request line: URL and HTTP protocol version
$status                  response status code for this request
$body_bytes_sent         bytes sent in the response body (headers excluded)
$upstream_response_time  time the upstream server took to answer this request
$http_referer            page the request was referred from
$http_user_agent         client browser / user agent
$http_x_forwarded_for    real client IP when the request came through a proxy
$uid_got                 cookie identifier received from the client
$uid_set                 cookie identifier issued to the client
$http_x_tencent_ua       value of the custom X-Tencent-UA request header
$upstream_addr           address of the upstream host that actually handled the request
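
For reference, one access-log line in this format looks roughly like the following (every value here is invented for illustration):

www.example.com 192.168.1.50 - - [20/Mar/2017:14:30:00 +0800] "GET /index.html HTTP/1.1" 200 1024 0.005 "-" "Mozilla/5.0" "-" "-" "-" "-" "192.168.1.10:8080"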

Grok rule for this access-log format in Logstash (written as one raw regex; section (2) below builds the same match from Logstash's stock patterns instead):

%{IP:ServerHost} %{IP:agencyip} - - \[(?<localTime>.*)\] \"(?<verb>\w{3,4}) (?<site>.*?(?=\s)) (?<httpprotcol>.*?)\" (?<statuscode>\d{3}) (?<bytes>\d+) (?<responsetime>(\d+|-)) \"(?<referer>.*?)\" \"(?<agent>.*?)\" \"(?<realclientip>(-|\d+))\" \"(?<uid_got>(-|\d+))\" \"(?<uid_set>(-|\d+))\" \"(?<tencent_ua>(-|\d+))\" \"(?<upstream_ip>(-|\d+))\"

(2) Collecting Nginx logs with ELK

1. Filebeat

Install

# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.2.2-linux-x86.tar.gz
# tar xvf filebeat-5.2.2-linux-x86.tar.gz

Configure

#vim filebeat.yml
    - input_type: log
      paths:
         - /var/log/nginx/*log
    output.logstash:
          hosts: ["192.168.1.141:5044"]

Start

# ./filebeat  -e -c filebeat.yml
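
The -e flag logs to stderr and keeps Filebeat in the foreground; to keep it running after the shell closes, one plain-shell option (nothing Filebeat-specific is assumed here) is:

# nohup ./filebeat -c filebeat.yml > filebeat.log 2>&1 &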

2. Logstash

Install

#wget https://artifacts.elastic.co/downloads/logstash/logstash-5.2.2.tar.gz
# tar xvf logstash-5.2.2.tar.gz

Logstash configuration: grok patterns

[root@localhost logstash-5.2.2]# vim patterns/test
# pattern for the access log

ACCESSLOG %{HOSTNAME:http_host} %{IPORHOST:remote_addr} - (%{USERNAME:user}|-) \[%{HTTPDATE:log_timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{BASE10NUM:http_status} (?:%{BASE10NUM:body_bytes_sent}|-) (%{BASE16FLOAT:upstream_response_time}|-) "(%{DATA:http_referer}|-)" %{QS:client_agent} "(%{DATA:http_x_forwarded_for}|-)" "(%{DATA:uid_got}|-)" %{DATA:uid_set} "(%{DATA:http_x_tencent_ua}|-)" "(?:%{DATA:upstream_addr}|-)"

Pattern for the error log

ERRORLOG %{DATESTAMP:timestamp} \[%{LOGLEVEL:err_level}\] %{GREEDYDATA:err_mess}
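
Before wiring these patterns into the pipeline, they can be sanity-checked with a throwaway stdin pipeline (this assumes Logstash is unpacked under /usr/local/logstash-5.2.2, matching the patterns_dir used in the filter below); paste a log line on stdin and inspect the parsed fields:

# bin/logstash -e 'input { stdin { } } filter { grok { patterns_dir => "/usr/local/logstash-5.2.2/patterns/" match => { "message" => "%{ACCESSLOG}" } } } output { stdout { codec => rubydebug } }'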

Pipeline configuration file

# cat conf.d/nginx.conf 
input {
    beats {
        port => 5044 
    }
}

filter {
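           # Filebeat records the path of the originating file in the [source] field,
           # so access and error logs can be told apart by their file names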
           if ([source] =~ "access")
           {
            grok {
              patterns_dir => "/usr/local/logstash-5.2.2/patterns/"
              match => {"message" => "%{ACCESSLOG}"}
             }

           date {
                  match => ["timestamp","dd/MMM/yyyy:HH:mm:ss Z"]
                 }
           }

           if ([source] =~ "error" )
           { 
             grok{
              patterns_dir => "/usr/local/logstash-5.2.2/patterns/"
              match => {"message" => "%{ERRORLOG}"}
             }

            date {
                match => ["log_timestamp","yy/MM/dd HH:mm:ss"]
               } 
           }
}

output {
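     # by default the elasticsearch output writes events to daily logstash-YYYY.MM.dd indices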
     elasticsearch { hosts => ["192.168.1.134:9200"] }
     stdout { codec=>rubydebug}  # also print each event to the console
}

Start Logstash (check the config syntax first, then run the pipeline):

# bin/logstash -f conf.d/nginx.conf --config.test_and_exit
# bin/logstash -f conf.d/nginx.conf 
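
Once Filebeat is shipping and the pipeline is parsing, Elasticsearch should begin creating those daily logstash-* indices; a quick way to confirm documents are arriving:

# curl 'http://192.168.1.134:9200/_cat/indices?v'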

3. Elasticsearch

Install

# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.2.2.zip
# unzip elasticsearch-5.2.2.zip
# cd elasticsearch-5.2.2/ 

Elasticsearch configuration

[es@localhost elasticsearch-5.2.2]$ cat config/elasticsearch.yml |grep -Ev "^#|^$" 
network.host: 192.168.1.134
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.1.134"]
discovery.zen.minimum_master_nodes: 1
bootstrap.system_call_filter: false

$ cat config/jvm.options |grep -Ev "^#|^$" 
-Xmx128M
-Xms128M
....... (remaining settings left at their defaults)

Switch to a non-root user (es) to start Elasticsearch:

#useradd es -p 123456
#su - es
$bin/elasticsearch
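
If it comes up cleanly, Elasticsearch answers on the configured address with its node, cluster, and version information:

$ curl http://192.168.1.134:9200/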

4. Kibana

# Download and unzip kibana-5.2.2-windows-x86.zip,
# then edit config/kibana.yml in the unpacked directory:
server.port: 5601
server.host: "192.168.3.57"
elasticsearch.url: "http://192.168.1.134:9200"

Start

# bin\kibana.bat

Verify

Open 192.168.3.57:5601 in a browser and add an index pattern (the default logstash-* matches the indices Logstash creates); the collected Nginx log entries then show up in Discover.

(3) ELK installation errors and fixes

1. Logstash config test error

[2017-03-20T14:53:41,781][ERROR][logstash.agent           ] Cannot load an invalid configuration {:reason=>"Expected one of #, input, filter, output at line 1, column 19 (byte 19) after "}

Fix: the message means the pipeline config has a syntax problem near the start of the file. Confirm that Logstash itself runs with a minimal stdin/stdout pipeline, then recheck the config file's syntax:

 bin/logstash -e 'input { stdin { } } output { stdout {codec=>rubydebug} }'

2. Elasticsearch fails to start: ES_HEAP_SIZE is no longer supported

[root@localhost elasticsearch-5.2.2]# bin/elasticsearch
Error: encountered environment variables that are no longer supported
Use jvm.options or ES_JAVA_OPTS to configure the JVM
ES_HEAP_SIZE=512m: set -Xms512m and -Xmx512m in jvm.options or add "-Xms512m -Xmx512m" to ES_JAVA_OPTS

Fix: unset ES_HEAP_SIZE and configure the heap in config/jvm.options instead:

$ unset ES_HEAP_SIZE
$ vim config/jvm.options 
-Xmx128M
-Xms128M
#sh -x bin/elasticsearch

3. Elasticsearch fails to start: can not run as root

# ./bin/elasticsearch
[2017-03-21T09:49:15,924][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:125) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.cli.Command.main(Command.java:88) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:89) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:82) ~[elasticsearch-5.2.2.jar:5.2.2]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root

Fix: create a non-root user and start Elasticsearch as that user:

#useradd es -p 123456
#su - es
$ ./bin/elasticsearch

4. Bootstrap checks fail after setting network.host to a real IP

ERROR: bootstrap checks failed
initial heap size [130023424] not equal to maximum heap size [134217728]; this can cause resize pauses and prevents mlockall from locking the entire heap
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk

Fix (this addresses the vm.max_map_count check; the heap-size message goes away when -Xms and -Xmx in config/jvm.options are equal, and the system-call-filter message is handled in item 6 below):

[root@localhost elasticsearch-5.2.2]# sysctl -w vm.max_map_count=262144
vm.max_map_count = 262144
[root@localhost elasticsearch-5.2.2]# sysctl -a |grep vm.max_map_count
vm.max_map_count = 262144

Make the setting persistent across reboots:

# tail -1  /etc/sysctl.conf
vm.max_map_count=262144
# sysctl -p

5. Bootstrap checks failed: max file descriptors too low

ERROR: bootstrap checks failed
max file descriptors [65535] for elasticsearch process is too low, increase to at least [65536]
system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk

Fix: raise the open-file limit for the es user in /etc/security/limits.conf, then log in again as es:

# tail -2 /etc/security/limits.conf
es soft nofile 65536
es hard nofile 65536
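
After logging in again as the es user, the new limit can be confirmed with:

$ ulimit -n
65536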

6. Bootstrap checks failed: system call filters

[FMOoYUT] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
ERROR: bootstrap checks failed
system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk

Fix: disable the system-call-filter bootstrap check in elasticsearch.yml:

# tail -1 config/elasticsearch.yml
bootstrap.system_call_filter: false
