In production, nginx logs usually use a custom format. To make searching and aggregation in Kibana convenient, we need to structure the Logstash message field before storing it, which means parsing the message.
This article uses the grok filter with match regular expressions, tailored to our own log_format.
1. nginx log format
The log_format is configured as follows:
log_format main '$remote_addr - $remote_user [$time_local] $http_host $request_method "$uri" "$query_string" '
                '$status $body_bytes_sent "$http_referer" $upstream_status $upstream_addr $request_time $upstream_response_time '
                '"$http_user_agent" "$http_x_forwarded_for"';
A matching log line:
192.172.2.1 - - [06/Jun/2016:00:00:01 +0800] test.changh.com GET "/api/index" "?cms=0&rnd=1692442321" 200 4 "http://www.test.com/?cp=sfwefsc" 200 192.168.0.122:80 0.004 0.004 "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36" "-"
2. Writing the regular expressions
Logstash ships with a set of predefined grok patterns. You can explore them through the Grok Debugger, or browse them on disk in the logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-4.1.1/patterns/ directory.
The basic patterns are defined in grok-patterns. We can reuse these directly, but not all of them fit the nginx fields; in those cases we define our own patterns and load them by pointing patterns_dir at them.
While writing patterns, the Grok Debugger and Grok Constructor tools speed up debugging. If you are not sure which built-in pattern applies, the Grok Debugger's Discover feature can auto-match a sample line. (Note that these sites may be unreachable without a proxy in some networks.)
1) nginx standard log format
Logstash's bundled grok patterns already include the Apache standard log format:
COMMONAPACHELOG %{IPORHOST:clientip} %{HTTPDUSER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
COMBINEDAPACHELOG %{COMMONAPACHELOG} %{QS:referrer} %{QS:agent}
The nginx standard log format only appends the $http_x_forwarded_for variable to this, so the grok pattern for the nginx standard log can be defined as:
MAINNGINXLOG %{COMBINEDAPACHELOG} %{QS:x_forwarded_for}
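As a sanity check, the combined format plus the trailing x_forwarded_for field can be approximated in plain Python. The grok sub-patterns are inlined by hand here, so this is a simplified sketch, not Logstash's exact regexes:

```python
import re

# Rough Python equivalent of COMBINEDAPACHELOG + %{QS:x_forwarded_for}.
# The sub-patterns (IPORHOST, HTTPDATE, QS, ...) are hand-simplified.
MAIN_NGINX_LOG = re.compile(
    r'(?P<clientip>\S+) (?P<ident>\S+) (?P<auth>\S+) \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+)(?: HTTP/(?P<httpversion>[\d.]+))?" '
    r'(?P<response>\d+) (?P<bytes>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)" "(?P<x_forwarded_for>[^"]*)"'
)

# Sample line in the nginx "combined" style (values made up)
line = ('1.2.3.4 - - [06/Jun/2016:00:00:01 +0800] "GET /api/index HTTP/1.1" '
        '200 4 "http://www.test.com/" "Mozilla/5.0" "-"')
m = MAIN_NGINX_LOG.match(line)
print(m.group("x_forwarded_for"))  # -
```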
2) Custom format
Matching our log_format above, the pattern is:
%{IPV4:remote_addr} - (%{USERNAME:user}|-) \[%{HTTPDATE:log_timestamp}\] (%{HOSTNAME1:http_host}|-) (%{WORD:request_method}|-) \"(%{URIPATH1:uri}|-|)\" \"(%{URIPARM1:param}|-)\" %{STATUS:http_status} (?:%{BASE10NUM:body_bytes_sent}|-) \"(?:%{GREEDYDATA:http_referrer}|-)\" (%{STATUS:upstream_status}|-) (?:%{HOSTPORT1:upstream_addr}|-) (%{BASE16FLOAT:upstream_response_time}|-) (%{STATUS:request_time}|-) \"(%{GREEDYDATA:user_agent}|-)\" \"(%{FORWORD:x_forword_for}|-)\"
Several patterns in there are ones I defined myself:
URIPARM1 [A-Za-z0-9$.+!*'|(){},~@#%&/=:;^\\_<>`?\-\[\]]*
URIPATH1 (?:/[\\A-Za-z0-9$.+!*'(){},~:;=@#% \[\]_<>^\-&?]*)+
HOSTNAME1 \b(?:[0-9A-Za-z_\-][0-9A-Za-z-_\-]{0,62})(?:\.(?:[0-9A-Za-z_\-][0-9A-Za-z-:\-_]{0,62}))*(\.?|\b)
STATUS ([0-9.]{0,3}[, ]{0,2})+
HOSTPORT1 (%{IPV4}:%{POSINT}[, ]{0,2})+
FORWORD (?:%{IPV4}[,]?[ ]?)+|%{WORD}
In Logstash, message holds each log line as it is read in. IPORHOST, USERNAME, HTTPDATE, and so on are pattern names defined in patterns/grok-patterns; write the expression by walking through the log field by field against them.
The grok pattern syntax is %{SYNTAX:semantic}: the part before the ":" is a pattern name defined in grok-patterns, and the part after it is a field name of your own choosing. The form (?:%{SYNTAX:semantic}|-) is an alternation that matches either the pattern or a literal "-".
Double quotes and square brackets in the log must be escaped with \.
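The %{SYNTAX:semantic} expansion can be sketched in a few lines of Python. The PATTERNS table here is a tiny hand-written stand-in for the real grok-patterns file:

```python
import re

# Minimal stand-in for grok-patterns (assumed, simplified definitions)
PATTERNS = {
    "IPV4": r"(?:\d{1,3}\.){3}\d{1,3}",
    "USER": r"[a-zA-Z0-9._-]+",
    "NUMBER": r"\d+",
}

def grok_to_regex(grok: str) -> str:
    # Replace each %{SYNTAX:semantic} with a named group (?P<semantic>...)
    def repl(m):
        syntax, semantic = m.group(1), m.group(2)
        return f"(?P<{semantic}>{PATTERNS[syntax]})"
    return re.sub(r"%\{(\w+):(\w+)\}", repl, grok)

regex = grok_to_regex(r"%{IPV4:client} - %{USER:user} %{NUMBER:status}")
m = re.match(regex, "192.168.0.1 - alice 200")
print(m.groupdict())  # {'client': '192.168.0.1', 'user': 'alice', 'status': '200'}
```

This is the essence of what grok does: pattern names expand to regexes, and the semantic part becomes the field name in the resulting event.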
The custom patterns in detail:
URIPARAM \?[A-Za-z0-9$.+!*'|(){},~@#%&/=:;_?\-\[\]]*
URIPARM1 [A-Za-z0-9$.+!*'|(){},~@#%&/=:;^\\_<>`?\-\[\]]*
The first is the grok-patterns version; it begins with a literal "?", but nginx's $query_string has the "?" already stripped, so URIPARM1 drops it. It also adds the ^ \ _ < > ` special characters that appear in our logs.
URIPATH (?:/[A-Za-z0-9$.+!*'(){},~:;=@#%&_\-]*)+
URIPATH1 (?:/[\\A-Za-z0-9$.+!*'(){},~:;=@#% \[\]_<>^\-&?]*)+
The grok-patterns URIPATH cannot match URIs containing spaces, so URIPATH1 adds a space to the character class, along with the \ [ ] < > ^ special characters.
HOSTNAME \b(?:[0-9A-Za-z][0-9A-Za-z-]{0,62})(?:\.(?:[0-9A-Za-z][0-9A-Za-z-]{0,62}))*(\.?|\b)
HOSTNAME1 \b(?:[0-9A-Za-z_\-][0-9A-Za-z-_\-]{0,62})(?:\.(?:[0-9A-Za-z_\-][0-9A-Za-z-:\-_]{0,62}))*(\.?|\b)
HOSTNAME1 extends HOSTNAME to accept http_host values containing characters such as "-" and "_" in positions the original pattern disallows.
HOSTPORT %{IPORHOST}:%{POSINT}
HOSTPORT1 (%{IPV4}:%{POSINT}[, ]{0,2})+
When matching upstream_addr I found that multiple IP addresses can appear, so HOSTPORT1 matches a comma-separated list of address:port pairs.
STATUS ([0-9.]{0,3}[, ]{0,2})+ This matches multiple status codes when the log contains several upstream_addr entries.
FORWORD (?:%{IPV4}[,]?[ ]?)+|%{WORD} This matches x_forword_for when it contains multiple IP addresses (or a single word such as "unknown").
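The multi-value patterns above can be sanity-checked outside Logstash by inlining IPV4 and POSINT by hand. A rough Python sketch, not grok's exact expansion:

```python
import re

# Hand-inlined versions of the custom sub-patterns (simplified sketch)
IPV4 = r"(?:\d{1,3}\.){3}\d{1,3}"
STATUS = r"([0-9.]{0,3}[, ]{0,2})+"
HOSTPORT1 = rf"({IPV4}:\d+[, ]{{0,2}})+"
FORWORD = rf"(?:{IPV4}[,]?[ ]?)+|\w+"

# Multiple upstream statuses / addresses, as produced by nginx retries
assert re.fullmatch(STATUS, "200, 502")
assert re.fullmatch(HOSTPORT1, "192.168.0.122:80, 192.168.0.123:80")
# x_forwarded_for may hold an IP chain or a bare word
assert re.fullmatch(FORWORD, "1.1.1.1, 2.2.2.2")
assert re.fullmatch(FORWORD, "unknown")
print("all custom patterns matched")
```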
With all the nginx fields now defined, test them with the Grok Debugger or Grok Constructor tools. When adding custom patterns in the Grok Debugger, tick "Add custom patterns".
The match result for the log line above is:
{
"remote_addr": [
"1.1.1.1"
],
"user": [
"-"
],
"log_timestamp": [
"06/Jun/2016:00:00:01 +0800"
],
"http_host": [
"www.test.com"
],
"request_method": [
"GET"
],
"uri": [
"/api/index"
],
"param": [
"?cms=0&rnd=1692442321"
],
"http_status": [
"200"
],
"body_bytes_sent": [
"4"
],
"http_referrer": [
"http://www.test.com/?cp=sfwefsc"
],
"port": [
null
],
"upstream_status": [
"200"
],
"upstream_addr": [
"192.168.0.122:80"
],
"upstream_response_time": [
"0.004"
],
"request_time": [
"0.004"
],
"user_agent": [
""Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36""
],
"client_ip": [
"2.2.2.2"
],
"x_forword_for": [
null
]
}
However, in our environment nginx's log_format is defined as follows:
log_format access '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" $http_x_forwarded_for'
'$upstream_addr $upstream_response_time $request_time ';
So in the Grok Debugger my pattern has to be written like this, with the custom patterns above added as well:
%{IPV4:remote_addr} - (%{USERNAME:user}|-) \[%{HTTPDATE:log_timestamp}\] \"%{WORD:request_method} %{URIPATH1:uri}\" %{BASE10NUM:http_status} (?:%{BASE10NUM:body_bytes_sent}|-) \"(?:%{GREEDYDATA:http_referrer}|-)\" \"(%{GREEDYDATA:user_agent}|-)\"
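As a rough check, that simplified pattern translates to the following Python regex (sub-patterns inlined by hand; the sample line is made up):

```python
import re

# Hand-inlined Python equivalent of the simplified grok pattern above
pattern = re.compile(
    r'(?P<remote_addr>(?:\d{1,3}\.){3}\d{1,3}) - (?P<user>\S+) '
    r'\[(?P<log_timestamp>[^\]]+)\] "(?P<request_method>\w+) (?P<uri>\S+)" '
    r'(?P<http_status>\d+) (?P<body_bytes_sent>\d+|-) '
    r'"(?P<http_referrer>[^"]*)" "(?P<user_agent>[^"]*)"'
)

line = '10.0.0.1 - - [06/Jun/2016:00:00:01 +0800] "GET /api/index" 200 4 "-" "curl/7.47.0"'
m = pattern.match(line)
print(m.group("uri"), m.group("http_status"))  # /api/index 200
```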
3. Logstash configuration
Create a directory for the custom patterns:
# mkdir -p /usr/local/logstash/patterns
# vi /usr/local/logstash/patterns/nginx
Then write the custom patterns from above into it:
URIPARM1 [A-Za-z0-9$.+!*'|(){},~@#%&/=:;^\\_<>`?\-\[\]]*
URIPATH1 (?:/[\\A-Za-z0-9$.+!*'(){},~:;=@#% \[\]_<>^\-&?]*)+
HOSTNAME1 \b(?:[0-9A-Za-z_\-][0-9A-Za-z-_\-]{0,62})(?:\.(?:[0-9A-Za-z_\-][0-9A-Za-z-:\-_]{0,62}))*(\.?|\b)
STATUS ([0-9.]{0,3}[, ]{0,2})+
HOSTPORT1 (%{IPV4}:%{POSINT}[, ]{0,2})+
FORWORD (?:%{IPV4}[,]?[ ]?)+|%{WORD}
URIPARM [A-Za-z0-9$.+!*'|(){},~@#%&/=:;_?\-\[\]]*
URIPATH (?:/[A-Za-z0-9$.+!*'(){},~:;=@#%&_\- ]*)+
URI1 (%{URIPROTO}://)?(?:%{USER}(?::[^@]*)?@)?(?:%{URIHOST})?(?:%{URIPATHPARAM})?
NGINXACCESS %{IPORHOST:remote_addr} - (%{USERNAME:user}|-) \[%{HTTPDATE:log_timestamp}\] \"%{WORD:request_method} %{URIPATH1:uri}\" %{BASE10NUM:http_status} (?:%{BASE10NUM:body_bytes_sent}|-) \"(?:%{GREEDYDATA:http_referrer}|-)\" \"(%{GREEDYDATA:user_agent}|-)\" (%{FORWORD:x_forword_for}|-) (?:%{HOSTPORT1:upstream_addr}|-) (%{BASE16FLOAT:upstream_response_time}|-) (%{STATUS:request_time}|-)
The contents of logstash.conf:
input {
beats {
port => 5044
type => "nginx-log"
}
}
filter {
if [type] == "nginx-log"{
grok {
patterns_dir => "/usr/local/logstash/patterns"
match => {"message" => "%{NGINXACCESS}" }
}
date {
# "log_timestamp" is the field name captured by the NGINXACCESS pattern above
match => [ "log_timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]
}
geoip {
# use the client address field captured by NGINXACCESS
source => "remote_addr"
}
}
}
output {
elasticsearch {
hosts => ["10.129.11.87:9200","10.129.11.88:9200"]
index => "logstash-custom-nginx%{+YYYY.MM.dd}"
document_type => "%{type}"
flush_size => 20000
idle_flush_time => 10
sniffing => true
template_overwrite => true
}
}
4. Start Logstash, then check whether the logs are being written into Elasticsearch.
==========================
If you want Grafana to read the logs from Elasticsearch and display monitoring dashboards, nginx can instead be configured as follows:
log_format main '{"@timestamp":"$time_iso8601",'
'"@source":"$server_addr",'
'"hostname":"$hostname",'
'"ip":"$http_x_forwarded_for",'
'"client":"$remote_addr",'
'"request_method":"$request_method",'
'"scheme":"$scheme",'
'"domain":"$server_name",'
'"referer":"$http_referer",'
'"request":"$request_uri",'
'"args":"$args",'
'"size":$body_bytes_sent,'
'"status": $status,'
'"responsetime":$request_time,'
'"upstreamtime":"$upstream_response_time",'
'"upstreamaddr":"$upstream_addr",'
'"http_user_agent":"$http_user_agent",'
'"https":"$https"'
'}';
These fields line up with the ready-made Grafana dashboard template for nginx.
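With this format Logstash no longer needs grok at all: each line is already a JSON document, which is what the `codec => json` input setting below exploits. A minimal sketch of the per-line parsing (sample values are made up):

```python
import json

# One nginx access-log line emitted by the JSON log_format above
# (field values here are invented for illustration)
line = ('{"@timestamp":"2016-06-06T00:00:01+08:00",'
        '"client":"192.168.0.1","request_method":"GET",'
        '"size":4,"status": 200,"responsetime":0.004}')

# Equivalent of codec => json: the line becomes a structured event directly.
# Note that size/status/responsetime are unquoted in the log_format,
# so they arrive as numbers and need no mutate/convert step.
event = json.loads(line)
print(event["status"], event["responsetime"])  # 200 0.004
```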
input {
file {
# use a glob to match all virtual-host access logs; adjust to your own log file naming
path => [ "/usr/local/nginx/logs/*_access.log" ]
ignore_older => 0
codec => json
}
}
filter {
mutate {
convert => [ "status","integer" ]
convert => [ "size","integer" ]
convert => [ "upstreamtime","float" ]
remove_field => "message"
}
geoip {
source => "ip"
}
}
output {
elasticsearch {
hosts => "127.0.0.1:9200"
index => "logstash-nginx-access-%{+YYYY.MM.dd}"
}
# stdout {codec => rubydebug}
}
https://grafana.com/dashboards/2292 (the nginx-access dashboard template for Grafana)
Since I collect logs with Filebeat, my configuration is as follows:
input {
beats {
port => 5044
}
}
filter {
if [fields][doc_type] == "nginx_access_log" {
mutate {
convert => [ "status","integer" ]
convert => [ "size","integer" ]
convert => [ "upstreamtime","float" ]
}
geoip {
source => "ip"
}
}
}
output {
if [fields][doc_type] == "nginx_access_log"{
elasticsearch {
hosts => ["10.129.11.87:9200","10.129.11.88:9200"]
index => "logstash-nginx-access%{+YYYY.MM.dd}"
document_type => "%{type}"
flush_size => 20000
idle_flush_time => 10
sniffing => true
template_overwrite => true
}
stdout{codec => rubydebug}
}
if [fields][doc_type] == "nginx_error_log" {
elasticsearch {
hosts => ["10.129.11.87:9200","10.129.11.88:9200"]
index => "logstash-nginx-error%{+YYYY.MM.dd}"
document_type => "%{type}"
flush_size => 20000
idle_flush_time => 10
sniffing => true
template_overwrite => true
}
}
}