ELK 5.5 Logging System: RPM Installation Guide (Single-Node Server Deployment)

Table of Contents

    • Introduction
    • Package Preparation
    • Server-Side Installation
      • Java installation and configuration
      • Elasticsearch installation and configuration
      • Logstash installation and configuration
      • Kibana installation and configuration
    • Client-Side Installation
      • Filebeat installation and configuration
    • Writing the Log Collection Configuration Files
      • Sample client Filebeat configuration
      • Sample server Logstash configuration
      • Custom log parsing rules
      • A simple parsing rule for a Tomcat project
      • Notes on the number of fields Logstash parses
      • Key grok pattern definitions (important)

Introduction

  • ELK consists of three components: Elasticsearch, Logstash and Kibana.
  • Elasticsearch is an open-source distributed search engine. Its features include distributed operation, zero configuration, automatic discovery, automatic index sharding, index replicas, a RESTful interface, multiple data sources, and automatic search load balancing.
  • Logstash is a fully open-source tool that collects and parses your logs and stores them for later use.
  • Kibana is a free, open-source tool that provides a friendly web interface for the logs held by Logstash and Elasticsearch, helping you aggregate, analyse and search important log data.
  • Filebeat is a log shipper. Installed on a client server, it monitors log directories or specific log files, tails them (following changes and reading continuously), and forwards the data to Elasticsearch or Logstash.

Package Preparation

Environment: CentOS 7; server IP: 10.168.11.10
logstash 5.5 download link
kibana 5.5 download link
JDK 8 download link
filebeat 5.5 download link
elasticsearch 5.5 download link

PS:
The installation packages are assumed to be placed in /usr/local/src.
The ELK server needs logstash, kibana, the JDK and elasticsearch; it does not need filebeat.
Clients only need filebeat.


Server-Side Installation

Java installation and configuration
cd /usr/local/src
rpm -ivh jdk-8u65-linux-x64.rpm  #install the Java rpm package
vim /etc/profile  #add the Java environment variables by appending the following lines to the end of the file
JAVA_HOME=/usr/java/jdk1.8.0_65
JRE_HOME=/usr/java/jdk1.8.0_65/jre
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export JAVA_HOME JRE_HOME PATH CLASSPATH
source /etc/profile  #apply the environment variables to the current shell
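
To confirm the JDK is active in the current shell, a quick check (the expected version string assumes the 8u65 rpm above):

java -version  #should report java version "1.8.0_65"
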
Elasticsearch installation and configuration
cd /usr/local/src
rpm -ivh elasticsearch-5.5.0.rpm  #install the rpm package
systemctl enable elasticsearch  #enable start on boot
systemctl start elasticsearch  #start elasticsearch; netstat -lntp should now show listeners on ports 9200 and 9300

Test:
open http://10.168.11.10:9200 in a browser

PS: the configuration file is /etc/elasticsearch/elasticsearch.yml; edit it and set the following:

network.host: 10.168.11.10  #the server IP address to bind to
http.port: 9200  #the port elasticsearch listens on
bootstrap.system_call_filter: false  #system call filter bootstrap check; disabling it is recommended, otherwise startup errors are common

PS: edit /etc/elasticsearch/jvm.options and change the following, otherwise elasticsearch can easily run out of heap memory:

-Xms2g  #set to about 50% of the server's physical memory, e.g. 8g on a 16 GB machine
-Xmx2g  #set to about 50% of the server's physical memory, e.g. 8g on a 16 GB machine
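
After changing elasticsearch.yml and jvm.options, restart the service and verify that it responds; a minimal check, assuming the IP and port configured above:

systemctl restart elasticsearch
curl http://10.168.11.10:9200  #should return a small JSON document with the node name, cluster name and version number 5.5.0
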
Logstash installation and configuration
cd /usr/local/src
rpm -ivh logstash-5.5.0.rpm  #install the rpm package
systemctl enable logstash  #enable logstash start on boot
systemctl start logstash  #start logstash; once the beats pipeline below is in place, netstat -lntp will show listeners on ports 9600 (the API) and 5044 (the beats input)

PS: the configuration file is /etc/logstash/logstash.yml; little has to be changed here, the three relevant settings being:

 1. path.data: /var/lib/logstash  #where logstash stores its internal data
 2. path.config: /etc/logstash/conf.d  #the directory from which logstash reads the custom log collection pipeline files
 3. path.logs: /var/log/logstash  #where logstash writes its own log files
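
Whenever a pipeline file under /etc/logstash/conf.d is added or changed (such as the examples later in this document), it can be syntax-checked before restarting. A sketch, assuming the default rpm install path /usr/share/logstash:

/usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstash.conf  #prints "Configuration OK" when the syntax is valid
systemctl restart logstash  #apply the new pipeline
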
Kibana installation and configuration
cd /usr/local/src
rpm -ivh kibana-5.5.0-x86_64.rpm  #install the rpm package
systemctl enable kibana  #enable kibana start on boot
systemctl start kibana  #start kibana

PS: the configuration file is /etc/kibana/kibana.yml; edit it and set the following:

#server.port: 80  #commented out by default; the web port can be changed to 80 here (the default is 5601)
server.host: "10.168.11.10"  #set to the server IP address
elasticsearch.url: "http://10.168.11.10:9200"  #the address of elasticsearch

Test:
open http://10.168.11.10:5601 in a browser

  • This completes the server-side deployment. Add firewall rules as appropriate for your environment; kibana's port 5601 can also be forwarded to port 80 via firewalld:
firewall-cmd --add-forward-port=port=80:proto=tcp:toport=5601 --permanent
firewall-cmd --reload
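
If firewalld is enabled, the ports used above also need to be opened between the clients, the server and the users' browsers. An illustrative set of rules (adjust to your own security policy):

firewall-cmd --add-port=5601/tcp --permanent  #kibana web interface
firewall-cmd --add-port=5044/tcp --permanent  #beats input, reached by the filebeat clients
firewall-cmd --add-port=9200/tcp --permanent  #elasticsearch HTTP, only needed if it is accessed from other hosts
firewall-cmd --reload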

Client-Side Installation

Filebeat installation and configuration
cd /usr/local/src
rpm -ivh filebeat-5.5.0-x86_64.rpm  #install the rpm package
systemctl enable filebeat  #enable start on boot
systemctl start filebeat  #start filebeat

PS: the client only ships the collected log content to logstash on the server, so filebeat is the only package it needs.


Writing the Log Collection Configuration Files

Sample client Filebeat configuration

(edit /etc/filebeat/filebeat.yml)

filebeat:
  prospectors:
    -
      paths:
        - "/usr/local/tomcat/logs/localhost_*.txt"  #指定采集的日志路径
      fields:
        input_type: log
        tag: 11_18-ycwb-wcp-tomcatlog  #custom tag for this log source (matches the elasticsearch index name)
 
    -
      paths:
        - "/var/log/messages*"   
      fields:
        tag: 11_18-ycwb-wcp-messageslog

    -
      paths:
        - "/var/log/secure*"
      fields:
        tag: 11_18-ycwb-wcp-securelog

    -
      paths:
        - "/var/log/cron*"
      fields:
        tag: 11_18-ycwb-wcp-cronlog

    -
      paths:
        - "/var/log/boot.log"
      fields:
        tag: 11_18-ycwb-wcp-bootlog
 
output:
  logstash:
       hosts: ["10.168.11.10:5044"]  #指定输出到10.168.11.10的logstash服务5044端口
Sample server Logstash configuration

(edit /etc/logstash/conf.d/logstash.conf)

input {
  beats {
    port => "5044"  #the port used to receive events from filebeat
  }
}

output {
  if [fields][tag] == "11_18-ycwb-wcp-tomcatlog" {  #this tag matches the tag set in filebeat
    elasticsearch {
      hosts => "10.168.11.10:9200"  #the elasticsearch address
      index => "11_18-ycwb-wcp-tomcatlog"  #the index name to create, matching the filebeat tag
    }
  }
  if [fields][tag] == "11_18-ycwb-wcp-messageslog" {
    elasticsearch {
      hosts => "10.168.11.10:9200"
      index => "11_18-ycwb-wcp-messageslog-%{+YYYY.MM.dd}"  #this form appends the creation date to the index name, so a new index is created each day
    }
  }
  if [fields][tag] == "11_18-ycwb-wcp-securelog" {
    elasticsearch {
      hosts => "10.168.11.10:9200"
      index => "11_18-ycwb-wcp-securelog"
    }
  }
  if [fields][tag] == "11_18-ycwb-wcp-cronlog" {
    elasticsearch {
      hosts => "10.168.11.10:9200"
      index => "11_18-ycwb-wcp-cronlog"
    }
  }
  if [fields][tag] == "11_18-ycwb-wcp-bootlog" {
    elasticsearch {
      hosts => "10.168.11.10:9200"
      index => "11_18-ycwb-wcp-bootlog"
    }
  }
}
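
After the pipeline file is in place, restart Logstash and confirm that indices are being created once the clients start shipping logs. A minimal check, assuming the addresses used above:

systemctl restart logstash
curl 'http://10.168.11.10:9200/_cat/indices?v'  #the tag-named indices should appear here as events arrive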
Custom log parsing rules

PS: with the configuration above, what appears in Kibana is the raw, unparsed message. To define parsing rules for the output, use a filter together with the grok plugin; for example, an nginx log can be split so that each part of the line becomes its own field, or the client IP can be resolved with geoip. This requires logstash's grok and geoip plugins (installed with the logstash-plugin script in logstash's bin directory):

logstash/bin/logstash-plugin install logstash-filter-geoip  #install the geoip plugin
logstash/bin/logstash-plugin install logstash-filter-grok  #install the grok plugin

Once installed, you can create a new test file to experiment with field parsing. The following test file targets Tomcat access logs:

input { stdin { } }  #read test log lines from the local console

filter {
    grok {
        match => { "message" => "%{IPORHOST:clientip} - - %{NOTSPACE:LogTime} %{NOTSPACE:timezone} %{NOTSPACE:Method} %{NOTSPACE:Path} %{NOTSPACE:httpversion} %{NOTSPACE:ReturnValue} %{NOTSPACE:Value}" }  #log parsing rule; see the official documentation for details, the syntax takes some getting used to
    }
    geoip {
        source => "clientip"
    }
}

output {
    stdout { codec => rubydebug }  #print the parsed result to the console
}
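
To run the test, start Logstash in the foreground with this file and paste a log line at the prompt. A sketch, assuming the file was saved as /etc/logstash/test.conf; the sample line below is a hypothetical Tomcat access-log entry that matches the pattern above (geoip only resolves routable public addresses):

systemctl stop logstash  #avoid two instances competing for the same data directory
/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/test.conf
#example line to paste; grok splits it into clientip, LogTime, timezone, Method, Path, httpversion, ReturnValue and Value:
#203.0.113.25 - - [10/Jul/2017:08:30:15 +0800] "GET /index.jsp HTTP/1.1" 200 1024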

A simple parsing rule for a Tomcat project
input {
  beats {
    port => "5044"
  }
}

filter {
    grok {
        match => { "message" => "%{IPORHOST:clientip} - - %{NOTSPACE:LogTime} %{NOTSPACE:timezone} %{NOTSPACE:Method} %{NOTSPACE:Path} %{NOTSPACE:httpversion} %{NOTSPACE:ReturnValue} %{NOTSPACE:Value}" }
    }
    geoip {
        source => "clientip"
    }
}

output {
    if [fields][tag] == "11_18-ycwb-wcp-tomcatlog" {
        elasticsearch {
            hosts => "10.168.11.10:9200"
            index => "11_18-ycwb-wcp-tomcatlog"
        }
    }
}
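
As with the other examples, this file goes under /etc/logstash/conf.d/ and takes effect after a restart; the grok fields (clientip, Method, Path, ReturnValue and so on) and the geoip fields then show up as separate fields in Kibana. A minimal deploy step, assuming the hypothetical file name tomcat.conf:

cp tomcat.conf /etc/logstash/conf.d/  #hypothetical file name
systemctl restart logstash
tail -f /var/log/logstash/logstash-plain.log  #watch for grok or pipeline errors after the restart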
Notes on the number of fields Logstash parses

As the number of fields Logstash has to parse grows, latency increases. Measured results:
up to 22 parsed fields: latency under about 1 second
23 parsed fields: latency under 2 seconds
24 parsed fields: latency around 2 seconds
25 parsed fields: latency around 3 seconds
26 parsed fields: latency around 5-6 seconds
more than 26 parsed fields: latency of 30 seconds or more

Key grok pattern definitions (important)
USERNAME [a-zA-Z0-9._-]+
USER %{USERNAME}
INT (?:[+-]?(?:[0-9]+))
BASE10NUM (?<![0-9.+-])(?>[+-]?(?:(?:[0-9]+(?:\.[0-9]+)?)|(?:\.[0-9]+)))
NUMBER (?:%{BASE10NUM})
BASE16NUM (?<![0-9A-Fa-f])(?:[+-]?(?:0x)?(?:[0-9A-Fa-f]+))
BASE16FLOAT \b(?<![0-9A-Fa-f.])(?:[+-]?(?:0x)?(?:(?:[0-9A-Fa-f]+(?:\.[0-9A-Fa-f]*)?)|(?:\.[0-9A-Fa-f]+)))\b
POSINT \b(?:[1-9][0-9]*)\b
NONNEGINT \b(?:[0-9]+)\b
WORD \b\w+\b
NOTSPACE \S+
SPACE \s*
DATA .*?
GREEDYDATA .*
QUOTEDSTRING (?>(?<!\\)(?>"(?>\\.|[^\\"]+)+"|""|(?>'(?>\\.|[^\\']+)+')|''|(?>`(?>\\.|[^\\`]+)+`)|``))
UUID [A-Fa-f0-9]{8}-(?:[A-Fa-f0-9]{4}-){3}[A-Fa-f0-9]{12}
# Networking
MAC (?:%{CISCOMAC}|%{WINDOWSMAC}|%{COMMONMAC})
CISCOMAC (?:(?:[A-Fa-f0-9]{4}\.){2}[A-Fa-f0-9]{4})
WINDOWSMAC (?:(?:[A-Fa-f0-9]{2}-){5}[A-Fa-f0-9]{2})
COMMONMAC (?:(?:[A-Fa-f0-9]{2}:){5}[A-Fa-f0-9]{2})
IPV6 ((([0-9A-Fa-f]{1,4}:){7}([0-9A-Fa-f]{1,4}|:))|(([0-9A-Fa-f]{1,4}:){6}(:[0-9A-Fa-f]{1,4}|((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){5}(((:[0-9A-Fa-f]{1,4}){1,2})|:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){4}(((:[0-9A-Fa-f]{1,4}){1,3})|((:[0-9A-Fa-f]{1,4})?:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){3}(((:[0-9A-Fa-f]{1,4}){1,4})|((:[0-9A-Fa-f]{1,4}){0,2}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){2}(((:[0-9A-Fa-f]{1,4}){1,5})|((:[0-9A-Fa-f]{1,4}){0,3}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){1}(((:[0-9A-Fa-f]{1,4}){1,6})|((:[0-9A-Fa-f]{1,4}){0,4}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(:(((:[0-9A-Fa-f]{1,4}){1,7})|((:[0-9A-Fa-f]{1,4}){0,5}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:)))(%.+)?
IPV4 (?<![0-9])(?:(?:[0-1]?[0-9]{1,2}|2[0-4][0-9]|25[0-5])[.](?:[0-1]?[0-9]{1,2}|2[0-4][0-9]|25[0-5])[.](?:[0-1]?[0-9]{1,2}|2[0-4][0-9]|25[0-5])[.](?:[0-1]?[0-9]{1,2}|2[0-4][0-9]|25[0-5]))(?![0-9])
IP (?:%{IPV6}|%{IPV4})
HOSTNAME \b(?:[0-9A-Za-z][0-9A-Za-z-]{0,62})(?:\.(?:[0-9A-Za-z][0-9A-Za-z-]{0,62}))*(\.?|\b)
HOST %{HOSTNAME}
IPORHOST (?:%{HOSTNAME}|%{IP})
HOSTPORT %{IPORHOST}:%{POSINT}
# paths
PATH (?:%{UNIXPATH}|%{WINPATH})
UNIXPATH (?>/(?>[\w_%!$@:.,-]+|\\.)*)+
TTY (?:/dev/(pts|tty([pq])?)(\w+)?/?(?:[0-9]+))
WINPATH (?>[A-Za-z]+:|\\)(?:\\[^\\?*]*)+
URIPROTO [A-Za-z]+(\+[A-Za-z+]+)?
URIHOST %{IPORHOST}(?::%{POSINT:port})?
# uripath comes loosely from RFC1738, but mostly from what Firefox
# doesn't turn into %XX
URIPATH (?:/[A-Za-z0-9$.+!*'(){},~:;=@#%_\-]*)+
#URIPARAM \?(?:[A-Za-z0-9]+(?:=(?:[^&]*))?(?:&(?:[A-Za-z0-9]+(?:=(?:[^&]*))?)?)*)?
URIPARAM \?[A-Za-z0-9$.+!*'|(){},~@#%&/=:;_?\-\[\]]*
URIPATHPARAM %{URIPATH}(?:%{URIPARAM})?
URI %{URIPROTO}://(?:%{USER}(?::[^@]*)?@)?(?:%{URIHOST})?(?:%{URIPATHPARAM})?
# Months: January, Feb, 3, 03, 12, December
MONTH \b(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\b
MONTHNUM (?:0?[1-9]|1[0-2])
MONTHNUM2 (?:0[1-9]|1[0-2])
MONTHDAY (?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9])
# Days: Monday, Tue, Thu, etc...
DAY (?:Mon(?:day)?|Tue(?:sday)?|Wed(?:nesday)?|Thu(?:rsday)?|Fri(?:day)?|Sat(?:urday)?|Sun(?:day)?)
# Years?
YEAR (?>\d\d){1,2}
HOUR (?:2[0123]|[01]?[0-9])
MINUTE (?:[0-5][0-9])
# '60' is a leap second in most time standards and thus is valid.
SECOND (?:(?:[0-5]?[0-9]|60)(?:[:.,][0-9]+)?)
TIME (?!<[0-9])%{HOUR}:%{MINUTE}(?::%{SECOND})(?![0-9])
# datestamp is YYYY/MM/DD-HH:MM:SS.UUUU (or something like it)
DATE_US %{MONTHNUM}[/-]%{MONTHDAY}[/-]%{YEAR}
DATE_EU %{MONTHDAY}[./-]%{MONTHNUM}[./-]%{YEAR}
ISO8601_TIMEZONE (?:Z|[+-]%{HOUR}(?::?%{MINUTE}))
ISO8601_SECOND (?:%{SECOND}|60)
TIMESTAMP_ISO8601 %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?%{ISO8601_TIMEZONE}?
DATE %{DATE_US}|%{DATE_EU}
DATESTAMP %{DATE}[- ]%{TIME}
TZ (?:[PMCE][SD]T|UTC)
DATESTAMP_RFC822 %{DAY} %{MONTH} %{MONTHDAY} %{YEAR} %{TIME} %{TZ}
DATESTAMP_RFC2822 %{DAY}, %{MONTHDAY} %{MONTH} %{YEAR} %{TIME} %{ISO8601_TIMEZONE}
DATESTAMP_OTHER %{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{TZ} %{YEAR}
DATESTAMP_EVENTLOG %{YEAR}%{MONTHNUM2}%{MONTHDAY}%{HOUR}%{MINUTE}%{SECOND}
# Syslog Dates: Month Day HH:MM:SS
SYSLOGTIMESTAMP %{MONTH} +%{MONTHDAY} %{TIME}
PROG (?:[\w._/%-]+)
SYSLOGPROG %{PROG:program}(?:\[%{POSINT:pid}\])?
SYSLOGHOST %{IPORHOST}
SYSLOGFACILITY <%{NONNEGINT:facility}.%{NONNEGINT:priority}>
HTTPDATE %{MONTHDAY}/%{MONTH}/%{YEAR}:%{TIME} %{INT}
# Shortcuts
QS %{QUOTEDSTRING}
# Log formats
SYSLOGBASE %{SYSLOGTIMESTAMP:timestamp} (?:%{SYSLOGFACILITY} )?%{SYSLOGHOST:logsource} %{SYSLOGPROG}:
COMMONAPACHELOG %{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
COMBINEDAPACHELOG %{COMMONAPACHELOG} %{QS:referrer} %{QS:agent}
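
These named patterns can be referenced directly in a grok match instead of chaining NOTSPACE captures. For example, a combined-format access log (Apache, or nginx with its default log format) can usually be parsed with a single pattern; a sketch:

filter {
    grok {
        #COMBINEDAPACHELOG expands to the COMMONAPACHELOG pattern above plus the quoted referrer and agent fields
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
}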

