rabbitmq-3.7.6 — download and install the matching version from the official site, along with the corresponding Erlang release (see the reference link)
elasticsearch-7.6.2 (just download and unzip); cluster setup is optional reading
elasticsearch-head (download and unzip into the appropriate directory; explained below)
kibana-7.6.2 (just download and unzip)
logstash-7.6.2 (just download and unzip)
1. Configure Elasticsearch
Open the elasticsearch.yml file in the Elasticsearch config directory and add the following settings:
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
# Elasticsearch data directory
path.data: D:\dev\devsoft\elk\data
#
# Path to log files:
# Elasticsearch log directory
path.logs: D:\dev\devsoft\elk\logs
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
# The following two settings enable CORS so that elasticsearch-head can connect
http.cors.enabled: true
http.cors.allow-origin: "*"
Double-click elasticsearch.bat in the Elasticsearch bin directory to start it.
Once it is up, visit: http://localhost:9200
2. Configure the elasticsearch-head plugin
Open a command window in the elasticsearch-head directory and run the following commands (Node.js/npm must already be installed):
> npm install
# Start elasticsearch-head
> npm run start
Then visit: http://localhost:9100
3. Configure Kibana
Open the kibana.yml file in Kibana's config directory and add the following settings:
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "localhost"
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://localhost:9200"]
# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"
i18n.locale: "zh-CN"
Double-click kibana.bat in the bin directory to start it.
4. Connect Logstash to RabbitMQ
Create a configuration file named rabbitmq-log.conf in Logstash's bin directory and add the following; the full set of options is covered in the official documentation linked above:
input {
  # Input section; here we use the rabbitmq input plugin
  rabbitmq {
    # RabbitMQ host
    host => "127.0.0.1"
    # RabbitMQ port
    port => 5672
    # codec describes the format of the incoming data
    codec => "json"
    # Exchange to consume from
    exchange => "ex_es_logstash"
    # Queue to listen on; when this is set, key is optional
    queue => "es-log-queue"
    # Routing key to bind with
    key => "elk-es-log"
    # Whether the queue is durable
    durable => true
    # type is free-form and can serve as a tag downstream
    type => "es"
  }
}
filter {
  # Filtering is optional
  # Since the input codec is json, no grok pattern matching is needed
}
output {
  # Outside of strings, reference event fields with brackets, e.g. [type]
  if [type] == "es" {
    elasticsearch {
      # Elasticsearch address
      hosts => ["127.0.0.1:9200"]
      # Build the index name from the event's type and the date
      # Inside strings, reference fields with %{}, e.g. %{type}
      index => "es-%{type}_log-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "es-%{type}-log-%{+YYYY.MM.dd}"
    }
  }
}
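If a source ever emits plain text instead of JSON, the empty filter section could parse it with grok. A hedged sketch (the line format and field names here are illustrative assumptions, not part of the original setup):

```
filter {
  grok {
    # Assumed line format: "2020-05-01T12:00:00 INFO some message"
    match => { "message" => "%{TIMESTAMP_ISO8601:time} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
```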
Open a command window in the bin directory and run:
> logstash.bat -f rabbitmq-log.conf
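The index option above is expanded per event: %{type} is replaced by the event's type field and %{+YYYY.MM.dd} by the event's date, so each day gets its own index. A minimal sketch of the equivalent expansion (illustrative only; Logstash does this internally):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class IndexNameSketch {
    // Mirrors index => "es-%{type}_log-%{+YYYY.MM.dd}" for a given event type and date.
    static String indexFor(String type, LocalDate day) {
        return "es-" + type + "_log-" + day.format(DateTimeFormatter.ofPattern("yyyy.MM.dd"));
    }

    public static void main(String[] args) {
        // An event with type "es" logged on 2020-05-01 lands in index "es-es_log-2020.05.01"
        System.out.println(indexFor("es", LocalDate.of(2020, 5, 1)));
    }
}
```

Daily indices keep retention simple: deleting old log data is just deleting old indices.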
5. Ship application logs to RabbitMQ
Add the RabbitMQ (AMQP) starter dependency:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-amqp</artifactId>
    <version>2.2.1.RELEASE</version>
</dependency>
Add the following to the project's logback configuration file, logback-spring.xml:
<appender name="AMQP" class="org.springframework.amqp.rabbit.logback.AmqpAppender">
    <layout>
        <pattern>
            {
            "time": "%date{ISO8601}",
            "thread": "%thread",
            "level": "%level",
            "class": "%logger{60}",
            "message": "%msg"
            }
        </pattern>
    </layout>
    <host>127.0.0.1</host>
    <port>5672</port>
    <username>admin</username>
    <password>root</password>
    <applicationId>byterun-es-service</applicationId>
    <routingKeyPattern>elk-es-log</routingKeyPattern>
    <declareExchange>true</declareExchange>
    <exchangeType>direct</exchangeType>
    <exchangeName>ex_es_logstash</exchangeName>
    <generateId>true</generateId>
    <charset>UTF-8</charset>
    <durable>true</durable>
    <deliveryMode>PERSISTENT</deliveryMode>
</appender>
<root level="INFO">
    <appender-ref ref="AMQP" />
</root>
<logger name="com.xxx" level="DEBUG" additivity="false">
    <appender-ref ref="AMQP" />
</logger>
For the full list of available options, see the source of org.springframework.amqp.rabbit.logback.AmqpAppender.
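One caveat with the hand-built JSON layout above: %msg is interpolated verbatim, so a log message containing quotes or newlines yields invalid JSON that the Logstash json codec will reject or tag as a parse failure. A minimal escaper sketch (an assumption about one way to pre-process messages; JSON-aware layouts/encoders for logback exist as a more robust alternative):

```java
public class JsonEscapeSketch {
    // Escapes the characters that would break the hand-built JSON in the layout above.
    static String escape(String msg) {
        StringBuilder sb = new StringBuilder(msg.length());
        for (char c : msg.toCharArray()) {
            switch (c) {
                case '"':  sb.append("\\\""); break;
                case '\\': sb.append("\\\\"); break;
                case '\n': sb.append("\\n");  break;
                case '\r': sb.append("\\r");  break;
                case '\t': sb.append("\\t");  break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(escape("user \"bob\" logged in"));
    }
}
```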
Full configuration:
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <Pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</Pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>
    <appender name="FILE"
        class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>logs/byterun-es-service.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>logs/byterun-es-service.%d{yyyy-MM-dd}-%i.log</fileNamePattern>
            <maxHistory>10</maxHistory>
            <timeBasedFileNamingAndTriggeringPolicy
                class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <MaxFileSize>30MB</MaxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
        </rollingPolicy>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <appender name="FILE-ERROR"
        class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>logs/byterun-es-service.err</file>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>logs/byterun-es-service.%d{yyyy-MM-dd}-%i.err</fileNamePattern>
            <maxHistory>10</maxHistory>
            <timeBasedFileNamingAndTriggeringPolicy
                class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <MaxFileSize>30MB</MaxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
        </rollingPolicy>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <appender name="AMQP" class="org.springframework.amqp.rabbit.logback.AmqpAppender">
        <layout>
            <pattern>
                {
                "time": "%date{ISO8601}",
                "thread": "%thread",
                "level": "%level",
                "class": "%logger{60}",
                "message": "%msg"
                }
            </pattern>
        </layout>
        <host>127.0.0.1</host>
        <port>5672</port>
        <username>admin</username>
        <password>root</password>
        <applicationId>byterun-es-service</applicationId>
        <routingKeyPattern>elk-es-log</routingKeyPattern>
        <declareExchange>true</declareExchange>
        <exchangeType>direct</exchangeType>
        <exchangeName>ex_es_logstash</exchangeName>
        <generateId>true</generateId>
        <charset>UTF-8</charset>
        <durable>true</durable>
        <deliveryMode>PERSISTENT</deliveryMode>
    </appender>
    <root level="INFO">
        <appender-ref ref="STDOUT"/>
        <appender-ref ref="FILE"/>
        <appender-ref ref="FILE-ERROR"/>
        <appender-ref ref="AMQP" />
    </root>
    <logger name="com.es" level="DEBUG" additivity="false">
        <appender-ref ref="STDOUT"/>
        <appender-ref ref="FILE"/>
        <appender-ref ref="FILE-ERROR"/>
        <appender-ref ref="AMQP" />
    </logger>
    <appender name="accessLog" class="ch.qos.logback.core.FileAppender">
        <file>logs/access_log.log</file>
        <encoder>
            <pattern>%msg%n</pattern>
        </encoder>
    </appender>
    <appender name="async" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="accessLog"/>
    </appender>
    <logger name="org.springframework.jdbc.core" level="DEBUG" additivity="false">
        <appender-ref ref="STDOUT"/>
        <appender-ref ref="FILE"/>
    </logger>
    <appender name="eventTrackLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>logs/byterun-es-service-event-track.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>logs/byterun-es-service-event-track.%d{yyyy-MM-dd}-%i.log</fileNamePattern>
            <maxHistory>10</maxHistory>
            <timeBasedFileNamingAndTriggeringPolicy
                class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <MaxFileSize>30MB</MaxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
        </rollingPolicy>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %msg%n</pattern>
        </encoder>
    </appender>
    <logger name="service-event-track" level="INFO" additivity="false">
        <appender-ref ref="eventTrackLog"/>
    </logger>
</configuration>
Start-up order: elasticsearch => elasticsearch-head => kibana => logstash => application
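Note that routingKeyPattern in logback, key in the Logstash input, and the shared exchange name must all line up: a direct exchange delivers a message only to queues bound with a key that exactly equals the message's routing key. A toy simulation of that matching rule (not RabbitMQ client code, just the rule):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DirectExchangeSketch {
    // Queue bindings: routing key -> queues bound with that key
    private final Map<String, List<String>> bindings = new HashMap<>();

    void bind(String queue, String key) {
        bindings.computeIfAbsent(key, k -> new ArrayList<>()).add(queue);
    }

    // Direct exchange semantics: deliver only on an exact key match
    List<String> route(String routingKey) {
        return bindings.getOrDefault(routingKey, Collections.emptyList());
    }

    public static void main(String[] args) {
        DirectExchangeSketch ex = new DirectExchangeSketch();
        ex.bind("es-log-queue", "elk-es-log"); // matches the Logstash input above
        System.out.println(ex.route("elk-es-log")); // delivered to es-log-queue
        System.out.println(ex.route("other-key"));  // no match: dropped
    }
}
```

If the keys drift apart, the appender still publishes successfully, but the messages are silently unroutable, which makes this the first thing to check when logs never reach Elasticsearch.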