Sending logs to Kafka with Logback in Spring Boot
- spring-boot-starter-parent version: 1.5.8.RELEASE
- Kafka version: 2.2.1
- Goal: Logback sends logs to Kafka while still writing them to a local file and the console
Installing and verifying Kafka
Installing Kafka and ZooKeeper is out of scope here.
The commands in Kafka 2.2.1 differ slightly from some earlier versions, so a few commonly used commands are listed below.
List topics:
```sh
cd /usr/local/kafka_2.11-2.2.1/bin/
./kafka-topics.sh --zookeeper localhost:2181 --list
```
Create a topic:
```sh
./kafka-topics.sh --zookeeper localhost:2181 --create --topic <topic-name> --partitions 1 --replication-factor 1
```
Produce messages:
```sh
./kafka-console-producer.sh --broker-list localhost:9092 --topic zuul_kafka_log_topic
```
Consume messages:
```sh
./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic zuul_kafka_log_topic
```
Once Kafka is installed and started, first create the topic;
then start the console consumer, and finally start the console producer and type a few messages.
When the messages you send show up in the consumer console, Kafka is set up and working, as the session sketch below shows:
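Putting the commands above in order (assuming the topic name zuul_kafka_log_topic used earlier):
```sh
# terminal 1: create the topic, then start a console consumer on it
./kafka-topics.sh --zookeeper localhost:2181 --create --topic zuul_kafka_log_topic --partitions 1 --replication-factor 1
./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic zuul_kafka_log_topic

# terminal 2: start a console producer and type a few messages;
# they should appear in terminal 1
./kafka-console-producer.sh --broker-list localhost:9092 --topic zuul_kafka_log_topic
```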
Logback configuration
First, make sure Spring Boot and Logback are integrated correctly on their own and that logs reach both the console and a log file; a logback.xml along these lines (appender names are illustrative) does this:
```xml
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss} %-5level [%thread] %logger - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_HOME}/mservice-zuul.%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>15</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="STDOUT"/>
        <appender-ref ref="FILE"/>
    </root>
</configuration>
```
Importing the Kafka dependency
Maven:
```xml
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>2.2.1</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```
Two things to watch when importing the Kafka dependencies:
- The Kafka client version must match the broker version; a mismatch produces a pile of confusing errors (a quick connectivity check is sketched after this list).
- The Kafka client must exclude slf4j-log4j12, which otherwise conflicts with Logback's SLF4J binding.
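To sanity-check that the client and broker can actually talk before wiring up the appender, the Kafka AdminClient can list the broker's topics. A minimal sketch; the broker address is the one used later in this article:
```java
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class KafkaConnectivityCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // assumed broker address; replace with your own
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "10.41.1.112:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // a successful round trip confirms the client and broker can talk
            System.out.println(admin.listTopics().names().get());
        }
    }
}
```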
Custom Logback appender
Define a custom Logback appender class; this class sends each log message to the Kafka broker.
```java
package com.test.kafkalog;

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.AppenderBase;
import com.bigdata.constant.SystemConstant;
import java.util.Properties;
import lombok.Data;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Custom Logback appender that ships each log event to Kafka.
 *
 * @author test
 * @date 2019/6/11 13:34
 */
@Data
public class KafkaAppender extends AppenderBase<ILoggingEvent> {

    private static Logger logger = LoggerFactory.getLogger(KafkaAppender.class);

    // injected by Logback from <bootstrapServers> in logback.xml, via the Lombok-generated setter
    private String bootstrapServers;

    // Kafka producer
    private Producer<String, String> producer;

    @Override
    public void start() {
        super.start();
        if (producer == null) {
            Properties props = new Properties();
            props.put("bootstrap.servers", bootstrapServers);
            // "all" would block until the full ISR acknowledges each record
            // props.put("acks", "all");
            props.put("retries", 0);
            props.put("batch.size", 0);
            // linger.ms=1: buffer records for up to 1 ms before sending
            props.put("linger.ms", 1);
            props.put("buffer.memory", 33554432);
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            producer = new KafkaProducer<>(props);
        }
    }

    @Override
    public void stop() {
        super.stop();
        if (producer != null) {
            // flush buffered records and release resources on shutdown
            producer.close();
        }
    }

    @Override
    protected void append(ILoggingEvent eventObject) {
        String msg = eventObject.getFormattedMessage();
        logger.debug("Pushing log message to Kafka: " + msg);
        ProducerRecord<String, String> record = new ProducerRecord<>(
                SystemConstant.KAFKA_LOG_TOPIC, msg, msg);
        producer.send(record);
    }
}
```
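To try the appender without touching logback.xml, it can also be wired up programmatically. A sketch, assuming the class above and a reachable broker:
```java
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import com.test.kafkalog.KafkaAppender;
import org.slf4j.LoggerFactory;

public class KafkaAppenderDemo {
    public static void main(String[] args) {
        // Logback's ILoggerFactory is the LoggerContext when Logback is the SLF4J binding
        LoggerContext ctx = (LoggerContext) LoggerFactory.getILoggerFactory();

        KafkaAppender appender = new KafkaAppender();
        appender.setContext(ctx);
        appender.setBootstrapServers("10.41.1.112:9092"); // broker address from this article
        appender.start();

        // attach the appender to a named logger and send one event through it
        Logger logger = ctx.getLogger("kafka_logger");
        logger.addAppender(appender);
        logger.info("programmatic wiring test");

        appender.stop(); // flush the producer before the JVM exits
    }
}
```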
Registering the custom appender in the Logback configuration
Add an appender entry pointing at the custom class, and set the class's properties there; Logback injects bootstrapServers through the setter on the class (the appender name KAFKA is illustrative):
```xml
<appender name="KAFKA" class="com.test.kafkalog.KafkaAppender">
    <bootstrapServers>10.41.1.112:9092</bootstrapServers>
</appender>
```
Adding a custom logger to the Logback configuration
Add a dedicated logger that routes log events to the console, the log file, and Kafka (i.e., the custom appender), as sketched below.
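A minimal sketch of that logger, assuming the appender names STDOUT, FILE, and KAFKA used above; additivity is disabled so events are not duplicated through the root logger:
```xml
<logger name="kafka_logger" level="INFO" additivity="false">
    <appender-ref ref="STDOUT"/>
    <appender-ref ref="FILE"/>
    <appender-ref ref="KAFKA"/>
</logger>
```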
Usage and testing
Usage first; the logger is looked up by the name defined in logback.xml:
```java
/**
 * Use the custom logger: kafka_logger
 */
private static Logger kafkaLogger = LoggerFactory.getLogger("kafka_logger");

public void test() {
    kafkaLogger.info("this is a test msg from kafka client");
}
```
Spring Boot test. First add the test starter:
```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>
```
The test class:
```java
package com.test;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@SpringBootTest(classes = ZuulApplication.class, webEnvironment = SpringBootTest.WebEnvironment.MOCK)
public class KafkaLogTest {

    private static Logger kafkaLogger = LoggerFactory.getLogger("kafka_logger");

    @Test
    public void sendLogToKafka() {
        for (int i = 0; i < 10; i++) {
            kafkaLogger.info("this is a test msg from kafka client " + i);
        }
    }
}
```
Running it fails with:
```
17:00:28 WARN [kafka-producer-network-thread | producer-2] org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-2] Error connecting to node sit001:9092 (id: 0 rack: null)
java.net.UnknownHostException: sit001
at java.net.InetAddress.getAllByName0(InetAddress.java:1280)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:104)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:394)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:354)
at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:142)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:920)
at org.apache.kafka.clients.NetworkClient.access$700(NetworkClient.java:67)
at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1092)
at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:983)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:533)
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:312)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:235)
at java.lang.Thread.run(Thread.java:748)
```
The client cannot resolve the node sit001. Running hostname on the Kafka server shows that sit001 is that machine's hostname: the producer connects to the bootstrap address, fetches cluster metadata, and then dials brokers by the names they advertise, so that hostname must resolve on the client machine. Add a mapping to the local hosts file (C:\Windows\System32\drivers\etc\hosts on Windows):
```
10.41.1.112 sit001
```
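Alternatively, the broker itself can advertise a resolvable address so that clients never see the raw hostname. A sketch of the relevant server.properties entry, assuming the install path from earlier:
```properties
# /usr/local/kafka_2.11-2.2.1/config/server.properties
# advertise an IP the clients can reach instead of the machine hostname
advertised.listeners=PLAINTEXT://10.41.1.112:9092
```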
Run the test again and watch the consumer console:
```
this is a test msg from kafka client 0
this is a test msg from kafka client 1
this is a test msg from kafka client 2
this is a test msg from kafka client 3
this is a test msg from kafka client 4
this is a test msg from kafka client 5
this is a test msg from kafka client 6
this is a test msg from kafka client 7
this is a test msg from kafka client 8
this is a test msg from kafka client 9
```
Opening the local log file shows the same entries stored there as well; the integration is complete.