Building an ELK + Kafka Distributed Log Collection System

I. Setting up ES and Kafka

1. Set up Elasticsearch with Docker (see: Installing ES with Docker)
2. Set up Kafka with Docker. Since this is only a demo, Kafka is not deployed as a cluster (see: Installing Kafka with Docker)
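For reference, a minimal single-node setup could look like the commands below. The image names and the Kafka environment variables are assumptions based on commonly used images, not taken from the linked articles:

# Elasticsearch (version matched to the Logstash 6.4.3 used later)
docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch:6.4.3

# Kafka needs ZooKeeper; the wurstmeister images are one common choice
docker run -d --name zookeeper -p 2181:2181 wurstmeister/zookeeper
docker run -d --name kafka -p 9092:9092 \
  -e KAFKA_ZOOKEEPER_CONNECT=192.168.112.150:2181 \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.112.150:9092 \
  wurstmeister/kafka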

II. Integrating into the project

Maven dependencies:

<dependencies>

	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter</artifactId>
	</dependency>
	<dependency>
		<groupId>org.springframework.kafka</groupId>
		<artifactId>spring-kafka</artifactId>
	</dependency>
	<dependency>
		<groupId>com.alibaba</groupId>
		<artifactId>fastjson</artifactId>
		<version>1.2.47</version>
	</dependency>
</dependencies>
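The list above covers only the log-collection pieces. The AOP aspect and the REST controller below also rely on the web and AOP starters; if the parent project does not already pull them in, something like this is needed (an assumption, not from the original post):

	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-web</artifactId>
	</dependency>
	<dependency>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-aop</artifactId>
	</dependency>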

Kafka producer: KafkaSender

import com.alibaba.fastjson.JSON;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Component;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

@Component
@Slf4j
public class KafkaSender<T> {
    @Autowired
    private KafkaTemplate<String,Object> kafkaTemplate;

    /**
     * Send a message to Kafka.
     *
     * @param msg the payload to serialize and send
     */
    public void send(T msg){
        String jsonObj = JSON.toJSONString(msg);
        log.info("------- message = {}", jsonObj);

        // send the message to the goods_mylog topic
        ListenableFuture<SendResult<String, Object>> future = kafkaTemplate.send("goods_mylog", jsonObj);
        future.addCallback(new ListenableFutureCallback<SendResult<String, Object>>() {
            @Override
            public void onFailure(Throwable ex) {
                log.error("Produce: the message failed to be sent: {}", ex.getMessage());
            }

            @Override
            public void onSuccess(SendResult<String, Object> sendResult) {
                // business handling on success
                log.info("Produce: the message was sent successfully, result: {}", sendResult);
            }
        });
    }
}

AOP interception: AopLogAspect

@Aspect
@Component
public class AopLogAspect {
    @Autowired
    private KafkaSender<JSONObject> kafkaSender;

    // declare a pointcut with an execution expression
    @Pointcut("execution(* com.xwhy.*.service.impl.*.*(..))")
    private void serviceAspect() {
    }

    // log request details before the target method executes
    @Before(value = "serviceAspect()")
    public void methodBefore(JoinPoint joinPoint) {
        ServletRequestAttributes requestAttributes = (ServletRequestAttributes) RequestContextHolder
                .getRequestAttributes();
        if (requestAttributes == null) { // not invoked within a web request
            return;
        }
        HttpServletRequest request = requestAttributes.getRequest();
        JSONObject jsonObject = new JSONObject();
        SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"); // date format
        jsonObject.put("request_time", df.format(new Date()));
        jsonObject.put("request_url", request.getRequestURL().toString());
        jsonObject.put("request_method", request.getMethod());
        jsonObject.put("signature", joinPoint.getSignature());
        jsonObject.put("request_ip",request.getRemoteAddr());
        jsonObject.put("request_port",request.getRemotePort());
        jsonObject.put("request_args", Arrays.toString(joinPoint.getArgs()));
        JSONObject requestJsonObject = new JSONObject();
        requestJsonObject.put("request", jsonObject);
        kafkaSender.send(requestJsonObject);

    }

    // log the return value after the target method completes
    // returning = "o" binds the target method's return value to the parameter o
    @AfterReturning(returning = "o", pointcut = "serviceAspect()")
    public void methodAfterReturning(Object o) {
        JSONObject respJSONObject = new JSONObject();
        JSONObject jsonObject = new JSONObject();
        SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"); // date format
        jsonObject.put("response_time", df.format(new Date()));
        jsonObject.put("response_content", JSONObject.toJSONString(o));
        respJSONObject.put("response", jsonObject);
        kafkaSender.send(respJSONObject);

    }
}

Global exception handling: GlobalExceptionHandler

@ControllerAdvice
@Slf4j
public class GlobalExceptionHandler {
    @Autowired
    private KafkaSender<JSONObject> kafkaSender;

    @ExceptionHandler(RuntimeException.class)
    @ResponseBody
    public JSONObject exceptionHandler(Exception e) {
        log.info("###全局捕获异常###,error:{}", e);
        // 1.封装异常日志信息
        JSONObject errorJson = new JSONObject();
        JSONObject logJson = new JSONObject();
        SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");// 设置日期格式
        logJson.put("request_time", df.format(new Date()));
        logJson.put("error_info", e);
        errorJson.put("request_error", logJson);
        kafkaSender.send(errorJson);
        // 2. return the error response to the caller
        JSONObject result = new JSONObject();
        result.put("code", 500);
        result.put("msg", "system error");
        return result;
    }
}

Test class (make sure its package is component-scanned; for the request/response AOP logging to fire, the class must also match the serviceAspect pointcut):

@RestController
public class TestServiceImpl {
    @RequestMapping("/test")
    public String test(){
        int i = 1 / 0; // deliberately trigger an ArithmeticException (/ by zero)
        return "fail";
    }
}
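When /test is hit, the division throws, GlobalExceptionHandler catches the exception, and a message is published to the goods_mylog topic. With fastjson's default serialization of the exception object, the payload looks roughly like this (timestamps and the exact field set are illustrative, not captured from a real run):

{
  "request_error": {
    "request_time": "2024-01-01 12:00:00",
    "error_info": {
      "message": "/ by zero",
      "stackTrace": [ { "className": "com.xwhy...TestServiceImpl", "methodName": "test", "lineNumber": ... } ]
    }
  }
}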

Startup class:

@SpringBootApplication
public class AppProduct {
    public static void main(String[] args) {
        SpringApplication.run(AppProduct.class, args);
    }
}

application.yml

### server port
server:
  port: 8500
### Eureka registry (the service registers itself with Eureka under its application name)
eureka:
  client:
    service-url:
      defaultZone: http://localhost:8100/eureka

spring:
  application:
    name: app-xwhy-goods
  redis:
    host: 192.168.112.150
    port: 6379
    password: 123456
  ### datasource connection settings
  datasource:
    username: root
    password: 123456
    driver-class-name: com.mysql.jdbc.Driver
    url: jdbc:mysql://127.0.0.1:3306/xwhy_goods?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=UTC
  data:
    elasticsearch:
      #### cluster name
      cluster-name: elasticsearch-cluster
      #### cluster node addresses
      cluster-nodes: 192.168.112.150:9300
  kafka:
    ### Kafka broker address(es); multiple can be listed comma-separated
    bootstrap-servers: 192.168.112.150:9092
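Note that no producer serializers are configured: Spring Boot defaults both the key and value serializer to StringSerializer, which matches the JSON string that KafkaSender publishes. To make this explicit, the kafka block could optionally be written as (a sketch):

  kafka:
    bootstrap-servers: 192.168.112.150:9092
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer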

Create the Logstash pipeline config:
cd /usr/local/logstash-6.4.3
vi goods_mylog.conf

input {
  kafka {
    # Kafka broker address
    bootstrap_servers => "192.168.112.150:9092"
    topics => ["goods_mylog"]
  }
}
output {
    stdout { codec => rubydebug }
    elasticsearch {
       # ES cluster addresses
       hosts => ["192.168.112.150:9200","192.168.112.150:9201"]
       index => "goods_mylog"
    }
}
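As written, each Kafka record is indexed into ES as a single message string. To have the JSON fields (request_time, request_url, and so on) parsed into separate ES fields, one option is a json codec on the kafka input (a sketch, not part of the original setup):

input {
  kafka {
    bootstrap_servers => "192.168.112.150:9092"
    topics => ["goods_mylog"]
    codec => "json"
  }
}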

Start Logstash (note the -f flag): ./bin/logstash -f goods_mylog.conf

Visit http://127.0.0.1:8500/test
and you can see the log printed by Logstash:
(screenshot: Logstash console output)

Viewing it in Kibana:
(screenshot: Kibana index view)
(screenshot: Kibana log detail)
The logged output is very detailed; you can see exactly which line raised the error. Here it reports a / by zero.
