Implementing log collection with an nginx plugin and Kafka

1. Environment

Host     IP               Installed software
node1    192.168.137.201  JDK, Kafka, ZooKeeper
node2    192.168.137.202  JDK, Kafka, ZooKeeper
node3    192.168.137.203  JDK, Kafka, ZooKeeper
nginx    192.168.137.204  nginx, ngx_kafka_module, librdkafka

2. Node configuration

The configuration of the three nodes is almost identical. The differences: when running `echo 1 > /var/lagou/zookeeper/data/myid`, change the myid number on each node (1, 2, 3); and in the Kafka configuration file, the broker.id and advertised.listeners parameters differ per node. Hostnames such as node1 must resolve on every machine (for example via /etc/hosts).
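The per-node differences above can be sketched as follows (hostnames node1 through node3 are assumptions matching the table; note that broker.id starts at 0 while myid starts at 1):

```shell
# Per-node values that must differ (hostnames node1..node3 are assumed)
for i in 1 2 3; do
  echo "node$i: myid=$i broker.id=$((i-1)) advertised.listeners=PLAINTEXT://node$i:9092"
done
```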

Preparation

# Permanently disable the Linux firewall
$ systemctl status firewalld	# check firewall status
$ systemctl stop firewalld		# stop the firewall
$ systemctl disable firewalld	# disable firewall autostart on boot

JDK

# Install the JDK
$ rpm -ivh jdk-8u261-linux-x64.rpm

# Configure the Java environment variables
$ vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_261-amd64
export PATH=$PATH:$JAVA_HOME/bin

# Save and exit, then apply the configuration
$ source /etc/profile

# Check the Java version
$ java -version

ZooKeeper

# Extract ZooKeeper to /opt
$ tar -zxf zookeeper-3.4.14.tar.gz -C /opt
# Edit the ZooKeeper configuration
$ cd /opt/zookeeper-3.4.14/conf
$ cp zoo_sample.cfg zoo.cfg
$ vim zoo.cfg

server.1=192.168.137.201:2881:3881
server.2=192.168.137.202:2881:3881
server.3=192.168.137.203:2881:3881
dataDir=/var/lagou/zookeeper/data

# Exit vim
$ mkdir -p /var/lagou/zookeeper/data
$ echo 1 > /var/lagou/zookeeper/data/myid	# write 2 and 3 on node2 and node3

# Configure the ZooKeeper environment variables
$ vim /etc/profile

export ZOOKEEPER_PREFIX=/opt/zookeeper-3.4.14
export PATH=$PATH:$ZOOKEEPER_PREFIX/bin
export ZOO_LOG_DIR=/var/lagou/zookeeper/log

# Exit vim and apply the configuration
$ source /etc/profile

Kafka

# Extract Kafka
$ tar -zxf kafka_2.12-1.0.2.tgz
$ mv kafka_2.12-1.0.2 /opt

# Configure the Kafka environment variables
$ vim /etc/profile

export KAFKA_HOME=/opt/kafka_2.12-1.0.2
export PATH=$PATH:$KAFKA_HOME/bin

# Exit vim and apply the configuration
$ source /etc/profile

# Kafka configuration
$ vim /opt/kafka_2.12-1.0.2/config/server.properties

broker.id=0
listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://node1:9092
log.dirs=/var/lagou/kafka/kafka-logs
zookeeper.connect=192.168.137.201:2181,192.168.137.202:2181,192.168.137.203:2181/myKafka
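On node2 and node3 only broker.id and advertised.listeners change. A minimal sketch of that edit with sed, demonstrated here on a throwaway copy rather than the real server.properties:

```shell
# Demonstrate the node2 edit on a temporary copy
# (real file: /opt/kafka_2.12-1.0.2/config/server.properties)
CFG=$(mktemp)
printf 'broker.id=0\nadvertised.listeners=PLAINTEXT://node1:9092\n' > "$CFG"
# On node2 the broker id becomes 1 and the advertised hostname becomes node2
sed -i 's/^broker\.id=.*/broker.id=1/' "$CFG"
sed -i 's|^advertised\.listeners=.*|advertised.listeners=PLAINTEXT://node2:9092|' "$CFG"
cat "$CFG"
```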

3. Starting the cluster

Start ZooKeeper on all three nodes first, then start Kafka on all three nodes.

# Start ZooKeeper
$ zkServer.sh start

# Start Kafka
$ kafka-server-start.sh -daemon /opt/kafka_2.12-1.0.2/config/server.properties
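The topic used later (topic_1) will be auto-created on first write, since Kafka 1.0.2 enables auto.create.topics.enable by default. If you prefer explicit settings, it can be pre-created; a sketch, assuming the /myKafka chroot configured in server.properties and a running cluster:

```shell
# Optional: pre-create the topic with an explicit replication factor across all three brokers
kafka-topics.sh --zookeeper 192.168.137.201:2181/myKafka \
  --create --topic topic_1 --partitions 1 --replication-factor 3
```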

4. nginx server configuration

Preparation

# Permanently disable the Linux firewall
$ systemctl status firewalld	# check firewall status
$ systemctl stop firewalld		# stop the firewall
$ systemctl disable firewalld	# disable firewall autostart on boot

# Install dependencies
$ yum install wget git -y
$ yum install gcc-c++ -y

nginx installation and configuration

# Build and install the Kafka client library
$ tar -zxf librdkafka-1.5.2.tar.gz 
$ cd librdkafka-1.5.2
$ ./configure
$ make
$ sudo make install
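`make install` puts librdkafka under /usr/local/lib, which on CentOS is usually not on the runtime linker path. If nginx later fails to start with a "cannot open shared object file" error for librdkafka.so.1, register the path; a common extra step (run as root), not part of the original walkthrough:

```shell
# Make the dynamic linker aware of /usr/local/lib so nginx can load librdkafka at runtime
echo "/usr/local/lib" >> /etc/ld.so.conf
ldconfig
```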

# ngx_kafka_module
$ tar -zxf ngx_kafka_module-0.9.1.tar.gz

# nginx
$ tar -zxf nginx-1.17.8.tar.gz 
$ cd nginx-1.17.8
$ yum install gcc zlib zlib-devel openssl openssl-devel pcre pcre-devel -y
$ ./configure --add-module=/root/ngx_kafka_module-0.9.1
$ make
$ sudo make install

# Edit nginx.conf
$ cd /usr/local/nginx/conf
$ vim nginx.conf

# Add inside the http{} block to include all extra config files
include /usr/local/nginx/conf/conf.d/*.conf;

$ mkdir -p /usr/local/nginx/conf/conf.d
$ cd /usr/local/nginx/conf/conf.d
$ vim nginx_kafka.conf

Add the Kafka configuration:

kafka;
kafka_broker_list 192.168.137.201:9092,192.168.137.202:9092,192.168.137.203:9092;

server {
     
    listen       8081;
    server_name  localhost;

    location = /log {
     
        add_header 'Access-Control-Allow-Origin' $http_origin;
        add_header 'Access-Control-Allow-Credentials' 'true';
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
        add_header 'Access-Control-Allow-Headers' 'DNT,web-token,app-token,Authorization,Accept,Origin,Keep-Alive,User-Agent,X-Mx-ReqToken,X-Data-Type,X-Auth-Token,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
        add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
        kafka_topic topic_1;
    }
}

Start nginx

# Start nginx
$ /usr/local/nginx/sbin/nginx

# Test
$ curl localhost:8081/log -d "hello ngx_kafka_module" -v

# In a new terminal, watch what Kafka consumes
$ kafka-console-consumer.sh --bootstrap-server 192.168.137.201:9092 --topic topic_1 --from-beginning

5. Testing

# Send a POST request
$ curl localhost:8081/log -d "hello ngx_kafka_module" -v

# Consumer listening
$ kafka-console-consumer.sh --bootstrap-server 192.168.137.201:9092 --topic topic_1
