Apache Kafka is an open-source stream-processing platform from the Apache Software Foundation. It provides a publish/subscribe message queue and is commonly used for decoupling systems, asynchronous communication, and smoothing load peaks.
Kafka also ships Kafka Streams, a library for real-time, online stream processing. Unlike dedicated stream-processing frameworks, Kafka Streams runs its computation on the application side, which makes it simple, easy to learn, and easy to deploy. Overall, this Kafka course asks you to master two aspects: Kafka as a messaging system, and Kafka Streams for stream processing.
A message queue is an indispensable piece of middleware in distributed and big-data development. It is typically used for buffering, decoupling systems, and smoothing load peaks. Common message-queue services roughly work in one of two modes:
At most once: the producer writes data into the messaging system and consumers pull messages from the message server. Once a message is acknowledged as consumed, the message server actively deletes it from the queue. In this mode a message is generally consumed by only one consumer, and records in the queue cannot be consumed repeatedly.
No restriction: unlike the mode above, once the producer has published a record it can be consumed by several consumers at the same time, and the same consumer may consume the same record on the server more than once. This is possible mainly because a message server can usually store massive volumes of messages for a long time.
A Kafka cluster organizes its Records by Topic: every Record belongs to a Topic. Producers send data to a particular Topic in the cluster, and consumers subscribe to Topics in the cluster.
Each Topic is backed by a set of partitioned logs that persist the Topic's Records. For every log partition of a Topic there is always exactly one Broker acting as the partition's Leader, while the other Brokers holding the partition act as followers. The Leader handles all reads and writes for the partition and the followers replicate its data, so if the Leader goes down, one of the partition's followers is elected as the new Leader and keeps serving reads and writes. Leader monitoring and part of the Topic metadata are stored in ZooKeeper.
Producers publish data to the Topics of their choice. The producer decides which Partition within the Topic each record is assigned to. This can be done round-robin simply to balance load, or by some semantic partitioning function (for example, based on the record's Key).
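As a minimal sketch of the two strategies with the Java client (assuming the CentOS:9092 broker address and the topic01 topic used elsewhere in this section): a record sent without a key is spread across partitions by the default partitioner, while a record with a key always hashes to the same partition.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
public class PartitionChoiceDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "CentOS:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // no key: the default partitioner balances records across partitions
            producer.send(new ProducerRecord<>("topic01", "load-balanced value"));
            // with a key: hash(key) % partition count, so "user-42" always lands in the same partition
            producer.send(new ProducerRecord<>("topic01", "user-42", "keyed value"));
        }
    }
}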
All messages in Kafka are managed in units of Topics, and each Topic usually has multiple subscribers consuming the data published to it. For every Topic in the cluster, Kafka maintains a set of partitioned logs:
Each log partition is an ordered, immutable sequence of records, and every Record in a partition is assigned a unique sequential number called its offset. The cluster persists all Records published to a Topic; the retention time is set in the configuration file and defaults to 168 hours:
log.retention.hours=168
Kafka periodically checks the log files and removes expired data from the log. Because Kafka keeps its log files on disk, caching log data in Kafka for a long time is not a problem.
When consuming a Topic, each consumer maintains the offsets of the partitions it reads. After finishing a batch, the consumer commits the offsets of that batch back to the Kafka cluster, so every consumer is free to control its own offsets and can read from any position within a Topic partition; because each consumer controls its own offsets, consumers are completely independent of one another. Consumers label themselves with a ConsumerGroup name, and every record published to a Topic is delivered to exactly one consumer instance within each subscribing ConsumerGroup. If all consumer instances share the same ConsumerGroup, the Topic's records are balanced across the instances of that group; if every consumer instance has a different ConsumerGroup, each record is broadcast to all ConsumerGroups.
More commonly, a Topic has a small number of ConsumerGroups, and each ConsumerGroup can be thought of as a "logical subscriber" made up of many consumer instances for scalability and fault tolerance. This is simply publish-subscribe where the subscriber is a cluster of consumers rather than a single process. In this model Kafka divides the Topic's partitions evenly among the instances of a ConsumerGroup; if a new member joins, it takes over some partitions previously handled by other consumers in the group, and if an instance in the group dies, its partitions are taken over by the remaining instances.
Because of this partitioning strategy, Kafka only guarantees ordering of records within a partition; records in different partitions of the same Topic carry no ordering guarantee relative to each other. For the vast majority of big-data applications, per-partition ordering combined with key-based partitioning is sufficient. If you do need a total order over all records, you can use a Topic with a single partition, as shown below, though that implies at most one consumer process per ConsumerGroup.
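For instance, a globally ordered topic could be created with a single partition using the same kafka-topics.sh tool introduced later in this section (the topic name here is only illustrative):
[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-topics.sh \
--create \
--bootstrap-server CentOS:9092 \
--topic ordered01 \
--partitions 1 \
--replication-factor 1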
One of Kafka's hallmark features is high throughput. Although Kafka persists (and caches) messages on disk, and disk I/O is usually assumed to hurt performance, Kafka on ordinary servers can easily sustain writes at the level of millions of requests per second, more than most message middleware, which is why it is so widely used for log processing and other massive-data scenarios. Kafka writes every message it receives to disk to guard against data loss, and it optimizes write speed with two techniques: sequential writes and memory-mapped files (MMFile).
Because a hard disk is a mechanical device, every access involves a seek before the transfer, and the seek is the expensive "mechanical action". Disks therefore hate random I/O and love sequential I/O, so Kafka uses sequential I/O to speed up disk access, saving memory overhead and eliminating seek time. Sequential writes alone still cannot rival memory, however, so Kafka does not write each record to disk synchronously.
Instead, Kafka takes full advantage of the operating system's page cache to boost I/O efficiency. Memory Mapped Files (mmap) can map on the order of 20 GB of file data on a 64-bit operating system; they work by mapping a file directly onto physical memory through the operating system's pages. Once the mapping is established, every operation the user performs on that memory is flushed to disk automatically by the operating system, which greatly reduces I/O overhead.
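To make the idea concrete, here is a small self-contained Java sketch (not Kafka's own code; the file name is just an illustration) that memory-maps a file and writes through the mapping:
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
public class MmapDemo {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile file = new RandomAccessFile("demo.log", "rw");
             FileChannel channel = file.getChannel()) {
            // map the first 1 MB of the file directly into memory
            MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_WRITE, 0, 1024 * 1024);
            // writes land in the page cache; the OS flushes them to disk
            buffer.put("hello kafka".getBytes(StandardCharsets.UTF_8));
            // force() asks the OS to flush the dirty pages immediately
            buffer.force();
        }
    }
}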
When a Kafka broker responds to a client read, it uses zero copy (ZeroCopy) underneath: the data is not copied from disk into user space at all; it is handed off directly through kernel space and never reaches user space. A conventional I/O operation proceeds as follows:
1. The user process issues a system call such as read, asking the operating system to read data into its own buffer, and blocks.
2. The operating system receives the request and forwards the I/O request to the disk.
3. The disk drive receives the kernel's request and reads the data from the platters into the drive's buffer, without involving the CPU. When the drive's buffer is full, it raises an interrupt to tell the kernel.
4. The kernel handles the interrupt and spends CPU time copying the data from the drive's buffer into the kernel buffer.
5. If the kernel buffer holds less data than the user asked for, steps 3 and 4 repeat until it has enough.
6. The data is copied from the kernel buffer into the user buffer and the system call returns, completing the read.
Drawback: every user I/O request requires the CPU to take part multiple times.
Modern operating systems introduce a coprocessor for this: when a disk file is read, the CPU no longer has to participate directly; the transfer is delegated to a DMA controller that assists the CPU in moving the data.
1. The user process issues a system call such as read, asking the operating system to read data into its own buffer, and blocks.
2. The operating system receives the request, forwards the I/O request to the DMA controller, and lets the CPU go do other work.
3. The DMA controller forwards the I/O request to the disk.
4. The disk drive receives the DMA controller's request and reads data from the platters into the drive's buffer; when the buffer is full, it raises an interrupt to the DMA controller.
5. The DMA controller receives the drive's signal and copies the data from the drive's buffer into the kernel buffer without using the CPU. As long as the kernel buffer holds less data than the user asked for, steps 3 and 4 repeat until it has enough.
6. Once the DMA controller has read enough data, it sends an interrupt to the CPU.
7. The CPU receives the DMA signal, knows the data is ready, copies it from kernel space to user space, and the system call returns.
Compared with interrupt-driven I/O, in DMA mode the DMA controller acts as the CPU's agent and takes over part of the copying, easing the CPU's load: fewer interrupts and lower CPU usage.
With the operating system's I/O path in mind, consider the network case. The data on disk is copied into the kernel buffer -> from the kernel buffer into the user buffer -> from the user buffer into the kernel's socket buffer -> from the socket buffer into the protocol engine, which sends it out.
The round trip from kernel space to user space and back into kernel space is redundant. Zero copy instead sends the data straight from inside the kernel, saving one copy: the data on disk is copied into the kernel buffer -> from the kernel buffer into the kernel's socket buffer -> from the socket buffer into the protocol engine, which sends it out.
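In Java the same idea is exposed through FileChannel.transferTo, which is what Kafka's network layer relies on; a minimal illustration (file name and target address are placeholders) might look like this:
import java.io.FileInputStream;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
public class ZeroCopyDemo {
    public static void main(String[] args) throws Exception {
        try (FileChannel fileChannel = new FileInputStream("demo.log").getChannel();
             SocketChannel socketChannel = SocketChannel.open(new InetSocketAddress("localhost", 9000))) {
            long position = 0;
            long count = fileChannel.size();
            // transferTo hands the copy to the kernel (sendfile on Linux),
            // so the file bytes never enter user space
            while (position < count) {
                position += fileChannel.transferTo(position, count - position, socketChannel);
            }
        }
    }
}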
[root@CentOS ~]# rpm -ivh jdk-8u191-linux-x64.rpm
warning: jdk-8u191-linux-x64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing... ########################################### [100%]
1:jdk1.8 ########################################### [100%]
Unpacking JAR files...
tools.jar...
plugin.jar...
javaws.jar...
deploy.jar...
rt.jar...
jsse.jar...
charsets.jar...
localedata.jar...
[root@CentOS ~]# vi .bashrc
JAVA_HOME=/usr/java/latest
CLASSPATH=.
PATH=$PATH:$JAVA_HOME/bin
export JAVA_HOME
export CLASSPATH
export PATH
[root@CentOS ~]# vi .bashrc
[root@CentOS ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=CentOS
[root@CentOS ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:0C:29:D3:EA:13
inet addr:192.168.52.129 Bcast:192.168.52.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fed3:ea13/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:641 errors:0 dropped:0 overruns:0 frame:0
TX packets:379 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:56374 (55.0 KiB) TX bytes:57374 (56.0 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
[root@CentOS ~]# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.52.129 CentOS
[root@CentOS ~]# service iptables stop
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Flushing firewall rules: [ OK ]
iptables: Unloading modules: [ OK ]
[root@CentOS ~]# chkconfig iptables off
[root@CentOS ~]# chkconfig --list | grep iptables
iptables 0:off 1:off 2:off 3:off 4:off 5:off 6:off
[root@CentOS ~]# tar -zxf zookeeper-3.4.6.tar.gz -C /usr/
[root@CentOS ~]# cd /usr/zookeeper-3.4.6/
[root@CentOS zookeeper-3.4.6]# cp conf/zoo_sample.cfg conf/zoo.cfg
[root@CentOS zookeeper-3.4.6]# vi conf/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/root/zkdata
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
[root@CentOS ~]# mkdir /root/zkdata
[root@CentOS zookeeper-3.4.6]# ./bin/zkServer.sh
JMX enabled by default
Using config: /usr/zookeeper-3.4.6/bin/../conf/zoo.cfg
Usage: ./bin/zkServer.sh {start|start-foreground|stop|restart|status|upgrade|print-cmd}
[root@CentOS zookeeper-3.4.6]# ./bin/zkServer.sh start zoo.cfg
JMX enabled by default
Using config: /usr/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@CentOS zookeeper-3.4.6]#
[root@CentOS zookeeper-3.4.6]# ./bin/zkServer.sh status zoo.cfg
JMX enabled by default
Using config: /usr/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: standalone
[root@CentOS zookeeper-3.4.6]# jps
1778 Jps
1733 QuorumPeerMain
[root@CentOS ~]# tar -zxf kafka_2.11-2.2.0.tgz -C /usr/
[root@CentOS ~]# cd /usr/kafka_2.11-2.2.0/
[root@CentOS kafka_2.11-2.2.0]# vi config/server.properties
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
############################# Socket Server Settings #############################
listeners=PLAINTEXT://CentOS:9092
############################# Log Basics #############################
log.dirs=/usr/kafka-logs
############################# Log Retention Policy #############################
log.retention.hours=168
############################# Zookeeper #############################
zookeeper.connect=CentOS:2181
[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-server-start.sh -daemon config/server.properties
Test the Kafka service
Create a topic
[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-topics.sh \
--create \
--bootstrap-server CentOS:9092 \
--topic topic01 \
--partitions 3 \
--replication-factor 1
[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-topics.sh \
--create \
--zookeeper CentOS:2181 \
--topic topic02 \
--partitions 3 \
--replication-factor 1
Since Kafka 2.2.0, topics are managed with --bootstrap-server rather than --zookeeper. --partitions sets the number of partitions, and --replication-factor sets the replication factor, which must not exceed the number of available broker nodes.
List topics
[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-topics.sh \
--bootstrap-server CentOS:9092 \
--list
topic01
topic02
Describe a topic
[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-topics.sh \
--describe \
--bootstrap-server CentOS:9092 \
--topic topic01
Topic:topic01 PartitionCount:3 ReplicationFactor:1 Configs:segment.bytes=1073741824
Topic: topic01 Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Topic: topic01 Partition: 1 Leader: 0 Replicas: 0 Isr: 0
Topic: topic01 Partition: 2 Leader: 0 Replicas: 0 Isr: 0
segment.bytes: Kafka stores each partition as a sequence of segment files; once a segment reaches 1 GB (1073741824 bytes) a new segment is started, which helps the broker index its files. Replicas: the broker ids holding replicas of the partition. Isr: the in-sync replicas (In-Sync Replicas), i.e. the replicas currently caught up with the leader.
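If you ever need to change a per-topic override such as segment.bytes, kafka-configs.sh can do it; a sketch for this 2.2.x setup, which still expects --zookeeper for topic-level configs (the 512 MB value is only an example):
[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-configs.sh \
--zookeeper CentOS:2181 \
--alter \
--entity-type topics \
--entity-name topic01 \
--add-config segment.bytes=536870912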
Consume from a topic
[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-console-consumer.sh \
--bootstrap-server CentOS:9092 \
--group g1 \
--topic topic01
Inspect a consumer group
[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-consumer-groups.sh \
--bootstrap-server CentOS:9092 \
--describe \
--group g1
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
topic01 0 0 0 0 consumer-** /192.168.52.129 consumer-1
[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-consumer-groups.sh \
--bootstrap-server CentOS:9092 \
--describe \
--group g1 \
--members
CONSUMER-ID HOST CLIENT-ID #PARTITIONS
consumer-*** /192.168.52.129 consumer-1 1
Produce messages
[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-console-producer.sh \
--broker-list CentOS:9092 \
--topic topic01
>
[root@CentOSX ~]# rpm -ivh jdk-8u191-linux-x64.rpm
warning: jdk-8u191-linux-x64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing... ########################################### [100%]
1:jdk1.8 ########################################### [100%]
Unpacking JAR files...
tools.jar...
plugin.jar...
javaws.jar...
deploy.jar...
rt.jar...
jsse.jar...
charsets.jar...
localedata.jar...
[root@CentOSX ~]# vi .bashrc
JAVA_HOME=/usr/java/latest
CLASSPATH=.
PATH=$PATH:$JAVA_HOME/bin
export JAVA_HOME
export CLASSPATH
export PATH
[root@CentOSX ~]# vi .bashrc
[root@CentOS ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=CentOS[A,B,C]
[root@CentOS ~]# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.52.130 CentOSA
192.168.52.131 CentOSB
192.168.52.132 CentOSC
[root@CentOSX ~]# service iptables stop
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Flushing firewall rules: [ OK ]
iptables: Unloading modules: [ OK ]
[root@CentOSX ~]# chkconfig iptables off
[root@CentOSX ~]# chkconfig --list | grep iptables
iptables 0:off 1:off 2:off 3:off 4:off 5:off 6:off
[root@CentOSX ~]# tar -zxf zookeeper-3.4.6.tar.gz -C /usr/
[root@CentOSX ~]# cd /usr/zookeeper-3.4.6/
[root@CentOSX zookeeper-3.4.6]# cp conf/zoo_sample.cfg conf/zoo.cfg
[root@CentOSX zookeeper-3.4.6]# vi conf/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/root/zkdata
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
server.1=CentOSA:2888:3888
server.2=CentOSB:2888:3888
server.3=CentOSC:2888:3888
[root@CentOSX ~]# mkdir /root/zkdata
[root@CentOSA ~]# echo 1 > /root/zkdata/myid
[root@CentOSB ~]# echo 2 > /root/zkdata/myid
[root@CentOSC ~]# echo 3 > /root/zkdata/myid
[root@CentOSX zookeeper-3.4.6]# ./bin/zkServer.sh
JMX enabled by default
Using config: /usr/zookeeper-3.4.6/bin/../conf/zoo.cfg
Usage: ./bin/zkServer.sh {start|start-foreground|stop|restart|status|upgrade|print-cmd}
[root@CentOS zookeeper-3.4.6]# ./bin/zkServer.sh start zoo.cfg
JMX enabled by default
Using config: /usr/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@CentOS zookeeper-3.4.6]#
[root@CentOS zookeeper-3.4.6]# ./bin/zkServer.sh status zoo.cfg
JMX enabled by default
Using config: /usr/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: leader|follower
[root@CentOSX zookeeper-3.4.6]# jps
1778 Jps
1733 QuorumPeerMain
[root@CentOSX ~]# tar -zxf kafka_2.11-2.2.0.tgz -C /usr/
[root@CentOSX ~]# cd /usr/kafka_2.11-2.2.0/
[root@CentOSX kafka_2.11-2.2.0]# vi config/server.properties
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=[0,1,2]
############################# Socket Server Settings #############################
listeners=PLAINTEXT://CentOS[A,B,C]:9092
############################# Log Basics #############################
log.dirs=/usr/kafka-logs
############################# Log Retention Policy #############################
log.retention.hours=168
############################# Zookeeper #############################
zookeeper.connect=CentOSA:2181,CentOSB:2181,CentOSC:2181
[root@CentOSX kafka_2.11-2.2.0]# ./bin/kafka-server-start.sh -daemon config/server.properties
[root@CentOSA kafka_2.11-2.2.0]# ./bin/kafka-topics.sh \
--bootstrap-server CentOSA:9092,CentOSB:9092,CentOSC:9092 \
--create \
--topic topic01 \
--partitions 1 \
--replication-factor 3
[root@CentOSA kafka_2.11-2.2.0]# ./bin/kafka-topics.sh \
--bootstrap-server CentOSA:9092,CentOSB:9092,CentOSC:9092 \
--list
topic01
topic02
[root@CentOSA kafka_2.11-2.2.0]# ./bin/kafka-topics.sh \
--bootstrap-server CentOSA:9092,CentOSB:9092,CentOSC:9092 \
--describe \
--topic topic01
Topic:topic01 PartitionCount:3 ReplicationFactor:3 Configs:segment.bytes=1073741824
Topic: topic01 Partition: 0 Leader: 0 Replicas: 0,2,3 Isr: 0,2,3
Topic: topic01 Partition: 1 Leader: 2 Replicas: 2,3,0 Isr: 2,3,0
Topic: topic01 Partition: 2 Leader: 0 Replicas: 3,0,2 Isr: 0,2,3
[root@CentOSA kafka_2.11-2.2.0]# ./bin/kafka-topics.sh \
--bootstrap-server CentOSA:9092,CentOSB:9092,CentOSC:9092 \
--alter \
--topic topic03 \
--partitions 4
Note: you may only increase the number of partitions of a topic; it cannot be decreased.
[root@CentOSA kafka_2.11-2.2.0]# ./bin/kafka-topics.sh \
--bootstrap-server CentOSA:9092,CentOSB:9092,CentOSC:9092 \
--delete \
--topic topic03
[root@CentOSA kafka_2.11-2.2.0]# ./bin/kafka-console-consumer.sh \
--bootstrap-server CentOSA:9092,CentOSB:9092,CentOSC:9092 \
--topic topic01 \
--group g1 \
--property print.key=true \
--property print.value=true \
--property key.separator=,
[root@CentOSA kafka_2.11-2.2.0]# ./bin/kafka-consumer-groups.sh \
--bootstrap-server CentOSA:9092,CentOSB:9092,CentOSC:9092 \
--list
g1
[root@CentOSA kafka_2.11-2.2.0]# ./bin/kafka-consumer-groups.sh \
--bootstrap-server CentOSA:9092,CentOSB:9092,CentOSC:9092 \
--describe \
--group g1
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
topic01 1 0 0 0 consumer-1-** /192.168.52.130 consumer-1
topic01 0 0 0 0 consumer-1-** /192.168.52.130 consumer-1
topic01 2 1 1 0 consumer-1-** /192.168.52.130 consumer-1
[root@CentOSA kafka_2.11-2.2.0]# ./bin/kafka-console-producer.sh \
--broker-list CentOSA:9092,CentOSB:9092,CentOSC:9092 \
--topic topic01
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.2.0</version>
</dependency>
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.25</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.25</version>
</dependency>
log4j.rootLogger = info,console
log4j.appender.console = org.apache.log4j.ConsoleAppender
log4j.appender.console.Target = System.out
log4j.appender.console.layout = org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern = %p %d{yyyy-MM-dd HH:mm:ss} %c - %m%n
Properties props = new Properties();
// required settings
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,"CentOS:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG,"g3");
// default settings (shown explicitly)
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
// if the group has no committed offset, consumption starts from the latest offset by default; set to earliest to start from the beginning (default: latest)
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,"latest");
Consumer<String,String> consumer= new KafkaConsumer<String,String>(props);
// subscribe to all topics whose names start with "topic"
consumer.subscribe(Pattern.compile("^topic.*")); //consumer.subscribe(Arrays.asList("topic01"));
// poll and iterate over the returned records
try {
while (true){
ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));// poll, waiting up to 1 second
if(records!=null && !records.isEmpty()){// iterate over the results
Iterator<ConsumerRecord<String, String>> recordIterator = records.iterator();
while (recordIterator.hasNext()){
ConsumerRecord<String, String> record = recordIterator.next();
System.out.println(record.key()+"\t"+record.value()+"\t"+record.offset()+"\t"+record.partition()+"\t"+record.timestamp());
}
}
}
} catch (Exception e) {
consumer.close();
}
Properties props = new Properties();
// required settings
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,"CentOS:9092");
// default settings (optional)
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,StringSerializer.class);
// tuning: batching and buffering
props.put(ProducerConfig.BATCH_SIZE_CONFIG,1000);// batch size of 1000 bytes per partition
props.put(ProducerConfig.LINGER_MS_CONFIG,1000);// wait at most 1 second before sending a batch
Producer<String,String> producer= new KafkaProducer<String,String>(props);
for (int i = 0; i < 10; i++) {
DecimalFormat format = new DecimalFormat("00");
String key = format.format(i);
// by default the partition is chosen as hash(key) % number of partitions
ProducerRecord<String,String> record=new ProducerRecord<String, String>("topic01",key,"value"+ key);
producer.send(record);
}
producer.flush();
producer.close();
Deserializer
public class ObjectDeserializer implements Deserializer<Object> {
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
System.out.println("configure");
}
@Override
public Object deserialize(String topic, byte[] data) {
return SerializationUtils.deserialize(data);
}
@Override
public void close() {
System.out.println("close");
}
}
Serializer
public class ObjectSerializer implements Serializer<Object> {
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
System.out.println("configure");
}
@Override
public byte[] serialize(String topic, Object data) {
return SerializationUtils.serialize((Serializable) data);
}
@Override
public void close() {
System.out.println("close");
}
}
Producer
//1. connection settings
Properties props=new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,"CentOSA:9092,CentOSB:9092,CentOSC:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,ObjectSerializer.class.getName());
//2. create the producer
KafkaProducer<String,User> producer=new KafkaProducer<String, User>(props);
//3. build and send the records
for(Integer i=0;i< 10;i++){
ProducerRecord<String, User> record = new ProducerRecord<>("topic01", "key"+i,new User(i,"user"+i,new Date()));
producer.send(record);
}
producer.close();
Consumer
//1. Kafka connection settings
Properties props=new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,"CentOSA:9092,CentOSB:9092,CentOSC:9092");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,ObjectDeserializer.class.getName());
props.put(ConsumerConfig.GROUP_ID_CONFIG,"group01");
//2. create the consumer
KafkaConsumer<String,User> consumer=new KafkaConsumer<String, User>(props);
//3. subscribe to all topics whose names start with "topic"
consumer.subscribe(Pattern.compile("^topic.*$"));
while (true){
ConsumerRecords<String, User> consumerRecords = consumer.poll(Duration.ofSeconds(1));
Iterator<ConsumerRecord<String, User>> recordIterator = consumerRecords.iterator();
while (recordIterator.hasNext()){
ConsumerRecord<String, User> record = recordIterator.next();
String key = record.key();
User value = record.value();
long offset = record.offset();
int partition = record.partition();
System.out.println("key:"+key+",value:"+value+",partition:"+partition+",offset:"+offset);
}
}
public class UserDefinePartitioner implements Partitioner {
private AtomicInteger atomicInteger=new AtomicInteger(0);
@Override
public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
int numPartitions = cluster.partitionsForTopic(topic).size();
if(keyBytes==null || keyBytes.length==0){
return (atomicInteger.addAndGet(1) & Integer.MAX_VALUE) % numPartitions;
} else {
return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
}
}
@Override
public void close() {
System.out.println("close");
}
@Override
public void configure(Map<String, ?> configs) {
System.out.println("configure");
}
}
//1. connection settings
Properties props=new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,"CentOSA:9092,CentOSB:9092,CentOSC:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,StringSerializer.class.getName());
props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG,UserDefinePartitioner.class.getName());
//2. create the producer
KafkaProducer<String,String> producer=new KafkaProducer<String, String>(props);
//3. build and send the records
for(Integer i=0;i< 10;i++){
ProducerRecord<String, String> record = new ProducerRecord<>("topic01", "value" + i);
producer.send(record);
}
producer.close();
public class UserDefineProducerInterceptor implements ProducerInterceptor {
@Override
public ProducerRecord onSend(ProducerRecord record) {
ProducerRecord wrapRecord = new ProducerRecord(record.topic(), record.key(), record.value());
wrapRecord.headers().add("user","baizhi edu".getBytes());
return wrapRecord;
}
@Override
public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
System.out.println("metadata:"+metadata+",exception:"+exception);
}
@Override
public void close() {
System.out.println("close");
}
@Override
public void configure(Map<String, ?> configs) {
System.out.println("configure");
}
}
//1. connection settings
Properties props=new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,"CentOSA:9092,CentOSB:9092,CentOSC:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,StringSerializer.class.getName());
props.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG,UserDefineProducerInterceptor.class.getName());
//2. create the producer
KafkaProducer<String,String> producer=new KafkaProducer<String, String>(props);
//3. build and send the records
for(Integer i=0;i< 10;i++){
ProducerRecord<String, String> record = new ProducerRecord<>("topic01", "key" + i, "value" + i);
producer.send(record);
}
producer.close();
To avoid consuming duplicate data, we can commit offsets manually. Note that the offset of the record just consumed can be read from the record instance, but the offset we commit for each partition must be that value plus 1, because the committed offset is the position from which the consumer will next read that partition.
Properties props = new Properties();
// required settings
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,"CentOS:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG,"g3");
// default settings (shown explicitly)
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
// if the group has no committed offset, consumption starts from the latest offset by default; set to earliest to start from the beginning (default: latest)
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,"latest");
// disable auto-commit; auto.commit.interval.ms then no longer takes effect
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG,false);
props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG,5000);
Consumer<String,String> consumer= new KafkaConsumer<String,String>(props);
// subscribe to all topics whose names start with "topic"
consumer.subscribe(Pattern.compile("^topic.*")); //consumer.subscribe(Arrays.asList("topic01"));
// poll and iterate over the returned records
try {
while (true){
ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));// poll, waiting up to 1 second
// collect the partition/offset information
Map<TopicPartition,OffsetAndMetadata> offsets=new HashMap<TopicPartition,OffsetAndMetadata>();
if(records!=null && !records.isEmpty()){// iterate over the results
Iterator<ConsumerRecord<String, String>> recordIterator = records.iterator();
while (recordIterator.hasNext()){
ConsumerRecord<String, String> record = recordIterator.next();
// extract the partition and offset
TopicPartition topicPartition=new TopicPartition(record.topic(),record.partition());
// must add 1: the committed offset is the next position to read
OffsetAndMetadata currentOffsetAndMetadata=new OffsetAndMetadata(record.offset()+1);
offsets.put(topicPartition,currentOffsetAndMetadata);
System.out.println(record.key()+"\t"+record.value()+"\t"+record.offset()+"\t"+record.partition()+"\t"+record.timestamp());
}
}
// commit the offsets
consumer.commitSync(offsets,Duration.ofSeconds(1));
offsets.clear();
}
} catch (Exception e) {
consumer.close();
}
Note that the offset committed by the consumer must be the position of the next read, so the committed offset for each partition needs the +1 adjustment.
After sending a message, the Kafka producer requires the broker to acknowledge it within a configured time window; if no acknowledgement arrives in time, the producer retries the send up to n times.
If some retry N <= n succeeds, the send is considered successful; if it still fails after n attempts, the send is considered failed and an exception is thrown to the caller. Enabling retries improves reliability, but it may cause the broker to store duplicate messages.
Properties props = new Properties();
// required settings
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,"CentOS:9092");
// default settings (optional)
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,StringSerializer.class);
// tuning: batching and buffering
props.put(ProducerConfig.BATCH_SIZE_CONFIG,1000);// batch size of 1000 bytes per partition
props.put(ProducerConfig.LINGER_MS_CONFIG,1000);// wait at most 1 second before sending a batch
// acks=1: only the leader must write successfully (reasonably safe, reasonably fast); acks=0: no broker acknowledgement (fastest, may lose data); acks=-1/all: all ISR replicas must be in sync before the broker acknowledges
props.put(ProducerConfig.ACKS_CONFIG,"-1");
// by default the producer waits up to 30 s for an acknowledgement; if none arrives in time it retries up to the configured number of retries (the 1 ms timeout below is deliberately tiny to force retries in this demo)
props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG,1);
props.put(ProducerConfig.RETRIES_CONFIG,5);
Producer<String,String> producer= new KafkaProducer<String,String>(props);
for (int i = 0; i < 10; i++) {
DecimalFormat format = new DecimalFormat("00");
String key = format.format(i);
// by default the partition is chosen as hash(key) % number of partitions
ProducerRecord<String,String> record=new ProducerRecord<String, String>("topic01",key,"value"+ key);
producer.send(record);
}
producer.flush();
producer.close();
HTTP/1.1 defines idempotence as follows: one request and multiple requests for the same resource should have the same effect on the resource itself (network timeouts and similar issues aside). In other words, executing it any number of times affects the resource exactly as executing it once would.
Methods can also have the property of “idempotence” in that (aside from error or expiration issues) the side-effects of N > 0 identical requests is the same as for a single request.
Kafka added support for idempotence in version 0.11.0.0. Idempotence is a producer-side property: it guarantees that messages sent by a producer are neither lost nor duplicated. The key to implementing it is that the server can tell whether a request is a duplicate and filter duplicates out. That requires two things:
A unique identifier: to tell whether a request is a duplicate, the request must carry a unique identifier; in a payment request, for example, the order number is that identifier.
A record of processed identifiers: the identifier alone is not enough; the server must also remember which requests it has already handled, so that when a new request arrives its identifier can be compared against that record, and if the identifier is already present the request is a duplicate and is rejected.
Kafka may have many producers generating messages at the same time, but it only needs to guarantee idempotence within each individual producer, so a PID is introduced to identify each producer.
For Kafka, the problem to solve is the idempotence of producer sends, i.e. deciding whether each message is a duplicate.
Kafka attaches a Sequence Number to every message and uses it to distinguish messages. Each message is written to a single partition, and messages produced to different partitions cannot collide, so the Sequence Number is tracked per partition.
The broker keeps the latest sequence number in its cache; for each incoming message, if its sequence number is exactly one greater than the cached value the message is accepted, otherwise it is discarded.
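As a conceptual illustration only (not Kafka's actual implementation), the broker-side check can be pictured like this:
import java.util.HashMap;
import java.util.Map;
// simplified sketch of per-(PID, partition) sequence tracking
public class SequenceDeduplicator {
    // key: "pid-partition", value: last accepted sequence number
    private final Map<String, Long> lastSequence = new HashMap<>();
    public boolean tryAccept(long pid, int partition, long sequence) {
        String key = pid + "-" + partition;
        long last = lastSequence.getOrDefault(key, -1L);
        if (sequence == last + 1) {          // next expected sequence: accept
            lastSequence.put(key, sequence);
            return true;
        }
        return false;                        // duplicate or out-of-order: reject
    }
}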
Properties props = new Properties();
// required settings
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,"CentOS:9092");
// default settings (optional)
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,StringSerializer.class);
// tuning: batching and buffering
props.put(ProducerConfig.BATCH_SIZE_CONFIG,1000);// batch size of 1000 bytes per partition
props.put(ProducerConfig.LINGER_MS_CONFIG,1000);// wait at most 1 second before sending a batch
// acks=1: only the leader must write successfully (reasonably safe, reasonably fast); acks=0: no broker acknowledgement (fastest, may lose data); acks=-1/all: all ISR replicas must be in sync before the broker acknowledges
props.put(ProducerConfig.ACKS_CONFIG,"-1");
// by default the producer waits up to 30 s for an acknowledgement; if none arrives in time it retries up to the configured number of retries (the 1 ms timeout below is deliberately tiny to force retries in this demo)
props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG,1);
props.put(ProducerConfig.RETRIES_CONFIG,5);
// enable producer idempotence to eliminate the duplicates caused by retries (new feature introduced in Kafka 0.11)
props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG,true);
Producer<String,String> producer= new KafkaProducer<String,String>(props);
for (int i = 0; i <5; i++) {
DecimalFormat format = new DecimalFormat("00");
String key = format.format(i);
// by default the partition is chosen as hash(key) % number of partitions
ProducerRecord<String,String> record=new ProducerRecord<String, String>("topic01",key,"value"+ key);
producer.send(record);
}
producer.flush();
producer.close();
Idempotence and retries only guarantee the atomic write of a single record within one partition. If you need atomic writes across partitions, you must enable Kafka's transaction support.
Producer only
Properties props = new Properties();
// required settings
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,"CentOS:9092");
// default settings (optional)
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,StringSerializer.class);
// tuning: batching and buffering
props.put(ProducerConfig.BATCH_SIZE_CONFIG,1000);// batch size of 1000 bytes per partition
props.put(ProducerConfig.LINGER_MS_CONFIG,1000);// wait at most 1 second before sending a batch
// acks=1: only the leader must write successfully (reasonably safe, reasonably fast); acks=0: no broker acknowledgement (fastest, may lose data); acks=-1/all: all ISR replicas must be in sync before the broker acknowledges
props.put(ProducerConfig.ACKS_CONFIG,"-1");
// by default the producer waits up to 30 s for an acknowledgement; if none arrives in time it retries up to the configured number of retries
props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG,5000);
props.put(ProducerConfig.RETRIES_CONFIG,5);
// enable producer idempotence to eliminate the duplicates caused by retries (new feature introduced in Kafka 0.11)
props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG,true);
//1. configure the transactional id; it must be unique
props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG,"transaction-id001");
Producer<String,String> producer= new KafkaProducer<String,String>(props);
//2. initialize transactions
producer.initTransactions();
try{
//3. begin the transaction
producer.beginTransaction();
for (int i = 0; i <5; i++) {
DecimalFormat format = new DecimalFormat("00");
String key = format.format(i);
// by default the partition is chosen as hash(key) % number of partitions
ProducerRecord<String,String> record=new ProducerRecord<String, String>("topic01",key,"value"+ key);
producer.send(record);
if(i==3) {
int b=i/0;
}
producer.flush();
}
//4. commit the transaction
producer.commitTransaction();
}catch (Exception e){
System.err.println(e.getMessage());
//5. abort the transaction
producer.abortTransaction();
}
producer.close();
Note that the consumer side must raise its transaction isolation level, otherwise it may read data the producer has not yet committed.
Properties props = new Properties();
// required settings
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,"CentOS:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG,"g3");
// default settings (shown explicitly)
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
// if the group has no committed offset, consumption starts from the latest offset by default; set to earliest to start from the beginning (default: latest)
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG,"latest");
// disable auto-commit; auto.commit.interval.ms then no longer takes effect
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG,false);
props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG,5000);
// if the producer uses transactions, the consumer's default isolation level is read_uncommitted; it must be set to read_committed
props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG,"read_committed");
Consumer<String,String> consumer= new KafkaConsumer<String,String>(props);
// subscribe to all topics whose names start with "topic"
consumer.subscribe(Pattern.compile("^topic.*")); //consumer.subscribe(Arrays.asList("topic01"));
// poll and iterate over the returned records
try {
while (true){
ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));// poll, waiting up to 1 second
// collect the partition/offset information
Map<TopicPartition,OffsetAndMetadata> offsets=new HashMap<TopicPartition,OffsetAndMetadata>();
if(records!=null && !records.isEmpty()){// iterate over the results
Iterator<ConsumerRecord<String, String>> recordIterator = records.iterator();
while (recordIterator.hasNext()){
ConsumerRecord<String, String> record = recordIterator.next();
// extract the partition and offset
TopicPartition topicPartition=new TopicPartition(record.topic(),record.partition());
// must add 1: the committed offset is the next position to read
OffsetAndMetadata currentOffsetAndMetadata=new OffsetAndMetadata(record.offset()+1);
offsets.put(topicPartition,currentOffsetAndMetadata);
System.out.println(record.key()+"\t"+record.value()+"\t"+record.offset()+"\t"+record.partition()+"\t"+record.timestamp());
}
}
// commit the offsets
consumer.commitSync(offsets,Duration.ofSeconds(1));
offsets.clear();
}
} catch (Exception e) {
consumer.close();
}
Consumer & producer
//1. build the producer and the consumer
KafkaProducer<String,String> producer=buildKafkaProducer();
KafkaConsumer<String, String> consumer = buildKafkaConsumer("group01");
consumer.subscribe(Arrays.asList("topic01"));
producer.initTransactions();// initialize transactions
try{
while(true){
ConsumerRecords<String, String> consumerRecords = consumer.poll(Duration.ofSeconds(1));
Iterator<ConsumerRecord<String, String>> consumerRecordIterator = consumerRecords.iterator();
// begin transaction control
producer.beginTransaction();
Map<TopicPartition, OffsetAndMetadata> offsets=new HashMap<TopicPartition, OffsetAndMetadata>();
while (consumerRecordIterator.hasNext()){
ConsumerRecord<String, String> record = consumerRecordIterator.next();
// create the outgoing record
ProducerRecord<String,String> producerRecord=new ProducerRecord<String,String>("topic02",record.key(),record.value());
producer.send(producerRecord);
// record the consumed offset metadata
offsets.put(new TopicPartition(record.topic(),record.partition()),new OffsetAndMetadata(record.offset()+1));
}
// commit the consumed offsets as part of the transaction, then commit the transaction
producer.sendOffsetsToTransaction(offsets,"group01");
producer.commitTransaction();
}
}catch (Exception e){
producer.abortTransaction();// abort the transaction
}finally {
producer.close();
}
public static KafkaProducer<String,String> buildKafkaProducer(){
Properties props=new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,"CentOSA:9092,CentOSB:9092,CentOSC:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,StringSerializer.class.getName());
props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG,"transaction-id");
return new KafkaProducer<String, String>(props);
}
public static KafkaConsumer<String,String> buildKafkaConsumer(String group){
Properties props=new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,"CentOSA:9092,CentOSB:9092,CentOSC:9092");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,StringDeserializer.class.getName());
props.put(ConsumerConfig.GROUP_ID_CONFIG,group);
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG,false);
props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG,"read_committed");
return new KafkaConsumer<String, String>(props);
}
Kafka Eagle is a monitoring system that watches your Kafka cluster and visualizes consumer threads, offsets, owners, and more. Once Kafka Eagle is installed, you can see the current consumer groups and, for each group, the Topics it is consuming along with its offsets, lag, log size, and position in each Topic. This is useful for understanding how fast consumers read from the message queue and how fast the queue grows.
Download: https://codeload.github.com/smartloli/kafka-eagle-bin/tar.gz/v1.4.0
Installation
[root@CentOS ~]# tar -zxf kafka-eagle-web-1.4.0-bin.tar.gz -C /usr/
[root@CentOS ~]# mv /usr/kafka-eagle-web-1.4.0 /usr/kafka-eagle
[root@CentOS ~]# vi .bashrc
KE_HOME=/usr/kafka-eagle
M2_HOME=/usr/apache-maven-3.6.3
SQOOP_HOME=/usr/sqoop-1.4.7
HIVE_HOME=/usr/apache-hive-1.2.2-bin
JAVA_HOME=/usr/java/latest
HADOOP_HOME=/usr/hadoop-2.9.2/
HBASE_HOME=/usr/hbase-1.2.4/
ZOOKEEPER_HOME=/usr/zookeeper-3.4.6
PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HBASE_HOME/bin:$M2_HOME/bin:$HIVE_HOME/bin:$SQOOP_HOME/bin:$ZOOKEEPER_HOME/bin:$KE_HOME/bin
CLASSPATH=.
export JAVA_HOME
export PATH
export CLASSPATH
export HADOOP_HOME
export HBASE_HOME
HBASE_CLASSPATH=$(/usr/hbase-1.2.4/bin/hbase classpath)
HADOOP_CLASSPATH=/root/mysql-connector-java-5.1.49.jar
export HADOOP_CLASSPATH
export M2_HOME
export HIVE_HOME
export SQOOP_HOME
export ZOOKEEPER_HOME
export KE_HOME
[root@CentOS ~]# source .bashrc
[root@CentOS ~]# cd /usr/kafka-eagle/
[root@CentOS kafka-eagle]# vi conf/system-config.properties
######################################
# multi zookeeper&kafka cluster list
######################################
kafka.eagle.zk.cluster.alias=cluster1
cluster1.zk.list=CentOS:2181
######################################
# zk client thread limit
######################################
kafka.zk.limit.size=25
######################################
# kafka eagle webui port
######################################
kafka.eagle.webui.port=8048
######################################
# kafka offset storage
######################################
cluster1.kafka.eagle.offset.storage=kafka
######################################
# kafka metrics, 30 days by default
######################################
kafka.eagle.metrics.charts=true
kafka.eagle.metrics.retain=30
######################################
# kafka sql topic records max
######################################
kafka.eagle.sql.topic.records.max=5000
kafka.eagle.sql.fix.error=false
######################################
# delete kafka topic token
######################################
kafka.eagle.topic.token=keadmin
######################################
# kafka sasl authenticate
######################################
cluster1.kafka.eagle.sasl.enable=false
cluster1.kafka.eagle.sasl.protocol=SASL_PLAINTEXT
cluster1.kafka.eagle.sasl.mechanism=SCRAM-SHA-256
cluster1.kafka.eagle.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="kafka" password="kafka-eagle";
cluster1.kafka.eagle.sasl.client.id=
######################################
# kafka sqlite jdbc driver address
######################################
#kafka.eagle.driver=org.sqlite.JDBC
#kafka.eagle.url=jdbc:sqlite:/hadoop/kafka-eagle/db/ke.db
#kafka.eagle.username=root
#kafka.eagle.password=www.kafka-eagle.org
######################################
# kafka mysql jdbc driver address
######################################
kafka.eagle.driver=com.mysql.jdbc.Driver
kafka.eagle.url=jdbc:mysql://CentOS:3306/ke?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull
kafka.eagle.username=root
kafka.eagle.password=root
[root@CentOS kafka-eagle]# chmod u+x bin/ke.sh
To collect Kafka performance metrics, you also need to modify the Kafka startup script:
[root@CentOS ~]# cd /usr/kafka_2.11-2.2.0/
[root@CentOS kafka_2.11-2.2.0]# vi bin/kafka-server-start.sh
...
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-server -Xms2G -Xmx2G -XX:PermSize=128m -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=8 -XX:ConcGCThreads=5 -XX:InitiatingHeapOccupancyPercent=70"
export JMX_PORT="9999"
#export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
fi
Restart the Kafka service: use kafka-server-stop.sh to shut Kafka down first, then start it again as shown below.
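A minimal restart sequence, assuming the same installation path as above:
[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-server-stop.sh
[root@CentOS kafka_2.11-2.2.0]# ./bin/kafka-server-start.sh -daemon config/server.properties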
[root@CentOS kafka-eagle]# ./bin/ke.sh start
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = CentOS
a1.sources.r1.port = 44444
# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = topic01
a1.sinks.k1.kafka.bootstrap.servers = CentOSA:9092,CentOSB:9092,CentOSC:9092
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = -1
a1.sinks.k1.kafka.producer.linger.ms = 100
a1.sinks.k1.kafka.producer.compression.type = snappy
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
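Assuming the configuration above is saved, for example, as conf/kafka-sink.properties under the Flume installation directory (the path is illustrative), the agent can be started and tested roughly like this:
[root@CentOS apache-flume-1.9.0-bin]# ./bin/flume-ng agent \
--conf conf \
--conf-file conf/kafka-sink.properties \
--name a1 \
-Dflume.root.logger=INFO,console
[root@CentOS ~]# nc CentOS 44444
hello kafka
OK
Lines typed into the netcat connection should then appear in a kafka-console-consumer subscribed to topic01.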
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.1.5.RELEASE</version>
</parent>
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>
@SpringBootTest(classes = {KafkaSpringBootApplication.class})
@RunWith(SpringRunner.class)
public class KafkaTempolateTests {
@Autowired
private KafkaTemplate kafkaTemplate;
@Autowired
private IOrderService orderService;
@Test
public void testOrderService(){
orderService.saveOrder("001","baizhi edu ");
}
@Test
public void testKafkaTemplate(){
kafkaTemplate.executeInTransaction(new KafkaOperations.OperationsCallback() {
@Override
public Object doInOperations(KafkaOperations kafkaOperations) {
return kafkaOperations.send(new ProducerRecord("topic01","002","this is a demo"));
}
});
}
}
spring.kafka.bootstrap-servers=CentOSA:9092,CentOSB:9092,CentOSC:9092
spring.kafka.producer.retries=5
spring.kafka.producer.acks=all
spring.kafka.producer.batch-size=16384
spring.kafka.producer.buffer-memory=33554432
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.properties.enable.idempotence=true
spring.kafka.producer.transaction-id-prefix=transaction-id-
spring.kafka.consumer.group-id=group1
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=true
spring.kafka.consumer.auto-commit-interval=100
spring.kafka.consumer.properties.isolation.level=read_committed
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
@SpringBootApplication
@EnableKafkaStreams
@EnableKafka
public class KafkaSpringBootApplication {
public static void main(String[] args) throws IOException {
SpringApplication.run(KafkaSpringBootApplication.class,args);
System.in.read();
}
@KafkaListeners(value = {@KafkaListener(topics = {"topic04"})})
@SendTo(value = {"topic05"})
public String listenner(ConsumerRecord<?, ?> cr) {
return cr.value()+" baizhi edu";
}
}
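To observe the records forwarded by @SendTo, one could add another listener method inside KafkaSpringBootApplication, for example (a sketch; it assumes topic05 exists):
@KafkaListeners(value = {@KafkaListener(topics = {"topic05"})})
public void listenTopic05(ConsumerRecord<?, ?> cr) {
    // print whatever was forwarded from topic04 to topic05
    System.out.println("received from topic05: " + cr.value());
}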
@Transactional
@Service
public class OrderService implements IOrderService {
@Autowired
private KafkaTemplate kafkaTemplate;
@Override
public void saveOrder(String id,Object message) {
kafkaTemplate.send(new ProducerRecord("topic04",id,message));
}
}
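The IOrderService interface referenced above is not shown in this section; judging from its usage, it is presumably just:
public interface IOrderService {
    void saveOrder(String id, Object message);
}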