Kafka Cluster Installation and Usage

Kafka Cluster Installation

ZooKeeper must be installed before installing Kafka: https://hubaoquan.cn/zookeeperanzhuangjijiandanshiyong/
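Before continuing, it is worth confirming that ZooKeeper is running on every node. A minimal check, assuming ZooKeeper's bin directory is on the PATH:

[root@ecs-az3-yc-0020 ~]$ zkServer.sh status
# Expect "Mode: leader" on one node and "Mode: follower" on the others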

1. Cluster planning

IP: 10.7.68.213  Hostname: ecs-az3-yc-0020  broker.id=0
IP: 10.7.68.188  Hostname: ecs-az3-yc-0021  broker.id=1
IP: 10.7.68.197  Hostname: ecs-az3-yc-0022  broker.id=2

Port: 9092
Kafka version: kafka_2.12-2.2.0.tgz
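Later commands address the brokers by hostname, so the hostnames must resolve on every node. If they do not already (an assumption about your environment), add entries like the following to /etc/hosts on each machine:

10.7.68.213 ecs-az3-yc-0020
10.7.68.188 ecs-az3-yc-0021
10.7.68.197 ecs-az3-yc-0022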

2. Download the installation package

[root@ecs-az3-yc-0020 module]$ wget http://soft.hubaoquan.cn/kafka_2.12-2.2.0.tgz
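If that mirror is unavailable, the same release should also be downloadable from the Apache archive (URL assumed from the standard archive layout):

[root@ecs-az3-yc-0020 module]$ wget https://archive.apache.org/dist/kafka/2.2.0/kafka_2.12-2.2.0.tgz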

3. Kafka cluster deployment

1) Extract the installation package
[root@ecs-az3-yc-0020 module]$ tar -zxvf kafka_2.12-2.2.0.tgz
2) Create a logs folder under /opt/module/kafka_2.12-2.2.0
[root@ecs-az3-yc-0020 kafka_2.12-2.2.0]$ mkdir logs
3) Edit the configuration file
[root@ecs-az3-yc-0020 kafka_2.12-2.2.0]$ cd config/
[root@ecs-az3-yc-0020 config]$ vi server.properties
Modify the items marked with *; the other settings can be adjusted for your host environment or left at their defaults:


#************ Globally unique broker id; must not be duplicated
broker.id=0

#Enable topic deletion
delete.topic.enable=true

#Number of threads handling network requests
num.network.threads=3

#Number of threads handling disk I/O
num.io.threads=8

#Send buffer size of the socket
socket.send.buffer.bytes=102400

#Receive buffer size of the socket
socket.receive.buffer.bytes=102400

#Maximum size of a socket request
socket.request.max.bytes=104857600

#***** Directory where Kafka stores its log data (topic partition segments)
log.dirs=/opt/module/kafka_2.12-2.2.0/logs

#Default number of partitions per topic on this broker
num.partitions=1

#Number of threads per data directory used to recover and clean up data
num.recovery.threads.per.data.dir=1

#Maximum time a segment file is retained before it is deleted
log.retention.hours=168

#***** ZooKeeper cluster connection string
zookeeper.connect=10.7.68.197:2181,10.7.68.188:2181,10.7.68.213:2181
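Depending on how clients reach the brokers (an assumption about your network setup), you may also want to pin the listener address explicitly on each broker; for example, on broker 0:

#Address this broker binds to; clients are given this address unless advertised.listeners is set
listeners=PLAINTEXT://10.7.68.213:9092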

4) Configure environment variables
[root@ecs-az3-yc-0020 module]$ sudo vi /etc/profile
Add the following:

#KAFKA_HOME
export KAFKA_HOME=/opt/module/kafka_2.12-2.2.0
export PATH=$PATH:$KAFKA_HOME/bin

Apply the environment variables:
[root@ecs-az3-yc-0020 module]$ source /etc/profile
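A quick sanity check that the variables took effect (a sketch; the --version flag on the Kafka tools is available in Kafka 2.0 and later):

[root@ecs-az3-yc-0020 module]$ echo $KAFKA_HOME
/opt/module/kafka_2.12-2.2.0
[root@ecs-az3-yc-0020 module]$ kafka-topics.sh --version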

5) Distribute the installation package
[root@ecs-az3-yc-0020 module]$ scp -r /opt/module/kafka_2.12-2.2.0 root@ecs-az3-yc-0021:/opt/module
[root@ecs-az3-yc-0020 module]$ scp -r /opt/module/kafka_2.12-2.2.0 root@ecs-az3-yc-0022:/opt/module

6) On ecs-az3-yc-0021 and ecs-az3-yc-0022, edit /opt/module/kafka_2.12-2.2.0/config/server.properties and set broker.id=1 and broker.id=2 respectively
Note: broker.id must be unique across brokers
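As a sketch, the same change can be made non-interactively with sed on each node (assuming the copied file still contains broker.id=0):

[root@ecs-az3-yc-0021 ~]$ sed -i 's/^broker.id=0/broker.id=1/' /opt/module/kafka_2.12-2.2.0/config/server.properties
[root@ecs-az3-yc-0022 ~]$ sed -i 's/^broker.id=0/broker.id=2/' /opt/module/kafka_2.12-2.2.0/config/server.properties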

7) Start the cluster
Start Kafka on the ecs-az3-yc-0020, ecs-az3-yc-0021, and ecs-az3-yc-0022 nodes in turn:

[root@ecs-az3-yc-0020 kafka_2.12-2.2.0]$ bin/kafka-server-start.sh config/server.properties &
[root@ecs-az3-yc-0021 kafka_2.12-2.2.0]$ bin/kafka-server-start.sh config/server.properties &
[root@ecs-az3-yc-0022 kafka_2.12-2.2.0]$ bin/kafka-server-start.sh config/server.properties &
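Starting with a trailing & keeps the broker attached to the current shell. The start script also accepts a -daemon flag to run in the background, and jps can confirm the broker process is up:

[root@ecs-az3-yc-0020 kafka_2.12-2.2.0]$ bin/kafka-server-start.sh -daemon config/server.properties
[root@ecs-az3-yc-0020 kafka_2.12-2.2.0]$ jps | grep Kafka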

8) Stop the cluster

[root@ecs-az3-yc-0020 kafka_2.12-2.2.0]$ bin/kafka-server-stop.sh
[root@ecs-az3-yc-0021 kafka_2.12-2.2.0]$ bin/kafka-server-stop.sh
[root@ecs-az3-yc-0022 kafka_2.12-2.2.0]$ bin/kafka-server-stop.sh

4. Kafka command-line operations

1) List all topics in the cluster
[root@ecs-az3-yc-0020 kafka_2.12-2.2.0]$ bin/kafka-topics.sh --zookeeper ecs-az3-yc-0020:2181 --list
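Since Kafka 2.2.0, kafka-topics.sh can also talk to the brokers directly with --bootstrap-server instead of going through ZooKeeper; for example:

[root@ecs-az3-yc-0020 kafka_2.12-2.2.0]$ bin/kafka-topics.sh --bootstrap-server 10.7.68.213:9092 --list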
2) Create a topic

[root@ecs-az3-yc-0020 kafka_2.12-2.2.0]# bin/kafka-topics.sh --zookeeper ecs-az3-yc-0020:2181 --create --replication-factor 3 --partitions 1 --topic hbqtest
Created topic hbqtest.

Option descriptions:
--topic defines the topic name
--replication-factor defines the number of replicas
--partitions defines the number of partitions
3) Check that the new hbqtest topic appears

[root@ecs-az3-yc-0020 kafka_2.12-2.2.0]# bin/kafka-topics.sh --zookeeper ecs-az3-yc-0020:2181 --list
hbqtest
nts-foundation-bids

4) Produce messages from ecs-az3-yc-0020

[root@ecs-az3-yc-0020 kafka_2.12-2.2.0]# bin/kafka-console-producer.sh --broker-list 10.7.68.213:9092 --topic hbqtest
>hello world
>hi kafka
>^C

5) Consume messages on ecs-az3-yc-0022

Consume messages from the current position:

[root@ecs-az3-yc-0022 kafka_2.12-2.2.0]# bin/kafka-console-consumer.sh --bootstrap-server 10.7.68.213:9092  --topic hbqtest
hello world
hi kafka

Consume messages from the beginning:

[root@ecs-az3-yc-0022 kafka_2.12-2.2.0]# bin/kafka-console-consumer.sh --bootstrap-server 10.7.68.213:9092 --from-beginning --topic hbqtest

--from-beginning: reads all historical data in the hbqtest topic. Add this option only if your use case requires it.
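By default, the console consumer joins an auto-generated console-consumer-* group, so committed offsets are not reused across runs. Passing an explicit group (hbq-group below is a hypothetical name) keeps offsets stable between restarts:

[root@ecs-az3-yc-0022 kafka_2.12-2.2.0]# bin/kafka-console-consumer.sh --bootstrap-server 10.7.68.213:9092 --topic hbqtest --group hbq-group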

Consume and display message keys:

[root@ecs-az3-yc-0022 kafka_2.12-2.2.0]# bin/kafka-console-consumer.sh --bootstrap-server  10.7.68.213:9092 --from-beginning --property print.key=true  --topic hbqtest
null   hello world
null   hi kafka
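The keys print as null because the console producer above sent value-only messages. As a sketch, keys can be attached by enabling key parsing on the producer (parse.key and key.separator are console-producer properties; the key and value below are illustrative):

[root@ecs-az3-yc-0020 kafka_2.12-2.2.0]# bin/kafka-console-producer.sh --broker-list 10.7.68.213:9092 --topic hbqtest --property parse.key=true --property key.separator=:
>user1:hello with a key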

6) Describe a topic

[root@ecs-az3-yc-0021 kafka_2.12-2.2.0]# bin/kafka-topics.sh --zookeeper 10.7.68.213:2181 --describe --topic hbqtest
Topic:hbqtest PartitionCount:1 ReplicationFactor:3 Configs:
Topic:hbqtest Partition: 0 Leader: 3 Replicas: 3,1,0 Isr: 3,1,0

7) Delete a topic

[root@ecs-az3-yc-0021 kafka_2.12-2.2.0]# bin/kafka-topics.sh --zookeeper 10.7.68.197:2181 --delete --topic hbqtest
Topic hbqtest is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.

delete.topic.enable=true must be set in server.properties; otherwise the topic is only marked for deletion and the brokers must be restarted for it to actually be removed.

8) After the topic is deleted, a running consumer will report errors because the topic can no longer be found

[20xx-04-24 17:00:26,545] WARN [Consumer clientId=consumer-1, groupId=console-consumer-14567] Received unknown topic or partition error in fetch for partition hbqtest-0 (org.apache.kafka.clients.consumer.internals.Fetcher)
[20xx-04-24 17:00:26,545] WARN [Consumer clientId=consumer-1, groupId=console-consumer-14567] Error while fetching metadata with correlation id 662 : {hbqtest=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
