Setting Up a ZooKeeper + Kafka Cluster, with a Spring Boot Kafka Demo

(Figure: Kafka cluster architecture diagram, kafka集群原理.png)

Kafka Cluster Environment Setup

Preparing the Servers

Use a VM hypervisor to create three Linux hosts:

192.168.212.174

192.168.212.175

192.168.212.176

Setting Up the ZooKeeper Cluster

1. Install JDK 1.8 on every server node

Verify the installation with: java -version

2. Install ZooKeeper on every server node

1. Download the ZooKeeper package

wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz

2. Extract the archive

tar -zxvf zookeeper-3.4.10.tar.gz

3. Rename the directory

mv zookeeper-3.4.10 zookeeper

3. Configure the ZooKeeper Cluster

Modify the zoo_sample.cfg file:

cd /usr/local/zookeeper/conf

mv zoo_sample.cfg zoo.cfg

vi zoo.cfg

Make two changes:

(1) dataDir=/usr/local/zookeeper/data (also create the data directory under /usr/local/zookeeper)

(2) Append at the end:

server.0=192.168.212.174:2888:3888
server.1=192.168.212.175:2888:3888
server.2=192.168.212.176:2888:3888
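Putting the two edits together, a complete zoo.cfg for this three-node ensemble might look like the following; the tickTime/initLimit/syncLimit/clientPort values are assumed to be the defaults inherited from zoo_sample.cfg in zookeeper-3.4.10:

```properties
# Basic time unit in milliseconds (default kept from zoo_sample.cfg)
tickTime=2000
# Ticks allowed for followers to connect and sync with the leader
initLimit=10
# Ticks allowed between a follower request and its acknowledgement
syncLimit=5
# Data directory; must exist and must contain the myid file
dataDir=/usr/local/zookeeper/data
# Port clients (including Kafka brokers) connect to
clientPort=2181
# server.<id>=<host>:<peer-port>:<leader-election-port>
server.0=192.168.212.174:2888:3888
server.1=192.168.212.175:2888:3888
server.2=192.168.212.176:2888:3888
```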

4. Create the server ID

In /usr/local/zookeeper, create the data directory: mkdir data

Inside it, create a file named myid whose content is this node's server ID: vi myid (content: 0 on this node)

5. Copy ZooKeeper to the other nodes

Copy the zookeeper directory, along with the /etc/profile file, to the other two nodes (hadoop01 and hadoop02), then change the value in their myid files to 1 and 2 respectively (vi /usr/local/zookeeper/data/myid).
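The rule being applied here is that each node's myid value must match the N in the corresponding server.N line of zoo.cfg. A small sketch of that mapping (ZooServerIds is a hypothetical helper written only to illustrate the id-to-host correspondence, not part of ZooKeeper):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ZooServerIds {
    // Parse lines like "server.0=192.168.212.174:2888:3888" into an id -> host map
    public static Map<Integer, String> parse(String[] cfgLines) {
        Map<Integer, String> ids = new LinkedHashMap<>();
        for (String line : cfgLines) {
            if (!line.startsWith("server.")) continue;       // skip non-ensemble settings
            String[] kv = line.split("=", 2);
            int id = Integer.parseInt(kv[0].substring("server.".length()));
            String host = kv[1].split(":")[0];
            ids.put(id, host);  // the node at 'host' must write 'id' into its myid file
        }
        return ids;
    }

    public static void main(String[] args) {
        String[] cfg = {
            "server.0=192.168.212.174:2888:3888",
            "server.1=192.168.212.175:2888:3888",
            "server.2=192.168.212.176:2888:3888"
        };
        // e.g. the 192.168.212.175 node puts "1" in /usr/local/zookeeper/data/myid
        System.out.println(parse(cfg));
    }
}
```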

Disable the firewall on every server node: systemctl stop firewalld.service

Start ZooKeeper

On each of the three machines, from /usr/local/zookeeper/bin, run: zkServer.sh start. Then check the status: zkServer.sh status. Across the three nodes you should see one leader and two followers.
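The one-leader-two-followers result reflects ZooKeeper's majority rule: the ensemble stays available only while more than half of its nodes are up. The arithmetic behind that (plain math, not a ZooKeeper API):

```java
public class Quorum {
    // Minimum number of live nodes for a ZooKeeper ensemble of size n to function
    public static int quorumSize(int n) {
        return n / 2 + 1;
    }

    // Number of node failures an ensemble of size n can tolerate
    public static int tolerated(int n) {
        return n - quorumSize(n);
    }

    public static void main(String[] args) {
        // A 3-node ensemble needs 2 live nodes, so it tolerates 1 failure
        System.out.println("quorum=" + quorumSize(3) + ", tolerated failures=" + tolerated(3));
    }
}
```

This is why three nodes is the smallest useful ensemble: with two nodes, losing either one breaks the majority.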

Installing and Configuring Kafka

Run the following on all three virtual machines:

// Download and extract the Kafka package, then rename it

cd /usr/local

wget http://mirror.bit.edu.cn/apache/kafka/1.0.0/kafka_2.11-1.0.0.tgz

tar -zxvf kafka_2.11-1.0.0.tgz

mv kafka_2.11-1.0.0 kafka

// Edit the configuration file

vi ./kafka/config/server.properties

On 192.168.212.174, set:

broker.id=0

listeners=PLAINTEXT://192.168.212.174:9092

zookeeper.connect=192.168.212.174:2181,192.168.212.175:2181,192.168.212.176:2181

On 192.168.212.175, set:

broker.id=1

listeners=PLAINTEXT://192.168.212.175:9092

zookeeper.connect=192.168.212.174:2181,192.168.212.175:2181,192.168.212.176:2181

On 192.168.212.176, set:

broker.id=2

listeners=PLAINTEXT://192.168.212.176:9092

zookeeper.connect=192.168.212.174:2181,192.168.212.175:2181,192.168.212.176:2181

// Add Kafka to the system environment

vi /etc/profile

// Append at the end of the file:

export KAFKA_HOME=/usr/local/kafka

PATH=${KAFKA_HOME}/bin:$PATH

export PATH

// Reload the environment variables

source /etc/profile


Testing the Kafka Cluster

1. Start ZooKeeper on all three virtual machines

/usr/local/zookeeper/bin/zkServer.sh start

Once started, check the ensemble status:

/usr/local/zookeeper/bin/zkServer.sh status

Mode: follower or Mode: leader in the output indicates success.

2. Start Kafka in the background on all three machines (cd /usr/local/kafka)

./bin/kafka-server-start.sh -daemon config/server.properties

3. On one of the machines (192.168.212.174), create a topic:

/usr/local/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.212.174:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic

// View the created topic's details

/usr/local/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.212.174:2181 --topic my-replicated-topic

Integrating Kafka with Spring Boot

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@SpringBootApplication
public class KafkaController {

    /**
     * Inject the KafkaTemplate
     */
    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    /**
     * Send a message
     *
     * @param key  the record key
     * @param data the record payload
     */
    private void send(String key, String data) {
        // topic name, record key, record payload
        kafkaTemplate.send("my_test", key, data);
    }

    @RequestMapping("/kafka")
    public String testKafka() {
        int iMax = 6;
        for (int i = 1; i < iMax; i++) {
            send("key" + i, "data" + i);
        }
        return "success";
    }

    public static void main(String[] args) {
        SpringApplication.run(KafkaController.class, args);
    }

    /**
     * Consumer: log each received message
     */
    @KafkaListener(topics = "my_test")
    public void receive(ConsumerRecord<String, String> consumer) {
        System.out.println("topic: " + consumer.topic() + ", key: " + consumer.key()
                + ", partition: " + consumer.partition() + ", offset: " + consumer.offset());
    }
}
```
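The send() calls above pass a key, and Kafka routes records with the same key to the same partition by hashing the key. The sketch below only illustrates that property; Kafka's real default partitioner uses murmur2 over the serialized key bytes, not String.hashCode:

```java
public class KeyPartitioner {
    // Simplified stand-in for Kafka's default key-based partitioning.
    // The mask keeps the hash non-negative before taking the modulo.
    public static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // Same keys as the testKafka() loop, against a 3-partition topic
        for (int i = 1; i < 6; i++) {
            String key = "key" + i;
            System.out.println(key + " -> partition " + partitionFor(key, 3));
        }
    }
}
```

The practical consequence is ordering: all records sharing a key land in one partition, so the listener receives them in the order they were sent.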

resources (application.yml):
```yaml
spring:
  kafka:
    # Kafka broker addresses (one or more)
    bootstrap-servers: 192.168.212.174:9092,192.168.212.175:9092,192.168.212.176:9092
    consumer:
      # Default consumer group id
      group-id: kafka2
      # earliest: if a partition has a committed offset, resume from it; otherwise consume from the beginning
      # latest: if a partition has a committed offset, resume from it; otherwise consume only newly produced records
      # none: resume from committed offsets only if every partition has one; throw an exception if any partition has none
      auto-offset-reset: earliest
      # key/value deserializers
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    producer:
      # key/value serializers
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      # batch size in bytes
      batch-size: 65536
      # producer buffer memory in bytes
      buffer-memory: 524288
      # Kafka broker addresses
      bootstrap-servers: 192.168.212.174:9092,192.168.212.175:9092,192.168.212.176:9092
```
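The three auto-offset-reset values described in the comments can be summed up as a small decision function. This is a toy model of the behavior, not a Spring Kafka or Kafka client API:

```java
import java.util.OptionalLong;

public class OffsetReset {
    // Decide where a consumer starts for one partition, given the committed
    // offset (if any), the log end offset, and the auto-offset-reset policy.
    public static long startOffset(OptionalLong committed, long logEnd, String policy) {
        if (committed.isPresent()) {
            return committed.getAsLong();   // every policy resumes from a committed offset
        }
        switch (policy) {
            case "earliest": return 0L;     // no committed offset: read from the beginning
            case "latest":   return logEnd; // no committed offset: read only new records
            case "none":     throw new IllegalStateException("no committed offset for partition");
            default:         throw new IllegalArgumentException("unknown policy: " + policy);
        }
    }

    public static void main(String[] args) {
        System.out.println(startOffset(OptionalLong.of(42), 100, "latest"));    // resumes at 42
        System.out.println(startOffset(OptionalLong.empty(), 100, "earliest")); // 0
    }
}
```

With earliest configured as above, a brand-new group-id (such as kafka2 on first run) will replay the topic from the start, which is convenient for this demo.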
