Kafka Environment Setup

Contents

I. Kafka 0.8 Cluster Deployment

II. Kafka 0.10 Cluster Deployment

III. Kafka 1.1.1 Cluster Deployment

1. Pseudo-cluster deployment

2. Cluster deployment


I. Kafka 0.8 Cluster Deployment

Installation steps

1. Download the Kafka package: http://kafka.apache.org/downloads
2. Upload the package to the server
3. Extract it
4. Edit the configuration file config/server.properties

broker.id=0
host.name=node1
log.dirs=/data/kafka
zookeeper.connect=node1:2181,node2:2181,node3:2181

5. Copy the configured Kafka directory to the other machines and change broker.id and host.name on each machine
6. Disable the firewall on every node (and add the host mappings on Windows); see the sketch below
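
A minimal sketch of step 6, assuming CentOS 7 with firewalld; the IP addresses are placeholders for your own nodes:

#Disable the firewall on every node
[ ] systemctl stop firewalld
[ ] systemctl disable firewalld
#On Windows, add the host mappings to C:\Windows\System32\drivers\etc\hosts, for example:
#192.168.1.101 node1
#192.168.1.102 node2
#192.168.1.103 node3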

Kafka commands

#Start Kafka (start it separately on each machine)
[ ] /root/hadoop/kafka_2.11-0.8.2.2/bin/kafka-server-start.sh -daemon /root/hadoop/kafka_2.11-0.8.2.2/config/server.properties
#Create a topic
[ ] /bigdata/kafka_2.11-0.8.2.2/bin/kafka-topics.sh --create --zookeeper node-1:2181,node-2:2181 --replication-factor 3 --partitions 3 --topic xiaoniu
#View topic information
[ ] /bigdata/kafka_2.11-0.8.2.2/bin/kafka-topics.sh --list --zookeeper node6:2181,node7:2181,node8:2181
#Start a console producer
[ ] /root/hadoop/kafka_2.11-0.8.2.2/bin/kafka-console-producer.sh --broker-list node-6:9092,node7:9092,node8:9092,node9:9092 --topic lic
#Start a console consumer
[ ] /root/hadoop/kafka_2.11-0.8.2.2/bin/kafka-console-consumer.sh --zookeeper node6:2181,node7:2181,node8:2181 --topic lic --from-beginning
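
To confirm that every broker registered with ZooKeeper, you can query ZooKeeper directly; a sketch assuming the zookeeper-3.4.6 install path used elsewhere in this document:

[ ] /root/hadoop/zookeeper-3.4.6/bin/zkCli.sh -server node1:2181
#Inside the zkCli shell:
#ls /brokers/ids          lists one id per registered broker, e.g. [0, 1, 2]
#get /brokers/ids/0       shows the host and port that broker 0 advertised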


II. Kafka 0.10 Cluster Deployment

#Edit the Kafka configuration file
[ ] vi config/server.properties

broker.id=1
delete.topic.enable=true
listeners=PLAINTEXT://node6:9092
advertised.listeners=PLAINTEXT://node6:9092
log.dirs=/data/kafka
zookeeper.connect=node-1:2181,node-2:2181,node-3:2181

#On the other machines, change the broker-specific settings; each broker needs a unique broker.id, and listeners/advertised.listeners must use that machine's own hostname, e.g. on node7:

broker.id=2
listeners=PLAINTEXT://node7:9092
advertised.listeners=PLAINTEXT://node7:9092

Start Kafka (ZooKeeper must be running first)
[ ] /root/hadoop/zookeeper-3.4.6/bin/zkServer.sh start
[ ] /root/hadoop/kafka_2.11-0.10.2.1/bin/kafka-server-start.sh -daemon /root/hadoop/kafka_2.11-0.10.2.1/config/server.properties 
Stop Kafka
[ ] /root/hadoop/kafka_2.11-0.10.2.1/bin/kafka-server-stop.sh
Create a topic
[ ] /root/hadoop/kafka_2.11-0.10.2.1/bin/kafka-topics.sh --create --zookeeper node6:2181,node7:2181,node8:2181 --replication-factor 3 --partitions 3 --topic record
List all topics
[ ] /root/hadoop/kafka_2.11-0.10.2.1/bin/kafka-topics.sh --list --zookeeper node6:2181,node7:2181,node8:2181
Describe a topic
[ ] /root/hadoop/kafka_2.11-0.10.2.1/bin/kafka-topics.sh --describe --zookeeper node6:2181,node7:2181,node8:2181 --topic record
Start a console producer
[ ] /root/hadoop/kafka_2.11-0.10.2.1/bin/kafka-console-producer.sh --broker-list node6:9092,node7:9092,node8:9092,node9:9092 --topic usertest
Start a console consumer (connects through ZooKeeper)
[ ] /root/hadoop/kafka_2.11-0.10.2.1/bin/kafka-console-consumer.sh --zookeeper node6:2181,node7:2181,node8:2181 --topic user --from-beginning
Start a console consumer that connects to the brokers directly (bootstrap-server)
[ ] /root/hadoop/kafka_2.11-0.10.2.1/bin/kafka-console-consumer.sh --bootstrap-server node-1:9092,node-2:9092,node-3:9092 --topic xiaoniu --from-beginning
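
To check how far a consumer group has read, the kafka-consumer-groups tool can query offsets through the bootstrap servers; the group name below is only an illustration, take a real one from the --list output:

[ ] /root/hadoop/kafka_2.11-0.10.2.1/bin/kafka-consumer-groups.sh --bootstrap-server node6:9092,node7:9092,node8:9092 --list
[ ] /root/hadoop/kafka_2.11-0.10.2.1/bin/kafka-consumer-groups.sh --bootstrap-server node6:9092,node7:9092,node8:9092 --describe --group my-group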
 

III. Kafka 1.1.1 Cluster Deployment

1. Pseudo-cluster deployment

1. After downloading and extracting, make two copies of config/server.properties and rename them server-1.properties and server-2.properties

[ ] cp config/server.properties config/server-1.properties
[ ] cp config/server.properties config/server-2.properties

2. The contents of the three configuration files are as follows:

config/server.properties:

broker.id=0
listeners=PLAINTEXT://:9092
log.dir=/tmp/kafka-logs-0
zookeeper.connect=node1:2181,node2:2181,node3:2181

config/server-1.properties:

broker.id=1
listeners=PLAINTEXT://:9093
log.dir=/tmp/kafka-logs-1
zookeeper.connect=node1:2181,node2:2181,node3:2181

config/server-2.properties:

broker.id=2
listeners=PLAINTEXT://:9094
log.dir=/tmp/kafka-logs-2
zookeeper.connect=node1:2181,node2:2181,node3:2181

3. Start the three brokers

[ ] bin/kafka-server-start.sh -daemon config/server.properties
[ ] bin/kafka-server-start.sh -daemon config/server-1.properties
[ ] bin/kafka-server-start.sh -daemon config/server-2.properties
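
A quick way to confirm all three brokers came up on this machine; each runs in its own JVM with main class kafka.Kafka:

[ ] jps -l | grep kafka.Kafka
#Expect three lines; if one is missing, check logs/server.log under the Kafka install directory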

4. Create a topic

[ ] bin/kafka-topics.sh --create --zookeeper node1:2181,node2:2181,node3:2181 --replication-factor 3 --partitions 1 --topic lic

5. Describe the topic

[ ] bin/kafka-topics.sh --describe --zookeeper node1:2181,node2:2181,node3:2181 --topic lic

6. Test

Producer: [ ] bin/kafka-console-producer.sh --broker-list localhost:9092 --topic lic

Consumer: [ ] bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic lic
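
To see replication take over, you can stop one of the three brokers and describe the topic again; a sketch assuming you stop the broker started from server-1.properties:

[ ] ps aux | grep server-1.properties | grep -v grep | awk '{print $2}' | xargs kill
[ ] bin/kafka-topics.sh --describe --zookeeper node1:2181,node2:2181,node3:2181 --topic lic
#Leadership for any partition that was on broker 1 moves to a surviving replica, and 1 drops out of the Isr list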

7. Stop Kafka

[ ] bin/kafka-server-stop.sh


2. Cluster deployment

1. Download and extract, then edit server.properties in the config directory

config/server.properties:

broker.id=0
log.dir=/data/kafka-logs
zookeeper.connect=node1:2181,node2:2181,node3:2181

2. Copy the Kafka directory to the other two servers

[ ] scp -r /apps/kafka_2.11-1.1.1 node2:/apps/

[ ] scp -r /apps/kafka_2.11-1.1.1 node3:/apps/
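
The log.dir path has to be writable on every node; creating it ahead of time avoids permission surprises. A one-liner, assuming passwordless ssh from node1:

[ ] for h in node1 node2 node3; do ssh $h "mkdir -p /data/kafka-logs"; done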

3. Edit server.properties on node2 and node3

node2: config/server.properties:

broker.id=1

node3: config/server.properties:

broker.id=2

4. Start

[node1] bin/kafka-server-start.sh -daemon config/server.properties

[node2] bin/kafka-server-start.sh -daemon config/server.properties

[node3] bin/kafka-server-start.sh -daemon config/server.properties

5. Create a topic

[ ] bin/kafka-topics.sh --create --zookeeper node1:2181,node2:2181,node3:2181 --replication-factor 3 --partitions 1 --topic lic

6. Describe the topic

[ ] bin/kafka-topics.sh --describe --zookeeper node1:2181,node2:2181,node3:2181 --topic lic

7. Test

Producer: [ ] bin/kafka-console-producer.sh --broker-list node1:9092,node2:9092,node3:9092 --topic lic

Consumer: [ ] bin/kafka-console-consumer.sh --bootstrap-server node1:9092,node2:9092,node3:9092 --from-beginning --topic lic
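
For a rough throughput check of the new cluster, Kafka also ships a producer perf-test tool; the record count and size below are only illustrative:

[ ] bin/kafka-producer-perf-test.sh --topic lic --num-records 100000 --record-size 100 --throughput -1 --producer-props bootstrap.servers=node1:9092,node2:9092,node3:9092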

8. Stop Kafka

[node1] bin/kafka-server-stop.sh

[node2] bin/kafka-server-stop.sh

[node3] bin/kafka-server-stop.sh
