Kafka 1.1.1 Cluster Setup

1. Download Kafka 1.1.1 and extract it under /opt/cdh5.15.0/ (the install path used throughout this guide).

2. Create a data folder under the Kafka install directory.
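A minimal sketch of this step; run it from the Kafka install root (this guide's is /opt/cdh5.15.0/kafka-1.1.1), or point the hypothetical KAFKA_DIR variable at a different layout:

```shell
# Create the data directory Kafka will store its message logs in.
# KAFKA_DIR defaults to the current directory (the install root).
kafka_dir=${KAFKA_DIR:-.}
mkdir -p "$kafka_dir/data"   # -p makes this safe to re-run
```

The directory listing below shows the resulting layout.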

[zuowei.zhang@master kafka-1.1.1]$ ll
total 48
drwxr-xr-x 3 zuowei.zhang zuowei.zhang  4096 Jul  7 12:15 bin
drwxr-xr-x 2 zuowei.zhang zuowei.zhang  4096 Jul  7 12:15 config
drwxrwxr-x 2 zuowei.zhang zuowei.zhang   187 Jan  3 20:17 data
drwxr-xr-x 2 zuowei.zhang zuowei.zhang  4096 Jan  3 20:07 libs
-rw-r--r-- 1 zuowei.zhang zuowei.zhang 28824 Jul  7 12:12 LICENSE
drwxrwxr-x 2 zuowei.zhang zuowei.zhang   174 Jan  3 20:14 logs
-rw-r--r-- 1 zuowei.zhang zuowei.zhang   336 Jul  7 12:12 NOTICE
drwxr-xr-x 2 zuowei.zhang zuowei.zhang    44 Jul  7 12:15 site-docs

3. Configure the server.properties file under the config directory.

Settings that need to be changed:

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

# The address the socket server listens on. It will get the value returned from 
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://master.cn:9092

# A comma separated list of directories under which to store log files
log.dirs=/opt/cdh5.15.0/kafka-1.1.1/data

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=master.cn:2181,slave1.cn:2181,slave2.cn:2181

4. Copy the entire Kafka directory to the other two machines, then change the following two settings on each of them: broker.id must be a unique integer per broker, and listeners must use that machine's own hostname.

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

# The address the socket server listens on. It will get the value returned from 
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://master.cn:9092
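The copy is typically done with scp, and the two per-host edits can be scripted with sed. A sketch, assuming the hostnames used in this guide; the sed edits are demonstrated on a scratch copy of the two lines so they can be verified locally before touching the real file:

```shell
# From master.cn, copy the install to the other brokers (run manually):
#   scp -r /opt/cdh5.15.0/kafka-1.1.1 slave1.cn:/opt/cdh5.15.0/
#   scp -r /opt/cdh5.15.0/kafka-1.1.1 slave2.cn:/opt/cdh5.15.0/
#
# On slave1.cn the two edits to config/server.properties would be the
# sed lines below, shown here against a scratch file:
props=$(mktemp)
printf 'broker.id=0\nlisteners=PLAINTEXT://master.cn:9092\n' > "$props"

sed -i -e 's/^broker\.id=.*/broker.id=1/' \
       -e 's|^listeners=.*|listeners=PLAINTEXT://slave1.cn:9092|' "$props"

cat "$props"   # broker.id=1, listeners=PLAINTEXT://slave1.cn:9092
```

On slave2.cn the same pattern applies with broker.id=2 and slave2.cn as the listener host.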

5. Add environment variables (in /etc/profile):

#KAFKA_HOME
export KAFKA_HOME=/opt/cdh5.15.0/kafka-1.1.1
export PATH=$PATH:$KAFKA_HOME/bin

Make the change take effect immediately:

source /etc/profile
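The append-and-source pattern can be checked on a scratch file first (writing to /etc/profile itself requires root); a sketch:

```shell
# Append the two export lines to a scratch profile, source it, and
# confirm the variables landed; the real target is /etc/profile.
profile=$(mktemp)
cat >> "$profile" <<'EOF'
#KAFKA_HOME
export KAFKA_HOME=/opt/cdh5.15.0/kafka-1.1.1
export PATH=$PATH:$KAFKA_HOME/bin
EOF
. "$profile"
echo "$KAFKA_HOME"   # /opt/cdh5.15.0/kafka-1.1.1
```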

6. ZooKeeper must be running before the Kafka cluster is started (for a standalone ensemble, e.g. zkServer.sh start on each of the three nodes).

7. Start the Kafka cluster in the background (run on each of the three brokers; kafka-server-start.sh also accepts a -daemon flag with the same effect):

bin/kafka-server-start.sh config/server.properties 1>/dev/null 2>&1 &

8. Simple tests

a. Create a topic on the master.cn node:

bin/kafka-topics.sh --create --zookeeper master.cn:2181,slave1.cn:2181,slave2.cn:2181 --replication-factor 3 --partitions 3 --topic Topic1

--topic  sets the topic name

--replication-factor  sets the number of replicas (must not exceed the number of brokers, three here)

--partitions  sets the number of partitions

b. List the current Kafka topics (any one of the ZooKeeper nodes is enough):

bin/kafka-topics.sh --list --zookeeper slave2.cn:2181

c. Produce messages on the master node:

bin/kafka-console-producer.sh --broker-list master.cn:9092 --topic Topic1

d. Consume the messages on the slave1 node (the --zookeeper option below is the old consumer interface; Kafka 1.1.1 also supports --bootstrap-server slave1.cn:9092):

bin/kafka-console-consumer.sh --zookeeper slave1.cn:2181 --from-beginning --topic Topic1

Result:

Messages typed at the producer node are received at the consumer node.

producer:

(screenshot: messages typed in the producer session)

consumer:

(screenshot: the same messages received in the consumer session)