Kafka environment setup on Linux

Upload scala-2.11.4.tgz to the /usr/local directory

Run the following commands:

tar -zxvf scala-2.11.4.tgz

rm -rf scala-2.11.4.tgz

mv scala-2.11.4/ scala

Configure the Scala environment variables:

vim ~/.bashrc

Add the following lines:

export SCALA_HOME=/usr/local/scala

export PATH=$PATH:$SCALA_HOME/bin

Run: source ~/.bashrc

Verify that Scala was installed successfully: scala -version
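If the installation succeeded, this prints the installed version, roughly:

Scala code runner version 2.11.4 ...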


Upload kafka_2.9.2-0.8.1.tgz to the /usr/local directory

Run the following commands:

tar -zxvf kafka_2.9.2-0.8.1.tgz

rm -rf kafka_2.9.2-0.8.1.tgz

mv kafka_2.9.2-0.8.1/ kafka

vim /usr/local/kafka/config/server.properties

Edit the following settings:

broker.id: a sequentially increasing integer (0, 1, 2), the unique id of each broker in the cluster

zookeeper.connect=192.168.2.161:2181,192.168.2.162:2181,192.168.2.163:2181
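For reference, a minimal sketch of the relevant server.properties entries for the first broker (192.168.2.161); the log.dirs path is an assumption here, and broker.id and host.name must be adjusted on each node:

broker.id=0

port=9092

host.name=192.168.2.161

log.dirs=/usr/local/kafka/logs

zookeeper.connect=192.168.2.161:2181,192.168.2.162:2181,192.168.2.163:2181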


Install slf4j (Kafka needs an SLF4J binding on the classpath)

Upload slf4j-1.7.6.zip to the /usr/local directory

Run the following commands:

unzip slf4j-1.7.6.zip

cp /usr/local/slf4j-1.7.6/slf4j-nop-1.7.6.jar /usr/local/kafka/libs/

rm -rf slf4j-1.7.6

rm -rf slf4j-1.7.6.zip
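To confirm the binding is in place, list the Kafka libs directory:

ls /usr/local/kafka/libs/ | grep slf4j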


Run the command to start Kafka (repeat on every broker node):

nohup /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties &

cat nohup.out
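If the broker fails to start, the error usually ends up in nohup.out; a quick check:

grep -iE "error|exception" nohup.out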

Run jps to check whether the Kafka process is running
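On a healthy node, jps should list a Kafka process (and QuorumPeerMain if ZooKeeper runs on the same machine); the PIDs below are only illustrative:

2381 Kafka

1954 QuorumPeerMain

2450 Jps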


Test (create a topic, then start a console producer and a console consumer):

/usr/local/kafka/bin/kafka-topics.sh --zookeeper 192.168.2.161:2181,192.168.2.162:2181,192.168.2.163:2181 --topic test --replication-factor 1 --partitions 1 --create

/usr/local/kafka/bin/kafka-console-producer.sh --broker-list 192.168.2.161:9092,192.168.2.162:9092,192.168.2.163:9092 --topic test

/usr/local/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.2.161:2181,192.168.2.162:2181,192.168.2.163:2181 --topic test --from-beginning
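Optionally, verify the topic was created and inspect its partition/replica assignment with the same script:

/usr/local/kafka/bin/kafka-topics.sh --zookeeper 192.168.2.161:2181,192.168.2.162:2181,192.168.2.163:2181 --list

/usr/local/kafka/bin/kafka-topics.sh --zookeeper 192.168.2.161:2181,192.168.2.162:2181,192.168.2.163:2181 --topic test --describe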
