Kafka 2.11-1.0.0 Cluster Installation

Kafka installation notes
1. Prepare the environment
Install three Linux servers with VMware

CentOS minimal ("mini") install
hostnames: node01, node02, node03
Disable the firewall
Run on every machine:
service iptables stop && chkconfig iptables off
Configure the network interface
Configure the /etc/hosts file:

192.168.140.128 node01 zk01 kafka01
192.168.140.129 node02 zk02 kafka02
192.168.140.130 node03 zk03 kafka03
scp /etc/hosts node02:/etc/
scp /etc/hosts node03:/etc/
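A quick check that the mapping took effect on every node (a minimal sketch, assuming the three hostnames above):

for h in node01 node02 node03; do
  ping -c 1 $h > /dev/null && echo "$h resolves" || echo "$h FAILED"
done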
Configure Yum

yum install -y lrzsz
yum install -y wget
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
cd /etc/yum.repos.d
wget http://mirrors.163.com/.help/CentOS6-Base-163.repo
yum clean all && yum makecache
Configure passwordless SSH login
yum -y install openssh-clients
ssh-keygen   (press Enter four times)
ssh-copy-id node01
ssh-copy-id node02
ssh-copy-id node03
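To verify that key-based login works without a password prompt (BatchMode makes ssh fail instead of asking for a password):

for h in node01 node02 node03; do
  ssh -o BatchMode=yes $h hostname
done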

2. Install JDK, ZooKeeper, and Kafka
Install the JDK
mkdir -p /export/servers
mkdir -p /export/software
mkdir -p /export/logs
mkdir -p /export/data
mkdir -p /export/data/zk
mkdir -p /export/data/kafka
mkdir -p /export/logs/zk
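The same directory layout is needed on node02 and node03; with passwordless SSH in place, one loop from node01 covers it (a minimal sketch):

for h in node02 node03; do
  ssh $h "mkdir -p /export/servers /export/software /export/logs/zk /export/data/zk /export/data/kafka"
done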

cd /export/software/
rz   (select jdk-8u141-linux-x64.tar.gz)
tar -zxvf jdk-8u141-linux-x64.tar.gz -C ../servers/
cd ../servers/
mv jdk1.8.0_141 jdk
vi /etc/profile   (append the following lines)
-
export JAVA_HOME=/export/servers/jdk
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
-
scp -r jdk node02:/export/servers/
scp -r jdk node03:/export/servers/
scp /etc/profile node02:/etc/
scp /etc/profile node03:/etc/
-
source /etc/profile on node01, node02, node03
java -version on node01, node02, node03
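The per-node check can be scripted from node01 (a small sketch; sourcing /etc/profile picks up the JAVA_HOME just distributed):

for h in node01 node02 node03; do
  echo "== $h =="
  ssh $h "source /etc/profile; java -version"
done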
Install ZooKeeper

cd /export/software/
wget http://219.238.7.73/files/703900000A354B91/apache.fayea.com/zookeeper/zookeeper-3.4.9/zookeeper-3.4.9.tar.gz
tar -zxvf zookeeper-3.4.9.tar.gz -C ../servers/
cd ../servers/
mv zookeeper-3.4.9/ zk
-
scp -r zk node02:/export/servers/
scp -r zk node03:/export/servers/
-
touch /export/data/zk/myid on node01, node02, node03
echo 1 > /export/data/zk/myid on node01
echo 2 > /export/data/zk/myid on node02
echo 3 > /export/data/zk/myid on node03
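Equivalently, one loop from node01 writes all three myid files (assumes passwordless SSH and the data directory created above):

for i in 1 2 3; do
  ssh node0$i "echo $i > /export/data/zk/myid"
done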
-
cd /export/servers/zk/conf/
touch zoo.cfg 
vi zoo.cfg
-
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/export/data/zk
dataLogDir=/export/logs/zk
clientPort=2181
server.1=node01:2887:3887
server.2=node02:2887:3887
server.3=node03:2887:3887
-
scp zoo.cfg node02:$PWD
scp zoo.cfg node03:$PWD
-
/export/servers/zk/bin/zkServer.sh start on node01, node02, node03
/export/servers/zk/bin/zkServer.sh status on node01, node02, node03
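Checking status across the cluster from one terminal (a sketch; the full path is used because zk/bin is not on the PATH, and /etc/profile is sourced so zkServer.sh can find java); expect one node to report Mode: leader and the other two Mode: follower:

for h in node01 node02 node03; do
  echo "== $h =="
  ssh $h "source /etc/profile; /export/servers/zk/bin/zkServer.sh status"
done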
Install Kafka

cd /export/software/
wget http://219.238.7.67/files/518200000AE89181/mirror.bit.edu.cn/apache/kafka/1.0.0/kafka_2.11-1.0.0.tgz
tar -zxvf kafka_2.11-1.0.0.tgz -C ../servers/
cd ../servers/
mv kafka_2.11-1.0.0 kafka
-
scp -r kafka node02:/export/servers/
scp -r kafka node03:/export/servers/
-
cd /export/servers/kafka/config/
-
Edit the configuration file
vi server.properties

# Globally unique broker id; must not be duplicated
broker.id=0

# Port the broker listens on; producers and consumers connect here
port=9092

# Number of threads handling network requests
num.network.threads=3

# Number of threads handling disk I/O
num.io.threads=8

# Socket send buffer size
socket.send.buffer.bytes=102400

# Socket receive buffer size
socket.receive.buffer.bytes=102400

# Maximum size of a socket request
socket.request.max.bytes=104857600

# Path where Kafka stores its data logs
log.dirs=/export/servers/logs/kafka

# Default number of partitions per topic on this broker
num.partitions=2

# Number of threads per data dir used for log recovery and cleanup
num.recovery.threads.per.data.dir=1

# Maximum time a segment file is retained before deletion
log.retention.hours=168

# Maximum time before rolling a new segment file
log.roll.hours=168

# Size of each log segment; default is 1 GB
log.segment.bytes=1073741824

# Interval for checking segment sizes against the retention policy
log.retention.check.interval.ms=300000

# Whether the log cleaner is enabled
log.cleaner.enable=true

# ZooKeeper connection string; brokers store their metadata in ZooKeeper
zookeeper.connect=zk01:2181,zk02:2181,zk03:2181

# ZooKeeper connection timeout
zookeeper.connection.timeout.ms=6000

# Flush to disk once this many messages accumulate in a partition buffer
log.flush.interval.messages=10000

# Flush to disk once messages have been buffered for this long
log.flush.interval.ms=3000

# Deleting a topic requires delete.topic.enable=true; otherwise the topic is only marked for deletion
delete.topic.enable=true

# host.name must be this machine's hostname/IP (important); otherwise clients fail with:
# Producer connection to localhost:9092 unsuccessful
host.name=kafka01

advertised.host.name=192.168.140.128


The key settings to change (per-node edits are sketched below):
    * broker.id=0 (unique id per broker)
    * log.dirs=/export/servers/logs/kafka (log storage path)
    * zookeeper.connect=zk01:2181,zk02:2181,zk03:2181 (ZooKeeper connection string)
    * host.name=kafka01 (must match this host's name in /etc/hosts)
    * advertised.host.name=192.168.140.128 (this host's IP)
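After copying the kafka directory to node02 and node03, those per-node values must be edited on each machine. A minimal sed sketch for node02 (broker.id=1, host kafka02, IP 192.168.140.129); adjust the same way on node03:

cd /export/servers/kafka/config
sed -i 's/^broker.id=.*/broker.id=1/' server.properties
sed -i 's/^host.name=.*/host.name=kafka02/' server.properties
sed -i 's/^advertised.host.name=.*/advertised.host.name=192.168.140.129/' server.properties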
Start brokers individually
nohup bin/kafka-server-start.sh config/server.properties & on node01, node02, node03
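Once all three brokers are up, a quick smoke test with the stock CLI tools confirms the cluster works end to end (the topic name test is arbitrary; run the producer and consumer in separate terminals):

cd /export/servers/kafka
bin/kafka-topics.sh --create --zookeeper zk01:2181,zk02:2181,zk03:2181 --replication-factor 3 --partitions 2 --topic test
bin/kafka-topics.sh --describe --zookeeper zk01:2181 --topic test
bin/kafka-console-producer.sh --broker-list kafka01:9092,kafka02:9092,kafka03:9092 --topic test
bin/kafka-console-consumer.sh --bootstrap-server kafka01:9092 --topic test --from-beginning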
Start via script
Configure KAFKA_HOME (append to /etc/profile on every node)
#set KAFKA_HOME
export KAFKA_HOME=/export/servers/kafka
export PATH=$PATH:$KAFKA_HOME/bin

Create the one-click script directory
mkdir -p /export/app/onkey/kafka
Create three files in that directory:
vi slave
  node01
  node02
  node03

vi startkafka.sh
#!/bin/bash
# start a Kafka broker on every host listed in the slave file, in parallel
while read line
do
{
 echo $line
 ssh $line "source /etc/profile;nohup kafka-server-start.sh /export/servers/kafka/config/server.properties >/dev/null 2>&1 &"
}&
done < /export/app/onkey/kafka/slave
wait
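After running startkafka.sh, a quick check that a Kafka process came up on every node (jps is the same tool the stop script relies on):

for h in node01 node02 node03; do
  echo "== $h =="
  ssh $h "source /etc/profile; jps | grep Kafka"
done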

vi stopkafka.sh
#!/bin/bash
# stop the Kafka broker on every host listed in the slave file, in parallel
while read line
do
{
 echo $line
 ssh $line "source /etc/profile;jps | grep Kafka | cut -d' ' -f1 | xargs kill -s 9"
}&
done < /export/app/onkey/kafka/slave
wait
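kill -9 gives the broker no chance to flush cleanly; Kafka ships bin/kafka-server-stop.sh, which sends a normal TERM signal, so a gentler variant of the same loop is worth considering:

while read line
do
 ssh $line "source /etc/profile; kafka-server-stop.sh"
done < /export/app/onkey/kafka/slave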

Grant execute permission
chmod 777 startkafka.sh stopkafka.sh
Tip
history | grep create   (recall previously run topic-creation commands)

Viewing the Kafka cluster from Windows

First use the hosts.exe tool to configure the local hosts file:
192.168.140.128 node01 zk01 kafka01
192.168.140.129 node02 zk02 kafka02
192.168.140.130 node03 zk03 kafka03
zookeeper_java_client.zip
Unzip it and connect.
