Kafka Cluster Installation

1、Environment Preparation

Table 1-1  Cluster node configuration

IP             OS                          Memory  Disk    Installed software
10.166.50.231  CentOS 7 Standard (64-bit)  8 GB    127 GB  jdk1.8, zookeeper3.4.10, kafka2.11-0.11.0.2
10.166.50.232  CentOS 7 Standard (64-bit)  8 GB    127 GB  jdk1.8, zookeeper3.4.10, kafka2.11-0.11.0.2
10.166.50.233  CentOS 7 Standard (64-bit)  8 GB    127 GB  jdk1.8, zookeeper3.4.10, kafka2.11-0.11.0.2
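Before moving on, it is worth confirming that the JDK listed above is actually installed on each node:

java -version    # should report a 1.8.x version on all three machines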

2、Configure hosts

(Screenshot: hosts configuration, original image: 配置hosts.png)
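The screenshot itself is not reproduced here. Judging from the node IPs in Table 1-1 and the cluster1/cluster2/cluster3 hostnames used later in zookeeper.connect, the /etc/hosts entries on every node presumably look like the sketch below (the hostname-to-IP mapping is an assumption inferred from those settings):

# /etc/hosts on all three nodes (mapping assumed from the zookeeper.connect hostnames)
10.166.50.231 cluster1
10.166.50.232 cluster2
10.166.50.233 cluster3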

3、Create the Required Directories

mkdir -p /opt/software/kafka/logs

This creates the logs folder under /opt/software/kafka, which the log.dirs setting below points to.

3.1、Modify the Configuration File

vim /opt/software/kafka/config/server.properties

Enter the following content:

# The id of the broker. This must be set to a unique integer for each broker.
# For Kafka, a broker corresponds to one machine in the cluster; broker.id
# uniquely identifies that machine within the cluster.
broker.id=0

# Switch to enable topic deletion or not, default value is false
# Must be true for a topic deletion request to actually remove the topic.
delete.topic.enable=true

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

# A comma separated list of directories under which to store log files
log.dirs=/opt/software/kafka/logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

# ZooKeeper connection string (hostnames come from the /etc/hosts entries above)
zookeeper.connect=cluster1:2181,cluster2:2181,cluster3:2181
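As a quick sanity check, you can print back the settings that matter most for the cluster before continuing (entirely optional):

grep -E '^(broker.id|delete.topic.enable|log.dirs|zookeeper.connect)' /opt/software/kafka/config/server.properties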

3.2、Configure Environment Variables

vim /etc/profile

Open the file and append the following content:

#KAFKA_HOME
export KAFKA_HOME=/opt/software/kafka
export PATH=$PATH:$KAFKA_HOME/bin
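After saving, reload the profile so the new variables take effect in the current shell:

source /etc/profile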

Distribute the configured kafka directory to the other nodes of the cluster, then change the broker.id value on each node accordingly.
The distribution commands are as follows:

scp -r /opt/software/kafka/ [email protected]:/opt/software/
scp -r /opt/software/kafka/ [email protected]:/opt/software/

On my other two machines I changed broker.id to 1 and 2 respectively.
On 10.166.50.232 the configuration is as follows:

# Uniquely identifies this machine in the cluster.
broker.id=1

On 10.166.50.233 the configuration is as follows:

# Uniquely identifies this machine in the cluster.
broker.id=2
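As an optional alternative to editing server.properties by hand on each machine as shown above, broker.id can be changed right after the scp step from the first node. This is only a convenience sketch and assumes root SSH access to the other two nodes:

# Optional: adjust broker.id remotely instead of editing the file on each node
ssh [email protected] "sed -i 's/^broker.id=0/broker.id=1/' /opt/software/kafka/config/server.properties"
ssh [email protected] "sed -i 's/^broker.id=0/broker.id=2/' /opt/software/kafka/config/server.properties"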

4、Start and Test the Kafka Cluster
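Kafka keeps its cluster metadata in ZooKeeper, so the zookeeper3.4.10 ensemble from Table 1-1 has to be running on all three nodes before any broker is started. A minimal sketch, assuming ZooKeeper is installed under /opt/software/zookeeper (this article does not state the actual path):

# Run on each of the three nodes; the install path is an assumption
sh /opt/software/zookeeper/bin/zkServer.sh start
sh /opt/software/zookeeper/bin/zkServer.sh status    # expect one leader and two followers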

Run the following command on 10.166.50.231:

nohup sh /opt/software/kafka/bin/kafka-server-start.sh /opt/software/kafka/config/server.properties &

Run the following command on 10.166.50.232:

nohup sh /opt/software/kafka/bin/kafka-server-start.sh /opt/software/kafka/config/server.properties &

Run the following command on 10.166.50.233:

nohup sh /opt/software/kafka/bin/kafka-server-start.sh /opt/software/kafka/config/server.properties &
(Screenshot: Kafka cluster startup status, original image: kafka集群启动状态界面.png)
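To confirm the brokers are really up, check the Kafka process on each node and create a small test topic; the topic name test below is only an example used for verification:

# The Kafka process should show up in the JVM process list on every node
jps

# Create and list a topic replicated across all three brokers (Kafka 0.11 still addresses ZooKeeper directly)
sh /opt/software/kafka/bin/kafka-topics.sh --create --zookeeper cluster1:2181 --replication-factor 3 --partitions 1 --topic test
sh /opt/software/kafka/bin/kafka-topics.sh --list --zookeeper cluster1:2181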

4.1、Shut Down the Kafka Cluster

Run the following command on 10.166.50.231:

sh /opt/software/kafka/bin/kafka-server-stop.sh

Run the following command on 10.166.50.232:

sh /opt/software/kafka/bin/kafka-server-stop.sh

Run the following command on 10.166.50.233:

sh /opt/software/kafka/bin/kafka-server-stop.sh
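Once the stop script has been run, you can verify on each node that the broker has exited; a clean shutdown may take a few seconds:

jps    # the Kafka process should no longer appear once shutdown finishes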
