Kafka cluster deployment


0. ZooKeeper cluster deployment

Kafka depends on ZooKeeper, so install a ZooKeeper cluster before installing the Kafka cluster. For setting up a ZooKeeper cluster, see my other article "linux下zookeeper集群部署以及测试" (deploying and testing a ZooKeeper cluster on Linux).

  • We assume the installed ZooKeeper cluster is localhost:2181,localhost:2182,localhost:2183
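Before moving on, a quick sanity check that every ZooKeeper node is reachable (a sketch assuming the nc utility is available and ZooKeeper's four-letter-word commands are enabled, which is the default in the 3.4.x releases):

    echo ruok | nc localhost 2181   # a healthy node replies "imok"
    echo ruok | nc localhost 2182
    echo ruok | nc localhost 2183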

1. Download the installation package

  • Download the release package (e.g. kafka_2.12-2.0.0.tgz) from the official website: https://kafka.apache.org/downloads
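For reference, one possible way to fetch this release from the Apache archive (the exact URL is an assumption; verify it on the downloads page):

    # Kafka 2.0.0 built against Scala 2.12, matching the package used below
    wget https://archive.apache.org/dist/kafka/2.0.0/kafka_2.12-2.0.0.tgz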

2. Installation

Because test resources are limited, a single machine is used to simulate the cluster deployment.

  • Create the installation root directory
mkdir kafka_cluster
cd kafka_cluster
  • Copy the installation package into the root directory
cp ../kafka_2.12-2.0.0.tgz ./
  • Extract the package
tar zxvf kafka_2.12-2.0.0.tgz
  • Rename the directory
mv kafka_2.12-2.0.0 k1
  • Create the data directory
mkdir k1/data
  • Create three brokers by copying the directory
cp -r k1 k2
cp -r k1 k3

Now k1, k2, and k3 are the installation directories of the three brokers.

[root@test kafka_cluster]# ls -l
total 6
drwxr-xr-x 8 root root 113 Apr 18 20:23 k1
drwxr-xr-x 8 root root 113 Apr 18 20:23 k2
drwxr-xr-x 8 root root 113 Apr 18 20:23 k3
  • Modify the configuration of each of the three brokers

    • k1/config/server.properties

      ############################# Server Basics #############################
      
      # The id of the broker. This must be set to a unique integer for each broker.
      broker.id=0
      port=9092
      host.name=127.0.0.1
      
      # A comma separated list of directories under which to store log files
      log.dirs=/data/soft/kafka_cluster/k1/data/
      
      # The default number of log partitions per topic. More partitions allow greater
      # parallelism for consumption, but this will also result in more files across
      # the brokers.
      num.partitions=3
      
      ############################# Zookeeper #############################
      
      # Zookeeper connection string (see zookeeper docs for details).
      # This is a comma separated host:port pairs, each corresponding to a zk
      # server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
      # You can also append an optional chroot string to the urls to specify the
      # root directory for all kafka znodes.
      zookeeper.connect=localhost:2181,localhost:2182,localhost:2183 
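      Note: in Kafka 2.0 the port and host.name properties are deprecated and only take effect while listeners is unset. A sketch of the equivalent, recommended setting for k1 (use 9093/9094 for k2/k3):

      # equivalent of host.name=127.0.0.1 plus port=9092
      listeners=PLAINTEXT://127.0.0.1:9092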
      
    • k2/config/server.properties

      ############################# Server Basics #############################
      
      # The id of the broker. This must be set to a unique integer for each broker.
      broker.id=1
      port=9093
      host.name=127.0.0.1
      
      # A comma separated list of directories under which to store log files
      log.dirs=/data/soft/kafka_cluster/k2/data/
      
      # The default number of log partitions per topic. More partitions allow greater
      # parallelism for consumption, but this will also result in more files across
      # the brokers.
      num.partitions=3
      
      ############################# Zookeeper #############################
      
      # Zookeeper connection string (see zookeeper docs for details).
      # This is a comma separated host:port pairs, each corresponding to a zk
      # server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
      # You can also append an optional chroot string to the urls to specify the
      # root directory for all kafka znodes.
      zookeeper.connect=localhost:2181,localhost:2182,localhost:2183 
      
    • k3/config/server.properties

      ############################# Server Basics #############################
      
      # The id of the broker. This must be set to a unique integer for each broker.
      broker.id=2
      port=9094
      host.name=127.0.0.1
      
      # A comma separated list of directories under which to store log files
      log.dirs=/data/soft/kafka_cluster/k3/data/
      
      # The default number of log partitions per topic. More partitions allow greater
      # parallelism for consumption, but this will also result in more files across
      # the brokers.
      num.partitions=3
      
      ############################# Zookeeper #############################
      
      # Zookeeper connection string (see zookeeper docs for details).
      # This is a comma separated host:port pairs, each corresponding to a zk
      # server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
      # You can also append an optional chroot string to the urls to specify the
      # root directory for all kafka znodes.
      zookeeper.connect=localhost:2181,localhost:2182,localhost:2183 
      
    • k1(k2, k3)/config/consumer.properties

      # list of brokers used for bootstrapping knowledge about the rest of the cluster
      # format: host1:port1,host2:port2 ...
      bootstrap.servers=localhost:9092,localhost:9093,localhost:9094
      
    • k1(k2, k3)/config/producer.properties

      # list of brokers used for bootstrapping knowledge about the rest of the cluster
      # format: host1:port1,host2:port2 ...
      bootstrap.servers=localhost:9092,localhost:9093,localhost:9094
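      These two client files only affect the command-line tools, and only when passed explicitly; they do not change the brokers themselves. A hedged usage sketch, to be run after the brokers are up (the topic name "test" is an assumption):

      ./k1/bin/kafka-console-producer.sh --broker-list localhost:9092,localhost:9093,localhost:9094 --producer.config ./k1/config/producer.properties --topic test
      ./k1/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092,localhost:9093,localhost:9094 --consumer.config ./k1/config/consumer.properties --topic test --from-beginning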
      
  • Start the three broker instances (a quick verification sketch follows the commands below)

    ./k1/bin/kafka-server-start.sh ./k1/config/server.properties &
    ./k2/bin/kafka-server-start.sh ./k2/config/server.properties &
    ./k3/bin/kafka-server-start.sh ./k3/config/server.properties &
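A minimal sketch to verify that the three brokers joined the same cluster, run from the kafka_cluster directory (the topic name "test" is an assumption):

    # the registered broker ids should appear under /brokers/ids in ZooKeeper
    ./k1/bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids
    # create a topic replicated across all three brokers
    ./k1/bin/kafka-topics.sh --create --zookeeper localhost:2181,localhost:2182,localhost:2183 --replication-factor 3 --partitions 3 --topic test
    # leaders and replicas should be spread across the broker ids
    ./k1/bin/kafka-topics.sh --describe --zookeeper localhost:2181,localhost:2182,localhost:2183 --topic test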
    
