ZooKeeper and Kafka had previously been installed on these servers, and it was unclear whether the old installations would interfere with the newly installed services. To avoid conflicts with the new cluster, the original files were backed up to another server and then deleted from the local machines.
Original path: /data/opt/ on each host
Backup path: xx.xxx.xx.xxx:/data/******/kafka_bak/
Versions downloaded for this installation: ZooKeeper 3.4.14 and Kafka 2.5.1 (the Scala 2.12 build, kafka_2.12-2.5.1)
ZooKeeper download: https://mirrors.cnnic.cn/apache/zookeeper/
Kafka download: http://kafka.apache.org/downloads
This is a clustered deployment; since the expected data volume for the new business is modest, three servers were prepared.
Create the user
[xxxxxx@xxxxxx]sudo useradd usertest
[xxxxxx@xxxxxx]sudo passwd usertest
Password: passwd
Create directories (in preparation for the steps that follow)
[usertest@xxxxxx]cd /data
[usertest@xxxxxx data]mkdir test_data
[usertest@xxxxxx data]cd test_data
[usertest@xxxxxx test_data]mkdir zookeeper --prepares a home for the ZooKeeper data and log directories
[usertest@xxxxxx test_data]cd zookeeper
[usertest@xxxxxx zookeeper]mkdir zkdata --holds the ZooKeeper data files
[usertest@xxxxxx zookeeper]mkdir zkdatalog --holds the ZooKeeper log files
[usertest@xxxxxx zookeeper]cd ../../
[usertest@xxxxxx data]chown -R usertest:usertest test_data/ --grant ownership of the new directories to the usertest user
[usertest@xxxxxx data]mkdir test_data/kafka --create the kafka directory before switching into it
[usertest@xxxxxx data]cd /data/test_data/kafka
[usertest@xxxxxx kafka]mkdir kafkalogs --holds the Kafka log files
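The same layout can be created in a single pass with mkdir -p, which is convenient when repeating this setup on the other two hosts. A minimal sketch, equivalent to the steps above:
mkdir -p /data/test_data/zookeeper/{zkdata,zkdatalog} /data/test_data/kafka/kafkalogs
chown -R usertest:usertest /data/test_data --run via sudo if the directories were created by another user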
The following operations are performed identically on all three servers.
Extract: place the ZooKeeper package in the usertest home directory, then extract it.
[usertest@xxxxxx]cd /home/usertest
[usertest@xxxxxx]tar -zxvf zookeeper-3.4.14.tar.gz --extract the archive
Modify the configuration: after extracting, change into the conf directory, rename zoo_sample.cfg to zoo.cfg, then edit the file.
[usertest@xxxxxx]cd zookeeper-3.4.14/conf/
[usertest@xxxxxx conf]pwd
/home/usertest/zookeeper-3.4.14/conf
[usertest@xxxxxx conf]ll
total 12
-rw-------. 1 usertest usertest 535 Mar 6 2019 configuration.xsl
-rw-------. 1 usertest usertest 2161 Mar 6 2019 log4j.properties
-rw-------. 1 usertest usertest 1104 Sep 21 14:03 zoo_sample.cfg
[usertest@xxxxxx conf]mv zoo_sample.cfg zoo.cfg
[usertest@xxxxxx conf]vi zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/data/test_data/zookeeper/zkdata
dataLogDir=/data/test_data/zookeeper/zkdatalog
# the port at which the clients will connect
clientPort=12181 --the default port is 2181; here the client port is set to 12181
server.1=10.xxx.xx.xxx:12888:13888 --the default ports are 2888:3888
server.2=10.xxx.xx.xxx:12888:13888
server.3=10.xxx.xx.xxx:12888:13888
-------------------------------
#Reading the parameters: server.A=B:C:D, e.g. server.1=10.xxx.xx.xxx:12888:13888
#A is a number indicating which server this is.
#In cluster mode, each node keeps a file named myid under its dataDir whose only content is the value of A. At startup, ZooKeeper reads this file and compares the value against the entries in zoo.cfg to determine which server it is.
#B is the address of this server.
#C is the port this server's Follower uses to exchange information with the cluster's Leader.
#D is the port used for leader election: should the Leader die, the servers communicate with one another over this port to elect a new Leader.
-------------------------------
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
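Since zoo.cfg is identical on all three nodes, it only needs to be edited once and can then be copied to the other hosts. A minimal sketch, assuming the same install path on every machine (run once per remote host; the target address is a placeholder):
scp /home/usertest/zookeeper-3.4.14/conf/zoo.cfg usertest@10.xxx.xx.xxx:/home/usertest/zookeeper-3.4.14/conf/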
Create the myid file
Change into the data directory
[usertest@xxxxxx conf]cd /data/test_data/zookeeper/zkdata
[usertest@xxxxxx zkdata]echo "1" >myid
[usertest@xxxxxx zkdata]ll
-rw-------. 1 usertest usertest 2 Sep 21 10:49 myid
[usertest@xxxxxx zkdata]cat myid
1
--------
Each of the three hosts runs the command matching its server number, in that host's own data directory:
echo "1" >myid
echo "2" >myid
echo "3" >myid
Configure environment variables
vi .bash_profile
--add the following lines
export ZOOKEEPER_HOME=/home/usertest/zookeeper-3.4.14
export PATH=$PATH:$ZOOKEEPER_HOME/bin
--save, exit, and reload so the variables take effect
source .bash_profile
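A quick sanity check that the variables took effect (a sketch):
echo $ZOOKEEPER_HOME --should print /home/usertest/zookeeper-3.4.14
which zkServer.sh --should resolve to a path under $ZOOKEEPER_HOME/bin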
Start the service and check it
[usertest@xxxxxx]cd /home/usertest/zookeeper-3.4.14/bin
[usertest@xxxxxx bin]./zkServer.sh start (run on all 3 nodes)
--check the service status
[usertest@xxxxxx bin]./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/usertest/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
--of the three nodes, one is the leader and the other two are followers
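Besides zkServer.sh status, each node can also be probed through its client port with ZooKeeper's four-letter-word commands. A sketch, assuming nc (netcat) is installed; ruok and stat are enabled by default in 3.4.14:
echo ruok | nc 127.0.0.1 12181 --a healthy node replies "imok"
echo stat | nc 127.0.0.1 12181 --prints this node's mode (leader/follower) and its client connections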
The following operations are performed identically on all three servers.
Extract: place the Kafka package in the usertest home directory, then extract it.
[usertest@xxxxxx]cd /home/usertest
[usertest@xxxxxx]tar -zxvf kafka_2.12-2.5.1.tgz --extract the archive
Modify the configuration: after extracting, change into the config directory
[usertest@xxxxxx]cd /home/usertest/kafka_2.12-2.5.1/config
[usertest@xxxxxx config]vi server.properties
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1 --set to 1, 2 and 3 on the three nodes respectively
############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092
port=19092
host.name=10.xxx.xx.xxx
# Note: port and host.name are legacy settings (deprecated in Kafka 2.x and removed in 3.0);
# the modern equivalent is listeners=PLAINTEXT://10.xxx.xx.xxx:19092
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092
# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/data/test_data/kafka/kafkalogs
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The maximum amount of time a message can sit in a log before we force a flush
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168
----add the following three lines to the config file
message.max.bytes=5242880
default.replication.factor=2
replica.fetch.max.bytes=5242880
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
----ZooKeeper connection string
zookeeper.connect=10.xxx.xx.xxx:12181,10.xxx.xx.xxx:12181,10.xxx.xx.xxx:12181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=18000
############################# Group Coordinator Settings #############################
# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
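server.properties is mostly comments, so a typo in an active setting is easy to miss. A quick way to list only the effective lines (a sketch):
grep -v '^#' server.properties | grep -v '^$'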
Configure environment variables
vi .bash_profile
----add the following lines
export KAFKA_HOME=/home/usertest/kafka_2.12-2.5.1
export PATH=$PATH:$KAFKA_HOME/bin
---save, exit, and reload so the variables take effect
source .bash_profile
Start the service and check it
[usertest@xxxxxx]cd /home/usertest/kafka_2.12-2.5.1/bin
[usertest@xxxxxx bin]./kafka-server-start.sh -daemon ../config/server.properties (run on all 3 nodes)
Check the service status
Checking the logs revealed a startup error:
Exception in thread "main" java.lang.UnsupportedClassVersionError: kafka/Kafka : Unsupported major.minor version 52.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:482)
------Investigation showed the JDK version is too low: the current version is 1.7 and it must be upgraded to 1.8 (class-file major version 52 corresponds to Java 8, which Kafka 2.5.1 requires).
Important: stop ZooKeeper before upgrading.
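A sketch of shutting the ensemble down cleanly before the upgrade, run on each of the three nodes:
cd /home/usertest/zookeeper-3.4.14/bin
./zkServer.sh stop
./zkServer.sh status --should now report that the service is not running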
Prepare the installation package
First download jdk-8u45-linux-x64.rpm and upload it to /usr/local/src. Any other directory would also work; this is just the default location used here.
[root@xxxxxx]mv /home/usertest/jdk-8u45-linux-x64.rpm /usr/local/src
----grant execute permission to all users (751: rwx for the owner, r-x for the group, execute-only for others)
[root@xxxxxx src]sudo chmod 751 jdk-8u45-linux-x64.rpm
[root@xxxxxx src]ll
-rwxr-x--x. 1 usertest usertest 152239254 Sep 21 12:44 jdk-8u45-linux-x64.rpm
Install
[root@xxxxxx src]rpm -ivh jdk-8u45-linux-x64.rpm
----after installation, the JDK is located in /usr/java/jdk1.8.0_45
[root@xxxxxx src]cd /usr/java/
[root@xxxxxx java]ll
total 8
lrwxrwxrwx 1 root root 16 May 31 2018 default -> /usr/java/latest
drwxr-xr-x 8 root root 4096 May 31 2018 jdk1.7.0_80
drwxr-xr-x 9 root root 4096 Sep 21 17:56 jdk1.8.0_45
lrwxrwxrwx 1 root root 21 Sep 21 17:56 latest -> /usr/java/jdk1.8.0_45
Configure environment variables
[root@xxxxxx java]vi /etc/profile
----add the following lines
export JAVA_HOME=/usr/java/jdk1.8.0_45
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=$JAVA_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
---save, exit, and reload so the variables take effect
[root@xxxxxx java] source /etc/profile
Check whether the new JDK version is in effect
[root@xxxxxx java]java -version
java version "1.8.0_45"
Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
--as shown above, the JDK has been upgraded to 1.8
Note: to upgrade the JDK it is enough to install 1.8 directly; the existing version does not need to be uninstalled.
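A quick way to confirm which java binary the shell now resolves (a sketch):
which java --should print /usr/java/jdk1.8.0_45/bin/java once /etc/profile has been re-sourced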
Start ZooKeeper
[usertest@xxxxxx]cd /home/usertest/zookeeper-3.4.14/bin
[usertest@xxxxxx bin]pwd
/home/usertest/zookeeper-3.4.14/bin
[usertest@xxxxxx bin]ll
total 80
-rwx------ 1 usertest usertest 232 Sep 21 18:04 README.txt
-rwx------ 1 usertest usertest 1937 Sep 21 18:04 zkCleanup.sh
-rwx------ 1 usertest usertest 1056 Sep 21 18:04 zkCli.cmd
-rwx------ 1 usertest usertest 1534 Sep 21 18:04 zkCli.sh
-rwx------ 1 usertest usertest 1759 Sep 21 18:04 zkEnv.cmd
-rwx------ 1 usertest usertest 2919 Sep 21 18:04 zkEnv.sh
-rwx------ 1 usertest usertest 1089 Sep 21 18:04 zkServer.cmd
-rwx------ 1 usertest usertest 6773 Sep 21 18:04 zkServer.sh
-rwx------ 1 usertest usertest 996 Sep 21 18:04 zkTxnLogToolkit.cmd
-rwx------ 1 usertest usertest 1385 Sep 21 18:04 zkTxnLogToolkit.sh
-rw------- 1 usertest usertest 33372 Sep 21 21:17 zookeeper.out
----start
[usertest@xxxxxx bin]./zkServer.sh start
--check the startup log
[usertest@xxxxxx bin]tail -f zookeeper.out
2020-09-21 21:16:31,808 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:12181:NIOServerCnxnFactory@222] - Accepted socket connection from /127.0.0.1:53255
2020-09-21 21:16:31,950 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:12181:ServerCnxn@324] - The list of known four letter word commands is : [{1936881266=srvr, 1937006964=stat, 2003003491=wchc, 1685417328=dump, 1668445044=crst, 1936880500=srst, 1701738089=envi, 1668247142=conf, 2003003507=wchs, 2003003504=wchp, 1668247155=cons, 1835955314=mntr, 1769173615=isro, 1920298859=ruok, 1735683435=gtmk, 1937010027=stmk}]
2020-09-21 21:16:31,950 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:12181:ServerCnxn@325] - The list of enabled four letter word commands is : [[wchs, stat, stmk, conf, ruok, mntr, srvr, envi, srst, isro, dump, gtmk, crst, cons]]
2020-09-21 21:16:31,950 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:12181:NIOServerCnxn@908] - Processing srvr command from /127.0.0.1:53255
2020-09-21 21:16:31,958 [myid:1] - INFO [Thread-1:NIOServerCnxn@1056] - Closed socket connection for client /127.0.0.1:53255 (no session established for client)
2020-09-21 21:17:53,233 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:12181:NIOServerCnxnFactory@222] - Accepted socket connection from /10.xxx.xx.xxx:56824
2020-09-21 21:17:53,239 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:12181:ZooKeeperServer@949] - Client attempting to establish new session at /10.xxx.xx.xxx:56824
2020-09-21 21:17:53,251 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:12181:Follower@119] - Got zxid 0x100000001 expected 0x1
2020-09-21 21:17:53,252 [myid:1] - INFO [SyncThread:1:FileTxnLog@216] - Creating new log file: log.100000001
2020-09-21 21:17:53,271 [myid:1] - INFO [CommitProcessor:1:ZooKeeperServer@694] - Established session 0x104077ff2ea0000 with negotiated timeout 18000 for client /10.xxx.xx.xxx:56824
--check status
[usertest@xxxxxx bin] ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/usertest/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
Start Kafka
[usertest@xxxxxx]cd kafka_2.12-2.5.1/bin
[usertest@xxxxxx bin]pwd
/home/usertest/kafka_2.12-2.5.1/bin
[usertest@xxxxxx bin]ll
total 140
-rwx------ 1 usertest usertest 1421 Sep 21 18:03 connect-distributed.sh
-rwx------ 1 usertest usertest 1394 Sep 21 18:03 connect-mirror-maker.sh
-rwx------ 1 usertest usertest 1418 Sep 21 18:03 connect-standalone.sh
-rwx------ 1 usertest usertest 861 Sep 21 18:03 kafka-acls.sh
-rwx------ 1 usertest usertest 873 Sep 21 18:03 kafka-broker-api-versions.sh
-rwx------ 1 usertest usertest 864 Sep 21 18:03 kafka-configs.sh
-rwx------ 1 usertest usertest 945 Sep 21 18:03 kafka-console-consumer.sh
-rwx------ 1 usertest usertest 944 Sep 21 18:03 kafka-console-producer.sh
-rwx------ 1 usertest usertest 871 Sep 21 18:03 kafka-consumer-groups.sh
-rwx------ 1 usertest usertest 948 Sep 21 18:03 kafka-consumer-perf-test.sh
-rwx------ 1 usertest usertest 871 Sep 21 18:03 kafka-delegation-tokens.sh
-rwx------ 1 usertest usertest 869 Sep 21 18:03 kafka-delete-records.sh
-rwx------ 1 usertest usertest 866 Sep 21 18:03 kafka-dump-log.sh
-rwx------ 1 usertest usertest 870 Sep 21 18:03 kafka-leader-election.sh
-rwx------ 1 usertest usertest 863 Sep 21 18:03 kafka-log-dirs.sh
-rwx------ 1 usertest usertest 862 Sep 21 18:03 kafka-mirror-maker.sh
-rwx------ 1 usertest usertest 886 Sep 21 18:03 kafka-preferred-replica-election.sh
-rwx------ 1 usertest usertest 959 Sep 21 18:03 kafka-producer-perf-test.sh
-rwx------ 1 usertest usertest 874 Sep 21 18:03 kafka-reassign-partitions.sh
-rwx------ 1 usertest usertest 874 Sep 21 18:03 kafka-replica-verification.sh
-rwx------ 1 usertest usertest 9923 Sep 21 18:03 kafka-run-class.sh
-rwx------ 1 usertest usertest 1376 Sep 21 18:03 kafka-server-start.sh
-rwx------ 1 usertest usertest 997 Sep 21 18:03 kafka-server-stop.sh
-rwx------ 1 usertest usertest 945 Sep 21 18:03 kafka-streams-application-reset.sh
-rwx------ 1 usertest usertest 863 Sep 21 18:03 kafka-topics.sh
-rwx------ 1 usertest usertest 958 Sep 21 18:03 kafka-verifiable-consumer.sh
-rwx------ 1 usertest usertest 958 Sep 21 18:03 kafka-verifiable-producer.sh
-rwx------ 1 usertest usertest 1722 Sep 21 18:03 trogdor.sh
drwx------ 2 usertest usertest 4096 Sep 21 18:03 windows
-rwx------ 1 usertest usertest 867 Sep 21 18:03 zookeeper-security-migration.sh
-rwx------ 1 usertest usertest 1393 Sep 21 18:03 zookeeper-server-start.sh
-rwx------ 1 usertest usertest 1001 Sep 21 18:03 zookeeper-server-stop.sh
-rwx------ 1 usertest usertest 1017 Sep 21 18:03 zookeeper-shell.sh
----start
[usertest@xxxxxx bin]./kafka-server-start.sh -daemon ../config/server.properties (run on all 3 nodes)
----check the startup log
[usertest@xxxxxx bin]cd ../logs
[usertest@xxxxxx logs]ll
total 148
-rw-r----- 1 usertest usertest 307 Sep 21 21:18 controller.log
-rw------- 1 usertest usertest 6618 Sep 21 18:03 controller.log.2020-09-21-18
-rw------- 1 usertest usertest 0 Sep 21 18:03 kafka-authorizer.log
-rw------- 1 usertest usertest 0 Sep 21 18:03 kafka-request.log
-rw------- 1 usertest usertest 419 Sep 21 18:03 kafkaServer-gc.log.0
-rw------- 1 usertest usertest 7193 Sep 21 21:23 kafkaServer-gc.log.0.current
-rw------- 1 usertest usertest 32423 Sep 21 22:28 kafkaServer.out
-rw-r----- 1 usertest usertest 172 Sep 21 21:18 log-cleaner.log
-rw------- 1 usertest usertest 172 Sep 21 18:03 log-cleaner.log.2020-09-21-18
-rw-r----- 1 usertest usertest 471 Sep 21 22:28 server.log
-rw------- 1 usertest usertest 43858 Sep 21 18:03 server.log.2020-09-21-18
-rw-r----- 1 usertest usertest 31952 Sep 21 21:58 server.log.2020-09-21-21
-rw------- 1 usertest usertest 705 Sep 21 18:03 state-change.log
[usertest@xxxxxx logs]tail -f kafkaServer.out
[2020-09-21 21:18:02,968] INFO Kafka commitId: 0efa8fb0f4c73d92 (org.apache.kafka.common.utils.AppInfoParser)
[2020-09-21 21:18:02,968] INFO Kafka startTimeMs: 1600694282960 (org.apache.kafka.common.utils.AppInfoParser)
[2020-09-21 21:18:02,973] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
[2020-09-21 21:28:02,791] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-09-21 21:38:02,791] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-09-21 21:48:02,791] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-09-21 21:58:02,791] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-09-21 22:08:02,791] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-09-21 22:18:02,791] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-09-21 22:28:02,791] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
----the logs look normal
[usertest@xxxxxx bin]jps
801 QuorumPeerMain
6738 Jps
1706 Kafka
------At this point the Kafka installation is complete; connection testing by the development team is pending, and results will be added in a follow-up.
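In the meantime, a basic end-to-end smoke test can be run from any broker's bin directory. A sketch; the topic name smoke_test is illustrative, and the address can be any one of the three brokers:
./kafka-topics.sh --create --bootstrap-server 10.xxx.xx.xxx:19092 --replication-factor 2 --partitions 3 --topic smoke_test
./kafka-topics.sh --describe --bootstrap-server 10.xxx.xx.xxx:19092 --topic smoke_test
echo "hello kafka" | ./kafka-console-producer.sh --broker-list 10.xxx.xx.xxx:19092 --topic smoke_test
./kafka-console-consumer.sh --bootstrap-server 10.xxx.xx.xxx:19092 --topic smoke_test --from-beginning --max-messages 1
--the consumer should print "hello kafka" and then exit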