If the Linux server already has a JDK installed, you can skip this step; otherwise, install the JDK as described at the link below.
Install JDK
You can extract the JDK to whatever path suits your environment; here we extract it to /usr/java/jdk1.8, as shown below:
cd /usr/java
tar -zxvf jdk-xxxx.tar.gz
mv jdk-xxxx jdk1.8
...
vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.8
export PATH=$PATH:$JAVA_HOME/bin
Reload the environment:
source /etc/profile
Finally, run java -version. Output like the following indicates the JDK is installed correctly:
java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
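An install script can also verify the version string above programmatically. A minimal sketch (the helper name and the parsing regex are illustrative, not part of any tool):

```shell
# Extract "major.minor" from a `java -version` style line, so a script
# can assert the JDK is 1.8 before proceeding.
get_java_version() {
  echo "$1" | sed -n 's/.*"\([0-9]*\.[0-9]*\).*/\1/p'
}
get_java_version 'java version "1.8.0_60"'   # prints 1.8
```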
https://github.com/smartloli/kafka-eagle-bin/archive/v1.2.6.tar
tar -zxvf kafka-eagle-${version}-bin.tar.gz
mv kafka-eagle-${version} kafka-eagle
vi /etc/profile
export KE_HOME=/data/soft/new/kafka-eagle
export PATH=$PATH:$KE_HOME/bin
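Since /etc/profile may be sourced more than once, a defensive variant is to append to PATH only when the directory is not already present. A sketch (the path is the installation directory assumed above; the helper name is illustrative):

```shell
# Append a directory to PATH only if it is not already there, so that
# re-sourcing the profile does not grow PATH on every login.
path_append() {
  case ":$PATH:" in
    *":$1:"*) ;;                 # already present: do nothing
    *) PATH="$PATH:$1" ;;
  esac
}
path_append "/data/soft/new/kafka-eagle/bin"
path_append "/data/soft/new/kafka-eagle/bin"   # no-op the second time
```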
Enter the Kafka Eagle installation directory:
cd ${KE_HOME}/conf
vi system-config.properties
# Multi zookeeper&kafka cluster list -- The client connection address of the Zookeeper cluster is set here
kafka.eagle.zk.cluster.alias=cluster1,cluster2
cluster1.zk.list=tdn1:2181,tdn2:2181,tdn3:2181
cluster2.zk.list=xdn1:2181,xdn2:2181,xdn3:2181
# Zkcli limit -- Zookeeper cluster allows the number of clients to connect to
kafka.zk.limit.size=25
# Kafka Eagle webui port -- WebConsole port access address
kafka.eagle.webui.port=8048
# Kafka offset storage -- Offset stored in a Kafka cluster, if stored in the zookeeper, you can not use this option
cluster1.kafka.eagle.offset.storage=kafka
cluster2.kafka.eagle.offset.storage=kafka
# Whether the Kafka performance monitoring diagram is enabled
kafka.eagle.metrics.charts=false
# If offset is out of range occurs, enable this property -- Only suitable for kafka sql
kafka.eagle.sql.fix.error=false
# Delete kafka topic token -- Set to delete the topic token, so that administrators can have the right to delete
kafka.eagle.topic.token=keadmin
# kafka sasl authenticate, current support SASL_PLAINTEXT
kafka.eagle.sasl.enable=false
kafka.eagle.sasl.protocol=SASL_PLAINTEXT
kafka.eagle.sasl.mechanism=PLAIN
kafka.eagle.sasl.client=
# Default use sqlite to store data
kafka.eagle.driver=org.sqlite.JDBC
# It is important to note that the '/hadoop/kafka-eagle/db' path must exist.
kafka.eagle.url=jdbc:sqlite:/hadoop/kafka-eagle/db/ke.db
kafka.eagle.username=root
kafka.eagle.password=smartloli
# set mysql address
#kafka.eagle.driver=com.mysql.jdbc.Driver
#kafka.eagle.url=jdbc:mysql://127.0.0.1:3306/ke?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull
#kafka.eagle.username=root
#kafka.eagle.password=smartloli
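Before starting, it can help to confirm that every endpoint in cluster1.zk.list is reachable. A sketch that splits the comma-separated list into host:port pairs (hostnames are the examples from the config above; the actual probe is commented out since netcat may not be installed):

```shell
# Split the zk.list value and report each Zookeeper endpoint.
ZK_LIST="tdn1:2181,tdn2:2181,tdn3:2181"
checked=""
for hp in $(echo "$ZK_LIST" | tr ',' ' '); do
  host=${hp%%:*}
  port=${hp##*:}
  checked="$checked $host:$port"
  echo "checking $host on port $port"
  # Uncomment to actually probe the endpoint (requires netcat):
  # nc -z -w 2 "$host" "$port" && echo "  $host:$port reachable"
done
```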
cd ${KE_HOME}/bin
chmod +x ke.sh
./ke.sh start
For details, search Baidu.
On Windows, go to the %KE_HOME%\bin directory and double-click the ke.bat file.
Open http://host:port/ke in a browser to access the Kafka Eagle Dashboard, which contains the following:
The Topic module includes Create and List functions; through the Create module you can create a topic with a custom number of partitions and replicas. The topic list shows the partition index number, Leader, Replicas, and Isr, as shown in the figure below.
The Consumers module displays topic information from the consumer records, which includes the following:
* Running
* Pending
* Active Topic Graph
As shown in the figure below:
Each Group name is a hyperlink; clicking it shows the consumption details, as shown below:
Clicking the name of a Topic being consumed displays charts of that topic's consumption and production rates, as shown below:
This module displays Kafka cluster information and Zookeeper cluster information, including the following:
The new alarm module requires you to configure alarm settings for your own topics: if a topic has no consumer information or exceeds the configured threshold, an alarm is triggered. Currently, alarms are sent as messages; the settings are shown in the figure below:
The ke.sh startup script contains the following commands:
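For reference, typical ke.sh invocations look like this (a sketch of the script's usual subcommands; run ./ke.sh with no arguments to see the exact list your version supports):

```shell
cd ${KE_HOME}/bin
./ke.sh start     # start the Kafka Eagle web console
./ke.sh status    # show whether the process is running
./ke.sh stop      # stop the process
./ke.sh restart   # stop and start again
```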
The Zookeeper client command module currently supports only the ls, delete, and get operations; other commands are not supported, as shown in the figure below:
This module displays multi-cluster Kafka information and Zookeeper cluster information, including the following:
Note: Access to topic message data depends on the underlying interface's record of the earliest and latest offsets; by default, at most 5000 records are displayed.
vi kafka-server-start.sh
...
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-server -Xms2G -Xmx2G -XX:MetaspaceSize=128m -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=8 -XX:ConcGCThreads=5 -XX:InitiatingHeapOccupancyPercent=70"
export JMX_PORT="9999"
#export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
fi
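As a quick sanity check, the max heap can be parsed back out of KAFKA_HEAP_OPTS before restarting the broker. A sketch using the value set above (the variable names are illustrative):

```shell
# Pull the -Xmx value out of KAFKA_HEAP_OPTS to confirm the heap size.
KAFKA_HEAP_OPTS="-server -Xms2G -Xmx2G -XX:+UseG1GC -XX:MaxGCPauseMillis=200"
xmx=$(echo "$KAFKA_HEAP_OPTS" | tr ' ' '\n' | sed -n 's/^-Xmx//p')
echo "max heap: $xmx"
```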
Kafka Eagle monitors message data sources (supporting offsets stored in the __consumer_offsets topic as well as offsets stored in Zookeeper). Since creating, modifying, or consuming Kafka messages is registered in Zookeeper, we can obtain data such as topics, brokers, partitions, and groups from those changes. Kafka's storage structure in Zookeeper is shown in the figure below:
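The Zookeeper layout mentioned above can be browsed with Kafka's zkCli.sh; the znode paths below are the standard Kafka ones (the server address reuses the example host from the config earlier):

```shell
# Interactive session sketch -- run against one of your Zookeeper nodes:
# zkCli.sh -server tdn1:2181
#   ls /brokers/ids        # registered broker ids
#   ls /brokers/topics     # topics and their partition assignments
#   ls /consumers          # consumer groups (old Zookeeper-stored offsets)
#   ls /config/topics      # per-topic configuration overrides
```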