Installing Kafka-Eagle, a Kafka Cluster Monitoring and Management Tool

I. Overview

Kafka Eagle is an open-source visualization and management tool for Kafka. It lets you query, visualize, alert on, and explore your metrics no matter where they are stored. In short, it gives you the tools to turn Kafka cluster data into clear charts and visualizations.

  • Official documentation: https://www.kafka-eagle.org/
  • Download: http://download.kafka-eagle.org/
  • GitHub source releases: https://github.com/smartloli/kafka-eagle/releases

If downloading the package from GitHub is slow, see my earlier post on speeding up GitHub downloads:
append cnpmjs.org after github.com in the URL to use a mirror and improve the download speed.

II. Kafka-Eagle installation steps (see the official installation guide)

1. Download kafka-eagle

# Appending `cnpmjs.org` after `github.com` uses a mirror and speeds up the download
wget https://github.com.cnpmjs.org/smartloli/kafka-eagle-bin/archive/v2.0.0.tar.gz

Note that this downloads v2.0.0.tar.gz from the kafka-eagle-bin repository, which contains the pre-built binaries; if you use kafka-eagle instead, you get the source code.
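Optionally, before extracting you can sanity-check that you have the binary package rather than the source: listing the archive should show the bundled kafka-eagle-web-2.0.0-bin.tar.gz (a quick check, assuming the archive name from the wget command above).

# Check the downloaded archive and peek at its contents
ls -lh v2.0.0.tar.gz
tar -tzf v2.0.0.tar.gz | head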

2. Install the JDK

If a JDK is already installed on your Linux server, you can skip this step. Otherwise, install one as follows:

  • Download the JDK tarball (jdk-8u191-linux-x64.tar.gz is used in this example)
  • Extract it to /usr/java
mkdir /usr/java
tar -zxvf jdk-8u191-linux-x64.tar.gz -C /usr/java
  • Configure the environment variables
# Edit the profile file
vi /etc/profile
# Append the JAVA_HOME settings to the end of the profile
export JAVA_HOME=/usr/java/jdk1.8.0_191
export PATH=$PATH:$JAVA_HOME/bin
  • Apply the changes with the following command
source /etc/profile
  • Verify the Java installation (sample output follows below)
# 1. Check the version; a version number in the output means Java is installed correctly
java -version
# 2. Or verify directly with jps
jps
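For reference, on a machine with JDK 1.8.0_191 the two checks print output roughly like the following (build strings and process IDs will differ on your machine):

java -version
# java version "1.8.0_191"
# Java(TM) SE Runtime Environment (build 1.8.0_191-b12)
# Java HotSpot(TM) 64-Bit Server VM (build 25.191-b12, mixed mode)
jps
# 3056 Jps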

3. Extract kafka-eagle

# Create the installation directory
mkdir /opt/software
# Extract the archive to that directory
tar -zxvf v2.0.0.tar.gz -C /opt/software
# Enter the extracted directory
cd /opt/software/kafka-eagle-bin-2.0.0
# Extract the kafka-eagle web package
tar -zxvf kafka-eagle-web-2.0.0-bin.tar.gz
# If you prefer not to rename the directory, create a symlink instead
ln -s kafka-eagle-web-2.0.0 kafka-eagle-web

Configure environment variables so that the ke.sh command can be used directly:

vim /etc/profile
export KE_HOME=/opt/software/kafka-eagle-bin-2.0.0/kafka-eagle-web
export PATH=$PATH:$KE_HOME/bin

# Apply the changes
source /etc/profile
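To confirm the variables took effect, a quick check (assuming the installation path and symlink created above):

# KE_HOME should point at the kafka-eagle-web symlink, and ke.sh should be resolvable on the PATH
echo $KE_HOME
ls -l $KE_HOME/bin/ke.sh
which ke.sh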

4. Configure kafka-eagle

Adjust the configuration to match your actual Kafka cluster. The settings to modify are as follows:

cd ${KE_HOME}/conf
vi system-config.properties

# Multi zookeeper&kafka cluster list -- The client connection address of the Zookeeper cluster is set here
kafka.eagle.zk.cluster.alias=cluster1,cluster2
cluster1.zk.list=tdn1:2181,tdn2:2181,tdn3:2181
cluster2.zk.list=xdn1:2181,xdn2:2181,xdn3:2181

# Add zookeeper acl
cluster1.zk.acl.enable=false
cluster1.zk.acl.schema=digest
cluster1.zk.acl.username=test
cluster1.zk.acl.password=test123

# Kafka broker nodes online list
cluster1.kafka.eagle.broker.size=10
cluster2.kafka.eagle.broker.size=20

# Zkcli limit -- Zookeeper cluster allows the number of clients to connect to
kafka.zk.limit.size=25

# Kafka Eagle webui port -- WebConsole port access address
kafka.eagle.webui.port=8048

# Kafka offset storage -- offsets are stored in the Kafka cluster; if they are stored in ZooKeeper, this option cannot be used
cluster1.kafka.eagle.offset.storage=kafka
cluster2.kafka.eagle.offset.storage=kafka

# Whether the Kafka performance monitoring diagram is enabled
kafka.eagle.metrics.charts=false

# Kafka Eagle keeps data for 30 days by default
kafka.eagle.metrics.retain=30

# If offset is out of range occurs, enable this property -- Only suitable for kafka sql
kafka.eagle.sql.fix.error=false
kafka.eagle.sql.topic.records.max=5000

# Delete kafka topic token -- Set to delete the topic token, so that administrators can have the right to delete
kafka.eagle.topic.token=keadmin

# Kafka sasl authenticate
cluster1.kafka.eagle.sasl.enable=false
cluster1.kafka.eagle.sasl.protocol=SASL_PLAINTEXT
cluster1.kafka.eagle.sasl.mechanism=SCRAM-SHA-256
cluster1.kafka.eagle.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
# If not set, the value can be empty
cluster1.kafka.eagle.sasl.client.id=
# Add kafka cluster cgroups
cluster1.kafka.eagle.sasl.cgroup.enable=false
cluster1.kafka.eagle.sasl.cgroup.topics=kafka_ads01,kafka_ads02

cluster2.kafka.eagle.sasl.enable=true
cluster2.kafka.eagle.sasl.protocol=SASL_PLAINTEXT
cluster2.kafka.eagle.sasl.mechanism=PLAIN
cluster2.kafka.eagle.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret";
cluster2.kafka.eagle.sasl.client.id=
cluster2.kafka.eagle.sasl.cgroup.enable=false
cluster2.kafka.eagle.sasl.cgroup.topics=kafka_ads03,kafka_ads04

# Default use sqlite to store data
kafka.eagle.driver=org.sqlite.JDBC
# It is important to note that the '/hadoop/kafka-eagle/db' path must exist.
kafka.eagle.url=jdbc:sqlite:/hadoop/kafka-eagle/db/ke.db
kafka.eagle.username=root
kafka.eagle.password=smartloli

# (Optional) set mysql address
#kafka.eagle.driver=com.mysql.jdbc.Driver
#kafka.eagle.url=jdbc:mysql://127.0.0.1:3306/ke?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull
#kafka.eagle.username=root
#kafka.eagle.password=smartloli
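As the comment in the configuration notes, the SQLite path must exist before the first start; a minimal preparation step, assuming you keep the default /hadoop/kafka-eagle/db location (the ke.db file itself should be created automatically on first startup):

# Create the directory that kafka.eagle.url points to
mkdir -p /hadoop/kafka-eagle/db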

5. Start kafka-eagle

Use ke.sh to start kafka-eagle.
To list the available subcommands, run ke.sh --help:

ke.sh --help
Usage: /opt/software/kafka-eagle-bin-2.0.0/kafka-eagle-web/bin/ke.sh {start|stop|restart|status|stats|find|gc|jdk|version|sdate}

Start kafka-eagle:

# Start
ke.sh start
# Restart
ke.sh restart
# Stop
ke.sh stop
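Once started, you can check the service from the shell; ke.sh status is one of the subcommands shown in the usage output above, and the port check is just a generic listen-socket lookup (replace 8048 with your configured kafka.eagle.webui.port):

# Show the running status of kafka-eagle
ke.sh status
# Confirm the web console is listening on its port
ss -lntp | grep 8048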

If startup succeeds, you will see output like the screenshot below. Note that I changed the port to 38048 here; the default is 8048.

(Screenshot: kafka-eagle started successfully)

6. Additional notes

  • If startup fails, check the logs under the kafka-eagle installation's logs directory ($KE_HOME/logs)
  • If you use MySQL as the backing store, you only need to configure the connection URL, username, and password; kafka-eagle creates the database and tables it needs automatically
  • Web console address: http://localhost:38048 (a quick command-line check follows this list)
  • Default username: admin
  • Default password: 123456
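As mentioned above, the console can also be probed from the shell before opening a browser; a simple check, assuming the 38048 port used in this example:

# An HTTP status line in the response means the console is up
curl -I http://localhost:38048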

The page after a successful login looks like this:


(Screenshot: kafka-eagle page after a successful login)
  • Deleting a topic in kafka-eagle requires the token configured in conf/system-config.properties (kafka.eagle.topic.token, keadmin by default, as shown in the configuration above)
