Connecting kafka-manager to a Kerberos-authenticated Kafka

OS: CentOS 6.10
Kafka version: kafka_2.11-0.11.0.3.tgz
kafka-manager version: kafka-manager-1.3.3.21.zip

1. Kerberos Environment

1.1 Introduction to Kerberos

For an introduction to Kerberos itself, see any external reference; interested readers can look it up on their own.

1.2 Setting up the Kerberos environment

1.2.1 Preparation

Pick a machine to host the KDC server. My machine's IP is 192.168.1.100 and its hostname is centos610.
Edit /etc/hosts, add the line 192.168.1.100 centos610, and verify that ping centos610 succeeds.
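The hosts edit above can be scripted so it is safe to re-run. This is only a sketch: a temp file stands in for /etc/hosts so it can run anywhere; on the real machine, set HOSTS_FILE=/etc/hosts and run as root.

```shell
# Stand-in for /etc/hosts so the sketch runs anywhere; on the real machine
# set HOSTS_FILE=/etc/hosts and run as root.
HOSTS_FILE=$(mktemp)
ENTRY='192.168.1.100 centos610'

# Append the mapping only if it is not already present (safe to re-run)
grep -qw centos610 "$HOSTS_FILE" || echo "$ENTRY" >> "$HOSTS_FILE"
grep -qw centos610 "$HOSTS_FILE" || echo "$ENTRY" >> "$HOSTS_FILE"  # no-op on the second run

cat "$HOSTS_FILE"
```

The grep-before-append guard keeps the entry from being duplicated if the setup script is run twice.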

1.2.2 Install the KDC server

[root@centos610 ~]# yum install krb5-server

1.2.3 Edit the KDC configuration file

Change EXAMPLE.COM to KRB5.COM. The realm name is arbitrary; the company domain name in uppercase is conventional.

[root@centos610 ~]# vim /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88 
[realms]
 KRB5.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }

1.2.4 Edit the krb5 configuration file

This file is created by the krb5-libs package, which is normally installed with the base system; if it is missing, install it with yum install krb5-libs.

Change EXAMPLE.COM to KRB5.COM (it must match the realm name used in kdc.conf above),
change example.com to krb5.com,
and change kerberos.example.com to the KDC hostname, centos610 in my case.
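These three replacements can also be done with a single sed command. The sketch below runs against a miniature stand-in copy of the stock file so it is runnable anywhere; on the real machine, point it at /etc/krb5.conf (after backing it up). Note the order: kerberos.example.com must be replaced first, because it contains example.com.

```shell
# Miniature stand-in for the stock /etc/krb5.conf
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
 default_realm = EXAMPLE.COM
[realms]
 EXAMPLE.COM = {
  kdc = kerberos.example.com
 }
[domain_realm]
 .example.com = EXAMPLE.COM
EOF

# Replace kerberos.example.com first: doing the lowercase example.com
# substitution before it would mangle the host into kerberos.krb5.com.
sed -i 's/kerberos\.example\.com/centos610/g; s/EXAMPLE\.COM/KRB5.COM/g; s/example\.com/krb5.com/g' "$CONF"

cat "$CONF"
```

After the run, every realm reference reads KRB5.COM/krb5.com and the kdc line points at centos610.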

[root@centos610 ~]# vim /etc/krb5.conf
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = KRB5.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 KRB5.COM = {
  kdc = centos610
  admin_server = centos610
 }

[domain_realm]
 .krb5.com = KRB5.COM
 krb5.com = KRB5.COM

1.2.5 Initialize the KDC database

Initialize the KDC database with kdb5_util; you will be prompted to set a master password.

[root@centos610 ~]# kdb5_util create -s

If the command hangs at the "Loading random data" step, open another shell and run cat /dev/sda > /dev/urandom to feed the entropy pool.

# Add the principal test using kadmin.local
[root@centos610 ~]# kadmin.local -q "addprinc test"
Authenticating as principal kafka/admin@KRB5.COM with password.
WARNING: no policy specified for test@KRB5.COM; defaulting to no policy
Enter password for principal "test@KRB5.COM":
Re-enter password for principal "test@KRB5.COM":
Principal "test@KRB5.COM" created.
# Generate the keytab file
[root@centos610 ~]# kadmin.local -q "ktadd -k /root/test.keytab test"
Authenticating as principal kafka/admin@KRB5.COM with password.
Entry for principal test with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/root/test.keytab.
Entry for principal test with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/root/test.keytab.
Entry for principal test with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:/root/test.keytab.
Entry for principal test with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:/root/test.keytab.
Entry for principal test with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:/root/test.keytab.
Entry for principal test with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:/root/test.keytab.
# Check that the file was created
[root@centos610 ~]# ll |grep test.keytab
-rw-------. 1 root root   316 Dec 14 00:05 test.keytab
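A keytab holds the principal's long-term keys, so it should stay readable by its owner only (ktadd creates it with mode 600). A quick permission check can be scripted; the sketch below uses a temp file as a stand-in so it runs anywhere — on the KDC, point KEYTAB at /root/test.keytab.

```shell
# Stand-in for /root/test.keytab; on the KDC set KEYTAB to the real path.
KEYTAB=$(mktemp)
chmod 600 "$KEYTAB"

# A keytab contains long-term keys: owner-only access, nothing wider.
mode=$(stat -c '%a' "$KEYTAB")
if [ "$mode" = "600" ]; then
    echo "keytab permissions ok ($mode)"
else
    echo "keytab is too open ($mode); run: chmod 600 $KEYTAB" >&2
fi
```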

1.2.6 Start the KDC service

[root@centos610 init.d]# service krb5kdc start
Starting Kerberos 5 KDC:                                  [  OK  ]

1.2.7 Verify the test.keytab file

# Install krb5-workstation, which provides the client tools (it does not have to be installed on the KDC server; everything runs on one machine here to keep things simple)
[root@centos610 ~]# yum install krb5-workstation
# Authenticate by loading test.keytab with kinit
[root@centos610 ~]# kinit -k -t test.keytab test
# Check the current tickets with klist
[root@centos610 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: test@KRB5.COM

Valid starting     Expires            Service principal
12/13/18 21:26:15  12/14/18 21:26:15  krbtgt/KRB5.COM@KRB5.COM
	renew until 12/20/18 21:26:15

At this point the Kerberos environment is set up and verified.

2. Installing ZooKeeper & Kafka

2.1 ZooKeeper installation

All packages are placed in /opt/software
and extracted to /opt/module. ZooKeeper ships inside the Kafka tarball, so only the Kafka package needs to be downloaded and extracted.

# Create the directories for packages and for the extracted software
[root@centos610 ~]# mkdir /opt/{software,module}
[root@centos610 ~]# cd /opt/software/
# Upload the package to /opt/software, or download it with wget
[root@centos610 software]# ls
kafka_2.11-0.11.0.3.tgz
[root@centos610 software]# ll -h
total 41M
-rw-r--r--. 1 root root 41M Dec 13 21:44 kafka_2.11-0.11.0.3.tgz
# Extract
[root@centos610 software]# tar -xf kafka_2.11-0.11.0.3.tgz -C ../module/

Modify the ZooKeeper configuration

# Add the principal zookeeper/centos610 (use your own hostname)
[root@centos610 ~]# kadmin.local -q "addprinc zookeeper/centos610"
# Generate zookeeper.keytab; the path can be customized (adjust zookeeper_jaas.conf below to match)
[root@centos610 ~]# kadmin.local -q "ktadd -k /root/zookeeper.keytab zookeeper/centos610"

[root@centos610 software]# cd /opt/module/kafka_2.11-0.11.0.3/config/
[root@centos610 config]# vim zookeeper.properties
# Append the following three lines at the end of the file
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
# Create the file /opt/module/kafka_2.11-0.11.0.3/config/zookeeper_jaas.conf.
# keyTab is the zookeeper.keytab generated above; the path is up to you (copy the file there if needed).
# Contents:
Server{
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    useTicketCache=false
    keyTab="/root/zookeeper.keytab"
    principal="zookeeper/centos610";
};
# Create the ZooKeeper start script /opt/module/kafka_2.11-0.11.0.3/rz.sh with the following contents
export KAFKA_OPTS='-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/module/kafka_2.11-0.11.0.3/config/zookeeper_jaas.conf'
bin/zookeeper-server-start.sh config/zookeeper.properties
# Start ZooKeeper
# (For simplicity the JDK is installed with yum: yum install java-1.8.0-openjdk-devel; installing from a tarball also works)
[root@centos610 kafka_2.11-0.11.0.3]# chmod +x /opt/module/kafka_2.11-0.11.0.3/rz.sh
[root@centos610 kafka_2.11-0.11.0.3]# /opt/module/kafka_2.11-0.11.0.3/rz.sh
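The `export` in rz.sh matters: Kafka's bin/ scripts launch the JVM as a child process, and only exported variables reach a child. A minimal demonstration of that inheritance (the variable values here are just the ones used in this tutorial):

```shell
# Exported variables are visible to child processes, which is how
# KAFKA_OPTS reaches the JVM launched by bin/zookeeper-server-start.sh.
export KAFKA_OPTS='-Djava.security.krb5.conf=/etc/krb5.conf'
sh -c 'echo "child sees: $KAFKA_OPTS"'

# Without export, a child process sees nothing:
PLAIN_VAR='not exported'
sh -c 'echo "child sees: [$PLAIN_VAR]"'
```

The first child prints the JVM flag; the second prints an empty value, which is why a plain assignment would silently break the Kerberos setup.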

2.2 Kafka installation

# Add the principal kafka/centos610 (use your own hostname)
[root@centos610 ~]# kadmin.local -q "addprinc kafka/centos610"
# Generate kafka.keytab; the path can be customized (adjust kafka_server_jaas.conf below to match)
[root@centos610 ~]# kadmin.local -q "ktadd -k /root/kafka.keytab kafka/centos610"
# Edit /opt/module/kafka_2.11-0.11.0.3/config/server.properties and add the following (position is arbitrary, though the Socket Server Settings section is the natural place):
listeners=SASL_PLAINTEXT://centos610:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
# This value is modified; the original value is localhost:2181
zookeeper.connect=centos610:2181
# Create the file /opt/module/kafka_2.11-0.11.0.3/config/kafka_server_jaas.conf.
# The KafkaServer section authenticates the Kafka broker itself; the Client section authenticates Kafka's client connection to ZooKeeper.
# Contents:
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/root/kafka.keytab"
    principal="kafka/centos610";
};
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/root/test.keytab"
    principal="test";
};

# Create the Kafka start script /opt/module/kafka_2.11-0.11.0.3/rk.sh with the following contents:
export KAFKA_OPTS='-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/module/kafka_2.11-0.11.0.3/config/kafka_server_jaas.conf'
bin/kafka-server-start.sh config/server.properties
# Start Kafka
[root@centos610 kafka_2.11-0.11.0.3]# chmod +x /opt/module/kafka_2.11-0.11.0.3/rk.sh
[root@centos610 kafka_2.11-0.11.0.3]# /opt/module/kafka_2.11-0.11.0.3/rk.sh

2.3 Produce/consume test

# Create the file /opt/module/kafka_2.11-0.11.0.3/config/kafka_client_jaas.conf with the following contents:
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/root/test.keytab"
    principal="test";
};
# Append the following three lines at the end of /opt/module/kafka_2.11-0.11.0.3/config/producer.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
# Append the following three lines at the end of /opt/module/kafka_2.11-0.11.0.3/config/consumer.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
# Create the producer script /opt/module/kafka_2.11-0.11.0.3/p.sh with the following contents
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/module/kafka_2.11-0.11.0.3/config/kafka_client_jaas.conf"
bin/kafka-console-producer.sh --broker-list centos610:9092 --topic test --producer.config config/producer.properties
# Start the producer
[root@centos610 ~]# chmod +x /opt/module/kafka_2.11-0.11.0.3/p.sh
[root@centos610 ~]# cd /opt/module/kafka_2.11-0.11.0.3/
[root@centos610 kafka_2.11-0.11.0.3]# ./p.sh
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
# Send the message hello kafka
>hello kafka
>
# Create the consumer script /opt/module/kafka_2.11-0.11.0.3/c.sh with the following contents
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/opt/module/kafka_2.11-0.11.0.3/config/kafka_client_jaas.conf"
bin/kafka-console-consumer.sh --bootstrap-server centos610:9092 --topic test --new-consumer --from-beginning --consumer.config config/consumer.properties
# Start the consumer
[root@centos610 ~]# chmod +x /opt/module/kafka_2.11-0.11.0.3/c.sh
[root@centos610 ~]# cd /opt/module/kafka_2.11-0.11.0.3/
[root@centos610 kafka_2.11-0.11.0.3]# ./c.sh
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
[2018-12-14 01:00:55,650] WARN The configuration 'zookeeper.connect' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig)
[2018-12-14 01:00:55,651] WARN The configuration 'zookeeper.connection.timeout.ms' was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig)
# The hello kafka message is received successfully
hello kafka

This completes the Kafka installation and verification.

3. Installing kafka-manager

  • Create the file /opt/module/kafka-manager/conf/jaas.conf with the following contents:
# The Client section authenticates the connection to ZooKeeper; the KafkaClient section authenticates the connection to the Kafka brokers

Client{
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/root/test.keytab"
  principal="test"
  serviceName="kafka"
  doNotPrompt=true;
};
KafkaClient{
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/root/test.keytab"
  principal="test"
  serviceName="kafka"
  doNotPrompt=true;
};
  • Create the start script /opt/module/kafka-manager/scripts/rm.sh with the following contents
echo '-------------------------------------------------------------------divider'$(date +%F%t%T)> manager.out

# ZooKeeper used for kafka-manager's own metadata; export is required here
export ZK_HOSTS=centos610:2181
# kafka-manager home directory
MANAGER_HOME=/opt/module/kafka-manager
# Path to the executable
KAFKA_MANAGER=$MANAGER_HOME/bin/kafka-manager
# Log location
APP_HOME=-Dapplication.home=$MANAGER_HOME
# HTTP port
HTTP_PORT=-Dhttp.port=9001

# SASL authentication (copy /etc/krb5.conf to $MANAGER_HOME/conf/krb5.conf beforehand)
JAAS_CONF=-Djava.security.auth.login.config=$MANAGER_HOME/conf/jaas.conf
KRB5_CONF=-Djava.security.krb5.conf=$MANAGER_HOME/conf/krb5.conf

# Append (>>) so the divider line written above is not truncated away
nohup  $KAFKA_MANAGER $JAAS_CONF $KRB5_CONF $APP_HOME $HTTP_PORT >>manager.out 2>&1 &

echo "$!"
echo "$!" >mpid
tailf manager.out
  • Create the stop script /opt/module/kafka-manager/scripts/sm.sh with the following contents
echo '-------------------------------------------------------------------divider'$(date +%F%t%T)> manager.out
kill `cat mpid`
tailf manager.out
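Stripped of the kafka-manager specifics, rm.sh and sm.sh reduce to a standard pid-file pattern: start a background process, record `$!`, and later signal the recorded pid. The sketch below exercises that pattern with `sleep` standing in for the kafka-manager process so it can run anywhere:

```shell
# `sleep` stands in for the kafka-manager process; run in a scratch dir.
cd "$(mktemp -d)"

# start: run in the background, remember the pid via $!
nohup sleep 300 >manager.out 2>&1 &
echo "$!" > mpid

# the recorded pid is alive
kill -0 "$(cat mpid)" && echo "running (pid $(cat mpid))"

# stop: signal the recorded pid and wait for it to be reaped
kill "$(cat mpid)"
wait
kill -0 "$(cat mpid)" 2>/dev/null || echo "stopped"
```

`kill -0` sends no signal; it only checks that the pid still exists, which makes it a cheap liveness probe for scripts like these.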
