Deploying a Kerberos KDC Master/Slave Setup

1. Environment

Role         Hostname
master kdc   hadoop01
slave kdc    hadoop02
client       hadoop03

Notes:
1. The hostnames/domain names here must not contain uppercase letters.
2. The clocks of all hosts involved in Kerberos must be kept in sync.
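
Clock synchronization is usually handled by chrony or ntpd; a minimal sketch, assuming chrony on CentOS/RHEL hosts (this step is not part of the original write-up):

$ yum install -y chrony
$ systemctl enable chronyd
$ systemctl start chronyd
$ chronyc tracking    # check the offset against the configured time source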

2. Configure the master KDC service (master kdc)

2.1 Install the packages

$ yum install -y krb5-server openldap-clients krb5-workstation krb5-libs

2.2 Edit the configuration

$ vi /etc/krb5.conf
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log

[libdefaults]
default_realm = BONC.COM.CN
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 1d
renew_lifetime = 7d
forwardable = true
rdns = false
renewable = true
udp_preference_limit = 1
kdc_timeout = 3000
max_retries = 3

[realms]
BONC.COM.CN = {
kdc = hadoop01
kdc = hadoop02
admin_server = hadoop01
default_domain = BONC.COM.CN
}

[domain_realm]
.bonc.com.cn = BONC.COM.CN
bonc.com.cn = BONC.COM.CN

$ vi /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88

[realms]
BONC.COM.CN = {
master_key_type = aes128-cts
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
max_life = 1d
max_renewable_life = 7d
supported_enctypes = aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}

$ vi /var/kerberos/krb5kdc/kadm5.acl
*/[email protected]  *

Note:
aes128-cts is used here. If you use aes256-cts instead, the two JAR files under $JAVA_HOME/jre/lib/security (the JCE unlimited-strength policy files) must be replaced on every host involved in Kerberos.
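
If you do switch to aes256-cts with Oracle JDK 8, the usual fix is to drop in the unlimited-strength JCE policy JARs. A hedged sketch, assuming the policy archive has already been downloaded and unpacked to /tmp/UnlimitedJCEPolicyJDK8 (that path is an assumption, not from the original):

$ cp /tmp/UnlimitedJCEPolicyJDK8/local_policy.jar $JAVA_HOME/jre/lib/security/
$ cp /tmp/UnlimitedJCEPolicyJDK8/US_export_policy.jar $JAVA_HOME/jre/lib/security/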

2.3 Create the database

$ kdb5_util create -r BONC.COM.CN -s

Enter the database (master key) password. Several principal* files will then be generated under /var/kerberos/krb5kdc/. If you are told the database already exists and you really need to rebuild it, delete the principal-related files under /var/kerberos/krb5kdc/ first.
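
A minimal sketch of that rebuild path (destructive; only do this when you intend to throw the existing database away):

$ rm -f /var/kerberos/krb5kdc/principal*
$ kdb5_util create -r BONC.COM.CN -s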

2.4 Add the admin principal

$ kadmin.local
addprinc admin/[email protected]
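
addprinc prompts twice for the new principal's password. To confirm the principal was created (not part of the original steps, just a quick check):

$ kadmin.local -q "listprincs"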

2.5 Start the kdc and kadmin services and check their status

$ /bin/systemctl start  krb5kdc.service
$ /bin/systemctl start  kadmin.service

$ /bin/systemctl status  krb5kdc.service
$ /bin/systemctl status  kadmin.service
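
Optionally, if the services should come back after a reboot (not in the original steps):

$ systemctl enable krb5kdc.service
$ systemctl enable kadmin.service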

2.6 Verify the admin account

$ kinit admin/admin

If there is no output after you enter the password, authentication succeeded.
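
You can also inspect the ticket cache; klist should show admin/[email protected] as the default principal holding a krbtgt/[email protected] ticket:

$ klist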

2.7 Generate the host principals

$ kadmin.local -q "ank -randkey host/[email protected]"
$ kadmin.local -q "ank -randkey host/[email protected]"

$ kadmin.local -q "xst host/[email protected]"
$ kadmin.local -q "xst host/[email protected]"

The format is:
host/[master KDC hostname]@[realm]
host/[slave KDC hostname]@[realm]
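
xst (an alias of ktadd) writes the extracted keys to /etc/krb5.keytab by default; you can list the entries to confirm:

$ klist -kt /etc/krb5.keytab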

3. Configure the slave KDC service (slave kdc)

3.1 Install the packages

$ yum install -y krb5-server openldap-clients krb5-workstation krb5-libs

3.2 Create the kpropd.acl file

$ vi /var/kerberos/krb5kdc/kpropd.acl
host/[email protected]
host/[email protected]

3.3 Sync the configuration files from the master KDC (run these on hadoop01)

$ scp /etc/krb5.conf hadoop02:/etc/
$ scp /var/kerberos/krb5kdc/kdc.conf /var/kerberos/krb5kdc/kadm5.acl /var/kerberos/krb5kdc/.k5.BONC.COM.CN hadoop02:/var/kerberos/krb5kdc/
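
kpropd on the slave authenticates the incoming database dump with the host/hadoop02 key, so the keytab extracted in step 2.7 also has to be present on the slave. Assuming xst wrote it to the default /etc/krb5.keytab on the master, a minimal (if blunt) way to get it there is:

$ scp /etc/krb5.keytab hadoop02:/etc/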

3.4 Start the kpropd service and check its status

$ /bin/systemctl start  kprop.service
$ /bin/systemctl status  kprop.service

3.5 Sync the database (on the master kdc only)

$ kdb5_util dump /var/kerberos/krb5kdc/slave_datatrans
$ kprop -f /var/kerberos/krb5kdc/slave_datatrans hadoop02

A successful run prints SUCCEEDED.
The KDC database is not replicated automatically, so it has to be pushed by hand. Set up a crontab job to sync it periodically (an example entry is shown after the script below). Create a dump directory under /var/kerberos/ to hold the database dumps, and a dump_sh directory under /var/kerberos/ for the sync script and its log.

$ mkdir /var/kerberos/dump_sh /var/kerberos/dump
$ vi /var/kerberos/dump_sh/sync_db.sh
#!/bin/sh
# Dump the local KDC database and push it to every slave KDC.

slaves="hadoop02"

echo "$(date) Start sync!"
# Dump the full principal database to a flat file.
sudo kdb5_util dump /var/kerberos/dump/DB_backup
# Push the dump to each slave; kpropd on the slave loads it.
for slave in $slaves; do
    sudo kprop -f /var/kerberos/dump/DB_backup "$slave"
done
echo "$(date) Completing sync!"
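
A hedged example of the crontab entry mentioned above, assuming the script is made executable and an hourly full sync is frequent enough (both are assumptions, adjust to taste):

$ chmod +x /var/kerberos/dump_sh/sync_db.sh
$ crontab -e
0 * * * * /var/kerberos/dump_sh/sync_db.sh >> /var/kerberos/dump_sh/sync_db.log 2>&1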

3.6 Start the kdc service

$ /bin/systemctl start  krb5kdc.service

4. Verification

4.1 Deploy the client

$ yum install -y krb5-workstation krb5-libs krb5-auth-dialog

4.2 Sync the configuration file from the master KDC to the client

$ scp /etc/krb5.conf hadoop03:/etc/

4.3 Obtain a ticket

$ kinit admin/admin

Enter the password; under normal conditions the ticket is obtained from the master KDC.

4.4 Stop the master kdc service (on hadoop01)

$ /bin/systemctl stop  krb5kdc.service

4.5 Obtain a ticket again

$ kinit admin/admin

If this succeeds, failover to the slave KDC is working. You can watch the KDC log at /var/log/krb5kdc.log to see which KDC handled the request.
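
For example, on hadoop02 while re-running kinit on the client (just tailing the log path mentioned above):

$ tail -f /var/log/krb5kdc.log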
