Kerberos HA (High Availability) Configuration

Kerberos Installation (on the primary node)

Node Information

data80
data81
data82
data83
  • Notes (hostname resolution between nodes is assumed; a sample /etc/hosts mapping is sketched below)
    • Primary Kerberos node: data80
    • Secondary Kerberos node: data81
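
All nodes must be able to resolve one another's hostnames; a sample /etc/hosts mapping (the IP addresses below are placeholders for illustration only):

192.168.1.80  data80
192.168.1.81  data81
192.168.1.82  data82
192.168.1.83  data83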

Install the KDC server

  • On the KDC (data80), install the krb5-server, krb5-libs, krb5-auth-dialog, and krb5-workstation packages

    yum install krb5-server krb5-libs krb5-auth-dialog krb5-workstation  -y
    
  • Install krb5-devel and krb5-workstation

    yum install krb5-devel krb5-workstation -y
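
  • Optionally, verify that the packages are in place:

    rpm -qa | grep krb5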
    

Edit krb5.conf

/etc/krb5.conf

includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 pkinit_anchors = FILE:/etc/pki/tls/certs/ca-bundle.crt
 default_realm = HADOOP.COM
 default_ccache_name = KEYRING:persistent:%{uid}

[realms]
 HADOOP.COM = {
  kdc = data80
  admin_server = data80
  kdc = data81
  admin_server = data81
}

Distribute the configuration file to the other nodes

sudo scp /etc/krb5.conf  data81:/etc/
sudo scp /etc/krb5.conf  data82:/etc/
sudo scp /etc/krb5.conf  data83:/etc/

Create the database (on data80)

kdb5_util create -r HADOOP.COM -s
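
A successful create writes the principal database and the master key stash file to /var/kerberos/krb5kdc (assuming the default paths in kdc.conf); a quick check:

ls -a /var/kerberos/krb5kdc
# expect: principal, principal.kadm5, principal.kadm5.lock, principal.ok and the hidden stash file .k5.HADOOP.COM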

Start the services (on data80)

chkconfig --level 35 krb5kdc on
chkconfig --level 35 kadmin on
service krb5kdc start
service kadmin start
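
On systemd-based systems (e.g. CentOS 7), the same can be done with systemctl, which the rest of this guide already uses:

sudo systemctl enable krb5kdc kadmin
sudo systemctl start krb5kdc kadmin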

Create principals for primary/secondary replication and generate keytab files for them

 sudo kadmin.local
 kadmin.local:  addprinc -randkey host/[email protected]
 kadmin.local:  addprinc -randkey host/[email protected]
 
 kadmin.local:  ktadd  host/[email protected]
 kadmin.local:  ktadd  host/[email protected]

The replication principals are created with randomly generated keys (-randkey), and the ktadd command writes their keys into a keytab file, which by default is /etc/krb5.keytab.
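
To confirm the keytab entries were written, the keytab can be inspected with klist (assuming the default /etc/krb5.keytab path):

sudo klist -kt /etc/krb5.keytab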

Copy the following files to the corresponding directories on the secondary Kerberos server

  • Copy the krb5.conf and krb5.keytab files from /etc to the /etc directory on the secondary Kerberos server

  • Copy the .k5.HADOOP.COM, kadm5.acl, and kdc.conf files from /var/kerberos/krb5kdc to the /var/kerberos/krb5kdc directory on the secondary Kerberos server

Note: .k5.HADOOP.COM (the master key stash file) is hidden; be sure not to forget to copy it.
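
A sketch of the copy commands, assuming root SSH access to data81:

sudo scp /etc/krb5.conf /etc/krb5.keytab data81:/etc/
sudo scp /var/kerberos/krb5kdc/.k5.HADOOP.COM /var/kerberos/krb5kdc/kadm5.acl /var/kerberos/krb5kdc/kdc.conf data81:/var/kerberos/krb5kdc/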

Operations on the secondary Kerberos node

Declare the principals allowed to perform propagation

Add the corresponding principals to the /var/kerberos/krb5kdc/kpropd.acl configuration file; create the file if it does not exist.

cd /var/kerberos/krb5kdc

sudo vim kpropd.acl

host/[email protected]
host/[email protected]

Start the kprop service and enable it at boot

 sudo systemctl enable kprop
 sudo systemctl start kprop
 sudo systemctl status kprop
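
By default kpropd listens on TCP port 754 (the krb5_prop service); a quick check that the secondary node is listening:

 sudo ss -lntp | grep 754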

Propagate data from the primary node to the secondary node

sudo kdb5_util dump /var/kerberos/krb5kdc/master.dump

On success, two files are generated: master.dump and master.dump.dump_ok.

On the primary node, use the kprop command to propagate the master.dump file to the secondary node

sudo kprop -f /var/kerberos/krb5kdc/master.dump -d -P 754  data81
  • Log output

    3769 bytes sent.
    Database propagation to data81: SUCCEEDED
    

Inspect the /var/kerberos/krb5kdc directory on the secondary node

-rw-------. 1 root root 3769 Apr  8 01:25 from_master
-rw-------. 1 root root   22 Apr  8 00:22 kadm5.acl
-rw-------. 1 root root  451 Sep 14  2019 kdc.conf
-rw-r--r--. 1 root root   46 Apr  8 00:27 kpropd.acl
-rw-------. 1 root root 8192 Apr  8 01:25 principal
-rw-------. 1 root root 8192 Apr  8 01:25 principal.kadm5
-rw-------. 1 root root    0 Apr  8 00:29 principal.kadm5.lock
-rw-------. 1 root root    0 Apr  8 01:25 principal.ok

The following files were added under /var/kerberos/krb5kdc on the secondary node:

  • from_master
  • principal
  • principal.kadm5
  • principal.kadm5.lock
  • principal.ok

On the secondary node, verify that the propagated data can be used to start the Kerberos services

  • First stop the kprop service, back up and remove the kpropd.acl file, then start the krb5kdc and kadmin services

    sudo systemctl stop kprop
    sudo mv /var/kerberos/krb5kdc/kpropd.acl /var/kerberos/krb5kdc/kpropd.acl.bak
    sudo systemctl start krb5kdc
    sudo systemctl start kadmin
    
  • Edit the /etc/krb5.conf file on the secondary server, change kdc and admin_server to the secondary server's address, and test that kinit works (see the sketch after the snippet below)

     HADOOP.COM = {
     # kdc = data80
     # admin_server = data80
      kdc = data81
      admin_server = data81
    }
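
  • A quick verification sketch (the principal test/admin is hypothetical; create it on the KDC first with kadmin.local -q "addprinc test/admin" if it does not exist):

    kinit test/[email protected]
    klist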
    

Set up scheduled propagation (cron)

 crontab -e
 */5 * * * * /var/kerberos/krb5kdc/kprop_sync.sh > /var/kerberos/krb5kdc/lastupdate
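
The kprop_sync.sh script referenced above is not included in the original text; a minimal sketch of what such a script might do (the host name and paths are assumptions to adapt):

 #!/bin/bash
 # /var/kerberos/krb5kdc/kprop_sync.sh -- dump the KDC database and push it to the secondary
 set -e
 DUMP=/var/kerberos/krb5kdc/master.dump
 SLAVE=data81

 # dump the local (primary) KDC database
 kdb5_util dump "$DUMP"
 # propagate the dump to the secondary KDC (kprop uses port 754 by default)
 kprop -f "$DUMP" "$SLAVE"
 echo "$(date) propagated to $SLAVE"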

Common issues

  • No permission to list principals (see the kadm5.acl note below)

    kadmin:  list_principals 
    get_principals: Operation requires ``list'' privilege while retrieving list.
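
This error means the authenticated principal has not been granted the list ("l") privilege in kadm5.acl on the KDC. A sketch of an entry granting all privileges to */admin principals (edit /var/kerberos/krb5kdc/kadm5.acl and restart kadmin afterwards):

    */[email protected]    *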
    

References

  • https://cloud.tencent.com/developer/article/1078314

  • http://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/kadm5_acl.html
