For installing Kerberos itself, see my other article, "CDH配置Kerberos和Sentry详解" (Mumunu-, CSDN blog). Here we go straight into configuring Hadoop.
I. Add principals and generate keytabs
1. Once Kerberos is deployed, first add the principals that need authentication in the KDC. Which principals you need depends on your setup: the Hadoop cluster here runs under the hdfs account, so besides the hdfs principal you also need an HTTP principal (for SPNEGO); hive, hbase, and dwetl also access the cluster. Any other users can be added the same way, for example:
The principal format is: username/[email protected]
kadmin.local -q "addprinc -randkey HTTP/[email protected]"
Every node that joins the cluster needs its own principal (password) and keytab:
kadmin.local -q "addprinc -randkey HTTP/[email protected]"
kadmin.local -q "addprinc -randkey HTTP/[email protected]"
and so on for the remaining nodes (omitted below). Then add the per-service principals; a batch sketch follows the list.
kadmin.local -q "addprinc -randkey hive/[email protected]"
kadmin.local -q "addprinc -randkey hbase/[email protected]"
kadmin.local -q "addprinc -randkey hdfs/[email protected]"
kadmin.local -q "addprinc -randkey presto/[email protected]"
kadmin.local -q "addprinc -randkey dwetl/[email protected]"
2. Generate the keytabs, batched per user:
kadmin.local -q "xst -k /export/common/kerberos5/hdfs.keytab HTTP/[email protected]"
kadmin.local -q "xst -k /export/common/kerberos5/hdfs.keytab HTTP/[email protected]"
kadmin.local -q "xst -k /export/common/kerberos5/hdfs.keytab HTTP/[email protected]"
kadmin.local -q "xst -k /export/common/kerberos5/hive.keytab hive/[email protected]"
kadmin.local -q "xst -k /export/common/kerberos5/hbase.keytab hbase/[email protected]"
kadmin.local -q "xst -k /export/common/kerberos5/hdfs.keytab hdfs/[email protected]"
kadmin.local -q "xst -k /export/common/kerberos5/presto-server.keytab presto/[email protected]"
kadmin.local -q "xst -k /export/common/kerberos5/dwetl.keytab dwetl/[email protected]"
This generates each user's keytab file under /export/common/kerberos5. Then distribute the keytab files to every machine, including the slave KDC and the clients. (Note: since different users read the keytabs, give each keytab file the appropriate owner and permissions after distributing.)
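For example (a sketch; it assumes the daemons run as hdfs:hadoop and that a hive account exists — adjust the owners to your own accounts):

chown hdfs:hadoop /export/common/kerberos5/hdfs.keytab /export/common/kerberos5/HTTP.keytab
chown hive:hadoop /export/common/kerberos5/hive.keytab
chmod 440 /export/common/kerberos5/*.keytab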
A. Generate a key pair (for the HTTPS keystore used by the web UIs)
keytool is the key and certificate management tool that ships with Java; it lets users manage their own public/private key pairs and the associated certificates.
-keystore: name and location of the keystore (the generated material is stored in this file)
-genkey (or -genkeypair): generate a key pair
-alias: alias for the generated key pair (defaults to mykey if omitted)
-keyalg: key algorithm, RSA or DSA (defaults to DSA)
Generate the keystore, entering its password and the certificate details:
[root@hadoop102 ~]# keytool -keystore /etc/security/keytab/keystore -alias jetty -genkey -keyalg RSA
Enter keystore password:
Re-enter new password:
What is your first and last name?
  [Unknown]:
What is the name of your organizational unit?
  [Unknown]:
What is the name of your organization?
  [Unknown]:
What is the name of your City or Locality?
  [Unknown]:
What is the name of your State or Province?
  [Unknown]:
What is the two-letter country code for this unit?
  [Unknown]:
Is CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown correct?
  [no]:  y
Enter key password for <jetty>
        (RETURN if same as keystore password):
Re-enter new password:
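The same step can be scripted non-interactively (a sketch; the -dname values are placeholders, and the password 123456 matches the ssl-*.xml examples below — replace it in production):

keytool -genkeypair -keystore /etc/security/keytab/keystore -alias jetty -keyalg RSA \
  -storepass 123456 -keypass 123456 \
  -dname "CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown"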
B. Change the owner and permissions of the keystore file
[root@hadoop102 ~]# chown -R root:hadoop /etc/security/keytab/keystore
[root@hadoop102 ~]# chmod 660 /etc/security/keytab/keystore
Note: the keystore password must be at least 6 characters, and the accounts that start the HDFS/YARN daemons (group hadoop here) must be able to read the keystore file.
C. Distribute the keystore to the same path on every node in the cluster
[root@hadoop102 ~]# xsync /etc/security/keytab/keystore
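xsync here is the usual cluster rsync wrapper script, not a standard command; without it, a plain loop does the same job (a sketch assuming peer nodes named hadoop103 and hadoop104 — substitute your own host list):

for host in hadoop103 hadoop104; do
  rsync -av /etc/security/keytab/keystore ${host}:/etc/security/keytab/
done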
II. Modify the cluster configuration files
1. Add the following to the HDFS configuration
core-site.xml
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
hdfs-site.xml
<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/export/common/kerberos5/hdfs.keytab</value>
</property>
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>hdfs/[email protected]</value>
</property>
<property>
  <name>dfs.namenode.kerberos.internal.spnego.principal</name>
  <value>HTTP/[email protected]</value>
</property>
<property>
  <name>dfs.namenode.kerberos.internal.spnego.keytab</name>
  <value>/export/common/kerberos5/HTTP.keytab</value>
</property>
<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>HTTP/[email protected]</value>
</property>
<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/export/common/kerberos5/HTTP.keytab</value>
</property>
<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/export/common/kerberos5/hdfs.keytab</value>
</property>
<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>hdfs/[email protected]</value>
</property>
<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>
<property>
  <name>dfs.data.transfer.protection</name>
  <value>integrity</value>
</property>
<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>700</value>
</property>
<property>
  <name>dfs.journalnode.keytab.file</name>
  <value>/export/common/kerberos5/hdfs.keytab</value>
</property>
<property>
  <name>dfs.journalnode.kerberos.principal</name>
  <value>hdfs/[email protected]</value>
</property>
<property>
  <name>dfs.journalnode.kerberos.internal.spnego.principal</name>
  <value>HTTP/[email protected]</value>
</property>
<property>
  <name>dfs.journalnode.kerberos.internal.spnego.keytab</name>
  <value>/export/common/kerberos5/HTTP.keytab</value>
</property>
Two things worth noting: _HOST in a principal is expanded at runtime to each daemon's own hostname, so the same files can be shipped to every node; and enabling SASL on the data-transfer path via dfs.data.transfer.protection requires dfs.http.policy to be HTTPS_ONLY (this is what the keystore from step A is for), which lets DataNodes run on unprivileged ports.
hadoop-env.sh
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=${JAVA_HOME}/lib -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=HADOOP.COM -Djava.security.krb5.kdc=192.168.0.49:88"
ssl-server.xml (place it in the Hadoop configuration directory /export/common/hadoop/conf, owned by hdfs:hadoop)
<property>
  <name>ssl.server.truststore.location</name>
  <value>/etc/security/keytab/keystore</value>
  <description>Truststore to be used by NN and DN. Must be specified.</description>
</property>
<property>
  <name>ssl.server.truststore.password</name>
  <value>123456</value>
  <description>Optional. Default value is "".</description>
</property>
<property>
  <name>ssl.server.truststore.type</name>
  <value>jks</value>
  <description>Optional. The keystore file format, default value is "jks".</description>
</property>
<property>
  <name>ssl.server.truststore.reload.interval</name>
  <value>10000</value>
  <description>Truststore reload check interval, in milliseconds. Default value is 10000 (10 seconds).</description>
</property>
<property>
  <name>ssl.server.keystore.location</name>
  <value>/etc/security/keytab/keystore</value>
  <description>Keystore to be used by NN and DN. Must be specified.</description>
</property>
<property>
  <name>ssl.server.keystore.password</name>
  <value>123456</value>
  <description>Must be specified.</description>
</property>
<property>
  <name>ssl.server.keystore.keypassword</name>
  <value>123456</value>
  <description>Must be specified.</description>
</property>
<property>
  <name>ssl.server.keystore.type</name>
  <value>jks</value>
  <description>Optional. The keystore file format, default value is "jks".</description>
</property>
<property>
  <name>ssl.server.exclude.cipher.list</name>
  <value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,
  SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,
  SSL_RSA_EXPORT_WITH_RC4_40_MD5,SSL_RSA_EXPORT_WITH_DES40_CBC_SHA,
  SSL_RSA_WITH_RC4_128_MD5</value>
  <description>Optional. The weak security cipher suites that you want excluded from SSL communication.</description>
</property>
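Before restarting anything, you can sanity-check that the keystore is readable with the password you configured:

keytool -list -keystore /etc/security/keytab/keystore -storepass 123456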
ssl-client.xml (also placed in the Hadoop configuration directory /export/common/hadoop/conf, owned by hdfs:hadoop)
<property>
  <name>ssl.client.truststore.location</name>
  <value>/etc/security/keytab/keystore</value>
  <description>Truststore to be used by clients like distcp. Must be specified.</description>
</property>
<property>
  <name>ssl.client.truststore.password</name>
  <value>123456</value>
  <description>Optional. Default value is "".</description>
</property>
<property>
  <name>ssl.client.truststore.type</name>
  <value>jks</value>
  <description>Optional. The keystore file format, default value is "jks".</description>
</property>
<property>
  <name>ssl.client.truststore.reload.interval</name>
  <value>10000</value>
  <description>Truststore reload check interval, in milliseconds. Default value is 10000 (10 seconds).</description>
</property>
<property>
  <name>ssl.client.keystore.location</name>
  <value>/etc/security/keytab/keystore</value>
  <description>Keystore to be used by clients like distcp. Must be specified.</description>
</property>
<property>
  <name>ssl.client.keystore.password</name>
  <value>123456</value>
  <description>Optional. Default value is "".</description>
</property>
<property>
  <name>ssl.client.keystore.keypassword</name>
  <value>123456</value>
  <description>Optional. Default value is "".</description>
</property>
<property>
  <name>ssl.client.keystore.type</name>
  <value>jks</value>
  <description>Optional. The keystore file format, default value is "jks".</description>
</property>
2. Add the following to the YARN configuration
yarn-site.xml
<property>
  <name>yarn.web-proxy.principal</name>
  <value>HTTP/[email protected]</value>
</property>
<property>
  <name>yarn.web-proxy.keytab</name>
  <value>/export/common/kerberos5/HTTP.keytab</value>
</property>
<property>
  <name>yarn.resourcemanager.principal</name>
  <value>hdfs/[email protected]</value>
</property>
<property>
  <name>yarn.resourcemanager.keytab</name>
  <value>/export/common/kerberos5/hdfs.keytab</value>
</property>
<property>
  <name>yarn.nodemanager.principal</name>
  <value>hdfs/[email protected]</value>
</property>
<property>
  <name>yarn.nodemanager.keytab</name>
  <value>/export/common/kerberos5/hdfs.keytab</value>
</property>
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.group</name>
  <value>hdfs</value>
</property>
<property>
  <name>yarn.timeline-service.http-authentication.type</name>
  <value>kerberos</value>
  <description>Defines authentication used for the timeline server HTTP endpoint. Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME#</description>
</property>
<property>
  <name>yarn.timeline-service.principal</name>
  <value>hdfs/[email protected]</value>
</property>
<property>
  <name>yarn.timeline-service.keytab</name>
  <value>/export/common/kerberos5/hdfs.keytab</value>
</property>
<property>
  <name>yarn.timeline-service.http-authentication.kerberos.principal</name>
  <value>HTTP/[email protected]</value>
</property>
<property>
  <name>yarn.timeline-service.http-authentication.kerberos.keytab</name>
  <value>/export/common/kerberos5/HTTP.keytab</value>
</property>
<property>
  <name>yarn.nodemanager.container-localizer.java.opts</name>
  <value>-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=HADOOP.COM -Djava.security.krb5.kdc=192.168.0.49:88</value>
</property>
<property>
  <name>yarn.nodemanager.health-checker.script.opts</name>
  <value>-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=HADOOP.COM -Djava.security.krb5.kdc=192.168.0.49:88</value>
</property>
mapred-site.xml
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1638M -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=HADOOP.COM -Djava.security.krb5.kdc=192.168.0.49:88</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx3276M -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=HADOOP.COM -Djava.security.krb5.kdc=192.168.0.49:88</value>
</property>
<property>
  <name>mapreduce.jobhistory.keytab</name>
  <value>/export/common/kerberos5/hdfs.keytab</value>
</property>
<property>
  <name>mapreduce.jobhistory.principal</name>
  <value>hdfs/[email protected]</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.spnego-keytab-file</name>
  <value>/export/common/kerberos5/HTTP.keytab</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.spnego-principal</name>
  <value>HTTP/[email protected]</value>
</property>
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=HADOOP.COM -Djava.security.krb5.kdc=192.168.0.49:88</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.command-opts</name>
  <value>-Xmx3276m -Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.krb5.realm=HADOOP.COM -Djava.security.krb5.kdc=192.168.0.49:88</value>
</property>
3. Add the following to the Hive configuration
hive-site.xml
<property>
  <name>hive.server2.authentication</name>
  <value>KERBEROS</value>
</property>
<property>
  <name>hive.server2.authentication.kerberos.principal</name>
  <value>hdfs/[email protected]</value>
</property>
<property>
  <name>hive.server2.authentication.kerberos.keytab</name>
  <value>/export/common/kerberos5/hdfs.keytab</value>
</property>
<property>
  <name>hive.metastore.sasl.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hive.metastore.kerberos.keytab.file</name>
  <value>/export/common/kerberos5/hdfs.keytab</value>
</property>
<property>
  <name>hive.metastore.kerberos.principal</name>
  <value>hdfs/[email protected]</value>
</property>
4. Add the following to the HBase configuration
hbase-site.xml
<property>
  <name>hbase.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hbase.rpc.engine</name>
  <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
</property>
<property>
  <name>hbase.master.kerberos.principal</name>
  <value>hdfs/[email protected]</value>
</property>
<property>
  <name>hbase.master.keytab.file</name>
  <value>/export/common/kerberos5/hdfs.keytab</value>
</property>
<property>
  <name>hbase.regionserver.kerberos.principal</name>
  <value>hdfs/[email protected]</value>
</property>
<property>
  <name>hbase.regionserver.keytab.file</name>
  <value>/export/common/kerberos5/hdfs.keytab</value>
</property>
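A quick smoke test after restarting HBase (a sketch; hadoop1.com is the node you run it from):

kinit -kt /export/common/kerberos5/hdfs.keytab hdfs/[email protected]
echo "status" | hbase shell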
III. Common Kerberos commands
Destroy the current credentials (log out): kdestroy
Open the kadmin console on the master KDC: kadmin.local
Show the Kerberos credentials the current session holds: klist
Obtain credentials from a keytab:
kinit -kt /export/common/kerberos5/kadm5.keytab admin/[email protected]
Inspect the contents of a keytab:
klist -k -e /export/common/kerberos5/hdfs.keytab
Generate a keytab file:
kadmin.local -q "xst -k /export/common/kerberos5/hdfs.keytab admin/[email protected]"
Renew the current ticket (extend its lifetime): kinit -R
Delete the KDC database: rm -rf /export/common/kerberos5/principal (this is the database path created by kdb5_util create)
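A typical end-to-end session with these commands:

kinit -kt /export/common/kerberos5/hdfs.keytab hdfs/[email protected]
klist        # shows the ticket just obtained
kdestroy     # discard it again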
IV. Quick test
Test HDFS: switch to the hdfs user and run hdfs dfs -ls /; it will now demand authentication. Then run
kinit -kt /export/common/kerberos5/hdfs.keytab hdfs/`hostname | awk '{print tolower($0)}'`
and repeat the listing; if it returns results, the HDFS integration is working.
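A similar quick test for YARN/MapReduce (a sketch, assuming the standard examples jar that ships with your Hadoop distribution):

kinit -kt /export/common/kerberos5/hdfs.keytab hdfs/`hostname | awk '{print tolower($0)}'`
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10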
Also note some significant changes in behavior: 1. Tasks can now be launched under the operating-system account of the user who submitted the job, rather than under the account running the NodeManager. This means the operating system itself can isolate running tasks so that they cannot send signals to one another, and local data such as intermediate task output is protected by ordinary local file-system permissions.
(This requires setting yarn.nodemanager.container-executor.class to org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor, as configured above; see the sketch below.)
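The LinuxContainerExecutor additionally requires a container-executor.cfg next to the container-executor binary on every NodeManager. A minimal sketch (the group matches yarn.nodemanager.linux-container-executor.group above; the user lists are assumptions to adapt):

yarn.nodemanager.linux-container-executor.group=hdfs
banned.users=root
min.user.id=1000
allowed.system.users=hdfs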