Today I tried setting up Kerberos on a Hadoop 2.x development cluster and ran into a few problems; here are my notes.

Setting up Hadoop security
core-site.xml
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>

hadoop.security.authentication defaults to simple, i.e. authentication based on the Linux OS account: the client just runs whoami and passes the result to the server in the RPC call, so a malicious user on another host can easily forge the same account. Here we switch it to kerberos.
hdfs-site.xml

<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.https.enable</name>
  <value>false</value>
</property>
<property>
  <name>dfs.namenode.https-address</name>
  <value>dev80.hadoop:50470</value>
</property>
<property>
  <name>dfs.https.port</name>
  <value>50470</value>
</property>
<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/etc/hadoop.keytab</value>
</property>
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>hadoop/[email protected]</value>
</property>
<property>
  <name>dfs.namenode.kerberos.https.principal</name>
  <value>host/[email protected]</value>
</property>
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>dev80.hadoop:50090</value>
</property>
<property>
  <name>dfs.namenode.secondary.https-port</name>
  <value>50470</value>
</property>
<property>
  <name>dfs.namenode.secondary.keytab.file</name>
  <value>/etc/hadoop.keytab</value>
</property>
<property>
  <name>dfs.namenode.secondary.kerberos.principal</name>
  <value>hadoop/[email protected]</value>
</property>
<property>
  <name>dfs.namenode.secondary.kerberos.https.principal</name>
  <value>host/[email protected]</value>
</property>
<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>700</value>
</property>
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:1003</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:1007</value>
</property>
<property>
  <name>dfs.datanode.https.address</name>
  <value>0.0.0.0:1005</value>
</property>
<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/etc/hadoop.keytab</value>
</property>
<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>hadoop/[email protected]</value>
</property>
<property>
  <name>dfs.datanode.kerberos.https.principal</name>
  <value>host/[email protected]</value>
</property>
<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>HTTP/[email protected]</value>
</property>
<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/etc/hadoop.keytab</value>
  <description>
    The Kerberos keytab file with the credentials for the HTTP Kerberos principal used by Hadoop-Auth in the HTTP endpoint.
  </description>
</property>
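For reference, a minimal sketch of how the per-host principals and the shared keytab referenced above could be created with MIT Kerberos kadmin.local; the realm EXAMPLE.COM is a placeholder (the real realm is not shown in these notes) and only dev80.hadoop is listed:

# create the hadoop/, host/ and HTTP/ principals for one node and export them to the keytab
kadmin.local -q "addprinc -randkey hadoop/dev80.hadoop@EXAMPLE.COM"
kadmin.local -q "addprinc -randkey host/dev80.hadoop@EXAMPLE.COM"
kadmin.local -q "addprinc -randkey HTTP/dev80.hadoop@EXAMPLE.COM"
kadmin.local -q "ktadd -k /etc/hadoop.keytab hadoop/dev80.hadoop host/dev80.hadoop HTTP/dev80.hadoop"

Repeat for each node and distribute /etc/hadoop.keytab to every host.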
Secure datanodes are started through jsvc, so build it first:

wget http://mirror.esocc.com/apache//commons/daemon/binaries/commons-daemon-1.0.15-bin.tar.gz
cd src/native/unix; ./configure; make

This produces a 64-bit jsvc executable. Copy it to $HADOOP_HOME/libexec and point JSVC_HOME at that path in hadoop-env.sh, otherwise you get the error "It looks like you're trying to start a secure DN, but $JSVC_HOME isn't set. Falling back to starting insecure DN."
[hadoop@dev80 unix]$ file jsvc
jsvc: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, not stripped
In hadoop-env.sh:

# The jsvc implementation to use. Jsvc is required to run secure datanodes.
export JSVC_HOME=/usr/local/hadoop/hadoop-2.1.0-beta/libexec

# On secure datanodes, user to run the datanode as after dropping privileges
export HADOOP_SECURE_DN_USER=hadoop

# The directory where pid files are stored. /tmp by default
export HADOOP_SECURE_DN_PID_DIR=/usr/local/hadoop

# Where log files are stored in the secure data environment.
export HADOOP_SECURE_DN_LOG_DIR=/data/logs
exec "$JSVC" \ -Dproc_$COMMAND -outfile "$JSVC_OUTFILE" \ -errfile "$JSVC_ERRFILE" \ -pidfile "$HADOOP_SECURE_DN_PID" \ -nodetach \ -user "$HADOOP_SECURE_DN_USER" \ -cp "$CLASSPATH" \ $JAVA_HEAP_MAX $HADOOP_OPTS \ org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter "$@"如果启动过程中有什么问题可以查看 $JSVC_OUTFILE(默认是$HADOOP_LOG_DIR/jsvc.out) 和 $JSVC_ERRFILE(默认是$HADOOP_LOG_DIR/jsvc.err) 信息来排错
yarn-site.xml

<property>
  <name>yarn.resourcemanager.keytab</name>
  <value>/etc/hadoop.keytab</value>
</property>
<property>
  <name>yarn.resourcemanager.principal</name>
  <value>hadoop/[email protected]</value>
</property>
<property>
  <name>yarn.nodemanager.keytab</name>
  <value>/etc/hadoop.keytab</value>
</property>
<property>
  <name>yarn.nodemanager.principal</name>
  <value>hadoop/[email protected]</value>
</property>
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.group</name>
  <value>hadoop</value>
</property>
Starting the NodeManager with the LinuxContainerExecutor then failed with:

Caused by: org.apache.hadoop.util.Shell$ExitCodeException: File /usr/local/hadoop/hadoop-2.1.0-beta/etc/hadoop must be owned by root, but is owned by 500
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:458)
        at org.apache.hadoop.util.Shell.run(Shell.java:373)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:578)
        at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.init(LinuxContainerExecutor.java:147)
[root@dev80 bin]# strings container-executor | grep etc
../etc/hadoop/container-executor.cfg

This shows that by default it loads $HADOOP_HOME/etc/hadoop/container-executor.cfg (a path relative to the binary), which is why the whole $HADOOP_HOME/etc/hadoop directory would have to be owned by root.
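The compiled-in location comes from the container-executor.conf.dir property set when the native code is built, so the binary here was evidently rebuilt with that property pointed at /etc (compare the strings output below). A rough sketch of such a rebuild, assuming the standard Hadoop 2.x source layout (module path and profile name should be checked against your source tree):

# rebuild the native container-executor with /etc baked in as its config directory
cd hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
mvn package -Pnative -DskipTests -Dcontainer-executor.conf.dir=/etc
# then replace $HADOOP_HOME/bin/container-executor with the freshly built binary (under target/native/)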
[hadoop@dev80 bin]$ strings container-executor | grep etc
/etc/container-executor.cfg
The contents of /etc/container-executor.cfg:

yarn.nodemanager.linux-container-executor.group=hadoop
min.user.id=499

min.user.id is the minimum uid allowed to launch containers; if a task is started by a uid below this value it will fail. On CentOS/RHEL, regular user accounts start at uid 500.
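A quick sanity check of a submitting account against min.user.id (the hadoop account here has uid 500, so it clears the threshold):

# the uid must be >= min.user.id (499 here) or container launch fails
id -u hadoop    # prints 500 on this box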
chown root:hadoop container-executor /etc/container-executor.cfg
chmod 4750 container-executor
chmod 400 /etc/container-executor.cfg

Sync the configuration files to the whole cluster, then start the ResourceManager and NodeManager as the hadoop user.
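Before restarting, ownership and mode can be double-checked; the expected output below is a sketch based on the chown/chmod commands above:

ls -l container-executor /etc/container-executor.cfg
# roughly:
# -rwsr-x--- 1 root hadoop ... container-executor
# -r-------- 1 root hadoop ... /etc/container-executor.cfg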
mapred-site.xml

<property>
  <name>mapreduce.jobhistory.keytab</name>
  <value>/etc/hadoop.keytab</value>
</property>
<property>
  <name>mapreduce.jobhistory.principal</name>
  <value>hadoop/[email protected]</value>
</property>
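With the keytab and principal configured, the JobHistory server can be started as usual and will log in from the keytab itself; a minimal sketch using the standard Hadoop 2.x sbin script:

# start the MapReduce JobHistory server as the hadoop user
/usr/local/hadoop/hadoop-2.1.0-beta/sbin/mr-jobhistory-daemon.sh start historyserver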
On the client side, obtain a TGT with kinit before running any hadoop commands:

[hadoop@dev80 hadoop]$ kinit -r 24h -k -t /home/hadoop/.keytab hadoop
[hadoop@dev80 hadoop]$ klist
Ticket cache: FILE:/tmp/krb5cc_500
Default principal: [email protected]

Valid starting     Expires            Service principal
09/11/13 15:25:34  09/12/13 15:25:34  krbtgt/[email protected]
        renew until 09/12/13 15:25:34

/tmp/krb5cc_500 is the Kerberos ticket cache. By default a file named "krb5cc_" plus the uid is created under /tmp; here 500 is the uid of the hadoop account.
[hadoop@dev80 hadoop]$ getent passwd hadoop
hadoop:x:500:500::/home/hadoop:/bin/bash

You can also specify the ticket cache path explicitly by exporting KRB5CCNAME=/tmp/krb5cc_500 in the environment.
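For example, another shell session can reuse the same cache this way (hadoop fs -ls / is just an illustrative command):

# point this session at the existing ticket cache and use it
export KRB5CCNAME=/tmp/krb5cc_500
klist              # should show the same hadoop TGT as above
hadoop fs -ls /    # HDFS calls now authenticate with that ticket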
Running kdestroy removes the cache:

[hadoop@dev80 hadoop]$ kdestroy
[hadoop@dev80 hadoop]$ klist
klist: No credentials cache found (ticket cache FILE:/tmp/krb5cc_500)

After that, any hadoop command fails because there is no TGT:
13/09/11 16:21:35 ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
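The fix is simply to obtain a ticket again, just like the kinit above, and retry (hadoop fs -ls / is only an example command):

# re-acquire a TGT from the keytab, then retry
kinit -k -t /home/hadoop/.keytab hadoop
hadoop fs -ls /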
For reference, the service keytab /etc/hadoop.keytab contains the hadoop, host, and HTTP principals for every node:

[hadoop@dev80 hadoop]$ klist -k -t /etc/hadoop.keytab
Keytab name: WRFILE:/etc/hadoop.keytab
KVNO Timestamp         Principal
---- ----------------- --------------------------------------------------------
   1 06/17/12 22:01:24 hadoop/[email protected]
   1 06/17/12 22:01:24 hadoop/[email protected]
   1 06/17/12 22:01:24 hadoop/[email protected]
   1 06/17/12 22:01:24 hadoop/[email protected]
   1 06/17/12 22:01:24 hadoop/[email protected]
   1 06/17/12 22:01:24 hadoop/[email protected]
   1 06/17/12 22:01:24 host/[email protected]
   1 06/17/12 22:01:24 host/[email protected]
   1 06/17/12 22:01:24 host/[email protected]
   1 06/17/12 22:01:24 host/[email protected]
   1 06/17/12 22:01:24 host/[email protected]
   1 06/17/12 22:01:24 host/[email protected]
   1 06/17/12 22:01:24 HTTP/[email protected]
   1 06/17/12 22:01:24 HTTP/[email protected]
   1 06/17/12 22:01:24 HTTP/[email protected]
   1 06/17/12 22:01:24 HTTP/[email protected]
   1 06/17/12 22:01:24 HTTP/[email protected]
   1 06/17/12 22:01:24 HTTP/[email protected]
while the user keytab /home/hadoop/.keytab holds only the hadoop user principal:

[hadoop@dev80 hadoop]$ klist -k -t /home/hadoop/.keytab
Keytab name: WRFILE:/home/hadoop/.keytab
KVNO Timestamp         Principal
---- ----------------- --------------------------------------------------------
   1 04/11/12 13:56:29 [email protected]
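To verify a keytab without touching the default ticket cache, you can kinit into a throwaway cache (the cache path below is an arbitrary example):

# authenticate from the user keytab into a temporary cache, inspect it, then clean up
KRB5CCNAME=/tmp/krb5cc_keytab_test kinit -k -t /home/hadoop/.keytab hadoop
KRB5CCNAME=/tmp/krb5cc_keytab_test klist
kdestroy -c /tmp/krb5cc_keytab_test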