Submitting Spark Jobs with Kerberos Cross-Realm Trust Between Two Clusters

Background

Spark submits a job to YARN on cluster 1, and the job runs in cluster 1's YARN containers. The job writes its output to HDFS on cluster 2. Kerberos cross-realm trust has been set up between cluster 1 and cluster 2.
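As a concrete illustration, a minimal sketch of such a submission is shown below. All hosts, principals, and paths are hypothetical placeholders. On Spark 2.4, spark.yarn.access.hadoopFileSystems lists the extra filesystems (cluster 2's HDFS here) for which Spark must fetch delegation tokens (spark.yarn.access.namenodes in older releases):

    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --principal etl_user@CLUSTER1.HADOOP \
      --keytab /path/to/etl_user.keytab \
      --conf spark.yarn.access.hadoopFileSystems=hdfs://cluster2-nn:8020 \
      --class com.example.WriteToCluster2 \
      my-job.jar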

Exception

Exception in thread "main" java.io.IOException: Failed on local exception: java.io.IOException: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: hdfs/****.host@****.HADOOP; Host Details : local host is: "****.host/****"; destination host is: "****.host":8020; 
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
    at org.apache.hadoop.ipc.Client.call(Client.java:1474)
    at org.apache.hadoop.ipc.Client.call(Client.java:1401)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy9.getDelegationToken(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDelegationToken(ClientNamenodeProtocolTranslatorPB.java:909)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy10.getDelegationToken(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getDelegationToken(DFSClient.java:1018)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationToken(DistributedFileSystem.java:1355)
    at org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:529)
    at org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:507)
    at org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2041)
    at org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider$$anonfun$org$apache$spark$deploy$security$HadoopFSDelegationTokenProvider$$fetchDelegationTokens$1.apply(HadoopFSDelegationTokenProvider.scala:98)
    at org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider$$anonfun$org$apache$spark$deploy$security$HadoopFSDelegationTokenProvider$$fetchDelegationTokens$1.apply(HadoopFSDelegationTokenProvider.scala:96)
    at scala.collection.immutable.Set$Set2.foreach(Set.scala:128)
    at org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider.org$apache$spark$deploy$security$HadoopFSDelegationTokenProvider$$fetchDelegationTokens(HadoopFSDelegationTokenProvider.scala:96)
    at org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider.obtainDelegationTokens(HadoopFSDelegationTokenProvider.scala:49)
    at org.apache.spark.deploy.security.HadoopDelegationTokenManager$$anonfun$obtainDelegationTokens$2.apply(HadoopDelegationTokenManager.scala:132)
    at org.apache.spark.deploy.security.HadoopDelegationTokenManager$$anonfun$obtainDelegationTokens$2.apply(HadoopDelegationTokenManager.scala:130)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
    at scala.collection.Iterator$class.foreach(Iterator.scala:891)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
    at scala.collection.MapLike$DefaultValuesIterable.foreach(MapLike.scala:206)
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
    at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
    at org.apache.spark.deploy.security.HadoopDelegationTokenManager.obtainDelegationTokens(HadoopDelegationTokenManager.scala:130)
    at org.apache.spark.deploy.yarn.security.YARNHadoopDelegationTokenManager.obtainDelegationTokens(YARNHadoopDelegationTokenManager.scala:59)
    at org.apache.spark.deploy.yarn.security.AMCredentialRenewer$$anon$4.run(AMCredentialRenewer.scala:168)
    at org.apache.spark.deploy.yarn.security.AMCredentialRenewer$$anon$4.run(AMCredentialRenewer.scala:165)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
    at org.apache.spark.deploy.yarn.security.AMCredentialRenewer.obtainTokensAndScheduleRenewal(AMCredentialRenewer.scala:165)
    at org.apache.spark.deploy.yarn.security.AMCredentialRenewer.org$apache$spark$deploy$yarn$security$AMCredentialRenewer$$startInternal(AMCredentialRenewer.scala:108)
    at org.apache.spark.deploy.yarn.security.AMCredentialRenewer$$anon$2.run(AMCredentialRenewer.scala:91)
    at org.apache.spark.deploy.yarn.security.AMCredentialRenewer$$anon$2.run(AMCredentialRenewer.scala:89)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
    at org.apache.spark.deploy.yarn.security.AMCredentialRenewer.start(AMCredentialRenewer.scala:89)
    at org.apache.spark.deploy.yarn.ApplicationMaster.<init>(ApplicationMaster.scala:113)
    at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:803)
    at org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)
Caused by: java.io.IOException: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: hdfs/****.host@****.HADOOP
    at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:682)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
    at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:645)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:732)
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1523)
    at org.apache.hadoop.ipc.Client.call(Client.java:1440)
    ... 48 more
Caused by: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: hdfs/****.host@****.HADOOP
    at org.apache.hadoop.security.SaslRpcClient.getServerPrincipal(SaslRpcClient.java:334)
    at org.apache.hadoop.security.SaslRpcClient.createSaslClient(SaslRpcClient.java:231)
    at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:159)
    at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
    at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:555)
    at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:370)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:724)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:720)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:719)
    ... 51 more

Solution

  • Understand how spark-submit works: this article is a classic and well worth reading: https://www.jianshu.com/p/ae5a3f39a9af
  • Understand the configuration options for spark-submit to YARN with Kerberos: see the official docs, http://spark.apache.org/docs/latest/running-on-yarn.html#kerberos
  • Make sure core-site.xml, hdfs-site.xml, and yarn-site.xml are the latest, most complete versions (on a cluster managed by Cloudera Manager, the *-site.xml files can differ from node to node).
  • Raise the log level: for spark-2.4.5-bin-hadoop2.6, add the following settings to /spark-2.4.5-bin-hadoop2.6/conf/log4j.properties to see the Kerberos authentication logs.
log4j.logger.org.apache.spark.deploy.security=DEBUG
log4j.logger.org.apache.hadoop.security=DEBUG
log4j.logger.org.apache.spark.deploy.yarn.Client=DEBUG
  • Read the source at the point of failure: the exception is thrown from SaslRpcClient; walking through that code reveals the cause (a paraphrase of the relevant check appears after this list).
  • This exception has many possible causes, so here I only list the one that fixed it in my case: the following hdfs-site.xml property.

    <property>
      <name>dfs.namenode.kerberos.principal.pattern</name>
      <value>*</value>
    </property>

This property was misconfigured in Cloudera Manager; the correct place to set it is the "HDFS Client Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml".
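To see why this property matters, look at where the exception is thrown. Below is a simplified paraphrase (not the verbatim source; details vary by Hadoop version) of the check in org.apache.hadoop.security.SaslRpcClient.getServerPrincipal: the client reconstructs the principal the server advertised and accepts it if it matches the glob in dfs.namenode.kerberos.principal.pattern, or, when no pattern is set, if it exactly equals the principal configured locally. With cross-realm trust, cluster 2's NameNode advertises a principal in cluster 2's realm, so the strict equality check against cluster 1's local configuration fails; a pattern of * tells the client to accept any advertised principal.

    // Simplified paraphrase of SaslRpcClient.getServerPrincipal (Hadoop 2.x),
    // not the verbatim source.
    String serverPrincipal = new KerberosPrincipal(
        authType.getProtocol() + "/" + authType.getServerId(),
        KerberosPrincipal.KRB_NT_SRV_HST).getName();
    boolean isPrincipalValid;
    // serverKey is e.g. dfs.namenode.kerberos.principal, so the pattern key
    // becomes dfs.namenode.kerberos.principal.pattern
    String serverKeyPattern = conf.get(serverKey + ".pattern");
    if (serverKeyPattern != null && !serverKeyPattern.isEmpty()) {
      // a glob such as "*" accepts whatever principal the server advertises
      isPrincipalValid = GlobPattern.compile(serverKeyPattern)
          .matcher(serverPrincipal).matches();
    } else {
      // no pattern: the advertised principal must equal the locally configured
      // one, which fails for a NameNode in the other cluster's realm
      String confPrincipal = SecurityUtil.getServerPrincipal(
          conf.get(serverKey), serverAddr.getAddress());
      isPrincipalValid = serverPrincipal.equals(confPrincipal);
    }
    if (!isPrincipalValid) {
      throw new IllegalArgumentException(
          "Server has invalid Kerberos principal: " + serverPrincipal);
    }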

Reference: https://www.jianshu.com/p/d148af2bda64
