java.io.FileNotFoundException: File does not exist: hdfs://mycluster/home/sqoop-1.4.6/lib/commons-codec-1.4.jar

Running Sqoop on a Hadoop cluster failed with the following error:

16/04/28 06:21:41 ERROR tool.ImportTool: Encountered IOException running import job:
java.io.FileNotFoundException: File does not exist: hdfs://mycluster/home/sqoop-1.4.6/lib/commons-codec-1.4.jar
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1072)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:265)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:389)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
    at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:196)
    at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:169)
    at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:266)
    at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:673)
    at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:118)
    at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
    at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:236)

Notice this line in the error:
hdfs://mycluster/home/sqoop-1.4.6/lib/commons-codec-1.4.jar. The "hdfs://mycluster" prefix shows that commons-codec-1.4.jar is being looked up on HDFS, not on the local filesystem. Since job jars are resources distributed under the ResourceManager's control, the likely cause is an incorrect configuration in yarn-site.xml.
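To see why a local path ends up on HDFS at all: when a path has no URI scheme, Hadoop qualifies it against the default filesystem (fs.defaultFS from core-site.xml). A minimal pure-shell illustration of that qualification rule, with the default-FS value assumed to match this cluster (no Hadoop needed to run it):

```shell
# A path without a scheme gets the fs.defaultFS prefix, so a local
# Sqoop lib path is searched on HDFS instead of the local disk.
default_fs="hdfs://mycluster"                        # assumed core-site.xml value
lib="/home/sqoop-1.4.6/lib/commons-codec-1.4.jar"    # local jar path
case "$lib" in
  *://*) qualified="$lib" ;;                         # already fully qualified
  *)     qualified="${default_fs}${lib}" ;;          # gets the HDFS prefix
esac
echo "$qualified"
# → hdfs://mycluster/home/sqoop-1.4.6/lib/commons-codec-1.4.jar
```

This is exactly the URI that appears in the stack trace, which is why the jar is reported missing even though it exists locally.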

The fix is as follows:

1. Edit yarn-site.xml in the Hadoop configuration directory as shown below. The cluster id, hostnames, and ZooKeeper addresses must be adjusted to match your own cluster:



<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>

  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>

  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>mycluster</value>
  </property>

  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>

  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>node5</value>
  </property>

  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>node8</value>
  </property>

  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>node5,node6,node7</value>
  </property>
</configuration>



2. Edit mapred-site.xml as follows:

       
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>


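After editing, a quick grep can confirm the key properties are actually present in both files. The sketch below uses a throwaway directory so it runs anywhere; on a real cluster, point conf_dir at your own Hadoop configuration directory and skip the two printf lines that create stand-in files:

```shell
# Sanity check: confirm the properties this fix depends on are present.
conf_dir="/tmp/hadoop-conf-demo"   # stand-in; use your real config dir on a cluster
mkdir -p "$conf_dir"

# Stand-in files so the sketch is self-contained (skip on a real cluster).
printf '<configuration><property><name>yarn.resourcemanager.cluster-id</name><value>mycluster</value></property></configuration>\n' > "$conf_dir/yarn-site.xml"
printf '<configuration><property><name>mapreduce.framework.name</name><value>yarn</value></property></configuration>\n' > "$conf_dir/mapred-site.xml"

grep -q 'yarn.resourcemanager.cluster-id' "$conf_dir/yarn-site.xml" && echo "yarn-site OK"
grep -q 'mapreduce.framework.name' "$conf_dir/mapred-site.xml" && echo "mapred-site OK"
```

If either grep is silent, the corresponding file is missing the property and the Sqoop job will keep resolving paths incorrectly.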
After editing these two files, restart the Hadoop cluster and rerun the Sqoop command.
If this article helped you, please give it a like. Writing all this takes effort; the goal is to share knowledge, so let's encourage each other.
---------------------
Author: 富的只剩下代码
Source: CSDN
Original: https://blog.csdn.net/liyongke89/article/details/51276384
Copyright notice: This is the author's original article; please include a link to the original when reposting.
