java.io.FileNotFoundException: File does not exist: hdfs://mycluster/home/sqoop-1.4.6/lib/commons-

Running Sqoop on a Hadoop cluster fails with the following error:

16/04/28 06:21:41 ERROR tool.ImportTool: Encountered IOException running import job:
java.io.FileNotFoundException: File does not exist: hdfs://mycluster/home/sqoop-1.4.6/lib/commons-codec-1.4.jar
	at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1072)
	at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
	at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
	at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
	at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
	at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
	at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:265)
	at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:389)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
	at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:196)
	at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:169)
	at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:266)
	at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:673)
	at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:118)
	at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
	at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
	at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
	at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
	at org.apache.sqoop.Sqoop.main(Sqoop.java:236)

The key line in the error is:

hdfs://mycluster/home/sqoop-1.4.6/lib/commons-codec-1.4.jar

The hdfs://mycluster prefix shows that commons-codec-1.4.jar is being looked up on the HDFS cluster rather than on the local filesystem. Since job resources such as this jar are staged and managed through the ResourceManager, the likely cause is an incorrect yarn-site.xml configuration.
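A quick way to see the clue: the URI scheme at the front of the reported path tells you which filesystem the job client resolved the Sqoop lib jar against (an illustrative check, using the path from the error above):

```shell
# Extract the URI scheme from the path reported in the FileNotFoundException.
ERRPATH='hdfs://mycluster/home/sqoop-1.4.6/lib/commons-codec-1.4.jar'
SCHEME="${ERRPATH%%://*}"   # strip everything from "://" onward
echo "$SCHEME"              # prints "hdfs": the lookup went to the cluster, not file://
```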

The fix is as follows:

1. Edit yarn-site.xml in the Hadoop configuration directory as below. The cluster-id, ResourceManager hostnames, and ZooKeeper addresses (mycluster, node5, node8, node5,node6,node7) must be adjusted to match your own cluster:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>node5</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>node8</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>node5,node6,node7</value>
  </property>
</configuration>
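After editing, it is worth sanity-checking that the HA-related properties actually made it into the file. A minimal sketch (the default conf path and the property list are assumptions; point CONF at your own configuration directory):

```shell
# Check that each required HA property appears in yarn-site.xml.
# HADOOP_CONF_DIR / the fallback path are assumptions for this sketch.
CONF="${HADOOP_CONF_DIR:-/etc/hadoop/conf}/yarn-site.xml"
for prop in yarn.resourcemanager.ha.enabled \
            yarn.resourcemanager.cluster-id \
            yarn.resourcemanager.ha.rm-ids \
            yarn.resourcemanager.zk-address; do
  if grep -q "<name>$prop</name>" "$CONF" 2>/dev/null; then
    echo "OK: $prop"
  else
    echo "MISSING: $prop"
  fi
done
```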
2. Edit mapred-site.xml so that MapReduce jobs are submitted to YARN:

<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
         </property>
</configuration>


After updating both files, restart the Hadoop cluster and rerun the sqoop command.
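The restart-and-rerun step might look like the following. This is a sketch: the HADOOP_HOME default, hostnames, database name, and table are illustrative placeholders, not values from the original cluster.

```shell
# Restart YARN so the new yarn-site.xml / mapred-site.xml take effect.
# HADOOP_HOME default below is an assumption; set it for your install.
HADOOP_HOME="${HADOOP_HOME:-/home/hadoop}"
if [ -x "$HADOOP_HOME/sbin/stop-yarn.sh" ]; then
  "$HADOOP_HOME/sbin/stop-yarn.sh"
  "$HADOOP_HOME/sbin/start-yarn.sh"
fi

# Then rerun the import, e.g. (placeholder connection details):
# sqoop import --connect jdbc:mysql://node1:3306/testdb \
#              --username root -P --table my_table
```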






