Hadoop: java.io.FileNotFoundException: File does not exist

Reposted from: http://blog.163.com/silver9886@126/blog/static/35971862201441134010403/

1. When running an M/R program through Hadoop's Eclipse plugin, you will sometimes get:
Exception in thread "main" java.lang.IllegalArgumentException: Pathname /D:/hadoop/hadoop-2.2.0/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar from hdfs://uat84:49100/D:/hadoop/hadoop-2.2.0/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar is not a valid DFS filename
The error reported on the server side is:
[ 2014-05-11 19:09:40,019] ERROR [main] (UserGroupInformation.java:1494) org.apache.hadoop.security.UserGroupInformation - PriviledgedActionException as:hadoop (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: hdfs://uat84:49100/usr/local/bigdata/hbase/lib/hadoop-common-2.2.0.jar
Exception in thread "main" java.io.FileNotFoundException: File does not exist: hdfs://uat84:49100/usr/local/bigdata/hbase/lib/hadoop-common-2.2.0.jar
        at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1110)
        at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
        at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
        at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheMa
What kind of error message is that supposed to be?! Only by comparing against someone else's working program, line by line, did I find the cause was these lines:
Configuration conf = new Configuration();
conf.set("fs.default.name", "hdfs://uat84:49100");
What does this mean? If you are running locally, i.e. without pulling in the mapred-site, yarn-site, or core-site configuration files, then do not set this property either. You are running the M/R program locally (fs.default.name defaults to file:///, the local file system), yet this line tells Hadoop to fetch the required jars from HDFS, so of course the error above appears. For a local run, simply removing that line fixes it.
Conversely, if you submit to the cluster and pull in mapred-site and yarn-site, but neither include core-site nor set fs.default.name, then Hadoop does not know the namenode's address and cannot upload job.jar to the cluster, so it fails like this:
[ 2014-05-13 16:35:03,625] INFO [main] (Job.java:1358) org.apache.hadoop.mapreduce.Job - Job job_1397132528617_2814 failed with state FAILED due to: Application application_1397132528617_2814 failed 2 times due to AM Container for appattempt_1397132528617_2814_000002 exited with  exitCode: -1000 due to: File file:/tmp/hadoop-yarn/staging/hadoop/.staging/job_1397132528617_2814/job.jar does not exist
.Failing this attempt.. Failing the application.
Believe it or not, all we have to do is tell Hadoop the namenode's address. Including core-site.xml and setting fs.default.name are equivalent.
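For reference, a minimal sketch of the core-site.xml entry that does the same thing as the conf.set call above (the host and port are this cluster's namenode, taken from the logs; in Hadoop 2.x the property fs.default.name is a deprecated alias of fs.defaultFS, and either name works):

```xml
<!-- core-site.xml: tells Hadoop clients where the namenode is,
     so job.jar and dependencies can be staged on HDFS. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://uat84:49100</value>
  </property>
</configuration>
```

Put this file on the client's classpath when submitting to the cluster, and leave both the file and the conf.set call out when running locally.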

2. When running the HDFS-to-HBase job, the following error came up:
java.lang.RuntimeException: java.lang.NoSuchMethodException: CopyOfHive2Hbase$Redcuer.&lt;init&gt;()
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:629)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:405)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:445)
Caused by: java.lang.NoSuchMethodException: CopyOfHive2Hbase$Redcuer.&lt;init&gt;()
at java.lang.Class.getConstructor0(Class.java:2715)
at java.lang.Class.getDeclaredConstructor(Class.java:1987)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:125)
... 3 more
Seriously, who could look at just this trace and tell what is wrong with the Reducer?! Unbelievable!
Again by comparing with correct code line by line, I discovered it was because my Reducer was not declared static!!!! (To be precise, it is the nested Reducer class, not the reduce method, that must be static: Hadoop instantiates the Reducer by reflection, which requires a no-arg constructor, and a non-static inner class does not have one.) I could have died.
public static class Redcuer extends TableReducer&lt;Text, Text, NullWritable&gt; {
    private String[] columname;

    public void reduce(Text key, Iterable&lt;Text&gt; values, Context context)
            throws IOException, InterruptedException {
        System.out.println("!!!!!!!!!!!!!!!!!!!reduce!!!!!!!" + key.toString());
        Put put = new Put(Bytes.toBytes("test1234"));
        put.add(Bytes.toBytes("fc"), Bytes.toBytes("1"), Bytes.toBytes("2"));
        context.write(NullWritable.get(), put);
    }
}
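The root cause is easy to reproduce with plain Java reflection, outside of Hadoop entirely. The sketch below (class names are made up for illustration) shows that a nested class declared without static has no no-arg constructor, which is exactly the lookup Hadoop's ReflectionUtils.newInstance performs when it fails with NoSuchMethodException:

```java
// Demonstrates why Hadoop's ReflectionUtils fails on a non-static inner
// Reducer class: only a *static* nested class has a no-arg constructor.
public class ReducerCtorDemo {
    // Like declaring "public class Redcuer extends TableReducer" without 'static':
    class NonStaticReducer {}
    // What Hadoop's reflection needs:
    static class StaticReducer {}

    // Performs the same lookup as ReflectionUtils.newInstance: does the
    // class expose a no-arg constructor?
    static boolean hasNoArgCtor(Class<?> clazz) {
        try {
            clazz.getDeclaredConstructor();
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The compiler gives the inner class a hidden constructor taking the
        // enclosing ReducerCtorDemo instance, so the no-arg lookup fails.
        System.out.println("non-static has no-arg ctor: "
                + hasNoArgCtor(NonStaticReducer.class));   // false
        System.out.println("static has no-arg ctor: "
                + hasNoArgCtor(StaticReducer.class));      // true
    }
}
```

Adding the static keyword to the nested Reducer (or moving it into its own top-level class) is all the fix requires.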
