Running a Hadoop job: the result file size is 0


Running the Hadoop program from Eclipse prints:

12/03/01 09:22:31 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
12/03/01 09:22:31 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/03/01 09:22:31 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
12/03/01 09:22:32 INFO input.FileInputFormat: Total input paths to process : 1
12/03/01 09:22:32 INFO mapred.JobClient: Running job: job_local_0001
12/03/01 09:22:32 INFO input.FileInputFormat: Total input paths to process : 1
12/03/01 09:22:32 INFO mapred.MapTask: io.sort.mb = 100
12/03/01 09:22:32 INFO mapred.MapTask: data buffer = 79691776/99614720
12/03/01 09:22:32 INFO mapred.MapTask: record buffer = 262144/327680
12/03/01 09:22:32 INFO mapred.MapTask: Starting flush of map output
12/03/01 09:22:32 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
12/03/01 09:22:32 INFO mapred.LocalJobRunner:
12/03/01 09:22:32 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000000_0' done.
12/03/01 09:22:32 INFO mapred.LocalJobRunner:
12/03/01 09:22:32 INFO mapred.Merger: Merging 1 sorted segments
12/03/01 09:22:32 INFO mapred.Merger: Down to the last merge-pass, with 0 segments left of total size: 0 bytes
12/03/01 09:22:32 INFO mapred.LocalJobRunner:
12/03/01 09:22:32 INFO mapred.TaskRunner: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
12/03/01 09:22:32 INFO mapred.LocalJobRunner:
12/03/01 09:22:32 INFO mapred.TaskRunner: Task attempt_local_0001_r_000000_0 is allowed to commit now
12/03/01 09:22:32 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to hdfs://localhost:9000/usr/mjiang/output
12/03/01 09:22:32 INFO mapred.LocalJobRunner: reduce > reduce
12/03/01 09:22:32 INFO mapred.TaskRunner: Task 'attempt_local_0001_r_000000_0' done.
12/03/01 09:22:33 INFO mapred.JobClient:  map 100% reduce 100%
12/03/01 09:22:33 INFO mapred.JobClient: Job complete: job_local_0001
12/03/01 09:22:33 INFO mapred.JobClient: Counters: 12
12/03/01 09:22:33 INFO mapred.JobClient:   FileSystemCounters
12/03/01 09:22:33 INFO mapred.JobClient:     FILE_BYTES_READ=32992
12/03/01 09:22:33 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=66580
12/03/01 09:22:33 INFO mapred.JobClient:   Map-Reduce Framework
12/03/01 09:22:33 INFO mapred.JobClient:     Reduce input groups=0
12/03/01 09:22:33 INFO mapred.JobClient:     Combine output records=0
12/03/01 09:22:33 INFO mapred.JobClient:     Map input records=0
12/03/01 09:22:33 INFO mapred.JobClient:     Reduce shuffle bytes=0
12/03/01 09:22:33 INFO mapred.JobClient:     Reduce output records=0
12/03/01 09:22:33 INFO mapred.JobClient:     Spilled Records=0
12/03/01 09:22:33 INFO mapred.JobClient:     Map output bytes=0
12/03/01 09:22:33 INFO mapred.JobClient:     Combine input records=0
12/03/01 09:22:33 INFO mapred.JobClient:     Map output records=0
12/03/01 09:22:33 INFO mapred.JobClient:     Reduce input records=0
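
The job finishes with map 100% reduce 100%, yet every Map-Reduce Framework counter is 0 (Map input records=0, Reduce output records=0), which is consistent with the empty result file. A quick way to confirm this from the shell (the output path is taken from the log above; part-r-00000 is the default reduce output name for the new API and is an assumption here):

$ hadoop fs -ls /usr/mjiang/output                   # the result file is listed with size 0
$ hadoop fs -cat /usr/mjiang/output/part-r-00000     # prints nothing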

Checking the namenode log file /home/mjiang/hadoop-0.20.2/logs/hadoop-mjiang-namenode-venus.log turned up the real error:

2012-03-01 10:32:02,998 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9000, call addBlock(/tmp/hadoop-mjiang/mapred/system/jobtracker.info, DFSClient_-273794341) from 127.0.0.1:36549: error: java.io.IOException: File /tmp/hadoop-mjiang/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1

java.io.IOException: File /tmp/hadoop-mjiang/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1

A Google search for the error led to a forum thread: "Newbie asking for help: could only be replicated to 0 nodes, instead of 1".

I assumed it was a firewall problem and did not pay much attention to the advice in that thread:

This exception usually means the HDFS file system is in an inconsistent state. The suggested fix:
first stop Hadoop;
clear the files under the path configured as hadoop.tmp.dir (default: /tmp/hadoop-${user.name});
then run hadoop namenode -format;
finally restart Hadoop.
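
A minimal sketch of those four steps for a single-node 0.20.x setup (assuming the default hadoop.tmp.dir and that $HADOOP_HOME/bin is on the PATH; note that deleting hadoop.tmp.dir wipes all HDFS data, so only do this on a disposable cluster):

$ stop-all.sh                # stop all daemons (0.20.x script)
$ rm -rf /tmp/hadoop-$USER   # clear hadoop.tmp.dir; this destroys the HDFS data
$ hadoop namenode -format    # reformat the namenode
$ start-all.sh               # restart all daemons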

I kept chasing the firewall theory instead and checked the Hadoop ports (note: netstat -an takes no port argument, so the filtering has to go through grep):

$ netstat -an | grep ':900[01]'

The output was:

tcp        0      0 127.0.0.1:9001          127.0.0.1:58822         ESTABLISHED
tcp        0      0 127.0.0.1:9000          127.0.0.1:54320         ESTABLISHED
tcp        0      0 127.0.0.1:9000          127.0.0.1:55914         ESTABLISHED

Connections on both 9000 and 9001 were ESTABLISHED, so it probably was not a port or firewall problem after all.

That left the possibility that the datanode had not started successfully.
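
One quick way to check this (assuming jps is on the PATH; the datanode log path below follows the same naming pattern as the namenode log above, which is an assumption):

$ jps                       # a healthy single-node setup lists NameNode, DataNode,
                            # SecondaryNameNode, JobTracker and TaskTracker
$ hadoop dfsadmin -report   # reports how many live datanodes HDFS sees
$ tail -n 50 /home/mjiang/hadoop-0.20.2/logs/hadoop-mjiang-datanode-venus.log
                            # if DataNode is missing from jps, its log usually says why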

After restarting Hadoop, everything worked.

