Source: http://blog.chinaunix.net/u3/105376/showart_2329753.html
Although developing Hadoop programs in Eclipse is convenient these days, the command line is still handy for developing and verifying small programs. These are notes from when I was first learning Hadoop, recorded here for future reference.

1. The classic WordCount program (WordCount.java), from the Hadoop 0.18 documentation
import java.io.IOException;
import java.util.*;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;

public class WordCount {

    // Mapper: emit (word, 1) for every whitespace-separated token
    public static class Map extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output,
                        Reporter reporter) throws IOException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);
            }
        }
    }

    // Reducer (also used as the combiner): sum the counts for each word
    public static class Reduce extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> output,
                           Reporter reporter) throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}
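The Mapper/Reducer logic above can be exercised without a cluster. Below is a minimal sketch in plain Java (no Hadoop dependencies; the class name `LocalWordCount` and the helper `count` are mine, not part of the original program) that mimics the map phase (tokenize, emit 1 per word) and the reduce phase (sum per word) in a single pass:

```java
import java.util.StringTokenizer;
import java.util.TreeMap;

public class LocalWordCount {
    // Mimics map (emit 1 per token) and reduce (sum per word) together
    public static TreeMap<String, Integer> count(String[] lines) {
        TreeMap<String, Integer> counts = new TreeMap<>();
        for (String line : lines) {
            // Same tokenization as the Mapper: whitespace only,
            // no lowercasing, no punctuation stripping
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                counts.merge(tokenizer.nextToken(), 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        // The two lines of input1.txt from the walkthrough below
        count(new String[] { "Hello, i love china", "are you ok?" })
                .forEach((word, n) -> System.out.println(word + "\t" + n));
    }
}
```

Because a `TreeMap` keeps its keys sorted, the printout is ordered the same way the job's final output is (reducer output is sorted by key).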
2. Compile (the Hadoop core jar must be on the classpath; compiled classes go to /home/admin/WordCount):

javac -classpath /home/admin/hadoop/hadoop-0.19.1-core.jar WordCount.java -d /home/admin/WordCount
3. Package the classes into a jar (run from the directory holding the .class files):

jar cvf WordCount.jar *.class

This produces WordCount.jar.
Of course, you can also build the jar locally in Eclipse with the Hadoop jar on the classpath.
4. Prepare the input files:

[admin@host WordCount]$ cat input1.txt
Hello, i love china
are you ok?
[admin@host WordCount]$ cat input2.txt
hello, i love word
You are ok
Create an input directory on HDFS and put the input files the job needs there. Do not create /tmp/output in advance: the job creates the output directory itself and fails if it already exists.

hadoop fs -mkdir /tmp/input
hadoop fs -put input1.txt /tmp/input/
hadoop fs -put input2.txt /tmp/input/
5. Run the job; Hadoop prints progress and counter information as it runs:
[admin@host WordCount]$ hadoop jar WordCount.jar WordCount /tmp/input /tmp/output
10/09/16 22:49:43 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
10/09/16 22:49:43 INFO mapred.FileInputFormat: Total input paths to process :2
10/09/16 22:49:43 INFO mapred.JobClient: Running job: job_201008171228_76165
10/09/16 22:49:44 INFO mapred.JobClient:  map 0% reduce 0%
10/09/16 22:49:47 INFO mapred.JobClient:  map 100% reduce 0%
10/09/16 22:49:54 INFO mapred.JobClient:  map 100% reduce 100%
10/09/16 22:49:55 INFO mapred.JobClient: Job complete: job_201008171228_76165
10/09/16 22:49:55 INFO mapred.JobClient: Counters: 16
10/09/16 22:49:55 INFO mapred.JobClient:   File Systems
10/09/16 22:49:55 INFO mapred.JobClient:     HDFS bytes read=62
10/09/16 22:49:55 INFO mapred.JobClient:     HDFS bytes written=73
10/09/16 22:49:55 INFO mapred.JobClient:     Local bytes read=152
10/09/16 22:49:55 INFO mapred.JobClient:     Local bytes written=366
10/09/16 22:49:55 INFO mapred.JobClient:   Job Counters
10/09/16 22:49:55 INFO mapred.JobClient:     Launched reduce tasks=1
10/09/16 22:49:55 INFO mapred.JobClient:     Rack-local map tasks=2
10/09/16 22:49:55 INFO mapred.JobClient:     Launched map tasks=2
10/09/16 22:49:55 INFO mapred.JobClient:   Map-Reduce Framework
10/09/16 22:49:55 INFO mapred.JobClient:     Reduce input groups=11
10/09/16 22:49:55 INFO mapred.JobClient:     Combine output records=14
10/09/16 22:49:55 INFO mapred.JobClient:     Map input records=4
10/09/16 22:49:55 INFO mapred.JobClient:     Reduce output records=11
10/09/16 22:49:55 INFO mapred.JobClient:     Map output bytes=118
10/09/16 22:49:55 INFO mapred.JobClient:     Map input bytes=62
10/09/16 22:49:55 INFO mapred.JobClient:     Combine input records=14
10/09/16 22:49:55 INFO mapred.JobClient:     Map output records=14
10/09/16 22:49:55 INFO mapred.JobClient:     Reduce input records=14
6. View the results:
[admin@host WordCount]$ hadoop fs -ls /tmp/output/
Found 2 items
drwxr-x---   - admin admin          0 2010-09-16 22:43 /tmp/output/_logs
-rw-r-----   1 admin admin        102 2010-09-16 22:44 /tmp/output/part-00000
[admin@host WordCount]$ hadoop fs -cat /tmp/output/part-00000
Hello,  1
You     1
are     2
china   1
hello,  1
i       2
love    2
ok      1
ok?     1
word    1
you     1

Note that "Hello,"/"hello," and "ok"/"ok?" are counted as distinct words: the Mapper's StringTokenizer splits on whitespace only and performs no lowercasing or punctuation stripping.
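The part files are plain text: TextOutputFormat writes one key/value pair per line, separated by a tab by default. If downstream code needs to consume them, a small sketch like the following works (plain Java; the class name `PartFileParser` is a hypothetical helper, not part of Hadoop):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PartFileParser {
    // Parse TextOutputFormat lines of the form "word<TAB>count"
    public static Map<String, Long> parse(String[] lines) {
        Map<String, Long> counts = new LinkedHashMap<>();
        for (String line : lines) {
            int tab = line.indexOf('\t');
            counts.put(line.substring(0, tab),
                       Long.parseLong(line.substring(tab + 1)));
        }
        return counts;
    }

    public static void main(String[] args) {
        // A couple of lines as they appear in part-00000 above
        Map<String, Long> counts = parse(new String[] { "are\t2", "china\t1" });
        counts.forEach((word, n) -> System.out.println(word + " -> " + n));
    }
}
```

In a real pipeline the lines would come from `hadoop fs -cat /tmp/output/part-00000` (or the HDFS API) rather than an in-memory array.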