Notes on the standalone-mode configuration and the Eclipse environment setup will be added later.

First, the program. It consists of three source files: TokenizerMapper.java, IntSumReducer.java, and WordCount.java.

project: wordcount

TokenizerMapper.java:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TokenizerMapper extends
        Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        // Split the line on whitespace and emit (word, 1) for every token.
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
        }
    }
}

IntSumReducer.java:

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class IntSumReducer extends
        Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Sum all the counts emitted for this word and write (word, total).
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}

WordCount.java:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Uncomment to take the input/output paths from the command line
        // instead of hard-coding them below:
//      String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
//      if (otherArgs.length != 2) {
//          System.err.println("Usage: wordcount <in> <out>");
//          System.exit(2);
//      }
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // pre-aggregates counts on the map side
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("/tmp/input"));
        FileOutputFormat.setOutputPath(job, new Path("/tmp/output"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
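On Hadoop 2.x the Job(conf, name) constructor is deprecated. Below is a minimal sketch of an equivalent driver that uses Job.getInstance and reads the paths from the command line; the class name WordCountDriver is just for illustration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCountDriver {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Strip generic Hadoop options; what remains should be <in> <out>.
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        // Job.getInstance replaces the deprecated Job constructor.
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}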

Export the project as a jar; when exporting, make sure to select WordCount as the main (run) class.
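If you prefer building the jar on the command line instead of exporting from Eclipse, something like the following should work (a sketch assuming the three .java files are in the current directory and the hadoop command is on your PATH; on very old releases you may need to point -classpath at the hadoop-core jar instead of using `hadoop classpath`):

javac -classpath "$(hadoop classpath)" TokenizerMapper.java IntSumReducer.java WordCount.java
jar cfe wordcount.jar WordCount *.class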

On the Hadoop machine:

[admin@host WordCount]$ vim input1.txt
Hello, i love china
are you ok
?
[admin@host WordCount]$ vim input2.txt
hello, i love word
You are ok

Create the input directory on HDFS and put in the input files the program needs:

hadoop fs -mkdir /tmp/input
hadoop fs -put input1.txt /tmp/input/
hadoop fs -put input2.txt /tmp/input/

Do not create /tmp/output yourself: FileOutputFormat creates the output directory when the job runs and fails if it already exists.
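If /tmp/output is left over from an earlier run, remove it first:

hadoop fs -rm -r /tmp/output

(On older Hadoop releases the equivalent command is hadoop fs -rmr /tmp/output.)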
 
Run the job:
hadoop jar wordcount.jar WordCount
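If you enabled the argument-parsing block (or use the Job.getInstance variant above), pass the paths explicitly instead:

hadoop jar wordcount.jar WordCount /tmp/input /tmp/output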
 
Check the result:
hadoop fs -ls /tmp/output/
hadoop fs -cat /tmp/output/part-r-00000
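Given the two input files above, the output should look something like this. StringTokenizer splits only on whitespace, so "Hello," keeps its comma and is counted separately from "hello,", and keys come out in Text byte order (uppercase before lowercase):

?	1
Hello,	1
You	1
are	2
china	1
hello,	1
i	2
love	2
ok	2
word	1
you	1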

OK!