Hadoop: Data Deduplication (Star's Notes)

1. Problem Description

Input files

file1:

2006-6-9 a
2006-6-10 b
2006-6-11 c
2006-6-12 d
2006-6-13 a
2006-6-14 b
2006-6-15 c
2006-6-11 c

file2:

2006-6-9 b
2006-6-10 a
2006-6-11 b
2006-6-12 d
2006-6-13 a
2006-6-14 c
2006-6-15 d
2006-6-11 c
Sample output:

2006-6-10 a
2006-6-10 b
2006-6-11 b
2006-6-11 c
2006-6-12 d
2006-6-13 a
2006-6-14 b
2006-6-15 c
2006-6-15 d
2006-6-9  a
2006-6-9  b

Design approach:

The goal of deduplication is that any record appearing more than once in the input appears exactly once in the output. Concretely, the Reducer's input key should be the record itself, with no requirement on the value-list: when the Reducer receives a &lt;key, value-list&gt; pair, it simply copies the key to the output key and writes an empty value. In the MapReduce flow, the Map output &lt;key, value&gt; pairs are grouped by the shuffle phase into &lt;key, value-list&gt; pairs before being handed to the Reducer, so working backwards from the desired Reducer input, the Map output key must be the record and the value can be arbitrary. In this example each record is one line of the input file, so with Hadoop's default input format the Map task simply sets its output key to the input value (the line) and emits it with an arbitrary value. After the shuffle, the Reducer then ignores how many values each key carries and writes the input key straight through as the output key (with an empty value).
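Before looking at the Hadoop job itself, the combined effect of the map → shuffle → reduce pipeline on this data can be sketched locally with a sorted set: distinct keys, emitted once each, in sorted order. This is plain Java for illustration only (the class name `DedupSketch` is made up, and it is not part of the Hadoop code below):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.TreeSet;

public class DedupSketch {
    // Keep each distinct line once and return them in sorted order --
    // the same result the map -> shuffle -> reduce pipeline produces,
    // since the shuffle groups and sorts identical keys.
    public static List<String> dedup(List<String> lines) {
        return new ArrayList<>(new TreeSet<>(lines));
    }

    public static void main(String[] args) {
        // A few lines taken from the two sample input files,
        // including the duplicated "2006-6-11 c".
        List<String> input = Arrays.asList(
            "2006-6-9 a", "2006-6-10 b", "2006-6-11 c",
            "2006-6-11 c", "2006-6-9 b", "2006-6-10 a", "2006-6-11 b");
        for (String line : dedup(input)) {
            System.out.println(line);
        }
    }
}
```

Note that keys are compared as text, which is why "2006-6-9" sorts after "2006-6-10" in the sample output above: `Text` keys are ordered lexicographically, not by date.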

The complete code is listed below:

package com.galaxy.star;
/**
 * liuyinxing
 */
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class Dedup {
	// Mapper: emit every input line as the key; the value is irrelevant
	// to deduplication, so an empty Text is written.
	public static class Map extends Mapper<Object, Text, Text, Text> {
		private static final Text EMPTY = new Text("");

		public void map(Object key, Text value, Context context)
				throws IOException, InterruptedException {
			System.out.println("map phase: " + value);
			context.write(value, EMPTY);
		}
	}

	// Reducer: after the shuffle, each distinct key arrives exactly once,
	// so writing the key (with an empty value) deduplicates the data.
	public static class Reduce extends Reducer<Text, Text, Text, Text> {
		public void reduce(Text key, Iterable<Text> values, Context context)
				throws IOException, InterruptedException {
			System.out.println("reduce phase: " + key);
			context.write(key, new Text(""));
		}
	}

	public static void main(String[] args) throws Exception {
		Configuration conf = new Configuration();
		String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
		if (otherArgs.length != 2) {
			System.err.println("Usage: Dedup <in> <out>");
			System.exit(2);
		}
		Job job = new Job(conf, "data deduplication");
		job.setJarByClass(Dedup.class);
		job.setMapperClass(Map.class);
		// The reduce function is idempotent, so it can also run as a
		// combiner to cut shuffle traffic.
		job.setCombinerClass(Reduce.class);
		job.setReducerClass(Reduce.class);
		job.setOutputKeyClass(Text.class);
		job.setOutputValueClass(Text.class);
		FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
		FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
		System.exit(job.waitForCompletion(true) ? 0 : 1);
	}
}

