The Role and Usage of the Combiner in MapReduce
① Each map task may produce a large amount of output. The Combiner performs a first round of merging on the map side, reducing the amount of data shuffled to the reducers.
② At its core, the Combiner merges records that share a key locally; it acts like a local reduce.
Without a Combiner, all aggregation is left to the reducers, which is relatively inefficient.
With a Combiner, each map task aggregates its own output locally as it finishes, which speeds the job up.
Note: the Combiner's output becomes the Reducer's input, and since the Combiner is optional (the framework may run it zero or more times), adding one must never change the final result. A Combiner should therefore only be used when the Reducer's input key/value types are identical to its output key/value types and the operation does not affect the final result, e.g. summation or taking a maximum.
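For intuition, here is a minimal, self-contained Java sketch (an illustration added here, not part of the original job below) of why averaging cannot safely be used as a Combiner while summation can:
public class CombinerSafetyDemo {
    public static void main(String[] args) {
        // Pretend two map tasks each saw part of the data.
        double[] split1 = {1, 2};
        double[] split2 = {3};
        // "Combining" by averaging each split, then averaging the partial
        // averages, changes the answer: (1.5 + 3.0) / 2 = 2.25 ...
        double withAvgCombiner = (avg(split1) + avg(split2)) / 2;
        // ... but the true average of {1, 2, 3} is 2.0.
        double trueAvg = avg(new double[]{1, 2, 3});
        System.out.println(withAvgCombiner + " vs " + trueAvg);
        // Summation has no such problem: (1 + 2) + 3 == 1 + 2 + 3,
        // which is why the word-count reducer below can double as a Combiner.
    }
    private static double avg(double[] xs) {
        double sum = 0;
        for (double x : xs) {
            sum += x;
        }
        return sum / xs.length;
    }
}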
③ Code example
package com.sl.bigdatatest.mapreduce;
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
public class WordCount {
public static void main(String[] args) throws Exception {
if (args.length < 2) {
System.err.println("Uage:
System.exit(2);
}
String inputPath = args[0];
Path outputPath = new Path(args[1]);
//1.configuration
Configuration conf = new Configuration();
URI uri = new URI("hdfs://192.168.0.200:9000");
FileSystem fileSystem = FileSystem.get(uri, conf);
if (fileSystem.exists(outputPath)) {
boolean b = fileSystem.delete(outputPath, true);
System.out.println("已存在目录删除:"+b);
}
//2. create the job
Job job = Job.getInstance(conf, WordCount.class.getName());
job.setJarByClass(WordCount.class);
//3. input path
FileInputFormat.setInputPaths(job, new Path(inputPath));
//4. input format
job.setInputFormatClass(TextInputFormat.class);
//5.map
job.setMapperClass(MapWordCountTask.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(LongWritable.class);
//6.reduce
job.setReducerClass(ReduceWordCountTask.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(LongWritable.class);
/** Tell this job to use a Combiner; the combiner class is ReduceWordCountTask **/
job.setCombinerClass(ReduceWordCountTask.class);
//7. output path
FileOutputFormat.setOutputPath(job, outputPath);
//8. output format
job.setOutputFormatClass(TextOutputFormat.class);
//9. submit the job and wait for completion
job.waitForCompletion(true);
}
public static class MapWordCountTask extends Mapper<LongWritable, Text, Text, LongWritable> {
private Text k2 = new Text();
private LongWritable v2 = new LongWritable();
@Override
protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
String content = value.toString();
StringTokenizer st = new StringTokenizer(content);
while (st.hasMoreElements()) {
k2.set(st.nextToken());
v2.set(1L);
context.write(k2, v2);
}
}
}
public static class ReduceWordCountTask extends Reducer<Text, LongWritable, Text, LongWritable> {
private LongWritable v3 = new LongWritable();
@Override
protected void reduce(Text k2, Iterable<LongWritable> v2s, Context context) throws IOException, InterruptedException {
long sum = 0;
for (LongWritable longWritable : v2s) {
sum += longWritable.get();
}
v3.set(sum);
context.write(k2, v3);
}
}
}
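A note on the design: reusing ReduceWordCountTask as the Combiner is safe here precisely because word counting is a summation, which is associative and commutative, and because the reducer's input key/value types (Text, LongWritable) are identical to its output types. To try the job, a command along the lines of
hadoop jar wordcount.jar com.sl.bigdatatest.mapreduce.WordCount /input/words /output/wordcount
should work; note that the jar name and the HDFS paths are placeholders for illustration, not values from the original post.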