MapReduce (8): Join Applications

Reduce Join

Main work on the Map side: tag the key/value pairs coming from different tables or files so records from different sources can be told apart, then emit the join field as the key and the remaining fields plus the newly added tag as the value.

Main work on the Reduce side: by the time records reach the reducer they are already grouped by the join field, so within each group we only need to separate the records by the source tag assigned in the Map phase and then merge them. For example, in the code below the single pd record in a group supplies the product name for every order record sharing the same pId.

Reduce Join Summary

Drawback: the merge is done entirely in the Reduce phase, so the Reduce side carries most of the processing load while the Map nodes do very little computation; cluster resources are poorly utilized, and the Reduce phase is prone to data skew.

Solution: perform the merge on the Map side (Map Join).

Hands-on Example
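
The example below assumes two space-separated input files whose layout matches what the mapper parses; the file names and sample rows shown here are only hypothetical illustrations: an order file (its name contains "order") with columns id, pId, amount, and a product file such as pd.txt with columns pId, pName, joined on pId.

    order file (id pId amount):        pd file (pId pName):
    1001 01 1                          01 productA
    1002 02 2                          02 productB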

TableBean.java

import org.apache.hadoop.io.Writable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

/**
 * Join bean carrying the fields of both the order table and the pd (product) table;
 * flag marks which table a record came from.
 */
public class TableBean implements Writable {

    private String id;      // order id
    private String pId;     // product id (the join field)
    private Integer amount; // order quantity
    private String pName;   // product name
    private String flag;    // source tag: "order" or "pd"

    public TableBean() {
    }

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getpId() {
        return pId;
    }

    public void setpId(String pId) {
        this.pId = pId;
    }

    public Integer getAmount() {
        return amount;
    }

    public void setAmount(Integer amount) {
        this.amount = amount;
    }

    public String getpName() {
        return pName;
    }

    public void setpName(String pName) {
        this.pName = pName;
    }

    public String getFlag() {
        return flag;
    }

    public void setFlag(String flag) {
        this.flag = flag;
    }

    // serialization: readFields must read the fields back in exactly this order
    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(id);
        out.writeUTF(pId);
        out.writeInt(amount);
        out.writeUTF(pName);
        out.writeUTF(flag);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        this.id = in.readUTF();
        this.pId = in.readUTF();
        this.amount = in.readInt();
        this.pName = in.readUTF();
        this.flag = in.readUTF();
    }

    @Override
    public String toString() {
        return id + "\t" + pName + "\t" + amount;
    }
}

TableMapper.java

public class TableMapper extends Mapper<LongWritable, Text, Text, TableBean> {

    private String fileName; // name of the split's file, used to tell order records from pd records
    private Text outK = new Text();
    private TableBean outV = new TableBean();

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        FileSplit split = (FileSplit) context.getInputSplit();
        fileName = split.getPath().getName();
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
        String line = value.toString();
        if (fileName.contains("order")) {
            // order file line: id pId amount
            String[] split = line.split(" ");
            // build the key (join field pId) and the tagged value
            outK.set(split[1]);
            outV.setId(split[0]);
            outV.setpId(split[1]);
            outV.setAmount(Integer.parseInt(split[2]));
            outV.setpName("");
            outV.setFlag("order");
        } else {
            // pd file line: pId pName
            String[] split = line.split(" ");
            // build the key (join field pId) and the tagged value
            outK.set(split[0]);
            outV.setId("");
            outV.setpId(split[0]);
            outV.setAmount(0);
            outV.setpName(split[1]);
            outV.setFlag("pd");
        }
        context.write(outK, outV);
    }
}

TableReducer.java

public class TableReducer extends Reducer<Text, TableBean, TableBean, NullWritable> {

    @Override
    protected void reduce(Text key, Iterable<TableBean> values, Context context)
        throws IOException, InterruptedException {

        List<TableBean> orderBeans = new ArrayList<>();
        TableBean pdBean = new TableBean();
        for (TableBean value : values) {
            // Hadoop reuses the value instance across iterations, so copy it before keeping a reference
            if ("order".equals(value.getFlag())) {
                TableBean tempBean = new TableBean();
                try {
                    BeanUtils.copyProperties(tempBean, value);
                } catch (IllegalAccessException | InvocationTargetException e) {
                    e.printStackTrace();
                }
                orderBeans.add(tempBean);
            } else {
                try {
                    BeanUtils.copyProperties(pdBean, value);
                } catch (IllegalAccessException | InvocationTargetException e) {
                    e.printStackTrace();
                }
            }
        }
        // join: copy the product name from the pd record onto every cached order record
        for (TableBean orderBean : orderBeans) {
            orderBean.setpName(pdBean.getpName());
            context.write(orderBean,NullWritable.get());
        }
    }
}

TableDriver.java

public class TableDriver {

    public static void main(String[] args) throws Exception {

        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        job.setJarByClass(TableDriver.class);
        job.setMapperClass(TableMapper.class);
        job.setReducerClass(TableReducer.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(TableBean.class);

        job.setOutputKeyClass(TableBean.class);
        job.setOutputValueClass(NullWritable.class);

        FileInputFormat.setInputPaths(job,new Path(System.getProperty("user.dir")+"/input/reducejoin"));
        FileOutputFormat.setOutputPath(job,new Path(System.getProperty("user.dir")+"/output/reducejoin"));

        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);

    }

}

Map Join

1) Use case

Map Join fits the scenario where one table is very large and the other is small.

2) Advantages

Cache the small table(s) on the Map side and apply the join logic there. This moves work to the Map side, reduces the amount of data pushed to the Reduce side, and minimizes data skew as much as possible.

3) Approach: use DistributedCache

  • In the Mapper's setup phase, read the cached file into an in-memory collection.

  • Load the cache file in the Driver class:

    // cache a local file onto the task-running nodes
    job.addCacheFile(new URI("file://xxx/pd.txt"));
    // when running on a cluster, use an HDFS path instead
    job.addCacheFile(new URI("hdfs://xxx/pd.txt"));
    
  • Map Join does not need any ReduceTask, so set the number of reduce tasks to 0 (see the snippet below).
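
    On the driver side this last point is a single setting (a minimal snippet; the same call appears in MapJoinDriver below):

    job.setNumReduceTasks(0);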

Hands-on Example

MapJoinMapper.java

public class MapJoinMapper extends Mapper<LongWritable, Text, Text, NullWritable> {

    // in-memory copy of the small (product) table: pId -> pName
    private Map<String, String> pdMap = new HashMap<>();

    private Text outK = new Text();

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // fetch the cached file (pd.txt) and load its contents into the pdMap collection
        URI cacheFile = context.getCacheFiles()[0];
        FileSystem fs = FileSystem.get(context.getConfiguration());
        FSDataInputStream fis = fs.open(new Path(cacheFile));

        // read the cached file line by line
        BufferedReader br = new BufferedReader(new InputStreamReader(fis, "utf-8"));

        String line;

        while (StringUtils.isNotBlank(line = br.readLine())) {
            String[] fields = line.split(" ");
            pdMap.put(fields[0], fields[1]);
        }
        // close the streams
        IOUtils.closeStream(br);
        IOUtils.closeStream(fis);
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
        String line = value.toString();

        String[] split = line.split(" ");
        // look up the product name by pId and assemble the joined output line: id \t pName \t amount
        String pName = pdMap.get(split[1]);
        StringBuilder sb = new StringBuilder();
        String outStr = sb.append(split[0]).append("\t").append(pName).append("\t")
            .append(Integer.parseInt(split[2])).toString();
        outK.set(outStr);
        context.write(outK, NullWritable.get());
    }
}

MapJoinDriver.java

public class MapJoinDriver {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        job.setMapperClass(MapJoinMapper.class);
        job.setJarByClass(MapJoinDriver.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(NullWritable.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);

        job.addCacheFile(new URI(System.getProperty("user.dir")+"/input/mapjoin/cacheFile.txt"));
        job.setNumReduceTasks(0);

        FileInputFormat.setInputPaths(job, new Path(System.getProperty("user.dir")+"/input/mapjoin/mapjoin.txt"));
        FileOutputFormat.setOutputPath(job, new Path(System.getProperty("user.dir")+"/output/mapjoin"));

        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);

    }

}

Summary:

This article covered the join operation: just like a SQL join, it correlates records from multiple files to produce the final result. When one file is large and the other is small, the Map Join approach is preferable.
