Hadoop 2.x chained MapReduce jobs (sumAndSort + serialization)

MapReduce computations can be chained. In real business scenarios we often need to compute totals first and then sort those totals. When the sort key is a plain number type it can be sorted directly; when the sort key is a Java bean, the bean must first be made serializable.

【1】Serialization in Hadoop
Hadoop does not use Java's built-in serialization; it implements its own serialization mechanism.
Hadoop's serialization is defined by the Writable interface. Writable provides no comparison capability, so Hadoop combines it with Java's Comparable interface into a single interface, WritableComparable.

1 > Using Writable (when you only need to persist the data, not sort it)

A bean to be serialized must implement the Writable interface and override its two methods (serialize and deserialize):
public class DataBean implements Writable {

    private String tel;
    private long upPayLoad;
    private long downPayLoad;
    private long totalPayLoad;

    public DataBean(){}

    // a full-args constructor is usually added for convenience
    public DataBean(String tel, long upPayLoad, long downPayLoad) {
        super();
        this.tel = tel;
        this.upPayLoad = upPayLoad;
        this.downPayLoad = downPayLoad;
        this.totalPayLoad = upPayLoad + downPayLoad;
    }

    @Override
    public String toString() {
        return this.upPayLoad + "\t" + this.downPayLoad + "\t" + this.totalPayLoad;
    }

    // serialize: write the fields out
    public void write(DataOutput out) throws IOException {
        out.writeUTF(tel);
        out.writeLong(upPayLoad);
        out.writeLong(downPayLoad);
        out.writeLong(totalPayLoad);
    }

    // deserialize: read the fields back in exactly the order they were written
    public void readFields(DataInput in) throws IOException {
        this.tel = in.readUTF();
        this.upPayLoad = in.readLong();
        this.downPayLoad = in.readLong();
        this.totalPayLoad = in.readLong();
    }

    // getters and setters
}
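To see what the write/readFields contract does, here is a minimal plain-Java sketch of the same round trip. It uses the standard java.io DataOutputStream/DataInputStream in place of Hadoop's internal buffers (the class name and byte-array streams are illustrative stand-ins, not Hadoop APIs), and shows why readFields must consume fields in the same order write produced them:

```java
import java.io.*;

// Sketch of the Writable contract: write fields via DataOutput,
// read them back via DataInput in exactly the same order.
public class WritableRoundTrip {

    static byte[] serialize(String tel, long up, long down) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeUTF(tel);          // same field order as DataBean.write()
        out.writeLong(up);
        out.writeLong(down);
        out.writeLong(up + down);   // totalPayLoad
        return buf.toByteArray();
    }

    static String deserialize(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        // must read in the same order write() wrote, or the fields get scrambled
        String tel = in.readUTF();
        long up = in.readLong();
        long down = in.readLong();
        long total = in.readLong();
        return tel + "\t" + up + "\t" + down + "\t" + total;
    }

    public static void main(String[] args) throws IOException {
        byte[] bytes = serialize("13800000000", 100L, 200L);
        System.out.println(deserialize(bytes)); // "13800000000\t100\t200\t300"
    }
}
```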

2 > Using WritableComparable (when you need both persistence and sorting)

When a bean must be persisted and also sorted in some order, it must implement the WritableComparable interface and override three methods (serialize, deserialize, and compare):
public class InfoBean implements WritableComparable<InfoBean> {

    private String account;  // key: the email account
    private double income;   // income
    private double expenses; // expenses
    private double surplus;  // surplus

    // a set method in place of a full-args constructor
    public void set(String account, double income, double expenses){
        this.account = account;
        this.income = income;
        this.expenses = expenses;
        this.surplus = income - expenses;
    }

    // serialize: write the fields out
    public void write(DataOutput out) throws IOException {
        out.writeUTF(account);
        out.writeDouble(income);
        out.writeDouble(expenses);
        out.writeDouble(surplus);
    }

    // deserialize: read the fields back in exactly the order they were written
    public void readFields(DataInput in) throws IOException {
        this.account = in.readUTF();
        this.income = in.readDouble();
        this.expenses = in.readDouble();
        this.surplus = in.readDouble();
    }

    // ordering: compare by income first; ties are broken by expenses
    public int compareTo(InfoBean o) {
        if(this.income == o.getIncome()){
            return Double.compare(this.expenses, o.getExpenses());
        }
        return Double.compare(this.income, o.getIncome());
    }

    @Override
    public String toString() {
        return income + "\t" + expenses + "\t" + surplus;
    }

    // getters and setters
}
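The income-then-expenses ordering defined by compareTo can be checked outside Hadoop with a plain Comparable class. This is a minimal stand-in (the `Bean` class below is hypothetical, not the Hadoop InfoBean), sorted with Collections.sort the same way the shuffle sorts keys:

```java
import java.util.*;

// Stand-in for InfoBean's ordering: income ascending, ties broken by expenses.
public class Bean implements Comparable<Bean> {
    final String account;
    final double income, expenses;

    Bean(String account, double income, double expenses) {
        this.account = account;
        this.income = income;
        this.expenses = expenses;
    }

    @Override
    public int compareTo(Bean o) {
        if (this.income == o.income) {
            return Double.compare(this.expenses, o.expenses); // tie-breaker
        }
        return Double.compare(this.income, o.income);
    }

    public static void main(String[] args) {
        List<Bean> beans = new ArrayList<>(Arrays.asList(
                new Bean("wangwu", 9000, 200),
                new Bean("zhangsan", 8000, 100),
                new Bean("lisi", 3000, 0)));
        Collections.sort(beans);  // uses compareTo, like the shuffle sort
        for (Bean b : beans) System.out.println(b.account);
        // prints: lisi, zhangsan, wangwu
    }
}
```

Note that using Double.compare (rather than always returning 1/-1) lets compareTo return 0 for fully equal beans, which the Comparable contract requires.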

【Case study】Compute each user's total income and total expenses, then sort the results (by total income; when totals are equal, sort by total expenses)

【1】Source data

[email protected]    6000    0   2014-02-20
[email protected]    2000    0   2014-02-20
[email protected]    0   100 2014-02-20
[email protected]    3000    0   2014-02-20
[email protected]  9000    0   2014-02-20
[email protected]  0   200     2014-02-20

【2】Result data

[email protected]    3000.0  0.0 3000.0
[email protected]    8000.0  100.0   7900.0
[email protected]  9000.0  200.0   8800.0

【3】How it works: run a sum MR job over the source data first, then feed the sum job's output in as the source data of a sort MR job; the sort job's output is the final result.
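The two chained stages can be sketched in plain Java on the sample records above: stage 1 groups by account and sums income/expenses (what the sum job's reduce does after the shuffle groups keys), and stage 2 sorts the summed rows by income with expenses as the tie-breaker (what the sort job gets from the key's ordering). The class and method names here are illustrative only:

```java
import java.util.*;

// Plain-Java simulation of the sum-then-sort chain.
public class SumThenSort {

    // returns "account\tincome\texpenses\tsurplus" rows, summed then sorted
    static List<String> run(String[] lines) {
        // stage 1: sum per account
        Map<String, double[]> sums = new LinkedHashMap<>();
        for (String line : lines) {
            String[] f = line.split("\t");
            double[] s = sums.computeIfAbsent(f[0], k -> new double[2]);
            s[0] += Double.parseDouble(f[1]);  // income
            s[1] += Double.parseDouble(f[2]);  // expenses
        }
        // stage 2: sort by income, ties broken by expenses
        List<Map.Entry<String, double[]>> rows = new ArrayList<>(sums.entrySet());
        rows.sort(Comparator
                .comparingDouble((Map.Entry<String, double[]> e) -> e.getValue()[0])
                .thenComparingDouble(e -> e.getValue()[1]));
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, double[]> e : rows) {
            double in = e.getValue()[0], ex = e.getValue()[1];
            out.add(e.getKey() + "\t" + in + "\t" + ex + "\t" + (in - ex));
        }
        return out;
    }

    public static void main(String[] args) {
        String[] lines = {
                "[email protected]\t6000\t0", "[email protected]\t2000\t0",
                "[email protected]\t0\t100", "[email protected]\t3000\t0",
                "[email protected]\t9000\t0", "[email protected]\t0\t200"};
        for (String row : run(lines)) System.out.println(row);
    }
}
```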


【4】Implementation:

File 1: InfoBean.java

(InfoBean.java is the WritableComparable bean shown above; it is reused here unchanged.)


File 2: the sum job, SumStep.java

public class SumStep {

    public static class SumMapper extends Mapper<LongWritable, Text, Text, InfoBean> {
        // reuse the key/value objects instead of new-ing them on every map() call
        private Text k = new Text();
        private InfoBean v = new InfoBean();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            String[] fields = line.split("\t");

            String account = fields[0];
            double in = Double.parseDouble(fields[1]);
            double out = Double.parseDouble(fields[2]);
            k.set(account);
            v.set(account, in, out);
            context.write(k, v);
        }
    }

    public static class SumReduce extends Reducer<Text, InfoBean, Text, InfoBean> {
        private InfoBean v = new InfoBean();

        @Override
        protected void reduce(Text key, Iterable<InfoBean> values, Context context)
                throws IOException, InterruptedException {
            double in_sum = 0;
            double out_sum = 0;
            for (InfoBean bean : values) {
                in_sum += bean.getIncome();
                out_sum += bean.getExpenses();
            }
            v.set("", in_sum, out_sum); // the account is already carried by the key
            context.write(key, v);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        job.setJarByClass(SumStep.class);

        job.setMapperClass(SumMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(InfoBean.class);
        FileInputFormat.setInputPaths(job, new Path("/mrDemo/input/sum_sort"));

        job.setReducerClass(SumReduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(InfoBean.class);
        FileOutputFormat.setOutputPath(job, new Path("/mrDemo/output/sum_sort"));

        job.waitForCompletion(true);
    }
}

// build the jar and run it: hadoop jar /root/Desktop/mr_JAR/sumAndSort.jar

File 3: the sort job, SortStep.java

public class SortStep {

    public static class SortMapper extends Mapper<LongWritable, Text, InfoBean, NullWritable> {

        private InfoBean k = new InfoBean();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            String[] fields = line.split("\t");

            String account = fields[0];
            double in = Double.parseDouble(fields[1]);
            double out = Double.parseDouble(fields[2]);
            k.set(account, in, out);
            // the bean itself is the key, so the framework sorts by InfoBean.compareTo
            context.write(k, NullWritable.get());
        }
    }

    public static class SortReduce extends Reducer<InfoBean, NullWritable, Text, InfoBean> {

        private Text k = new Text();

        @Override
        protected void reduce(InfoBean bean, Iterable<NullWritable> values, Context context)
                throws IOException, InterruptedException {
            String account = bean.getAccount();
            k.set(account);
            context.write(k, bean);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        job.setJarByClass(SortStep.class);

        job.setMapperClass(SortMapper.class);
        job.setMapOutputKeyClass(InfoBean.class);
        job.setMapOutputValueClass(NullWritable.class);
        FileInputFormat.setInputPaths(job, new Path("/mrDemo/output/sum_sort/part-r-00000"));

        job.setReducerClass(SortReduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(InfoBean.class);
        FileOutputFormat.setOutputPath(job, new Path("/mrDemo/output/sumAndSort"));

        job.waitForCompletion(true);
    }
}

// build the jar and run it: hadoop jar /root/Desktop/mr_JAR/sumAndSort1.jar

Note: MapReduce sorts the map output keys itself (using quicksort during the shuffle), so the key type must be both serializable and comparable, i.e. a WritableComparable.

