Yarn Tool Interface Example

0) Recap
hadoop jar wc.jar com.xiaoqiu.mapreduce.wordcount.WordCountDriver /input /output
We expected to be able to pass parameters dynamically, but the job failed: the -D option was mistaken for the first input path.
hadoop jar wc.jar com.xiaoqiu.mapreduce.wordcount.WordCountDriver -Dmapreduce.job.queuename=root.hadoop /input /output
1) Requirement: let our own program accept dynamic parameters as well, by implementing Yarn's Tool interface.
2) Steps:
(1) Create a Maven project named YarnDemo with the following pom:
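The failure happens because the driver's main() receives -Dmapreduce.job.queuename=... directly as args[0]. ToolRunner avoids this by first running the arguments through GenericOptionsParser, which applies generic options like -Dkey=value to the Configuration and hands only the remaining positional arguments to the tool. A simplified, stdlib-only sketch of that splitting (an illustration, not the real Hadoop parser):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Simplified illustration of what GenericOptionsParser does for ToolRunner:
// pull -Dkey=value pairs out of the argument list, record them as
// configuration entries, and return only the positional arguments.
public class GenericOptionsSketch {
    static Map<String, String> conf = new LinkedHashMap<>();

    static String[] parse(String[] args) {
        List<String> remaining = new ArrayList<>();
        for (String arg : args) {
            if (arg.startsWith("-D") && arg.contains("=")) {
                // e.g. -Dmapreduce.job.queuename=root.hadoop -> conf entry
                String[] kv = arg.substring(2).split("=", 2);
                conf.put(kv[0], kv[1]);
            } else {
                remaining.add(arg); // positional argument, e.g. /input
            }
        }
        return remaining.toArray(new String[0]);
    }

    public static void main(String[] args) {
        String[] rest = parse(new String[]{
                "-Dmapreduce.job.queuename=root.hadoop", "/input", "/output"});
        System.out.println(conf);                   // {mapreduce.job.queuename=root.hadoop}
        System.out.println(String.join(" ", rest)); // /input /output
    }
}
```

Without this splitting step, the plain driver sees the -D option as its first path argument, which is exactly the error above.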

<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>3.1.3</version>
    </dependency>
</dependencies>

(2) Create the package com.xiaoqiu
(3) Create a WordCount class that implements the Tool interface:

package com.xiaoqiu;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;

import java.io.IOException;

/**
 * @author 小邱
 * @version 0.0.1
 * @description WordCount
 * @since 2021/12/5 15:42
 */
public class WordCount implements Tool {

    private Configuration conf;

    @Override
    public int run(String[] args) throws Exception {
        Job job = Job.getInstance(conf);
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    @Override
    public void setConf(Configuration conf) {
        this.conf = conf;
    }

    @Override
    public Configuration getConf() {
        return conf;
    }
    public static class WordCountMapper extends Mapper<LongWritable, Text,Text, IntWritable> {
        private Text wordOut = new Text();
        private IntWritable outValue = new IntWritable(1);
        @Override
        protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            // 1. Get one line of input
            String s = value.toString();
            // 2. Split it into words
            String[] words = s.split(" ");
            // 3. Write out each word with a count of 1
            for (String word : words) {
                // Reuse the Text object to avoid extra allocations
                wordOut.set(word);
                // Emit (word, 1)
                context.write(wordOut, outValue);
            }
        }
    }
    public static class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable outValue = new IntWritable();
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            // Sum the counts for this word
            for (IntWritable value : values) {
                sum += value.get();
            }
            outValue.set(sum);
            // Emit (word, total count)
            context.write(key, outValue);
        }
    }
}

(4) Create WordCountDriver:

package com.xiaoqiu;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

import java.util.Arrays;

/**
 * @author 小邱
 * @version 0.0.1
 * @description WordCountDriver
 * @since 2021/12/2 16:20
 */
public class WordCountDriver {
    private static Tool tool;
    public static void main(String[] args) throws Exception {
        // 1. Create the configuration
        Configuration conf = new Configuration();
        // 2. Pick the Tool implementation by its name
        switch (args[0]){
            case "wordcount":
                tool = new WordCount();
                break;
            default:
                throw new RuntimeException("No such tool: " + args[0]);
        }
        // 3. Run the program through ToolRunner
        // Arrays.copyOfRange copies the old array's elements (minus the tool name) into a new array
        int run = ToolRunner.run(conf, tool, Arrays.copyOfRange(args, 1, args.length));
        System.exit(run);
    }
}
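The driver peels off the first argument (the tool name) before handing the rest to ToolRunner. A small stdlib-only example of that slicing, using the same argument shape as the submission commands below:

```java
import java.util.Arrays;

// Demonstrates how the driver drops the tool-selector argument and
// forwards the remaining generic options and paths to the tool.
public class CopyOfRangeDemo {
    public static void main(String[] args) {
        String[] all = {"wordcount",
                "-Dmapreduce.job.queuename=root.xiaoqiu", "/input", "/output1"};
        // Copy elements from index 1 (inclusive) to all.length (exclusive)
        String[] toolArgs = Arrays.copyOfRange(all, 1, all.length);
        System.out.println(String.join(" ", toolArgs));
        // -Dmapreduce.job.queuename=root.xiaoqiu /input /output1
    }
}
```

ToolRunner then parses the -D option out of toolArgs, so the tool's run() sees only /input and /output1.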

3) Prepare an input directory on HDFS (say /input), then submit the jar to the cluster:
yarn jar YarnDemo.jar com.xiaoqiu.WordCountDriver wordcount /input /output
Note the three arguments: the first selects which Tool to run, and the second and third are the input and output directories. To pass extra configuration, add -D options right after wordcount, for example:
yarn jar YarnDemo.jar com.xiaoqiu.WordCountDriver wordcount -Dmapreduce.job.queuename=root.xiaoqiu /input /output1
4) Note: after finishing all of the above, roll back to a snapshot or manually restore the configuration files to their previous state; the cluster's limited resources have been split across many queues, which makes later testing inconvenient.
