Running WordCount with Win10 + Hadoop + IDEA

1. Installing Hadoop on Windows is not covered in detail here. Once it starts successfully, it should look like the screenshot below.

[Screenshot 1]
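
Before moving on, it doesn't hurt to sanity-check the installation from a command prompt. Assuming Hadoop's bin directory is on your PATH, the first command prints the installed version, and the second (a standard JDK tool) should list running daemons such as NameNode and DataNode:

hadoop version
jps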

2. Create a new Maven project in IDEA.

[Screenshots 2-4]

pom.xml



<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.test</groupId>
    <artifactId>wordcount</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>
    <url>http://maven.apache.org</url>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.8.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.8.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.8.0</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>3.8.1</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-common</artifactId>
            <version>2.8.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-yarn-client</artifactId>
            <version>2.8.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-core</artifactId>
            <version>2.8.0</version>
        </dependency>
        <dependency>
            <groupId>io.netty</groupId>
            <artifactId>netty-common</artifactId>
            <version>4.1.5.Final</version>
        </dependency>
    </dependencies>

    <build>
        <finalName>${project.artifactId}</finalName>
    </build>
</project>

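With the pom in place, you can optionally let Maven resolve the dependencies up front; if this finishes without errors, the project is ready to build:

mvn clean compile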

Create a new WordCount class:

[Screenshot 5]

package com.hadoop.wordcount;
import org.apache.hadoop.io.IntWritable;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.util.StringTokenizer;

/**
 * Created by bee on 8/30/18.
 */
public class WordCount {


    public static class TokenizerMapper extends
            Mapper<Object, Text, Text, IntWritable> {


        public static final IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                this.word.set(itr.nextToken());
                context.write(this.word, one);
            }
        }

    }

    public static class IntSumReduce extends
            Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                           Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            // Sum the counts emitted by the mappers for this word.
            for (IntWritable val : values) {
                sum += val.get();
            }
            this.result.set(sum);
            context.write(key, this.result);
        }
    }

    public static void main(String[] args)
            throws IOException, ClassNotFoundException, InterruptedException {

        Configuration conf = new Configuration();
        // With the HDFS addresses set here, the job finds its input automatically and runs
        // the MapReduce over it. Do not create the output directory yourself: the job
        // creates it automatically (and fails if it already exists).
        String[] otherArgs = new String[]{"hdfs://localhost:9000/input/dream.txt", "hdfs://localhost:9000/output/wordcount/"};
        // For a local (non-HDFS) run, local paths work as well:
        // String[] otherArgs = new String[]{"input/dream.txt", "output"};
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <input> <output>");
            System.exit(2);
        }

        Job job = Job.getInstance(conf, "WordCount");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(WordCount.TokenizerMapper.class);
        job.setReducerClass(WordCount.IntSumReduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
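
One Hadoop-specific caveat: FileOutputFormat refuses to start a job whose output directory already exists. Before re-running WordCount, remove the previous output first; for the paths used above:

hadoop fs -rm -r /output/wordcount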

Create an input directory in HDFS:

[Screenshot 6]

hadoop fs -mkdir /input

Copy the local file dream.txt into the input directory:

hadoop fs -put F:\dream.txt /input
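
To confirm the upload, list the directory; dream.txt should appear in the output:

hadoop fs -ls /input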

[Screenshot 7]

Back in IDEA, run the WordCount class:

[Screenshot 8]

After the job runs successfully:

[Screenshots 9-13]

Click the output file to download it; the result is as follows:

[Screenshot 14]
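
Downloading through the web UI is optional; the same result can be read directly from HDFS. With a single reducer, the output lands in a file named part-r-00000 by default:

hadoop fs -cat /output/wordcount/part-r-00000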

That completes the WordCount run.
