Remotely debugging a Hadoop cluster from IDEA on Windows 10

On Windows 10, set up a Maven project in IDEA that connects to a Hadoop cluster running on Linux.

Note:

         Make sure the username on the Hadoop cluster matches your Windows username, otherwise you will get errors (I forget the exact message, but it is a real pain). If renaming users is not an option, a workaround is sketched right below.
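If the usernames cannot be made to match, a common workaround (my addition, not part of the original setup; "hadoop" is a placeholder for your cluster user) is to tell the Hadoop client which remote user to act as. Put this at the very top of main(), before any HDFS access:

// Tell the Hadoop client to act as the given remote user.
// "hadoop" is assumed here; use the user that owns the files on the cluster.
System.setProperty("HADOOP_USER_NAME", "hadoop");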

1. Download hadoop-2.6.0.tar.gz and unpack it to a local folder: D:\configureSoftWare\hadoop-2.6.0

2. Set the Hadoop environment variable: %HADOOP_HOME% = D:\configureSoftWare\hadoop-2.6.0

3. Copy winutils.exe into the %HADOOP_HOME%\bin directory

4. Copy hadoop.dll into C:\Windows\System32

     Download link for winutils.exe and hadoop.dll: http://pan.baidu.com/s/1hrNXq3y
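Incidentally, if setting %HADOOP_HOME% globally is inconvenient, the client can also be pointed at the unpacked folder from code (a sketch reusing the path from step 1; it must run before any Hadoop classes are touched):

// Alternative to the %HADOOP_HOME% environment variable:
// point Hadoop at the local install directory programmatically.
System.setProperty("hadoop.home.dir", "D:\\configureSoftWare\\hadoop-2.6.0");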

5. Create a new Maven project. This part is straightforward, and there are plenty of articles online about creating a Maven project; once it is created you get the standard Maven layout (src/main/java and src/main/resources).

6. Copy core-site.xml and log4j.properties from the hadoop-2.6.0/etc/hadoop folder into the resources folder.

        Add the following configuration to core-site.xml:


  
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.0.26:9000</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.groups</name>
        <value>*</value>
    </property>
</configuration>

Replace the value of fs.defaultFS with the IP address of your own Hadoop cluster's NameNode.
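Before wiring up a full job, it is worth a quick connectivity check against the NameNode. A minimal sketch (the IP comes from the config above; the "hadoop" user is an assumption, match it to your cluster):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsSmokeTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Connect to the remote HDFS as the "hadoop" user (assumed).
        FileSystem fs = FileSystem.get(
                URI.create("hdfs://192.168.0.26:9000"), conf, "hadoop");
        // Listing the root directory proves the connection works.
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}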

7. To use Hadoop you still need to add the dependency jars. Edit pom.xml:



<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.fun</groupId>
    <artifactId>hadoop</artifactId>
    <version>1.0-SNAPSHOT</version>

    <repositories>
        <repository>
            <id>apache</id>
            <url>http://maven.apache.org</url>
        </repository>
    </repositories>

    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.6.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.6.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.6.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs-client</artifactId>
            <version>2.8.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
            <version>2.6.0</version>
        </dependency>
        <dependency>
            <groupId>commons-cli</groupId>
            <artifactId>commons-cli</artifactId>
            <version>1.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.10</artifactId>
            <version>1.6.0</version>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.33</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-dependency-plugin</artifactId>
                <configuration>
                    <excludeTransitive>false</excludeTransitive>
                    <stripVersion>true</stripVersion>
                    <outputDirectory>./lib</outputDirectory>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

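A note on the maven-dependency-plugin block: with this configuration, running mvn dependency:copy-dependencies copies the dependency jars into ./lib, which is handy if you later want to bundle the job for the cluster (that is how I read this config; adjust to taste).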
8. At this point the whole environment is set up. Let's try it out with the classic WordCount example:

package MR;

/**
 * Created by hadoop on 2017/5/25.
 */
import java.io.IOException;
import java.net.URI;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context
        ) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                           Context context
        ) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

//        Path input = new Path("hdfs://192.168.0.26:9000/people");
        Path input = new Path(URI.create("hdfs://192.168.0.26:9000/people"));
        Path output = new Path(URI.create("hdfs://192.168.0.26:9000/output"));
        Job job = Job.getInstance(conf, "word count");

        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, input);
        FileOutputFormat.setOutputPath(job, output);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

9. Run the program: make sure the input path /people exists on HDFS and contains some text (upload files with hdfs dfs -put if needed), then run WordCount's main method from IDEA.



View the result on HDFS:
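On the cluster itself you can run hdfs dfs -cat /output/part-r-00000. If you would rather check from the IDE, here is a minimal sketch that prints the reducer output (part-r-00000 is the default name of the first reducer's output file; the path and the "hadoop" user follow the job above):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PrintWordCountOutput {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(
                URI.create("hdfs://192.168.0.26:9000"), conf, "hadoop");
        // The single-reducer job writes its results to part-r-00000.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(fs.open(new Path("/output/part-r-00000")), "UTF-8"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // each line is: word <TAB> count
            }
        }
        fs.close();
    }
}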

10. Done!








