Understanding and Using Hadoop (4)

Writing MapReduce Programs in Java

1. Developing a MapReduce program in Java
2. Set the system environment variable HADOOP_HOME to point to the Hadoop installation directory (to avoid unnecessary trouble, do not put spaces or Chinese characters in the path), and add HADOOP_HOME/bin to the PATH environment variable (not required, just convenient).
3. If you are developing on Windows, you also need the Windows native library files (see the sketch after this list):
1) Overwrite HADOOP_HOME/bin with the bin directory provided on the shared drive.
2) If that still does not work, copy hadoop.dll from that directory into c:\windows\system32; a reboot may be required.
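If you would rather not set the HADOOP_HOME environment variable globally, the same location can be supplied through the hadoop.home.dir system property, which Hadoop falls back to when the environment variable is missing. Below is a minimal sketch; the path C:\bigdata\hadoop is only illustrative, and in practice the statement would be the first line of the driver's main() method:

public class WinEnvSetup {
    public static void main(String[] args) {
        // Illustrative path: the directory whose bin\ subfolder holds
        // winutils.exe and hadoop.dll. Hadoop checks this system property
        // when the HADOOP_HOME environment variable is not set.
        System.setProperty("hadoop.home.dir", "C:\\bigdata\\hadoop");
    }
}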

4. Create a new project and add the jar files that Hadoop requires.
5. WordMapper code:

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Each input value is one line of text; split it into words on spaces
        String line = value.toString();
        String[] words = line.split(" ");
        for (String word : words) {
            // Emit <word, 1> for every occurrence
            context.write(new Text(word), new IntWritable(1));
        }
    }
}

6. WordReducer code:

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordReducer extends Reducer<Text, IntWritable, Text, LongWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Sum the 1s emitted by the mapper for this word
        long count = 0;
        for (IntWritable v : values) {
            count += v.get();
        }
        context.write(key, new LongWritable(count));
    }
}

7. Test code (the driver):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class Test {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        job.setMapperClass(WordMapper.class);
        job.setReducerClass(WordReducer.class);

        // Map output types differ from the final (reduce) output types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        FileInputFormat.setInputPaths(job, "c:/bigdata/hadoop/test/test.txt");
        FileOutputFormat.setOutputPath(job, new Path("c:/bigdata/hadoop/test/out/"));

        job.waitForCompletion(true);
    }
}

8. Pulling files from HDFS to run the job locally:

FileInputFormat.setInputPaths(job, "hdfs://master:9000/wcinput/");
FileOutputFormat.setOutputPath(job, new Path("hdfs://master:9000/wcoutput2/"));

Note that this approach pulls the files from HDFS but runs the job locally. If you watch the output, you will see that the job ID contains the word "local". This mode of execution does not need YARN (you can stop the YARN service yourself to verify).
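For reference, here is a minimal sketch of a complete driver for this mode; the class name LocalRunOnHdfsData is only illustrative, and the hdfs://master:9000 addresses are the ones used above. Setting mapreduce.framework.name to "local" is optional, since "local" is already the default when no cluster mapred-site.xml is on the classpath:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class LocalRunOnHdfsData {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Make the local runner explicit (this is already the default).
        conf.set("mapreduce.framework.name", "local");

        Job job = Job.getInstance(conf);
        job.setMapperClass(WordMapper.class);
        job.setReducerClass(WordReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        // Input and output live on HDFS, but the job runs in this JVM.
        FileInputFormat.setInputPaths(job, "hdfs://master:9000/wcinput/");
        FileOutputFormat.setOutputPath(job, new Path("hdfs://master:9000/wcoutput2/"));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}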
9. Running on the remote server:

conf.set("fs.defaultFS", "hdfs://master:9000/");
conf.set("mapreduce.job.jar", "target/wc.jar");
conf.set("mapreduce.framework.name", "yarn");
conf.set("yarn.resourcemanager.hostname", "master");
conf.set("mapreduce.app-submission.cross-platform", "true");
FileInputFormat.setInputPaths(job, "/wcinput/");
FileOutputFormat.setOutputPath(job, new Path("/wcoutput3/"));

If you run into permission problems, add the JVM argument -DHADOOP_USER_NAME=root when launching the program.
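If passing JVM arguments is awkward (for example in an IDE run configuration), the same effect can be achieved in code. A minimal sketch follows, with the class name SubmitAsRoot purely illustrative; the property must be set before any Hadoop client call such as Job.getInstance():

public class SubmitAsRoot {
    public static void main(String[] args) throws Exception {
        // Equivalent to launching with -DHADOOP_USER_NAME=root: the Hadoop
        // client checks this system property (and the environment variable
        // of the same name) to decide which user submits the job.
        System.setProperty("HADOOP_USER_NAME", "root");
        // ...continue with the Configuration and Job setup from step 9.
    }
}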
10. Alternatively, take the four Hadoop configuration files (core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml) from the cluster and put them in the src root directory; then no manual configuration is needed, because Configuration looks for them on the classpath by default.
11. Or keep the configuration files somewhere else and add them with conf.addResource(), passing the InputStream returned by the class loader's getResourceAsStream(); using absolute file paths is not recommended. A sketch follows below.
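A minimal sketch of this approach; the classpath location conf/core-site.xml and the ConfFromClasspath class used to reach the class loader are both illustrative:

import org.apache.hadoop.conf.Configuration;

public class ConfFromClasspath {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Load a site file from an arbitrary classpath location instead of
        // relying on the default file names in the source root.
        conf.addResource(ConfFromClasspath.class.getClassLoader()
                .getResourceAsStream("conf/core-site.xml"));
        System.out.println(conf.get("fs.defaultFS"));
    }
}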
12. Create a Maven Hadoop project. The pom.xml:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>mashibing.com</groupId>
  <artifactId>maven</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <name>wc</name>
  <description>hello mp</description>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <hadoop.version>2.7.3</hadoop.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>${hadoop.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>${hadoop.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId>
      <version>${hadoop.version}</version>
    </dependency>
  </dependencies>
</project>

13. Configure log4j.properties and put it in the src/main/resources directory:

log4j.rootCategory=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[QC] %p [%t] %C.%M(%L) | %m%n
