I. Background
A weather station has produced a batch of complex records. A sample record to be analyzed:
0043011990999991950051518004+68750+023550FM-12+038299999V0203201N00261220001CN9999999N9-00111+99999999999
The data is stored in input.txt. Each record contains a year and a temperature reading, and the task is to extract both.
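For orientation, the fields sit at fixed column offsets in this record format (the same offsets the mapper below relies on): characters 15-19 hold the year, and characters 87-92 hold the signed air temperature in tenths of a degree Celsius. A quick standalone check against the sample record:

```java
public class RecordParseDemo {
    public static void main(String[] args) {
        String line = "0043011990999991950051518004+68750+023550"
                + "FM-12+038299999V0203201N00261220001CN9999999N9-00111+99999999999";
        String year = line.substring(15, 19);  // fixed-width year field
        String temp = line.substring(87, 92);  // signed temperature, tenths of a degree Celsius
        System.out.println(year);  // 1950
        System.out.println(temp);  // -0011, i.e. -1.1 degrees C
    }
}
```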
II. Procedure
1. Download hadoop-0.20.1
cd hadoop-0.20.1/conf/ and edit the following configuration files:
core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
With configuration done:
cd bin
./hadoop namenode -format
./start-all.sh
2. My pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"
    xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <artifactId>balance</artifactId>
    <groupId>com.yajun</groupId>
    <version>1.0-SNAPSHOT</version>
  </parent>
  <groupId>com.yajun.hadoop</groupId>
  <artifactId>balance.hadoop</artifactId>
  <version>1.0-SNAPSHOT</version>
  <name>balance.hadoop</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.7</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.mockito</groupId>
      <artifactId>mockito-core</artifactId>
      <version>1.8.2</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.mahout.hadoop</groupId>
      <artifactId>hadoop-core</artifactId>
      <version>0.20.1</version>
    </dependency>
    <dependency>
      <groupId>commons-logging</groupId>
      <artifactId>commons-logging</artifactId>
      <version>1.1.1</version>
    </dependency>
    <dependency>
      <groupId>commons-httpclient</groupId>
      <artifactId>commons-httpclient</artifactId>
      <version>3.0</version>
    </dependency>
    <dependency>
      <groupId>commons-cli</groupId>
      <artifactId>commons-cli</artifactId>
      <version>1.2</version>
    </dependency>
  </dependencies>
</project>
With the pom above, use Maven to generate the Eclipse project (e.g. `mvn eclipse:eclipse`).
3. Write the code
Parsing code (the Map side):
package com.yajun.hadoop.temperature;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class MaxTemperatureMapper extends MapReduceBase implements
        Mapper<LongWritable, Text, Text, IntWritable> {

    public void map(LongWritable key, Text value,
            OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
        String line = value.toString();
        // The year occupies characters 15-19 of the fixed-width record.
        String year = line.substring(15, 19);
        // The air temperature (tenths of a degree Celsius) occupies characters 87-92.
        String temp = line.substring(87, 92);
        if (!missing(temp)) {
            int airTemperature = Integer.parseInt(temp);
            output.collect(new Text(year), new IntWritable(airTemperature));
        }
    }

    // "+9999" marks a missing reading in this format.
    private boolean missing(String temp) {
        return temp.equals("+9999");
    }
}
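The pom above pulls in JUnit and Mockito as test dependencies, presumably for unit-testing the mapper in isolation; a sketch of such a test (the class name `MaxTemperatureMapperTest` is mine) might look like:

```java
package com.yajun.hadoop.temperature;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.junit.Test;

public class MaxTemperatureMapperTest {

    @Test
    @SuppressWarnings("unchecked")
    public void parsesValidRecord() throws IOException {
        // The sample record from the background section.
        Text value = new Text("0043011990999991950051518004+68750+023550"
                + "FM-12+038299999V0203201N00261220001CN9999999N9-00111+99999999999");
        OutputCollector<Text, IntWritable> output = mock(OutputCollector.class);

        new MaxTemperatureMapper().map(new LongWritable(1), value, output,
                mock(Reporter.class));

        // Expect the year as key and the temperature in tenths of a degree as value.
        verify(output).collect(new Text("1950"), new IntWritable(-11));
    }
}
```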
Output code (the Reduce side):
package com.yajun.hadoop.temperature;

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class MaxTemperatureReducer extends MapReduceBase implements
        Reducer<Text, IntWritable, Text, IntWritable> {

    public void reduce(Text key, Iterator<IntWritable> values,
            OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
        // Keep the maximum temperature seen for this year.
        int maxValue = Integer.MIN_VALUE;
        while (values.hasNext()) {
            maxValue = Math.max(maxValue, values.next().get());
        }
        output.collect(key, new IntWritable(maxValue));
    }
}
Code to run the whole job:
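The driver listing is missing from the original post. Based on the mapper and reducer above and the two command-line arguments described in step 5, a minimal sketch using the old `mapred` API (the one these classes implement) would be:

```java
package com.yajun.hadoop.temperature;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class MaxTemperatureDriver {

    public static void main(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.println("Usage: MaxTemperatureDriver <input path> <output path>");
            System.exit(-1);
        }

        JobConf conf = new JobConf(MaxTemperatureDriver.class);
        conf.setJobName("Max temperature");

        // Input and output paths come from the two command-line arguments.
        FileInputFormat.addInputPath(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        conf.setMapperClass(MaxTemperatureMapper.class);
        conf.setReducerClass(MaxTemperatureReducer.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        JobClient.runJob(conf);
    }
}
```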
4. Set up the Hadoop plugin for Eclipse (if you don't have it, installing it is easy: download it from https://issues.apache.org/jira/browse/MAPREDUCE-1262 and drop it into Eclipse's dropins directory).
Configure the plugin with the same settings as Hadoop itself.
5. Run the code
First copy input.txt into HDFS:
./hadoop fs -put /home/txy/work/balanceofworld/balance/balance.hadoop/src/main/resources/temperature/input.txt /user/txy/src/main/resources/temperature/input.txt
Running MaxTemperatureDriver requires two command-line arguments:
1. Input file: src/main/resources/temperature/input.txt (which maps to /user/txy/src/main/resources/temperature/input.txt in HDFS)
2. Output path: src/main/resources/temperature/output.txt (which maps to /user/txy/src/main/resources/temperature/output.txt in HDFS)
Then just right-click in Eclipse and run it on Hadoop.
This post was reposted from http://yjhexy.iteye.com/blog/608105