Configuring an Eclipse Development Environment for Hadoop Applications on Ubuntu

Environment: VMware 8.0 and Ubuntu 11.04

Step 1: Download eclipse-SDK-4.2.1-linux-gtk.tar.gz

http://mirrors.ustc.edu.cn/eclipse/eclipse/downloads/drops4/R-4.2.1-201209141800/eclipse-SDK-4.2.1-linux-gtk.tar.gz

Note: download the 32-bit Linux build of Eclipse, not the 64-bit one; otherwise Eclipse will fail to start.
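
If you prefer the terminal, the download and extraction look roughly like this (the target directory /home/tanglg1987 is an assumption, matching the home directory used later in this tutorial):

wget http://mirrors.ustc.edu.cn/eclipse/eclipse/downloads/drops4/R-4.2.1-201209141800/eclipse-SDK-4.2.1-linux-gtk.tar.gz
tar -xzf eclipse-SDK-4.2.1-linux-gtk.tar.gz -C /home/tanglg1987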

Step 2: Download the Hadoop Eclipse plugin

https://issues.apache.org/jira/secure/attachment/12460491/hadoop-eclipse-plugin-0.20.3-SNAPSHOT.jar

Rename the downloaded plugin to "hadoop-0.20.2-eclipse-plugin.jar".

Copy hadoop-0.20.2-eclipse-plugin.jar into the eclipse/plugins directory and restart Eclipse.
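
As a terminal sketch (assuming the jar was saved to the current directory and Eclipse was extracted to /home/tanglg1987/eclipse as above):

mv hadoop-eclipse-plugin-0.20.3-SNAPSHOT.jar hadoop-0.20.2-eclipse-plugin.jar
cp hadoop-0.20.2-eclipse-plugin.jar /home/tanglg1987/eclipse/plugins/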

Step 3: Configure the Hadoop installation path

Go to Window -> Preferences, select "Hadoop Map/Reduce", click "Browse..." and choose the path of your Hadoop installation folder.
This setting has nothing to do with the runtime environment; it only lets the plugin automatically import all the jars from the Hadoop root directory and its lib directory when you create a new project.

[Screenshot 1]

Step 4: Add a MapReduce environment

[Screenshot 2]

At the bottom of Eclipse, next to the Console, a new tab named "Map/Reduce Locations" appears. Right-click in the blank area below it and choose "New Hadoop location...", as shown in the screenshot:

[Screenshot 3]
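
The host and port you enter for the DFS Master must match fs.default.name in your Hadoop configuration; judging from the hdfs://localhost:9100 URIs used later in this tutorial, that would be localhost:9100 here. A minimal core-site.xml consistent with this setup is sketched below (an assumption, not taken from the original post; the Map/Reduce Master entry must likewise match mapred.job.tracker in your mapred-site.xml):

<!-- core-site.xml: NameNode address assumed from the hdfs://localhost:9100 URIs in this tutorial -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9100</value>
  </property>
</configuration>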

Step 5: Browse and modify HDFS content from Eclipse

After the previous step, the configured HDFS should appear in the "Project Explorer" on the left. Right-click it to create folders, delete folders, upload files, download files, delete files, and so on. Note: changes are not reflected in Eclipse immediately after each operation; you must refresh the view.

Create two files, file01.txt and file02.txt, in the /home/tanglg1987/input directory.

file01.txt contains:

hello hadoop

file02.txt contains:

hello world
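
A quick way to create them from a terminal (a minimal sketch; mkdir -p creates the directory if it does not already exist):

mkdir -p /home/tanglg1987/input
echo "hello hadoop" > /home/tanglg1987/input/file01.txt
echo "hello world" > /home/tanglg1987/input/file02.txt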

Upload the local files to HDFS:

hadoop fs -put /home/tanglg1987/input/file01.txt input
hadoop fs -put /home/tanglg1987/input/file02.txt input

[Screenshot 4]

Step 6: Create a project

Go to File -> New -> Project, select "Map/Reduce Project", enter a project name, and create the project. The plugin automatically imports all the jars from the Hadoop root directory and its lib directory.

[Screenshot 5]

[Screenshot 6]

Step 7: Create a WordCount.java using Hadoop's built-in TokenCountMapper and LongSumReducer. TokenCountMapper splits each input line into tokens and emits each token with a count of 1; LongSumReducer sums those counts per key. The code is as follows:

package com.baison.action;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.TokenCountMapper;
import org.apache.hadoop.mapred.lib.LongSumReducer;

public class WordCount {
	public static void main(String[] args) {
		// Job configuration; passing WordCount.class lets Hadoop locate the job jar.
		JobConf conf = new JobConf(WordCount.class);
		// HDFS input and output paths, hard-coded for this tutorial's setup.
		String[] arg = { "hdfs://localhost:9100/user/tanglg1987/input",
				"hdfs://localhost:9100/user/tanglg1987/output" };
		FileInputFormat.addInputPath(conf, new Path(arg[0]));
		FileOutputFormat.setOutputPath(conf, new Path(arg[1]));
		// The job emits (word, total count) pairs.
		conf.setOutputKeyClass(Text.class);
		conf.setOutputValueClass(LongWritable.class);
		// TokenCountMapper emits (token, 1); LongSumReducer sums the counts,
		// and also serves as a combiner to pre-aggregate on the map side.
		conf.setMapperClass(TokenCountMapper.class);
		conf.setCombinerClass(LongSumReducer.class);
		conf.setReducerClass(LongSumReducer.class);
		try {
			// Submit the job and block until it completes.
			JobClient.runJob(conf);
		} catch (Exception e) {
			e.printStackTrace();
		}
	}
}
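
Incidentally, the "No job jar file set" warning in the log below appears because the job is launched from Eclipse's class folder rather than from a jar. If you package the class into a jar, you could also submit it from the command line (the jar name wordcount.jar is hypothetical):

hadoop jar wordcount.jar com.baison.action.WordCount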

Step 8: Run WordCount

Choose Run As -> Run on Hadoop, select the MapReduce environment configured earlier, and click "Finish" to run the job.

The console output during the run is as follows:

12/10/18 22:53:38 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
12/10/18 22:53:38 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/10/18 22:53:38 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
12/10/18 22:53:38 INFO mapred.FileInputFormat: Total input paths to process : 2
12/10/18 22:53:39 INFO mapred.JobClient: Running job: job_local_0001
12/10/18 22:53:39 INFO mapred.FileInputFormat: Total input paths to process : 2
12/10/18 22:53:39 INFO mapred.MapTask: numReduceTasks: 1
12/10/18 22:53:39 INFO mapred.MapTask: io.sort.mb = 100
12/10/18 22:53:39 INFO mapred.MapTask: data buffer = 79691776/99614720
12/10/18 22:53:39 INFO mapred.MapTask: record buffer = 262144/327680
12/10/18 22:53:39 INFO mapred.MapTask: Starting flush of map output
12/10/18 22:53:39 INFO mapred.MapTask: Finished spill 0
12/10/18 22:53:39 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
12/10/18 22:53:39 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/file01.txt:0+12
12/10/18 22:53:39 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000000_0' done.
12/10/18 22:53:39 INFO mapred.MapTask: numReduceTasks: 1
12/10/18 22:53:39 INFO mapred.MapTask: io.sort.mb = 100
12/10/18 22:53:39 INFO mapred.MapTask: data buffer = 79691776/99614720
12/10/18 22:53:39 INFO mapred.MapTask: record buffer = 262144/327680
12/10/18 22:53:39 INFO mapred.MapTask: Starting flush of map output
12/10/18 22:53:39 INFO mapred.MapTask: Finished spill 0
12/10/18 22:53:39 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000001_0 is done. And is in the process of commiting
12/10/18 22:53:39 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/file02.txt:0+13
12/10/18 22:53:39 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000001_0' done.
12/10/18 22:53:39 INFO mapred.LocalJobRunner:
12/10/18 22:53:39 INFO mapred.Merger: Merging 2 sorted segments
12/10/18 22:53:39 INFO mapred.Merger: Down to the last merge-pass, with 2 segments left of total size: 69 bytes
12/10/18 22:53:39 INFO mapred.LocalJobRunner:
12/10/18 22:53:39 INFO mapred.TaskRunner: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
12/10/18 22:53:39 INFO mapred.LocalJobRunner:
12/10/18 22:53:39 INFO mapred.TaskRunner: Task attempt_local_0001_r_000000_0 is allowed to commit now
12/10/18 22:53:39 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to hdfs://localhost:9100/user/tanglg1987/output
12/10/18 22:53:39 INFO mapred.LocalJobRunner: reduce > reduce
12/10/18 22:53:39 INFO mapred.TaskRunner: Task 'attempt_local_0001_r_000000_0' done.
12/10/18 22:53:40 INFO mapred.JobClient: map 100% reduce 100%
12/10/18 22:53:40 INFO mapred.JobClient: Job complete: job_local_0001
12/10/18 22:53:40 INFO mapred.JobClient: Counters: 15
12/10/18 22:53:40 INFO mapred.JobClient: FileSystemCounters
12/10/18 22:53:40 INFO mapred.JobClient: FILE_BYTES_READ=49601
12/10/18 22:53:40 INFO mapred.JobClient: HDFS_BYTES_READ=62
12/10/18 22:53:40 INFO mapred.JobClient: FILE_BYTES_WRITTEN=100852
12/10/18 22:53:40 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=25
12/10/18 22:53:40 INFO mapred.JobClient: Map-Reduce Framework
12/10/18 22:53:40 INFO mapred.JobClient: Reduce input groups=3
12/10/18 22:53:40 INFO mapred.JobClient: Combine output records=4
12/10/18 22:53:40 INFO mapred.JobClient: Map input records=2
12/10/18 22:53:40 INFO mapred.JobClient: Reduce shuffle bytes=0
12/10/18 22:53:40 INFO mapred.JobClient: Reduce output records=3
12/10/18 22:53:40 INFO mapred.JobClient: Spilled Records=8
12/10/18 22:53:40 INFO mapred.JobClient: Map output bytes=57
12/10/18 22:53:40 INFO mapred.JobClient: Map input bytes=25
12/10/18 22:53:40 INFO mapred.JobClient: Combine input records=4
12/10/18 22:53:40 INFO mapred.JobClient: Map output records=4
12/10/18 22:53:40 INFO mapred.JobClient: Reduce input records=4

View the results:

In the output directory you can see the WordCount program's output file. You will also find a logs folder containing the logs from the run.
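
You can also print the result directly from the command line; part-00000 is the conventional name of a single reducer's output file. Given the two input files above, the expected counts are hadoop 1, hello 2, world 1, which agrees with the "Reduce output records=3" counter in the log:

hadoop fs -cat output/part-00000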

[Screenshot 7]

 
