Unit / Author: National Center for High-performance Computing - Grid Technology Group, Wei-Yu Chen, waue @ nchc.org.tw
The latest Eclipse 3.5 on Ubuntu 9.04 with hadoop-eclipse-plugin 0.20.1 passed preliminary testing; all functions work as expected.
On Ubuntu 9.10, however, every Eclipse version appears to hit a GTK GUI bug. It has been suggested that setting GDK_NATIVE_WINDOWS=1 works around it, but preliminary testing indicates it does not help.
Your installation does not have to match this guide exactly; it is provided for reference. As long as Java, Hadoop, and Eclipse are installed and you know where each one lives, you can follow along.
First, install the basic Java packages:
$ sudo apt-get install java-common sun-java6-bin sun-java6-jdk sun-java6-jre
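To double-check that the Sun JDK is now the active Java runtime (assuming java is already on your PATH), an optional quick verification is:
$ java -version
The reported version should start with 1.6.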
1. Download the javadoc archive (jdk-6u10-docs.zip) from the download link.
2. After downloading, put the file under /tmp/.
3. Run:
$ sudo apt-get install sun-java6-doc
$ sudo apt-get install ssh
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ ssh localhost
If ssh localhost logs in without asking for a password, the setup is correct.
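As an additional, optional check, a one-off remote command should also run without a password prompt; this is only a minimal sketch and the echoed text is arbitrary:
$ ssh localhost 'echo passwordless ssh is working'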
Install Hadoop 0.20 into /opt/ and name the directory hadoop:
$ cd ~
$ wget http://apache.ntu.edu.tw/hadoop/core/hadoop-0.20.0/hadoop-0.20.0.tar.gz
$ tar zxvf hadoop-0.20.0.tar.gz
$ sudo mv hadoop-0.20.0 /opt/
$ sudo chown -R waue:waue /opt/hadoop-0.20.0
$ sudo ln -sf /opt/hadoop-0.20.0 /opt/hadoop
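Before editing the configuration, it may help to confirm that the symlink points at a usable Hadoop tree; a possible check, assuming the paths used above, is:
$ /opt/hadoop/bin/hadoop version
This should report Hadoop 0.20.0.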
Edit /opt/hadoop/conf/hadoop-env.sh and add:
export JAVA_HOME=/usr/lib/jvm/java-6-sun
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:/opt/hadoop/bin
Edit /opt/hadoop/conf/core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp/hadoop/hadoop-${user.name}</value>
  </property>
</configuration>
Edit /opt/hadoop/conf/hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
Edit /opt/hadoop/conf/mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
$ cd /opt/hadoop
$ source /opt/hadoop/conf/hadoop-env.sh
$ hadoop namenode -format
$ start-all.sh
$ hadoop fs -put conf input
$ hadoop fs -ls
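Before moving on, it is worth confirming that the pseudo-distributed daemons actually started. A minimal sanity check, assuming the Sun JDK's jps tool is on your PATH, is:
$ jps
$ hadoop fs -ls input
jps should list NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker, and the second command should show the configuration files that were just uploaded.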
$ cd ~
$ wget http://ftp.cs.pu.edu.tw/pub/eclipse/eclipse/downloads/drops/R-3.4.2-200902111700/eclipse-SDK-3.4.2-linux-gtk.tar.gz
$ cd ~
$ tar -zxvf eclipse-SDK-3.4.2-linux-gtk.tar.gz
$ sudo mv eclipse /opt
$ sudo ln -sf /opt/eclipse/eclipse /usr/local/bin/
$ cd /opt/hadoop
$ sudo cp /opt/hadoop/contrib/eclipse-plugin/hadoop-0.20.0-eclipse-plugin.jar /opt/eclipse/plugins
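To verify that Eclipse will actually see the plugin, an optional quick check (jar name as assumed above) is:
$ ls /opt/eclipse/plugins/ | grep hadoop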
$ sudo vim /opt/eclipse/eclipse.ini
-startup
plugins/org.eclipse.equinox.launcher_1.0.101.R34x_v20081125.jar
--launcher.library
plugins/org.eclipse.equinox.launcher.gtk.linux.x86_1.0.101.R34x_v20080805
-showsplash
org.eclipse.platform
--launcher.XXMaxPermSize
512m
-vmargs
-Xms40m
-Xmx512m
$ eclipse &
On first launch, Eclipse asks where to put the workspace; here we keep the default.
PS: the steps that follow are performed in the Eclipse GUI.
Window -> Open Perspective -> Other... -> Map/Reduce
File -> New -> Project -> Map/Reduce -> Map/Reduce Project -> Next
Create the MapReduce project (1)
Create the MapReduce project (2)
Project name -> enter: icas (any name will do)
Use default Hadoop -> Configure Hadoop install... -> enter: "/opt/hadoop" -> OK
Finish
Since the icas project was just created, Eclipse now shows the new project in the left pane. Right-click that folder and choose Properties.
Step 1. Right-click the project and choose Properties for detailed settings.
Step 2. Enter the project's detailed settings page.
Source ... -> enter: /opt/hadoop-0.20.0/src
Javadoc ... -> enter: file:/opt/hadoop/docs/api/
Step 3. After the Hadoop javadoc has been set (2)
Step 4. Set the javadoc for Java itself (3)
When these settings are done, return to the Eclipse main window.
Step 1. In the "Map/Reduce Locations" tab at the lower right of the window (the yellow elephant icon), click the blue elephant icon to the right of the gear:
Step 2. Configure the connection between Eclipse and Hadoop (2)
Location Name -> enter: hadoop (any name will do)
Map/Reduce Master -> Host -> enter: localhost
Map/Reduce Master -> Port -> enter: 9001
DFS Master -> Port -> enter: 9000
Finish
Once this is set, a blue elephant appears in the panel below, and expanding the folders on the left shows the file structure inside HDFS.
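The tree shown under DFS Locations should match what the command line reports; as an optional cross-check, you can compare it against:
$ hadoop fs -ls
$ hadoop fs -ls input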
File -> New -> Mapper
Source folder -> enter: icas/src
Package: Sample
Name: mapper
package Sample;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class mapper extends Mapper<Object, Text, Text, IntWritable> {

  private final static IntWritable one = new IntWritable(1);
  private Text word = new Text();

  // Split each input line into tokens and emit (word, 1) for every token.
  public void map(Object key, Text value, Context context)
      throws IOException, InterruptedException {
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      word.set(itr.nextToken());
      context.write(word, one);
    }
  }
}
Source folder -> enter: icas/src
Package: Sample
Name: reducer
package Sample;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class reducer extends Reducer<Text, IntWritable, Text, IntWritable> {

  private IntWritable result = new IntWritable();

  // Sum the counts for each word and emit (word, total).
  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable val : values) {
      sum += val.get();
    }
    result.set(sum);
    context.write(key, result);
  }
}
Create WordCount.java. This file drives the mapper and the reducer, so choose Map/Reduce Driver.
Source folder -> enter: icas/src
Package: Sample
Name: WordCount.java
package Sample;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }
    // Wire the mapper, combiner and reducer together and set the output types.
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(mapper.class);
    job.setCombinerClass(reducer.class);
    job.setReducerClass(reducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
$ cd workspace/icas
$ ls src/Sample/
mapper.java  reducer.java  WordCount.java
$ ls bin/Sample/
mapper.class  reducer.class  WordCount.class
A helpful Hadoop user has shared a way to get the run-on-hadoop feature working again.
The likely cause is that the Hadoop eclipse-plugin was developed against Eclipse Europa, and Eclipse 3.2, 3.3 and 3.4 all differ from one another to some degree.
So if you first create the project with Eclipse Europa, close that Eclipse, and then reopen the project with Eclipse 3.4, the project can use the run-on-hadoop feature.
Give it a try if you are interested! (Thanks to Mr. Hsieh of the Feng Chia University CS graduate institute.)
$ cd /home/waue/workspace/icas/
$ gedit Makefile
JarFile="sample-0.1.jar"
MainFunc="Sample.WordCount"
LocalOutDir="/tmp/output"

all: help

jar:
	jar -cvf ${JarFile} -C bin/ .

run:
	hadoop jar ${JarFile} ${MainFunc} input output

clean:
	hadoop fs -rmr output

output:
	rm -rf ${LocalOutDir}
	hadoop fs -get output ${LocalOutDir}
	gedit ${LocalOutDir}/part-r-00000 &

help:
	@echo "Usage:"
	@echo " make jar - Build Jar File."
	@echo " make clean - Clean up Output directory on HDFS."
	@echo " make run - Run your MapReduce code on Hadoop."
	@echo " make output - Download and show output file"
	@echo " make help - Show Makefile options."
	@echo " "
	@echo "Example:"
	@echo " make jar; make run; make output; make clean"
$ cd /home/waue/workspace/icas/
$ make
Usage:
 make jar - Build Jar File.
 make clean - Clean up Output directory on HDFS.
 make run - Run your MapReduce code on Hadoop.
 make output - Download and show output file
 make help - Show Makefile options.

Example:
 make jar; make run; make output; make clean
$ make jar
$ make run
$ make output
$ make clean