MapReduce Tarball
You need a MapReduce tarball. If you do not have one, create it from the source by running the following commands:
$ mvn clean install -DskipTests
$ cd hadoop-mapreduce-project
$ mvn clean install assembly:assembly -Pnative
NOTE: You will need protoc 2.5.0 installed.
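If you are not sure which protoc is on your PATH, you can check its version first:
$ protoc --version
libprotoc 2.5.0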
If you want to skip the native build, omit the -Pnative argument from the maven command. The tarball is generated in the target/ directory.
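For example, after the build finishes you should be able to locate the tarball from the hadoop-mapreduce-project directory like this (the exact file name depends on the Hadoop version you built):
$ ls target/*.tar.gz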
Setting up the Environment
Assuming you have installed hadoop-common and hadoop-hdfs and have set $HADOOP_COMMON_HOME and $HADOOP_HDFS_HOME, untar the hadoop mapreduce tarball and set the environment variable $HADOOP_MAPRED_HOME to the untarred directory. Also set the $HADOOP_YARN_HOME environment variable.
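A minimal sketch of these exports, assuming the tarball was extracted to a hypothetical /opt/hadoop-mapreduce directory and that $HADOOP_YARN_HOME points at the same tree:
$ export HADOOP_MAPRED_HOME=/opt/hadoop-mapreduce
$ export HADOOP_YARN_HOME=$HADOOP_MAPRED_HOME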
NOTE: The following instructions assume that HDFS is already up and running.
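One quick way to confirm that HDFS is reachable is to ask for a cluster report:
$ $HADOOP_HDFS_HOME/bin/hdfs dfsadmin -report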
Setting up the Configuration Files
You need to update the configuration files before starting the ResourceManager and NodeManager. Assume that $HADOOP_CONF_DIR is the directory holding your configuration files and that the configuration for HDFS and core-site.xml is already in place there. That leaves two files you still need to set up: mapred-site.xml and yarn-site.xml.
Configuring mapred-site.xml
Add the following configuration to your mapred-site.xml:
<property>
    <name>mapreduce.cluster.temp.dir</name>
    <value></value>
    <description>No description</description>
    <final>true</final>
</property>

<property>
    <name>mapreduce.cluster.local.dir</name>
    <value></value>
    <description>No description</description>
    <final>true</final>
</property>
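Both values are left empty above for you to fill in. As a sketch, assuming the hypothetical local paths /tmp/mapred/temp and /tmp/mapred/local, create the directories first and then put those paths into the two <value> elements:
$ mkdir -p /tmp/mapred/temp /tmp/mapred/local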
Configuring yarn-site.xml
Add the following configuration to your yarn-site.xml:
<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>host:port</value>
    <description>host is the hostname of the resource manager and
    port is the port on which the NodeManagers contact the Resource Manager.
    </description>
</property>

<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>host:port</value>
    <description>host is the hostname of the resourcemanager and port is the port
    on which the Applications in the cluster talk to the Resource Manager.
    </description>
</property>

<property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
    <description>In case you do not want to use the default scheduler</description>
</property>

<property>
    <name>yarn.resourcemanager.address</name>
    <value>host:port</value>
    <description>the host is the hostname of the ResourceManager and the port is the port on
    which the clients can talk to the Resource Manager.
    </description>
</property>

<property>
    <name>yarn.nodemanager.local-dirs</name>
    <value></value>
    <description>the local directories used by the nodemanager</description>
</property>

<property>
    <name>yarn.nodemanager.address</name>
    <value>0.0.0.0:port</value>
    <description>the nodemanagers bind to this port</description>
</property>

<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>10240</value>
    <description>the amount of memory on the NodeManager in MB (here 10240 MB, i.e. 10 GB)</description>
</property>

<property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/app-logs</value>
    <description>directory on hdfs where the application logs are moved to
    </description>
</property>

<property>
    <name>yarn.nodemanager.log-dirs</name>
    <value></value>
    <description>the directories used by Nodemanagers as log directories</description>
</property>

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    <description>shuffle service that needs to be set for Map Reduce to run
    </description>
</property>
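Note that yarn.nodemanager.remote-app-log-dir is an HDFS path, while yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs are local paths. A sketch of preparing them, assuming the hypothetical local paths /tmp/yarn/nm-local and /tmp/yarn/nm-logs:
$ mkdir -p /tmp/yarn/nm-local /tmp/yarn/nm-logs
$ $HADOOP_HDFS_HOME/bin/hdfs dfs -mkdir -p /app-logs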
Configuring capacity-scheduler.xml
Make sure the root queues are set up in capacity-scheduler.xml:
<property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>unfunded,default</value>
</property>

<property>
    <name>yarn.scheduler.capacity.root.capacity</name>
    <value>100</value>
</property>

<property>
    <name>yarn.scheduler.capacity.root.unfunded.capacity</name>
    <value>50</value>
</property>

<property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>50</value>
</property>
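The capacities of the child queues under a parent must add up to 100, as they do here (50 + 50). Once the ResourceManager is running (see below), you can inspect the resulting queue tree through its REST API, assuming the default web port 8088 and a placeholder rm-host:
$ curl http://rm-host:8088/ws/v1/cluster/scheduler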
Running the Daemons
Assume that the environment variables $HADOOP_COMMON_HOME, $HADOOP_HDFS_HOME, $HADOOP_MAPRED_HOME, $HADOOP_YARN_HOME, $JAVA_HOME and $HADOOP_CONF_DIR have all been set appropriately. Set $YARN_CONF_DIR to the same value as $HADOOP_CONF_DIR.
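For example:
$ export YARN_CONF_DIR=$HADOOP_CONF_DIR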
Run the following commands to start the ResourceManager and NodeManager:
$ cd $HADOOP_MAPRED_HOME
$ sbin/yarn-daemon.sh start resourcemanager
$ sbin/yarn-daemon.sh start nodemanager
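You can verify that both daemons came up with the standard JDK jps tool (the process ids below are placeholders):
$ jps
12345 ResourceManager
12346 NodeManager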
You can run the randomwriter example to check that everything is up and running:
$ $HADOOP_COMMON_HOME/bin/hadoop jar hadoop-examples.jar randomwriter out
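If the job completes successfully, its output should appear under out on HDFS, which you can list with:
$ $HADOOP_HDFS_HOME/bin/hdfs dfs -ls out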