Original work by 林炳文 (Evankaka). Please credit the source when reposting: http://blog.csdn.net/evankaka
Abstract: This article walks through installing and configuring Hadoop 2.7.2 on Ubuntu 14.04.
Edit the configuration file ~/.bashrc:
vi ~/.bashrc
Append the following at the end of the file (the stray $JAVA_HOME entry in the original PATH has been dropped, since the JDK root itself does not belong on the PATH):
# set java environment
export JAVA_HOME=/usr/java/jdk/jdk1.8.0_65
export JRE_HOME=/usr/java/jdk/jdk1.8.0_65/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
source ~/.bashrc
Install the SSH server:
sudo apt-get install openssh-server
Generate the private and public keys:
ssh-keygen -t rsa -P ""
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh localhost
exit
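If ssh localhost still prompts for a password after this, the usual culprit is file permissions: sshd ignores authorized_keys unless the key directory and file are private to the user. A minimal fix, reflecting standard OpenSSH behavior rather than a step from the original post:

```shell
# sshd ignores authorized_keys unless the directory and file are
# private to the user, so tighten the permissions explicitly
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
```

After tightening the permissions, ssh localhost should log in without a password prompt.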
Move the archive to /usr/hadoop/ and extract it there:
mv hadoop-2.7.2.tar.gz /usr/hadoop/
cd /usr/hadoop
tar -xzvf hadoop-2.7.2.tar.gz
The archive extracts into the directory hadoop-2.7.2. Next, edit ~/.bashrc again:
gedit ~/.bashrc
#HADOOP VARIABLES START
export HADOOP_INSTALL=/usr/hadoop/hadoop-2.7.2
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
#HADOOP VARIABLES END
Make sure the paths are set correctly, then save and close the file before continuing.
Run the following to make the environment variables take effect:
source ~/.bashrc
Edit hadoop-env.sh (under /usr/hadoop/hadoop-2.7.2/etc/hadoop):
gedit hadoop-env.sh
Set the following:
export JAVA_HOME=/usr/java/jdk/jdk1.8.0_65
export HADOOP_CONF_DIR=/usr/hadoop/hadoop-2.7.2/etc/hadoop
Create a temporary directory:
cd /usr/hadoop/hadoop-2.7.2
mkdir tmp
5.1 Configure core-site.xml
Switch to the configuration file directory:
cd /usr/hadoop/hadoop-2.7.2/etc/hadoop
Edit /usr/hadoop/hadoop-2.7.2/etc/hadoop/core-site.xml so that its <configuration> section reads as shown below:
gedit core-site.xml
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/hadoop/hadoop-2.7.2/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
5.2 Configure hdfs-site.xml
gedit hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/hadoop/hadoop-2.7.2/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/hadoop/hadoop-2.7.2/tmp/dfs/data</value>
    </property>
</configuration>
5.3 Configure yarn-site.xml
gedit yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>
5.4 Configure mapred-site.xml
Create mapred-site.xml from the template file:
mv mapred-site.xml.template mapred-site.xml
Then open it:
gedit mapred-site.xml
and change its contents to:
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
    </property>
</configuration>
(Note: mapred.job.tracker is a Hadoop 1.x property; on Hadoop 2.x with YARN, the current equivalent is setting mapreduce.framework.name to yarn.)
Format the NameNode:
hdfs namenode -format
If "Exitting with status 0" appears near the end of the output (about five lines from the bottom), the format succeeded; "Exitting with status 1" means it failed.
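This check can be scripted. The helper below is a sketch, not part of Hadoop (check_format is a name invented here): it greps a saved copy of the format output for the success marker, which Hadoop's own log really does spell "Exitting":

```shell
# check_format LOGFILE
# succeeds when the saved "hdfs namenode -format" output contains
# the status-0 marker (Hadoop's log spells it "Exitting")
check_format() {
    grep -q "Exitting with status 0" "$1"
}
```

Capture the output with hdfs namenode -format > format.log 2>&1, then run check_format format.log && echo OK.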
cd /usr/hadoop/hadoop-2.7.2/sbin
start-all.sh
(The script is located in /usr/hadoop/hadoop-2.7.2/sbin.)
Or use:
start-dfs.sh
start-yarn.sh
Check that each daemon started correctly by running jps. If everything is normal, the output lists NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager (plus Jps itself).
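As a quick sanity check, the expected daemons can be counted directly from the jps output. count_daemons below is a hypothetical helper, not a Hadoop tool; it reads jps output from stdin:

```shell
# count_daemons: counts lines naming one of the five daemons expected
# on a single-node setup (Jps itself is deliberately excluded)
count_daemons() {
    grep -cE 'NameNode|DataNode|SecondaryNameNode|ResourceManager|NodeManager'
}
```

When everything is up, jps | count_daemons should print 5.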
http://localhost:8088/ - Hadoop management interface (YARN)
http://localhost:50070/ - Hadoop DFS status
If jps or java cannot be found, register the JDK binaries with update-alternatives (adjust the path to match your JDK installation; this guide uses /usr/java/jdk/jdk1.8.0_65):
sudo update-alternatives --install /usr/bin/jps jps /usr/java/jdk/jdk1.8.0_65/bin/jps 1
sudo update-alternatives --install /usr/bin/javac javac /usr/java/jdk/jdk1.8.0_65/bin/javac 300
sudo update-alternatives --install /usr/bin/java java /usr/java/jdk/jdk1.8.0_65/bin/java 300
To stop Hadoop:
cd /usr/hadoop/hadoop-2.7.2/sbin
stop-all.sh
(The script is located in /usr/hadoop/hadoop-2.7.2/sbin.)
Or use:
stop-dfs.sh
stop-yarn.sh
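After the stop scripts finish, jps should list only Jps itself. A complementary check in the same sketch style (no_daemons_left is an invented name, not a Hadoop tool):

```shell
# no_daemons_left: succeeds when the jps output on stdin contains
# none of the Hadoop daemons, i.e. shutdown completed
no_daemons_left() {
    ! grep -qE 'NameNode|DataNode|SecondaryNameNode|ResourceManager|NodeManager'
}
```

Run jps | no_daemons_left && echo "all stopped" to confirm a clean shutdown.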