Hadoop 2.2 Fully Distributed, Highly Reliable Installation Guide

1. Cluster Environment

Three machines, one master and two slaves:

192.168.41.100  master

192.168.41.101  slave1

192.168.41.102  slave2

The operating system is CentOS 6.4.


2. Installation Steps

① Pre-installation preparation

(1) On all three machines, edit the hosts file and the hostname, then reboot:

vim /etc/hosts

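All three machines should end up with /etc/hosts entries matching the cluster layout in section 1:

192.168.41.100  master
192.168.41.101  slave1
192.168.41.102  slave2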

(2) Configure passwordless SSH login between the nodes (see the hadoop1 cluster installation post for details; a minimal sketch is below).
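A minimal sketch of the passwordless-SSH setup, run as root on master (repeat from any node that needs to log in to the others):

ssh-keygen -t rsa          # accept the defaults; leave the passphrase empty
ssh-copy-id root@slave1    # append master's public key on slave1
ssh-copy-id root@slave2    # append master's public key on slave2
ssh slave1                 # verify: should log in without a password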

(3) Install the JDK (see the hadoop1 cluster installation post for details).
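Once the JDK is installed, make sure JAVA_HOME is set; the install path below is an assumption, substitute your own:

export JAVA_HOME=/usr/java/jdk1.7.0_45   # assumed JDK install path
export PATH=$JAVA_HOME/bin:$PATH
java -version                            # verify the JDK is on the PATH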


② Unpack Hadoop

tar zvxf /usr/local/hadoop-2.2.0.tar.gz -C /usr
cd /usr
mv hadoop-2.2.0 hadoop

After this, the Hadoop tree lives under /usr/hadoop.



③ Hadoop configuration

(1) On master, create the following local directories:

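The paths below match hadoop.tmp.dir, dfs.namenode.name.dir, and dfs.datanode.data.dir in the configuration files that follow:

mkdir -p /root/tmp        # hadoop.tmp.dir
mkdir -p /root/dfs/name   # dfs.namenode.name.dir
mkdir -p /root/dfs/data   # dfs.datanode.data.dir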

(2) Rename Hadoop's default templates so that all of the following files exist:

hadoop-env.sh

yarn-env.sh

slaves

core-site.xml

hdfs-site.xml

mapred-site.xml

yarn-site.xml

The command is as follows:

cd /usr/hadoop/etc/hadoop
mv mapred-site.xml.template mapred-site.xml

(3) Edit the configuration files

Configuration file 1: hadoop-env.sh

Set the JAVA_HOME value to your JDK install path.

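For example (the JDK path here is an assumption; use your actual install location):

export JAVA_HOME=/usr/java/jdk1.7.0_45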

Configuration file 2: yarn-env.sh

Set the JAVA_HOME value here in the same way.


Configuration file 3: slaves (this file lists all slave nodes)

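Given the cluster layout in section 1, the file contains:

slave1
slave2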

Configuration file 4: core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:8020</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/root/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
    </property>
</configuration>

Configuration file 5: hdfs-site.xml

<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/root/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/root/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>

Configuration file 6: mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>

Configuration file 7: yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
</configuration>


④ Copy to the other nodes

scp -r /usr/hadoop root@slave1:/usr
scp -r /usr/hadoop root@slave2:/usr

⑤ Configure environment variables

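A typical addition to /etc/profile for this layout (a sketch; adjust to your own paths):

export HADOOP_HOME=/usr/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

Run source /etc/profile afterwards so the change takes effect in the current shell.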


⑥ Start up and verify

(1) Format the NameNode

hdfs namenode -format

or, with the older deprecated command:

hadoop namenode -format

(2) Start HDFS

start-dfs.sh

At this point master runs the NameNode and SecondaryNameNode, and each slave runs a DataNode; verify with jps as sketched below.

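For example (jps ships with the JDK; the exact pids will differ):

# on master
jps
# expected to list: NameNode, SecondaryNameNode

# on each slave
jps
# expected to list: DataNode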

(3) Start YARN

start-yarn.sh

After startup, master additionally runs the ResourceManager and each slave a NodeManager.

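Checking again with jps:

# on master
jps
# now also lists: ResourceManager

# on each slave
jps
# now also lists: NodeManager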

At this point the whole cluster is configured.


On Windows, you can edit the local hosts file and then view the cluster web UI at http://master:8088.
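Add a line like the following to C:\Windows\System32\drivers\etc\hosts on the Windows machine:

192.168.41.100  master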



3. Issues to Watch Out For

Configuring Hadoop 2.2 is fairly simple, but you may still run into all kinds of problems. The most common complaint is that expected processes do not show up.

There are roughly two reasons why processes fail to appear:

(1) Your configuration files are wrong.

Check the configuration files carefully; hostnames must not contain stray spaces or similar characters.

(2) Linux permissions are incorrect.

The files that most often cause trouble are core-site.xml and hdfs-site.xml.

core-site.xml:

<property>
    <name>hadoop.tmp.dir</name>
    <value>file:/root/tmp</value>
    <description>A base for other temporary directories.</description>
</property>

This parameter is Hadoop's temporary file directory, and the file: prefix means a local Linux filesystem path. You must make sure the directory (/root/tmp here) is owned by the user that runs Hadoop. If you created a user such as zhangsan or lisi, the directory would instead be something like /home/zhangsan/tmp, and the same applies to the paths in hdfs-site.xml.
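For example, for the hypothetical user zhangsan mentioned above (the dfs path follows the same pattern as hdfs-site.xml):

chown -R zhangsan:zhangsan /home/zhangsan/tmp /home/zhangsan/dfs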


This article is from the "Xlows" blog; please contact the author before reposting.
