Hadoop Fully Distributed Installation

The installation environment is as follows:

CentOS 6.0 (32-bit), JDK 1.7 (the latest release at the time, jdk1.7.0_21), and Hadoop 0.20.2.

Preliminary step: prepare the installation environment

a. Prepare the lab environment on the two existing ESXi servers.

b. Install and configure three virtual machines running CentOS 6.0, each with 1 GB of RAM and a 20 GB disk. Install the first one, then create the other two by cloning it.

c. The servers' IP addresses and hostnames are listed below. Note that the hosts file on each of the three servers must contain the name-to-IP mappings of the others, so that all three machines can resolve one another by either IP address or hostname.

    hadoop-1:172.168.16.61

    hadoop-2:172.168.16.62

    hadoop-3:172.168.16.63

    The configuration is as follows:

    [root@hadoop-1 src]# hostname hadoop-1

    [root@hadoop-1 src]# vim /etc/sysconfig/network
    NETWORKING=yes
    HOSTNAME=hadoop-1

    [root@hadoop-1 src]# vim /etc/hosts
    127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
    172.168.16.61   hadoop-1
    172.168.16.62   hadoop-2
    172.168.16.63   hadoop-3
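
As a quick sanity check (not in the original steps), confirm that each host can reach the others by name:

[root@hadoop-1 src]# ping -c 1 hadoop-2     (should resolve to 172.168.16.62 and get a reply)
[root@hadoop-1 src]# ping -c 1 hadoop-3     (should resolve to 172.168.16.63 and get a reply)

Repeat from hadoop-2 and hadoop-3; if ping reports "unknown host", re-check /etc/hosts.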

d. Download the latest JDK

[root@hadoop-1 src]#wget http://download.oracle.com/otn-pub/java/jdk/7u21-b11/jdk-7u21-linux-i586.tar.gz

[root@hadoop-1 src]# ls
debug  hadoop-0.20.2.tar.gz  jdk-7u21-linux-i586.tar.gz  kernels

[root@hadoop-1 src]#tar xvf  jdk-7u21-linux-i586.tar.gz -C /usr/local/

e. Configure environment variables by appending the following to the end of /etc/profile. All three servers need the same Java configuration so that the Java environment runs correctly.

[root@hadoop-1 src]# vim /etc/profile

export JAVA_HOME=/usr/local/jdk1.7.0_21
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

[root@hadoop-1 src]#source /etc/profile

f. Verify that Java runs correctly

[root@hadoop-1 src]#echo $JAVA_HOME

/usr/local/jdk1.7.0_21

[root@hadoop-1 src]# java -version
java version "1.7.0_21"
Java(TM) SE Runtime Environment (build 1.7.0_21-b11)
Java HotSpot(TM) Client VM (build 23.21-b01, mixed mode)

Environment preparation is now complete.

Step 1: Configure the hosts file (already completed during preparation).

Step 2: Create the hadoop user

[root@hadoop-1 src]# useradd hadoop

[root@hadoop-2 ~]# useradd hadoop

[root@hadoop-3 ~]# useradd hadoop

Step 3: Configure passwordless SSH login

[root@hadoop-1 src]#su hadoop

[hadoop@hadoop-1 root]$ ssh-keygen -t rsa   (press Enter at every prompt)

[hadoop@hadoop-1 root]$ ls /home/hadoop/.ssh/
authorized_keys  id_rsa  id_rsa.pub  known_hosts             (the id_rsa and id_rsa.pub key files have been generated)

[hadoop@hadoop-1 root]$cp /home/hadoop/.ssh/id_rsa.pub /home/hadoop/.ssh/authorized_keys

Perform the same steps on hadoop-2:

 

[root@hadoop-2]#su hadoop

[hadoop@hadoop-2 root]$ ssh-keygen -t rsa   (press Enter at every prompt)

[hadoop@hadoop-2 root]$ ls /home/hadoop/.ssh/
authorized_keys  id_rsa  id_rsa.pub  known_hosts             (the id_rsa and id_rsa.pub key files have been generated)

[hadoop@hadoop-2 root]$cp /home/hadoop/.ssh/id_rsa.pub /home/hadoop/.ssh/authorized_keys

Perform the same steps on hadoop-3:

[root@hadoop-3]#su hadoop

[hadoop@hadoop-3 root]$ ssh-keygen -t rsa   (press Enter at every prompt)

[hadoop@hadoop-3 root]$ ls /home/hadoop/.ssh/
authorized_keys  id_rsa  id_rsa.pub  known_hosts             (the id_rsa and id_rsa.pub key files have been generated)

[hadoop@hadoop-3 root]$cp /home/hadoop/.ssh/id_rsa.pub /home/hadoop/.ssh/authorized_keys
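
One precaution the steps above do not mention: with StrictModes enabled (the sshd default on CentOS), passwordless login silently fails if the .ssh directory or authorized_keys file is too permissive. If key authentication does not work later, tighten the permissions on all three hosts:

[hadoop@hadoop-1 root]$ chmod 700 /home/hadoop/.ssh                     (directory writable by owner only)
[hadoop@hadoop-1 root]$ chmod 600 /home/hadoop/.ssh/authorized_keys     (key file readable/writable by owner only)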

 

Append the contents of hadoop-1's id_rsa.pub to the authorized_keys file on hadoop-2 and hadoop-3. Be sure to append with the ">>" redirection operator; do not simply copy the file into place, which overwrites the existing keys and easily causes problems.

[root@hadoop-1 src]# scp /home/hadoop/.ssh/id_rsa.pub 172.168.16.62:/home/hadoop

[root@hadoop-1 src]# scp /home/hadoop/.ssh/id_rsa.pub 172.168.16.63:/home/hadoop

[hadoop@hadoop-2 ~]$ cat id_rsa.pub >> .ssh/authorized_keys

Use the same method to append hadoop-2's and hadoop-3's public keys to the other two machines (a sketch follows the file listing below). After configuration, the file contents look like this:

[hadoop@hadoop-2 ~]$ cat .ssh/authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAqaahAX+SuzUlpBeyUoODd51NvUqQfGZKjrC60lUR76FCrRs3wPMDITES9TF86MK4xFk0bzNuK+WZVleq9ZilnOnxJsyz7NoaqOwwy5ACMjsRDMM0C5dFQ21xAODP6jDQ1LsCve0yHeuW6MlbKVERC94LRE5oTt3RFH7gxSMrDmMIOoIFDJXjEYDmHM6/kN7hmUiEH6X6k5sBwQA1dUaIORjy6zUV/4Sz+QPsQmF558V+Lw/CO2EdGYAgw97CHMxbybIG+b9A5IlCw+47d+zcdrX2vUUF1VGxnTlw4OYZCbfYqhvvpE1F9UY3+0RTCAuayGBCqWIFMd06KV2Np9FYfw== hadoop@hadoop-2
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA1jyyWKj8/DgqTa0UZkDSX/12Vky/eQXmHccLmmwNSye1bjfGrotX4p05EFT46lzRsLlixwtWF4iWv2kLg/5bn4JJ83MWBW+ANcrqZLdF/lS97xa928lSq7ry4D00wSgLR9fybqo/wv7midn8mxZeI92jbSzMYE/6I5eyRb5GNySFSpGjnxkO0a9QvRSSvgJDZrQ80JNiw6FGUiRacf6kzP1/6qJwWPJnUgHHso/oQN66cmBtjZuCDy7/OGBwjJ1iHgjO8fnAdI3bmTPn7X3LslEUVPFoAXE1XciVM9Mk0Xh8Ixpc50XMG8jKboh4SdSu0QcGOI0R4Yy7rRDNt2QqcQ== hadoop@hadoop-1
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAuvT4KuNQeKwarbdvNiCEpUktNzpocvQsGjWYkwWbsU/M2fxyPYrUzgQqfF/NGXeEvf8BzWVgV7pQH9/Ajg2bOUOafcwubLIiimw+wzraQ4MGQERMYKOdd6Su+w+yR5vpohY/x6S5lMiYgmaBTNVhgitD9GjuVX/N5Mbn0c5sTt/TlWSMfgKOp6hNORWlf01JaTyKcCpap+I9gBtAq4vPD1YppvYyrfv9TeW8NdcVVxswGE6XHxPD2b1/+JyBLYE3zN5XfWWaIfqC8gBxJ4brHNxdBFMp+IQ8LJXRyAklwd882P9qxXNFEE/IqFtwm8PvxlV2Ad4APptfDgdRreyWXQ== hadoop@hadoop-3
[hadoop@hadoop-2 ~]$ 
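
For completeness, the omitted appends follow exactly the same pattern as the hadoop-1 example above; a sketch, assuming the same file locations:

[root@hadoop-2 ~]# scp /home/hadoop/.ssh/id_rsa.pub 172.168.16.61:/home/hadoop     (and likewise to 172.168.16.63)
[hadoop@hadoop-1 ~]$ cat id_rsa.pub >> .ssh/authorized_keys                        (always append with ">>", never overwrite)

hadoop-3's key is distributed and appended the same way, so every authorized_keys file ends up holding all three public keys, as shown above.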

Verify that SSH authenticates without a password. Doing this once also clears the first-time "yes/no" host-key prompt. Be sure to make one connection from each machine to itself as well, so that each host has verified itself.

[root@hadoop-1 src]# ssh 172.168.16.61
Last login: Fri Apr 26 02:45:56 2013 from hadoop-1
[root@hadoop-1 ~]#

[root@hadoop-1 src]# ssh 172.168.16.62
Last login: Fri Apr 26 02:46:01 2013 from hadoop-1
[root@hadoop-2 ~]# 
[root@hadoop-1 src]# ssh 172.168.16.63
Last login: Fri Apr 26 01:17:26 2013 from aca81034.ipt.aol.com
[root@hadoop-3 ~]# 
Step 4: Download and unpack Hadoop

[hadoop@hadoop-1 ~]$ wget http://archive.apache.org/dist/hadoop/core/hadoop-0.20.2/hadoop-0.20.2.tar.gz

[hadoop@hadoop-1 ~]$tar xvf hadoop-0.20.2.tar.gz -C /opt/

[hadoop@hadoop-1 ~]$ls /opt/

[hadoop@hadoop-1 ~]$ ls -al /opt/                                    (Note: the unpacking was done as root; afterwards the ownership of the hadoop-0.20.2 directory must be changed)
total 12
drwxr-xr-x.  3 root   root   4096 Apr 25 12:56 .
dr-xr-xr-x. 22 root   root   4096 Apr 25 03:32 ..
drwxr-xr-x. 15 hadoop hadoop 4096 Apr 25 13:58 hadoop-0.20.2

Change the ownership of the Hadoop directory:

[root@hadoop-1 ~]# chown -R hadoop:hadoop /opt/hadoop-0.20.2

 

Perform the same steps as above on hadoop-2 and hadoop-3.


Step 5: Edit the configuration files

a. Edit the hadoop-env.sh file

[hadoop@hadoop-1 ~]$ cat /opt/hadoop-0.20.2/conf/hadoop-env.sh 
# Set Hadoop-specific environment variables here.


# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.


# The java implementation to use.  Required.
# export JAVA_HOME=/usr/lib/j2sdk1.5-sun
export JAVA_HOME=/usr/local/jdk1.7.0_21

b. Edit core-site.xml to configure the NameNode address. In this file, be sure to configure the tmp directory; otherwise, rebooting the Hadoop server can leave the cluster unable to start (the default location under /tmp may be wiped on reboot).

[hadoop@hadoop-1 ~]$ cat /opt/hadoop-0.20.2/conf/core-site.xml 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>


<!-- Put site-specific property overrides in this file. -->


<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop-1:9000</value>
    <final>true</final>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop-0.20.2/tmp</value>
    <description>A base for other temporary directories</description>
  </property>
</configuration>
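
Hadoop will normally create hadoop.tmp.dir on its own, but pre-creating it as the hadoop user (an optional precaution, not in the original steps) rules out permission problems:

[hadoop@hadoop-1 ~]$ mkdir -p /opt/hadoop-0.20.2/tmp     (must be writable by the hadoop user on every node)
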
c. Edit hdfs-site.xml to configure the name and data directories, which hold the NameNode metadata and the file block data respectively. Also set the replication count to "2", since there are only two DataNodes here.

[hadoop@hadoop-1 ~]$ cat /opt/hadoop-0.20.2/conf/hdfs-site.xml 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>


<!-- Put site-specific property overrides in this file. -->


<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/opt/hadoop-0.20.2/hdfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/opt/hadoop-0.20.2/hdfs/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
    <final>true</final>
  </property>
</configuration>
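
Likewise, dfs.name.dir is created by the format step and dfs.data.dir by the DataNode on first start, but pre-creating them with the correct owner (again optional) avoids surprises:

[hadoop@hadoop-1 ~]$ mkdir -p /opt/hadoop-0.20.2/hdfs/name /opt/hadoop-0.20.2/hdfs/data     (on hadoop-2 and hadoop-3 only hdfs/data is used)
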
d. Configure the JobTracker node.

[hadoop@hadoop-1 ~]$ cat /opt/hadoop-0.20.2/conf/mapred-site.xml 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>


<!-- Put site-specific property overrides in this file. -->


<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hadoop-1:9001</value>
    <final>true</final>
  </property>
</configuration>

e. Configure the SecondaryNameNode node

[hadoop@hadoop-1 ~]$ cat /opt/hadoop-0.20.2/conf/masters 
hadoop-1
f. Configure the DataNode and TaskTracker nodes

[hadoop@hadoop-1 ~]$ cat /opt/hadoop-0.20.2/conf/slaves 
hadoop-2
hadoop-3

The configuration changes above must also be made on hadoop-2 and hadoop-3, and the contents must be identical on all machines; one convenient way to keep them in sync is sketched below.
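
Instead of editing each machine by hand, you can push the conf directory from hadoop-1 (this relies on the passwordless SSH configured in Step 3):

[hadoop@hadoop-1 ~]$ scp /opt/hadoop-0.20.2/conf/* hadoop-2:/opt/hadoop-0.20.2/conf/
[hadoop@hadoop-1 ~]$ scp /opt/hadoop-0.20.2/conf/* hadoop-3:/opt/hadoop-0.20.2/conf/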

 

Step 6: Configure environment variables. For convenience later on, add the Hadoop directory to the environment; the configured result is shown below.

[hadoop@hadoop-1 ~]$ cat /etc/profile

export JAVA_HOME=/usr/local/jdk1.7.0_21
export JRE_HOME=$JAVA_HOME/jre
export HADOOP_INSTALL=/opt/hadoop-0.20.2
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$HADOOP_INSTALL/bin:$PATH
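
After saving, reload the profile and confirm that the hadoop command is on the PATH (a quick sanity check, not part of the original write-up):

[hadoop@hadoop-1 ~]$ source /etc/profile
[hadoop@hadoop-1 ~]$ hadoop version          (should report Hadoop 0.20.2)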

 

Step 7: Format the HDFS NameNode

[hadoop@hadoop-1 ~]$ hadoop namenode -format

 

Step 8: Start Hadoop

[hadoop@hadoop-1 ~]$start-all.sh
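
If any daemon fails to come up in the next step, check the log files on the affected node. The file names follow the standard hadoop-<user>-<daemon>-<hostname>.log convention, so the exact names below are assumptions based on this setup:

[hadoop@hadoop-1 ~]$ tail -n 50 /opt/hadoop-0.20.2/logs/hadoop-hadoop-namenode-hadoop-1.log
[hadoop@hadoop-2 ~]$ tail -n 50 /opt/hadoop-0.20.2/logs/hadoop-hadoop-datanode-hadoop-2.log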

 

Step 9: Verify Hadoop

[hadoop@hadoop-1 ~]$ jps
5948 SecondaryNameNode
6019 JobTracker
5802 NameNode
6784 Jps

[hadoop@hadoop-2 ~]$ jps
4199 TaskTracker
9288 Jps
4111 DataNode
[hadoop@hadoop-2 ~]$ 
[hadoop@hadoop-3 root]$ jps
6673 Jps
1591 TaskTracker
1512 DataNode
[hadoop@hadoop-3 root]$

[hadoop@hadoop-1 ~]$ /opt/hadoop-0.20.2/bin/hadoop dfsadmin -report
Configured Capacity: 37073182720 (34.53 GB)
Present Capacity: 32527679488 (30.29 GB)
DFS Remaining: 32527589376 (30.29 GB)
DFS Used: 90112 (88 KB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0


-------------------------------------------------
Datanodes available: 2 (2 total, 0 dead)


Name: 172.168.16.62:50010
Decommission Status : Normal
Configured Capacity: 18536591360 (17.26 GB)
DFS Used: 45056 (44 KB)
Non DFS Used: 2272829440 (2.12 GB)
DFS Remaining: 16263716864(15.15 GB)
DFS Used%: 0%
DFS Remaining%: 87.74%
Last contact: Fri Apr 26 03:18:45 CST 2013




Name: 172.168.16.63:50010
Decommission Status : Normal
Configured Capacity: 18536591360 (17.26 GB)
DFS Used: 45056 (44 KB)
Non DFS Used: 2272673792 (2.12 GB)
DFS Remaining: 16263872512(15.15 GB)
DFS Used%: 0%
DFS Remaining%: 87.74%
Last contact: Fri Apr 26 03:18:46 CST 2013
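
Beyond dfsadmin, a simple functional smoke test (illustrative; not part of the original write-up) is to write a file into HDFS and list it back:

[hadoop@hadoop-1 ~]$ hadoop fs -mkdir /test                    (create a directory in HDFS)
[hadoop@hadoop-1 ~]$ hadoop fs -put /etc/hosts /test/hosts     (upload a local file)
[hadoop@hadoop-1 ~]$ hadoop fs -ls /test                       (the uploaded file should appear in the listing)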

 

Verification can also be done through the web interfaces:

http://172.168.16.61:50030/jobtracker.jsp

 

http://172.168.16.61:50070/dfshealth.jsp

 

 

 
