Fully Distributed Deployment of Hadoop 2.4

Thanks to: http://blog.csdn.net/licongcong_0224/article/details/12972889

Cluster composition:

Two Red Hat Enterprise Linux 6.5 x64 servers:

192.168.16.100 master 

192.168.16.101 cupcs3 

Note: master and cupcs3 are the hostnames of the two servers.

1. Download and build Hadoop 2.4. Build instructions: http://www.cnblogs.com/wrencai/p/3897438.html

2. Edit the relevant configuration files as follows:

hadoop-env.sh:
Set the JAVA_HOME value (export JAVA_HOME=/YOURJDK_HOME)

yarn-env.sh:
Set the JAVA_HOME value (export JAVA_HOME=/YOURJDK_HOME)

slaves: add the lines below. Note: here we also list the master node as a slave, so on startup the master will run the DataNode and NodeManager processes as well.
master
cupcs3

core-site.xml:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/bigdata/hadoop-2.4.1/tmp/hadoop-${user.name}</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>hadoop.proxyuser.hduser.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hduser.groups</name>
    <value>*</value>
  </property>
</configuration>

hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:9001</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/bigdata/hadoop-2.4.1/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/bigdata/hadoop-2.4.1/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>

mapred-site.xml:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>

yarn-site.xml:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
</configuration>
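It does no harm to create the local directories that core-site.xml and hdfs-site.xml point to (hadoop.tmp.dir, dfs.namenode.name.dir, dfs.datanode.data.dir) up front on every node; Hadoop can create some of them itself, but pre-creating them avoids permission surprises. A minimal sketch; the PREFIX default below points at /tmp so it can be dry-run anywhere, while the real install in this guide lives under /home/bigdata/hadoop-2.4.1:

```shell
# Create the tmp and dfs directories referenced in the XML configs.
# PREFIX defaults to a /tmp path for a dry run; on the cluster set
# PREFIX=/home/bigdata/hadoop-2.4.1 to match the values above.
PREFIX="${PREFIX:-/tmp/hadoop-2.4.1}"
mkdir -p "$PREFIX/tmp" "$PREFIX/dfs/name" "$PREFIX/dfs/data"
ls "$PREFIX/dfs"
```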

3. Configure passwordless SSH between the servers:

Run the following on every server:

ssh-keygen -t rsa -P ''

This generates two files, id_rsa and id_rsa.pub, in the hidden ~/.ssh directory.

3.1 Collect the contents of the id_rsa.pub file from every machine into a single text file, name that file authorized_keys, and then copy it into the ~/.ssh directory on every machine.
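The merge described in step 3.1 can be sketched as follows. This demo works on dummy key files under /tmp so it can be run anywhere; on the real cluster you would concatenate the actual id_rsa.pub files (copied between hosts with scp) and push the result to ~/.ssh/authorized_keys on every node:

```shell
# Demo of merging per-node public keys into one authorized_keys file.
# The two dummy keys stand in for each node's real ~/.ssh/id_rsa.pub.
mkdir -p /tmp/sshdemo
echo "ssh-rsa AAAAdemoKeyMaster bigdata@master" > /tmp/sshdemo/master.pub
echo "ssh-rsa AAAAdemoKeyCupcs3 bigdata@cupcs3" > /tmp/sshdemo/cupcs3.pub
cat /tmp/sshdemo/master.pub /tmp/sshdemo/cupcs3.pub > /tmp/sshdemo/authorized_keys
# sshd refuses authorized_keys that is group/world writable, so tighten it:
chmod 600 /tmp/sshdemo/authorized_keys
wc -l < /tmp/sshdemo/authorized_keys   # one line per node
```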

3.2 Edit the sshd configuration: vi /etc/ssh/sshd_config

   RSAAuthentication yes                     # enable RSA authentication
   PubkeyAuthentication yes                  # enable public-key authentication
   AuthorizedKeysFile .ssh/authorized_keys   # where the public keys are stored

   PasswordAuthentication yes                # still allow password login

   GSSAPIAuthentication no                   # avoids slow logins and GSSAPI errors

   ClientAliveInterval 300                   # send a keepalive probe after 300 s of inactivity
   ClientAliveCountMax 10                    # disconnect after 10 unanswered keepalives

After editing, restart sshd (service sshd restart) so the changes take effect. This completes the passwordless SSH setup.

4. Edit /etc/hosts to map each node's IP address to its hostname. In this example, add the following to /etc/hosts on every machine:

192.168.16.100 master 

192.168.16.101 cupcs3 
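The mapping format is plain "IP hostname" pairs, one per line. As a sanity check, the sketch below writes the two entries to a scratch file (not the real /etc/hosts) and looks one of them up with awk:

```shell
# Write the two mappings to a scratch copy instead of the real /etc/hosts
cat > /tmp/hosts.demo <<'EOF'
192.168.16.100 master
192.168.16.101 cupcs3
EOF
# Resolve cupcs3 from the file: prints 192.168.16.101
awk '$2 == "cupcs3" {print $1}' /tmp/hosts.demo
```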

5. Turn off the firewall on every server; otherwise, after Hadoop starts you may see every process running normally while the master still cannot see the slave nodes. (Use either of the two methods below.)

5.1 Permanent, takes effect after reboot:

Enable: chkconfig iptables on

Disable: chkconfig iptables off

5.2 Immediate, but lost after reboot:

Enable: service iptables start

Disable: service iptables stop

 

6. Run and test:

6.1 Format the HDFS filesystem:

$HADOOP_HOME/bin/hdfs namenode -format

6.2 Start the cluster:

$HADOOP_HOME/sbin/start-all.sh

After a successful start, running the jps command on master shows the following processes:

27055 ResourceManager

26597 NameNode

26887 SecondaryNameNode

26704 DataNode

27161 NodeManager

6672  Jps

On cupcs3, jps shows the following processes:

13399 NodeManager

13318 DataNode

17527 Jps

Running the hdfs dfsadmin -report command produces the following output:

[bigdata@master]$ hdfs dfsadmin -report

Configured Capacity: 933896060928 (869.76 GB)

Present Capacity: 853796515840 (795.16 GB)

DFS Remaining: 849468751872 (791.13 GB)

DFS Used: 4327763968 (4.03 GB)

DFS Used%: 0.51%

Under replicated blocks: 408

Blocks with corrupt replicas: 0

Missing blocks: 0



-------------------------------------------------

Datanodes available: 2 (2 total, 0 dead)



Live datanodes:

Name: 192.168.16.100:50010 (master)

Hostname: master

Decommission Status : Normal

Configured Capacity: 466889310208 (434.82 GB)

DFS Used: 4293246976 (4.00 GB)

Non DFS Used: 39757963264 (37.03 GB)

DFS Remaining: 422838099968 (393.80 GB)

DFS Used%: 0.92%

DFS Remaining%: 90.56%

Configured Cache Capacity: 0 (0 B)

Cache Used: 0 (0 B)

Cache Remaining: 0 (0 B)

Cache Used%: 100.00%

Cache Remaining%: 0.00%

Last contact: Fri Sep 05 22:22:32 CST 2014





Name: 192.168.16.101:50010 (cupcs3)

Hostname: cupcs3

Decommission Status : Normal

Configured Capacity: 467006750720 (434.93 GB)

DFS Used: 34516992 (32.92 MB)

Non DFS Used: 40341581824 (37.57 GB)

DFS Remaining: 426630651904 (397.33 GB)

DFS Used%: 0.01%

DFS Remaining%: 91.35%

Configured Cache Capacity: 0 (0 B)

Cache Used: 0 (0 B)

Cache Remaining: 0 (0 B)

Cache Used%: 100.00%

Cache Remaining%: 0.00%

Last contact: Fri Sep 05 22:22:30 CST 2014



[bigdata@master]$

You can also open http://master:50070 in a browser to view the HDFS web UI, and http://master:8088 to view the YARN resource manager web UI.
