1. Environment
CentOS 6.4 x86_64
JDK 1.6.0_45
8 GB RAM, 500 GB disk per machine
4 machines; the username on every machine is hadoop
hostname | role          | ip
lenovo10 | NN + RM + SNN | 218.193.154.XXX
lenovo9  | NM + DN       |
lenovo11 | NM + DN       |
lenovo12 | NM + DN       |
Role abbreviations: NN = NameNode, RM = ResourceManager, SNN = Secondary NameNode, DN = DataNode, NM = NodeManager.
2. Preparation
Configure passwordless SSH login so that lenovo10 can log in to all the other machines (a minimal sketch follows this list)
Install the JDK
Disable the firewall
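A minimal sketch of the key setup, assuming it is run as the hadoop user on lenovo10 and that the hostnames in the table above resolve:
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
for host in lenovo9 lenovo10 lenovo11 lenovo12; do ssh-copy-id hadoop@${host}; done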
3. Configuration
Modify the following Linux configuration files:
/etc/hosts
218.193.154.XXX lenovo9
218.193.154.XXX lenovo10 admin
218.193.154.XXX lenovo11
218.193.154.XXX lenovo12
/etc/profile
# set JAVA_HOME
JAVA_HOME=/usr/java/jdk1.6.0_45/
PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$CLASSPATH
export PATH JAVA_HOME CLASSPATH
# set hadoop path
HADOOP_HOME=/usr/local/hadoop
export PATH="$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH"
/etc/security/limits.conf
hadoop - nofile 32768
hadoop soft nproc 32000
hadoop hard nproc 32000
/etc/pam.d/login
session required pam_limits.so
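To check that the limits took effect, open a fresh login shell as hadoop and compare against the values set above:
ulimit -n    # expect 32768 (nofile)
ulimit -u    # expect 32000 (nproc)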
In the etc/hadoop directory under HADOOP_HOME, modify the following files.
Append to the end of both hadoop-env.sh and yarn-env.sh: export JAVA_HOME=/usr/java/jdk1.6.0_45
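One way to do the append, assuming HADOOP_HOME=/usr/local/hadoop as set in /etc/profile above:
echo 'export JAVA_HOME=/usr/java/jdk1.6.0_45' >> /usr/local/hadoop/etc/hadoop/hadoop-env.sh
echo 'export JAVA_HOME=/usr/java/jdk1.6.0_45' >> /usr/local/hadoop/etc/hadoop/yarn-env.sh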
core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://admin:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/hadoop/tmp2</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>hadoop.proxyuser.hadoop.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hadoop.groups</name>
    <value>*</value>
  </property>
</configuration>
yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <!-- key must match the aux-service name mapreduce_shuffle above -->
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>admin:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>admin:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>admin:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>admin:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>admin:8088</value>
  </property>
</configuration>
hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>admin:9001</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hadoop/tmp2/dfs/name</value>
  </property>
  <property>
    <!-- the data directory belongs to the DataNode, not the NameNode -->
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hadoop/tmp2/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>admin:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>admin:19888</value>
  </property>
</configuration>
slaves
lenovo9
lenovo11
lenovo12
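After editing, a couple of values can be sanity-checked with hdfs getconf:
hdfs getconf -confKey fs.defaultFS      # expect hdfs://admin:9000
hdfs getconf -confKey dfs.replication   # expect 3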
4. Running
Distribute the configured hadoop folder to every machine. The two scripts below automate the slave setup: 1.sh runs on lenovo10 and drives 2.sh on each slave.
1.sh
#!/bin/sh
# Run on lenovo10. The hostnames must match the slaves in this cluster.
for host in lenovo9 lenovo11 lenovo12; do
  echo "Installing ${host}..."
  echo "Passwordless root login to ${host}"
  ssh-copy-id root@${host}
  echo "Passwordless hadoop login to ${host} (the hadoop account has no password)"
  ssh-copy-id hadoop@${host}
  echo "Copying the hadoop directory"
  scp -r /usr/local/hadoop root@${host}:/usr/local
  echo "Copying a few configuration files"
  scp /etc/hosts /etc/sudoers root@${host}:/etc
  scp /etc/profile.d/hadoop.sh root@${host}:/etc/profile.d
  scp /etc/pam.d/login root@${host}:/etc/pam.d
  scp /etc/security/limits.conf root@${host}:/etc/security
  echo "Copying the JDK"
  ssh root@${host} mkdir -p /usr/java
  scp /home/hadoop/Downloads/jdk-6u45-linux-x64.bin root@${host}:/usr/java
  scp 2.sh root@${host}:/tmp/2.sh
  ssh root@${host} sh /tmp/2.sh
  echo "Finished installing ${host}"
done
2.sh
#!/bin/sh
# Runs on each slave, invoked from 1.sh.
chmod u+w /etc/sudoers
echo "Disabling the firewall"
service iptables stop
chkconfig iptables off
echo "Starting ntpd"
service ntpd start
chkconfig ntpd on
echo "Enabling sshd at boot"
service sshd start
chkconfig sshd on
echo "Adding hadoop to the root group"
usermod -a -G root hadoop
echo "Creating the HDFS directories (paths must match hdfs-site.xml)"
mkdir -p /home/hadoop/tmp2/dfs/name /home/hadoop/tmp2/dfs/data
echo "Handing the hadoop-related directories over to user hadoop"
chown -R hadoop:hadoop /home/hadoop/tmp2
chown -R hadoop:hadoop /usr/local/hadoop
#chown -R hadoop:hadoop /usr/local/hbase
echo "Restoring the permissions of /etc/sudoers"
chmod u-w /etc/sudoers
echo "Loading the environment variables"
source /etc/profile
echo "Installing the JDK"
cd /usr/java
chmod +x jdk-6u45-linux-x64.bin
./jdk-6u45-linux-x64.bin
echo "Testing the JDK"
source /etc/profile
java -version
echo "Removing the JDK installer"
rm -f jdk-6u45-linux-x64.bin
exit
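Intended usage, with both scripts in the current directory on lenovo10 and run as root (1.sh reads /etc/sudoers, which only root can do); each ssh-copy-id prompts once for the target password:
sh 1.sh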
On lenovo10:
Format the NameNode before the first run:
hdfs namenode -format
Start the daemons separately:
start-dfs.sh
start-yarn.sh
Or start everything at once (start-all.sh is deprecated in Hadoop 2.x but still works):
start-all.sh
Check the running Java processes:
jps
On lenovo10 the running processes should be NameNode, ResourceManager, and SecondaryNameNode.
On lenovo9/11/12 the running processes should be DataNode and NodeManager.
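For reference, jps on lenovo10 should print something like the following (the PIDs are illustrative):
2481 NameNode
2695 SecondaryNameNode
2844 ResourceManager
3102 Jps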
Check the ResourceManager (YARN) web UI:
http://lenovo10:8088
Check the HDFS (NameNode) web UI:
http://lenovo10:50070
Check HDFS status:
hdfs dfsadmin -report
Check HDFS health:
hdfs fsck /
hdfs fsck / -files -blocks
Create a directory in HDFS:
hdfs dfs -mkdir /input
List HDFS files:
hdfs dfs -ls /
Move a file:
hdfs dfs -mv /input /user/hadoop/
Upload a text file to HDFS:
hdfs dfs -put ~/1.txt /input
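If there is no test file yet, any throwaway text will do for the put above, e.g.:
echo "hello hadoop hello yarn" > ~/1.txt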
Run a MapReduce job via Hadoop Streaming:
hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-2.2.0.jar -input /input -output woutput -mapper /bin/cat -reducer /usr/bin/wc
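To inspect the result afterwards (woutput is a relative HDFS path, so it resolves under /user/hadoop; part-00000 is the conventional name of a single reducer's output):
hdfs dfs -ls woutput
hdfs dfs -cat woutput/part-00000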
5. Building the Eclipse plugin
Download the plugin source (unofficial; there is no official plugin release).
Enter the hadoop2x-eclipse-plugin-master/src/contrib/eclipse-plugin directory.
Install Ant.
Run ant:
ant jar -Dversion=2.2.0 -Declipse.home=/usr/local/eclipse -Dhadoop.home=/home/hadoop/Downloads/java/hadoop-2.2.0
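If the build succeeds, installing is a copy plus an Eclipse restart. The jar path below is an assumption based on the hadoop2x-eclipse-plugin build layout, so verify it locally:
cp build/contrib/eclipse-plugin/hadoop-eclipse-plugin-2.2.0.jar /usr/local/eclipse/plugins/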
6. Possible errors and solutions
1. hdfs-site.xml:10:36: Content is not allowed in prolog
This XML parse error means there are stray characters before the <?xml ...?> declaration, typically full-width spaces or a BOM introduced when copy-pasting the file; delete everything before the declaration.
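To confirm what is actually at the start of the file (a UTF-8 BOM shows up as 357 273 277 in the octal dump):
head -c 16 hdfs-site.xml | od -c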