Hadoop 2.6.4 Cluster Setup

Environment:

(1) Four prepared CentOS 6.5 machines (mini1, mini2, mini3, mini4); create a hadoop user on each machine and grant it full (sudo) privileges;

(2) JDK version: 1.7;

(3) A Hadoop 2.6.4 package built for the CentOS 6.5 platform.

Cluster setup:

(1) Configure passwordless SSH login:

cd ~/.ssh

       ssh-keygen -t rsa (press Enter four times)

       This command generates two files: id_rsa (the private key) and id_rsa.pub (the public key).

       Copy the public key to every machine you want to log in to without a password:

       ssh-copy-id mini1 (repeat for mini2, …)
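The per-node copy step can be written as one loop (host names taken from the cluster above; the leading `echo` makes this a dry run, so remove it to actually push the key, entering each node's password once):

```shell
# Dry run: prints one ssh-copy-id command per node.
# Remove the 'echo' to actually distribute the public key.
for h in mini1 mini2 mini3 mini4; do
  echo ssh-copy-id "$h"
done
```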

(2) Install the JDK;

(3) Upload the built Hadoop package: upload the Hadoop package to /home/hadoop/ on the server.
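The unpack step that follows the upload can be sketched as below. On the real server the uploaded file would be /home/hadoop/hadoop-2.6.4.tar.gz (the filename is an assumption); a stand-in tarball is built in a temp directory here so the commands run anywhere:

```shell
# Build a stand-in package in a temp dir (replaces the real upload).
WORK=$(mktemp -d)
mkdir -p "$WORK/hadoop-2.6.4/etc/hadoop"
tar -C "$WORK" -czf "$WORK/hadoop-2.6.4.tar.gz" hadoop-2.6.4

# The actual unpack step: extract into an apps/ install directory.
mkdir -p "$WORK/apps"
tar -zxf "$WORK/hadoop-2.6.4.tar.gz" -C "$WORK/apps"
ls -d "$WORK/apps/hadoop-2.6.4/etc/hadoop"   # the config files edited below live here
```

On the cluster this becomes `tar -zxf hadoop-2.6.4.tar.gz -C /home/hadoop/apps/`; every file edited in the steps below is under apps/hadoop-2.6.4/etc/hadoop/.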

File 1: hadoop-env.sh

               vim hadoop-env.sh

              # around line 27

              export JAVA_HOME=/usr/java/jdk1.7.0_65
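The same edit can be made non-interactively with sed. This is shown on a stand-in copy of hadoop-env.sh so it runs anywhere; the stock file sets JAVA_HOME from the environment, which is why an explicit path is needed for daemons started over ssh:

```shell
# Stand-in for hadoop-env.sh with the stock JAVA_HOME line.
f=$(mktemp)
echo 'export JAVA_HOME=${JAVA_HOME}' > "$f"

# Replace the line with an explicit JDK path.
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/java/jdk1.7.0_65|' "$f"
cat "$f"   # prints: export JAVA_HOME=/usr/java/jdk1.7.0_65
```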

       File 2: core-site.xml

             <configuration>
                    <property>
                           <name>fs.defaultFS</name>
                           <value>hdfs://mini1:9000</value>
                    </property>
                    <property>
                           <name>hadoop.tmp.dir</name>
                           <value>/home/hadoop/hdpdata</value>
                    </property>
             </configuration>

       File 3: hdfs-site.xml

             <configuration>
                    <property>
                           <name>dfs.replication</name>
                           <value>2</value>
                    </property>
                    <property>
                           <name>dfs.secondary.http.address</name>
                           <value>192.168.1.152:50090</value>
                    </property>
             </configuration>

 

       File 4: mapred-site.xml (created by renaming mapred-site.xml.template)

              mv mapred-site.xml.template mapred-site.xml

              vim mapred-site.xml

             <configuration>
                    <property>
                           <name>mapreduce.framework.name</name>
                           <value>yarn</value>
                    </property>
             </configuration>

       File 5: yarn-site.xml

             <configuration>
                    <property>
                           <name>yarn.resourcemanager.hostname</name>
                           <value>mini1</value>
                    </property>
                    <property>
                           <name>yarn.nodemanager.aux-services</name>
                           <value>mapreduce_shuffle</value>
                    </property>
             </configuration>
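Before moving on, each edited file can be given a rough sanity check for balanced `<property>` tags (not a full XML parse). A minimal sketch, demonstrated on a stand-in file so it runs anywhere:

```shell
# Count opening vs closing <property> tags in a config file.
check() {
  open=$(grep -c '<property>' "$1")
  close=$(grep -c '</property>' "$1")
  if [ "$open" -eq "$close" ]; then echo "$1: $open properties, balanced"
  else echo "$1: tag mismatch"; fi
}

# Stand-in config file for demonstration.
f=$(mktemp)
printf '%s\n' '<configuration>' '<property>' '<name>x</name>' \
  '<value>y</value>' '</property>' '</configuration>' > "$f"
check "$f"
```

On the cluster, run `check core-site.xml` (and so on for the other four files) from etc/hadoop/.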

3.2 Add Hadoop to the environment variables

       vim /etc/profile

              export JAVA_HOME=/usr/java/jdk1.7.0_65

              export HADOOP_HOME=/home/hadoop/apps/hadoop-2.6.4

              export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

       source /etc/profile
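A quick check that the PATH additions actually took effect (the exports mirror the /etc/profile lines above; this is a stand-in for running `hadoop version` on the cluster):

```shell
# Apply the same exports as /etc/profile, then confirm the Hadoop
# bin directory landed on PATH.
export JAVA_HOME=/usr/java/jdk1.7.0_65
export HADOOP_HOME=/home/hadoop/apps/hadoop-2.6.4
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
echo "$PATH" | grep -o 'hadoop-2.6.4/bin'   # prints: hadoop-2.6.4/bin
```

On a real node, `java -version` and `hadoop version` are the definitive confirmation.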

Distribute the installation to the other machines (the DataNode servers):

scp -r apps mini2:/home/hadoop

scp -r apps mini3:/home/hadoop

scp -r apps mini4:/home/hadoop

3.3 Format the NameNode (this initializes the NameNode)

       hdfs namenode -format (or the older form: hadoop namenode -format)

3.4 Start Hadoop

       1) Start HDFS first:

              sbin/start-dfs.sh

       2) Then start YARN:

              sbin/start-yarn.sh

3.5 Verify the startup

       Verify with the jps command; on mini1 you should see something like:

              27408 NameNode

              28218 Jps

              27643 SecondaryNameNode

              28066 NodeManager

              27803 ResourceManager

              27512 DataNode

    http://mini1:50070 (HDFS web UI)

    http://mini1:8088 (YARN/MapReduce web UI)
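The jps check can also be scripted. A sketch using the sample output above as stand-in input; on a live node, capture `jps_out=$(jps)` instead:

```shell
# Sample jps output (stand-in; on the cluster use: jps_out=$(jps)).
jps_out='27408 NameNode
27643 SecondaryNameNode
28066 NodeManager
27803 ResourceManager
27512 DataNode'

# Every daemon expected on mini1 must appear at end-of-line after a space.
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  if echo "$jps_out" | grep -q " $d$"; then echo "$d running"
  else echo "$d MISSING"; fi
done
```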





