Apache Hadoop Installation and Configuration


1. Network interface configuration

        Command: vi /etc/sysconfig/network-scripts/ifcfg-eth0

Settings to modify:

DEVICE=eth0

HWADDR=00:0C:29:11:02:E8

TYPE=Ethernet

UUID=c1038317-21f4-4251-a68f-0962fd644cab

ONBOOT=yes

NM_CONTROLLED=yes

BOOTPROTO=static

IPADDR=192.168.17.238

GATEWAY=192.168.17.1

DNS1=114.114.114.114

IPV6INIT=no

       2. Hadoop environment configuration

           2.1 Edit the hosts file (hostname-to-IP mapping)

                   Command: vi /etc/hosts

           2.2 Configure the Java and Hadoop environment variables

                    Command: vi /etc/profile

#java

JAVA_HOME=/jdk1.7.0_79

JAVA_BIN=/jdk1.7.0_79/bin

PATH=$JAVA_HOME/bin:$PATH

CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

export JAVA_HOME JAVA_BIN PATH CLASSPATH

#hadoop

export HADOOP_HOME=/home/hadoop-2.5.2

export PATH=$HADOOP_HOME/bin:$PATH

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native

export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

export JAVA_LIBRARY_PATH=/home/hadoop-2.5.2/lib/native/
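After editing /etc/profile, run `source /etc/profile` and confirm the variables resolve. A minimal self-contained check of the PATH assembly, using the example paths from this guide:

```shell
# Dry-run of the /etc/profile additions; values are this guide's example paths
JAVA_HOME=/jdk1.7.0_79
HADOOP_HOME=/home/hadoop-2.5.2
PATH=$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH
export JAVA_HOME HADOOP_HOME PATH
# Confirm the hadoop bin directory made it onto PATH
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "hadoop bin on PATH" ;;
  *) echo "PATH is missing $HADOOP_HOME/bin" ;;
esac
```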

 

        3. Disable the firewall

                

          service iptables stop

           chkconfig iptables off

           3.1 Modify the SELinux configuration file

           vi  /etc/selinux/config

                   Change to:

                     SELINUX=disabled

 

       4. Passwordless SSH setup

         Install the SSH client:

            Command: yum -y install openssh-clients

         Generate an SSH key pair:

            ssh-keygen -t rsa

        Go to the home directory: Command: cd ~

         cd .ssh

          ls    (view the files)

         Append the local public key id_rsa.pub: Command: cat id_rsa.pub >> authorized_keys

          Append the id_rsa.pub from the datanode node: Command:

         ssh datanode1 cat .ssh/id_rsa.pub >> authorized_keys

        Send the file to the datanode node:

                    [root@namenode ~]# scp authorized_keys datanode1:~/.ssh

       Test: ssh datanode1 should log in to the datanode1 host without a password
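The append-and-distribute mechanics above can be sketched locally with a scratch directory (the key strings below are placeholders, and the paths stand in for ~/.ssh on a real host):

```shell
# Sketch of the authorized_keys mechanics using a scratch directory
# (placeholder key strings; on a real cluster this is ~/.ssh and real public keys)
ssh_dir=$(mktemp -d)
echo "ssh-rsa PLACEHOLDER-NAMENODE-KEY root@namenode" > "$ssh_dir/id_rsa.pub"
echo "ssh-rsa PLACEHOLDER-DATANODE-KEY root@datanode1" > "$ssh_dir/datanode1.pub"
# Same append pattern as the two cat commands above
cat "$ssh_dir/id_rsa.pub"    >> "$ssh_dir/authorized_keys"
cat "$ssh_dir/datanode1.pub" >> "$ssh_dir/authorized_keys"
chmod 600 "$ssh_dir/authorized_keys"   # sshd rejects group/world-writable key files
wc -l < "$ssh_dir/authorized_keys"     # one line per authorized key
```

Each appended line authorizes one key, so after both appends the file holds two entries and either host's key can log in without a password.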

       At this point, the preliminary setup is complete.

5. Create the group and user

groupadd hadoop

useradd -g hadoop hadoop

passwd hadoop

 6. Extract Hadoop into /home/hadoop

             tar -xzvf hadoop-2.5.2.tar.gz -C /home/hadoop

Configure five files

cd /home/hadoop/hadoop-2.5.2/etc/hadoop/

ll    (lists the directory contents)

Both the NameNode and the DataNodes need these configuration files.

cd hadoop-2.5.2/etc/hadoop
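The extraction step above relies on tar's -C flag to set the target directory. An illustrative run with a scratch archive (all paths here are temporary stand-ins):

```shell
# Illustrative run of tar's -C flag with a scratch archive
tmp=$(mktemp -d)
mkdir -p "$tmp/src/hadoop-2.5.2/bin"
echo '#!/bin/sh' > "$tmp/src/hadoop-2.5.2/bin/hadoop"
tar -czf "$tmp/hadoop-2.5.2.tar.gz" -C "$tmp/src" hadoop-2.5.2
mkdir -p "$tmp/home/hadoop"
# -C sets the directory to extract into, so the tree lands under home/hadoop
tar -xzf "$tmp/hadoop-2.5.2.tar.gz" -C "$tmp/home/hadoop"
ls "$tmp/home/hadoop"    # prints: hadoop-2.5.2
```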

6.1 vi core-site.xml

    

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/tmp</value>    <!-- this directory must be created manually -->
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.131.7:9000</value>    <!-- the master's IP address -->
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>
</configuration>

    

 

mkdir -p $HOME/dfs/name

mkdir -p $HOME/dfs/data

6.2 vi hdfs-site.xml

    

<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>hadoop-cluster1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>192.168.131.7:50090</value>    <!-- the master's IP address -->
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///home/hadoop/dfs/name</value>    <!-- this directory must be created manually -->
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///home/hadoop/dfs/data</value>    <!-- this directory must be created manually -->
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>    <!-- set to the number of datanodes -->
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>

    

 

 

 

6.3 vi mapred-site.xml

    

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>    <!-- YARN, Hadoop's improved successor to the first-generation M/R framework -->
    </property>
    <property>
        <name>mapreduce.jobtracker.http.address</name>
        <value>192.168.131.7:50030</value>    <!-- the master's IP address -->
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>192.168.131.7:10020</value>    <!-- the master's IP address -->
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>192.168.131.7:19888</value>    <!-- the master's IP address -->
    </property>
</configuration>

      

 

 

6.4 vi yarn-site.xml

    

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>192.168.131.7:8032</value>    <!-- the master's IP address -->
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>192.168.131.7:8030</value>    <!-- the master's IP address -->
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>192.168.131.7:8031</value>    <!-- the master's IP address -->
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>192.168.131.7:8033</value>    <!-- the master's IP address -->
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>192.168.131.7:8088</value>    <!-- the master's IP address -->
    </property>
</configuration>

    

vi slaves

192.168.79.101

192.168.79.102

vi hadoop-env.sh

export JAVA_HOME=/opt/jdk1.7.0_06

vi yarn-env.sh

export JAVA_HOME=/opt/jdk1.7.0_06
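After editing the XML files, a quick spot check that each still has balanced <property> blocks can catch a missed closing tag. A minimal sketch (the heredoc stands in for a real file; point f at each file under etc/hadoop instead):

```shell
# Quick well-formedness spot check: every <property> needs a matching </property>
# (illustrative file content; run against etc/hadoop/*.xml after editing)
f=$(mktemp)
cat > "$f" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.131.7:9000</value>
  </property>
</configuration>
EOF
open_n=$(grep -c '<property>' "$f")
close_n=$(grep -c '</property>' "$f")
[ "$open_n" -eq "$close_n" ] && echo "balanced: $open_n property block(s)"
```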

 

After configuring one machine, the files can be copied to the other machines in bulk:

scp yarn-site.xml mapred-site.xml slaves hdfs-site.xml yarn-env.sh hadoop-env.sh dataNode1:/home/hadoop/hadoop-2.5.2/etc/hadoop

scp yarn-site.xml mapred-site.xml slaves hdfs-site.xml yarn-env.sh hadoop-env.sh dataNode2:/home/hadoop/hadoop-2.5.2/etc/hadoop
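The two scp commands above can also be written as a loop over the node names; here echo prints each command instead of running it (remove the echo on a real cluster):

```shell
# Loop form of the per-node copy; echo makes this a dry run
files="yarn-site.xml mapred-site.xml slaves hdfs-site.xml yarn-env.sh hadoop-env.sh"
for node in dataNode1 dataNode2; do
  echo scp $files "$node:/home/hadoop/hadoop-2.5.2/etc/hadoop"
done
```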

7. Format the filesystem

hdfs namenode -format

8. Start and stop

Run from the Hadoop directory on the master:

sbin/start-all.sh    (equivalent to running start-dfs.sh and start-yarn.sh)

sbin/stop-all.sh    (equivalent to running stop-dfs.sh and stop-yarn.sh)

 

If startup reports the error: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Starting namenodes on [nameNode]

Download the matching version of the native library from:

http://dl.bintray.com/sequenceiq/sequenceiq-bin/

Extract it: tar -xvf hadoop-native-64-2.5.2.tar -C /home/hadoop/hadoop-2.5.2/lib/native/

scp * dataNode1:/home/hadoop/hadoop-2.5.2/lib/native/

scp * dataNode2:/home/hadoop/hadoop-2.5.2/lib/native/

Then check that the environment variable is set:

export JAVA_LIBRARY_PATH=/home/hadoop/hadoop-2.5.2/lib/native/

9. Check the running processes

jps

10. Test access

http://192.168.79.100:50070/    view HDFS node information and the file system (192.168.79.100 is the master's IP address)

http://192.168.79.100:8088/    view MapReduce job status

Troubleshooting

If you see: put: File /user/hadoop/input/mapred-site.xml._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation.

Disable the firewall on all nodes.

 

posted @ 2017-07-29 15:47 菜鸟的进击
