Hadoop Setup on Three Virtual Machines

                                   Distributed Environment Setup

Step 1: Upload the archive and extract it

Extract:
tar -zxvf hadoop-2.6.0-cdh5.14.0.tar.gz -C ../servers/

Step 2: Check Hadoop's native compression support

cd /export/servers/hadoop-2.6.0-cdh5.14.0
bin/hadoop checknative

  •   If openssl is reported as missing, install it: yum -y install openssl-devel

Step 3: Modify the configuration files

 1.   cd /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop

      vim core-site.xml


    
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node01:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/tempDatas</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>
    <!-- HDFS trash: deleted files are kept for 10080 minutes (7 days) -->
    <property>
        <name>fs.trash.interval</name>
        <value>10080</value>
    </property>
</configuration>
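Hadoop reads these files as flat name/value property lists. The standard-library sketch below parses a fragment in the same `<property>` shape and converts `fs.trash.interval` from minutes into days, just to make the unit explicit (the fragment is a reduced copy of the file above, not anything Hadoop itself runs):

```python
import xml.etree.ElementTree as ET

# A fragment in the same <property><name>/<value> shape core-site.xml uses.
SNIPPET = """
<configuration>
    <property>
        <name>fs.trash.interval</name>
        <value>10080</value>
    </property>
</configuration>
"""

root = ET.fromstring(SNIPPET)
props = {p.findtext("name"): p.findtext("value") for p in root.iter("property")}

# 10080 minutes / (60 * 24) = 7 days before trash checkpoints are purged.
minutes = int(props["fs.trash.interval"])
print(minutes // (60 * 24))  # 7
```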
    

 

vim hdfs-site.xml

<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node01:50090</value>
    </property>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>node01:50070</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/namenodeDatas</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/datanodeDatas</value>
    </property>
    <property>
        <name>dfs.namenode.edits.dir</name>
        <value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/nn/edits</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/snn/name</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.edits.dir</name>
        <value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/nn/snn/edits</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>134217728</value>
    </property>
</configuration>
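The dfs.blocksize value is easier to read in mebibytes, and together with dfs.replication it determines how much physical space a file consumes. A quick arithmetic check of the two settings above:

```python
BLOCKSIZE = 134217728    # dfs.blocksize from hdfs-site.xml, in bytes
REPLICATION = 2          # dfs.replication from hdfs-site.xml

# 134217728 bytes is exactly 128 MiB per HDFS block.
print(BLOCKSIZE // (1024 * 1024))  # 128

# A 1 GiB file splits into 8 blocks; with replication 2 the cluster
# stores 16 block replicas across the DataNodes.
file_size = 1024 * 1024 * 1024
blocks = -(-file_size // BLOCKSIZE)   # ceiling division
print(blocks, blocks * REPLICATION)   # 8 16
```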
    

vim mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.job.ubertask.enable</name>
        <value>true</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>node01:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>node01:19888</value>
    </property>
</configuration>

vim yarn-site.xml

<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>node01</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

vim slaves (enter the names of your three hosts)

node01
node02
node03

Step 4: Configure the Hadoop environment variables

  vim /etc/profile.d/hadoop.sh

export HADOOP_HOME=/export/servers/hadoop-2.6.0-cdh5.14.0
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin

To apply it in the current shell without logging out again: source /etc/profile.d/hadoop.sh
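A quick sanity check that the variables resolve as intended, a minimal sketch assuming the install path used throughout this guide (each grep prints the matching PATH entry):

```shell
export HADOOP_HOME=/export/servers/hadoop-2.6.0-cdh5.14.0
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin

# Both bin and sbin should now appear in PATH.
echo "$PATH" | grep -o "$HADOOP_HOME/bin"
echo "$PATH" | grep -o "$HADOOP_HOME/sbin"
```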

 

Step 5: Copy to the other cluster nodes

Run from /export/servers:

  • scp -r hadoop-2.6.0-cdh5.14.0 node02:/export/servers/
  • scp -r hadoop-2.6.0-cdh5.14.0 node03:/export/servers/
  • scp -r /etc/profile.d/hadoop.sh node02:/etc/profile.d/
  • scp -r /etc/profile.d/hadoop.sh node03:/etc/profile.d/
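The four copies can also be written as one loop over the worker hosts from the slaves file. With echo in front this is a dry run that only prints the commands; drop the echo on the real cluster:

```shell
# Distribute the Hadoop install and the profile script to node02/node03.
# The leading echo makes this a dry run; remove it to actually copy.
for host in node02 node03; do
    echo scp -r hadoop-2.6.0-cdh5.14.0 "$host":/export/servers/
    echo scp -r /etc/profile.d/hadoop.sh "$host":/etc/profile.d/
done
```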

Step 6: On the master node, format the NameNode (only the first time): bin/hdfs namenode -format

Step 7: Start the whole cluster with one command: sbin/start-all.sh

Step 8: Verify the daemons started: run jps on the master node
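With this layout the roles follow directly from the files edited above: fs.defaultFS and the HTTP addresses place the NameNode and SecondaryNameNode on node01, yarn.resourcemanager.hostname places the ResourceManager there too, and every host listed in slaves runs a DataNode and a NodeManager. A rough sketch of what jps should list per host (daemon names only; jps also prints itself and the process IDs):

```python
# Daemons expected per host, derived from the config files above.
WORKER_DAEMONS = {"DataNode", "NodeManager"}  # every host in slaves
expected = {
    "node01": WORKER_DAEMONS | {"NameNode", "SecondaryNameNode", "ResourceManager"},
    "node02": set(WORKER_DAEMONS),
    "node03": set(WORKER_DAEMONS),
}

for host, daemons in sorted(expected.items()):
    print(host, sorted(daemons))
```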
