Setting Up a Hadoop Cluster on CentOS 7

  1. Configure a static IP

Edit the target file:    vi /etc/sysconfig/network-scripts/ifcfg-enp0s3

Modify it as follows:

BOOTPROTO=static
IPADDR=192.168.31.190
NETMASK=255.255.255.0
GATEWAY=192.168.31.1
DNS1=221.228.255.1
ONBOOT=yes    # bring the interface up at boot

Restart the network:    systemctl restart network
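To confirm the new address took effect (a quick check, not part of the original notes):

ip addr show enp0s3    # should show 192.168.31.190/24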

Configure the /etc/hosts file:    vi /etc/hosts
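A minimal set of entries, assuming 192.168.31.190 is the master; the worker addresses below are placeholders, adjust them to your network:

192.168.31.190    master
192.168.31.191    slave1
192.168.31.192    slave2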

Grant the hadoop user sudo privileges

Switch to the root user and edit the file:    vi /etc/sudoers

Add the line:    hadoop    ALL=(root)    NOPASSWD:ALL
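As a quick check (not in the original notes), switch back to the hadoop user and run a command through sudo; it should succeed without asking for a password:

su - hadoop
sudo whoami    # should print: root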

  2. Configure passwordless SSH login for the hadoop user

As the hadoop user, go to the ~/.ssh directory and run:    ssh-keygen -t rsa

Append the public key to authorized_keys:    cat id_rsa.pub >> authorized_keys

Check parent directory and file permissions: ssh refuses key login if the home directory or ~/.ssh is group-writable

chmod g-w /home/hadoop

chmod 700 /home/hadoop/.ssh

chmod 600 /home/hadoop/.ssh/authorized_keys

    Distribute the public key to the worker nodes:

       ssh-copy-id hadoop@slave1
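After the key is copied, logging in to the worker should no longer ask for a password (assuming slave1 resolves through /etc/hosts):

ssh hadoop@slave1 hostname    # prints slave1 with no password prompt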

  3. Install the JDK

vi /etc/profile

export JAVA_HOME=/home/hadoop/local/jdk

export JRE_HOME=/home/hadoop/local/jdk/jre

export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
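Reload the profile and confirm the JDK is found; PATH is only extended in the next step, so call java by its full path for now (assuming the JDK was unpacked to /home/hadoop/local/jdk as above):

source /etc/profile
$JAVA_HOME/bin/java -version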

  4. Install Hadoop

vi /etc/profile

export HADOOP_HOME=/home/hadoop/local/hadoop

export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
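After reloading the profile, both commands should now resolve from PATH (a quick sanity check):

source /etc/profile
java -version
hadoop version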

Add the Java environment to hadoop-env.sh
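In etc/hadoop/hadoop-env.sh this is a single line, pointing at the same JDK path used in /etc/profile:

export JAVA_HOME=/home/hadoop/local/jdk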

 

Add the worker hostnames to the slaves file
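etc/hadoop/slaves lists one worker hostname per line; only slave1 appears elsewhere in these notes, slave2 below is a placeholder for any additional worker:

slave1
slave2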

Configure core-site.xml

    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/local/hadoop/tmp</value>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>

 

Configure hdfs-site.xml

    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///home/hadoop/local/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///home/hadoop/local/hadoop/tmp/dfs/data</value>
    </property>
    <property>
        <name>dfs.http.address</name>
        <value>master:50070</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:50090</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>

 

Configure mapred-site.xml

    <property>
        <name>mapred.job.tracker</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>mapred.map.tasks</name>
        <value>20</value>
    </property>
    <property>
        <name>mapred.reduce.tasks</name>
        <value>4</value>
    </property>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>

 

Configure yarn-site.xml

    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

 

Copy the configured directories to the other nodes (see the scp sketch below), then format the NameNode on the master:
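The copy can be done with scp, assuming the directory layout used above and slave1 as the worker (repeat for each node):

scp -r /home/hadoop/local/jdk hadoop@slave1:/home/hadoop/local/
scp -r /home/hadoop/local/hadoop hadoop@slave1:/home/hadoop/local/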

hdfs namenode -format

Turn off the firewall:    systemctl stop firewalld
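HDFS and YARN need to be running before the checks below return anything; start them from the master with the standard Hadoop 2.x scripts (not listed in the original notes):

/home/hadoop/local/hadoop/sbin/start-dfs.sh
/home/hadoop/local/hadoop/sbin/start-yarn.sh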

/home/hadoop/local/hadoop/bin/hdfs dfsadmin -report

/home/hadoop/local/hadoop/bin/hdfs fsck / -files -blocks
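The NameNode web UI configured above shows the same information as dfsadmin -report: open http://master:50070 in a browser on a machine whose hosts file also maps master.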
