Setting Up a Hadoop Development Environment on Linux

2. Install and configure the Hadoop environment:
[1]. Download the Hadoop 2.7.5 package from the official Apache website: hadoop-2.7.5.tar.gz
[2]. Upload the hadoop-2.7.5.tar.gz package to /usr/local/hadoop with Xftp 5.
[3]. Log in to the Linux server with Xshell 5 and change into the directory: cd /usr/local/hadoop:
[root@marklin hadoop]# cd /usr/local/hadoop
       [root@marklin hadoop]#
       Then extract the archive with tar: tar -xvf hadoop-2.7.5.tar.gz
[root@marklin hadoop]# tar -xvf hadoop-2.7.5.tar.gz
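A quick way to confirm the archive unpacked correctly (the listing below assumes the default layout of the 2.7.5 tarball and the upload path used above):
[root@marklin hadoop]# ls /usr/local/hadoop
hadoop-2.7.5  hadoop-2.7.5.tar.gz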
[4]. Configure the Hadoop environment variables; enter: vim /etc/profile and append:
  #Setting HADOOP_HOME PATH
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.7.5
export PATH=${PATH}:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:${HADOOP_HOME}/lib
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export YARN_HOME=${HADOOP_HOME}
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HDFS_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop
 

Save the file and apply the changes: source /etc/profile
[root@marklin ~]# source /etc/profile
       [root@marklin ~]#
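To confirm the new variables are active in the current shell, a quick check (assuming the paths above; the output shown is abbreviated to its first line):
[root@marklin ~]# hadoop version
Hadoop 2.7.5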
[5]. Edit the Hadoop configuration files:
(1) Configure core-site.xml. In the Hadoop configuration directory [/usr/local/hadoop/hadoop-2.7.5/etc/hadoop], enter: vim core-site.xml
[root@marklin ~]# cd /usr/local/hadoop/hadoop-2.7.5/etc/hadoop
 Enter: vim core-site.xml
and add the following properties inside the <configuration> element:
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://127.0.0.1:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/repository/hdfs/tmp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131702</value>
    </property>
Also create the tmp directory under the configured path /usr/local/hadoop/repository/hdfs: mkdir tmp (see the one-liner below).
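A single command that creates the full path, including any missing parent directories (path as configured above):
[root@marklin hadoop]# mkdir -p /usr/local/hadoop/repository/hdfs/tmp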
(2) Edit hdfs-site.xml: vim hdfs-site.xml
[root@marklin hadoop]# vim hdfs-site.xml
[root@marklin hadoop]#
and add the following properties inside the <configuration> element:
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop/repository/hdfs/name</value>
        <final>true</final>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/hadoop/repository/hdfs/data</value>
        <final>true</final>
    </property>
    <property>
        <name>dfs.http.address</name>
        <value>127.0.0.1:50070</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>127.0.0.1:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
Likewise, create the name and data directories under /usr/local/hadoop/repository/hdfs: mkdir name and mkdir data (see the command below).
(3) Create mapred-site.xml from the bundled template; enter: cp mapred-site.xml.template mapred-site.xml
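As with the tmp directory, both can be created in one step (bash brace expansion, paths as configured above):
[root@marklin hadoop]# mkdir -p /usr/local/hadoop/repository/hdfs/{name,data}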
[root@marklin hadoop]# cp mapred-site.xml.template mapred-site.xml
Edit mapred-site.xml and add the following properties inside the <configuration> element:
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapred.job.tracker</name>
        <value>hdfs://127.0.0.1:8021/</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>127.0.0.1:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>127.0.0.1:19888</value>
    </property>
    <property>
        <name>mapred.system.dir</name>
        <value>/usr/local/hadoop/repository/mapreduce/system</value>
        <final>true</final>
    </property>
    <property>
        <name>mapred.local.dir</name>
        <value>/usr/local/hadoop/repository/mapreduce/local</value>
        <final>true</final>
    </property>
(4) Edit yarn-site.xml; enter: vim yarn-site.xml
[root@marklin hadoop]# vim yarn-site.xml
[root@marklin hadoop]#
and add the following properties inside the <configuration> element:
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>127.0.0.1</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>${yarn.resourcemanager.hostname}:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>${yarn.resourcemanager.hostname}:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>${yarn.resourcemanager.hostname}:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>${yarn.resourcemanager.hostname}:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>${yarn.resourcemanager.hostname}:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>768</value>
    </property>
[6]. In the Hadoop configuration directory [/usr/local/hadoop/hadoop-2.7.5/etc/hadoop],
set JAVA_HOME in hadoop-env.sh, mapred-env.sh, and yarn-env.sh: export JAVA_HOME=/usr/local/java/jdk1.8.0_162
Enter: vim hadoop-env.sh:
[root@marklin hadoop]# vim hadoop-env.sh
[root@marklin hadoop]#
export JAVA_HOME=/usr/local/java/jdk1.8.0_162
Enter: vim mapred-env.sh:
[root@marklin hadoop]# vim mapred-env.sh
[root@marklin hadoop]#
export JAVA_HOME=/usr/local/java/jdk1.8.0_162
Enter: vim yarn-env.sh:
[root@marklin hadoop]# vim yarn-env.sh
[root@marklin hadoop]#
export JAVA_HOME=/usr/local/java/jdk1.8.0_162
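To double-check that all three scripts now point at the same JDK (path as assumed above), grep works across the files at once:
[root@marklin hadoop]# grep 'export JAVA_HOME' hadoop-env.sh mapred-env.sh yarn-env.sh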
 
[7]. Open port 50070:
(1) Start the firewall service: systemctl start firewalld.service
[root@marklin ~]# systemctl start firewalld.service
[root@marklin ~]#
(2) Open the port: firewall-cmd --zone=public --add-port=50070/tcp --permanent
[root@marklin ~]# firewall-cmd --zone=public --add-port=50070/tcp --permanent
[root@marklin ~]#
(3) Reload the firewall rules: firewall-cmd --reload
[root@marklin ~]# firewall-cmd --reload
[root@marklin ~]# 
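To verify the port is now open in the public zone (matching the --zone used above; the output should include 50070/tcp, possibly alongside other ports):
[root@marklin ~]# firewall-cmd --zone=public --list-ports
50070/tcp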
(4) Format the NameNode: hdfs namenode -format
[root@marklin ~]# hdfs namenode -format
[root@marklin ~]#
(5) Run the startup script: start-all.sh
[root@marklin ~]# start-all.sh
[root@marklin ~]#
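Once start-all.sh finishes, jps should list the HDFS and YARN daemons for a single-node setup like this one (PIDs omitted below; the exact set may vary):
[root@marklin ~]# jps
NameNode
DataNode
SecondaryNameNode
ResourceManager
NodeManager
Jps
The NameNode web UI should then be reachable in a browser at http://127.0.0.1:50070.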
