Installing the JDK on Linux and Setting Up a Hadoop Runtime Environment

1. Installing the JDK on Linux

(1) Download the JDK into /opt/install, create a soft directory under /opt, and extract the archive into it:

tar xvf ./jdk-8u321-linux-x64.tar.gz -C /opt/soft/

(2) Rename the extracted directory

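The screenshot showed the rename step. A minimal sketch of the equivalent command, assuming the archive unpacked as jdk1.8.0_321 (the usual directory name for the 8u321 tarball) and using the jdk180 name that JAVA_HOME expects below:

mv /opt/soft/jdk1.8.0_321 /opt/soft/jdk180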

(3) Configure the environment variables: vim /etc/profile

#JAVA_HOME

export JAVA_HOME=/opt/soft/jdk180

export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

export PATH=$PATH:$JAVA_HOME/bin

(4) Reload the profile and verify that the installation succeeded

source /etc/profile

java -version
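A quick sanity check that the variables took effect (the expected values follow from the settings above):

echo $JAVA_HOME   # should print /opt/soft/jdk180
which java        # should resolve to /opt/soft/jdk180/bin/java if no other JDK precedes it on the PATH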

2. Setting Up the Hadoop Runtime Environment

2.1  Install the JDK: see section 1 above

2.2  Download and install Hadoop

Extract the archive into the soft directory and rename it hadoop313:

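The screenshot showed the extract-and-rename step. A sketch of the likely commands, assuming the tarball is hadoop-3.1.3.tar.gz (the hadoop313 target name suggests version 3.1.3):

tar xvf ./hadoop-3.1.3.tar.gz -C /opt/soft/
mv /opt/soft/hadoop-3.1.3 /opt/soft/hadoop313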

Change the directory's owner to root:

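A sketch of the ownership change from the screenshot (the root group is an assumption):

chown -R root:root /opt/soft/hadoop313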

Configure the environment variables: vim /etc/profile; once done, run source /etc/profile

# HADOOP_HOME

export HADOOP_HOME=/opt/soft/hadoop313

export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/lib

export HDFS_NAMENODE_USER=root

export HDFS_DATANODE_USER=root

export HDFS_SECONDARYNAMENODE_USER=root

export HDFS_JOURNALNODE_USER=root

export HDFS_ZKFC_USER=root

export YARN_RESOURCEMANAGER_USER=root

export YARN_NODEMANAGER_USER=root

export HADOOP_MAPRED_HOME=$HADOOP_HOME

export HADOOP_COMMON_HOME=$HADOOP_HOME

export HADOOP_HDFS_HOME=$HADOOP_HOME

export HADOOP_YARN_HOME=$HADOOP_HOME

export HADOOP_INSTALL=$HADOOP_HOME

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native

export HADOOP_LIBEXEC_DIR=$HADOOP_HOME/libexec

export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native

export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
 


Create the data directory data:

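A sketch of the likely command; the path matches the hadoop.tmp.dir value set in core-site.xml below:

mkdir -p /opt/soft/hadoop313/data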

Switch to the Hadoop configuration directory and list its contents in preparation for configuration:

cd /opt/soft/hadoop313/etc/hadoop


2.3  Configure single-node Hadoop

(1) Configure core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://kb129:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/soft/hadoop313/data</value>
    </property>
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>root</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
    </property>
</configuration>


(2) Configure hdfs-site.xml

1) Edit hadoop-env.sh
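The edit itself is not shown; typically the one change needed is to point Hadoop at the JDK explicitly, assuming the path from section 1:

# in /opt/soft/hadoop313/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/opt/soft/jdk180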

2) Then configure hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/opt/soft/hadoop313/data/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/opt/soft/hadoop313/data/dfs/data</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
</configuration>


(3) Configure yarn-site.xml:

<configuration>
    <property>
        <name>yarn.resourcemanager.connect.retry-interval.ms</name>
        <value>20000</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
    </property>
    <property>
        <name>yarn.nodemanager.localizer.address</name>
        <value>kb129:8040</value>
    </property>
    <property>
        <name>yarn.nodemanager.address</name>
        <value>kb129:8050</value>
    </property>
    <property>
        <name>yarn.nodemanager.webapp.address</name>
        <value>kb129:8042</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>/opt/soft/hadoop313/yarndata/yarn</value>
    </property>
    <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>/opt/soft/hadoop313/yarndata/log</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
</configuration>


(4) Configure workers: change its contents to kb129 (the hostname)
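A one-line sketch of that change:

echo kb129 > /opt/soft/hadoop313/etc/hadoop/workers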

(5) Configure mapred-site.xml:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>kb129:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>kb129:19888</value>
    </property>
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>2048</value>
    </property>
    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>2048</value>
    </property>
    <property>
        <name>mapreduce.application.classpath</name>
        <value>/opt/soft/hadoop313/etc/hadoop:/opt/soft/hadoop313/share/hadoop/common/lib/*:/opt/soft/hadoop313/share/hadoop/common/*:/opt/soft/hadoop313/share/hadoop/hdfs/*:/opt/soft/hadoop313/share/hadoop/hdfs/lib/*:/opt/soft/hadoop313/share/hadoop/mapreduce/*:/opt/soft/hadoop313/share/hadoop/mapreduce/lib/*:/opt/soft/hadoop313/share/hadoop/yarn/*:/opt/soft/hadoop313/share/hadoop/yarn/lib/*</value>
    </property>
</configuration>


2.4  Start and test Hadoop

(1) Initialize the cluster by formatting the NameNode from the bin directory: hadoop namenode -format
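Because $HADOOP_HOME/bin is already on the PATH, the command can also be run from any directory; hdfs namenode -format is the non-deprecated spelling in Hadoop 3:

hdfs namenode -format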


(2) Set up passwordless SSH login

Back in the home directory, generate a key pair to configure passwordless login for kb129: ssh-keygen -t rsa -P ""

Append the local public key (~/.ssh/id_rsa.pub) to the root user's authorized_keys file, so SSH logins to this host can authenticate with the key: cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys

Equivalently, ssh-copy-id installs the public key into a host's authorized key list (here the node itself, kb129, since this is a single-node setup): ssh-copy-id -i ~/.ssh/id_rsa.pub -p22 root@kb129
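To verify, logging in over SSH should now succeed without a password prompt:

ssh kb129   # should log in without asking for a password; type exit to return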

(3) Start, stop, and inspect

[root@kb129 hadoop]# start-all.sh

[root@kb129 hadoop]# stop-all.sh

[root@kb129 hadoop]# jps

15089 NodeManager

16241 Jps

14616 DataNode

13801 ResourceManager

14476 NameNode

16110 SecondaryNameNode


 

(4) Test via the web UI: open http://192.168.142.129:9870/ in a browser
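Port 9870 is the default NameNode web UI port in Hadoop 3; the YARN ResourceManager UI is served on port 8088 by default (http://192.168.142.129:8088/).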

