Hadoop Cluster Deployment

Preparation

  • Prepare 3 client machines (disable the firewall, set static IPs, set hostnames) and configure passwordless SSH between them (a sketch follows this list); I actually configured 4.
  • Install the JDK and set the JAVA_HOME environment variable;
  • Install Hadoop and set the HADOOP_HOME environment variable;
  • Configure the cluster;
  • Start daemons one at a time;
  • Start the whole cluster and test it.
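
Passwordless SSH is what lets start-dfs.sh, start-yarn.sh, and the helper scripts below reach every node. A minimal sketch of setting it up (assuming flink01–flink04 already resolve, e.g. via /etc/hosts):

# run on each machine that needs to log into the others
# (at minimum flink01 and flink03, which launch the HDFS and YARN daemons)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa    # key pair without a passphrase

# push the public key to every host, including the local one
for host in flink01 flink02 flink03 flink04; do
    ssh-copy-id "$host"
done

# verify: should print the remote hostname without asking for a password
ssh flink02 hostname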

Much of this groundwork was covered in earlier posts, so here I'll only go over the main points and the parts not yet done:

  • Linux installation, static IP configuration, and installing the JDK and Tomcat
  • Networking between Linux virtual machines
  • A few handy Linux scripts

Environment Variables

export JAVA_HOME=/opt/jdk/jdk-11.0.11
export HADOOP_HOME=/opt/hadoop-3.3.1
export PATH=${JAVA_HOME}/bin:${HADOOP_HOME}/bin:$PATH
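
These exports need to be in place on every node. One common approach is a profile snippet under /etc/profile.d/ (the file name hadoop_env.sh is my choice, not prescribed):

# as root, on every node
sudo tee /etc/profile.d/hadoop_env.sh <<'EOF'
export JAVA_HOME=/opt/jdk/jdk-11.0.11
export HADOOP_HOME=/opt/hadoop-3.3.1
export PATH=${JAVA_HOME}/bin:${HADOOP_HOME}/bin:$PATH
EOF

# pick up the change in the current shell
source /etc/profile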

Configuring the Cluster

  • Cluster layout plan:

Daemon \ Host | flink01            | flink02                     | flink03                      | flink04
HDFS          | NameNode, DataNode | SecondaryNameNode, DataNode | DataNode                     | DataNode
YARN          | NodeManager        | NodeManager                 | ResourceManager, NodeManager | NodeManager
  • Edit the 4 key configuration files under $HADOOP_HOME/etc/hadoop/:
    core-site.xml
    hdfs-site.xml
    mapred-site.xml
    yarn-site.xml
# core-site.xml
<configuration>
    <!-- Default file system: the NameNode RPC address -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://flink01:8020</value>
    </property>
    <!-- Base directory for Hadoop's working data -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop-3.3.1/data</value>
    </property>
    <!-- Static user for the HDFS web UI file browser -->
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>liuwen</value>
    </property>
</configuration>

# hdfs-site.xml
<configuration>
    <!-- NameNode web UI address -->
    <property>
        <name>dfs.namenode.http-address</name>
        <value>flink01:9870</value>
    </property>
    <!-- SecondaryNameNode web UI address -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>flink02:9868</value>
    </property>
</configuration>

# mapred-site.xml
<configuration>
    <!-- Run MapReduce jobs on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- JobHistory server RPC address -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>flink01:10020</value>
    </property>
    <!-- JobHistory server web UI address -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>flink01:19888</value>
    </property>
</configuration>

# yarn-site.xml
<configuration>
    <!-- Auxiliary service for the MapReduce shuffle -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- ResourceManager host -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>flink03</value>
    </property>
    <!-- Environment variables inherited by YARN containers -->
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
    <!-- Enable log aggregation -->
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <!-- Log server URL (the JobHistory server) -->
    <property>
        <name>yarn.log.server.url</name>
        <value>http://flink01:19888/jobhistory/logs</value>
    </property>
    <!-- Keep aggregated logs for 7 days -->
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>
</configuration>

  • List every cluster node, one hostname per line, in $HADOOP_HOME/etc/hadoop/workers:
flink01
flink02
flink03
flink04
  • Distribute the configuration files and check that they are identical on every host (xsync is the rsync-based helper from the scripts post; a sketch follows the command):
[liuwen@flink01 hadoop]$ ~/bin/xsync ./hadoop
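
For reference, a minimal sketch of what such an xsync script might look like — this is my assumption based on how it's invoked, not the exact script from the earlier post (it assumes rsync is installed on every host):

#!/bin/bash
# xsync: copy files/directories to the same absolute path on every other node

if [ $# -lt 1 ]; then
    echo "Usage: xsync <file-or-dir>..."
    exit 1
fi

for host in flink02 flink03 flink04; do
    echo "---------- syncing to $host ------------"
    for path in "$@"; do
        if [ -e "$path" ]; then
            dir=$(cd -P "$(dirname "$path")" && pwd)   # absolute parent directory
            name=$(basename "$path")
            ssh "$host" "mkdir -p $dir"
            rsync -av "$dir/$name" "$host:$dir"
        else
            echo "$path does not exist!"
        fi
    done
done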

Starting HDFS

The NameNode must be formatted before the very first start. (To re-format later, first stop every daemon and delete the data and logs directories on all nodes; otherwise the DataNodes keep the old cluster ID and refuse to join.)

# format the NameNode (first start only)
[liuwen@flink01 hadoop-3.3.1]$ hdfs namenode -format

# start HDFS
[liuwen@flink01 hadoop-3.3.1]$ sbin/start-dfs.sh
Starting namenodes on [flink01]
Starting datanodes
Starting secondary namenodes [flink02]
2021-08-29 01:58:23,769 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[liuwen@flink01 hadoop-3.3.1]$

View HDFS in the browser at http://flink01:9870, then open Utilities -> Browse the file system.

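You can also smoke-test HDFS from the command line (the /input path is just an arbitrary example):

[liuwen@flink01 hadoop-3.3.1]$ hdfs dfs -mkdir /input
[liuwen@flink01 hadoop-3.3.1]$ hdfs dfs -put etc/hadoop/core-site.xml /input
[liuwen@flink01 hadoop-3.3.1]$ hdfs dfs -ls /input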

Starting YARN

Note that start-yarn.sh has to be run on the ResourceManager node, which is flink03 in this layout:

[liuwen@flink01 hadoop-3.3.1]$ ssh flink03
[liuwen@flink03 ~]$ cd /opt/hadoop-3.3.1/
[liuwen@flink03 hadoop-3.3.1]$ sbin/start-yarn.sh
Starting resourcemanager
Starting nodemanagers
[liuwen@flink03 hadoop-3.3.1]$

View YARN in the browser at http://flink03:8088.
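
To exercise HDFS, YARN, and MapReduce together, submit the pi example that ships with Hadoop (the jar path assumes the stock 3.3.1 tarball layout):

[liuwen@flink01 hadoop-3.3.1]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar pi 2 10

The job then shows up on the 8088 UI while it runs.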

Checking the Daemons on Each Host

jpsall is another helper from the scripts post; it runs jps on every node (a sketch follows the output below).

[liuwen@flink01 hadoop-3.3.1]$ ~/bin/jpsall
---------- flink01 jps ------------
51728 Jps
50840 NameNode
50968 DataNode
51468 NodeManager
---------- flink02 jps ------------
51159 Jps
50633 SecondaryNameNode
50508 DataNode
50910 NodeManager
---------- flink03 jps ------------
50977 NodeManager
50850 ResourceManager
50493 DataNode
51469 Jps
---------- flink04 jps ------------
50736 NodeManager
50983 Jps
50458 DataNode
[liuwen@flink01 hadoop-3.3.1]$
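
For reference, a minimal jpsall sketch — again my assumption about the script's shape, not necessarily the original (it relies on passwordless SSH and on jps being resolvable in a non-interactive remote shell):

#!/bin/bash
# jpsall: list the Java processes running on every node
for host in flink01 flink02 flink03 flink04; do
    echo "---------- $host jps ------------"
    ssh "$host" jps
done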

A Start/Stop Script for Hadoop

Starting and stopping everything by hand is tedious, so let's write a script.

#!/bin/bash

if [ $# -lt 1 ]; then
    echo "Usage: $0 start|stop"
    exit 1
fi

case $1 in
"start")
    echo "---------- starting HDFS ------------"
    ssh flink01 "/opt/hadoop-3.3.1/sbin/start-dfs.sh"
    echo "---------- starting YARN ------------"
    ssh flink03 "/opt/hadoop-3.3.1/sbin/start-yarn.sh"
    echo "---------- starting JobHistory server ------------"
    # the JobHistory server belongs on flink01, matching mapreduce.jobhistory.address
    ssh flink01 "/opt/hadoop-3.3.1/bin/mapred --daemon start historyserver"
    ;;
"stop")
    echo "---------- stopping JobHistory server ------------"
    ssh flink01 "/opt/hadoop-3.3.1/bin/mapred --daemon stop historyserver"
    echo "---------- stopping YARN ------------"
    ssh flink03 "/opt/hadoop-3.3.1/sbin/stop-yarn.sh"
    echo "---------- stopping HDFS ------------"
    ssh flink01 "/opt/hadoop-3.3.1/sbin/stop-dfs.sh"
    ;;
*)
    echo "Unknown argument: $1 (expected start or stop)"
    ;;
esac
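
Save the script somewhere on your PATH (the name myhadoop.sh below is arbitrary), make it executable, and the whole cluster comes up or down with one command:

[liuwen@flink01 ~]$ chmod +x ~/bin/myhadoop.sh
[liuwen@flink01 ~]$ myhadoop.sh start
[liuwen@flink01 ~]$ myhadoop.sh stop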

Port Reference

Port                              | Hadoop 3.x
NameNode internal RPC             | 8020 / 9000 / 9820
NameNode web UI                   | 9870
YARN web UI (view MapReduce jobs) | 8088
JobHistory server web UI          | 19888

Starting and Stopping Individual Daemons

  • HDFS daemons:
hdfs --daemon start/stop namenode/datanode/secondarynamenode
  • YARN daemons:
yarn --daemon start/stop resourcemanager/nodemanager
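
This is handy when a single daemon misbehaves. For example, to bounce only the DataNode on flink04:

[liuwen@flink04 ~]$ hdfs --daemon stop datanode
[liuwen@flink04 ~]$ hdfs --daemon start datanode
[liuwen@flink04 ~]$ jps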
