Hadoop HA Cluster Deployment - Part A - In Depth

Theory:
 HA: concept and purpose
    HA (High Availability) clusters are an effective way to guarantee business continuity. An HA cluster has two or more nodes, divided into active and standby roles: the node currently serving the workload is called the active node, and its backup is the standby node. When the active node fails and the running workload can no longer proceed normally, the standby node detects this and immediately takes over, so the service continues with no interruption, or only a brief one.
HDFS overview
Basic architecture

1. NameNode (Master)
1) Namespace management: the namespace supports filesystem-like operations on HDFS directories, files, and blocks - create, modify, delete, list, and so on.

2) Block storage management.
NameNode + HA architecture
(architecture diagram omitted)


  As the architecture diagram shows, a pair of nodes - an Active NameNode and a Standby NameNode - removes the single point of failure: the two nodes share state through the JournalNodes, while ZKFC elects the Active node, monitors its state, and performs automatic failover.

1. Active NameNode
  Accepts and handles client RPC requests, writes both its own Editlog and the Editlog on shared storage, and receives block reports, block location updates, and heartbeats from the DataNodes.

2. Standby NameNode
  Also receives block reports, block location updates, and heartbeats from the DataNodes, and additionally reads and replays the Editlog from shared storage, keeping its own metadata (namespace information + block locations map) in sync with the Active NameNode's. The Standby is therefore a hot standby: the moment it is switched to Active, it can serve NameNode requests.

3. JournalNode
  Synchronizes data between the Active and Standby NameNodes. The JournalNodes form a group, and the group should have an odd number of members.
4. ZKFC
  Monitors the NameNode process and performs automatic failover.
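
As a concrete check once the cluster from the later sections is up - a sketch assuming the nameservice "cluster1" and the NameNode IDs "big-master1"/"big-master2" configured below - you can ask HDFS which NameNode is currently active:

# query the HA state of each NameNode (the IDs come from dfs.ha.namenodes.cluster1)
hdfs haadmin -getServiceState big-master1    # prints "active" or "standby"
hdfs haadmin -getServiceState big-master2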

YARN overview
Basic architecture
1. ResourceManager (RM)
  Receives client job requests, receives and monitors resource reports from the NodeManagers (NM), allocates and schedules resources, and launches and monitors ApplicationMasters (AM).

2. NodeManager
  Manages the resources on its node, launches Containers to run tasks, reports resource and container status to the RM, and reports task progress to the AM.

3. ApplicationMaster
  Manages and schedules the tasks of a single Application (Job), requests resources from the RM, instructs the NMs to launch Containers, and receives task status from the NMs.

4. Web Application Proxy
  Protects YARN against web attacks. It is part of the ResourceManager by default, but can be configured to run as a separate process. Access to the ResourceManager web UI is based on trusted users; when an ApplicationMaster runs as an untrusted user, the connection it offers toward the ResourceManager may be untrusted, and the Web Application Proxy blocks such connections from reaching the RM.

5. Job History Server
    When a NodeManager starts, it initializes the LogAggregationService, which collects the container logs produced on that machine (once each container finishes) and stores them in a designated HDFS directory. The ApplicationMaster writes job history information to a temporary jobhistory directory in HDFS and moves it to the final directory when the job ends, which also enables job recovery. The history server exposes both a web UI and an RPC service, so users can retrieve job information through either.
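
This walkthrough does not set up a JobHistoryServer; if you want one, hadoop 2.x ships a start script for it (a sketch - it assumes log aggregation and the jobhistory addresses have already been configured in yarn-site.xml/mapred-site.xml):

# on the host chosen to run the history server
/usr/local/hadoop/sbin/mr-jobhistory-daemon.sh start historyserver
jps    # should now list JobHistoryServer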
ResourceManager + HA architecture

ResourceManager HA consists of an Active/Standby pair of nodes; internal data and the main application data and markers are persisted through the RMStateStore.

(architecture diagram omitted)

A few points to note:
HDFS HA normally consists of two NameNodes, one Active and one Standby. The Active NameNode serves requests; the Standby does not, and only mirrors the Active's state so that it can take over quickly when the Active fails.
Hadoop 2.0 officially offers two HDFS HA solutions, NFS and QJM; the simpler QJM is used here. In this scheme the active and standby NameNodes synchronize metadata through a group of JournalNodes, and a write counts as successful once it reaches a majority of them, which is why an odd number of JournalNodes is usually configured. A ZooKeeper ensemble is also set up for ZKFC failover: when the Active NameNode dies, the Standby is automatically switched to Active.
YARN's ResourceManager used to be a single point of failure as well; this was solved in hadoop-2.4.1: there are two ResourceManagers, one Active and one Standby, with their state coordinated through zookeeper.
MapReduce on YARN can run a JobHistoryServer to record information about finished jobs; without it, only currently running jobs can be inspected.
Zookeeper is responsible for electing the active NameNode in HDFS and the active ResourceManager in YARN.
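
Once everything below is deployed, both elections are visible as znodes in ZooKeeper (a quick sketch; /hadoop-ha is created by "hdfs zkfc -formatZK", /yarn-leader-election by the ResourceManagers):

zkCli.sh -server big-master1:2181
# inside the zkCli shell:
ls /
# expect something like: [zookeeper, yarn-leader-election, hadoop-ha]
ls /hadoop-ha
# [cluster1]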

(1) Basic environment configuration:
[hadoop@big-master1 hadoop]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
## bigdata cluster ##
192.168.41.20  big-master1   #bigdata1  namenode1,zookeeper,resourcemanager
192.168.41.21  big-master2   #bigdata2  namenode2,zookeeper,slave,resourcemanager
192.168.41.22  big-slave01   #bigdata3  datanode1,zookeeper,slave
192.168.41.25  big-slave02   #bigdata4  datanode2,zookeeper,slave
192.168.41.27  big-slave03   #bigdata5  datanode3,zookeeper,slave

Software versions for the hadoop distributed system:
OS version:
[hadoop@big-master1 hadoop]$ cat /etc/redhat-release 
CentOS Linux release 7.7.1908 (Core)

JDK version:
Download JDK: https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
[hadoop@big-master1 ~]$ java -version
java version "1.8.0_251"
Java(TM) SE Runtime Environment (build 1.8.0_251-b08)
Java HotSpot(TM) 64-Bit Server VM (build 25.251-b08, mixed mode)

zookeeper version:
3.4.14
download :  https://zookeeper.apache.org/releases.html 
wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz
yum install glibc-headers gcc gcc-c++ make cmake openssl-devel ncurses-devel -y

hadoop version:
2.8.5
## apache hadoop
download : http://hadoop.apache.org
https://archive.apache.org/dist/hadoop/common/

Network interface configuration:
[hadoop@big-master1 tools]$ cat /etc/sysconfig/network-scripts/ifcfg-enp0s3 
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s3
UUID=46ab62bc-448c-443b-9755-6bf8abbdc612
DEVICE=enp0s3
ONBOOT=yes
IPV6_PRIVACY=no
IPADDR=192.168.41.20
NETMASK=255.255.255.0
GATEWAY=192.168.41.1



####
Additions to the global environment variables:
[hadoop@big-master1 hadoop]$ cat /etc/profile
### JDK ###
JAVA_HOME=/usr/local/jdk1.8.0_251
CLASSPATH=$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH

### zookeeper  ##
export ZK_HOME=/usr/local/zookeeper
export PATH=$ZK_HOME/bin:$PATH

### hadoop ##
export HADOOP_HOME=/usr/local/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

## tools ##
export PATH=/home/hadoop/tools:$PATH


####
Disable security policies:
Disable SELinux:
cat > /etc/selinux/config << EOF
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
EOF

Check:
[root@big-slave03 local]# getenforce   --redhat 7 
Disabled
or 
[root@big-slave03 local]# sestatus -v
SELinux status:                 disabled

## Firewall (firewalld / iptables) ##
Firewall and port operations on CentOS 7
CentOS 7 manages individual services with systemctl:
Start the firewall: systemctl start firewalld
Check firewall status: systemctl status firewalld
Stop the firewall: systemctl stop firewalld
Enable the firewall at boot: systemctl enable firewalld
Disable the firewall at boot: systemctl disable firewalld
Check whether the firewall starts at boot: systemctl is-enabled firewalld
List started services: systemctl list-unit-files|grep enabled
List services that failed to start: systemctl --failed
When installing software, instead of switching the whole firewall on or off you can open individual ports:
Add a port: firewall-cmd --zone=public --add-port=80/tcp --permanent
Reload the firewall rules: firewall-cmd --reload
Check a port's status: firewall-cmd --zone=public --query-port=80/tcp
Remove an opened port: firewall-cmd --zone=public --remove-port=80/tcp --permanent
Every rule change must be followed by a reload: firewall-cmd --reload
After changing the rules you can list all open ports: firewall-cmd --zone=public --list-ports
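
This deployment simply turns firewalld off, but if your environment requires it to stay on, here is a sketch of opening the ports this cluster uses instead (the port list matches the configs later in this guide):

# HDFS 9000/50070, JournalNode 8485, ZooKeeper 2181/2888/3888, YARN 8032/8034/8088
for p in 9000 50070 8485 2181 2888 3888 8032 8034 8088; do
  firewall-cmd --zone=public --permanent --add-port=${p}/tcp
done
firewall-cmd --reload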

Check the status:
[root@big-slave03 local]# firewall-cmd --list-port
FirewallD is not running
[root@big-slave03 local]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)

## redhat 6  ##
/etc/init.d/iptables stop
chkconfig iptables off
chkconfig iptables --list 

####
Change the hostname - RedHat 7 series:
vim /etc/sysconfig/network   # set HOSTNAME to your hostname (RedHat 6 style)
vim /etc/hostname            # replace the old hostname with the new one
vim /etc/hosts               # update the hostname mappings accordingly

## redhat 7.7 ##
[root@big-slave03 local]# cat /etc/sysconfig/network
# Created by anaconda

[root@big-slave03 local]# cat /etc/hostname 
big-slave03

[root@big-slave03 local]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
## bigdata cluster ##
192.168.41.20  big-master1   #bigdata1  namenode1,zookeeper,resourcemanager
192.168.41.21  big-master2   #bigdata2  namenode2,zookeeper,slave,resourcemanager
192.168.41.22  big-slave01   #bigdata3  datanode1,zookeeper,slave
192.168.41.25  big-slave02   #bigdata4  datanode2,zookeeper,slave
192.168.41.27  big-slave03   #bigdata5  datanode3,zookeeper,slave

####
Create the hadoop user:
groupadd -g 1100 hadoop
useradd -m -u 1200 -g hadoop hadoop
[hadoop@big-master1 ~]$ id hadoop
uid=1200(hadoop) gid=1100(hadoop) groups=1100(hadoop)
passwd hadoop

####
SSH equivalence check & passwordless login setup:
The steps below follow an Oracle RAC-style recipe and are the same on redhat 6 and redhat 7:
Setup:
mkdir  ~/.ssh
chmod -R 755  ~/.ssh/
ssh-keygen -t rsa
touch ~/.ssh/authorized_keys
chmod 644 ~/.ssh/authorized_keys
(aa)
10.180.53.1:
ssh 10.180.53.1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  

10.180.53.2:
ssh 10.180.53.2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  

10.180.53.3:
ssh 10.180.53.3 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  

10.180.53.4: 
ssh 10.180.53.4 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  
(bb)
10.180.53.1:
ssh 10.180.53.2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  
ssh 10.180.53.3 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  
ssh 10.180.53.4 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  

(cc)
10.180.53.1:
scp ~/.ssh/authorized_keys 10.180.53.2:.ssh/authorized_keys
scp ~/.ssh/authorized_keys 10.180.53.3:.ssh/authorized_keys
scp ~/.ssh/authorized_keys 10.180.53.4:.ssh/authorized_keys

Adding a node:
If you add (or replace) a server and need to redo SSH, just:
1. vim /home/mysql/.ssh/known_hosts and delete the stale entries;
2. ssh-copy-id mysql@10.180.53.3 and you are done (here 10.180.53.3 is the replacement server).
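
For this cluster's five hosts, the same result can be had with a compact loop (a sketch; run it as the hadoop user on each node while password authentication is still enabled):

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa    # skip if the key already exists
for h in big-master1 big-master2 big-slave01 big-slave02 big-slave03; do
  ssh-copy-id hadoop@${h}    # appends this node's public key to the remote authorized_keys
done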

Verification:
[hadoop@big-slave02 ~]$ ssh big-master2 date;date && ssh big-master1 date;date && ssh big-slave01 date;date && ssh big-slave02 date;date && ssh big-slave03 date;date
Fri May 15 16:39:59 CST 2020
Fri May 15 16:39:58 CST 2020
Fri May 15 16:39:58 CST 2020
Fri May 15 16:39:58 CST 2020
Fri May 15 16:39:59 CST 2020
Fri May 15 16:39:59 CST 2020
Fri May 15 16:39:59 CST 2020
Fri May 15 16:39:59 CST 2020
Fri May 15 16:39:59 CST 2020
Fri May 15 16:39:59 CST 2020

####
Time synchronization & time zone settings
# Set the time zone ##

Changing the time zone on RedHat 7:
[root@ip-172-31-29-22 sysconfig]# timedatectl
      Local time: Tue 2018-01-09 08:01:21 UTC
  Universal time: Tue 2018-01-09 08:01:21 UTC
        RTC time: Tue 2018-01-09 08:01:20
       Time zone: UTC (UTC, +0000)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: n/a
Set the time zone:
[root@ip-172-31-29-22 sysconfig]# timedatectl set-timezone Asia/Shanghai

Changing the time zone on RedHat 6:
vim /etc/sysconfig/clock
ZONE="Asia/Shanghai"
UTC=false
ARC=false

# Time synchronization with NTP #
Server side:
yum install ntp ntpdate -y

[root@big-slave03 local]# systemctl stop ntpd
[root@big-slave03 local]# systemctl start ntpd
[root@big-master1 ~]# systemctl start ntpd && systemctl status ntpd && ntpq -p
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-05-15 17:10:16 CST; 47ms ago
  Process: 5669 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 5670 (ntpd)
   CGroup: /system.slice/ntpd.service
           └─5670 /usr/sbin/ntpd -u ntp:ntp -g

May 15 17:10:16 big-master1 ntpd[5670]: Listen and drop on 0 v4wildcard 0.0.0.0 UDP 123
May 15 17:10:16 big-master1 ntpd[5670]: Listen and drop on 1 v6wildcard :: UDP 123
May 15 17:10:16 big-master1 ntpd[5670]: Listen normally on 2 lo 127.0.0.1 UDP 123
May 15 17:10:16 big-master1 ntpd[5670]: Listen normally on 3 enp0s3 192.168.41.20 UDP 123
May 15 17:10:16 big-master1 ntpd[5670]: Listen normally on 4 lo ::1 UDP 123
May 15 17:10:16 big-master1 ntpd[5670]: Listen normally on 5 enp0s3 fe80::cabd:f03f:32db:a908 UDP 123
May 15 17:10:16 big-master1 ntpd[5670]: Listening on routing socket on fd #22 for interface updates
May 15 17:10:16 big-master1 ntpd[5670]: inappropriate address 192.168.0.188 for the fudge command, line ignored
May 15 17:10:16 big-master1 ntpd[5670]: 0.0.0.0 c016 06 restart
May 15 17:10:16 big-master1 ntpd[5670]: 0.0.0.0 c012 02 freq_set kernel 22.512 PPM
     remote           refid      st t when poll reach   delay   offset  jitter
======================================================================
 dc1-1.500wan.co .INIT.          16 u    -   64    0    0.000    0.000   0.000

Client side:
vi /etc/ntp.conf
restrict master_ip_address nomodify notrap noquery
server master_ip_address
fudge master_ip_address stratum 10
#server 0.rhel.pool.ntp.org iburst
#server 1.rhel.pool.ntp.org iburst
#server 2.rhel.pool.ntp.org iburst
#server 3.rhel.pool.ntp.org iburst

Restart:
systemctl restart ntpd

Sync the time:
ntpdate -u master_ip_address
ntpq -p

Enable at boot:
systemctl enable ntpd

P.S.: For the nodes of this hadoop cluster I pointed time sync straight at the company's internal server, so the client side above was not configured here; configuring the server side alone was enough.
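
A quick way to confirm that the clocks agree across the cluster (a sketch using plain ssh; the runRemoteCmd.sh helper introduced in the next section can do the same):

for h in big-master1 big-master2 big-slave01 big-slave02 big-slave03; do
  echo -n "$h: "; ssh $h date
done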

####
Basic tool-script pool configuration:
[hadoop@big-master1 tools]$ pwd
/home/hadoop/tools
[hadoop@big-master1 tools]$ ls
deploy.conf  deploy.sh  runRemoteCmd.sh
[hadoop@big-master1 tools]$ cat deploy.conf 
big-master1,all,namenode,zookeeper,resourcemanager,
big-master2,all,slave,namenode,zookeeper,resourcemanager,
big-slave01,all,slave,datanode,zookeeper,
big-slave02,all,slave,datanode,zookeeper,
big-slave03,all,slave,datanode,zookeeper,
[hadoop@big-master1 tools]$ cat deploy.sh 
#!/bin/bash
#set -x

if [ $# -lt 3 ]
then 
  echo "Usage: ./deply.sh srcFile(or Dir) descFile(or Dir) MachineTag"
  echo "Usage: ./deply.sh srcFile(or Dir) descFile(or Dir) MachineTag confFile"
  exit 
fi

src=$1
dest=$2
tag=$3
if [ 'a'$4'a' == 'aa' ]
then
  confFile=/home/hadoop/tools/deploy.conf
else 
  confFile=$4
fi

if [ -f $confFile ]
then
  if [ -f $src ]
  then
    for server in `cat $confFile|grep -v '^#'|grep ','$tag','|awk -F',' '{print $1}'` 
    do
       scp $src $server":"${dest}
    done 
  elif [ -d $src ]
  then
    for server in `cat $confFile|grep -v '^#'|grep ','$tag','|awk -F',' '{print $1}'` 
    do
       scp -r $src $server":"${dest}
    done 
  else
      echo "Error: No source file exist"
  fi

else
  echo "Error: Please assign config file or run deploy.sh command with deploy.conf in same directory"
fi
[hadoop@big-master1 tools]$ cat runRemoteCmd.sh 
#!/bin/bash
#set -x

if [ $# -lt 2 ]
then 
  echo "Usage: ./runRemoteCmd.sh Command MachineTag"
  echo "Usage: ./runRemoteCmd.sh Command MachineTag confFile"
  exit 
fi

cmd=$1
tag=$2
if [ 'a'$3'a' == 'aa' ]
then

  confFile=/home/hadoop/tools/deploy.conf
else 
  confFile=$3
fi

if [ -f $confFile ]
then
    for server in `cat $confFile|grep -v '^#'|grep ','$tag','|awk -F',' '{print $1}'` 
    do
       echo "*******************$server***************************"
       ssh $server "source /etc/profile; $cmd"
    done 
else
  echo "Error: Please assign config file or run deploy.sh command with deploy.conf in same directory"
fi
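
Typical invocations of the two helpers (the target paths follow this guide's layout):

# push a directory to every host tagged "slave" in deploy.conf
deploy.sh /usr/local/hadoop /usr/local slave
# run a command on every host tagged "zookeeper" (or "all")
runRemoteCmd.sh "jps" all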


######################
(2) JDK installation
2. Software download and installation; the JDK steps below are quoted from a reference:
 2.1 Install the JDK
Download JDK: https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
FileZilla is a handy tool for moving files between Windows and the VMs; just download and unpack it:
Link: https://pan.baidu.com/s/193E3bfbHVpn5lsODg2ijJQ  access code: kwiw
With Xshell's built-in file manager you barely need a separate FTP tool.

Step 1: Remove OpenJDK
    Uninstall the preinstalled OpenJDK before installing the new JDK, mainly to ensure compatibility with the corresponding hadoop version.
   #  rpm -qa|grep openjdk -i   # find the installed OpenJDK packages; -i ignores case
   # yum remove java-1.6.0-openjdk-devel-1.6.0.0-6.1.13.4.el7_0.x86_64 java-1.7.0-openjdk-devel-1.7.0.65-2.5.1.2.el7_0.x86_64 java-1.7.0-openjdk-headless-1.7.0.65-2.5.1.2.el7_0.x86_64 java-1.7.0-openjdk-1.7.0.65-2.5.1.2.el7_0.x86_64 java-1.6.0-openjdk-1.6.0.0-6.1.13.4.el7_0.x86_64
  -- Remove OpenJDK with yum, the package manager of the RedHat family (similar to apt-get on Ubuntu); note that the package names above are space-separated.
Step 2: Install the JDK

1. Unpack
First unpack the downloaded JDK (the tar.gz archive lives in ~/dev):
[Randy@localhost ~]$ sudo mkdir /usr/lib/jdk # create /usr/lib/jdk if the path does not exist yet
[Randy@localhost ~]$ sudo tar -zxvf jdk-8u11-linux-i586.tar.gz -C /usr/lib/jdk # note: -C, --directory=DIR extracts into DIR
[Randy@localhost ~]$  ls /usr/lib/jdk
jdk1.8.0_11
[Randy@localhost ~]$ ls /usr/lib/jdk/jdk1.8.0_11/
bin        javafx-src.zip  man          THIRDPARTYLICENSEREADME-JAVAFX.txt
COPYRIGHT  jre             README.html  THIRDPARTYLICENSEREADME.txt
db         lib             release
include    LICENSE         src.zip
[Randy@localhost ~]$
Copy the contents of jdk1.8.0_11 into /usr/lib/jdk, then delete the jdk1.8.0_11 directory:

[Randy@localhost ~]$ sudo cp -rf /usr/lib/jdk/jdk1.8.0_11/* /usr/lib/jdk/ # copy
[Randy@localhost ~]$ 
[Randy@localhost ~]$  ls /usr/lib/jdk
bin        javafx-src.zip  LICENSE      src.zip
COPYRIGHT  jdk1.8.0_11     man          THIRDPARTYLICENSEREADME-JAVAFX.txt
db         jre             README.html  THIRDPARTYLICENSEREADME.txt
include    lib             release
[Randy@localhost ~]$ sudo rm -rf /usr/lib/jdk/jdk1.8.0_11/ # delete
[Randy@localhost ~]$  ls /usr/lib/jdk
bin        javafx-src.zip  man          THIRDPARTYLICENSEREADME-JAVAFX.txt
COPYRIGHT  jre             README.html  THIRDPARTYLICENSEREADME.txt
db         lib             release
include    LICENSE         src.zip
[Randy@localhost ~]$
 
2. Configure environment variables

[Randy@localhost ~]$ sudo vim /etc/profile
Append at the end of the file:
#JAVA Environment
export JAVA_HOME=/usr/lib/jdk
export JRE_HOME=/usr/lib/jdk/jre
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
export CLASSPATH=$CLASSPATH:.:$JAVA_HOME/lib:$JRE_HOME/lib

3. Set the system default JDK
[Randy@localhost ~]$  sudo update-alternatives --install /usr/bin/java java /usr/lib/jdk/bin/java 300  # make the default java command the one in /usr/lib/jdk/bin
[Randy@localhost ~]$  sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jdk/bin/javac 300  # make the default javac command the one in /usr/lib/jdk/bin
[Randy@localhost ~]$ sudo update-alternatives --install /usr/bin/jar jar /usr/lib/jdk/bin/jar 300 # make the default jar command the one in /usr/lib/jdk/bin
[Randy@localhost ~]$  sudo update-alternatives --config java   # choose the default java
There is 1 program that provides 'java'.
  Selection    Command
-----------------------------------------------
*+ 1          /usr/lib/jdk/bin/java
Press Enter to keep the current selection [+], or type a selection number: 1
[Randy@localhost ~]$ sudo update-alternatives --config javac   # choose the default javac
There is 1 program that provides 'javac'.
  Selection    Command
-----------------------------------------------
*+ 1          /usr/lib/jdk/bin/javac
Press Enter to keep the current selection [+], or type a selection number: 1

Step 3: Test the JDK
[Randy@localhost ~]$ java -version
java version "1.8.0_11"
Java(TM) SE Runtime Environment (build 1.8.0_11-b12)
Java HotSpot(TM) Server VM (build 25.11-b03, mixed mode)
[Randy@localhost ~]$ javac -version
javac 1.8.0_11
One problem came up during testing:

[Randy@localhost ~]$ java
-bash: /usr/bin/java: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
[Randy@localhost ~]$ ls /lib/ld-linux
ls: cannot access /lib/ld-linux: No such file or directory
[Randy@localhost ~]$ java -version
-bash: /usr/bin/java: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
[Randy@localhost ~]$
The fix:

[Randy@localhost ~]$ sudo yum install glibc.i686 # running a 32-bit program on a 64-bit system can fail with /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory; installing 32-bit glibc fixes it

Once finished, push the JDK files and configuration to every node of the hadoop cluster (big-master1, big-master2, big-slave01, big-slave02, big-slave03).
 #####

(3) zookeeper installation:
 zookeeper is a distributed, open-source coordination service. In a hadoop cluster it provides distributed locks, the HA mechanism, leader election, and so on.
Download:
download :  https://zookeeper.apache.org/releases.html 
yum install glibc-headers gcc gcc-c++ make cmake openssl-devel ncurses-devel -y
# cd /usr/local
# wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz
# tar -zxvf zookeeper-3.4.14.tar.gz
# cd zookeeper-3.4.14
Step 3: Create zoo.cfg from the sample config

# cp conf/zoo_sample.cfg conf/zoo.cfg
Step 4: Configure and start zookeeper

Zookeeper cluster-mode installation:

[hadoop@big-slave02 zookeeper]$ cd conf/
[hadoop@big-slave02 conf]$ pwd
/usr/local/zookeeper/conf
[hadoop@big-slave02 conf]$ ls
configuration.xsl  log4j.properties  zoo.cfg  zoo_sample.cfg
[hadoop@big-slave02 conf]$ cat zoo.cfg 
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/data/zookeeper/zkdata
#dataLogDir=/data/zookeeper/zkdatalog
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
#quorumListenOnAllIPs=true
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=big-master1:2888:3888
server.2=big-master2:2888:3888
server.3=big-slave01:2888:3888
server.4=big-slave02:2888:3888
server.5=big-slave03:2888:3888

zoo.cfg parameter notes:
## Because this is a multi-machine deployment, every node can use the same ports (for other layouts, see a Zookeeper introduction)
Parameter details:
tickTime: the basic heartbeat interval between Zookeeper servers, and between clients and servers; one heartbeat is sent every tickTime milliseconds.
initLimit: how many heartbeat intervals a client may take to finish its initial connection - here "client" means a Follower server in the ensemble connecting to the Leader, not a user client. If the Leader has heard nothing back after 10 heartbeats (tickTime), the connection attempt is considered failed; the total allowance is 10*2000 ms = 20 s.
syncLimit: the maximum number of tickTime intervals a request/response exchange between Leader and Follower may take; here the total is 5*2000 ms = 10 s.
dataDir: the directory where Zookeeper stores its data; by default the transaction log is written there too.
clientPort: the port clients use to connect; Zookeeper listens on it for client requests.
server.A=B:C:D: A is the server's number; B is its IP address (or hostname); C is the port used to exchange data with the ensemble Leader; D is the port used to run a new leader election if the Leader dies. In a pseudo-cluster (all instances on one host) B is the same for every entry, so each Zookeeper instance must be given distinct ports.

Step 5: Set the server ID - the myid file under dataDir.
On every node, create a unique id value in the dataDir configured in zoo.cfg (here /data/zookeeper/zkdata), matching that node's server.N entry; note that myid does not live in the install directory /usr/local/zookeeper-3.4.14/.

[hadoop@big-master1 tools]$ cat /data/zookeeper/zkdata/myid 
1
[hadoop@big-slave02 conf]$ cat /data/zookeeper/zkdata/myid 
4
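
The five myid files can also be written in one pass from big-master1 (a sketch; the order must match the server.N numbering in zoo.cfg, and /data/zookeeper/zkdata must be writable by hadoop on every node):

i=1
for h in big-master1 big-master2 big-slave01 big-slave02 big-slave03; do
  ssh $h "mkdir -p /data/zookeeper/zkdata && echo $i > /data/zookeeper/zkdata/myid"
  i=$((i+1))
done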

Finally, add zookeeper to the global environment variables in /etc/profile:
### zookeeper  ##
export ZK_HOME=/usr/local/zookeeper-3.4.14
export PATH=$ZK_HOME/bin:$PATH
source /etc/profile 

zookeeper start-up test:
## Start zookeeper

[hadoop@tidb07 ~]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.14/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@tidb07 ~]$ ps -ef |grep zookeeper
hadoop   30713     1 13 16:51 pts/0    00:00:01 /usr/local/jdk1.8.0_251/bin/java -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp /usr/local/zookeeper-3.4.14/bin/../zookeeper-server/target/classes:/usr/local/zookeeper-3.4.14/bin/../build/classes:/usr/local/zookeeper-3.4.14/bin/../zookeeper-server/target/lib/*.jar:/usr/local/zookeeper-3.4.14/bin/../build/lib/*.jar:/usr/local/zookeeper-3.4.14/bin/../lib/slf4j-log4j12-1.7.25.jar:/usr/local/zookeeper-3.4.14/bin/../lib/slf4j-api-1.7.25.jar:/usr/local/zookeeper-3.4.14/bin/../lib/netty-3.10.6.Final.jar:/usr/local/zookeeper-3.4.14/bin/../lib/log4j-1.2.17.jar:/usr/local/zookeeper-3.4.14/bin/../lib/jline-0.9.94.jar:/usr/local/zookeeper-3.4.14/bin/../lib/audience-annotations-0.5.0.jar:/usr/local/zookeeper-3.4.14/bin/../zookeeper-3.4.14.jar:/usr/local/zookeeper-3.4.14/bin/../zookeeper-server/src/main/resources/lib/*.jar:/usr/local/zookeeper-3.4.14/bin/../conf:/usr/local/jdk1.8.0_251/lib/tools.jar/usr/local/jdk1.8.0_251/lib/dt.jar -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/local/zookeeper-3.4.14/bin/../conf/zoo.cfg
hadoop   30739 30662  0 16:52 pts/0    00:00:00 grep --color=auto zookeeper
[hadoop@tidb07 ~]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower    ## this is a standby node; the leader is the primary

### another node
[hadoop@pd1-500 ~]$ zkServer.sh  status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: leader

Stop zookeeper:
[hadoop@tidb07 ~]$ zkServer.sh stop 

Start the remaining zookeeper instances with the helper script:
Use the runRemoteCmd.sh script to start Zookeeper on all nodes.
-- The tools directory is also on the PATH via /etc/profile:
## tools ##
export PATH=/home/hadoop/tools:$PATH
##

[hadoop@big-master1 ~]$ cd /home/hadoop/tools/
[hadoop@big-master1 tools]$ ls
deploy.conf  deploy.sh  runRemoteCmd.sh
[hadoop@big-master1 ~]$ runRemoteCmd.sh "/usr/local/zookeeper/bin/zkServer.sh start " zookeeper
[hadoop@big-master1 ~]$ runRemoteCmd.sh "/usr/local/zookeeper/bin/zkServer.sh status " zookeeper
*******************big-master1***************************
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower
*******************big-master2***************************
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower
*******************big-slave01***************************
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: leader
*******************big-slave02***************************
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower
*******************big-slave03***************************
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower

-- The Mode line only appears once all nodes have been started; if only some are up, status reports no mode.
-- One leader and four followers means Zookeeper is installed successfully.

Check that the QuorumPeerMain process is running on every node.

[hadoop@big-master1 ~]$ runRemoteCmd.sh "jps" zookeeper
*******************big-master1***************************
30037 JournalNode
4023 ResourceManager
29642 DFSZKFailoverController
29804 NameNode
9132 Jps
28141 QuorumPeerMain
*******************big-master2***************************
20032 NameNode
20116 JournalNode
20324 DFSZKFailoverController
13722 Jps
18830 QuorumPeerMain
2462 ResourceManager
*******************big-slave01***************************
10161 NodeManager
7702 QuorumPeerMain
8583 DataNode
11751 Jps
8686 JournalNode
*******************big-slave02***************************
5187 DataNode
8227 Jps
6697 NodeManager
4362 QuorumPeerMain
5290 JournalNode
*******************big-slave03***************************
4562 QuorumPeerMain
5442 DataNode
8405 Jps
6903 NodeManager
5545 JournalNode

--- The listing above shows every running service once hadoop and YARN are fully deployed; QuorumPeerMain is the zookeeper process (highlighted in red in the original).

#############

(4) hadoop installation:

1. Download the open-source hadoop release:
## apache hadoop 
download : http://hadoop.apache.org
https://archive.apache.org/dist/hadoop/common/

2. Edit the configuration files. The files touched in this deployment:
slaves
hdfs-site.xml
core-site.xml
mapred-env.sh
mapred-site.xml
yarn-env.sh
hadoop-env.sh
yarn-site.xml

Configuration directory: /usr/local/hadoop-xxxx/etc/hadoop/
[hadoop@big-master1 ~]$ cd /usr/local/hadoop/etc/hadoop/
[hadoop@big-master1 hadoop]$ ls
capacity-scheduler.xml      hadoop-policy.xml        kms-log4j.properties        slaves
configuration.xsl           hdfs-site.xml            kms-site.xml                ssl-client.xml.example
container-executor.cfg      httpfs-env.sh            log4j.properties            ssl-server.xml.example
core-site.xml               httpfs-log4j.properties  mapred-env.cmd              yarn-env.cmd
hadoop-env.cmd              httpfs-signature.secret  mapred-env.sh               yarn-env.sh
hadoop-env.sh               httpfs-site.xml          mapred-queues.xml.template  yarn-site.xml
hadoop-metrics2.properties  kms-acls.xml             mapred-site.xml
hadoop-metrics.properties   kms-env.sh               mapred-site.xml.template

-- The files that need the JDK environment variable set are hadoop-env.sh and yarn-env.sh
[hadoop@big-master1 hadoop]$ cat hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_251   -- newly added

[hadoop@big-master1 hadoop]$ cat yarn-env.sh 
export JAVA_HOME=/usr/local/jdk1.8.0_251  -- newly added

2. Configure HDFS

2.1 Configure core-site.xml
[hadoop@big-master1 hadoop]$ cat /usr/local/hadoop/etc/hadoop/core-site.xml 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>

    <!-- Default filesystem: the HA nameservice, not a single host -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://cluster1</value>
    </property>

    <!-- Base directory for Hadoop working/temporary files -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/hadoop/data/hadoop_${user.name}</value>
    </property>

    <!-- ZooKeeper quorum used by ZKFC for automatic failover -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>big-master1:2181,big-master2:2181,big-slave01:2181,big-slave02:2181,big-slave03:2181</value>
    </property>

</configuration>

#####

2.2 Configure hdfs-site.xml:
[hadoop@big-master1 hadoop]$ cat hdfs-site.xml 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>

    <!-- Number of block replicas -->
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>

    <!-- Disable HDFS permission checking (lab setup) -->
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>

    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>

    <!-- Logical name of the nameservice -->
    <property>
        <name>dfs.nameservices</name>
        <value>cluster1</value>
    </property>

    <!-- NameNode IDs within nameservice cluster1 -->
    <property>
        <name>dfs.ha.namenodes.cluster1</name>
        <value>big-master1,big-master2</value>
    </property>

    <!-- RPC and HTTP addresses of each NameNode -->
    <property>
        <name>dfs.namenode.rpc-address.cluster1.big-master1</name>
        <value>big-master1:9000</value>
    </property>

    <property>
        <name>dfs.namenode.http-address.cluster1.big-master1</name>
        <value>big-master1:50070</value>
    </property>

    <property>
        <name>dfs.namenode.rpc-address.cluster1.big-master2</name>
        <value>big-master2:9000</value>
    </property>

    <property>
        <name>dfs.namenode.http-address.cluster1.big-master2</name>
        <value>big-master2:50070</value>
    </property>

    <!-- Enable automatic failover via ZKFC -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>

    <!-- JournalNode group that stores the shared edit log -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://big-master1:8485;big-master2:8485;big-slave01:8485;big-slave02:8485;big-slave03:8485/cluster1</value>
    </property>

    <!-- Proxy provider clients use to locate the active NameNode -->
    <property>
        <name>dfs.client.failover.proxy.provider.cluster1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>

    <!-- Local directory where each JournalNode keeps its edits -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/data/hadoop/data/journaldata/jn</value>
    </property>

    <!-- Fencing configuration -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>shell(/bin/true)</value>
    </property>

    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>

    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>10000</value>
    </property>

    <!-- NameNode RPC handler threads -->
    <property>
        <name>dfs.namenode.handler.count</name>
        <value>100</value>
    </property>

</configuration>

#######

2.3 Configure the slaves file
[hadoop@big-master1 hadoop]$ cat slaves 
big-slave01
big-slave02
big-slave03

2.4 scp the finished configuration to every node, namenodes and datanodes alike.
This can be done with the deploy.sh script:
 [hadoop@big-master1 tools]$ deploy.sh /usr/local/hadoop /usr/local slave

2.5 Start-up and initialization order once HDFS is configured (top to bottom):
2.5.1 zookeeper on all nodes
2.5.2 journalnode on all nodes
2.5.3 Initialization on the big-master1 (primary) node:
 a. Format the namenode: /usr/local/hadoop/bin/hdfs namenode -format 
 b. Initialize the HA state: /usr/local/hadoop/bin/hdfs zkfc -formatZK 
 b1. Before starting the primary namenode, make sure every journalnode is already running, otherwise the standby namenode will hit errors when it syncs.
[hadoop@big-master1 bin]$ runRemoteCmd.sh "/usr/local/hadoop/sbin/hadoop-daemon.sh start journalnode" all
[hadoop@big-master1 bin]$ runRemoteCmd.sh "/usr/local/hadoop/sbin/hadoop-daemon.sh status journalnode" all
 c. Then start the namenode: /usr/local/hadoop/bin/hdfs namenode 
 d. Once the three steps above have completed without errors, run the metadata sync on the standby namenode:
  namenode2: /usr/local/hadoop/bin/hdfs namenode -bootstrapStandby 
  -- this pulls over all of the primary namenode1's metadata.
 e. After namenode2 has finished syncing from the primary,
 f. start zkfc: /usr/local/hadoop/sbin/hadoop-daemon.sh start zkfc  (start it on the primary big-master1 first; if that looks fine, use the one-click script to bring up all remaining hdfs processes).
 --> Here you can stop the primary namenode and watch the standby take over.
 --> In the web UIs http://big-master1:50070 (primary) and http://big-master2:50070 (standby), one namenode shows active state and the other standby.

 g. Upload a test file to confirm everything works; all the commands below are plain hdfs commands.
[hadoop@big-master1 data]$ touch test01.txt    -- create a file in a directory you can write to
[hadoop@big-master1 data]$ vim test01.txt 
[hadoop@big-master1 data]$ cat test01.txt      -- arbitrary content
hadoop  big-data
hadoop  yarm 
rdbms   oracle
rdbms   mysql

Verification 1: via hdfs commands
[hadoop@big-master1 data]$ hdfs dfs -mkdir /test01   -- create a directory in the hdfs filesystem
[hadoop@big-master1 data]$ hdfs dfs -put test01.txt /test01    -- upload a file to hdfs
[hadoop@big-master1 data]$ hdfs dfs -ls /test01                      -- check that the upload succeeded
Found 1 items
-rw-r--r--   3 hadoop supergroup         61 2020-05-17 01:14 /test01/test01.txt

Verification 2: via the web UI (screenshot omitted)

Viewing the file:
[hadoop@big-master2 hadoop]$ hdfs dfs -cat /test01/test01.txt
hadoop  big-data
hadoop  yarm 
rdbms   oracle
rdbms   mysql

[hadoop@big-master2 hadoop]$ hdfs dfs -cat /test/test.txt
hadoop appache
hadoop ywendeng
hadoop tomcat

[hadoop@big-slave02 conf]$ hdfs dfs -cat /test01/test01.txt
hadoop  big-data
hadoop  yarm 
rdbms   oracle
rdbms   mysql

-- Everything up to here confirms that hdfs is installed successfully.
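
Before moving on to YARN, it is worth rehearsing a failover once (a sketch; assumes big-master1 is currently the active namenode):

ssh big-master1 "/usr/local/hadoop/sbin/hadoop-daemon.sh stop namenode"
hdfs haadmin -getServiceState big-master2    # should now report: active
ssh big-master1 "/usr/local/hadoop/sbin/hadoop-daemon.sh start namenode"
hdfs haadmin -getServiceState big-master1    # rejoins as: standby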

3. YARN installation and configuration:

3.1 Configure mapred-site.xml
[hadoop@big-master1 ~]$ cat /usr/local/hadoop/etc/hadoop/mapred-site.xml 
<?xml version="1.0"?>
<configuration>
    <!-- Run MapReduce jobs on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

3.2 Configure yarn-site.xml
[hadoop@big-master1 ~]$ cat /usr/local/hadoop/etc/hadoop/yarn-site.xml 
<?xml version="1.0"?>
<configuration>

    <!-- Retry interval when (re)connecting to the ResourceManager -->
    <property>
        <name>yarn.resourcemanager.connect.retry-interval.ms</name>
        <value>2000</value>
    </property>

    <!-- Enable ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>

    <!-- Automatic failover, using the embedded leader elector -->
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>

    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
        <value>true</value>
    </property>

    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yarn-rm-cluster</value>
    </property>

    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>

    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>big-master1</value>
    </property>

    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>big-master2</value>
    </property>

    <!-- Persist RM state so applications survive an RM restart/failover -->
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>

    <property>
        <name>yarn.resourcemanager.zk.state-store.address</name>
        <value>big-master1:2181,big-master2:2181,big-slave01:2181,big-slave02:2181,big-slave03:2181</value>
    </property>

    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>big-master1:2181,big-master2:2181,big-slave01:2181,big-slave02:2181,big-slave03:2181</value>
    </property>

    <!-- rm1 addresses -->
    <property>
        <name>yarn.resourcemanager.address.rm1</name>
        <value>big-master1:8032</value>
    </property>

    <property>
        <name>yarn.resourcemanager.scheduler.address.rm1</name>
        <value>big-master1:8034</value>
    </property>

    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>big-master1:8088</value>
    </property>

    <!-- rm2 addresses -->
    <property>
        <name>yarn.resourcemanager.address.rm2</name>
        <value>big-master2:8032</value>
    </property>

    <property>
        <name>yarn.resourcemanager.scheduler.address.rm2</name>
        <value>big-master2:8034</value>
    </property>

    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>big-master2:8088</value>
    </property>

    <!-- Shuffle service for MapReduce -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>

</configuration>

3.3 Start YARN
 /usr/local/hadoop/sbin/start-yarn.sh   -- on namenode1
 /usr/local/hadoop/sbin/yarn-daemon.sh start resourcemanager  -- run on namenode2

Verify both UIs:
 http://big-master1:8088 
 http://big-master2:8088 
 -- At this point only big-master1 (namenode1) serves the UI; big-master2 does not, because no failover has occurred yet - this is normal.
 -- Stop one resourcemanager and start it again, and watch how the web UIs change during the process; a concrete drill follows below.
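
A sketch of that drill, assuming rm1 on big-master1 is currently active:

/usr/local/hadoop/sbin/yarn-daemon.sh stop resourcemanager    # run on big-master1
yarn rmadmin -getServiceState rm2        # should now report: active
/usr/local/hadoop/sbin/yarn-daemon.sh start resourcemanager   # big-master1 rejoins
yarn rmadmin -getServiceState rm1        # standby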

Check the ResourceManager state:
[hadoop@big-master1 data]$ cd /usr/local/hadoop/bin/
[hadoop@big-master1 bin]$ ./yarn rmadmin -getServiceState rm1
active
[hadoop@big-master1 bin]$ ./yarn rmadmin -getServiceState rm2
standby

[hadoop@big-master2 bin]$ ./yarn rmadmin -getServiceState rm1
active
[hadoop@big-master2 bin]$ ./yarn rmadmin -getServiceState rm2
standby
[hadoop@big-slave01 bin]$ ./yarn rmadmin -getServiceState rm2
standby

Wordcount example test; if it completes without errors, the cluster has been installed successfully.
[hadoop@big-master1 ~]$ hadoop
hadoop             hadoop.cmd         hadoop-daemon.sh   hadoop-daemons.sh  
[hadoop@big-master1 ~]$ hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.5.jar wordcount /test01/test01.txt /test01/out/
20/05/17 01:37:18 INFO input.FileInputFormat: Total input files to process : 1
20/05/17 01:37:19 INFO mapreduce.JobSubmitter: number of splits:1
20/05/17 01:37:20 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1589524167213_0002
20/05/17 01:37:21 INFO impl.YarnClientImpl: Submitted application application_1589524167213_0002
20/05/17 01:37:21 INFO mapreduce.Job: The url to track the job: http://big-master1:8088/proxy/application_1589524167213_0002/
20/05/17 01:37:21 INFO mapreduce.Job: Running job: job_1589524167213_0002
20/05/17 01:37:32 INFO mapreduce.Job: Job job_1589524167213_0002 running in uber mode : false
20/05/17 01:37:32 INFO mapreduce.Job:  map 0% reduce 0%
20/05/17 01:37:40 INFO mapreduce.Job:  map 100% reduce 0%
20/05/17 01:37:49 INFO mapreduce.Job:  map 100% reduce 100%
20/05/17 01:37:53 INFO mapreduce.Job: Job job_1589524167213_0002 completed successfully
20/05/17 01:37:53 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=82
        FILE: Number of bytes written=324515
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=159
        HDFS: Number of bytes written=52
        HDFS: Number of read operations=6
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters 
        Launched map tasks=1
        Launched reduce tasks=1
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=5484
        Total time spent by all reduces in occupied slots (ms)=6662
        Total time spent by all map tasks (ms)=5484
        Total time spent by all reduce tasks (ms)=6662
        Total vcore-milliseconds taken by all map tasks=5484
        Total vcore-milliseconds taken by all reduce tasks=6662
        Total megabyte-milliseconds taken by all map tasks=5615616
        Total megabyte-milliseconds taken by all reduce tasks=6821888
    Map-Reduce Framework
        Map input records=5
        Map output records=8
        Map output bytes=85
        Map output materialized bytes=82
        Input split bytes=98
        Combine input records=8
        Combine output records=6
        Reduce input groups=6
        Reduce shuffle bytes=82
        Reduce input records=6
        Reduce output records=6
        Spilled Records=12
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=224
        CPU time spent (ms)=2850
        Physical memory (bytes) snapshot=450756608
        Virtual memory (bytes) snapshot=4229697536
        Total committed heap usage (bytes)=317194240
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters 
        Bytes Read=61
    File Output Format Counters 
        Bytes Written=52

-- The output above indicates success; the other nodes show the same.
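
To inspect the actual word counts (part-r-00000 is the standard reducer output file name):

[hadoop@big-master1 ~]$ hdfs dfs -ls /test01/out/
[hadoop@big-master1 ~]$ hdfs dfs -cat /test01/out/part-r-00000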

---------------------  The End ----------------------

##### Transcript of the test run ####
-- Start the journalnode
[hadoop@big-master1 hadoop]$ /usr/local/hadoop-2.8.5/sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /usr/local/hadoop-2.8.5/logs/hadoop-hadoop-journalnode-big-master1.out
[hadoop@big-master1 hadoop]$ cat /usr/local/hadoop-2.8.5/logs/hadoop-hadoop-journalnode-big-master1.out
ulimit -a for user hadoop
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 30816
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
-- If the journalnodes are not started first and you kick off the namenode sync (or other hadoop processes), you will see errors like:
20/05/14 23:12:32 INFO ipc.Client: Retrying connect to server: big-master1/192.168.41.20:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
20/05/14 23:12:32 WARN ipc.Client: Failed to connect to server: big-master1/192.168.41.20:9000: retries get failed due to exceeded maximum allowed retries number: 10
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)

[hadoop@big-master2 bin]$ ./hdfs namenode -bootstrapStandby
20/05/14 23:12:21 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode

-- The script can also stop/start them on all nodes:
[hadoop@big-master1 bin]$ runRemoteCmd.sh "/usr/local/hadoop/sbin/hadoop-daemon.sh stop journalnode" all
*******************big-master1***************************
stopping journalnode
*******************big-master2***************************
stopping journalnode
*******************big-slave01***************************
stopping journalnode
*******************big-slave02***************************
stopping journalnode
*******************big-slave03***************************
stopping journalnode

-- Format the namenode:
[hadoop@big-master1 bin]$ ./hdfs namenode -format
20/05/14 23:10:00 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   user = hadoop
STARTUP_MSG:   host = big-master1/192.168.41.20
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.8.5
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:... (long classpath omitted)
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 0b8464d75227fcee2c6e7f2410377b3d53d3d5f8; compiled by 'jdu' on 2018-09-10T03:32Z
STARTUP_MSG:   java = 1.8.0_251
************************************************************/
20/05/14 23:10:00 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
20/05/14 23:10:01 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-4521db8a-5242-4421-a2bc-3c569020af64
20/05/14 23:10:01 INFO namenode.FSEditLog: Edit logging is async:true
20/05/14 23:10:01 INFO namenode.FSNamesystem: KeyProvider: null
20/05/14 23:10:01 INFO namenode.FSNamesystem: fsLock is fair: true
20/05/14 23:10:01 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
20/05/14 23:10:02 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
20/05/14 23:10:02 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
20/05/14 23:10:02 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
20/05/14 23:10:02 INFO blockmanagement.BlockManager: The block deletion will start around 2020 May 14 23:10:02
20/05/14 23:10:02 INFO util.GSet: Computing capacity for map BlocksMap
20/05/14 23:10:02 INFO util.GSet: VM type       = 64-bit
20/05/14 23:10:02 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
20/05/14 23:10:02 INFO util.GSet: capacity      = 2^21 = 2097152 entries
20/05/14 23:10:02 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
20/05/14 23:10:02 INFO blockmanagement.BlockManager: defaultReplication         = 3
20/05/14 23:10:02 INFO blockmanagement.BlockManager: maxReplication             = 512
20/05/14 23:10:02 INFO blockmanagement.BlockManager: minReplication             = 1
20/05/14 23:10:02 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
20/05/14 23:10:02 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
20/05/14 23:10:02 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
20/05/14 23:10:02 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
20/05/14 23:10:02 INFO namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
20/05/14 23:10:02 INFO namenode.FSNamesystem: supergroup          = supergroup
20/05/14 23:10:02 INFO namenode.FSNamesystem: isPermissionEnabled = false
20/05/14 23:10:02 INFO namenode.FSNamesystem: Determined nameservice ID: cluster1
20/05/14 23:10:02 INFO namenode.FSNamesystem: HA Enabled: true
20/05/14 23:10:02 INFO namenode.FSNamesystem: Append Enabled: true
20/05/14 23:10:02 INFO util.GSet: Computing capacity for map INodeMap
20/05/14 23:10:02 INFO util.GSet: VM type       = 64-bit
20/05/14 23:10:02 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
20/05/14 23:10:02 INFO util.GSet: capacity      = 2^20 = 1048576 entries
20/05/14 23:10:02 INFO namenode.FSDirectory: ACLs enabled? false
20/05/14 23:10:02 INFO namenode.FSDirectory: XAttrs enabled? true
20/05/14 23:10:02 INFO namenode.NameNode: Caching file names occurring more than 10 times
20/05/14 23:10:02 INFO util.GSet: Computing capacity for map cachedBlocks
20/05/14 23:10:02 INFO util.GSet: VM type       = 64-bit
20/05/14 23:10:02 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
20/05/14 23:10:02 INFO util.GSet: capacity      = 2^18 = 262144 entries
20/05/14 23:10:02 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
20/05/14 23:10:02 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
20/05/14 23:10:02 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
20/05/14 23:10:02 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
20/05/14 23:10:02 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
20/05/14 23:10:02 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
20/05/14 23:10:02 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
20/05/14 23:10:02 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
20/05/14 23:10:02 INFO util.GSet: Computing capacity for map NameNodeRetryCache
20/05/14 23:10:02 INFO util.GSet: VM type       = 64-bit
20/05/14 23:10:02 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
20/05/14 23:10:02 INFO util.GSet: capacity      = 2^15 = 32768 entries
20/05/14 23:10:04 INFO namenode.FSImage: Allocated new BlockPoolId: BP-822972339-192.168.41.20-1589469004161
20/05/14 23:10:04 INFO common.Storage: Storage directory /data/hadoop/data/hadoop_hadoop/dfs/name has been successfully formatted.
20/05/14 23:10:04 INFO namenode.FSImageFormatProtobuf: Saving image file /data/hadoop/data/hadoop_hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
20/05/14 23:10:04 INFO namenode.FSImageFormatProtobuf: Image file /data/hadoop/data/hadoop_hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.
20/05/14 23:10:04 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
20/05/14 23:10:04 INFO util.ExitUtil: Exiting with status 0
20/05/14 23:10:04 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at big-master1/192.168.41.20
************************************************************/

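As a quick optional check that the format succeeded, you can list the newly created name directory (the path is the one reported in the log above); the exact file names may vary slightly between Hadoop versions:

[hadoop@big-master1 ~]$ ls /data/hadoop/data/hadoop_hadoop/dfs/name/current
# Expected (roughly): VERSION, seen_txid, fsimage_0000000000000000000 and its .md5 checksum
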
-- Format the ZKFC state in the ZooKeeper cluster:
[hadoop@big-master1 bin]$ ./hdfs zkfc -formatZK
20/05/14 23:11:30 INFO tools.DFSZKFailoverController: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting DFSZKFailoverController
STARTUP_MSG:   user = hadoop
STARTUP_MSG:   host = big-master1/192.168.41.20
STARTUP_MSG:   args = [-formatZK]
STARTUP_MSG:   version = 2.8.5
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.5.2.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.54.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/htrace-core4-4.0.1-incubating.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/usr/local/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/ha
doop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/json-smart-1.3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.4.4.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.8.5-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.8.5.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.8.5.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okhttp-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.8.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/htrace-core4-4.0.1-incubating.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okio-1.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-2.8.5.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-2.8.5.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-2.8.5-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.5-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.8.5.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.5.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-2.8.5-tests.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr/l
ocal/hadoop/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-math-2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javassist-3.18.1-GA.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/curator-test-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compre
ss-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.5-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.8.5.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 0b8464d75227fcee2c6e7f2410377b3d53d3d5f8; compiled by 'jdu' on 2018-09-10T03:32Z
STARTUP_MSG:   java = 1.8.0_251
************************************************************/
20/05/14 23:11:30 INFO tools.DFSZKFailoverController: registered UNIX signal handlers for [TERM, HUP, INT]
20/05/14 23:11:31 INFO tools.DFSZKFailoverController: Failover controller configured for NameNode NameNode at big-master1/192.168.41.20:9000
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:host.name=big-master1
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:java.version=1.8.0_251
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/local/jdk1.8.0_251/jre
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.5.2.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.54.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/htrace-core4-4.0.1-incubating.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/usr/local/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/had
oop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/json-smart-1.3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.4.4.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.8.5-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.8.5.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.8.5.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okhttp-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.8.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/htrace-core4-4.0.1-incubating.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okio-1.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-2.8.5.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-2.8.5.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-2.8.5-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.5-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.8.5.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.5.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-2.8.5-tests.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/
hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-math-2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javassist-3.18.1-GA.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/curator-test-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/lo
cal/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.5-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.8.5.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/local/hadoop/lib/native
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-1062.18.1.el7.x86_64
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:user.name=hadoop
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hadoop
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Client environment:user.dir=/usr/local/hadoop/bin
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=big-master1:2181,big-master2:2181,big-slave01:2181,big-slave02:2181,big-slave03:2181 sessionTimeout=10000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@765d7657
20/05/14 23:11:31 INFO zookeeper.ClientCnxn: Opening socket connection to server big-master1/192.168.41.20:2181. Will not attempt to authenticate using SASL (unknown error)
20/05/14 23:11:31 INFO zookeeper.ClientCnxn: Socket connection established to big-master1/192.168.41.20:2181, initiating session
20/05/14 23:11:31 INFO zookeeper.ClientCnxn: Session establishment complete on server big-master1/192.168.41.20:2181, sessionid = 0x100061aaf330000, negotiated timeout = 10000
20/05/14 23:11:31 INFO ha.ActiveStandbyElector: Session connected.
20/05/14 23:11:31 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/cluster1 in ZK.
20/05/14 23:11:31 INFO zookeeper.ZooKeeper: Session: 0x100061aaf330000 closed
20/05/14 23:11:31 INFO zookeeper.ClientCnxn: EventThread shut down
20/05/14 23:11:31 INFO tools.DFSZKFailoverController: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DFSZKFailoverController at big-master1/192.168.41.20
************************************************************/

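Optionally, you can verify that the /hadoop-ha/cluster1 znode reported above was really created by querying ZooKeeper directly; the invocation below assumes the ZooKeeper bin directory is on PATH:

$ zkCli.sh -server big-master1:2181 ls /hadoop-ha
# Expected to list: cluster1 (created by the -formatZK step above)
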
-- NameNode2 bootstraps (synchronizes) the metadata from the primary NameNode:
[hadoop@big-master2 bin]$ ./hdfs namenode -bootstrapStandby
20/05/14 23:54:27 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   user = hadoop
STARTUP_MSG:   host = big-master2/192.168.41.21
STARTUP_MSG:   args = [-bootstrapStandby]
STARTUP_MSG:   version = 2.8.5
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/usr/local/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/htrace-core4-4.0.1-incubating.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.5.2.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.4.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4
.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.54.jar:/usr/local/hadoop/share/hadoop/common/lib/json-smart-1.3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.8.5-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.8.5.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.8.5.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.8.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/htrace-core4-4.0.1-incubating.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okhttp-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okio-1.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.5-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.5.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-2.8.5-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-2.8.5.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-2.8.5-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-2.8.5.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/h
adoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-math-2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/curator-test-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javassist-3.18.1-GA.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-2.8.5.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr
/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.5-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.8.5.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.5.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 0b8464d75227fcee2c6e7f2410377b3d53d3d5f8; compiled by 'jdu' on 2018-09-10T03:32Z
STARTUP_MSG:   java = 1.8.0_251
************************************************************/
20/05/14 23:54:27 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
20/05/14 23:54:27 INFO namenode.NameNode: createNameNode [-bootstrapStandby]
=====================================================
About to bootstrap Standby ID big-master2 from:
           Nameservice ID: cluster1
        Other Namenode ID: big-master1
  Other NN's HTTP address: http://big-master1:50070
  Other NN's IPC  address: big-master1/192.168.41.20:9000
             Namespace ID: 1159823603
            Block pool ID: BP-822972339-192.168.41.20-1589469004161
               Cluster ID: CID-4521db8a-5242-4421-a2bc-3c569020af64
           Layout version: -63
       isUpgradeFinalized: true
=====================================================
20/05/14 23:54:29 INFO common.Storage: Storage directory /data/hadoop/data/hadoop_hadoop/dfs/name has been successfully formatted.
20/05/14 23:54:29 INFO namenode.FSEditLog: Edit logging is async:true
20/05/14 23:54:30 INFO namenode.TransferFsImage: Opening connection to http://big-master1:50070/imagetransfer?getimage=1&txid=0&storageInfo=-63:1159823603:1589469004161:CID-4521db8a-5242-4421-a2bc-3c569020af64&bootstrapstandby=true
20/05/14 23:54:30 INFO namenode.TransferFsImage: Image Transfer timeout configured to 60000 milliseconds
20/05/14 23:54:30 INFO namenode.TransferFsImage: Transfer took 0.03s at 0.00 KB/s
20/05/14 23:54:30 INFO namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000000 size 323 bytes.
20/05/14 23:54:30 INFO util.ExitUtil: Exiting with status 0
20/05/14 23:54:30 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at big-master2/192.168.41.21
************************************************************/

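At this point the standby NameNode's name directory should carry the same IDs as the active one. An optional sanity check is to compare the VERSION file on both masters (same dfs name dir as in the logs above); namespaceID, clusterID and blockpoolID must match:

[hadoop@big-master1 ~]$ cat /data/hadoop/data/hadoop_hadoop/dfs/name/current/VERSION
[hadoop@big-master2 ~]$ cat /data/hadoop/data/hadoop_hadoop/dfs/name/current/VERSION
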
-- Start the ZKFC daemon:
[hadoop@big-master1 hadoop]$ sbin/hadoop-daemon.sh start zkfc
starting zkfc, logging to /usr/local/hadoop/logs/hadoop-hadoop-zkfc-big-master1.out
[hadoop@big-master1 hadoop]$ cat /usr/local/hadoop/logs/hadoop-hadoop-zkfc-big-master1.out
ulimit -a for user hadoop
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 30817
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
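
Note the "open files (-n) 1024" value in the ulimit output above: this default is usually too low for busy HDFS daemons. A common optional tweak is to raise the limits for the hadoop user, for example in /etc/security/limits.conf (the values below are illustrative, not mandatory; re-login for them to take effect):

# /etc/security/limits.conf
hadoop  soft  nofile  65536
hadoop  hard  nofile  65536
hadoop  soft  nproc   65536
hadoop  hard  nproc   65536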

-- Start all HDFS processes:
[hadoop@big-master1 hadoop]$ sbin/start-dfs.sh 
Starting namenodes on [big-master2 big-master1]
big-master1: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-big-master1.out
big-master2: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-big-master2.out
big-slave02: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-big-slave02.out
big-slave03: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-big-slave03.out
big-slave01: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-big-slave01.out
Starting journal nodes [big-slave01 big-slave02 big-slave03 big-master1 big-master2]
big-master1: starting journalnode, logging to /usr/local/hadoop/logs/hadoop-hadoop-journalnode-big-master1.out
big-slave01: starting journalnode, logging to /usr/local/hadoop/logs/hadoop-hadoop-journalnode-big-slave01.out
big-slave03: starting journalnode, logging to /usr/local/hadoop/logs/hadoop-hadoop-journalnode-big-slave03.out
big-master2: starting journalnode, logging to /usr/local/hadoop/logs/hadoop-hadoop-journalnode-big-master2.out
big-slave02: starting journalnode, logging to /usr/local/hadoop/logs/hadoop-hadoop-journalnode-big-slave02.out
Starting ZK Failover Controllers on NN hosts [big-master2 big-master1]
big-master1: zkfc running as process 29642. Stop it first.
big-master2: starting zkfc, logging to /usr/local/hadoop/logs/hadoop-hadoop-zkfc-big-master2.out
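
The "zkfc running as process 29642. Stop it first." message on big-master1 is expected, because ZKFC was already started there manually in the previous step. Once all daemons are up, you can check which NameNode won the election; big-master1/big-master2 are the NameNode IDs shown in the bootstrapStandby output above:

[hadoop@big-master1 hadoop]$ bin/hdfs haadmin -getServiceState big-master1
[hadoop@big-master1 hadoop]$ bin/hdfs haadmin -getServiceState big-master2
# One should report "active" and the other "standby"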

-- Start the YARN processes:
[hadoop@big-master1 sbin]$ start-yarn.sh 
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-resourcemanager-big-master1.out
big-slave01: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-big-slave01.out
big-slave02: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-big-slave02.out
big-slave03: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-big-slave03.out

[hadoop@big-slave03 ~]$ cat /usr/local/hadoop/logs/yarn-hadoop-nodemanager-big-slave03.out
May 15, 2020 2:29:27 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices as a root resource class
May 15, 2020 2:29:27 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
May 15, 2020 2:29:27 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver as a provider class
May 15, 2020 2:29:27 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
May 15, 2020 2:29:27 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
May 15, 2020 2:29:27 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
May 15, 2020 2:29:28 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices to GuiceManagedComponentProvider with the scope "Singleton"

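An optional quick check on each node is jps; the expected daemon set depends on the node's role (per the cluster plan in the hosts file):

[hadoop@big-slave03 ~]$ jps
# Slaves should show: DataNode, JournalNode, NodeManager, QuorumPeerMain
# Masters should show: NameNode, DFSZKFailoverController, JournalNode, QuorumPeerMain (+ ResourceManager)
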
-- Start the standby YARN ResourceManager on big-master2 (start-yarn.sh only starts the ResourceManager on the node where it is invoked):
[hadoop@big-master2 sbin]$ ./yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-resourcemanager-big-master2.out

[hadoop@big-master2 sbin]$ cat /usr/local/hadoop/logs/yarn-hadoop-resourcemanager-big-master2.out
May 15, 2020 2:32:22 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver as a provider class
May 15, 2020 2:32:22 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices as a root resource class
May 15, 2020 2:32:22 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
May 15, 2020 2:32:22 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
May 15, 2020 2:32:22 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
May 15, 2020 2:32:23 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
May 15, 2020 2:32:24 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices to GuiceManagedComponentProvider with the scope "Singleton"

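With both ResourceManagers running, the RM HA state can be queried via yarn rmadmin. The rm1/rm2 IDs below are assumptions; substitute whatever yarn.resourcemanager.ha.rm-ids is set to in your yarn-site.xml:

[hadoop@big-master1 ~]$ yarn rmadmin -getServiceState rm1
[hadoop@big-master1 ~]$ yarn rmadmin -getServiceState rm2
# One should report "active" and the other "standby"
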
########## Personal test: original configuration files ###########


### zookeeper 


### hadoop 
