Installing Hadoop 3.0 on Linux: A Detailed Walkthrough

Today I installed Hadoop, in preparation for studying it.

Part 1: Preparing the Environment

1.1 Check the operating system version

[root@cql ~]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 6.5 (Santiago)

1.2 Disable the firewall

[root@cql ~]# service iptables stop
[root@cql ~]# service ip6tables stop
[root@cql ~]# chkconfig iptables off
[root@cql ~]# chkconfig ip6tables off

1.3 Disable SELinux

[root@cql ~]# sed -i 's|enforcing|disabled|' /etc/sysconfig/selinux 
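The sed edit only takes effect after a reboot. A quick hedge for the current session, assuming the standard RHEL tools are present:

setenforce 0                                  # switch to permissive mode immediately, no reboot
getenforce                                    # prints Enforcing / Permissive / Disabled
grep SELINUX= /etc/sysconfig/selinux          # confirm the edit landed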

To install Hadoop, you first need to install and configure a JDK. My machine came with 1.7; to keep my practice environments consistent, I downloaded 1.8 for both Linux and Windows, so I will uninstall the old version and configure the JDK from scratch. Download: https://www.oracle.com/technetwork/java/javase/downloads/index-jsp-138363.html

Part 2: Configuring the JDK

  1. Uninstall the 1.7 JDK
[root@cql ~]# rpm -qa|grep jdk
java-1.7.0-openjdk-1.7.0.45-2.4.3.3.el6.x86_64
[root@cql ~]# rpm -e --nodeps java-1.7.0-openjdk-1.7.0.45-2.4.3.3.el6.x86_64

Upload the downloaded jdk-8u181-linux-x64.tar.gz to the virtual machine (mine sits in /soft) and extract the tarball:

[root@cql soft]# tar -xzvf jdk-8u181-linux-x64.tar.gz 

2. Configure environment variables: edit .bash_profile and set JAVA_HOME, JRE_HOME, CLASSPATH, and PATH (note that JAVA_HOME must be defined before PATH references it):


export JAVA_HOME=/soft/jdk1.8.0_181
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$JRE_HOME/bin
export PATH

Reload the profile for the changes to take effect:
[root@cql ~]# source .bash_profile

3. Verify the current version

[root@cql ~]# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)

Part 3: Downloading and Installing Hadoop

Go to http://hadoop.apache.org/ and you can find any version you like; this time I installed 3.0.

I chose the binary download, which gives hadoop-3.0.3.tar.gz, and uploaded it to /soft on the virtual machine.

1. Extract the Hadoop tarball
[root@cql soft]# tar -xzvf hadoop-3.0.3.tar.gz 
2. Configure environment variables by editing .bash_profile
[root@cql ~]# vi .bash_profile
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$JRE_HOME/bin:/soft/hadoop-3.0.3/bin:/soft/hadoop-3.0.3/sbin
export HADOOP_INSTALL=/soft/hadoop-3.0.3
This puts the bin/ and sbin/ directories of the extracted tree on the PATH.
3. Reload the profile; the Hadoop version should now be visible
[root@cql ~]# source .bash_profile
[root@cql ~]# hadoop version
Hadoop 3.0.3
Source code repository https://[email protected]/repos/asf/hadoop.git -r 37fd7d752db73d984dc31e0cdfd590d252f5e075
Compiled by yzhang on 2018-05-31T17:12Z
Compiled with protoc 2.5.0
From source with checksum 736cdcefa911261ad56d2d120bf1fa
This command was run using /soft/hadoop-3.0.3/share/hadoop/common/hadoop-common-3.0.3.jar
PS: In the settings above I only configured HADOOP_INSTALL and the PATH, and left HADOOP_HOME unset. The HADOOP_INSTALL directory contains bin and sbin directories full of executables, and in my view also configuring HADOOP_HOME could easily cause conflicts.
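For reference only (not what I did above): the more common layout is to define HADOOP_HOME and build the PATH from it, which works just as well as long as bin/ and sbin/ end up on the PATH exactly once. A minimal sketch:

export HADOOP_HOME=/soft/hadoop-3.0.3
PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export PATH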

Hadoop itself is now installed; next comes configuration.

Part 4: Hadoop Configuration

Hadoop can be configured in three modes:

  1. Standalone mode: no daemons; everything runs in a single JVM. It is suited to running MapReduce programs during development, for testing and debugging.
  2. Pseudo-distributed mode: the Hadoop daemons run on the local machine, simulating a small cluster.
  3. Fully distributed mode: the Hadoop daemons run on a cluster of machines.

Standalone mode needs no configuration: it is single-machine and daemon-free, and it is the default. Below we configure pseudo-distributed (pseudo) mode. First make a fresh copy of the hadoop directory under etc/ in the installation tree and name it hadoop_pseudo.
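Hadoop is pointed at such an alternate config directory either per command with the --config option, or globally via HADOOP_CONF_DIR; both forms are used later in this document:

start-dfs.sh --config /soft/hadoop-3.0.3/etc/hadoop_pseudo
# or, once per shell, for every subsequent hadoop/hdfs/yarn command:
export HADOOP_CONF_DIR=/soft/hadoop-3.0.3/etc/hadoop_pseudo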

1. Create the hadoop_pseudo pseudo-distributed config directory

[root@cql etc]# pwd
/soft/hadoop-3.0.3/etc
[root@cql etc]# ls -l
总用量 4
drwxr-xr-x. 3 2003 2003 4096 6月   1 01:36 hadoop
[root@cql etc]# cp -R hadoop hadoop_pseudo
[root@cql etc]# ls -l
总用量 8
drwxr-xr-x. 3 2003 2003 4096 6月   1 01:36 hadoop
drwxr-xr-x. 3 root root 4096 8月  11 20:12 hadoop_pseudo

2. Configure pseudo-distributed mode; cd into the hadoop_pseudo directory

Four files need editing: core-site.xml, hdfs-site.xml, mapred-site.xml, and yarn-site.xml.

[root@cql hadoop_pseudo]# vi core-site.xml 

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost/</value>
    </property>
</configuration>

This sets the default filesystem; fs.defaultFS points HDFS at the local machine.
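To double-check which filesystem URI is in effect, hdfs getconf reads the active configuration (no daemons needed); point it at the pseudo config explicitly:

hdfs --config /soft/hadoop-3.0.3/etc/hadoop_pseudo getconf -confKey fs.defaultFS
# expected: hdfs://localhost/  (with no port given, the default NameNode RPC port 8020 applies;
# the netstat output in the ports note near the end shows 8020 listening)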

[root@cql hadoop_pseudo]# vi hdfs-site.xml 

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>


This sets the replication factor; since this is pseudo-distributed with a single DataNode, the value is 1.
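The value can be verified the same way, and once files exist, fsck reports the replication of every block:

hdfs --config /soft/hadoop-3.0.3/etc/hadoop_pseudo getconf -confKey dfs.replication   # expect 1
hdfs fsck / -files -blocks    # after startup: each block should be listed with repl=1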


[root@cql hadoop_pseudo]# vi mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

Pseudo-distributed mode runs MapReduce on YARN, so the framework is set to yarn.

[root@cql hadoop_pseudo]# vi yarn-site.xml

<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>localhost</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

This configures the ResourceManager host and the shuffle auxiliary service for the NodeManager.
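Once the daemons are up (Part 7), a quick sanity check of both settings is to ask the ResourceManager which NodeManagers have registered:

yarn node -list        # should show exactly one NodeManager, on localhost
yarn node -list -all   # also includes nodes in non-RUNNING states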

Part 5: Configuring SSH

As the above shows, pseudo-distributed mode runs on a single machine; it is a special case of fully distributed mode. Hadoop does not really distinguish between the two: it starts daemons on the cluster nodes (defined in the slaves file) by connecting over SSH. So we only need to configure local SSH so that login requires no password.

[root@cql ~]# service sshd status
openssh-daemon (pid  1616) 正在运行...
[root@cql ~]# which ssh-keygen
/usr/bin/ssh-keygen
[root@cql ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
50:97:43:55:28:ef:a2:fd:19:2d:98:c1:03:65:1d:36 root@cql
The key's randomart image is:
+--[ RSA 2048]----+
|        ..=+E+.  |
|       . ++.o.   |
|      . .  +     |
|       . o  .    |
|        S +.     |
|          .=..   |
|         oo.o .  |
|        . .  +   |
|           .o    |
+-----------------+
Note that there are three prompts above:
the location for storing the key pair (press Enter for the default ~/.ssh/, with file names id_rsa and id_rsa.pub)
whether the key pair requires a passphrase (empty means no passphrase)
confirmation of the passphrase
[root@cql ~]# cd .ssh
[root@cql .ssh]# ls -l
总用量 8
-rw-------. 1 root root 1675 8月  11 21:17 id_rsa
-rw-r--r--. 1 root root  390 8月  11 21:17 id_rsa.pub
[root@cql .ssh]# cat id_rsa.pub >> authorized_keys
[root@cql .ssh]# ssh localhost date  
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is c5:57:76:63:28:a3:ef:59:be:72:81:de:87:94:fa:90.
Are you sure you want to continue connecting (yes/no)? 
Host key verification failed.

That failed.
After adding an option it succeeded, and at that point a known_hosts file appeared under .ssh; subsequent login tests then succeed.
[root@cql .ssh]# ssh -o StrictHostKeyChecking=no localhost
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Last login: Sat Aug 11 21:27:24 2018 from cql

The second time:
[root@cql .ssh]# ssh localhost date
2018年 08月 11日 星期六 21:49:51 CST
I had originally cleaned out .ssh and deleted the known_hosts file, which is why the first attempt failed; the failure was tied to known_hosts. When I regenerated everything the second time, I deleted the other files but kept known_hosts, and it worked on the first try.
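Putting the whole passwordless-SSH setup together, a clean non-interactive sequence looks roughly like this (sshd is strict about permissions on ~/.ssh and authorized_keys, hence the chmod):

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa          # no passphrase, no prompts
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
ssh -o StrictHostKeyChecking=no localhost date    # first login records the host key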

Part 6: Formatting the HDFS Filesystem

[root@cql ~]# hdfs namenode -format
WARNING: /soft/hadoop-3.0.3/logs does not exist. Creating.
2018-08-11 22:00:41,279 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = cql/192.168.10.103
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 3.0.3
STARTUP_MSG:   classpath = /soft/hadoop-3.0.3/etc/hadoop:/soft/hadoop-3.0.3/share/hadoop/common/lib/commons-codec-1.4.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/curator-recipes-2.12.0.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jetty-xml-9.3.19.v20170502.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jetty-util-9.3.19.v20170502.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/guava-11.0.2.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jackson-annotations-2.7.8.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jersey-core-1.19.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/avro-1.7.7.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/kerb-core-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/curator-framework-2.12.0.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jul-to-slf4j-1.7.25.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/commons-math3-3.1.1.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jersey-json-1.19.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/json-smart-2.3.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/commons-compress-1.4.1.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/commons-io-2.4.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/commons-collections-3.2.2.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/commons-cli-1.2.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/httpclient-4.5.2.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/hadoop-annotations-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/xz-1.0.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/commons-configuration2-2.1.1.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jetty-server-9.3.19.v20170502.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jetty-webapp-9.3.19.v20170502.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/httpcore-4.4.4.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/kerb-server-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jettison-1.1.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/re2j-1.1.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jackson-core-2.7.8.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/snappy-java-1.0.5.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/kerby-asn1-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/woodstox-core-5.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/asm-5.0.4.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/paranamer-2.3.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/kerby-pkix-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/kerb-simplekdc-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/kerb-client-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jsp-api-2.1.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/token-provider-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/kerb-admin-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/netty-3.10.5.Final.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jsch-0.1.54.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/slf4j-api-1.7.25.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/commons-beanutils-1.9.3.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/log4j-1.2.17.jar:/so
ft/hadoop-3.0.3/share/hadoop/common/lib/kerb-common-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/commons-logging-1.1.3.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/kerby-util-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/kerb-identity-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/gson-2.2.4.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/commons-lang3-3.4.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jetty-io-9.3.19.v20170502.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/kerby-xdr-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/metrics-core-3.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jackson-databind-2.7.8.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/stax2-api-3.1.4.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/commons-lang-2.6.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jetty-security-9.3.19.v20170502.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jersey-server-1.19.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/accessors-smart-1.2.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/zookeeper-3.4.9.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/hadoop-auth-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/curator-client-2.12.0.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jersey-servlet-1.19.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/commons-net-3.6.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/kerby-config-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jetty-servlet-9.3.19.v20170502.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jetty-http-9.3.19.v20170502.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/kerb-util-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jsr305-3.0.0.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/soft/hadoop-3.0.3/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/soft/hadoop-3.0.3/share/hadoop/common/hadoop-common-3.0.3-tests.jar:/soft/hadoop-3.0.3/share/hadoop/common/hadoop-common-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/common/hadoop-nfs-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/common/hadoop-kms-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/curator-recipes-2.12.0.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jetty-xml-9.3.19.v20170502.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jetty-util-9.3.19.v20170502.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/guava-11.0.2.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jackson-annotations-2.7.8.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jersey-core-1.19.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/avro-1.7.7.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jackson-jaxrs-1.9.13.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/kerb-core-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/curator-framework-2.12.0.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jersey-json-1.19.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/json-smart-2.3.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/commons-compress-1.4.1.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/commons-io-2.4.jar:/soft/hadoop-3.0.3/s
hare/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/httpclient-4.5.2.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/hadoop-annotations-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/xz-1.0.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/commons-configuration2-2.1.1.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jetty-server-9.3.19.v20170502.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jetty-webapp-9.3.19.v20170502.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/httpcore-4.4.4.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/kerb-server-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jettison-1.1.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/re2j-1.1.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jackson-core-2.7.8.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/snappy-java-1.0.5.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/kerby-asn1-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/woodstox-core-5.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/asm-5.0.4.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jackson-xc-1.9.13.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/paranamer-2.3.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/kerby-pkix-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/kerb-simplekdc-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/kerb-client-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/token-provider-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/kerb-crypto-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/kerb-admin-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/netty-3.10.5.Final.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jsch-0.1.54.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/okio-1.6.0.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/commons-beanutils-1.9.3.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/kerb-common-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/kerby-util-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/kerb-identity-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/gson-2.2.4.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/commons-lang3-3.4.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/okhttp-2.7.5.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jetty-io-9.3.19.v20170502.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/kerby-xdr-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/htrace-core4-4.1.0-incubating.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jackson-databind-2.7.8.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/stax2-api-3.1.4.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jetty-security-9.3.19.v20170502.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jersey-server-1.19.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/accessors-smart-1.2.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/zookeeper-3.4.9.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/hado
op-auth-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jetty-util-ajax-9.3.19.v20170502.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/curator-client-2.12.0.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jersey-servlet-1.19.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/commons-net-3.6.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/kerby-config-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jetty-servlet-9.3.19.v20170502.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jetty-http-9.3.19.v20170502.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/kerb-util-1.0.1.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/hadoop-hdfs-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/hadoop-hdfs-client-3.0.3-tests.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/hadoop-hdfs-native-client-3.0.3-tests.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/hadoop-hdfs-native-client-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/hadoop-hdfs-3.0.3-tests.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/hadoop-hdfs-nfs-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/hadoop-hdfs-rbf-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/hadoop-hdfs-client-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/hdfs/hadoop-hdfs-rbf-3.0.3-tests.jar:/soft/hadoop-3.0.3/share/hadoop/mapreduce/lib/junit-4.11.jar:/soft/hadoop-3.0.3/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/soft/hadoop-3.0.3/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.3-tests.jar:/soft/hadoop-3.0.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/yarn:/soft/hadoop-3.0.3/share/hadoop/yarn/lib/jackson-jaxrs-base-2.7.8.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/lib/guice-servlet-4.0.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/lib/javax.inject-1.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/lib/objenesis-1.0.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/lib/jersey-client-1.19.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/lib/fst-2.50.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/lib/json-io-2.5.1.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.7.8.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/lib/java-util-1.9.0.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.7.8.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/lib/jersey-guice-1.19.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/lib/metrics-core-3.0.1.jar:/soft/hadoop-3.0.3/s
hare/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/lib/guice-4.0.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/hadoop-yarn-server-common-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/hadoop-yarn-server-tests-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/hadoop-yarn-api-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/hadoop-yarn-server-router-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/hadoop-yarn-common-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/hadoop-yarn-client-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/hadoop-yarn-registry-3.0.3.jar:/soft/hadoop-3.0.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.0.3.jar
STARTUP_MSG:   build = https://[email protected]/repos/asf/hadoop.git -r 37fd7d752db73d984dc31e0cdfd590d252f5e075; compiled by 'yzhang' on 2018-05-31T17:12Z
STARTUP_MSG:   java = 1.8.0_181
************************************************************/
2018-08-11 22:00:41,323 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2018-08-11 22:00:41,348 INFO namenode.NameNode: createNameNode [-format]
2018-08-11 22:00:42,027 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-69157c14-18fc-49f1-85cc-8cd03b861716
2018-08-11 22:00:43,266 INFO namenode.FSEditLog: Edit logging is async:true
2018-08-11 22:00:43,311 INFO namenode.FSNamesystem: KeyProvider: null
2018-08-11 22:00:43,312 INFO namenode.FSNamesystem: fsLock is fair: true
2018-08-11 22:00:43,315 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2018-08-11 22:00:43,336 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
2018-08-11 22:00:43,336 INFO namenode.FSNamesystem: supergroup          = supergroup
2018-08-11 22:00:43,337 INFO namenode.FSNamesystem: isPermissionEnabled = true
2018-08-11 22:00:43,337 INFO namenode.FSNamesystem: HA Enabled: false
2018-08-11 22:00:43,414 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2018-08-11 22:00:43,441 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2018-08-11 22:00:43,441 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2018-08-11 22:00:43,457 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2018-08-11 22:00:43,461 INFO blockmanagement.BlockManager: The block deletion will start around 2018 八月 11 22:00:43
2018-08-11 22:00:43,467 INFO util.GSet: Computing capacity for map BlocksMap
2018-08-11 22:00:43,467 INFO util.GSet: VM type       = 64-bit
2018-08-11 22:00:43,474 INFO util.GSet: 2.0% max memory 450.5 MB = 9.0 MB
2018-08-11 22:00:43,474 INFO util.GSet: capacity      = 2^20 = 1048576 entries
2018-08-11 22:00:43,525 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2018-08-11 22:00:43,533 INFO Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
2018-08-11 22:00:43,533 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2018-08-11 22:00:43,534 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2018-08-11 22:00:43,534 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2018-08-11 22:00:43,534 INFO blockmanagement.BlockManager: defaultReplication         = 3
2018-08-11 22:00:43,534 INFO blockmanagement.BlockManager: maxReplication             = 512
2018-08-11 22:00:43,534 INFO blockmanagement.BlockManager: minReplication             = 1
2018-08-11 22:00:43,534 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
2018-08-11 22:00:43,534 INFO blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms
2018-08-11 22:00:43,534 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
2018-08-11 22:00:43,535 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2018-08-11 22:00:43,698 INFO util.GSet: Computing capacity for map INodeMap
2018-08-11 22:00:43,698 INFO util.GSet: VM type       = 64-bit
2018-08-11 22:00:43,699 INFO util.GSet: 1.0% max memory 450.5 MB = 4.5 MB
2018-08-11 22:00:43,699 INFO util.GSet: capacity      = 2^19 = 524288 entries
2018-08-11 22:00:43,700 INFO namenode.FSDirectory: ACLs enabled? false
2018-08-11 22:00:43,700 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2018-08-11 22:00:43,700 INFO namenode.FSDirectory: XAttrs enabled? true
2018-08-11 22:00:43,700 INFO namenode.NameNode: Caching file names occurring more than 10 times
2018-08-11 22:00:43,712 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true
2018-08-11 22:00:43,735 INFO util.GSet: Computing capacity for map cachedBlocks
2018-08-11 22:00:43,735 INFO util.GSet: VM type       = 64-bit
2018-08-11 22:00:43,735 INFO util.GSet: 0.25% max memory 450.5 MB = 1.1 MB
2018-08-11 22:00:43,735 INFO util.GSet: capacity      = 2^17 = 131072 entries
2018-08-11 22:00:43,752 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2018-08-11 22:00:43,752 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2018-08-11 22:00:43,752 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2018-08-11 22:00:43,771 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2018-08-11 22:00:43,771 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2018-08-11 22:00:43,776 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2018-08-11 22:00:43,776 INFO util.GSet: VM type       = 64-bit
2018-08-11 22:00:43,778 INFO util.GSet: 0.029999999329447746% max memory 450.5 MB = 138.4 KB
2018-08-11 22:00:43,778 INFO util.GSet: capacity      = 2^14 = 16384 entries
2018-08-11 22:00:43,855 INFO namenode.FSImage: Allocated new BlockPoolId: BP-753739993-192.168.10.103-1533996043842
2018-08-11 22:00:43,874 INFO common.Storage: Storage directory /tmp/hadoop-root/dfs/name has been successfully formatted.
2018-08-11 22:00:43,903 INFO namenode.FSImageFormatProtobuf: Saving image file /tmp/hadoop-root/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
2018-08-11 22:00:44,058 INFO namenode.FSImageFormatProtobuf: Image file /tmp/hadoop-root/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 389 bytes saved in 0 seconds .
2018-08-11 22:00:44,090 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2018-08-11 22:00:44,109 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at cql/192.168.10.103
************************************************************/
[root@cql ~]# 

Part 7: Starting and Stopping the Daemons

1. Start HDFS


[root@cql ~]# start-dfs.sh --config $HADOOP_INSTALL/etc/hadoop_pseudo
Starting namenodes on [localhost]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
Starting secondary namenodes [cql]
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.
2018-08-11 22:07:50,293 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Startup failed. The errors say HDFS_NAMENODE_USER (and friends) are not defined, so edit start-dfs.sh and stop-dfs.sh under sbin/ and add the following at the top:
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root

[root@cql hadoop-3.0.3]# vi sbin/start-dfs.sh
# limitations under the License.
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root

# Start hadoop dfs daemons.

Now start it again:
[root@cql ~]# start-dfs.sh --config $HADOOP_INSTALL/etc/hadoop_pseudo
Starting namenodes on [localhost]
localhost: ERROR: JAVA_HOME is not set and could not be found.
Starting datanodes
localhost: ERROR: JAVA_HOME is not set and could not be found.
Starting secondary namenodes [cql]
cql: ERROR: JAVA_HOME is not set and could not be found.
2018-08-11 22:19:05,869 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Another error: JAVA_HOME is not set and could not be found. But it is definitely set in the environment variables, so what is going on? A bit of searching showed that Hadoop has its own configuration file containing JAVA_HOME, so that file needs editing. First locate it; since I am starting from the pseudo-distributed config, the copy under hadoop_pseudo is the one to modify:
[root@cql hadoop-3.0.3]# find /soft/hadoop-3.0.3 -name hadoop-env.sh
/soft/hadoop-3.0.3/etc/hadoop_pseudo/hadoop-env.sh
/soft/hadoop-3.0.3/etc/hadoop/hadoop-env.sh
[root@cql hadoop-3.0.3]# vi etc/hadoop_pseudo/hadoop-env.sh
export JAVA_HOME=/soft/jdk1.8.0_181
(that is my JAVA_HOME; substitute your own)

Start again:
[root@cql ~]# start-dfs.sh --config $HADOOP_INSTALL/etc/hadoop_pseudo
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [cql]
2018-08-11 22:27:28,031 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
There still seems to be a warning, but it can be ignored; after all, the daemons are all up. Still, being a perfectionist, I wanted it solved too. Plenty of obstacles, but keep at it and don't give up.
a. For the warning "2018-08-11 22:27:28,031 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable",
the fix is to add the following to log4j.properties under the pseudo-distributed directory /soft/hadoop-3.0.3/etc/hadoop_pseudo (note that this silences the logger rather than actually loading a native library):
log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
For a full writeup see https://blog.csdn.net/l1028386804/article/details/51538611

With that, it finally starts cleanly:
[root@cql ~]# start-dfs.sh --config $HADOOP_INSTALL/etc/hadoop_pseudo
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [cql]
What a journey.

One addendum: when I first modified start-dfs.sh and stop-dfs.sh, I followed an answer I had found and initially wrote HADOOP_SECURE_DN_USER=hdfs, but a warning remained:
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
Looking into it again, I replaced HADOOP_SECURE_DN_USER=hdfs with HDFS_DATANODE_SECURE_USER=hdfs, and this time the warning was gone. So when building an environment for the first time, be careful, careful, and careful again; these are hard-won lessons.
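An alternative I did not use here, which avoids editing the sbin scripts at all, is to export the same variables from the config directory's hadoop-env.sh; Hadoop 3's launcher scripts read the *_USER variables from there as well:

# in /soft/hadoop-3.0.3/etc/hadoop_pseudo/hadoop-env.sh
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export HDFS_DATANODE_SECURE_USER=hdfs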


2. Start YARN

[root@cql ~]# start-yarn.sh --config $HADOOP_INSTALL/etc/hadoop_pseudo   
Starting resourcemanager
ERROR: Attempting to operate on yarn resourcemanager as root
ERROR: but there is no YARN_RESOURCEMANAGER_USER defined. Aborting operation.
Starting nodemanagers
ERROR: Attempting to operate on yarn nodemanager as root
ERROR: but there is no YARN_NODEMANAGER_USER defined. Aborting operation.
The first attempt failed, but with the earlier experience the message is clear: edit sbin/start-yarn.sh and sbin/stop-yarn.sh and add the corresponding settings:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root

Start again:
[root@cql ~]# start-yarn.sh --config $HADOOP_INSTALL/etc/hadoop_pseudo
Starting resourcemanager
Starting nodemanagers
No problems now; on we go.

3. Start the MapReduce history daemon

mr-jobhistory-daemon.sh start historyserver: in this test environment, starting it is optional.

[root@cql ~]#  mr-jobhistory-daemon.sh start historyserver   
WARNING: Use of this script to start the MR JobHistory daemon is deprecated.
WARNING: Attempting to execute replacement "mapred --daemon start" instead.
[root@cql ~]# jps
11232 NameNode
11572 SecondaryNameNode
12836 Jps
12167 NodeManager
12045 ResourceManager
12766 JobHistoryServer   <-- now started
11359 DataNode
For the fully qualified class names, use:
[root@cql ~]# jps -l
11232 org.apache.hadoop.hdfs.server.namenode.NameNode
12995 org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer
11572 org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode
12167 org.apache.hadoop.yarn.server.nodemanager.NodeManager
13049 sun.tools.jps.Jps
12045 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager
11359 org.apache.hadoop.hdfs.server.datanode.DataNode
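
As the deprecation warning above says, the Hadoop 3 replacement for the old helper script is mapred --daemon; the equivalent start/stop pair is:

mapred --daemon start historyserver
mapred --daemon stop historyserver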

4. View the web UIs

NameNode UI: http://192.168.10.103:9870/

[Screenshot 1: the NameNode web UI]

A note on ports:

--Port numbers differ across versions: in 2.7 the NameNode web port was 50070, while in 3.0 it is 9870; check the settings of your own installation. If you really cannot find where the port is configured, you can also work it out from the process:

[root@cql ~]# jps
11232 NameNode
12995 JobHistoryServer
11572 SecondaryNameNode
13831 Jps
12167 NodeManager
12045 ResourceManager
11359 DataNode
[root@cql ~]# 
[root@cql ~]# netstat -nap |grep 11232
tcp        0      0 0.0.0.0:9870                0.0.0.0:*                   LISTEN      11232/java          
tcp        0      0 127.0.0.1:8020              0.0.0.0:*                   LISTEN      11232/java          
tcp        0      0 127.0.0.1:8020              127.0.0.1:10002             ESTABLISHED 11232/java          
unix  2      [ ]         STREAM     CONNECTED     60145  11232/java          
unix  2      [ ]         STREAM     CONNECTED     60130  11232/java 

YARN web UI: http://192.168.10.103:8042 (note that 8042 is the NodeManager web port; the ResourceManager UI defaults to 8088)

[Screenshot 2: the YARN web UI]

Viewing jobs:

[Screenshot 3: the job view]

Finally, create the user directory:

[root@cql ~]# export HADOOP_CONF_DIR=/soft/hadoop-3.0.3/etc/hadoop_pseudo
[root@cql ~]# hadoop fs -ls /
[root@cql ~]# hadoop fs -mkdir /user/
[root@cql ~]# hadoop fs -ls /        
Found 1 items
drwxr-xr-x   - root supergroup          0 2018-08-12 00:12 /user
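
By convention each user gets a home directory under /user. As a final end-to-end smoke test, the bundled examples jar (it appears in the classpath dump above) can run a small MapReduce job through YARN; a sketch, assuming we stay as root:

hadoop fs -mkdir -p /user/root
hadoop jar /soft/hadoop-3.0.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar pi 2 5
# 2 map tasks x 5 samples each; a printed estimate of pi confirms HDFS + YARN + MapReduce end to end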

 
