System Environment
```
[root@localhost ~]# cat /etc/redhat-release
CentOS release 6.8 (Final)
```
Installation Steps
- Software preparation (performed on the Master server only)
- Install lrzsz, which is used to upload the installation packages to the server
```
[root@localhost ~]# yum list lrzsz
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Available Packages
lrzsz.x86_64    0.12.20-27.1.el6    base
[root@localhost ~]# yum install lrzsz.x86_64
Loaded plugins: fastestmirror, refresh-packagekit, security
Setting up Install Process
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Resolving Dependencies
--> Running transaction check
---> Package lrzsz.x86_64 0:0.12.20-27.1.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================
 Package      Arch         Version                   Repository        Size
=============================================================================
Installing:
 lrzsz        x86_64       0.12.20-27.1.el6          base              71 k

Transaction Summary
=============================================================================
Install       1 Package(s)

Total download size: 71 k
Installed size: 159 k
Is this ok [y/N]: y
Downloading Packages:
lrzsz-0.12.20-27.1.el6.x86_64.rpm                        |  71 kB     00:00
warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID c105b9de: NOKEY
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
Importing GPG key 0xC105B9DE:
 Userid : CentOS-6 Key (CentOS 6 Official Signing Key) <[email protected]>
 Package: centos-release-6-8.el6.centos.12.3.x86_64 (@anaconda-CentOS-201605220104.x86_64/6.8)
 From   : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
Is this ok [y/N]: y
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : lrzsz-0.12.20-27.1.el6.x86_64                             1/1
  Verifying  : lrzsz-0.12.20-27.1.el6.x86_64                             1/1

Installed:
  lrzsz.x86_64 0:0.12.20-27.1.el6

Complete!
```
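If you would rather skip the two interactive prompts shown above, yum can do the same install non-interactively (a one-line equivalent):

```bash
yum -y install lrzsz   # -y answers the install and GPG-key-import prompts automatically
```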
- Upload the hadoop and jdk installation packages to /usr/local/src
```
cd /usr/local/src
rz
```
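`rz` requires a ZMODEM-capable terminal client (e.g. Xshell or SecureCRT). If your client does not support it, a plain scp from your workstation works just as well; a minimal sketch, using the Master's address and the tarball names from this guide:

```bash
# Run on your local machine; 172.18.5.213 is the Master (hd1) in this guide
scp hadoop-3.1.0.tar.gz jdk-8u131-linux-x64.tar.gz root@172.18.5.213:/usr/local/src/
```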
- Extract the archives and move the extracted directories to /usr/local
```
[root@localhost src]# tar zxf hadoop-3.1.0.tar.gz
[root@localhost src]# tar zxf jdk-8u131-linux-x64.tar.gz
[root@localhost src]# mv hadoop-3.1.0 /usr/local/hadoop
[root@localhost src]# mv jdk1.8.0_131/ /usr/local/jdk
```
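An optional sanity check confirms that both trees landed where the rest of this guide expects them:

```bash
# Both paths should exist after the moves above
ls /usr/local/hadoop/bin/hadoop /usr/local/jdk/bin/java
```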
- System environment preparation
- Modify the hostname and hosts file (required on the Master, Slave1, and Slave2)
```
[root@localhost local]# vi /etc/hosts

127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6
172.18.5.213 hd1
172.18.5.214 hd2
172.18.5.215 hd3
```
```
# Note: the Master is hd1; on Slave1 and Slave2 set HOSTNAME to hd2 and hd3 respectively
[root@localhost local]# vi /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=hd1
```
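On CentOS 6 the HOSTNAME setting in /etc/sysconfig/network only takes effect after a reboot (done below). If you want the new name immediately, you can also set the runtime hostname, and it is worth checking that the hosts entries resolve; a minimal sketch:

```bash
hostname hd1     # apply the new hostname to the current session (hd2/hd3 on the slaves)
ping -c 1 hd2    # should resolve to 172.18.5.214 via /etc/hosts
ping -c 1 hd3    # should resolve to 172.18.5.215
```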
- Create the hadoop user (named hadoop here for easy memorization; create it on all three servers)
```
[root@localhost local]# useradd -m hadoop -s /bin/bash
[root@localhost local]# passwd hadoop
Changing password for user hadoop.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
```
- Change the owner of the hadoop and jdk directories to the hadoop user, to avoid permission problems
```
[root@localhost local]# cd /usr/local/
[root@localhost local]# chown -R hadoop:hadoop hadoop/
[root@localhost local]# chown -R hadoop:hadoop jdk/
```
- Configure the hadoop user's environment variables (on all three servers)
```
[root@localhost .ssh]# su hadoop
[hadoop@localhost .ssh]$ vi ~/.bashrc

# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi

# User specific aliases and functions

# set java environment
export JAVA_HOME=/usr/local/jdk
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin

# [email protected]
# Hadoop
export HADOOP_HOME="/usr/local/hadoop"
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_YARN_HOME=${HADOOP_HOME}
```
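After saving, you can reload the file in the current shell and spot-check the variables before rebooting; a minimal verification, run as the hadoop user:

```bash
source ~/.bashrc
echo $JAVA_HOME     # expect /usr/local/jdk
echo $HADOOP_HOME   # expect /usr/local/hadoop
which hadoop        # expect /usr/local/hadoop/bin/hadoop
```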
- Reboot all three servers so the configuration takes effect (the hostname change requires a reboot)
```
[hadoop@localhost .ssh]$ su root
Password:
[root@localhost .ssh]# shutdown -r now

Broadcast message from [email protected]
    (/dev/pts/0) at 18:08 ...

The system is going down for reboot NOW!
```
- After rebooting, verify that the hostname and environment are configured correctly
```
# Stop the firewall on all three servers
[root@hd1 local]# service iptables stop
# Verify the environment variables on hd1
[root@hd1 local]# su hadoop
[hadoop@hd1 local]$ hostname
hd1
[hadoop@hd1 local]$ java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
[hadoop@hd1 local]$ hadoop version
Hadoop 3.1.0
Source code repository https://github.com/apache/hadoop -r 16b70619a24cdcf5d3b0fcf4b58ca77238ccbe6d
Compiled by centos on 2018-03-30T00:00Z
Compiled with protoc 2.5.0
From source with checksum 14182d20c972b3e2105580a1ad6990
This command was run using /usr/local/hadoop/share/hadoop/common/hadoop-common-3.1.0.jar
```
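Note that `service iptables stop` only lasts until the next reboot; the firewall coming back is the likely cause of the datanode-join problem discussed in Troubleshooting at the end of this article. To keep it off permanently on CentOS 6, also disable it at boot; a short sketch, run as root on all three servers:

```bash
service iptables stop     # stop the firewall now
chkconfig iptables off    # keep it off after future reboots
```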
- Hadoop configuration
- Edit hadoop-env.sh
```
[hadoop@hd1 root]$ cd /usr/local/hadoop/etc/hadoop/
[hadoop@hd1 hadoop]$ vi hadoop-env.sh
# add:
export JAVA_HOME=/usr/local/jdk
```
- Edit yarn-env.sh
```
[hadoop@hd1 hadoop]$ vi yarn-env.sh
# add:
export JAVA_HOME=/usr/local/jdk
```
- Edit core-site.xml
```
[hadoop@hd1 hadoop]$ vi core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hd1:9000</value>
        <description>The name of the default file system.</description>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/temp</value>
        <description>A base for other temporary directories.</description>
    </property>
</configuration>
```
- Edit hdfs-site.xml
```
[hadoop@hd1 hadoop]$ vi hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/dfs/name</value>
        <final>true</final>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/dfs/data</value>
    </property>
</configuration>
```
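None of these directories exist in a fresh extraction. `hdfs namenode -format` will create the name directory and the datanode creates its data directory on startup, but pre-creating them as the hadoop user rules out permission surprises; a hedged precaution, matching the paths configured above:

```bash
# Run as the hadoop user (the directories inherit into the scp copies made later)
mkdir -p /usr/local/hadoop/temp
mkdir -p /usr/local/hadoop/dfs/name /usr/local/hadoop/dfs/data
```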
- Edit yarn-site.xml
```
[hadoop@hd1 hadoop]$ vi yarn-site.xml

<?xml version="1.0"?>
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hd1</value>
    </property>
</configuration>
```
- Edit mapred-site.xml
```
[hadoop@hd1 hadoop]$ vi mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <final>true</final>
    </property>
</configuration>
```
- Edit the workers file (in Hadoop 3.x this file replaces the slaves file used by Hadoop 2.x)
```
[hadoop@hd1 hadoop]$ vi workers

hd2
hd3
```
- Copy the configured hadoop and jdk directories to the other two servers
```
[hadoop@hd1 local]$ su root
Password:
[root@hd1 local]# cd /usr/local/
[root@hd1 local]# scp -r hadoop/ root@hd2:/usr/local/
The authenticity of host 'hd2 (172.18.5.214)' can't be established.
RSA key fingerprint is 16:0b:40:c3:f4:a9:e4:07:7d:91:ac:9b:f4:5c:56:f4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hd2' (RSA) to the list of known hosts.
root@hd2's password:
[root@hd1 local]# scp -r hadoop/ root@hd3:/usr/local/
The authenticity of host 'hd3 (172.18.5.215)' can't be established.
RSA key fingerprint is 16:0b:40:c3:f4:a9:e4:07:7d:91:ac:9b:f4:5c:56:f4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hd3' (RSA) to the list of known hosts.
root@hd3's password:
[root@hd1 local]# scp -r jdk/ root@hd2:/usr/local/
root@hd2's password:
[root@hd1 local]# scp -r jdk/ root@hd3:/usr/local/
root@hd3's password:
```
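The four transfers can also be written as a loop; a minimal sketch using the same hosts and paths (each iteration still prompts for the root password, since key-based login is only set up for the hadoop user below):

```bash
# Run as root on hd1
for h in hd2 hd3; do
  scp -r /usr/local/hadoop root@${h}:/usr/local/
  scp -r /usr/local/jdk    root@${h}:/usr/local/
done
```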
- Verify that the hostname and environment variables are configured correctly on all three servers
```
[root@hd1 local]# su hadoop
[hadoop@hd1 local]$ hostname
hd1
[hadoop@hd1 local]$ java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
[hadoop@hd1 local]$ hadoop version
Hadoop 3.1.0
Source code repository https://github.com/apache/hadoop -r 16b70619a24cdcf5d3b0fcf4b58ca77238ccbe6d
Compiled by centos on 2018-03-30T00:00Z
Compiled with protoc 2.5.0
From source with checksum 14182d20c972b3e2105580a1ad6990
This command was run using /usr/local/hadoop/share/hadoop/common/hadoop-common-3.1.0.jar
```
- Configure passwordless SSH login for the hadoop user (on all three servers)
```
[hadoop@hd1 local]$ cd ~/.ssh/
[hadoop@hd1 .ssh]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
73:ed:9c:55:c1:2a:f2:c3:03:57:0f:5a:4d:fa:dc:8f hadoop@hd1
The key's randomart image is:
+--[ RSA 2048]----+
|            +o   |
|           +.o.  |
|          +.+   .|
|     o = .oo.    |
|    S B o .o.    |
|   o B o ..      |
|    * E .        |
|                 |
|                 |
+-----------------+
[hadoop@hd1 .ssh]$ ssh-copy-id -i localhost
[hadoop@hd1 .ssh]$ ssh-copy-id -i hadoop@hd2
hadoop@hd2's password:
Now try logging into the machine, with "ssh 'hadoop@hd2'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[hadoop@hd1 .ssh]$ ssh-copy-id -i hadoop@hd3
The authenticity of host 'hd3 (172.18.5.215)' can't be established.
RSA key fingerprint is 16:0b:40:c3:f4:a9:e4:07:7d:91:ac:9b:f4:5c:56:f4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hd3,172.18.5.215' (RSA) to the list of known hosts.
hadoop@hd3's password:
Now try logging into the machine, with "ssh 'hadoop@hd3'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.
```
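For scripted setups the same steps can run without prompts; a hedged sketch (`-N ''` gives an empty passphrase, and the host list matches the names used in this guide):

```bash
# Run as the hadoop user on each server
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for h in localhost hd1 hd2 hd3; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@${h}   # prompts for the password once per host
done
```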
- Verify that passwordless login works
```
[hadoop@hd1 .ssh]$ ssh hd2
Last login: Thu Apr 19 17:59:02 2018 from hd1
[hadoop@hd2 ~]$ exit
logout
Connection to hd2 closed.
[hadoop@hd1 .ssh]$ ssh hd3
Last login: Thu Apr 19 17:59:08 2018 from hd1
[hadoop@hd3 ~]$ exit
logout
Connection to hd3 closed.
```
- Change the owner of the hadoop and jdk directories to the hadoop user on all three servers, to avoid permission problems (only the commands on hd1 are shown; repeat on hd2 and hd3)
```
[hadoop@hd1 .ssh]$ su root
Password:
[root@hd1 .ssh]# cd /usr/local/
[root@hd1 local]# chown -R hadoop:hadoop hadoop/
[root@hd1 local]# chown -R hadoop:hadoop jdk
```
- Start Hadoop
- Format the namenode; this is required before starting Hadoop for the first time (run on hd1)
```
[root@hd1 hadoop]# su hadoop
[hadoop@hd1 hadoop]$ hdfs namenode -format
2018-04-19 20:54:45,374 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hd1/172.18.5.213
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 3.1.0
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/kerb-common-1.0.1.jar: ... (several hundred jar entries omitted) ... :/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-registry-3.1.0.jar
STARTUP_MSG:   build = https://github.com/apache/hadoop -r 16b70619a24cdcf5d3b0fcf4b58ca77238ccbe6d; compiled by 'centos' on 2018-03-30T00:00Z
STARTUP_MSG:   java = 1.8.0_131
************************************************************/
2018-04-19 20:54:45,414 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2018-04-19 20:54:45,427 INFO namenode.NameNode: createNameNode [-format]
2018-04-19 20:54:45,930 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-b7e966a3-9d2b-4e8d-87fb-6f02c1915e20
2018-04-19 20:54:46,984 INFO namenode.FSEditLog: Edit logging is async:true
2018-04-19 20:54:47,047 INFO namenode.FSNamesystem: KeyProvider: null
2018-04-19 20:54:47,048 INFO namenode.FSNamesystem: fsLock is fair: true
2018-04-19 20:54:47,050 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2018-04-19 20:54:47,090 INFO namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
2018-04-19 20:54:47,090 INFO namenode.FSNamesystem: supergroup          = supergroup
2018-04-19 20:54:47,090 INFO namenode.FSNamesystem: isPermissionEnabled = true
2018-04-19 20:54:47,090 INFO namenode.FSNamesystem: HA Enabled: false
2018-04-19 20:54:47,196 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2018-04-19 20:54:47,290 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2018-04-19 20:54:47,290 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2018-04-19 20:54:47,297 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2018-04-19 20:54:47,307 INFO blockmanagement.BlockManager: The block deletion will start around 2018 Apr 19 20:54:47
2018-04-19 20:54:47,309 INFO util.GSet: Computing capacity for map BlocksMap
2018-04-19 20:54:47,309 INFO util.GSet: VM type       = 64-bit
2018-04-19 20:54:47,317 INFO util.GSet: 2.0% max memory 416 MB = 8.3 MB
2018-04-19 20:54:47,317 INFO util.GSet: capacity      = 2^20 = 1048576 entries
2018-04-19 20:54:47,384 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2018-04-19 20:54:47,391 INFO Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
2018-04-19 20:54:47,391 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2018-04-19 20:54:47,391 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2018-04-19 20:54:47,391 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2018-04-19 20:54:47,391 INFO blockmanagement.BlockManager: defaultReplication         = 2
2018-04-19 20:54:47,391 INFO blockmanagement.BlockManager: maxReplication             = 512
2018-04-19 20:54:47,392 INFO blockmanagement.BlockManager: minReplication             = 1
2018-04-19 20:54:47,392 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
2018-04-19 20:54:47,392 INFO blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms
2018-04-19 20:54:47,392 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
2018-04-19 20:54:47,392 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2018-04-19 20:54:47,533 INFO util.GSet: Computing capacity for map INodeMap
2018-04-19 20:54:47,533 INFO util.GSet: VM type       = 64-bit
2018-04-19 20:54:47,533 INFO util.GSet: 1.0% max memory 416 MB = 4.2 MB
2018-04-19 20:54:47,533 INFO util.GSet: capacity      = 2^19 = 524288 entries
2018-04-19 20:54:47,534 INFO namenode.FSDirectory: ACLs enabled? false
2018-04-19 20:54:47,535 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2018-04-19 20:54:47,535 INFO namenode.FSDirectory: XAttrs enabled? true
2018-04-19 20:54:47,535 INFO namenode.NameNode: Caching file names occurring more than 10 times
2018-04-19 20:54:47,540 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2018-04-19 20:54:47,565 INFO snapshot.SnapshotManager: SkipList is disabled
2018-04-19 20:54:47,570 INFO util.GSet: Computing capacity for map cachedBlocks
2018-04-19 20:54:47,570 INFO util.GSet: VM type       = 64-bit
2018-04-19 20:54:47,570 INFO util.GSet: 0.25% max memory 416 MB = 1.0 MB
2018-04-19 20:54:47,570 INFO util.GSet: capacity      = 2^17 = 131072 entries
2018-04-19 20:54:47,582 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2018-04-19 20:54:47,582 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2018-04-19 20:54:47,582 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2018-04-19 20:54:47,585 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2018-04-19 20:54:47,585 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2018-04-19 20:54:47,588 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2018-04-19 20:54:47,588 INFO util.GSet: VM type       = 64-bit
2018-04-19 20:54:47,588 INFO util.GSet: 0.029999999329447746% max memory 416 MB = 127.8 KB
2018-04-19 20:54:47,588 INFO util.GSet: capacity      = 2^14 = 16384 entries
2018-04-19 20:54:47,630 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1030989773-172.18.5.213-1524196487615
2018-04-19 20:54:47,654 INFO common.Storage: Storage directory /usr/local/hadoop/dfs/name has been successfully formatted.
2018-04-19 20:54:47,717 INFO namenode.FSImageFormatProtobuf: Saving image file /usr/local/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
2018-04-19 20:54:47,858 INFO namenode.FSImageFormatProtobuf: Image file /usr/local/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 391 bytes saved in 0 seconds .
2018-04-19 20:54:47,938 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2018-04-19 20:54:47,971 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hd1/172.18.5.213
************************************************************/
```
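The line to look for is "Storage directory ... has been successfully formatted." Formatting should normally happen only once: re-running it generates a new clusterID, and datanodes whose data directories still carry the old ID will refuse to join. If you genuinely need a from-scratch re-format, a hedged sketch using the directories configured in hdfs-site.xml above:

```bash
# WARNING: destroys all HDFS data; only for a clean re-format
stop-dfs.sh
rm -rf /usr/local/hadoop/dfs/name/*     # on hd1
# rm -rf /usr/local/hadoop/dfs/data/*   # on each datanode (hd2, hd3)
hdfs namenode -format
```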
- Start HDFS
```
[hadoop@hd1 hadoop]$ start-dfs.sh
Starting namenodes on [hd1]
Starting datanodes
Starting secondary namenodes [hd1]
2018-04-19 20:56:16,664 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
```
- Start YARN
```
[hadoop@hd1 hadoop]$ start-yarn.sh
Starting resourcemanager
Starting nodemanagers
```
- Verify the processes
- On hd1
```
[hadoop@hd1 hadoop]$ jps
3281 NameNode
3618 SecondaryNameNode
4340 Jps
3855 ResourceManager
```
- On hd2
```
[hadoop@hd2 root]$ jps
3346 NodeManager
3234 DataNode
3468 Jps
```
- On hd3
```
[hadoop@hd3 root]$ jps
2992 Jps
2774 DataNode
2870 NodeManager
```
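Instead of logging in to each machine, the same check can run from hd1 over the passwordless SSH configured earlier; a minimal sketch (jps is called by its full path because non-interactive shells may not load ~/.bashrc):

```bash
for h in hd1 hd2 hd3; do
  echo "== ${h} =="
  ssh ${h} /usr/local/jdk/bin/jps
done
```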
- Verify the cluster
- Web verification
In Hadoop 3.x the NameNode web UI listens on http://hd1:9870 (it was port 50070 in Hadoop 2.x) and the YARN ResourceManager UI on http://hd1:8088; the Datanodes tab and the Nodes page should list the live workers.
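A quick reachability check from the command line, as a sketch (any host that can resolve hd1 will do):

```bash
curl -s -o /dev/null -w "%{http_code}\n" http://hd1:9870   # NameNode UI, expect 200
curl -s -o /dev/null -w "%{http_code}\n" http://hd1:8088   # ResourceManager UI, expect 200
```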
- Command-line verification
```
[hadoop@hd1 hadoop]$ hadoop dfsadmin -report
WARNING: Use of this script to execute dfsadmin is deprecated.
WARNING: Attempting to execute replacement "hdfs dfsadmin" instead.

2018-04-22 18:45:27,226 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 144910884864 (134.96 GB)
Present Capacity: 122461069312 (114.05 GB)
DFS Remaining: 122460995584 (114.05 GB)
DFS Used: 73728 (72 KB)
DFS Used%: 0.00%
Replicated Blocks:
    Under replicated blocks: 0
    Blocks with corrupt replicas: 0
    Missing blocks: 0
    Missing blocks (with replication factor 1): 0
    Pending deletion blocks: 0
Erasure Coded Block Groups:
    Low redundancy block groups: 0
    Block groups with corrupt internal blocks: 0
    Missing block groups: 0
    Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (3):

Name: 172.18.5.213:9866 (hd1)
Hostname: hd1
Decommission Status : Normal
Configured Capacity: 48303628288 (44.99 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 7091535872 (6.60 GB)
DFS Remaining: 38751535104 (36.09 GB)
DFS Used%: 0.00%
DFS Remaining%: 80.22%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Apr 22 18:45:26 PDT 2018
Last Block Report: Sun Apr 22 18:41:35 PDT 2018
Num of Blocks: 0


Name: 172.18.5.214:9866 (hd2)
Hostname: hd2
Decommission Status : Normal
Configured Capacity: 48303628288 (44.99 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 3911618560 (3.64 GB)
DFS Remaining: 41931452416 (39.05 GB)
DFS Used%: 0.00%
DFS Remaining%: 86.81%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Apr 22 18:45:26 PDT 2018
Last Block Report: Sun Apr 22 18:42:26 PDT 2018
Num of Blocks: 0


Name: 172.18.5.215:9866 (hd3)
Hostname: hd3
Decommission Status : Normal
Configured Capacity: 48303628288 (44.99 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 4065062912 (3.79 GB)
DFS Remaining: 41778008064 (38.91 GB)
DFS Used%: 0.00%
DFS Remaining%: 86.49%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Apr 22 18:45:25 PDT 2018
Last Block Report: Sun Apr 22 18:44:58 PDT 2018
Num of Blocks: 0
```
- Troubleshooting
- After starting the datanode and nodemanager individually, hd2 and hd3 did not automatically join the cluster
The firewall had not been stopped; switch to the root user and run `service iptables stop` (and, as noted earlier, `chkconfig iptables off` so it stays off after a reboot).
- According to the official documentation: "If etc/hadoop/workers and ssh trusted access is configured (see Single Node Setup), all of the HDFS processes can be started with a utility script." That is, with the workers file and passwordless SSH in place, start-dfs.sh alone should bring up HDFS across the cluster. In this environment, however, running the script on hd1 did not start the datanodes on hd2 and hd3.
The exact cause is unknown and remains to be investigated; a manual workaround is sketched below.
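Until the root cause is found, the worker daemons can be started by hand with the Hadoop 3.x per-daemon commands, and it is worth confirming that the workers file actually reached the copied installation; a hedged sketch:

```bash
# On each worker (hd2, hd3), as the hadoop user
hdfs --daemon start datanode
yarn --daemon start nodemanager

# On hd1: confirm the copied configuration still lists the workers
cat /usr/local/hadoop/etc/hadoop/workers   # expect hd2 and hd3
```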