[hadoop] Running the official hadoop-mapreduce-examples pi example on Hadoop 3.0.0+ (a debugging diary)

Table of Contents

Preface

1. Download the official examples jar

2. Run the command hadoop jar hadoop-mapreduce-examples-3.3.0.jar pi 5 5

2.1 Problem: the job hangs

2.2 Failing this attempt. Diagnostics: No space available in any of the local directories.

2.3 Error: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

2.4 Exception: Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

2.5 New error after that fix: Name node is in safe mode.

3. A successful run

4. The complete yarn-site.xml


Preface

The Hadoop environment was already fully configured, but running the official example threw one error after another, so I tracked each one down as it appeared (GIYF).

1. Download the official examples jar

https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-mapreduce-examples

Download the latest version, 3.3.0. This is the jar we will run below.

2. Run the command hadoop jar hadoop-mapreduce-examples-3.3.0.jar pi 5 5

If you don't have a Hadoop environment yet, see: hadoop 3.0 configuration

2.1 Problem: the job hangs

Number of Maps  = 5
Samples per Map = 5
2021-01-26 16:49:28,195 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Starting Job
2021-01-26 16:49:28,989 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:18040
2021-01-26 16:49:29,477 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/bjhl/.staging/job_1611647014879_0002
2021-01-26 16:49:29,554 INFO input.FileInputFormat: Total input files to process : 5
2021-01-26 16:49:29,579 INFO mapreduce.JobSubmitter: number of splits:5
2021-01-26 16:49:29,675 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1611647014879_0002
2021-01-26 16:49:29,677 INFO mapreduce.JobSubmitter: Executing with tokens: []
2021-01-26 16:49:29,821 INFO conf.Configuration: resource-types.xml not found
2021-01-26 16:49:29,822 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2021-01-26 16:49:29,864 INFO impl.YarnClientImpl: Submitted application application_1611647014879_0002
2021-01-26 16:49:29,891 INFO mapreduce.Job: The url to track the job: http://localhost:18088/proxy/application_1611647014879_0002/
2021-01-26 16:49:29,891 INFO mapreduce.Job: Running job: job_1611647014879_0002

The run hangs here and goes no further.

Analysis: the application is stuck in the ACCEPTED state, and the node shows as unhealthy.


Check the node state with the yarn command under the bin directory of the Hadoop installation:

cd /Users/bjhl/environment/hadoop/hadoop-3.2.2/bin 
./yarn node -list -all

The node is reported as UNHEALTHY:

2021-01-26 16:51:57,861 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2021-01-26 16:51:57,934 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:18040
Total Nodes:1
         Node-Id	     Node-State	Node-Http-Address	Number-of-Running-Containers
172.23.66.246:55139	      UNHEALTHY	172.23.66.246:8042	    

So first try the fix commonly suggested online: edit yarn-site.xml to relax the disk health check that marks the node unhealthy.


<property>
    <name>yarn.nodemanager.disk-health-checker.min-healthy-disks</name>
    <value>0.0</value>
</property>
<property>
    <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
    <value>100.0</value>
</property>

2.2 Failing this attempt. Diagnostics: No space available in any of the local directories.

2021-01-26 17:05:20,876 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Starting Job
2021-01-26 17:05:22,211 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:18040
2021-01-26 17:05:22,593 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/bjhl/.staging/job_1611651857025_0001
2021-01-26 17:05:22,678 INFO input.FileInputFormat: Total input files to process : 5
2021-01-26 17:05:22,704 INFO mapreduce.JobSubmitter: number of splits:5
2021-01-26 17:05:23,249 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1611651857025_0001
2021-01-26 17:05:23,251 INFO mapreduce.JobSubmitter: Executing with tokens: []
2021-01-26 17:05:23,397 INFO conf.Configuration: resource-types.xml not found
2021-01-26 17:05:23,397 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2021-01-26 17:05:23,768 INFO impl.YarnClientImpl: Submitted application application_1611651857025_0001
2021-01-26 17:05:23,795 INFO mapreduce.Job: The url to track the job: http://localhost:18088/proxy/application_1611651857025_0001/
2021-01-26 17:05:23,796 INFO mapreduce.Job: Running job: job_1611651857025_0001
2021-01-26 17:05:25,826 INFO mapreduce.Job: Job job_1611651857025_0001 running in uber mode : false
2021-01-26 17:05:25,828 INFO mapreduce.Job:  map 0% reduce 0%
2021-01-26 17:05:25,845 INFO mapreduce.Job: Job job_1611651857025_0001 failed with state FAILED due to: Application application_1611651857025_0001 failed 2 times due to AM Container for appattempt_1611651857025_0001_000002 exited with  exitCode: -1000
Failing this attempt.Diagnostics: [2021-01-26 17:05:25.395]No space available in any of the local directories.
For more detailed output, check the application tracking page: http://localhost:18088/cluster/app/application_1611651857025_0001 Then click on links to logs of each attempt.
. Failing the application.
2021-01-26 17:05:25,871 INFO mapreduce.Job: Counters: 0
Job job_1611651857025_0001 failed!

The cause appears to be: No space available in any of the local directories.

But checking:

$ df -h

Result:
Filesystem      Size   Used  Avail Capacity iused      ifree %iused  Mounted on
/dev/disk1s1   233Gi   10Gi  106Gi     9%  488365 2447612955    0%   /
devfs          188Ki  188Ki    0Bi   100%     652          0  100%   /dev
/dev/disk1s2   233Gi  115Gi  106Gi    52% 1601566 2446499754    0%   /System/Volumes/Data
/dev/disk1s5   233Gi  1.0Gi  106Gi     1%       1 2448101319    0%   /private/var/vm
map auto_home    0Bi    0Bi    0Bi   100%       0          0  100%   /System/Volumes/Data/home

The disk itself has plenty of space.
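Some context on why "No space available" can appear despite a disk with free room: the NodeManager marks a local directory bad once its volume's utilization crosses yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage (90% by default), and once every local dir is bad, container localization fails with exactly this message. A simplified Python sketch of that per-directory threshold check (an illustration of the logic only, not Hadoop's actual code, which also checks free-space minimums and permissions):

```python
import shutil

def local_dir_healthy(path, max_utilization_pct=90.0):
    """Simplified version of YARN's per-directory disk check:
    a directory counts as good only while the volume's used space
    stays below the configured percentage."""
    usage = shutil.disk_usage(path)
    used_pct = 100.0 * (usage.total - usage.free) / usage.total
    return used_pct < max_utilization_pct

# Raising the threshold to 100.0, as done in yarn-site.xml earlier,
# makes the check pass even on a nearly full volume.
print(local_dir_healthy("/", max_utilization_pct=100.0))
```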

So add the following to yarn-site.xml, which resolved it:


<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
</property>
<property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
</property>

2.3 Error: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

This simply means the NameNode or DataNode is not running.

jps confirmed it. While testing section 2.2 I had repeatedly deleted the tmp cache directory, which removes the NameNode's metadata.

So we need to:

Reformat the NameNode:

hadoop namenode -format

(On Hadoop 3, hdfs namenode -format is the preferred form of this command.)

Then use jps to see which daemons are still running, kill them, and restart everything:

Go into the sbin directory and run ./start-all.sh

bjhldeMacBook-Pro:hadoop bjhl$ hadoop jar hadoop-mapreduce-examples-3.3.0.jar pi 5 5
Number of Maps  = 5
Samples per Map = 5
2021-01-26 17:27:54,812 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
java.net.ConnectException: Call From bjhldeMacBook-Pro.local/127.0.0.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:836)
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:760)
	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1566)
	at org.apache.hadoop.ipc.Client.call(Client.java:1508)
	at org.apache.hadoop.ipc.Client.call(Client.java:1405)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
	at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:910)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
	at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1671)
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1602)
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1599)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1614)
	at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1690)
	at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:279)
	at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:360)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:368)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
	at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
	at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:715)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:534)
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:699)
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:812)
	at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:413)
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1636)
	at org.apache.hadoop.ipc.Client.call(Client.java:1452)
	... 38 more

2.4 Exception: Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

bjhldeMacBook-Pro:hadoop bjhl$ hadoop jar hadoop-mapreduce-examples-3.3.0.jar pi 5 5
Number of Maps  = 5
Samples per Map = 5
2021-01-26 20:01:39,218 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Starting Job
2021-01-26 20:01:41,026 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:18040
2021-01-26 20:01:41,621 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/bjhl/.staging/job_1611662413845_0001
2021-01-26 20:01:42,133 INFO input.FileInputFormat: Total input files to process : 5
2021-01-26 20:01:42,158 INFO mapreduce.JobSubmitter: number of splits:5
2021-01-26 20:01:42,275 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1611662413845_0001
2021-01-26 20:01:42,276 INFO mapreduce.JobSubmitter: Executing with tokens: []
2021-01-26 20:01:42,421 INFO conf.Configuration: resource-types.xml not found
2021-01-26 20:01:42,421 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2021-01-26 20:01:42,809 INFO impl.YarnClientImpl: Submitted application application_1611662413845_0001
2021-01-26 20:01:42,837 INFO mapreduce.Job: The url to track the job: http://localhost:18088/proxy/application_1611662413845_0001/
2021-01-26 20:01:42,838 INFO mapreduce.Job: Running job: job_1611662413845_0001
2021-01-26 20:01:46,878 INFO mapreduce.Job: Job job_1611662413845_0001 running in uber mode : false
2021-01-26 20:01:46,880 INFO mapreduce.Job:  map 0% reduce 0%
2021-01-26 20:01:46,898 INFO mapreduce.Job: Job job_1611662413845_0001 failed with state FAILED due to: Application application_1611662413845_0001 failed 2 times due to AM Container for appattempt_1611662413845_0001_000002 exited with  exitCode: 1
Failing this attempt.Diagnostics: [2021-01-26 20:01:45.926]Exception from container-launch.
Container id: container_1611662413845_0001_02_000001
Exit code: 1

[2021-01-26 20:01:45.929]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster


[2021-01-26 20:01:45.929]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster


For more detailed output, check the application tracking page: http://localhost:18088/cluster/app/application_1611662413845_0001 Then click on links to logs of each attempt.
. Failing the application.
2021-01-26 20:01:46,920 INFO mapreduce.Job: Counters: 0

Fix:

Run hadoop classpath on the command line:

bjhldeMacBook-Pro:hadoop bjhl$ hadoop classpath
/Users/bjhl/environment/hadoop/hadoop-3.2.2/etc/hadoop:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/common/lib/*:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/common/*:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/hdfs:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/hdfs/lib/*:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/hdfs/*:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/mapreduce/lib/*:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/mapreduce/*:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/yarn:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/yarn/lib/*:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/yarn/*

Copy this output into yarn-site.xml as the value of the yarn.application.classpath property:


<property>
    <name>yarn.application.classpath</name>
    <value>/Users/bjhl/environment/hadoop/hadoop-3.2.2/etc/hadoop:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/common/lib/*:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/common/*:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/hdfs:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/hdfs/lib/*:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/hdfs/*:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/mapreduce/lib/*:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/mapreduce/*:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/yarn:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/yarn/lib/*:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/yarn/*</value>
</property>

2.5 New error after that fix: Name node is in safe mode.

bjhldeMacBook-Pro:hadoop bjhl$ hadoop jar hadoop-mapreduce-examples-3.3.0.jar pi 5 5
Number of Maps  = 5
Samples per Map = 5
2021-01-26 20:06:05,337 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /user/bjhl/QuasiMonteCarlo_1611662765227_11239671/in. Name node is in safe mode.
The reported blocks 10 has reached the threshold 0.9990 of total blocks 10. The minimum number of live datanodes is not required. In safe mode extension. Safe mode will be turned off automatically in 6 seconds. NamenodeHostName:localhost
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1508)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1495)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3251)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1158)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:723)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:528)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1086)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1029)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:957)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2957)

	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2432)
	at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2406)
	at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1338)
	at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1335)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1352)
	at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1327)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2304)
	at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:283)
	at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:360)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:368)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
	at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
	at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /user/bjhl/QuasiMonteCarlo_1611662765227_11239671/in. Name node is in safe mode.
The reported blocks 10 has reached the threshold 0.9990 of total blocks 10. The minimum number of live datanodes is not required. In safe mode extension. Safe mode will be turned off automatically in 6 seconds. NamenodeHostName:localhost
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1508)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1495)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3251)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1158)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:723)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:528)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1086)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1029)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:957)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2957)

	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1562)
	at org.apache.hadoop.ipc.Client.call(Client.java:1508)
	at org.apache.hadoop.ipc.Client.call(Client.java:1405)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
	at com.sun.proxy.$Proxy9.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:663)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
	at com.sun.proxy.$Proxy10.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2430)
	... 24 more

 Name node is in safe mode.

The reported blocks 10 has reached the threshold 0.9990 of total blocks 10. The minimum number of live datanodes is not required. In safe mode extension. Safe mode will be turned off automatically in 6 seconds. NamenodeHostName:localhost

 

Since the log says safe mode will be turned off automatically within a few seconds, simply waiting also works. To leave it immediately:

hadoop dfsadmin -safemode leave

(On Hadoop 3, hdfs dfsadmin -safemode leave is the preferred form.)

Then launch the job again.
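The log message also spells out the exit condition: the NameNode stays in safe mode until the fraction of reported blocks reaches dfs.namenode.safemode.threshold-pct (0.999 by default), then waits out a short extension period. A minimal sketch of that decision (a simplified illustration, not HDFS's actual code):

```python
def can_leave_safe_mode(reported_blocks, total_blocks, threshold_pct=0.999):
    """True once enough DataNode block reports have arrived, mirroring the
    message "The reported blocks 10 has reached the threshold 0.9990 of
    total blocks 10" in the log above."""
    if total_blocks == 0:
        return True  # an empty namespace has nothing to wait for
    return reported_blocks >= threshold_pct * total_blocks

print(can_leave_safe_mode(10, 10))  # the situation in the log above
print(can_leave_safe_mode(5, 10))   # still waiting for block reports
```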

3. A successful run

bjhldeMacBook-Pro:hadoop bjhl$ hadoop jar hadoop-mapreduce-examples-3.3.0.jar pi 5 5
Number of Maps  = 5
Samples per Map = 5
2021-01-26 20:07:33,125 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Starting Job
2021-01-26 20:07:34,365 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:18040
2021-01-26 20:07:34,659 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/bjhl/.staging/job_1611662749925_0001
2021-01-26 20:07:35,151 INFO input.FileInputFormat: Total input files to process : 5
2021-01-26 20:07:35,175 INFO mapreduce.JobSubmitter: number of splits:5
2021-01-26 20:07:35,266 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1611662749925_0001
2021-01-26 20:07:35,267 INFO mapreduce.JobSubmitter: Executing with tokens: []
2021-01-26 20:07:35,397 INFO conf.Configuration: resource-types.xml not found
2021-01-26 20:07:35,397 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2021-01-26 20:07:35,751 INFO impl.YarnClientImpl: Submitted application application_1611662749925_0001
2021-01-26 20:07:35,776 INFO mapreduce.Job: The url to track the job: http://localhost:18088/proxy/application_1611662749925_0001/
2021-01-26 20:07:35,777 INFO mapreduce.Job: Running job: job_1611662749925_0001
2021-01-26 20:07:41,866 INFO mapreduce.Job: Job job_1611662749925_0001 running in uber mode : false
2021-01-26 20:07:41,868 INFO mapreduce.Job:  map 0% reduce 0%
2021-01-26 20:07:49,000 INFO mapreduce.Job:  map 100% reduce 0%
2021-01-26 20:07:53,027 INFO mapreduce.Job:  map 100% reduce 100%
2021-01-26 20:07:54,042 INFO mapreduce.Job: Job job_1611662749925_0001 completed successfully
2021-01-26 20:07:54,122 INFO mapreduce.Job: Counters: 50
	File System Counters
		FILE: Number of bytes read=116
		FILE: Number of bytes written=1413195
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=1315
		HDFS: Number of bytes written=215
		HDFS: Number of read operations=25
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=3
		HDFS: Number of bytes read erasure-coded=0
	Job Counters 
		Launched map tasks=5
		Launched reduce tasks=1
		Data-local map tasks=5
		Total time spent by all maps in occupied slots (ms)=20439
		Total time spent by all reduces in occupied slots (ms)=2029
		Total time spent by all map tasks (ms)=20439
		Total time spent by all reduce tasks (ms)=2029
		Total vcore-milliseconds taken by all map tasks=20439
		Total vcore-milliseconds taken by all reduce tasks=2029
		Total megabyte-milliseconds taken by all map tasks=20929536
		Total megabyte-milliseconds taken by all reduce tasks=2077696
	Map-Reduce Framework
		Map input records=5
		Map output records=10
		Map output bytes=90
		Map output materialized bytes=140
		Input split bytes=725
		Combine input records=0
		Combine output records=0
		Reduce input groups=2
		Reduce shuffle bytes=140
		Reduce input records=10
		Reduce output records=0
		Spilled Records=20
		Shuffled Maps =5
		Failed Shuffles=0
		Merged Map outputs=5
		GC time elapsed (ms)=262
		CPU time spent (ms)=0
		Physical memory (bytes) snapshot=0
		Virtual memory (bytes) snapshot=0
		Total committed heap usage (bytes)=1799880704
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters 
		Bytes Read=590
	File Output Format Counters 
		Bytes Written=97
Job Finished in 19.804 seconds
Estimated value of Pi is 3.68000000000000000000
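The estimate 3.68 is so far off because pi 5 5 uses only 5 maps x 5 samples = 25 sample points. The example estimates pi by counting how many points of the unit square fall inside the inscribed quarter circle. A plain-Python Monte Carlo sketch of the same idea (the real QuasiMonteCarlo example uses a Halton quasi-random sequence; this illustration uses ordinary pseudo-random points):

```python
import random

def estimate_pi(num_points, seed=42):
    """Estimate pi by sampling points in the unit square and counting
    the fraction that land inside the inscribed quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(num_points):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / num_points

# With only 25 points the estimate is very coarse, just like pi 5 5;
# with a million points it converges toward 3.14159.
print(estimate_pi(25))
print(estimate_pi(1_000_000))
```

Increasing the arguments, e.g. pi 16 100000, gives a much tighter estimate at the cost of a longer job.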

4. The complete yarn-site.xml

Essentially all of the changes above went into this one file; nothing else was modified.

Also note: the firewall must be off, or it will interfere with testing.









<?xml version="1.0"?>
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>localhost:18040</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>localhost:18030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>localhost:18025</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>localhost:18141</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>localhost:18088</value>
    </property>
    <property>
        <name>yarn.nodemanager.disk-health-checker.min-healthy-disks</name>
        <value>0.0</value>
    </property>
    <property>
        <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
        <value>100.0</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>
    <property>
        <name>yarn.application.classpath</name>
        <value>/Users/bjhl/environment/hadoop/hadoop-3.2.2/etc/hadoop:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/common/lib/*:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/common/*:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/hdfs:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/hdfs/lib/*:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/hdfs/*:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/mapreduce/lib/*:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/mapreduce/*:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/yarn:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/yarn/lib/*:/Users/bjhl/environment/hadoop/hadoop-3.2.2/share/hadoop/yarn/*</value>
    </property>
</configuration>
 

That's it for this round of pitfalls. Next up: wordcount.

 

 
