Running the Pi Example Program in Hadoop


  • Command
  • Result

After setting up a Hadoop cluster, you will usually want to check whether it is working correctly. The simplest way is to run one of the example programs that ships with Hadoop.

Command

hadoop jar hadoop-mapreduce-examples-2.6.4.jar pi 5 5
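
The two arguments after pi are the number of map tasks and the number of samples per map, which is why the output below shows "Number of Maps = 5" and "Samples per Map = 5". The command above assumes the example jar is in the current directory; if it is not, it normally lives under the Hadoop installation. A sketch, assuming a standard Hadoop 2.6.4 layout and that HADOOP_HOME is set (adjust the path to your install):

# assumed location under a typical Hadoop 2.x install
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar pi 5 5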

Result

[root@zk2 ~]# hadoop jar hadoop-mapreduce-examples-2.6.4.jar pi 5 5
Number of Maps  = 5
Samples per Map = 5
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Starting Job
19/03/28 23:25:06 INFO client.RMProxy: Connecting to ResourceManager at /192.168.56.160:8032
19/03/28 23:25:07 INFO input.FileInputFormat: Total input paths to process : 5
19/03/28 23:25:07 INFO mapreduce.JobSubmitter: number of splits:5
19/03/28 23:25:08 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1553786668014_0001
19/03/28 23:25:08 INFO impl.YarnClientImpl: Submitted application application_1553786668014_0001
19/03/28 23:25:08 INFO mapreduce.Job: The url to track the job: http://zk1:8088/proxy/application_1553786668014_0001/
19/03/28 23:25:08 INFO mapreduce.Job: Running job: job_1553786668014_0001
19/03/28 23:25:20 INFO mapreduce.Job: Job job_1553786668014_0001 running in uber mode : false
19/03/28 23:25:20 INFO mapreduce.Job:  map 0% reduce 0%
19/03/28 23:27:43 INFO mapreduce.Job:  map 80% reduce 0%
19/03/28 23:27:45 INFO mapreduce.Job:  map 100% reduce 0%
19/03/28 23:28:28 INFO mapreduce.Job:  map 100% reduce 100%
19/03/28 23:28:29 INFO mapreduce.Job: Job job_1553786668014_0001 completed successfully
19/03/28 23:28:30 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=116
                FILE: Number of bytes written=642111
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=1345
                HDFS: Number of bytes written=215
                HDFS: Number of read operations=23
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=3
        Job Counters 
                Launched map tasks=5
                Launched reduce tasks=1
                Data-local map tasks=5
                Total time spent by all maps in occupied slots (ms)=734480
                Total time spent by all reduces in occupied slots (ms)=36769
                Total time spent by all map tasks (ms)=734480
                Total time spent by all reduce tasks (ms)=36769
                Total vcore-milliseconds taken by all map tasks=734480
                Total vcore-milliseconds taken by all reduce tasks=36769
                Total megabyte-milliseconds taken by all map tasks=752107520
                Total megabyte-milliseconds taken by all reduce tasks=37651456
        Map-Reduce Framework
                Map input records=5
                Map output records=10
                Map output bytes=90
                Map output materialized bytes=140
                Input split bytes=755
                Combine input records=0
                Combine output records=0
                Reduce input groups=2
                Reduce shuffle bytes=140
                Reduce input records=10
                Reduce output records=0
                Spilled Records=20
                Shuffled Maps =5
                Failed Shuffles=0
                Merged Map outputs=5
                GC time elapsed (ms)=19784
                CPU time spent (ms)=14600
                Physical memory (bytes) snapshot=676265984
                Virtual memory (bytes) snapshot=12362330112
                Total committed heap usage (bytes)=626020352
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=590
        File Output Format Counters 
                Bytes Written=97
Job Finished in 204.801 seconds
Estimated value of Pi is 3.68000000000000000000

If the example program runs and produces output like the above, the cluster is working correctly.
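
Note that the estimated value of Pi (3.68 here) is only a rough approximation, since the job sampled just 5 maps × 5 samples = 25 points. Increasing the two arguments gives a much closer estimate, at the cost of a longer run. A hedged example (runtime and accuracy depend on your cluster):

# more maps and more samples per map for a better estimate
hadoop jar hadoop-mapreduce-examples-2.6.4.jar pi 10 1000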
