Logs from running hadoop-mapreduce-examples-2.9.2.jar under different Hadoop 2.9.2 modes

Standalone
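
In standalone (local) mode the job runs in a single JVM against the local filesystem, so no HDFS or YARN daemons are needed. The 8 input splits in the log below match the 8 *.xml files shipped under etc/hadoop; the input directory was presumably prepared with the usual single-node setup steps, roughly:

$ cd /home/admin/ws/hadoop-2.9.2
$ mkdir input
$ cp etc/hadoop/*.xml input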

$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar grep input output 'dfs[a-z.]+'
19/04/04 14:04:20 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
19/04/04 14:04:20 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
19/04/04 14:04:20 INFO input.FileInputFormat: Total input files to process : 8
19/04/04 14:04:20 INFO mapreduce.JobSubmitter: number of splits:8
19/04/04 14:04:20 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local931706435_0001
19/04/04 14:04:20 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
19/04/04 14:04:20 INFO mapreduce.Job: Running job: job_local931706435_0001
19/04/04 14:04:20 INFO mapred.LocalJobRunner: OutputCommitter set in config null
19/04/04 14:04:20 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:04:20 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:04:20 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
19/04/04 14:04:20 INFO mapred.LocalJobRunner: Waiting for map tasks
19/04/04 14:04:20 INFO mapred.LocalJobRunner: Starting task: attempt_local931706435_0001_m_000000_0
19/04/04 14:04:20 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:04:20 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:04:20 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:04:20 INFO mapred.MapTask: Processing split: file:/home/admin/ws/hadoop-2.9.2/input/hadoop-policy.xml:0+10206
19/04/04 14:04:20 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:04:20 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:04:20 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:04:20 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:04:20 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:04:20 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:04:20 INFO mapred.LocalJobRunner: 
19/04/04 14:04:20 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:04:20 INFO mapred.MapTask: Spilling map output
19/04/04 14:04:20 INFO mapred.MapTask: bufstart = 0; bufend = 17; bufvoid = 104857600
19/04/04 14:04:20 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214396(104857584); length = 1/6553600
19/04/04 14:04:20 INFO mapred.MapTask: Finished spill 0
19/04/04 14:04:20 INFO mapred.Task: Task:attempt_local931706435_0001_m_000000_0 is done. And is in the process of committing
19/04/04 14:04:20 INFO mapred.LocalJobRunner: map
19/04/04 14:04:20 INFO mapred.Task: Task 'attempt_local931706435_0001_m_000000_0' done.
19/04/04 14:04:20 INFO mapred.LocalJobRunner: Finishing task: attempt_local931706435_0001_m_000000_0
19/04/04 14:04:20 INFO mapred.LocalJobRunner: Starting task: attempt_local931706435_0001_m_000001_0
19/04/04 14:04:20 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:04:20 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:04:20 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:04:20 INFO mapred.MapTask: Processing split: file:/home/admin/ws/hadoop-2.9.2/input/capacity-scheduler.xml:0+7861
19/04/04 14:04:20 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:04:20 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:04:20 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:04:20 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:04:20 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:04:20 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:04:20 INFO mapred.LocalJobRunner: 
19/04/04 14:04:20 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:04:20 INFO mapred.Task: Task:attempt_local931706435_0001_m_000001_0 is done. And is in the process of committing
19/04/04 14:04:20 INFO mapred.LocalJobRunner: map
19/04/04 14:04:20 INFO mapred.Task: Task 'attempt_local931706435_0001_m_000001_0' done.
19/04/04 14:04:20 INFO mapred.LocalJobRunner: Finishing task: attempt_local931706435_0001_m_000001_0
19/04/04 14:04:20 INFO mapred.LocalJobRunner: Starting task: attempt_local931706435_0001_m_000002_0
19/04/04 14:04:20 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:04:20 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:04:20 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:04:20 INFO mapred.MapTask: Processing split: file:/home/admin/ws/hadoop-2.9.2/input/kms-site.xml:0+5939
19/04/04 14:04:20 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:04:20 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:04:20 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:04:20 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:04:20 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:04:20 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:04:20 INFO mapred.LocalJobRunner: 
19/04/04 14:04:20 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:04:20 INFO mapred.Task: Task:attempt_local931706435_0001_m_000002_0 is done. And is in the process of committing
19/04/04 14:04:20 INFO mapred.LocalJobRunner: map
19/04/04 14:04:20 INFO mapred.Task: Task 'attempt_local931706435_0001_m_000002_0' done.
19/04/04 14:04:20 INFO mapred.LocalJobRunner: Finishing task: attempt_local931706435_0001_m_000002_0
19/04/04 14:04:20 INFO mapred.LocalJobRunner: Starting task: attempt_local931706435_0001_m_000003_0
19/04/04 14:04:20 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:04:20 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:04:20 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:04:20 INFO mapred.MapTask: Processing split: file:/home/admin/ws/hadoop-2.9.2/input/kms-acls.xml:0+3518
19/04/04 14:04:20 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:04:20 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:04:20 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:04:20 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:04:20 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:04:20 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:04:20 INFO mapred.LocalJobRunner: 
19/04/04 14:04:20 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:04:20 INFO mapred.Task: Task:attempt_local931706435_0001_m_000003_0 is done. And is in the process of committing
19/04/04 14:04:20 INFO mapred.LocalJobRunner: map
19/04/04 14:04:20 INFO mapred.Task: Task 'attempt_local931706435_0001_m_000003_0' done.
19/04/04 14:04:20 INFO mapred.LocalJobRunner: Finishing task: attempt_local931706435_0001_m_000003_0
19/04/04 14:04:20 INFO mapred.LocalJobRunner: Starting task: attempt_local931706435_0001_m_000004_0
19/04/04 14:04:20 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:04:20 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:04:20 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:04:20 INFO mapred.MapTask: Processing split: file:/home/admin/ws/hadoop-2.9.2/input/hdfs-site.xml:0+775
19/04/04 14:04:20 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:04:20 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:04:20 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:04:20 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:04:20 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:04:20 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:04:20 INFO mapred.LocalJobRunner: 
19/04/04 14:04:20 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:04:20 INFO mapred.Task: Task:attempt_local931706435_0001_m_000004_0 is done. And is in the process of committing
19/04/04 14:04:20 INFO mapred.LocalJobRunner: map
19/04/04 14:04:20 INFO mapred.Task: Task 'attempt_local931706435_0001_m_000004_0' done.
19/04/04 14:04:20 INFO mapred.LocalJobRunner: Finishing task: attempt_local931706435_0001_m_000004_0
19/04/04 14:04:20 INFO mapred.LocalJobRunner: Starting task: attempt_local931706435_0001_m_000005_0
19/04/04 14:04:20 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:04:20 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:04:20 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:04:20 INFO mapred.MapTask: Processing split: file:/home/admin/ws/hadoop-2.9.2/input/core-site.xml:0+774
19/04/04 14:04:20 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:04:20 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:04:20 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:04:20 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:04:20 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:04:20 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:04:20 INFO mapred.LocalJobRunner: 
19/04/04 14:04:20 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:04:20 INFO mapred.Task: Task:attempt_local931706435_0001_m_000005_0 is done. And is in the process of committing
19/04/04 14:04:20 INFO mapred.LocalJobRunner: map
19/04/04 14:04:20 INFO mapred.Task: Task 'attempt_local931706435_0001_m_000005_0' done.
19/04/04 14:04:20 INFO mapred.LocalJobRunner: Finishing task: attempt_local931706435_0001_m_000005_0
19/04/04 14:04:20 INFO mapred.LocalJobRunner: Starting task: attempt_local931706435_0001_m_000006_0
19/04/04 14:04:20 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:04:20 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:04:20 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:04:20 INFO mapred.MapTask: Processing split: file:/home/admin/ws/hadoop-2.9.2/input/yarn-site.xml:0+690
19/04/04 14:04:20 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:04:20 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:04:20 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:04:20 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:04:20 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:04:20 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:04:20 INFO mapred.LocalJobRunner: 
19/04/04 14:04:20 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:04:20 INFO mapred.Task: Task:attempt_local931706435_0001_m_000006_0 is done. And is in the process of committing
19/04/04 14:04:20 INFO mapred.LocalJobRunner: map
19/04/04 14:04:20 INFO mapred.Task: Task 'attempt_local931706435_0001_m_000006_0' done.
19/04/04 14:04:20 INFO mapred.LocalJobRunner: Finishing task: attempt_local931706435_0001_m_000006_0
19/04/04 14:04:20 INFO mapred.LocalJobRunner: Starting task: attempt_local931706435_0001_m_000007_0
19/04/04 14:04:20 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:04:20 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:04:20 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:04:20 INFO mapred.MapTask: Processing split: file:/home/admin/ws/hadoop-2.9.2/input/httpfs-site.xml:0+620
19/04/04 14:04:20 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:04:20 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:04:20 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:04:20 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:04:20 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:04:20 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:04:20 INFO mapred.LocalJobRunner: 
19/04/04 14:04:20 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:04:20 INFO mapred.Task: Task:attempt_local931706435_0001_m_000007_0 is done. And is in the process of committing
19/04/04 14:04:20 INFO mapred.LocalJobRunner: map
19/04/04 14:04:20 INFO mapred.Task: Task 'attempt_local931706435_0001_m_000007_0' done.
19/04/04 14:04:20 INFO mapred.LocalJobRunner: Finishing task: attempt_local931706435_0001_m_000007_0
19/04/04 14:04:20 INFO mapred.LocalJobRunner: map task executor complete.
19/04/04 14:04:20 INFO mapred.LocalJobRunner: Waiting for reduce tasks
19/04/04 14:04:20 INFO mapred.LocalJobRunner: Starting task: attempt_local931706435_0001_r_000000_0
19/04/04 14:04:20 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:04:20 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:04:20 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:04:20 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@7b4297cf
19/04/04 14:04:20 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=364432576, maxSingleShuffleLimit=91108144, mergeThreshold=240525504, ioSortFactor=10, memToMemMergeOutputsThreshold=10
19/04/04 14:04:20 INFO reduce.EventFetcher: attempt_local931706435_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
19/04/04 14:04:20 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local931706435_0001_m_000000_0 decomp: 21 len: 25 to MEMORY
19/04/04 14:04:20 INFO reduce.InMemoryMapOutput: Read 21 bytes from map-output for attempt_local931706435_0001_m_000000_0
19/04/04 14:04:20 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 21, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->21
19/04/04 14:04:20 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local931706435_0001_m_000003_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:04:20 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local931706435_0001_m_000003_0
19/04/04 14:04:20 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 2, commitMemory -> 21, usedMemory ->23
19/04/04 14:04:20 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local931706435_0001_m_000006_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:04:20 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local931706435_0001_m_000006_0
19/04/04 14:04:20 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 3, commitMemory -> 23, usedMemory ->25
19/04/04 14:04:20 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local931706435_0001_m_000002_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:04:20 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local931706435_0001_m_000002_0
19/04/04 14:04:20 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 4, commitMemory -> 25, usedMemory ->27
19/04/04 14:04:20 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local931706435_0001_m_000005_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:04:20 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local931706435_0001_m_000005_0
19/04/04 14:04:20 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 5, commitMemory -> 27, usedMemory ->29
19/04/04 14:04:20 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local931706435_0001_m_000001_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:04:20 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local931706435_0001_m_000001_0
19/04/04 14:04:20 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 6, commitMemory -> 29, usedMemory ->31
19/04/04 14:04:20 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local931706435_0001_m_000004_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:04:20 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local931706435_0001_m_000004_0
19/04/04 14:04:20 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 7, commitMemory -> 31, usedMemory ->33
19/04/04 14:04:20 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local931706435_0001_m_000007_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:04:20 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local931706435_0001_m_000007_0
19/04/04 14:04:20 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 8, commitMemory -> 33, usedMemory ->35
19/04/04 14:04:20 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
19/04/04 14:04:20 INFO mapred.LocalJobRunner: 8 / 8 copied.
19/04/04 14:04:20 INFO reduce.MergeManagerImpl: finalMerge called with 8 in-memory map-outputs and 0 on-disk map-outputs
19/04/04 14:04:20 INFO mapred.Merger: Merging 8 sorted segments
19/04/04 14:04:20 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 10 bytes
19/04/04 14:04:21 INFO reduce.MergeManagerImpl: Merged 8 segments, 35 bytes to disk to satisfy reduce memory limit
19/04/04 14:04:21 INFO reduce.MergeManagerImpl: Merging 1 files, 25 bytes from disk
19/04/04 14:04:21 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
19/04/04 14:04:21 INFO mapred.Merger: Merging 1 sorted segments
19/04/04 14:04:21 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 10 bytes
19/04/04 14:04:21 INFO mapred.LocalJobRunner: 8 / 8 copied.
19/04/04 14:04:21 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
19/04/04 14:04:21 INFO mapred.Task: Task:attempt_local931706435_0001_r_000000_0 is done. And is in the process of committing
19/04/04 14:04:21 INFO mapred.LocalJobRunner: 8 / 8 copied.
19/04/04 14:04:21 INFO mapred.Task: Task attempt_local931706435_0001_r_000000_0 is allowed to commit now
19/04/04 14:04:21 INFO output.FileOutputCommitter: Saved output of task 'attempt_local931706435_0001_r_000000_0' to file:/home/admin/ws/hadoop-2.9.2/grep-temp-2108165286/_temporary/0/task_local931706435_0001_r_000000
19/04/04 14:04:21 INFO mapred.LocalJobRunner: reduce > reduce
19/04/04 14:04:21 INFO mapred.Task: Task 'attempt_local931706435_0001_r_000000_0' done.
19/04/04 14:04:21 INFO mapred.LocalJobRunner: Finishing task: attempt_local931706435_0001_r_000000_0
19/04/04 14:04:21 INFO mapred.LocalJobRunner: reduce task executor complete.
19/04/04 14:04:21 INFO mapreduce.Job: Job job_local931706435_0001 running in uber mode : false
19/04/04 14:04:21 INFO mapreduce.Job:  map 100% reduce 100%
19/04/04 14:04:21 INFO mapreduce.Job: Job job_local931706435_0001 completed successfully
19/04/04 14:04:21 INFO mapreduce.Job: Counters: 30
        File System Counters
                FILE: Number of bytes read=2997505
                FILE: Number of bytes written=6940406
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
        Map-Reduce Framework
                Map input records=840
                Map output records=1
                Map output bytes=17
                Map output materialized bytes=67
                Input split bytes=949
                Combine input records=1
                Combine output records=1
                Reduce input groups=1
                Reduce shuffle bytes=67
                Reduce input records=1
                Reduce output records=1
                Spilled Records=2
                Shuffled Maps =8
                Failed Shuffles=0
                Merged Map outputs=8
                GC time elapsed (ms)=26
                Total committed heap usage (bytes)=4343201792
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=30383
        File Output Format Counters 
                Bytes Written=123
19/04/04 14:04:21 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
19/04/04 14:04:21 INFO input.FileInputFormat: Total input files to process : 1
19/04/04 14:04:21 INFO mapreduce.JobSubmitter: number of splits:1
19/04/04 14:04:21 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1791972909_0002
19/04/04 14:04:21 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
19/04/04 14:04:21 INFO mapred.LocalJobRunner: OutputCommitter set in config null
19/04/04 14:04:21 INFO mapreduce.Job: Running job: job_local1791972909_0002
19/04/04 14:04:21 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:04:21 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:04:21 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
19/04/04 14:04:21 INFO mapred.LocalJobRunner: Waiting for map tasks
19/04/04 14:04:21 INFO mapred.LocalJobRunner: Starting task: attempt_local1791972909_0002_m_000000_0
19/04/04 14:04:21 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:04:21 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:04:21 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:04:21 INFO mapred.MapTask: Processing split: file:/home/admin/ws/hadoop-2.9.2/grep-temp-2108165286/part-r-00000:0+111
19/04/04 14:04:21 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:04:21 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:04:21 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:04:21 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:04:21 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:04:21 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:04:21 INFO mapred.LocalJobRunner: 
19/04/04 14:04:21 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:04:21 INFO mapred.MapTask: Spilling map output
19/04/04 14:04:21 INFO mapred.MapTask: bufstart = 0; bufend = 17; bufvoid = 104857600
19/04/04 14:04:21 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214396(104857584); length = 1/6553600
19/04/04 14:04:21 INFO mapred.MapTask: Finished spill 0
19/04/04 14:04:21 INFO mapred.Task: Task:attempt_local1791972909_0002_m_000000_0 is done. And is in the process of committing
19/04/04 14:04:21 INFO mapred.LocalJobRunner: map
19/04/04 14:04:21 INFO mapred.Task: Task 'attempt_local1791972909_0002_m_000000_0' done.
19/04/04 14:04:21 INFO mapred.LocalJobRunner: Finishing task: attempt_local1791972909_0002_m_000000_0
19/04/04 14:04:21 INFO mapred.LocalJobRunner: map task executor complete.
19/04/04 14:04:21 INFO mapred.LocalJobRunner: Waiting for reduce tasks
19/04/04 14:04:21 INFO mapred.LocalJobRunner: Starting task: attempt_local1791972909_0002_r_000000_0
19/04/04 14:04:21 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:04:21 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:04:21 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:04:21 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@4bc220e2
19/04/04 14:04:21 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=364065568, maxSingleShuffleLimit=91016392, mergeThreshold=240283280, ioSortFactor=10, memToMemMergeOutputsThreshold=10
19/04/04 14:04:21 INFO reduce.EventFetcher: attempt_local1791972909_0002_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
19/04/04 14:04:21 INFO reduce.LocalFetcher: localfetcher#2 about to shuffle output of map attempt_local1791972909_0002_m_000000_0 decomp: 21 len: 25 to MEMORY
19/04/04 14:04:21 INFO reduce.InMemoryMapOutput: Read 21 bytes from map-output for attempt_local1791972909_0002_m_000000_0
19/04/04 14:04:21 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 21, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->21
19/04/04 14:04:21 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
19/04/04 14:04:21 INFO mapred.LocalJobRunner: 1 / 1 copied.
19/04/04 14:04:21 INFO reduce.MergeManagerImpl: finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
19/04/04 14:04:21 INFO mapred.Merger: Merging 1 sorted segments
19/04/04 14:04:21 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 11 bytes
19/04/04 14:04:21 INFO reduce.MergeManagerImpl: Merged 1 segments, 21 bytes to disk to satisfy reduce memory limit
19/04/04 14:04:21 INFO reduce.MergeManagerImpl: Merging 1 files, 25 bytes from disk
19/04/04 14:04:21 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
19/04/04 14:04:21 INFO mapred.Merger: Merging 1 sorted segments
19/04/04 14:04:21 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 11 bytes
19/04/04 14:04:21 INFO mapred.LocalJobRunner: 1 / 1 copied.
19/04/04 14:04:21 INFO mapred.Task: Task:attempt_local1791972909_0002_r_000000_0 is done. And is in the process of committing
19/04/04 14:04:21 INFO mapred.LocalJobRunner: 1 / 1 copied.
19/04/04 14:04:21 INFO mapred.Task: Task attempt_local1791972909_0002_r_000000_0 is allowed to commit now
19/04/04 14:04:21 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1791972909_0002_r_000000_0' to file:/home/admin/ws/hadoop-2.9.2/output/_temporary/0/task_local1791972909_0002_r_000000
19/04/04 14:04:21 INFO mapred.LocalJobRunner: reduce > reduce
19/04/04 14:04:21 INFO mapred.Task: Task 'attempt_local1791972909_0002_r_000000_0' done.
19/04/04 14:04:21 INFO mapred.LocalJobRunner: Finishing task: attempt_local1791972909_0002_r_000000_0
19/04/04 14:04:21 INFO mapred.LocalJobRunner: reduce task executor complete.
19/04/04 14:04:22 INFO mapreduce.Job: Job job_local1791972909_0002 running in uber mode : false
19/04/04 14:04:22 INFO mapreduce.Job:  map 100% reduce 100%
19/04/04 14:04:22 INFO mapreduce.Job: Job job_local1791972909_0002 completed successfully
19/04/04 14:04:22 INFO mapreduce.Job: Counters: 30
        File System Counters
                FILE: Number of bytes read=1288240
                FILE: Number of bytes written=3084794
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
        Map-Reduce Framework
                Map input records=1
                Map output records=1
                Map output bytes=17
                Map output materialized bytes=25
                Input split bytes=131
                Combine input records=0
                Combine output records=0
                Reduce input groups=1
                Reduce shuffle bytes=25
                Reduce input records=1
                Reduce output records=1
                Spilled Records=2
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=0
                Total committed heap usage (bytes)=1040187392
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=123
        File Output Format Counters 
                Bytes Written=23
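
The grep example chains two jobs: the first (job_local931706435_0001) counts matches of the regex and writes them to the temporary grep-temp-* directory, and the second (job_local1791972909_0002) sorts those counts into the final output directory. Once both jobs finish, the result can be read straight off the local filesystem, for example:

$ cat output/*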


Pseudo-Distributed - run a MapReduce job locally
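
In pseudo-distributed mode HDFS is running and the input is read from hdfs://localhost:9000, but with the default mapreduce.framework.name the job itself is still executed by the LocalJobRunner rather than on YARN, as the log shows. The 29 input files suggest the whole etc/hadoop directory was uploaded to HDFS beforehand, presumably along the lines of:

$ bin/hdfs dfs -mkdir -p /user/admin
$ bin/hdfs dfs -put etc/hadoop input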

$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar grep input output 'dfs[a-z.]+'
19/04/04 14:00:27 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
19/04/04 14:00:27 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
19/04/04 14:00:28 INFO input.FileInputFormat: Total input files to process : 29
19/04/04 14:00:28 INFO mapreduce.JobSubmitter: number of splits:29
19/04/04 14:00:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1620358206_0001
19/04/04 14:00:28 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
19/04/04 14:00:28 INFO mapreduce.Job: Running job: job_local1620358206_0001
19/04/04 14:00:28 INFO mapred.LocalJobRunner: OutputCommitter set in config null
19/04/04 14:00:28 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:28 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:28 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
19/04/04 14:00:28 INFO mapred.LocalJobRunner: Waiting for map tasks
19/04/04 14:00:28 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000000_0
19/04/04 14:00:28 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:28 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:28 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:28 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/log4j.properties:0+14016
19/04/04 14:00:28 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:28 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:28 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:28 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:28 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:28 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:28 INFO mapred.LocalJobRunner: 
19/04/04 14:00:28 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:28 INFO mapred.MapTask: Spilling map output
19/04/04 14:00:28 INFO mapred.MapTask: bufstart = 0; bufend = 352; bufvoid = 104857600
19/04/04 14:00:28 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214348(104857392); length = 49/6553600
19/04/04 14:00:28 INFO mapred.MapTask: Finished spill 0
19/04/04 14:00:28 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000000_0 is done. And is in the process of committing
19/04/04 14:00:28 INFO mapred.LocalJobRunner: map
19/04/04 14:00:28 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000000_0' done.
19/04/04 14:00:28 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000000_0
19/04/04 14:00:28 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000001_0
19/04/04 14:00:28 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:28 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:28 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:28 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/hadoop-policy.xml:0+10206
19/04/04 14:00:28 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:28 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:28 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:28 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:28 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:28 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:28 INFO mapred.LocalJobRunner: 
19/04/04 14:00:28 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:28 INFO mapred.MapTask: Spilling map output
19/04/04 14:00:28 INFO mapred.MapTask: bufstart = 0; bufend = 17; bufvoid = 104857600
19/04/04 14:00:28 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214396(104857584); length = 1/6553600
19/04/04 14:00:28 INFO mapred.MapTask: Finished spill 0
19/04/04 14:00:28 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000001_0 is done. And is in the process of committing
19/04/04 14:00:28 INFO mapred.LocalJobRunner: map
19/04/04 14:00:28 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000001_0' done.
19/04/04 14:00:28 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000001_0
19/04/04 14:00:28 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000002_0
19/04/04 14:00:28 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:28 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:28 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:28 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/capacity-scheduler.xml:0+7861
19/04/04 14:00:28 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:28 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:28 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:28 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:28 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:28 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:28 INFO mapred.LocalJobRunner: 
19/04/04 14:00:28 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:28 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000002_0 is done. And is in the process of committing
19/04/04 14:00:28 INFO mapred.LocalJobRunner: map
19/04/04 14:00:28 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000002_0' done.
19/04/04 14:00:28 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000002_0
19/04/04 14:00:28 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000003_0
19/04/04 14:00:28 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:28 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:28 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:28 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/kms-site.xml:0+5939
19/04/04 14:00:28 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:28 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:28 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:28 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:28 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:28 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:28 INFO mapred.LocalJobRunner: 
19/04/04 14:00:28 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:28 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000003_0 is done. And is in the process of committing
19/04/04 14:00:28 INFO mapred.LocalJobRunner: map
19/04/04 14:00:28 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000003_0' done.
19/04/04 14:00:28 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000003_0
19/04/04 14:00:28 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000004_0
19/04/04 14:00:28 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:28 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:28 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:28 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/hadoop-env.sh:0+5005
19/04/04 14:00:28 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:28 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:28 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:28 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:28 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:28 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:28 INFO mapred.LocalJobRunner: 
19/04/04 14:00:28 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:28 INFO mapred.MapTask: Spilling map output
19/04/04 14:00:28 INFO mapred.MapTask: bufstart = 0; bufend = 50; bufvoid = 104857600
19/04/04 14:00:28 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600
19/04/04 14:00:28 INFO mapred.MapTask: Finished spill 0
19/04/04 14:00:28 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000004_0 is done. And is in the process of committing
19/04/04 14:00:28 INFO mapred.LocalJobRunner: map
19/04/04 14:00:28 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000004_0' done.
19/04/04 14:00:28 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000004_0
19/04/04 14:00:28 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000005_0
19/04/04 14:00:28 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:28 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:28 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:28 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/yarn-env.sh:0+4876
19/04/04 14:00:28 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:28 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:28 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:28 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:28 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:28 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:28 INFO mapred.LocalJobRunner: 
19/04/04 14:00:28 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:28 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000005_0 is done. And is in the process of committing
19/04/04 14:00:28 INFO mapred.LocalJobRunner: map
19/04/04 14:00:28 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000005_0' done.
19/04/04 14:00:28 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000005_0
19/04/04 14:00:28 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000006_0
19/04/04 14:00:28 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:28 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:28 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:28 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/hadoop-env.cmd:0+4133
19/04/04 14:00:28 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:28 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:28 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:28 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:28 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:28 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:28 INFO mapred.LocalJobRunner: 
19/04/04 14:00:28 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:28 INFO mapred.MapTask: Spilling map output
19/04/04 14:00:28 INFO mapred.MapTask: bufstart = 0; bufend = 50; bufvoid = 104857600
19/04/04 14:00:28 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600
19/04/04 14:00:28 INFO mapred.MapTask: Finished spill 0
19/04/04 14:00:28 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000006_0 is done. And is in the process of committing
19/04/04 14:00:28 INFO mapred.LocalJobRunner: map
19/04/04 14:00:28 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000006_0' done.
19/04/04 14:00:28 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000006_0
19/04/04 14:00:28 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000007_0
19/04/04 14:00:28 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:28 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:28 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:28 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/mapred-queues.xml.template:0+4113
19/04/04 14:00:28 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:28 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:28 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:28 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:28 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:28 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:28 INFO mapred.LocalJobRunner: 
19/04/04 14:00:28 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:28 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000007_0 is done. And is in the process of committing
19/04/04 14:00:28 INFO mapred.LocalJobRunner: map
19/04/04 14:00:28 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000007_0' done.
19/04/04 14:00:28 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000007_0
19/04/04 14:00:28 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000008_0
19/04/04 14:00:28 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:28 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:28 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:28 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/kms-acls.xml:0+3518
19/04/04 14:00:29 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:29 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:29 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:29 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:29 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:29 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:29 INFO mapred.LocalJobRunner: 
19/04/04 14:00:29 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:29 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000008_0 is done. And is in the process of committing
19/04/04 14:00:29 INFO mapred.LocalJobRunner: map
19/04/04 14:00:29 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000008_0' done.
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000008_0
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000009_0
19/04/04 14:00:29 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:29 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:29 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:29 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/kms-env.sh:0+3139
19/04/04 14:00:29 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:29 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:29 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:29 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:29 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:29 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:29 INFO mapred.LocalJobRunner: 
19/04/04 14:00:29 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:29 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000009_0 is done. And is in the process of committing
19/04/04 14:00:29 INFO mapred.LocalJobRunner: map
19/04/04 14:00:29 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000009_0' done.
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000009_0
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000010_0
19/04/04 14:00:29 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:29 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:29 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:29 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/ssl-server.xml.example:0+2697
19/04/04 14:00:29 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:29 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:29 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:29 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:29 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:29 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:29 INFO mapred.LocalJobRunner: 
19/04/04 14:00:29 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:29 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000010_0 is done. And is in the process of committing
19/04/04 14:00:29 INFO mapred.LocalJobRunner: map
19/04/04 14:00:29 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000010_0' done.
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000010_0
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000011_0
19/04/04 14:00:29 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:29 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:29 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:29 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/hadoop-metrics2.properties:0+2598
19/04/04 14:00:29 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:29 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:29 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:29 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:29 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:29 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:29 INFO mapred.LocalJobRunner: 
19/04/04 14:00:29 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:29 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000011_0 is done. And is in the process of committing
19/04/04 14:00:29 INFO mapred.LocalJobRunner: map
19/04/04 14:00:29 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000011_0' done.
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000011_0
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000012_0
19/04/04 14:00:29 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:29 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:29 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:29 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/hadoop-metrics.properties:0+2490
19/04/04 14:00:29 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:29 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:29 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:29 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:29 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:29 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:29 INFO mapred.LocalJobRunner: 
19/04/04 14:00:29 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:29 INFO mapred.MapTask: Spilling map output
19/04/04 14:00:29 INFO mapred.MapTask: bufstart = 0; bufend = 170; bufvoid = 104857600
19/04/04 14:00:29 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214364(104857456); length = 33/6553600
19/04/04 14:00:29 INFO mapred.MapTask: Finished spill 0
19/04/04 14:00:29 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000012_0 is done. And is in the process of committing
19/04/04 14:00:29 INFO mapred.LocalJobRunner: map
19/04/04 14:00:29 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000012_0' done.
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000012_0
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000013_0
19/04/04 14:00:29 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:29 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:29 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:29 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/ssl-client.xml.example:0+2316
19/04/04 14:00:29 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:29 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:29 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:29 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:29 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:29 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:29 INFO mapred.LocalJobRunner: 
19/04/04 14:00:29 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:29 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000013_0 is done. And is in the process of committing
19/04/04 14:00:29 INFO mapred.LocalJobRunner: map
19/04/04 14:00:29 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000013_0' done.
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000013_0
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000014_0
19/04/04 14:00:29 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:29 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:29 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:29 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/yarn-env.cmd:0+2250
19/04/04 14:00:29 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:29 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:29 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:29 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:29 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:29 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:29 INFO mapred.LocalJobRunner: 
19/04/04 14:00:29 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:29 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000014_0 is done. And is in the process of committing
19/04/04 14:00:29 INFO mapred.LocalJobRunner: map
19/04/04 14:00:29 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000014_0' done.
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000014_0
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000015_0
19/04/04 14:00:29 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:29 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:29 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:29 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/httpfs-env.sh:0+2230
19/04/04 14:00:29 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:29 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:29 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:29 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:29 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:29 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:29 INFO mapred.LocalJobRunner: 
19/04/04 14:00:29 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:29 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000015_0 is done. And is in the process of committing
19/04/04 14:00:29 INFO mapred.LocalJobRunner: map
19/04/04 14:00:29 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000015_0' done.
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000015_0
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000016_0
19/04/04 14:00:29 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:29 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:29 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:29 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/kms-log4j.properties:0+1788
19/04/04 14:00:29 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:29 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:29 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:29 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:29 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:29 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:29 INFO mapred.LocalJobRunner: 
19/04/04 14:00:29 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:29 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000016_0 is done. And is in the process of committing
19/04/04 14:00:29 INFO mapred.LocalJobRunner: map
19/04/04 14:00:29 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000016_0' done.
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000016_0
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000017_0
19/04/04 14:00:29 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:29 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:29 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:29 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/httpfs-log4j.properties:0+1657
19/04/04 14:00:29 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:29 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:29 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:29 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:29 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:29 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:29 INFO mapred.LocalJobRunner: 
19/04/04 14:00:29 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:29 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000017_0 is done. And is in the process of committing
19/04/04 14:00:29 INFO mapred.LocalJobRunner: map
19/04/04 14:00:29 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000017_0' done.
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000017_0
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000018_0
19/04/04 14:00:29 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:29 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:29 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:29 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/mapred-env.sh:0+1507
19/04/04 14:00:29 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:29 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:29 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:29 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:29 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:29 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:29 INFO mapred.LocalJobRunner: 
19/04/04 14:00:29 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:29 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000018_0 is done. And is in the process of committing
19/04/04 14:00:29 INFO mapred.LocalJobRunner: map
19/04/04 14:00:29 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000018_0' done.
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000018_0
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000019_0
19/04/04 14:00:29 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:29 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:29 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:29 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/configuration.xsl:0+1335
19/04/04 14:00:29 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:29 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:29 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:29 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:29 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:29 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:29 INFO mapred.LocalJobRunner: 
19/04/04 14:00:29 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:29 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000019_0 is done. And is in the process of committing
19/04/04 14:00:29 INFO mapred.LocalJobRunner: map
19/04/04 14:00:29 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000019_0' done.
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000019_0
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000020_0
19/04/04 14:00:29 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:29 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:29 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:29 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/container-executor.cfg:0+1211
19/04/04 14:00:29 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:29 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:29 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:29 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:29 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:29 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:29 INFO mapred.LocalJobRunner: 
19/04/04 14:00:29 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:29 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000020_0 is done. And is in the process of committing
19/04/04 14:00:29 INFO mapred.LocalJobRunner: map
19/04/04 14:00:29 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000020_0' done.
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000020_0
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000021_0
19/04/04 14:00:29 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:29 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:29 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:29 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/mapred-env.cmd:0+1076
19/04/04 14:00:29 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:29 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:29 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:29 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:29 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:29 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:29 INFO mapred.LocalJobRunner: 
19/04/04 14:00:29 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:29 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000021_0 is done. And is in the process of committing
19/04/04 14:00:29 INFO mapred.LocalJobRunner: map
19/04/04 14:00:29 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000021_0' done.
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000021_0
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000022_0
19/04/04 14:00:29 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:29 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:29 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:29 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/core-site.xml:0+884
19/04/04 14:00:29 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:29 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:29 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:29 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:29 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:29 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:29 INFO mapred.LocalJobRunner: 
19/04/04 14:00:29 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:29 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000022_0 is done. And is in the process of committing
19/04/04 14:00:29 INFO mapred.LocalJobRunner: map
19/04/04 14:00:29 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000022_0' done.
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000022_0
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000023_0
19/04/04 14:00:29 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:29 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:29 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:29 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/hdfs-site.xml:0+867
19/04/04 14:00:29 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:29 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:29 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:29 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:29 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:29 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:29 INFO mapred.LocalJobRunner: 
19/04/04 14:00:29 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:29 INFO mapred.MapTask: Spilling map output
19/04/04 14:00:29 INFO mapred.MapTask: bufstart = 0; bufend = 24; bufvoid = 104857600
19/04/04 14:00:29 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214396(104857584); length = 1/6553600
19/04/04 14:00:29 INFO mapred.MapTask: Finished spill 0
19/04/04 14:00:29 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000023_0 is done. And is in the process of committing
19/04/04 14:00:29 INFO mapred.LocalJobRunner: map
19/04/04 14:00:29 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000023_0' done.
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000023_0
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000024_0
19/04/04 14:00:29 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:29 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:29 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:29 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/mapred-site.xml.template:0+758
19/04/04 14:00:29 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:29 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:29 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:29 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:29 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:29 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:29 INFO mapred.LocalJobRunner: 
19/04/04 14:00:29 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:29 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000024_0 is done. And is in the process of committing
19/04/04 14:00:29 INFO mapred.LocalJobRunner: map
19/04/04 14:00:29 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000024_0' done.
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000024_0
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000025_0
19/04/04 14:00:29 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:29 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:29 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:29 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/yarn-site.xml:0+690
19/04/04 14:00:29 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:29 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:29 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:29 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:29 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:29 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:29 INFO mapred.LocalJobRunner: 
19/04/04 14:00:29 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:29 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000025_0 is done. And is in the process of committing
19/04/04 14:00:29 INFO mapred.LocalJobRunner: map
19/04/04 14:00:29 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000025_0' done.
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000025_0
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000026_0
19/04/04 14:00:29 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:29 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:29 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:29 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/httpfs-site.xml:0+620
19/04/04 14:00:29 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:29 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:29 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:29 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:29 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:29 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:29 INFO mapred.LocalJobRunner: 
19/04/04 14:00:29 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:29 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000026_0 is done. And is in the process of committing
19/04/04 14:00:29 INFO mapred.LocalJobRunner: map
19/04/04 14:00:29 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000026_0' done.
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000026_0
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000027_0
19/04/04 14:00:29 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:29 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:29 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:29 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/httpfs-signature.secret:0+21
19/04/04 14:00:29 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:29 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:29 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:29 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:29 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:29 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:29 INFO mapred.LocalJobRunner: 
19/04/04 14:00:29 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:29 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000027_0 is done. And is in the process of committing
19/04/04 14:00:29 INFO mapred.LocalJobRunner: map
19/04/04 14:00:29 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000027_0' done.
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000027_0
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_m_000028_0
19/04/04 14:00:29 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:29 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:29 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:29 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/input/slaves:0+10
19/04/04 14:00:29 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:29 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:29 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:29 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:29 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:29 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:29 INFO mapred.LocalJobRunner: 
19/04/04 14:00:29 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:29 INFO mapred.Task: Task:attempt_local1620358206_0001_m_000028_0 is done. And is in the process of committing
19/04/04 14:00:29 INFO mapred.LocalJobRunner: map
19/04/04 14:00:29 INFO mapred.Task: Task 'attempt_local1620358206_0001_m_000028_0' done.
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_m_000028_0
19/04/04 14:00:29 INFO mapred.LocalJobRunner: map task executor complete.
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Waiting for reduce tasks
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Starting task: attempt_local1620358206_0001_r_000000_0
19/04/04 14:00:29 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:29 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:29 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:29 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@acaab7e
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=369570592, maxSingleShuffleLimit=92392648, mergeThreshold=243916608, ioSortFactor=10, memToMemMergeOutputsThreshold=10
19/04/04 14:00:29 INFO mapreduce.Job: Job job_local1620358206_0001 running in uber mode : false
19/04/04 14:00:29 INFO reduce.EventFetcher: attempt_local1620358206_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
19/04/04 14:00:29 INFO mapreduce.Job:  map 100% reduce 0%
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000028_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1620358206_0001_m_000028_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->2
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000003_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1620358206_0001_m_000003_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 2, commitMemory -> 2, usedMemory ->4
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000002_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:00:29 WARN io.ReadaheadPool: Failed readahead on ifile
EBADF: Bad file descriptor
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
        at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:208)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1620358206_0001_m_000002_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 3, commitMemory -> 4, usedMemory ->6
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000015_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1620358206_0001_m_000015_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 4, commitMemory -> 6, usedMemory ->8
19/04/04 14:00:29 WARN io.ReadaheadPool: Failed readahead on ifile
EBADF: Bad file descriptor
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
        at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:208)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000001_0 decomp: 21 len: 25 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 21 bytes from map-output for attempt_local1620358206_0001_m_000001_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 21, inMemoryMapOutputs.size() -> 5, commitMemory -> 8, usedMemory ->29
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000014_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1620358206_0001_m_000014_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 6, commitMemory -> 29, usedMemory ->31
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000027_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1620358206_0001_m_000027_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 7, commitMemory -> 31, usedMemory ->33
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000000_0 decomp: 174 len: 178 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 174 bytes from map-output for attempt_local1620358206_0001_m_000000_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 174, inMemoryMapOutputs.size() -> 8, commitMemory -> 33, usedMemory ->207
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000013_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1620358206_0001_m_000013_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 9, commitMemory -> 207, usedMemory ->209
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000026_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1620358206_0001_m_000026_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 10, commitMemory -> 209, usedMemory ->211
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000012_0 decomp: 109 len: 113 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 109 bytes from map-output for attempt_local1620358206_0001_m_000012_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 109, inMemoryMapOutputs.size() -> 11, commitMemory -> 211, usedMemory ->320
19/04/04 14:00:29 WARN io.ReadaheadPool: Failed readahead on ifile
EBADF: Bad file descriptor
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
        at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:208)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
19/04/04 14:00:29 WARN io.ReadaheadPool: Failed readahead on ifile
EBADF: Bad file descriptor
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
        at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:208)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
19/04/04 14:00:29 WARN io.ReadaheadPool: Failed readahead on ifile
EBADF: Bad file descriptor
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
        at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:208)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
19/04/04 14:00:29 WARN io.ReadaheadPool: Failed readahead on ifile
EBADF: Bad file descriptor
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
        at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:208)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000025_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1620358206_0001_m_000025_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 12, commitMemory -> 320, usedMemory ->322
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000024_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1620358206_0001_m_000024_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 13, commitMemory -> 322, usedMemory ->324
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000011_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1620358206_0001_m_000011_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 14, commitMemory -> 324, usedMemory ->326
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000010_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1620358206_0001_m_000010_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 15, commitMemory -> 326, usedMemory ->328
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000023_0 decomp: 28 len: 32 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 28 bytes from map-output for attempt_local1620358206_0001_m_000023_0
19/04/04 14:00:29 WARN io.ReadaheadPool: Failed readahead on ifile
EBADF: Bad file descriptor
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
        at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:208)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 28, inMemoryMapOutputs.size() -> 16, commitMemory -> 328, usedMemory ->356
19/04/04 14:00:29 WARN io.ReadaheadPool: Failed readahead on ifile
EBADF: Bad file descriptor
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
        at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:208)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
19/04/04 14:00:29 WARN io.ReadaheadPool: Failed readahead on ifile
EBADF: Bad file descriptor
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
        at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:208)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
19/04/04 14:00:29 WARN io.ReadaheadPool: Failed readahead on ifile
EBADF: Bad file descriptor
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
        at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:208)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
19/04/04 14:00:29 WARN io.ReadaheadPool: Failed readahead on ifile
EBADF: Bad file descriptor
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
        at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:208)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000009_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1620358206_0001_m_000009_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 17, commitMemory -> 356, usedMemory ->358
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000022_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1620358206_0001_m_000022_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 18, commitMemory -> 358, usedMemory ->360
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000008_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1620358206_0001_m_000008_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 19, commitMemory -> 360, usedMemory ->362
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000021_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1620358206_0001_m_000021_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 20, commitMemory -> 362, usedMemory ->364
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000020_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1620358206_0001_m_000020_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 21, commitMemory -> 364, usedMemory ->366
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000007_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1620358206_0001_m_000007_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 22, commitMemory -> 366, usedMemory ->368
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000006_0 decomp: 29 len: 33 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 29 bytes from map-output for attempt_local1620358206_0001_m_000006_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 29, inMemoryMapOutputs.size() -> 23, commitMemory -> 368, usedMemory ->397
19/04/04 14:00:29 WARN io.ReadaheadPool: Failed readahead on ifile
EBADF: Bad file descriptor
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
        at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:208)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000019_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1620358206_0001_m_000019_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 24, commitMemory -> 397, usedMemory ->399
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000005_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1620358206_0001_m_000005_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 25, commitMemory -> 399, usedMemory ->401
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000018_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1620358206_0001_m_000018_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 26, commitMemory -> 401, usedMemory ->403
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000004_0 decomp: 29 len: 33 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 29 bytes from map-output for attempt_local1620358206_0001_m_000004_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 29, inMemoryMapOutputs.size() -> 27, commitMemory -> 403, usedMemory ->432
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000017_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1620358206_0001_m_000017_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 28, commitMemory -> 432, usedMemory ->434
19/04/04 14:00:29 WARN io.ReadaheadPool: Failed readahead on ifile
EBADF: Bad file descriptor
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
        at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:208)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
19/04/04 14:00:29 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1620358206_0001_m_000016_0 decomp: 2 len: 6 to MEMORY
19/04/04 14:00:29 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1620358206_0001_m_000016_0
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 29, commitMemory -> 434, usedMemory ->436
19/04/04 14:00:29 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
19/04/04 14:00:29 WARN io.ReadaheadPool: Failed readahead on ifile
EBADF: Bad file descriptor
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
        at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:208)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
19/04/04 14:00:29 INFO mapred.LocalJobRunner: 29 / 29 copied.
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: finalMerge called with 29 in-memory map-outputs and 0 on-disk map-outputs
19/04/04 14:00:29 INFO mapred.Merger: Merging 29 sorted segments
19/04/04 14:00:29 INFO mapred.Merger: Down to the last merge-pass, with 6 segments left of total size: 280 bytes
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: Merged 29 segments, 436 bytes to disk to satisfy reduce memory limit
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: Merging 1 files, 384 bytes from disk
19/04/04 14:00:29 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
19/04/04 14:00:29 INFO mapred.Merger: Merging 1 sorted segments
19/04/04 14:00:29 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 349 bytes
19/04/04 14:00:29 INFO mapred.LocalJobRunner: 29 / 29 copied.
19/04/04 14:00:29 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
19/04/04 14:00:29 INFO mapred.Task: Task:attempt_local1620358206_0001_r_000000_0 is done. And is in the process of committing
19/04/04 14:00:29 INFO mapred.LocalJobRunner: 29 / 29 copied.
19/04/04 14:00:29 INFO mapred.Task: Task attempt_local1620358206_0001_r_000000_0 is allowed to commit now
19/04/04 14:00:29 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1620358206_0001_r_000000_0' to hdfs://localhost:9000/user/admin/grep-temp-1670846439/_temporary/0/task_local1620358206_0001_r_000000
19/04/04 14:00:29 INFO mapred.LocalJobRunner: reduce > reduce
19/04/04 14:00:29 INFO mapred.Task: Task 'attempt_local1620358206_0001_r_000000_0' done.
19/04/04 14:00:29 INFO mapred.LocalJobRunner: Finishing task: attempt_local1620358206_0001_r_000000_0
19/04/04 14:00:29 INFO mapred.LocalJobRunner: reduce task executor complete.
19/04/04 14:00:30 INFO mapreduce.Job:  map 100% reduce 100%
19/04/04 14:00:30 INFO mapreduce.Job: Job job_local1620358206_0001 completed successfully
19/04/04 14:00:30 INFO mapreduce.Job: Counters: 35
        File System Counters
                FILE: Number of bytes read=10332926
                FILE: Number of bytes written=23327075
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=2067833
                HDFS: Number of bytes written=488
                HDFS: Number of read operations=1021
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=32
        Map-Reduce Framework
                Map input records=2358
                Map output records=28
                Map output bytes=663
                Map output materialized bytes=552
                Input split bytes=3505
                Combine input records=28
                Combine output records=15
                Reduce input groups=13
                Reduce shuffle bytes=552
                Reduce input records=15
                Reduce output records=13
                Spilled Records=30
                Shuffled Maps =29
                Failed Shuffles=0
                Merged Map outputs=29
                GC time elapsed (ms)=91
                Total committed heap usage (bytes)=15231090688
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=89811
        File Output Format Counters 
                Bytes Written=488
19/04/04 14:00:30 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
19/04/04 14:00:30 INFO input.FileInputFormat: Total input files to process : 1
19/04/04 14:00:30 INFO mapreduce.JobSubmitter: number of splits:1
19/04/04 14:00:30 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1284643993_0002
19/04/04 14:00:30 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
19/04/04 14:00:30 INFO mapreduce.Job: Running job: job_local1284643993_0002
19/04/04 14:00:30 INFO mapred.LocalJobRunner: OutputCommitter set in config null
19/04/04 14:00:30 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:30 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:30 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
19/04/04 14:00:30 INFO mapred.LocalJobRunner: Waiting for map tasks
19/04/04 14:00:30 INFO mapred.LocalJobRunner: Starting task: attempt_local1284643993_0002_m_000000_0
19/04/04 14:00:30 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:30 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:30 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:30 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/admin/grep-temp-1670846439/part-r-00000:0+488
19/04/04 14:00:30 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/04/04 14:00:30 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/04/04 14:00:30 INFO mapred.MapTask: soft limit at 83886080
19/04/04 14:00:30 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
19/04/04 14:00:30 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
19/04/04 14:00:30 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
19/04/04 14:00:30 INFO mapred.LocalJobRunner: 
19/04/04 14:00:30 INFO mapred.MapTask: Starting flush of map output
19/04/04 14:00:30 INFO mapred.MapTask: Spilling map output
19/04/04 14:00:30 INFO mapred.MapTask: bufstart = 0; bufend = 298; bufvoid = 104857600
19/04/04 14:00:30 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214348(104857392); length = 49/6553600
19/04/04 14:00:30 INFO mapred.MapTask: Finished spill 0
19/04/04 14:00:30 INFO mapred.Task: Task:attempt_local1284643993_0002_m_000000_0 is done. And is in the process of committing
19/04/04 14:00:30 INFO mapred.LocalJobRunner: map
19/04/04 14:00:30 INFO mapred.Task: Task 'attempt_local1284643993_0002_m_000000_0' done.
19/04/04 14:00:30 INFO mapred.LocalJobRunner: Finishing task: attempt_local1284643993_0002_m_000000_0
19/04/04 14:00:30 INFO mapred.LocalJobRunner: map task executor complete.
19/04/04 14:00:30 INFO mapred.LocalJobRunner: Waiting for reduce tasks
19/04/04 14:00:30 INFO mapred.LocalJobRunner: Starting task: attempt_local1284643993_0002_r_000000_0
19/04/04 14:00:30 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
19/04/04 14:00:30 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
19/04/04 14:00:30 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
19/04/04 14:00:30 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@100ac37
19/04/04 14:00:30 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=370304608, maxSingleShuffleLimit=92576152, mergeThreshold=244401056, ioSortFactor=10, memToMemMergeOutputsThreshold=10
19/04/04 14:00:30 INFO reduce.EventFetcher: attempt_local1284643993_0002_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
19/04/04 14:00:30 INFO reduce.LocalFetcher: localfetcher#2 about to shuffle output of map attempt_local1284643993_0002_m_000000_0 decomp: 326 len: 330 to MEMORY
19/04/04 14:00:30 INFO reduce.InMemoryMapOutput: Read 326 bytes from map-output for attempt_local1284643993_0002_m_000000_0
19/04/04 14:00:30 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 326, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->326
19/04/04 14:00:30 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
19/04/04 14:00:30 INFO mapred.LocalJobRunner: 1 / 1 copied.
19/04/04 14:00:30 INFO reduce.MergeManagerImpl: finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
19/04/04 14:00:30 INFO mapred.Merger: Merging 1 sorted segments
19/04/04 14:00:30 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 316 bytes
19/04/04 14:00:30 INFO reduce.MergeManagerImpl: Merged 1 segments, 326 bytes to disk to satisfy reduce memory limit
19/04/04 14:00:30 INFO reduce.MergeManagerImpl: Merging 1 files, 330 bytes from disk
19/04/04 14:00:30 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
19/04/04 14:00:30 INFO mapred.Merger: Merging 1 sorted segments
19/04/04 14:00:30 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 316 bytes
19/04/04 14:00:30 INFO mapred.LocalJobRunner: 1 / 1 copied.
19/04/04 14:00:31 INFO mapred.Task: Task:attempt_local1284643993_0002_r_000000_0 is done. And is in the process of committing
19/04/04 14:00:31 INFO mapred.LocalJobRunner: 1 / 1 copied.
19/04/04 14:00:31 INFO mapred.Task: Task attempt_local1284643993_0002_r_000000_0 is allowed to commit now
19/04/04 14:00:31 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1284643993_0002_r_000000_0' to hdfs://localhost:9000/user/admin/output/_temporary/0/task_local1284643993_0002_r_000000
19/04/04 14:00:31 INFO mapred.LocalJobRunner: reduce > reduce
19/04/04 14:00:31 INFO mapred.Task: Task 'attempt_local1284643993_0002_r_000000_0' done.
19/04/04 14:00:31 INFO mapred.LocalJobRunner: Finishing task: attempt_local1284643993_0002_r_000000_0
19/04/04 14:00:31 INFO mapred.LocalJobRunner: reduce task executor complete.
19/04/04 14:00:31 INFO mapreduce.Job: Job job_local1284643993_0002 running in uber mode : false
19/04/04 14:00:31 INFO mapreduce.Job:  map 100% reduce 100%
19/04/04 14:00:31 INFO mapreduce.Job: Job job_local1284643993_0002 completed successfully
19/04/04 14:00:31 INFO mapreduce.Job: Counters: 35
        File System Counters
                FILE: Number of bytes read=1340914
                FILE: Number of bytes written=3099944
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=180598
                HDFS: Number of bytes written=1196
                HDFS: Number of read operations=151
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=16
        Map-Reduce Framework
                Map input records=13
                Map output records=13
                Map output bytes=298
                Map output materialized bytes=330
                Input split bytes=131
                Combine input records=0
                Combine output records=0
                Reduce input groups=5
                Reduce shuffle bytes=330
                Reduce input records=13
                Reduce output records=13
                Spilled Records=26
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=0
                Total committed heap usage (bytes)=1058013184
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=488
        File Output Format Counters 
                Bytes Written=220
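
With both local jobs finished, the grep result now sits in HDFS under output/ (see the "Saved output of task ... to hdfs://localhost:9000/user/admin/output/..." line above). A quick way to inspect it, roughly following the Hadoop single-node guide (a sketch; it assumes the same user and paths as this run):

$ bin/hdfs dfs -cat output/*          # print the <count> <match> pairs straight from HDFS
$ bin/hdfs dfs -get output output     # or pull the whole directory to the local filesystem
$ cat output/*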

Pseudo-Distributed - run a MapReduce job on YARN

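Switching the same example from the local runner to YARN normally requires the minimal single-node YARN setup from the Hadoop 2.x guide; that configuration is not part of this capture, so treat it as an assumed prerequisite: set mapreduce.framework.name to yarn in etc/hadoop/mapred-site.xml, enable the mapreduce_shuffle aux-service via yarn.nodemanager.aux-services in etc/hadoop/yarn-site.xml, and start the daemons:

$ sbin/start-yarn.sh     # starts the ResourceManager and NodeManager daemons
$ bin/yarn node -list    # optional sanity check that a NodeManager has registered
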
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar grep input output 'dfs[a-z.]+'
19/04/04 14:23:28 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
19/04/04 14:23:28 INFO input.FileInputFormat: Total input files to process : 29
19/04/04 14:23:28 INFO mapreduce.JobSubmitter: number of splits:29
19/04/04 14:23:28 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
19/04/04 14:23:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1554358905587_0002
19/04/04 14:23:29 INFO impl.YarnClientImpl: Submitted application application_1554358905587_0002
19/04/04 14:23:29 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1554358905587_0002/
19/04/04 14:23:29 INFO mapreduce.Job: Running job: job_1554358905587_0002
19/04/04 14:23:34 INFO mapreduce.Job: Job job_1554358905587_0002 running in uber mode : false
19/04/04 14:23:34 INFO mapreduce.Job:  map 0% reduce 0%
19/04/04 14:23:43 INFO mapreduce.Job:  map 21% reduce 0%
19/04/04 14:23:51 INFO mapreduce.Job:  map 41% reduce 0%
19/04/04 14:23:59 INFO mapreduce.Job:  map 59% reduce 0%
19/04/04 14:24:02 INFO mapreduce.Job:  map 62% reduce 0%
19/04/04 14:24:06 INFO mapreduce.Job:  map 76% reduce 0%
19/04/04 14:24:07 INFO mapreduce.Job:  map 79% reduce 0%
19/04/04 14:24:10 INFO mapreduce.Job:  map 79% reduce 26%
19/04/04 14:24:11 INFO mapreduce.Job:  map 83% reduce 26%
19/04/04 14:24:13 INFO mapreduce.Job:  map 97% reduce 26%
19/04/04 14:24:14 INFO mapreduce.Job:  map 100% reduce 26%
19/04/04 14:24:15 INFO mapreduce.Job:  map 100% reduce 100%
19/04/04 14:24:16 INFO mapreduce.Job: Job job_1554358905587_0002 completed successfully
19/04/04 14:24:16 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=384
                FILE: Number of bytes written=5965650
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=93316
                HDFS: Number of bytes written=488
                HDFS: Number of read operations=90
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters 
                Launched map tasks=29
                Launched reduce tasks=1
                Data-local map tasks=29
                Total time spent by all maps in occupied slots (ms)=176063
                Total time spent by all reduces in occupied slots (ms)=22832
                Total time spent by all map tasks (ms)=176063
                Total time spent by all reduce tasks (ms)=22832
                Total vcore-milliseconds taken by all map tasks=176063
                Total vcore-milliseconds taken by all reduce tasks=22832
                Total megabyte-milliseconds taken by all map tasks=180288512
                Total megabyte-milliseconds taken by all reduce tasks=23379968
        Map-Reduce Framework
                Map input records=2358
                Map output records=28
                Map output bytes=663
                Map output materialized bytes=552
                Input split bytes=3505
                Combine input records=28
                Combine output records=15
                Reduce input groups=13
                Reduce shuffle bytes=552
                Reduce input records=15
                Reduce output records=13
                Spilled Records=30
                Shuffled Maps =29
                Failed Shuffles=0
                Merged Map outputs=29
                GC time elapsed (ms)=5719
                CPU time spent (ms)=12970
                Physical memory (bytes) snapshot=9046470656
                Virtual memory (bytes) snapshot=63524478976
                Total committed heap usage (bytes)=5980553216
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=89811
        File Output Format Counters 
                Bytes Written=488
19/04/04 14:24:16 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
19/04/04 14:24:16 INFO input.FileInputFormat: Total input files to process : 1
19/04/04 14:24:16 INFO mapreduce.JobSubmitter: number of splits:1
19/04/04 14:24:16 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1554358905587_0003
19/04/04 14:24:16 INFO impl.YarnClientImpl: Submitted application application_1554358905587_0003
19/04/04 14:24:16 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1554358905587_0003/
19/04/04 14:24:16 INFO mapreduce.Job: Running job: job_1554358905587_0003
19/04/04 14:24:26 INFO mapreduce.Job: Job job_1554358905587_0003 running in uber mode : false
19/04/04 14:24:26 INFO mapreduce.Job:  map 0% reduce 0%
19/04/04 14:24:30 INFO mapreduce.Job:  map 100% reduce 0%
19/04/04 14:24:34 INFO mapreduce.Job:  map 100% reduce 100%
19/04/04 14:24:35 INFO mapreduce.Job: Job job_1554358905587_0003 completed successfully
19/04/04 14:24:35 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=330
                FILE: Number of bytes written=397061
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=618
                HDFS: Number of bytes written=220
                HDFS: Number of read operations=7
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters 
                Launched map tasks=1
                Launched reduce tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=1786
                Total time spent by all reduces in occupied slots (ms)=1915
                Total time spent by all map tasks (ms)=1786
                Total time spent by all reduce tasks (ms)=1915
                Total vcore-milliseconds taken by all map tasks=1786
                Total vcore-milliseconds taken by all reduce tasks=1915
                Total megabyte-milliseconds taken by all map tasks=1828864
                Total megabyte-milliseconds taken by all reduce tasks=1960960
        Map-Reduce Framework
                Map input records=13
                Map output records=13
                Map output bytes=298
                Map output materialized bytes=330
                Input split bytes=130
                Combine input records=0
                Combine output records=0
                Reduce input groups=5
                Reduce shuffle bytes=330
                Reduce input records=13
                Reduce output records=13
                Spilled Records=26
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=85
                CPU time spent (ms)=1040
                Physical memory (bytes) snapshot=511217664
                Virtual memory (bytes) snapshot=4241166336
                Total committed heap usage (bytes)=338690048
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=488
        File Output Format Counters 
                Bytes Written=220
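
On YARN the same two-job chain shows up as application_1554358905587_0002 (the search) and application_1554358905587_0003 (the sort over its output, which is why job _0003 reads exactly the 488 bytes job _0002 wrote). When both jobs are done, the result can be copied out of HDFS and the single-node cluster shut down; a minimal wrap-up, again assuming the default paths used above:

$ bin/hdfs dfs -get output output
$ cat output/*
$ sbin/stop-yarn.sh
$ sbin/stop-dfs.sh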
