Viewing MapReduce Job Logs in Hadoop (notes on Hadoop: The Definitive Guide)


Edit the MapReduce configuration file mapred-site.xml:
 
[root@bigdata yar]# vim /opt/hadoop-2.8.3/etc/hadoop/mapred-site.xml
 
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>bigdata.cqmfin.com:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>bigdata.cqmfin.com:19888</value>
</property>
 
Edit the YARN configuration file yarn-site.xml.

yarn.nodemanager.delete.debug-delay-sec
Default: 0, meaning an application's local files are deleted as soon as the application finishes.
Description: the number of seconds the NodeManager's DeletionService waits after an application completes before deleting its localized files and log directories. To diagnose YARN application problems, set this property to a value large enough (for example 600 seconds, i.e. 10 minutes) to allow those directories to be examined.
 
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/opt/hadoop/logs/yar/local</value>
</property>
<property>
  <name>yarn.nodemanager.log-dirs</name>
  <value>/opt/hadoop/logs/yar/log</value>
</property>
<property>
  <name>yarn.log.server.url</name>
  <value>http://bigdata.cqmfin.com:19888/jobhistory/logs</value>
</property>
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.delete.debug-delay-sec</name>
  <value>60000</value>
</property>
 
Start the JobHistory server:
[root@bigdata yar]# mr-jobhistory-daemon.sh start historyserver
Check that the JobHistoryServer process is running:
[root@bigdata yar]# jps
15098 Jps
13338 NodeManager
12875 DataNode
13052 SecondaryNameNode
13229 ResourceManager
8765 JobHistoryServer
12735 NameNode
 
Run the MapReduce job
 
View the input file:
[root@bigdata yar]# hadoop dfs -cat /user/input/sample.txt
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
 
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=512M; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=1024m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=512M; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=1024m; support was removed in 8.0
18/08/08 22:47:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
0067011990999991950051507004+68750+023550FM-12+038299999V0203301N00671220001CN9999999N9+00001+99999999999
0043011990999991950051512004+68750+023550FM-12+038299999V0203201N00671220001CN9999999N9+00221+99999999999
0043011990999991950051518004+68750+023550FM-12+038299999V0203201N00261220001CN9999999N9-00111+99999999999
0043012650999991949032412004+62300+010750FM-12+048599999V0202701N00461220001CN0500001N9+01111+99999999999
0043012650999991949032418004+62300+010750FM-12+048599999V0202701N00461220001CN0500001N9+00781+99999999999
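Each record above is a fixed-width NCDC weather line. As described in the book, the MaxTemperature mapper takes the year from columns 16-19 and the signed air temperature from columns 88-92. A minimal shell sketch of that parsing, applied to the 1949 record above (offsets are the book's; the shell code itself is just an illustration, not part of the job):

```shell
# Parse one NCDC record the way the book's MaxTemperature mapper does:
# year in columns 16-19, signed air temperature (tenths of a degree
# Celsius) in columns 88-92.
line='0043012650999991949032412004+62300+010750FM-12+048599999V0202701N00461220001CN0500001N9+01111+99999999999'

year=$(echo "$line" | cut -c16-19)                  # the year field
sign=$(echo "$line" | cut -c88)                     # '+' or '-'
temp=$(echo "$line" | cut -c89-92 | sed 's/^0*//')  # digits, leading zeros dropped
temp=${temp:-0}                                     # all-zero field means 0
[ "$sign" = '-' ] && temp="-$temp"

echo "year=$year temp=$temp"   # the same pair the mapper logs: write,key:1949,value:111
```

This is exactly the (key, value) pair that shows up later in the container's stdout as `write,key:1949,value:111`.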
Set up the environment:
[root@bigdata jar]# export HADOOP_CLASSPATH=/opt/jar/hadoop-examples-mapReducer.jar
Delete any previous output directory:
[root@bigdata jar]# hadoop dfs -rm -f -R /user/input/outputMax
Run the test:
[root@bigdata opt]# hadoop MaxTemperature /user/input/sample.txt /user/input/outputMax
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=512M; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=1024m; support was removed in 8.0
18/08/08 20:46:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/08/08 20:46:47 INFO client.RMProxy: Connecting to ResourceManager at bigdata.cqmfin.com/192.168.100.131:8032
18/08/08 20:46:47 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
18/08/08 20:46:48 INFO input.FileInputFormat: Total input files to process : 1
18/08/08 20:46:48 INFO mapreduce.JobSubmitter: number of splits:1
18/08/08 20:46:48 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1533786361328_0001
18/08/08 20:46:49 INFO impl.YarnClientImpl: Submitted application application_1533786361328_0001
18/08/08 20:46:49 INFO mapreduce.Job: The url to track the job: http://bigdata.cqmfin.com:8088/proxy/application_1533786361328_0001/
18/08/08 20:46:49 INFO mapreduce.Job: Running job: job_1533786361328_0001
18/08/08 20:46:56 INFO mapreduce.Job: Job job_1533786361328_0001 running in uber mode : false
18/08/08 20:46:56 INFO mapreduce.Job: map 0% reduce 0%
18/08/08 20:47:00 INFO mapreduce.Job: map 100% reduce 0%
18/08/08 20:47:05 INFO mapreduce.Job: map 100% reduce 100%
18/08/08 20:47:06 INFO mapreduce.Job: Job job_1533786361328_0001 completed successfully
18/08/08 20:47:06 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=61
FILE: Number of bytes written=315249
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=650
HDFS: Number of bytes written=17
HDFS: Number of read operations=6
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
Job Counters
Launched map tasks=1
Launched reduce tasks=1
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=2212
Total time spent by all reduces in occupied slots (ms)=2402
Total time spent by all map tasks (ms)=2212
Total time spent by all reduce tasks (ms)=2402
Total vcore-milliseconds taken by all map tasks=2212
Total vcore-milliseconds taken by all reduce tasks=2402
Total megabyte-milliseconds taken by all map tasks=2265088
Total megabyte-milliseconds taken by all reduce tasks=2459648
Map-Reduce Framework
Map input records=5
Map output records=5
Map output bytes=45
Map output materialized bytes=61
Input split bytes=117
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=61
Reduce input records=5
Reduce output records=2
Spilled Records=10
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=108
CPU time spent (ms)=950
Physical memory (bytes) snapshot=453992448
Virtual memory (bytes) snapshot=4193198080
Total committed heap usage (bytes)=323485696
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=533
File Output Format Counters
Bytes Written=17
[root@bigdata yar]# pwd
/opt/hadoop/logs/yar
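The per-container logs land under the yarn.nodemanager.log-dirs path configured earlier: one directory per application, one subdirectory per container, each holding stdout/stderr/syslog. A small sketch that recreates this layout locally, using the application and container IDs from the job above (the temp directory is purely illustrative):

```shell
# Recreate the NodeManager's container-log directory layout,
# as seen under /opt/hadoop/logs/yar/log for the job above.
base=$(mktemp -d)
app=application_1533786361328_0001
for c in container_1533786361328_0001_01_000002 \
         container_1533786361328_0001_01_000003; do
  mkdir -p "$base/log/$app/$c"
  # each container gets stdout, stderr, and syslog files
  touch "$base/log/$app/$c/stdout" \
        "$base/log/$app/$c/stderr" \
        "$base/log/$app/$c/syslog"
done
find "$base/log" -name stdout | sort
```

Container _000002 is the map task and _000003 the reduce task here, which is why the grep hits below split across those two directories.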
Search the log files for the mapper's debug output:
[root@bigdata yar]# grep -R 'write,key' *
log/application_1533786361328_0001/container_1533786361328_0001_01_000002/stdout:--------------------->>>>>write,key:1950,value:0
log/application_1533786361328_0001/container_1533786361328_0001_01_000002/stdout:--------------------->>>>>write,key:1950,value:22
log/application_1533786361328_0001/container_1533786361328_0001_01_000002/stdout:--------------------->>>>>write,key:1950,value:-11
log/application_1533786361328_0001/container_1533786361328_0001_01_000002/stdout:--------------------->>>>>write,key:1949,value:111
log/application_1533786361328_0001/container_1533786361328_0001_01_000002/stdout:--------------------->>>>>write,key:1949,value:78
[root@bigdata yar]# grep -R 'maxValue' *
log/application_1533786361328_0001/container_1533786361328_0001_01_000003/stdout:--------reducer>> maxValue:1949,maxValue:111
log/application_1533786361328_0001/container_1533786361328_0001_01_000003/stdout:--------reducer>> maxValue:1950,maxValue:22
View the log files:
[root@bigdata yar]# cat log/application_1533786361328_0001/container_1533786361328_0001_01_000003/stdout
--------reducer>> key:1949,values:org.apache.hadoop.mapreduce.task.ReduceContextImpl$ValueIterable@56f6d40b
--------reducer>> maxValue:1949,maxValue:111
--------reducer>> key:1950,values:org.apache.hadoop.mapreduce.task.ReduceContextImpl$ValueIterable@56f6d40b
--------reducer>> maxValue:1950,maxValue:22
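The reducer's logic, visible in the stdout above, is just "maximum temperature per year". The same grouping can be sketched over the mapper's five logged pairs, with awk standing in for MaxTemperatureReducer (an illustration only, not the job's actual code path):

```shell
# Feed the mapper's (year, temperature) pairs through an awk "reducer"
# that keeps the maximum value per year, like MaxTemperatureReducer.
result=$(printf '%s\n' '1950 0' '1950 22' '1950 -11' '1949 111' '1949 78' |
  awk '{ if (!($1 in max) || $2 + 0 > max[$1] + 0) max[$1] = $2 }
       END { for (y in max) print y, max[y] }' | sort)
echo "$result"   # 1949 111 and 1950 22, matching the reducer's maxValue lines
```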
[root@bigdata yar]# cat log/application_1533786361328_0001/container_1533786361328_0001_01_000002/stdout
--------------------->>>>>,key:0,value:0067011990999991950051507004+68750+023550FM-12+038299999V0203301N00671220001CN9999999N9+00001+99999999999
--------------------->>>>>write,key:1950,value:0
--------------------->>>>>,key:107,value:0043011990999991950051512004+68750+023550FM-12+038299999V0203201N00671220001CN9999999N9+00221+99999999999
--------------------->>>>>write,key:1950,value:22
--------------------->>>>>,key:214,value:0043011990999991950051518004+68750+023550FM-12+038299999V0203201N00261220001CN9999999N9-00111+99999999999
--------------------->>>>>write,key:1950,value:-11
--------------------->>>>>,key:321,value:0043012650999991949032412004+62300+010750FM-12+048599999V0202701N00461220001CN0500001N9+01111+99999999999
--------------------->>>>>write,key:1949,value:111
--------------------->>>>>,key:428,value:0043012650999991949032418004+62300+010750FM-12+048599999V0202701N00461220001CN0500001N9+00781+99999999999
--------------------->>>>>write,key:1949,value:78