hadoop-0.20.2-examples.jar grep example

1. Run
root@ubuntu:/usr/hadoop# bin/hadoop jar hadoop-0.20.2-examples.jar grep input output 'dfs[a-z.]+'
10/06/20 05:58:07 INFO mapred.FileInputFormat: Total input paths to process : 17
10/06/20 05:58:08 INFO mapred.JobClient: Running job: job_201006200542_0001
10/06/20 05:58:09 INFO mapred.JobClient:  map 0% reduce 0%
10/06/20 05:58:46 INFO mapred.JobClient:  map 11% reduce 0%
10/06/20 05:59:00 INFO mapred.JobClient:  map 23% reduce 0%
10/06/20 05:59:07 INFO mapred.JobClient:  map 35% reduce 7%
10/06/20 05:59:09 INFO mapred.JobClient:  map 47% reduce 7%
10/06/20 05:59:15 INFO mapred.JobClient:  map 58% reduce 11%
10/06/20 05:59:19 INFO mapred.JobClient:  map 64% reduce 11%
10/06/20 05:59:22 INFO mapred.JobClient:  map 76% reduce 19%
10/06/20 05:59:25 INFO mapred.JobClient:  map 88% reduce 19%
10/06/20 05:59:28 INFO mapred.JobClient:  map 100% reduce 21%
10/06/20 05:59:34 INFO mapred.JobClient:  map 100% reduce 31%
10/06/20 05:59:40 INFO mapred.JobClient:  map 100% reduce 100%
10/06/20 05:59:45 INFO mapred.JobClient: Job complete: job_201006200542_0001
10/06/20 05:59:49 INFO mapred.JobClient: Counters: 18
10/06/20 05:59:49 INFO mapred.JobClient:   Job Counters 
10/06/20 05:59:49 INFO mapred.JobClient:     Launched reduce tasks=1
10/06/20 05:59:49 INFO mapred.JobClient:     Launched map tasks=17
10/06/20 05:59:49 INFO mapred.JobClient:     Data-local map tasks=17
10/06/20 05:59:49 INFO mapred.JobClient:   FileSystemCounters
10/06/20 05:59:49 INFO mapred.JobClient:     FILE_BYTES_READ=184
10/06/20 05:59:49 INFO mapred.JobClient:     HDFS_BYTES_READ=21571
10/06/20 05:59:49 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=1008
10/06/20 05:59:49 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=280
10/06/20 05:59:49 INFO mapred.JobClient:   Map-Reduce Framework
10/06/20 05:59:49 INFO mapred.JobClient:     Reduce input groups=7
10/06/20 05:59:49 INFO mapred.JobClient:     Combine output records=8
10/06/20 05:59:49 INFO mapred.JobClient:     Map input records=651
10/06/20 05:59:49 INFO mapred.JobClient:     Reduce shuffle bytes=280
10/06/20 05:59:49 INFO mapred.JobClient:     Reduce output records=7
10/06/20 05:59:49 INFO mapred.JobClient:     Spilled Records=16
10/06/20 05:59:49 INFO mapred.JobClient:     Map output bytes=217
10/06/20 05:59:49 INFO mapred.JobClient:     Map input bytes=21571
10/06/20 05:59:49 INFO mapred.JobClient:     Combine input records=11
10/06/20 05:59:49 INFO mapred.JobClient:     Map output records=11
10/06/20 05:59:49 INFO mapred.JobClient:     Reduce input records=8
10/06/20 05:59:52 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
10/06/20 05:59:55 INFO mapred.FileInputFormat: Total input paths to process : 1
10/06/20 06:00:00 INFO mapred.JobClient: Running job: job_201006200542_0002
10/06/20 06:00:01 INFO mapred.JobClient:  map 0% reduce 0%
10/06/20 06:00:10 INFO mapred.JobClient:  map 100% reduce 0%
10/06/20 06:00:23 INFO mapred.JobClient:  map 100% reduce 100%
10/06/20 06:00:25 INFO mapred.JobClient: Job complete: job_201006200542_0002
10/06/20 06:00:25 INFO mapred.JobClient: Counters: 18
10/06/20 06:00:25 INFO mapred.JobClient:   Job Counters 
10/06/20 06:00:25 INFO mapred.JobClient:     Launched reduce tasks=1
10/06/20 06:00:25 INFO mapred.JobClient:     Launched map tasks=1
10/06/20 06:00:25 INFO mapred.JobClient:     Data-local map tasks=1
10/06/20 06:00:25 INFO mapred.JobClient:   FileSystemCounters
10/06/20 06:00:25 INFO mapred.JobClient:     FILE_BYTES_READ=158
10/06/20 06:00:25 INFO mapred.JobClient:     HDFS_BYTES_READ=280
10/06/20 06:00:25 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=348
10/06/20 06:00:25 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=96
10/06/20 06:00:25 INFO mapred.JobClient:   Map-Reduce Framework
10/06/20 06:00:25 INFO mapred.JobClient:     Reduce input groups=3
10/06/20 06:00:25 INFO mapred.JobClient:     Combine output records=0
10/06/20 06:00:25 INFO mapred.JobClient:     Map input records=7
10/06/20 06:00:25 INFO mapred.JobClient:     Reduce shuffle bytes=158
10/06/20 06:00:25 INFO mapred.JobClient:     Reduce output records=7
10/06/20 06:00:25 INFO mapred.JobClient:     Spilled Records=14
10/06/20 06:00:25 INFO mapred.JobClient:     Map output bytes=138
10/06/20 06:00:25 INFO mapred.JobClient:     Map input bytes=194
10/06/20 06:00:25 INFO mapred.JobClient:     Combine input records=0
10/06/20 06:00:25 INFO mapred.JobClient:     Map output records=7
10/06/20 06:00:25 INFO mapred.JobClient:     Reduce input records=7
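The grep example actually submits two chained MapReduce jobs, which is why two job IDs appear above: the first (job_201006200542_0001) scans the 17 input files and counts every match of the regular expression 'dfs[a-z.]+', and the second (job_201006200542_0002) sorts the matches by count into the final output. The GenericOptionsParser warning printed between the two jobs is a common notice that the example driver does not implement the Tool interface; it can be ignored here.

Before fetching the results, it can be helpful to list the job's output directory on HDFS. This is just a quick check assuming the same output path as above; besides the result file (typically part-00000, since a single reduce task ran) the directory also contains a _logs subdirectory, which is what the cat command in the next step complains about:

root@ubuntu:/usr/hadoop# bin/hadoop fs -ls output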

2. View the results:
root@ubuntu:/usr/hadoop# bin/hadoop fs -get output output   # copy the output files from the distributed filesystem to the local filesystem
root@ubuntu:/usr/hadoop# cat output/*
cat: output/_logs: Is a directory
3    dfs.class
2    dfs.period
2    dfs.replication
1    dfs.file
1    dfs.servers
1    dfsadmin
1    dfsmetrics.log
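Each output line is a match count followed by the matched string; for example, dfs.class occurred three times across the input files. The results can also be viewed directly on the distributed filesystem, without copying them to the local filesystem first (same output path as above):

root@ubuntu:/usr/hadoop# bin/hadoop fs -cat output/*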
