First WordCount

Preface

  • Writing is hard work. If you find this useful, please consider buying the author a coffee~
  • To repost, please contact the author, or at least credit the author and link to the original article!

Introduction

Following on from the previous post, Hadoop Pseudo-Distributed Setup:
Apache YARN is Hadoop's cluster resource management system. YARN was originally introduced to improve the MapReduce implementation, but it is general enough to support other distributed computing paradigms as well. This section covers basic YARN configuration, plus a first wordcount example.

Configuring YARN

  1. Edit the hadoop/etc/hadoop/mapred-site.xml file
    to specify that the MapReduce computation framework runs on YARN:

        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
  2. Edit the hadoop/etc/hadoop/yarn-site.xml file
    to specify the auxiliary service that NodeManagers run for MapReduce (the shuffle handler):

        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
    
  3. Specify the ResourceManager host. This is optional (the default is the local machine), but once it is set, starting the ResourceManager on any other machine will fail:

        <property>
            <name>yarn.resourcemanager.hostname</name>
            <value>bigdata-4</value>
        </property>
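For reference, the three properties above, assembled into complete files, would look roughly like this (a sketch of standard Hadoop 2.x configuration layout; bigdata-4 is this tutorial's hostname, so adjust it to your own machine):

```xml
<!-- hadoop/etc/hadoop/mapred-site.xml -->
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

<!-- hadoop/etc/hadoop/yarn-site.xml -->
<configuration>
    <!-- auxiliary service so NodeManagers can serve map output to reducers -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- optional: pin the ResourceManager to this host -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>bigdata-4</value>
    </property>
</configuration>
```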

The wordcount example

  1. Start YARN:
    sbin/yarn-daemon.sh start resourcemanager
    sbin/yarn-daemon.sh start nodemanager
  2. Check the external YARN web UI: the hostname bigdata-4 (or its IP address) followed by port 8088, i.e. http://bigdata-4:8088. The UI is served over plain HTTP.

  3. To test the environment, run a MapReduce job: the wordcount word-counting example.

A MapReduce job runs in five stages:
    input -> map() -> shuffle -> reduce() -> output
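The five stages can be mimicked locally with an ordinary shell pipeline. This is only an analogy for how wordcount data flows through the stages, not Hadoop code, and the sample sentence is made up:

```shell
# input: a line of text
echo "hadoop yarn hadoop mapreduce yarn hadoop" |
  tr ' ' '\n' |   # map: emit each word on its own line (word becomes the key)
  sort |          # shuffle: sort so identical keys become adjacent
  uniq -c |       # reduce: collapse each run of identical keys into a count
  sort -rn        # output: most frequent words first
```

Running this prints "hadoop" with count 3 first, then "yarn" (2) and "mapreduce" (1), which is exactly the grouping-and-counting that the real job does in parallel across the cluster.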
    Steps: to run MapReduce on YARN, the job must be packaged as a jar.
            Create a data file to test MapReduce with.
            Upload the data file from the local filesystem to HDFS:
            bin/hdfs dfs -put word.txt /user/wxt/wxt_test/input
            Use the example jar that ships with Hadoop: share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar
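For example, the test data file might be created like this (the file contents here are made up for illustration; the upload path matches the tutorial's command and requires a running HDFS):

```shell
# Create a small local test file for wordcount (contents are illustrative).
cat > word.txt <<'EOF'
hadoop yarn hdfs mapreduce hadoop yarn hadoop
EOF

# Sanity-check the file locally before uploading.
wc -w word.txt

# Then upload it to HDFS as in the tutorial (requires the cluster to be running):
# bin/hdfs dfs -put word.txt /user/wxt/wxt_test/input
```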
  4. Run the job:
    bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar wordcount /user/wxt/wxt_test/input/word.txt /output/word.txt
  5. Output from a successful run:
[wxt@bigdata-4 hadoop-2.5.0]$ bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar  wordcount /user/wxt/wxt_test/input/word.txt /output/word.txt
17/12/08 19:59:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/12/08 19:59:11 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
17/12/08 19:59:12 INFO input.FileInputFormat: Total input paths to process : 1
17/12/08 19:59:13 INFO mapreduce.JobSubmitter: number of splits:1
17/12/08 19:59:13 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1512719236321_0001
17/12/08 19:59:14 INFO impl.YarnClientImpl: Submitted application application_1512719236321_0001
17/12/08 19:59:14 INFO mapreduce.Job: The url to track the job: http://bigdata-4:8088/proxy/application_1512719236321_0001/
17/12/08 19:59:14 INFO mapreduce.Job: Running job: job_1512719236321_0001
17/12/08 19:59:32 INFO mapreduce.Job: Job job_1512719236321_0001 running in uber mode : false
17/12/08 19:59:32 INFO mapreduce.Job:  map 0% reduce 0%
17/12/08 19:59:52 INFO mapreduce.Job:  map 100% reduce 0%
17/12/08 20:00:12 INFO mapreduce.Job:  map 100% reduce 100%
17/12/08 20:00:13 INFO mapreduce.Job: Job job_1512719236321_0001 completed successfully
17/12/08 20:00:13 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=552
        FILE: Number of bytes written=195039
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=431
        HDFS: Number of bytes written=354
        HDFS: Number of read operations=6
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters 
        Launched map tasks=1
        Launched reduce tasks=1
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=17464
        Total time spent by all reduces in occupied slots (ms)=17724
        Total time spent by all map tasks (ms)=17464
        Total time spent by all reduce tasks (ms)=17724
        Total vcore-seconds taken by all map tasks=17464
        Total vcore-seconds taken by all reduce tasks=17724
        Total megabyte-seconds taken by all map tasks=17883136
        Total megabyte-seconds taken by all reduce tasks=18149376
    Map-Reduce Framework
        Map input records=1
        Map output records=61
        Map output bytes=557
        Map output materialized bytes=552
        Input split bytes=119
        Combine input records=61
        Combine output records=48
        Reduce input groups=48
        Reduce shuffle bytes=552
        Reduce input records=48
        Reduce output records=48
        Spilled Records=96
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=175
        CPU time spent (ms)=2240
        Physical memory (bytes) snapshot=309456896
        Virtual memory (bytes) snapshot=1680064512
        Total committed heap usage (bytes)=136450048
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters 
        Bytes Read=312
    File Output Format Counters 
        Bytes Written=354

At this point, the First WordCount run is complete. The counters above show that 48 distinct words were counted (Reduce output records=48); the results land under /output/word.txt on HDFS and can be viewed with bin/hdfs dfs -cat /output/word.txt/part*.
