Hadoop 3.x Components: Getting Started with MapReduce

1. What Is MapReduce

Hadoop MapReduce (hereafter MR) is a distributed computing framework for writing distributed applications with ease, so that these programs can process massive data sets on a distributed cluster in a reliable, fault-tolerant, parallel way.

MR is also a programming idea, a programming model. Its core idea is "split first, then merge; divide and conquer": decompose a complex problem, following some decomposition method, into a number of equivalent, smaller, simpler problems; solve those simple problems separately; and finally combine their answers into the result of the original complex problem. MR is exactly the two phases of this solving process:

  • Map: splits the problem into subtasks. The prerequisite is that the subtasks can be computed in parallel and have no dependencies on one another;

  • Reduce: merges the answers to the subproblems, performing a global aggregation of the Map phase's results (a minimal sketch of this split-then-merge pattern follows the diagram below);

Diagram: how MR decomposes and solves a problem
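To make the "split first, then merge" idea concrete, here is a minimal plain-Java sketch (illustrative only, not Hadoop code; the class and variable names are made up). It sums the numbers 1..1000 by splitting them into chunks, computing a partial sum per chunk in parallel (the "Map" step), and then merging the partial sums (the "Reduce" step):

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Illustrative divide-and-conquer in plain Java, mirroring Map and Reduce.
public class DivideAndConquerSum {
    public static void main(String[] args) {
        // The "large" input data set: the numbers 1..1000.
        List<Integer> data = IntStream.rangeClosed(1, 1000).boxed().collect(Collectors.toList());

        int chunkSize = 100;
        int chunkCount = (data.size() + chunkSize - 1) / chunkSize;

        // "Map" step: the chunks are independent, so their partial sums can be computed in parallel.
        List<Long> partialSums = IntStream.range(0, chunkCount)
                .parallel()
                .mapToObj(i -> data.subList(i * chunkSize, Math.min((i + 1) * chunkSize, data.size())))
                .map(chunk -> chunk.stream().mapToLong(Integer::longValue).sum())
                .collect(Collectors.toList());

        // "Reduce" step: merge all partial results into the final answer.
        long total = partialSums.stream().mapToLong(Long::longValue).sum();
        System.out.println("total = " + total); // prints 500500
    }
}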

2. Why We Need MapReduce

In a traditional single-machine setting, problems are usually solved with centralized computation. As business scenarios grow more complex and the volume of data to process keeps increasing, centralized computation becomes more and more expensive and slower and slower. MR, as a distributed computing framework, scales computing capacity horizontally; in theory there is no upper limit, scaling out is cheap, and parallel processing lets even big-data workloads return results quickly.

MR originates from Google's MapReduce paper and was implemented by Apache as a distributed computing framework. It solved the problem of having no practical way to process massive data: it is easy to use and highly scalable, so developers can write distributed big-data programs without having to care about the low-level complexity of distributed systems.

MR has the following advantages:

  • High fault tolerance: the Hadoop cluster is distributed, so when any single node fails, its computation can easily be moved to another node without affecting completion of the overall job; Hadoop handles this entire process for us, and users do not need to intervene;
  • Suited to offline computation over massive data: the more data there is to process, the more pronounced MR's advantage becomes; as long as cluster nodes can be added, computing capacity has no practical upper limit;

MR has the following drawbacks:

  • Poor real-time performance: no matter how simple the MR job or how small the data, completion within seconds cannot be guaranteed, so MR does not suit scenarios with strict real-time requirements;
  • No stream processing: MR cannot handle continuously arriving streaming data; it only processes offline, static data that does not change during the job;

3. How MapReduce Works

The MR workflow involves three main roles:

  • ApplicationMaster (AM): handles overall scheduling and state coordination of the MR program; each job has exactly one AM;

  • MapTask: handles the data-processing flow of the Map phase; a compute node can run multiple MapTasks, belonging to different jobs, each launched as a container;

  • ReduceTask: handles the data-processing flow of the Reduce phase; a compute node can likewise run multiple ReduceTasks, belonging to different jobs, each launched as a container;

    All three roles run in the cluster as containers. The container concept will be explained later when we introduce YARN; a rough idea is enough for now.

Diagram: MR role responsibilities

The MR workflow consists of three main stages:

  • Map: the input files are divided into splits, by default 128 MB each (the HDFS block size), and each split is processed by one MapTask;

  • Shuffle: the unordered output of the Map side is reorganized according to fixed rules (partitioned, sorted, and grouped by key) so that the Reduce side can consume it;

  • Reduce: each ReduceTask pulls the data assigned to it from the MapTasks, merges it, and writes the result to HDFS;

Diagram: MR processing stages

Throughout the MR process, data flows in the form of key-value (KV) pairs:

Diagram: an example walk-through of the MR process
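Taking the first line of the word-count example from section 4.2 ("hello hadoop hello bigdata") as input, the key-value pairs flow roughly as follows (a simplified illustration; the real shuffle also partitions and sorts by key):

Input to Map      (0, "hello hadoop hello bigdata")
Map output        (hello,1) (hadoop,1) (hello,1) (bigdata,1)
After Shuffle     (bigdata,[1]) (hadoop,[1]) (hello,[1,1])
Reduce output     (bigdata,1) (hadoop,1) (hello,2)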

4. How to Run a MapReduce Program on a Hadoop Cluster

First, a Hadoop cluster is required. We already set one up earlier; see: Hadoop3.x集群安装教程 (Hadoop 3.x cluster installation tutorial) - (jianshu.com)

Then let's run two of the official example programs:

4.1 Estimating Pi

# Change to the directory containing the example jars
[root@iZuf6gmsvearrd5uc3emkyZ ~]# cd /root/soft/hadoop-3.3.4/share/hadoop/mapreduce/
[root@iZuf6gmsvearrd5uc3emkyZ mapreduce]# ls
hadoop-mapreduce-client-app-3.3.4.jar     hadoop-mapreduce-client-hs-plugins-3.3.4.jar       hadoop-mapreduce-client-shuffle-3.3.4.jar   lib-examples
hadoop-mapreduce-client-common-3.3.4.jar  hadoop-mapreduce-client-jobclient-3.3.4.jar        hadoop-mapreduce-client-uploader-3.3.4.jar  sources
hadoop-mapreduce-client-core-3.3.4.jar    hadoop-mapreduce-client-jobclient-3.3.4-tests.jar  hadoop-mapreduce-examples-3.3.4.jar
hadoop-mapreduce-client-hs-3.3.4.jar      hadoop-mapreduce-client-nativetask-3.3.4.jar       jdiff
# Run the computation
[root@iZuf6gmsvearrd5uc3emkyZ mapreduce]# hadoop jar hadoop-mapreduce-examples-3.3.4.jar pi 2 4
Number of Maps  = 2
Samples per Map = 4
Wrote input for Map #0
Wrote input for Map #1
Starting Job
# YARN's ResourceManager allocates resources for the job
2022-11-02 17:31:41,735 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at /172.24.38.209:8032
2022-11-02 17:31:42,738 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/root/.staging/job_1667379747044_0002
2022-11-02 17:31:43,046 INFO input.FileInputFormat: Total input files to process : 2
2022-11-02 17:31:43,953 INFO mapreduce.JobSubmitter: number of splits:2
2022-11-02 17:31:44,735 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1667379747044_0002
2022-11-02 17:31:44,736 INFO mapreduce.JobSubmitter: Executing with tokens: []
2022-11-02 17:31:45,110 INFO conf.Configuration: resource-types.xml not found
2022-11-02 17:31:45,110 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2022-11-02 17:31:45,264 INFO impl.YarnClientImpl: Submitted application application_1667379747044_0002
2022-11-02 17:31:45,349 INFO mapreduce.Job: The url to track the job: http://iZuf6gmsvearrd5uc3emkyZ:8088/proxy/application_1667379747044_0002/
2022-11-02 17:31:45,350 INFO mapreduce.Job: Running job: job_1667379747044_0002
2022-11-02 17:31:59,929 INFO mapreduce.Job: Job job_1667379747044_0002 running in uber mode : false
2022-11-02 17:31:59,931 INFO mapreduce.Job:  map 0% reduce 0%
2022-11-02 17:32:14,551 INFO mapreduce.Job:  map 50% reduce 0%
2022-11-02 17:32:15,568 INFO mapreduce.Job:  map 100% reduce 0%
2022-11-02 17:32:23,754 INFO mapreduce.Job:  map 100% reduce 100%
2022-11-02 17:32:25,819 INFO mapreduce.Job: Job job_1667379747044_0002 completed successfully
2022-11-02 17:32:26,168 INFO mapreduce.Job: Counters: 54
        File System Counters
                FILE: Number of bytes read=50
                FILE: Number of bytes written=830664
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=536
                HDFS: Number of bytes written=215
                HDFS: Number of read operations=13
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=3
                HDFS: Number of bytes read erasure-coded=0
        Job Counters
                Launched map tasks=2
                Launched reduce tasks=1
                Data-local map tasks=2
                Total time spent by all maps in occupied slots (ms)=24774
                Total time spent by all reduces in occupied slots (ms)=6828
                Total time spent by all map tasks (ms)=24774
                Total time spent by all reduce tasks (ms)=6828
                Total vcore-milliseconds taken by all map tasks=24774
                Total vcore-milliseconds taken by all reduce tasks=6828
                Total megabyte-milliseconds taken by all map tasks=25368576
                Total megabyte-milliseconds taken by all reduce tasks=6991872
        Map-Reduce Framework
                Map input records=2
                Map output records=4
                Map output bytes=36
                Map output materialized bytes=56
                Input split bytes=300
                Combine input records=0
                Combine output records=0
                Reduce input groups=2
                Reduce shuffle bytes=56
                Reduce input records=4
                Reduce output records=0
                Spilled Records=8
                Shuffled Maps =2
                Failed Shuffles=0
                Merged Map outputs=2
                GC time elapsed (ms)=781
                CPU time spent (ms)=2500
                Physical memory (bytes) snapshot=537804800
                Virtual memory (bytes) snapshot=8220102656
                Total committed heap usage (bytes)=295051264
                Peak Map Physical memory (bytes)=213385216
                Peak Map Virtual memory (bytes)=2737229824
                Peak Reduce Physical memory (bytes)=113819648
                Peak Reduce Virtual memory (bytes)=2745643008
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=236
        File Output Format Counters
                Bytes Written=97
Job Finished in 44.671 seconds
# The computed result
Estimated value of Pi is 3.50000000000000000000
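The pi example estimates π with a quasi-Monte Carlo sampling method: each MapTask generates sample points in a unit square and counts how many fall inside the inscribed circle, and a single ReduceTask aggregates the counts, giving π ≈ 4 × inside / total. With only 2 maps × 4 samples = 8 points, the estimate is necessarily coarse; the 3.5 above is consistent with 7 of the 8 points landing inside the circle (4 × 7 / 8 = 3.5). Rerunning with larger arguments, for example hadoop jar hadoop-mapreduce-examples-3.3.4.jar pi 10 1000, produces a much closer approximation.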

4.2 Counting Word Frequencies in a File

# Prepare a test file
mkdir -p /root/data/wordcount
cd /root/data/wordcount
vim wordcount.txt
hello hadoop hello bigdata
hadoop hdfs
hadoop mapreduce
hadoop yarn
hello

# Upload the file to be processed to HDFS
hadoop fs -mkdir -p /example/wordcount/input
hadoop fs -put wordcount.txt /example/wordcount/input

# Run the word-count job; the output directory must not already exist
cd /root/soft/hadoop-3.3.4/share/hadoop/mapreduce
hadoop jar hadoop-mapreduce-examples-3.3.4.jar wordcount /example/wordcount/input /example/wordcount/output

# View the result
hadoop fs -cat /example/wordcount/output/part-r-00000
bigdata 1
hadoop  4
hdfs    1
hello   3
mapreduce       1
yarn    1
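
For reference, a hand-written MR program for this job looks roughly like the sketch below. It follows the structure of the well-known WordCount example (a Mapper emitting (word, 1) pairs, a Reducer summing them, and a driver configuring the Job); treat it as an illustrative sketch rather than the exact source of hadoop-mapreduce-examples-3.3.4.jar:

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map phase: split each input line into words and emit (word, 1) for every word.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce phase: for each word, sum the counts gathered by the shuffle and emit (word, total).
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    // Driver: configure the job and submit it to the cluster (input and output paths come from args).
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);   // optional local pre-aggregation on the map side
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Such a program is packaged into a jar and submitted with the same hadoop jar command used above.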

In real-world enterprise development of distributed computing programs, developers rarely write MR programs by hand; data-warehouse tools such as Hive generate and execute the MR jobs automatically, so hand-writing MR code is not a required skill. In addition, MR has an inherent drawback: it completes computations by combining memory with disk, so it cannot match purely in-memory computing frameworks in efficiency, and the scenarios where it is used keep shrinking. Finally, MR cannot handle streaming computation, which narrows its applicable scenarios even further.

Even so, as the first-generation distributed computing framework, MR was a landmark and is essential knowledge on the big-data learning path. Many popular computing frameworks borrow, to varying degrees, from MR's ideas and implementation, so understanding MR will help you pick up other frameworks quickly later on.
