Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#2

Problem: After setting up Hadoop 2.5.2, all services started normally, but running the classic Hadoop example WordCount as a test produced the following exception:

Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#2
        at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
        at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.checkReducerHealth(ShuffleSchedulerImpl.java:323)
        at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.copyFailed(ShuffleSchedulerImpl.java:245)
        at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:323)
        at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:165)


Solution:

Add the following properties to the yarn-site.xml configuration file:

    <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
    </property>

    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>512</value>
    </property>

Because this setup runs on my own machine with limited memory, the minimum allocation is set to 512 MB here; the default minimum in the official documentation is 1024 MB.

After adding these two properties, restart the Hadoop cluster and WordCount runs successfully.
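As a rough sketch, restarting YARN and re-running the example might look like the following. The `$HADOOP_HOME` layout, the examples-jar version, and the `/input` and `/output` HDFS paths are assumptions for a typical Hadoop 2.5.2 install; adjust them to your environment.

```shell
# Restart YARN so the new yarn-site.xml settings take effect
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh

# Re-run the WordCount example (HDFS input/output paths are placeholders;
# the output directory must not already exist)
$HADOOP_HOME/bin/hadoop jar \
    $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar \
    wordcount /input /output
```

If the reducers previously failed with `Exceeded MAX_FAILED_UNIQUE_FETCHES`, the job should now complete its shuffle phase normally.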
