Too many fetch failures

 

http://lucene.472066.n3.nabble.com/Reg-Too-many-fetch-failures-Error-td4037975.html

 

http://www.slideshare.net/cloudera/hadoop-troubleshooting-101-kate-ting-cloudera

 


From http://lucene.472066.n3.nabble.com/Reg-Too-many-fetch-failures-Error-td4037975.html:

 

As you may be aware, this means the reducers are unable to fetch intermediate map output from the TaskTrackers that ran the map tasks. You can try:

* increasing tasktracker.http.threads, so there are more threads to handle fetch requests from reducers;

* decreasing mapred.reduce.parallel.copies (mapreduce.reduce.parallel.copies in newer releases), so fewer copy/fetch operations run in parallel.

 

It could also be due to a temporary DNS issue.
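To rule out the DNS possibility, a quick check is to verify that each node's hostname actually resolves (the worker IP in the comment is a placeholder, not from the original thread):

```shell
# Verify forward resolution of this node's hostname (run on every TaskTracker).
# A node whose name does not resolve can trigger sporadic fetch failures.
if getent hosts "$(hostname)" >/dev/null; then
    echo "hostname resolves"
else
    echo "hostname does NOT resolve -- check DNS or /etc/hosts"
fi
# Reverse lookup for a specific worker's IP (10.0.0.12 is a placeholder):
# getent hosts 10.0.0.12
```

If resolution is flaky only intermittently, adding the cluster nodes to /etc/hosts on each machine takes DNS out of the picture.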

 

 

In the cdh3u4 source:

In TaskTracker:

workerThreads = conf.getInt("tasktracker.http.threads", 40);

 

In ReduceTask:

this.numCopiers = conf.getInt("mapred.reduce.parallel.copies", 5);
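Given the cdh3u4 defaults above (40 HTTP threads, 5 parallel copies), both knobs can be overridden in mapred-site.xml. This is a sketch with illustrative values, not tuned recommendations:

```xml
<!-- mapred-site.xml (illustrative values) -->
<property>
  <name>tasktracker.http.threads</name>
  <!-- default 40; more threads to serve reduce-side fetch requests -->
  <value>80</value>
</property>
<property>
  <name>mapred.reduce.parallel.copies</name>
  <!-- default 5; lower it if parallel fetches overwhelm the TaskTrackers -->
  <value>3</value>
</property>
```

Note that tasktracker.http.threads is read by the TaskTracker, so it must be set on the worker nodes and needs a TaskTracker restart, while mapred.reduce.parallel.copies can also be set per job.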

 

 

http://lucene.472066.n3.nabble.com/Error-Too-Many-Fetch-Failures-td3990324.html

 

$ cat /proc/sys/net/core/somaxconn
1024

$ ulimit -n
131072
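The session above shows healthy values. On a box where these come back lower, they can be inspected and raised as sketched below; the numbers are illustrative, raising somaxconn requires root, and the nofile change only takes effect on a new login session:

```shell
# Current listen-backlog cap and per-process open-file limit
cat /proc/sys/net/core/somaxconn
ulimit -n

# Raise them (illustrative values; run as root):
# sysctl -w net.core.somaxconn=1024
# echo 'mapred  soft  nofile  131072' >> /etc/security/limits.conf
```

A small somaxconn matters here because the TaskTracker's HTTP server can drop reduce-side fetch connections when its listen backlog overflows, which surfaces as fetch failures rather than an obvious local error.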

 

 

 

 

 
