The Number of Map Tasks and Reduce Tasks in MapReduce

1. Number of Map Tasks
Generally driven by the file block size: File total size / File block size is usually the number of Map Tasks.
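For example (a rough illustration, assuming the split size equals the block size): a 1 GB input with a 128 MB block size gives 1024 MB / 128 MB = 8 map tasks.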


2. Number of Reduce Tasks
Can be controlled precisely through configuration (see the sketch after this list):
1) JobConf's setNumReduceTasks(int num)
2) -Dmapred.reduce.tasks=10
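A minimal sketch of an old-API (org.apache.hadoop.mapred) driver that uses both options; the job name and the input/output paths are placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.util.GenericOptionsParser;

public class ReduceCountDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // 2) lets a command-line "-Dmapred.reduce.tasks=10" flow into the job configuration
        String[] rest = new GenericOptionsParser(conf, args).getRemainingArgs();

        JobConf job = new JobConf(conf, ReduceCountDemo.class);
        job.setJobName("reduce-count-demo");   // placeholder name

        // 1) set the reducer count explicitly in code (overrides any -D value)
        job.setNumReduceTasks(10);

        FileInputFormat.setInputPaths(job, new Path(rest[0]));   // placeholder input path
        FileOutputFormat.setOutputPath(job, new Path(rest[1]));  // placeholder output path

        JobClient.runJob(job);   // identity mapper/reducer by default in the old API
    }
}

Run it with something like (jar name and paths are hypothetical): hadoop jar demo.jar ReduceCountDemo -Dmapred.reduce.tasks=10 /in /out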




Quote 1:
You cannot generalize how the number of mappers/reducers should be set.


Number of Mappers: You cannot set the number of mappers explicitly to a certain number (there are parameters for this, but they do not take effect). It is decided by the number of Input Splits created by Hadoop for your given set of input. You may control this by setting the mapred.min.split.size parameter. For more, read the InputSplit section here. If a lot of mappers are being generated because of a huge number of small files and you want to reduce the number of mappers, then you will need to combine data from more than one file. Read this: How to combine input files to get to a single mapper and control number of mappers.
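A minimal sketch of raising the split-size lower bound (using the old mapred.* property name from the quote; 256 MB is just an illustrative value):

import org.apache.hadoop.mapred.JobConf;

public class SplitSizeDemo {
    public static void main(String[] args) {
        JobConf conf = new JobConf();
        // Raise the lower bound on the split size to 256 MB so that one split can
        // span two 128 MB blocks of the same file, which means fewer map tasks.
        // Note: merging many small files into one split still needs a combining
        // InputFormat, as the linked answer explains.
        conf.setLong("mapred.min.split.size", 256L * 1024 * 1024);
        System.out.println("mapred.min.split.size = " + conf.get("mapred.min.split.size"));
    }
}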


To quote from the wiki page:


The number of maps is usually driven by the number of DFS blocks in the input files, although that causes people to adjust their DFS block size to adjust the number of maps. The right level of parallelism for maps seems to be around 10-100 maps/node, although we have taken it up to 300 or so for very CPU-light map tasks. Task setup takes a while, so it is best if the maps take at least a minute to execute.


Actually controlling the number of maps is subtle. The mapred.map.tasks parameter is just a hint to the InputFormat for the number of maps. The default InputFormat behavior is to split the total number of bytes into the right number of fragments. However, in the default case the DFS block size of the input files is treated as an upper bound for input splits. A lower bound on the split size can be set via mapred.min.split.size. Thus, if you expect 10TB of input data and have 128MB DFS blocks, you'll end up with 82k maps, unless your mapred.map.tasks is even larger. Ultimately the InputFormat determines the number of maps.
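Checking the arithmetic in that example: 10 TB = 10 × 1024 × 1024 MB = 10,485,760 MB, and 10,485,760 MB / 128 MB = 81,920 splits, i.e. the "82k maps" mentioned above.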


The number of map tasks can also be increased manually using the JobConf's conf.setNumMapTasks(int num). This can be used to increase the number of map tasks, but will not set the number below that which Hadoop determines via splitting the input data.
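A short sketch of that call (the value 100 is illustrative); as the quote says, it is only a hint and cannot set the number below what Hadoop derives from the input splits:

import org.apache.hadoop.mapred.JobConf;

public class MapCountHintDemo {
    public static void main(String[] args) {
        JobConf conf = new JobConf();
        // Hint to the InputFormat: ask for at least 100 map tasks.
        // Hadoop will not go below the number derived from the input splits.
        conf.setNumMapTasks(100);
        System.out.println("mapred.map.tasks = " + conf.get("mapred.map.tasks"));
    }
}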
Number of Reducers: You can explicitly set the number of reducers. Just set the parameter mapred.reduce.tasks. There are guidelines for setting this number, but usually the default number of reducers is good enough. At times a single report file is required; in those cases you might want to set the number of reducers to 1.


Again, to quote from the wiki:


The right number of reduces seems to be 0.95 or 1.75 * (nodes * mapred.tasktracker.tasks.maximum). At 0.95 all of the reduces can launch immediately and start transferring map outputs as the maps finish. At 1.75 the faster nodes will finish their first round of reduces and launch a second round of reduces, doing a much better job of load balancing.
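For a concrete (hypothetical) cluster of 10 nodes, each configured with 2 reduce slots: 0.95 × (10 × 2) = 19 reducers, or 1.75 × (10 × 2) = 35 reducers.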


Currently the number of reduces is limited to roughly 1000 by the buffer size for the output files (io.buffer.size * 2 * numReduces << heapSize). This will be fixed at some point, but until it is it provides a pretty firm upper bound.
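To illustrate the bound with assumed numbers (real buffer defaults vary by Hadoop version): a 64 KB buffer and 1,000 reducers give 64 KB × 2 × 1,000 ≈ 125 MB of buffer space, which has to stay well below the task heap size.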


The number of reduces also controls the number of output files in the output directory, but usually that is not important because the next map/reduce step will split them into even smaller splits for the maps.
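For example, a job that runs 4 reducers writes part-00000 through part-00003 into the output directory (part-r-00000 and so on with the newer mapreduce API).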


The number of reduce tasks can also be increased in the same way as the map tasks, via JobConf's conf.setNumReduceTasks(int num).
Source: http://stackoverflow.com/questions/16414664/pseudo-distributed-number-map-and-reduce-tasks/16415522#16415522


Quote 2:
The number of maps is decided based on the choice of InputFormat class. By default it is the TextInputFormat class, which creates the same number of maps as the number of blocks. There will be an exception only if the last record is broken across two blocks (in that case the number of maps will be the number of blocks minus one). The number of reducers is a configuration choice, which can even be specified during job submission. By default the number of reducers is one.
Source: http://stackoverflow.com/questions/18461638/how-does-hadoop-decides-the-no-of-reducers-runs-for-given-senario


Quote 3:
The number of mappers is set according to File total size / File block size, but you can set configuration variables to change this behaviour, for example: map minimum split size, map maximum split size, minimum map number, etc. If you want to know more about these variables, look at mapred-default, hdfs-default and core-default.
Source: http://stackoverflow.com/questions/18461638/how-does-hadoop-decides-the-no-of-reducers-runs-for-given-senario
