In Hadoop, when a job does not set these values explicitly, the number of map tasks is determined by the size of the job's input data (the exact calculation is explained below), while the number of reduce tasks defaults to 1. Why 1? Because the number of output files a job produces is determined by the number of reduce tasks, and a job's result is normally written to a single output file, so the default number of reduces is 1. So how should we adjust the numbers of map and reduce tasks if we want to speed up job execution?
Before going into detail, let's first look at what the official Hadoop documentation says.
Number of Maps
The number of maps is usually driven by the number of DFS blocks in the input files. Although that causes people to adjust their DFS block size to adjust the number of maps. The right level of parallelism for maps seems to be around 10-100 maps/node, although we have taken it up to 300 or so for very cpu-light map tasks. Task setup takes awhile, so it is best if the maps take at least a minute to execute.
Actually controlling the number of maps is subtle. The mapred.map.tasks parameter is just a hint to the InputFormat for the number of maps. The default InputFormat behavior is to split the total number of bytes into the right number of fragments. However, in the default case the DFS block size of the input files is treated as an upper bound for input splits. A lower bound on the split size can be set via mapred.min.split.size. Thus, if you expect 10TB of input data and have 128MB DFS blocks, you'll end up with 82k maps, unless your mapred.map.tasks is even larger. Ultimately the InputFormat determines the number of maps.
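The 10 TB / 128 MB arithmetic quoted above is easy to verify. Here is a minimal sketch in plain Java (no Hadoop APIs involved) that reproduces the "82k maps" figure:

```java
public class MapCountEstimate {
    public static void main(String[] args) {
        long inputBytes = 10L * 1024 * 1024 * 1024 * 1024; // 10 TB of input
        long blockBytes = 128L * 1024 * 1024;              // 128 MB DFS block size
        // With the default InputFormat, the DFS block size is the upper bound
        // for a split, so the map count is roughly input size / block size.
        long numMaps = inputBytes / blockBytes;
        System.out.println(numMaps); // 81920, i.e. the "82k maps" in the docs
    }
}
```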
The number of map tasks can also be increased manually using the JobConf's conf.setNumMapTasks(int num). This can be used to increase the number of map tasks, but will not set the number below that which Hadoop determines via splitting the input data.
Number of Reduces
The right number of reduces seems to be 0.95 or 1.75 * (nodes * mapred.tasktracker.tasks.maximum). At 0.95 all of the reduces can launch immediately and start transferring map outputs as the maps finish. At 1.75 the faster nodes will finish their first round of reduces and launch a second round of reduces doing a much better job of load balancing.
Currently the number of reduces is limited to roughly 1000 by the buffer size for the output files (io.buffer.size * 2 * numReduces << heapSize). This will be fixed at some point, but until it is it provides a pretty firm upper bound.
The number of reduces also controls the number of output files in the output directory, but usually that is not important because the next map/reduce step will split them into even smaller splits for the maps.
The number of reduce tasks can also be increased in the same way as the map tasks, via JobConf's conf.setNumReduceTasks(int num).
The passage above explains how the numbers of map and reduce tasks are determined. The number of maps is derived from the input data size divided by the block size (64 MB by default), while the number of reduces defaults to 1 and has a recommended range that depends on your number of nodes. The usual guideline is: number of nodes X maximum reduces per TaskTracker (default 2) X a factor between 0.95 and 1.75. Note that the figures above are only an upper-bound recommendation; the number actually used at runtime still depends on your job's own settings.
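As a concrete instance of that guideline, here is a small sketch in plain Java. Note that the cluster size is a made-up number for illustration; only the slot default (2) and the 0.95/1.75 factors come from the text above:

```java
public class ReduceCountEstimate {
    public static void main(String[] args) {
        int nodes = 10;               // hypothetical cluster size, for illustration only
        int maxReducesPerTracker = 2; // TaskTracker default from the text above
        // 0.95 lets every reduce launch as soon as the maps finish;
        // 1.75 adds a second wave of reduces for better load balancing.
        int lowEnd  = (int) (0.95 * nodes * maxReducesPerTracker);
        int highEnd = (int) (1.75 * nodes * maxReducesPerTracker);
        System.out.println(lowEnd + " to " + highEnd); // 19 to 35
    }
}
```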
If you want to set the numbers of map and reduce tasks for a job, you can use the following methods.
map: To change the number of maps, you can adjust the block size in the configuration file to increase or decrease the map count, or call JobConf's conf.setNumMapTasks(int num). However, even if you set a number this way, the actual number at runtime will never be smaller than the number of splits the input data actually produces. That is, if you set the map count to 2 in your program but the input data splits into 3 pieces, the job will run with 3 map tasks, not the 2 you set.
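The "hint, not a floor" behavior just described can be sketched as follows. This is plain Java mimicking the framework's decision rule, not actual Hadoop source code:

```java
public class EffectiveMapCount {
    // setNumMapTasks is only a hint to the InputFormat: the framework
    // never runs fewer map tasks than the number of input splits.
    static int effectiveMaps(int requestedMaps, int inputSplits) {
        return Math.max(requestedMaps, inputSplits);
    }

    public static void main(String[] args) {
        // The example from the text: you request 2 maps, the input splits into 3.
        System.out.println(effectiveMaps(2, 3)); // the job runs 3 maps, not 2
    }
}
```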
reduce: To change the number of reduces, use one of the following methods:
When debugging in code, you can declare a Job object and call job.setNumReduceTasks(tasks), or set it on the configuration with conf.setStrings("mapred.reduce.tasks", values);
When submitting the job from the command line, you can instead pass runtime parameters:
bin/hadoop jar examples.jar job_name -Dmapred.map.tasks=nums -Dmapred.reduce.tasks=nums INPUT OUTPUT