Contents:
1. Hash Shuffle fully demystified;
2. Shuffle Pluggable demystified;
3. Sorted Shuffle demystified;
4. Shuffle performance tuning;
Spark's map/reduce model is itself built around the idea of shuffle.
Project Tungsten
What exactly is shuffle?
In Hadoop it happens between the Mapper and the Reducer.
The word literally means shuffling a deck of cards. The key reason a shuffle is needed is that data sharing some common characteristic must ultimately be gathered onto a single compute node to be processed together.
What problems may a shuffle face? Shuffle only happens when Tasks actually run (it is fused into Spark's operators):
1. The data volume can be very large;
2. How to classify the data, i.e. how to partition it: Hash, Sort, Tungsten;
3. Load balancing and data skew;
4. Network transfer efficiency: a trade-off has to be made between compression and decompression, and serialization/deserialization also has to be considered (see the configuration sketch after the note below).
Question:
Note: when a Task actually computes, do everything possible to give the data PROCESS_LOCAL locality; failing that, increase the number of data partitions so that each Task processes less data, unless your computation is particularly complex.
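A minimal tuning sketch of the two levers just mentioned: the compression/serialization trade-off and raising the number of partitions so each Task handles a smaller slice (the input/output paths and the partition count 400 are illustrative placeholders):
import org.apache.spark.{SparkConf, SparkContext}

object ShuffleTuningSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("ShuffleTuningSketch")
      // Compress shuffle output: trades CPU for less network/disk traffic.
      .set("spark.shuffle.compress", "true")
      // Kryo is usually faster and more compact than Java serialization.
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    val sc = new SparkContext(conf)

    val pairs = sc.textFile("hdfs:///path/to/input")   // illustrative path
      .map(line => (line.split("\t")(0), 1))

    // Raise reduce-side parallelism so every task processes less data.
    val counts = pairs.reduceByKey(_ + _, 400)         // 400 is illustrative
    counts.saveAsTextFile("hdfs:///path/to/output")    // illustrative path
    sc.stop()
  }
}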
==========Hash Shuffle Fully Demystified============
1. The hash is the key's hashCode. The key cannot be an Array: an array's hashCode is identity-based, so it cannot be hashed meaningfully (see the sketch after this list).
2. Hash Shuffle needs no sorting, so in theory it saves the time Hadoop MapReduce wastes on sorting during its shuffle; in real production environments a large share of shuffles do not need sorting at all.
Question to ponder: is sort-free Hash Shuffle necessarily faster than Sorted Shuffle, which does sort?
Not necessarily. On small data sets Hash Shuffle is generally (much) faster than Sorted Shuffle, but once the data volume is large, Sorted Shuffle is generally (much) faster than Hash Shuffle.
3. Each Shuffle Map Task uses the key's hash value to decide which partition the key should be written to, then writes the result to a separate file for that partition. Each Task therefore produces R files (R being the number of Reducers, i.e. the parallelism of the next Stage); if the current Stage has M Shuffle Map Tasks, M * R files are produced in total;
Note: in the vast majority of cases a shuffle has to go over the network; only when the Mapper and the Reducer happen to be on the same machine can the data simply be read from the local disk.
In every Stage except the last one, the Task type is Shuffle Map Task. How many buckets the data is split into is determined by the number of tasks in the next Stage, not by the current Stage's own parallelism, and that is how many files each Task generates locally.
The diagram below would produce 6 * 2 files. What if it were 10,000 * 1,000? Then the problem becomes serious.
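The sketch below (an illustrative re-implementation, not Spark's own HashPartitioner code) shows the idea: a record's target bucket is its key's hashCode modulo R. It also shows why Array keys don't work, and where the M * R file explosion comes from.
// Illustrative only: the bucket a key lands in under hash partitioning.
def bucketFor(key: Any, numReducers: Int): Int = {
  val raw = key.hashCode % numReducers
  if (raw < 0) raw + numReducers else raw      // keep the bucket id non-negative
}

bucketFor("spark", 3) == bucketFor("spark", 3) // true: equal keys share a bucket

// Array keys break this: Array.hashCode is identity-based in Scala/Java,
// so two arrays with identical contents usually land in different buckets.
Array(1, 2).hashCode == Array(1, 2).hashCode   // almost always false

// File explosion: M map tasks * R reducers separate output files.
val files = 10000L * 1000                      // 10,000,000 files for 10,000 * 1,000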
The two fatal weaknesses of Hash Shuffle:
1. The shuffle produces a huge number of small files on disk, which leads to a large amount of slow, inefficient IO;
2. Memory is not shared!!! Because memory has to hold a huge number of file handles and temporary buffers, once the workload gets large enough memory cannot cope, leading to OOM, unresponsiveness and so on.
Each Writer Handler reportedly defaults to roughly 50 KB of buffer memory; with one billion of them, how much would that be? On the order of 1e9 * 50 KB ≈ 46 TB!!!
That becomes a performance bottleneck.
To mitigate these problems (too many files open at once makes the Writer Handlers use too much memory, and too many files means large amounts of random reads and writes, i.e. extremely inefficient disk IO), Spark later introduced the consolidation mechanism (the spark.shuffle.consolidateFiles switch in the 1.x line) to merge the small files. With it, a shuffle produces CPU cores * R files. When the number of Shuffle Map Tasks is clearly larger than the number of cores available to run concurrently, the number of files produced by the shuffle drops sharply.
Even so, when the parallelism is very high this is still quite troublesome... (see the back-of-the-envelope sketch below).
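A back-of-the-envelope sketch of the file counts (all numbers are made up for illustration):
// Illustrative cluster/job sizes.
val mapTasks   = 10000   // M: Shuffle Map Tasks in the current Stage
val reducers   = 1000    // R: tasks (parallelism) of the next Stage
val totalCores = 16 * 50 // cores per executor * number of executors

// Plain Hash Shuffle: every map task writes one file per reducer.
val plainHashFiles    = mapTasks.toLong * reducers      // 10,000,000

// Consolidated Hash Shuffle: concurrently running tasks reuse file groups,
// so the count is bounded by cores * R rather than M * R.
val consolidatedFiles = totalCores.toLong * reducers    //    800,000

// Sort-Based Shuffle (for comparison): one data file + one index file per map task.
val sortShuffleFiles  = mapTasks.toLong * 2             //     20,000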
For this reason Spark 1.0 introduced the pluggable Shuffle framework, which makes it convenient to customize the shuffle module when upgrading the system, and also lets third-party developers build the shuffle module best suited to their actual business scenarios.
Spark 1.1 introduced Sort-Based Shuffle.
The core interface is ShuffleManager; the concrete default implementations include HashShuffleManager, SortShuffleManager, and so on:
package org.apache.spark.shuffle

import org.apache.spark.{TaskContext, ShuffleDependency}

/**
 * Pluggable interface for shuffle systems. A ShuffleManager is created in SparkEnv on the driver
 * and on each executor, based on the spark.shuffle.manager setting. The driver registers shuffles
 * with it, and executors (or tasks running locally in the driver) can ask to read and write data.
 *
 * NOTE: this will be instantiated by SparkEnv so its constructor can take a SparkConf and
 * boolean isDriver as parameters.
 */
private[spark] trait ShuffleManager {

  /**
   * Register a shuffle with the manager and obtain a handle for it to pass to tasks.
   */
  def registerShuffle[K, V, C](
      shuffleId: Int,
      numMaps: Int,
      dependency: ShuffleDependency[K, V, C]): ShuffleHandle

  /** Get a writer for a given partition. Called on executors by map tasks. */
  def getWriter[K, V](handle: ShuffleHandle, mapId: Int, context: TaskContext): ShuffleWriter[K, V]

  /**
   * Get a reader for a range of reduce partitions (startPartition to endPartition-1, inclusive).
   * Called on executors by reduce tasks.
   */
  def getReader[K, C](
      handle: ShuffleHandle,
      startPartition: Int,
      endPartition: Int,
      context: TaskContext): ShuffleReader[K, C]

  /**
   * Remove a shuffle's metadata from the ShuffleManager.
   * @return true if the metadata removed successfully, otherwise false.
   */
  def unregisterShuffle(shuffleId: Int): Boolean

  /**
   * Return a resolver capable of retrieving shuffle block data based on block coordinates.
   */
  def shuffleBlockResolver: ShuffleBlockResolver

  /** Shut down this ShuffleManager. */
  def stop(): Unit
}
The concrete configuration in Spark 1.6.0 is as follows:
// Let the user specify short names for shuffle managers
val shortShuffleMgrNames = Map(
  "hash" -> "org.apache.spark.shuffle.hash.HashShuffleManager",
  "sort" -> "org.apache.spark.shuffle.sort.SortShuffleManager",
  "tungsten-sort" -> "org.apache.spark.shuffle.sort.SortShuffleManager")
val shuffleMgrName = conf.get("spark.shuffle.manager", "sort") // the default is "sort"
val shuffleMgrClass = shortShuffleMgrNames.getOrElse(shuffleMgrName.toLowerCase, shuffleMgrName)
val shuffleManager = instantiateClass[ShuffleManager](shuffleMgrClass)
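Because of the getOrElse above, spark.shuffle.manager accepts either one of the short names or a fully qualified class name, which is how a custom pluggable shuffle is wired in. A brief sketch (com.example.MyShuffleManager is hypothetical and would have to implement the ShuffleManager trait shown earlier):
import org.apache.spark.SparkConf

// Use one of the built-in short names ("hash", "sort", "tungsten-sort"):
val sortConf = new SparkConf().set("spark.shuffle.manager", "sort")

// Or plug in a custom implementation by fully qualified class name
// (com.example.MyShuffleManager is a hypothetical class extending ShuffleManager):
val customConf = new SparkConf().set("spark.shuffle.manager", "com.example.MyShuffleManager")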
Every Mapper writes one file for every Reducer.
The Hash approach is best suited to scenarios where the data volume is fairly small and no sorting is needed.
When the data volume is small, sorting costs performance; once the volume is large, sorting pays off by reducing the number of files.
Sorted Shuffle sorts records by the partition id their key maps to under the partitioner, and performs a merge-sort at the end.
There is an index file dedicated to recording the sorted layout; typically on the order of 10-100 files take part in the actual merge-sort.
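A sketch of the idea behind the data-file-plus-index-file layout (illustrative only, not the real resolver code): the index stores the running byte offsets of each reducer's segment in the single sorted data file, so a reducer can seek straight to its slice.
// Illustrative byte lengths of each reducer's segment in one map task's data file.
val partitionLengths = Array(120L, 0L, 340L, 75L)   // made-up sizes for 4 reducers

// The index file is essentially the running offsets of those segments.
val index = partitionLengths.scanLeft(0L)(_ + _)    // Array(0, 120, 120, 460, 535)

// Reducer i reads bytes [index(i), index(i + 1)) of the data file.
def segmentFor(reducerId: Int): (Long, Long) = (index(reducerId), index(reducerId + 1))

segmentFor(2)                                       // (120, 460)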