HBase's region split is a fairly involved process that touches many parts of the system.
Client side:
1. HBaseAdmin.split
2. Determines whether the split target is a region or a table
3. Calls HBaseAdmin.split(final ServerName sn, final HRegionInfo hri, byte[] splitPoint)
4. Makes the RPC call to HRegionInterface.splitRegion
5. The server side completes the split asynchronously.
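To make the client path concrete, here is a minimal sketch of triggering a split with the 0.94-era client API (the table name "william" and the split point are illustrative only):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class SplitDemo {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        try {
          // Accepts a table name or a region name; the second argument
          // is the explicit split point (splitPoint in step 3 above).
          admin.split("william", "row-0500");
          // Returns once the RPC is accepted; the split itself finishes
          // asynchronously on the RegionServer (step 5 above).
        } finally {
          admin.close();
        }
      }
    }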
Server side:
1. HRegionServer.splitRegion
This method must flush the region and force a compaction; the flush is synchronous, the compaction asynchronous (steps 1-4 below are condensed in the sketch after step 4)
1. checkOpen checks that the RegionServer is still open
2. region.flushcache() flushes the region
3. region.forceSplit
1. Sets the flags: splitRequest=true, and explicitSplitPoint to the given split key
4. compactSplitThread.requestSplit starts a SplitRequest thread to handle the split request
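Steps 1-4 condense to roughly the following (a sketch modeled on the 0.94-era HRegionServer.splitRegion; exception handling and logging omitted):

    public void splitRegion(HRegionInfo regionInfo, byte[] splitPoint)
        throws NotServingRegionException, IOException {
      checkOpen();                    // 1. is this RegionServer still open?
      HRegion region = getRegion(regionInfo.getRegionName());
      region.flushcache();            // 2. synchronous flush
      region.forceSplit(splitPoint);  // 3. splitRequest=true, explicitSplitPoint=splitPoint
      // 4. hand the region to the asynchronous split/compact thread
      compactSplitThread.requestSplit(region, region.checkSplit());
    }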
5. SplitRequest.run
1. Instantiates a SplitTransaction. This class is not thread safe; the caller must ensure the split is run by only one thread.
splitdir=file:/E:/tmp/hadoop/data/william/{ParentHFileName}/.splits
Creates the table/{RegionName}/.splits directory
2. SplitTransaction.prepare
1. Sanity-checks the region info
2. Instantiates two HRegionInfo objects, hri_a and hri_b, whose start/end keys are derived from the given split key (sketched below)
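A sketch of the two daughter descriptors prepare() builds (0.94-era HRegionInfo constructor; hri_a covers [startKey, splitKey) and hri_b covers [splitKey, endKey)):

    HRegionInfo parentInfo = this.parent.getRegionInfo();
    HRegionInfo hri_a = new HRegionInfo(parentInfo.getTableName(),
        parentInfo.getStartKey(), splitKey);  // first (bottom) half
    HRegionInfo hri_b = new HRegionInfo(parentInfo.getTableName(),
        splitKey, parentInfo.getEndKey());    // second (top) half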
3. SplitTransaction.execute
1. createDaughters performs the pre-split work on the region, i.e. the preparation steps below
0. presplit
1. fileSplitTimeout: reads the file-split timeout from the configuration, default 30000 ms
2. createNodeSplitting
Creates an ephemeral node in ZooKeeper (RegionTransitionData); see the sketch below
1. Creates the ZooKeeper node /hbase/unassigned/{parentHRegionFileName} with state SPLITTING
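For illustration, here is what the node creation amounts to in raw ZooKeeper terms; the real code goes through ZKAssign.createNodeSplitting and stores a serialized RegionTransitionData payload:

    // Illustrative only, not the actual ZKAssign code.
    void createSplittingNode(org.apache.zookeeper.ZooKeeper zk,
        String parentRegionName, byte[] regionTransitionData) throws Exception {
      zk.create("/hbase/unassigned/" + parentRegionName,
          regionTransitionData,                        // state = SPLITTING
          org.apache.zookeeper.ZooDefs.Ids.OPEN_ACL_UNSAFE,
          org.apache.zookeeper.CreateMode.EPHEMERAL);  // vanishes with the RS session
    }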
2. close parent HRegion
3. splitStoreFiles: creates the daughter region dirs under the parent's .splits dir
1. Creates the temporary .splits directory, e.g. E:\tmp\hadoop\data\{TableName}\{ParentHRegionName}\.splits
2. Under .splits, creates a top and a bottom file per store file; each Reference records the split key and which half (top/bottom) it points to (sketched after this list)
e.g. E:\tmp\hadoop\data\william\{ParentHRegionFileName}\.splits\{DaughterHRegionFile}\f1\hfile***
3. Moves the two daughter halves out of .splits into {tableName}/
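Per parent store file this amounts to writing two Reference files, roughly as follows (0.94-era Reference API; the directory and name variables are illustrative):

    // The bottom half (lower keys) goes to daughter A, the top half to daughter B.
    Reference bottom = new Reference(splitKey, Reference.Range.bottom);
    Reference top = new Reference(splitKey, Reference.Range.top);
    // Reference file name = parent hfile name + "." + parent region's encoded name.
    bottom.write(fs, new Path(daughterAFamilyDir, hfileName + "." + parentEncodedName));
    top.write(fs, new Path(daughterBFamilyDir, hfileName + "." + parentEncodedName));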
4. Offline parent in meta: Puts the split info into the .META. table, e.g.:
{"totalColumns":3,"families":{"info":[{"timestamp":9223372036854775807,"qualifier":"regioninfo","vlen":82},
{"timestamp":9223372036854775807,"qualifier":"splitA","vlen":84},
{"timestamp":9223372036854775807,"qualifier":"splitB","vlen":86}]},
"row":"william,,1371057759135.5a0dd042f18c323099b0cbfdb38bb228."}
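The three columns in that Put are built roughly like this (a sketch modeled on 0.94's MetaEditor.offlineParentInMeta; the real code goes through the CatalogTracker and retries):

    HRegionInfo copy = new HRegionInfo(parent);  // copy, then mark offline + split
    copy.setOffline(true);
    copy.setSplit(true);
    Put put = new Put(copy.getRegionName());
    put.add(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER,
        Writables.getBytes(copy));               // qualifier "regioninfo"
    put.add(HConstants.CATALOG_FAMILY, HConstants.SPLITA_QUALIFIER,
        Writables.getBytes(hri_a));              // qualifier "splitA"
    put.add(HConstants.CATALOG_FAMILY, HConstants.SPLITB_QUALIFIER,
        Writables.getBytes(hri_b));              // qualifier "splitB"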
2. openDaughters (asynchronous)
1. Instantiates two DaughterOpener threads, one per daughter HRegion
2. openDaughterRegion: opens the daughter regions, adds them to the online list and updates meta.
The two DaughterOpeners run concurrently, adding the regions to the online list and updating meta (see the sketch after step 3)
1. checkCompressionCodecs
2. initialize HRegion (HRegion.initializeRegionInternals)
3. HLog.set(SeqNum)
3. Checks for exceptions thrown by the openers
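The parallel open plus the exception check look roughly like this (modeled on SplitTransaction.openDaughters, whose inner DaughterOpener threads wrap openDaughterRegion):

    DaughterOpener aOpener = new DaughterOpener(server, regionA);
    DaughterOpener bOpener = new DaughterOpener(server, regionB);
    aOpener.start();
    bOpener.start();
    aOpener.join();  // wait until both daughters have opened
    bOpener.join();
    if (aOpener.getException() != null || bOpener.getException() != null) {
      throw new IOException("Failed to open daughter region(s)");
    }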
4. postOpenDeployTasks (second one)
1. Iterates over all Stores of the HRegion
2. Triggers a major compaction; the trigger condition is that the Store's files are References ->
enters the compaction flow {since the file isReference, the HFile is read through a HalfStoreFileReader; is how many rows come back decided by the scanner? see the sketch below}
1. Creates a .tmp file under the daughter region and writes the scanned data into it
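On the open question above: conceptually the half reader exposes only the keys on its own side of the split key, so the scanner returns exactly that half's rows. A conceptual predicate (this is not the real HalfStoreFileReader API):

    // "bottom" sees [startKey, splitKey), "top" sees [splitKey, endKey).
    static boolean inHalf(byte[] row, byte[] splitKey, boolean top) {
      int cmp = org.apache.hadoop.hbase.util.Bytes.compareTo(row, splitKey);
      return top ? cmp >= 0 : cmp < 0;
    }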
5. addToOnlineRegions // Should add it to OnlineRegions
6. postOpenDeployTasks (first one)
7. addToOnlineRegions // Should add it to OnlineRegions
(At this point the daughter regions are available)
3. transitionZKNode: finishes the split transaction by transitioning the zknode; also covers follow-up work such as the post-split handling
1. Updates the parent node's ZK state to SPLIT (the Master then takes over, and CatalogJanitor later cleans up the leftover parent directory); sketched below
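For illustration, the final transition in raw ZooKeeper terms (the real code goes through ZKAssign; the znode version makes the update a compare-and-set):

    // Illustrative only, not the actual ZKAssign code.
    void transitionToSplit(org.apache.zookeeper.ZooKeeper zk,
        String parentZNode, byte[] splitStateData) throws Exception {
      org.apache.zookeeper.data.Stat stat = new org.apache.zookeeper.data.Stat();
      zk.getData(parentZNode, false, stat);  // current state: SPLITTING
      // setData with the read version fails if anyone else raced us
      zk.setData(parentZNode, splitStateData, stat.getVersion());
    }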
Key classes:
1. CompactSplitThread: manages all compaction and split requests for regions
2. SplitRequest: the thread that executes the split
3. SplitTransaction: runs the split and handles rollback when the split fails; typical usage (from its javadoc):
* SplitTransaction st = new SplitTransaction(this.conf, parent, midKey)
* if (!st.prepare()) return;
* try {
*   st.execute(server, services);
* } catch (IOException ioe) {
*   try {
*     st.rollback(server, services);
*     return;
*   } catch (RuntimeException e) {
*     myAbortable.abort("Failed split, abort");
*   }
* }
It records the progress of the whole split via JournalEntry entries, which drive the rollback (sketched below).
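A conceptual sketch of that journal-driven rollback (entry names are illustrative, modeled on SplitTransaction.rollback, which undoes the completed steps in reverse order):

    ListIterator<JournalEntry> it = journal.listIterator(journal.size());
    while (it.hasPrevious()) {
      switch (it.previous()) {
        case SET_SPLITTING_IN_ZK:  /* delete the SPLITTING znode   */ break;
        case CREATE_SPLIT_DIR:     /* remove the .splits directory */ break;
        case CLOSED_PARENT_REGION: /* reopen the parent region     */ break;
        // ...one undo action per recorded step
      }
    }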
4. ZKAssign: handles assignment and state transitions of the split-related ZK nodes
5. StoreFileSplitter: an inner class of SplitTransaction that performs the actual per-HFile split, i.e. the file operations
6. Reference: a reference to the top or bottom half of a store file
7. HalfStoreFileReader: reads one half of an HFile when copying data out of a Reference during the split compaction