HBase Scan Source Code Analysis

HBase scan execution flow (based on HBase 0.94.1, the normal case):
Client side:
HTable table = new HTable(conf, "tableName");
Scan scan = new Scan();
scan.addColumn(...);
scan.setStartRow(...);
scan.setStopRow(...);
scan.setBatch(...);
ResultScanner ss = table.getScanner(scan);
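
For reference, here is a complete, runnable version of the client side against the 0.94 client API; the table name, column family, qualifier and row keys below are made up for illustration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "tableName");
    try {
      Scan scan = new Scan();
      scan.addColumn(Bytes.toBytes("A"), Bytes.toBytes("q1")); // family A, qualifier q1 (made up)
      scan.setStartRow(Bytes.toBytes("row-000"));              // inclusive start row
      scan.setStopRow(Bytes.toBytes("row-999"));               // exclusive stop row
      scan.setCaching(100);                                    // rows fetched per next() RPC
      ResultScanner ss = table.getScanner(scan);               // opens the scanner (see below)
      try {
        for (Result r : ss) {
          for (KeyValue kv : r.raw()) {
            System.out.println(kv);
          }
        }
      } finally {
        ss.close();  // tells the server to release the RegionScanner
      }
    } finally {
      table.close();
    }
  }
}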
HTable.getScanner(Scan scan)
returns a newly created ClientScanner.
ClientScanner(Configuration conf, Scan scan, byte[] tableName, HConnection connection)
At the end of the constructor the scan is initialized: ClientScanner.nextScanner(int nbRows, boolean done)
nextScanner mainly creates the ScannerCallable via
ClientScanner.getScannerCallable(byte[] localStartKey, int nbRows)
and then asks the HRegionServer to open the scanner.

The request is initiated by ScannerCallable.call(), which performs the RPC through openScanner:
long id = this.server.openScanner(this.location.getRegionInfo().getRegionName(),this.scan);

Over to the server side: after receiving the RPC request, a series of operations follows.
First, the call stack (traced with BTrace):
org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java)
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2068)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
java.lang.reflect.Method.invoke(Method.java:597)
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
This lands us in HRegionServer.openScanner.
The coprocessor hook is invoked:
s = r.getCoprocessorHost().preScannerOpen(scan);
Scan initialization entry point:
s = r.getScanner(scan);
The method that actually runs is:
HRegion.getScanner(Scan scan,List<KeyValueScanner> additionalScanners)
which initializes the RegionScanner:
HRegion.instantiateRegionScanner(Scan scan,List<KeyValueScanner> additionalScanners)
This returns a RegionScannerImpl instance, the class that actually executes the scan. Its constructor does a lot of work; the most important part is seeking.
Seeking to what? To the position where the scan starts, of course, which is exactly what you specify on the client through Scan.setStartRow and Scan.setStopRow. If you leave them out, you get a full-table scan by default.
HBase sits on top of Hadoop, so to locate a row we first have to find the right files on HDFS:
for (Map.Entry<byte[], NavigableSet<byte[]>> entry :
    scan.getFamilyMap().entrySet()) {
  Store store = stores.get(entry.getKey());
  StoreScanner scanner = store.getScanner(scan, entry.getValue());
  scanners.add(scanner);
}

(For background on the HBase architecture see http://www.tbdata.org/archives/1509 and http://www.searchtb.com/2011/01/understanding-hbase.html. Suppose a table test has one column family A: a Store is the storage for a single column family, so it corresponds to A.)

A StoreScanner is then initialized for each Store:

I. StoreScanner(Store, Scan, List<? extends KeyValueScanner>, ScanType, long, long)

  1. A ScanQueryMatcher is created first. It is the query matcher; its key method is match, and the MatchCode enum it returns tells the scan how to handle the current KeyValue.
  2. An HFile.Reader is created from each StoreFile: StoreFile.Reader r = file.createReader(). HFile.Reader encapsulates a client's open-and-iterate operations on an HFile.
  3. The HFileReaderV2 wrapped by StoreFile.Reader creates a ScannerV2 (an HFileScanner), which can seek quickly by using the HFile index.
  4. StoreFile.Reader and ScannerV2 are wrapped into a StoreFileScanner, yielding a List<StoreFileScanner>.
  5. The memStoreScanners and the storeFileScanners are copied into a single List<KeyValueScanner> and returned. The memStoreScanners are easy to understand: they reference the MemStore's in-memory data.
  6. At this point we hold all scanners for this store; selectScannersFrom then picks the suitable ones and returns the List<KeyValueScanner> that qualify. The main checks are StoreFile.Reader.passesTimerangeFilter, which drops files whose time range falls outside the scan, and passesBloomFilter, which tests whether the startRow can possibly be present (a simplified sketch follows this list).
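
A simplified sketch of that selection step, using hypothetical stand-in types rather than the real StoreFile.Reader: a candidate file scanner is kept only if its time range overlaps the scan and, for a single-row (get-style) scan, its Bloom filter does not rule the row out.

import java.util.ArrayList;
import java.util.List;

public class SelectScannersSketch {
  // Hypothetical stand-in for a per-StoreFile scanner plus the metadata used for pruning.
  interface FileScannerStub {
    boolean timeRangeOverlaps(long minTs, long maxTs); // like passesTimerangeFilter
    boolean bloomMightContain(byte[] row);             // like passesBloomFilter
  }

  static List<FileScannerStub> selectScannersFrom(List<FileScannerStub> all,
                                                  long scanMinTs, long scanMaxTs, byte[] startRow) {
    List<FileScannerStub> selected = new ArrayList<FileScannerStub>();
    for (FileScannerStub s : all) {
      if (!s.timeRangeOverlaps(scanMinTs, scanMaxTs)) {
        continue; // file holds no cells in the scan's time range
      }
      if (!s.bloomMightContain(startRow)) {
        continue; // Bloom filter says the row cannot be in this file
      }
      selected.add(s);
    }
    return selected;
  }
}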
II. As said above, the key step is seeking to the starting row. Now that the suitable KeyValueScanners exist, the code that performs the seek is:
    // Seek all scanners to the start of the Row (or if the exact matching row
    // key does not exist, then to the start of the next matching Row).
    // Always check bloom filter to optimize the top row seek for delete
    // family marker.
    if (explicitColumnQuery && lazySeekEnabledGlobally) {
      for (KeyValueScanner scanner : scanners) {
        scanner.requestSeek(matcher.getStartKey(), false, true);
      }
    } else {
      for (KeyValueScanner scanner : scanners) {
        scanner.seek(matcher.getStartKey());
      }
    }
Assuming explicit columns were specified, KeyValueScanner.requestSeek is executed.
scanner = heap.poll(); assuming the MemStore scanners hold no data, let's look at StoreFileScanner.
The BloomType is declared on the table; in the common case execution reaches enforceSeek and then seek:
if (!seekAtOrAfter(hfs, key)) {
  close();
  return false;
}
seekAtOrAfter ends up in HFileReaderV2.AbstractScannerV2.seekTo, which uses the HFile block index to point ScannerV2's blockBuffer at the startRow (sketched below).
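
The essence of that block-index seek is a binary search over the first keys of the data blocks. A self-contained sketch of the idea (not the HFileReaderV2 code):

public class BlockIndexSeekSketch {
  /**
   * Given the first key of every data block (sorted), return the index of the
   * block that may contain 'target': the last block whose first key is <= target.
   * Returns -1 if target sorts before the first block (seekTo reports "before start").
   */
  static int seekToBlock(byte[][] blockFirstKeys, byte[] target) {
    int lo = 0, hi = blockFirstKeys.length - 1, found = -1;
    while (lo <= hi) {
      int mid = (lo + hi) >>> 1;
      if (compare(blockFirstKeys[mid], target) <= 0) {
        found = mid;      // this block could contain the key
        lo = mid + 1;     // but a later block might fit better
      } else {
        hi = mid - 1;
      }
    }
    return found;
  }

  static int compare(byte[] a, byte[] b) {
    int n = Math.min(a.length, b.length);
    for (int i = 0; i < n; i++) {
      int d = (a[i] & 0xff) - (b[i] & 0xff);
      if (d != 0) return d;
    }
    return a.length - b.length;
  }

  public static void main(String[] args) {
    byte[][] index = { "a".getBytes(), "f".getBytes(), "m".getBytes() };
    System.out.println(seekToBlock(index, "g".getBytes())); // prints 1: "g" falls in the block starting at "f"
  }
}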

StoreFileScanner.cur is then assigned:

cur = hfs.getKeyValue() = new KeyValue(blockBuffer.array(),blockBuffer.arrayOffset() + blockBuffer.position());
(You can run hbase org.apache.hadoop.hbase.io.hfile.HFile -b -m -f hdfs:/hbase/tableName/path to print an HFile's block index and metadata.)
// Combine all seeked scanners with a heap
heap = new KeyValueHeap(scanners, store.comparator);
They are wrapped into a KeyValueHeap; this.current = pollRealKV() points current at the StoreFileScanner we just built.
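
The idea behind KeyValueHeap is a plain k-way merge: every scanner sits in a priority queue ordered by its current key, and pollRealKV exposes the scanner whose current entry is smallest. A minimal sketch with string keys standing in for KeyValues (hypothetical, not the HBase class):

import java.util.Comparator;
import java.util.Iterator;
import java.util.PriorityQueue;

public class KeyValueHeapSketch {
  // Hypothetical stand-in for a KeyValueScanner: a sorted iterator with a peek().
  static class SimpleScanner {
    private final Iterator<String> it;
    private String current;
    SimpleScanner(Iterator<String> it) { this.it = it; current = it.hasNext() ? it.next() : null; }
    String peek() { return current; }
    String next() { String r = current; current = it.hasNext() ? it.next() : null; return r; }
  }

  private final PriorityQueue<SimpleScanner> heap;
  private SimpleScanner current; // like KeyValueHeap.current: the scanner with the smallest key

  KeyValueHeapSketch(Iterable<SimpleScanner> scanners) {
    heap = new PriorityQueue<SimpleScanner>(11,
        new Comparator<SimpleScanner>() {
          public int compare(SimpleScanner a, SimpleScanner b) { return a.peek().compareTo(b.peek()); }
        });
    for (SimpleScanner s : scanners) {
      if (s.peek() != null) heap.add(s);
    }
    current = heap.poll(); // "pollRealKV": pick the scanner positioned at the smallest key
  }

  String next() {
    if (current == null) return null;
    String out = current.next();
    if (current.peek() != null) heap.add(current); // re-insert so it competes again
    current = heap.poll();
    return out;
  }

  public static void main(String[] args) {
    SimpleScanner a = new SimpleScanner(java.util.Arrays.asList("a", "d").iterator());
    SimpleScanner b = new SimpleScanner(java.util.Arrays.asList("b", "c").iterator());
    KeyValueHeapSketch h = new KeyValueHeapSketch(java.util.Arrays.asList(a, b));
    for (String s; (s = h.next()) != null; ) System.out.print(s + " "); // prints: a b c d
  }
}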

With that, StoreScanner initialization is complete. HRegionServer finally calls addScanner, which caches the RegionScannerImpl in a map; openScanner returns and the scan preparation is done.
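
The caching step at the end of openScanner boils down to: generate a scanner id, register the RegionScanner in a server-side map under that id, and return the id to the client, which sends it back with every next() RPC. A hedged sketch of that bookkeeping (names are illustrative, not the exact 0.94 fields):

import java.util.Random;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ScannerRegistrySketch {
  // Hypothetical stand-in for RegionScannerImpl.
  interface RegionScannerStub { /* next(), close(), ... */ }

  private final ConcurrentMap<String, RegionScannerStub> scanners =
      new ConcurrentHashMap<String, RegionScannerStub>();
  private final Random rand = new Random();

  // What openScanner ends with: register the scanner and return its id to the client.
  long addScanner(RegionScannerStub s) {
    long scannerId;
    do {
      scannerId = rand.nextLong();
      // retry until an unused id is found
    } while (scanners.putIfAbsent(String.valueOf(scannerId), s) != null);
    return scannerId;
  }

  // What the next(scannerId, nbRows) RPC starts with: look the scanner up again.
  RegionScannerStub lookup(long scannerId) {
    return scanners.get(String.valueOf(scannerId));
  }

  // What close(scannerId) does: drop it from the map.
  void removeScanner(long scannerId) {
    scanners.remove(String.valueOf(scannerId));
  }
}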


Basic structure:

RegionScannerImpl
    —KeyValueHeap
            —StoreScanner
                  —KeyValueHeap
                         —StoreFileScanner
                                —StoreFile.Reader
                                        —HFileReaderV2
                                —ScannerV2
                         —MemStoreScanner

The client now starts fetching results:
ResultScanner ss = table.getScanner(scan);
for (Result r : ss) {
    for (KeyValue kv : r.raw()) {
        ......
    }
}

This article focuses on the server side; for the client side see http://punishzhou.iteye.com/blog/1297015
The client is responsible for closing the scanner (and with it the server-side RegionScannerImpl). A scan is carried by an HRegion, and since positioning goes through the block index the open itself is cheap; when the scan crosses into another HRegion, openScanner is simply called again.
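
To make the region-crossing part concrete, here is a rough sketch of the region-hopping loop driven by the client (hypothetical stand-in types, not the actual ClientScanner code): keep a current start key, open a scanner on the region that contains it, drain it, then re-open on the next region until the stop row or the last region is reached.

import java.util.List;

public class RegionHoppingSketch {
  // Hypothetical stand-ins for one region's remote scanner and for region lookup.
  interface RemoteScannerStub {
    List<String> next(int nbRows);   // empty list: this region is exhausted
    void close();
  }
  interface RegionLookupStub {
    RemoteScannerStub openScanner(byte[] startKey); // the openScanner RPC against one region
    byte[] regionEndKey(byte[] startKey);           // end key of the region holding startKey (empty = last region)
  }

  static void scanAll(RegionLookupStub regions, byte[] startKey, byte[] stopKey, int caching) {
    byte[] currentStart = startKey;
    while (true) {
      byte[] endKey = regions.regionEndKey(currentStart);
      RemoteScannerStub scanner = regions.openScanner(currentStart); // one openScanner per region
      try {
        List<String> rows;
        while (!(rows = scanner.next(caching)).isEmpty()) {
          // hand rows to the caller ...
        }
      } finally {
        scanner.close(); // client closes each region's scanner when it is done with it
      }
      boolean lastRegion = endKey.length == 0;
      boolean pastStopRow = stopKey.length > 0 && compare(endKey, stopKey) >= 0;
      if (lastRegion || pastStopRow) break;
      currentStart = endKey; // the next region starts where this one ended
    }
  }

  static int compare(byte[] a, byte[] b) {
    int n = Math.min(a.length, b.length);
    for (int i = 0; i < n; i++) {
      int d = (a[i] & 0xff) - (b[i] & 0xff);
      if (d != 0) return d;
    }
    return a.length - b.length;
  }
}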

ClientScanner.next likewise goes through an RPC; the call stack:
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:350)
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:127)
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3459)
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3406)
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3423)
org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2393)
sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
java.lang.reflect.Method.invoke(Method.java:597)
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1389)
Core method: StoreScanner.next(List<KeyValue>, int)
Core code:
 LOOP: while((kv = this.heap.peek()) != null) {
    ......
    ScanQueryMatcher.MatchCode qcode = matcher.match(kv);
    switch(qcode) {
      ......
    }
 }

If a Filter was added to the scan, it is evaluated inside ScanQueryMatcher.match(KeyValue):

    if (filter != null) {
      ReturnCode filterResponse = filter.filterKeyValue(kv);
      if (filterResponse == ReturnCode.SKIP) {
        return MatchCode.SKIP;
      } else if (filterResponse == ReturnCode.NEXT_COL) {
        return columns.getNextRowOrNextColumn(bytes, offset, qualLength);
      } else if (filterResponse == ReturnCode.NEXT_ROW) {
        stickyNextRow = true;
        return MatchCode.SEEK_NEXT_ROW;
      } else if (filterResponse == ReturnCode.SEEK_NEXT_USING_HINT) {
        return MatchCode.SEEK_NEXT_USING_HINT;
      }
    }
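
To see where those ReturnCodes come from, here is a minimal custom filter sketch against the 0.94 filter API; it skips cells with empty values and includes everything else (to actually use it, the class must also be on the RegionServer classpath, since 0.94 filters are shipped as Writables):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.filter.FilterBase;

public class EmptyValueSkipFilter extends FilterBase {

  // Decide per KeyValue; the ReturnCode maps to the MatchCodes seen above.
  @Override
  public ReturnCode filterKeyValue(KeyValue kv) {
    byte[] value = kv.getValue();
    if (value == null || value.length == 0) {
      return ReturnCode.SKIP;   // match() will return MatchCode.SKIP
    }
    return ReturnCode.INCLUDE;  // keep the cell
  }

  // Writable serialization hooks; nothing to serialize for this stateless sketch.
  @Override
  public void write(DataOutput out) throws IOException {
  }

  @Override
  public void readFields(DataInput in) throws IOException {
  }
}

On the client it would be attached with scan.setFilter(new EmptyValueSkipFilter()).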
If the filter does not rule the KeyValue out, the column tracker (columns) is consulted next; its interface javadoc describes its role:
/**
 * Implementing classes of this interface will be used for the tracking
 * and enforcement of columns and numbers of versions and timeToLive during
 * the course of a Get or Scan operation.
 */
MatchCode colChecker = columns.checkColumn(bytes, offset, qualLength,
    timestamp, type, kv.getMemstoreTS() > maxReadPointToTrackVersions);

As the client iterates over the result set, the MatchCodes returned look like:

SEEK_NEXT_COL
------------------------------
INCLUDE_AND_SEEK_NEXT_ROW
------------------------------
SEEK_NEXT_COL
------------------------------
INCLUDE_AND_SEEK_NEXT_ROW 

......

Finally, StoreScanner assembles the results:

case INCLUDE:
case INCLUDE_AND_SEEK_NEXT_ROW:
case INCLUDE_AND_SEEK_NEXT_COL:
 Filter f = matcher.getFilter();
 results.add(f == null ? kv : f.transform(kv));


 if (qcode == ScanQueryMatcher.MatchCode.INCLUDE_AND_SEEK_NEXT_ROW) {
   if (!matcher.moreRowsMayExistAfter(kv)) {
     outResult.addAll(results);
     return false;
   }
   reseek(matcher.getKeyForNextRow(kv));
 } else if (qcode == ScanQueryMatcher.MatchCode.INCLUDE_AND_SEEK_NEXT_COL) {
   reseek(matcher.getKeyForNextColumn(kv));
 } else {
   this.heap.next();
 }


 RegionMetricsStorage.incrNumericMetric(metricNameGetSize, kv.getLength());
 if (limit > 0 && (results.size() == limit)) {
   break LOOP;
 }
 continue;
......
if (!results.isEmpty()) {
  // copy jazz
  outResult.addAll(results);
  return true;
}
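
Putting the pieces together, the shape of StoreScanner.next is: peek the smallest KeyValue from the heap, ask the matcher what to do with it, and either collect it, skip it, or reseek, until the limit is hit or the heap runs dry. A condensed, self-contained sketch of that loop with hypothetical stand-in types:

import java.util.List;

public class MatchLoopSketch {
  enum MatchCode { INCLUDE, SKIP, SEEK_NEXT_ROW, SEEK_NEXT_COL, DONE }

  // Hypothetical stand-ins for the KeyValueHeap and the ScanQueryMatcher.
  interface HeapStub {
    String peek();                 // smallest KeyValue across all scanners, or null
    String next();                 // consume it
    void reseek(String key);       // jump forward to 'key'
  }
  interface MatcherStub {
    MatchCode match(String kv);
    String keyForNextRow(String kv);
    String keyForNextColumn(String kv);
  }

  // Returns true if there may be more results after this batch (like StoreScanner.next).
  static boolean next(HeapStub heap, MatcherStub matcher, List<String> outResult, int limit) {
    String kv;
    while ((kv = heap.peek()) != null) {
      MatchCode qcode = matcher.match(kv);
      switch (qcode) {
        case INCLUDE:
          outResult.add(heap.next());                 // collect the cell
          if (limit > 0 && outResult.size() == limit) return true;
          break;
        case SKIP:
          heap.next();                                // drop the cell, move on
          break;
        case SEEK_NEXT_ROW:
          heap.reseek(matcher.keyForNextRow(kv));     // skip the rest of this row
          break;
        case SEEK_NEXT_COL:
          heap.reseek(matcher.keyForNextColumn(kv));  // skip the rest of this column
          break;
        case DONE:
          return false;                               // nothing more can match
      }
    }
    return false; // heap exhausted
  }
}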
