1. Cache reads and writes
Call path:
HMaster.handleCreateTable -> HRegion.createHRegion -> HRegion.initialize -> initializeRegionInternals -> instantiateHStore -> Store.Store -> new CacheConfig(conf, family) -> CacheConfig.instantiateBlockCache -> new LruBlockCache
Constructor parameters:
public LruBlockCache(long maxSize, long blockSize, boolean evictionThread,
    int mapInitialSize, float mapLoadFactor, int mapConcurrencyLevel,
    float minFactor, float acceptableFactor,
    float singleFactor, float multiFactor, float memoryFactor)
Besides setting the default parameters, the LruBlockCache constructor also starts an evictionThread, which waits until it is signalled, and a StatisticsThread that periodically logs cache statistics.
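The handshake between a cache write and the waiting eviction thread can be sketched as follows. This is a simplified illustration only; the class and field names are invented for the sketch and do not match HBase's real EvictionThread.

```java
// Simplified illustration of the eviction-thread handshake: a daemon
// thread waits on a monitor until a cache write signals it, then runs
// one eviction pass.
public class EvictionThreadSketch {
    static final Object lock = new Object();
    static volatile boolean evictionRequested = false;
    static volatile int evictionRuns = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread evictionThread = new Thread(() -> {
            synchronized (lock) {
                while (true) {
                    while (!evictionRequested) {
                        try {
                            lock.wait(); // sleep until a cache write signals us
                        } catch (InterruptedException e) {
                            return;
                        }
                    }
                    evictionRequested = false;
                    evictionRuns++; // stand-in for one evict() pass
                }
            }
        });
        evictionThread.setDaemon(true); // does not block JVM shutdown
        evictionThread.start();

        // What a cache write does when the cache grows past its threshold:
        synchronized (lock) {
            evictionRequested = true;
            lock.notifyAll();
        }
        Thread.sleep(200); // give the eviction thread time to run
        System.out.println("eviction runs: " + evictionRuns);
    }
}
```

The monitor wait/notify pair is what lets eviction run asynchronously: the writer thread only flips a flag and returns, rather than evicting inline.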
When HFileReaderV2.readBlock runs, it first checks whether the block cache is enabled; if it is, the block is served from the cache when present:
if (cacheConf.isBlockCacheEnabled()) {
  HFileBlock cachedBlock = (HFileBlock)
      cacheConf.getBlockCache().getBlock(cacheKey, cacheBlock, useLock);
  if (cachedBlock != null) {
    BlockCategory blockCategory =
        cachedBlock.getBlockType().getCategory();

    getSchemaMetrics().updateOnCacheHit(blockCategory, isCompaction);

    if (cachedBlock.getBlockType() == BlockType.DATA) {
      HFile.dataBlockReadCnt.incrementAndGet();
    }

    validateBlockType(cachedBlock, expectedBlockType);

    if (cachedBlock.getBlockType() == BlockType.ENCODED_DATA &&
        cachedBlock.getDataBlockEncoding() !=
        dataBlockEncoder.getEncodingInCache()) {
      throw new IOException("Cached block under key " + cacheKey + " " +
          "has wrong encoding: " + cachedBlock.getDataBlockEncoding() +
          " (expected: " + dataBlockEncoder.getEncodingInCache() + ")");
    }
    return cachedBlock;
  }
}
In getBlock, some statistics are updated; most importantly, on a repeat access a block's priority is promoted from BlockPriority.SINGLE to BlockPriority.MULTI.
public Cacheable getBlock(BlockCacheKey cacheKey, boolean caching, boolean repeat) {
  CachedBlock cb = map.get(cacheKey);
  if (cb == null) {
    if (!repeat) stats.miss(caching);
    return null;
  }
  stats.hit(caching);
  cb.access(count.incrementAndGet());
  return cb.getBuffer();
}
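The SINGLE-to-MULTI promotion happens inside cb.access(). A minimal sketch of that behavior (the class name is invented for this sketch; the real CachedBlock also tracks the cache key, buffer, and heap size):

```java
// Sketch of the priority promotion performed by CachedBlock.access():
// a freshly cached block is SINGLE, and any later cache hit promotes it
// to MULTI, moving it into a harder-to-evict bucket.
public class CachedBlockSketch {
    enum BlockPriority { SINGLE, MULTI, MEMORY }

    BlockPriority priority;
    long accessTime;

    CachedBlockSketch(boolean inMemory) {
        // Blocks from IN_MEMORY column families go straight to MEMORY.
        this.priority = inMemory ? BlockPriority.MEMORY : BlockPriority.SINGLE;
    }

    // Mirrors CachedBlock.access(long): record the logical access time and
    // promote a SINGLE block to MULTI once it is read again from the cache.
    void access(long accessTime) {
        this.accessTime = accessTime;
        if (this.priority == BlockPriority.SINGLE) {
            this.priority = BlockPriority.MULTI;
        }
    }

    public static void main(String[] args) {
        CachedBlockSketch cb = new CachedBlockSketch(false);
        System.out.println(cb.priority); // SINGLE: cached but not yet re-read
        cb.access(1);
        System.out.println(cb.priority); // MULTI: promoted on a cache hit
    }
}
```

Note that MEMORY blocks stay MEMORY; only SINGLE blocks are promoted.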
If this is the first read (a cache miss), the block is added to the cache:
if (cacheBlock && cacheConf.shouldCacheBlockOnRead(
    hfileBlock.getBlockType().getCategory())) {
  cacheConf.getBlockCache().cacheBlock(cacheKey, hfileBlock,
      cacheConf.isInMemory());
}
2. LRU eviction
Writing to the cache means putting the block into a ConcurrentHashMap and updating the metrics, then checking if (newSize > acceptableSize() && !evictionInProgress). acceptableSize is fixed at initialization as (long)Math.floor(this.maxSize * this.acceptableFactor), where acceptableFactor is a configurable fraction, "hbase.lru.blockcache.acceptable.factor" (default 0.85f). In other words: if the total cache size exceeds this threshold and no eviction is already in progress, an eviction pass is started.
public void cacheBlock(BlockCacheKey cacheKey, Cacheable buf, boolean inMemory) {
  CachedBlock cb = map.get(cacheKey);
  if (cb != null) {
    throw new RuntimeException("Cached an already cached block");
  }
  cb = new CachedBlock(cacheKey, buf, count.incrementAndGet(), inMemory);
  long newSize = updateSizeMetrics(cb, false);
  map.put(cacheKey, cb);
  elements.incrementAndGet();
  if (newSize > acceptableSize() && !evictionInProgress) {
    runEviction();
  }
}
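A worked example of these two thresholds, assuming a 1 GB cache and the default factors (the helper methods below just restate the formulas from the text):

```java
// With maxSize = 1 GB (assumed), acceptableFactor = 0.85 and
// minFactor = 0.75: eviction is triggered above ~85% of capacity and
// shrinks the cache back down to 75% of capacity.
public class CacheThresholds {
    // Restates acceptableSize(): the size above which eviction is triggered.
    static long acceptableSize(long maxSize, float acceptableFactor) {
        return (long) Math.floor(maxSize * acceptableFactor);
    }
    // Restates minSize(): the size eviction tries to shrink the cache to.
    static long minSize(long maxSize, float minFactor) {
        return (long) Math.floor(maxSize * minFactor);
    }

    public static void main(String[] args) {
        long maxSize = 1024L * 1024 * 1024; // assumed 1 GB cache
        long acceptable = acceptableSize(maxSize, 0.85f);
        long min = minSize(maxSize, 0.75f);
        System.out.println("eviction triggered above: " + acceptable);
        System.out.println("eviction frees down to:   " + min);
        System.out.println("bytesToFree when full:    " + (maxSize - min));
    }
}
```

The gap between the two thresholds is deliberate: freeing down to minSize rather than to acceptableSize gives the cache headroom so eviction does not retrigger on every subsequent write.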
The evict method proceeds as follows.
1. Compute the current total size and the number of bytes to free, where minSize = (long)Math.floor(this.maxSize * this.minFactor) and minFactor is configurable via "hbase.lru.blockcache.min.factor" (default 0.75f):
long currentSize = this.size.get();
long bytesToFree = currentSize - minSize();
2. Initialize three BlockBuckets (bucketSingle, bucketMulti, bucketMemory), then iterate over the map and add each block to its bucket's queue (MinMaxPriorityQueue.expectedSize(initialSize).create()), ordered by access count in descending order.
The three priorities differ as follows:
SINGLE holds blocks that have been read once
MULTI holds blocks that have been read multiple times
MEMORY holds blocks from column families with IN_MEMORY set to true
BlockBucket bucketSingle = new BlockBucket(bytesToFree, blockSize,
    singleSize());
BlockBucket bucketMulti = new BlockBucket(bytesToFree, blockSize,
    multiSize());
BlockBucket bucketMemory = new BlockBucket(bytesToFree, blockSize,
    memorySize());
The default size allocation ratios of the three buckets are:
static final float DEFAULT_SINGLE_FACTOR = 0.25f;
static final float DEFAULT_MULTI_FACTOR = 0.50f;
static final float DEFAULT_MEMORY_FACTOR = 0.25f;
private long singleSize() {
  return (long)Math.floor(this.maxSize * this.singleFactor * this.minFactor);
}
private long multiSize() {
  return (long)Math.floor(this.maxSize * this.multiFactor * this.minFactor);
}
private long memorySize() {
  return (long)Math.floor(this.maxSize * this.memoryFactor * this.minFactor);
}
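With an assumed 1 GB cache and the default factors, the three quotas work out as below. Note that they sum exactly to minSize: the factors partition the post-eviction target size 25% / 50% / 25%.

```java
// Bucket quotas for an assumed 1 GB cache with the default factors.
// Each quota is a share of minSize (the post-eviction target), not of
// the full cache capacity.
public class BucketSizes {
    static final long MAX_SIZE = 1024L * 1024 * 1024; // assumed 1 GB
    static final float MIN_FACTOR = 0.75f;
    static final float SINGLE_FACTOR = 0.25f;
    static final float MULTI_FACTOR = 0.50f;
    static final float MEMORY_FACTOR = 0.25f;

    static long singleSize() { return (long) Math.floor(MAX_SIZE * SINGLE_FACTOR * MIN_FACTOR); }
    static long multiSize()  { return (long) Math.floor(MAX_SIZE * MULTI_FACTOR  * MIN_FACTOR); }
    static long memorySize() { return (long) Math.floor(MAX_SIZE * MEMORY_FACTOR * MIN_FACTOR); }

    public static void main(String[] args) {
        System.out.println("single = " + singleSize()); // 201326592 (192 MiB)
        System.out.println("multi  = " + multiSize());  // 402653184 (384 MiB)
        System.out.println("memory = " + memorySize()); // 201326592 (192 MiB)
        // The three quotas sum to minSize = maxSize * minFactor = 768 MiB:
        System.out.println("sum    = " + (singleSize() + multiSize() + memorySize()));
    }
}
```

Giving MULTI twice the quota of SINGLE is what makes repeatedly read blocks survive eviction longer than one-off reads.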
The three BlockBuckets are then added to a priority queue ordered by totalSize - bucketSize (each bucket's overflow); the amount each bucket must free is computed and free is executed:
PriorityQueue<BlockBucket> bucketQueue =
    new PriorityQueue<BlockBucket>(3);

bucketQueue.add(bucketSingle);
bucketQueue.add(bucketMulti);
bucketQueue.add(bucketMemory);

int remainingBuckets = 3;
long bytesFreed = 0;

BlockBucket bucket;
while ((bucket = bucketQueue.poll()) != null) {
  long overflow = bucket.overflow();
  if (overflow > 0) {
    long bucketBytesToFree = Math.min(overflow,
        (bytesToFree - bytesFreed) / remainingBuckets);
    bytesFreed += bucket.free(bucketBytesToFree);
  }
  remainingBuckets--;
}
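The splitting logic in the loop above can be illustrated with a small self-contained simulation (the bucket sizes are made-up numbers, and Bucket here is a stand-in for BlockBucket): buckets are polled in order of least overflow first, and the remaining bytes to free are divided evenly among the buckets still in the queue.

```java
import java.util.PriorityQueue;

// Simulation of the eviction split: each over-quota bucket frees at most
// its overflow, capped at an even share of what is still left to free.
public class EvictionSplitSketch {
    static class Bucket implements Comparable<Bucket> {
        final String name;
        long used;
        final long quota;
        Bucket(String name, long used, long quota) {
            this.name = name; this.used = used; this.quota = quota;
        }
        long overflow() { return used - quota; }  // bytes above this bucket's quota
        long free(long toFree) {                   // stand-in for evicting blocks
            long freed = Math.min(toFree, used);
            used -= freed;
            return freed;
        }
        @Override
        public int compareTo(Bucket o) { return Long.compare(overflow(), o.overflow()); }
    }

    // Runs one eviction pass over made-up bucket sizes; returns bytes freed.
    static long runEvictionPass() {
        PriorityQueue<Bucket> q = new PriorityQueue<>();
        q.add(new Bucket("single", 400, 250)); // 150 bytes over quota
        q.add(new Bucket("multi",  450, 500)); // under quota: frees nothing
        q.add(new Bucket("memory", 300, 250)); //  50 bytes over quota

        long bytesToFree = 150, bytesFreed = 0;
        int remainingBuckets = 3;
        Bucket bucket;
        while ((bucket = q.poll()) != null) {  // least-overflowing bucket first
            long overflow = bucket.overflow();
            if (overflow > 0) {
                long share = Math.min(overflow,
                    (bytesToFree - bytesFreed) / remainingBuckets);
                bytesFreed += bucket.free(share);
            }
            remainingBuckets--;
        }
        return bytesFreed;
    }

    public static void main(String[] args) {
        System.out.println("bytes freed: " + runEvictionPass()); // 150
    }
}
```

Here memory frees its full 50-byte overflow (below its even share of 75), and single then picks up the remaining 100 bytes, so the pass frees exactly the 150 bytes requested without touching the under-quota multi bucket.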
The free method pops blocks from the queue one at a time. Because the queue is ordered by access count in descending order, pollLast returns the least-accessed blocks first; each one is removed from the map and the metrics are updated.
public long free(long toFree) {
  CachedBlock cb;
  long freedBytes = 0;
  while ((cb = queue.pollLast()) != null) {
    freedBytes += evictBlock(cb);
    if (freedBytes >= toFree) {
      return freedBytes;
    }
  }
  return freedBytes;
}

protected long evictBlock(CachedBlock block) {
  map.remove(block.getCacheKey());
  updateSizeMetrics(block, true);
  elements.decrementAndGet();
  stats.evicted();
  return block.heapSize();
}
3. The distinguishing feature of HBase's LruBlockCache is that it applies different policies to blocks with different access counts. This keeps one-off accesses (for example, a Scan) from constantly churning the cache, which improves read performance.