HBase Series -- HBase Region Merging

1. Why Merge Regions

This starts with how Regions split. As data is continuously written to a Region, once it reaches the split threshold (controlled by the property hbase.hregion.max.filesize, 10GB by default), the Region is split into two new Regions. As the volume of business data keeps growing, Regions keep splitting, and the number of Regions grows accordingly.
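For reference, the split threshold is a cluster-wide setting in hbase-site.xml. A minimal sketch (32212254720 bytes = 30GB is just an example value; the default is 10GB):

<property>
  <name>hbase.hregion.max.filesize</name>
  <!-- Region split threshold in bytes; 32212254720 = 30GB -->
  <value>32212254720</value>
</property>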

The more Regions a business table has, the heavier the load on the cluster when reading and writing that table, or when running a Compaction on it. From my own production statistics: once one business table reached 9000+ Regions, every Compaction on that table visibly increased the cluster load, which in turn slowed down application reads and writes. When a single table has too many Regions, the total Region count of the whole cluster inevitably grows as well, and after load balancing each RegionServer ends up hosting more Regions.

In such cases it is well worth merging Regions. For example, with the split threshold set to 30GB, we can merge all Regions of 10GB or less, reducing the number of Regions per business table, and therefore in the whole cluster, easing the Region pressure on each RegionServer.
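A 30GB threshold like the one above can also be applied per table from the HBase shell, so that freshly merged Regions are not immediately split again. A sketch, assuming a hypothetical table 'NS1:MyTable' (replace with your own):

# Raise the split threshold for this table to 30GB (value in bytes)
hbase> alter 'NS1:MyTable', MAX_FILESIZE => '32212254720'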

2. How to Merge

So how do we merge Regions? HBase ships with a command for merging Regions, used as follows:

# Merge two adjacent Regions
hbase> merge_region 'ENCODED_REGIONNAME', 'ENCODED_REGIONNAME'
# Force-merge two Regions, even if they are not adjacent
hbase> merge_region 'ENCODED_REGIONNAME', 'ENCODED_REGIONNAME', true
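Here ENCODED_REGIONNAME is the hash suffix of the full region name (the part enclosed by the two trailing dots). It can be read off the table page of the Master web UI, or looked up in hbase:meta; a sketch, assuming a hypothetical table 'mytable':

# Each returned row key ends with '.<ENCODED_REGIONNAME>.'
hbase> scan 'hbase:meta', {FILTER => "PrefixFilter('mytable,')", COLUMNS => 'info:regioninfo'}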

The problem with this approach is that it only merges two Regions at a time. If there are thousands of Regions to merge, it is simply not practical.

2.1 Batch Merging

There is a way to merge in batch: a script (merge_small_regions.rb) that performs the merges in bulk. Its source is shown below:

# Test mode (any value other than 'merge' in the last argument
# only prints what would be merged):
#
# hbase org.jruby.Main merge_small_regions.rb namespace.tablename <skip_size> <batch_regions> <merge?>
#
# Non-test mode, i.e. actually do the merge:
#
# hbase org.jruby.Main merge_small_regions.rb namespace.tablename <skip_size> <batch_regions> merge
#
# Note: skip_size is in MB. Please replace namespace.tablename with your
# namespace and table, e.g. NS1.MyTable. This value is case sensitive.

require 'digest'
require 'java'
java_import org.apache.hadoop.hbase.HBaseConfiguration
java_import org.apache.hadoop.hbase.client.HBaseAdmin
java_import org.apache.hadoop.hbase.TableName
java_import org.apache.hadoop.hbase.HRegionInfo
java_import org.apache.hadoop.hbase.client.Connection
java_import org.apache.hadoop.hbase.client.ConnectionFactory
java_import org.apache.hadoop.hbase.client.Table
java_import org.apache.hadoop.hbase.util.Bytes

def list_bigger_regions(admin, table, low_size)
  cluster_status = admin.getClusterStatus()
  biggers = []
  cluster_status.getServers.each do |s|
    cluster_status.getLoad(s).getRegionsLoad.each do |r|
      # getRegionsLoad returns a map of region name to RegionLoad;
      # iterating it in JRuby yields two-element [name, load] pairs

      # Filter out any regions that don't match the requested
      # tablename
      next unless r[1].get_name_as_string =~ /#{table}\,/
      if r[1].getStorefileSizeMB() > low_size
        if r[1].get_name_as_string =~ /\.([^\.]+)\.$/
          biggers.push $1
        else
          raise "Failed to get the encoded name for #{r[1].get_name_as_string}"
        end
      end
    end
  end
  biggers
end

# Handle command line parameters
table_name = ARGV[0]

# skip_size (MB): regions larger than this are left alone.
# The minimum (and default) is 1024, i.e. 1GB.
low_size = 1024
if ARGV[1].to_i >= low_size
  low_size = ARGV[1].to_i
end

# batch_regions: the maximum number of regions to merge in one run
# (defaults to 1000 when not supplied)
limit_batch = 1000
if ARGV[2].to_i > 0
  limit_batch = ARGV[2].to_i
end

# Only the literal argument 'merge' performs the merge;
# anything else is a dry run
do_merge = false
if ARGV[3] == 'merge'
  do_merge = true
end

config = HBaseConfiguration.create();
connection = ConnectionFactory.createConnection(config);
admin = HBaseAdmin.new(connection);

bigger_regions = list_bigger_regions(admin, table_name, low_size)
regions = admin.getTableRegions(Bytes.toBytes(table_name));

puts "Total Table Regions: #{regions.length}"
puts "Total bigger regions: #{bigger_regions.length}"

filtered_regions = regions.reject do |r|
  bigger_regions.include?(r.get_encoded_name)
end

puts "Total regions to consider for Merge: #{filtered_regions.length}"

filtered_regions_limit = filtered_regions

if filtered_regions.length < 2
  puts "There are not enough regions to merge"
end

if filtered_regions.length > limit_batch
  filtered_regions_limit = filtered_regions[0, limit_batch]
  puts "But we will only merge #{filtered_regions_limit.length} regions, because of the batch limit parameter!"
end


r1, r2 = nil, nil
filtered_regions_limit.each do |r|
  if r1.nil?
    r1 = r
    next
  end
  if r2.nil?
    r2 = r
  end
  # Skip any region that is in the middle of a split
  if r1.is_split()
    puts "Skipping #{r1.get_encoded_name} because it is splitting!"
    r1 = r2
    r2 = nil
    next
  end
  if r2.is_split()
    puts "Skipping #{r2.get_encoded_name} because it is splitting!"
    r2 = nil
    next
  end
  if HRegionInfo.are_adjacent(r1, r2)
    # only merge regions that are adjacent
    puts "#{r1.get_encoded_name} is adjacent to #{r2.get_encoded_name}"
    if do_merge
      admin.mergeRegions(r1.getEncodedNameAsBytes, r2.getEncodedNameAsBytes, false)
      puts "Successfully Merged #{r1.get_encoded_name} with #{r2.get_encoded_name}"
      sleep 2
    end
    r1, r2 = nil, nil
  else
    puts "Regions are not adjacent; dropping #{r1.get_encoded_name} and iterating again from #{r2.get_encoded_name}"
    r1 = r2
    r2 = nil
  end
end
admin.close
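Before merging for real, it may be worth a dry run: by the argument handling above, any fourth argument other than the literal 'merge' only prints the candidate Region pairs without merging them. For example (NS1.MyTable is a placeholder):

hbase org.jruby.Main merge_small_regions.rb NS1.MyTable 1024 1000 dryrun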

By default the script merges Regions under 1GB, at most 1000 per run. If we want to merge Regions smaller than 10GB, up to 4000 at a time, we can wrap it with the following script (merging-region.sh):

#! /bin/bash

# Number of merge rounds to run (defaults to 10)
num=$1

echo "[`date "+%Y-%m-%d %H:%M:%S"`] INFO : Start Merging Regions..."
if [ -z "$num" ]; then
    echo "[`date "+%Y-%m-%d %H:%M:%S"`] INFO : Default Merging 10 Times."
    num=10
elif [[ $num == *[!0-9]* ]]; then
    echo "[`date "+%Y-%m-%d %H:%M:%S"`] ERROR : Input [$num] Must Be A Number."
    exit 1
else
    echo "[`date "+%Y-%m-%d %H:%M:%S"`] INFO : User-Defined Merging [$num] Times."
fi

# Replace namespace.tablename with your own table; 10240 (MB) is the
# skip size and 4000 is the per-round batch limit
for (( i=1; i<=$num; i++ ))
do
    echo "[`date "+%Y-%m-%d %H:%M:%S"`] INFO : Merging Round [$i] Of [$num]."
    hbase org.jruby.Main merge_small_regions.rb namespace.tablename 10240 4000 merge
    sleep 5
done

The merging-region.sh script takes a parameter controlling how many rounds of the batch merge to run in a loop. In practice, a single batch merge may still leave many Regions behind (new Regions may have been created in the meantime), in which case we can use merging-region.sh to run the batch merge several times. The command is as follows:

# Defaults to 10 rounds; here we run 5 rounds
sh merging-region.sh 5
