How to View HBase HFiles

1. Create a table and insert test data

  1. First, create a table named test with a single column family t1:
create 'test','t1'
  2. Insert a couple of test rows:
put 'test','0001','t1:name','zhangsan'
put 'test','0002','t1:name','lisi'
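As a quick optional check, a scan in the hbase shell should return both rows, since reads merge data still in the MemStore with data already flushed to HFiles:
scan 'test'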

2. View the HFile

  1. The command used to inspect an HFile:
hbase hfile -v -p -m -f hdfs://localhost:9000/hbase/data/default/test/782d9564125bae4e0037e367626186b4/t1/44bd781271ef445ea5056ca632569611
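Here -f specifies the HFile to scan, -p prints each key/value entry, -m prints the file's meta data (the Fileinfo section), and -v enables verbose output. The path follows the layout /hbase/data/<namespace>/<table>/<encoded region name>/<column family>/<hfile>; the region and file names shown here are from this example and will differ on your cluster. One way to locate them, assuming HDFS is running at localhost:9000 as above:
hdfs dfs -ls -R /hbase/data/default/test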
  2. The output is as follows:
Scanning -> hdfs://localhost:9000/hbase/data/default/test/782d9564125bae4e0037e367626186b4/t1/44bd781271ef445ea5056ca632569611
2019-06-01 22:03:36,540 INFO  [main] hfile.CacheConfig: Created cacheConfig: CacheConfig:disabled
K: 0001/t1:name/1559371642919/Put/vlen=8/seqid=4 V: zhangsan
Block index size as per heapsize: 392
reader=hdfs://localhost:9000/hbase/data/default/test/782d9564125bae4e0037e367626186b4/t1/44bd781271ef445ea5056ca632569611,
    compression=none,
    cacheConf=CacheConfig:disabled,
    firstKey=0001/t1:name/1559371642919/Put,
    lastKey=0001/t1:name/1559371642919/Put,
    avgKeyLen=22,
    avgValueLen=8,
    entries=1,
    length=4914
Trailer:
    fileinfoOffset=224,
    loadOnOpenDataOffset=115,
    dataIndexCount=1,
    metaIndexCount=0,
    totalUncomressedBytes=4822,
    entryCount=1,
    compressionCodec=NONE,
    uncompressedDataIndexSize=35,
    numDataIndexLevels=1,
    firstDataBlockOffset=0,
    lastDataBlockOffset=0,
    comparatorClassName=org.apache.hadoop.hbase.KeyValue$KeyComparator,
    encryptionKey=NONE,
    majorVersion=3,
    minorVersion=0
Fileinfo:
    BLOOM_FILTER_TYPE = ROW
    DELETE_FAMILY_COUNT = \x00\x00\x00\x00\x00\x00\x00\x00
    EARLIEST_PUT_TS = \x00\x00\x01k\x11\xCA\xF8'
    KEY_VALUE_VERSION = \x00\x00\x00\x01
    LAST_BLOOM_KEY = 0001
    MAJOR_COMPACTION_KEY = \x00
    MAX_MEMSTORE_TS_KEY = \x00\x00\x00\x00\x00\x00\x00\x04
    MAX_SEQ_ID_KEY = 6
    TIMERANGE = 1559371642919....1559371642919
    hfile.AVG_KEY_LEN = 22
    hfile.AVG_VALUE_LEN = 8
    hfile.CREATE_TIME_TS = \x00\x00\x01k\x12\x05\xCAt
    hfile.LASTKEY = \x00\x040001\x02t1name\x00\x00\x01k\x11\xCA\xF8'\x04
Mid-key: \x00\x040001\x02t1name\x00\x00\x01k\x11\xCA\xF8'\x04
Bloom filter:
    BloomSize: 2
    No of Keys in bloom: 1
    Max Keys for bloom: 1
    Percentage filled: 100%
    Number of chunks: 1
    Comparator: RawBytesComparator
Delete Family Bloom filter:
    Not present
Scanned kv count -> 1

Note: as the third line shows, K: 0001/t1:name/1559371642919/Put/vlen=8/seqid=4 V: zhangsan, the file contains only one entry.
Reason: at this point the second row is still in the MemStore and has not yet been flushed to disk, so it does not appear in this HFile.

  3. Manually flush the MemStore data to disk
    Enter the hbase shell and run:
flush 'test'
# As the help shows, flush can be used in three ways
# help "flush"
# hbase> flush 'TABLENAME'
# hbase> flush 'REGIONNAME'
# hbase> flush 'ENCODED_REGIONNAME'
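For example, the encoded region name is the hash directory that appears in the HFile path earlier (assumed here to be 782d9564125bae4e0037e367626186b4 for this region), so the same flush could also be issued as:
flush '782d9564125bae4e0037e367626186b4'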
  4. The HBase Web UI now shows that a new HFile has been created
    The upper file is the one inspected earlier; the lower one was produced by the flush.


    (Figure: HFile界面.jpg — the two HFiles shown in the HBase Web UI)
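The same check works without the Web UI by listing the column family directory on HDFS; with the example paths used above, both the original file 44bd781271ef445ea5056ca632569611 and the newly flushed 4aa0b75c76004c8e8a9c5c00819b8611 should be listed:
hdfs dfs -ls /hbase/data/default/test/782d9564125bae4e0037e367626186b4/t1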
  5. Use hbase hfile to inspect the newly generated file
    The third line of the output is the row with RowKey 0002.
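The command is the same as before; only the file name changes (it can be read from the Scanning line of the output below):
hbase hfile -v -p -m -f hdfs://localhost:9000/hbase/data/default/test/782d9564125bae4e0037e367626186b4/t1/4aa0b75c76004c8e8a9c5c00819b8611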
Scanning -> hdfs://localhost:9000/hbase/data/default/test/782d9564125bae4e0037e367626186b4/t1/4aa0b75c76004c8e8a9c5c00819b8611
2019-06-01 22:24:09,582 INFO  [main] hfile.CacheConfig: Created cacheConfig: CacheConfig:disabled
K: 0002/t1:name/1559397766755/Put/vlen=4/seqid=9 V: lisi
Block index size as per heapsize: 392
reader=hdfs://localhost:9000/hbase/data/default/test/782d9564125bae4e0037e367626186b4/t1/4aa0b75c76004c8e8a9c5c00819b8611,
    compression=none,
    cacheConf=CacheConfig:disabled,
    firstKey=0002/t1:name/1559397766755/Put,
    lastKey=0002/t1:name/1559397766755/Put,
    avgKeyLen=22,
    avgValueLen=4,
    entries=1,
    length=4910
Trailer:
    fileinfoOffset=220,
    loadOnOpenDataOffset=111,
    dataIndexCount=1,
    metaIndexCount=0,
    totalUncomressedBytes=4818,
    entryCount=1,
    compressionCodec=NONE,
    uncompressedDataIndexSize=35,
    numDataIndexLevels=1,
    firstDataBlockOffset=0,
    lastDataBlockOffset=0,
    comparatorClassName=org.apache.hadoop.hbase.KeyValue$KeyComparator,
    encryptionKey=NONE,
    majorVersion=3,
    minorVersion=0
Fileinfo:
    BLOOM_FILTER_TYPE = ROW
    DELETE_FAMILY_COUNT = \x00\x00\x00\x00\x00\x00\x00\x00
    EARLIEST_PUT_TS = \x00\x00\x01k\x13Y\x96c
    KEY_VALUE_VERSION = \x00\x00\x00\x01
    LAST_BLOOM_KEY = 0002
    MAJOR_COMPACTION_KEY = \x00
    MAX_MEMSTORE_TS_KEY = \x00\x00\x00\x00\x00\x00\x00\x09
    MAX_SEQ_ID_KEY = 11
    TIMERANGE = 1559397766755....1559397766755
    hfile.AVG_KEY_LEN = 22
    hfile.AVG_VALUE_LEN = 4
    hfile.CREATE_TIME_TS = \x00\x00\x01k\x13a\xE4\xFF
    hfile.LASTKEY = \x00\x040002\x02t1name\x00\x00\x01k\x13Y\x96c\x04
Mid-key: \x00\x040002\x02t1name\x00\x00\x01k\x13Y\x96c\x04
Bloom filter:
    BloomSize: 2
    No of Keys in bloom: 1
    Max Keys for bloom: 1
    Percentage filled: 100%
    Number of chunks: 1
    Comparator: RawBytesComparator
Delete Family Bloom filter:
    Not present
Scanned kv count -> 1

Note: you can run major_compact "test" to trigger a major compaction, which merges the two HFiles into a single new HFile containing both rows.
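A minimal sketch of that follow-up, reusing the example paths from above; the name of the compacted HFile is not known in advance and will differ on your cluster:
major_compact "test"
# after the compaction finishes, list the column family directory again;
# the two old files are eventually replaced by a single new HFile
hdfs dfs -ls /hbase/data/default/test/782d9564125bae4e0037e367626186b4/t1
# inspect the merged file (substitute the actual new file name); it should
# now contain both entries: 0001/zhangsan and 0002/lisi
hbase hfile -v -p -m -f hdfs://localhost:9000/hbase/data/default/test/782d9564125bae4e0037e367626186b4/t1/<new-hfile-name>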
