In the era of big data, storing and analyzing massive volumes of data is a huge challenge, so adding data compression to a Hadoop or HBase cluster is essential. Compression not only saves disk space, it also reduces the network bandwidth consumed between cluster nodes, which in turn improves the overall execution efficiency of cluster jobs. Hadoop ships with support for several common codecs such as gzip and bzip2. Run hadoop checknative -a to see which compression formats your Hadoop installation supports:
15/12/30 14:51:11 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
15/12/30 14:51:11 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop: true /ROOT/server/hadoop/lib/native/libhadoop.so
zlib: true /lib64/libz.so.1
snappy: true /ROOT/server/hadoop/lib/native/libsnappy.so.1
lz4: true revision:99
bzip2: true /lib64/libbz2.so.1
openssl: true /usr/lib64/libcrypto.so
Most Internet companies today choose Snappy or LZO for Hadoop compression; both offer a good balance between compression ratio and decompression speed. For a detailed comparison, see this article: http://www.cnblogs.com/zhengrunjian/p/4527165.html
This post focuses on using Snappy compression in HBase. If your Hadoop cluster already has Snappy installed, the rest is very straightforward; if it does not yet support Snappy, that is fine too, just refer to my earlier article: http://qindongliang.iteye.com/blog/2222145
Versions used:
Apache Hadoop 2.7.1
Apache HBase 0.98.12
The installation and test steps are as follows:
(1) Install Snappy correctly on the Hadoop cluster.
(2) Copy all the .so files under hadoop/lib/native, plus hadoop-snappy-0.0.1-SNAPSHOT.jar, into hbase/lib (see the sketch after this list).
(3) If there are multiple machines, copy and distribute the files to every node.
(4) Once the copying is done, restart the HBase cluster.
(5) Run the CompressionTest command shown below to verify Snappy; if it prints SUCCESS, the installation succeeded.
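Steps (2) through (4) boil down to a few file copies and a restart. Below is a minimal sketch; the HADOOP_HOME and HBASE_HOME values are taken from the paths visible in the logs above, while the jar location and the node hostnames are assumptions, so adjust them to your own environment.

#!/usr/bin/env bash
# Minimal sketch for steps (2)-(4); paths and hostnames are assumptions.
HADOOP_HOME=/ROOT/server/hadoop
HBASE_HOME=/ROOT/server/hbase

# (2) copy the native .so files and the hadoop-snappy jar into hbase/lib
cp "$HADOOP_HOME"/lib/native/*.so* "$HBASE_HOME"/lib/
cp "$HADOOP_HOME"/lib/hadoop-snappy-0.0.1-SNAPSHOT.jar "$HBASE_HOME"/lib/

# (3) distribute the same files to every other HBase node (hostnames are hypothetical)
for node in hbase-node-1 hbase-node-2 hbase-node-3; do
  scp "$HBASE_HOME"/lib/*.so* "$HBASE_HOME"/lib/hadoop-snappy-0.0.1-SNAPSHOT.jar \
      "$node:$HBASE_HOME/lib/"
done

# (4) restart the HBase cluster
"$HBASE_HOME"/bin/stop-hbase.sh
"$HBASE_HOME"/bin/start-hbase.sh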
[webmaster@Hadoop-0-187 logs]$ hbase org.apache.hadoop.hbase.util.CompressionTest /user/webmaster/word/in/tt2 snappy
2015-12-30 15:14:11,460 INFO [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/ROOT/server/hbase/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/ROOT/server/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
2015-12-30 15:14:12,607 INFO [main] util.ChecksumType: Checksum using org.apache.hadoop.util.PureJavaCrc32
2015-12-30 15:14:12,609 INFO [main] util.ChecksumType: Checksum can use org.apache.hadoop.util.PureJavaCrc32C
2015-12-30 15:14:12,916 INFO [main] compress.CodecPool: Got brand-new compressor [.snappy]
2015-12-30 15:14:12,923 INFO [main] compress.CodecPool: Got brand-new compressor [.snappy]
2015-12-30 15:14:12,932 ERROR [main] hbase.KeyValue: Unexpected getShortMidpointKey result, fakeKey:testkey, firstKeyInBlock:testkey
2015-12-30 15:14:13,218 INFO [main] compress.CodecPool: Got brand-new decompressor [.snappy]
SUCCESS
Once verification succeeds, start the hbase shell and use the commands below to create two tables, one with Snappy compression and one without, then insert a few dozen rows of data and compare the results.
# Create a table with Snappy compression
create 'tsnappy', { NAME => 'f', COMPRESSION => 'snappy' }
# Create a table without compression
create 'nosnappy', { NAME => 'f' }
# Show the table description
describe 'tsnappy'
# Put one row
put 'tsnappy', 'row1', 'f:col1', 'value'
# Count the rows in the table
count 'tsnappy'
# Scan the data
scan 'tsnappy'
# Empty the table
truncate 'tsnappy'
# Change an existing table to Snappy compression
alter 'apData', { NAME => 'cf1', COMPRESSION => 'snappy' }
# Disable a table
disable 'my_table'
# Alter it to enable compression
alter 'my_table', { NAME => 'my_column_family', COMPRESSION => 'snappy' }
# Re-enable the table
enable 'my_table'
Then insert 50 rows of data with a single column whose value is roughly 4 MB each; the attachment contains the Java class used for these HBase operations (a rough shell-based stand-in is sketched at the end of this post). Once the data has been loaded, run:
hadoop fs -du -s -h /hbase/data/default/*
and compare the storage used by the compressed and uncompressed tables. In this test the ratio was roughly 1:9. The exact number of course depends on the data you store, but overall the result is quite good.
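The attached Java class is not reproduced here. As a rough stand-in, the HBase shell (a JRuby REPL) can be driven from a heredoc to load a similar amount of data into both tables. The row keys, column name and repeated-character value below are assumptions, and such a repetitive value compresses far better than typical real data, so treat the resulting ratio as illustrative only.

# Minimal sketch: load 50 rows of ~4 MB into each table via the HBase shell.
# The shell accepts plain Ruby, so loops and variables work inside it.
hbase shell <<'EOF'
value = 'hbase-snappy-test ' * 233000   # ~4 MB of (highly compressible) text
(1..50).each do |i|
  put 'tsnappy',  "row#{i}", 'f:col1', value
  put 'nosnappy', "row#{i}", 'f:col1', value
end
exit
EOF

# Flush first so the data is written out to HFiles, then compare on-disk sizes
echo -e "flush 'tsnappy'\nflush 'nosnappy'" | hbase shell
hadoop fs -du -s -h /hbase/data/default/*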