First, check whether Hadoop already has snappy support built in:
hadoop checknative -a
Native library checking:
hadoop: true /home/bigdata/hadoop/lib/native/libhadoop.so.1.0.0
zlib: true /lib/x86_64-linux-gnu/libz.so.1
snappy: false
zstd : false
lz4: true revision:10301
bzip2: true /lib/x86_64-linux-gnu/libbz2.so.1
openssl: false Cannot load libcrypto.so (libcrypto.so: cannot open shared object file: No such file or directory)!
Install the build dependencies:
sudo apt-get install gcc g++ libtool cmake maven zlib1g-dev autoconf automake gzip unzip
# If the build later fails with: CMake Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE), add these dependencies
sudo apt-get install --reinstall pkg-config cmake-data
# If the build then fails with: CMake not able to find OpenSSL library, add this dependency
sudo apt-get install libssl-dev
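Before moving on, it can be worth confirming the toolchain actually resolves (a quick sanity check, not part of the original steps):
gcc --version
cmake --version
mvn -version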
Unpack the downloaded snappy source tarball:
tar -xvf snappy-1.1.3.tar.gz
cd snappy-1.1.3
./configure
make
sudo make install
The default install directory is /usr/local/lib; check whether snappy installed successfully:
ll /usr/local/lib | grep snappy
If the following five files appear, the installation succeeded:
libsnappy.a
libsnappy.la*
libsnappy.so -> libsnappy.so.1.3.0*
libsnappy.so.1 -> libsnappy.so.1.3.0*
libsnappy.so.1.3.0*
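If a later step cannot load libsnappy at runtime even though the files above exist, refreshing the dynamic linker cache usually fixes it (an optional extra step):
sudo ldconfig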
Next, build and install protobuf 2.5.0 (the Hadoop 2.x build requires this exact protoc version):
tar -xvf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0
./configure --prefix=/home/bigdata/protobuf
make && make install
After the build succeeds, add protobuf to your PATH so the change takes effect:
export PATH=/home/bigdata/protobuf/bin:$PATH
Finally, run protoc --version; if it prints libprotoc 2.5.0, the installation succeeded.
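Note that the export above only lasts for the current shell. To keep protoc on the PATH across sessions, you can append it to your profile (assuming bash; adjust for your shell):
echo 'export PATH=/home/bigdata/protobuf/bin:$PATH' >> ~/.bashrc
source ~/.bashrc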
From Hadoop 2.x on, the hadoop-common module has snappy codec support built in, so building hadoop-snappy separately is unnecessary (it was only needed for Hadoop 1.x); all that is required is installing the snappy native library and rebuilding the Hadoop native libraries.
The binary hadoop-2.9.2.tar.gz download does not include source files, so use the hadoop-2.9.2-src package to build from source.
Download it from the official site and unpack it:
tar -xvf hadoop-2.9.2-src.tar.gz
The build can fail with Java out-of-memory errors, so raise the Maven heap limits first:
export MAVEN_OPTS="-Xms256m -Xmx512m"
Then build:
mvn package -DskipTests -Pdist,native -Dtar -Dsnappy.lib=/usr/local/lib -Dbundle.snappy
This takes quite a while. With the dependencies installed earlier it should go through; if it still errors out, the cause is usually another missing dependency, which you can install to resolve.
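One time-saver: if the build aborts partway through, Maven can resume from the failed module rather than starting over (replace hadoop-common below with whatever module the error names):
mvn package -DskipTests -Pdist,native -Dtar -Dsnappy.lib=/usr/local/lib -Dbundle.snappy -rf :hadoop-common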
After a successful build, list the native libraries:
ll /home/bigdata/hadoop-2.9.2-src/hadoop-dist/target/hadoop-2.9.2/lib/native
You should see the following files:
libhadoop.a libhadooputils.a libsnappy.a libsnappy.so.1.3.0
libhadooppipes.a libhdfs.a libsnappy.la
libhadoop.so libhdfs.so libsnappy.so
libhadoop.so.1.0.0 libhdfs.so.0.0.0 libsnappy.so.1
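To double-check the result is a proper 64-bit build, you can inspect one of the libraries (output wording varies by system, but it should say ELF 64-bit):
cd /home/bigdata/hadoop-2.9.2-src/hadoop-dist/target/hadoop-2.9.2/lib/native
file libhadoop.so.1.0.0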
Copy the library files into Hadoop, then sync them to the cluster:
cp -r /home/bigdata/hadoop-2.9.2-src/hadoop-dist/target/hadoop-2.9.2/lib/native/* $HADOOP_HOME/lib/native/
scp -r $HADOOP_HOME/lib/native/* bigdata@intellif-bigdata-node1:$HADOOP_HOME/lib/native/
scp -r $HADOOP_HOME/lib/native/* bigdata@intellif-bigdata-node2:$HADOOP_HOME/lib/native/
scp -r $HADOOP_HOME/lib/native/* bigdata@intellif-bigdata-node3:$HADOOP_HOME/lib/native/
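With more nodes, a small loop keeps this manageable (same hosts as above, just written as a loop):
for host in intellif-bigdata-node1 intellif-bigdata-node2 intellif-bigdata-node3; do
  scp -r $HADOOP_HOME/lib/native/* bigdata@$host:$HADOOP_HOME/lib/native/
done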
Add the following environment variables to $HADOOP_HOME/etc/hadoop/hadoop-env.sh:
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native/
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_COMMON_LIB_NATIVE_DIR"
Edit $HADOOP_HOME/etc/hadoop/core-site.xml and add:
<property>
  <name>io.compression.codecs</name>
  <value>
    org.apache.hadoop.io.compress.GzipCodec,
    org.apache.hadoop.io.compress.DefaultCodec,
    org.apache.hadoop.io.compress.BZip2Codec,
    org.apache.hadoop.io.compress.SnappyCodec
  </value>
</property>
Edit $HADOOP_HOME/etc/hadoop/mapred-site.xml and add:
<property>
  <name>mapred.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapred.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
<property>
  <name>mapred.compress.map.output</name>
  <value>true</value>
</property>
<property>
  <name>mapred.map.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
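For reference, the mapred.* names above are the old Hadoop 1.x property names; Hadoop 2.x still honors them but prefers the mapreduce.* equivalents (mapreduce.output.fileoutputformat.compress, mapreduce.output.fileoutputformat.compress.codec, mapreduce.map.output.compress, mapreduce.map.output.compress.codec). The map-output pair, for example, would be written as:
<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>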
Sync these configuration changes to all cluster nodes.
Restart HDFS and check again:
hadoop checknative -a
Native library checking:
hadoop: true /home/bigdata/hadoop/lib/native/libhadoop.so
zlib: true /lib/x86_64-linux-gnu/libz.so.1
snappy: true /home/bigdata/hadoop/lib/native/libsnappy.so.1
zstd : false
lz4: true revision:10301
bzip2: false
On every HBase node, create the directory:
mkdir -p $HBASE_HOME/lib/native/Linux-amd64-64
On every node, copy the library files into HBase:
cp -r $HADOOP_HOME/lib/native/* $HBASE_HOME/lib/native/Linux-amd64-64/
On every node, add the following environment variables to $HBASE_HOME/conf/hbase-env.sh:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/bigdata/hadoop/lib/native/:/usr/local/lib/
export HBASE_LIBRARY_PATH=$HBASE_LIBRARY_PATH:$HBASE_HOME/lib/native/Linux-amd64-64/:/usr/local/lib/
On every node, add to $HBASE_HOME/conf/hbase-site.xml:
<property>
  <name>hbase.regionserver.codecs</name>
  <value>snappy</value>
</property>
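Before restarting, HBase ships a utility that exercises a codec end to end, which is a quick way to confirm snappy works from HBase's side (the /tmp path is just a scratch file):
hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/snappy-test snappy
It prints SUCCESS when the codec loads and round-trips correctly.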
Then restart HBase.
Create a table to verify:
create 'tsnappy', { NAME => 'f', COMPRESSION => 'snappy'}
describe 'tsnappy'
put 'tsnappy', 'row1', 'f:col1', 'value'
scan 'tsnappy'
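To confirm the data is actually stored compressed, you can flush the table and look at its store files on HDFS (the path below assumes the default hbase.rootdir layout; adjust if yours differs). In the hbase shell:
flush 'tsnappy'
Then from the OS shell:
hdfs dfs -du -h /hbase/data/default/tsnappy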