Enabling LZO compression is very useful for small clusters: the compression ratio brings raw logs down to roughly 1/3 of their original size, and decompression is also reasonably fast.
LZO is not natively supported on Linux, so a few packages have to be downloaded and installed. At least three are needed here: lzo, lzop, and hadoop-gpl-packaging.
The main purpose of gpl-packaging is to build indexes for compressed lzo files. Without an index an lzo file cannot be split, so no matter whether the compressed file is larger than the HDFS block size, only the default 2 map operations will be launched.
[root@localhost ~]# wget http://www.oberhumer.com/opensource/lzo/download/lzo-2.06.tar.gz
[root@localhost ~]# tar -zxvf lzo-2.06.tar.gz
[root@localhost ~]# cd lzo-2.06
[root@localhost ~]# export CFLAGS=-m64
[root@localhost ~]# ./configure --enable-shared --prefix=/usr/local/hadoop/lzo/
[root@localhost ~]# make && sudo make install
After the lzo package is built, a number of files are generated under /usr/local/hadoop/lzo/.
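You can check the result with ls; a standard autotools install lays out include/, lib/, and share/ under the prefix:
[root@localhost ~]# ls /usr/local/hadoop/lzo/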
Tar up everything under the /usr/local/hadoop/lzo directory and sync it to every machine in the cluster.
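A minimal sketch of that packaging-and-sync step, assuming a hypothetical ~/slaves.txt host list and passwordless ssh for root:
[root@localhost ~]# cd /usr/local/hadoop && tar -zcf lzo.tar.gz lzo
[root@localhost ~]# for host in $(cat ~/slaves.txt); do scp lzo.tar.gz $host:/usr/local/hadoop/; ssh $host "cd /usr/local/hadoop && tar -zxf lzo.tar.gz"; done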
Building the lzo package requires some development tools; the lzo build environment can be set up with the following command:
[root@localhost ~]# yum -y install lzo-devel zlib-devel gcc autoconf automake libtool
What we download here is Twitter's hadoop-lzo, which can be built with Maven (for installing Maven, see this blog's post "Installing and Configuring Maven from the Linux Command Line").
[root@localhost ~]# wget https://github.com/twitter/hadoop-lzo/archive/master.zip
The downloaded file is named master; it is a zip archive and can be unpacked with:
[root@localhost ~]# unzip master
The unpacked folder is named hadoop-lzo-master.
Of course, if git is installed on your machine, you can also fetch the code with:
[root@localhost ~]# git clone https://github.com/twitter/hadoop-lzo.git
The pom.xml in hadoop-lzo depends on hadoop 2.1.0-beta; since we are using Hadoop 2.2.0 here, it is recommended to change the hadoop version to 2.2.0:
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<hadoop.current.version>2.2.0</hadoop.current.version>
<hadoop.old.version>1.0.4</hadoop.old.version>
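If you prefer to make that edit from the shell, a one-liner like this works (a sketch; it assumes the stock value is 2.1.0-beta and the checkout directory is hadoop-lzo-master):
[root@localhost ~]# sed -i 's|<hadoop.current.version>2.1.0-beta</hadoop.current.version>|<hadoop.current.version>2.2.0</hadoop.current.version>|' hadoop-lzo-master/pom.xml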
Then enter the hadoop-lzo-master directory and run the following commands in order:
[root@localhost ~]# export CFLAGS=-m64
[root@localhost ~]# export CXXFLAGS=-m64
[root@localhost ~]# export C_INCLUDE_PATH=/usr/local/hadoop/lzo/include
[root@localhost ~]# export LIBRARY_PATH=/usr/local/hadoop/lzo/lib
[root@localhost ~]# mvn clean package -Dmaven.test.skip=true
[root@localhost ~]# cd target/native/Linux-amd64-64
[root@localhost ~]# tar -cBf - -C lib . | tar -xBvf - -C ~
[root@localhost ~]# cp ~/libgplcompression* $HADOOP_HOME/lib/native/
[root@localhost ~]# cp target/hadoop-lzo-0.4.18-SNAPSHOT.jar $HADOOP_HOME/share/hadoop/common/
After the tar -cBf - -C lib . | tar -xBvf - -C ~ command, the following files appear under ~:
[root@localhost ~]# ls -l
-rw-r--r-- 1 libgplcompression.a
-rw-r--r-- 1 libgplcompression.la
lrwxrwxrwx 1 libgplcompression.so -> libgplcompression.so.0.0.0
lrwxrwxrwx 1 libgplcompression.so.0 -> libgplcompression.so.0.0.0
-rwxr-xr-x 1 libgplcompression.so.0.0.0
Here libgplcompression.so and libgplcompression.so.0 are symlinks pointing to libgplcompression.so.0.0.0. Sync the freshly generated libgplcompression* files and target/hadoop-lzo-0.4.18-SNAPSHOT.jar to the corresponding directories on every machine in the cluster.
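As a quick sanity check that the native library landed where Hadoop loads it from (optional):
[root@localhost ~]# ls $HADOOP_HOME/lib/native/ | grep libgplcompression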
1. Add the following to $HADOOP_HOME/etc/hadoop/hadoop-env.sh:
export LD_LIBRARY_PATH=/usr/local/hadoop/lzo/lib
2. Add the following to $HADOOP_HOME/etc/hadoop/core-site.xml:
<property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.GzipCodec,
           org.apache.hadoop.io.compress.DefaultCodec,
           com.hadoop.compression.lzo.LzoCodec,
           com.hadoop.compression.lzo.LzopCodec,
           org.apache.hadoop.io.compress.BZip2Codec
    </value>
</property>
<property>
    <name>io.compression.codec.lzo.class</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
3. Add the following to $HADOOP_HOME/etc/hadoop/mapred-site.xml:
<property>
    <name>mapred.compress.map.output</name>
    <value>true</value>
</property>
<property>
    <name>mapred.map.output.compression.codec</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
<property>
    <name>mapred.child.env</name>
    <value>LD_LIBRARY_PATH=/usr/local/hadoop/lzo/lib</value>
</property>
Sync the modified configuration files to all machines in the cluster and restart the Hadoop cluster; lzo can then be used within Hadoop.
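A quick smoke test after the restart (the paths here are arbitrary): compress a small file locally, upload it, and let hadoop fs -text decode it through the configured codecs:
[root@localhost ~]# lzop -c /etc/hosts > /tmp/hosts.lzo
[root@localhost ~]# hadoop fs -put /tmp/hosts.lzo /tmp/
[root@localhost ~]# hadoop fs -text /tmp/hosts.lzo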
CREATE TABLE lzo (
  ip STRING,
  user STRING,
  time STRING,
  request STRING,
  status STRING,
  size STRING,
  rt STRING,
  referer STRING,
  agent STRING,
  forwarded STRING
)
PARTITIONED BY (
  date STRING,
  host STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS INPUTFORMAT "com.hadoop.mapred.DeprecatedLzoTextInputFormat"
OUTPUTFORMAT "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat";
LOAD DATA LOCAL INPATH '/home/hadoop/data/access_20151230_25.log.lzo' INTO TABLE lzo PARTITION(date='20151229', host='25');
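To confirm the load registered the partition, a quick check from the shell (table name as in the DDL above):
[root@localhost ~]# hive -e "SHOW PARTITIONS lzo;"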
The file /home/hadoop/data/access_20151219.log looks like this:
xxx.xxx.xx.xxx - [23/Dec/2015:23:22:38 +0800] "GET /ClientGetResourceDetail.action?id=318880&token=Ocm HTTP/1.1" 200 199 0.008 "xxx.com" "Android4.1.2/LENOVO/Lenovo A706/ch_lenovo/80" "-"
Simply running lzop /home/hadoop/data/access_20151219.log produces the lzo-compressed file /home/hadoop/data/access_20151219.log.lzo.
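If a whole directory of logs needs compressing, a simple loop does it (lzop keeps the original file and writes a .lzo next to it):
[root@localhost ~]# for f in /home/hadoop/data/*.log; do lzop "$f"; done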
1. Indexing lzo files in batch
$HADOOP_HOME/bin/hadoop jar \
/home/hadoop/hadoop-2.2.0/share/hadoop/common/hadoop-lzo-0.4.20-SNAPSHOT.jar \
com.hadoop.compression.lzo.DistributedLzoIndexer \
/user/hive/warehouse/lzo
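Once the indexer finishes, every .lzo file should have a matching .lzo.index file next to it; the index is what makes the file splittable across map tasks. You can verify with:
[root@localhost ~]# hadoop fs -ls -R /user/hive/warehouse/lzo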
2. Indexing a single lzo file
$HADOOP_HOME/bin/hadoop jar \
/home/hadoop/hadoop-2.2.0/share/hadoop/common/hadoop-lzo-0.4.20-SNAPSHOT.jar \
com.hadoop.compression.lzo.LzoIndexer \
/user/hive/warehouse/lzo/20151228/lzo_test_20151228.lzo
set hive.exec.reducers.max=10;
set mapred.reduce.tasks=10;
select ip,rt from lzo limit 10;
If the Hive console prints output in a format like the following, everything is working!
hive> set hive.exec.reducers.max=10;
hive> set mapred.reduce.tasks=10;
hive> select ip,rt from lzo limit 10;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1388065803340_0009, Tracking URL = http://mycluster:8088/proxy/application_1388065803340_0009/
Kill Command = /home/hadoop/hadoop-2.2.0/bin/hadoop job -kill job_1388065803340_0009
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2013-12-27 09:13:39,163 Stage-1 map = 0%, reduce = 0%
2013-12-27 09:13:45,343 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.22 sec
2013-12-27 09:13:46,369 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.22 sec
MapReduce Total cumulative CPU time: 1 seconds 220 msec
Ended Job = job_1388065803340_0009
MapReduce Jobs Launched:
Job 0: Map: 1 Cumulative CPU: 1.22 sec HDFS Read: 63570 HDFS Write: 315 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 220 msec
OK
xxx.xxx.xx.xxx "XXX.com"
Time taken: 17.498 seconds, Fetched: 10 row(s)
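For an existing table, the input and output formats can also be switched in place with ALTER TABLE ... SET FILEFORMAT, as below: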
ALTER TABLE lzo SET FILEFORMAT
INPUTFORMAT "com.hadoop.mapred.DeprecatedLzoTextInputFormat"
OUTPUTFORMAT "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"
SERDE "org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe";