This is an original post; when reposting, please credit the source: http://guoyunsky.iteye.com/blog/1289475
Welcome to join the Hadoop QQ super group: 180941958
An earlier post of mine, http://guoyunsky.iteye.com/blog/1237327, walked through some hadoop-lzo problems and also described how to install it. I have since found that that installation method runs into trouble.
That method is also the one circulating around the web, so let me correct it here. The background, briefly: hadoop-lzo-xxx grew out of hadoop-gpl-compression-xxx, which was originally hosted on Google Code at http://code.google.com/p/hadoop-gpl-compression/ . Because of license issues it was later moved to GitHub as today's hadoop-lzo-xxx: https://github.com/kevinweil/hadoop-lzo . The Hadoop LZO guides you find online are all written against hadoop-gpl-compression, which dates back to 2009 and is no longer fully compatible with current Hadoop versions, so things break. Following the online instructions, you use the hadoop-lzo-xxx jar for Hadoop compatibility but install the hadoop-gpl-compression native library, and the job fails with the following error:
11/12/02 14:28:41 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library
11/12/02 14:28:41 WARN lzo.LzoCompressor: java.lang.NoSuchFieldError: workingMemoryBuf
11/12/02 14:28:41 ERROR lzo.LzoCodec: Failed to load/initialize native-lzo library
11/12/02 14:28:41 WARN mapred.LocalJobRunner: job_local_0001
java.lang.RuntimeException: native-lzo library not available
    at com.hadoop.compression.lzo.LzoCodec.createCompressor(LzoCodec.java:165)
    at com.hadoop.compression.lzo.LzopCodec.createOutputStream(LzopCodec.java:50)
    at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.getRecordWriter(TextOutputFormat.java:132)
    at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:520)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:635)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:322)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210)
I only found the cause by debugging the source. hadoop-lzo actually delegates compression to the C lzo library, which it calls through JNI. If you load the native library shipped with hadoop-gpl-compression while running the hadoop-lzo-xxx jar, the two versions do not match. The correct approach is to put the native libraries built from hadoop-lzo-xxx itself into /usr/local/lib. And each time you upgrade to a new hadoop-lzo-xxx version, you may well have to copy that version's native libraries into /usr/local/lib again; test it to be sure.
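Before installing, it is worth checking whether a stale libgplcompression from an earlier hadoop-gpl-compression install is still lying around; the two paths below are just the locations used in this post, so adjust them to your layout:
ls -l /usr/local/lib/libgplcompression*
ls -l $HADOOP_HOME/lib/native/Linux-amd64-64/libgplcompression*
If either command finds libraries you did not build from the hadoop-lzo-xxx source, remove or replace them first.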
Along the way, let me also explain how hadoop-lzo-xxx verifies its native libraries, so we get a more systematic picture of why hadoop-lzo reports this series of errors.
1) First, hadoop-lzo loads the gplcompression native library through JNI; if it cannot be loaded, it reports the "Could not load native gpl library" error. The code:
static {
    try {
        // try to load the lib
        System.loadLibrary("gplcompression");
        nativeLibraryLoaded = true;
        LOG.info("Loaded native gpl library");
    } catch (Throwable t) {
        LOG.error("Could not load native gpl library", t);
        nativeLibraryLoaded = false;
    }
}
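System.loadLibrary resolves libgplcompression.so against java.library.path, so a quick way to test whether the JVM can see the native library at all, outside of Hadoop entirely, is a standalone check like the one below (LoadCheck is a hypothetical class of mine, not part of hadoop-lzo):
public class LoadCheck {
    public static void main(String[] args) {
        System.out.println("java.library.path = " + System.getProperty("java.library.path"));
        // Throws UnsatisfiedLinkError here if libgplcompression.so cannot be found
        System.loadLibrary("gplcompression");
        System.out.println("libgplcompression loaded successfully");
    }
}
Run it as java -Djava.library.path=/usr/local/lib LoadCheck; an UnsatisfiedLinkError here means step 1 is already failing before any version question arises.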
2) Once gplcompression has been loaded, it still has to be initialized before it can be called; if initialization fails, for example through the version conflict described above, you get another family of errors. Loading and initialization are split in two: compression, handled by the Java class LzoCompressor, and decompression, handled by LzoDecompressor. First, here is how LzoCompressor loads and initializes:
static {
    if (GPLNativeCodeLoader.isNativeCodeLoaded()) {
        // Initialize the native library
        try {
            initIDs();
            nativeLzoLoaded = true;
        } catch (Throwable t) {
            // Ignore failure to load/initialize native-lzo
            LOG.warn(t.toString());
            nativeLzoLoaded = false;
        }
        LZO_LIBRARY_VERSION = (nativeLzoLoaded) ? 0xFFFF & getLzoLibraryVersion()
                                                : -1;
    } else {
        LOG.error("Cannot load " + LzoCompressor.class.getName() +
                  " without native-hadoop library!");
        nativeLzoLoaded = false;
        LZO_LIBRARY_VERSION = -1;
    }
}
The warning I got, WARN lzo.LzoCompressor: java.lang.NoSuchFieldError: workingMemoryBuf, is emitted by the LOG.warn(t.toString()) in this block. initIDs() is a native method that, in the usual JNI pattern, looks up and caches the field IDs of LzoCompressor; when the jar and the native library come from different versions, that lookup fails on a field one side has renamed, and it surfaces in Java as the NoSuchFieldError above. Note also that this step again requires gplcompression to have been loaded first; if it was not, the "without native-hadoop library!" error is logged instead.
Decompression, in LzoDecompressor, follows the same pattern, so I will not walk through it again; the code:
static {
    if (GPLNativeCodeLoader.isNativeCodeLoaded()) {
        // Initialize the native library
        try {
            initIDs();
            nativeLzoLoaded = true;
        } catch (Throwable t) {
            // Ignore failure to load/initialize native-lzo
            LOG.warn(t.toString());
            nativeLzoLoaded = false;
        }
        LZO_LIBRARY_VERSION = (nativeLzoLoaded) ? 0xFFFF & getLzoLibraryVersion()
                                                : -1;
    } else {
        LOG.error("Cannot load " + LzoDecompressor.class.getName() +
                  " without native-hadoop library!");
        nativeLzoLoaded = false;
        LZO_LIBRARY_VERSION = -1;
    }
}
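Putting the two steps together, you can probe the same flags yourself before submitting a job. Here is a minimal sketch; the isNativeCodeLoaded()/isNativeLzoLoaded() accessors match the hadoop-lzo 0.4.x source as far as I can tell, but treat them as an assumption if your version differs:
import com.hadoop.compression.lzo.GPLNativeCodeLoader;
import com.hadoop.compression.lzo.LzoCompressor;
import com.hadoop.compression.lzo.LzoDecompressor;

public class NativeLzoCheck {
    public static void main(String[] args) {
        // Step 1: did System.loadLibrary("gplcompression") succeed?
        System.out.println("gplcompression loaded: "
                + GPLNativeCodeLoader.isNativeCodeLoaded());
        // Step 2: did initIDs() succeed on the compression side...
        System.out.println("native-lzo (compress): "
                + LzoCompressor.isNativeLzoLoaded());
        // ...and on the decompression side?
        System.out.println("native-lzo (decompress): "
                + LzoDecompressor.isNativeLzoLoaded());
    }
}
Run it with the hadoop-lzo jar on the classpath and -Djava.library.path pointing at the directory holding libgplcompression.so. Output of true/false/false means step 1 succeeded but initIDs() failed, i.e. exactly the version mismatch described above.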
The above covers essentially all the problems you hit when installing and using hadoop-lzo. Finally, back to the subject of this post: installing hadoop-lzo correctly.
1) First, download the source from https://github.com/kevinweil/hadoop-lzo/ ; I downloaded it to /home/guoyun/Downloads and unpacked it to /home/guoyun/hadoop/kevinweil-hadoop-lzo-2dd49ec.
2) Build the native libraries and the jar with ant, using the following command:
ant tar
This produces a tarball under the build directory. Unpack it, and inside you will find the jar, hadoop-lzo-0.4.14.jar. Then copy everything under lib/native/Linux-amd64-64/ into both $HADOOP_HOME/lib and /usr/local/lib.
Note: copying to /usr/local/lib is only a convenience for debugging; in a production environment that copy is unnecessary.
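For reference, steps 1) and 2) on my layout come down to the commands below; the tarball and directory names assume the build emits build/hadoop-lzo-0.4.14.tar.gz, so adjust them to whatever your build actually produces:
cd /home/guoyun/hadoop/kevinweil-hadoop-lzo-2dd49ec
ant tar
cd build
tar -zxf hadoop-lzo-0.4.14.tar.gz
# native libraries go to both locations (the /usr/local/lib copy is the debugging convenience noted above)
cp hadoop-lzo-0.4.14/lib/native/Linux-amd64-64/* $HADOOP_HOME/lib
cp hadoop-lzo-0.4.14/lib/native/Linux-amd64-64/* /usr/local/lib
# putting the jar on Hadoop's classpath as well is my assumption, not stated above
cp hadoop-lzo-0.4.14/hadoop-lzo-0.4.14.jar $HADOOP_HOME/lib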