Custom core-site.xml and other configuration files are not picked up when running the hadoopdb/hive/hadoop source code

1. Preface

The hadoopdb/hive/hadoop source trees contain many test main programs, and these programs usually need a Configuration object, e.g. one created via new JobConf(conf). If you run such a program directly, it may read only the default configuration files bundled inside the jars and ignore the new configuration files you have defined under the conf directory.


2. Solution

Remember to add the conf directory to the project's classpath, or to the application's runtime classpath.
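If editing the launch configuration is inconvenient, the same effect can be achieved programmatically with Configuration.addResource. The sketch below is a minimal illustration, not part of the original article; the file paths are assumptions and should point at your own $HADOOP_HOME/conf directory:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;

public class LoadCustomConf {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Explicitly register the custom files so they override the
        // defaults bundled in the jars. These paths are hypothetical --
        // substitute the location of your own conf directory.
        conf.addResource(new Path("/opt/hadoop/conf/core-site.xml"));
        conf.addResource(new Path("/opt/hadoop/conf/hdfs-site.xml"));

        JobConf job = new JobConf(conf);
        // With the custom core-site.xml loaded, this should print the
        // overridden value (e.g. hdfs://localhost:9000) instead of file:///
        System.out.println(job.get("fs.default.name"));
    }
}
```

Resources added later win, so addResource calls placed after the constructor override the jar-bundled defaults just as a conf directory on the classpath would.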

The following walkthrough uses Configuration itself as the test case:

The org.apache.hadoop.conf.Configuration class provides a main method intended specifically for testing.

1. Open Run -> Debug Configurations..., find Java Application in the left sidebar, and double-click it to create a new debug configuration. Set Name to configuration, choose org.apache.hadoop.conf.Configuration as the main class, and click Debug.

It prints the following default settings:

<?xml version="1.0" encoding="UTF-8" standalone="no"?><configuration>
<property><name>fs.file.impl</name><value>org.apache.hadoop.fs.LocalFileSystem</value></property>
<property><name>hadoop.logfile.count</name><value>10</value></property>
<property><name>fs.har.impl.disable.cache</name><value>true</value></property>
<property><name>ipc.client.kill.max</name><value>10</value></property>
<property><name>fs.s3n.impl</name><value>org.apache.hadoop.fs.s3native.NativeS3FileSystem</value></property>
<property><name>io.mapfile.bloom.size</name><value>1048576</value></property>
<property><name>fs.s3.sleepTimeSeconds</name><value>10</value></property>
<property><name>fs.s3.block.size</name><value>67108864</value></property>
<property><name>fs.kfs.impl</name><value>org.apache.hadoop.fs.kfs.KosmosFileSystem</value></property>
<property><name>ipc.server.listen.queue.size</name><value>128</value></property>
<property><name>hadoop.util.hash.type</name><value>murmur</value></property>
<property><name>ipc.client.tcpnodelay</name><value>false</value></property>
<property><name>io.file.buffer.size</name><value>4096</value></property>
<property><name>fs.s3.buffer.dir</name><value>${hadoop.tmp.dir}/s3</value></property>
<property><name>hadoop.tmp.dir</name><value>/tmp/hadoop-${user.name}</value></property>
<property><name>fs.trash.interval</name><value>0</value></property>
<property><name>io.seqfile.sorter.recordlimit</name><value>1000000</value></property>
<property><name>fs.ftp.impl</name><value>org.apache.hadoop.fs.ftp.FTPFileSystem</value></property>
<property><name>fs.checkpoint.size</name><value>67108864</value></property>
<property><name>fs.checkpoint.period</name><value>3600</value></property>
<property><name>fs.hftp.impl</name><value>org.apache.hadoop.hdfs.HftpFileSystem</value></property>
<property><name>hadoop.native.lib</name><value>true</value></property>
<property><name>fs.hsftp.impl</name><value>org.apache.hadoop.hdfs.HsftpFileSystem</value></property>
<property><name>ipc.client.connect.max.retries</name><value>10</value></property>
<property><name>fs.har.impl</name><value>org.apache.hadoop.fs.HarFileSystem</value></property>
<property><name>fs.s3.maxRetries</name><value>4</value></property>
<property><name>topology.node.switch.mapping.impl</name><value>org.apache.hadoop.net.ScriptBasedMapping</value></property>
<property><name>hadoop.logfile.size</name><value>10000000</value></property>
<property><name>fs.checkpoint.dir</name><value>${hadoop.tmp.dir}/dfs/namesecondary</value></property>
<property><name>fs.checkpoint.edits.dir</name><value>${fs.checkpoint.dir}</value></property>
<property><name>topology.script.number.args</name><value>100</value></property>
<property><name>fs.s3.impl</name><value>org.apache.hadoop.fs.s3.S3FileSystem</value></property>
<property><name>ipc.client.connection.maxidletime</name><value>10000</value></property>
<property><name>io.compression.codecs</name><value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec</value></property>
<property><name>ipc.server.tcpnodelay</name><value>false</value></property>
<property><name>io.serializations</name><value>org.apache.hadoop.io.serializer.WritableSerialization</value></property>
<property><name>ipc.client.idlethreshold</name><value>4000</value></property>
<property><name>fs.hdfs.impl</name><value>org.apache.hadoop.hdfs.DistributedFileSystem</value></property>
<property><name>io.bytes.per.checksum</name><value>512</value></property>
<property><name>io.mapfile.bloom.error.rate</name><value>0.005</value></property>
<property><name>io.seqfile.lazydecompress</name><value>true</value></property>
<property><name>local.cache.size</name><value>10737418240</value></property>
<property><name>hadoop.security.authorization</name><value>false</value></property>
<property><name>hadoop.rpc.socket.factory.class.default</name><value>org.apache.hadoop.net.StandardSocketFactory</value></property>
<property><name>io.skip.checksum.errors</name><value>false</value></property>
<property><name>io.seqfile.compress.blocksize</name><value>1000000</value></property>
<property><name>fs.ramfs.impl</name><value>org.apache.hadoop.fs.InMemoryFileSystem</value></property>
<property><name>webinterface.private.actions</name><value>false</value></property>
<property><name>fs.default.name</name><value>file:///</value></property>
</configuration>
2. Open the Debug Configurations... dialog again and, on the Classpath tab of the configuration launch entry, add the $HADOOP_HOME/conf folder. If that conf directory contains custom configuration files, the output below shows the custom settings overriding the defaults.

<?xml version="1.0" encoding="UTF-8" standalone="no"?><configuration>
<property><name>hadoopdb.config.replication</name><value>false</value></property>
<property><name>fs.file.impl</name><value>org.apache.hadoop.fs.LocalFileSystem</value></property>
<property><name>hadoop.logfile.count</name><value>10</value></property>
<property><name>fs.har.impl.disable.cache</name><value>true</value></property>
<property><name>ipc.client.kill.max</name><value>10</value></property>
<property><name>fs.s3n.impl</name><value>org.apache.hadoop.fs.s3native.NativeS3FileSystem</value></property>
<property><name>io.mapfile.bloom.size</name><value>1048576</value></property>
<property><name>fs.s3.sleepTimeSeconds</name><value>10</value></property>
<property><name>fs.s3.block.size</name><value>67108864</value></property>
<property><name>fs.kfs.impl</name><value>org.apache.hadoop.fs.kfs.KosmosFileSystem</value></property>
<property><name>ipc.server.listen.queue.size</name><value>128</value></property>
<property><name>hadoop.util.hash.type</name><value>murmur</value></property>
<property><name>ipc.client.tcpnodelay</name><value>false</value></property>
<property><name>io.file.buffer.size</name><value>4096</value></property>
<property><name>fs.s3.buffer.dir</name><value>${hadoop.tmp.dir}/s3</value></property>
<property><name>hadoop.tmp.dir</name><value>/tmp/hadoop-${user.name}</value></property>
<property><name>fs.trash.interval</name><value>0</value></property>
<property><name>io.seqfile.sorter.recordlimit</name><value>1000000</value></property>
<property><name>fs.ftp.impl</name><value>org.apache.hadoop.fs.ftp.FTPFileSystem</value></property>
<property><name>fs.checkpoint.size</name><value>67108864</value></property>
<property><name>fs.checkpoint.period</name><value>3600</value></property>
<property><name>fs.hftp.impl</name><value>org.apache.hadoop.hdfs.HftpFileSystem</value></property>
<property><name>hadoopdb.fetch.size</name><value>1000</value></property> # changed
<property><name>hadoop.native.lib</name><value>true</value></property>
<property><name>fs.hsftp.impl</name><value>org.apache.hadoop.hdfs.HsftpFileSystem</value></property>
<property><name>ipc.client.connect.max.retries</name><value>10</value></property>
<property><name>fs.har.impl</name><value>org.apache.hadoop.fs.HarFileSystem</value></property>
<property><name>fs.s3.maxRetries</name><value>4</value></property>
<property><name>topology.node.switch.mapping.impl</name><value>org.apache.hadoop.net.ScriptBasedMapping</value></property>
<property><name>hadoop.logfile.size</name><value>10000000</value></property>
<property><name>fs.checkpoint.dir</name><value>${hadoop.tmp.dir}/dfs/namesecondary</value></property>
<property><name>fs.checkpoint.edits.dir</name><value>${fs.checkpoint.dir}</value></property>
<property><name>topology.script.number.args</name><value>100</value></property>  # overridden by a new value
<property><name>fs.s3.impl</name><value>org.apache.hadoop.fs.s3.S3FileSystem</value></property>
<property><name>ipc.client.connection.maxidletime</name><value>10000</value></property>
<property><name>io.compression.codecs</name><value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec</value></property>
<property><name>hadoopdb.config.file</name><value>HadoopDB.xml</value></property>   # new property added
<property><name>ipc.server.tcpnodelay</name><value>false</value></property>
<property><name>io.serializations</name><value>org.apache.hadoop.io.serializer.WritableSerialization</value></property>
<property><name>ipc.client.idlethreshold</name><value>4000</value></property>
<property><name>fs.hdfs.impl</name><value>org.apache.hadoop.hdfs.DistributedFileSystem</value></property>
<property><name>io.bytes.per.checksum</name><value>512</value></property>
<property><name>io.mapfile.bloom.error.rate</name><value>0.005</value></property>
<property><name>io.seqfile.lazydecompress</name><value>true</value></property>
<property><name>local.cache.size</name><value>10737418240</value></property>
<property><name>hadoop.security.authorization</name><value>false</value></property>
<property><name>hadoop.rpc.socket.factory.class.default</name><value>org.apache.hadoop.net.StandardSocketFactory</value></property>
<property><name>io.skip.checksum.errors</name><value>false</value></property>
<property><name>io.seqfile.compress.blocksize</name><value>1000000</value></property>
<property><name>fs.ramfs.impl</name><value>org.apache.hadoop.fs.InMemoryFileSystem</value></property>
<property><name>webinterface.private.actions</name><value>false</value></property>
<property><name>fs.default.name</name><value>hdfs://localhost:9000</value></property> # overridden by a new value
</configuration>

3. Summary

This happens because, internally, the Configuration class loads its resources as follows:

    // Resolve resources through the thread's context class loader,
    // falling back to the loader that loaded Configuration itself.
    ClassLoader cL = Thread.currentThread().getContextClassLoader();
    if (cL == null) {
      cL = Configuration.class.getClassLoader();
    }

It then calls cL.getResource("XXX.xml") to locate each resource, and getResource resolves names relative to the classpath root. For example, given this layout:

|-com.cn.test
|    |-Test.class
|    |-test2.txt
|-test1.txt

Test.class.getClassLoader().getResource("test1.txt") returns the URL of test1.txt, while

Test.class.getClassLoader().getResource("test2.txt") returns null, because test2.txt sits inside the com/cn/test package directory rather than at the classpath root (it would only be found as "com/cn/test/test2.txt").
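This lookup rule can be demonstrated with plain Java, without Hadoop. The sketch below builds a throwaway classpath root in a temporary directory and queries it through a URLClassLoader; the file and package names mirror the layout above:

```java
import java.io.IOException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

public class ResourceLookupDemo {
    public static void main(String[] args) throws IOException {
        // Create a fake "classpath root" with one file at the root
        // and one file inside a package-like subdirectory.
        Path root = Files.createTempDirectory("cp-root");
        Files.writeString(root.resolve("test1.txt"), "at root");
        Path pkg = Files.createDirectories(root.resolve("com/cn/test"));
        Files.writeString(pkg.resolve("test2.txt"), "inside package");

        try (URLClassLoader cl = new URLClassLoader(new URL[]{root.toUri().toURL()})) {
            // Resource names are resolved relative to the classpath root:
            System.out.println(cl.getResource("test1.txt") != null);             // true
            System.out.println(cl.getResource("test2.txt") != null);             // false
            System.out.println(cl.getResource("com/cn/test/test2.txt") != null); // true
        }
    }
}
```

This is exactly why adding $HADOOP_HOME/conf to the classpath works: the conf directory becomes another classpath root, so core-site.xml and friends become visible to getResource by their bare names.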


So, if you need additional resources to be found, add the directories that contain them to the classpath.
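When launching from the command line instead of Eclipse, the conf directory is added the same way. The paths below are hypothetical; adjust them to your installation:

```shell
# Put $HADOOP_HOME/conf first on the classpath so its core-site.xml
# overrides the defaults bundled in the Hadoop jars.
java -cp "$HADOOP_HOME/conf:$HADOOP_HOME/hadoop-core.jar:$HADOOP_HOME/lib/*" \
     org.apache.hadoop.conf.Configuration
```

Classpath entries are searched in order, so placing the conf directory first guarantees its files win the getResource lookup.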


