Could not instantiate implementation: com.thinkaurelius.titan.diskstorage.hbase.HBaseStoreManager

Preface

Fixing an error that comes up when integrating Apache Atlas for metadata management on a Hadoop HA cluster.

The error

    Factory method 'get' threw exception; nested exception is java.lang.IllegalArgumentException: Could not instantiate implementation: com.thinkaurelius.titan.diskstorage.hbase.HBaseStoreManager
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:479)
        at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306)
        at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
        at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
        at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:866)
        at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:542)
Key part of the stack trace

    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:241)
    ... 106 more
    Caused by: java.net.UnknownHostException: mycluster
    ... 120 more

Cause

Atlas relies on HBase to store its metadata, Solr to manage its index, and Kafka or the REST API for data exchange. When loading the HBase configuration, it fails to resolve mycluster, the Hadoop HA cluster's logical name (the HDFS nameservice) referenced in hbase-site.xml! A nameservice is not a real hostname: a client can only resolve it when hdfs-site.xml, which defines the nameservice and its NameNodes, is on the classpath. Without it, the HBase client hands mycluster to DNS as if it were a host, hence the UnknownHostException.

hbase-site.xml:

[Image 1: hbase-site.xml]
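
For context, the relevant part of an hbase-site.xml on an HA cluster looks roughly like the sketch below. The property names are standard HBase settings; the concrete values (the /hbase path, the ZooKeeper hosts) are assumptions for illustration:

    <!-- hbase-site.xml (sketch; values are assumptions) -->
    <configuration>
      <!-- The root directory references the HDFS nameservice, not a real host. -->
      <property>
        <name>hbase.rootdir</name>
        <value>hdfs://mycluster/hbase</value>
      </property>
      <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
      </property>
      <!-- Assumed ZooKeeper quorum. -->
      <property>
        <name>hbase.zookeeper.quorum</name>
        <value>node1,node2,node3</value>
      </property>
    </configuration>

The hdfs://mycluster URI is exactly the value that fails to resolve when hdfs-site.xml is absent from the classpath.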

Solution

In ${atlas_home}/conf/hbase we had already symlinked in the conf directory of the external HBase, as sketched below!
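
A minimal sketch of that symlink, assuming HBase is installed under /opt/hbase (both paths are assumptions; adjust them to your installation):

    # Link the external HBase conf directory into Atlas.
    # Both paths are assumptions for illustration.
    ln -s /opt/hbase/conf ${atlas_home}/conf/hbase/conf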

All you need to do now is go into that symlinked conf directory and copy hdfs-site.xml and core-site.xml into it (see the sketch below). When Atlas starts it can launch its own embedded HBase, but that is off by default; when configured to use the external HBase instead, Atlas reads the connection settings from this directory, so once these two files are in place the error is resolved!
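
A sketch of the copy step, assuming Hadoop is installed under /opt/hadoop (the source paths are assumptions):

    # Copy the HA client configs into the symlinked conf directory.
    # Paths are assumptions; adjust to your installation.
    cd ${atlas_home}/conf/hbase/conf
    cp /opt/hadoop/etc/hadoop/hdfs-site.xml .
    cp /opt/hadoop/etc/hadoop/core-site.xml .

hdfs-site.xml is the file that defines the mycluster nameservice (dfs.nameservices plus the per-NameNode RPC addresses), and core-site.xml carries fs.defaultFS; with both in place, the HBase client can resolve hdfs://mycluster instead of treating it as a hostname.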


Summary

Start the components Atlas depends on, in order: since Atlas needs Kafka, ZooKeeper is required; Atlas depends on HBase, and HBase depends on HDFS. If you want to manage Hive metadata, start the Hive metastore service beforehand. And if Atlas is set up after Hive is already in use, you have to import the existing Hive metadata manually! A sketch of the startup order follows.
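
A rough startup-order sketch. The script names are the standard ones shipped with each component, but every path is an assumption about your installation; import-hive.sh is the hook script bundled in the Atlas distribution's hook-bin directory:

    # Rough startup order (paths are assumptions).
    zkServer.sh start                    # ZooKeeper, required by Kafka and HBase
    start-dfs.sh                         # HDFS (the HA cluster behind hdfs://mycluster)
    kafka-server-start.sh -daemon /opt/kafka/config/server.properties
    start-hbase.sh                       # the external HBase
    hive --service metastore &           # Hive metastore, if managing Hive metadata
    ${atlas_home}/bin/atlas_start.py     # finally, Atlas itself
    # One-time import of pre-existing Hive metadata:
    ${atlas_home}/hook-bin/import-hive.sh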
