[HBase Data Development] Cluster Setup: NameNode Not Formatted

1. The error is as follows

10:28:19.302 AM	WARN	FSNamesystem
Encountered exception loading fsimage
java.io.IOException: NameNode is not formatted.
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:238)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1152)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:799)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1547)

 

 

Cause:
The NameNode metadata is missing or damaged and needs to be repaired.


Fix:
Run the command: hadoop namenode -recover
Answer 'c' at every prompt.
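As a quick sanity check before running recovery, you can look inside the NameNode's metadata directory (a minimal sketch, assuming dfs.namenode.name.dir is /dfs/nn, the path that shows up in the later errors):

# A formatted NameNode has VERSION, fsimage_* and edits_* files under current/.
# An empty or missing directory means it was never formatted, so there is nothing for -recover to repair.
ls -l /dfs/nn/current/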

 

However, it then throws another exception.

 

20/07/22 10:33:47 INFO namenode.MetaRecoveryContext: RECOVERY FAILED: caught exception
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /dfs/nn is in an inconsistent state: storage directory does not exist or is not accessible.
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:377)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:228)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1152)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:799)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.doRecovery(NameNode.java:1437)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1531)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615)
20/07/22 10:33:47 ERROR namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /dfs/nn is in an inconsistent state: storage directory does not exist or is not accessible.
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:377)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:228)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1152)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:799)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.doRecovery(NameNode.java:1437)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1531)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615)
20/07/22 10:33:47 INFO util.ExitUtil: Exiting with status 1

 

Let's keep digging into the cause.

2. The second error

20/07/22 10:33:47 ERROR namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /dfs/nn is in an inconsistent state: storage directory does not exist or is not accessible.
 

 

Run this:

hadoop namenode -format

It succeeds; just keep answering Y at the prompts.
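One caveat: on CDH the NameNode runs as the hdfs user, so formatting as root leaves root-owned metadata files, which is exactly what triggers the permission error in the next section. A hedged alternative is to format as the hdfs user instead:

# Format as the hdfs service user so the new metadata is owned by hdfs (assumes CDH's default service account).
sudo -u hdfs hdfs namenode -format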

 

 

 

 

A new error appears.


3. The third error

Encountered exception loading fsimage
java.io.FileNotFoundException: /dfs/nn/current/VERSION (Permission denied)

Fix:
chown hdfs:root -R /dfs/nn/*

 

That solves it nicely. OK, we will be happily running jobs in no time.
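One detail worth noting: chown with /dfs/nn/* changes the contents but not /dfs/nn itself. A hedged variant that covers the directory too, plus a quick check:

# Fix ownership on the directory and everything under it, then verify.
chown -R hdfs:root /dfs/nn
ls -ld /dfs/nn /dfs/nn/current /dfs/nn/current/VERSION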

 

 

 

 

4. The fourth error

It looks like another permissions issue.

Error starting NodeManager
org.apache.hadoop.service.ServiceStateException: org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: /var/lib/hadoop-yarn/yarn-nm-recovery/yarn-nm-state/LOCK: Permission denied

 


chmod 777 /var/lib/hadoop-yarn

chmod 777 /var/lib/hadoop-yarn/yarn-nm-recovery

chmod 777 /var/lib/hadoop-yarn/yarn-nm-recovery/yarn-nm-state
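chmod 777 is fine for a dev box; a slightly tighter alternative, assuming the NodeManager runs as the yarn user (CDH's default), is to hand the recovery directory to that user instead:

# Give the NodeManager recovery state to the yarn user rather than opening it to everyone.
chown -R yarn:hadoop /var/lib/hadoop-yarn    # the group may be 'yarn' on your install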

 

 

 

 

5. The fifth error: HMaster

Let's look at the fifth error.

10:51:25.326 AM	FATAL	HMaster
Unhandled exception. Starting shutdown.
org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/":root:supergroup:drwxr-xr-x
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:279)
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:260)
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:240)
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:162)

 


Since this is only a dev environment:

hdfs dfs -mkdir /hbase

hdfs dfs -chmod 777 /hbase

Just give it full permissions.
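If you would rather not use 777, a hedged alternative is to create /hbase as the HDFS superuser and hand it to the hbase user (CDH's default HBase service account):

# Create the HBase root directory and give it to the hbase user.
sudo -u hdfs hdfs dfs -mkdir -p /hbase
sudo -u hdfs hdfs dfs -chown hbase:hbase /hbase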

 

A restart should sort it out.

 

 

6. The sixth error

Error starting JobHistoryServer
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [hdfs://node1:8020/user/history/done]

 

 

sudo -u hdfs hdfs dfs -chmod -R 777 /

 

Let's try granting permissions.
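chmod -R 777 / is the sledgehammer. A more targeted sketch, assuming the usual CDH layout where the JobHistory Server runs as the mapred user and keeps its done directory under /user/history:

# Create the history directory and hand it to mapred (paths and owners are CDH defaults; adjust if yours differ).
sudo -u hdfs hdfs dfs -mkdir -p /user/history
sudo -u hdfs hdfs dfs -chown -R mapred:hadoop /user/history
sudo -u hdfs hdfs dfs -chmod -R 1777 /user/history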

 

 

Good: HBase and YARN now start up successfully.

Very satisfying.

But Hive is up to its own tricks next.

 

 

 

7. The seventh issue: Hive

+ exec /opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hive/bin/hive --config /run/cloudera-scm-agent/process/205-hive-HIVEMETASTORE --service metastore -p 9083
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
OpenJDK 64-Bit Server VM warning: Using incremental CMS is deprecated and will likely be removed in a future release
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
javax.jdo.JDOFatalInternalException: Error creating transactional connection factory
	at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:587)
	at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:788)
	at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:333)
	at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:202)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
	at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
	at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
	at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:420)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:449)
	at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:344)

This problem is basically Hive missing a JAR.

 

The log shows:

Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke the "BONECP" plugin to create a ConnectionPool gave an error : The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.

 


 

So Hive is missing the third-party MySQL JDBC connector JAR; we need to put one in place.
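A sketch of putting the driver in place, assuming the connector already sits at /usr/share/java (the same path the Oozie log below copies from) and that the metastore loads extra JARs from the parcel's hive/lib directory:

# Copy the MySQL JDBC driver into Hive's lib directory on the metastore host, then restart the Hive Metastore.
cp /usr/share/java/mysql-connector-java.jar /opt/cloudera/parcels/CDH/lib/hive/lib/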

But after dropping the JAR into the Hive lib directory of the CDH parcel, there is still one more error:

OpenJDK 64-Bit Server VM warning: Using incremental CMS is deprecated and will likely be removed in a future release
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
MetaException(message:Version information not found in metastore. )
	at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:7387)
	at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:7363)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:103)
	at com.sun.proxy.$Proxy7.verifySchema(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:664)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:712)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:511)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6511)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6506)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6756)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:6683)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:226)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:141)
Exception in thread "main" MetaException(message:Version information not found in metastore. )
	at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:7387)
	at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:7363)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:103)
	at com.sun.proxy.$Proxy7.verifySchema(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:664)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:712)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:511)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6511)

 

For this one, look a bit further; the schema scripts are here:

/opt/cloudera/parcels/CDH/lib/hive/scripts/metastore/upgrade/mysql

From this directory, open a MySQL shell:

mysql

 

source hive-schema-1.1.0.mysql.sql;
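One step the snippet above glosses over: the schema must be loaded into the metastore database itself, whose name depends on your configuration. A hedged sketch, assuming that database is called metastore:

# Run mysql from the upgrade directory above, then inside the MySQL shell:
USE metastore;                          # assumption: your metastore database name (check hive-site.xml / CM config)
SOURCE hive-schema-1.1.0.mysql.sql;     # creates the schema, including the VERSION table the error complains about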

 

 

 

8. The eighth error

[Errno 2] No such file or directory: '/var/log/oozie/oozie-cmf-oozie-OOZIE_SERVER-node1.log.out'

 

Grant permissions directly and see; that should do it.

 

rm: cannot remove /var/lib/oozie/tomcat-deployment/webapps/oozie/docs: Permission denied
cp: failed to access /var/lib/oozie/tomcat-deployment/webapps/oozie/docs: Permission denied
mkdir: cannot create directory /var/lib/oozie: Permission denied
cp: failed to access /var/lib/oozie/tomcat-deployment/webapps/oozie/WEB-INF: Permission denied
chown: cannot access /var/lib/oozie/tomcat-deployment: Permission denied

 

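"Granting permissions" here really means making sure the log and data directories exist and belong to the Oozie service user; a minimal sketch, assuming CDH's default oozie account:

# Create the missing log directory and hand both directories to the oozie user, then restart Oozie.
mkdir -p /var/log/oozie
chown -R oozie:oozie /var/log/oozie /var/lib/oozie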

 

 

9. The ninth error

Probably the MySQL driver is missing; no need to panic.

CMF_CONF_DIR=/etc/cloudera-scm-agent
Copying JDBC jar from /usr/share/java/mysql-connector-java.jar to /var/lib/oozie

ERROR: Oozie could not be started

REASON: org.apache.oozie.service.ServiceException: E0103: Could not load service classes, Cannot load JDBC driver class 'com.mysql.jdbc.Driver'

Stacktrace:
-----------------------------------------------------------------
org.apache.oozie.service.ServiceException: E0103: Could not load service classes, Cannot load JDBC driver class 'com.mysql.jdbc.Driver'
	at org.apache.oozie.service.Services.loadServices(Services.java:309)
	at org.apache.oozie.service.Services.init(Services.java:213)
	at org.apache.oozie.servlet.ServicesLoader.contextInitialized(ServicesLoader.java:46)
	at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4276)
	at org.apache.catalina.core.StandardContext.start(StandardContext.java:4779)
	at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)
	at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780)
	at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583)
	at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:944)
	at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:779)
	at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:505)
	at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322)
	at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325)
	at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
	at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069)
	at org.apache.catalina.core.StandardHost.start(StandardHost.java:822)
	at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061)
	at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)
	at org.apache.catalina.core.StandardService.start(StandardService.java:525)
	at org.apache.catalina.core.StandardServer.start(StandardServer.java:761)
	at org.apache.catalina.startup.Catalina.start(Catalina.java:595)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
	at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
Caused by:  org.apache.openjpa.persistence.PersistenceException: Cannot load JDBC driver class 'com.mysql.jdbc.Driver'
	at org.apache.openjpa.jdbc.sql.DBDictionaryFactory.newDBDictionary(DBDictionaryFactory.java:106)
	at org.apache.openjpa.jdbc.conf.JDBCConfigurationImpl.getDBDictionaryInstance(JDBCConfigurationImpl.java:603)
	at org.apache.openjpa.jdbc.meta.MappingRepository.endConfiguration(MappingRepository.java:1520)
	at org.apache.openjpa.lib.conf.Configurations.configureInstance(Configurations.java:533)
	at org.apache.openjpa.lib.conf.Configurations.configureInstance(Configurations.java:458)
	at org.apache.openjpa.lib.conf.PluginValue.instantiate(PluginValue.java:121)
	at org.apache.openjpa.conf.MetaDataRepositoryValue.instantiate(MetaDataRepositoryValue.java:68)
	at org.apache.openjpa.lib.conf.ObjectValue.instantiate(ObjectValue.java:83)
	at org.apache.openjpa.conf.OpenJPAConfigurationImpl.newMetaDataRepositoryInstance(OpenJPAConfigurationImpl.java:967)
	at org.apache.openjpa.conf.OpenJPAConfigurationImpl.getMetaDataRepositoryInstance(OpenJPAConfigurationImpl.java:958)
	at org.apache.openjpa.kernel.AbstractBrokerFactory.makeReadOnly(AbstractBrokerFactory.java:642)
	at org.apache.openjpa.kernel.AbstractBrokerFactory.newBroker(AbstractBrokerFactory.java:202)
	at org.apache.openjpa.kernel.DelegatingBrokerFactory.newBroker(DelegatingBrokerFactory.java:154)

 

 

 

[screenshot]
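The startup log already shows the JDBC jar being copied from /usr/share/java/mysql-connector-java.jar into /var/lib/oozie, so a common culprit is that the file at /usr/share/java is missing or a dangling symlink. A hedged sketch of checking and fixing that (the versioned jar name below is only an example):

# Verify the connector really exists; a broken symlink here makes the copy a no-op.
ls -lL /usr/share/java/mysql-connector-java.jar
# If it is missing, drop in a real driver jar and recreate the expected name, then restart Oozie.
cp mysql-connector-java-5.1.47.jar /usr/share/java/          # example version; use whatever jar you actually have
ln -sf /usr/share/java/mysql-connector-java-5.1.47.jar /usr/share/java/mysql-connector-java.jar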

 

That sorts it out. OK, next let's see what other errors remain.

 

 

10. Hue errors

These are basically permissions issues as well and are easy to fix; with that, all the services are up.

 

 

 

 

[screenshot]

 

 

OK, we're airborne.

 

 

 

 

11. One more time: still the formatting problem

Cloudera Manager's format action warns that it formats the current NameNode's name directories, and that the operation fails if a name directory is not empty.
This happened because my earlier install attempt failed, leaving the nn directory under dfs non-empty; deleting the nn directory under the dfs directory on every machine fixes it.

Just delete the NameNode directory (mine is at /dfs/nn) and then retry.

 

rm -rf /dfs/nn does the job.
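The same stale-directory problem can bite the DataNodes: if /dfs/dn still holds the old clusterID, you will hit the mismatch described in the next section. A dev-only sketch (this destroys all HDFS data), assuming the /dfs layout used throughout this post:

# DEV ONLY: wipe the old HDFS metadata and data on every host before re-formatting.
rm -rf /dfs/nn /dfs/snn /dfs/dn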

 

 

 

12. A strange error: the NameNode and DataNode cluster IDs do not match

IOException in offerService
java.io.IOException: Failed on local exception: java.io.IOException: Connection reset by peer; Host Details : local host is: "node1/172.16.0.56"; destination host is: "node1":8022;

 

This one took a long time to work out, so let me describe the error in more detail and then explain what is going on.

 

 

[screenshot]


 

[screenshot]

 

 

Now pay attention: you need to check the directories on your cluster.

Under /dfs, the clusterID in the VERSION file inside nn, inside snn, and inside dn must all be identical. OK, let's line them up.

[screenshot]

 

So let's just change it to 13.

[screenshot]

 

Success. The idea is simply to keep the clusterID in the VERSION files under /dfs/nn and /dfs/dn identical.
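A quick way to check and align them, assuming the directory layout above:

# Compare the clusterID recorded by each role.
grep clusterID /dfs/nn/current/VERSION /dfs/snn/current/VERSION /dfs/dn/current/VERSION
# Edit the VERSION file(s) whose clusterID differs so all of them match, then restart HDFS.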

 

 

13. A YARN problem that is hard to solve

HDFS dependency of YARN (MR2 Included) does not have sufficient roles running.
Completed only 0/1 steps. First failure: Completed only 0/2 steps. First failure: Failed to execute command Create Job History Dir on service YARN (MR2 Included)

This one is really thorny.
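One hedged observation: "Create Job History Dir" is the same /user/history directory from error 6, so this usually just means HDFS was not healthy or writable when Cloudera Manager ran that step. Once HDFS is up, verify the directory and then retry the failed command from CM:

# Confirm the directory from error 6 exists and is owned correctly before retrying the CM step.
sudo -u hdfs hdfs dfs -ls -d /user/history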

 

 

 
