1. HBase log
2019-12-13 10:17:32,133 INFO [PEWorker-2] master.SplitLogManager: dead splitlog workers [localhost,16020,1576054070166]
2019-12-13 10:17:32,137 INFO [PEWorker-2] master.SplitLogManager: Started splitting 1 logs in [hdfs://hadoop1:9000/hbase/WALs/localhost,16020,1576054070166-splitting] for [localhost,16020,1576054070166]
2019-12-13 10:17:32,226 WARN [master/localhost:16000] procedure2.StoppableThread: Waiting termination of thread PEWorker-2, 2.2510sec; sending interrupt
2019-12-13 10:17:32,226 WARN [PEWorker-2] master.SplitLogManager: Interrupted while waiting for log splits to be completed
2019-12-13 10:17:32,226 WARN [PEWorker-2] master.SplitLogManager: error while splitting logs in [hdfs://hadoop1:9000/hbase/WALs/localhost,16020,1576054070166-splitting] installed = 1 but only 0 done
java.io.IOException: error or interrupted while splitting logs in [hdfs://hadoop1:9000/hbase/WALs/localhost,16020,1576054070166-splitting] Task = installed = 1 done = 0 error = 0
at org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:272)
at org.apache.hadoop.hbase.master.MasterWalManager.splitLog(MasterWalManager.java:349)
at org.apache.hadoop.hbase.master.MasterWalManager.splitMetaLog(MasterWalManager.java:287)
at org.apache.hadoop.hbase.master.MasterWalManager.splitMetaLog(MasterWalManager.java:279)
at org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.splitMetaLogs(ServerCrashProcedure.java:290)
at org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState(ServerCrashProcedure.java:156)
at org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure.executeFromState(ServerCrashProcedure.java:64)
at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:194)
at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:962)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1648)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1395)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:78)
java.lang.RuntimeException: java.lang.InterruptedException
at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.pushData(WALProcedureStore.java:761)
at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.update(WALProcedureStore.java:604)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.updateStoreOnExec(ProcedureExecutor.java:1850)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1716)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1395)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:78)
at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1965)
Caused by: java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2034)
at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.pushData(WALProcedureStore.java:756)
You can add the following setting to hbase-site.xml, although after I reinstalled a second time the problem no longer appeared:
<property>
  <name>hbase.master.distributed.log.splitting</name>
  <value>false</value>
</property>
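Before changing any config, it can help to confirm that a leftover -splitting directory is what the master is stuck on. A quick check against the path from the log above:
# any entry ending in "-splitting" is a WAL directory the master
# is still trying to split for a dead regionserver
hdfs dfs -ls /hbase/WALs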
2. Master is initializing. This is a maddening problem, and there are many proposed solutions for it online.
ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2808)
at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:956)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
For usage try 'help "status"'
(1) Enter the ZooKeeper client with zkCli.sh, run rmr /hbase, then restart HBase (see the sketch after this list).
(2) Synchronize the system clocks across the machines.
(3) In one version, rootdir has to be changed to root.dir (my god).
(4) Start the regionservers first, then start the HMaster.
(5) Check the host IP and hostname mappings in /etc/hosts.
(6) Turn off the firewall, and take HDFS out of safe mode:
hadoop dfsadmin -safemode leave
Having tried all of the above without success, I turned to method 7.
(7) Reformat the NameNode.
That, in turn, produced a new problem. The usual fixes for it are:
(8) Format the NameNode (ha ha).
(9) Take the Hadoop cluster out of safe mode, then restart HBase.
(10) The same as item (5) above.
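For the record, method (1) looks roughly like this in practice. Note that it is destructive (it wipes HBase's znode and forces HBase to rebuild its state), and that on ZooKeeper 3.5+ the rmr command has been replaced by deleteall:
zkCli.sh -server localhost:2181
# inside the ZooKeeper shell:
#   rmr /hbase       (or: deleteall /hbase on newer clients)
#   quit
stop-hbase.sh
start-hbase.sh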
None of these worked either. The lesson is that both of these errors are broad symptom classes with many possible root causes, which is exactly why you should read the logs. But the logs turned up nothing, and at that point I was at a complete loss.
So, after making sure Hadoop and ZooKeeper were healthy, I reinstalled HBase. Guess what happened: it just worked.
Summary: when you hit a problem, read the logs first; if all else fails, reinstall.
1. Illegal character entity in hive-site.xml
Exception in thread "main" java.lang.RuntimeException: com.ctc.wstx.exc.WstxParsingException: Illegal character entity: expansion character (code 0x8
at [row,col,system-id]: [3215,96,"file:/root/hive-3.1.2/conf/hive-site.xml"]
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3024)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2973)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2848)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:1460)
at org.apache.hadoop.hive.conf.HiveConf.getVar(HiveConf.java:4996)
at org.apache.hadoop.hive.conf.HiveConf.getVar(HiveConf.java:5069)
at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:5156)
at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:5099)
at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jCommon(LogUtils.java:97)
at org.apache.hadoop.hive.common.LogUtils.initHiveLog4j(LogUtils.java:81)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:699)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
Caused by: com.ctc.wstx.exc.WstxParsingException: Illegal character entity: expansion character (code 0x8
at [row,col,system-id]: [3215,96,"file:/root/hive-3.1.2/conf/hive-site.xml"]
at com.ctc.wstx.sr.StreamScanner.constructWfcException(StreamScanner.java:621)
at com.ctc.wstx.sr.StreamScanner.throwParseError(StreamScanner.java:491)
at com.ctc.wstx.sr.StreamScanner.reportIllegalChar(StreamScanner.java:2456)
at com.ctc.wstx.sr.StreamScanner.validateChar(StreamScanner.java:2403)
at com.ctc.wstx.sr.StreamScanner.resolveCharEnt(StreamScanner.java:2369)
at com.ctc.wstx.sr.StreamScanner.fullyResolveEntity(StreamScanner.java:1515)
at com.ctc.wstx.sr.BasicStreamReader.nextFromTree(BasicStreamReader.java:2828)
at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1123)
at org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3320)
at org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3114)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3007)
... 17 more
Delete the offending character at the position the error reports (row 3215 of hive-site.xml).
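For this particular error the culprit is usually the stray &#8; character entity that ships at around line 3215 of the hive-default.xml.template people copy to hive-site.xml. A minimal sketch of the cleanup, assuming that entity is indeed the problem:
cd /root/hive-3.1.2/conf
cp hive-site.xml hive-site.xml.bak              # keep a backup
sed -i 's/&#8;//g; s/&#x8;//g' hive-site.xml    # delete the illegal entity in either spelling
Alternatively, open the file directly at the reported line with vim +3215 hive-site.xml and delete it by hand.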
2. Relative path in absolute URI
Logging initialized using configuration in jar:file:/root/hive-3.1.2/lib/hive-common-3.1.2.jar!/hive-log4j2.properties Async: true
Exception in thread "main" java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
at org.apache.hadoop.fs.Path.initialize(Path.java:259)
at org.apache.hadoop.fs.Path.<init>(Path.java:217)
at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:710)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:627)
at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:591)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:747)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
Caused by: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
at java.net.URI.checkPath(URI.java:1823)
at java.net.URI.<init>(URI.java:745)
at org.apache.hadoop.fs.Path.initialize(Path.java:256)
... 12 more
Edit the config file (vi hive-site.xml) and replace every reference to ${system:java.io.tmpdir}/${system:user.name} with a concrete path, e.g. /root/hive-3.1.2/tmp.
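A minimal sed sketch of that substitution, assuming a root install at /root/hive-3.1.2 as in this post (back the file up first, and create the directory the paths now point at):
cd /root/hive-3.1.2/conf
cp hive-site.xml hive-site.xml.bak
sed -i 's|\${system:java.io.tmpdir}|/root/hive-3.1.2/tmp|g; s|\${system:user.name}|root|g' hive-site.xml
mkdir -p /root/hive-3.1.2/tmp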
3. Unable to instantiate SessionHiveMetaStoreClient
FAILED: HiveException java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
2019-12-17 09:28:36,962 WARN [24e5b272-27c0-43d1-bc2c-5dc3167c48ed main] DataNucleus.Query: Query for candidates of org.apache.hadoop.hive.metastore.model.MVersionTable and subclasses resulted in no possible candidates
Required table missing : "VERSION" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.schema.autoCreateTables"
org.datanucleus.store.rdbms.exceptions.MissingTableException: Required table missing : "VERSION" in Catalog "" Schema "". DataNucleus requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "datanucleus.schema.autoCreateTables"
at org.datanucleus.store.rdbms.table.AbstractTable.exists(AbstractTable.java:606)
at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesValidation(RDBMSStoreManager.java:3385)
at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2896)
at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:119)
at org.datanucleus.store.rdbms.RDBMSStoreManager.manageClasses(RDBMSStoreManager.java:1627)
at org.datanucleus.store.rdbms.RDBMSStoreManager.getDatastoreClass(RDBMSStoreManager.java:672)
at org.datanucleus.store.rdbms.query.RDBMSQueryUtils.getStatementForCandidates(RDBMSQueryUtils.java:425)
at org.datanucleus.store.rdbms.query.JDOQLQuery.compileQueryFull(JDOQLQuery.java:865)
at org.datanucleus.store.rdbms.query.JDOQLQuery.compileInternal(JDOQLQuery.java:347)
at org.datanucleus.store.query.Query.executeQuery(Query.java:1816)
at org.datanucleus.store.query.Query.executeWithArray(Query.java:1744)
at org.datanucleus.store.query.Query.execute(Query.java:1726)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:374)
at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:216)
at org.apache.hadoop.hive.metastore.ObjectStore.getMSchemaVersion(ObjectStore.java:9101)
at org.apache.hadoop.hive.metastore.ObjectStore.getMetaStoreSchemaVersion(ObjectStore.java:9085)
at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:9042)
at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:9027)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97)
at com.sun.proxy.$Proxy36.verifySchema(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:697)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:690)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:767)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:538)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:80)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:93)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:8667)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:169)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:94)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:84)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:95)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:148)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:119)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:4299)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:4367)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:4347)
at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:4603)
at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:291)
at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:274)
at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:435)
at org.apache.hadoop.hive.ql.metadata.Hive.create(Hive.java:375)
at org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:355)
at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:401)
at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:397)
at org.apache.hadoop.hive.ql.Driver.dumpMetaCallTimingWithoutEx(Driver.java:983)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:760)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1826)
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1773)
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1768)
at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126)
at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:214)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
Edit the hive-site.xml config file and set datanucleus.schema.autoCreateAll=true.
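Expressed as a property block in hive-site.xml, in the same format as the snippets above:
<property>
  <name>datanucleus.schema.autoCreateAll</name>
  <value>true</value>
</property>
(Another common fix for this error is to initialize the metastore schema once with Hive's schematool, e.g. schematool -dbType mysql -initSchema for a MySQL-backed metastore.)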
4. Connecting to Hive remotely
Solution: add the following to Hadoop's core-site.xml. If the account used for the Hive connection is, say, zhangsan, just replace root below with zhangsan (note: this only takes effect after a restart).
<!-- grant the root user proxy rights for remote Hive connections -->
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
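If a full restart is inconvenient, recent Hadoop versions can usually reload the proxy-user settings in place; a hedged alternative, to be verified against your version:
hdfs dfsadmin -refreshSuperUserGroupsConfiguration
yarn rmadmin -refreshSuperUserGroupsConfiguration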
5. If stop-hbase.sh will not stop HBase, stop the daemons individually:
hbase-daemon.sh stop master
hbase-daemon.sh stop regionserver
6. Some basic Linux operations
(1) Copy hbase-site.xml to another machine in the cluster:
scp /root/hbase-2.2.2/conf/hbase-site.xml root@hadoop2:/root/hbase-2.2.2/conf
To copy a directory, add -r after scp. Or, omitting the explicit user:
scp /root/hbase-2.2.2/conf/hbase-site.xml hadoop2:/root/hbase-2.2.2/conf
(2) Find the Hadoop JARs under one directory and copy them to another:
find /root/hadoop/hadoop-3.1.3/share/hadoop -name "hadoop*jar" | xargs -i cp {} /root/hbase-2.2.2/lib
(3) Check (and leave) HDFS safe mode:
hadoop dfsadmin -safemode get
hadoop dfsadmin -safemode leave
(4) Modify environment variables for all users:
vim /etc/profile
source /etc/profile
(5) Extract an archive into a given directory:
tar -zvxf hbase-2.0.5-bin.tar.gz -C /home/hadoop
(6) Start (and stop) Kafka:
kafka-server-start.sh -daemon /root/kafka_2.12-2.4.0/config/server.properties
kafka-server-stop.sh
To stop the Kafka process, run kafka-server-stop.sh (and zookeeper-server-stop.sh for the bundled ZooKeeper), then wait a few seconds and check with jps; if it really will not stop, the last resort is to force-kill it with kill -9 <PID>.
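Concretely, the check-and-kill sequence looks like this (the PID is whatever number jps prints next to the Kafka entry):
kafka-server-stop.sh     # polite stop first
jps                      # a few seconds later: is the Kafka process gone?
kill -9 <PID>            # last resort only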
(7) Find the HiveServer2 process:
ps -aux | grep hiveserver2
(8) Kill the process:
kill -9 <PID>
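If pkill is available, (7) and (8) collapse into a single command that force-kills anything whose command line matches hiveserver2:
pkill -9 -f hiveserver2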
(9) Connect with Beeline:
beeline
!connect jdbc:hive2://localhost:10000/default
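The same connection as a one-liner, with -n supplying the username (here root, matching the proxy-user setup in item 4):
beeline -u jdbc:hive2://localhost:10000/default -n root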
(10) Run uname -a to display information about the machine and its operating system.