-----------------------------------------【hbase】---------------------------------------------------------------
###Cause and fix: in hbase shell, the list command fails with ERROR: Can't get master address from ZooKeeper; znode data == null
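A quick way to confirm this diagnosis is to inspect the znode directly from the ZooKeeper CLI; if the HMaster never registered, the master znode is missing or empty. A minimal sketch, assuming the default zookeeper.znode.parent of /hbase and a ZooKeeper on localhost:2181:
# Inspect HBase's znodes; an absent or empty /hbase/master explains the error.
$ZOOKEEPER_HOME/bin/zkCli.sh -server localhost:2181
ls /hbase
get /hbase/master
If the data is null, start (or restart) HBase and check the HMaster log for why it did not come up.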
###s101: WARNING: /home/fgq/soft/hadoop-3.2.0/logs does not exist. Creating.
s102: /home/fgq/soft/hadoop/etc/hadoop/hadoop-env.sh: line 140: hbase: command not found
s102: WARNING: /home/fgq/soft/hadoop-3.2.0/logs does not exist. Creating.
Starting secondary namenodes [s103]
s103: /home/fgq/soft/hadoop/etc/hadoop/hadoop-env.sh: line 140: hbase: command not found
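A hedged reading of the "hbase: command not found" part: line 140 of hadoop-env.sh apparently invokes the hbase binary (for example to build HADOOP_CLASSPATH via hbase classpath), but the non-interactive ssh sessions on s102/s103 do not have HBASE_HOME/bin on PATH. Calling the binary by absolute path in hadoop-env.sh avoids depending on PATH; the install path below is an assumption:
# In hadoop-env.sh: call hbase by absolute path instead of relying on PATH.
export HBASE_HOME=/home/fgq/soft/hbase    # assumed install path, adjust to yours
export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$("$HBASE_HOME/bin/hbase" classpath)"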
###[2019-11-28 01:06:06,607] ERROR Closing socket for /192.168.146.100 because of error (kafka.network.Processor)
kafka.common.KafkaException: Wrong request type 18
    at kafka.api.RequestKeys$.deserializerForKey(RequestKeys.scala:53)
    at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:49)
    at kafka.network.Processor.read(SocketServer.scala:353)
    at kafka.network.Processor.run(SocketServer.scala:245)
    at java.lang.Thread.run(Thread.java:745)
INFO conflict in /controller data: { "brokerid":1, "timestamp":"1574932456824", "version":1 } stored data: { "brokerid":1, "timestamp":"1574932455346", "version":1 } (kafka.utils.ZkUtils$)
[2019-11-28 01:20:49,220] INFO I wrote this conflicted ephemeral node [{ "brokerid":1, "timestamp":"1574932456824", "version":1 }] at /controller a while back in a different session, hence I will backoff for this node to be deleted by Zookeeper and retry (kafka.utils.ZkUtils$)
Request type 18 is the ApiVersions API key, introduced in Kafka 0.10, so this error usually means a newer (0.10+) client is talking to an older 0.8/0.9 broker; aligning client and broker versions makes it go away. The /controller conflict messages are a separate, usually transient, condition after a broker session reconnects: the broker backs off and retries on its own.
###How to inspect HBase data (values are stored as encoded bytes)
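hbase shell prints raw bytes in an escaped form, but you can attach a formatter to a column so values are decoded on output. A minimal sketch; the table and column names are made up:
# 't1', 'cf:name', 'cf:age' are example names; :toString/:toLong map to the
# corresponding org.apache.hadoop.hbase.util.Bytes converters.
scan 't1', {COLUMNS => ['cf:name:toString', 'cf:age:toLong']}
get 't1', 'row1', {COLUMN => 'cf:name:toString'}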
###Check memory usage
free -m
###Starting a storm+hbase (or spark) program via maven fails with: tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.hbase.zookeeper.MetaTableLocator
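This IllegalAccessError is the well-known Guava conflict: newer Guava made the Stopwatch constructor non-public, while the HBase MetaTableLocator here still expects the old one. A hedged way to locate the clash; the version to pin is an assumption, pick whatever your HBase release was built against (typically a pre-17 Guava such as 14.0.1):
# Find which dependency pulls in the newer Guava, then exclude or pin it in the pom.
mvn dependency:tree -Dincludes=com.google.guava:guava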
###Phoenix 4.14.0-cdh5.14.2: using the Java API against HBase fails with org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.hdfs.DistributedFileSystem could not be instantiated
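A ServiceLoader failure like this usually means mixed Hadoop artifact versions on the classpath (hadoop-common from one release, hadoop-hdfs from another), or no hadoop-hdfs jar at all. A diagnostic sketch; for this Phoenix build every Hadoop artifact should come from the same cdh5.14.2 line:
# All org.apache.hadoop artifacts should resolve to one consistent version.
mvn dependency:tree -Dincludes=org.apache.hadoop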
-----------------------------------------【hive】---------------------------------------------------------------
###hive exception 01: Terminal initialization failed; falling back to unsupported
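This is almost always the jline conflict: Hadoop bundles the old jline-0.9.x, which shadows the jline 2.x that Hive's CLI needs. A hedged fix; the jar file names are illustrative, match them to what is actually in the directories:
# Either prefer the user (Hive) classpath over Hadoop's bundled jars...
export HADOOP_USER_CLASSPATH_FIRST=true
# ...or swap Hadoop's old jline for Hive's jline 2.x:
mv $HADOOP_HOME/share/hadoop/yarn/lib/jline-0.9.94.jar{,.bak}
cp $HIVE_HOME/lib/jline-2.12.jar $HADOOP_HOME/share/hadoop/yarn/lib/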
###hive> show databases; FAILED: SemanticException org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
This is usually a metastore problem: Hive keeps its metadata in MySQL, so if the MySQL account or its privileges are not set up correctly, show databases cannot work.
Fix:
1. First check that the MySQL account in your hive-site.xml matches the MySQL account that created the hive database; if they differ, change hive-site.xml to the account that owns the database:
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://192.168.1.195:3306/hadoop_hive?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>123456</value>
</property>
<property>
  <name>datanucleus.schema.autoCreateAll</name>
  <value>true</value>
</property>
2. MySQL privilege problems.
In MySQL:
grant all privileges on *.* to 'root'@'%' identified by '123456' with grant option;
grant all privileges on *.* to 'root'@'192.168.1.195' identified by '123456' with grant option;
flush privileges;    -- reload the grant tables
3. The Hive metastore service is not running (try the environment again after the first two steps; if it already works, skip this one):
1) hive --service metastore &
2) then press Ctrl+C
3) run hive again and you should get into the CLI
If it still fails after all three steps, see the schematool sketch below.
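If the account and grants are fine but the error persists, the metastore schema may never have been created in MySQL. Hive ships a schematool for this; a sketch, assuming the hive-site.xml shown above:
# Create the metastore tables in the configured MySQL database.
$HIVE_HOME/bin/schematool -dbType mysql -initSchema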
###Repositories declared in pom.xml have no effect: the mirrorOf setting in Maven's settings.xml
The mirror section of my Maven settings.xml looked like this:
<mirror>
  <id>ibiblio</id>
  <mirrorOf>*</mirrorOf>
  <name>Human Readable Name for this Mirror.</name>
  <url>http://mirrors.ibiblio.org/pub/mirrors/maven2/</url>
</mirror>
The error is in the mirrorOf element: the value * matches every repository, so no matter which repository the pom declares, everything ends up mirrored by this one entry and the repository sections in the pom never take effect.
The fix is simply to narrow mirrorOf (a corrected sketch follows the links below). For details see the official Maven mirror guide and these posts:
Maven private repository setup: http://my.oschina.net/liangbo/blog/195739
Comparing the priority of Maven repositories: http://toozhao.com/2012/07/13/compare-priority-of-maven-repository/
http://maven.apache.org/guides/mini/guide-mirror-settings.html
Maven best practices, repositories: http://juvenshun.iteye.com/blog/359256
Maven repository management with Nexus: http://my.oschina.net/aiguozhe/blog/101537
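As a corrected sketch: Maven's mirrorOf syntax supports exclusions, so the mirror can keep covering everything while letting specific pom-declared repositories through. The repository id scala-tools below is a placeholder; substitute the id your pom actually declares:
<mirror>
  <id>ibiblio</id>
  <!-- mirror everything except the repositories the pom must reach directly;
       "scala-tools" is a hypothetical repository id -->
  <mirrorOf>*,!scala-tools</mirrorOf>
  <name>Human Readable Name for this Mirror.</name>
  <url>http://mirrors.ibiblio.org/pub/mirrors/maven2/</url>
</mirror>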
Note: MyEclipse can be slow to pick the change up. Run maven install, refresh the local repository, then Update Project, or simply restart MyEclipse. Stray error markers in the editor are harmless as long as the build succeeds and the program runs.
###Could not get the value for parameter encoding for plugin execution default-resources Plugin org.apache.maven.plugins:maven-resources-plugin:2.5 or one of its dependencies could not be resolved: Failed to collect dependencies for org.apache.maven.plugins:maven-resources-plugin:jar:2.5 ()
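A common cause is a corrupted or half-downloaded plugin in the local repository. A hedged fix is to purge the cached plugin and force a re-resolution:
# Delete the cached maven-resources-plugin, then make Maven re-fetch it.
rm -rf ~/.m2/repository/org/apache/maven/plugins/maven-resources-plugin
mvn -U clean install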
###Unable to create project from archetype [org.scala-tools.archetypes:scala-archetype-simple:1.2 -> http://scala-tools.org/repo-releases] The desired archetype does not exist (org.scala-tools.archetypes:scala-archetype-simple:1.2)
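The scala-tools.org repository has been offline for years, so this archetype can no longer be resolved from there. A hedged workaround is the maintained fork on Maven Central; the exact version below is an assumption, check what is currently published:
mvn archetype:generate \
  -DarchetypeGroupId=net.alchim31.maven \
  -DarchetypeArtifactId=scala-archetype-simple \
  -DarchetypeVersion=1.7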
###Error:scalac: Error: Error compiling the sbt component 'compiler-interface-2.11.2-55.0'
sbt.internal.inc.CompileFailed: Error compiling the sbt component 'compiler-interface-2.11.2-55.0'
    at sbt.internal.inc.AnalyzingCompiler$.handleCompilationError$1(AnalyzingCompiler.scala:331)
    at sbt.internal.inc.AnalyzingCompiler$.$anonfun$compileSources$4(AnalyzingCompiler.scala:346)
    at sbt.internal.inc.AnalyzingCompiler$.$anonfun$compileSources$4$adapted(AnalyzingCompiler.scala:341)
    at sbt.io.IO$.withTemporaryDirectory(IO.scala:376)
    at sbt.io.IO$.withTemporaryDirectory(IO.scala:383)
    at sbt.internal.inc.AnalyzingCompiler$.$anonfun$compileSources$2(AnalyzingCompiler.scala:341)
    at sbt.internal.inc.AnalyzingCompiler$.$anonfun$compileSources$2$adapted(AnalyzingCompiler.scala:335)
    at sbt.io.IO$.withTemporaryDirectory(IO.scala:376)
    at sbt.io.IO$.withTemporaryDirectory(IO.scala:383)
    at sbt.internal.inc.AnalyzingCompiler$.compileSources(AnalyzingCompiler.scala:335)
    at org.jetbrains.jps.incremental.scala.local.CompilerFactoryImpl$.getOrCompileInterfaceJar(CompilerFactoryImpl.scala:113)
    at org.jetbrains.jps.incremental.scala.local.CompilerFactoryImpl.$anonfun$getScalac$1(CompilerFactoryImpl.scala:49)
    at scala.Option.map(Option.scala:146)
    at org.jetbrains.jps.incremental.scala.local.CompilerFactoryImpl.getScalac(CompilerFactoryImpl.scala:47)
    at org.jetbrains.jps.incremental.scala.local.CompilerFactoryImpl.createCompiler(CompilerFactoryImpl.scala:25)
    at org.jetbrains.jps.incremental.scala.local.CachingFactory.$anonfun$createCompiler$3(CachingFactory.scala:24)
    at org.jetbrains.jps.incremental.scala.local.Cache.$anonfun$getOrUpdate$2(Cache.scala:20)
    at scala.Option.getOrElse(Option.scala:121)
    at org.jetbrains.jps.incremental.scala.local.Cache.getOrUpdate(Cache.scala:19)
    at org.jetbrains.jps.incremental.scala.local.CachingFactory.createCompiler(CachingFactory.scala:24)
    at org.jetbrains.jps.incremental.scala.local.LocalServer.compile(LocalServer.scala:27)
    at org.jetbrains.jps.incremental.scala.remote.Main$.make(Main.scala:88)
    at org.jetbrains.jps.incremental.scala.remote.Main$.nailMain(Main.scala:36)
    at org.jetbrains.jps.incremental.scala.remote.Main.nailMain(Main.scala)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at com.martiansoftware.nailgun.NGSession.run(NGSession.java:319)
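The java.base/jdk.internal frames show the compile server is running on JDK 9 or newer, while the component being built targets Scala 2.11, whose compiler interface generally only builds on JDK 8. A plausible fix, under that assumption, is to run the build on a JDK 8 (in IntelliJ, also point the project SDK / Scala Compile Server at it; the path below is an example):
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk    # example path, adjust to your JDK 8
export PATH="$JAVA_HOME/bin:$PATH"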
###Changing where ZooKeeper writes its log
By default the log goes to the bin folder of the ZooKeeper installation, in a file named zookeeper.out. You can point the log somewhere else by editing bin/zkEnv.sh.
Default setting:
if [ "x${ZOO_LOG_DIR}" = "x" ]
then
    ZOO_LOG_DIR="."
fi
Change it to:
if [ "x${ZOO_LOG_DIR}" = "x" ]
then
    ZOO_LOG_DIR="$ZOOBINDIR/../logs"
fi
After this change the log file ends up in the logs folder under the installation directory.
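Note that the guard above only assigns ZOO_LOG_DIR when it is unset, so instead of editing zkEnv.sh you can also override the location per invocation from the environment:
# zkServer.sh sources zkEnv.sh, and the guard keeps an existing value,
# so an environment override wins (the path is just an example):
ZOO_LOG_DIR=/var/log/zookeeper bin/zkServer.sh restart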