Spark job fails with a "cannot connect to HBase" error

17/10/16 20:51:22 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
17/10/16 20:51:22 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
        at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
17/10/16 20:51:22 WARN zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=localhost:2181, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server
17/10/16 20:51:22 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
17/10/16 20:51:22 ERROR zookeeper.RecoverableZooKeeper: ZooKeeper getData failed after 4 attempts
17/10/16 20:51:22 WARN zookeeper.ZKUtil: hconnection-0x49e549d20x0, quorum=localhost:2181, baseZNode=/hbase Unable to get data of znode /hbase/meta-region-server
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
        at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1151)
        at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:360)
        at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:685)
        at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.getMetaRegionState(MetaTableLocator.java:374)
        at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.getMetaRegionLocation(MetaTableLocator.java:133)
        at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:438)
        at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:60)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1122)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1109)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1261)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1125)
        at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:369)
        at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:320)
        at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:206)
        at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:183)
        at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1496)
        at org.apache.hadoop.hbase.client.HTable.close(HTable.java:1532)
        at com.wbkit.cobub.hbase.HBaseUtils$.putToHTable(HBaseUtils.scala:62)
        at com.wbkit.cobub.batch.EventJobs$$anonfun$countAndWriteToHBase$1.apply(EventJobs.scala:82)
        at com.wbkit.cobub.batch.EventJobs$$anonfun$countAndWriteToHBase$1.apply(EventJobs.scala:82)
        at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:898)
        at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:898)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:88)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
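The telltale detail is quorum=localhost:2181: the HBase client running inside the executors never saw the real hbase.zookeeper.quorum, which is what happens when hbase-site.xml is not on the executor classpath, so the client falls back to its default of localhost. A minimal sketch of the executor-side write path (the actual HBaseUtils.putToHTable is not shown in the post, so the signature is hypothetical; the API matches the HBase 1.x calls in the stack trace):

    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.hbase.client.{HTable, Put}

    object HBaseUtils {
      def putToHTable(tableName: String, puts: java.util.List[Put]): Unit = {
        // HBaseConfiguration.create() loads hbase-site.xml from the classpath.
        // If the file is missing on an executor, hbase.zookeeper.quorum keeps its
        // default value ("localhost"), producing the quorum=localhost:2181
        // connection attempts seen in the log above.
        val conf = HBaseConfiguration.create()
        val table = new HTable(conf, tableName)
        try {
          table.put(puts)
        } finally {
          table.close() // close() flushes buffered writes, hence flushCommits in the trace
        }
      }
    }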


On CDH, spark-submit resolves through the alternatives symlink chain:

/usr/bin/spark-submit -> /etc/alternatives/spark-submit -> /opt/cloudera/parcels/CDH-5.5.0-1.cdh5.5.0.p0.8/bin/spark-submit

The alternatives record is kept in /var/lib/alternatives/spark-submit, and the launcher it ultimately points at is:

/opt/cloudera/parcels/CDH-5.5.0-1.cdh5.5.0.p0.8/lib/spark/bin/spark-submit
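
To double-check which launcher a node actually runs, the chain can be inspected directly (ordinary shell commands, not from the original post):

    ls -l /usr/bin/spark-submit /etc/alternatives/spark-submit
    cat /var/lib/alternatives/spark-submit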


Solution: add the --files parameter to spark-submit, shipping hbase-site.xml to the executors so the HBase client reads the real ZooKeeper quorum instead of defaulting to localhost.
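
A submit command along these lines distributes the file (a sketch: the JAR name and the config path /etc/hbase/conf/hbase-site.xml are assumptions based on CDH defaults; the class name is taken from the stack trace):

    spark-submit \
      --class com.wbkit.cobub.batch.EventJobs \
      --master yarn \
      --files /etc/hbase/conf/hbase-site.xml \
      event-jobs.jar

On YARN, files listed under --files are localized into each container's working directory, which is typically on the executor classpath, so HBaseConfiguration.create() then finds the real quorum. Setting hbase.zookeeper.quorum on the Configuration in code would achieve the same effect.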

