SparkStreaming + Flume hits ERROR ReceiverTracker: Deregistered receiver for stream 0: Error starting receiver 0 - org.jboss.netty.channel.ChannelException

Originally published at http://www.cnblogs.com/hark0623/p/4204104.html; please credit the source when reposting.

I'm finding there are way, way too many pitfalls to wade through...

I submitted the Spark Streaming job to YARN with the following script, using yarn-client mode:

spark-submit --driver-memory 1g --executor-memory 1g --executor-cores 1  --num-executors 3 --class com.yhx.sensor.sparkstreaming.LogicHandle --master yarn-client /opt/spark/SparkStreaming.jar

The following error came up:

15/01/05 17:12:30 ERROR ReceiverTracker: Deregistered receiver for stream 0: Error starting receiver 0 - org.jboss.netty.channel.ChannelException: Failed to bind to: /xxx.xx.xx.xx:xxxx
        at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
        at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:103)
        at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:74)
        at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:68)
        at org.apache.spark.streaming.flume.FlumeReceiver.initServer(FlumeInputDStream.scala:164)
        at org.apache.spark.streaming.flume.FlumeReceiver.onStart(FlumeInputDStream.scala:171)
        at org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:121)
        at org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:106)
        at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:264)
        at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:257)
        at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
        at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
        at org.apache.spark.scheduler.Task.run(Task.scala:54)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:180)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:444)
        at sun.nio.ch.Net.bind(Net.java:436)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:366)
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:290)
        at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
        ... 3 more



15/01/05 17:12:30 WARN TaskSetManager: Lost task 0.0 in stage 2.0 (TID 70, xxx.xxx.dn02): org.jboss.netty.channel.ChannelException: Failed to bind to: /121.41.49.51:2345
        org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
        org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:103)
        org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:74)
        org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:68)

From the exception we can more or less tell that a listening socket failed to bind...
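For context, the bind in question happens when the push-based Flume receiver starts an Avro NettyServer on an executor, at exactly the host:port handed to FlumeUtils.createStream (that is the FlumeInputDStream.scala frame in the trace). Below is a minimal sketch of that setup; it is not the actual com.yhx.sensor.sparkstreaming.LogicHandle code (which isn't shown in this post), and the host, port, and batch interval are placeholders.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

object FlumeReceiverSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("FlumeReceiverSketch")
    val ssc  = new StreamingContext(conf, Seconds(10))   // placeholder batch interval

    // The receiver runs on an executor and binds an Avro NettyServer on this
    // host:port; if the address isn't local to that executor or is still occupied,
    // the ChannelException / BindException shown above is thrown.
    val events = FlumeUtils.createStream(ssc, "xxx.xx.xx.xx", 2345)
    events.count().print()

    ssc.start()
    ssc.awaitTermination()
  }
}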

In my case the bind failed because the port was still occupied: after I killed the first Spark Streaming run, I immediately launched Spark Streaming again, and the listener bound by the previous run had not yet been released, which is why this exception showed up.

You can roughly confirm this with netstat -anp | grep <port> ...
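One thing that may shorten the window where the old run is still holding the port (my own addition, not part of the original troubleshooting) is to stop the StreamingContext gracefully, so the Flume receiver closes its NettyServer and frees the socket before the JVM exits. A minimal sketch, reusing the ssc from the sketch above:

// Hypothetical shutdown hook; stopGracefully lets the receivers shut down cleanly
sys.ShutdownHookThread {
  ssc.stop(stopSparkContext = true, stopGracefully = true)
}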

 

Ugh, damn it, pitfalls everywhere...
