Common Flink errors and solutions

1.Exception in thread "main" org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
Caused by: java.lang.Exception: java.net.SocketException: Connection reset
Caused by: java.net.SocketException: Connection reset

Cause: the socket connection was reset. This usually happens when the same Flink job is submitted again in a different way, or simply resubmitted, so the socket port is already occupied.
2.No new data sinks have been defined since
Cause: no data sink has been defined.
Flink's batch (DataSet) API does not need an action operator to trigger execution, so delete the final line:

// start streaming execution; without this line the program above will not run
streamEnv.execute("wordcount")
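For contrast, a minimal batch WordCount sketch (the object name and sample input are assumptions): in the DataSet API, a sink such as `print()` executes the job eagerly, so a trailing `execute()` call would raise exactly this "No new data sinks have been defined" error.

```scala
import org.apache.flink.api.scala._

object BatchWordCount {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    env.fromElements("hello flink", "hello world")
      .flatMap(_.split("\\s+"))
      .map((_, 1))
      .groupBy(0)
      .sum(1)
      .print() // sink + eager execution: do NOT call env.execute() after this
  }
}
```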

3.The parallelism is set higher than the machine's CPU core count
The output data gets scrambled and the computation result is incorrect.
(The machine in question has only 4 CPU cores.)
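One simple safeguard is to cap the requested parallelism at the number of cores the JVM reports (a sketch; the requested value 8 is just an example):

```scala
// ask the JVM how many cores are available and cap the parallelism there
val cores = Runtime.getRuntime.availableProcessors()
val requested = 8 // example value; whatever the job would normally ask for
val parallelism = math.min(requested, cores)
println(s"cores=$cores, using parallelism=$parallelism")
```

You would then pass this value to `streamEnv.setParallelism(parallelism)` instead of a hard-coded number.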
4.

org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to initialize the cluster entrypoint YarnSessionClusterEntrypoint.
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:182)
	at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:501)
	at org.apache.flink.yarn.entrypoint.YarnSessionClusterEntrypoint.main(YarnSessionClusterEntrypoint.java:93)
Caused by: java.net.ConnectException: Call From node2/192.168.40.62 to node1:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

While setting up Flink HA in YARN mode, the cluster could not connect to Hadoop on port 9000. It turned out the JobManager storage path in flink-conf.yaml pointed at the wrong HDFS path; correcting the path fixed it.
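For reference, the relevant flink-conf.yaml entries look roughly like this (hostnames, the ZooKeeper quorum, and paths are assumptions based on the error above; the point is that the storage URI must match the actual NameNode address):

```yaml
high-availability: zookeeper
high-availability.zookeeper.quorum: node1:2181,node2:2181,node3:2181
# this HDFS path must point at a reachable NameNode; a typo here
# produces the "Connection refused" error shown above
high-availability.storageDir: hdfs://node1:9000/flink/ha/
```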
5.ProgramInvocationException: Could not retrieve the execution result
Storing checkpoints in HDFS lets a job pick up where it left off on the next run, even after the job was cancelled.
Cause of the error:
Caused by: java.io.IOException: Port 9000 specified in URI hdfs://mycluster:9000/checkpoint/cp1 but host 'mycluster' is a logical (HA) namenode and does not use port information.
A closer look showed the HDFS path was wrong: as the message says, 'mycluster' is a logical HA nameservice, so the URI must not include a port.
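A sketch of setting the checkpoint path correctly with the Flink 1.x `FsStateBackend` (the 5-second interval is an assumption; note the URI carries no port, because 'mycluster' is a logical HA nameservice rather than a host:port pair):

```scala
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.runtime.state.filesystem.FsStateBackend

val streamEnv = StreamExecutionEnvironment.getExecutionEnvironment
streamEnv.enableCheckpointing(5000) // checkpoint every 5 s

// no port in the URI: the HA nameservice name replaces host:port
streamEnv.setStateBackend(new FsStateBackend("hdfs://mycluster/checkpoint/cp1"))
```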
6.
UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
Roughly: Flink does not support the 'hdfs' scheme directly, and no Hadoop file system supporting it could be loaded.
Analysis: every node in the cluster needs the jar that connects Flink to HDFS, otherwise the scheme cannot be resolved. Adding flink-shaded-hadoop-2-uber-2.7.5-9.0.jar to the remaining nodes fixed it.
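Distributing the shaded Hadoop jar to every node can be sketched like this (the node names and the `/opt/flink` install path are assumptions; adjust to your layout):

```shell
# copy the Flink/Hadoop bridge jar into Flink's lib/ on each other node
for node in node2 node3; do
  scp /opt/flink/lib/flink-shaded-hadoop-2-uber-2.7.5-9.0.jar \
      "$node":/opt/flink/lib/
done
# restart the cluster afterwards so the new jar is picked up
```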
7.The error shown on the client side is often not the root cause — the real error message is in the log files under Flink's log directory.
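Inspecting those logs can be sketched as follows (the install path `/opt/flink` is an assumption):

```shell
# the client often prints only a generic failure; the root cause is here
tail -n 100 /opt/flink/log/flink-*-jobmanager-*.log
tail -n 100 /opt/flink/log/flink-*-taskexecutor-*.log
```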
8.A rowtime attribute requires an EventTime time characteristic in stream environment. But is: ProcessingTime
The EventTime time characteristic was not set on the environment; set it explicitly:

val streamEnv: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment
streamEnv.setParallelism(1)
// switch from the default ProcessingTime to EventTime
streamEnv.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
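Besides switching the environment to EventTime, a rowtime attribute also needs timestamps and watermarks assigned on the stream. A minimal sketch using the Flink 1.x `BoundedOutOfOrdernessTimestampExtractor` (the `Event` type, sample data, and the 5-second out-of-orderness bound are assumptions):

```scala
import org.apache.flink.streaming.api.TimeCharacteristic
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor
import org.apache.flink.streaming.api.windowing.time.Time

case class Event(id: String, timestamp: Long)

val env = StreamExecutionEnvironment.getExecutionEnvironment
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)

val events: DataStream[Event] =
  env.fromElements(Event("a", 1000L), Event("b", 2000L))

// extract event-time timestamps, tolerating events up to 5 s out of order
val withTimestamps = events.assignTimestampsAndWatermarks(
  new BoundedOutOfOrdernessTimestampExtractor[Event](Time.seconds(5)) {
    override def extractTimestamp(e: Event): Long = e.timestamp
  })
```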
