Flink 1.14.4 startup error: setPreferCheckpointForRecovery(Z)V

While upgrading from Flink 1.12 to Flink 1.14, I changed the flink-version in pom.xml and the build failed on this line:



        // Recover from the checkpoint even when a newer savepoint exists
        env.getCheckpointConfig().setPreferCheckpointForRecovery(true);

So I commented out this configuration, rebuilt the package, and deployed it to YARN, which produced the following error:

java.lang.NoSuchMethodError: org.apache.flink.streaming.api.environment.CheckpointConfig.setPreferCheckpointForRecovery(Z)V
        at com.sitech.csd.ulmp_v2.flink.util.ExecutionEnvUtil.prepare(ExecutionEnvUtil.java:65)
        at com.sitech.csd.ulmp_v2.flink.application.BaseApplication.init(BaseApplication.java:93)
        at com.sitech.csd.ulmp_v2.flink.application.BaseApplication.init(BaseApplication.java:32)
        at com.sitech.csd.ulmp_v2.flink.LogWritingApplication.main(LogWritingApplication.java:19)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355)
        at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222)
        at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114)
        at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:812)
        at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:246)
        at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1054)
        at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1132)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
        ...
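A `NoSuchMethodError` (as opposed to a compile error) means the method was on the compile-time classpath but is missing from the jars available at runtime, which is the classic symptom of mismatched artifact versions. You can reproduce the JVM's lookup with reflection; this is a hedged, self-contained sketch, and the Flink class name is taken from the stack trace above:

```java
// Sketch: probe whether a class/method pair is visible on the current
// classpath -- the same resolution that fails when the JVM throws
// NoSuchMethodError at the first call site.
public class MethodProbe {
    // Returns "present", "absent" (class found, method missing),
    // or "no-class" (class not on the classpath at all).
    static String probe(String className, String methodName, Class<?>... params) {
        try {
            Class.forName(className).getMethod(methodName, params);
            return "present";
        } catch (ClassNotFoundException e) {
            return "no-class";
        } catch (NoSuchMethodException e) {
            return "absent";
        }
    }

    public static void main(String[] args) {
        // Sanity check: String.valueOf(boolean) exists on every JDK.
        System.out.println(probe("java.lang.String", "valueOf", boolean.class));
        // The method from the stack trace; on the broken classpath this
        // prints "absent" (or "no-class" if Flink isn't on the classpath).
        System.out.println(probe(
                "org.apache.flink.streaming.api.environment.CheckpointConfig",
                "setPreferCheckpointForRecovery", boolean.class));
    }
}
```

Running this inside the cluster's classpath (rather than the build machine's) shows which side of the version mismatch you are on.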

It turned out the Scala version was wrong: the cluster needed 2.12, but I had built against 2.11. After fixing the version I resubmitted the job and got the following output:
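In Flink 1.14 the DataStream artifacts still carry a Scala binary-version suffix, so the fix is to keep that suffix consistent across every Flink dependency. A hedged sketch of the relevant pom.xml fragment (property names and the artifact shown are examples; your project's dependency list will differ):

```xml
<properties>
    <flink.version>1.14.4</flink.version>
    <!-- Must match the Scala build of the Flink distribution on the cluster -->
    <scala.binary.version>2.12</scala.binary.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
    </dependency>
</dependencies>
```

Centralizing the suffix in one property makes it hard for a stray `_2.11` artifact to survive the upgrade.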

2023-09-13 09:23:33,516 WARN  org.apache.flink.yarn.configuration.YarnLogConfigUtil        [] - The configuration directory ('/ulmp/flink/conf') already contains a LOG4J config file.If you want to use logback, then please delete or rename the log configuration file.
log4j:WARN No appenders could be found for logger (org.apache.hadoop.yarn.ipc.YarnRPC).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
2023-09-13 09:23:33,810 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
2023-09-13 09:23:33,970 INFO  org.apache.hadoop.conf.Configuration                         [] - resource-types.xml not found
2023-09-13 09:23:33,971 INFO  org.apache.hadoop.yarn.util.resource.ResourceUtils           [] - Unable to find 'resource-types.xml'.
2023-09-13 09:23:33,980 WARN  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Neither the HADOOP_CONF_DIR nor the YARN_CONF_DIR environment variable is set. The Flink YARN Client needs one of these to be set to properly load the Hadoop configuration for accessing YARN.
2023-09-13 09:23:34,014 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - The configured JobManager memory is 768 MB. YARN will allocate 1024 MB to make up an integer multiple of its minimum allocation memory (1024 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The extra 256 MB may not be used by Flink.
2023-09-13 09:23:34,015 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Cluster specification: ClusterSpecification{masterMemoryMB=1024, taskManagerMemoryMB=2048, slotsPerTaskManager=1}
2023-09-13 09:23:34,471 WARN  org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory      [] - The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
2023-09-13 09:23:36,566 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Submitting application master application_1687851042699_2369
2023-09-13 09:23:36,598 INFO  org.apache.hadoop.yarn.client.api.impl.YarnClientImpl        [] - Submitted application application_1687851042699_2369
2023-09-13 09:23:36,598 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Waiting for the cluster to be allocated
2023-09-13 09:23:36,599 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Deploying cluster, current state ACCEPTED
2023-09-13 09:24:36,761 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Deployment took more than 60 seconds. Please check if the requested resources are available in the YARN cluster
2023-09-13 09:24:37,012 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Deployment took more than 60 seconds. Please check if the requested resources are available in the YARN cluster
2023-09-13 09:24:37,262 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Deployment took more than 60 seconds. Please check if the requested resources are available in the YARN cluster
2023-09-13 09:24:37,513 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Deployment took more than 60 seconds. Please check if the requested resources are available in the YARN cluster
2023-09-13 09:24:37,764 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Deployment took more than 60 seconds. Please check if the requested resources are available in the YARN cluster
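The log above contains two actionable warnings before the `ACCEPTED` loop: `HADOOP_CONF_DIR`/`YARN_CONF_DIR` is not set, and the cluster is not allocating the requested containers. A hedged shell sketch of the first checks I would run (the conf path is an example; the application id is the one from the log, and the `yarn` command needs the Hadoop CLI installed):

```shell
# Point the Flink YARN client at the Hadoop configuration; the exact
# path depends on your Hadoop installation (example path below).
export HADOOP_CONF_DIR=/etc/hadoop/conf
echo "HADOOP_CONF_DIR=${HADOOP_CONF_DIR}"

# If the application stays in ACCEPTED, ask YARN for its diagnostics
# (uncomment on a machine with the yarn CLI):
# yarn application -status application_1687851042699_2369
```

An application stuck in `ACCEPTED` usually means the queue has no free resources for the requested 1024 MB JobManager / 2048 MB TaskManager containers, which the ResourceManager UI or `yarn application -status` will confirm.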
