1. Installing and configuring Java
There are plenty of tutorials on installing and configuring Java for each operating system; here are four points worth noting:
● Flink requires at least Java 8 to compile and run, and Java 8u51 or later is recommended
● To compile the Flink source code you need a full JDK, not just a JRE
● After installing Java, configure JAVA_HOME and PATH
● Set MAVEN_HOME to the Maven installation directory and add Maven's bin directory to PATH (see the sketch right after this list)
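A minimal sketch of the resulting environment setup, e.g. appended to ~/.bashrc. The install paths below are assumptions; point them at wherever your JDK and Maven actually live:
# Hypothetical install locations -- adjust to your machine
export JAVA_HOME=/usr/local/jdk1.8.0_201
export MAVEN_HOME=/usr/local/apache-maven-3.2.5
export PATH=$JAVA_HOME/bin:$MAVEN_HOME/bin:$PATH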
Download Maven:
wget https://archive.apache.org/dist/maven/maven-3/3.2.5/binaries/apache-maven-3.2.5-bin.tar.gz
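Unpack it; the destination directory below is an assumption, use whatever location you prefer:
tar -xzvf apache-maven-3.2.5-bin.tar.gz -C /usr/local/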
A working Maven settings.xml is available here:
https://drive.google.com/file/d/1cUq9BaHSxEelKBPKYE8YyFQ9LD6EW8yB/view?usp=sharing
Open the link, copy the file contents, and paste them into ~/.m2/settings.xml (or whichever settings.xml you use).
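The linked file is not reproduced here; as a rough sketch based on the mirror ids (nexus-aliyun / alimaven) and the Aliyun URL that appear verbatim in the Maven errors later in this post, the essential part is presumably a mirror entry like:
cat > ~/.m2/settings.xml <<'EOF'
<settings>
  <mirrors>
    <!-- Assumed mirror config; the actual linked settings.xml may contain more -->
    <mirror>
      <id>nexus-aliyun</id>
      <mirrorOf>central</mirrorOf>
      <name>Nexus aliyun</name>
      <url>http://maven.aliyun.com/nexus/content/groups/public</url>
    </mirror>
  </mirrors>
</settings>
EOF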
2. Check the Java and Maven versions
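The checks themselves are just:
java -version   # should report 1.8.0_51 or later
mvn -version    # should report Apache Maven 3.2.5 and pick up the expected JAVA_HOME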
3. Start compiling
Run the full build from the Flink source root:
mvn clean package -DskipTests -Pvendor-repos -Drat.skip=true -Pinclude-hadoop -Dhadoop.version=3.0.0-cdh6.2.1
An error appears:
[ERROR] Failed to execute goal on project flink-hadoop-fs: Could not resolve dependencies for project org.apache.flink:flink-hadoop-fs:jar:1.9-SNAPSHOT: Failed to collect dependencies at org.apache.flink:flink-shaded-hadoop-2:jar:2.6.0-cdh5.16.1-7.0: Failed to read artifact descriptor for org.apache.flink:flink-shaded-hadoop-2:jar:2.6.0-cdh5.16.1-7.0: Could not transfer artifact org.apache.flink:flink-shaded-hadoop-2:pom:2.6.0-cdh5.16.1-7.0 from/to mapr-releases (https://repository.mapr.com/maven/): sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn -rf :flink-hadoop-fs
The flink-shaded-hadoop-2 jar in that version cannot be found, so check the Maven repository to see which versions actually exist: the 7.0 build of flink-shaded-hadoop-2 for this CDH version is not published there, which means we have to compile that project ourselves.
Solution: clone the flink-shaded project locally and check out the matching release tag:
git clone https://github.com/apache/flink-shaded.git
cd flink-shaded
git checkout release-7.0
Build flink-shaded (mvn clean install puts the resulting artifacts into the local ~/.m2 repository, where the Flink build can then resolve them):
mvn clean install -DskipTests -Drat.skip=true -Pvendor-repos -Dhadoop.version=3.0.0-cdh6.2.1
[ERROR] Failed to execute goal on project flink-shaded-hadoop-2: Could not resolve dependencies for project org.apache.flink:flink-shaded-hadoop-2:jar:3.0.0-cdh6.2.1-9.0: The following artifacts could not be resolved: org.apache.hadoop:hadoop-common:jar:3.0.0-cdh6.2.1, org.apache.hadoop:hadoop-mapreduce-client-core:jar:3.0.0-cdh6.2.1, org.apache.hadoop:hadoop-yarn-client:jar:3.0.0-cdh6.2.1, org.apache.hadoop:hadoop-yarn-common:jar:3.0.0-cdh6.2.1: Failure to find org.apache.hadoop:hadoop-common:jar:3.0.0-cdh6.2.1 in http://maven.aliyun.com/nexus/content/groups/public was cached in the local repository, resolution will not be reattempted until the update interval of nexus-aliyun has elapsed or updates are forced -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn -rf :flink-shaded-hadoop-2
Solution: edit pom.xml in the flink-shaded project directory and add the Cloudera (CDH) repository to its <repositories> section:
<repository>
    <id>cloudera</id>
    <url>https://repository.cloudera.com/artifactory/cloudera-repos</url>
</repository>
Rebuild flink-shaded with the same mvn clean install command as before, then recompile Flink:
mvn clean package -DskipTests -Pvendor-repos -Drat.skip=true -Pinclude-hadoop -Dhadoop.version=3.0.0-cdh6.2.1
Another error appears:
[ERROR] Failed to execute goal on project flink-hadoop-fs: Could not resolve dependencies for project org.apache.flink:flink-hadoop-fs:jar:1.11-SNAPSHOT: The following artifacts could not be resolved: org.apache.hadoop:hadoop-hdfs:jar:tests:3.0.0-cdh6.2.1, org.apache.hadoop:hadoop-common:jar:tests:3.0.0-cdh6.2.1: Failure to find org.apache.hadoop:hadoop-hdfs:jar:tests:3.0.0-cdh6.2.1 in http://maven.aliyun.com/nexus/content/groups/public/ was cached in the local repository, resolution will not be reattempted until the update interval of alimaven has elapsed or updates are forced -> [Help 1]
Solution: add the same Cloudera repository again, this time to the <repositories> section of the Flink project's own pom.xml, mirroring the flink-shaded fix:
<repository>
    <id>cloudera</id>
    <url>https://repository.cloudera.com/artifactory/cloudera-repos</url>
</repository>
Recompile Flink, resuming from the module that failed (-rf resumes the reactor build from the named module instead of starting over):
mvn clean package -DskipTests -Pvendor-repos -Drat.skip=true -Pinclude-hadoop -Dhadoop.version=3.0.0-cdh6.2.1 -rf :flink-hadoop-fs
Yet another error:
[ERROR] Failed to execute goal on project flink-avro-confluent-registry: Could not resolve dependencies for project org.apache.flink:flink-avro-confluent-registry:jar:1.9.3: Could not find artifact io.confluent:kafka-schema-registry-client:jar:3.3.1 in nexus-aliyun (http://maven.aliyun.com/nexus/content/groups/public) -> [Help 1]
The Kafka schema-registry client jar is missing. Solution: download it manually, then install it into the local Maven repository:
wget http://packages.confluent.io/maven/io/confluent/kafka-schema-registry-client/3.3.1/kafka-schema-registry-client-3.3.1.jar
# Install the downloaded jar into the local Maven repository
mvn install:install-file -DgroupId=io.confluent -DartifactId=kafka-schema-registry-client -Dversion=3.3.1 -Dpackaging=jar -Dfile=/root/kafka-schema-registry-client-3.3.1.jar
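To confirm the jar landed where Maven expects it (this path follows the standard local-repository layout):
ls ~/.m2/repository/io/confluent/kafka-schema-registry-client/3.3.1/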
Next comes a YARN-related compilation error:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile (default-testCompile) on project flink-yarn_2.11: Compilation failure
[ERROR] /opt/flink-1.9.3/flink-yarn/src/test/java/org/apache/flink/yarn/AbstractYarnClusterTest.java:[89,41] no suitable method found for newInstance(org.apache.hadoop.yarn.api.records.ApplicationId,org.apache.hadoop.yarn.api.records.ApplicationAttemptId,java.lang.String,java.lang.String,java.lang.String,java.lang.String,int,,org.apache.hadoop.yarn.api.records.YarnApplicationState,,,long,long,org.apache.hadoop.yarn.api.records.FinalApplicationStatus,,,float,,)
[ERROR] method org.apache.hadoop.yarn.api.records.ApplicationReport.newInstance(org.apache.hadoop.yarn.api.records.ApplicationId,org.apache.hadoop.yarn.api.records.ApplicationAttemptId,java.lang.String,java.lang.String,java.lang.String,java.lang.String,int,org.apache.hadoop.yarn.api.records.Token,org.apache.hadoop.yarn.api.records.YarnApplicationState,java.lang.String,java.lang.String,long,long,long,org.apache.hadoop.yarn.api.records.FinalApplicationStatus,org.apache.hadoop.yarn.api.records.ApplicationResourceUsageReport,java.lang.String,float,java.lang.String,org.apache.hadoop.yarn.api.records.Token) is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] method org.apache.hadoop.yarn.api.records.ApplicationReport.newInstance(org.apache.hadoop.yarn.api.records.ApplicationId,org.apache.hadoop.yarn.api.records.ApplicationAttemptId,java.lang.String,java.lang.String,java.lang.String,java.lang.String,int,org.apache.hadoop.yarn.api.records.Token,org.apache.hadoop.yarn.api.records.YarnApplicationState,java.lang.String,java.lang.String,long,long,org.apache.hadoop.yarn.api.records.FinalApplicationStatus,org.apache.hadoop.yarn.api.records.ApplicationResourceUsageReport,java.lang.String,float,java.lang.String,org.apache.hadoop.yarn.api.records.Token,java.util.Set,boolean,org.apache.hadoop.yarn.api.records.Priority,java.lang.String,java.lang.String) is not applicable
[ERROR] (actual and formal argument lists differ in length)
[ERROR] method org.apache.hadoop.yarn.api.records.ApplicationReport.newInstance(org.apache.hadoop.yarn.api.records.ApplicationId,org.apache.hadoop.yarn.api.records.ApplicationAttemptId,java.lang.String,java.lang.String,java.lang.String,java.lang.String,int,org.apache.hadoop.yarn.api.records.Token,org.apache.hadoop.yarn.api.records.YarnApplicationState,java.lang.String,java.lang.String,long,long,long,org.apache.hadoop.yarn.api.records.FinalApplicationStatus,org.apache.hadoop.yarn.api.records.ApplicationResourceUsageReport,java.lang.String,float,java.lang.String,org.apache.hadoop.yarn.api.records.Token,java.util.Set,boolean,org.apache.hadoop.yarn.api.records.Priority,java.lang.String,java.lang.String) is not applicable
Therefore, add the following plugin to the <build> section of the pom.xml of both the flink-yarn and flink-yarn-tests modules, so that compilation of their test code is skipped:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.8.0</version>
    <configuration>
        <source>${java.version}</source>
        <target>${java.version}</target>
        <!-- skip compiling test sources -->
        <skip>true</skip>
        <useIncrementalCompilation>false</useIncrementalCompilation>
        <compilerArgs>
            <arg>-Xpkginfo:always</arg>
        </compilerArgs>
    </configuration>
</plugin>
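An untested alternative that would avoid editing the two poms: pass -Dmaven.test.skip=true instead of -DskipTests. Unlike -DskipTests, maven.test.skip also skips compiling test sources, since the compiler plugin's testCompile goal honors that property:
mvn clean package -Dmaven.test.skip=true -Pvendor-repos -Drat.skip=true -Pinclude-hadoop -Dhadoop.version=3.0.0-cdh6.2.1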
Compile once more:
mvn clean package -DskipTests -Pvendor-repos -Drat.skip=true -Pinclude-hadoop -Dhadoop.version=3.0.0-cdh6.2.1
[INFO] force-shading ...................................... SUCCESS [ 1.953 s]
[INFO] flink .............................................. SUCCESS [ 2.829 s]
[INFO] flink-annotations .................................. SUCCESS [ 2.331 s]
[INFO] flink-shaded-curator ............................... SUCCESS [ 2.526 s]
[INFO] flink-metrics ...................................... SUCCESS [ 0.220 s]
[INFO] flink-metrics-core ................................. SUCCESS [ 1.209 s]
[INFO] flink-test-utils-parent ............................ SUCCESS [ 0.220 s]
[INFO] flink-test-utils-junit ............................. SUCCESS [ 1.215 s]
[INFO] flink-core ......................................... SUCCESS [01:04 min]
[INFO] flink-java ......................................... SUCCESS [ 10.109 s]
[INFO] flink-queryable-state .............................. SUCCESS [ 0.178 s]
[INFO] flink-queryable-state-client-java .................. SUCCESS [ 1.360 s]
[INFO] flink-filesystems .................................. SUCCESS [ 0.175 s]
[INFO] flink-hadoop-fs .................................... SUCCESS [ 3.633 s]
[INFO] flink-runtime ...................................... SUCCESS [03:07 min]
[INFO] flink-scala ........................................ SUCCESS [01:07 min]
[INFO] flink-mapr-fs ...................................... SUCCESS [ 1.376 s]
[INFO] flink-filesystems :: flink-fs-hadoop-shaded ........ SUCCESS [ 7.834 s]
[INFO] flink-s3-fs-base ................................... SUCCESS [ 11.338 s]
[INFO] flink-s3-fs-hadoop ................................. SUCCESS [ 10.893 s]
[INFO] flink-s3-fs-presto ................................. SUCCESS [ 19.850 s]
[INFO] flink-swift-fs-hadoop .............................. SUCCESS [ 40.948 s]
[INFO] flink-oss-fs-hadoop ................................ SUCCESS [ 20.133 s]
[INFO] flink-azure-fs-hadoop .............................. SUCCESS [ 12.963 s]
[INFO] flink-optimizer .................................... SUCCESS [ 22.688 s]
[INFO] flink-clients ...................................... SUCCESS [ 3.883 s]
[INFO] flink-streaming-java ............................... SUCCESS [ 23.488 s]
[INFO] flink-test-utils ................................... SUCCESS [ 4.908 s]
[INFO] flink-runtime-web .................................. SUCCESS [03:45 min]
[INFO] flink-examples ..................................... SUCCESS [ 0.261 s]
[INFO] flink-examples-batch ............................... SUCCESS [ 21.655 s]
[INFO] flink-connectors ................................... SUCCESS [ 0.147 s]
[INFO] flink-hadoop-compatibility ......................... SUCCESS [ 8.186 s]
[INFO] flink-state-backends ............................... SUCCESS [ 0.159 s]
[INFO] flink-statebackend-rocksdb ......................... SUCCESS [ 2.753 s]
[INFO] flink-tests ........................................ SUCCESS [01:04 min]
[INFO] flink-streaming-scala .............................. SUCCESS [ 54.903 s]
[INFO] flink-table ........................................ SUCCESS [ 0.151 s]
[INFO] flink-table-common ................................. SUCCESS [ 4.907 s]
[INFO] flink-table-api-java ............................... SUCCESS [ 3.489 s]
[INFO] flink-table-api-java-bridge ........................ SUCCESS [ 1.605 s]
[INFO] flink-table-api-scala .............................. SUCCESS [ 10.409 s]
[INFO] flink-table-api-scala-bridge ....................... SUCCESS [ 15.821 s]
[INFO] flink-sql-parser ................................... SUCCESS [ 8.202 s]
[INFO] flink-libraries .................................... SUCCESS [ 0.175 s]
[INFO] flink-cep .......................................... SUCCESS [ 6.611 s]
[INFO] flink-table-planner ................................ SUCCESS [03:41 min]
[INFO] flink-orc .......................................... SUCCESS [ 1.318 s]
[INFO] flink-jdbc ......................................... SUCCESS [ 1.337 s]
[INFO] flink-table-runtime-blink .......................... SUCCESS [ 9.913 s]
[INFO] flink-table-planner-blink .......................... SUCCESS [04:38 min]
[INFO] flink-hbase ........................................ SUCCESS [ 6.425 s]
[INFO] flink-hcatalog ..................................... SUCCESS [ 8.011 s]
[INFO] flink-metrics-jmx .................................. SUCCESS [ 0.571 s]
[INFO] flink-connector-kafka-base ......................... SUCCESS [ 4.910 s]
[INFO] flink-connector-kafka-0.9 .......................... SUCCESS [ 1.896 s]
[INFO] flink-connector-kafka-0.10 ......................... SUCCESS [ 1.445 s]
[INFO] flink-connector-kafka-0.11 ......................... SUCCESS [ 1.664 s]
[INFO] flink-formats ...................................... SUCCESS [ 0.146 s]
[INFO] flink-json ......................................... SUCCESS [ 0.810 s]
[INFO] flink-connector-elasticsearch-base ................. SUCCESS [ 3.256 s]
[INFO] flink-connector-elasticsearch2 ..................... SUCCESS [ 25.116 s]
[INFO] flink-connector-elasticsearch5 ..................... SUCCESS [ 28.019 s]
[INFO] flink-connector-elasticsearch6 ..................... SUCCESS [ 2.647 s]
[INFO] flink-csv .......................................... SUCCESS [ 0.570 s]
[INFO] flink-connector-hive ............................... SUCCESS [ 5.571 s]
[INFO] flink-connector-rabbitmq ........................... SUCCESS [ 0.806 s]
[INFO] flink-connector-twitter ............................ SUCCESS [ 3.711 s]
[INFO] flink-connector-nifi ............................... SUCCESS [ 0.937 s]
[INFO] flink-connector-cassandra .......................... SUCCESS [ 5.207 s]
[INFO] flink-avro ......................................... SUCCESS [ 3.231 s]
[INFO] flink-connector-filesystem ......................... SUCCESS [ 1.937 s]
[INFO] flink-connector-kafka .............................. SUCCESS [ 2.031 s]
[INFO] flink-connector-gcp-pubsub ......................... SUCCESS [ 1.508 s]
[INFO] flink-sql-connector-elasticsearch6 ................. SUCCESS [ 14.583 s]
[INFO] flink-sql-connector-kafka-0.9 ...................... SUCCESS [ 0.807 s]
[INFO] flink-sql-connector-kafka-0.10 ..................... SUCCESS [ 0.962 s]
[INFO] flink-sql-connector-kafka-0.11 ..................... SUCCESS [ 1.340 s]
[INFO] flink-sql-connector-kafka .......................... SUCCESS [ 2.029 s]
[INFO] flink-connector-kafka-0.8 .......................... SUCCESS [ 1.565 s]
[INFO] flink-avro-confluent-registry ...................... SUCCESS [ 0.441 s]
[INFO] flink-parquet ...................................... SUCCESS [ 1.903 s]
[INFO] flink-sequence-file ................................ SUCCESS [ 0.560 s]
[INFO] flink-examples-streaming ........................... SUCCESS [ 22.746 s]
[INFO] flink-examples-table ............................... SUCCESS [ 11.965 s]
[INFO] flink-examples-build-helper ........................ SUCCESS [ 0.292 s]
[INFO] flink-examples-streaming-twitter ................... SUCCESS [ 1.287 s]
[INFO] flink-examples-streaming-state-machine ............. SUCCESS [ 0.785 s]
[INFO] flink-examples-streaming-gcp-pubsub ................ SUCCESS [ 9.470 s]
[INFO] flink-container .................................... SUCCESS [ 0.712 s]
[INFO] flink-queryable-state-runtime ...................... SUCCESS [ 1.281 s]
[INFO] flink-end-to-end-tests ............................. SUCCESS [ 0.130 s]
[INFO] flink-cli-test ..................................... SUCCESS [ 0.323 s]
[INFO] flink-parent-child-classloading-test-program ....... SUCCESS [ 0.319 s]
[INFO] flink-parent-child-classloading-test-lib-package ... SUCCESS [ 0.245 s]
[INFO] flink-dataset-allround-test ........................ SUCCESS [ 0.400 s]
[INFO] flink-dataset-fine-grained-recovery-test ........... SUCCESS [ 0.351 s]
[INFO] flink-datastream-allround-test ..................... SUCCESS [ 2.968 s]
[INFO] flink-batch-sql-test ............................... SUCCESS [ 0.332 s]
[INFO] flink-stream-sql-test .............................. SUCCESS [ 0.408 s]
[INFO] flink-bucketing-sink-test .......................... SUCCESS [ 1.140 s]
[INFO] flink-distributed-cache-via-blob ................... SUCCESS [ 0.352 s]
[INFO] flink-high-parallelism-iterations-test ............. SUCCESS [ 11.800 s]
[INFO] flink-stream-stateful-job-upgrade-test ............. SUCCESS [ 1.491 s]
[INFO] flink-queryable-state-test ......................... SUCCESS [ 2.863 s]
[INFO] flink-local-recovery-and-allocation-test ........... SUCCESS [ 2.573 s]
[INFO] flink-elasticsearch2-test .......................... SUCCESS [ 4.748 s]
[INFO] flink-elasticsearch5-test .......................... SUCCESS [ 5.396 s]
[INFO] flink-elasticsearch6-test .......................... SUCCESS [ 5.435 s]
[INFO] flink-quickstart ................................... SUCCESS [ 1.673 s]
[INFO] flink-quickstart-java .............................. SUCCESS [ 1.276 s]
[INFO] flink-quickstart-scala ............................. SUCCESS [ 0.309 s]
[INFO] flink-quickstart-test .............................. SUCCESS [ 0.509 s]
[INFO] flink-confluent-schema-registry .................... SUCCESS [ 1.820 s]
[INFO] flink-stream-state-ttl-test ........................ SUCCESS [ 6.325 s]
[INFO] flink-sql-client-test .............................. SUCCESS [ 0.927 s]
[INFO] flink-streaming-file-sink-test ..................... SUCCESS [ 0.342 s]
[INFO] flink-state-evolution-test ......................... SUCCESS [ 1.427 s]
[INFO] flink-mesos ........................................ SUCCESS [ 35.742 s]
[INFO] flink-yarn ......................................... SUCCESS [ 2.190 s]
[INFO] flink-gelly ........................................ SUCCESS [ 6.388 s]
[INFO] flink-gelly-scala .................................. SUCCESS [ 27.857 s]
[INFO] flink-gelly-examples ............................... SUCCESS [ 19.265 s]
[INFO] flink-metrics-dropwizard ........................... SUCCESS [ 1.604 s]
[INFO] flink-metrics-graphite ............................. SUCCESS [ 1.037 s]
[INFO] flink-metrics-influxdb ............................. SUCCESS [ 13.751 s]
[INFO] flink-metrics-prometheus ........................... SUCCESS [ 5.497 s]
[INFO] flink-metrics-statsd ............................... SUCCESS [ 0.447 s]
[INFO] flink-metrics-datadog .............................. SUCCESS [ 0.953 s]
[INFO] flink-metrics-slf4j ................................ SUCCESS [ 0.424 s]
[INFO] flink-cep-scala .................................... SUCCESS [ 18.828 s]
[INFO] flink-table-uber ................................... SUCCESS [ 4.212 s]
[INFO] flink-table-uber-blink ............................. SUCCESS [ 5.079 s]
[INFO] flink-sql-client ................................... SUCCESS [ 8.341 s]
[INFO] flink-state-processor-api .......................... SUCCESS [ 1.697 s]
[INFO] flink-python ....................................... SUCCESS [ 5.225 s]
[INFO] flink-scala-shell .................................. SUCCESS [ 18.612 s]
[INFO] flink-dist ......................................... SUCCESS [ 39.995 s]
[INFO] flink-end-to-end-tests-common ...................... SUCCESS [ 0.870 s]
[INFO] flink-metrics-availability-test .................... SUCCESS [ 0.459 s]
[INFO] flink-metrics-reporter-prometheus-test ............. SUCCESS [ 0.437 s]
[INFO] flink-heavy-deployment-stress-test ................. SUCCESS [ 13.129 s]
[INFO] flink-connector-gcp-pubsub-emulator-tests .......... SUCCESS [ 24.620 s]
[INFO] flink-streaming-kafka-test-base .................... SUCCESS [ 0.686 s]
[INFO] flink-streaming-kafka-test ......................... SUCCESS [ 11.831 s]
[INFO] flink-streaming-kafka011-test ...................... SUCCESS [ 10.185 s]
[INFO] flink-streaming-kafka010-test ...................... SUCCESS [ 10.420 s]
[INFO] flink-plugins-test ................................. SUCCESS [ 0.159 s]
[INFO] dummy-fs ........................................... SUCCESS [ 0.227 s]
[INFO] another-dummy-fs ................................... SUCCESS [ 0.238 s]
[INFO] flink-tpch-test .................................... SUCCESS [ 3.887 s]
[INFO] flink-contrib ...................................... SUCCESS [ 0.144 s]
[INFO] flink-connector-wikiedits .......................... SUCCESS [ 1.565 s]
[INFO] flink-yarn-tests ................................... SUCCESS [ 14.572 s]
[INFO] flink-fs-tests ..................................... SUCCESS [ 1.039 s]
[INFO] flink-docs ......................................... SUCCESS [ 3.117 s]
[INFO] flink-ml-parent .................................... SUCCESS [ 0.128 s]
[INFO] flink-ml-api ....................................... SUCCESS [ 0.879 s]
[INFO] flink-ml-lib ....................................... SUCCESS [ 0.511 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 33:47 min
[INFO] Finished at: 2020-06-10T12:13:49+08:00
[INFO] Final Memory: 574M/3289M
[INFO] ------------------------------------------------------------------------
After all these trials the build finally completes. Package the distribution:
cd /opt/flink-1.9.3-src/flink-dist/target/flink-1.9.3-bin
tar -czvf flink-1.9.3-hadoop3.0.0-cdh6.2.1.tar.gz flink-1.9.3/
Upload flink-1.9.3-hadoop3.0.0-cdh6.2.1.tar.gz to your servers and set up the Flink cluster.
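A quick sanity check and upload sketch; user@node1 and the target path are placeholders for your own cluster:
tar -tzf flink-1.9.3-hadoop3.0.0-cdh6.2.1.tar.gz | head    # verify the archive is readable
scp flink-1.9.3-hadoop3.0.0-cdh6.2.1.tar.gz user@node1:/opt/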
Let's get started!
If you would rather skip the build, you can download the package I compiled: https://download.csdn.net/download/qq_21348527/12510764
I also recommend the Flink China community for further learning: https://ververica.cn/developers/flink-training-course3/