Flink Official Docs Navigation

Configuration Parameters | Apache Flink
All configuration is done in conf/flink-conf.yaml, which is expected to be a flat collection of YAML key-value pairs in the format key: value. The configuration is parsed and evaluated when the Flink processes are started, so changes to the configuration file require restarting the relevant processes. The out-of-the-box configuration uses your default Java installation; you can manually set the environment variable JAVA_HOME or the corresponding configuration key to override it.
https://nightlies.apache.org/flink/flink-docs-release-1.13/zh/docs/deployment/config/
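As a quick reference, a minimal flink-conf.yaml might look like the sketch below; the host name, memory sizes, and slot count are placeholder values for illustration, not recommendations:

```yaml
# conf/flink-conf.yaml is a flat collection of YAML "key: value" pairs.

# Address the JobManager listens on (placeholder value).
jobmanager.rpc.address: localhost

# Total process memory for the JobManager and each TaskManager.
jobmanager.memory.process.size: 1600m
taskmanager.memory.process.size: 1728m

# Number of task slots offered by each TaskManager.
taskmanager.numberOfTaskSlots: 2

# Default parallelism for jobs that do not set one explicitly.
parallelism.default: 1
```

Note that these values are only read at startup: after editing the file, the affected JobManager/TaskManager processes must be restarted.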

Kafka | Apache Flink
Apache Kafka SQL Connector. Scan Source: Unbounded; Sink: Streaming Append Mode. The Kafka connector allows for reading data from and writing data into Kafka topics. In order to use it, the required dependencies must be added both to projects using a build automation tool (such as Maven or SBT) and to the SQL Client via the SQL JAR bundles; see the dependency table on the page for the Maven coordinates and SQL Client JARs per Kafka version.
https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/connectors/table/kafka/
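As an illustration of the connector options described on that page, a Kafka-backed table can be declared in Flink SQL roughly as follows; the table name, topic, broker address, and columns are hypothetical placeholders:

```sql
-- Declare a table backed by a Kafka topic (all names are illustrative).
CREATE TABLE user_behavior (
  user_id BIGINT,
  item_id BIGINT,
  behavior STRING,
  ts TIMESTAMP(3)
) WITH (
  'connector' = 'kafka',                              -- use the Kafka connector
  'topic' = 'user_behavior',                          -- topic to read from / write to
  'properties.bootstrap.servers' = 'localhost:9092',  -- Kafka broker list
  'properties.group.id' = 'flink-demo',               -- consumer group id
  'scan.startup.mode' = 'earliest-offset',            -- where the scan source starts
  'format' = 'json'                                   -- how records are (de)serialized
);
```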

Formats | Apache Flink
Flink provides a set of table formats that can be used with table connectors. A table format is a storage format that defines how to map binary data onto table columns. Flink supports the following formats (format: supported connectors):

- CSV: Apache Kafka, Upsert Kafka, Amazon Kinesis Data Streams, Filesystem
- JSON: Apache Kafka, Upsert Kafka, Amazon Kinesis Data Streams, Filesystem, Elasticsearch
- Apache Avro: Apache Kafka, Upsert Kafka, Amazon Kinesis Data Streams, Filesystem
- Confluent Avro: Apache Kafka, Upsert Kafka
- Debezium CDC: Apache Kafka, Filesystem
- Canal CDC: Apache Kafka, Filesystem
- Maxwell CDC: Apache Kafka, Filesystem
- Apache Parquet: Filesystem
- Apache ORC: Filesystem
- Raw: Apache Kafka, Upsert Kafka, Amazon Kinesis Data Streams, Filesystem

https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/connectors/table/formats/overview/
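Switching formats is mostly a matter of changing the 'format' option in the WITH clause, plus any format-specific keys. A minimal sketch, reusing hypothetical names, that interprets Debezium CDC changelog records from Kafka:

```sql
-- Read Debezium JSON changelog records from a Kafka topic as a changing table
-- (topic and columns are illustrative).
CREATE TABLE orders_cdc (
  order_id BIGINT,
  order_status STRING,
  price DECIMAL(10, 2)
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders-cdc',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'debezium-json'   -- one of the CDC formats listed above
);
```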
