Table of Contents
An Example
Connectors
Downloading Connector and Format JARs
Dependency Management
How to Use Connectors
The StreamTableEnvironment is fully integrated with the DataStream API and extends the TableEnvironment with additional functions.
The following code shows how to convert back and forth between the two APIs:
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.table import StreamTableEnvironment
from pyflink.common.typeinfo import Types
env = StreamExecutionEnvironment.get_execution_environment()
t_env = StreamTableEnvironment.create(env)
# create a DataStream
ds = env.from_collection(["Alice", "Bob", "John"], Types.STRING())
# interpret the insert-only DataStream as a Table
t = t_env.from_data_stream(ds)
# register the Table object as a view and query it
t_env.create_temporary_view("InputTable", t)
res_table = t_env.sql_query("SELECT UPPER(f0) FROM InputTable")
# interpret the insert-only Table as a DataStream again
res_ds = t_env.to_data_stream(res_table)
# add a printing sink and execute in DataStream API
res_ds.print()
env.execute()
The TableEnvironment adopts all configuration options of the StreamExecutionEnvironment. It is therefore recommended to set all DataStream API configuration options before switching to the Table API, as shown in the following code.
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.table import StreamTableEnvironment
from pyflink.datastream.checkpointing_mode import CheckpointingMode
# create Python DataStream API
env = StreamExecutionEnvironment.get_execution_environment()
# set various configuration early
env.set_max_parallelism(256)
env.get_config().add_default_kryo_serializer("type_class_name", "serializer_class_name")
env.get_checkpoint_config().set_checkpointing_mode(CheckpointingMode.EXACTLY_ONCE)
# then switch to Python Table API
t_env = StreamTableEnvironment.create(env)
# set configuration early
t_env.get_config().set_local_timezone("Europe/Berlin")
# start defining your pipelines in both APIs...
Since Flink is a Java/Scala-based project, the implementations of connectors and formats are available as JAR files. To use them in a PyFlink job, you first need to specify them as dependencies of the job.
If you use third-party JARs, you can specify them in the Python Table API as follows:
table_env.get_config().get_configuration().set_string("pipeline.jars", "file:///my/jar/path/connector.jar;file:///my/jar/path/json.jar")
or
table_env.get_config().get_configuration().set_string("pipeline.classpaths", "file:///my/jar/path/connector.jar;file:///my/jar/path/udf.jar")
Dependencies are also needed inside the Python code of a job, for example when a Python user-defined function uses a third-party Python library. In scenarios such as model prediction, users may also want to load a machine learning model inside a Python user-defined function.
When a PyFlink job is executed locally, third-party Python libraries can simply be installed into the local Python environment, the machine learning model can be downloaded to the local machine, and so on.
However, this approach does not work when the PyFlink job is submitted to a remote cluster.
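For the remote case, the Python Table API offers methods to ship Python dependencies together with the job. The snippet below is a minimal sketch: the file paths (my_udf_utils.py, requirements.txt, model.zip) are hypothetical placeholders, while add_python_file, set_python_requirements and add_python_archive are TableEnvironment methods.
from pyflink.table import TableEnvironment, EnvironmentSettings
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())
# ship a local Python module so that it can be imported inside Python UDFs on the cluster
t_env.add_python_file("/path/to/my_udf_utils.py")  # hypothetical path
# declare third-party Python libraries; they are installed on the cluster before execution
t_env.set_python_requirements("/path/to/requirements.txt")  # hypothetical path
# distribute an archive (e.g. a zipped machine learning model) to the workers;
# inside a UDF it can be accessed via the relative directory "model"
t_env.add_python_archive("/path/to/model.zip", "model")  # hypothetical path
The dependencies declared this way are uploaded together with the job, so Python user-defined functions can use them when running on the cluster.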
For JAR dependencies, besides the Table API shown above, the Python DataStream API is configured as follows:
stream_execution_environment.add_jars("file:///my/jar/path/connector1.jar", "file:///my/jar/path/connector2.jar")
stream_execution_environment.add_jars("file:///E:/my/jar/path/connector1.jar", "file:///E:/my/jar/path/connector2.jar")
# NOTE: The paths must specify a protocol (e.g. file://) and users should ensure that the
# URLs are accessible on both the client and the cluster.
stream_execution_environment.add_classpaths("file:///my/jar/path/connector1.jar", "file:///my/jar/path/connector2.jar")
In the PyFlink Table API, DDL is the recommended way to define sources and sinks. This can be done with the execute_sql() method of the TableEnvironment; the created table can then be used in the job.
Below is a complete example of using a Kafka source/sink with the JSON format in PyFlink.
from pyflink.table import TableEnvironment, EnvironmentSettings

def log_processing():
    env_settings = EnvironmentSettings.in_streaming_mode()
    t_env = TableEnvironment.create(env_settings)
    # specify connector and format jars
    t_env.get_config().get_configuration().set_string("pipeline.jars", "file:///my/jar/path/connector.jar;file:///my/jar/path/json.jar")

    source_ddl = """
            CREATE TABLE source_table(
                a VARCHAR,
                b INT
            ) WITH (
              'connector' = 'kafka',
              'topic' = 'source_topic',
              'properties.bootstrap.servers' = 'kafka:9092',
              'properties.group.id' = 'test_3',
              'scan.startup.mode' = 'latest-offset',
              'format' = 'json'
            )
            """

    sink_ddl = """
            CREATE TABLE sink_table(
                a VARCHAR
            ) WITH (
              'connector' = 'kafka',
              'topic' = 'sink_topic',
              'properties.bootstrap.servers' = 'kafka:9092',
              'format' = 'json'
            )
            """

    t_env.execute_sql(source_ddl)
    t_env.execute_sql(sink_ddl)

    t_env.sql_query("SELECT a FROM source_table") \
        .execute_insert("sink_table").wait()


if __name__ == '__main__':
    log_processing()