Flink DataStream API Programming Guide: Study Notes & Translation (to be continued)


Flink DataStream API Programming Guide


DataStream programs in Flink are regular programs that implement transformations on data streams (e.g., filtering, updating state, defining windows, aggregating). The data streams are initially created from various sources (e.g., message queues, socket streams, files). Results are returned via sinks, which may for example write the data to files, or to standard output (for example the command line terminal). Flink programs run in a variety of contexts, standalone, or embedded in other programs. The execution can happen in a local JVM, or on clusters of many machines. (This part is straightforward and easy to follow.)


Below is a streaming word-count program that reads its input from a network socket:

import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

object WindowWordCount {
  def main(args: Array[String]) {

    // obtain the streaming execution environment
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // source: read lines of text from a socket on localhost:9999
    val text = env.socketTextStream("localhost", 9999)

    // split lines into words and count them per key in 5-second tumbling windows
    val counts = text.flatMap { _.toLowerCase.split("\\W+") filter { _.nonEmpty } }
      .map { (_, 1) }
      .keyBy(0)
      .timeWindow(Time.seconds(5))
      .sum(1)

    // sink: print the windowed counts to standard output
    counts.print()

    env.execute("Window Stream WordCount")
  }
}


To run the example program, first start netcat as the input source with nc -lk 9999, then start the Flink program. Type some words into the netcat terminal and you will see the windowed counts in the Flink program's output.

DataStream Transformations


Data transformations turn one or more DataStreams into a new DataStream; multiple transformations can be composed to build arbitrarily complex dataflow topologies.


The available transformation operators are described at: https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/datastream_api.html#datastream-transformations
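
As a quick reference (this sketch is not from the original guide, and the element values and key choice are made up for illustration), here is how a few common transformations such as map, filter, keyBy and reduce look with the Scala DataStream API:

import org.apache.flink.streaming.api.scala._

object TransformationSketch {
  def main(args: Array[String]) {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // an illustrative bounded stream of integers
    val numbers: DataStream[Int] = env.fromElements(1, 2, 3, 4, 5)

    // map: one-to-one transformation of each element
    val doubled = numbers.map { _ * 2 }

    // filter: keep only the elements matching the predicate
    val even = doubled.filter { _ % 2 == 0 }

    // keyBy + reduce: running aggregation per key (here the key is simply n % 3)
    val summed = even
      .map { n => (n % 3, n) }
      .keyBy(0)
      .reduce { (a, b) => (a._1, a._2 + b._2) }

    summed.print()

    env.execute("Transformation sketch")
  }
}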


Anonymous pattern matching to deconstruct tuples, case classes, and collections, as in the following, is not supported out of the box:


// this does not compile with the plain Scala API
val data: DataStream[(Int, String, Double)] = // [...]
data.map {
  case (id, name, temperature) => // [...]
}

To use anonymous pattern matching like this, you need the extended Scala API; see: Scala API Extensions.
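
As a rough sketch (not part of the original translation; the sample records are made up), the extensions are enabled with one extra import, after which variants such as mapWith accept partial functions:

import org.apache.flink.streaming.api.scala._
// the extension methods (mapWith, filterWith, keyingBy, ...) become available
// after this additional import
import org.apache.flink.streaming.api.scala.extensions._

object PatternMatchSketch {
  def main(args: Array[String]) {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // illustrative (id, name, temperature) records
    val data: DataStream[(Int, String, Double)] =
      env.fromElements((1, "sensor-a", 21.5), (2, "sensor-b", 19.0))

    // mapWith accepts a partial function, so tuple deconstruction works
    val names = data.mapWith {
      case (id, name, temperature) => s"$name -> $temperature"
    }

    names.print()

    env.execute("Scala API extensions sketch")
  }
}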


Physical partitioning


If needed, Flink also gives low-level control over how the elements of a DataStream are partitioned right after a transformation, through dedicated partitioning functions on DataStream (see the sketch below).
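
A minimal sketch (not from the original post; the records and the custom partitioner are made up for illustration) of the partitioning methods exposed by the Scala DataStream API, namely shuffle, rebalance, broadcast and partitionCustom:

import org.apache.flink.api.common.functions.Partitioner
import org.apache.flink.streaming.api.scala._

// user-defined partitioner: route each record by a non-negative hash of its key
class KeyHashPartitioner extends Partitioner[String] {
  override def partition(key: String, numPartitions: Int): Int =
    ((key.hashCode % numPartitions) + numPartitions) % numPartitions
}

object PartitioningSketch {
  def main(args: Array[String]) {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // illustrative (key, value) records
    val records: DataStream[(String, Int)] =
      env.fromElements(("a", 1), ("b", 2), ("a", 3))

    // shuffle: distribute elements uniformly at random across partitions
    val shuffled = records.shuffle

    // rebalance: distribute elements round-robin, evening out skew
    val rebalanced = records.rebalance

    // broadcast: send every element to every parallel instance of the next operator
    val broadcasted = records.broadcast

    // partitionCustom: route elements with the user-defined Partitioner on field 0 (the key)
    val custom = records.partitionCustom(new KeyHashPartitioner, 0)

    // attach a sink so the job has something to execute
    custom.print()

    env.execute("Physical partitioning sketch")
  }
}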






