Dependencies unknown

When creating Flink's SocketWindowWordCount example:

import org.apache.flink.api.java.utils.ParameterTool
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

/**
 * Implements a streaming windowed version of the "WordCount" program.
 * 
 * This program connects to a server socket and reads strings from the socket.
 * The easiest way to try this out is to open a text server (at port 12345)
 * using the ''netcat'' tool via
 * {{{
 * nc -l 12345
 * }}}
 * and run this example with the hostname and the port as arguments.
 */
object SocketWindowWordCount {

  /** Main program method */
  def main(args: Array[String]) : Unit = {

    // the host and the port to connect to
    var hostname: String = "localhost"
    var port: Int = 0

    try {
      val params = ParameterTool.fromArgs(args)
      hostname = if (params.has("hostname")) params.get("hostname") else "localhost"
      port = params.getInt("port")
    } catch {
      case e: Exception => {
        System.err.println("No port specified. Please run 'SocketWindowWordCount " +
          "--hostname <hostname> --port <port>', where hostname (localhost by default) and port " +
          "is the address of the text server")
        System.err.println("To start a simple text server, run 'netcat -l <port>' " +
          "and type the input text into the command line")
        return
      }
    }
    
    // get the execution environment
    val env: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment
    
    // get input data by connecting to the socket
    val text: DataStream[String] = env.socketTextStream(hostname, port, '\n')

    // parse the data, group it, window it, and aggregate the counts 
    val windowCounts = text
          .flatMap { w => w.split("\\s") }
          .map { w => WordWithCount(w, 1) }
          .keyBy("word")
          .timeWindow(Time.seconds(5))
          .sum("count")

    // print the results with a single thread, rather than in parallel
    windowCounts.print().setParallelism(1)

    env.execute("Socket Window WordCount")
  }

  /** Data type for words with count */
  case class WordWithCount(word: String, count: Long)
}

As with the Spark Streaming example, you need to add dependencies to the pom file. The official example does not explain how to write the pom file, so even after copy-pasting the source code it still won't run. However, the official site does provide the flink-examples project, which contains a parent pom and a child pom. Based on those two pom files, I added the following dependencies to my pom.xml:


    <dependencies>

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-core</artifactId>
        </dependency>

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-clients_2.11</artifactId>
        </dependency>

        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <scope>compile</scope>
        </dependency>

        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <scope>compile</scope>
        </dependency>

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-scala_2.11</artifactId>
        </dependency>

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-kafka-0.10_2.11</artifactId>
        </dependency>

    </dependencies>

The dependency panel on the right shows the version as unknown:

unknow.png

So search the Maven repository for flink-core:



    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-core</artifactId>
        <version>1.5.0</version>
    </dependency>

It turns out the version element is required:

core-version.png
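Putting it together, one way to fix the "unknown version" problem is to declare the Flink version once in a Maven property and reference it from every Flink dependency. This is only a sketch: the `flink.version` property name is my own convention (not from the article), and the 1.5.0 version and `_2.11` Scala suffix are taken from the snippets above; adjust them to match your setup.

```xml
<properties>
    <!-- assumed property name; pin the Flink version in one place -->
    <flink.version>1.5.0</flink.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-core</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-scala_2.11</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-clients_2.11</artifactId>
        <version>${flink.version}</version>
    </dependency>
</dependencies>
```

With every dependency carrying an explicit version, the IDE's dependency panel should resolve them instead of showing "unknown".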
