For any framework under the Apache umbrella, the official website is always a good place to start learning.
What is Flink
Apache Flink is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams. Flink runs in all common cluster environments and can perform computations at in-memory speed and at any scale.
Application scenarios
Real-time data warehousing and ETL
Real-time report analysis
Real-time intelligent recommendation
Streaming data analytics
Basic components
The Flink architecture is roughly divided into three layers (from top to bottom).
Flink cluster
Runtime environment used in this walkthrough
Go to the Flink website (https://flink.apache.org/downloads.html) and download the installation package; here we use the version flink-1.9.1-bin-scala_2.11.tgz. If the official site is too slow, you can also download it from my cloud-drive link (link: https://pan.baidu.com/s/1TtwXJxfBjjuY4bULoikttg password: 0e0y). By default, the file is saved to the Downloads folder in the user's home directory. Then extract it with the following commands:
cd /home/silver/Downloads
sudo tar -zxvf flink-1.9.1-bin-scala_2.11.tgz -C /usr/local
Rename the directory and set its permissions with the following commands:
cd /usr/local
sudo mv ./flink-1.9.1 ./flink
sudo chown -R silver:silver ./flink
Flink works out of the box in local mode. If you need to change the Java runtime it uses, edit the env.java.home parameter in "/usr/local/flink/conf/flink-conf.yaml" and set it to the absolute path of your local Java installation.
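For example, the entry in flink-conf.yaml might look like the following (the JDK path below is only an assumed example; substitute the path of your own Java installation):

```yaml
# Absolute path of the local JDK (example path; adjust to your system)
env.java.home: /usr/lib/jvm/java-8-openjdk-amd64
```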
Add the environment variables with the following command:
vim ~/.bashrc
Add the following lines to the .bashrc file:
export FLINK_HOME=/usr/local/flink
export PATH=$FLINK_HOME/bin:$PATH
Save and exit .bashrc, then run the following command to make the configuration take effect:
source ~/.bashrc
Start Flink with the following commands:
cd /usr/local/flink
./bin/start-cluster.sh
Check the running processes with the jps command:
/usr/local/flink$ jps
13936 Jps
13400 StandaloneSessionClusterEntrypoint
13854 TaskManagerRunner
If you can see both the TaskManagerRunner and StandaloneSessionClusterEntrypoint processes, the startup was successful.
Flink's JobManager also starts a web front end on port 8081, which you can visit by entering "http://localhost:8081" in a browser.
The Flink distribution ships with sample programs; here you can run the WordCount example to verify that Flink works, with the following commands:
cd /usr/local/flink/bin
./flink run /usr/local/flink/examples/batch/WordCount.jar
Ubuntu does not ship with Maven, so it has to be installed manually. You can download the installation file from the Maven website: http://apache.fayea.com/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.zip. If the official download is slow, you can also use my cloud-drive link: (link: https://pan.baidu.com/s/1SDOldLVjZ3sb3ONww-lfpA password: awbc).
After the download completes, extract and install it:
sudo unzip ~/Downloads/apache-maven-3.3.9-bin.zip -d /usr/local
cd /usr/local
sudo mv apache-maven-3.3.9/ ./maven
sudo chown -R silver:silver ./maven
Writing a Flink program generally involves the following steps:
1) Obtain an execution environment (ExecutionEnvironment);
2) Load or create the initial dataset;
3) Apply transformations to the dataset to produce new datasets;
4) Specify how the computed results are output;
5) Trigger execution.
First, create a folder flinkapp under the user's home directory (/home/username) as the application root:
cd ~  # enter the user's home directory
mkdir -p ./flinkapp/src/main/java
Then, use the vim editor to create three source files under "./flinkapp/src/main/java", namely:
#### WordCountData.java
WordCountData.java provides the raw data. The data source here is bounded, so we use the DataSet API to batch-process the strings in a String[]. (A stream-processing program would use the DataStream API instead.)
package cn.stu.silver;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
public class WordCountData {
public static final String[] WORDS=new String[]{"To be, or not to be,--that is the question:--", "Whether \'tis nobler in the mind to suffer", "The slings and arrows of outrageous fortune", "Or to take arms against a sea of troubles,", "And by opposing end them?--To die,--to sleep,--", "No more; and by a sleep to say we end", "The heartache, and the thousand natural shocks", "That flesh is heir to,--\'tis a consummation", "Devoutly to be wish\'d. To die,--to sleep;--", "To sleep! perchance to dream:--ay, there\'s the rub;", "For in that sleep of death what dreams may come,", "When we have shuffled off this mortal coil,", "Must give us pause: there\'s the respect", "That makes calamity of so long life;", "For who would bear the whips and scorns of time,", "The oppressor\'s wrong, the proud man\'s contumely,", "The pangs of despis\'d love, the law\'s delay,", "The insolence of office, and the spurns", "That patient merit of the unworthy takes,", "When he himself might his quietus make", "With a bare bodkin? who would these fardels bear,", "To grunt and sweat under a weary life,", "But that the dread of something after death,--", "The undiscover\'d country, from whose bourn", "No traveller returns,--puzzles the will,", "And makes us rather bear those ills we have", "Than fly to others that we know not of?", "Thus conscience does make cowards of us all;", "And thus the native hue of resolution", "Is sicklied o\'er with the pale cast of thought;", "And enterprises of great pith and moment,", "With this regard, their currents turn awry,", "And lose the name of action.--Soft you now!", "The fair Ophelia!--Nymph, in thy orisons", "Be all my sins remember\'d."};
public WordCountData() {
}
public static DataSet<String> getDefaultTextLineDataset(ExecutionEnvironment env){
return env.fromElements(WORDS);
}
}
#### WordCountTokenizer.java
package cn.stu.silver;
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;
public class WordCountTokenizer implements FlatMapFunction<String, Tuple2<String, Integer>> {
public WordCountTokenizer(){}
public void flatMap(String value, Collector<Tuple2<String, Integer>> out) throws Exception {
// Lowercase the line and split it on runs of non-word characters
String[] tokens = value.toLowerCase().split("\\W+");
int len = tokens.length;
for(int i = 0; i < len; i++){
String tmp = tokens[i];
if(tmp.length() > 0){
out.collect(new Tuple2<String, Integer>(tmp, Integer.valueOf(1)));
}
}
}
}
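The core of the tokenization above (lowercasing, splitting on runs of non-word characters with the regex `\W+`, and dropping empty tokens) can be tried outside Flink with plain Java. This is only an illustrative sketch, not part of the project:

```java
import java.util.ArrayList;
import java.util.List;

public class TokenizeDemo {
    public static void main(String[] args) {
        // Same splitting logic as WordCountTokenizer.flatMap:
        // lowercase the line, split on non-word characters, skip empty tokens.
        String line = "To be, or not to be,--that is the question:--";
        List<String> tokens = new ArrayList<>();
        for (String tmp : line.toLowerCase().split("\\W+")) {
            if (tmp.length() > 0) {
                tokens.add(tmp);
            }
        }
        System.out.println(tokens);
        // [to, be, or, not, to, be, that, is, the, question]
    }
}
```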
#### WordCount.java
package cn.stu.silver;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.operators.AggregateOperator;
import org.apache.flink.api.java.utils.ParameterTool;
public class WordCount {
public WordCount(){}
public static void main(String[] args) throws Exception {
ParameterTool params = ParameterTool.fromArgs(args);
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
env.getConfig().setGlobalJobParameters(params);
DataSet<String> text;
// If no input path is specified, use the data provided by WordCountData by default
if(params.has("input")){
text = env.readTextFile(params.get("input"));
}else{
System.out.println("Executing WordCount example with default input data set.");
System.out.println("Use --input to specify file input.");
text = WordCountData.getDefaultTextLineDataset(env);
}
AggregateOperator<Tuple2<String, Integer>> counts = ((DataSet<String>) text).flatMap(new WordCountTokenizer()).groupBy(0).sum(1);
// If no output path is specified, print to the console by default
if(params.has("output")){
counts.writeAsCsv(params.get("output"),"\n", " ");
env.execute();
}else{
System.out.println("Printing result to stdout. Use --output to specify output path.");
counts.print();
}
}
}
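Semantically, `groupBy(0).sum(1)` over the `(word, 1)` tuples accumulates a count per word. As a rough plain-Java sketch of what that aggregation computes (not Flink code):

```java
import java.util.HashMap;
import java.util.Map;

public class GroupSumDemo {
    public static void main(String[] args) {
        // Emulate groupBy(0).sum(1) over (word, 1) pairs:
        // group tuples by the word field and sum the count field.
        String[] words = {"to", "be", "or", "not", "to", "be"};
        Map<String, Integer> counts = new HashMap<>();
        for (String w : words) {
            counts.merge(w, 1, Integer::sum);  // add 1 to the word's running total
        }
        System.out.println(counts.get("to"));  // 2
        System.out.println(counts.get("be"));  // 2
        System.out.println(counts.get("or"));  // 1
    }
}
```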
First, return to the ~/flinkapp directory:
cd ~/flinkapp
sudo vim pom.xml
Press i to enter insert mode, then enter the following configuration:
<project>
    <groupId>cn.stu.silver</groupId>
    <artifactId>simple-project</artifactId>
    <modelVersion>4.0.0</modelVersion>
    <name>Simple Project</name>
    <packaging>jar</packaging>
    <version>1.0</version>
    <repositories>
        <repository>
            <id>jboss</id>
            <name>JBoss Repository</name>
            <url>http://repository.jboss.com/maven2/</url>
        </repository>
    </repositories>
    <dependencies>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-java</artifactId>
            <version>1.9.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-java_2.11</artifactId>
            <version>1.9.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-clients_2.11</artifactId>
            <version>1.9.1</version>
        </dependency>
    </dependencies>
</project>
To make sure Maven runs correctly, first check the file structure of the whole application with the following commands:
cd ~/flinkapp
find .
The file structure is as follows.
Note that pom.xml is not in the same directory as the three .java files; it sits at the same level as the src directory:
.
./src
./src/main
./src/main/java
./src/main/java/WordCountData.java
./src/main/java/WordCount.java
./src/main/java/WordCountTokenizer.java
./pom.xml
Package the whole application into a JAR file:
cd ~/flinkapp  # make sure this is the current directory
/usr/local/maven/bin/mvn package
When the build succeeds, Maven prints a "BUILD SUCCESS" message.
Finally, submit the generated JAR to Flink with the flink run command (make sure Flink has been started):
/usr/local/flink/bin/flink run --class cn.stu.silver.WordCount ~/flinkapp/target/simple-project-1.0.jar