Flink Distributed High-Availability Cluster Deployment

Contents

    • 1. Deployment Planning
      • 1.1 Version Notes
      • 1.2 Server Roles
    • 2. Flink Deployment
      • 2.1 Unpack the Archive and Set Environment Variables
      • 2.2 Edit the Core Configuration Files
        • 2.2.1 Configure flink-conf.yaml
        • 2.2.2 Configure masters and workers
        • 2.2.3 Copy the shaded Hadoop jar into lib
        • 2.2.4 Distribute the Installation to the Other Nodes
      • 2.3 Start Flink

1. Deployment Planning

1.1 Version Notes

Item      Value
Hardware  2 CPU cores / 2 GB RAM
OS        CentOS Linux release 7.7.1908 (Core)
Java      java version "1.8.0_251"
Hadoop    Hadoop 3.3.0
Flink     Flink 1.11.1

1.2 Server Roles

Server  IP               Roles
node1   192.168.137.86   zk, namenode, zkfc, datanode, nodemanager, journalnode, StandaloneSessionClusterEntrypoint, TaskManagerRunner
node2   192.168.137.87   zk, namenode, zkfc, datanode, nodemanager, resourcemanager, journalnode, StandaloneSessionClusterEntrypoint, TaskManagerRunner
node3   192.168.137.88   zk, datanode, nodemanager, resourcemanager, journalnode, StandaloneSessionClusterEntrypoint, TaskManagerRunner

2. Flink Deployment

2.1 Unpack the Archive and Set Environment Variables

tar xf flink-1.11.1-bin-scala_2.12.tgz -C /usr/local/
mv /usr/local/flink-1.11.1/ /usr/local/flink
chown -R hadoop.hadoop /usr/local/flink/
# Append FLINK_HOME and PATH entries to the system profile
cat >> /etc/profile <<'EOF'
export FLINK_HOME=/usr/local/flink
export PATH=$PATH:$FLINK_HOME/bin
EOF
source /etc/profile

2.2 Edit the Core Configuration Files

[hadoop@node1 ~]$ cd /usr/local/flink/conf/

2.2.1 Configure flink-conf.yaml

jobmanager.rpc.address: node1
jobmanager.rpc.port: 6123
jobmanager.heap.size: 1024m
taskmanager.heap.size: 1024m
taskmanager.numberOfTaskSlots: 3
parallelism.default: 1
fs.default-scheme: hdfs:///flink/
high-availability: zookeeper
high-availability.storageDir: hdfs:///flink/ha/
high-availability.zookeeper.quorum: node1:2181,node2:2181,node3:2181
state.checkpoints.dir: hdfs:///flink-checkpoints
state.savepoints.dir: hdfs:///flink-savepoints
state.backend.incremental: true
jobmanager.execution.failover-strategy: region
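
The HA storage and checkpoint settings above point at HDFS paths. Flink will normally create these on first use, but pre-creating them surfaces HDFS permission problems before the first job runs. A minimal sketch in dry-run form (each command is printed rather than executed; drop the leading `echo` to run it for real):

```shell
# Pre-create the HDFS paths referenced in flink-conf.yaml (dry run).
# Remove the leading "echo" to actually create the directories.
for d in /flink/ha /flink-checkpoints; do
    echo hdfs dfs -mkdir -p "$d"
done
```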

2.2.2 Configure masters and workers

[hadoop@node1 conf]$ cat masters
node1:8081
node2:8081
node3:8081

[hadoop@node1 conf]$ cat workers
node1
node2
node3

2.2.3 Copy the shaded Hadoop jar into lib

[hadoop@node1 lib]$ ll /usr/local/flink/lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar
-rw-r--r-- 1 hadoop hadoop 43317025 Jul 29 13:59 /usr/local/flink/lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar
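
Note that this shaded jar is built against Hadoop 2.8.3 while the cluster runs Hadoop 3.3.0. For Flink 1.11, an alternative is to put the installed Hadoop client on Flink's classpath via the environment instead of bundling a jar. A sketch, assuming the `hadoop` CLI is on `PATH` for the hadoop user on every node:

```shell
# Alternative to the shaded jar: export the Hadoop client classpath so
# Flink picks it up at start-up (e.g. add this to the hadoop user's profile).
export HADOOP_CLASSPATH=$(hadoop classpath)
```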

2.2.4 Distribute the Installation to the Other Nodes

scp -r /usr/local/flink/* node2:/usr/local/flink
scp -r /usr/local/flink/* node3:/usr/local/flink
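
These copies assume password-less ssh is set up for the hadoop user. The same distribution written as a loop, in dry-run form (prints each command; drop the leading `echo` to copy for real). This variant copies the whole directory under /usr/local/, so the target directory does not need to exist beforehand:

```shell
# Distribute the Flink installation to the other nodes (dry run).
for h in node2 node3; do
    echo scp -r /usr/local/flink/ "${h}:/usr/local/"
done
```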

2.3 Start Flink

[hadoop@node1 conf]$ start-cluster.sh
Starting HA cluster with 3 masters.
Starting standalonesession daemon on host node1.
Starting standalonesession daemon on host node2.
Starting standalonesession daemon on host node3.
Starting taskexecutor daemon on host node1.
Starting taskexecutor daemon on host node2.
Starting taskexecutor daemon on host node3.
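
After start-up, each node should be running one JobManager JVM (StandaloneSessionClusterEntrypoint) and one TaskManager JVM (TaskManagerRunner). A small hedged helper (`count_flink_jvms` is a name introduced here for illustration) that filters a `jps` listing for those two process names:

```shell
# Count the Flink JVMs in a jps listing; on a healthy node,
# `jps | count_flink_jvms` prints 2.
count_flink_jvms() {
    grep -Ec 'StandaloneSessionClusterEntrypoint|TaskManagerRunner'
}
```

The web UI is served on port 8081; with ZooKeeper-based HA, any of the three masters may hold the leader role, so check node1, node2, or node3.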
