Flink Standalone

Flink deployment

Environment requirement: Java 1.8 or later

tar -zxf flink-1.10*.tgz
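Before unpacking, it is worth confirming the Java version meets the 1.8 requirement. A small sketch; the `java_ok` helper is illustrative, not part of Flink (pre-9 JDKs report versions as `1.x`, newer ones as `x.y.z`):

```shell
# java_ok: succeed if the given Java version string is 1.8 or newer.
java_ok() {
  v="$1"                         # e.g. "1.8.0_241" or "11.0.2"
  case "$v" in
    1.*) m="${v#1.}"; m="${m%%.*}" ;;   # legacy "1.x" scheme -> x
    *)   m="${v%%.*}" ;;                # modern "x.y.z" scheme -> x
  esac
  [ "$m" -ge 8 ]
}

# Real check (run where java is installed):
# java_ok "$(java -version 2>&1 | awk -F '"' '/version/ {print $2}')" \
#   || echo "Flink needs Java 1.8+"
java_ok "1.8.0_241" && echo "1.8.0_241 ok"   # prints 1.8.0_241 ok
```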

Configure conf/flink-conf.yaml

jobmanager.rpc.address: 192.168.1.3  # RPC address of the JobManager

jobmanager.rpc.port: 6123  # RPC port

jobmanager.heap.size: 1024m  # JobManager heap size

taskmanager.memory.process.size: 1568m  # total memory of the TaskManager process

taskmanager.numberOfTaskSlots: 1  # slots per TaskManager; sets the upper limit on tasks it can run in parallel

parallelism.default: 1  # default parallelism actually used when running jobs

rest.port: 8081  # port exposed for the Flink web UI
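The last three settings interact: the cluster's total slot count is numberOfTaskSlots multiplied by the number of TaskManagers, and parallelism.default must not exceed it. With the values above and the two workers in the slaves file:

```shell
# Maximum parallelism = slots per TaskManager x number of TaskManagers.
slots_per_tm=1   # taskmanager.numberOfTaskSlots
num_tms=2        # two workers listed in the slaves file
echo $(( slots_per_tm * num_tms ))   # prints 2
```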

Configure conf/masters

192.168.1.3:8081  # JobManager host, with the port defined by rest.port

Configure conf/slaves (one worker per line; IPs and hostnames both work)

192.168.1.5

hadoop3
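The two files above can be written in one step. A sketch using a local conf/ directory for illustration; in a real install they live under the Flink install directory (here /flink/flink-1.10.0/conf):

```shell
# Write the masters and slaves files with the hosts from this setup.
mkdir -p conf
printf '192.168.1.3:8081\n' > conf/masters
printf '192.168.1.5\nhadoop3\n' > conf/slaves
cat conf/masters conf/slaves
```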

Distribute flink-1.10.0 from the master to the slaves:

#!/bin/bash
# Push the Flink install from the master to every host listed in ip.txt.
for ip in $(cat ip.txt)
do
    echo "connecting to $ip"
    # Create the target directory first, then copy once it exists
    # (scp needs sshpass too when using password authentication).
    sshpass -p saka ssh -o StrictHostKeyChecking=no root@$ip "mkdir -p /flink" \
      && sshpass -p saka scp -o StrictHostKeyChecking=no -r /flink/flink-1.10.0 root@$ip:/flink/
done
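The loop reads ip.txt, one target host per line; for this setup it would list the same hosts as the slaves file:

```shell
# ip.txt consumed by the distribution script (hosts taken from this guide).
printf '192.168.1.5\nhadoop3\n' > ip.txt
cat ip.txt
```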

Run

./start-cluster.sh  # found under bin/; only needs to be run on the master

Starting cluster.

[INFO] 1 instance(s) of standalonesession are already running on hadoop1.

Starting standalonesession daemon on host hadoop1.

[email protected]'s password:

Starting taskexecutor daemon on host hadoop2.

root@hadoop3's password:

Starting taskexecutor daemon on host hadoop3.

Web access

http://<jobmanager-ip>:<rest.port>, e.g. http://192.168.1.3:8081
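The web UI and the REST API answer on the same port, so a quick liveness check is possible from the command line. A sketch with the host and port values from this guide:

```shell
# Build the web UI / REST URL from the configured address and rest.port.
host=192.168.1.3
port=8081
url="http://$host:$port/overview"
echo "$url"   # prints http://192.168.1.3:8081/overview
# curl -s "$url"   # returns a JSON cluster overview once the cluster is up
```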
