Flink deployment modes (3) - standalone k8s session deployment mode

Flink version: 1.15.0.

k8s deployment

k8s deployment (quick and simple with minikube): https://minikube.sigs.k8s.io/docs/start/
If you use minikube, make sure to run minikube ssh 'sudo ip link set docker0 promisc on' before deploying the Flink cluster; otherwise the Flink components cannot automatically map themselves to the Kubernetes Service.
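The note above can be sketched as a guarded snippet that is a no-op on machines where minikube is not installed:

```shell
# Guarded sketch: enable promiscuous mode on minikube's docker0 bridge so
# that Flink components can register themselves with Kubernetes Services.
# The guard makes this a safe no-op when minikube is not available.
if command -v minikube >/dev/null 2>&1; then
  minikube ssh 'sudo ip link set docker0 promisc on' && result=ok || result=failed
else
  result=skipped
fi
echo "promisc setup: $result"
```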

Preparing the Docker image

Pull the image

docker pull flink:1.15-java8

Test that the image works

docker run --name flink -it flink:1.15-java8 jobmanager
docker exec -it flink bash
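As an additional sanity check before pushing, the image can be asked for its Flink version; a sketch that skips the check when the image has not been pulled locally:

```shell
# Sketch: verify the pulled image reports the expected Flink version.
# Skipped gracefully when docker or the image is not present locally.
if docker image inspect flink:1.15-java8 >/dev/null 2>&1; then
  version=$(docker run --rm flink:1.15-java8 flink --version 2>/dev/null || echo "version check failed")
else
  version="image not present locally; run 'docker pull flink:1.15-java8' first"
fi
echo "$version"
```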

Log in to the local image registry

docker login 192.168.0.8 -u username -p xxx

Re-tag the image

docker tag flink:1.15-java8 192.168.0.8/bdp/flink:1.15.0-java8

Push the image

docker push 192.168.0.8/bdp/flink:1.15.0-java8

NOTE:
The image must be pushed to a private Docker registry (or the same image must be built on every node), and all k8s nodes must be able to connect to that registry.
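If the private registry at 192.168.0.8 is served over plain HTTP (an assumption here), each node's Docker daemon must also be told to trust it; a sketch of the relevant /etc/docker/daemon.json entry (printed rather than written, to keep the example side-effect free):

```shell
# Sketch only: the JSON each node's /etc/docker/daemon.json would need if
# the registry at 192.168.0.8 uses plain HTTP (skip this for a TLS registry).
# After editing the real file, restart the Docker daemon on that node.
cfg='{
  "insecure-registries": ["192.168.0.8"]
}'
echo "$cfg"
```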

Deploying the Flink cluster

Create the ConfigMap

tee flink-configmap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: flink-standalone-session
  name: flink-config
  labels:
    app: flink
data:
  flink-conf.yaml: |+
    jobmanager.rpc.address: session-jm-service
    taskmanager.numberOfTaskSlots: 5
    blob.server.port: 6124
    jobmanager.rpc.port: 6123
    taskmanager.rpc.port: 6122
    jobmanager.heap.size: 1024m
    taskmanager.memory.process.size: 1024m
  log4j.properties: |+
    log4j.rootLogger=INFO, file
    log4j.logger.akka=INFO
    log4j.logger.org.apache.kafka=INFO
    log4j.logger.org.apache.hadoop=INFO
    log4j.logger.org.apache.zookeeper=INFO
    log4j.appender.file=org.apache.log4j.FileAppender
    log4j.appender.file.file=\${log.file}
    log4j.appender.file.layout=org.apache.log4j.PatternLayout
    log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
    log4j.logger.org.apache.flink.shaded.akka.org.jboss.netty.channel.DefaultChannelPipeline=ERROR, file
EOF

Create the JobManager

tee session-jm-deploy.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: flink-standalone-session
  name: session-jm-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flink
      component: jobmanager
  template:
    metadata:
      labels:
        app: flink
        component: jobmanager
    spec:
      containers:
      - name: jobmanager
        image: 192.168.0.8/bdp/flink:1.15.0-java8
        workingDir: /opt/flink
        command: ["/bin/bash", "-c", "\$FLINK_HOME/bin/jobmanager.sh start;
          while :;
          do
            if [[ -f \$(find log -name '*jobmanager*.log' -print -quit) ]] ;
              then tail -f -n +1 log/*jobmanager*.log;
            fi;
          done"]
        ports:
        - containerPort: 6123
          name: rpc
        - containerPort: 6124
          name: blob
        - containerPort: 8081
          name: ui
        resources:
          limits:
            cpu: "1"
            memory: "1Gi"
          requests:
            cpu: "1"
            memory: "1Gi"
        livenessProbe:
          tcpSocket:
            port: 6123
          initialDelaySeconds: 30
          periodSeconds: 60
        volumeMounts:
        - name: flink-config-volume
          mountPath: /opt/flink/conf
        securityContext:
          runAsUser: 9999  # refers to user _flink_ from official flink image, change if necessary
      volumes:
      - name: flink-config-volume
        configMap:
          name: flink-config
          items:
          - key: flink-conf.yaml
            path: flink-conf.yaml
          - key: log4j.properties
            path: log4j.properties
EOF

Create the TaskManager

tee session-tm-deploy.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: flink-standalone-session
  name: session-tm-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: flink
      component: taskmanager
  template:
    metadata:
      labels:
        app: flink
        component: taskmanager
    spec:
      containers:
      - name: taskmanager
        image: 192.168.0.8/bdp/flink:1.15.0-java8
        workingDir: /opt/flink
        command: ["/bin/bash", "-c", "\$FLINK_HOME/bin/taskmanager.sh start;
          while :;
          do
            if [[ -f \$(find log -name '*taskmanager*.log' -print -quit) ]] ;
              then tail -f -n +1 log/*taskmanager*.log;
            fi;
          done"]
        ports:
        - containerPort: 6122
          name: rpc
        resources: 
          limits:
            cpu: "2"
            memory: "2Gi"
          requests:
            cpu: "2"
            memory: "2Gi"
        livenessProbe:
          tcpSocket:
            port: 6122
          initialDelaySeconds: 30
          periodSeconds: 60
        volumeMounts:
        - name: flink-config-volume
          mountPath: /opt/flink/conf/
        securityContext:
          runAsUser: 9999  # refers to user _flink_ from official flink image, change if necessary
      volumes:
      - name: flink-config-volume
        configMap:
          name: flink-config
          items:
          - key: flink-conf.yaml
            path: flink-conf.yaml
          - key: log4j.properties
            path: log4j.properties
EOF
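With the ConfigMap and the Deployment above, the session cluster's capacity follows directly from replicas × taskmanager.numberOfTaskSlots:

```shell
# Capacity check: 2 TaskManager replicas × 5 slots each = 10 task slots,
# i.e. the maximum total parallelism this session cluster can run.
replicas=2          # spec.replicas in session-tm-deploy.yaml
slots_per_tm=5      # taskmanager.numberOfTaskSlots in flink-conf.yaml
total_slots=$((replicas * slots_per_tm))
echo "total task slots: $total_slots"
```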

Create the JobManager Service

tee session-jm-service.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  namespace: flink-standalone-session
  name: session-jm-service
spec:
  type: ClusterIP
  ports:
  - name: rpc
    port: 6123
  - name: blob
    port: 6124
  - name: ui
    port: 8081
  selector:
    app: flink
    component: jobmanager
EOF
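The Service above is ClusterIP, so the web UI is only reachable from inside the cluster (or through port-forwarding, shown later). If direct access from outside is preferred, the ui port could instead be exposed as a NodePort; a sketch of the changed fields only (the nodePort value 30081 is an arbitrary example, not from the original setup):

```yaml
# Sketch: NodePort variant of the ui port (30081 is an arbitrary example
# in the allowed 30000-32767 range); the other ports stay as above.
spec:
  type: NodePort
  ports:
  - name: ui
    port: 8081
    nodePort: 30081
```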

Create the namespace

kubectl create ns flink-standalone-session

Set the default namespace for the current context

kubectl config set-context --current --namespace=flink-standalone-session

Create the Flink cluster

kubectl create -f flink-configmap.yaml
kubectl create -f session-jm-service.yaml
kubectl create -f session-jm-deploy.yaml
kubectl create -f session-tm-deploy.yaml
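After creating the resources, it can help to wait for both Deployments to become ready before submitting any jobs; a guarded sketch:

```shell
# Sketch: wait for the JobManager and TaskManager rollouts to complete.
# Guarded so it degrades gracefully without kubectl or a reachable cluster.
if command -v kubectl >/dev/null 2>&1; then
  kubectl -n flink-standalone-session rollout status deployment/session-jm-deploy --timeout=120s \
    && kubectl -n flink-standalone-session rollout status deployment/session-tm-deploy --timeout=120s \
    && result=ready || result=not-ready
else
  result=skipped
fi
echo "rollout: $result"
```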

Inspect the resources

kubectl get ns
kubectl get pod -n flink-standalone-session
kubectl get pod,svc -n flink-standalone-session
kubectl get svc,pod,deploy,configmap -n flink-standalone-session
kubectl get svc,pod,deployment,configmap -n flink-standalone-session

View logs

kubectl logs deployment/session-jm-deploy
kubectl logs -f deployment/session-jm-deploy
kubectl logs -f deployment/session-tm-deploy
kubectl logs -f pod/<pod-name>

Delete the resources

kubectl delete deployment session-jm-deploy session-tm-deploy -n flink-standalone-session
kubectl delete svc session-jm-service -n flink-standalone-session
kubectl delete configmap flink-config -n flink-standalone-session

View pod details

kubectl describe pod session-jm-deploy-86dd6cfd6-7fssw

View cluster node details

kubectl describe node node3

View ConfigMap details

kubectl describe configmap/flink-config
kubectl describe configmap flink-config

Forward local port 8082 (listening on all addresses) to port 8081 of the session-jm-service Service

kubectl -n flink-standalone-session port-forward --address 0.0.0.0 service/session-jm-service 8082:8081
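With the port-forward running, jobs can be submitted to the session cluster from a local Flink 1.15 distribution; a sketch (assumes the current directory is a Flink distribution and the forward above is active in another terminal):

```shell
# Sketch: submit the bundled example job through the forwarded REST port.
# Assumes ./bin/flink comes from a local Flink 1.15 distribution and that
# 'kubectl port-forward ... 8082:8081' is running in another terminal.
if [ -x ./bin/flink ]; then
  ./bin/flink run -m localhost:8082 ./examples/streaming/TopSpeedWindowing.jar \
    && result=submitted || result=failed
else
  result="skipped (no local Flink distribution found)"
fi
echo "job submission: $result"
```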
