OpenShift 4 and AMQ Streams (3) - Replicating Data Between Kafka Clusters with Kafka MirrorMaker

Table of Contents

  • What is MirrorMaker
  • Configuring MirrorMaker
    • Preparing the Kafka Environment
    • Creating the Source and Target Clusters
    • Creating the MirrorMaker
    • Testing and Verifying MirrorMaker
      • Sending Test Data
      • Receiving Test Data

What is MirrorMaker

MirrorMaker is the Kafka tool for mirroring, replicating, and synchronizing data between different Kafka clusters. It consumes messages from a source cluster and produces them to a target cluster.
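Conceptually it is the same tool that ships with Kafka itself. The sketch below shows a standalone invocation for reference (a minimal example; the two .properties file names are placeholders that point bootstrap.servers at the source and target clusters). On OpenShift, AMQ Streams wraps this in the KafkaMirrorMaker custom resource used later in this article.

# Standalone MirrorMaker sketch using the classic kafka-mirror-maker.sh tool:
# consumer.properties -> bootstrap.servers / group.id for the source cluster
# producer.properties -> bootstrap.servers for the target cluster
$ bin/kafka-mirror-maker.sh \
    --consumer.config consumer.properties \
    --producer.config producer.properties \
    --whitelist ".*"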

Configuring MirrorMaker

Preparing the Kafka Environment

To reset the environment, run the following commands to delete the resources created in the previous exercises. Alternatively, create a new OpenShift project and install the AMQ Streams Operator or the Strimzi Operator there (see the sketch after the output below).

$ oc delete kafka -n kafka my-cluster
$ oc delete deploy -n kafka kafka-consumer
$ oc delete deploy -n kafka kafka-producer
$ oc delete deploy -n kafka connector-consumer
$ oc delete deploy -n kafka my-cluster-entity-operator
 
$ oc get deployment
NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
amq-streams-cluster-operator-v1.4.0   1/1     1            1           6h
my-connect-connect                    1/1     1            1           1h
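If you would rather start from a clean project, a rough sketch is below (assumptions: the project name kafka is reused, and the URL is the upstream community Strimzi install bundle; the AMQ Streams Operator itself is normally installed from OperatorHub in the web console).

$ oc new-project kafka
# Install the AMQ Streams Operator from OperatorHub in the OpenShift console, or
# apply the upstream community Strimzi install bundle instead:
$ oc apply -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka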

Creating the Source and Target Clusters

  1. Create a file named kafka-source.yaml with the following content. It defines a Kafka resource named my-source-cluster, which serves as MirrorMaker's source Kafka cluster.
apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-source-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      plain: {}
      tls: {}
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
    storage:
      type: ephemeral
    resources:
      requests:
        memory: 512Mi
        cpu: 500m
      limits:
        memory: 2Gi
        cpu: 1000m
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
    resources:
      requests:
        memory: 512Mi
        cpu: 500m
      limits:
        memory: 2Gi
        cpu: 1000m
  entityOperator:
    topicOperator:
      resources:
        requests:
          memory: 512Mi
          cpu: 500m
        limits:
          memory: 2Gi
          cpu: 1000m
    userOperator:
      resources:
        requests:
          memory: 512Mi
          cpu: 500m
        limits:
          memory: 2Gi
          cpu: 1000m
  2. Run the following command to create the Kafka cluster named my-source-cluster.
$ oc apply -f kafka-source.yaml -n kafka
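Optionally, wait for the cluster to report ready before continuing; a sketch, assuming the Operator version in use publishes a Ready condition on the Kafka resource:

$ oc wait kafka/my-source-cluster --for=condition=Ready --timeout=300s -n kafka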
  3. Create a file named kafka-target.yaml with the following content. It defines a Kafka resource named my-target-cluster, which serves as MirrorMaker's target Kafka cluster.
apiVersion: kafka.strimzi.io/v1alpha1
kind: Kafka
metadata:
  name: my-target-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      plain: {}
      tls: {}
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
    storage:
      type: ephemeral
    resources:
      requests:
        memory: 512Mi
        cpu: 500m
      limits:
        memory: 2Gi
        cpu: 700m
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
    resources:
      requests:
        memory: 512Mi
        cpu: 500m
      limits:
        memory: 2Gi
        cpu: 700m
  entityOperator:
    topicOperator: 
      resources:
        requests:
          memory: 512Mi
          cpu: 500m
        limits:
          memory: 2Gi
          cpu: 700m
    userOperator: 
      resources:
        requests:
          memory: 512Mi
          cpu: 500m
        limits:
          memory: 2Gi
          cpu: 700m
  4. Run the following command to create the Kafka cluster named my-target-cluster.
$ oc apply -f kafka-target.yaml -n kafka
  5. Check the current Kafka clusters and their related pod resources.
$ oc get kafka -n kafka
NAME                DESIRED KAFKA REPLICAS   DESIRED ZK REPLICAS
my-source-cluster   3                        3
my-target-cluster   3                        3

$ oc get pod -n kafka
NAME                                                   READY   STATUS              RESTARTS   AGE
amq-streams-cluster-operator-v1.4.0-59c7778c88-7bvzx   1/1     Running             4          46h
my-connect-connect-75ddc48968-tmbt6                    1/1     Running             0          49m
my-source-cluster-entity-operator-669bfdbb5b-vbc5k     2/2     Running             0          26m
my-source-cluster-kafka-0                              2/2     Running             0          27m
my-source-cluster-kafka-1                              2/2     Running             1          27m
my-source-cluster-kafka-2                              2/2     Running             0          27m
my-source-cluster-zookeeper-0                          2/2     Running             0          27m
my-source-cluster-zookeeper-1                          2/2     Running             0          27m
my-source-cluster-zookeeper-2                          2/2     Running             0          27m
my-target-cluster-entity-operator-6c877d758c-cz79p     3/3     Running             0          2s
my-target-cluster-kafka-0                              2/2     Running             1          48s
my-target-cluster-kafka-1                              2/2     Running             0          48s
my-target-cluster-kafka-2                              2/2     Running             1          48s
my-target-cluster-zookeeper-0                          2/2     Running             0          82s
my-target-cluster-zookeeper-1                          2/2     Running             0          82s
my-target-cluster-zookeeper-2                          2/2     Running             0          82s

Creating the MirrorMaker

  1. Create a file named kafka-mirror-maker.yaml with the following content. It defines a KafkaMirrorMaker named my-mirror-maker: its consumer reads from my-source-cluster, its producer writes to my-target-cluster, and the whitelist ".*" mirrors all topics.
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  image: strimzi/kafka-mirror-maker:0.8.0
  replicas: 1
  consumer:
    bootstrapServers: my-source-cluster-kafka-bootstrap:9092
    groupId: my-source-group-id
  producer:
    bootstrapServers: my-target-cluster-kafka-bootstrap:9092
  whitelist: ".*"
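The bootstrapServers values above are the cluster-internal bootstrap services that the Operator creates for each Kafka cluster (named <cluster-name>-kafka-bootstrap). A quick way to confirm the service names before applying the file:

$ oc get svc -n kafka | grep bootstrap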
  2. Apply the file, then check the KafkaMirrorMaker resource and its pod to confirm everything is running.
$ oc apply -f kafka-mirror-maker.yaml -n kafka
 
$ oc get KafkaMirrorMaker
NAME              DESIRED REPLICAS
my-mirror-maker   1
 
$ oc get pod -l strimzi.io/name=my-mirror-maker-mirror-maker
NAME                                            READY   STATUS    RESTARTS   AGE
my-mirror-maker-mirror-maker-646b477695-xlq87   1/1     Running   0          28m
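To see what MirrorMaker is actually doing (consumer group membership, partitions being mirrored), you can follow the logs of its Deployment; a sketch based on the resource names shown above:

$ oc logs -f deployment/my-mirror-maker-mirror-maker -n kafka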

Testing and Verifying MirrorMaker

Sending Test Data

  1. Run the following command to open a shell in the my-source-cluster-kafka-2 pod.
$ oc exec -ti my-source-cluster-kafka-2 -- bash
  2. Send some test messages, then exit the producer with Ctrl-C and the pod with Ctrl-D.
$ ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
> testmessage1
> testmessage2
> lasttestmessage
> <ctrl-c>
$ <ctrl-d>

Receiving Test Data

  1. Run the following command to open a shell in the my-target-cluster-kafka-2 pod.
$ oc exec -ti my-target-cluster-kafka-2 -- bash
  2. Confirm that the messages sent to the source cluster have been mirrored, then exit the consumer with Ctrl-C and the pod with Ctrl-D.
$ ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
testmessage1
testmessage2
lasttestmessage
<ctrl-c>
$ <ctrl-d>
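Optionally, you can also confirm that the topic itself was mirrored by describing it against the target cluster from inside any target broker pod (a sketch; the test topic is auto-created on the source by the console producer, assuming the broker default auto.create.topics.enable=true):

$ ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic test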
