Kafka on OpenShift 4 (1): Deploying the Strimzi Operator to Run a Kafka Application

OpenShift 4.x HOL Tutorial Series

Table of Contents

  • About Strimzi
  • Scenario Overview
    • Installing the Strimzi Operator
    • Creating a Kafka Cluster
    • Creating a Kafka Topic
    • Testing and Verification
  • References

About Strimzi

Strimzi is currently a CNCF sandbox project that uses an Operator to run and maintain the Apache Kafka ecosystem on Kubernetes; it is developed and maintained mainly by Red Hat. By installing the Strimzi Operator (for example, from OperatorHub), we can quickly deploy a complete Kafka environment, including a ZooKeeper cluster, a Kafka cluster, the Operators that manage Users and Topics, the Kafka Bridge that serves HTTP clients, Kafka Connect for integrating external event sources, Kafka MirrorMaker for replicating messages across data centers, and other resources.

Scenario Overview

In this article we deploy the Strimzi Operator on OpenShift, use it to run a Kafka cluster, and then pass messages between a producer and a consumer through a topic on that cluster.
Environment: OpenShift 4.2 / OpenShift 4.3

Installing the Strimzi Operator

  1. Create the kafka project.
$ oc new-project kafka
  2. Install the Strimzi Operator into the kafka project with the default configuration. After the installation succeeds, Strimzi appears under Installed Operators, and the following running Pod and API resources can be seen in the project.
$ oc get pod -n kafka
NAME                                               READY   STATUS    RESTARTS   AGE
strimzi-cluster-operator-v0.17.0-cc65586fc-rqmck   1/1     Running   0          99s

$ oc api-resources --api-group='kafka.strimzi.io'
NAME                 SHORTNAMES   APIGROUP           NAMESPACED   KIND
kafkabridges         kb           kafka.strimzi.io   true         KafkaBridge
kafkaconnectors      kctr         kafka.strimzi.io   true         KafkaConnector
kafkaconnects        kc           kafka.strimzi.io   true         KafkaConnect
kafkaconnects2is     kcs2i        kafka.strimzi.io   true         KafkaConnectS2I
kafkamirrormaker2s   kmm2         kafka.strimzi.io   true         KafkaMirrorMaker2
kafkamirrormakers    kmm          kafka.strimzi.io   true         KafkaMirrorMaker
kafkas               k            kafka.strimzi.io   true         Kafka
kafkatopics          kt           kafka.strimzi.io   true         KafkaTopic
kafkausers           ku           kafka.strimzi.io   true         KafkaUser
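Each of these custom resources also registers a short name, so the short and full forms are interchangeable with oc; for example:

```shell
# Short names from the listing above work the same as the full resource names:
oc get k -n kafka     # equivalent to: oc get kafkas -n kafka
oc get kt -n kafka    # equivalent to: oc get kafkatopics -n kafka
```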

Creating a Kafka Cluster

  1. Create a file named kafka-broker-my-cluster.yaml with the following content:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 2.4.0
    replicas: 1
    listeners:
      plain: {}
      tls: {}
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      log.message.format.version: "2.4"
    storage:
      type: jbod
      volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
  zookeeper:
    replicas: 1
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}
  2. Run the following commands to create the Kafka cluster.
$ oc -n kafka apply -f kafka-broker-my-cluster.yaml
kafka.kafka.strimzi.io/my-cluster created
 
$ oc get Kafka
NAME         DESIRED KAFKA REPLICAS   DESIRED ZK REPLICAS
my-cluster   1                        1
  3. (Optional) Steps 1–2 above can also be performed from the Strimzi Operator page in the OpenShift console, creating a Kafka cluster with the default configuration.
  4. Check the Pods of the Kafka cluster named my-cluster; each component of the cluster runs as its own Pod.
$ oc get pods -n kafka
NAME                                               READY   STATUS    RESTARTS   AGE
my-cluster-entity-operator-6f676b98cd-vldgw        3/3     Running   0          5m16s
my-cluster-kafka-0                                 2/2     Running   1          6m23s
my-cluster-zookeeper-0                             2/2     Running   0          7m28s
strimzi-cluster-operator-v0.17.0-cc65586fc-rqmck   1/1     Running   0          12m
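Before creating topics it can be useful to block until the cluster is fully up. A minimal sketch, assuming the Strimzi Operator sets a Ready status condition on the Kafka resource:

```shell
# Wait (up to 5 minutes) until the my-cluster Kafka resource reports Ready:
oc -n kafka wait kafka/my-cluster --for=condition=Ready --timeout=300s
```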

Creating a Kafka Topic

  1. Create the following kafka-topic-my-topic.yaml file, which defines a KafkaTopic object named my-topic.
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 10
  replicas: 1
  2. Run the following commands to create the Kafka topic, then check the kafkatopics resource.
$ oc apply -f kafka-topic-my-topic.yaml
kafkatopic.kafka.strimzi.io/my-topic created

$ oc get kafkatopics
NAME       PARTITIONS   REPLICATION FACTOR
my-topic   10           1
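To confirm that the Topic Operator really created the topic on the broker, you can also query Kafka directly from inside the broker Pod. A sketch, assuming the my-cluster-kafka-0 Pod name from the listing above:

```shell
# Describe the topic with the Kafka CLI that ships in the broker image:
oc -n kafka exec my-cluster-kafka-0 -- \
  bin/kafka-topics.sh --bootstrap-server localhost:9092 \
  --describe --topic my-topic
```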

Testing and Verification

Below we use Kafka clients running in containers to test and verify the deployment.

  1. In the first terminal, run a producer and type in some strings.
$ KAFKA_TOPIC=${1:-'my-topic'}
$ KAFKA_CLUSTER_NS=${2:-'kafka'}
$ KAFKA_CLUSTER_NAME=${3:-'my-cluster'}
$ oc -n $KAFKA_CLUSTER_NS run kafka-producer -ti \
 --image=strimzi/kafka:0.15.0-kafka-2.3.1 \
 --rm=true --restart=Never \
 -- bin/kafka-console-producer.sh \
 --broker-list $KAFKA_CLUSTER_NAME-kafka-bootstrap:9092 \
 --topic $KAFKA_TOPIC
  2. In the second terminal, run a consumer and confirm that it receives the strings sent by the producer.
$ KAFKA_TOPIC=${1:-'my-topic'}
$ KAFKA_CLUSTER_NS=${2:-'kafka'}
$ KAFKA_CLUSTER_NAME=${3:-'my-cluster'}
$ oc -n $KAFKA_CLUSTER_NS run kafka-consumer -ti \
    --image=strimzi/kafka:0.15.0-kafka-2.3.1 \
    --rm=true --restart=Never \
    -- bin/kafka-console-consumer.sh \
    --bootstrap-server $KAFKA_CLUSTER_NAME-kafka-bootstrap:9092 \
    --topic $KAFKA_TOPIC --from-beginning
  3. In a third terminal, check the running Pods.
$ oc get pod -n kafka
NAME                                               READY   STATUS    RESTARTS   AGE
kafka-consumer                                     1/1     Running   0          8m8s
kafka-producer                                     1/1     Running   0          47m
my-cluster-entity-operator-6f676b98cd-vldgw        3/3     Running   0          72m
my-cluster-kafka-0                                 2/2     Running   1          73m
my-cluster-zookeeper-0                             2/2     Running   0          74m
strimzi-cluster-operator-v0.17.0-cc65586fc-rqmck   1/1     Running   0          79m
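The address both clients connect to follows Strimzi's naming convention: the plain listener is exposed through a Service named <cluster-name>-kafka-bootstrap on port 9092, which is why the commands above resolve to my-cluster-kafka-bootstrap:9092.

```shell
# Strimzi exposes the plain listener via a Service named <cluster-name>-kafka-bootstrap;
# for the cluster created above the clients therefore connect to:
KAFKA_CLUSTER_NAME='my-cluster'
BOOTSTRAP_SERVER="${KAFKA_CLUSTER_NAME}-kafka-bootstrap:9092"
echo "$BOOTSTRAP_SERVER"   # prints my-cluster-kafka-bootstrap:9092
```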

References

  • https://strimzi.io/docs/overview/latest
  • https://www.github.com/redhat-developer-demos/knative-tutorial
