Step-by-step: building a Kafka-based Fabric instance

This walks through setting up a single-organization Fabric instance backed by a Kafka-based ordering service.

  • Step 1: Generate the MSP material
$ cat crypto-config.yaml 
PeerOrgs:
  - Name: OneOrg
    Domain: oneorg.example.com
    Template:
        Count: 5
    Users:
      Count: 5
    Specs:
      - Hostname: orderer
$ rm -rf crypto-config
$ cryptogen generate --config=./crypto-config.yaml --output=crypto-config

This generates the MSP certificates for every entity under crypto-config/peerOrganizations/oneorg.example.com.

  • Step 2: Generate the genesis block

To keep this short, the comments have been stripped from configtx.yaml; that hurts readability, but the point here is to preserve the actual values.

$ cat configtx.yaml
Profiles:
    OneOrgProfile:
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *OneOrg
        Consortiums:
            SampleConsortium:
                Organizations:
                    - *OneOrg
        Consortium: SampleConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *OneOrg

Organizations:
    - &OneOrg
        Name: OneOrg
        ID: OneOrgMSP
        MSPDir: crypto-config/peerOrganizations/oneorg.example.com/msp
        AnchorPeers:
            - Host: peer0.oneorg.example.com
              Port: 7051
Orderer: &OrdererDefaults
    OrdererType: kafka
    Addresses:
        - orderer0.oneorg.example.com:7050
        - orderer1.oneorg.example.com:7050
    BatchTimeout: 2s
    BatchSize:
        MaxMessageCount: 10
        AbsoluteMaxBytes: 99 MB
        PreferredMaxBytes: 512 KB
    Kafka:
        Brokers:
            - kafka0.oneorg.example.com:9092
            - kafka1.oneorg.example.com:9092
            - kafka2.oneorg.example.com:9092
    Organizations:
Application: &ApplicationDefaults
    Organizations:

This is where the orderer type is set to kafka and the Kafka.Brokers addresses are specified. These settings can also be overridden in the orderer's runtime environment, either through the docker-compose.yaml file or by setting environment variables directly inside the container.

$ mkdir -p channel-artifacts
$ configtxgen -profile OneOrgProfile -outputBlock channel-artifacts/genesis.block

The result is the orderer genesis block, channel-artifacts/genesis.block, which the orderer reads at startup.
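To verify what went into the block, configtxgen can decode it back to JSON (a quick check using configtxgen's built-in inspection flag):

```shell
# Decode the genesis block to JSON for inspection
configtxgen -inspectBlock channel-artifacts/genesis.block
```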

  • Step 3: Generate the channel creation transaction
$ configtxgen -profile OneOrgProfile -outputCreateChannelTx ./channel-artifacts/mychannel.tx -channelID mychannel

This channel creation transaction, channel-artifacts/mychannel.tx, is needed when asking the orderer to create the channel.
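The transaction can be inspected the same way before it is used, again via configtxgen's inspection flags:

```shell
# Decode the channel creation transaction to JSON
configtxgen -inspectChannelCreateTx ./channel-artifacts/mychannel.tx
```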

  • Step 4: Start docker-compose

The docker-compose files below are not complete and cannot be used as-is, but they contain the essential pieces and are provided for reference.

$ cat docker-compose-base.yaml 
version: '2'

services:

  zookeeper:
    image: hyperledger/fabric-zookeeper
    ports:
        - 2181
        - 2888
        - 3888
  kafka:
    image: hyperledger/fabric-kafka
    environment:
        - KAFKA_LOG_RETENTION_MS=-1
        - KAFKA_MESSAGE_MAX_BYTES=103809024
        - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024
        - KAFKA_REPLICA_FETCH_RESPONSE_MAX_BYTES=10485760
        - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
        - KAFKA_DEFAULT_REPLICATION_FACTOR=3
        - KAFKA_MIN_INSYNC_REPLICAS=2
    ports:
        - 9092

  orderer:
    image: hyperledger/fabric-orderer:x86_64-1.1.0
    environment:
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
    command: orderer
    ports:
      - 7050

  peer:
    image: hyperledger/fabric-peer:x86_64-1.1.0
    environment:
      - CORE_VM_ENDPOINT=unix:///var/run/docker.sock
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      - CORE_PEER_GOSSIP_ORGLEADER=false
      - CORE_PEER_PROFILE_ENABLED=true
    command: peer node start
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 7051
      - 7053

Note that ZooKeeper uses three ports by default:

  1. 2181 for client connections.
  2. 2888 for follower synchronization.
  3. 3888 for leader election.

Kafka's default port is 9092.
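A note on the byte limits used in docker-compose-base.yaml: the orderer's BatchSize.AbsoluteMaxBytes (99 MB) must fit within the brokers' message.max.bytes, which is what KAFKA_MESSAGE_MAX_BYTES sets; in this setup the two values line up exactly. A quick arithmetic check:

```shell
# 99 MB expressed in bytes -- this must not exceed the broker-side
# KAFKA_MESSAGE_MAX_BYTES (103809024) set in docker-compose-base.yaml
ABSOLUTE_MAX_BYTES=$((99 * 1024 * 1024))
KAFKA_MESSAGE_MAX_BYTES=103809024
echo "$ABSOLUTE_MAX_BYTES"   # 103809024
```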

The docker-compose.yaml file below creates 3 ZooKeeper nodes, 4 Kafka brokers, 2 orderers, 2 peers, and 1 CLI container.

$ cat docker-compose-tmp.yaml 
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

version: '2'

networks:
  byfn:

services:

  zookeeper1.oneorg.example.com:
    extends:
        file: docker-compose-base.yaml
        service: zookeeper
    container_name: zookeeper1.oneorg.example.com
    environment:
        - ZOO_MY_ID=1
        - ZOO_SERVERS=server.1=zookeeper1.oneorg.example.com:2888:3888 server.2=zookeeper2.oneorg.example.com:2888:3888 server.3=zookeeper3.oneorg.example.com:2888:3888
    networks:
      - byfn

  zookeeper2.oneorg.example.com:
    extends:
        file: docker-compose-base.yaml
        service: zookeeper
    container_name: zookeeper2.oneorg.example.com
    environment:
        - ZOO_MY_ID=2
        - ZOO_SERVERS=server.1=zookeeper1.oneorg.example.com:2888:3888 server.2=zookeeper2.oneorg.example.com:2888:3888 server.3=zookeeper3.oneorg.example.com:2888:3888
    networks:
      - byfn

  zookeeper3.oneorg.example.com:
    extends:
        file: docker-compose-base.yaml
        service: zookeeper
    container_name: zookeeper3.oneorg.example.com
    environment:
        - ZOO_MY_ID=3
        - ZOO_SERVERS=server.1=zookeeper1.oneorg.example.com:2888:3888 server.2=zookeeper2.oneorg.example.com:2888:3888 server.3=zookeeper3.oneorg.example.com:2888:3888
    networks:
      - byfn


  kafka1.oneorg.example.com:
    extends:
        file: docker-compose-base.yaml
        service: kafka 
    container_name: kafka1.oneorg.example.com
    environment:
        - KAFKA_BROKER_ID=1
        - KAFKA_ZOOKEEPER_CONNECT=zookeeper1.oneorg.example.com:2181,zookeeper2.oneorg.example.com:2181,zookeeper3.oneorg.example.com:2181
    ports:
        - 9092
    depends_on:
        - zookeeper1.oneorg.example.com
        - zookeeper2.oneorg.example.com
        - zookeeper3.oneorg.example.com
    networks:
        - byfn

  kafka2.oneorg.example.com:
    extends:
        file: docker-compose-base.yaml
        service: kafka 
    container_name: kafka2.oneorg.example.com
    environment:
        - KAFKA_BROKER_ID=2
        - KAFKA_ZOOKEEPER_CONNECT=zookeeper1.oneorg.example.com:2181,zookeeper2.oneorg.example.com:2181,zookeeper3.oneorg.example.com:2181
    ports:
        - 9092
    depends_on:
        - zookeeper1.oneorg.example.com
        - zookeeper2.oneorg.example.com
        - zookeeper3.oneorg.example.com
    networks:
        - byfn

  kafka3.oneorg.example.com:
    extends:
        file: docker-compose-base.yaml
        service: kafka 
    container_name: kafka3.oneorg.example.com
    environment:
        - KAFKA_BROKER_ID=3
        - KAFKA_ZOOKEEPER_CONNECT=zookeeper1.oneorg.example.com:2181,zookeeper2.oneorg.example.com:2181,zookeeper3.oneorg.example.com:2181
    ports:
        - 9092
    depends_on:
        - zookeeper1.oneorg.example.com
        - zookeeper2.oneorg.example.com
        - zookeeper3.oneorg.example.com
    networks:
        - byfn

  kafka4.oneorg.example.com:
    extends:
        file: docker-compose-base.yaml
        service: kafka 
    container_name: kafka4.oneorg.example.com
    environment:
        - KAFKA_BROKER_ID=4
        - KAFKA_ZOOKEEPER_CONNECT=zookeeper1.oneorg.example.com:2181,zookeeper2.oneorg.example.com:2181,zookeeper3.oneorg.example.com:2181
    ports:
        - 9092
    depends_on:
        - zookeeper1.oneorg.example.com
        - zookeeper2.oneorg.example.com
        - zookeeper3.oneorg.example.com
    networks:
        - byfn

  orderer0.oneorg.example.com:
    extends:
        file: docker-compose-base.yaml
        service: orderer
    container_name: orderer0.oneorg.example.com
    environment:
      - ORDERER_GENERAL_LOCALMSPID=OneOrgMSP
      - CONFIGTX_ORDERER_ORDERERTYPE=kafka
      - CONFIGTX_ORDERER_KAFKA_BROKERS=[kafka1.oneorg.example.com:9092,kafka2.oneorg.example.com:9092,kafka3.oneorg.example.com:9092,kafka4.oneorg.example.com:9092]
    command: orderer
    volumes:
      - ./crypto-config/peerOrganizations/oneorg.example.com/peers/orderer.oneorg.example.com/msp:/var/hyperledger/msp
    ports:
      - 7050
    depends_on:
      - kafka1.oneorg.example.com
      - kafka2.oneorg.example.com
      - kafka3.oneorg.example.com
      - kafka4.oneorg.example.com
    networks:
      - byfn

  orderer1.oneorg.example.com:
    extends:
        file: docker-compose-base.yaml
        service: orderer
    container_name: orderer1.oneorg.example.com
    environment:
      - ORDERER_GENERAL_LOCALMSPID=OneOrgMSP
      - CONFIGTX_ORDERER_ORDERERTYPE=kafka
      - CONFIGTX_ORDERER_KAFKA_BROKERS=[kafka1.oneorg.example.com:9092,kafka2.oneorg.example.com:9092,kafka3.oneorg.example.com:9092,kafka4.oneorg.example.com:9092]
    command: orderer
    volumes:
      - ./crypto-config/peerOrganizations/oneorg.example.com/peers/orderer.oneorg.example.com/msp:/var/hyperledger/msp
    ports:
      - 7050
    depends_on:
      - kafka1.oneorg.example.com
      - kafka2.oneorg.example.com
      - kafka3.oneorg.example.com
      - kafka4.oneorg.example.com
    networks:
      - byfn

  peer0.oneorg.example.com:
    extends:
        file: docker-compose-base.yaml
        service: peer
    container_name: peer0.oneorg.example.com
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_byfn
      - CORE_PEER_ID=peer0.oneorg.example.com
      - CORE_PEER_ADDRESS=peer0.oneorg.example.com:7051
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.oneorg.example.com:7051
      - CORE_PEER_LOCALMSPID=OneOrgMSP
    command: peer node start
    volumes:
      - ./crypto-config/peerOrganizations/oneorg.example.com/peers/peer0.oneorg.example.com/msp:/var/hyperledger/msp
    ports:
      - 7051
      - 7053
    depends_on:
      - orderer0.oneorg.example.com
      - orderer1.oneorg.example.com
    networks:
      - byfn

  peer1.oneorg.example.com:
    extends:
        file: docker-compose-base.yaml
        service: peer 
    container_name: peer1.oneorg.example.com
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_byfn
      - CORE_PEER_ID=peer1.oneorg.example.com
      - CORE_PEER_ADDRESS=peer1.oneorg.example.com:7051
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.oneorg.example.com:7051
      - CORE_PEER_LOCALMSPID=OneOrgMSP
    command: peer node start
    volumes:
      - ./crypto-config/peerOrganizations/oneorg.example.com/peers/peer1.oneorg.example.com/msp:/var/hyperledger/msp
    ports:
      - 7051
      - 7053
    depends_on:
      - orderer0.oneorg.example.com
      - orderer1.oneorg.example.com
    networks:
      - byfn

  cli.oneorg.example.com:
    container_name: cli.oneorg.example.com
    image: hyperledger/fabric-tools:x86_64-1.1.0
    tty: true
    environment:
      - CORE_VM_ENDPOINT=unix:///var/run/docker.sock
      - CORE_LOGGING_LEVEL=DEBUG
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./crypto-config/peerOrganizations/oneorg.example.com/peers/peer0.oneorg.example.com/msp:/var/hyperledger/msp
      - ./crypto-config/peerOrganizations/oneorg.example.com/users/[email protected]:/var/hyperledger/[email protected]
    depends_on:
      - orderer0.oneorg.example.com
      - orderer1.oneorg.example.com
      - peer0.oneorg.example.com
      - peer1.oneorg.example.com
    networks:
      - byfn

To persist data, map two volumes each for ZooKeeper and Kafka, and point the corresponding environment variables at those mounts.
For ZooKeeper, for example:

    environment:
        - ZOO_DATA_DIR=/work/zdata
        - ZOO_DATA_LOG_DIR=/work/zlog
    volumes:
        - ./zlog:/work/zlog
        - ./zdata:/work/zdata

For Kafka, for example:

    environment:
        - KAFKA_LOG_DIRS=/work/klog
        - KAFKA_DATA_DIRS=/work/kdata
    volumes:
        - ./run/kafka1/klog:/work/klog
        - ./run/kafka1/kdata:/work/kdata

Kafka then writes topic data under /work/klog, one subdirectory per topic.

Start docker-compose:

$ docker-compose up -d
$ docker-compose ps
            Name                           Command               State                                                Ports                                              
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
cli.oneorg.example.com          /bin/bash                        Up                                                                                                      
kafka1.oneorg.example.com       /docker-entrypoint.sh /opt ...   Up      0.0.0.0:33072->9092/tcp, 9093/tcp                                                               
kafka2.oneorg.example.com       /docker-entrypoint.sh /opt ...   Up      0.0.0.0:33069->9092/tcp, 9093/tcp                                                               
kafka3.oneorg.example.com       /docker-entrypoint.sh /opt ...   Up      0.0.0.0:33070->9092/tcp, 9093/tcp                                                               
kafka4.oneorg.example.com       /docker-entrypoint.sh /opt ...   Up      0.0.0.0:33073->9092/tcp, 9093/tcp                                                               
orderer0.oneorg.example.com     orderer                          Up      0.0.0.0:50050->7050/tcp,0.0.0.0:33075->7050/tcp                                                 
orderer1.oneorg.example.com     orderer                          Up      0.0.0.0:51050->7050/tcp,0.0.0.0:33074->7050/tcp                                                 
peer0.oneorg.example.com        peer node start                  Up      0.0.0.0:50051->7051/tcp,0.0.0.0:33079->7051/tcp, 0.0.0.0:50053->7053/tcp,0.0.0.0:33078->7053/tcp
peer1.oneorg.example.com        peer node start                  Up      0.0.0.0:50061->7051/tcp,0.0.0.0:33077->7051/tcp, 0.0.0.0:50063->7053/tcp,0.0.0.0:33076->7053/tcp
zookeeper1.oneorg.example.com   /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:33068->2181/tcp, 0.0.0.0:33067->2888/tcp, 0.0.0.0:33066->3888/tcp                       
zookeeper2.oneorg.example.com   /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:33062->2181/tcp, 0.0.0.0:33061->2888/tcp, 0.0.0.0:33060->3888/tcp                       
zookeeper3.oneorg.example.com   /docker-entrypoint.sh zkSe ...   Up      0.0.0.0:33065->2181/tcp, 0.0.0.0:33064->2888/tcp, 0.0.0.0:33063->3888/tcp                 
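Before creating a channel, it is worth confirming that the orderers actually reached the Kafka brokers (an illustrative check; the exact log wording varies between Fabric versions):

```shell
# Look for Kafka connection messages in the orderer's startup log
docker logs orderer0.oneorg.example.com 2>&1 | grep -i kafka | head
```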
  • Step 5: Create the channel
docker exec \
    -e "CORE_PEER_ID=${PEER_ID}" \
    -e "CORE_PEER_ADDRESS=${PEER_ADDRESS}" \
    -e "CORE_PEER_LOCALMSPID=${PEER_LOCALMSPID}" \
    -e "CORE_PEER_MSPCONFIGPATH=${PEER_ADMIN_MSPCONFIGPATH}" \
    cli.oneorg.example.com \
    peer channel create -o ${ORDERER_ADDRESS} -c ${CHANNEL} -f ./channel-artifacts/${CHANNEL}.tx
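The variables in the command above are placeholders. For this topology they would be set roughly as follows; the MSP path matches the cli volume mounts shown earlier, but treat these values as assumptions to adapt to your own setup:

```shell
PEER_ID=peer0.oneorg.example.com
PEER_ADDRESS=peer0.oneorg.example.com:7051
PEER_LOCALMSPID=OneOrgMSP
# Admin MSP as mounted into the cli container above
PEER_ADMIN_MSPCONFIGPATH=/var/hyperledger/[email protected]/msp
ORDERER_ADDRESS=orderer0.oneorg.example.com:7050
CHANNEL=mychannel
```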
  • Step 6: Join the channel
docker exec \
    -e "CORE_PEER_ID=${PEER_ID}" \
    -e "CORE_PEER_ADDRESS=${PEER_ADDRESS}" \
    -e "CORE_PEER_LOCALMSPID=${PEER_LOCALMSPID}" \
    -e "CORE_PEER_MSPCONFIGPATH=${PEER_ADMIN_MSPCONFIGPATH}" \
    cli.oneorg.example.com \
    peer channel join -b ./${CHANNEL}.block
  • Step 7: Install the chaincode

Install the example02 sample chaincode:

docker exec \
    -e "CORE_PEER_ID=${PEER_ID}" \
    -e "CORE_PEER_ADDRESS=${PEER_ADDRESS}" \
    -e "CORE_PEER_LOCALMSPID=${PEER_LOCALMSPID}" \
    -e "CORE_PEER_MSPCONFIGPATH=${PEER_ADMIN_MSPCONFIGPATH}" \
    cli.oneorg.example.com \
    peer chaincode install -n ${CHAINCODE} -v ${VERSION} -p ${CHAINCODEPATH}
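For example02 the chaincode variables could be filled in as below; the Go import path is the one used in the Fabric 1.1 source tree, and the name and version are arbitrary choices:

```shell
CHAINCODE=mycc          # arbitrary chaincode name
VERSION=1.0
# example02 location in the Fabric 1.1 source tree (relative to GOPATH)
CHAINCODEPATH=github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02
```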
  • Step 8: Instantiate the chaincode
docker exec \
    -e "CORE_PEER_ID=${PEER_ID}" \
    -e "CORE_PEER_ADDRESS=${PEER_ADDRESS}" \
    -e "CORE_PEER_LOCALMSPID=${PEER_LOCALMSPID}" \
    -e "CORE_PEER_MSPCONFIGPATH=${PEER_ADMIN_MSPCONFIGPATH}" \
    cli.oneorg.example.com \
    peer chaincode instantiate -o ${ORDERER_ADDRESS} -C ${CHANNEL} -n ${CHAINCODE} -v ${VERSION} -c '{"Args":["init","a","1000","b","2000"]}'
  • Step 9: Invoke the chaincode
docker exec \
    -e "CORE_PEER_ID=${PEER_ID}" \
    -e "CORE_PEER_ADDRESS=${PEER_ADDRESS}" \
    -e "CORE_PEER_LOCALMSPID=${PEER_LOCALMSPID}" \
    -e "CORE_PEER_MSPCONFIGPATH=${PEER_MSPCONFIGPATH}" \
    cli.oneorg.example.com \
    peer chaincode invoke -o ${ORDERER_ADDRESS} -C ${CHANNEL} -n ${CHAINCODE} -c '{"Args":["invoke", "a", "b", "1"]}'
  • Step 10: Query the chaincode
docker exec \
    -e "CORE_PEER_ID=${PEER_ID}" \
    -e "CORE_PEER_ADDRESS=${PEER_ADDRESS}" \
    -e "CORE_PEER_LOCALMSPID=${PEER_LOCALMSPID}" \
    -e "CORE_PEER_MSPCONFIGPATH=${PEER_MSPCONFIGPATH}" \
    cli.oneorg.example.com \
    peer chaincode query -C ${CHANNEL} -n ${CHAINCODE} -c '{"Args":["query","a"]}'
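example02 keeps two integer balances, and "invoke a b 1" moves 1 from a to b, so after the init and single invoke above the query for "a" should return 999. The arithmetic:

```shell
# Balances after init ("a","1000","b","2000") and one invoke ("a","b","1")
a=1000; b=2000
a=$((a - 1)); b=$((b + 1))
echo "a=$a b=$b"   # a=999 b=2001
```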
  • Step 11: Verify the data in Kafka

List all Kafka topics:

$ docker exec kafka1.oneorg.example.com /opt/kafka/bin/kafka-topics.sh --list \
  --zookeeper zookeeper1.oneorg.example.com:2181,zookeeper2.oneorg.example.com:2181,zookeeper3.oneorg.example.com:2181
mychannel
testchainid

There are two topics: mychannel is the channel we created, and testchainid is the orderer's default system channel.

Describe the topic:

$ docker exec kafka1.oneorg.example.com /opt/kafka/bin/kafka-topics.sh --describe \
  --zookeeper zookeeper1.oneorg.example.com:2181,zookeeper2.oneorg.example.com:2181,zookeeper3.oneorg.example.com:2181 \
  --topic mychannel
Topic:mychannel PartitionCount:1    ReplicationFactor:3 Configs:
    Topic: mychannel    Partition: 0    Leader: 4   Replicas: 4,1,2 Isr: 4,1,2

mychannel has a single partition with three replicas, hosted on Kafka brokers 1, 2, and 4.

Check the message offsets on mychannel:

$ docker exec kafka1.oneorg.example.com /opt/kafka/bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list kafka1.oneorg.example.com:9092,kafka2.oneorg.example.com:9092,kafka3.oneorg.example.com:9092 \
  --topic mychannel
mychannel:0:10

Partition 0 of mychannel is at offset 10, i.e. ten messages have been written; besides the transaction envelopes, the orderer also posts internal control messages (connect and time-to-cut) to this topic, so the offset exceeds the block count.
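To look at the raw messages on the topic, the console consumer shipped inside the Kafka image can be used (a sketch, assuming the same tool path as above; each message is a protobuf-encoded Fabric payload, so the output is mainly useful for counting):

```shell
# Dump the first 10 raw messages from the channel topic
docker exec kafka1.oneorg.example.com /opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server kafka1.oneorg.example.com:9092 \
  --topic mychannel --from-beginning --max-messages 10
```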
(End)
