A total of 13 machines will be required, allocated as follows:
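(The allocation below is reconstructed from the IP addresses used in the compose files later in this section; substitute addresses for your own environment.)
zookeeper1 ~ zookeeper3: 192.168.24.201 ~ 203 (3 machines)
kafka1 ~ kafka4: 192.168.24.204 ~ 207 (4 machines)
orderer0 ~ orderer2.example.com: 192.168.24.208 ~ 210 (3 machines)
peer0.org1.example.com: 192.168.24.211
foo27.org2.example.com: 192.168.24.212
bar.org2.example.com: 192.168.24.213
The crypto-config.yaml used to generate the certificate material is as follows: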
OrdererOrgs:
- Name: Orderer
Domain: example.com
Specs:
- Hostname: orderer0
- Hostname: orderer1
- Hostname: orderer2
PeerOrgs:
- Name: Org1
Domain: org1.example.com
Template:
Count: 2
Users:
Count: 1
- Name: Org2
Domain: org2.example.com
Template:
Count: 2
Users:
Count: 1
Specs:
- Hostname: foo
CommonName: foo27.org2.example.com
- Hostname: bar
- Hostname: baz
- Name: Org3
Domain: org3.example.com
Template:
Count: 2
Users:
Count: 1
- Name: Org4
Domain: org4.example.com
Template:
Count: 2
Users:
Count: 1
- Name: Org5
Domain: org5.example.com
Template:
Count: 2
Users:
Count: 1
Org2 uses not only the Template configuration but also custom node Specs; these custom nodes will be verified later as well.
cd ~/fabric/aberic/
./bin/cryptogen generate --config=./crypto-config.yaml
The generated organization certificate files are placed under the ~/fabric/aberic/crypto-config/peerOrganizations directory.
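To confirm that both the Template and the Specs entries for Org2 took effect, list the generated peers directory; it should contain the two templated peers plus the three custom hosts, with foo using its overridden CommonName as the folder name:
ls crypto-config/peerOrganizations/org2.example.com/peers/
# expected: bar.org2.example.com  baz.org2.example.com  foo27.org2.example.com  peer0.org2.example.com  peer1.org2.example.com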
The configtx.yaml configuration must be consistent with crypto-config.yaml; its content is as follows:
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
---
################################################################################
#
# Section: Organizations
#
# - This section defines the different organizational identities which will
# be referenced later in the configuration.
#
################################################################################
Organizations:
- &OrdererOrg
Name: OrdererMSP
ID: OrdererMSP
MSPDir: crypto-config/ordererOrganizations/example.com/msp
- &Org1
Name: Org1MSP
ID: Org1MSP
MSPDir: crypto-config/peerOrganizations/org1.example.com/msp
AnchorPeers:
- Host: peer0.org1.example.com
Port: 7051
- &Org2
Name: Org2MSP
ID: Org2MSP
MSPDir: crypto-config/peerOrganizations/org2.example.com/msp
AnchorPeers:
- Host: peer0.org2.example.com
Port: 7051
- &Org3
Name: Org3MSP
ID: Org3MSP
MSPDir: crypto-config/peerOrganizations/org3.example.com/msp
AnchorPeers:
- Host: peer0.org3.example.com
Port: 7051
- &Org4
Name: Org4MSP
ID: Org4MSP
MSPDir: crypto-config/peerOrganizations/org4.example.com/msp
AnchorPeers:
- Host: peer0.org4.example.com
Port: 7051
- &Org5
Name: Org5MSP
ID: Org5MSP
MSPDir: crypto-config/peerOrganizations/org5.example.com/msp
AnchorPeers:
- Host: peer0.org5.example.com
Port: 7051
################################################################################
#
# SECTION: Orderer
#
# - This section defines the values to encode into a config transaction or
# genesis block for orderer related parameters
#
################################################################################
Orderer: &OrdererDefaults
OrdererType: kafka
Addresses:
- orderer0.example.com:7050
- orderer1.example.com:7050
- orderer2.example.com:7050
BatchTimeout: 2s
BatchSize:
MaxMessageCount: 10
AbsoluteMaxBytes: 98 MB
PreferredMaxBytes: 512 KB
Kafka:
Brokers:
- 192.168.24.204:9092
- 192.168.24.205:9092
- 192.168.24.206:9092
- 192.168.24.207:9092
Organizations:
################################################################################
#
# SECTION: Application
#
# - This section defines the values to encode into a config transaction or
# genesis block for application related parameters
#
################################################################################
Application: &ApplicationDefaults
Organizations:
Capabilities:
Global: &ChannelCapabilities
V1_1: true
Orderer: &OrdererCapabilities
V1_1: true
Application: &ApplicationCapabilities
V1_1: true
################################################################################
#
# Profile
#
# - Different configuration profiles may be encoded here to be specified
# as parameters to the configtxgen tool
#
################################################################################
Profiles:
TwoOrgsOrdererGenesis:
Orderer:
<<: *OrdererDefaults
Organizations:
- *OrdererOrg
Consortiums:
SampleConsortium:
Organizations:
- *Org1
- *Org2
- *Org3
- *Org4
- *Org5
TwoOrgsChannel:
Consortium: SampleConsortium
Application:
<<: *ApplicationDefaults
Organizations:
- *Org1
- *Org2
- *Org3
- *Org4
- *Org5
./bin/configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
./bin/configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/mychannel.tx -channelID mychannel
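This walkthrough only generates the genesis block and the channel transaction. If anchor-peer update transactions are also needed (they are not used in the rest of this section), configtxgen can produce them from the same profile; a sketch for Org1 and Org2 (the output file names are arbitrary), with the remaining organizations following the same pattern:
./bin/configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors.tx -channelID mychannel -asOrg Org1MSP
./bin/configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org2MSPanchors.tx -channelID mychannel -asOrg Org2MSP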
docker-zookeeper1.yaml
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
# ZooKeeper's basic workflow:
# 1. Elect a leader.
# 2. Synchronize data.
# 3. There are many leader-election algorithms, but the criteria a valid election must satisfy are the same.
# 4. The leader must hold the highest transaction ID (zxid), roughly analogous to root authority.
# 5. A majority of the machines in the cluster respond to and follow the elected leader.
#
version: '2'
services:
zookeeper1:
container_name: zookeeper1
hostname: zookeeper1
image: hyperledger/fabric-zookeeper
restart: always
environment:
# ========================================================================
# Reference: https://zookeeper.apache.org/doc/r3.4.9/zookeeperAdmin.html#sc_configuration
# ========================================================================
#
# myid
# The ID must be unique within the ensemble and should have a value
# between 1 and 255.
- ZOO_MY_ID=1
#
# server.x=[hostname]:nnnnn[:nnnnn]
# The list of servers that make up the ZK ensemble. The list that is used
# by the clients must match the list of ZooKeeper servers that each ZK
# server has. There are two port numbers `nnnnn`. The first is what
# followers use to connect to the leader, while the second is for leader
# election.
- ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
ports:
- 2181:2181
- 2888:2888
- 3888:3888
extra_hosts:
- zookeeper1:192.168.24.201
- zookeeper2:192.168.24.202
- zookeeper3:192.168.24.203
- kafka1:192.168.24.204
- kafka2:192.168.24.205
- kafka3:192.168.24.206
- kafka4:192.168.24.207
docker-zookeeper2.yaml
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
# ZooKeeper's basic workflow:
# 1. Elect a leader.
# 2. Synchronize data.
# 3. There are many leader-election algorithms, but the criteria a valid election must satisfy are the same.
# 4. The leader must hold the highest transaction ID (zxid), roughly analogous to root authority.
# 5. A majority of the machines in the cluster respond to and follow the elected leader.
#
version: '2'
services:
zookeeper2:
container_name: zookeeper2
hostname: zookeeper2
image: hyperledger/fabric-zookeeper
restart: always
environment:
# ========================================================================
# Reference: https://zookeeper.apache.org/doc/r3.4.9/zookeeperAdmin.html#sc_configuration
# ========================================================================
#
# myid
# The ID must be unique within the ensemble and should have a value
# between 1 and 255.
- ZOO_MY_ID=2
#
# server.x=[hostname]:nnnnn[:nnnnn]
# The list of servers that make up the ZK ensemble. The list that is used
# by the clients must match the list of ZooKeeper servers that each ZK
# server has. There are two port numbers `nnnnn`. The first is what
# followers use to connect to the leader, while the second is for leader
# election.
- ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
ports:
- 2181:2181
- 2888:2888
- 3888:3888
extra_hosts:
- zookeeper1:192.168.24.201
- zookeeper2:192.168.24.202
- zookeeper3:192.168.24.203
- kafka1:192.168.24.204
- kafka2:192.168.24.205
- kafka3:192.168.24.206
- kafka4:192.168.24.207
docker-zookeeper3.yaml
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
# ZooKeeper's basic workflow:
# 1. Elect a leader.
# 2. Synchronize data.
# 3. There are many leader-election algorithms, but the criteria a valid election must satisfy are the same.
# 4. The leader must hold the highest transaction ID (zxid), roughly analogous to root authority.
# 5. A majority of the machines in the cluster respond to and follow the elected leader.
#
version: '2'
services:
zookeeper3:
container_name: zookeeper3
hostname: zookeeper3
image: hyperledger/fabric-zookeeper
restart: always
environment:
# ========================================================================
# Reference: https://zookeeper.apache.org/doc/r3.4.9/zookeeperAdmin.html#sc_configuration
# ========================================================================
#
# myid
# The ID must be unique within the ensemble and should have a value
# between 1 and 255.
- ZOO_MY_ID=3
#
# server.x=[hostname]:nnnnn[:nnnnn]
# The list of servers that make up the ZK ensemble. The list that is used
# by the clients must match the list of ZooKeeper servers that each ZK
# server has. There are two port numbers `nnnnn`. The first is what
# followers use to connect to the leader, while the second is for leader
# election.
- ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
ports:
- 2181:2181
- 2888:2888
- 3888:3888
extra_hosts:
- zookeeper1:192.168.24.201
- zookeeper2:192.168.24.202
- zookeeper3:192.168.24.203
- kafka1:192.168.24.204
- kafka2:192.168.24.205
- kafka3:192.168.24.206
- kafka4:192.168.24.207
docker-kafka1.yaml
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
# We use K and Z to denote the number of nodes in the Kafka cluster and the ZooKeeper ensemble, respectively.
#
# 1) The minimum value of K should be 4. (As will be explained in step 4, this is the minimum number of brokers
#    required for crash fault tolerance: with 4 brokers, one broker can crash, and after that broker stops serving,
#    channels can still be read from and written to, and new channels can still be created.)
# 2) Z can be 3, 5, or 7. It must be an odd number to avoid split-brain scenarios, and larger than 1 to avoid a
#    single point of failure. More than 7 ZooKeeper servers is considered overkill.
#
version: '2'
services:
kafka1:
container_name: kafka1
hostname: kafka1
image: hyperledger/fabric-kafka
restart: always
environment:
# ========================================================================
# Reference: https://kafka.apache.org/documentation/#configuration
# ========================================================================
#
# broker.id
- KAFKA_BROKER_ID=1
#
# min.insync.replicas
# Let the value of this setting be M. Data is considered committed when
# it is written to at least M replicas (which are then considered in-sync
# and belong to the in-sync replica set, or ISR). In any other case, the
# write operation returns an error. Then:
# 1. If up to M-N replicas -- out of the N (see default.replication.factor
# below) that the channel data is written to -- become unavailable,
# operations proceed normally.
# 2. If more replicas become unavailable, Kafka cannot maintain an ISR set
# of M, so it stops accepting writes. Reads work without issues. The
# channel becomes writeable again when M replicas get in-sync.
#
# min.insync.replicas = M --- set a value for M (e.g., 1
docker-kafka2.yaml
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
# We use K and Z to denote the number of nodes in the Kafka cluster and the ZooKeeper ensemble, respectively.
#
# 1) The minimum value of K should be 4. (As will be explained in step 4, this is the minimum number of brokers
#    required for crash fault tolerance: with 4 brokers, one broker can crash, and after that broker stops serving,
#    channels can still be read from and written to, and new channels can still be created.)
# 2) Z can be 3, 5, or 7. It must be an odd number to avoid split-brain scenarios, and larger than 1 to avoid a
#    single point of failure. More than 7 ZooKeeper servers is considered overkill.
#
version: '2'
services:
kafka2:
container_name: kafka2
hostname: kafka2
image: hyperledger/fabric-kafka
restart: always
environment:
# ========================================================================
# Reference: https://kafka.apache.org/documentation/#configuration
# ========================================================================
#
# broker.id
- KAFKA_BROKER_ID=2
#
# min.insync.replicas
# Let the value of this setting be M. Data is considered committed when
# it is written to at least M replicas (which are then considered in-sync
# and belong to the in-sync replica set, or ISR). In any other case, the
# write operation returns an error. Then:
# 1. If up to M-N replicas -- out of the N (see default.replication.factor
# below) that the channel data is written to -- become unavailable,
# operations proceed normally.
# 2. If more replicas become unavailable, Kafka cannot maintain an ISR set
# of M, so it stops accepting writes. Reads work without issues. The
# channel becomes writeable again when M replicas get in-sync.
#
# min.insync.replicas = M --- set a value for M (e.g., 1
docker-kafka3.yaml
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
# We use K and Z to denote the number of nodes in the Kafka cluster and the ZooKeeper ensemble, respectively.
#
# 1) The minimum value of K should be 4. (As will be explained in step 4, this is the minimum number of brokers
#    required for crash fault tolerance: with 4 brokers, one broker can crash, and after that broker stops serving,
#    channels can still be read from and written to, and new channels can still be created.)
# 2) Z can be 3, 5, or 7. It must be an odd number to avoid split-brain scenarios, and larger than 1 to avoid a
#    single point of failure. More than 7 ZooKeeper servers is considered overkill.
#
version: '2'
services:
kafka3:
container_name: kafka3
hostname: kafka3
image: hyperledger/fabric-kafka
restart: always
environment:
# ========================================================================
# Reference: https://kafka.apache.org/documentation/#configuration
# ========================================================================
#
# broker.id
- KAFKA_BROKER_ID=3
#
# min.insync.replicas
# Let the value of this setting be M. Data is considered committed when
# it is written to at least M replicas (which are then considered in-sync
# and belong to the in-sync replica set, or ISR). In any other case, the
# write operation returns an error. Then:
# 1. If up to M-N replicas -- out of the N (see default.replication.factor
# below) that the channel data is written to -- become unavailable,
# operations proceed normally.
# 2. If more replicas become unavailable, Kafka cannot maintain an ISR set
# of M, so it stops accepting writes. Reads work without issues. The
# channel becomes writeable again when M replicas get in-sync.
#
# min.insync.replicas = M --- set a value for M (e.g., 1
docker-kafka4.yaml
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
# We use K and Z to denote the number of nodes in the Kafka cluster and the ZooKeeper ensemble, respectively.
#
# 1) The minimum value of K should be 4. (As will be explained in step 4, this is the minimum number of brokers
#    required for crash fault tolerance: with 4 brokers, one broker can crash, and after that broker stops serving,
#    channels can still be read from and written to, and new channels can still be created.)
# 2) Z can be 3, 5, or 7. It must be an odd number to avoid split-brain scenarios, and larger than 1 to avoid a
#    single point of failure. More than 7 ZooKeeper servers is considered overkill.
#
version: '2'
services:
kafka4:
container_name: kafka4
hostname: kafka4
image: hyperledger/fabric-kafka
restart: always
environment:
# ========================================================================
# Reference: https://kafka.apache.org/documentation/#configuration
# ========================================================================
#
# broker.id
- KAFKA_BROKER_ID=4
#
# min.insync.replicas
# Let the value of this setting be M. Data is considered committed when
# it is written to at least M replicas (which are then considered in-sync
# and belong to the in-sync replica set, or ISR). In any other case, the
# write operation returns an error. Then:
# 1. If up to M-N replicas -- out of the N (see default.replication.factor
# below) that the channel data is written to -- become unavailable,
# operations proceed normally.
# 2. If more replicas become unavailable, Kafka cannot maintain an ISR set
# of M, so it stops accepting writes. Reads work without issues. The
# channel becomes writeable again when M replicas get in-sync.
#
# min.insync.replicas = M --- set a value for M (e.g., 1
The Kafka cluster should have at least 4 brokers, which is the minimum needed for crash fault tolerance. With 4 brokers, one broker can crash: after that broker stops serving, the channel can still be read from and written to, and new channels can still be created.
docker-orderer0.yaml
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
services:
orderer0.example.com:
container_name: orderer0.example.com
image: hyperledger/fabric-orderer
environment:
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
- ORDERER_GENERAL_LOGLEVEL=debug
# - ORDERER_GENERAL_LOGLEVEL=error
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_LISTENPORT=7050
#- ORDERER_GENERAL_GENESISPROFILE=AntiMothOrdererGenesis
- ORDERER_GENERAL_GENESISMETHOD=file
- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
- ORDERER_GENERAL_LOCALMSPID=OrdererMSP
- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
#- ORDERER_GENERAL_LEDGERTYPE=ram
#- ORDERER_GENERAL_LEDGERTYPE=file
# enabled TLS
- ORDERER_GENERAL_TLS_ENABLED=false
- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
- ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
- ORDERER_KAFKA_RETRY_LONGTOTAL=100s
- ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
- ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
- ORDERER_KAFKA_VERBOSE=true
- ORDERER_KAFKA_BROKERS=[192.168.24.204:9092,192.168.24.205:9092,192.168.24.206:9092,192.168.24.207:9092]
working_dir: /opt/gopath/src/github.com/hyperledger/fabric
command: orderer
volumes:
- ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
- ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/msp:/var/hyperledger/orderer/msp
- ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/tls/:/var/hyperledger/orderer/tls
networks:
default:
aliases:
- aberic
ports:
- 7050:7050
extra_hosts:
- kafka1:192.168.24.204
- kafka2:192.168.24.205
- kafka3:192.168.24.206
- kafka4:192.168.24.207
docker-orderer1.yaml
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
services:
orderer1.example.com:
container_name: orderer1.example.com
image: hyperledger/fabric-orderer
environment:
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
- ORDERER_GENERAL_LOGLEVEL=debug
# - ORDERER_GENERAL_LOGLEVEL=error
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_LISTENPORT=7050
#- ORDERER_GENERAL_GENESISPROFILE=AntiMothOrdererGenesis
- ORDERER_GENERAL_GENESISMETHOD=file
- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
- ORDERER_GENERAL_LOCALMSPID=OrdererMSP
- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
#- ORDERER_GENERAL_LEDGERTYPE=ram
#- ORDERER_GENERAL_LEDGERTYPE=file
# enabled TLS
- ORDERER_GENERAL_TLS_ENABLED=false
- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
- ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
- ORDERER_KAFKA_RETRY_LONGTOTAL=100s
- ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
- ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
- ORDERER_KAFKA_VERBOSE=true
- ORDERER_KAFKA_BROKERS=[192.168.24.204:9092,192.168.24.205:9092,192.168.24.206:9092,192.168.24.207:9092]
working_dir: /opt/gopath/src/github.com/hyperledger/fabric
command: orderer
volumes:
- ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
- ./crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/msp:/var/hyperledger/orderer/msp
- ./crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/tls/:/var/hyperledger/orderer/tls
networks:
default:
aliases:
- aberic
ports:
- 7050:7050
extra_hosts:
- kafka1:192.168.24.204
- kafka2:192.168.24.205
- kafka3:192.168.24.206
- kafka4:192.168.24.207
docker-orderer2.yaml
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
services:
orderer2.example.com:
container_name: orderer2.example.com
image: hyperledger/fabric-orderer
environment:
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
- ORDERER_GENERAL_LOGLEVEL=debug
# - ORDERER_GENERAL_LOGLEVEL=error
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_LISTENPORT=7050
#- ORDERER_GENERAL_GENESISPROFILE=AntiMothOrdererGenesis
- ORDERER_GENERAL_GENESISMETHOD=file
- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
- ORDERER_GENERAL_LOCALMSPID=OrdererMSP
- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
#- ORDERER_GENERAL_LEDGERTYPE=ram
#- ORDERER_GENERAL_LEDGERTYPE=file
# enabled TLS
- ORDERER_GENERAL_TLS_ENABLED=false
- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
- ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
- ORDERER_KAFKA_RETRY_LONGTOTAL=100s
- ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
- ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
- ORDERER_KAFKA_VERBOSE=true
- ORDERER_KAFKA_BROKERS=[192.168.24.204:9092,192.168.24.205:9092,192.168.24.206:9092,192.168.24.207:9092]
working_dir: /opt/gopath/src/github.com/hyperledger/fabric
command: orderer
volumes:
- ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
- ./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/msp:/var/hyperledger/orderer/msp
- ./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/tls/:/var/hyperledger/orderer/tls
networks:
default:
aliases:
- aberic
ports:
- 7050:7050
extra_hosts:
- kafka1:192.168.24.204
- kafka2:192.168.24.205
- kafka3:192.168.24.206
- kafka4:192.168.24.207
docker-compose -f docker-zookeeper1.yaml up
docker-compose -f docker-zookeeper2.yaml up
docker-compose -f docker-zookeeper3.yaml up
Without the -d flag, the ZooKeeper startup logs are printed directly to the console.
docker exec -it zookeeper2 bash
./bin/zkServer.sh status
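If the ensemble is healthy, zkServer.sh reports the role of each node. The exact wording varies slightly across ZooKeeper versions, but the output looks roughly like the lines below, with exactly one of the three nodes reporting leader:
# ZooKeeper JMX enabled by default
# Using config: .../zoo.cfg
# Mode: follower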
docker-compose -f docker-kafka1.yaml up
docker-compose -f docker-kafka2.yaml up
docker-compose -f docker-kafka3.yaml up
docker-compose -f docker-kafka4.yaml up
The "-d" flag is omitted above so that the Kafka startup logs can be watched directly.
If the startup reports insufficient memory and this is only a test environment, the following parameter can be added to the environment section of each Kafka compose file:
- KAFKA_HEAP_OPTS=-Xmx256M -Xms128M
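To confirm that each broker started successfully and registered with ZooKeeper, the container logs can be checked from the host; a minimal check (the exact log line depends on the Kafka version bundled in the image):
docker logs kafka1 2>&1 | grep -i "started (kafka.server.KafkaServer)"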
As with the ZooKeeper and Kafka setup:
1) Upload docker-orderer0 ~ 2.yaml to the fabric/aberic directory on the corresponding servers;
2) Upload the genesis.block genesis block generated earlier to each server's fabric/aberic/channel-artifacts directory;
3) Upload the node files generated from the crypto-config.yaml configuration, i.e. the ordererOrganizations directory under crypto-config, to the fabric/aberic/crypto-config directory on each server.
One level down inside the original ordererOrganizations folder there is an orderers directory containing three sub-folders; they do not all need to be uploaded to every orderer server — only the folder whose name matches that server (see the sketch below).
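A sketch of that distribution step, assuming the orderer hosts use 192.168.24.208 ~ 210 (the addresses referenced later in the peer compose files) and that scp is used for the upload (the user name is illustrative):
scp -r crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com [email protected]:~/fabric/aberic/crypto-config/ordererOrganizations/example.com/orderers/
scp -r crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com [email protected]:~/fabric/aberic/crypto-config/ordererOrganizations/example.com/orderers/
scp -r crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com [email protected]:~/fabric/aberic/crypto-config/ordererOrganizations/example.com/orderers/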
Then run the following on the three servers respectively:
docker-compose -f docker-orderer0.yaml up
docker-compose -f docker-orderer1.yaml up -d
docker-compose -f docker-orderer2.yaml up -d
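With the genesis block and MSP material in place, each orderer connects to the Kafka brokers and begins serving on port 7050. A quick sanity check on the first node (the exact log wording depends on the Fabric version):
docker logs orderer0.example.com 2>&1 | grep -i "beginning to serve requests"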
Because a peer's interaction stops at the orderer service — the peer does not care whether the ordering layer above it is Solo or Kafka — the peer configuration is similar to what was used before.
Three compose files follow; they test not only the cluster features covered so far but also the custom node configuration:
docker-peer0org1.yaml
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
services:
couchdb:
container_name: couchdb
image: hyperledger/fabric-couchdb
# Comment/Uncomment the port mapping if you want to hide/expose the CouchDB service,
# for example map it to utilize Fauxton User Interface in dev environments.
ports:
- 5984:5984
ca:
container_name: ca
image: hyperledger/fabric-ca
environment:
- FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
- FABRIC_CA_SERVER_CA_NAME=ca
- FABRIC_CA_SERVER_TLS_ENABLED=false
- FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
- FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/4867fff01ded23e5212f4d4b8ff02dad3c8c41e316fc3ddcb287eff9000d3dee_sk
ports:
- 7054:7054
command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/4867fff01ded23e5212f4d4b8ff02dad3c8c41e316fc3ddcb287eff9000d3dee_sk -b admin:adminpw -d'
volumes:
- ./crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
peer0.org1.example.com:
container_name: peer0.org1.example.com
image: hyperledger/fabric-peer
environment:
- CORE_LEDGER_STATE_STATEDATABASE=CouchDB
- CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=192.168.24.211:5984
- CORE_PEER_ID=peer0.org1.example.com
- CORE_PEER_NETWORKID=aberic
- CORE_PEER_ADDRESS=peer0.org1.example.com:7051
- CORE_PEER_CHAINCODEADDRESS=peer0.org1.example.com:7052
- CORE_PEER_CHAINCODELISTENADDRESS=peer0.org1.example.com:7052
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
# the following setting starts chaincode containers on the same
# bridge network as the peers
# https://docs.docker.com/compose/networking/
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
- CORE_VM_DOCKER_TLS_ENABLED=false
# - CORE_LOGGING_LEVEL=ERROR
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
- CORE_PEER_GOSSIP_USELEADERELECTION=true
- CORE_PEER_GOSSIP_ORGLEADER=false
- CORE_PEER_PROFILE_ENABLED=false
- CORE_PEER_TLS_ENABLED=false
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
volumes:
- /var/run/:/host/var/run/
- ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/chaincode/go
- ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
- ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: peer node start
ports:
- 7051:7051
- 7052:7052
- 7053:7053
depends_on:
- couchdb
networks:
default:
aliases:
- aberic
extra_hosts:
- orderer0.example.com:192.168.24.208
- orderer1.example.com:192.168.24.209
- orderer2.example.com:192.168.24.210
cli:
container_name: cli
image: hyperledger/fabric-tools
tty: true
environment:
- GOPATH=/opt/gopath
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
# - CORE_LOGGING_LEVEL=ERROR
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_ID=cli
- CORE_PEER_ADDRESS=peer0.org1.example.com:7051
- CORE_PEER_CHAINCODELISTENADDRESS=peer0.org1.example.com:7052
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_PEER_TLS_ENABLED=false
- CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/[email protected]/msp
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
volumes:
- /var/run/:/host/var/run/
- ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/chaincode/go
- ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
- ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
depends_on:
- peer0.org1.example.com
extra_hosts:
- orderer0.example.com:192.168.24.208
- orderer1.example.com:192.168.24.209
- orderer2.example.com:192.168.24.210
- peer0.org1.example.com:192.168.24.211
Then:
1) Upload docker-peer0org1.yaml to the aberic/ directory on the peer node;
2) Upload the mychannel.tx channel file to the aberic/channel-artifacts directory;
3) Upload peerOrganizations to the aberic/crypto-config directory, keeping only the org1-related material.
Log in to the peer server and start the peer node services:
docker-compose -f docker-peer0org1.yaml up -d
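Once the compose file is up, four containers should be running on this host: couchdb, ca, peer0.org1.example.com, and cli. A quick check, followed by entering the cli container, from which the peer commands below are issued (the same docker exec pattern used for ZooKeeper earlier):
docker ps --format "table {{.Names}}\t{{.Status}}"
docker exec -it cli bash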
Create the channel:
peer channel create -o orderer0.example.com:7050 -c mychannel -t 50s -f ./channel-artifacts/mychannel.tx
Join the channel:
peer channel join -b mychannel.block
Install the chaincode:
peer chaincode install -n mychannel -p github.com/hyperledger/fabric/chaincode/go/chaincode_example02 -v 1.0
Instantiate the chaincode:
peer chaincode instantiate -o orderer0.example.com:7050 -C mychannel -n mychannel -c '{"Args":["init","A","10","B","10"]}' -P "OR ('Org1MSP.member','Org2MSP.member')" -v 1.0
Query the balance of account A:
peer chaincode query -C mychannel -n mychannel -c '{"Args":["query","A"]}'
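Optionally, the chaincode can also be exercised with an invoke from the same cli before moving on; a sketch that transfers 1 from A to B using the standard chaincode_example02 arguments (note that this changes the balances that later queries on other peers will return):
peer chaincode invoke -o orderer0.example.com:7050 -C mychannel -n mychannel -c '{"Args":["invoke","A","B","1"]}'
peer chaincode query -C mychannel -n mychannel -c '{"Args":["query","A"]}'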
Back up the peer0.org1 channel block file:
docker cp cli:/opt/gopath/src/github.com/hyperledger/fabric/peer/mychannel.block ./channel-artifacts/
Test the custom node foo27.org2 with docker-foo27org2.yaml:
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
services:
couchdb:
container_name: couchdb
image: hyperledger/fabric-couchdb
environment:
- COUCHDB_USER=admin
- COUCHDB_PASSWORD=123456
# Comment/Uncomment the port mapping if you want to hide/expose the CouchDB service,
# for example map it to utilize Fauxton User Interface in dev environments.
ports:
- 5984:5984
ca:
container_name: ca
image: hyperledger/fabric-ca
environment:
- FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
- FABRIC_CA_SERVER_CA_NAME=ca
- FABRIC_CA_SERVER_TLS_ENABLED=false
- FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org2.example.com-cert.pem
- FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/43fc8bf11c4cadf28bbe5d19ccf3e9ae617b744dc18cb117ef50ef901ddad630_sk
ports:
- 7054:7054
command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org2.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/43fc8bf11c4cadf28bbe5d19ccf3e9ae617b744dc18cb117ef50ef901ddad630_sk -b admin:adminpw -d'
volumes:
- ./crypto-config/peerOrganizations/org2.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
foo27.org2.example.com:
container_name: foo27.org2.example.com
image: hyperledger/fabric-peer
environment:
- CORE_LEDGER_STATE_STATEDATABASE=CouchDB
- CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984
- CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=admin
- CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=123456
- CORE_PEER_ID=foo27.org2.example.com
- CORE_PEER_NETWORKID=aberic
- CORE_PEER_ADDRESS=foo27.org2.example.com:7051
- CORE_PEER_CHAINCODEADDRESS=foo27.org2.example.com:7052
- CORE_PEER_CHAINCODELISTENADDRESS=foo27.org2.example.com:7052
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=foo27.org2.example.com:7051
- CORE_PEER_LOCALMSPID=Org2MSP
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
# the following setting starts chaincode containers on the same
# bridge network as the peers
# https://docs.docker.com/compose/networking/
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
- CORE_VM_DOCKER_TLS_ENABLED=false
# - CORE_LOGGING_LEVEL=ERROR
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
- CORE_PEER_GOSSIP_USELEADERELECTION=true
- CORE_PEER_GOSSIP_ORGLEADER=false
- CORE_PEER_PROFILE_ENABLED=false
- CORE_PEER_TLS_ENABLED=false
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
volumes:
- /var/run/:/host/var/run/
- ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/chaincode/go
- ./crypto-config/peerOrganizations/org2.example.com/peers/foo27.org2.example.com/msp:/etc/hyperledger/fabric/msp
- ./crypto-config/peerOrganizations/org2.example.com/peers/foo27.org2.example.com/tls:/etc/hyperledger/fabric/tls
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: peer node start
ports:
- 7051:7051
- 7052:7052
- 7053:7053
depends_on:
- couchdb
networks:
default:
aliases:
- aberic
extra_hosts:
- orderer0.example.com:192.168.24.208
- orderer1.example.com:192.168.24.209
- orderer2.example.com:192.168.24.210
cli:
container_name: cli
image: hyperledger/fabric-tools
tty: true
environment:
- GOPATH=/opt/gopath
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
# - CORE_LOGGING_LEVEL=ERROR
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_ID=cli
- CORE_PEER_ADDRESS=foo27.org2.example.com:7051
- CORE_PEER_CHAINCODELISTENADDRESS=foo27.org2.example.com:7052
- CORE_PEER_LOCALMSPID=Org2MSP
- CORE_PEER_TLS_ENABLED=false
- CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/foo27.org2.example.com/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/foo27.org2.example.com/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/foo27.org2.example.com/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/[email protected]/msp
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
volumes:
- /var/run/:/host/var/run/
- ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/chaincode/go
- ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
- ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
depends_on:
- foo27.org2.example.com
extra_hosts:
- orderer0.example.com:192.168.24.208
- orderer1.example.com:192.168.24.209
- orderer2.example.com:192.168.24.210
- foo27.org2.example.com:192.168.24.212
Upload the file to the fabric/aberic directory on this peer's server.
docker-compose -f docker-foo27org2.yaml up -d
Copy the mychannel.block file to the fabric/aberic/channel-artifacts directory on this server, then use docker cp to copy it into the peer working directory of the cli container (a sketch of the copy follows).
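Assuming the compose file above is used unchanged (container_name: cli, working_dir .../fabric/peer), the copy into the container looks like this; alternatively, since ./channel-artifacts is already mounted into the cli container, peer channel join -b channel-artifacts/mychannel.block works without the copy:
docker cp ./channel-artifacts/mychannel.block cli:/opt/gopath/src/github.com/hyperledger/fabric/peer/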
Join the mychannel channel:
peer channel join -b mychannel.block
Install the chaincode:
peer chaincode install -n mychannel -p github.com/hyperledger/fabric/chaincode/go/chaincode_example02 -v 1.0
Query the balance of account A:
peer chaincode query -C mychannel -n mychannel -c '{"Args":["query","A"]}'
The query returns the same result as on peer0.org1, which shows that this custom-configured node has started and is running correctly.
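Besides comparing the query result, channel membership and ledger height can be compared between the two peers from their cli containers; a minimal check (the availability of getinfo depends on the Fabric version):
peer channel list
peer channel getinfo -c mychannel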
Test the semi-customized node bar.org2 with docker-barorg2.yaml:
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
services:
couchdb:
container_name: couchdb
image: hyperledger/fabric-couchdb
# Comment/Uncomment the port mapping if you want to hide/expose the CouchDB service,
# for example map it to utilize Fauxton User Interface in dev environments.
ports:
- 5984:5984
ca:
container_name: ca
image: hyperledger/fabric-ca
environment:
- FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
- FABRIC_CA_SERVER_CA_NAME=ca
- FABRIC_CA_SERVER_TLS_ENABLED=false
- FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org2.example.com-cert.pem
- FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/43fc8bf11c4cadf28bbe5d19ccf3e9ae617b744dc18cb117ef50ef901ddad630_sk
ports:
- 7054:7054
command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org2.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/43fc8bf11c4cadf28bbe5d19ccf3e9ae617b744dc18cb117ef50ef901ddad630_sk -b admin:adminpw -d'
volumes:
- ./crypto-config/peerOrganizations/org2.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
bar.org2.example.com:
container_name: bar.org2.example.com
image: hyperledger/fabric-peer
environment:
- CORE_LEDGER_STATE_STATEDATABASE=CouchDB
- CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=192.168.24.213:5984
- CORE_PEER_ID=bar.org2.example.com
- CORE_PEER_NETWORKID=aberic
- CORE_PEER_ADDRESS=bar.org2.example.com:7051
- CORE_PEER_CHAINCODEADDRESS=bar.org2.example.com:7052
- CORE_PEER_CHAINCODELISTENADDRESS=bar.org2.example.com:7052
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=bar.org2.example.com:7051
- CORE_PEER_LOCALMSPID=Org2MSP
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
# the following setting starts chaincode containers on the same
# bridge network as the peers
# https://docs.docker.com/compose/networking/
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
- CORE_VM_DOCKER_TLS_ENABLED=false
# - CORE_LOGGING_LEVEL=ERROR
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
- CORE_PEER_GOSSIP_USELEADERELECTION=true
- CORE_PEER_GOSSIP_ORGLEADER=false
- CORE_PEER_PROFILE_ENABLED=false
- CORE_PEER_TLS_ENABLED=false
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
volumes:
- /var/run/:/host/var/run/
- ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/chaincode/go
- ./crypto-config/peerOrganizations/org2.example.com/peers/bar.org2.example.com/msp:/etc/hyperledger/fabric/msp
- ./crypto-config/peerOrganizations/org2.example.com/peers/bar.org2.example.com/tls:/etc/hyperledger/fabric/tls
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: peer node start
ports:
- 7051:7051
- 7052:7052
- 7053:7053
depends_on:
- couchdb
networks:
default:
aliases:
- aberic
extra_hosts:
- orderer0.example.com:192.168.24.208
- orderer1.example.com:192.168.24.209
- orderer2.example.com:192.168.24.210
cli:
container_name: cli
image: hyperledger/fabric-tools
tty: true
environment:
- GOPATH=/opt/gopath
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
# - CORE_LOGGING_LEVEL=ERROR
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_ID=cli
- CORE_PEER_ADDRESS=bar.org2.example.com:7051
- CORE_PEER_CHAINCODELISTENADDRESS=bar.org2.example.com:7052
- CORE_PEER_LOCALMSPID=Org2MSP
- CORE_PEER_TLS_ENABLED=false
- CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/bar.org2.example.com/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/bar.org2.example.com/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/bar.org2.example.com/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/[email protected]/msp
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
volumes:
- /var/run/:/host/var/run/
- ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/chaincode/go
- ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
- ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
depends_on:
- bar.org2.example.com
extra_hosts:
- orderer0.example.com:192.168.24.208
- orderer1.example.com:192.168.24.209
- orderer2.example.com:192.168.24.210
- bar.org2.example.com:192.168.24.213
The test procedure is the same as for the foo27.org2 node; this node likewise joins the channel, installs the chaincode, and answers queries successfully.
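For completeness, the command sequence on the bar.org2 cli mirrors the one used for foo27.org2 (copy mychannel.block into the cli container first, as before):
peer channel join -b mychannel.block
peer chaincode install -n mychannel -p github.com/hyperledger/fabric/chaincode/go/chaincode_example02 -v 1.0
peer chaincode query -C mychannel -n mychannel -c '{"Args":["query","A"]}'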