The Solo mode deployed earlier has only a single orderer service. That is a centralized structure: if the orderer fails, the whole blockchain network goes down. For stable operation in production, the orderer service must run as a cluster. Hyperledger Fabric implements orderer clustering with Kafka; the Kafka-based ordering service is considered semi-centralized.
11.2 Kafka Network Topology
In Kafka mode, a Kafka cluster together with a ZooKeeper cluster guarantees data consistency and implements the ordering function. The network topology is shown below:
Kafka mode consists of the orderer services, the Kafka cluster, and the ZooKeeper cluster. The orderer services do not communicate with one another; each talks only to the Kafka cluster, and the Kafka cluster is interconnected with ZooKeeper.
When a peer in the Fabric network receives a transaction request from a client, it forwards the transaction to the orderer service it is connected to, and the orderer cluster performs the ordering.
11.3 Kafka Runtime Configuration
This Kafka production deployment example uses three orderer services, four Kafka brokers, three ZooKeeper nodes, and four peers, on eleven servers in total; the service running on each server is listed in the table below:
Kafka needs at least four brokers to form a crash-fault-tolerant cluster. With four brokers, one broker crash can be tolerated: after one broker stops, channels can still be written to and read from, and new channels can still be created.
As the comments in the configuration note, the minimum number of in-sync replicas for a write must be greater than 1, i.e. at least 2;
likewise, the default replication factor (the number of replicas holding each channel's data) must be greater than the minimum in-sync replica count, i.e. at least 3;
so to guarantee fault tolerance, the Kafka cluster needs at least four brokers, which allows one broker to fail.
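The sizing rules above can be written down as a quick arithmetic check. This is only a sketch; the variable names are just labels for the settings that appear later in docker-compose-kafka.yaml.

```shell
# Constraints behind the 4-broker minimum (values from the Kafka compose files).
BROKERS=4
MIN_INSYNC=2        # KAFKA_MIN_INSYNC_REPLICAS
REPL_FACTOR=3       # KAFKA_DEFAULT_REPLICATION_FACTOR
# 1 < min.insync.replicas < default.replication.factor < broker count
[ "$MIN_INSYNC" -gt 1 ] && [ "$REPL_FACTOR" -gt "$MIN_INSYNC" ] && [ "$BROKERS" -gt "$REPL_FACTOR" ] \
  && echo "constraints hold"
# Writes keep succeeding as long as MIN_INSYNC replicas stay alive:
echo "brokers that may crash: $(( REPL_FACTOR - MIN_INSYNC ))"
```

If you change the broker count or replication settings, re-run the check: writes succeed only while at least KAFKA_MIN_INSYNC_REPLICAS replicas are up.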
Base environment installation on all servers
https://blog.csdn.net/Doudou_Mylove/article/details/102454596
centos7.6 (1 CPU, 1 GB RAM, 10 GB disk)
fabric1.4.0
Install dependencies
yum -y install gcc-c++ telnet net-tools vim wget libtool libtool-ltdl-devel
Add host mappings
vim /etc/hosts
192.168.2.215 orderer0.example.com
192.168.2.218 orderer1.example.com
192.168.2.221 orderer2.example.com
192.168.2.220 peer0.org1.example.com
192.168.2.222 peer1.org1.example.com
192.168.2.214 peer0.org2.example.com
192.168.2.219 peer1.org2.example.com
192.168.2.217 kafka0
192.168.2.216 kafka1
192.168.2.236 kafka2
192.168.2.235 kafka3
192.168.2.217 zookeeper0
192.168.2.216 zookeeper1
192.168.2.236 zookeeper2
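The mapping above can be sanity-checked with a short script. In this sketch, hosts.sample and its entries are stand-ins so the snippet is self-contained; on a real server, set HOSTS=/etc/hosts and list every hostname from the table.

```shell
# Sketch: verify that required hostnames appear in the hosts file.
HOSTS=./hosts.sample
cat > "$HOSTS" <<'EOF'
192.168.2.215 orderer0.example.com
192.168.2.217 kafka0
192.168.2.217 zookeeper0
EOF
for h in orderer0.example.com kafka0 zookeeper0; do
  if grep -qw "$h" "$HOSTS"; then echo "$h ok"; else echo "$h MISSING"; fi
done
```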
Create the kafkapeer directory
mkdir -p $GOPATH/src/github.com/hyperledger/
cd $GOPATH/src/github.com/hyperledger/
mkdir -p fabric/kafkapeer
cd fabric/kafkapeer
Open firewall ports
#firewall-cmd --zone=public --add-port=80/tcp --permanent (--permanent makes the rule persist; without it, the rule is lost after a reboot)
firewall-cmd --zone=public --add-port=7050/tcp --permanent
firewall-cmd --zone=public --add-port=7051/tcp --permanent
firewall-cmd --zone=public --add-port=9092/tcp --permanent
firewall-cmd --zone=public --add-port=2181/tcp --permanent
firewall-cmd --zone=public --add-port=2888/tcp --permanent
firewall-cmd --zone=public --add-port=3888/tcp --permanent
firewall-cmd --zone=public --add-port=7052/tcp --permanent
firewall-cmd --zone=public --add-port=7053/tcp --permanent
firewall-cmd --reload
Check the open ports
firewall-cmd --zone=public --list-ports
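The eight --add-port calls above can equivalently be generated in a loop. This sketch only prints the commands; drop the echo to execute them on a host running firewalld.

```shell
# Print the firewall-cmd invocations for every port the deployment uses.
PORTS="7050 7051 7052 7053 9092 2181 2888 3888"
for p in $PORTS; do
  echo "firewall-cmd --zone=public --add-port=${p}/tcp --permanent"
done
echo "firewall-cmd --reload"
```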
#All of the steps above must be performed on every server
Step 1: certificates, genesis block, and channel artifacts
Run the following on peer0.org1.example.com
Extract the downloaded hyperledger-fabric-linux-amd64-1.4.0.tar.gz binary package and copy its bin directory into the kafkapeer directory.
Install the Fabric samples (needed for the Go chaincode (smart contract) under chaincode_example02/go)
cd /usr/local/go/src/github.com/hyperledger
yum -y install libtool libtool-ltdl-devel
wget https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh
chmod +x ./bootstrap.sh
./bootstrap.sh
cp -r fabric-samples/chaincode/chaincode_example02/go/ fabric/kafkapeer/
Prepare to generate the certificates and block configuration files
Create the crypto-config.yaml certificate configuration file and the configtx.yaml file, and place them in the kafkapeer directory.
vim crypto-config.yaml
OrdererOrgs:
- Name: Orderer
Domain: example.com
CA:
Country: US
Province: California
Locality: San Francisco
Specs:
- Hostname: orderer0
- Hostname: orderer1
- Hostname: orderer2
PeerOrgs:
- Name: Org1
Domain: org1.example.com
EnableNodeOUs: true
CA:
Country: US
Province: California
Locality: San Francisco
Template:
Count: 2
Users:
Count: 1
- Name: Org2
Domain: org2.example.com
EnableNodeOUs: true
CA:
Country: US
Province: California
Locality: San Francisco
Template:
Count: 2
Users:
Count: 1
vim configtx.yaml
---
Organizations:
- &OrdererOrg
Name: OrdererOrg
ID: OrdererMSP
MSPDir: crypto-config/ordererOrganizations/example.com/msp
Policies:
Readers:
Type: Signature
Rule: "OR('OrdererMSP.member')"
Writers:
Type: Signature
Rule: "OR('OrdererMSP.member')"
Admins:
Type: Signature
Rule: "OR('OrdererMSP.admin')"
- &Org1
Name: Org1MSP
ID: Org1MSP
MSPDir: crypto-config/peerOrganizations/org1.example.com/msp
Policies:
Readers:
Type: Signature
Rule: "OR('Org1MSP.admin', 'Org1MSP.peer', 'Org1MSP.client')"
Writers:
Type: Signature
Rule: "OR('Org1MSP.admin', 'Org1MSP.client')"
Admins:
Type: Signature
Rule: "OR('Org1MSP.admin')"
AnchorPeers:
- Host: peer0.org1.example.com
Port: 7051
- &Org2
Name: Org2MSP
ID: Org2MSP
MSPDir: crypto-config/peerOrganizations/org2.example.com/msp
Policies:
Readers:
Type: Signature
Rule: "OR('Org2MSP.admin', 'Org2MSP.peer', 'Org2MSP.client')"
Writers:
Type: Signature
Rule: "OR('Org2MSP.admin', 'Org2MSP.client')"
Admins:
Type: Signature
Rule: "OR('Org2MSP.admin')"
AnchorPeers:
- Host: peer0.org2.example.com
Port: 7051
Capabilities:
Global: &ChannelCapabilities
V1_1: true
Orderer: &OrdererCapabilities
V1_1: true
Application: &ApplicationCapabilities
V1_2: true
Application: &ApplicationDefaults
Organizations:
Policies:
Readers:
Type: ImplicitMeta
Rule: "ANY Readers"
Writers:
Type: ImplicitMeta
Rule: "ANY Writers"
Admins:
Type: ImplicitMeta
Rule: "MAJORITY Admins"
Capabilities:
<<: *ApplicationCapabilities
Orderer: &OrdererDefaults
OrdererType: kafka
Addresses:
- orderer0.example.com:7050
- orderer1.example.com:7050
- orderer2.example.com:7050
BatchTimeout: 2s
BatchSize:
MaxMessageCount: 10
AbsoluteMaxBytes: 98 MB
PreferredMaxBytes: 512 KB
Kafka:
Brokers:
- kafka0:9092
- kafka1:9092
- kafka2:9092
- kafka3:9092
Organizations:
Policies:
Readers:
Type: ImplicitMeta
Rule: "ANY Readers"
Writers:
Type: ImplicitMeta
Rule: "ANY Writers"
Admins:
Type: ImplicitMeta
Rule: "MAJORITY Admins"
BlockValidation:
Type: ImplicitMeta
Rule: "ANY Writers"
Capabilities:
<<: *OrdererCapabilities
Channel: &ChannelDefaults
Policies:
Readers:
Type: ImplicitMeta
Rule: "ANY Readers"
Writers:
Type: ImplicitMeta
Rule: "ANY Writers"
Admins:
Type: ImplicitMeta
Rule: "MAJORITY Admins"
Capabilities:
<<: *ChannelCapabilities
Profiles:
TwoOrgsOrdererGenesis:
<<: *ChannelDefaults
Orderer:
<<: *OrdererDefaults
Organizations:
- *OrdererOrg
Consortiums:
SampleConsortium:
Organizations:
- *Org1
- *Org2
TwoOrgsChannel:
Consortium: SampleConsortium
Application:
<<: *ApplicationDefaults
Organizations:
- *Org1
- *Org2
#Note: the YAML merge key must be written as <<: with nothing in front of it; remove any stray backslash (\<<:) before using this file in production.
Generate the keys and certificates
./bin/cryptogen generate --config=./crypto-config.yaml
Generate the genesis block
mkdir channel-artifacts
./bin/configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
Generate the channel configuration transaction
./bin/configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/mychannel.tx -channelID mychannel
#Generate the anchor peer update transactions (not used here)
#./bin/configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors.tx -channelID mychannel -asOrg Org1MSP
#./bin/configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org2MSPanchors.tx -channelID mychannel -asOrg Org2MSP
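Before copying anything to the other servers, it is worth confirming that the expected artifacts exist. The snippet below fabricates the directory with touch purely so the check is self-contained; on the real host, skip those two lines and just run the loop inside kafkapeer.

```shell
# Self-contained stand-in: create empty placeholder artifacts for the check.
mkdir -p ./channel-artifacts
touch ./channel-artifacts/genesis.block ./channel-artifacts/mychannel.tx
# The actual check: both files produced by configtxgen must be present.
for f in genesis.block mychannel.tx; do
  if [ -e "./channel-artifacts/$f" ]; then echo "$f present"; else echo "$f MISSING"; fi
done
```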
Copy the generated files to the other servers
scp -r ../kafkapeer/ 192.168.2.222:/usr/local/go/src/github.com/hyperledger/fabric/
scp -r ../kafkapeer/ 192.168.2.214:/usr/local/go/src/github.com/hyperledger/fabric/
scp -r ../kafkapeer/ 192.168.2.219:/usr/local/go/src/github.com/hyperledger/fabric/
scp -r ../kafkapeer/ 192.168.2.218:/usr/local/go/src/github.com/hyperledger/fabric/
scp -r ../kafkapeer/ 192.168.2.221:/usr/local/go/src/github.com/hyperledger/fabric/
scp -r ../kafkapeer/ 192.168.2.215:/usr/local/go/src/github.com/hyperledger/fabric/
The servers below are the Kafka/ZooKeeper hosts (plus 192.168.2.220, the current host), so these copies are optional
scp -r ../kafkapeer/ 192.168.2.220:/usr/local/go/src/github.com/hyperledger/fabric/
scp -r ../kafkapeer/ 192.168.2.217:/usr/local/go/src/github.com/hyperledger/fabric/
scp -r ../kafkapeer/ 192.168.2.235:/usr/local/go/src/github.com/hyperledger/fabric/
scp -r ../kafkapeer/ 192.168.2.216:/usr/local/go/src/github.com/hyperledger/fabric/
scp -r ../kafkapeer/ 192.168.2.236:/usr/local/go/src/github.com/hyperledger/fabric/
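The scp fan-out above can also be driven by a loop over the target IPs. The commands are printed here rather than executed, since running them needs SSH access to the cluster.

```shell
# Generate the scp commands for every remote server that needs the artifacts.
SRC=../kafkapeer/
DST=/usr/local/go/src/github.com/hyperledger/fabric/
TARGETS="192.168.2.222 192.168.2.214 192.168.2.219 192.168.2.218 192.168.2.221 192.168.2.215"
for ip in $TARGETS; do
  echo "scp -r $SRC $ip:$DST"
done
```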
Deploy the Kafka cluster first
Kafka part
On zookeeper0, create the docker-compose-zookeeper.yaml file and place it in the kafkapeer directory.
vim docker-compose-zookeeper.yaml
version: '2'
services:
zookeeper0:
container_name: zookeeper0
hostname: zookeeper0
image: hyperledger/fabric-zookeeper
restart: always
environment:
- ZOO_MY_ID=1
- ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
ports:
- 2181:2181
- 2888:2888
- 3888:3888
extra_hosts:
- "zookeeper0:192.168.2.217"
- "zookeeper1:192.168.2.216"
- "zookeeper2:192.168.2.236"
- "kafka0:192.168.2.217"
- "kafka1:192.168.2.216"
- "kafka2:192.168.2.236"
- "kafka3:192.168.2.235"
Explanation of the relevant settings:
Docker restart policies
no – never restart the container automatically when it exits (the default)
on-failure[:max-retries] – restart only when the container exits with a non-zero status code, e.g. on-failure:10
always – always restart the container, whatever the exit status
unless-stopped – like always, except that if the container was already stopped when the daemon starts, it is not started
Environment variables
ZOO_MY_ID
the ID of this ZooKeeper node within the cluster; it must be unique in the cluster, range: 1-255
ZOO_SERVERS
the list of servers that make up the ZooKeeper ensemble
each server entry carries two port numbers
the first: used by followers to connect to the Leader
the second: used for Leader election
Three important ZooKeeper ports:
client access port: 2181
follower-to-Leader connection port within the ensemble: 2888
Leader election port within the ensemble: 3888
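The two ports in each ZOO_SERVERS entry can be pulled apart with plain shell parameter expansion, which makes the follower/election split concrete. This standalone sketch uses the same value as the compose file:

```shell
# Split each ZOO_SERVERS entry into its host, follower port, and election port.
ZOO_SERVERS="server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888"
for s in $ZOO_SERVERS; do
  id=${s%%=*}            # e.g. server.1
  rest=${s#*=}           # e.g. zookeeper0:2888:3888
  host=${rest%%:*}
  ports=${rest#*:}
  echo "$id -> host=$host follower=${ports%%:*} election=${ports#*:}"
done
```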
extra_hosts
maps a hostname to the IP address it should resolve to
zookeeper1:192.168.24.201
the name zookeeper1 is then resolved to the IP address 192.168.24.201.
On kafka0, create the docker-compose-kafka.yaml file and place it in the kafkapeer directory.
vim docker-compose-kafka.yaml
version: '2'
services:
kafka0:
container_name: kafka0
hostname: kafka0
image: hyperledger/fabric-kafka
restart: always
environment:
- KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
- KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
- KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
- KAFKA_BROKER_ID=1
- KAFKA_MIN_INSYNC_REPLICAS=2
- KAFKA_DEFAULT_REPLICATION_FACTOR=3
- KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
ports:
- 9092:9092
extra_hosts:
- "zookeeper0:192.168.2.217"
- "zookeeper1:192.168.2.216"
- "zookeeper2:192.168.2.236"
- "kafka0:192.168.2.217"
- "kafka1:192.168.2.216"
- "kafka2:192.168.2.236"
- "kafka3:192.168.2.235"
Configuration notes
Kafka's default port is 9092
Environment variables:
KAFKA_BROKER_ID
a unique non-negative integer that identifies the broker
KAFKA_MIN_INSYNC_REPLICAS
minimum number of in-sync replicas for a write
this value must be smaller than KAFKA_DEFAULT_REPLICATION_FACTOR
KAFKA_DEFAULT_REPLICATION_FACTOR
default replication factor; this value must be smaller than the number of brokers in the cluster
KAFKA_ZOOKEEPER_CONNECT
the set of ZooKeeper nodes to connect to
KAFKA_MESSAGE_MAX_BYTES
maximum size of a message, in bytes
corresponds to Orderer.BatchSize.AbsoluteMaxBytes in configtx.yaml
because every message carries header information, this value must be somewhat larger than AbsoluteMaxBytes; an extra 1 MB is enough
KAFKA_REPLICA_FETCH_MAX_BYTES=103809024
maximum number of message bytes a replica attempts to fetch per channel; it must be at least AbsoluteMaxBytes
KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE
unclean (potentially inconsistent) Leader election
enable: true
disable: false
KAFKA_LOG_RETENTION_MS=-1
maximum retention time for logs; -1 disables time-based log deletion
time-based pruning is left disabled here
KAFKA_HEAP_OPTS
sets the JVM heap size; Kafka's default is 1 GB
-Xmx256M -> maximum heap the JVM may allocate
-Xms128M -> heap allocated initially
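The relationship between AbsoluteMaxBytes (98 MB in configtx.yaml) and KAFKA_MESSAGE_MAX_BYTES (99 MiB) can be checked directly; the 1 MB of headroom mentioned above is exactly what these values leave.

```shell
# Verify the size headroom between the orderer and Kafka message limits.
ABS_MAX=$(( 98 * 1024 * 1024 ))   # Orderer.BatchSize.AbsoluteMaxBytes: 98 MB
MSG_MAX=103809024                  # KAFKA_MESSAGE_MAX_BYTES = 99 * 1024 * 1024
echo "headroom: $(( MSG_MAX - ABS_MAX )) bytes"
[ "$MSG_MAX" -gt "$ABS_MAX" ] && echo "message limit exceeds AbsoluteMaxBytes"
```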
On zookeeper1, create the docker-compose-zookeeper.yaml file and place it in the kafkapeer directory.
vim docker-compose-zookeeper.yaml
version: '2'
services:
zookeeper1:
container_name: zookeeper1
hostname: zookeeper1
image: hyperledger/fabric-zookeeper
restart: always
environment:
- ZOO_MY_ID=2
- ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
ports:
- 2181:2181
- 2888:2888
- 3888:3888
extra_hosts:
- "zookeeper0:192.168.2.217"
- "zookeeper1:192.168.2.216"
- "zookeeper2:192.168.2.236"
- "kafka0:192.168.2.217"
- "kafka1:192.168.2.216"
- "kafka2:192.168.2.236"
- "kafka3:192.168.2.235"
On kafka1, create the docker-compose-kafka.yaml file and place it in the kafkapeer directory.
vim docker-compose-kafka.yaml
version: '2'
services:
kafka1:
container_name: kafka1
hostname: kafka1
image: hyperledger/fabric-kafka
restart: always
environment:
- KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
- KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
- KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
- KAFKA_BROKER_ID=2
- KAFKA_MIN_INSYNC_REPLICAS=2
- KAFKA_DEFAULT_REPLICATION_FACTOR=3
- KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
ports:
- 9092:9092
extra_hosts:
- "zookeeper0:192.168.2.217"
- "zookeeper1:192.168.2.216"
- "zookeeper2:192.168.2.236"
- "kafka0:192.168.2.217"
- "kafka1:192.168.2.216"
- "kafka2:192.168.2.236"
- "kafka3:192.168.2.235"
On zookeeper2, create the docker-compose-zookeeper.yaml file and place it in the kafkapeer directory.
vim docker-compose-zookeeper.yaml
version: '2'
services:
zookeeper2:
container_name: zookeeper2
hostname: zookeeper2
image: hyperledger/fabric-zookeeper
restart: always
environment:
- ZOO_MY_ID=3
- ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
ports:
- 2181:2181
- 2888:2888
- 3888:3888
extra_hosts:
- "zookeeper0:192.168.2.217"
- "zookeeper1:192.168.2.216"
- "zookeeper2:192.168.2.236"
- "kafka0:192.168.2.217"
- "kafka1:192.168.2.216"
- "kafka2:192.168.2.236"
- "kafka3:192.168.2.235"
On kafka2, create the docker-compose-kafka.yaml file and place it in the kafkapeer directory.
vim docker-compose-kafka.yaml
version: '2'
services:
kafka2:
container_name: kafka2
hostname: kafka2
image: hyperledger/fabric-kafka
restart: always
environment:
- KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
- KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
- KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
- KAFKA_BROKER_ID=3
- KAFKA_MIN_INSYNC_REPLICAS=2
- KAFKA_DEFAULT_REPLICATION_FACTOR=3
- KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
ports:
- 9092:9092
extra_hosts:
- "zookeeper0:192.168.2.217"
- "zookeeper1:192.168.2.216"
- "zookeeper2:192.168.2.236"
- "kafka0:192.168.2.217"
- "kafka1:192.168.2.216"
- "kafka2:192.168.2.236"
- "kafka3:192.168.2.235"
On kafka3, create the docker-compose-kafka.yaml file and place it in the kafkapeer directory.
vim docker-compose-kafka.yaml
version: '2'
services:
kafka3:
container_name: kafka3
hostname: kafka3
image: hyperledger/fabric-kafka
restart: always
environment:
- KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
- KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
- KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
- KAFKA_BROKER_ID=4
- KAFKA_MIN_INSYNC_REPLICAS=2
- KAFKA_DEFAULT_REPLICATION_FACTOR=3
- KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
ports:
- 9092:9092
extra_hosts:
- "zookeeper0:192.168.2.217"
- "zookeeper1:192.168.2.216"
- "zookeeper2:192.168.2.236"
- "kafka0:192.168.2.217"
- "kafka1:192.168.2.216"
- "kafka2:192.168.2.236"
- "kafka3:192.168.2.235"
Starting the Kafka cluster
Start the ZooKeeper cluster first
1. Start on server 192.168.2.217
cd /usr/local/go/src/github.com/hyperledger/fabric/kafkapeer
docker-compose -f docker-compose-zookeeper.yaml up -d
2. Start on server 192.168.2.216
cd /usr/local/go/src/github.com/hyperledger/fabric/kafkapeer
docker-compose -f docker-compose-zookeeper.yaml up -d
3. Start on server 192.168.2.236
cd /usr/local/go/src/github.com/hyperledger/fabric/kafkapeer
docker-compose -f docker-compose-zookeeper.yaml up -d
Then start the Kafka cluster
1. Start on server 192.168.2.217
cd /usr/local/go/src/github.com/hyperledger/fabric/kafkapeer
docker-compose -f docker-compose-kafka.yaml up -d
2. Start on server 192.168.2.216
cd /usr/local/go/src/github.com/hyperledger/fabric/kafkapeer
docker-compose -f docker-compose-kafka.yaml up -d
3. Start on server 192.168.2.236
cd /usr/local/go/src/github.com/hyperledger/fabric/kafkapeer
docker-compose -f docker-compose-kafka.yaml up -d
4. Start on server 192.168.2.235
cd /usr/local/go/src/github.com/hyperledger/fabric/kafkapeer
docker-compose -f docker-compose-kafka.yaml up -d
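The start order matters: ZooKeeper on its three hosts first, then Kafka on its four. The per-host commands reduce to the sketch below (printed, not executed):

```shell
# Print the startup command for each compose file, in the required order.
DIR=/usr/local/go/src/github.com/hyperledger/fabric/kafkapeer
for f in docker-compose-zookeeper.yaml docker-compose-kafka.yaml; do
  echo "cd $DIR && docker-compose -f $f up -d"
done
```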
Peer part
On peer0.org1.example.com, create the docker-compose-peer.yaml file and place it in the kafkapeer directory.
vim docker-compose-peer.yaml
version: '2'
services:
peer0.org1.example.com:
container_name: peer0.org1.example.com
hostname: peer0.org1.example.com
image: hyperledger/fabric-peer
environment:
- CORE_PEER_ID=peer0.org1.example.com
- CORE_PEER_ADDRESS=peer0.org1.example.com:7051
- CORE_PEER_CHAINCODELISTENADDRESS=peer0.org1.example.com:7052
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
# the following setting starts chaincode containers on the same
# bridge network as the peers
# https://docs.docker.com/compose/networking/
#- CORE_LOGGING_LEVEL=ERROR
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_GOSSIP_USELEADERELECTION=true
- CORE_PEER_GOSSIP_ORGLEADER=false
- CORE_PEER_PROFILE_ENABLED=true
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: peer node start
volumes:
- /var/run/:/host/var/run/
- ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
- ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls
ports:
- 7051:7051
- 7052:7052
- 7053:7053
extra_hosts:
- "orderer0.example.com:192.168.2.215"
- "orderer1.example.com:192.168.2.218"
- "orderer2.example.com:192.168.2.221"
cli:
container_name: cli
image: hyperledger/fabric-tools
tty: true
environment:
- GOPATH=/opt/gopath
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
# - CORE_LOGGING_LEVEL=ERROR
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_ID=cli
- CORE_PEER_ADDRESS=peer0.org1.example.com:7051
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/[email protected]/msp
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
volumes:
- /var/run/:/host/var/run/
- ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/kafkapeer/chaincode/go
- ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
- ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
extra_hosts:
- "orderer0.example.com:192.168.2.215"
- "orderer1.example.com:192.168.2.218"
- "orderer2.example.com:192.168.2.221"
- "peer0.org1.example.com:192.168.2.220"
- "peer1.org1.example.com:192.168.2.222"
- "peer0.org2.example.com:192.168.2.214"
- "peer1.org2.example.com:192.168.2.219"
On peer0.org2.example.com, create the docker-compose-peer.yaml file and place it in the kafkapeer directory.
vim docker-compose-peer.yaml
# All elements in this file should depend on the docker-compose-base.yaml
# Provided fabric peer node
version: '2'
services:
peer0.org2.example.com:
container_name: peer0.org2.example.com
hostname: peer0.org2.example.com
image: hyperledger/fabric-peer
environment:
- CORE_PEER_ID=peer0.org2.example.com
- CORE_PEER_ADDRESS=peer0.org2.example.com:7051
- CORE_PEER_CHAINCODELISTENADDRESS=peer0.org2.example.com:7052
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org2.example.com:7051
- CORE_PEER_LOCALMSPID=Org2MSP
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
# the following setting starts chaincode containers on the same
# bridge network as the peers
# https://docs.docker.com/compose/networking/
#- CORE_LOGGING_LEVEL=ERROR
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_GOSSIP_USELEADERELECTION=true
- CORE_PEER_GOSSIP_ORGLEADER=false
- CORE_PEER_PROFILE_ENABLED=true
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: peer node start
volumes:
- /var/run/:/host/var/run/
- ./crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/msp:/etc/hyperledger/fabric/msp
- ./crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls:/etc/hyperledger/fabric/tls
ports:
- 7051:7051
- 7052:7052
- 7053:7053
extra_hosts:
- "orderer0.example.com:192.168.2.215"
- "orderer1.example.com:192.168.2.218"
- "orderer2.example.com:192.168.2.221"
cli:
container_name: cli
image: hyperledger/fabric-tools
tty: true
environment:
- GOPATH=/opt/gopath
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
# - CORE_LOGGING_LEVEL=ERROR
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_ID=cli
- CORE_PEER_ADDRESS=peer0.org2.example.com:7051
- CORE_PEER_LOCALMSPID=Org2MSP
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/[email protected]/msp
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
volumes:
- /var/run/:/host/var/run/
- ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/kafkapeer/chaincode/go
- ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
- ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
extra_hosts:
- "orderer0.example.com:192.168.2.215"
- "orderer1.example.com:192.168.2.218"
- "orderer2.example.com:192.168.2.221"
- "peer0.org1.example.com:192.168.2.220"
- "peer1.org1.example.com:192.168.2.222"
- "peer0.org2.example.com:192.168.2.214"
- "peer1.org2.example.com:192.168.2.219"
On peer1.org1.example.com, create the docker-compose-peer.yaml file and place it in the kafkapeer directory.
vim docker-compose-peer.yaml
# All elements in this file should depend on the docker-compose-base.yaml
# Provided fabric peer node
version: '2'
services:
peer1.org1.example.com:
container_name: peer1.org1.example.com
hostname: peer1.org1.example.com
image: hyperledger/fabric-peer
environment:
- CORE_PEER_ID=peer1.org1.example.com
- CORE_PEER_ADDRESS=peer1.org1.example.com:7051
- CORE_PEER_CHAINCODELISTENADDRESS=peer1.org1.example.com:7052
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.example.com:7051
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
# the following setting starts chaincode containers on the same
# bridge network as the peers
# https://docs.docker.com/compose/networking/
#- CORE_LOGGING_LEVEL=ERROR
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_GOSSIP_USELEADERELECTION=true
- CORE_PEER_GOSSIP_ORGLEADER=false
- CORE_PEER_PROFILE_ENABLED=true
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: peer node start
volumes:
- /var/run/:/host/var/run/
- ./crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/msp:/etc/hyperledger/fabric/msp
- ./crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls:/etc/hyperledger/fabric/tls
ports:
- 7051:7051
- 7052:7052
- 7053:7053
extra_hosts:
- "orderer0.example.com:192.168.2.215"
- "orderer1.example.com:192.168.2.218"
- "orderer2.example.com:192.168.2.221"
cli:
container_name: cli
image: hyperledger/fabric-tools
tty: true
environment:
- GOPATH=/opt/gopath
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
# - CORE_LOGGING_LEVEL=ERROR
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_ID=cli
- CORE_PEER_ADDRESS=peer1.org1.example.com:7051
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/[email protected]/msp
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
volumes:
- /var/run/:/host/var/run/
- ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/kafkapeer/chaincode/go
- ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
- ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
extra_hosts:
- "orderer0.example.com:192.168.2.215"
- "orderer1.example.com:192.168.2.218"
- "orderer2.example.com:192.168.2.221"
- "peer0.org1.example.com:192.168.2.220"
- "peer1.org1.example.com:192.168.2.222"
- "peer0.org2.example.com:192.168.2.214"
- "peer1.org2.example.com:192.168.2.219"
On peer1.org2.example.com, create the docker-compose-peer.yaml file and place it in the kafkapeer directory.
vim docker-compose-peer.yaml
# All elements in this file should depend on the docker-compose-base.yaml
# Provided fabric peer node
version: '2'
services:
peer1.org2.example.com:
container_name: peer1.org2.example.com
hostname: peer1.org2.example.com
image: hyperledger/fabric-peer
environment:
- CORE_PEER_ID=peer1.org2.example.com
- CORE_PEER_ADDRESS=peer1.org2.example.com:7051
- CORE_PEER_CHAINCODELISTENADDRESS=peer1.org2.example.com:7052
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org2.example.com:7051
- CORE_PEER_LOCALMSPID=Org2MSP
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
# the following setting starts chaincode containers on the same
# bridge network as the peers
# https://docs.docker.com/compose/networking/
#- CORE_LOGGING_LEVEL=ERROR
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_GOSSIP_USELEADERELECTION=true
- CORE_PEER_GOSSIP_ORGLEADER=false
- CORE_PEER_PROFILE_ENABLED=true
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: peer node start
volumes:
- /var/run/:/host/var/run/
- ./crypto-config/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/msp:/etc/hyperledger/fabric/msp
- ./crypto-config/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls:/etc/hyperledger/fabric/tls
ports:
- 7051:7051
- 7052:7052
- 7053:7053
extra_hosts:
- "orderer0.example.com:192.168.2.215"
- "orderer1.example.com:192.168.2.218"
- "orderer2.example.com:192.168.2.221"
cli:
container_name: cli
image: hyperledger/fabric-tools
tty: true
environment:
- GOPATH=/opt/gopath
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
# - CORE_LOGGING_LEVEL=ERROR
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_ID=cli
- CORE_PEER_ADDRESS=peer1.org2.example.com:7051
- CORE_PEER_LOCALMSPID=Org2MSP
- CORE_PEER_TLS_ENABLED=true
- CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/server.crt
- CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/server.key
- CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt
- CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/[email protected]/msp
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
volumes:
- /var/run/:/host/var/run/
- ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/kafkapeer/chaincode/go
- ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
- ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
extra_hosts:
- "orderer0.example.com:192.168.2.215"
- "orderer1.example.com:192.168.2.218"
- "orderer2.example.com:192.168.2.221"
- "peer0.org1.example.com:192.168.2.220"
- "peer1.org1.example.com:192.168.2.222"
- "peer0.org2.example.com:192.168.2.214"
- "peer1.org2.example.com:192.168.2.219"
Orderer part
On orderer0, create the docker-compose-orderer.yaml file and place it in the kafkapeer directory.
vim docker-compose-orderer.yaml
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
services:
orderer0.example.com:
container_name: orderer0.example.com
image: hyperledger/fabric-orderer
environment:
- ORDERER_GENERAL_LOGLEVEL=debug
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_GENESISMETHOD=file
- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
- ORDERER_GENERAL_LOCALMSPID=OrdererMSP
- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
# enabled TLS
- ORDERER_GENERAL_TLS_ENABLED=true
- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
- ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
- ORDERER_KAFKA_RETRY_LONGTOTAL=100s
- ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
- ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
- ORDERER_KAFKA_VERBOSE=true
- ORDERER_KAFKA_BROKERS=[192.168.2.217:9092,192.168.2.216:9092,192.168.2.236:9092,192.168.2.235:9092]
working_dir: /opt/gopath/src/github.com/hyperledger/fabric
command: orderer
volumes:
- ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
- ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/msp:/var/hyperledger/orderer/msp
- ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/tls/:/var/hyperledger/orderer/tls
ports:
- 7050:7050
extra_hosts:
- "kafka0:192.168.2.217"
- "kafka1:192.168.2.216"
- "kafka2:192.168.2.236"
- "kafka3:192.168.2.235"
Details
Environment variables
ORDERER_KAFKA_RETRY_LONGINTERVAL
interval between retries in the long phase, in seconds
ORDERER_KAFKA_RETRY_LONGTOTAL
total duration of the long retry phase, in seconds
ORDERER_KAFKA_RETRY_SHORTINTERVAL
interval between retries in the short phase, in seconds
ORDERER_KAFKA_RETRY_SHORTTOTAL
total duration of the short retry phase, in seconds
ORDERER_KAFKA_VERBOSE
log the orderer's interaction with Kafka; enable: true, disable: false
ORDERER_KAFKA_BROKERS
the set of Kafka broker endpoints
About the retry timing
the orderer first retries every ORDERER_KAFKA_RETRY_SHORTINTERVAL, for a total of ORDERER_KAFKA_RETRY_SHORTTOTAL
if that fails to reconnect, it then retries every ORDERER_KAFKA_RETRY_LONGINTERVAL, for a total of ORDERER_KAFKA_RETRY_LONGTOTAL
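Plugging in the values from the compose file above gives a concrete retry budget:

```shell
# Retry schedule implied by ORDERER_KAFKA_RETRY_* (values from the compose file).
SHORT_INT=1;  SHORT_TOT=30    # SHORTINTERVAL=1s, SHORTTOTAL=30s
LONG_INT=10;  LONG_TOT=100    # LONGINTERVAL=10s, LONGTOTAL=100s
echo "short phase: $(( SHORT_TOT / SHORT_INT )) attempts, one per ${SHORT_INT}s"
echo "long phase:  $(( LONG_TOT / LONG_INT )) attempts, one per ${LONG_INT}s"
```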
On orderer1, create the docker-compose-orderer.yaml file and place it in the kafkapeer directory.
vim docker-compose-orderer.yaml
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'
services:
orderer1.example.com:
container_name: orderer1.example.com
image: hyperledger/fabric-orderer
environment:
- ORDERER_GENERAL_LOGLEVEL=debug
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_GENESISMETHOD=file
- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
- ORDERER_GENERAL_LOCALMSPID=OrdererMSP
- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
# enabled TLS
- ORDERER_GENERAL_TLS_ENABLED=true
- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
- ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
- ORDERER_KAFKA_RETRY_LONGTOTAL=100s
- ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
- ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
- ORDERER_KAFKA_VERBOSE=true
- ORDERER_KAFKA_BROKERS=[192.168.2.217:9092,192.168.2.216:9092,192.168.2.236:9092,192.168.2.235:9092]
working_dir: /opt/gopath/src/github.com/hyperledger/fabric
command: orderer
volumes:
- ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
- ./crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/msp:/var/hyperledger/orderer/msp
- ./crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/tls/:/var/hyperledger/orderer/tls
ports:
- 7050:7050
extra_hosts:
- "kafka0:192.168.2.217"
- "kafka1:192.168.2.216"
- "kafka2:192.168.2.236"
- "kafka3:192.168.2.235"
On orderer2, create the docker-compose-orderer.yaml file and place it in the kafkapeer directory.
vim docker-compose-orderer.yaml
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '2'

services:

  orderer2.example.com:
    container_name: orderer2.example.com
    image: hyperledger/fabric-orderer
    environment:
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
      - ORDERER_KAFKA_BROKERS=[192.168.2.217:9092,192.168.2.216:9092,192.168.2.236:9092,192.168.2.235:9092]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/tls/:/var/hyperledger/orderer/tls
    ports:
      - 7050:7050
    extra_hosts:
      - "kafka0:192.168.2.217"
      - "kafka1:192.168.2.216"
      - "kafka2:192.168.2.236"
      - "kafka3:192.168.2.235"
Starting the orderer cluster
Start the orderer nodes
1. On server 192.168.2.215:
cd /usr/local/go/src/github.com/hyperledger/fabric/kafkapeer
docker-compose -f docker-compose-orderer.yaml up -d
2. On server 192.168.2.218:
cd /usr/local/go/src/github.com/hyperledger/fabric/kafkapeer
docker-compose -f docker-compose-orderer.yaml up -d
3. On server 192.168.2.221:
cd /usr/local/go/src/github.com/hyperledger/fabric/kafkapeer
docker-compose -f docker-compose-orderer.yaml up -d
Start the peer nodes
1. On server 192.168.2.220:
cd /usr/local/go/src/github.com/hyperledger/fabric/kafkapeer
docker-compose -f docker-compose-peer.yaml up -d
2. On server 192.168.2.214:
cd /usr/local/go/src/github.com/hyperledger/fabric/kafkapeer
docker-compose -f docker-compose-peer.yaml up -d
3. On server 192.168.2.222:
cd /usr/local/go/src/github.com/hyperledger/fabric/kafkapeer
docker-compose -f docker-compose-peer.yaml up -d
4. On server 192.168.2.219:
cd /usr/local/go/src/github.com/hyperledger/fabric/kafkapeer
docker-compose -f docker-compose-peer.yaml up -d
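After bringing the containers up, it is worth confirming on each host that the expected container is actually running before moving on. A minimal sketch (check_container is a hypothetical helper; it only wraps docker ps):

```shell
# Report whether a named container is running on this host.
# $1 = expected container name, e.g. orderer0.example.com or peer0.org1.example.com
check_container() {
  if docker ps --format '{{.Names}}' | grep -qx "$1"; then
    echo "$1 is running"
  else
    echo "$1 is NOT running"
  fi
}
```

For example, on 192.168.2.215 run check_container orderer0.example.com, and on 192.168.2.220 run check_container peer0.org1.example.com.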
Verifying the Kafka ordering service
Operations on peer0.org1.example.com
1. Prepare the chaincode for deployment
Copy the Go file (the chaincode) from the fabric-samples chaincode_example02 directory into the kafkapeer/chaincode/go directory. Only the peer nodes need the chaincode.
cd /usr/local/go/src/github.com/hyperledger/fabric-samples/chaincode/chaincode_example02/
cp -r go/ ../../../fabric/kafkapeer/chaincode/
2. Start the Fabric network
1) Enter the cli container
docker exec -it cli bash
Create the channel
ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
peer channel create -o orderer0.example.com:7050 -c mychannel -f ./channel-artifacts/mychannel.tx --tls --cafile $ORDERER_CA
The following error may be reported:
Error: got unexpected status: BAD_REQUEST -- error validating channel creation transaction for new channel 'mychannel', could not succesfully apply update to template configuration: error authorizing update: error validating DeltaSet: policy for [Group] /Channel/Application not satisfied: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Admins' sub-policies to be satisfied
Fix: use Fabric 1.4.0 instead of 1.4.3.
Join the peer to the channel
peer channel join -b mychannel.block
Save mychannel.block to the host
exit
docker cp d98a19fd6da7:/opt/gopath/src/github.com/hyperledger/fabric/peer/mychannel.block /opt/gopath/src/github.com/hyperledger/fabric/kafkapeer
Copy mychannel.block to the other peer nodes
scp mychannel.block 192.168.2.222:/opt/gopath/src/github.com/hyperledger/fabric/kafkapeer/
scp mychannel.block 192.168.2.214:/opt/gopath/src/github.com/hyperledger/fabric/kafkapeer/
scp mychannel.block 192.168.2.219:/opt/gopath/src/github.com/hyperledger/fabric/kafkapeer/
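On each of those peer servers, the block then has to be fed into that host's cli container so the local peer can join the channel. A sketch (join_channel is a hypothetical helper; the container path matches the cli working directory used above):

```shell
# Copy the channel block into the cli container, then join the local peer.
# $1 = path to mychannel.block on the host
join_channel() {
  docker cp "$1" cli:/opt/gopath/src/github.com/hyperledger/fabric/peer/ &&
  docker exec cli peer channel join -b "$(basename "$1")"
}
```

For example: join_channel /opt/gopath/src/github.com/hyperledger/fabric/kafkapeer/mychannel.block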
Install and run the chaincode
1) Install the chaincode
docker exec -it cli bash
peer chaincode install -n mycc -p github.com/hyperledger/fabric/kafkapeer/chaincode/go/ -v 1.0
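Before instantiating, you can confirm the install took effect; Fabric 1.4's cli supports listing installed chaincodes (sketched as a small helper run inside the cli container, where mycc at version 1.0 should appear):

```shell
# List chaincodes installed on this peer; "mycc" version 1.0 should be in the output.
list_installed() {
  peer chaincode list --installed
}
```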
2) Instantiate the chaincode
Instantiation initializes the ledger with a = 200 and b = 400.
ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
peer chaincode instantiate -o orderer0.example.com:7050 --tls --cafile $ORDERER_CA -C mychannel -n mycc -v 1.0 -c '{"Args":["init","a","200","b","400"]}' -P "OR ('Org1MSP.peer','Org2MSP.peer')"
The following error may be reported:
Error: could not assemble transaction, err proposal response was not successful, error code 500, msg error starting container: error starting container: API error (400): OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"chaincode\": executable file not found in $PATH": unknown
Cause: the chaincode Go file was faulty. Fix: replace the Go file under example02 with the one from fabric-samples.
3) Query a on the peer; it should return 200
peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
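With the query working, the whole ordering path can be exercised end to end by submitting an invoke and querying again. A sketch using the standard example02 invoke arguments, with $ORDERER_CA exported as above (invoke_and_query is a hypothetical helper):

```shell
# Submit a transfer of 50 from a to b through the Kafka-backed orderer,
# then query a again; after the transaction commits, a should read 150.
invoke_and_query() {
  peer chaincode invoke -o orderer0.example.com:7050 --tls --cafile "$ORDERER_CA" \
    -C mychannel -n mycc -c '{"Args":["invoke","a","b","50"]}'
  peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
}
```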