K8s Blockchain Deployment

Environment requirements

Hostname     WAN IP
k8s-master   172.26.84.124
k8s-node1    172.26.84.125
gateway      172.26.84.123

I. Deploying the k8s cluster

A 1-master, 1-node cluster keeps the demonstration simple


  • CentOS 7
  • Kubernetes 1.14.1
  • Docker 18.06.3

Two CentOS 7 virtual machines:

Hostname     WAN IP
k8s-master   172.26.84.124
k8s-node1    172.26.84.125

Preparing the cluster environment

k8s-master

Operate over SSH (ssh root@172.26.84.124)

1. Set the hostname
hostnamectl --static set-hostname k8s-master
echo -e "172.26.84.124 k8s-master\n172.26.84.125 k8s-node1" >> /etc/hosts
2. Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
3. Disable SELinux (temporary; to persist across reboots also set SELINUX=disabled in /etc/selinux/config)
setenforce 0
4. Disable swap (swapoff turns it off now; the sed keeps it off after reboot)
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
5. Install Docker
yum install -y yum-utils device-mapper-persistent-data lvm2

Set up the Docker yum repository

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo 

You can list all Docker versions available in the repositories and install a specific one

yum list docker-ce --showduplicates | sort -r

Install

yum install docker-ce-18.06.3.ce-3.el7   

Enable and start Docker

systemctl enable docker && systemctl start docker 

Configure a registry mirror by editing the daemon config file /etc/docker/daemon.json:

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ijdk512y.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

Verify

docker version
6. Install kubelet, kubeadm and kubectl

Configure the Kubernetes yum repository, then run the commands below to install kubelet, kubeadm and kubectl:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install

yum install -y kubelet-1.14.1-0.x86_64
yum install -y kubectl-1.14.1-0.x86_64 kubeadm-1.14.1-0.x86_64

Some CentOS 7 users have reported traffic being routed incorrectly because iptables was bypassed. Make sure net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl config:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Start the k8s-related services

systemctl enable kubelet && systemctl start kubelet
7. Prepare the k8s cluster images

By default, kubeadm pulls the images the cluster depends on from Google's servers, so the download times out and fails. The docker.io mirrorgooglecontainers repository mirrors the Google containers; pull them with the commands below:

docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.14.1
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.14.1
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.14.1
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.14.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.3.10
docker pull coredns/coredns:1.3.1
docker pull thejosan20/flannel:v0.10.0-amd64
docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0

Adjust the version numbers to your actual setup. Use docker tag to re-tag the images to the names kubeadm expects:

docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.14.1 k8s.gcr.io/kube-proxy:v1.14.1
docker tag docker.io/mirrorgooglecontainers/kube-scheduler-amd64:v1.14.1 k8s.gcr.io/kube-scheduler:v1.14.1
docker tag docker.io/mirrorgooglecontainers/kube-apiserver-amd64:v1.14.1 k8s.gcr.io/kube-apiserver:v1.14.1
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager-amd64:v1.14.1 k8s.gcr.io/kube-controller-manager:v1.14.1
docker tag docker.io/mirrorgooglecontainers/etcd-amd64:3.3.10  k8s.gcr.io/etcd:3.3.10
docker tag docker.io/mirrorgooglecontainers/pause:3.1  k8s.gcr.io/pause:3.1
docker tag docker.io/coredns/coredns:1.3.1  k8s.gcr.io/coredns:1.3.1
docker tag docker.io/thejosan20/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag docker.io/mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0  k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0

Remove the original images

docker rmi mirrorgooglecontainers/kube-apiserver-amd64:v1.14.1
docker rmi mirrorgooglecontainers/kube-controller-manager-amd64:v1.14.1
docker rmi mirrorgooglecontainers/kube-scheduler-amd64:v1.14.1
docker rmi mirrorgooglecontainers/kube-proxy-amd64:v1.14.1
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd-amd64:3.3.10
docker rmi coredns/coredns:1.3.1
docker rmi thejosan20/flannel:v0.10.0-amd64
docker rmi mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0  
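The repetitive pull/tag/rmi sequence above can be generated with a short loop; pipe its output to `sh` to actually execute it. This sketch covers only the mirrorgooglecontainers images, whose k8s.gcr.io target name simply drops the `-amd64` suffix (coredns, flannel, and the dashboard keep their own commands above):

```shell
# Emit the pull/tag/rmi commands for each mirrored image.
# Image names and versions are taken from the command list above.
for img in kube-apiserver-amd64:v1.14.1 kube-controller-manager-amd64:v1.14.1 \
           kube-scheduler-amd64:v1.14.1 kube-proxy-amd64:v1.14.1 \
           pause:3.1 etcd-amd64:3.3.10; do
  name=${img%%:*}; tag=${img##*:}
  # Target name: same image, k8s.gcr.io registry, "-amd64" suffix removed.
  target="k8s.gcr.io/${name%-amd64}:${tag}"
  echo "docker pull mirrorgooglecontainers/$img"
  echo "docker tag mirrorgooglecontainers/$img $target"
  echo "docker rmi mirrorgooglecontainers/$img"
done
```

Review the printed commands first, then re-run the loop piped into `sh` once they look right.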

Fabric-related images

vim docker_images.sh

#!/bin/bash

docker pull hyperledger/fabric-ca:1.4.1
docker tag hyperledger/fabric-ca:1.4.1 hyperledger/fabric-ca:latest

docker pull hyperledger/fabric-tools:1.4.1
docker tag hyperledger/fabric-tools:1.4.1 hyperledger/fabric-tools:latest

docker pull hyperledger/fabric-ccenv:1.4.1
docker tag hyperledger/fabric-ccenv:1.4.1 hyperledger/fabric-ccenv:latest

docker pull hyperledger/fabric-orderer:1.4.1 
docker tag hyperledger/fabric-orderer:1.4.1 hyperledger/fabric-orderer:latest

docker pull hyperledger/fabric-peer:1.4.1 
docker tag hyperledger/fabric-peer:1.4.1 hyperledger/fabric-peer:latest

docker pull hyperledger/fabric-javaenv:1.4.1
docker tag hyperledger/fabric-javaenv:1.4.1 hyperledger/fabric-javaenv:latest

docker pull hyperledger/fabric-zookeeper:0.4.15
docker tag hyperledger/fabric-zookeeper:0.4.15 hyperledger/fabric-zookeeper:latest

docker pull hyperledger/fabric-kafka:0.4.15 
docker tag hyperledger/fabric-kafka:0.4.15 hyperledger/fabric-kafka:latest

docker pull hyperledger/fabric-couchdb:0.4.15
docker tag hyperledger/fabric-couchdb:0.4.15 hyperledger/fabric-couchdb:latest

docker pull hyperledger/fabric-baseos:0.4.15 
docker tag hyperledger/fabric-baseos:0.4.15 hyperledger/fabric-baseos:latest

Run the script to download and tag the images

sh -x docker_images.sh
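docker_images.sh follows a single pattern throughout: pull a pinned version, then add a `:latest` alias. The same script body can be generated from a list (versions mirror the script above: Fabric core images at 1.4.1, third-party base images at 0.4.15); pipe the output to `sh` to execute:

```shell
# Emit the pull + retag-to-latest commands for each Fabric image.
for img in fabric-ca:1.4.1 fabric-tools:1.4.1 fabric-ccenv:1.4.1 \
           fabric-orderer:1.4.1 fabric-peer:1.4.1 fabric-javaenv:1.4.1 \
           fabric-zookeeper:0.4.15 fabric-kafka:0.4.15 \
           fabric-couchdb:0.4.15 fabric-baseos:0.4.15; do
  echo "docker pull hyperledger/$img"
  # Strip the ":<version>" suffix to build the :latest alias.
  echo "docker tag hyperledger/$img hyperledger/${img%%:*}:latest"
done
```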

8. Install the NFS utilities, then reboot
yum -y install nfs-utils 
reboot
9. Install Go
wget https://studygolang.com/dl/golang/go1.12.7.linux-amd64.tar.gz
tar -zxvf go1.12.7.linux-amd64.tar.gz -C /usr/local/

vim /etc/profile
export PATH=$PATH:/usr/local/go/bin

source /etc/profile

k8s-node1

Operate over SSH

1. Set the hostname

hostnamectl --static set-hostname k8s-node1
echo -e "172.26.84.124 k8s-master\n172.26.84.125 k8s-node1"  >> /etc/hosts  

The remaining steps are the same as on k8s-master.

Building the k8s cluster

On k8s-master

Download the baasmanager source (all three servers need it)

cd /data
git clone https://gitee.com/liveanddream/baasmanager.git

1. Initialize the master

(--pod-network-cidr is used by the flannel network)

kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=10.244.0.0/16

The initialization output contains a command like the following; it is used to join worker nodes to the cluster (run it on the worker):

kubeadm join 172.26.84.124:6443 --token 5bsm6z.7udt6z3u40ap27xu \
    --discovery-token-ca-cert-hash sha256:4284e4d214c62bf48f64120afa8d436a8653c40db7d464d7aaa34cc478c11d6c

Possible error

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

This is caused by a mismatch between Docker's cgroup driver and the kubelet's; here we change Docker's to match the kubelet's.

docker info | grep Cgroup
Cgroup Driver: cgroupfs

Edit /usr/lib/systemd/system/docker.service, adding the cgroup driver option to ExecStart:

ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd

systemctl daemon-reload
systemctl restart docker
docker info | grep Cgroup
Cgroup Driver: systemd

Run the commands from the init output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

2. Join k8s-node1 to the master's cluster

On k8s-node1, run the join command printed during initialization on the master:

kubeadm join 172.26.84.124:6443 --token 5bsm6z.7udt6z3u40ap27xu \
    --discovery-token-ca-cert-hash sha256:4284e4d214c62bf48f64120afa8d436a8653c40db7d464d7aaa34cc478c11d6c

3. Create the flannel network

On k8s-master

Verify the nodes, then create the network

kubectl get nodes   # at this point the nodes show NotReady
kubectl taint nodes --all node-role.kubernetes.io/master-

Edit flannel/kube-flannel.yml, then create the flannel network

kubectl apply -f kube-flannel.yml

Check the pods

kubectl get pods --all-namespaces

4. Create the K8s Dashboard

k8s-master

Edit dashboard/kubernetes-dashboard.yaml, then create the K8s Dashboard

kubectl create -f kubernetes-dashboard.yaml

Edit dashboard/admin-token.yaml, then create the Dashboard admin user

kubectl create -f admin-token.yaml

Get the login token

kubectl describe secret/$(kubectl get secret -nkube-system |grep admin|awk '{print $1}') -nkube-system
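The command above chains grep and awk to find the admin secret's name before describing it. On a sample secret listing (the secret names here are hypothetical) the pipeline behaves like this:

```shell
# Hypothetical sample of `kubectl get secret -nkube-system` output:
secrets='default-token-x7k2q   kubernetes.io/service-account-token   3   30m
admin-token-9f4zt     kubernetes.io/service-account-token   3   5m'
# Keep only the admin line and print the first column (the secret name),
# which is then passed to `kubectl describe secret/<name>`:
echo "$secrets" | grep admin | awk '{print $1}'   # prints: admin-token-9f4zt
```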

Open https://172.26.84.124:30000/#!/login in a browser and sign in with the token.

5. Configure DNS resolution for the k8s cluster

Get the kube-dns IP address

kubectl get services --all-namespaces | grep kube-dns
The output looks like:
kube-system   kube-dns   ClusterIP   10.96.0.10   <none>   53/UDP,53/TCP,9153/TCP   26m

This gives the kube-dns IP: 10.96.0.10
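Extracting the IP can also be scripted: on a service line of this shape, the ClusterIP is the fourth whitespace-separated column, so awk can pick it out (shown here on a captured sample line rather than live kubectl output):

```shell
# Sample line as produced by `kubectl get services --all-namespaces | grep kube-dns`:
svc='kube-system   kube-dns   ClusterIP   10.96.0.10   <none>   53/UDP,53/TCP,9153/TCP   26m'
# awk splits on whitespace; field 4 is the ClusterIP:
KUBE_DNS_IP=$(echo "$svc" | awk '{print $4}')
echo "$KUBE_DNS_IP"   # prints: 10.96.0.10
```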

Do this after the k8s cluster is fully set up

To let chaincode containers resolve the peer nodes, DNS parameters must be added to the ExecStart of Docker's unit file on every k8s worker node. Here the kube-dns IP is 10.96.0.10 and the host network's DNS server is 192.168.0.1. On each worker node:

vi /lib/systemd/system/docker.service

Append after the ExecStart parameters:

--dns=10.96.0.10 --dns=192.168.0.1 --dns-search default.svc.cluster.local --dns-search svc.cluster.local --dns-opt ndots:2 --dns-opt timeout:2 --dns-opt attempts:2

Restart Docker

systemctl daemon-reload && systemctl restart docker 
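Editing docker.service by hand on every worker is error-prone; a sed one-liner can append the flags to the ExecStart line instead. This sketch assumes ExecStart occupies a single line, as in the stock unit file, and demonstrates on a throw-away copy (use /lib/systemd/system/docker.service for real, and grep the result before restarting Docker):

```shell
# Throw-away copy to demonstrate on:
cat > /tmp/docker.service <<'EOF'
[Service]
ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd
EOF

# '&' in the replacement re-inserts the matched ExecStart line, then appends the DNS flags:
sed -i 's|^ExecStart=.*|& --dns=10.96.0.10 --dns=192.168.0.1 --dns-search default.svc.cluster.local --dns-search svc.cluster.local --dns-opt ndots:2 --dns-opt timeout:2 --dns-opt attempts:2|' /tmp/docker.service

grep '^ExecStart' /tmp/docker.service
```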

II. Deploying the blockchain

1. Deploy baas-kubeengine

  • k8s-master

Copy the kubeconfig into kubeconfig/config

[root@k8s-master baas-kubeengine]# pwd
/data/baasmanager/baas-kubeengine

cp $HOME/.kube/config kubeconfig/config

Edit the config file keconfig.yaml

vi keconfig.yaml and set the path to the actual location:

BaasKubeMasterConfig: /data/baasmanager/baas-kubeengine/kubeconfig/config

Run

nohup go run main.go &

Check nohup.out for errors

Possible failure: package google.golang.org/grpc: unrecognized import path "google.golang.org/grpc" (https fetch: Get https://google.golang.org/grpc?go-get=1: dial tcp 216.239.37.1:443: i/o timeout)

This error means the grpc package could not be fetched; install it through a Go module proxy:

export GO111MODULE=on
export GOPROXY=https://goproxy.io
go env   # check the related environment variables

Re-run go run main.go

2. Deploy baas-fabricengine and NFS

k8s-node1

cd /data
git clone https://gitee.com/liveanddream/baasmanager.git
cd /data/baasmanager
mkdir baas
cp  -ar baas-template baas
cp -ar baas-nfsshared baas

Install NFS

yum -y install nfs-utils rpcbind 
vim /etc/exports 
/data/baasmanager/baas-nfsshared *(rw,sync,no_root_squash,no_all_squash)

exportfs -r   # apply the export configuration
service rpcbind start && service nfs start   # start the rpcbind and nfs services

Test NFS; if the export is listed, it is working

Test locally
showmount -e localhost

Test from k8s-master
showmount -e 172.26.84.125

Edit the feconfig.yaml file

cd /data/baasmanager/baas-fabricengine

vim feconfig.yaml

# fabric engine port
BaasFabricEnginePort: 4991
# baas root directory
BaasRootPath: /data/baasmanager
# nfs server ip
BaasNfsServer: 172.26.84.125
# k8s engine address
BaasKubeEngine: http://172.26.84.124:5991
# nfs shared directory under the baas root
BaasNfsShared: baas-nfsshared
# fabric k8s template directory under the baas root
BaasTemplate: baas-template
# src directory under gopath where chaincode is saved
BaasChaincodeGithub: github.com/baaschaincodes
# orderer consensus parameters
OrdererBatchTimeout: 2s
OrdererMaxMessageCount: 500
OrdererAbsoluteMaxBytes: 99 MB
OrdererPreferredMaxBytes: 512 KB

go build ./main.go

3. Install baas-gateway

Deploy on the gateway host.
Download the baasmanager source:

cd /data
git clone https://gitee.com/liveanddream/baasmanager.git

Docker and Golang environments need to be installed first

Install MySQL

docker run -p 3306:3306 --name apimysql \
           -e MYSQL_ROOT_PASSWORD=123456 \
           -d mysql:5.7 

Initialize MySQL with mysql.sql, and update dbconfig.yaml to match.

Import the mysql.sql data into the database:

mysql -uroot -p123456 -h 127.0.0.1 < mysql.sql

Edit the config file gwconfig.yaml

# gateway engine port
BaasGatewayPort: 6991
# fabric engine address
BaasFabricEngine: http://172.26.84.125:4991
# db config
BaasGatewayDbconfig: /data/baasmanager/baas-gateway/dbconfig.yaml

Run baas-gateway

go build ./main.go       # fetch dependencies and build
nohup go run main.go &   # run in the background

4. Deploy baas-frontend

Deploy on the gateway host

cd /data/baasmanager/baas-frontend

Install Node.js

curl --silent --location https://rpm.nodesource.com/setup_10.x | bash -
yum install -y nodejs
npm install -g cnpm --registry=https://registry.npm.taobao.org
npm install
npm -v

Build the frontend for production

npm run build:prod

Copy the dist folder produced by the build to /usr/local/nginx/baas (renaming it in the process):

cp -r dist /usr/local/nginx/baas

Configure the reverse proxy in nginx.conf (change the baas-gateway address as needed)

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  logformat  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for" '
                      '"[$request_time]" "[$upstream_response_time]" '
                      '"[$connection]" "[$connection_requests]" '
                      '"$http_imei" "$http_mobile" "$http_type" "$http_key" "$cookie_sfpay_jsessionid"';
    access_log  /var/log/nginx/access.log logformat;

    sendfile        on;
    #tcp_nopush     on;
    underscores_in_headers on;

    keepalive_timeout  65;
    proxy_connect_timeout 120;
    proxy_read_timeout 120;
    proxy_send_timeout 60;
    proxy_buffer_size 16k;
    proxy_buffers 4 64k;
    proxy_busy_buffers_size 128k;
    proxy_temp_file_write_size 128k;
    proxy_temp_path /tmp/temp_dir;
    proxy_cache_path /tmp/cache levels=1:2 keys_zone=cache_one:200m inactive=1d max_size=30g;

    client_header_buffer_size 12k;
    open_file_cache max=204800 inactive=65s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 1;

    gzip  on;
    gzip_types       text/plain application/x-javascript text/css application/xml text/javascript application/x-httpd-php image/jpeg image/gif image/png image/jpg;
    # baas-gateway address
    upstream baasapi {
        server 127.0.0.1:6991;
    }

    # HTTP server
    #
    server {
        listen       8080;
        server_name  baasadmin;

        location /nginx_status {
                stub_status on;
                access_log off;
        }
        location /api/{
            proxy_pass  http://baasapi/api/;
            proxy_set_header  X-Real-IP  $remote_addr;
            proxy_set_header Host $host;

        }
        location /dev-api/{
            proxy_pass  http://baasapi/api/;
            proxy_set_header  X-Real-IP  $remote_addr;
            proxy_set_header Host $host;

        }
        location /stage-api/{
            proxy_pass  http://baasapi/api/;
            proxy_set_header  X-Real-IP  $remote_addr;
            proxy_set_header Host $host;

        }

        location / {
            root   baas;
            index  index.html index.htm;
        }

        location ~ ^/favicon\.ico$ {
            root   baas;
        }

    }
}

Open http://IP:8080 in a browser.
Account: admin   Password: 123456