Author: Liu Kuan. Original post: https://blog.csdn.net/liukuan73/article/details/79634524
Harbor is an enterprise-class registry server for storing and distributing Docker images.
For image storage itself, Harbor uses the official docker registry service (renamed distribution as of v2). On top of docker distribution, Harbor adds security, access control, and management features to meet enterprise requirements for an image registry. Harbor organizes its components in docker-compose format and starts and stops them with the docker-compose tool.
The docker registry can use either local storage or S3; Harbor adds user and permission management, image replication, and other features on top of it that make the registry more effective to use. Harbor's image replication works through the docker registry API, which hides the tedious low-level file operations; this reuses existing docker registry functionality instead of reinventing the wheel, and it also avoids conflict and consistency problems.
Its main components are the proxy (nginx), ui, adminserver, jobservice, registry, and log collector, backed by a mysql database (plus clair and redis in some setups); see the docker-compose.yml later in this post.
Note: Harbor officially supports an HA deployment mode starting with version 1.4.0; for details see: https://github.com/vmware/harbor/blob/master/docs/high_availability_installation_guide.md
Harbor's components can be divided by whether they hold state:
Stateless components: proxy, ui, adminserver, jobservice, registry, log collector, and clair.
Stateful components: the Harbor database (mariadb), the clair database (postgresql), and redis (for session sharing).
The basic HA idea is: run identical copies of the stateless components on several nodes behind a keepalived VIP, and move the stateful components out to shared, highly available external services that every node points to.
Note: for ease of verification, the official document simply starts the stateful components as single-instance containers, like this:
docker run --name redis-server -p 6379:6379 -d redis
docker run -d --restart=always -e MYSQL_ROOT_PASSWORD=123456 -v /dcos/harbor-ha/mariadb:/var/lib/mysql:z -p 3306:3306 --name mariadb vmware/mariadb-photon:10.2.10
docker run -d -e POSTGRES_PASSWORD="123456" -p 5432:5432 postgres:9.6
Here, however, I deploy the redis-ha, mariadb, and postgresql applications with helm; for how to use helm, see: http://blog.csdn.net/liukuan73/article/details/79319900
<1> Create persistent storage
The storageclass used by the PVCs:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: two-replica-glusterfs-sc
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Retain
parameters:
  gidMax: "50000"
  gidMin: "40000"
  resturl: http://10.142.21.23:30088
  volumetype: replicate:2
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "123456"
  # secretNamespace: "default"
  # secretName: "heketi-secret"
Create a PVC for mariadb:
vim mariadb.pvc-sc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mariadb-pvc
spec:
  storageClassName: two-replica-glusterfs-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
kubectl create -f mariadb.pvc-sc.yaml
Create a PVC for postgresql:
vim postgresql.pvc-sc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgresql-pvc
spec:
  storageClassName: two-replica-glusterfs-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
kubectl create -f postgresql.pvc-sc.yaml
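Before moving on, it is worth checking that the glusterfs provisioner actually bound both claims (a quick sanity check, not from the original post):
kubectl get pvc mariadb-pvc postgresql-pvc
# both should show STATUS "Bound" with the two-replica-glusterfs-sc storageclass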
<2> Deploy redis-ha, mariadb, and postgresql with helm:
cp -r chart-master/stable/redis-ha /dcos/appstore/app-repo/local-charts/
cp -r chart-master/stable/mariadb /dcos/appstore/app-repo/local-charts/
cp -r chart-master/stable/postgresql /dcos/appstore/app-repo/local-charts/
cd /dcos/appstore/app-repo/local-charts
helm package redis-ha --save=false
helm package mariadb --save=false
helm package postgresql --save=false
helm repo index --url=http://10.142.21.21:8879 .
helm repo update
helm install --name austin-redis --set rbac.create=false,nodeSelector."node-type"=master,tolerations[0].key=master,tolerations[0].operator=Equal,tolerations[0].value=yes,tolerations[0].effect=NoSchedule local-charts/redis-ha
helm install --name austin-mariadb --set mariadbRootPassword=root,persistence.existingClaim=mariadb-pvc local-charts/mariadb
helm install --name austin-postgresql --set postgresUser=root,postgresPassword=root,persistence.existingClaim=postgresql-pvc,nodeSelector."node-type"=master,tolerations[0].key=master,tolerations[0].operator=Equal,tolerations[0].value=yes,tolerations[0].effect=NoSchedule local-charts/postgresql
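To confirm the releases came up, and that the service names referenced by the configmap in step <3> below exist (a sanity check, not from the original post):
helm ls
kubectl get svc austin-mariadb austin-postgresql austin-redis-redis-ha-master-svc
kubectl get pods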
<3> Create layer-4 forwarding so the stateful applications are reachable from outside the cluster
Because the Harbor components are started with docker-compose, they do not run inside the k8s cluster, so we also need layer-4 TCP forwarding for redis-ha, mariadb, and postgresql through the ingress-controller (started in hostNetwork mode) so that they can be reached from outside the cluster; see: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md. The corresponding configmap is as follows:
tcp-services-configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: default
data:
  3306: "default/austin-mariadb:3306"
  6379: "default/austin-redis-redis-ha-master-svc:6379"
  5432: "default/austin-postgresql:5432"
Next, initialize the Harbor database. In the directory where the offline installer package was extracted, import registry.sql into mariadb (the commands below target a docker container named mariadb, as in the quick-start above; adapt them if your mariadb runs in k8s):
docker cp ha/registry.sql mariadb:/tmp/
docker exec -it mariadb /bin/bash
mysql -uroot -proot --default-character-set=utf8
create database if not exists registry DEFAULT CHARACTER SET = 'UTF8' DEFAULT COLLATE 'utf8_general_ci';
use registry
source /tmp/registry.sql
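To confirm the import, still inside the same mysql session, list the tables (a quick check, not part of the original post); Harbor's schema tables should appear:
show tables;
exit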
Now install and configure keepalived on each Harbor node to provide the VIP.
<1> Install the build dependencies:
yum install -y gcc openssl-devel popt-devel
<2> Fetch the source tarball:
wget http://www.keepalived.org/software/keepalived-1.4.2.tar.gz
tar -zxvf keepalived-1.4.2.tar.gz
<3> Build and install:
cd keepalived-1.4.2
mkdir /usr/local/keepalived
./configure --prefix=/usr/local/keepalived
make && make install
<4> Configure keepalived:
cp keepalived/etc/init.d/keepalived /etc/init.d/
vim /etc/keepalived/keepalived.conf
For its contents, see: https://github.com/vmware/harbor/blob/release-1.4.0/make/ha/sample/active_active/keepalived_active_active.conf
<5> Set up the health-check script:
vim /usr/local/bin/check.sh
For its contents, see: https://github.com/vmware/harbor/blob/release-1.4.0/make/ha/sample/active_active/check.sh
chmod +x /usr/local/bin/check.sh
<6> Enable IP forwarding:
vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1
sysctl -p
<7> Restart keepalived and enable it at boot:
systemctl restart keepalived
systemctl enable keepalived
<8> Repeat the steps above to configure keepalived on the second node, setting priority in /etc/keepalived/keepalived.conf to 20; the node with the higher number acquires the VIP first.
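For orientation, priority lives in the vrrp_instance block of keepalived.conf. A minimal fragment (the NIC, router id, and VIP below are assumptions; the official sample linked above is more elaborate and also wires in check.sh):
vrrp_instance VI_1 {
    state BACKUP
    interface eth0               # assumption: your actual NIC
    virtual_router_id 51
    priority 20                  # node 1 uses a higher value, e.g. 30; the higher number wins the VIP
    advert_int 1
    virtual_ipaddress {
        10.142.21.100            # assumption: your VIP
    }
}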
Next, configure and start Harbor on the first node.
<1> Edit harbor.cfg, changing the following entries:
hostname =
db_host =
redis_url = :6379
clair_db_host =
clair_db_password = 123456
clair_db_port = 5432
clair_db_username = postgres
clair_db = postgres
registry_storage_provider_name = filesystem
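For illustration only, here is how these entries might look once filled in. All addresses below are hypothetical: hostname is the keepalived VIP, and db_host/redis_url/clair_db_host point at the ingress node that forwards ports 3306/6379/5432 (step <3> above):
hostname = 10.142.21.100
db_host = 10.142.21.21
db_password = root
redis_url = 10.142.21.21:6379
clair_db_host = 10.142.21.21
clair_db_password = 123456
clair_db_port = 5432
clair_db_username = postgres
clair_db = postgres
registry_storage_provider_name = filesystem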
Note: the registry backend here is filesystem, so the backend storage directory of both registries, /dcos/harbor/registry (chown 10000:10000 /dcos/harbor/registry), must be mounted on the same NFS share. For deploying and using NFS, see: http://blog.csdn.net/liukuan73/article/details/79649042
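A sketch of that shared-storage step on each Harbor node (the NFS server address and export path are assumptions):
mkdir -p /dcos/harbor/registry
mount -t nfs 10.142.21.30:/export/harbor-registry /dcos/harbor/registry
chown 10000:10000 /dcos/harbor/registry    # uid 10000 is the user the registry container runs as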
<2> Edit ha/docker-compose.yml
cp docker-compose.yml make/ha/
vim ha/docker-compose.yml (delete the mysql-related entries; everything else is the same as a single-node Harbor deployment.)
The docker-compose.yml contents are as follows:
version: '2'
services:
  log:
    image: vmware/harbor-log:v1.4.0
    container_name: harbor-log
    restart: always
    volumes:
      - /dcos/harbor/log/harbor/:/var/log/docker/:z
      - ./common/config/log/:/etc/logrotate.d/:z
    ports:
      - 127.0.0.1:1514:10514
    networks:
      - harbor
  registry:
    image: vmware/registry-photon:v2.6.2-v1.4.0
    container_name: registry
    restart: always
    volumes:
      - /dcos/harbor/registry:/storage:z
      - ./common/config/registry/:/etc/registry/:z
    networks:
      - harbor
    environment:
      - GODEBUG=netdns=cgo
    command:
      ["serve", "/etc/registry/config.yml"]
    depends_on:
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "registry"
  adminserver:
    image: vmware/harbor-adminserver:v1.4.0
    container_name: harbor-adminserver
    env_file:
      - ./common/config/adminserver/env
    restart: always
    volumes:
      - /dcos/harbor/adminserver/data/config/:/etc/adminserver/config/:z
      - /dcos/harbor/adminserver/data/secretkey:/etc/adminserver/key:z
      - /dcos/harbor/adminserver/data/:/data/:z
    networks:
      - harbor
    depends_on:
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "adminserver"
  ui:
    image: vmware/harbor-ui:v1.4.0
    container_name: harbor-ui
    env_file:
      - ./common/config/ui/env
    restart: always
    volumes:
      - ./common/config/ui/app.conf:/etc/ui/app.conf:z
      - ./common/config/ui/private_key.pem:/etc/ui/private_key.pem:z
      - ./common/config/ui/certificates/:/etc/ui/certificates/:z
      - /dcos/harbor/ui/secretkey:/etc/ui/key:z
      - /dcos/harbor/ui/ca_download/:/etc/ui/ca/:z
      - /dcos/harbor/ui/psc/:/etc/ui/token/:z
    networks:
      - harbor
    depends_on:
      - log
      - adminserver
      - registry
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "ui"
  jobservice:
    image: vmware/harbor-jobservice:v1.4.0
    container_name: harbor-jobservice
    env_file:
      - ./common/config/jobservice/env
    restart: always
    volumes:
      - /dcos/harbor/job_logs:/var/log/jobs:z
      - ./common/config/jobservice/app.conf:/etc/jobservice/app.conf:z
      - /dcos/harbor/secretkey:/etc/jobservice/key:z
    networks:
      - harbor
    depends_on:
      - ui
      - adminserver
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "jobservice"
  proxy:
    image: vmware/nginx-photon:v1.4.0
    container_name: nginx
    restart: always
    volumes:
      - ./common/config/nginx:/etc/nginx:z
    networks:
      - harbor
    ports:
      - 80:80
      - 443:443
      - 4443:4443
    depends_on:
      - registry
      - ui
      - log
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "proxy"
networks:
  harbor:
    external: false
The ha/docker-compose.clair.yml contents are as follows:
version: '2'
services:
  ui:
    networks:
      harbor-clair:
        aliases:
          - harbor-ui
  jobservice:
    networks:
      - harbor-clair
  registry:
    networks:
      - harbor-clair
  clair:
    networks:
      - harbor-clair
    container_name: clair
    image: vmware/clair-photon:v2.0.1-v1.4.0
    restart: always
    cpu_quota: 150000
    depends_on:
      - log
    volumes:
      - ./common/config/clair:/config
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://127.0.0.1:1514"
        tag: "clair"
networks:
  harbor-clair:
    external: false
Note: when install.sh runs, docker-compose.yml and docker-compose.clair.yml are copied from the ha directory up to the parent directory before use, so the files to edit are the ones under the ha directory.
<3> Start harbor on node 1:
./install.sh --ha --with-clair
<4> Add iptables rules so that traffic addressed to the VIP is redirected to the local nginx ports:
iptables -t nat -A PREROUTING -p tcp -d <vip> --dport 80 -j REDIRECT
iptables -t nat -A PREROUTING -p tcp -d <vip> --dport 443 -j REDIRECT
<5> Archive the harbor directory:
tar -cvf harbor_ha.tar harbor
<6> Copy the archive to the next Harbor node:
scp harbor_ha.tar root@harbor-2:/dcos/install-addons/
Then, on the second node (harbor-2):
<1> Start harbor:
tar -xvf harbor_ha.tar
cd harbor
./install.sh --ha --with-clair
<2> Add the same iptables rules:
iptables -t nat -A PREROUTING -p tcp -d <vip> --dport 80 -j REDIRECT
iptables -t nat -A PREROUTING -p tcp -d <vip> --dport 443 -j REDIRECT
Alternatively, Harbor can be deployed on kubernetes. This approach achieves a "certain degree" of high availability through kubernetes lifecycle management combined with the platform's persistent storage.
The official deployment does not include redis, so the UI can still only run as a single instance; with multiple UI instances and no redis to share sessions, sessions would be lost.
mysql is not deployed in cluster mode either, and in this setup it can only run as a single instance (multiple instances sharing the backend data would break, because the data cannot be synchronized in time). That instance's data, however, lives on shared storage (e.g. glusterfs), which effectively makes mysql stateless: if the instance dies, a replacement is started automatically on the same or another host and continues with the existing data.
The k8s deployment uses an ingress in place of nginx to implement the proxy.
<1> Edit make/harbor.cfg:
hostname = registry.dcos:30099
db_password = 123456
clair_db_password = 123456
harbor_admin_password = 123456
auth_mode = db_auth
Note: the make directory is part of the Harbor source tree; it is not included in the Harbor release package.
<2> Load the Harbor images on every node:
docker load < harbor.v1.2.0.tar.gz
Note: although the latest Harbor release is currently 1.4.0, the official harbor_on_kubernetes docs only support up to 1.2.0 (even in 1.4.0, harbor_on_kubernetes still uses the 1.2.0 images). I tried the 1.4.0 Harbor images and they indeed fail to start with the official 1.2 kubernetes yaml files. For example, the 1.4 adminserver errors out at startup because some parameters are empty; looking at the code, 1.4 added parameters that 1.2 does not have.
The other components have the same problem, so unless you want to spend time working out exactly which configs need changing, stick with the 1.2.0 images and wait for an official update.
<3> Adjust the basic configuration:
Adjust the deployment, service, and PVC configs to your needs, for example increase the pod count to get high availability through redundancy (see the sketch after this list):
make/kubernetes/**/*.svc.yaml: Specify the service of pods.
make/kubernetes/**/*.deploy.yaml: Specify configs of containers.
make/kubernetes/pv/*.pvc.yaml: Persistent Volume Claim.
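For example, a sketch of running the registry with two replicas for redundancy; this is safe here because both pods mount the same registry PVC, while ui and mysql must stay at one replica for the reasons given earlier. Only the replica count in the generated file changes:
# make/kubernetes/registry/registry.deploy.yaml (excerpt)
spec:
  replicas: 2    # two registry pods sharing registry-pvc; the rest of the file is unchanged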
<4> Create the persistent storage volumes:
The pv directory contains both pv and pvc config files. I did not follow the official pv+pvc approach here; instead I used storageclass+pvc with a glusterfs storageclass. I won't repeat the details; see my earlier article: http://blog.csdn.net/liukuan73/article/details/78511697
Create a two-replica storageclass with reclaim policy Retain, two-replica-glusterfs-sc.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: two-replica-glusterfs-sc
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Retain
parameters:
  gidMax: "50000"
  gidMin: "40000"
  resturl: http://:
  volumetype: replicate:2
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "123456"
  # secretNamespace: "default"
  # secretName: "heketi-secret"
Create the PVC for log storage, log.pvc-sc.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: log-pvc
spec:
  storageClassName: two-replica-glusterfs-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
Create the PVC for registry image storage, registry.pvc-sc.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: registry-pvc
spec:
  storageClassName: two-replica-glusterfs-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
Create the PVC for mysql database data, storage-pvc-sc.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: storage-pvc
spec:
  storageClassName: two-replica-glusterfs-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
<5> Generate the configmap files:
python make/kubernetes/k8s-prepare
This produces the following files:
make/kubernetes/jobservice/jobservice.cm.yaml
make/kubernetes/mysql/mysql.cm.yaml
make/kubernetes/registry/registry.cm.yaml
make/kubernetes/ui/ui.cm.yaml
make/kubernetes/adminserver/adminserver.cm.yaml
make/kubernetes/ingress.yaml
<6> Edit make/kubernetes/ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: harbor
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: ui
          servicePort: 80
      - path: /v2
        backend:
          serviceName: registry
          servicePort: repo
      - path: /service
        backend:
          serviceName: ui
          servicePort: 80
<7> Start everything:
# create config map
kubectl apply -f make/kubernetes/jobservice/jobservice.cm.yaml
kubectl apply -f make/kubernetes/mysql/mysql.cm.yaml
kubectl apply -f make/kubernetes/registry/registry.cm.yaml
kubectl apply -f make/kubernetes/ui/ui.cm.yaml
kubectl apply -f make/kubernetes/adminserver/adminserver.cm.yaml
# create service
kubectl apply -f make/kubernetes/jobservice/jobservice.svc.yaml
kubectl apply -f make/kubernetes/mysql/mysql.svc.yaml
kubectl apply -f make/kubernetes/registry/registry.svc.yaml
kubectl apply -f make/kubernetes/ui/ui.svc.yaml
kubectl apply -f make/kubernetes/adminserver/adminserver.svc.yaml
# create k8s deployment
kubectl apply -f make/kubernetes/registry/registry.deploy.yaml
kubectl apply -f make/kubernetes/mysql/mysql.deploy.yaml
kubectl apply -f make/kubernetes/jobservice/jobservice.deploy.yaml
kubectl apply -f make/kubernetes/ui/ui.deploy.yaml
kubectl apply -f make/kubernetes/adminserver/adminserver.deploy.yaml
# create k8s ingress
kubectl apply -f make/kubernetes/ingress.yaml
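Once every pod is Running, a simple smoke test through the ingress (hostname and admin password come from the harbor.cfg above; since this setup is plain http, registry.dcos:30099 must also be added to insecure-registries in the docker daemon config):
kubectl get pods
docker login registry.dcos:30099 -u admin -p 123456
docker pull busybox
docker tag busybox registry.dcos:30099/library/busybox
docker push registry.dcos:30099/library/busybox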
Note: make sure the nginx-ingress-controller image is quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.8.3; do not use a much newer image, because newer versions redirect http requests to https: https://github.com/kubernetes/ingress-nginx/issues/1957, https://github.com/kubernetes/ingress-nginx/issues/668, https://github.com/kubernetes/ingress-nginx/pull/1854/files
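If you do need a newer controller, a possible workaround (an assumption based on the ingress-nginx annotation docs, untested here; the exact annotation key depends on the controller version) is to disable the forced redirect on this ingress:
metadata:
  name: harbor
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"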
References:
1. http://www.think-foundry.com/architecture-of-harbor-an-open-source-enterprise-class-registry-server/
2. https://github.com/vmware/harbor/blob/master/docs/high_availability_installation_guide.md
3. https://github.com/vmware/harbor/blob/v1.4.0/docs/kubernetes_deployment.md