Kubernetes Deployment (8): k8s Project Delivery -- (5) Continuous Deployment

I. Cloud Computing Model Concepts

[Figure 1: the cloud computing service-model diagram, showing which layers you manage and which the vendor manages under IaaS, PaaS and SaaS]

● You manage  # the layers you manage yourself
● Managed by vendor  # the layers the vendor manages

● Applications  # the business applications your developers build
● Runtimes  # the runtime the application needs in order to run, or its build/integration environment, e.g. Java needs a JRE, Python needs a python-env
● Security & integration  # integration environment and security, e.g. network security, business security
● Databases  # middleware and databases
● Service  # the individual virtual machines running on top of the virtualization layer
● Virtualization  # virtualization resources (KVM, Xen, OpenVZ, docker as lightweight virtualization)
● Service HW  # server hardware resources
● Storage  # storage resources
● Networking  # network resources
  • IaaS (Infrastructure as a Service) is the bottom layer of cloud computing. ① You buy an IaaS platform from a vendor, and the vendor provides the infrastructure: Networking, Storage, Service HW (server hardware), Virtualization (KVM, Xen, OpenVZ, docker as lightweight virtualization) and Service (the VMs running on that virtualization layer). ② The vendor does not provide the software layers (Applications built by your developers, Runtimes, Security & integration, Databases). In short, the vendor supplies the underlying hardware, the network and the way the hardware is realized; your own ops and dev teams supply the product (runtime environment, databases, code).
  • PaaS: ① the vendor provides the underlying hardware, network, hardware implementation, databases and runtime environment; ② the vendor does not provide the application itself. In short, the vendor supplies everything up to the product runtime and databases, and your own developers only supply the code.
  • SaaS: ① the vendor provides everything (underlying hardware, network, hardware implementation, product runtime, databases and the code itself); your company simply pays for it and uses it directly.

What Kubernetes is not

By design, Kubernetes is not a traditional PaaS (Platform as a Service); it sits much closer to IaaS. For an enterprise, however, what is usually wanted is the surrounding cloud ecosystem, i.e. a PaaS: supply only your code and let the vendor provide everything else. That is why Alibaba Cloud, Tencent Cloud and the like exist; the enterprise only has to run its code on those vendor platforms.

  • Kubernetes does not limit the types of applications or application frameworks it supports, and it does not restrict the supported language runtimes (e.g. Java, Python, Ruby) for applications that follow the 12-factor principles. It does not distinguish between "apps" and "services". Kubernetes supports all kinds of workloads, including stateful, stateless and data-processing applications. If an application can run in a container, it will run well on Kubernetes.
  • Kubernetes does not provide middleware (such as message buses), data-processing frameworks (such as Spark), databases (such as MySQL) or cluster storage systems (such as CephFS) as built-in services, but all of them can run on Kubernetes.
  • Kubernetes does not deploy source code and does not build applications. Continuous integration (CI) workflows vary with each user's needs and preferences, so Kubernetes supports layering CI workflows on top of it without dictating how they should work.
  • Kubernetes lets users choose their own logging, monitoring and alerting systems.
  • Kubernetes does not provide or mandate a comprehensive application configuration language/system (for example, jsonnet).
  • Kubernetes does not provide any machine configuration, maintenance, management or self-healing system.

II. PaaS Platform Overview

  • Because k8s leans toward being an IaaS-level piece of software, providing container orchestration (orchestrating storage, security, networking and business containers/pods), more and more cloud vendors are building PaaS platforms on top of it. Alibaba Cloud, for example, offers a managed k8s service that is essentially the same as what we deploy ourselves, except they have wrapped the installation into something like an Ansible playbook: run the playbook when needed and the k8s service comes up. QingCloud, Tencent Cloud and Microsoft Azure all offer k8s-based PaaS platforms with web consoles where a few clicks are enough to release software.
  • Prerequisites for gaining PaaS capability:
    ● a docker engine providing a unified application runtime (docker)
    ● IaaS capability (infrastructure orchestration)
    ● reliable middleware and database clusters (the DBA's main job)
    ● a distributed storage cluster (the storage engineer's main job)
    ● suitable monitoring and logging systems (Prometheus, ELK)
    ● a complete CI/CD system (Jenkins, ?)

A unified application runtime (docker):

  • The three core docker concepts:
    Images, containers and registries are docker's three core concepts.
    A docker image is similar to a virtual machine image; think of it as a read-only template.
    A docker container is like a lightweight sandbox; docker uses containers to run and isolate applications.
    A container is a running instance of an application created from an image. It can be started, stopped and deleted, and containers are isolated from and invisible to one another.
    The image itself is read-only. When a container starts from an image, a writable layer is created on top of the image.
    Put simply, a container is a running instance of an image. The difference is that the image is a static, read-only file, while the container carries the writable layer it needs at runtime.
    If a virtual machine is a complete simulated operating system (kernel, application runtime and other system environment) plus the applications running on it, then a docker container is one application (or a group of applications) running independently, together with the environment it requires.
    A docker registry is similar to a code repository; it is the place where docker images are stored centrally.
  • IaaS capability (infrastructure orchestration):
    (k8s, for example, provides network orchestration: relying on CNI network plugins it lets containers communicate across hosts. k8s also scales horizontally, so cluster size is no longer constrained by the infrastructure; as long as you can buy hardware you are not limited by technology. It has its own security capability (the RBAC mechanism) and storage orchestration. This article mainly uses NFS network-attached storage, a kind of NAS; besides NFS you can use distributed storage, object storage or SAN storage, all plugged into k8s through PV and PVC. k8s also orchestrates networking, storage, compute resources and container lifecycles, and supports advanced usage such as readiness probes, liveness probes, postStart hooks and preStop hooks. k8s has very strong IaaS capability and can manage IaaS infrastructure for you very well.)
  • Reliable middleware and database clusters:
    (the DBA's main job: stateful applications such as ES, MongoDB, Redis with persistence enabled, MySQL, Oracle); system engineers manage the stateless services, which are all packaged into k8s.

What is CD:

Of the PaaS prerequisites listed above, only the last one, the CD system, is still unclear. CD means continuous deployment. When we delivered the dubbo microservices earlier, the flow was: developers push code to the git repository, Jenkins pulls the code and runs continuous integration (a parameterized pipeline), turns it into an image and pushes it to the private repository on harbor, and then we hand-write k8s yaml files and deploy to k8s manually. Continuous deployment mainly solves this last step: after Jenkins produces an image and pushes it to harbor, how to generate the yaml and deploy it to k8s automatically.
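For reference, the manual step that CD is meant to replace looks roughly like this (a minimal sketch; the namespace app, the deployment name dubbo-demo-service and the image tag are only illustrative placeholders, not taken from this article):

# today, after Jenkins pushes a new image to harbor, a person updates k8s by hand, e.g.:
kubectl -n app set image deployment/dubbo-demo-service \
    dubbo-demo-service=harbor.od.com:180/app/dubbo-demo-service:master_210101_1200
# or edits the yaml and re-applies it:
kubectl apply -f dubbo-demo-service-dp.yaml
# a CD system automates exactly this step, triggered from the pipeline instead of a person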

Common k8s-based CD systems:

●  Build your own: write a CD tool in Python or similar that calls kubectl apply -f, or talks to the apiserver directly to tell it what to do
●  Argo CD: a GitOps tool driven by Git repositories; under the hood it also works by calling the k8s API
●  OpenShift: Red Hat's enterprise PaaS platform built on k8s; heavyweight, with CD, a private container registry, pipelines and more
●  Spinnaker: completely open source and free; its drawback is that it is somewhat complex

III. Introduction to Spinnaker

Spinnaker is a continuous delivery platform open-sourced by Netflix in 2015. It inherits the strengths of Netflix's previous-generation cluster and deployment management tool, Asgard (web-based cloud management and deployment), while dropping designs that had become outdated as the business and technology evolved: it improves the reusability of the continuous delivery system, provides a stable and reliable API, gives a global view of both infrastructure and applications, and is simpler to configure, manage and operate, while remaining fully compatible with Asgard. In short, for Netflix, Spinnaker is a much more capable continuous delivery platform.

3.1. Main features

Cluster management:

Cluster management means that Spinnaker is primarily used to manage cloud resources. The "cloud" Spinnaker talks about can be understood as AWS-style IaaS resources; it can manage OpenStack, Google Cloud, Microsoft Azure and so on, and later it added support for Kubernetes, still designed along the lines of managing infrastructure.

Deployment management:

Managing the deployment flow is Spinnaker's core feature. It takes the images created by the Jenkins pipeline, deploys them into the Kubernetes cluster, and gets the services actually running.

3.2. Architecture

[Figure 2: Spinnaker architecture diagram]

What each component does:

Spinnaker itself is also a set of microservices, built on Java and Spring Cloud.

  • Deck is a completely independent set of static front-end web pages.
  • Gate is the API gateway and the heart of Spinnaker; every Spinnaker UI and API caller communicates with the rest of the system through Spinnaker Gate.
  • Custom Script/API Caller means you can write your own scripts or API callers that hit Gate's interfaces to use Spinnaker's features and implement your own requirements.
  • Fiat is Spinnaker's authentication service (account authentication plus authorization). Logging in to Spinnaker goes through Fiat, which can be connected to unified account systems such as AD or OpenLDAP; in that case the account and password created in OpenLDAP are what Fiat checks when you log in to Spinnaker. It is not used in this article.
  • Clouddriver is the cloud driver, the engine that drives the underlying cloud platform. It is the brain that decides which cloud engine Spinnaker connects to and manages: Kubernetes, Google Cloud and so on. It is the hardest component to deploy.
  • Front50 is the component that handles data persistence. As an enterprise-grade set of microservices for automated k8s deployment, Spinnaker needs somewhere to persist data. Front50 does this quite cleverly: instead of a relational or NoSQL database, it uses redis as a cache and an object store behind it to save the metadata for applications, pipelines, projects and notifications. (A well-known object store is Alibaba Cloud's OSS.) By default Front50 talks to Amazon S3, but S3 requires an Amazon account, and we should not have to create one just for Front50. So in this setup the object store is the small open-source object storage project minio; Front50 connects to minio in exactly the same way it connects to S3. Note: redis is only a cache here. If redis goes down you merely lose things like success messages in the UI; the data itself is unaffected because it all lives in minio.
  • Orca is the orchestration engine. It handles all ad-hoc operations and pipelines. It sits between Gate above and components like Clouddriver below: every click in the Deck UI that goes through Gate to Clouddriver must pass through Orca, and so must pipelines and other Spinnaker tasks. Orca decides whether to fetch data from Front50, call Clouddriver to drive the cloud engine, or call Rosco or Kayenta.
  • Rosco helps manage and schedule virtual machines (VMs, Xen, etc.). It is not used in this article.
  • Kayenta provides automated canary analysis. It is not used in this article.
  • Echo is the messaging/notification service. It works out which finished task should send an email or message to whom; it supports sending notifications (e.g. Slack, email, SMS) and handles incoming webhooks from services such as GitHub. In the architecture, Igor depends on Echo.
  • Igor is used to talk to Jenkins: if Spinnaker wants to call Jenkins APIs it must go through Igor, which integrates with continuous integration systems.
  • Halyard CLI is Spinnaker's scaffolding tool. "Scaffolding" in the sense that, just as construction workers need scaffolding to climb up and lay bricks, Halyard helps install, configure and upgrade Spinnaker. It is not used in this document: Halyard itself requires learning a large number of Halyard commands, and because it is the official tool it has to pull from the Google registry gcr.io/spinnaker-marketplace, which cannot be reached from inside China unless you have something like an AWS environment.

IV. Deploying the Armory Distribution of Spinnaker

The Spinnaker images live in the Google registry, which generally cannot be downloaded from inside China. Many third-party images have therefore appeared: people download them through some channel and re-upload them to domestic repositories, but you cannot tell whether the images have been modified or are safe, and because they are not updated promptly you may not be able to find the latest Spinnaker version. As Spinnaker matured and its community grew more active, some large third-party companies started packaging the original Spinnaker and building their own product features on top of it, so Spinnaker can effectively be sold or given away in repackaged form, much like Hadoop and its third-party distribution CDH. So here we deploy the Armory distribution of Spinnaker.

Deployment order: minio  →  redis  →  Clouddriver  →  Front50  →  Orca  →  Echo  →  Igor  →  Gate  →  Deck  →  nginx front-end proxy for Deck (static content)

1. Deploy Minio

1.1. Prepare the image

[root@hdss7-200 ~]# docker pull minio/minio:latest
[root@hdss7-200 ~]# docker image ls -a |grep minio
minio/minio                                    latest                          e31e0721a96b   3 months ago    406MB

[root@hdss7-200 ~]# docker image tag e31e0721a96b harbor.od.com:180/armory/minio:latest

Create a private project named armory on harbor.od.com

[Screenshot: creating the private armory project in the harbor web UI]

Push the image to the armory project
[root@hdss7-200 ~]# docker login harbor.od.com:180
[root@hdss7-200 ~]# docker image push harbor.od.com:180/armory/minio:latest

1.2. Prepare the resource manifests

[root@hdss7-200 ~]# mkdir /data/k8s-yaml/armory;cd /data/k8s-yaml/armory
[root@hdss7-200 armory ]# mkdir minio;cd  minio
[root@hdss7-200 minio]# vi dp.yaml   # minio provides upload/download, the same functionality as S3. Note: older minio versions serve the web UI on port 9000 directly, while newer versions split it out and need an explicit console port; for details see k8s 部署 minio_Jerry00713的博客-CSDN博客

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: minio
  name: minio
  namespace: armory
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      name: minio
  template:
    metadata:
      labels:
        app: minio
        name: minio
    spec:
      containers:
      - name: minio
        env:
        - name: MINIO_ROOT_USER
          value: "admin"
        - name: MINIO_ROOT_PASSWORD
          value: "admin123"
        image: harbor.od.com:180/armory/minio:latest
        imagePullPolicy: IfNotPresent
        command:
          - /bin/sh
          - -c
          - minio server /data --console-address ":5000"
        ports:
        - name: data
          containerPort: 9000
          protocol: "TCP"
        - name: console
          containerPort: 5000
          protocol: "TCP"
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /minio/health/ready
            port: 9000
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        volumeMounts:
        - mountPath: /data
          name: data
      imagePullSecrets:
      - name: harbor
      volumes:
      - nfs:
          server: hdss7-200
          path: /data/nfs-volume/minio
        name: data

Explanation:

progressDeadlineSeconds: 600  # how long the Deployment controller waits for a rollout to make progress before treating it as failed. An upgrade can get stuck for all sorts of reasons without being an explicit failure; the Deployment is bound to its pods through the label selector, and the pods first have to pull their image, so if the image pull goes wrong the containers just stay unhealthy. With progressDeadlineSeconds set, once the countdown expires and the pods are still not progressing (any non-Running state counts), the Deployment's Progressing condition is marked False with the reason recorded, so the stuck rollout is surfaced instead of hanging silently.

replicas: 1  # run a single replica

revisionHistoryLimit: 7  # how many revisions of history to keep. Each Deployment update records the previous configuration as a backup; by default kubernetes keeps the entire rollout history, which is what makes rolling back to a given revision possible.

imagePullPolicy: IfNotPresent   # if the image already exists locally, do not pull it from harbor
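A quick way to see the rollout-related fields above in action (a sketch; run it after the Deployment below has been applied):

# watch the rollout; if it cannot complete within progressDeadlineSeconds the command reports the failure
kubectl -n armory rollout status deployment/minio
# the Progressing condition and its reason are visible here
kubectl -n armory describe deployment minio | grep -A 5 Conditions
# revisionHistoryLimit controls how many of these revisions are kept for rollback
kubectl -n armory rollout history deployment/minio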

 args:
    - server
    - /data
# leftover from the older manifest style that started minio with args instead of the command + --console-address used above

readinessProbe:                    # readiness probe
  failureThreshold: 3              # 3 consecutive failures count as failed
  httpGet:                         # probe with an HTTP GET request
    path: /minio/health/ready      # URL to request
    port: 9000                     # port to request
    scheme: HTTP                   # plain HTTP
  initialDelaySeconds: 10          # k8s starts probing 10s after the pod starts
  periodSeconds: 10                # while the pod is running, k8s keeps probing /minio/health/ready on port 9000 every 10s
  successThreshold: 1              # one successful probe counts as success
  timeoutSeconds: 5                # each probe waits up to 5s for a 2xx/3xx response; otherwise that readiness check fails

Summary: 10s after the pod starts, k8s requests 127.0.0.1:9000/minio/health/ready and waits up to 5s for a 2xx/3xx response. A single success marks the probe as passed. If it fails, the probe runs again 10s later, and again after another 10s; three consecutive failures mean the readiness check has failed and the container will not be sent any traffic.
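You can run the same check the probe runs by hand (a sketch; it assumes the pod carries the name=minio label from the manifest above and that the node can reach the pod IP):

# look up the minio pod IP and hit the readiness endpoint directly
POD_IP=$(kubectl -n armory get pod -l name=minio -o jsonpath='{.items[0].status.podIP}')
curl -s -o /dev/null -w '%{http_code}\n' http://$POD_IP:9000/minio/health/ready
# 200 here means the readiness probe would succeed and the pod can receive traffic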

 env:
        - name: MINIO_ROOT_USER
   # the minio startup script reads the console account from this environment variable (older minio images used MINIO_ACCESS_KEY)
          value: admin
        - name: MINIO_ROOT_PASSWORD
  # and the password from this one (older images used MINIO_SECRET_KEY)
          value: admin123

volumes:
      - nfs:
          server: hdss7-200
          path: /data/nfs-volume/minio   
# persist the data on NFS so that it survives even if the container dies
        name: data

[root@hdss7-200 minio]# mkdir /data/nfs-volume/minio
[root@hdss7-200 minio]# vi svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: minio
  namespace: armory
spec:
  ports:
  - name: data
    port: 80
    targetPort: 9000
    protocol: TCP
  - name: console
    port: 5000
    targetPort: 5000
    protocol: TCP
  selector:
    app: minio

[root@hdss7-200 minio]# vi ingress.yaml    # minio is an object store; it exposes an HTTP interface for uploading and downloading files, just like S3

apiVersion: extensions/v1beta1
kind: Ingress
metadata: 
  name: minio
  namespace: armory
spec:
  rules:
  - host: minio.od.com
    http:
      paths:
      - path: /
        backend: 
          serviceName: minio
          servicePort: 5000

1.3. Configure DNS resolution

[root@hdss7-11 ~]# vi /var/named/od.com.zone

$ORIGIN od.com.
$TTL 600        ; 10 minutes
@               IN SOA  dns.od.com. dnsadmin.od.com. (
                                2020010518 ; serial
                                10800      ; refresh (3 hours)
                                900        ; retry (15 minutes)
                                604800     ; expire (1 week)
                                86400      ; minimum (1 day)
                                )
                                NS   dns.od.com.
$TTL 60 ; 1 minute
dns                A    10.4.7.11
harbor             A    10.4.7.200
k8s-yaml           A    10.4.7.200
traefik            A    10.4.7.10
dashboard          A    10.4.7.10
zk1                A    10.4.7.11
zk2                A    10.4.7.12
zk3                A    10.4.7.21
jenkins            A    10.4.7.10
dubbo-monitor      A    10.4.7.10
demo               A    10.4.7.10
config             A    10.4.7.10
mysql              A    10.4.7.11
portal             A    10.4.7.10
zk-test            A    10.4.7.11
zk-prod            A    10.4.7.12
config-test        A    10.4.7.10
config-prod        A    10.4.7.10
demo-test          A    10.4.7.10
demo-prod          A    10.4.7.10
blackbox           A    10.4.7.10
prometheus         A    10.4.7.10
grafana            A    10.4.7.10
km                 A    10.4.7.10
kibana             A    10.4.7.10
minio              A    10.4.7.10

[root@hdss7-11 ~]# systemctl restart named
[root@hdss7-11 ~]# dig -t A minio.od.com +short @10.4.7.11 
10.4.7.10

For a deeper look at how the L4/L7 schedulers communicate with the apiserver, see: https://blog.csdn.net/Jerry00713/article/details/124216958

1.4. Create the armory namespace and secret

Because harbor.od.com:180/armory/minio:latest lives in a private repository, the minio pod cannot pull the image by default. So we create a secret in the armory namespace that holds the harbor account and password; the pod then references this secret to pull images from the private armory project.

[root@hdss7-21 ~]# kubectl create ns armory
namespace/armory created

[root@hdss7-21 ~]# kubectl create secret docker-registry harbor --docker-server=harbor.od.com:180 --docker-username=admin --docker-password=Harbor12345 -n armory
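To double-check what got stored (a sketch; the secret simply wraps a docker config containing the registry credentials):

# show the secret and decode the registry credentials it carries
kubectl -n armory get secret harbor -o yaml | head
kubectl -n armory get secret harbor -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d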

[Screenshot: the newly created harbor secret]

1.5. Apply the resource manifests

[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/armory/minio/dp.yaml
deployment.extensions/minio created
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/armory/minio/svc.yaml
service/minio created
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/armory/minio/ingress.yaml
ingress.extensions/minio created

Check the pod in the dashboard:

[Screenshot: the minio pod shown in the dashboard]

This pod takes a while to reach Running because of the readiness probe inside it.

1.6. Visit minio.od.com

A very simple page. The account admin and password admin123 are the ones passed in via (- name: MINIO_ROOT_USER  value: admin) and (- name: MINIO_ROOT_PASSWORD  value: admin123). Neatly enough, S3 passes its credentials the same way. The console will have content once Front50 is installed.
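If you want to confirm the S3-compatible API itself works (and not just the console), here is a minimal sketch using the MinIO client mc, assuming mc is installed on the host (older mc versions use `mc config host add` instead of `mc alias set`):

# register the endpoint with the same credentials as the console
mc alias set myminio http://minio.od.com admin admin123
# create a test bucket and upload a file through the S3-compatible API
mc mb myminio/test
echo hello > /tmp/hello.txt && mc cp /tmp/hello.txt myminio/test/
mc ls myminio/test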

[Screenshots: the minio login page and the empty console after logging in]

2. Deploy redis

The version does not matter much, but try not to go too new. This article does not persist redis: redis is just a cache here, so even if it crashes and all its data is lost, nothing is really affected beyond the history shown in the UI.

2.1. Prepare the image

[root@hdss7-200 ~]# docker pull redis:4.0.14
[root@hdss7-200 minio]# docker images |grep redis
redis                                          4.0.14                          191c4017dcdd   2 years ago     89.3MB
goharbor/redis-photon                          v1.9.4                          48c941077683   2 years ago     113MB

[root@hdss7-200 ~]# docker tag 191c4017dcdd harbor.od.com:180/armory/redis:4.0.14
[root@hdss7-200 ~]# docker login harbor.od.com:180
[root@hdss7-200 ~]# docker image push harbor.od.com:180/armory/redis:4.0.14

2.2. Prepare the resource manifests

redis does not serve HTTP to the outside; it only exposes a Service inside the cluster, which means the other k8s components connect to redis by the Service name (redis.armory.svc.cluster.local). Why not connect via the Service's ClusterIP? Because the ClusterIP can change if the Service is recreated, while the Service name (redis.armory.svc.cluster.local) never changes.
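A quick sketch to confirm the in-cluster DNS name resolves once the Service below has been created (the busybox image is just an arbitrary throwaway test image):

# run a temporary pod and resolve the service name through the cluster DNS
kubectl -n armory run dns-test --rm -it --restart=Never --image=busybox:1.28 -- \
    nslookup redis.armory.svc.cluster.local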

[root@hdss7-200 ~]# cd /data/k8s-yaml/armory
[root@hdss7-200 armory ]# mkdir redis;cd redis
[root@hdss7-200 redis]# vi dp.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: redis
  name: redis
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      name: redis
  template:
    metadata:
      labels:
        app: redis
        name: redis
    spec:
      containers:
      - name: redis
        image: harbor.od.com:180/armory/redis:4.0.14
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 6379
          protocol: TCP
      imagePullSecrets:
      - name: harbor

[root@hdss7-200 redis]# vi svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: armory
spec:
  ports:
  - port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    app: redis

2.3. Apply the resource manifests

[root@hdss7-21 minio]# kubectl apply -f http://k8s-yaml.od.com/armory/redis/dp.yaml
deployment.extensions/redis created
[root@hdss7-21 minio]# kubectl apply -f http://k8s-yaml.od.com/armory/redis/svc.yaml
service/redis created

2.4. Verify it started successfully

[root@hdss7-21 minio]# kubectl get pod -n armory -o wide |grep redis
redis-58b569cdd-4v5jk    1/1     Running   0          9s    172.7.21.8   hdss7-21.host.com              

[root@hdss7-21 minio]# telnet 172.7.21.8 6379   # telnet to port 6379 on the container
Trying 172.7.21.8...
Connected to 172.7.21.8.
Escape character is '^]'.


or

[root@hdss7-21 ~]# kubectl get svc -n armory  -o wide |grep redis
redis   ClusterIP   192.168.119.189           6379/TCP            4m36s   app=redis

[root@hdss7-21 ~]# telnet 192.168.119.189 6379  # telnet to port 6379 on the cluster IP
Trying 192.168.119.189...
Connected to 192.168.119.189.
Escape character is '^]'.

3. Deploy Spinnaker-clouddriver

Deploy the clouddriver cloud-driver component.

3.1. Prepare the image

# the image used here is fairly old; it does not need to be the very latest, and you can try a newer version. slim = the slimmed-down build
[root@hdss7-200 ~]# docker pull armory/spinnaker-clouddriver-slim:release-1.11.x-bee52673a
[root@hdss7-200 clouddriver]# docker image ls -a |grep clouddriver
armory/spinnaker-clouddriver-slim              release-1.11.x-bee52673a        f1d52d01e28d   3 years ago     1.05GB

[root@hdss7-200 ~]# docker tag f1d52d01e28d harbor.od.com:180/armory/clouddriver:v1.11.x
[root@hdss7-200 ~]# docker push harbor.od.com:180/armory/clouddriver:v1.11.x

3.2. Prepare the minio secret

What is a k8s secret? k8s secrets store and manage sensitive data such as passwords, tokens and keys. There are three types: docker-registry is specifically for registry (harbor) credentials, while generic stores arbitrary account/password data, much like mounting environment variables with env, except the values are encoded and cannot be casually read without the right permissions. Here we create a generic secret holding the minio account and password so that Front50 can log in and talk to minio. Why does having the account and password let Front50 access minio? Because that is exactly how Front50 connects to its default backend, S3, and minio behaves the same as S3. More on secrets: k8s之Secret详细理解及使用_Jerry00713的博客-CSDN博客

[root@hdss7-200 ~]# cd /data/k8s-yaml/armory
[root@hdss7-200 armory]# mkdir clouddriver;cd clouddriver
[root@hdss7-200 clouddriver]# vi credentials    # the minio account and password; this will later be mounted into Spinnaker

[default]
aws_access_key_id=admin
aws_secret_access_key=admin123

# on a compute node, download /clouddriver/credentials from 7-200
[root@hdss7-21 ~]# wget http://k8s-yaml.od.com/armory/clouddriver/credentials

# create a generic-type secret in the armory namespace
[root@hdss7-21 ~]# kubectl create secret generic credentials --from-file=./credentials -n armory
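A sketch to confirm the file landed in the secret as expected:

# the credentials file is stored base64-encoded under the key "credentials"
kubectl -n armory get secret credentials -o jsonpath='{.data.credentials}' | base64 -d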

Take a look at the secret in the dashboard:

[Screenshot: the credentials secret shown in the dashboard]

3.3. Prepare the k8s user config

Because the clouddriver component has to manage the k8s cluster, it needs cluster-admin (or equivalent) permissions. There are two approaches. One is a ServiceAccount, as we did when deploying the dashboard (we declared kubernetes-dashboard-admin and bound it to the cluster-admin ClusterRole with a ClusterRoleBinding). For clouddriver it is better not to be that blunt; instead we use a kubeconfig file (a UserAccount-style config), the same way kubectl uses the kubelet kubeconfig to access the cluster. This approach requires issuing a certificate signed by the CA that the apiserver trusts; locally we end up with a config file carrying the certificate and user information, and with that file we can talk to the apiserver.

3.3.1. Issue a certificate and build the kube-config file

Issue an admin.pem certificate for clouddriver.

[root@hdss7-200 ~]# cd /opt/certs/
[root@hdss7-200 certs]# cp client-csr.json admin-csr.json
[root@hdss7-200 certs]# vi admin-csr.json   # issue the certificate with CN changed to cluster-admin, because the CN in the certificate is used directly as the requesting username; if you want to keep this certificate but use a different user/role, you would create your own role and bind it in the cluster with a ClusterRoleBinding

{
    "CN": "cluster-admin",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}

[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client admin-csr.json | cfssl-json -bare admin

Explanation:

gencert: generate a new key and a signed certificate
 -ca: the CA certificate
 -ca-key: the CA private key
 -config: the JSON file describing the signing configuration
 -profile: the profile inside -config to use when generating the certificate
So `cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json` means: use the CA certificate ca.pem, the CA private key ca-key.pem and the signing policy in ca-config.json to generate a new key and a signed certificate for a given CSR file. All three are needed because cfssl signs with the CA's key according to the policy, so that anyone holding ca.pem can later verify that the issued certificate, and the CN inside it, is legitimate and has not been tampered with.
-profile=client reads ca-config.json and uses its client profile.
cfssl-json just reshapes the JSON output; -bare mainly determines the output file name prefix.
Overall: use the CA to sign admin-csr.json and produce a certificate, a private key and a CSR file.

[root@hdss7-200 certs]# ll admin*     # the ua (UserAccount) config file for clouddriver will be built from admin-key.pem and admin.pem

-rw-r--r--. 1 root root 1001 Jul 27 08:03 admin.csr
-rw-r--r--. 1 root root  285 Jul 27 07:58 admin-csr.json
-rw-------. 1 root root 1675 Jul 27 08:03 admin-key.pem       # private key
-rw-r--r--. 1 root root 1371 Jul 27 08:03 admin.pem           # certificate
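To confirm the CN really ended up as cluster-admin, a sketch using openssl (usually already installed on the host):

# the subject should contain CN=cluster-admin, which the apiserver treats as the username
openssl x509 -in admin.pem -noout -subject
# the issuer should be our own CA
openssl x509 -in admin.pem -noout -issuer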

3.3.2. Build the kubeconfig on a compute node

On any compute node:

[root@hdss7-21 ~]# scp root@hdss7-200:/opt/certs/ca.pem .
[root@hdss7-21 ~]# scp root@hdss7-200:/opt/certs/admin.pem .
[root@hdss7-21 ~]# scp root@hdss7-200:/opt/certs/admin-key.pem .

# the kube-config file is also used to operate the k8s cluster (create resources) through the apiserver, and is generally given cluster-admin permission

# it points at 10.4.7.10:7443 (the VIP) rather than a specific apiserver, which is what makes it highly available. This command defines a cluster named myk8s in the config, binds ca.pem to it and records the apiserver address, then writes all of that to the file "config" in the current directory; afterwards `cat config` shows the CA certificate embedded in the file
[root@hdss7-21 ~]# kubectl config set-cluster myk8s --certificate-authority=./ca.pem --embed-certs=true --server=https://10.4.7.10:7443 --kubeconfig=config

# define a user named cluster-admin in the config and attach the admin-key.pem and admin.pem certificates to it, writing the result into the config file (after this, `cat config` shows the CA cert plus the admin cert and key, and users: cluster-admin, i.e. a UserAccount). The idea is: when we use this config file to request resources, we talk to the apiserver as the user cluster-admin and present the admin.pem client certificate. The apiserver verifies the certificate against the ca.pem added in the previous step, checks that it was signed by the trusted CA, and reads the CN out of it; the CN must match the requesting user (much like a website's domain must match its certificate's CN). Once the TLS handshake succeeds, the two sides communicate over the negotiated session
[root@hdss7-21 ~]# kubectl config set-credentials cluster-admin --client-certificate=./admin.pem --client-key=./admin-key.pem --embed-certs=true --kubeconfig=config

# set-context defines a context (an environment entry) in the kubeconfig. k8s can isolate environments completely by namespace, splitting different businesses into different namespaces, and a context ties a cluster, a user and optionally a namespace together. kubectl use-context is how you switch between multiple clusters: when kubectl connects to a cluster it looks, by default, for a file named config under /root/.kube and uses it to talk to the apiserver. You may notice that when kubectl runs on the same host as the apiserver it works even without this file, because it falls back to the local insecure port 127.0.0.1:8080 (netstat -tulpn | grep 8080 shows 127.0.0.1:8080 LISTEN kube-apiserver). Rancher, for example, ships one kubeconfig per cluster, each with matching user, cluster and contexts. Once clusters, users and contexts are defined in one or more config files, kubectl config use-context switches between them quickly. Here we define the context myk8s-context, binding the cluster myk8s to the user cluster-admin, and write it to the config file in the current directory
[root@hdss7-21 ~]# kubectl config set-context myk8s-context --cluster=myk8s --user=cluster-admin --kubeconfig=config

Switch to the myk8s-context context, writing the change into the config file in the current directory:
[root@hdss7-21 ~]# kubectl config use-context myk8s-context --kubeconfig=config

[root@hdss7-21 ~]# cat config  # take a look

[root@hdss7-21 ~]# kubectl get clusterrolebindings
NAME                                                   AGE
cluster-admin                                          137d
grafana                                                108d
heapster                                               123d
k8s-node                                               136d
kube-state-metrics                                     108d
kubernetes-dashboard-admin                             129d
prometheus                                             108d
spinnake                                               35m
system:basic-user                                      137d
system:controller:attachdetach-controller              137d
system:controller:certificate-controller               137d
system:controller:clusterrole-aggregation-controller   137d
system:controller:cronjob-controller                   137d
system:controller:daemon-set-controller                137d
system:controller:deployment-controller                137d
system:controller:disruption-controller                137d
system:controller:endpoint-controller                  137d
system:controller:endpointslice-controller             101d
system:controller:endpointslicemirroring-controller    101d
system:controller:ephemeral-volume-controller          101d
system:controller:expand-controller                    137d
system:controller:generic-garbage-collector            137d
system:controller:horizontal-pod-autoscaler            137d
system:controller:job-controller                       137d
system:controller:namespace-controller                 137d
system:controller:node-controller                      137d
system:controller:persistent-volume-binder             137d
system:controller:pod-garbage-collector                137d
system:controller:pv-protection-controller             137d
system:controller:pvc-protection-controller            137d
system:controller:replicaset-controller                137d
system:controller:replication-controller               137d
system:controller:resourcequota-controller             137d
system:controller:root-ca-cert-publisher               101d
system:controller:route-controller                     137d
system:controller:service-account-controller           137d
system:controller:service-controller                   137d
system:controller:statefulset-controller               137d
system:controller:ttl-after-finished-controller        101d
system:controller:ttl-controller                       137d
system:coredns                                         130d
system:discovery                                       137d
system:kube-controller-manager                         137d
system:kube-dns                                        137d
system:kube-scheduler                                  137d
system:monitoring                                      101d
system:node                                            137d
system:node-proxier                                    137d
system:public-info-viewer                              137d
system:service-account-issuer-discovery                101d
system:volume-scheduler                                137d
traefik-ingress-controller                             129d

Create a ClusterRoleBinding that binds the user cluster-admin to the ClusterRole cluster-admin. After running it, `cat config` shows no change, because kubectl create clusterrolebinding only creates the binding inside the cluster, not in the kubeconfig file:
[root@hdss7-21 ~]# kubectl create clusterrolebinding myk8s-admin --clusterrole=cluster-admin --user=cluster-admin
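A sketch to confirm the binding points at the right role and user:

# roleRef should be ClusterRole/cluster-admin and the subjects should contain the User cluster-admin
kubectl describe clusterrolebinding myk8s-admin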

# test whether the kube-config works. As noted above, kubectl first looks for /root/.kube/config to talk to the apiserver, and only falls back to 127.0.0.1:8080 if it is missing.   Note: the dashboard can only be logged into with a service account

[root@hdss7-200 ~]# mkdir /root/.kube;cd /root/.kube/
[root@hdss7-21 ~]# scp config root@hdss7-200:/root/.kube/
[root@hdss7-21 ~]# scp /opt/kubernetes/server/bin/kubectl root@hdss7-200:/usr/bin/
[root@hdss7-200 .kube]# kubectl get pod  # check that it works

# by default kubectl looks for /root/.kube/config first; to point it at a different file (for reference only, no need to run):

echo "export KUBECONFIG=文件" >>~/.bash_profile

3.3.3. Create the ConfigMap resource

Mount the kubeconfig file (config) we just created into k8s as a ConfigMap, and from there into the Spinnaker-clouddriver pod, so that Spinnaker-clouddriver can use it to communicate with the apiserver.

[root@hdss7-21 ~]# kubectl create configmap default-kubeconfig --from-file=default-kubeconfig=config -n armory
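A sketch to confirm the kubeconfig is stored under the key clouddriver will expect:

# the data key should be default-kubeconfig and contain the embedded certificates
kubectl -n armory describe configmap default-kubeconfig | head -20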

3.3.4. Delete the config-related files

For safety, delete config, ca.pem, admin.pem and admin-key.pem once the operation is complete.

 [root@hdss7-21 ~]# rm -f config ca.pem admin.pem admin-key.pem

3.4. Resource manifests

Spinnaker's configuration is fairly tedious; in particular there is a very large default-config.yaml ConfigMap that normally does not need to be modified, because the Armory distribution of Spinnaker already bundles Spinnaker's complex configuration together.

[root@hdss7-200 ~]# cd /data/k8s-yaml/armory/clouddriver
[root@hdss7-200 clouddriver]# vi init-env.yaml

# init-env.yaml
# includes the redis address, the externally facing API domain, etc.
kind: ConfigMap
apiVersion: v1
metadata:
  name: init-env
  namespace: armory
data:
  API_HOST: http://spinnaker.od.com/api
  ARMORY_ID: c02f0781-92f5-4e80-86db-0ba8fe7b8544
  ARMORYSPINNAKER_CONF_STORE_BUCKET: armory-platform
  ARMORYSPINNAKER_CONF_STORE_PREFIX: front50
  ARMORYSPINNAKER_GCS_ENABLED: "false"
  ARMORYSPINNAKER_S3_ENABLED: "true"
  AUTH_ENABLED: "false"
  AWS_REGION: us-east-1
  BASE_IP: 127.0.0.1
  CLOUDDRIVER_OPTS: -Dspring.profiles.active=armory,configurator,local
  CONFIGURATOR_ENABLED: "false"
  DECK_HOST: http://spinnaker.od.com
  ECHO_OPTS: -Dspring.profiles.active=armory,configurator,local
  GATE_OPTS: -Dspring.profiles.active=armory,configurator,local
  IGOR_OPTS: -Dspring.profiles.active=armory,configurator,local
  PLATFORM_ARCHITECTURE: k8s
  REDIS_HOST: redis://redis:6379
  SERVER_ADDRESS: 0.0.0.0
  SPINNAKER_AWS_DEFAULT_REGION: us-east-1
  SPINNAKER_AWS_ENABLED: "false"
  SPINNAKER_CONFIG_DIR: /home/spinnaker/config
  SPINNAKER_GOOGLE_PROJECT_CREDENTIALS_PATH: ""
  SPINNAKER_HOME: /home/spinnaker
  SPRING_PROFILES_ACTIVE: armory,configurator,local

Notes:

1. Under data: this ConfigMap defines a series of environment variables (e.g. API_HOST: http://spinnaker.od.com/api; REDIS_HOST: redis://redis:6379 connects to port 6379 of the Service named redis).
2. API_HOST: http://spinnaker.od.com/api    API_HOST is the address of our Gate component; every request must go through Gate.
3. ARMORYSPINNAKER_CONF_STORE_BUCKET: armory-platform    after spinnaker clouddriver is deployed, the pod uses this field as the name of the bucket created in minio (armory-platform); change it as needed.
4. ARMORY_ID: c02f0781-92f5-4e80-86db-0ba8fe7b8544    a random string generated when this configuration was exported with the scaffolding tool; reusing this ID causes no problems.
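Once this ConfigMap has been applied (done later in this series), a quick sketch to spot-check the values that clouddriver and the other components will pick up as environment variables:

# assumes init-env.yaml has already been applied to the armory namespace
kubectl -n armory get configmap init-env \
    -o jsonpath='{.data.REDIS_HOST}{"\n"}{.data.API_HOST}{"\n"}'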

[root@hdss7-200 clouddriver]# vi default-config.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: default-config
  namespace: armory
data:
  barometer.yml: |
    server:
      port: 9092
 
    spinnaker:
      redis:
        host: ${services.redis.host}
        port: ${services.redis.port}
  clouddriver-armory.yml: |
    aws:
      defaultAssumeRole: role/${SPINNAKER_AWS_DEFAULT_ASSUME_ROLE:SpinnakerManagedProfile}
      accounts:
        - name: default-aws-account
          accountId: ${SPINNAKER_AWS_DEFAULT_ACCOUNT_ID:none}
 
      client:
        maxErrorRetry: 20
 
    serviceLimits:
      cloudProviderOverrides:
        aws:
          rateLimit: 15.0
 
      implementationLimits:
        AmazonAutoScaling:
          defaults:
            rateLimit: 3.0
        AmazonElasticLoadBalancing:
          defaults:
            rateLimit: 5.0
 
    security.basic.enabled: false
    management.security.enabled: false
  clouddriver-dev.yml: |
 
    serviceLimits:
      defaults:
        rateLimit: 2
  clouddriver.yml: |
    server:
      port: ${services.clouddriver.port:7002}
      address: ${services.clouddriver.host:localhost}
 
    redis:
      connection: ${REDIS_HOST:redis://localhost:6379}
 
    udf:
      enabled: ${services.clouddriver.aws.udf.enabled:true}
      udfRoot: /opt/spinnaker/config/udf
      defaultLegacyUdf: false
 
    default:
      account:
        env: ${providers.aws.primaryCredentials.name}
 
    aws:
      enabled: ${providers.aws.enabled:false}
      defaults:
        iamRole: ${providers.aws.defaultIAMRole:BaseIAMRole}
      defaultRegions:
        - name: ${providers.aws.defaultRegion:us-east-1}
      defaultFront50Template: ${services.front50.baseUrl}
      defaultKeyPairTemplate: ${providers.aws.defaultKeyPairTemplate}
 
    azure:
      enabled: ${providers.azure.enabled:false}
 
      accounts:
        - name: ${providers.azure.primaryCredentials.name}
          clientId: ${providers.azure.primaryCredentials.clientId}
          appKey: ${providers.azure.primaryCredentials.appKey}
          tenantId: ${providers.azure.primaryCredentials.tenantId}
          subscriptionId: ${providers.azure.primaryCredentials.subscriptionId}
 
    google:
      enabled: ${providers.google.enabled:false}
 
      accounts:
        - name: ${providers.google.primaryCredentials.name}
          project: ${providers.google.primaryCredentials.project}
          jsonPath: ${providers.google.primaryCredentials.jsonPath}
          consul:
            enabled: ${providers.google.primaryCredentials.consul.enabled:false}
 
    cf:
      enabled: ${providers.cf.enabled:false}
 
      accounts:
        - name: ${providers.cf.primaryCredentials.name}
          api: ${providers.cf.primaryCredentials.api}
          console: ${providers.cf.primaryCredentials.console}
          org: ${providers.cf.defaultOrg}
          space: ${providers.cf.defaultSpace}
          username: ${providers.cf.account.name:}
          password: ${providers.cf.account.password:}
 
    kubernetes:
      enabled: ${providers.kubernetes.enabled:false}
      accounts:
        - name: ${providers.kubernetes.primaryCredentials.name}
          dockerRegistries:
            - accountName: ${providers.kubernetes.primaryCredentials.dockerRegistryAccount}
 
    openstack:
      enabled: ${providers.openstack.enabled:false}
      accounts:
        - name: ${providers.openstack.primaryCredentials.name}
          authUrl: ${providers.openstack.primaryCredentials.authUrl}
          username: ${providers.openstack.primaryCredentials.username}
          password: ${providers.openstack.primaryCredentials.password}
          projectName: ${providers.openstack.primaryCredentials.projectName}
          domainName: ${providers.openstack.primaryCredentials.domainName:Default}
          regions: ${providers.openstack.primaryCredentials.regions}
          insecure: ${providers.openstack.primaryCredentials.insecure:false}
          userDataFile: ${providers.openstack.primaryCredentials.userDataFile:}
 
          lbaas:
            pollTimeout: 60
            pollInterval: 5
 
    dockerRegistry:
      enabled: ${providers.dockerRegistry.enabled:false}
      accounts:
        - name: ${providers.dockerRegistry.primaryCredentials.name}
          address: ${providers.dockerRegistry.primaryCredentials.address}
          username: ${providers.dockerRegistry.primaryCredentials.username:}
          passwordFile: ${providers.dockerRegistry.primaryCredentials.passwordFile}
 
    credentials:
      primaryAccountTypes: ${providers.aws.primaryCredentials.name}, ${providers.google.primaryCredentials.name}, ${providers.cf.primaryCredentials.name}, ${providers.azure.primaryCredentials.name}
      challengeDestructiveActionsEnvironments: ${providers.aws.primaryCredentials.name}, ${providers.google.primaryCredentials.name}, ${providers.cf.primaryCredentials.name}, ${providers.azure.primaryCredentials.name}
 
    spectator:
      applicationName: ${spring.application.name}
      webEndpoint:
        enabled: ${services.spectator.webEndpoint.enabled:false}
        prototypeFilter:
          path: ${services.spectator.webEndpoint.prototypeFilter.path:}
 
      stackdriver:
        enabled: ${services.stackdriver.enabled}
        projectName: ${services.stackdriver.projectName}
        credentialsPath: ${services.stackdriver.credentialsPath}
 
    stackdriver:
      hints:
        - name: controller.invocations
          labels:
          - account
          - region
  dinghy.yml: ""
  echo-armory.yml: |
    diagnostics:
      enabled: true
      id: ${ARMORY_ID:unknown}
 
    armorywebhooks:
      enabled: false
      forwarding:
        baseUrl: http://armory-dinghy:8081
        endpoint: v1/webhooks
  echo-noncron.yml: |
    scheduler:
      enabled: false
  echo.yml: |
    server:
      port: ${services.echo.port:8089}
      address: ${services.echo.host:localhost}
 
    cassandra:
      enabled: ${services.echo.cassandra.enabled:false}
      embedded: ${services.cassandra.embedded:false}
      host: ${services.cassandra.host:localhost}
 
    spinnaker:
      baseUrl: ${services.deck.baseUrl}
      cassandra:
         enabled: ${services.echo.cassandra.enabled:false}
      inMemory:
         enabled: ${services.echo.inMemory.enabled:true}
 
    front50:
      baseUrl: ${services.front50.baseUrl:http://localhost:8080 }
 
    orca:
      baseUrl: ${services.orca.baseUrl:http://localhost:8083 }
 
    endpoints.health.sensitive: false
 
    slack:
      enabled: ${services.echo.notifications.slack.enabled:false}
      token: ${services.echo.notifications.slack.token}
 
    spring:
      mail:
        host: ${mail.host}
 
    mail:
      enabled: ${services.echo.notifications.mail.enabled:false}
      host: ${services.echo.notifications.mail.host}
      from: ${services.echo.notifications.mail.fromAddress}
 
    hipchat:
      enabled: ${services.echo.notifications.hipchat.enabled:false}
      baseUrl: ${services.echo.notifications.hipchat.url}
      token: ${services.echo.notifications.hipchat.token}
 
    twilio:
      enabled: ${services.echo.notifications.sms.enabled:false}
      baseUrl: ${services.echo.notifications.sms.url:https://api.twilio.com/ }
      account: ${services.echo.notifications.sms.account}
      token: ${services.echo.notifications.sms.token}
      from: ${services.echo.notifications.sms.from}
 
    scheduler:
      enabled: ${services.echo.cron.enabled:true}
      threadPoolSize: 20
      triggeringEnabled: true
      pipelineConfigsPoller:
        enabled: true
        pollingIntervalMs: 30000
      cron:
        timezone: ${services.echo.cron.timezone}
 
    spectator:
      applicationName: ${spring.application.name}
      webEndpoint:
        enabled: ${services.spectator.webEndpoint.enabled:false}
        prototypeFilter:
          path: ${services.spectator.webEndpoint.prototypeFilter.path:}
 
      stackdriver:
        enabled: ${services.stackdriver.enabled}
        projectName: ${services.stackdriver.projectName}
        credentialsPath: ${services.stackdriver.credentialsPath}
 
    webhooks:
      artifacts:
        enabled: true
  fetch.sh: |+
  
    CONFIG_LOCATION=${SPINNAKER_HOME:-"/opt/spinnaker"}/config
    CONTAINER=$1
 
    rm -f /opt/spinnaker/config/*.yml
 
    mkdir -p ${CONFIG_LOCATION}
 
    for filename in /opt/spinnaker/config/default/*.yml; do
        cp $filename ${CONFIG_LOCATION}
    done
 
    if [ -d /opt/spinnaker/config/custom ]; then
        for filename in /opt/spinnaker/config/custom/*; do
            cp $filename ${CONFIG_LOCATION}
        done
    fi
 
    add_ca_certs() {
      ca_cert_path="$1"
      jks_path="$2"
      alias="$3"
 
      if [[ "$(whoami)" != "root" ]]; then
        echo "INFO: I do not have proper permisions to add CA roots"
        return
      fi
 
      if [[ ! -f ${ca_cert_path} ]]; then
        echo "INFO: No CA cert found at ${ca_cert_path}"
        return
      fi
      keytool -importcert \
          -file ${ca_cert_path} \
          -keystore ${jks_path} \
          -alias ${alias} \
          -storepass changeit \
          -noprompt
    }
 
    if [ `which keytool` ]; then
      echo "INFO: Keytool found adding certs where appropriate"
      add_ca_certs "${CONFIG_LOCATION}/ca.crt" "/etc/ssl/certs/java/cacerts" "custom-ca"
    else
      echo "INFO: Keytool not found, not adding any certs/private keys"
    fi
 
    saml_pem_path="/opt/spinnaker/config/custom/saml.pem"
    saml_pkcs12_path="/tmp/saml.pkcs12"
    saml_jks_path="${CONFIG_LOCATION}/saml.jks"
 
    x509_ca_cert_path="/opt/spinnaker/config/custom/x509ca.crt"
    x509_client_cert_path="/opt/spinnaker/config/custom/x509client.crt"
    x509_jks_path="${CONFIG_LOCATION}/x509.jks"
    x509_nginx_cert_path="/opt/nginx/certs/ssl.crt"
 
    if [ "${CONTAINER}" == "gate" ]; then
        if [ -f ${saml_pem_path} ]; then
            echo "Loading ${saml_pem_path} into ${saml_jks_path}"
            openssl pkcs12 -export -out ${saml_pkcs12_path} -in ${saml_pem_path} -password pass:changeit -name saml
            keytool -genkey -v -keystore ${saml_jks_path} -alias saml \
                    -keyalg RSA -keysize 2048 -validity 10000 \
                    -storepass changeit -keypass changeit -dname "CN=armory"
            keytool -importkeystore \
                    -srckeystore ${saml_pkcs12_path} \
                    -srcstoretype PKCS12 \
                    -srcstorepass changeit \
                    -destkeystore ${saml_jks_path} \
                    -deststoretype JKS \
                    -storepass changeit \
                    -alias saml \
                    -destalias saml \
                    -noprompt
        else
            echo "No SAML IDP pemfile found at ${saml_pem_path}"
        fi
        if [ -f ${x509_ca_cert_path} ]; then
            echo "Loading ${x509_ca_cert_path} into ${x509_jks_path}"
            add_ca_certs ${x509_ca_cert_path} ${x509_jks_path} "ca"
        else
            echo "No x509 CA cert found at ${x509_ca_cert_path}"
        fi
        if [ -f ${x509_client_cert_path} ]; then
            echo "Loading ${x509_client_cert_path} into ${x509_jks_path}"
            add_ca_certs ${x509_client_cert_path} ${x509_jks_path} "client"
        else
            echo "No x509 Client cert found at ${x509_client_cert_path}"
        fi
 
        if [ -f ${x509_nginx_cert_path} ]; then
            echo "Creating a self-signed CA (EXPIRES IN 360 DAYS) with java keystore: ${x509_jks_path}"
            echo -e "\n\n\n\n\n\ny\n" | keytool -genkey -keyalg RSA -alias server -keystore keystore.jks -storepass changeit -validity 360 -keysize 2048
            keytool -importkeystore \
                    -srckeystore keystore.jks \
                    -srcstorepass changeit \
                    -destkeystore "${x509_jks_path}" \
                    -storepass changeit \
                    -srcalias server \
                    -destalias server \
                    -noprompt
        else
            echo "No x509 nginx cert found at ${x509_nginx_cert_path}"
        fi
    fi
 
    if [ "${CONTAINER}" == "nginx" ]; then
        nginx_conf_path="/opt/spinnaker/config/default/nginx.conf"
        if [ -f ${nginx_conf_path} ]; then
            cp ${nginx_conf_path} /etc/nginx/nginx.conf
        fi
    fi
 
  fiat.yml: |-
    server:
      port: ${services.fiat.port:7003}
      address: ${services.fiat.host:localhost}
 
    redis:
      connection: ${services.redis.connection:redis://localhost:6379}
 
    spectator:
      applicationName: ${spring.application.name}
      webEndpoint:
        enabled: ${services.spectator.webEndpoint.enabled:false}
        prototypeFilter:
          path: ${services.spectator.webEndpoint.prototypeFilter.path:}
      stackdriver:
        enabled: ${services.stackdriver.enabled}
        projectName: ${services.stackdriver.projectName}
        credentialsPath: ${services.stackdriver.credentialsPath}
 
    hystrix:
     command:
       default.execution.isolation.thread.timeoutInMilliseconds: 20000
 
    logging:
      level:
        com.netflix.spinnaker.fiat: DEBUG
  front50-armory.yml: |
    spinnaker:
      redis:
        enabled: true
        host: redis
  front50.yml: |
    server:
      port: ${services.front50.port:8080}
      address: ${services.front50.host:localhost}
 
    hystrix:
      command:
        default.execution.isolation.thread.timeoutInMilliseconds: 15000
 
    cassandra:
      enabled: ${services.front50.cassandra.enabled:false}
      embedded: ${services.cassandra.embedded:false}
      host: ${services.cassandra.host:localhost}
 
    aws:
      simpleDBEnabled: ${providers.aws.simpleDBEnabled:false}
      defaultSimpleDBDomain: ${providers.aws.defaultSimpleDBDomain}
 
    spinnaker:
      cassandra:
        enabled: ${services.front50.cassandra.enabled:false}
        host: ${services.cassandra.host:localhost}
        port: ${services.cassandra.port:9042}
        cluster: ${services.cassandra.cluster:CASS_SPINNAKER}
        keyspace: front50
        name: global
 
      redis:
        enabled: ${services.front50.redis.enabled:false}
 
      gcs:
        enabled: ${services.front50.gcs.enabled:false}
        bucket: ${services.front50.storage_bucket:}
        bucketLocation: ${services.front50.bucket_location:}
        rootFolder: ${services.front50.rootFolder:front50}
        project: ${providers.google.primaryCredentials.project}
        jsonPath: ${providers.google.primaryCredentials.jsonPath}
 
      s3:
        enabled: ${services.front50.s3.enabled:false}
        bucket: ${services.front50.storage_bucket:}
        rootFolder: ${services.front50.rootFolder:front50}
 
    spectator:
      applicationName: ${spring.application.name}
      webEndpoint:
        enabled: ${services.spectator.webEndpoint.enabled:false}
        prototypeFilter:
          path: ${services.spectator.webEndpoint.prototypeFilter.path:}
 
      stackdriver:
        enabled: ${services.stackdriver.enabled}
        projectName: ${services.stackdriver.projectName}
        credentialsPath: ${services.stackdriver.credentialsPath}
 
    stackdriver:
      hints:
        - name: controller.invocations
          labels:
          - application
          - cause
        - name: aws.request.httpRequestTime
          labels:
          - status
          - exception
          - AWSErrorCode
        - name: aws.request.requestSigningTime
          labels:
          - exception
  gate-armory.yml: |+
    lighthouse:
        baseUrl: http://${DEFAULT_DNS_NAME:lighthouse}:5000
 
  gate.yml: |
    server:
      port: ${services.gate.port:8084}
      address: ${services.gate.host:localhost}
 
    redis:
      connection: ${REDIS_HOST:redis://localhost:6379}
      configuration:
        secure: true
 
    spectator:
      applicationName: ${spring.application.name}
      webEndpoint:
        enabled: ${services.spectator.webEndpoint.enabled:false}
        prototypeFilter:
          path: ${services.spectator.webEndpoint.prototypeFilter.path:}
 
      stackdriver:
        enabled: ${services.stackdriver.enabled}
        projectName: ${services.stackdriver.projectName}
        credentialsPath: ${services.stackdriver.credentialsPath}
 
    stackdriver:
      hints:
        - name: EurekaOkClient_Request
          labels:
          - cause
          - reason
          - status
  igor-nonpolling.yml: |
    jenkins:
      polling:
        enabled: false
  igor.yml: |
    server:
      port: ${services.igor.port:8088}
      address: ${services.igor.host:localhost}
 
    jenkins:
      enabled: ${services.jenkins.enabled:false}
      masters:
        - name: ${services.jenkins.defaultMaster.name}
          address: ${services.jenkins.defaultMaster.baseUrl}
          username: ${services.jenkins.defaultMaster.username}
          password: ${services.jenkins.defaultMaster.password}
          csrf: ${services.jenkins.defaultMaster.csrf:false}
          
    travis:
      enabled: ${services.travis.enabled:false}
      masters:
        - name: ${services.travis.defaultMaster.name}
          baseUrl: ${services.travis.defaultMaster.baseUrl}
          address: ${services.travis.defaultMaster.address}
          githubToken: ${services.travis.defaultMaster.githubToken}
 
 
    dockerRegistry:
      enabled: ${providers.dockerRegistry.enabled:false}
 
 
    redis:
      connection: ${REDIS_HOST:redis://localhost:6379}
 
    spectator:
      applicationName: ${spring.application.name}
      webEndpoint:
        enabled: ${services.spectator.webEndpoint.enabled:false}
        prototypeFilter:
          path: ${services.spectator.webEndpoint.prototypeFilter.path:}
      stackdriver:
        enabled: ${services.stackdriver.enabled}
        projectName: ${services.stackdriver.projectName}
        credentialsPath: ${services.stackdriver.credentialsPath}
 
    stackdriver:
      hints:
        - name: controller.invocations
          labels:
          - master
  kayenta-armory.yml: |
    kayenta:
      aws:
        enabled: ${ARMORYSPINNAKER_S3_ENABLED:false}
        accounts:
          - name: aws-s3-storage
            bucket: ${ARMORYSPINNAKER_CONF_STORE_BUCKET}
            rootFolder: kayenta
            supportedTypes:
              - OBJECT_STORE
              - CONFIGURATION_STORE
 
      s3:
        enabled: ${ARMORYSPINNAKER_S3_ENABLED:false}
 
      google:
        enabled: ${ARMORYSPINNAKER_GCS_ENABLED:false}
        accounts:
          - name: cloud-armory
            bucket: ${ARMORYSPINNAKER_CONF_STORE_BUCKET}
            rootFolder: kayenta-prod
            supportedTypes:
              - METRICS_STORE
              - OBJECT_STORE
              - CONFIGURATION_STORE
              
      gcs:
        enabled: ${ARMORYSPINNAKER_GCS_ENABLED:false}
  kayenta.yml: |2
 
    server:
      port: 8090
 
    kayenta:
      atlas:
        enabled: false
 
      google:
        enabled: false
 
      aws:
        enabled: false
 
      datadog:
        enabled: false
 
      prometheus:
        enabled: false
 
      gcs:
        enabled: false
 
      s3:
        enabled: false
 
      stackdriver:
        enabled: false
 
      memory:
        enabled: false
 
      configbin:
        enabled: false
 
    keiko:
      queue:
        redis:
          queueName: kayenta.keiko.queue
          deadLetterQueueName: kayenta.keiko.queue.deadLetters
 
    redis:
      connection: ${REDIS_HOST:redis://localhost:6379}
 
    spectator:
      applicationName: ${spring.application.name}
      webEndpoint:
        enabled: true
 
    swagger:
      enabled: true
      title: Kayenta API
      description:
      contact:
      patterns:
        - /admin.*
        - /canary.*
        - /canaryConfig.*
        - /canaryJudgeResult.*
        - /credentials.*
        - /fetch.*
        - /health
        - /judges.*
        - /metadata.*
        - /metricSetList.*
        - /metricSetPairList.*
        - /pipeline.*
 
    security.basic.enabled: false
    management.security.enabled: false
  nginx.conf: |
    user  nginx;
    worker_processes  1;
 
    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;
 
    events {
        worker_connections  1024;
    }
 
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
        access_log  /var/log/nginx/access.log  main;
        
        sendfile        on;
        keepalive_timeout  65;
        include /etc/nginx/conf.d/*.conf;
    }
 
    stream {
        upstream gate_api {
            server armory-gate:8085;
        }
 
        server {
            listen 8085;
            proxy_pass gate_api;
        }
    }
  nginx.http.conf: |
    gzip on;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;
 
    server {
           listen 80;
           listen [::]:80;
 
           location / {
                proxy_pass http://armory-deck/;
           }
 
           location /api/ {
                proxy_pass http://armory-gate:8084/;
           }
 
           location /slack/ {
               proxy_pass http://armory-platform:10000/;
           }
 
           rewrite ^/login(.*)$ /api/login$1 last;
           rewrite ^/auth(.*)$ /api/auth$1 last;
    }
  nginx.https.conf: |
    gzip on;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;
 
    server {
        listen 80;
        listen [::]:80;
        return 301 https://$host$request_uri;
    }
 
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        ssl on;
        ssl_certificate /opt/nginx/certs/ssl.crt;
        ssl_certificate_key /opt/nginx/certs/ssl.key;
 
        location / {
            proxy_pass http://armory-deck/;
        }
 
        location /api/ {
            proxy_pass http://armory-gate:8084/;
            proxy_set_header Host            $host;
            proxy_set_header X-Real-IP       $proxy_protocol_addr;
            proxy_set_header X-Forwarded-For $proxy_protocol_addr;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
 
        location /slack/ {
            proxy_pass http://armory-platform:10000/;
        }
        rewrite ^/login(.*)$ /api/login$1 last;
        rewrite ^/auth(.*)$ /api/auth$1 last;
    }
  orca-armory.yml: |
    mine:
      baseUrl: http://${services.barometer.host}:${services.barometer.port}
 
    pipelineTemplate:
      enabled: ${features.pipelineTemplates.enabled:false}
      jinja:
        enabled: true
 
    kayenta:
      enabled: ${services.kayenta.enabled:false}
      baseUrl: ${services.kayenta.baseUrl}
 
    jira:
      enabled: ${features.jira.enabled:false}
      basicAuth:  "Basic ${features.jira.basicAuthToken}"
      url: ${features.jira.createIssueUrl}
 
    webhook:
      preconfigured:
        - label: Enforce Pipeline Policy
          description: Checks pipeline configuration against policy requirements
          type: enforcePipelinePolicy
          enabled: ${features.certifiedPipelines.enabled:false}
          url: "http://lighthouse:5000/v1/pipelines/${execution.application}/${execution.pipelineConfigId}?check_policy=yes"
          headers:
            Accept:
              - application/json
          method: GET
          waitForCompletion: true
          statusUrlResolution: getMethod
          statusJsonPath: $.status
          successStatuses: pass
          canceledStatuses:
          terminalStatuses: TERMINAL
 
        - label: "Jira: Create Issue"
          description:  Enter a Jira ticket when this pipeline runs
          type: createJiraIssue
          enabled: ${jira.enabled}
          url:  ${jira.url}
          customHeaders:
            "Content-Type": application/json
            Authorization: ${jira.basicAuth}
          method: POST
          parameters:
            - name: summary
              label: Issue Summary
              description: A short summary of your issue.
            - name: description
              label: Issue Description
              description: A longer description of your issue.
            - name: projectKey
              label: Project key
              description: The key of your JIRA project.
            - name: type
              label: Issue Type
              description: The type of your issue, e.g. "Task", "Story", etc.
          payload: |
            {
              "fields" : {
                "description": "${parameterValues['description']}",
                "issuetype": {
                   "name": "${parameterValues['type']}"
                },
                "project": {
                   "key": "${parameterValues['projectKey']}"
                },
                "summary":  "${parameterValues['summary']}"
              }
            }
          waitForCompletion: false
 
        - label: "Jira: Update Issue"
          description:  Update a previously created Jira Issue
          type: updateJiraIssue
          enabled: ${jira.enabled}
          url: "${execution.stages.?[type == 'createJiraIssue'][0]['context']['buildInfo']['self']}"
          customHeaders:
            "Content-Type": application/json
            Authorization: ${jira.basicAuth}
          method: PUT
          parameters:
            - name: summary
              label: Issue Summary
              description: A short summary of your issue.
            - name: description
              label: Issue Description
              description: A longer description of your issue.
          payload: |
            {
              "fields" : {
                "description": "${parameterValues['description']}",
                "summary": "${parameterValues['summary']}"
              }
            }
          waitForCompletion: false
 
        - label: "Jira: Transition Issue"
          description:  Change state of existing Jira Issue
          type: transitionJiraIssue
          enabled: ${jira.enabled}
          url: "${execution.stages.?[type == 'createJiraIssue'][0]['context']['buildInfo']['self']}/transitions"
          customHeaders:
            "Content-Type": application/json
            Authorization: ${jira.basicAuth}
          method: POST
          parameters:
            - name: newStateID
              label: New State ID
              description: The ID of the state you want to transition the issue to.
          payload: |
            {
              "transition" : {
                "id" : "${parameterValues['newStateID']}"
              }
            }
          waitForCompletion: false
        - label: "Jira: Add Comment"
          description:  Add a comment to an existing Jira Issue
          type: commentJiraIssue
          enabled: ${jira.enabled}
          url: "${execution.stages.?[type == 'createJiraIssue'][0]['context']['buildInfo']['self']}/comment"
          customHeaders:
            "Content-Type": application/json
            Authorization: ${jira.basicAuth}
          method: POST
          parameters:
            - name: body
              label: Comment body
              description: The text body of the component.
          payload: |
            {
              "body" : "${parameterValues['body']}"
            }
          waitForCompletion: false
 
  orca.yml: |
    server:
        port: ${services.orca.port:8083}
        address: ${services.orca.host:localhost}
    oort:
        baseUrl: ${services.oort.baseUrl:localhost:7002}
    front50:
        baseUrl: ${services.front50.baseUrl:localhost:8080}
    mort:
        baseUrl: ${services.mort.baseUrl:localhost:7002}
    kato:
        baseUrl: ${services.kato.baseUrl:localhost:7002}
    bakery:
        baseUrl: ${services.bakery.baseUrl:localhost:8087}
        extractBuildDetails: ${services.bakery.extractBuildDetails:true}
        allowMissingPackageInstallation: ${services.bakery.allowMissingPackageInstallation:true}
    echo:
        enabled: ${services.echo.enabled:false}
        baseUrl: ${services.echo.baseUrl:8089}
    igor:
        baseUrl: ${services.igor.baseUrl:8088}
    flex:
      baseUrl: http://not-a-host
    default:
      bake:
        account: ${providers.aws.primaryCredentials.name}
      securityGroups:
      vpc:
        securityGroups:
    redis:
      connection: ${REDIS_HOST:redis://localhost:6379}
    tasks:
      executionWindow:
        timezone: ${services.orca.timezone}
    spectator:
      applicationName: ${spring.application.name}
      webEndpoint:
        enabled: ${services.spectator.webEndpoint.enabled:false}
        prototypeFilter:
          path: ${services.spectator.webEndpoint.prototypeFilter.path:}        
      stackdriver:
        enabled: ${services.stackdriver.enabled}
        projectName: ${services.stackdriver.projectName}
        credentialsPath: ${services.stackdriver.credentialsPath}
    stackdriver:
      hints:
        - name: controller.invocations
          labels:
          - application
  rosco-armory.yml: |
    redis:
      timeout: 50000
 
    rosco:
      jobs:
        local:
          timeoutMinutes: 60
  rosco.yml: |
    server:
      port: ${services.rosco.port:8087}
      address: ${services.rosco.host:localhost}
 
    redis:
      connection: ${REDIS_HOST:redis://localhost:6379}
 
    aws:
      enabled: ${providers.aws.enabled:false}
 
    docker:
      enabled: ${services.docker.enabled:false}
      bakeryDefaults:
        targetRepository: ${services.docker.targetRepository}
 
    google:
      enabled: ${providers.google.enabled:false}
      accounts:
        - name: ${providers.google.primaryCredentials.name}
          project: ${providers.google.primaryCredentials.project}
          jsonPath: ${providers.google.primaryCredentials.jsonPath}
      gce:
        bakeryDefaults:
          zone: ${providers.google.defaultZone}
 
    rosco:
      configDir: ${services.rosco.configDir}
      jobs:
        local:
          timeoutMinutes: 30
 
    spectator:
      applicationName: ${spring.application.name}
      webEndpoint:
        enabled: ${services.spectator.webEndpoint.enabled:false}
        prototypeFilter:
          path: ${services.spectator.webEndpoint.prototypeFilter.path:}
      stackdriver:
        enabled: ${services.stackdriver.enabled}
        projectName: ${services.stackdriver.projectName}
        credentialsPath: ${services.stackdriver.credentialsPath}
 
    stackdriver:
      hints:
        - name: bakes
          labels:
          - success
  spinnaker-armory.yml: |
    armory:
      architecture: 'k8s'
      
    features:
      artifacts:
        enabled: true
      pipelineTemplates:
        enabled: ${PIPELINE_TEMPLATES_ENABLED:false}
      infrastructureStages:
        enabled: ${INFRA_ENABLED:false}
      certifiedPipelines:
        enabled: ${CERTIFIED_PIPELINES_ENABLED:false}
      configuratorEnabled:
        enabled: true
      configuratorWizard:
        enabled: true
      configuratorCerts:
        enabled: true
      loadtestStage:
        enabled: ${LOADTEST_ENABLED:false}
      jira:
        enabled: ${JIRA_ENABLED:false}
        basicAuthToken: ${JIRA_BASIC_AUTH}
        url: ${JIRA_URL}
        login: ${JIRA_LOGIN}
        password: ${JIRA_PASSWORD}
 
      slaEnabled:
        enabled: ${SLA_ENABLED:false}
      chaosMonkey:
        enabled: ${CHAOS_ENABLED:false}
 
      armoryPlatform:
        enabled: ${PLATFORM_ENABLED:false}
        uiEnabled: ${PLATFORM_UI_ENABLED:false}
 
    services:
      default:
        host: ${DEFAULT_DNS_NAME:localhost}
 
      clouddriver:
        host: ${DEFAULT_DNS_NAME:armory-clouddriver}
        entityTags:
          enabled: false
 
      configurator:
        baseUrl: http://${CONFIGURATOR_HOST:armory-configurator}:8069
 
      echo:
        host: ${DEFAULT_DNS_NAME:armory-echo}
 
      deck:
        gateUrl: ${API_HOST:service.default.host}
        baseUrl: ${DECK_HOST:armory-deck}
 
      dinghy:
        enabled: ${DINGHY_ENABLED:false}
        host: ${DEFAULT_DNS_NAME:armory-dinghy}
        baseUrl: ${services.default.protocol}://${services.dinghy.host}:${services.dinghy.port}
        port: 8081
 
      front50:
        host: ${DEFAULT_DNS_NAME:armory-front50}
        cassandra:
          enabled: false
        redis:
          enabled: true
        gcs:
          enabled: ${ARMORYSPINNAKER_GCS_ENABLED:false}
        s3:
          enabled: ${ARMORYSPINNAKER_S3_ENABLED:false}
        storage_bucket: ${ARMORYSPINNAKER_CONF_STORE_BUCKET}
        rootFolder: ${ARMORYSPINNAKER_CONF_STORE_PREFIX:front50}
 
      gate:
        host: ${DEFAULT_DNS_NAME:armory-gate}
 
      igor:
        host: ${DEFAULT_DNS_NAME:armory-igor}
 
 
      kayenta:
        enabled: true
        host: ${DEFAULT_DNS_NAME:armory-kayenta}
        canaryConfigStore: true
        port: 8090
        baseUrl: ${services.default.protocol}://${services.kayenta.host}:${services.kayenta.port}
        metricsStore: ${METRICS_STORE:stackdriver}
        metricsAccountName: ${METRICS_ACCOUNT_NAME}
        storageAccountName: ${STORAGE_ACCOUNT_NAME}
        atlasWebComponentsUrl: ${ATLAS_COMPONENTS_URL:}
        
      lighthouse:
        host: ${DEFAULT_DNS_NAME:armory-lighthouse}
        port: 5000
        baseUrl: ${services.default.protocol}://${services.lighthouse.host}:${services.lighthouse.port}
 
      orca:
        host: ${DEFAULT_DNS_NAME:armory-orca}
 
      platform:
        enabled: ${PLATFORM_ENABLED:false}
        host: ${DEFAULT_DNS_NAME:armory-platform}
        baseUrl: ${services.default.protocol}://${services.platform.host}:${services.platform.port}
        port: 5001
 
      rosco:
        host: ${DEFAULT_DNS_NAME:armory-rosco}
        enabled: true
        configDir: /opt/spinnaker/config/packer
 
      bakery:
        allowMissingPackageInstallation: true
 
      barometer:
        enabled: ${BAROMETER_ENABLED:false}
        host: ${DEFAULT_DNS_NAME:armory-barometer}
        baseUrl: ${services.default.protocol}://${services.barometer.host}:${services.barometer.port}
        port: 9092
        newRelicEnabled: ${NEW_RELIC_ENABLED:false}
 
      redis:
        host: redis
        port: 6379
        connection: ${REDIS_HOST:redis://localhost:6379}
 
      fiat:
        enabled: ${FIAT_ENABLED:false}
        host: ${DEFAULT_DNS_NAME:armory-fiat}
        port: 7003
        baseUrl: ${services.default.protocol}://${services.fiat.host}:${services.fiat.port}
 
    providers:
      aws:
        enabled: ${SPINNAKER_AWS_ENABLED:true}
        defaultRegion: ${SPINNAKER_AWS_DEFAULT_REGION:us-west-2}
        defaultIAMRole: ${SPINNAKER_AWS_DEFAULT_IAM_ROLE:SpinnakerInstanceProfile}
        defaultAssumeRole: ${SPINNAKER_AWS_DEFAULT_ASSUME_ROLE:SpinnakerManagedProfile}
        primaryCredentials:
          name: ${SPINNAKER_AWS_DEFAULT_ACCOUNT:default-aws-account}
 
      kubernetes:
        proxy: localhost:8001
        apiPrefix: api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/#
  spinnaker.yml: |2
    global:
      spinnaker:
        timezone: 'America/Los_Angeles'
        architecture: ${PLATFORM_ARCHITECTURE}
 
    services:
      default:
        host: localhost
        protocol: http
      clouddriver:
        host: ${services.default.host}
        port: 7002
        baseUrl: ${services.default.protocol}://${services.clouddriver.host}:${services.clouddriver.port}
        aws:
          udf:
            enabled: true
 
      echo:
        enabled: true
        host: ${services.default.host}
        port: 8089
        baseUrl: ${services.default.protocol}://${services.echo.host}:${services.echo.port}
        cassandra:
          enabled: false
        inMemory:
          enabled: true
 
        cron:
          enabled: true
          timezone: ${global.spinnaker.timezone}
 
        notifications:
          mail:
            enabled: false
            host: # the smtp host
            fromAddress: # the address for which emails are sent from
          hipchat:
            enabled: false
            url: # the hipchat server to connect to
            token: # the hipchat auth token
            botName: # the username of the bot
          sms:
            enabled: false
            account: # twilio account id
            token: # twilio auth token
            from: # phone number by which sms messages are sent
          slack:
            enabled: false
            token: # the API token for the bot
            botName: # the username of the bot
 
      deck:
        host: ${services.default.host}
        port: 9000
        baseUrl: ${services.default.protocol}://${services.deck.host}:${services.deck.port}
        gateUrl: ${API_HOST:services.gate.baseUrl}
        bakeryUrl: ${services.bakery.baseUrl}
        timezone: ${global.spinnaker.timezone}
        auth:
          enabled: ${AUTH_ENABLED:false}
 
 
      fiat:
        enabled: false
        host: ${services.default.host}
        port: 7003
        baseUrl: ${services.default.protocol}://${services.fiat.host}:${services.fiat.port}
 
      front50:
        host: ${services.default.host}
        port: 8080
        baseUrl: ${services.default.protocol}://${services.front50.host}:${services.front50.port}
        storage_bucket: ${SPINNAKER_DEFAULT_STORAGE_BUCKET:}
        bucket_location:
        bucket_root: front50
        cassandra:
          enabled: false
        redis:
          enabled: false
        gcs:
          enabled: false
        s3:
          enabled: false
 
      gate:
        host: ${services.default.host}
        port: 8084
        baseUrl: ${services.default.protocol}://${services.gate.host}:${services.gate.port}
 
      igor:
        enabled: false
        host: ${services.default.host}
        port: 8088
        baseUrl: ${services.default.protocol}://${services.igor.host}:${services.igor.port}
 
      kato:
        host: ${services.clouddriver.host}
        port: ${services.clouddriver.port}
        baseUrl: ${services.clouddriver.baseUrl}
 
      mort:
        host: ${services.clouddriver.host}
        port: ${services.clouddriver.port}
        baseUrl: ${services.clouddriver.baseUrl}
 
      orca:
        host: ${services.default.host}
        port: 8083
        baseUrl: ${services.default.protocol}://${services.orca.host}:${services.orca.port}
        timezone: ${global.spinnaker.timezone}
        enabled: true
 
      oort:
        host: ${services.clouddriver.host}
        port: ${services.clouddriver.port}
        baseUrl: ${services.clouddriver.baseUrl}
 
      rosco:
        host: ${services.default.host}
        port: 8087
        baseUrl: ${services.default.protocol}://${services.rosco.host}:${services.rosco.port}
        configDir: /opt/rosco/config/packer
 
      bakery:
        host: ${services.rosco.host}
        port: ${services.rosco.port}
        baseUrl: ${services.rosco.baseUrl}
        extractBuildDetails: true
        allowMissingPackageInstallation: false
 
      docker:
        targetRepository: # Optional, but expected in spinnaker-local.yml if specified.
 
      jenkins:
        enabled: ${services.igor.enabled:false}
        defaultMaster:
          name: Jenkins
          baseUrl:   # Expected in spinnaker-local.yml
          username:  # Expected in spinnaker-local.yml
          password:  # Expected in spinnaker-local.yml
 
      redis:
        host: redis
        port: 6379
        connection: ${REDIS_HOST:redis://localhost:6379}
 
      cassandra:
        host: ${services.default.host}
        port: 9042
        embedded: false
        cluster: CASS_SPINNAKER
 
      travis:
        enabled: false
        defaultMaster:
          name: ci # The display name for this server. Gets prefixed with "travis-"
          baseUrl: https://travis-ci.com
          address: https://api.travis-ci.org
          githubToken: # GitHub scopes currently required by Travis is required.
 
      spectator:
        webEndpoint:
          enabled: false
 
      stackdriver:
        enabled: ${SPINNAKER_STACKDRIVER_ENABLED:false}
        projectName: ${SPINNAKER_STACKDRIVER_PROJECT_NAME:${providers.google.primaryCredentials.project}}
        credentialsPath: ${SPINNAKER_STACKDRIVER_CREDENTIALS_PATH:${providers.google.primaryCredentials.jsonPath}}
 
 
    providers:
      aws:
        enabled: ${SPINNAKER_AWS_ENABLED:false}
        simpleDBEnabled: false
        defaultRegion: ${SPINNAKER_AWS_DEFAULT_REGION:us-west-2}
        defaultIAMRole: BaseIAMRole
        defaultSimpleDBDomain: CLOUD_APPLICATIONS
        primaryCredentials:
          name: default
        defaultKeyPairTemplate: "{{name}}-keypair"
 
 
      google:
        enabled: ${SPINNAKER_GOOGLE_ENABLED:false}
        defaultRegion: ${SPINNAKER_GOOGLE_DEFAULT_REGION:us-central1}
        defaultZone: ${SPINNAKER_GOOGLE_DEFAULT_ZONE:us-central1-f}
 
 
        primaryCredentials:
          name: my-account-name
          project: ${SPINNAKER_GOOGLE_PROJECT_ID:}
          jsonPath: ${SPINNAKER_GOOGLE_PROJECT_CREDENTIALS_PATH:}
          consul:
            enabled: ${SPINNAKER_GOOGLE_CONSUL_ENABLED:false}
 
 
      cf:
        enabled: false
        defaultOrg: spinnaker-cf-org
        defaultSpace: spinnaker-cf-space
        primaryCredentials:
          name: my-cf-account
          api: my-cf-api-uri
          console: my-cf-console-base-url
 
      azure:
        enabled: ${SPINNAKER_AZURE_ENABLED:false}
        defaultRegion: ${SPINNAKER_AZURE_DEFAULT_REGION:westus}
        primaryCredentials:
          name: my-azure-account
 
          clientId:
          appKey:
          tenantId:
          subscriptionId:
 
      titan:
        enabled: false
        defaultRegion: us-east-1
        primaryCredentials:
          name: my-titan-account
 
      kubernetes:
 
        enabled: ${SPINNAKER_KUBERNETES_ENABLED:false}
        primaryCredentials:
          name: my-kubernetes-account
          namespace: default
          dockerRegistryAccount: ${providers.dockerRegistry.primaryCredentials.name}
 
      dockerRegistry:
        enabled: ${SPINNAKER_KUBERNETES_ENABLED:false}
 
        primaryCredentials:
          name: my-docker-registry-account
          address: ${SPINNAKER_DOCKER_REGISTRY:https://index.docker.io/ }
          repository: ${SPINNAKER_DOCKER_REPOSITORY:}
          username: ${SPINNAKER_DOCKER_USERNAME:}
          passwordFile: ${SPINNAKER_DOCKER_PASSWORD_FILE:}
          
      openstack:
        enabled: false
        defaultRegion: ${SPINNAKER_OPENSTACK_DEFAULT_REGION:RegionOne}
        primaryCredentials:
          name: my-openstack-account
          authUrl: ${OS_AUTH_URL}
          username: ${OS_USERNAME}
          password: ${OS_PASSWORD}
          projectName: ${OS_PROJECT_NAME}
          domainName: ${OS_USER_DOMAIN_NAME:Default}
          regions: ${OS_REGION_NAME:RegionOne}
          insecure: false

Notes:

As you can see, besides the base configuration (clouddriver-armory.yml), the config is split by environment (clouddriver-dev.yml, clouddriver.yml) and by cloud provider (aws, azure, google).
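To see exactly which profile files end up in this ConfigMap, you can list the keys of default-config after the manifests below have been applied. This is only an illustrative check; the go-template prints one key per line:

[root@hdss7-200 armory]# kubectl -n armory get cm default-config -o go-template='{{range $k, $v := .data}}{{$k}}{{"\n"}}{{end}}'
# expect keys such as clouddriver.yml, clouddriver-armory.yml, clouddriver-dev.yml, orca.yml, rosco.yml, spinnaker.yml, spinnaker-armory.yml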

[root@hdss7-200 armory]# vi custom-config.yaml  # custom configuration

# custom-config.yaml
# This ConfigMap defines how Spinnaker reaches k8s, harbor, minio and Jenkins.
# Some of the addresses can use short service names, depending on whether the caller is inside the k8s cluster and whether it is in the same namespace.
kind: ConfigMap
apiVersion: v1
metadata:
  name: custom-config
  namespace: armory
data:
  clouddriver-local.yml: |
    kubernetes:
      enabled: true
      accounts:
        - name: cluster-admin
          serviceAccount: false
          dockerRegistries:
            - accountName: harbor
              namespace: []
          namespaces:
            - test
            - prod
          kubeconfigFile: /opt/spinnaker/credentials/custom/default-kubeconfig
      primaryAccount: cluster-admin
    dockerRegistry:
      enabled: true
      accounts:
        - name: harbor
          requiredGroupMembership: []
          providerVersion: V1
          insecureRegistry: true
          address: http://harbor.od.com:180
          username: admin
          password: Harbor12345
      primaryAccount: harbor
    artifacts:
      s3:
        enabled: true
        accounts:
        - name: armory-config-s3-account
          apiEndpoint: http://minio
          apiRegion: us-east-1
      gcs:
        enabled: false
        accounts:
        - name: armory-config-gcs-account
  custom-config.json: ""
  echo-configurator.yml: |
    diagnostics:
      enabled: true
  front50-local.yml: |
    spinnaker:
      s3:
        endpoint: http://minio
  igor-local.yml: |
    jenkins:
      enabled: true
      masters:
        - name: jenkins-admin
          address: http://jenkins.od.com
          username: admin
          password: admin123
      primaryAccount: jenkins-admin
  nginx.conf: |
    gzip on;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;
    server {
           listen 80;
           location / {
                proxy_pass http://armory-deck/;
           }
           location /api/ {
                proxy_pass http://armory-gate:8084/;
           }
           rewrite ^/login(.*)$ /api/login$1 last;
           rewrite ^/auth(.*)$ /api/auth$1 last;
    }
  spinnaker-local.yml: |
    services:
      igor:
        enabled: true

Notes:

clouddriver-local.yml configuration:
  accounts:
    - name: cluster-admin      # the account to use; cluster-admin is the UA (user account) created earlier
      dockerRegistries:
        - accountName: harbor  # use the docker registry named "harbor", defined under dockerRegistry below
      namespaces:              # the two namespaces this account manages
        - test
        - prod
  dockerRegistry:              # the registry configuration that accountName "harbor" above refers to
    accounts:
      - name: harbor

echo-configurator.yml configuration: only turns on diagnostics (diagnostics.enabled: true).

front50-local.yml configuration:

  endpoint: http://minio   # front50 talks to minio through this endpoint. It works because "minio" is a Service name: earlier we created a Service resource named minio (kind: Service, name: minio, port: 80, targetPort: 9000), whose port 80 proxies port 9000 of the minio pod. So http://minio is really http://minio:80, which ends up at port 9000 on the minio pod's IP.
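A quick way to confirm this Service mapping (illustrative; the minio Service and pod were created in an earlier step):

[root@hdss7-200 armory]# kubectl -n armory get svc minio
# PORT(S) should show 80/TCP, i.e. Service port 80 -> targetPort 9000 on the pod
[root@hdss7-200 armory]# kubectl -n armory get endpoints minio
# ENDPOINTS should show <minio pod IP>:9000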

igor-local.yml configuration:

  address: http://jenkins.od.com   # the Jenkins instance to connect to
  username: admin                  # Jenkins account
  password: admin123               # Jenkins password
  primaryAccount: jenkins-admin    # the primary Jenkins master, i.e. the master defined above

nginx.conf configuration: the outermost proxy
  proxy_pass http://armory-deck/;       # the upstream is the Service name armory-deck
  proxy_pass http://armory-gate:8084/;  # the upstream is the Service name armory-gate
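These upstream names are plain Service names in the armory namespace. Once armory-deck and armory-gate are deployed later in this section, you can confirm nginx has something to proxy to (illustrative check):

[root@hdss7-200 armory]# kubectl -n armory get svc armory-deck armory-gate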

[root@hdss7-200 armory]# vi dp.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: armory-clouddriver
  name: armory-clouddriver
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: armory-clouddriver
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: '"armory"'
        artifact.spinnaker.io/name: '"armory-clouddriver"'
        artifact.spinnaker.io/type: '"kubernetes/deployment"'
        moniker.spinnaker.io/application: '"armory"'
        moniker.spinnaker.io/cluster: '"clouddriver"'
      labels:
        app: armory-clouddriver
    spec:
      containers:
      - name: armory-clouddriver
        image: harbor.od.com:180/armory/clouddriver:v1.11.x
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - -c
        args:
        - bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config
          && /opt/clouddriver/bin/clouddriver
        ports:
        - containerPort: 7002
          protocol: TCP
        env:
        - name: JAVA_OPTS
          value: -Xmx2048M
        envFrom:
        - configMapRef:
            name: init-env
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /health
            port: 7002
            scheme: HTTP
          initialDelaySeconds: 600
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 5
          httpGet:
            path: /health
            port: 7002
            scheme: HTTP
          initialDelaySeconds: 180
          periodSeconds: 3
          successThreshold: 5
          timeoutSeconds: 1
        securityContext:
          runAsUser: 0
        volumeMounts:
        - mountPath: /etc/podinfo
          name: podinfo
        - mountPath: /home/spinnaker/.aws
          name: credentials
        - mountPath: /opt/spinnaker/credentials/custom
          name: default-kubeconfig
        - mountPath: /opt/spinnaker/config/default
          name: default-config
        - mountPath: /opt/spinnaker/config/custom
          name: custom-config
      imagePullSecrets:
      - name: harbor
      volumes:
      - configMap:
          defaultMode: 420
          name: default-kubeconfig
        name: default-kubeconfig
      - configMap:
          defaultMode: 420
          name: custom-config
        name: custom-config
      - configMap:
          defaultMode: 420
          name: default-config
        name: default-config
      - name: credentials
        secret:
          defaultMode: 420
          secretName: credentials
      - downwardAPI:
          defaultMode: 420
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
            path: annotations
        name: podinfo

Notes:

1. Annotations are set:
   annotations:
     artifact.spinnaker.io/location: '"armory"'
     artifact.spinnaker.io/name: '"armory-clouddriver"'

2. How the container starts: the startup script lives in default-config.yaml
   command:
     - bash
     - -c
   args:
     - bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config
       && /opt/clouddriver/bin/clouddriver
   In effect: bash -c "bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config && /opt/clouddriver/bin/clouddriver"

3. Exposes port 7002; clouddriver itself is a Java application.
    ports:
     - containerPort: 7002
       protocol: TCP

4. JVM tuning: in production raise the heap to 2048-4096M (this manifest already uses 2048M).
    - name: JAVA_OPTS
      value: -Xmx2048M

5. downwardAPI mounts some of the pod's own metadata (its labels and annotations) into the container at /etc/podinfo, so processes inside the pod can read information about themselves (see the quick check after this list).
     volumeMounts:
     - mountPath: /etc/podinfo
       name: podinfo
     volumes:
     - downwardAPI:
       name: podinfo
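A quick check of what the downwardAPI volume exposes once the pod is running (illustrative; use the pod name instead of deploy/armory-clouddriver if your kubectl version cannot exec into a Deployment):

[root@hdss7-200 armory]# kubectl -n armory exec deploy/armory-clouddriver -- cat /etc/podinfo/labels
# expect something like: app="armory-clouddriver"
[root@hdss7-200 armory]# kubectl -n armory exec deploy/armory-clouddriver -- cat /etc/podinfo/annotations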

6. credentials: the secret holding the configuration for connecting to the minio object storage.
   volumeMounts:
   - mountPath: /home/spinnaker/.aws
     name: credentials

7. The kubeconfig of the UA (user account) created earlier, mounted alongside the default config:
    volumeMounts:
    - mountPath: /opt/spinnaker/credentials/custom
      name: default-kubeconfig
    - mountPath: /opt/spinnaker/config/default
      name: default-config

8. envFrom defines a list of environment sources. In the first step we deployed a ConfigMap named init-env that defines many environment variables; envFrom injects all of them into this container.
    envFrom:
    - configMapRef:
        name: init-env

9. defaultMode: 420 is the file permission applied to the files mounted into the container (see the note right below).
     volumes:
     - configMap:
         defaultMode: 420
         name: default-kubeconfig
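The mode is written in decimal in the manifest: 420 decimal is 0644 octal, i.e. rw-r--r-- on the mounted files. A one-line sanity check in the shell:

[root@hdss7-200 armory]# printf '%o\n' 420
644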

[root@hdss7-200 armory]# vi svc.yaml   # Spinnaker clouddriver exposes an HTTP interface, but it does not serve clients outside the cluster directly, so no ingress is needed

apiVersion: v1
kind: Service
metadata:
  name: armory-clouddriver
  namespace: armory
spec:
  ports:
  - port: 7002
    protocol: TCP
    targetPort: 7002
  selector:
    app: armory-clouddriver
[root@hdss7-200 clouddriver]# kubectl apply -f init-env.yaml
[root@hdss7-200 clouddriver]# kubectl apply -f default-config.yaml
[root@hdss7-200 clouddriver]# kubectl apply -f custom-config.yaml
[root@hdss7-200 clouddriver]# kubectl apply -f dp.yaml
[root@hdss7-200 clouddriver]# kubectl apply -f svc.yaml

Verification (required step):

[root@hdss7-22 ~]# kubectl get pod -n armory -owide
NAME                                 READY   STATUS    RESTARTS   AGE     IP            NODE                NOMINATED NODE   READINESS GATES
armory-clouddriver-c45d94c59-4h87z   1/1     Running   0          4m20s   172.7.21.11   hdss7-21.host.com              
minio-847ffc9ccd-mskl2               1/1     Running   3          40h     172.7.21.8    hdss7-21.host.com              
redis-58b569cdd-4v5jk                1/1     Running   3          39h     172.7.21.4    hdss7-21.host.com              
[root@hdss7-22 ~]# curl 172.7.21.11:7002/health
{"status":"UP","kubernetes":{"status":"UP"},"dockerRegistry":{"status":"UP"},"redisHealth":{"status":"UP","maxIdle":100,"minIdle":25,"numActive":0,"numIdle":3,"numWaiters":0},"diskSpace":{"status":"UP","total":71897190400,"free":61508448256,"threshold":10485760}}

[root@hdss7-22 ~]# kubectl exec -it minio-847ffc9ccd-mskl2 -n armory sh
sh-4.4# curl armory-clouddriver:7002/health
{"status":"UP","kubernetes":{"status":"UP"},"dockerRegistry":{"status":"UP"},"redisHealth":{"status":"UP","maxIdle":100,"minIdle":25,"numActive":0,"numIdle":3,"numWaiters":0},"diskSpace":{"status":"UP","total":71897190400,"free":61508444160,"threshold":10485760}}

4. Deploy the remaining Spinnaker components

4.1 Deploy FRONT50

4.1.1 Prepare the image

[root@hdss7-200 ~]# docker pull armory/spinnaker-front50-slim:release-1.8.x-93febf2
[root@hdss7-200 ~]# docker image ls -a |grep front50
armory/spinnaker-front50-slim                  release-1.8.x-93febf2           0d353788f4f2   3 years ago     273MB

[root@hdss7-200 ~]# docker tag  0d353788f4f2 harbor.od.com:180/armory/front50:v1.8.x
[root@hdss7-200 ~]# docker push harbor.od.com:180/armory/front50:v1.8.x

4.1.2 Prepare the resource manifests

[root@hdss7-200 ~]# mkdir /data/k8s-yaml/armory/front50
[root@hdss7-200 ~]# cd /data/k8s-yaml/armory/front50/
Deployment:

cat >dp.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: armory-front50
  name: armory-front50
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: armory-front50
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: '"armory"'
        artifact.spinnaker.io/name: '"armory-front50"'
        artifact.spinnaker.io/type: '"kubernetes/deployment"'
        moniker.spinnaker.io/application: '"armory"'
        moniker.spinnaker.io/cluster: '"front50"'
      labels:
        app: armory-front50
    spec:
      containers:
      - name: armory-front50
        image: harbor.od.com:180/armory/front50:v1.8.x
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - -c
        args:
        - bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config
          && /opt/front50/bin/front50
        ports:
        - containerPort: 8080
          protocol: TCP
        env:
        - name: JAVA_OPTS
          value: -javaagent:/opt/front50/lib/jamm-0.2.5.jar -Xmx1000M
        envFrom:
        - configMapRef:
            name: init-env
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 600
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 180
          periodSeconds: 5
          successThreshold: 8
          timeoutSeconds: 1
        volumeMounts:
        - mountPath: /etc/podinfo
          name: podinfo
        - mountPath: /home/spinnaker/.aws
          name: credentials
        - mountPath: /opt/spinnaker/config/default
          name: default-config
        - mountPath: /opt/spinnaker/config/custom
          name: custom-config
      imagePullSecrets:
      - name: harbor
      volumes:
      - configMap:
          defaultMode: 420
          name: custom-config
        name: custom-config
      - configMap:
          defaultMode: 420
          name: default-config
        name: default-config
      - name: credentials
        secret:
          defaultMode: 420
          secretName: credentials
      - downwardAPI:
          defaultMode: 420
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
            path: annotations
        name: podinfo
EOF

Service:

cat >svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: armory-front50
  namespace: armory
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: armory-front50
EOF
[root@hdss7-200 front50]# kubectl apply -f dp.yaml
deployment.apps/armory-front50 created
[root@hdss7-200 front50]# kubectl apply -f svc.yaml
service/armory-front50 created

4.1.3 Verify front50's health endpoint:

[root@hdss7-200 clouddriver]# kubectl get pod -n armory

NAME                                 READY   STATUS    RESTARTS   AGE
armory-clouddriver-c45d94c59-4h87z   1/1     Running   2          3d20h
armory-front50-c57d59db-8fjdl        1/1     Running   0          7m46s
minio-847ffc9ccd-lwng4               1/1     Running   1          3d12h
redis-58b569cdd-4v5jk                1/1     Running   5          5d11h

 # use the minio container to curl front50's health endpoint

[root@hdss7-200 clouddriver]# kubectl exec minio-847ffc9ccd-lwng4 -n armory -- curl -s 'http://armory-front50:8080/health'
{"status":"UP"}[root@hdss7-200 clouddriver]# 

Open http://minio.od.com/buckets and you will see that a bucket has been created. It was created because of ARMORYSPINNAKER_CONF_STORE_BUCKET: armory-platform in init-env.yaml. Front50 is the component that writes to minio; everything configured in Spinnaker, including pipelines, is stored there.

(screenshot: the MinIO web console showing the newly created bucket)

This means that, going forward, backing up /data/nfs-volume/minio/ is enough to back up all of Spinnaker's configuration (see the sketch below).
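A minimal backup sketch on the NFS host (the destination directory /data/backup is an assumption; adjust it to your environment):

[root@hdss7-200 ~]# mkdir -p /data/backup
[root@hdss7-200 ~]# tar czf /data/backup/minio-$(date +%F).tar.gz -C /data/nfs-volume minio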

[root@hdss7-200 clouddriver]# cd /data/nfs-volume/minio/
[root@hdss7-200 minio]# ll
total 0
drwxr-xr-x. 2 root root 6 Aug  1 19:12 armory-platform
[root@hdss7-200 minio]# 

4.2 Deploy Orca

4.2.1 Prepare the image

[root@hdss7-200 ~]# docker pull armory/spinnaker-orca-slim:release-1.8.x-de4ab55
[root@hdss7-200 clouddriver]# docker image ls  |grep orca
armory/spinnaker-orca-slim                     release-1.8.x-de4ab55           5103b1f73e04   3 years ago     141MB
[root@hdss7-200 clouddriver]# docker tag 5103b1f73e04  harbor.od.com:180/armory/orca:v1.8.x
[root@hdss7-200 clouddriver]# docker push harbor.od.com:180/armory/orca:v1.8.x

4.2.2 Prepare the resource manifests

[root@hdss7-200 ~]# mkdir /data/k8s-yaml/armory/orca
[root@hdss7-200 ~]# cd /data/k8s-yaml/armory/orca
Deployment:

cat >dp.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: armory-orca
  name: armory-orca
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: armory-orca
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: '"armory"'
        artifact.spinnaker.io/name: '"armory-orca"'
        artifact.spinnaker.io/type: '"kubernetes/deployment"'
        moniker.spinnaker.io/application: '"armory"'
        moniker.spinnaker.io/cluster: '"orca"'
      labels:
        app: armory-orca
    spec:
      containers:
      - name: armory-orca
        image: harbor.od.com:180/armory/orca:v1.8.x
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - -c
        args:
        - bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config
          && /opt/orca/bin/orca
        ports:
        - containerPort: 8083
          protocol: TCP
        env:
        - name: JAVA_OPTS
          value: -Xmx1000M
        envFrom:
        - configMapRef:
            name: init-env
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /health
            port: 8083
            scheme: HTTP
          initialDelaySeconds: 600
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 8083
            scheme: HTTP
          initialDelaySeconds: 180
          periodSeconds: 3
          successThreshold: 5
          timeoutSeconds: 1
        volumeMounts:
        - mountPath: /etc/podinfo
          name: podinfo
        - mountPath: /opt/spinnaker/config/default
          name: default-config
        - mountPath: /opt/spinnaker/config/custom
          name: custom-config
      imagePullSecrets:
      - name: harbor
      volumes:
      - configMap:
          defaultMode: 420
          name: custom-config
        name: custom-config
      - configMap:
          defaultMode: 420
          name: default-config
        name: default-config
      - downwardAPI:
          defaultMode: 420
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
            path: annotations
        name: podinfo
EOF

Notes:

You will notice the startup command is almost identical to the clouddriver and FRONT50 deployments above. That is because these manifests are exported by the scaffolding tooling of Armory's Spinnaker distribution: Armory puts all the startup logic into /opt/spinnaker/config/default/fetch.sh, and the script works out by itself which component it is starting.
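If you want to see what the script actually does, fetch.sh is just another key in the default-config ConfigMap (illustrative; the backslash escapes the dot in the key name for jsonpath):

[root@hdss7-200 orca]# kubectl -n armory get cm default-config -o jsonpath='{.data.fetch\.sh}' | head -n 20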

Service:

cat >svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: armory-orca
  namespace: armory
spec:
  ports:
  - port: 8083
    protocol: TCP
    targetPort: 8083
  selector:
    app: armory-orca
EOF
[root@hdss7-200 orca]# kubectl apply -f dp.yaml
deployment.apps/armory-orca created
[root@hdss7-200 orca]# kubectl apply -f svc.yaml 
service/armory-orca created

4.2.3 Verify Orca's health endpoint:

[root@hdss7-200 clouddriver]# kubectl get pod -n armory

NAME                                 READY   STATUS    RESTARTS   AGE
armory-clouddriver-c45d94c59-4h87z   1/1     Running   2          3d20h
armory-front50-c57d59db-8fjdl        1/1     Running   0          43m
armory-orca-86466cc5b4-x9d2g         1/1     Running   0          8m52s
minio-847ffc9ccd-lwng4               1/1     Running   1          3d12h
redis-58b569cdd-4v5jk                1/1     Running   5          5d12h

 # use the minio container to curl orca's health endpoint

[root@hdss7-200 orca]# kubectl exec minio-847ffc9ccd-lwng4 -n armory -- curl -s 'http://armory-orca:8083/health'
{"status":"UP"}[root@hdss7-200 orca]# 

4.3 Deploy ECHO

4.3.1 Prepare the image

[root@hdss7-200 orca]# docker pull docker.io/armory/echo-armory:c36d576-release-1.8.x-617c567
[root@hdss7-200 orca]# docker image ls |grep echo
armory/echo-armory                             c36d576-release-1.8.x-617c567   415efd46f474   4 years ago     287MB
[root@hdss7-200 orca]# docker tag  415efd46f474 harbor.od.com:180/armory/echo:v1.8.x
[root@hdss7-200 orca]# docker push harbor.od.com:180/armory/echo:v1.8.x

4.3.2 Prepare the resource manifests

[root@hdss7-200 ~]# mkdir /data/k8s-yaml/armory/echo
[root@hdss7-200 ~]# cd /data/k8s-yaml/armory/echo/
Deployment:

cat >dp.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: armory-echo
  name: armory-echo
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: armory-echo
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: '"armory"'
        artifact.spinnaker.io/name: '"armory-echo"'
        artifact.spinnaker.io/type: '"kubernetes/deployment"'
        moniker.spinnaker.io/application: '"armory"'
        moniker.spinnaker.io/cluster: '"echo"'
      labels:
        app: armory-echo
    spec:
      containers:
      - name: armory-echo
        image: harbor.od.com:180/armory/echo:v1.8.x
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - -c
        args:
        - bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config
          && /opt/echo/bin/echo
        ports:
        - containerPort: 8089
          protocol: TCP
        env:
        - name: JAVA_OPTS
          value: -javaagent:/opt/echo/lib/jamm-0.2.5.jar -Xmx1000M
        envFrom:
        - configMapRef:
            name: init-env
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 8089
            scheme: HTTP
          initialDelaySeconds: 600
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 8089
            scheme: HTTP
          initialDelaySeconds: 180
          periodSeconds: 3
          successThreshold: 5
          timeoutSeconds: 1
        volumeMounts:
        - mountPath: /etc/podinfo
          name: podinfo
        - mountPath: /opt/spinnaker/config/default
          name: default-config
        - mountPath: /opt/spinnaker/config/custom
          name: custom-config
      imagePullSecrets:
      - name: harbor
      volumes:
      - configMap:
          defaultMode: 420
          name: custom-config
        name: custom-config
      - configMap:
          defaultMode: 420
          name: default-config
        name: default-config
      - downwardAPI:
          defaultMode: 420
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
            path: annotations
        name: podinfo
EOF

Service:

cat >svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: armory-echo
  namespace: armory
spec:
  ports:
  - port: 8089
    protocol: TCP
    targetPort: 8089
  selector:
    app: armory-echo
EOF
[root@hdss7-200 echo]# kubectl apply -f dp.yaml
deployment.apps/armory-echo created
[root@hdss7-200 echo]# kubectl apply -f svc.yaml
service/armory-echo created
[root@hdss7-200 echo]# 

4.3.3 Verify echo's health endpoint:

[root@hdss7-200 echo]# kubectl get pod -n armory
NAME                                 READY   STATUS    RESTARTS   AGE
armory-clouddriver-c45d94c59-4h87z   1/1     Running   2          3d21h
armory-echo-64c9ffb959-j4svr         1/1     Running   0          7m30s
armory-front50-c57d59db-8fjdl        1/1     Running   0          61m
armory-orca-86466cc5b4-x9d2g         1/1     Running   0          27m
minio-847ffc9ccd-lwng4               1/1     Running   1          3d13h
redis-58b569cdd-4v5jk                1/1     Running   5          5d12h
[root@hdss7-200 echo]# kubectl exec minio-847ffc9ccd-lwng4 -n armory -- curl -s 'http://armory-echo:8089/health'
{"status":"UP"}[root@hdss7-200 echo]# 

4.4 Deploy IGOR

IGOR is a very important component: if you want Spinnaker to talk to Jenkins and read its job and pipeline information, you must install IGOR, and installing IGOR requires ECHO. IGOR supports two CI tools, Jenkins and Travis (GitHub - spinnaker/igor: Integration with Jenkins and Git for Spinnaker). See the check after this paragraph for one way to verify the Jenkins connection once it is running.
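Once IGOR is up (end of this subsection), you can ask it which Jenkins masters it picked up from igor-local.yml. This is a sketch, assuming igor's /masters endpoint is unchanged in this release:

[root@hdss7-200 ~]# kubectl exec minio-847ffc9ccd-lwng4 -n armory -- curl -s 'http://armory-igor:8088/masters'
# expect: ["jenkins-admin"]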

(screenshot: the spinnaker/igor project page)

4.4.1 Prepare the image

[root@hdss7-200 echo]# docker pull docker.io/armory/spinnaker-igor-slim:release-1.8-x-new-install-healthy-ae2b329
[root@hdss7-200 echo]# docker image ls |grep igor
armory/spinnaker-igor-slim                     release-1.8-x-new-install-healthy-ae2b329   23984f5b43f6   4 years ago     135MB
[root@hdss7-200 echo]# docker tag 23984f5b43f6 harbor.od.com:180/armory/igor:v1.8.x
[root@hdss7-200 echo]# docker push harbor.od.com:180/armory/igor:v1.8.x

4.4.2 Prepare the resource manifests

[root@hdss7-200 ~]# mkdir /data/k8s-yaml/armory/igor
[root@hdss7-200 ~]# cd /data/k8s-yaml/armory/igor/
Deployment:

cat >dp.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: armory-igor
  name: armory-igor
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: armory-igor
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: '"armory"'
        artifact.spinnaker.io/name: '"armory-igor"'
        artifact.spinnaker.io/type: '"kubernetes/deployment"'
        moniker.spinnaker.io/application: '"armory"'
        moniker.spinnaker.io/cluster: '"igor"'
      labels:
        app: armory-igor
    spec:
      containers:
      - name: armory-igor
        image: harbor.od.com:180/armory/igor:v1.8.x
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - -c
        args:
        - bash /opt/spinnaker/config/default/fetch.sh && cd /home/spinnaker/config
          && /opt/igor/bin/igor
        ports:
        - containerPort: 8088
          protocol: TCP
        env:
        - name: IGOR_PORT_MAPPING
          value: -8088:8088
        - name: JAVA_OPTS
          value: -Xmx1000M
        envFrom:
        - configMapRef:
            name: init-env
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 8088
            scheme: HTTP
          initialDelaySeconds: 600
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 8088
            scheme: HTTP
          initialDelaySeconds: 180
          periodSeconds: 5
          successThreshold: 5
          timeoutSeconds: 1
        volumeMounts:
        - mountPath: /etc/podinfo
          name: podinfo
        - mountPath: /opt/spinnaker/config/default
          name: default-config
        - mountPath: /opt/spinnaker/config/custom
          name: custom-config
      imagePullSecrets:
      - name: harbor
      securityContext:
        runAsUser: 0
      volumes:
      - configMap:
          defaultMode: 420
          name: custom-config
        name: custom-config
      - configMap:
          defaultMode: 420
          name: default-config
        name: default-config
      - downwardAPI:
          defaultMode: 420
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
            path: annotations
        name: podinfo
EOF

Service:

cat >svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: armory-igor
  namespace: armory
spec:
  ports:
  - port: 8088
    protocol: TCP
    targetPort: 8088
  selector:
    app: armory-igor
EOF
[root@hdss7-200 igor]# kubectl apply -f dp.yaml 
deployment.apps/armory-igor created
[root@hdss7-200 igor]# kubectl apply -f svc.yaml 
service/armory-igor created

4.4.3 Verify igor's health endpoint:

[root@hdss7-200 igor]# kubectl get pod -n armory
NAME                                 READY   STATUS    RESTARTS   AGE
armory-clouddriver-c45d94c59-4h87z   1/1     Running   2          3d21h
armory-echo-64c9ffb959-j4svr         1/1     Running   0          28m
armory-front50-c57d59db-8fjdl        1/1     Running   0          82m
armory-igor-5f4f87d864-hc4qz         1/1     Running   0          3m42s
armory-orca-86466cc5b4-x9d2g         1/1     Running   0          47m
minio-847ffc9ccd-lwng4               1/1     Running   1          3d13h
redis-58b569cdd-4v5jk                1/1     Running   5          5d13h
[root@hdss7-200 igor]# kubectl exec minio-847ffc9ccd-lwng4 -n armory -- curl -s 'http://armory-igor:8088/health'
{"status":"UP"}[root@hdss7-200 igor]#

4.5 Deploy GATE

4.5.1 Prepare the image

[root@hdss7-200 igor]# docker pull docker.io/armory/gate-armory:dfafe73-release-1.8.x-5d505ca
[root@hdss7-200 igor]# docker image ls |grep gate
armory/gate-armory                             dfafe73-release-1.8.x-5d505ca               b092d4665301   4 years ago     179MB
[root@hdss7-200 igor]# docker tag  b092d4665301 harbor.od.com:180/armory/gate:v1.8.x
[root@hdss7-200 igor]# docker push harbor.od.com:180/armory/gate:v1.8.x

4.5.2 Prepare the resource manifests

[root@hdss7-200 ~]# mkdir /data/k8s-yaml/armory/gate
[root@hdss7-200 ~]# cd /data/k8s-yaml/armory/gate

Deployment:

cat >dp.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: armory-gate
  name: armory-gate
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: armory-gate
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: '"armory"'
        artifact.spinnaker.io/name: '"armory-gate"'
        artifact.spinnaker.io/type: '"kubernetes/deployment"'
        moniker.spinnaker.io/application: '"armory"'
        moniker.spinnaker.io/cluster: '"gate"'
      labels:
        app: armory-gate
    spec:
      containers:
      - name: armory-gate
        image: harbor.od.com:180/armory/gate:v1.8.x
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - -c
        args:
        - bash /opt/spinnaker/config/default/fetch.sh gate && cd /home/spinnaker/config
          && /opt/gate/bin/gate
        ports:
        - containerPort: 8084
          name: gate-port
          protocol: TCP
        - containerPort: 8085
          name: gate-api-port
          protocol: TCP
        env:
        - name: GATE_PORT_MAPPING
          value: -8084:8084
        - name: GATE_API_PORT_MAPPING
          value: -8085:8085
        - name: JAVA_OPTS
          value: -Xmx1000M
        envFrom:
        - configMapRef:
            name: init-env
        livenessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - wget -O - http://localhost:8084/health || wget -O - https://localhost:8084/health
          failureThreshold: 5
          initialDelaySeconds: 600
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - wget -O - http://localhost:8084/health?checkDownstreamServices=true&downstreamServices=true
              || wget -O - https://localhost:8084/health?checkDownstreamServices=true&downstreamServices=true
          failureThreshold: 3
          initialDelaySeconds: 180
          periodSeconds: 5
          successThreshold: 10
          timeoutSeconds: 1
        volumeMounts:
        - mountPath: /etc/podinfo
          name: podinfo
        - mountPath: /opt/spinnaker/config/default
          name: default-config
        - mountPath: /opt/spinnaker/config/custom
          name: custom-config
      imagePullSecrets:
      - name: harbor
      securityContext:
        runAsUser: 0
      volumes:
      - configMap:
          defaultMode: 420
          name: custom-config
        name: custom-config
      - configMap:
          defaultMode: 420
          name: default-config
        name: default-config
      - downwardAPI:
          defaultMode: 420
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
            path: annotations
        name: podinfo
EOF

Service:

cat >svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: armory-gate
  namespace: armory
spec:
  ports:
  - name: gate-port
    port: 8084
    protocol: TCP
    targetPort: 8084
  - name: gate-api-port
    port: 8085
    protocol: TCP
    targetPort: 8085
  selector:
    app: armory-gate
EOF
[root@hdss7-200 gate]# kubectl apply -f dp.yaml
deployment.apps/armory-gate created
[root@hdss7-200 gate]# kubectl apply -f svc.yaml
service/armory-gate created

4.5.3 Verify gate's health endpoint:

[root@hdss7-200 gate]# kubectl get pod -n armory
NAME                                 READY   STATUS    RESTARTS   AGE
armory-clouddriver-c45d94c59-4h87z   1/1     Running   2          3d21h
armory-echo-64c9ffb959-j4svr         1/1     Running   0          48m
armory-front50-c57d59db-8fjdl        1/1     Running   0          102m
armory-gate-5b954d9bd4-xc2jk         1/1     Running   0          4m12s
armory-igor-5f4f87d864-hc4qz         1/1     Running   0          23m
armory-orca-86466cc5b4-x9d2g         1/1     Running   0          68m
minio-847ffc9ccd-lwng4               1/1     Running   1          3d13h
redis-58b569cdd-4v5jk                1/1     Running   5          5d13h
[root@hdss7-200 gate]# kubectl exec minio-847ffc9ccd-lwng4 -n armory -- curl -s 'http://armory-gate:8084/health'
{"status":"UP"}[root@hdss7-200 gate]# 

4.6 Deploy DECK

4.6.1 Prepare the image

[root@hdss7-200 gate]# docker pull docker.io/armory/deck-armory:d4bf0cf-release-1.8.x-0a33f94
[root@hdss7-200 gate]# docker image ls |grep deck
armory/deck-armory                             d4bf0cf-release-1.8.x-0a33f94               
[root@hdss7-200 gate]# docker tag 9a87ba3b319f  harbor.od.com:180/armory/deck:v1.8.x
[root@hdss7-200 gate]# docker push harbor.od.com:180/armory/deck:v1.8.x

4.6.2 Prepare the resource manifests

[root@hdss7-200 ~]# mkdir /data/k8s-yaml/armory/deck
[root@hdss7-200 ~]# cd /data/k8s-yaml/armory/deck

Deployment:

cat >dp.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: armory-deck
  name: armory-deck
  namespace: armory
spec:
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: armory-deck
  template:
    metadata:
      annotations:
        artifact.spinnaker.io/location: '"armory"'
        artifact.spinnaker.io/name: '"armory-deck"'
        artifact.spinnaker.io/type: '"kubernetes/deployment"'
        moniker.spinnaker.io/application: '"armory"'
        moniker.spinnaker.io/cluster: '"deck"'
      labels:
        app: armory-deck
    spec:
      containers:
      - name: armory-deck
        image: harbor.od.com:180/armory/deck:v1.8.x
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - -c
        args:
        - bash /opt/spinnaker/config/default/fetch.sh && /entrypoint.sh
        ports:
        - containerPort: 9000
          protocol: TCP
        envFrom:
        - configMapRef:
            name: init-env
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 9000
            scheme: HTTP
          initialDelaySeconds: 180
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 5
          httpGet:
            path: /
            port: 9000
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 3
          successThreshold: 5
          timeoutSeconds: 1
        volumeMounts:
        - mountPath: /etc/podinfo
          name: podinfo
        - mountPath: /opt/spinnaker/config/default
          name: default-config
        - mountPath: /opt/spinnaker/config/custom
          name: custom-config
      imagePullSecrets:
      - name: harbor
      volumes:
      - configMap:
          defaultMode: 420
          name: custom-config
        name: custom-config
      - configMap:
          defaultMode: 420
          name: default-config
        name: default-config
      - downwardAPI:
          defaultMode: 420
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.labels
            path: labels
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.annotations
            path: annotations
        name: podinfo
EOF

Service:

cat >svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: armory-deck
  namespace: armory
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9000
  selector:
    app: armory-deck
EOF
[root@hdss7-200 deck]# kubectl apply -f dp.yaml
deployment.apps/armory-deck created
[root@hdss7-200 deck]# kubectl apply -f svc.yaml
service/armory-deck created
[root@hdss7-200 deck]# 

4.6.3 Check that the deck container (the nginx that serves the UI) is Running

[root@hdss7-21 ~]# kubectl get pod -n armory -o wide |grep "armory-deck"
armory-deck-67b6d6db4-pcz9r          1/1     Running   1          65m   172.7.22.12   hdss7-22.host.com               
[root@hdss7-21 ~]# curl 172.7.22.12:9000

# curl returns the HTML of the Deck UI; its page title is "Armory Platform | Spinnaker"
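Since the armory-deck Service maps port 80 to container port 9000, the same page can also be fetched through the Service name from inside the cluster (a sketch, reusing the minio pod as an in-cluster client; the grep runs on the host):

[root@hdss7-21 ~]# kubectl exec minio-847ffc9ccd-lwng4 -n armory -- curl -s http://armory-deck/ | grep -o '<title>.*</title>'
# expect: <title>Armory Platform | Spinnaker</title>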