A Detailed Hands-On Guide to Argo, a Container-Native Workflow Engine (Part 1)

Table of contents

    • 1. What is an Argo workflow?
    • 2. Deploying Argo
      • 2.1 Create the namespace and download the yaml file
    • 3. How to use Argo Workflows
      • 3.1 Hello World!
      • 3.2 Parameters
      • 3.3 Steps
      • 3.4 DAG (directed acyclic graph)
      • 3.5 Artifacts
      • 3.6 Workflow structure
      • 3.7 Loops
      • 3.8 Conditionals
      • 3.9 Secrets
      • 3.10 Scripts & Results
      • 3.11 Output Parameters
      • 3.12 Retrying Failed or Errored Steps
      • 3.13 Recursion
      • 3.14 Exit handlers
      • 3.15 Timeouts
    • Summary
    • References


1. What is an Argo workflow?

Argo Workflows is an open-source container-native workflow engine for orchestrating jobs on Kubernetes. It is implemented as a Kubernetes CRD (Custom Resource Definition).

Features:

  • Define workflows where each step in the workflow is a container.

  • Model multi-step workflows as a sequence of tasks, or capture the dependencies between tasks using a directed acyclic graph (DAG).

  • Easily run large numbers of compute-intensive jobs in a short time with Argo Workflows on Kubernetes.

  • Run CI/CD pipelines natively on Kubernetes without configuring complex software development products.

Argo is a project hosted by the Cloud Native Computing Foundation (CNCF).

2. Deploying Argo

The official deployment docs boil down to the following steps (screenshot omitted):

  • Create the namespace and apply the installation file (the file is downloaded from the internet)

  • Grant permissions

  • Set up port-forward access

    kubectl port-forward maps a local port to a port of an application (Pod) in the cluster, so the application can be reached from your machine (a sketch follows this list).

  • Install the Argo CLI command-line client
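
For reference, a minimal port-forward sketch (assuming the argo-server Deployment created by the install manifest; the local port is an arbitrary choice):

kubectl -n argo port-forward deployment/argo-server 2746:2746
# the UI is then reachable at http://localhost:2746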

Hands-on demo

2.1 Create the namespace and download the yaml file

kubectl create ns argo
# In the directory containing the installation yaml files, run:
kubectl apply -n argo -f .

Notes:

1. The images referenced in the yaml are pulled from Docker Hub by default, so mind your network connectivity; you can pull them onto the nodes in advance (a sketch follows below).

2. Check the Deployments' node selectors and make sure the corresponding nodes carry the matching labels, or remove the node selectors so the pods can be scheduled on any node.

Download install.yaml; Argo v2.9.0 was the latest release at the time of writing. The full manifest is reproduced after the pre-pull sketch below.
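
Pre-pulling is just a docker pull of the image tags referenced by this install.yaml, run on each node that may schedule the Argo pods (a minimal sketch):

docker pull argoproj/argocli:v2.9.0
docker pull argoproj/workflow-controller:v2.9.0
docker pull argoproj/argoexec:v2.9.0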

# This is an auto-generated file. DO NOT EDIT
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterworkflowtemplates.argoproj.io
spec:
  group: argoproj.io
  names:
    kind: ClusterWorkflowTemplate
    plural: clusterworkflowtemplates
    shortNames:
    - clusterwftmpl
    - cwft
  scope: Cluster
  version: v1alpha1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: cronworkflows.argoproj.io
spec:
  group: argoproj.io
  names:
    kind: CronWorkflow
    plural: cronworkflows
    shortNames:
    - cronwf
    - cwf
  scope: Namespaced
  version: v1alpha1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: workflows.argoproj.io
spec:
  additionalPrinterColumns:
  - JSONPath: .status.phase
    description: Status of the workflow
    name: Status
    type: string
  - JSONPath: .status.startedAt
    description: When the workflow was started
    format: date-time
    name: Age
    type: date
  group: argoproj.io
  names:
    kind: Workflow
    plural: workflows
    shortNames:
    - wf
  scope: Namespaced
  version: v1alpha1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: workflowtemplates.argoproj.io
spec:
  group: argoproj.io
  names:
    kind: WorkflowTemplate
    plural: workflowtemplates
    shortNames:
    - wftmpl
  scope: Namespaced
  version: v1alpha1
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: argo
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: argo-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argo-role
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
  name: argo-aggregate-to-admin
rules:
- apiGroups:
  - argoproj.io
  resources:
  - workflows
  - workflows/finalizers
  - workflowtemplates
  - workflowtemplates/finalizers
  - cronworkflows
  - cronworkflows/finalizers
  - clusterworkflowtemplates
  - clusterworkflowtemplates/finalizers
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - update
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
  name: argo-aggregate-to-edit
rules:
- apiGroups:
  - argoproj.io
  resources:
  - workflows
  - workflows/finalizers
  - workflowtemplates
  - workflowtemplates/finalizers
  - cronworkflows
  - cronworkflows/finalizers
  - clusterworkflowtemplates
  - clusterworkflowtemplates/finalizers
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - update
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: argo-aggregate-to-view
rules:
- apiGroups:
  - argoproj.io
  resources:
  - workflows
  - workflows/finalizers
  - workflowtemplates
  - workflowtemplates/finalizers
  - cronworkflows
  - cronworkflows/finalizers
  - clusterworkflowtemplates
  - clusterworkflowtemplates/finalizers
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: argo-cluster-role
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - pods/exec
  verbs:
  - create
  - get
  - list
  - watch
  - update
  - patch
  - delete
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - ""
  resources:
  - persistentvolumeclaims
  verbs:
  - create
  - delete
- apiGroups:
  - argoproj.io
  resources:
  - workflows
  - workflows/finalizers
  verbs:
  - get
  - list
  - watch
  - update
  - patch
  - delete
  - create
- apiGroups:
  - argoproj.io
  resources:
  - workflowtemplates
  - workflowtemplates/finalizers
  - clusterworkflowtemplates
  - clusterworkflowtemplates/finalizers
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - serviceaccounts
  verbs:
  - get
  - list
- apiGroups:
  - argoproj.io
  resources:
  - cronworkflows
  - cronworkflows/finalizers
  verbs:
  - get
  - list
  - watch
  - update
  - patch
  - delete
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - create
  - get
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: argo-server-cluster-role
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - watch
  - list
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - pods/exec
  - pods/log
  verbs:
  - get
  - list
  - watch
  - delete
- apiGroups:
  - argoproj.io
  resources:
  - workflows
  - workflowtemplates
  - cronworkflows
  - clusterworkflowtemplates
  verbs:
  - create
  - get
  - list
  - watch
  - update
  - patch
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argo-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: argo-role
subjects:
- kind: ServiceAccount
  name: argo
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: argo-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: argo-cluster-role
subjects:
- kind: ServiceAccount
  name: argo
  namespace: argo
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: argo-server-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: argo-server-cluster-role
subjects:
- kind: ServiceAccount
  name: argo-server
  namespace: argo
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
---
apiVersion: v1
kind: Service
metadata:
  name: argo-server
spec:
  ports:
  - name: web
    port: 2746
    targetPort: 2746
  selector:
    app: argo-server
---
apiVersion: v1
kind: Service
metadata:
  name: workflow-controller-metrics
spec:
  ports:
  - name: metrics
    port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    app: workflow-controller
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argo-server
spec:
  selector:
    matchLabels:
      app: argo-server
  template:
    metadata:
      labels:
        app: argo-server
    spec:
      containers:
      - args:
        - server
        image: argoproj/argocli:v2.9.0
        name: argo-server
        ports:
        - containerPort: 2746
          name: web
        readinessProbe:
          httpGet:
            path: /
            port: 2746
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 20
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: argo-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: workflow-controller
spec:
  selector:
    matchLabels:
      app: workflow-controller
  template:
    metadata:
      labels:
        app: workflow-controller
    spec:
      containers:
      - args:
        - --configmap
        - workflow-controller-configmap
        - --executor-image
        - argoproj/argoexec:v2.9.0
        command:
        - workflow-controller
        image: argoproj/workflow-controller:v2.9.0
        name: workflow-controller
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: argo

As you can see, the Argo installation yaml defines quite a few Kubernetes resource objects:

  • CustomResourceDefinition
  • ServiceAccount, Role/ClusterRole, RoleBinding/ClusterRoleBinding
  • ConfigMap
  • Service
  • Deployment

After installation, for easier access we change the argo-server Service to type NodePort (you could also create an Ingress object and access the UI via a domain name):
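
One minimal way to do that (a sketch; you could equally kubectl edit the Service or re-apply a modified manifest):

kubectl -n argo patch svc argo-server -p '{"spec": {"type": "NodePort"}}'
# Kubernetes assigns a random NodePort unless you pin spec.ports[0].nodePort yourself;
# in the demo below it ended up as 30006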

[root@mydevops ~]# kubectl get svc -n argo
NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
argo-server                   NodePort    10.109.54.114   <none>        2746:30006/TCP   12d
workflow-controller-metrics   ClusterIP   10.96.121.111   <none>        9090/TCP         12d
[root@mydevops ~]# kubectl get svc -n argo argo-server -oyaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"argo-server","namespace":"argo"},"spec":{"ports":[{"name":"web","port":2746,"targetPort":2746}],"selector":{"app":"argo-server"}}}
  creationTimestamp: 2020-07-03T08:40:58Z
  name: argo-server
  namespace: argo
  resourceVersion: "76349"
  selfLink: /api/v1/namespaces/argo/services/argo-server
  uid: eac4c614-bd08-11ea-90d4-000c2944bd9b
spec:
  clusterIP: 10.109.54.114
  externalTrafficPolicy: Cluster
  ports:
  - name: web
    nodePort: 30006
    port: 2746
    protocol: TCP
    targetPort: 2746
  selector:
    app: argo-server
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
[root@mydevops ~]# kubectl get pod -n argo
NAME                                   READY     STATUS    RESTARTS   AGE
argo-server-bcdd887f6-f44tk            1/1       Running   0          10m
workflow-controller-69bb45b59d-dxkjh   1/1       Running   0          10m

Once everything is deployed, the UI can be opened in a browser (screenshot omitted).

You can also install the argo command-line client:

# Download the binary
curl -sLO https://github.com/argoproj/argo/releases/download/v2.9.0/argo-linux-amd64

# Make binary executable
chmod +x argo-linux-amd64

# Move binary to path
mv ./argo-linux-amd64 /usr/local/bin/argo

# Test installation
argo version
# Local demo
[root@mydevops ~]# argo version 
argo: v2.9.0
  BuildDate: 2020-07-02T00:49:55Z
  GitCommit: d67d3b1dbc61ebc5789806794ccd7e2debd71ffc
  GitTreeState: clean
  GitTag: v2.9.0
  GoVersion: go1.13.4
  Compiler: gc
  Platform: linux/amd64

3. How to use Argo Workflows

Argo is an open-source project that provides container-native workflows for Kubernetes. Each step in an Argo workflow is defined as a container.

Below we walk through the official examples to learn the syntax.

About the Argo CLI | argo or kubectl

Argo is implemented as a Kubernetes CRD (Custom Resource Definition), so Argo workflows can be managed with kubectl. The argo CLI, however, offers richer functionality, such as parameter substitution and yaml syntax validation (see the argo lint sketch below).

argo examples:

argo submit hello-world.yaml    # submit a workflow spec to Kubernetes
argo list                       # list current workflows
argo get hello-world-xxx        # get info about a specific workflow
argo logs -w hello-world-xxx    # get logs from all steps in a workflow
argo logs hello-world-xxx-yyy   # get logs from a specific step in a workflow
argo delete hello-world-xxx     # delete workflow
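
The yaml syntax validation mentioned above is exposed as its own subcommand; a quick sketch:

argo lint hello-world.yaml      # validate the workflow spec without submitting it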

kubectl examples:

kubectl create -f hello-world.yaml
kubectl get wf
kubectl get wf hello-world-xxx
kubectl get po --selector=workflows.argoproj.io/workflow=hello-world-xxx --show-all  # similar to argo
kubectl logs hello-world-xxx-yyy -c main
kubectl delete wf hello-world-xxx

3.1 Hello World!

First, run the whalesay image from Docker Hub with plain Docker:

[root@mydevops ~]# docker run docker/whalesay cowsay "hello world"
 _____________ 
< hello world >
 ------------- 
    \
     \
      \     
                    ##        .            
              ## ## ##       ==            
           ## ## ## ##      ===            
       /""""""""""""""""___/ ===        
  ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~   
       \______ o          __/            
        \    \        __/             
          \____\______/ 

Now let's use Argo to run this container on Kubernetes:

apiVersion: argoproj.io/v1alpha1
kind: Workflow                  # new type of k8s spec
metadata:
  generateName: hello-world-    # name of the workflow spec
spec:
# This workflow invokes the whalesay template.
# The key things to learn here are entrypoint and templates:
# in a Workflow, the entrypoint decides which template execution starts from,
# and templates defines all the template bodies.
  entrypoint: whalesay 
  templates:
  - name: whalesay              # name of the template
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["hello world"]
      resources:                # limit the resources
        limits:
          memory: 32Mi
          cpu: 100m

Run result:

[root@mydevops examples]# argo submit hello-world.yaml 
Name:                hello-world-wpqhc
Namespace:           default
ServiceAccount:      default
Status:              Pending
Created:             Thu Jul 16 17:16:28 +0800 (now)
[root@mydevops examples]# argo list
NAME                STATUS    AGE   DURATION   PRIORITY
hello-world-wpqhc   Running   6s    6s         0
[root@mydevops examples]# kubectl get pod
NAME                READY     STATUS      RESTARTS   AGE
hello-world-wpqhc   0/2       Completed   0          47s
[root@mydevops examples]# argo get hello-world-wpqhc
Name:                hello-world-wpqhc
Namespace:           default
ServiceAccount:      default
Status:              Succeeded
Conditions:          
 Completed           True
Created:             Thu Jul 16 17:16:28 +0800 (1 minute ago)
Started:             Thu Jul 16 17:16:28 +0800 (1 minute ago)
Finished:            Thu Jul 16 17:16:45 +0800 (50 seconds ago)
Duration:            17 seconds
ResourcesDuration:   6s*(1 cpu),5s*(100Mi memory)

STEP                  TEMPLATE  PODNAME            DURATION  MESSAGE
 ✔ hello-world-wpqhc  whalesay  hello-world-wpqhc  15s         
[root@mydevops examples]# argo logs hello-world-wpqhc
hello-world-wpqhc:  _____________ 
hello-world-wpqhc: < hello world >
hello-world-wpqhc:  ------------- 
hello-world-wpqhc:     \
hello-world-wpqhc:      \
hello-world-wpqhc:       \     
hello-world-wpqhc:                     ##        .            
hello-world-wpqhc:               ## ## ##       ==            
hello-world-wpqhc:            ## ## ## ##      ===            
hello-world-wpqhc:        /""""""""""""""""___/ ===        
hello-world-wpqhc:   ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~   
hello-world-wpqhc:        \______ o          __/            
hello-world-wpqhc:         \    \        __/             
hello-world-wpqhc:           \____\______/   

The UI shows the finished workflow (screenshot omitted).

3.2 Parameters

A workflow yaml with parameters:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-parameters-
spec:
  # invoke the whalesay template
  entrypoint: whalesay
  # define the parameter message with the value "hello world"
  arguments:
    parameters:
    - name: message
      value: hello world

  templates:
  - name: whalesay
    inputs:
      parameters:
      - name: message       # parameter declaration
    container:
      # run cowsay with that message input parameter as args
      image: docker/whalesay
      command: [cowsay]
      args: ["{{inputs.parameters.message}}"]

Run result:

# If the workflow yaml declares input parameters, you can assign values with argo submit -p
# argo submit --parameter-file params.yaml    use a parameter file (sketched after this section)
# argo submit --entrypoint whalesay-caps      override which template the workflow starts from
[root@mydevops examples]# argo submit arguments-parameters.yaml -p message="goodbye world"
Name:                hello-world-parameters-ml4x4
Namespace:           default
ServiceAccount:      default
Status:              Pending
Created:             Thu Jul 16 17:33:47 +0800 (now)
Parameters:          
  message:           goodbye world
[root@mydevops examples]# kubectl get pod 
NAME                           READY     STATUS              RESTARTS   AGE
hello-world-parameters-ml4x4   0/2       ContainerCreating   0          12s
hello-world-wpqhc              0/2       Completed           0          17m
[root@mydevops examples]# argo list hello-world-parameters-ml4x4
NAME                           STATUS      AGE   DURATION   PRIORITY
hello-world-parameters-ml4x4   Succeeded   3m    17s        0
hello-world-wpqhc              Succeeded   21m   17s        0
[root@mydevops examples]# argo logs  hello-world-parameters-ml4x4
hello-world-parameters-ml4x4:  _______________ 
hello-world-parameters-ml4x4: < goodbye world >
hello-world-parameters-ml4x4:  --------------- 
hello-world-parameters-ml4x4:     \
hello-world-parameters-ml4x4:      \
hello-world-parameters-ml4x4:       \     
hello-world-parameters-ml4x4:                     ##        .            
hello-world-parameters-ml4x4:               ## ## ##       ==            
hello-world-parameters-ml4x4:            ## ## ## ##      ===            
hello-world-parameters-ml4x4:        /""""""""""""""""___/ ===        
hello-world-parameters-ml4x4:   ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~   
hello-world-parameters-ml4x4:        \______ o          __/            
hello-world-parameters-ml4x4:         \    \        __/             
hello-world-parameters-ml4x4:           \____\______/   
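
The comments above also mention --parameter-file. A minimal sketch of such a file and how it might be submitted (the file name params.yaml is just an illustrative choice):

# params.yaml -- one entry per workflow parameter
message: goodbye world

argo submit arguments-parameters.yaml --parameter-file params.yaml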

3.3 Steps

In this example we will see how to create a multi-step workflow, how to define more than one template in a workflow spec, and how to create nested workflows.

The steps example yaml:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: steps-
spec:
  entrypoint: hello-hello-hello
  # Two templates, each wrapping a container: hello-hello-hello and whalesay
  templates:
  - name: hello-hello-hello
    # This template contains several steps, each of which runs a container.
    # Steps run in order from top to bottom; steps inside the same inner list run in parallel.
    steps:
    - - name: hello1           
        template: whalesay
        arguments:
          parameters:
          - name: message
            value: "hello1"
    - - name: hello2a          
        template: whalesay
        arguments:
          parameters:
          - name: message
            value: "hello2a"
      - name: hello2b          
        template: whalesay
        arguments:
          parameters:
          - name: message
            value: "hello2b"
  - name: whalesay
    inputs:
      parameters:
      - name: message
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["{{inputs.parameters.message}}"]

Run result:

[root@mydevops examples]# argo watch steps-9cvhn
# Watch the latest workflow:
  argo watch @latest
Name:                steps-9cvhn
Namespace:           default
ServiceAccount:      default
Status:              Succeeded
Conditions:          
 Completed           True
Created:             Fri Jul 17 10:12:20 +0800 (5 minutes ago)
Started:             Fri Jul 17 10:12:20 +0800 (5 minutes ago)
Finished:            Fri Jul 17 10:12:58 +0800 (4 minutes ago)
Duration:            38 seconds
ResourcesDuration:   20s*(1 cpu),20s*(100Mi memory)

STEP            TEMPLATE           PODNAME                 DURATION  MESSAGE
 ✔ steps-9cvhn  hello-hello-hello                                      
 ├---✔ hello1   whalesay           steps-9cvhn-374755726   16s         
 └-·-✔ hello2a  whalesay           steps-9cvhn-1017544751  16s         
   └-✔ hello2b  whalesay           steps-9cvhn-1034322370  19s 

3.4 DAG (directed acyclic graph)

As an alternative to steps, a workflow can be defined as a directed acyclic graph (DAG) by specifying the dependencies of each task. For complex workflows this can be simpler to maintain, and it allows maximum parallelism when running tasks.

In the workflow below, step A runs first because it has no dependencies. Once A has finished, steps B and C run in parallel. Finally, once B and C have completed, step D can run.

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-diamond-
spec:
  entrypoint: diamond
  templates:
  - name: echo
    inputs:
      parameters:
      - name: message
    container:
      image: alpine:3.7
      command: [echo, "{{inputs.parameters.message}}"]
  - name: diamond
    dag:
      tasks:
      - name: A
        template: echo
        arguments:
          parameters: [{name: message, value: A}]
      - name: B
        dependencies: [A]
        template: echo
        arguments:
          parameters: [{name: message, value: B}]
      - name: C
        dependencies: [A]
        template: echo
        arguments:
          parameters: [{name: message, value: C}]
      - name: D
        dependencies: [B, C]
        template: echo
        arguments:
          parameters: [{name: message, value: D}]

DAGs have a built-in fail-fast feature: as soon as one DAG node is detected to have failed, no new steps are scheduled; the DAG then waits for the remaining nodes to finish before failing itself. The failFast flag defaults to true; if set to false, the DAG runs every branch to completion (success or failure) regardless of failures in other branches. A sketch is shown at the end of this section.

Run result:

[root@mydevops examples]# argo list 
NAME                           STATUS      AGE   DURATION   PRIORITY
dag-diamond-4z69w              Succeeded   12m   40s        0
steps-9cvhn                    Succeeded   31m   38s        0
hello-world-parameters-ml4x4   Succeeded   17h   17s        0
hello-world-wpqhc              Succeeded   17h   17s        0
[root@mydevops examples]# argo watch dag-diamond-4z69w 
Name:                dag-diamond-4z69w
Namespace:           default
ServiceAccount:      default
Status:              Succeeded
Conditions:          
 Completed           True
Created:             Fri Jul 17 10:30:46 +0800 (12 minutes ago)
Started:             Fri Jul 17 10:30:46 +0800 (12 minutes ago)
Finished:            Fri Jul 17 10:31:26 +0800 (12 minutes ago)
Duration:            40 seconds
ResourcesDuration:   3s*(1 cpu),3s*(100Mi memory)

STEP                  TEMPLATE  PODNAME                       DURATION  MESSAGE
 ✔ dag-diamond-4z69w  diamond                                             
 ├-✔ A                echo      dag-diamond-4z69w-1804781070  11s         
 ├-✔ B                echo      dag-diamond-4z69w-1788003451  12s         
 ├-✔ C                echo      dag-diamond-4z69w-1771225832  12s         
 └-✔ D                echo      dag-diamond-4z69w-1888669165  11s
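
As noted above, the fail-fast behavior can be switched off with the failFast flag on the DAG. A minimal sketch (a trimmed-down variant of the diamond example; the generateName is just illustrative):

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-no-failfast-
spec:
  entrypoint: diamond
  templates:
  - name: echo
    inputs:
      parameters:
      - name: message
    container:
      image: alpine:3.7
      command: [echo, "{{inputs.parameters.message}}"]
  - name: diamond
    dag:
      failFast: false        # run every branch to completion even if another task fails
      tasks:
      - name: A
        template: echo
        arguments:
          parameters: [{name: message, value: A}]
      - name: B
        dependencies: [A]
        template: echo
        arguments:
          parameters: [{name: message, value: B}]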

3.5 Artifacts

Note: an artifact repository must be configured before the artifact examples can run.

Argo inputs/outputs: in some scenarios a step needs to hand a directory or a file over to the next step via outputs and inputs.

Argo provides two ways to do this:

  • parameters
  • artifacts

They suit different scenarios: a parameter passes the text content read from a file to the next step, while an artifact can pass the file itself, or even a whole directory.

Artifacts need somewhere to stage the files in transit, so Argo has to be configured with a storage backend.

Argo currently supports three kinds of storage:
AWS S3, GCS (Google Cloud Storage), and MinIO.

If no storage backend is configured, you will get the following error:

controller is not configured with a default archive location

Here we deploy MinIO, an open-source object store, as the backend.

The official docs deploy MinIO with Helm (screenshot omitted).

Installing Helm and adding the Helm repository were slow over my network, so I installed MinIO directly from a yaml file instead; you could also follow the official docs and run a simple MinIO instance with Docker. Once MinIO is up, Argo is pointed at it through the workflow-controller-configmap, as sketched below.
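
A minimal sketch of the configmap (the bucket my-bucket, the in-cluster endpoint minio:9000, and the my-minio-cred Secret holding accessKey/secretKey are assumptions about the local MinIO install; adjust them to your deployment):

apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  config: |
    artifactRepository:
      s3:
        bucket: my-bucket          # the bucket must already exist in MinIO
        endpoint: minio:9000       # in-cluster MinIO Service name and port
        insecure: true             # MinIO here is served over plain HTTP
        accessKeySecret:
          name: my-minio-cred
          key: accessKey
        secretKeySecret:
          name: my-minio-cred
          key: secretKey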

Steps that generate or consume artifacts are very common when running workflows; typically, the output artifacts of one step are used as the input artifacts of a subsequent step.

The artifact example yaml:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: artifact-passing-
spec:
  entrypoint: artifact-example
  templates:
  - name: artifact-example
    steps:
    - - name: generate-artifact
        template: whalesay
    - - name: consume-artifact
        template: print-message
        arguments:
          artifacts:
          # bind message to the hello-art artifact
          # generated by the generate-artifact step
          - name: message
            from: "{{steps.generate-artifact.outputs.artifacts.hello-art}}"

  - name: whalesay
    container:
      image: docker/whalesay:latest
      command: [sh, -c]
      args: ["cowsay hello world | tee /tmp/hello_world.txt"]
    outputs:
      artifacts:
      # generate hello-art artifact from /tmp/hello_world.txt
      # artifacts can be directories as well as files
      - name: hello-art
        path: /tmp/hello_world.txt

  - name: print-message
    inputs:
      artifacts:
      # unpack the message input artifact
      # and put it at /tmp/message
      - name: message
        path: /tmp/message
    container:
      image: alpine:latest
      command: [sh, -c]
      args: ["cat /tmp/message"]

Run result:

[root@mydevops examples]# argo -n argo  list
NAME                     STATUS    AGE   DURATION   PRIORITY
artifact-passing-sp8zc   Running   14s   14s        0
[root@mydevops examples]# argo -n argo  watch artifact-passing-sp8zc
Name:                artifact-passing-sp8zc
Namespace:           argo
ServiceAccount:      default
Status:              Succeeded
Conditions:          
 Completed           True
Created:             Fri Jul 17 15:48:08 +0800 (1 minute ago)
Started:             Fri Jul 17 15:48:08 +0800 (1 minute ago)
Finished:            Fri Jul 17 15:48:48 +0800 (1 minute ago)
Duration:            40 seconds
ResourcesDuration:   13s*(1 cpu),13s*(100Mi memory)

STEP                       TEMPLATE          PODNAME                            DURATION  MESSAGE
 ✔ artifact-passing-sp8zc  artifact-example                                                 
 ├---✔ generate-artifact   whalesay          artifact-passing-sp8zc-3656188962  18s         
 └---✔ consume-artifact    print-message     artifact-passing-sp8zc-1545141922  20s


3.6 Workflow structure

The examples above should have given you an initial feel for how workflow yaml files are structured.

Let's first summarize the basic anatomy of a workflow spec:

  • Kubernetes header, including Kubernetes metadata
  • Spec body, containing the entrypoint invocation for the workflow and the template definitions
    • Entrypoint invocation with optional arguments
    • List of template definitions
      • Name of the template
      • Optionally, a list of inputs
      • Optionally, a list of outputs
      • Either a container invocation (leaf template) or a list of steps
        • For each step, a template invocation

Overall, a workflow spec is composed of a set of Argo templates, where each template consists of an optional input section, an optional output section, and either a container invocation or a list of steps, where each step invokes another template.

Note that the container section of a workflow spec accepts the same options as the container section of a pod spec, including but not limited to environment variables, secrets, and volume mounts; the same applies to volume claims and volumes. A minimal annotated skeleton is sketched below.
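
A sketch of that skeleton (all names are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: Workflow                        # Kubernetes header
metadata:
  generateName: my-workflow-          # Kubernetes metadata
spec:
  entrypoint: main                    # entrypoint invocation
  arguments:                          # optional workflow arguments
    parameters:
    - name: message
      value: hello
  templates:                          # list of template definitions
  - name: main                        # template name
    steps:                            # a list of steps...
    - - name: step-1                  # ...each step invokes another template
        template: leaf
        arguments:
          parameters:
          - name: message
            value: "{{workflow.parameters.message}}"
  - name: leaf
    inputs:                           # optional inputs
      parameters:
      - name: message
    container:                        # ...or a container invocation (leaf template)
      image: alpine:3.7
      command: [echo, "{{inputs.parameters.message}}"]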

3.7 Loops

When writing workflows, it is often very useful to be able to iterate over a set of inputs:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: loops-param-arg-
spec:
  entrypoint: loop-param-arg-example
  arguments:
    parameters:
    - name: os-list                                     # a list of items
      value: |
        [
          { "image": "debian", "tag": "9.1" },
          { "image": "debian", "tag": "8.9" },
          { "image": "alpine", "tag": "3.6" },
          { "image": "ubuntu", "tag": "17.10" }
        ]

  templates:
  - name: loop-param-arg-example
    inputs:
      parameters:
      - name: os-list
    steps:
    - - name: test-linux
        template: cat-os-release
        arguments:
          parameters:
          - name: image
            value: "{{item.image}}"
          - name: tag
            value: "{{item.tag}}"
        withParam: "{{inputs.parameters.os-list}}"      # parameter specifies the list to iterate over

  # This template is the same as in the previous example
  - name: cat-os-release
    inputs:
      parameters:
      - name: image
      - name: tag
    container:
      image: "{{inputs.parameters.image}}:{{inputs.parameters.tag}}"
      command: [cat]
      args: [/etc/os-release]

Run result: one test-linux step is spawned per image/tag pair in the list and they run in parallel, each printing its /etc/os-release (UI screenshot omitted).
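
Besides withParam over a JSON list as above, a step can also iterate over a literal list with withItems; a minimal sketch reusing the whalesay template:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: loops-
spec:
  entrypoint: loop-example
  templates:
  - name: loop-example
    steps:
    - - name: print-message
        template: whalesay
        arguments:
          parameters:
          - name: message
            value: "{{item}}"         # each item is substituted here
        withItems:                    # one parallel step per item
        - hello world
        - goodbye world
  - name: whalesay
    inputs:
      parameters:
      - name: message
    container:
      image: docker/whalesay:latest
      command: [cowsay]
      args: ["{{inputs.parameters.message}}"]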

3.8 Conditionals

Argo workflows can also execute specific steps conditionally; the example below models a coin flip.

The conditional example yaml:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: coinflip-
spec:
  entrypoint: coinflip
  templates:
  - name: coinflip
    steps:
    # flip a coin
    - - name: flip-coin
        template: flip-coin
    # evaluate the result in parallel
    - - name: heads
        template: heads                 # call heads template if "heads"
        when: "{{steps.flip-coin.outputs.result}} == heads"
      - name: tails
        template: tails                 # call tails template if "tails"
        when: "{{steps.flip-coin.outputs.result}} == tails"

  # Return heads or tails based on a random number
  - name: flip-coin
    script:
      image: python:alpine3.6
      command: [python]
      source: |
        import random
        result = "heads" if random.randint(0,1) == 0 else "tails"
        print(result)

  - name: heads
    container:
      image: alpine:3.6
      command: [sh, -c]
      args: ["echo \"it was heads\""]

  - name: tails
    container:
      image: alpine:3.6
      command: [sh, -c]
      args: ["echo \"it was tails\""]

Run result: flip-coin runs first, and then either the heads or the tails step executes depending on its output (UI screenshot omitted).

3.9 Secrets

Argo supports the same secrets syntax and mechanisms as the Kubernetes pod spec, allowing secrets to be accessed as environment variables or as volume mounts.

The example yaml:

# To run this example, first create the secret by running:
# kubectl create secret generic my-secret --from-literal=mypassword=S00perS3cretPa55word
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: secret-example-
spec:
  entrypoint: whalesay
  # To access secrets as files, add a volume entry in spec.volumes[] and
  # then in the container template spec, add a mount using volumeMounts.
  volumes:
  - name: my-secret-vol
    secret:
      secretName: my-secret     # name of an existing k8s secret
  templates:
  - name: whalesay
    container:
      image: alpine:3.7
      command: [sh, -c]
      args: ['
        echo "secret from env: $MYSECRETPASSWORD";
        echo "secret from file: `cat /secret/mountpath/mypassword`"
      ']
      # To access secrets as environment variables, use the k8s valueFrom and
      # secretKeyRef constructs.
      env:
      - name: MYSECRETPASSWORD  # name of env var
        valueFrom:
          secretKeyRef:
            name: my-secret     # name of an existing k8s secret
            key: mypassword     # 'key' subcomponent of the secret
      volumeMounts:
      - name: my-secret-vol     # mount file containing secret at /secret/mountpath
        mountPath: "/secret/mountpath"

Run result (in this local demo the workflow was apparently pointed at the existing my-minio-cred secret in the argo namespace rather than the my-secret from the yaml above, which is why both the environment variable and the mounted file print admin):

[root@mydevops examples]# kubectl get secrets -n argo my-minio-cred -oyaml
apiVersion: v1
data:
  accessKey: YWRtaW4=
  secretKey: MTIzMTIzMTIz
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"accessKey":"YWRtaW4=","secretKey":"MTIzMTIzMTIz"},"kind":"Secret","metadata":{"annotations":{},"name":"my-minio-cred","namespace":"argo"}}
  creationTimestamp: 2020-07-09T04:24:48Z
  name: my-minio-cred
  namespace: argo
  resourceVersion: "143571"
  selfLink: /api/v1/namespaces/argo/secrets/my-minio-cred
  uid: 1fef3aa5-c19c-11ea-8646-000c2944bd9b
type: Opaque
[root@mydevops examples]# argo logs -n argo secret-example-ft25s
secret-example-ft25s: secret from env: admin
secret-example-ft25s: secret from file: admin

3.10 Scripts & Results

Often we just need a template that executes a script specified directly in the workflow spec.

The yaml file:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: scripts-bash-
spec:
  entrypoint: bash-script-example
  templates:
  - name: bash-script-example
    steps:
    - - name: generate
        template: gen-random-int-bash
    - - name: print
        template: print-message
        arguments:
          parameters:
          - name: message
            value: "{{steps.generate.outputs.result}}"  # The result of the here-script

  - name: gen-random-int-bash
    script:
      image: debian:9.4
      command: [bash]
      source: |                                         # Contents of the here-script
        cat /dev/urandom | od -N2 -An -i | awk -v f=1 -v r=100 '{printf "%i\n", f + r * $1 / 65536}'

  - name: gen-random-int-python
    script:
      image: python:alpine3.6
      command: [python]
      source: |
        import random
        i = random.randint(1, 100)
        print(i)

  - name: gen-random-int-javascript
    script:
      image: node:9.1-alpine
      command: [node]
      source: |
        var rand = Math.floor(Math.random() * 100);
        console.log(rand);

  - name: print-message
    inputs:
      parameters:
      - name: message
    container:
      image: alpine:latest
      command: [sh, -c]
      args: ["echo result was: {{inputs.parameters.message}}"]

Run result:

[root@mydevops examples]# argo logs scripts-bash-twsqv
scripts-bash-twsqv-4196821547: 19
scripts-bash-twsqv-202958314: result was: 19


3.11 Output Parameters

As mentioned in section 3.5, when one step needs to hand a result to the next, Argo offers two mechanisms:

  • parameters
  • artifacts

This section covers output parameters; for output artifacts, see section 3.5.

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: output-parameter-
spec:
  entrypoint: output-parameter
  templates:
  - name: output-parameter
    steps:
    - - name: generate-parameter
        template: whalesay
    - - name: consume-parameter
        template: print-message
        arguments:
          parameters:
          # Pass the hello-param output from the generate-parameter step as the message input to print-message
          # (a DAG template would reference it as {{tasks.generate-parameter.outputs.parameters.hello-param}})
          - name: message
            value: "{{steps.generate-parameter.outputs.parameters.hello-param}}"

  - name: whalesay
    container:
      image: docker/whalesay:latest
      command: [sh, -c]
      args: ["echo -n hello world > /tmp/hello_world.txt"]  # generate the content of hello_world.txt
    outputs:
      parameters:
      - name: hello-param		# name of output parameter
        valueFrom:
          path: /tmp/hello_world.txt  # set the value of hello-param to the contents of this file

  - name: print-message
    inputs:
      parameters:
      - name: message
    container:
      image: docker/whalesay:latest
      command: [cowsay]
      args: ["{{inputs.parameters.message}}"]

Run result: print-message prints the "hello world" content written by generate-parameter (UI screenshot omitted).

3.12 Retrying Failed or Errored Steps

To retry failed or errored steps, the workflow yaml provides a retryStrategy field that describes how failed or errored steps are handled:

  • limit: the maximum number of retries
  • retryPolicy: the retry policy, one of Always, OnFailure (the default), or OnError
  • backoff: specifies the back-off between retries

If an empty retryStrategy is provided (i.e. retryStrategy: {}), the step will be retried until it succeeds (a sketch follows the example below).

# This example demonstrates the use of retry back offs
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: retry-backoff-
spec:
  entrypoint: retry-backoff
  templates:
  - name: retry-backoff
    retryStrategy:
      limit: 10
      retryPolicy: "Always"
      backoff:
        duration: "1"      # Must be a string. Default unit is seconds. Could also be a Duration, e.g.: "2m", "6h", "1d"
        factor: 2
        maxDuration: "1m"  # Must be a string. Default unit is seconds. Could also be a Duration, e.g.: "2m", "6h", "1d"
    container:
      image: python:alpine3.6
      command: ["python", -c]
      # fail with a 66% probability
      args: ["import random; import sys; exit_code = random.choice([0, 1, 1]); sys.exit(exit_code)"]

3.13 Recursion

Templates in a workflow can invoke each other recursively! In this example we reuse the coin-flip logic and keep flipping until heads comes up.

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: coinflip-recursive-
spec:
  entrypoint: coinflip
  templates:
  - name: coinflip
    steps:
    # flip a coin
    - - name: flip-coin
        template: flip-coin
    # evaluate the result in parallel
    - - name: heads
        template: heads                 # call heads template if "heads"
        when: "{{steps.flip-coin.outputs.result}} == heads"
      - name: tails                     # keep flipping coins if "tails"
        template: coinflip
        when: "{{steps.flip-coin.outputs.result}} == tails"
  - name: flip-coin
    script:
      image: python:alpine3.6
      command: [python]
      source: |
        import random
        result = "heads" if random.randint(0,1) == 0 else "tails"
        print(result)
  - name: heads
    container:
      image: alpine:3.6
      command: [sh, -c]
      args: ["echo \"it was heads\""]

Run result (UI screenshots omitted):

In the first run, heads came up on the second flip and the workflow stopped; in the second run, it took three flips before heads came up and the recursion ended.

3.14 Exit handlers

An exit handler is a template that is always executed at the end of the workflow, regardless of success or failure. Common use cases:

  • Clean up after a workflow runs
  • Send notifications of workflow status
  • Post the pass/fail status to a webhook result (e.g. a GitHub build result)
  • Resubmit or submit another workflow

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: exit-handlers-
spec:
  entrypoint: intentional-fail
  onExit: exit-handler                  # invoke exit-hander template at end of the workflow
  templates:
  # primary workflow template
  - name: intentional-fail
    container:
      image: alpine:latest
      command: [sh, -c]
      args: ["echo intentional failure; exit 1"]

  # Exit handler templates
  # After the completion of the entrypoint template, the status of the
  # workflow is made available in the global variable {{workflow.status}}.
  # {{workflow.status}} will be one of: Succeeded, Failed, Error
  - name: exit-handler
    steps:
    - - name: notify
        template: send-email
      - name: celebrate
        template: celebrate
        when: "{{workflow.status}} == Succeeded"
      - name: cry
        template: cry
        when: "{{workflow.status}} != Succeeded"
  - name: send-email
    container:
      image: alpine:latest
      command: [sh, -c]
      args: ["echo send e-mail: {{workflow.name}} {{workflow.status}}"]
  - name: celebrate
    container:
      image: alpine:latest
      command: [sh, -c]
      args: ["echo hooray!"]
  - name: cry
    container:
      image: alpine:latest
      command: [sh, -c]
      args: ["echo boohoo!"]

3.15 Timeouts

A timeout can be enforced with the activeDeadlineSeconds field; in the example below it is set on a container template (it can also be set at the workflow spec level, as sketched after the example).

# To enforce a timeout for a container template, specify a value for activeDeadlineSeconds.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: timeouts-
spec:
  entrypoint: sleep
  templates:
  - name: sleep
    container:
      image: alpine:latest
      command: [sh, -c]
      args: ["echo sleeping for 1m; sleep 60; echo done"]
    activeDeadlineSeconds: 10           # terminate container template after 10 seconds
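
activeDeadlineSeconds can also be set at the workflow spec level to cap the duration of the whole workflow rather than a single template; a minimal sketch:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: timeouts-wf-
spec:
  activeDeadlineSeconds: 30           # terminate the whole workflow after 30 seconds
  entrypoint: sleep
  templates:
  - name: sleep
    container:
      image: alpine:latest
      command: [sh, -c]
      args: ["echo sleeping for 1m; sleep 60; echo done"]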

Summary

Argo workflows have a very rich feature set: beyond everything shown above, they can also manipulate Kubernetes resources, run sidecar containers, and more. Continuous integration is a popular application of workflows. At the moment Argo does not provide event triggers for automatically kicking off CI jobs, though this is planned for a future release; for now you can write a cron job that checks for new commits and kicks off the needed workflow, or use an existing Jenkins server to start the workflow.

References

https://github.com/argoproj/argo/blob/master/examples/README.md
