Cloud-Native CI/CD: Tekton Pipeline Concepts

This post introduces the Pipeline concept in Tekton. Beyond the Task and TaskRun covered earlier, what other powerful features does this cloud-native CI/CD tool have to offer?

Pipeline

A Pipeline defines a series of Tasks that accomplish a specific build or delivery goal. A Pipeline run is either triggered by an event or invoked from a PipelineRun. The difference between a Pipeline and a Task is that a Task can only execute itself, whereas a Pipeline can orchestrate multiple Tasks; note the word orchestrate — this is more than simply running them one after another. A Pipeline's spec.tasks is an array listing the Tasks to orchestrate, but the order of that array is not necessarily the execution order: the order in which the Tasks of a Pipeline execute can be specified explicitly. The following sections walk through how to use a Pipeline.
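
Before diving into the individual fields, here is a minimal Pipeline skeleton for orientation — just a sketch with made-up names (example-pipeline, some-task and another-task are placeholders, not from the article); the runAfter field shown here is explained in section 1.5:

apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: example-pipeline # placeholder name
spec:
  tasks:
    - name: first-task
      taskRef:
        name: some-task # an existing Task (placeholder)
    - name: second-task
      taskRef:
        name: another-task # an existing Task (placeholder)
      runAfter:
        - first-task # explicit ordering, regardless of the array order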

1.1 Declared resources

For a Pipeline to interact with the outside world it may need PipelineResources, which are provided to Tasks as inputs and outputs. Here is how PipelineResources are declared in a Pipeline:

spec:
  resources:
    - name: my-repo
      type: git
    - name: my-image
      type: image
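
The Pipeline only declares resource names and types; the actual PipelineResource objects are defined separately and bound when the Pipeline is run. A sketch of what the two resources above might look like (the repository URL and image reference are made-up placeholders):

apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: my-repo
spec:
  type: git
  params:
    - name: url
      value: https://github.com/example/my-repo.git # placeholder URL
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: my-image
spec:
  type: image
  params:
    - name: url
      value: registry.example.com/demo/my-image:latest # placeholder image reference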

1.2 Workspaces

A workspace is a way to provide a shared volume to a running Pipeline and its Tasks. A workspace defined in the Pipeline is passed to the relevant Tasks as a shared volume. In Tekton, workspaces are typically used to provide:

  • Storage for inputs and/or outputs
  • A place to share data between Tasks
  • A mount point for Secrets used for authentication
  • A mount point for configuration held in a ConfigMap
  • A mount point for common tools shared across the organization
  • A cache of build artifacts that speeds up the job; in short, a cache for build-time dependencies, for example a Maven repository

Usage looks like this:
spec:
  workspaces:
    - name: pipeline-ws1 # name of the workspace
  tasks:
    - name: use-ws-from-pipeline
      taskRef:
        name: gen-code # the Task to use
      workspaces:
        - name: output
          workspace: pipeline-ws1
    - name: use-ws-again
      taskRef:
        name: commit # the Task to use
      runAfter:
        - use-ws-from-pipeline # defines execution order: this task runs after use-ws-from-pipeline
      workspaces:
        - name: src
          workspace: pipeline-ws1
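
The Pipeline only names the workspace; the actual volume is supplied when the Pipeline is run. A sketch of how a PipelineRun might bind pipeline-ws1 to a PersistentVolumeClaim (the Pipeline name and PVC name below are assumptions):

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: use-ws-pipeline-run
spec:
  pipelineRef:
    name: my-pipeline # the Pipeline that declares pipeline-ws1 (placeholder name)
  workspaces:
    - name: pipeline-ws1
      persistentVolumeClaim:
        claimName: my-shared-pvc # pre-created PVC; the name is an assumption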

1.3 Using Tasks and PipelineResources

Using already-defined Tasks and PipelineResources in a Pipeline:

spec:
  tasks:
    - name: build-the-image
      taskRef:
        name: build-push # the pre-defined Task
      resources:
        inputs: # input resource: the source code
          - name: workspace
            resource: my-repo
        outputs: # output resource: the image
          - name: image
            resource: my-image
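
At run time the declared resources are bound to concrete PipelineResource objects through the PipelineRun. A sketch, assuming the PipelineResources from section 1.1 exist and the Pipeline is named build-pipeline (both names are assumptions):

apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: build-the-image-run
spec:
  pipelineRef:
    name: build-pipeline # the Pipeline above (placeholder name)
  resources:
    - name: my-repo # matches the resource name declared in the Pipeline
      resourceRef:
        name: my-repo # the PipelineResource object to use
    - name: my-image
      resourceRef:
        name: my-image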

1.4 from

You may need to use the output of a previous task as an input. For example:

- name: build-app
  taskRef:
    name: build-push
  resources:
    outputs: # the task's outputs
      - name: image
        resource: my-image
- name: deploy-app
  taskRef:
    name: deploy-kubectl
  resources:
    inputs: # the task's inputs
      - name: image
        resource: my-image
        from:
          - build-app # source of this input, which also means deploy-app runs after build-app

The resource my-image is taken from the output of build-app and used as the input to deploy-app, so my-image must be an output of build-app. This also means that build-app must finish before deploy-app runs, regardless of the order in which they appear in the definition.

1.5 runAfter

Sometimes you need pipeline tasks to run in a particular order even though they have no explicit output-to-input dependency (expressed with from). In that case you can use runAfter to indicate that a pipeline task should execute after one or more other pipeline tasks.

- name: test-app
  taskRef:
    name: make-test
  resources:
    inputs:
      - name: workspace
        resource: my-repo
- name: build-app
  taskRef:
    name: kaniko-build
  runAfter:
    - test-app # build-app runs after test-app
  resources:
    inputs:
      - name: workspace
        resource: my-repo

The build-app task executes after test-app.

1.6 retries

Sometimes you need a retry policy to cope with issues you may run into, such as network errors, missing dependencies, or upload problems.
The default value of retries is 0, i.e. no retries. A custom retry policy looks like this:

tasks:
  - name: build-the-image
    retries: 1
    taskRef:
      name: build-push

If the build-the-image task fails, a second attempt is started immediately after the first failure. With this setting it is retried at most once.

1.7 conditions

Sometimes you only want a Task to run when certain conditions are true. The conditions field lets you list references to Conditions that are evaluated before the Task runs. If all conditions evaluate to true, the Task runs; if any condition fails, the Task does not run and its status is set to ConditionCheckFailed. Note that, normally, a Task being skipped this way does not fail the whole PipelineRun. Using a condition in a Pipeline:

tasks:
  - name: conditional-task
    taskRef:
      name: build-push
    conditions:
      - conditionRef: my-condition # reference an existing Condition
        params:
          - name: my-param
            value: my-value
        resources:
          - name: workspace
            resource: source-repo

We'll look at how to define a Condition in detail in a later post.
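
As a preview, a Condition is itself a CRD whose check step decides, by its exit code, whether the guarded task runs. A minimal sketch of what the my-condition referenced above might look like (the check logic here is my own assumption, not from the article):

apiVersion: tekton.dev/v1alpha1
kind: Condition
metadata:
  name: my-condition
spec:
  params:
    - name: my-param
      type: string
  resources:
    - name: workspace
      type: git
  check:
    image: alpine
    script: |
      # exit 0 => the condition passes and the task runs; non-zero => ConditionCheckFailed
      [ "$(params.my-param)" = "my-value" ]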

1.8 Timeout

The Timeout property of a Pipeline Task lets you set a timeout for the TaskRun executed as part of the PipelineRun. If the TaskRun exceeds the specified duration it fails, and the PipelineRun associated with the Pipeline fails as well. A Pipeline's Tasks have no default timeout, so a timeout has to be specified on the pipeline task when the Pipeline is defined. An example of a pipeline task with a timeout looks like this:

spec:
  tasks:
    - name: build-the-image
      taskRef:
        name: build-push
      timeout: "0h1m30s"

A complete timeout example:

apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: task-echo-message
spec:
  inputs:
    params:
      - name: MESSAGE
        type: string
        default: "Hello World"
  steps:
    - name: echo
      image: ubuntu
      command:
        - /bin/bash
      args:
        - -c
        - "sleep 90 && echo '$(inputs.params.MESSAGE)'" # sleep to simulate a long-running step, then print the message
---

apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: pipelinerun-timeout
spec:
  # one-and-a-half-hour timeout
  timeout: 1h30m
  pipelineSpec:
    params:
      - name: MORNING_GREETINGS
        description: "morning greetings, default is Good Morning!"
        type: string
        default: "Good Morning!"
      - name: NIGHT_GREETINGS
        description: "Night greetings, default is Good Night!"
        type: string
        default: "Good Night!"
    tasks:
      # Task to display morning greetings
      - name: echo-good-morning
        taskRef:
          name: task-echo-message
        params:
          - name: MESSAGE
            value: $(params.MORNING_GREETINGS)
      # Task to display night greetings
      - name: echo-good-night
        taskRef:
          name: task-echo-message
        params:
          - name: MESSAGE
            value: $(params.NIGHT_GREETINGS)
  params:
    - name: MORNING_GREETINGS
      value: "Good Morning, Bob!"
    - name: NIGHT_GREETINGS
      value: "Good Night, Bob!"

Note: setting a timeout is well worth it; otherwise a Task's pod may run indefinitely and waste cluster resources, and a task that has overrun its budget really should be killed. I once built a Java project whose Maven repository was not configured with the Alibaba Cloud mirror, so dependency downloads were very slow: the run lasted an hour and was only force-stopped when it hit the default timeout.

1.9 Results

In a Pipeline, a Task's results can be used as inputs to other Tasks: a Task can emit results while it executes, and those results can be used as parameter values in subsequent Tasks of the Pipeline. Tekton also infers the execution order of Tasks from these parameters, so a Task that produces a result is guaranteed to run before the Tasks that consume it.
A Task result is used as the value of another Task's parameter through variable substitution:

params:
  - name: foo
    value: "$(tasks.previous-task-name.results.bar-result)"

"previous-task-name"产生的result被用于参数值。完整的例子如下:

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: sum-and-multiply-pipeline
spec:
  params:
    - name: a
      type: string
      default: "1"
    - name: b
      type: string
      default: "1"
  tasks:
    - name: sum-inputs
      taskRef:
        name: sum
      params:
        - name: a
          value: "$(params.a)"
        - name: b
          value: "$(params.b)"
    - name: multiply-inputs
      taskRef:
        name: multiply
      params:
        - name: a
          value: "$(params.a)"
        - name: b
          value: "$(params.b)"
    - name: sum-and-multiply
      taskRef:
        name: sum
      params:
        - name: a
          value: "$(tasks.multiply-inputs.results.product)$(tasks.sum-inputs.results.sum)" #该任务在multiply-inputs之后执行
        - name: b
          value: "$(tasks.multiply-inputs.results.product)$(tasks.sum-inputs.results.sum)"
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: sum
  annotations:
    description: |
      A simple task that sums the two provided integers
spec:
  params:
    - name: a
      type: string
      default: "1"
      description: The first integer
    - name: b
      type: string
      default: "1"
      description: The second integer
  results:
    - name: sum
      description: The sum of the two provided integers
  steps:
    - name: sum
      image: bash:latest
      script: |
        #!/usr/bin/env bash
        echo -n $(( "$(params.a)" + "$(params.b)" )) | tee $(results.sum.path)
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: multiply
  annotations:
    description: |
      A simple task that multiplies the two provided integers
spec:
  params:
    - name: a
      type: string
      default: "1"
      description: The first integer
    - name: b
      type: string
      default: "1"
      description: The second integer
  results:
    - name: product
      description: The product of the two provided integers
  steps:
    - name: product
      image: bash:latest
      script: |
        #!/usr/bin/env bash
        echo -n $(( "$(params.a)" * "$(params.b)" )) | tee $(results.product.path)
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: sum-and-multiply-pipeline-run-
spec:
  pipelineRef:
    name: sum-and-multiply-pipeline
  params:
    - name: a
      value: "2"
    - name: b
      value: "10"

1.10 Ordering

We covered from and runAfter individually above; combined, they let you run tasks in a specified order:

- name: lint-repo
  taskRef:
    name: pylint
  resources:
    inputs:
      - name: workspace
        resource: my-repo
- name: test-app
  taskRef:
    name: make-test
  resources:
    inputs:
      - name: workspace
        resource: my-repo
- name: build-app
  taskRef:
    name: kaniko-build-app
  runAfter:
    - test-app # build-app runs after test-app
  resources:
    inputs:
      - name: workspace
        resource: my-repo
    outputs:
      - name: image
        resource: my-app-image
- name: build-frontend
  taskRef:
    name: kaniko-build-frontend
  runAfter:
    - test-app # build-frontend runs after test-app
  resources:
    inputs:
      - name: workspace
        resource: my-repo
    outputs:
      - name: image
        resource: my-frontend-image
- name: deploy-all
  taskRef:
    name: deploy-kubectl
  resources:
    inputs:
      - name: my-app-image
        resource: my-app-image
        from:
          - build-app # runs after build-app
      - name: my-frontend-image
        resource: my-frontend-image
        from:
          - build-frontend # runs after build-frontend

The execution graph:

        |            |
        v            v
     test-app    lint-repo
    /        \
   v          v
build-app  build-frontend
   \          /
    v        v
    deploy-all

Summary

For more complex CI/CD jobs, or when the execution order needs to be controlled, a Pipeline is the tool to reach for. Once the Pipeline is defined and created you can leave it alone: the necessary parameters are supplied when the PipelineRun is started, so each time you want to run the build you simply create a PipelineRun. We will cover PipelineRun usage next time. Overall, a Pipeline's feature set is fairly complete compared to a Task's, and I will demonstrate it with some concrete scenarios in later posts.
Feel free to follow the "南君手记" WeChat official account; comments and corrections are welcome. Let's grow together on this technical journey!

Reposted from blog.csdn.net/u013276277/article/details/106089706