In this section, you’ll learn how to configure pipelines.

Step Types

Within each stage, you can add as many steps as you’d like. When there are multiple steps in one stage, they run concurrently.

Step types include:

  • Run Script
  • Build and Publish Images
  • Publish Catalog Template
  • Deploy YAML
  • Deploy Catalog App

Configuring Steps by UI

If you haven’t added any stages, click Configure pipeline for this branch to configure the pipeline through the UI.

  1. Add stages to your pipeline execution by clicking Add Stage.

    1. Enter a Name for each stage of your pipeline.
    2. For each stage, you can configure trigger rules by clicking Show Advanced Options. Note: These rules can be updated at any time.
  2. After you’ve created a stage, start adding steps by clicking Add a Step. You can add multiple steps to each stage.

Configuring Steps by YAML

For each stage, you can add multiple steps. Read more about each step type and the advanced options to get all the details on how to configure the YAML. Below is a small example of multiple stages, each with a single step.

  # example
  stages:
  - name: Build something
    # Conditions for stages
    when:
      branch: master
      event: [ push, pull_request ]
    # Multiple steps run concurrently
    steps:
    - runScriptConfig:
        image: busybox
        shellScript: date -R
  - name: Publish my image
    steps:
    - publishImageConfig:
        dockerfilePath: ./Dockerfile
        buildContext: .
        tag: rancher/rancher:v2.0.0
        # Optionally push to remote registry
        pushRemote: true
        registry: reg.example.com

Step Type: Run Script

The Run Script step executes arbitrary commands in the workspace inside a specified container. You can use it to build, test, and more, using whatever utilities the base image provides. For your convenience, you can use variables to refer to metadata of a pipeline execution. Please refer to the pipeline variable substitution reference for the list of available variables.

Configuring Script by UI

  1. From the Step Type drop-down, choose Run Script and fill in the form.

  2. Click Add.

Configuring Script by YAML

  # example
  stages:
  - name: Build something
    steps:
    - runScriptConfig:
        image: golang
        shellScript: go build

Step Type: Build and Publish Images

The Build and Publish Image step builds and publishes a Docker image. This process requires a Dockerfile in your source code’s repository to complete successfully.

The option to publish an image to an insecure registry is not exposed in the UI, but you can specify an environment variable in the YAML that allows you to publish an image insecurely.

Configuring Building and Publishing Images by UI

  1. From the Step Type drop-down, choose Build and Publish.

  2. Fill in the rest of the form. Descriptions for each field are listed below. When you’re done, click Add.

    • Dockerfile Path: The relative path to the Dockerfile in the source code repo. By default, this path is ./Dockerfile, which assumes the Dockerfile is in the root directory. You can set it to other paths in different use cases (./path/to/myDockerfile for example).
    • Image Name: The image name in name:tag format. The registry address is not required. For example, to build example.com/repo/my-image:dev, enter repo/my-image:dev.
    • Push image to remote repository: An option to push the built image to a registry. To use this option, enable it and choose a registry from the drop-down. If this option is disabled, the image is pushed to the internal registry.
    • Build Context (Show advanced options): By default, the root directory of the source code (.). For more details, see the Docker build command documentation.

Configuring Building and Publishing Images by YAML

You can pass specific arguments to the Docker daemon and the build. They are not exposed in the UI, but they are available in pipeline YAML format, as indicated in the example below. Available environment variables include:

  • PLUGIN_DRY_RUN: Disable docker push.
  • PLUGIN_DEBUG: Docker daemon executes in debug mode.
  • PLUGIN_MIRROR: Docker daemon registry mirror.
  • PLUGIN_INSECURE: Docker daemon allows insecure registries.
  • PLUGIN_BUILD_ARGS: Docker build args, a comma-separated list.
  # This example shows an environment variable being used
  # in the Publish Image step. This variable allows you to
  # publish an image to an insecure registry:
  stages:
  - name: Publish Image
    steps:
    - publishImageConfig:
        dockerfilePath: ./Dockerfile
        buildContext: .
        tag: repo/app:v1
        pushRemote: true
        registry: example.com
      env:
        PLUGIN_INSECURE: "true"
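Build arguments are passed the same way. As a sketch (the argument names and values here are illustrative, not taken from the original documentation), PLUGIN_BUILD_ARGS takes a comma-separated list:

```yaml
stages:
- name: Build with args
  steps:
  - publishImageConfig:
      dockerfilePath: ./Dockerfile
      buildContext: .
      tag: repo/app:v1
    env:
      # Comma-separated Docker build args (illustrative values)
      PLUGIN_BUILD_ARGS: "ENV=production,GIT_COMMIT=${CICD_GIT_COMMIT}"
```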

Step Type: Publish Catalog Template

The Publish Catalog Template step publishes a version of a catalog app template (i.e. Helm chart) to a Git-hosted chart repository. It generates a Git commit and pushes it to your chart repository. This process requires a chart folder in your source code’s repository and a pre-configured secret in the dedicated pipeline namespace to complete successfully. Any variables in the pipeline variable substitution reference are supported in any file in the chart folder.

Configuring Publishing a Catalog Template by UI

  1. From the Step Type drop-down, choose Publish Catalog Template.

  2. Fill in the rest of the form. Descriptions for each field are listed below. When you’re done, click Add.

    • Chart Folder: The relative path to the chart folder in the source code repo, where the Chart.yaml file is located.
    • Catalog Template Name: The name of the template. For example, wordpress.
    • Catalog Template Version: The version of the template you want to publish. It should be consistent with the version defined in the Chart.yaml file.
    • Protocol: You can choose to publish via HTTP(S) or SSH protocol.
    • Secret: The secret that stores your Git credentials. You need to create a secret in the dedicated pipeline namespace in the project before adding this step. If you use the HTTP(S) protocol, store the Git username and password in the USERNAME and PASSWORD keys of the secret. If you use the SSH protocol, store the Git deploy key in the DEPLOY_KEY key of the secret. After the secret is created, select it in this option.
    • Git URL: The Git URL of the chart repository that the template will be published to.
    • Git Branch: The Git branch of the chart repository that the template will be published to.
    • Author Name: The author name used in the commit message.
    • Author Email: The author email used in the commit message.
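The Secret field expects an ordinary Kubernetes secret in the dedicated pipeline namespace. As a sketch for the HTTP(S) case (the secret name, namespace, and credential values here are assumptions for illustration):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: publish-keys                # selected in the Secret drop-down
  namespace: my-pipeline-namespace  # the project's dedicated pipeline namespace (assumed name)
type: Opaque
stringData:
  USERNAME: example-user    # Git username
  PASSWORD: example-token   # Git password or access token
```

For the SSH case, the secret would instead hold the Git deploy key under a DEPLOY_KEY key.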

Configuring Publishing a Catalog Template by YAML

You can add Publish Catalog Template steps directly in the .rancher-pipeline.yml file.

Under the steps section, add a step with publishCatalogConfig. You will provide the following information:

  • Path: The relative path to the chart folder in the source code repo, where the Chart.yaml file is located.
  • CatalogTemplate: The name of the template.
  • Version: The version of the template you want to publish. It should be consistent with the version defined in the Chart.yaml file.
  • GitUrl: The git URL of the chart repository that the template will be published to.
  • GitBranch: The git branch of the chart repository that the template will be published to.
  • GitAuthor: The author name used in the commit message.
  • GitEmail: The author email used in the commit message.
  • Credentials: You should provide Git credentials by referencing secrets in the dedicated pipeline namespace. If you publish via the SSH protocol, inject your deploy key into the DEPLOY_KEY environment variable. If you publish via the HTTP(S) protocol, inject your username and password into the USERNAME and PASSWORD environment variables.
  # example
  stages:
  - name: Publish Wordpress Template
    steps:
    - publishCatalogConfig:
        path: ./charts/wordpress/latest
        catalogTemplate: wordpress
        version: ${CICD_GIT_TAG}
        gitUrl: git@github.com:myrepo/charts.git
        gitBranch: master
        gitAuthor: example-user
        gitEmail: user@example.com
      envFrom:
      - sourceName: publish-keys
        sourceKey: DEPLOY_KEY

Step Type: Deploy YAML

This step deploys arbitrary Kubernetes resources to the project. This deployment requires a Kubernetes manifest file to be present in the source code repository. Pipeline variable substitution is supported in the manifest file. You can view an example file at GitHub. Please refer to the pipeline variable substitution reference for the list of available variables.

Configure Deploying YAML by UI

  1. From the Step Type drop-down, choose Deploy YAML and fill in the form.

  2. Enter the YAML Path, which is the path to the manifest file in the source code.

  3. Click Add.

Configure Deploying YAML by YAML

  # example
  stages:
  - name: Deploy
    steps:
    - applyYamlConfig:
        path: ./deployment.yaml

Step Type: Deploy Catalog App

The Deploy Catalog App step deploys a catalog app in the project. It will install a new app if it is not present, or upgrade an existing one.

Configure Deploying Catalog App by UI

  1. From the Step Type drop-down, choose Deploy Catalog App.

  2. Fill in the rest of the form. Descriptions for each field are listed below. When you’re done, click Add.

    • Catalog: The catalog from which the app template will be used.
    • Template Name: The name of the app template. For example, wordpress.
    • Template Version: The version of the app template you want to deploy.
    • Namespace: The target namespace where you want to deploy the app.
    • App Name: The name of the app you want to deploy.
    • Answers: Key-value pairs of answers used to deploy the app.

Configure Deploying Catalog App by YAML

You can add Deploy Catalog App steps directly in the .rancher-pipeline.yml file.

Under the steps section, add a step with applyAppConfig. You will provide the following information:

  • CatalogTemplate: The ID of the template. This can be found by clicking Launch app and selecting View details for the app. It is the last part of the URL.
  • Version: The version of the template you want to deploy.
  • Answers: Key-value pairs of answers used to deploy the app.
  • Name: The name of the app you want to deploy.
  • TargetNamespace: The target namespace where you want to deploy the app.
  # example
  stages:
  - name: Deploy App
    steps:
    - applyAppConfig:
        catalogTemplate: cattle-global-data:library-mysql
        version: 0.3.8
        answers:
          persistence.enabled: "false"
        name: testmysql
        targetNamespace: test

Timeouts

By default, each pipeline execution has a timeout of 60 minutes. If the pipeline execution cannot complete within its timeout period, the pipeline is aborted.

Configuring Timeouts by UI

Enter a new value in the Timeout field.

Configuring Timeouts by YAML

In the timeout section, enter the timeout value in minutes.

  # example
  stages:
  - name: Build something
    steps:
    - runScriptConfig:
        image: busybox
        shellScript: ls
  # timeout in minutes
  timeout: 30

Notifications

You can enable notifications to any notifiers based on the build status of a pipeline. Before enabling notifications, Rancher recommends setting up notifiers so that recipients can be added immediately.

Configuring Notifications by UI

  1. Within the Notification section, turn on notifications by clicking Enable.

  2. Select the conditions for the notification. You can choose to be notified on the following statuses: Failed, Success, Changed. For example, if you want to receive notifications when an execution fails, select Failed.

  3. If you don’t have any existing notifiers, Rancher will warn that no notifiers are set up and provide a link to the notifiers page. Follow the instructions to add a notifier. If you already have notifiers, you can add them to the notification by clicking the Add Recipient button.

    Note: Notifiers are configured at a cluster level and require a different level of permissions.

  4. For each recipient, select the notifier type from the drop-down. Based on the type of notifier, you can use the default recipient or override it with a different one. For example, if you have a notifier for Slack, you can update which channel to send the notification to. You can add additional notifiers by clicking Add Recipient.

Configuring Notifications by YAML

In the notification section, you will provide the following information:

  • Recipients: The list of notifiers/recipients that will receive the notification.
    • Notifier: The ID of the notifier. This can be found by finding the notifier and selecting View in API to get the ID.
    • Recipient: Depending on the type of the notifier, you can use the default recipient or override it. For example, a Slack notifier has a default channel, but you can send notifications to a different channel by selecting a different recipient.
  • Condition: The execution statuses for which a notification is sent.
  • Message (Optional): If you want to change the default notification message, you can edit it in the YAML. Note: This option is not available in the UI.
  # Example
  stages:
  - name: Build something
    steps:
    - runScriptConfig:
        image: busybox
        shellScript: ls
  notification:
    recipients:
    - # Recipient
      recipient: "#mychannel"
      # ID of Notifier
      notifier: "c-wdcsr:n-c9pg7"
    - recipient: "test@example.com"
      notifier: "c-wdcsr:n-lkrhd"
    # Select which statuses you want the notification to be sent
    condition: ["Failed", "Success", "Changed"]
    # Ability to override the default message (Optional)
    message: "my-message"

Triggers and Trigger Rules

After you configure a pipeline, you can trigger it using different methods:

  • Manually:

    After you configure a pipeline, you can trigger a build using the latest CI definition from the Rancher UI. When a pipeline execution is triggered, Rancher dynamically provisions a Kubernetes pod to run your CI tasks and then removes it upon completion.

  • Automatically:

    When you enable a repository for a pipeline, webhooks are automatically added to the version control system. When project users interact with the repo by pushing code, opening pull requests, or creating a tag, the version control system sends a webhook to Rancher Server, triggering a pipeline execution.

    To use this automation, webhook management permission is required for the repository. Therefore, when users authenticate and fetch their repositories, only those on which they have webhook management permission will be shown.

You can create trigger rules for fine-grained control of pipeline executions in your pipeline configuration. Trigger rules come in two types:

  • Run this when: This type of rule starts the pipeline, stage, or step when a trigger explicitly occurs.

  • Do Not Run this when: This type of rule skips the pipeline, stage, or step when a trigger explicitly occurs.

If all conditions evaluate to true, then the pipeline/stage/step is executed. Otherwise it is skipped. When a pipeline is skipped, none of the pipeline is executed. When a stage/step is skipped, it is considered successful and follow-up stages/steps continue to run.

Wildcard character (*) expansion is supported in branch conditions.
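For example, pipeline-level branch conditions with wildcards might look like the following sketch (branch names are illustrative):

```yaml
# Run the pipeline for master and any release/* branch,
# but never for experimental/* branches
branch:
  include: [ master, release/* ]
  exclude: [ experimental/* ]
```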

This section covers the following topics:

  • Configuring pipeline triggers
  • Configuring stage triggers
  • Configuring step triggers
  • Configuring triggers by YAML

Configuring Pipeline Triggers

  1. From the Global view, navigate to the project in which you want to configure a pipeline trigger rule.

  2. Click Resources > Pipelines.

  3. From the repository for which you want to manage trigger rules, select the vertical ⋮ > Edit Config.

  4. Click on Show Advanced Options.

  5. In the Trigger Rules section, configure rules to run or skip the pipeline.

    1. Click Add Rule. In the Value field, enter the name of the branch that triggers the pipeline.

    2. Optional: Add more branches that trigger a build.

  6. Click Done.

Configuring Stage Triggers

  1. From the Global view, navigate to the project in which you want to configure a stage trigger rule.

  2. Click Resources > Pipelines.

  3. From the repository for which you want to manage trigger rules, select the vertical ⋮ > Edit Config.

  4. Find the stage whose trigger rules you want to manage and click the Edit icon for that stage.

  5. Click Show advanced options.

  6. In the Trigger Rules section, configure rules to run or skip the stage.

    1. Click Add Rule.

    2. Choose the Type that triggers the stage and enter a value.

      • Branch: The name of the branch that triggers the stage.
      • Event: The type of event that triggers the stage. Values are: Push, Pull Request, Tag.
  7. Click Save.

Configuring Step Triggers

  1. From the Global view, navigate to the project in which you want to configure a step trigger rule.

  2. Click Resources > Pipelines.

  3. From the repository for which you want to manage trigger rules, select the vertical ⋮ > Edit Config.

  4. Find the step whose trigger rules you want to manage and click the Edit icon for that step.

  5. Click Show advanced options.

  6. In the Trigger Rules section, configure rules to run or skip the step.

    1. Click Add Rule.

    2. Choose the Type that triggers the step and enter a value.

      • Branch: The name of the branch that triggers the step.
      • Event: The type of event that triggers the step. Values are: Push, Pull Request, Tag.
  7. Click Save.

Configuring Triggers by YAML

  # example
  stages:
  - name: Build something
    # Conditions for stages
    when:
      branch: master
      event: [ push, pull_request ]
    # Multiple steps run concurrently
    steps:
    - runScriptConfig:
        image: busybox
        shellScript: date -R
      # Conditions for steps
      when:
        branch: [ master, dev ]
        event: push
  # branch conditions for the pipeline
  branch:
    include: [ master, feature/* ]
    exclude: [ dev ]

Environment Variables

When configuring a pipeline, certain step types allow you to use environment variables to configure the step’s script.

Configuring Environment Variables by UI

  1. From the Global view, navigate to the project in which you want to configure pipelines.

  2. Click Resources > Pipelines.

  3. From the pipeline that you want to edit, select ⋮ > Edit Config.

  4. Within one of the stages, find the step to which you want to add an environment variable and click the Edit icon.

  5. Click Show advanced options.

  6. Click Add Variable, and then enter a key and value in the fields that appear. Add more variables if needed.

  7. Add your environment variable(s) into either the script or file.

  8. Click Save.

Configuring Environment Variables by YAML

  # example
  stages:
  - name: Build something
    steps:
    - runScriptConfig:
        image: busybox
        shellScript: echo ${FIRST_KEY} && echo ${SECOND_KEY}
      env:
        FIRST_KEY: VALUE
        SECOND_KEY: VALUE2

Secrets

If you need to use security-sensitive information in your pipeline scripts (like a password), you can pass them in using Kubernetes secrets.

Prerequisite

Create a secret in the same project as your pipeline, or explicitly in the namespace where pipeline build pods run.

Note: Secret injection is disabled on pull request events.
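The secret itself is an ordinary Kubernetes secret. As a sketch (the secret name, namespace, key, and value here are assumptions for illustration):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-secret           # referenced as sourceName in pipeline YAML
  namespace: my-namespace   # a namespace in the same project (assumed name)
type: Opaque
stringData:
  secret-key: my-password   # referenced as sourceKey in pipeline YAML
```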

Configuring Secrets by UI

  1. From the Global view, navigate to the project in which you want to configure pipelines.

  2. Click Resources > Pipelines.

  3. From the pipeline that you want to edit, select ⋮ > Edit Config.

  4. Within one of the stages, find the step in which you want to use a secret and click the Edit icon.

  5. Click Show advanced options.

  6. Click Add From Secret. Select the secret file that you want to use. Then choose a key. Optionally, you can enter an alias for the key.

  7. Click Save.

Configuring Secrets by YAML

  # example
  stages:
  - name: Build something
    steps:
    - runScriptConfig:
        image: busybox
        shellScript: echo ${ALIAS_ENV}
      # environment variables from project secrets
      envFrom:
      - sourceName: my-secret
        sourceKey: secret-key
        targetKey: ALIAS_ENV

Pipeline Variable Substitution Reference

For your convenience, the following variables are available for your pipeline configuration scripts. During pipeline executions, these variables are replaced by metadata. You can reference them in the form of ${VAR_NAME}.

  • CICD_GIT_REPO_NAME: Repository name (GitHub organization omitted).
  • CICD_GIT_URL: URL of the Git repository.
  • CICD_GIT_COMMIT: Git commit ID being executed.
  • CICD_GIT_BRANCH: Git branch of this event.
  • CICD_GIT_REF: Git reference specification of this event.
  • CICD_GIT_TAG: Git tag name, set on tag event.
  • CICD_EVENT: Event that triggered the build (push, pull_request or tag).
  • CICD_PIPELINE_ID: Rancher ID for the pipeline.
  • CICD_EXECUTION_SEQUENCE: Build number of the pipeline.
  • CICD_EXECUTION_ID: Combination of {CICD_PIPELINE_ID}-{CICD_EXECUTION_SEQUENCE}.
  • CICD_REGISTRY: Address of the Docker registry for the previous publish image step, available in the Kubernetes manifest file of a Deploy YAML step.
  • CICD_IMAGE: Name of the image built in the previous publish image step, available in the Kubernetes manifest file of a Deploy YAML step. It does not contain the image tag.

Example
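A hypothetical pipeline that tags a built image with the commit ID and then deploys it might reference the variables like this (the repository and file names are placeholders):

```yaml
stages:
- name: Publish
  steps:
  - publishImageConfig:
      dockerfilePath: ./Dockerfile
      buildContext: .
      # Tag the image with the Git commit being built
      tag: repo/app:${CICD_GIT_COMMIT}
- name: Deploy
  steps:
  - applyYamlConfig:
      # deployment.yaml can reference ${CICD_IMAGE} and ${CICD_GIT_COMMIT}
      path: ./deployment.yaml
```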

Global Pipeline Execution Settings

After configuring a version control provider, several global options control how pipelines are executed in Rancher. These settings can be edited by selecting Tools > Pipelines in the navigation bar.

Executor Quota

Select the maximum number of pipeline executors. The executor quota decides how many builds can run simultaneously in the project. If the number of triggered builds exceeds the quota, subsequent builds will queue until a vacancy opens. By default, the quota is 2. A value of 0 or less removes the quota limit.

Resource Quota for Executors

Configure compute resources for Jenkins agent containers. When a pipeline execution is triggered, a build pod is dynamically provisioned to run your CI tasks. Under the hood, a build pod consists of one Jenkins agent container and one container for each pipeline step. You can manage compute resources for every container in the pod.

Edit the Memory Reservation, Memory Limit, CPU Reservation or CPU Limit, then click Update Limit and Reservation.

You can also configure compute resources for pipeline-step containers in the .rancher-pipeline.yml file.

In a step, you will provide the following information:

  • CPU Reservation (CpuRequest): CPU request for the container of a pipeline step.
  • CPU Limit (CpuLimit): CPU limit for the container of a pipeline step.
  • Memory Reservation (MemoryRequest): Memory request for the container of a pipeline step.
  • Memory Limit (MemoryLimit): Memory limit for the container of a pipeline step.
  # example
  stages:
  - name: Build something
    steps:
    - runScriptConfig:
        image: busybox
        shellScript: ls
      cpuRequest: 100m
      cpuLimit: 1
      memoryRequest: 100Mi
      memoryLimit: 1Gi
    - publishImageConfig:
        dockerfilePath: ./Dockerfile
        buildContext: .
        tag: repo/app:v1
      cpuRequest: 100m
      cpuLimit: 1
      memoryRequest: 100Mi
      memoryLimit: 1Gi

Note: Rancher sets default compute resources for pipeline steps except for Build and Publish Images and Run Script steps. You can override the default value by specifying compute resources in the same way.

Custom CA

If you want to use a version control provider with a certificate from a custom/internal CA root, the CA root certificates need to be added as part of the version control provider configuration in order for the pipeline build pods to succeed.

  1. Click Edit cacerts.

  2. Paste in the CA root certificates and click Save cacerts.

Result: Pipelines can be used, and new pods will be able to work with the self-signed certificate.

Persistent Data for Pipeline Components

The internal Docker registry and the Minio workloads use ephemeral volumes by default. This default storage works out-of-the-box and makes testing easy, but you lose the build images and build logs if the node running the Docker Registry or Minio fails. In most cases this is fine. If you want build images and logs to survive node failures, you can configure the Docker Registry and Minio to use persistent volumes.

For details on setting up persistent storage for pipelines, refer to this page.

Example rancher-pipeline.yml

An example pipeline configuration file is on this page.