Understanding deployments

The Deployment and DeploymentConfig API objects in OKD provide two similar but different methods for fine-grained management over common user applications. They are composed of the following separate API objects:

  • A Deployment or DeploymentConfig object, either of which describes the desired state of a particular component of the application as a pod template.

  • Deployment objects involve one or more replica sets, which contain a point-in-time record of the state of a deployment as a pod template. Similarly, DeploymentConfig objects involve one or more replication controllers, which preceded replica sets.

  • One or more pods, which represent an instance of a particular version of an application.

Use Deployment objects unless you need a specific feature or behavior provided by DeploymentConfig objects.

As of OKD 4.14, DeploymentConfig objects are deprecated. DeploymentConfig objects are still supported, but are not recommended for new installations. Only security-related and critical issues will be fixed.

Instead, use Deployment objects or another alternative to provide declarative updates for pods.

Building blocks of a deployment

Deployments and deployment configs are enabled by the use of native Kubernetes API objects ReplicaSet and ReplicationController, respectively, as their building blocks.

Users do not have to manipulate replica sets, replication controllers, or pods owned by Deployment or DeploymentConfig objects. The deployment systems ensure changes are propagated appropriately.

If the existing deployment strategies are not suited for your use case and you must run manual steps during the lifecycle of your deployment, then you should consider creating a custom deployment strategy.

The following sections provide further details on these objects.

Replica sets

A ReplicaSet is a native Kubernetes API object that ensures a specified number of pod replicas are running at any given time.

Only use replica sets if you require custom update orchestration or do not require updates at all; otherwise, use deployments. Replica sets can be used independently, but deployments use them to orchestrate pod creation, deletion, and updates. Deployments manage their replica sets automatically and provide declarative updates to pods, so you do not have to manually manage the replica sets that they create.

The following is an example ReplicaSet definition:

  apiVersion: apps/v1
  kind: ReplicaSet
  metadata:
    name: frontend-1
    labels:
      tier: frontend
  spec:
    replicas: 3
    selector: (1)
      matchLabels: (2)
        tier: frontend
      matchExpressions: (3)
        - {key: tier, operator: In, values: [frontend]}
    template:
      metadata:
        labels:
          tier: frontend
      spec:
        containers:
        - image: openshift/hello-openshift
          name: helloworld
          ports:
          - containerPort: 8080
            protocol: TCP
        restartPolicy: Always
(1) A label query over a set of resources. The results of matchLabels and matchExpressions are logically conjoined.
(2) Equality-based selector to specify resources with labels that match the selector.
(3) Set-based selector to filter keys. This selects all resources with a key equal to tier and a value equal to frontend.

Replication controllers

Similar to a replica set, a replication controller ensures that a specified number of replicas of a pod are running at all times. If pods exit or are deleted, the replication controller instantiates more up to the defined number. Likewise, if there are more running than desired, it deletes as many as necessary to match the defined amount. The difference between a replica set and a replication controller is that a replica set supports set-based selector requirements whereas a replication controller only supports equality-based selector requirements.
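
For example, an equality-based selector can only match an exact key-value pair, whereas a set-based selector can match a key against a set of values. A quick illustration using label queries (the label keys and values here are placeholders):

  $ oc get pods -l name=frontend                   # equality-based: name equals frontend
  $ oc get pods -l 'tier in (frontend,backend)'    # set-based: tier is frontend or backend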

A replication controller configuration consists of:

  • The number of replicas desired, which can be adjusted at run time.

  • A Pod definition to use when creating a replicated pod.

  • A selector for identifying managed pods.

A selector is a set of labels assigned to the pods that are managed by the replication controller. These labels are included in the Pod definition that the replication controller instantiates. The replication controller uses the selector to determine how many instances of the pod are already running in order to adjust as needed.

The replication controller does not perform auto-scaling based on load or traffic, as it does not track either. Rather, this requires its replica count to be adjusted by an external auto-scaler.
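
A HorizontalPodAutoscaler object can play the role of that external auto-scaler. The following is a minimal sketch, assuming a replication controller named frontend-1 whose pods set CPU resource requests; the name and thresholds are placeholders:

  apiVersion: autoscaling/v2
  kind: HorizontalPodAutoscaler
  metadata:
    name: frontend
  spec:
    scaleTargetRef:
      apiVersion: v1
      kind: ReplicationController
      name: frontend-1             # the workload to scale (placeholder)
    minReplicas: 1
    maxReplicas: 10
    metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # target average CPU utilization across pods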

Use a DeploymentConfig to create a replication controller instead of creating replication controllers directly.

If you require custom orchestration or do not require updates, use replica sets instead of replication controllers.

The following is an example definition of a replication controller:

  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: frontend-1
  spec:
    replicas: 1 (1)
    selector: (2)
      name: frontend
    template: (3)
      metadata:
        labels: (4)
          name: frontend (5)
      spec:
        containers:
        - image: openshift/hello-openshift
          name: helloworld
          ports:
          - containerPort: 8080
            protocol: TCP
        restartPolicy: Always
(1) The number of copies of the pod to run.
(2) The label selector of the pod to run.
(3) A template for the pod the controller creates.
(4) Labels on the pod should include those from the label selector.
(5) The maximum name length after expanding any parameters is 63 characters.

Deployments

Kubernetes provides a first-class, native API object type in OKD called Deployment. Deployment objects describe the desired state of a particular component of an application as a pod template. Deployments create replica sets, which orchestrate pod lifecycles.

For example, the following deployment definition creates a replica set to bring up one hello-openshift pod:

Deployment definition

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: hello-openshift
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: hello-openshift
    template:
      metadata:
        labels:
          app: hello-openshift
      spec:
        containers:
        - name: hello-openshift
          image: openshift/hello-openshift:latest
          ports:
          - containerPort: 80
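
Assuming the definition above is saved to a file (the filename below is illustrative), you can create the deployment and then watch the rollout progress:

  $ oc apply -f hello-openshift-deployment.yaml
  $ oc rollout status deployment/hello-openshift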

DeploymentConfig objects

As of OKD 4.14, DeploymentConfig objects are deprecated. DeploymentConfig objects are still supported, but are not recommended for new installations. Only security-related and critical issues will be fixed.

Instead, use Deployment objects or another alternative to provide declarative updates for pods.

Building on replication controllers, OKD adds expanded support for the software development and deployment lifecycle with the concept of DeploymentConfig objects. In the simplest case, a DeploymentConfig object creates a new replication controller and lets it start up pods.

However, OKD deployments from DeploymentConfig objects also provide the ability to transition from an existing deployment of an image to a new one, and to define hooks to be run before or after creating the replication controller.

The DeploymentConfig deployment system provides the following capabilities:

  • A DeploymentConfig object, which is a template for running applications.

  • Triggers that drive automated deployments in response to events.

  • User-customizable deployment strategies to transition from the previous version to the new version. A strategy runs inside a pod commonly referred to as the deployment process.

  • A set of hooks (lifecycle hooks) for executing custom behavior at different points during the lifecycle of a deployment, as sketched after this list.

  • Versioning of your application to support rollbacks either manually or automatically in case of deployment failure.

  • Manual replication scaling and autoscaling.
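
As an illustration of a lifecycle hook, the following is a minimal sketch of a pre hook attached to the Rolling strategy of a DeploymentConfig object; the container name, command, and failure policy shown here are placeholders:

  strategy:
    type: Rolling
    rollingParams:
      pre:
        failurePolicy: Abort          # abort the rollout if the hook fails (placeholder policy)
        execNewPod:
          containerName: helloworld   # container whose image and environment the hook pod inherits
          command: [ "/usr/bin/echo", "running pre-deployment hook" ]

The hook runs in a new pod derived from the specified container before the new replication controller is scaled up.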

When you create a DeploymentConfig object, a replication controller is created representing the DeploymentConfig object’s pod template. If the deployment changes, a new replication controller is created with the latest pod template, and a deployment process runs to scale down the old replication controller and scale up the new one.

Instances of your application are automatically added and removed from both service load balancers and routers as they are created. As long as your application supports graceful shutdown when it receives the TERM signal, you can ensure that running user connections are given a chance to complete normally.

The OKD DeploymentConfig object defines the following details:

  1. The elements of a ReplicationController definition.

  2. Triggers for creating a new deployment automatically.

  3. The strategy for transitioning between deployments.

  4. Lifecycle hooks.

Each time a deployment is triggered, whether manually or automatically, a deployer pod manages the deployment (including scaling down the old replication controller, scaling up the new one, and running hooks). The deployer pod remains for an indefinite amount of time after it completes the deployment in order to retain its deployment logs. When a deployment is superseded by another, the previous replication controller is retained to enable easy rollback if needed.
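
Because the previous replication controller is retained, you can roll back to the previous revision; for example (the DeploymentConfig name here is illustrative):

  $ oc rollout undo dc/frontend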

Example DeploymentConfig definition

  apiVersion: apps.openshift.io/v1
  kind: DeploymentConfig
  metadata:
    name: frontend
  spec:
    replicas: 5
    selector:
      name: frontend
    template: { ... }
    triggers:
    - type: ConfigChange (1)
    - imageChangeParams:
        automatic: true
        containerNames:
        - helloworld
        from:
          kind: ImageStreamTag
          name: hello-openshift:latest
      type: ImageChange (2)
    strategy:
      type: Rolling (3)
(1) A configuration change trigger results in a new replication controller whenever changes are detected in the pod template of the deployment configuration.
(2) An image change trigger causes a new deployment to be created each time a new version of the backing image is available in the named image stream.
(3) The default Rolling strategy makes a downtime-free transition between deployments.

Comparing Deployment and DeploymentConfig objects

Both Kubernetes Deployment objects and OKD-provided DeploymentConfig objects are supported in OKD; however, it is recommended to use Deployment objects unless you need a specific feature or behavior provided by DeploymentConfig objects.

The following sections go into more detail on the differences between the two object types to further help you decide which type to use.

As of OKD 4.14, DeploymentConfig objects are deprecated. DeploymentConfig objects are still supported, but are not recommended for new installations. Only security-related and critical issues will be fixed.

Instead, use Deployment objects or another alternative to provide declarative updates for pods.

Design

One important difference between Deployment and DeploymentConfig objects is the properties of the CAP theorem that each design has chosen for the rollout process. DeploymentConfig objects prefer consistency, whereas Deployment objects take availability over consistency.

For DeploymentConfig objects, if a node running a deployer pod goes down, the pod is not replaced. The process waits until the node comes back online or is manually deleted; manually deleting the node also deletes the corresponding pod. This means that you cannot delete the pod to unstick the rollout, as the kubelet is responsible for deleting the associated pod.

Deployment rollouts, however, are driven by a controller manager. The controller manager runs in high-availability mode on masters and uses leader election algorithms to value availability over consistency. During a failure it is possible for other masters to act on the same deployment at the same time, but this issue is reconciled shortly after the failure occurs.

Deployment-specific features

Rollover

The deployment process for Deployment objects is driven by a controller loop, in contrast to DeploymentConfig objects, which use deployer pods for every new rollout. This means that a Deployment object can have as many active replica sets as needed, and eventually the deployment controller scales down all old replica sets and scales up the newest one.

DeploymentConfig objects can have at most one deployer pod running; otherwise, multiple deployers would conflict when trying to scale up what they each think should be the newest replication controller. Because of this, only two replication controllers can be active at any point in time. Ultimately, this results in faster rollouts for Deployment objects.

Proportional scaling

Because the deployment controller is the sole source of truth for the sizes of new and old replica sets owned by a Deployment object, it can scale ongoing rollouts. Additional replicas are distributed proportionally based on the size of each replica set.
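
Scaling uses the standard scale interface, so you can adjust the replica count even mid-rollout; for example (the deployment name here is illustrative):

  $ oc scale deployment/hello-openshift --replicas=5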

DeploymentConfig objects cannot be scaled while a rollout is ongoing, because the controller would conflict with the deployer process over the size of the new replication controller.

Pausing mid-rollout

Deployments can be paused at any point in time, meaning you can also pause ongoing rollouts. However, you currently cannot pause deployer pods; if you try to pause a deployment configuration in the middle of a rollout, the deployer process is not affected and continues until it finishes.

DeploymentConfig object-specific features

Automatic rollbacks

Currently, deployments do not support automatically rolling back to the last successfully deployed replica set in case of a failure.
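
You can, however, roll back a deployment manually to its previous revision; for example:

  $ oc rollout undo deployment/<name>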

Triggers

Deployments have an implicit config change trigger in that every change in the pod template of a deployment automatically triggers a new rollout. If you do not want new rollouts on pod template changes, pause the deployment:

  $ oc rollout pause deployments/<name>
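
To start new rollouts again, resume the deployment:

  $ oc rollout resume deployments/<name>
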
Lifecycle hooks

Deployments do not yet support any lifecycle hooks.

Custom strategies

Deployments do not support user-specified custom deployment strategies.
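
For comparison, a user-specified custom strategy on a DeploymentConfig object delegates the rollout to a user-supplied image. The following is a minimal sketch, in which the image, command, and environment values are placeholders:

  strategy:
    type: Custom
    customParams:
      image: organization/strategy    # image containing the custom deployment logic (placeholder)
      command: [ "command", "arg1" ]  # entry point run inside the deployer pod
      environment:
      - name: ENV_1
        value: VALUE_1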