Deployments

Replication controllers

A replication controller ensures that a specified number of replicas of a pod are running at all times. If pods exit or are deleted, the replication controller instantiates more, up to the defined number. Likewise, if more are running than desired, it deletes as many as necessary to match the defined count.

A replication controller configuration consists of:

  1. The number of replicas desired (which can be adjusted at runtime).

  2. A pod definition to use when creating a replicated pod.

  3. A selector for identifying managed pods.

A selector is a set of labels assigned to the pods that are managed by the replication controller. These labels are included in the pod definition that the replication controller instantiates. The replication controller uses the selector to determine how many instances of the pod are already running in order to adjust as needed.

The replication controller does not perform auto-scaling based on load or traffic, because it does not track either. If auto-scaling is required, the replica count must be adjusted by an external auto-scaler.
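The converge-to-desired-count behavior described above can be sketched in a few lines of Python. This is a simplified illustration of the reconciliation idea, not the actual controller implementation; the function name and return shape are invented for this example:

```python
def reconcile(desired: int, running: list) -> dict:
    """Return the actions a replication controller would take to
    converge the running pod count on the desired replica count.
    (Illustrative sketch only, not real controller code.)"""
    diff = desired - len(running)
    if diff > 0:
        # Too few pods: instantiate more from the pod template.
        return {"create": diff, "delete": []}
    # Too many pods (or exactly enough): delete any surplus.
    return {"create": 0, "delete": running[:abs(diff)]}

# Example: 3 desired, 5 running -> delete 2 surplus pods
print(reconcile(3, ["pod-a", "pod-b", "pod-c", "pod-d", "pod-e"]))
# -> {'create': 0, 'delete': ['pod-a', 'pod-b']}
```

The real controller runs this comparison continuously, so the pod count converges back to the desired number after any disruption.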

A replication controller is a core Kubernetes object called **ReplicationController**.

The following is an example **ReplicationController** definition:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend-1
spec:
  replicas: 1  # (1)
  selector:    # (2)
    name: frontend
  template:    # (3)
    metadata:
      labels:  # (4)
        name: frontend  # (5)
    spec:
      containers:
      - image: openshift/hello-openshift
        name: helloworld
        ports:
        - containerPort: 8080
          protocol: TCP
      restartPolicy: Always
```

1. The number of copies of the pod to run.
2. The label selector of the pod to run.
3. A template for the pod the controller creates.
4. Labels on the pod should include those from the label selector.
5. The maximum name length after expanding any parameters is 63 characters.

Replica sets

Similar to a replication controller, a replica set ensures that a specified number of pod replicas are running at any given time. The difference between a replica set and a replication controller is that a replica set supports set-based selector requirements whereas a replication controller only supports equality-based selector requirements.

Use replica sets only if you require custom update orchestration, or do not require updates at all; otherwise, use deployments. Replica sets can be used independently, but deployments use them to orchestrate pod creation, deletion, and updates. Deployments manage their replica sets automatically and provide declarative updates to pods, so you do not have to manually manage the replica sets that they create.

A replica set is a core Kubernetes object called **ReplicaSet**.

The following is an example **ReplicaSet** definition:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend-1
  labels:
    tier: frontend
spec:
  replicas: 3
  selector:           # (1)
    matchLabels:      # (2)
      tier: frontend
    matchExpressions: # (3)
    - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - image: openshift/hello-openshift
        name: helloworld
        ports:
        - containerPort: 8080
          protocol: TCP
      restartPolicy: Always
```

1. A label query over a set of resources. The results of matchLabels and matchExpressions are logically conjoined.
2. Equality-based selector to specify resources with labels that match the selector.
3. Set-based selector to filter keys. This selects all resources with key equal to tier and value equal to frontend.
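The conjunction of matchLabels and matchExpressions can be illustrated with a small Python sketch. This is a simplified model, not the real selector implementation: only the In operator is handled here, whereas actual set-based selectors also support NotIn, Exists, and DoesNotExist:

```python
def matches(pod_labels: dict, match_labels: dict, match_expressions: list) -> bool:
    """True if the pod satisfies BOTH the equality-based and the
    set-based requirements (the results are logically ANDed).
    Simplified sketch: only the In operator is modeled."""
    # Equality-based: every key/value in matchLabels must match exactly.
    if any(pod_labels.get(k) != v for k, v in match_labels.items()):
        return False
    # Set-based: the pod's value for the key must be in the allowed set.
    for expr in match_expressions:
        if expr["operator"] == "In":
            if pod_labels.get(expr["key"]) not in expr["values"]:
                return False
    return True

selector = {"matchLabels": {"tier": "frontend"},
            "matchExpressions": [{"key": "tier", "operator": "In",
                                  "values": ["frontend"]}]}
print(matches({"tier": "frontend"},
              selector["matchLabels"], selector["matchExpressions"]))  # True
```

A pod labeled `tier: backend` would fail the matchLabels check and be ignored by this replica set.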

Jobs

A job is similar to a replication controller in that its purpose is to create pods for specified reasons. The difference is that replication controllers are designed for pods that run continuously, whereas jobs are for pods that run once to completion. A job tracks successful completions, and when the specified number of completions has been reached, the job itself is complete.

The following example computes π to 2000 places, prints it out, then completes:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  manualSelector: true  # required when specifying the selector manually
  selector:
    matchLabels:
      app: pi
  template:
    metadata:
      name: pi
      labels:
        app: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
```
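The number of completions a job waits for, and how many pods it runs concurrently, can be tuned with the spec.completions and spec.parallelism fields. The fragment below shows illustrative values only:

```yaml
spec:
  completions: 5   # the job is complete after 5 successful pod completions
  parallelism: 2   # run at most 2 pods concurrently
```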

See the Jobs topic for more information on how to use jobs.

Deployments and Deployment Configurations

Building on replication controllers, OKD adds expanded support for the software development and deployment lifecycle with the concept of deployments. In the simplest case, a deployment just creates a new replication controller and lets it start up pods. However, OKD deployments also provide the ability to transition from an existing deployment of an image to a new one and also define hooks to be run before or after creating the replication controller.

The OKD **DeploymentConfig** object defines the following details of a deployment:

  1. The elements of a **ReplicationController** definition.

  2. Triggers for creating a new deployment automatically.

  3. The strategy for transitioning between deployments.

  4. Life cycle hooks.

Each time a deployment is triggered, whether manually or automatically, a deployer pod manages the deployment (including scaling down the old replication controller, scaling up the new one, and running hooks). The deployer pod remains indefinitely after completing the deployment so that its deployment logs are retained. When a deployment is superseded by another, the previous replication controller is retained to enable easy rollback if needed.

For detailed instructions on how to create and interact with deployments, refer to Deployments.

Here is an example **DeploymentConfig** definition with some omissions and callouts:

```yaml
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: frontend
spec:
  replicas: 5
  selector:
    name: frontend
  template: { ... }
  triggers:
  - type: ConfigChange  # (1)
  - imageChangeParams:
      automatic: true
      containerNames:
      - helloworld
      from:
        kind: ImageStreamTag
        name: hello-openshift:latest
    type: ImageChange   # (2)
  strategy:
    type: Rolling       # (3)
```

1. A ConfigChange trigger causes a new deployment to be created any time the replication controller template changes.
2. An ImageChange trigger causes a new deployment to be created each time a new version of the backing image is available in the named image stream.
3. The default Rolling strategy makes a downtime-free transition between deployments.
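The Rolling strategy can be tuned through the rollingParams field of the strategy. The fragment below shows a few common parameters with illustrative values; see the deployment strategies documentation for the full set:

```yaml
strategy:
  type: Rolling
  rollingParams:
    maxSurge: 25%        # extra pods allowed above the replica count during the rollout
    maxUnavailable: 25%  # pods that may be unavailable during the rollout
    timeoutSeconds: 600  # fail the deployment if it takes longer than this
```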