Upgrade An Existing Cluster to CRDs

Upgrading to consul-helm versions >= 0.30.0 requires changes if you use any of the following features:

Central Config Enabled

If you were previously setting centralConfig.enabled to false:

  connectInject:
    centralConfig:
      enabled: false

You must instead use server.extraConfig and client.extraConfig:

  client:
    extraConfig: |
      {"enable_central_service_config": false}
  server:
    extraConfig: |
      {"enable_central_service_config": false}

If you were previously setting it to true, it now defaults to true, so no changes are required, but you can remove it from your config if you desire.

Default Protocol

If you were previously setting:

  connectInject:
    centralConfig:
      defaultProtocol: 'http' # or any value

Now you must use custom resources to manage the protocol for new and existing services:

  1. To upgrade, first ensure you’re running Consul >= 1.9.0. See Consul Version Upgrade for more information on how to upgrade Consul versions.

    This version is required to support custom resources. A quick way to check the version you’re running is shown at the end of this section.

  2. Next, modify your Helm values:

    1. Remove the defaultProtocol config. This won’t affect existing services.
    2. Set:

      controller:
        enabled: true
  3. Now you can upgrade your Helm chart to the latest version with the new Helm values.

  4. From now on, any new service will require a ServiceDefaults resource to set its protocol:

    apiVersion: consul.hashicorp.com/v1alpha1
    kind: ServiceDefaults
    metadata:
      name: my-service-name
    spec:
      protocol: 'http'
  5. Existing services will maintain their previously set protocol. If you wish to change that protocol, you must migrate that service’s service-defaults config entry to a ServiceDefaults resource. See Migrating Config Entries.

Note: This setting was removed because it didn’t support changing the protocol after a service was first run and because it didn’t work in secondary datacenters.
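
As mentioned in step 1, you can check which Consul version your servers are running before upgrading. A minimal sketch, assuming a server pod named consul-server-0 in the current namespace:

  $ kubectl exec consul-server-0 -- consul version

The reported version should be 1.9.0 or later before you proceed.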

Proxy Defaults

If you were previously setting:

  connectInject:
    centralConfig:
      proxyDefaults: |
        {
          "key": "value" // or any values
        }

You will need to perform the following steps to upgrade:

  1. You must remove the setting from your Helm values. This won’t have any effect on your existing cluster because this config is only read when the cluster is first created.

  2. You can then upgrade the Helm chart.

  3. If you later wish to change any of the proxy defaults settings, you will need to follow the Migrating Config Entries instructions for your proxy-defaults config entry. A sketch of how to read the existing entry is shown at the end of this section.

    This will require Consul >= 1.9.0.

Note: This setting was removed because it couldn’t be changed after initial installation.
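
If you do later migrate, you can first read the existing proxy-defaults entry from Consul. A minimal sketch, assuming a server pod named consul-server-0 (add -token if ACLs are enabled):

  $ kubectl exec consul-server-0 -- consul config read -kind proxy-defaults -name global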

Mesh Gateway Mode

If you were previously setting:

  meshGateway:
    globalMode: 'local' # or any value

You will need to perform the following steps to upgrade:

  1. You must remove the setting from your Helm values. This won’t have any effect on your existing cluster because this config is only read when the cluster is first created.

  2. You can then upgrade the Helm chart.

  3. If you later wish to change the mode or any other setting in proxy-defaults, you will need to follow the Migrating Config Entries instructions to migrate your proxy-defaults config entry to a ProxyDefaults resource. A sketch of the resulting resource is shown at the end of this section.

    This will require Consul >= 1.9.0.

Note: This setting was removed because it couldn’t be changed after initial installation.
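
For reference, once migrated, the globalMode setting above would be expressed as a ProxyDefaults resource roughly like the following; see Migrating Config Entries below for the full procedure, including the required migrate-entry annotation:

  apiVersion: consul.hashicorp.com/v1alpha1
  kind: ProxyDefaults
  metadata:
    name: global
  spec:
    meshGateway:
      mode: local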

connect-service-protocol Annotation

If any of your Connect services had the consul.hashicorp.com/connect-service-protocol annotation set, e.g.

  apiVersion: apps/v1
  kind: Deployment
  ...
  spec:
    template:
      metadata:
        annotations:
          "consul.hashicorp.com/connect-inject": "true"
          "consul.hashicorp.com/connect-service-protocol": "http"
  ...

You will need to perform the following steps to upgrade:

  1. Ensure you’re running Consul >= 1.9.0. See Consul Version Upgrade for more information on how to upgrade Consul versions.

    This version is required to support custom resources.

  2. Next, remove this annotation from existing deployments. This will have no effect on the deployments because the annotation was only used when the service was first created. A rough way to find deployments that still set the annotation is shown at the end of this section.

  3. Modify your Helm values and add:

    controller:
      enabled: true
  4. Now you can upgrade your Helm chart to the latest version.

  5. From now on, any new service will require a ServiceDefaults resource to set its protocol:

    apiVersion: consul.hashicorp.com/v1alpha1
    kind: ServiceDefaults
    metadata:
      name: my-service-name
    spec:
      protocol: 'http'
  6. Existing services will maintain their previously set protocol. If you wish to change that protocol, you must migrate that service’s service-defaults config entry to a ServiceDefaults resource. See Migrating Config Entries.

Note: The annotation was removed because it didn’t support changing the protocol and it wasn’t supported in secondary datacenters.
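
As referenced in step 2, one rough way to check whether any deployments still set the old annotation is to search your deployment manifests for it. A sketch, assuming you have access to all relevant namespaces:

  $ kubectl get deployments --all-namespaces -o yaml | grep 'connect-service-protocol'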

Migrating Config Entries

A config entry that already exists in Consul must be migrated into a Kubernetes custom resource in order to manage it from Kubernetes:

  1. Determine the kind and name of the config entry. For example, the protocol would be set by a config entry with kind: service-defaults and name equal to the name of the service.

    In another example, a proxy-defaults config has kind: proxy-defaults and name: global.

  2. Once you’ve determined the kind and name, query Consul to get its contents:

    $ consul config read -kind <kind> -name <name>

    This will require kubectl exec’ing into a Consul server or client pod. If you’re using ACLs, you will also need an ACL token passed via the -token flag.

    For example:

    $ kubectl exec consul-server-0 -- consul config read -name foo -kind service-defaults
    {
      "Kind": "service-defaults",
      "Name": "foo",
      "Protocol": "http",
      "MeshGateway": {},
      "Expose": {},
      "CreateIndex": 60,
      "ModifyIndex": 60
    }
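
    If ACLs are enabled, the same read with a token would look something like this (the token value is a placeholder):

    $ kubectl exec consul-server-0 -- consul config read -name foo -kind service-defaults -token <your-acl-token>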
  3. Now we’re ready to construct a Kubernetes resource for the config entry.

    It will look something like:

    apiVersion: consul.hashicorp.com/v1alpha1
    kind: ServiceDefaults
    metadata:
      name: foo
      annotations:
        'consul.hashicorp.com/migrate-entry': 'true'
    spec:
      protocol: 'http'
    1. The apiVersion will always be consul.hashicorp.com/v1alpha1.

    2. The kind will be the CamelCase version of the Consul kind, e.g. proxy-defaults becomes ProxyDefaults.

    3. metadata.name will be the name of the config entry.

    4. metadata.annotations will contain the "consul.hashicorp.com/migrate-entry": "true" annotation.

    5. The namespace should be whatever namespace the service is deployed in. For ProxyDefaults, we recommend the namespace that Consul is deployed in.

    6. The contents of spec will be a transformation from the JSON keys to camelCase YAML keys, e.g. Protocol becomes protocol and MeshGateway becomes meshGateway.

      The following keys can be ignored: CreateIndex, ModifyIndex and any key that has an empty object, e.g. "Expose": {}.

      For example:

      {
        "Kind": "service-defaults",
        "Name": "foo",
        "Protocol": "http",
        "MeshGateway": {},
        "Expose": {},
        "CreateIndex": 60,
        "ModifyIndex": 60
      }

      Becomes:

      apiVersion: consul.hashicorp.com/v1alpha1
      kind: ServiceDefaults
      metadata:
        name: foo
        annotations:
          'consul.hashicorp.com/migrate-entry': 'true'
      spec:
        protocol: 'http'

      And

      {
        "Kind": "proxy-defaults",
        "Name": "global",
        "MeshGateway": {
          "Mode": "local"
        },
        "Config": {
          "local_connect_timeout_ms": 1000,
          "handshake_timeout_ms": 10000
        },
        "CreateIndex": 60,
        "ModifyIndex": 60
      }

      Becomes:

      apiVersion: consul.hashicorp.com/v1alpha1
      kind: ProxyDefaults
      metadata:
        name: global
        annotations:
          'consul.hashicorp.com/migrate-entry': 'true'
      spec:
        meshGateway:
          mode: local
        config:
          # Note that anything under config for ProxyDefaults will use the exact
          # same keys.
          local_connect_timeout_ms: 1000
          handshake_timeout_ms: 10000
  4. Run kubectl apply to apply the Kubernetes resource.
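
    For example, assuming the resource above was saved to a file named service-defaults-foo.yaml (the filename is arbitrary):

    $ kubectl apply -f service-defaults-foo.yaml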

  5. Next, check that it synced successfully:

    $ kubectl get servicedefaults foo
    NAME   SYNCED   AGE
    foo    True     1s
  6. If its SYNCED status is True, then the migration for this config entry was successful.

  7. If its SYNCED status is False, use kubectl describe to view the reason syncing failed:

    $ kubectl describe servicedefaults foo
    ...
    Status:
      Conditions:
        Last Transition Time:  2021-01-12T21:03:29Z
        Message:               migration failed: Kubernetes resource does not match existing Consul config entry: consul={...}, kube={...}
        Reason:                MigrationFailedError
        Status:                False
        Type:                  Synced

    The most likely reason is that the contents of the Kubernetes resource don’t match the Consul resource. Make changes to the Kubernetes resource to match the Consul resource (ignoring the CreateIndex, ModifyIndex and Meta keys).

  8. Once the SYNCED status is True, you can make changes to the resource and they will get synced to Consul.
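
    For example, after editing the resource you can confirm it re-synced and that Consul received the change (a sketch, assuming the foo service and server pod from the earlier examples):

    # Edit the resource, e.g. change spec.protocol, then verify.
    $ kubectl edit servicedefaults foo
    $ kubectl get servicedefaults foo
    $ kubectl exec consul-server-0 -- consul config read -kind service-defaults -name foo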