Upgrade Consul API gateway for Kubernetes

Since Consul v1.15, the Consul API gateway is a native feature within the Consul binary and is installed during the normal Consul installation process. Since Consul on Kubernetes v1.2 (Consul v1.16), the CRDs necessary for using the Consul API gateway for Kubernetes are also included. You can install Consul v1.16 using the Consul Helm chart v1.2 and later. Refer to Install API gateway for Kubernetes for additional information.

Introduction

Because the Consul API gateway is released as part of Consul, it no longer has an independent version number. Instead, the API gateway inherits the version number of the Consul binary. Refer to the release notes for additional information.

To begin using the native API gateway, complete one of the following upgrade paths:

Upgrade from Consul on Kubernetes v1.1.x

  1. Complete the instructions for upgrading to the native Consul API gateway.

Upgrade from v0.4.x - v0.5.x

  1. Complete the standard upgrade instructions.
  2. Complete the instructions for upgrading to the native Consul API gateway.

Upgrade from v0.3.x

  1. Complete the instructions for upgrading to v0.4.0.
  2. Complete the standard upgrade instructions.
  3. Complete the instructions for upgrading to the native Consul API gateway.

Upgrade from v0.2.x

  1. Complete the instructions for upgrading to v0.3.0.
  2. Complete the instructions for upgrading to v0.4.0.
  3. Complete the standard upgrade instructions.
  4. Complete the instructions for upgrading to the native Consul API gateway.

Upgrade from v0.1.x

  1. Complete the instructions for upgrading to v0.2.0.
  2. Complete the instructions for upgrading to v0.3.0.
  3. Complete the instructions for upgrading to v0.4.0.
  4. Complete the standard upgrade instructions.
  5. Complete the instructions for upgrading to the native Consul API gateway.

Upgrade to native Consul API gateway

You must begin the upgrade procedure with the API gateway running on Consul on Kubernetes v1.1. If you currently use a version of Consul on Kubernetes older than v1.1, complete the necessary stages of the upgrade path to reach v1.1 before you begin upgrading to the native API gateway. Refer to the Introduction for an overview of the upgrade paths.

Consul-managed CRDs

If you are able to tolerate downtime for your applications, you should delete previously installed CRDs and allow Consul to install and manage them for future updates. The amount of downtime depends on how quickly you are able to install the new version of Consul. If you are unable to tolerate any downtime, refer to Self-managed CRDs for instructions on how to upgrade without downtime.

  1. Run the kubectl delete command and reference the kustomize directory to delete the existing CRDs. The following example deletes the CRDs that were installed with API gateway v0.5.1:

    $ kubectl delete --kustomize="github.com/hashicorp/consul-api-gateway/config/crd?ref=v0.5.1"
  2. Issue the following command to use the API gateway packaged in Consul. Because Consul no longer detects an external CRD, it installs the API gateway packaged with Consul.

    $ consul-k8s install -config-file values.yaml
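
    The contents of values.yaml are not shown in this step. The following is a minimal sketch; the image tags and the apiGateway block are assumptions carried over from the file shown in step 5:

    values.yaml

    global:
      image: hashicorp/consul:1.15
      imageK8S: hashicorp/consul-k8s-control-plane:1.1
    apiGateway:
      enabled: true
      image: hashicorp/consul-api-gateway:0.5.4
      managedGatewayClass:
        enabled: true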
  3. Create ServiceIntentions allowing Gateways to communicate with any backend services that they route to. Refer to Service intentions configuration entry reference for additional information.
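
    A minimal sketch of such an intention follows. The gateway name example-gateway and backend service name web-backend are assumptions reused from examples elsewhere in this guide; substitute the names registered in your cluster:

    apiVersion: consul.hashicorp.com/v1alpha1
    kind: ServiceIntentions
    metadata:
      name: web-backend
    spec:
      destination:
        name: web-backend
      sources:
        - name: example-gateway
          action: allow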

  4. Change any existing Gateways to reference the new GatewayClass consul. Refer to gatewayClass for additional information.
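
    For example, a Gateway that previously referenced the consul-api-gateway class would be updated to reference consul instead. A sketch, reusing the gateway name and namespace from examples elsewhere in this guide:

    apiVersion: gateway.networking.k8s.io/v1beta1
    kind: Gateway
    metadata:
      name: example-gateway
      namespace: gateway-namespace
    spec:
      gatewayClassName: consul
      listeners:
        ...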

  5. After updating all of your gateway configurations to use the new controller, you can remove the apiGateway block from the Helm chart and upgrade your Consul cluster. This completely removes the old gateway controller.

    values.yaml

    global:
      image: hashicorp/consul:1.15
      imageK8S: hashicorp/consul-k8s-control-plane:1.1
    - apiGateway:
    -   enabled: true
    -   image: hashicorp/consul-api-gateway:0.5.4
    -   managedGatewayClass:
    -     enabled: true

    $ consul-k8s upgrade -config-file values.yaml

Self-managed CRDs

Note

This upgrade method uses connectInject.apiGateway.manageExternalCRDs, which was introduced in Consul on Kubernetes v1.2. As a result, you must be on at least Consul on Kubernetes v1.2 for this upgrade method.

If you are unable to tolerate any downtime, you can complete the following steps to upgrade to the native Consul API gateway. If you choose this upgrade option, you must continue to manually install the CRDs necessary for operating the API gateway.

  1. Create a Helm values file that installs the version of Consul API gateway that ships with Consul and disables externally-managed CRDs:

    values.yaml

    global:
      image: hashicorp/consul:1.16
      imageK8S: hashicorp/consul-k8s-control-plane:1.2
    connectInject:
      apiGateway:
        manageExternalCRDs: false
    apiGateway:
      enabled: true
      image: hashicorp/consul-api-gateway:0.5.4
      managedGatewayClass:
        enabled: true

    You must set connectInject.apiGateway.manageExternalCRDs to false. If you have externally managed CRDs from a legacy installation and do not set this field, the upgrade returns an error because Helm attempts to install CRDs that already exist.

  2. Issue the following command to install the new version of the API gateway and disable externally-managed CRDs:

    $ consul-k8s install -config-file values.yaml
  3. Create ServiceIntentions allowing Gateways to communicate with any backend services that they route to. Refer to Service intentions configuration entry reference for additional information.

  4. Change any existing Gateways to reference the new GatewayClass consul. Refer to gatewayClass for additional information.

  5. After updating all of your gateway configurations to use the new controller, you can remove the apiGateway block from the Helm chart and upgrade your Consul cluster. This completely removes the old gateway controller.

    values.yaml

    global:
      image: hashicorp/consul:1.16
      imageK8S: hashicorp/consul-k8s-control-plane:1.2
    connectInject:
      apiGateway:
        manageExternalCRDs: false
    - apiGateway:
    -   enabled: true
    -   image: hashicorp/consul-api-gateway:0.5.4
    -   managedGatewayClass:
    -     enabled: true

    $ consul-k8s upgrade -config-file values.yaml

Upgrade to v0.4.0

Consul API Gateway v0.4.0 adds support for Gateway API v0.5.0 and the following resources:

  • The graduated v1beta1 GatewayClass, Gateway and HTTPRoute resources.

  • The ReferenceGrant resource, which replaces the identical ReferencePolicy resource.

Consul API Gateway v0.4.0 is backward-compatible with existing ReferencePolicy resources, but we will remove support for ReferencePolicy resources in a future release. We recommend that you migrate to ReferenceGrant after upgrading.

Requirements

Ensure that the following requirements are met prior to upgrading:

  • Consul API Gateway should be running version v0.3.0.

Procedure

  1. Complete the standard upgrade.

  2. After completing the upgrade, complete the post-upgrade configuration changes. The post-upgrade procedure describes how to replace your ReferencePolicy resources with ReferenceGrant resources and how to upgrade your GatewayClass, Gateway, and HTTPRoute resources from v1alpha2 to v1beta1.

Post-upgrade configuration changes

Complete the following steps after performing the standard upgrade procedure.

Requirements

  • Consul API Gateway should be running version v0.4.0.
  • Consul Helm chart should be v0.47.0 or later.
  • You should have the ability to run kubectl CLI commands.
  • kubectl should be configured to point to the cluster containing the installation you are upgrading.
  • You should have the following permissions for your Kubernetes cluster:

Procedure

  1. Verify the current version of the consul-api-gateway-controller Deployment:

    $ kubectl get deployment --namespace consul consul-api-gateway-controller --output=jsonpath="{@.spec.template.spec.containers[?(@.name=='api-gateway-controller')].image}"

    You should receive a response similar to the following:

    1. "hashicorp/consul-api-gateway:0.4.0"

  2. Issue the following command to get all ReferencePolicy resources across all namespaces.

    $ kubectl get referencepolicy --all-namespaces

    If you have any active ReferencePolicy resources, you will receive output similar to the response below.

    Warning: ReferencePolicy has been renamed to ReferenceGrant. ReferencePolicy will be removed in v0.6.0 in favor of the identical ReferenceGrant resource.
    NAMESPACE   NAME
    default     example-reference-policy

    If your output is empty, upgrade your GatewayClass, Gateway and HTTPRoute resources to v1beta1 as described in step 7.

  3. For each ReferencePolicy in the source YAML files, change the kind field to ReferenceGrant. You can optionally update the metadata.name field or filename if they include the term “policy”. In the following example, the kind and metadata.name fields and filename have been changed to reflect the new resource. Note that updating the kind field prevents you from using the kubectl edit command to edit the remote state directly.

    referencegrant.yaml

    apiVersion: gateway.networking.k8s.io/v1alpha2
    kind: ReferenceGrant
    metadata:
      name: reference-grant
      namespace: web-namespace
    spec:
      from:
        - group: gateway.networking.k8s.io
          kind: HTTPRoute
          namespace: example-namespace
      to:
        - group: ""
          kind: Service
          name: web-backend
  4. For each file, apply the updated YAML to your cluster to create a new ReferenceGrant resource.

    $ kubectl apply --filename <file>
  5. Check to confirm that each new ReferenceGrant was created successfully.

    $ kubectl get referencegrant <name> --namespace <namespace>
    NAME
    example-reference-grant
  6. Finally, delete each corresponding old ReferencePolicy resource. Because replacement ReferenceGrant resources have already been created, there should be no interruption in the availability of any referenced Service or Secret.

    $ kubectl delete referencepolicy <name> --namespace <namespace>
    Warning: ReferencePolicy has been renamed to ReferenceGrant. ReferencePolicy will be removed in v0.6.0 in favor of the identical ReferenceGrant resource.
    referencepolicy.gateway.networking.k8s.io "example-reference-policy" deleted

  7. For each GatewayClass, Gateway, and HTTPRoute in the source YAML, update the apiVersion field to gateway.networking.k8s.io/v1beta1. Note that updating the apiVersion field prevents you from using the kubectl edit command to edit the remote state directly.

    apiVersion: gateway.networking.k8s.io/v1beta1
    kind: Gateway
    metadata:
      name: example-gateway
      namespace: gateway-namespace
    spec:
      ...
  8. For each file, apply the updated YAML to your cluster to update the existing GatewayClass, Gateway or HTTPRoute resources.

    $ kubectl apply --filename <file>
    gateway.gateway.networking.k8s.io/example-gateway configured

Upgrade to v0.3.0 from v0.2.0 or lower

Consul API Gateway v0.3.0 introduces a change for users upgrading from earlier versions. Gateways with listeners that define a certificateRef in a different namespace now require a ReferencePolicy that explicitly allows Gateways from the gateway's namespace to use certificateRefs in the certificateRef's namespace.

Requirements

Ensure that the following requirements are met prior to upgrading:

  • Consul API Gateway should be running version v0.2.1 or lower.
  • You should have the ability to run kubectl CLI commands.
  • kubectl should be configured to point to the cluster containing the installation you are upgrading.
  • You should have the following permission rights on your Kubernetes cluster:
  • (Optional) Install the jq command-line JSON processor to simplify gateway retrieval during the upgrade process.

Procedure

  1. Verify the current version of the consul-api-gateway-controller Deployment:

    $ kubectl get deployment --namespace consul consul-api-gateway-controller --output=jsonpath="{@.spec.template.spec.containers[?(@.name=='api-gateway-controller')].image}"

    You should receive a response similar to the following:

    1. "hashicorp/consul-api-gateway:0.2.1"
  2. Retrieve all gateways that have certificateRefs in a different namespace. If you have installed the jq utility, you can skip to step 4. Otherwise, issue the following command to get all Gateways across all namespaces:

    $ kubectl get Gateway --output json --all-namespaces

    If you have any active Gateways, you will receive output similar to the following response. The output has been truncated to show only relevant fields:

    apiVersion: gateway.networking.k8s.io/v1alpha2
    kind: Gateway
    metadata:
      name: example-gateway
      namespace: gateway-namespace
    spec:
      gatewayClassName: "consul-api-gateway"
      listeners:
        - name: https
          port: 443
          protocol: HTTPS
          allowedRoutes:
            namespaces:
              from: All
          tls:
            certificateRefs:
              - group: ""
                kind: Secret
                name: example-certificate
                namespace: certificate-namespace
  3. Inspect the certificateRefs entries for each of the routes.

    If a namespace field is not defined in the certificateRefs or if the namespace matches the namespace of the parent Gateway, then no additional action is required for the certificateRefs. Otherwise, note the namespace field values for certificateRefs configurations that do not match the namespace of the parent Gateway. You must also note the namespace of the parent gateway. You need both to create a ReferencePolicy that explicitly allows each cross-namespace certificateRefs-to-gateway pair (see step 5).

    After completing this step, you will have a list of all secrets similar to the following:

    example-certificate:
      - namespace: certificate-namespace
        parentNamespace: gateway-namespace

    Proceed with the standard upgrade if your list is empty.

  4. If you have installed jq, issue the following command to get all Gateways and filter for secrets that require a ReferencePolicy.

    $ kubectl get Gateway -o json -A | jq -r '.items[] | {gateway_name: .metadata.name, gateway_namespace: .metadata.namespace, kind: .kind, crossNamespaceSecrets: ( .metadata.namespace as $parentnamespace | .spec.listeners[] | select(has("tls")) | .tls.certificateRefs[] | select(.namespace != null and .namespace != $parentnamespace ) )} '

    The output will resemble the following response if gateways that require a new ReferencePolicy are returned:

    {
      "gateway_name": "example-gateway",
      "gateway_namespace": "gateway-namespace",
      "kind": "Gateway",
      "crossNamespaceSecrets": {
        "group": "",
        "kind": "Secret",
        "name": "example-certificate",
        "namespace": "certificate-namespace"
      }
    }

    If your output is empty, proceed with the standard upgrade.

  5. Using the list of secrets you created earlier as a guide, create a ReferencePolicy that grants each gateway access to the secret in the other namespace. The ReferencePolicy explicitly allows each cross-namespace gateway-to-secret pair. The ReferencePolicy must be created in the same namespace as the certificateRefs.

    Skip to the next step if you’ve already created a ReferencePolicy.

    The following example ReferencePolicy enables example-gateway in gateway-namespace to utilize certificateRefs in the certificate-namespace namespace:

    referencepolicy.yaml

    apiVersion: gateway.networking.k8s.io/v1alpha2
    kind: ReferencePolicy
    metadata:
      name: reference-policy
      namespace: certificate-namespace
    spec:
      from:
        - group: gateway.networking.k8s.io
          kind: Gateway
          namespace: gateway-namespace
      to:
        - group: ""
          kind: Secret
  6. If you have already created a ReferencePolicy, modify it to allow your gateway to access your certificateRef and save it as referencepolicy.yaml. Note that each ReferencePolicy only supports one to field and one from field (refer to the ReferencePolicy documentation). As a result, you may need to create multiple ReferencePolicy resources, as in the sketch below.
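
    For example, if Gateways in two different namespaces both reference secrets in certificate-namespace, referencepolicy.yaml needs one ReferencePolicy per gateway namespace. The following sketch assumes a second, hypothetical namespace named other-gateway-namespace:

    apiVersion: gateway.networking.k8s.io/v1alpha2
    kind: ReferencePolicy
    metadata:
      name: reference-policy-gateway-namespace
      namespace: certificate-namespace
    spec:
      from:
        - group: gateway.networking.k8s.io
          kind: Gateway
          namespace: gateway-namespace
      to:
        - group: ""
          kind: Secret
    ---
    apiVersion: gateway.networking.k8s.io/v1alpha2
    kind: ReferencePolicy
    metadata:
      name: reference-policy-other-gateway-namespace
      namespace: certificate-namespace
    spec:
      from:
        - group: gateway.networking.k8s.io
          kind: Gateway
          namespace: other-gateway-namespace
      to:
        - group: ""
          kind: Secret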

  7. Issue the following command to apply it to your cluster:

    $ kubectl apply --filename referencepolicy.yaml

    Repeat this step as needed until each of your cross-namespace certificateRefs has a corresponding ReferencePolicy.

    Proceed with the standard upgrade.

Upgrade to v0.2.0

Consul API Gateway v0.2.0 introduces a change for people upgrading from Consul API Gateway v0.1.0. Routes with a backendRef defined in a different namespace now require a ReferencePolicy that explicitly allows traffic from the route’s namespace to the backendRef‘s namespace.

Requirements

Ensure that the following requirements are met prior to upgrading:

  • Consul API Gateway should be running version v0.1.0.
  • You should have the ability to run kubectl CLI commands.
  • kubectl should be configured to point to the cluster containing the installation you are upgrading.
  • You should have the following permission rights on your Kubernetes cluster:
  • (Optional) Install the jq command-line JSON processor to simplify route retrieval during the upgrade process.

Procedure

  1. Verify the current version of the consul-api-gateway-controller Deployment:

    $ kubectl get deployment --namespace consul consul-api-gateway-controller --output=jsonpath="{@.spec.template.spec.containers[?(@.name=='api-gateway-controller')].image}"

    You should receive the following response:

    1. "hashicorp/consul-api-gateway:0.1.0"
  2. Retrieve all routes that have a backend in a different namespace. If you have installed the jq utility, you can skip to step 4. Otherwise, issue the following command to get all HTTPRoutes and TCPRoutes across all namespaces:

    $ kubectl get HTTPRoute,TCPRoute --output json --all-namespaces

    Note that the command only retrieves HTTPRoutes and TCPRoutes. TLSRoutes and UDPRoutes are not supported in v0.1.0.

    If you have any active HTTPRoutes or TCPRoutes, you will receive output similar to the following response. The output has been truncated to show only relevant fields:

    apiVersion: v1
    items:
      - apiVersion: gateway.networking.k8s.io/v1alpha2
        kind: HTTPRoute
        metadata:
          name: example-http-route
          namespace: example-namespace
          ...
        spec:
          parentRefs:
            - group: gateway.networking.k8s.io
              kind: Gateway
              name: gateway
              namespace: gw-ns
          rules:
            - backendRefs:
                - group: ""
                  kind: Service
                  name: web-backend
                  namespace: gateway-namespace
          ...
        ...
      - apiVersion: gateway.networking.k8s.io/v1alpha2
        kind: TCPRoute
        metadata:
          name: example-tcp-route
          namespace: a-different-namespace
          ...
        spec:
          parentRefs:
            - group: gateway.networking.k8s.io
              kind: Gateway
              name: gateway
              namespace: gateway-namespace
          rules:
            - backendRefs:
                - group: ""
                  kind: Service
                  name: web-backend
                  namespace: gateway-namespace
          ...
        ...
  3. Inspect the backendRefs entries for each of the routes.

    If a namespace field is not defined in the backendRef or if the namespace matches the namespace of the route, then no additional action is required for the backendRef. Otherwise, note the group, kind, name, and namespace field values for backendRef configurations that define a namespace that does not match the namespace of the parent route. You must also note the kind and namespace of the parent route. You need these to create a ReferencePolicy that explicitly allows each cross-namespace route-to-service pair (see step 5).

    After completing this step, you will have a list of all routes similar to the following:

    example-http-route:
      - namespace: example-namespace
        kind: HTTPRoute
        backendReferences:
          - group: ""
            kind: Service
            name: web-backend
            namespace: gateway-namespace
    example-tcp-route:
      - namespace: a-different-namespace
        kind: TCPRoute
        backendReferences:
          - group: ""
            kind: Service
            name: web-backend
            namespace: gateway-namespace

    Proceed with the standard upgrade if your list is empty.

  4. If you have installed jq, issue the following command to get all HTTPRoutes and TCPRoutes and filter for routes that require a ReferencePolicy.

    $ kubectl get HTTPRoute,TCPRoute -o json -A | jq -r '.items[] | {name: .metadata.name, namespace: .metadata.namespace, kind: .kind, crossNamespaceBackendReferences: ( .metadata.namespace as $parentnamespace | .spec.rules[] .backendRefs[] | select(.namespace != null and .namespace != $parentnamespace ) )} '

    Note that the command retrieves all HTTPRoutes and TCPRoutes. TLSRoutes and UDPRoutes are not supported in v0.1.0.

    The output will resemble the following response if routes that require a new ReferencePolicy are returned:

    {
      "name": "example-http-route",
      "namespace": "example-namespace",
      "kind": "HTTPRoute",
      "crossNamespaceBackendReferences": {
        "group": "",
        "kind": "Service",
        "name": "web-backend",
        "namespace": "gateway-namespace",
        "port": 8080,
        "weight": 1
      }
    }
    {
      "name": "example-tcp-route",
      "namespace": "a-different-namespace",
      "kind": "TCPRoute",
      "crossNamespaceBackendReferences": {
        "group": "",
        "kind": "Service",
        "name": "web-backend",
        "namespace": "gateway-namespace",
        "port": 8080,
        "weight": 1
      }
    }

    If your output is empty, proceed with the standard upgrade.

  5. Using the list of routes you created earlier as a guide, create a ReferencePolicy that allows cross-namespace traffic for each route-to-service pair. The ReferencePolicy explicitly allows each cross-namespace route-to-service pair. The ReferencePolicy must be created in the same namespace as the backend Service.

    Skip to the next step if you’ve already created a ReferencePolicy.

    The following example ReferencePolicy enables HTTPRoutes in example-namespace to reference the web-backend Kubernetes Service in the gateway-namespace namespace:

    referencepolicy.yaml

    apiVersion: gateway.networking.k8s.io/v1alpha2
    kind: ReferencePolicy
    metadata:
      name: reference-policy
      namespace: gateway-namespace
    spec:
      from:
        - group: gateway.networking.k8s.io
          kind: HTTPRoute
          namespace: example-namespace
      to:
        - group: ""
          kind: Service
          name: web-backend
  6. If you have already created a ReferencePolicy, modify it to allow your route and save it as referencepolicy.yaml. Note that each ReferencePolicy only supports one to field and one from field (refer to the ReferencePolicy documentation). As a result, you may need to create multiple ReferencePolicy resources.

  7. Issue the following command to apply it to your cluster:

    $ kubectl apply --filename referencepolicy.yaml

    Repeat this step as needed until each of your cross-namespace routes has a corresponding ReferencePolicy.

    Proceed with the standard upgrade.

Standard Upgrade

Note: When you see VERSION in examples of commands or configuration settings, replace VERSION with the version number of the release you are installing, such as 0.2.0. If a lowercase "v" appears in front of VERSION, include it before the version number, as in v0.2.0.

Requirements

Ensure that the following requirements are met prior to upgrading:

  • You should have the ability to run kubectl CLI commands.
  • kubectl should be configured to point to the cluster containing the installation you are upgrading.

Procedure

This is the upgrade path to use when there are no version specific steps to take.

  1. Issue the following command to install the new version of CRDs into your cluster:

    $ kubectl apply --kustomize="github.com/hashicorp/consul-api-gateway/config/crd?ref=vVERSION"
  2. Update apiGateway.image in values.yaml:

    values.yaml

    ...
    apiGateway:
      image: hashicorp/consul-api-gateway:VERSION
    ...
  3. Issue the following command to upgrade your Consul installation:

    $ helm upgrade --values values.yaml --namespace consul --version <NEW_VERSION> <DEPLOYMENT_NAME> hashicorp/consul

    Note that the upgrade causes the Consul API Gateway controller to shut down and restart with the new version.

  4. According to the Kubernetes Gateway API specification, Gateway Class configurations should only be applied to a gateway upon creation. To see the effects on preexisting gateways after upgrading your CRD installation, delete and recreate any gateways by issuing the following commands:

    $ kubectl delete --filename <path_to_gateway_config.yaml>
    $ kubectl create --filename <path_to_gateway_config.yaml>
  5. (Optional) Delete and recreate your routes. Note that it may take several minutes for attached routes to reconcile and start reporting bind errors.

    $ kubectl delete --filename <path_to_route_config.yaml>
    $ kubectl create --filename <path_to_route_config.yaml>

Post-Upgrade Configuration Changes

No additional configuration changes are required for this upgrade.