Red Hat OpenShift is a distribution of the Kubernetes platform that provides a number of usability and security enhancements.

In this tutorial you will:

  • Deploy an OpenShift cluster
  • Deploy a Consul datacenter
  • Access the Consul UI
  • Use the Consul CLI to inspect your environment
  • Decommission the OpenShift environment

Security Warning: This tutorial is not for production use. The Helm chart is installed with an insecure configuration of Consul. Refer to the Secure Consul and Registered Services on Kubernetes tutorial to learn how to secure Consul on Kubernetes for production.

Prerequisites

To complete this tutorial you will need:

  • The Helm CLI
  • kubectl
  • The Consul CLI
  • CodeReady Containers (CRC)
  • A Red Hat account, for the image pull secret

Download Helm chart

If you have not already done so, add the HashiCorp Helm repository, which hosts the official consul-helm chart.

  1. $ helm repo add hashicorp https://helm.releases.hashicorp.com
  2. "hashicorp" has been added to your repositories

Verify chart version

To ensure you have version 0.34.1 of the Helm chart, search your local repo.

  1. $ helm search repo hashicorp/consul
  2. NAME               CHART VERSION   APP VERSION   DESCRIPTION
  3. hashicorp/consul   0.34.1          1.10.2        Official HashiCorp Consul Chart

If the correct version is not displayed in the output, update your local Helm repositories.

  1. $ helm repo update
  2. Hang tight while we grab the latest from your chart repositories...
  3. ...Successfully got an update from the "hashicorp" chart repository

Deploy OpenShift

OpenShift can be deployed on multiple platforms, and several installation options are available for both production and development environments. This tutorial requires a running OpenShift cluster to deploy Consul on Kubernetes. If you already have an OpenShift cluster provisioned in a production or development environment, skip ahead to Deploy Consul.

This tutorial uses CodeReady Containers (CRC) to provide a pre-configured development OpenShift environment on your local machine. CRC is bundled as a Red Hat Enterprise Linux virtual machine that supports native hypervisors for Linux, macOS, and Windows 10. CRC is the quickest way to get started building OpenShift clusters: it is designed to run on a local computer, simplifying setup and emulating the cloud development environment locally with all the tools needed to develop container-based applications. While this tutorial uses CRC, the Consul Helm deployment process works on any OpenShift cluster and is production ready.

If you prefer not to deploy CRC, you can provision a managed OpenShift cluster in under an hour with Azure Red Hat OpenShift. Azure Red Hat OpenShift requires an Azure subscription, but it provides the simplest installation flow for getting a production-ready OpenShift cluster available for this tutorial.

CRC Setup

After installing CodeReady Containers, issue the following command to set up your environment.

  1. $ crc setup
  2. INFO Checking if oc binary is cached
  3. INFO Checking if podman remote binary is cached
  4. INFO Checking if goodhosts binary is cached
  5. INFO Checking if CRC bundle is cached in '$HOME/.crc'
  6. INFO Checking minimum RAM requirements
  7. INFO Checking if running as non-root
  8. INFO Checking if HyperKit is installed
  9. INFO Checking if crc-driver-hyperkit is installed
  10. INFO Checking file permissions for /etc/hosts
  11. INFO Checking file permissions for /etc/resolver/testing
  12. Setup is complete, you can now run 'crc start' to start the OpenShift cluster

CRC start

Once the setup is complete, you can start the CRC service with the following command. The command will perform a few system checks to ensure your system meets the minimum requirements and will then ask you to provide an image pull secret. You should have your Red Hat account open so that you can easily copy your image pull secret when prompted.

  1. $ crc start
  2. INFO Checking if oc binary is cached
  3. INFO Checking if podman remote binary is cached
  4. INFO Checking if goodhosts binary is cached
  5. INFO Checking minimum RAM requirements
  6. INFO Checking if running as non-root
  7. INFO Checking if HyperKit is installed
  8. INFO Checking if crc-driver-hyperkit is installed
  9. INFO Checking file permissions for /etc/hosts
  10. INFO Checking file permissions for /etc/resolver/testing
  11. ? Image pull secret [? for help]

Next, paste the image pull secret into the terminal and press enter.

Example output:

  1. INFO Loading bundle: crc_hyperkit_4.5.14.crcbundle ...
  2. INFO Checking size of the disk image /Users/derekstrickland/.crc/cache/crc_hyperkit_4.5.14/crc.qcow2 ...
  3. ...TRUNCATED...
  4. To access the cluster, first set up your environment by following 'crc oc-env' instructions.
  5. Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443'.
  6. To login as an admin, run 'oc login -u kubeadmin -p <redacted> https://api.crc.testing:6443'.
  7. You can now run 'crc console' and use these credentials to access the OpenShift web console.

Notice that the output instructs you to configure your oc-env, and also includes a login command with a secret password. The secret is specific to your installation. Make note of this command, as you will use it to log in to CRC on your development host later.
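If you lose the generated kubeadmin password later, you do not need to recreate the cluster: CRC can print the login details again with the crc console --credentials subcommand. A minimal sketch, guarded so it is harmless to paste on a machine without crc installed or without a cluster:

```shell
# Print the CRC console URL and the kubeadmin credentials again on demand.
# The guard and fallback keep this sketch safe to run anywhere.
if command -v crc >/dev/null 2>&1; then
  crc console --credentials || echo "no CRC cluster found"
else
  echo "crc is not installed on this machine"
fi
```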

Configure CRC environment

Next, configure the environment as instructed by CRC using the following command.

  1. $ eval $(crc oc-env)

Log in to the OpenShift cluster

Next, use the login command you made note of before to authenticate with the OpenShift cluster.

Note: You will have to replace the secret password below with the value output by CRC.

  1. $ oc login -u kubeadmin -p <redacted> https://api.crc.testing:6443
  2. Login successful.
  3. You have access to 57 projects, the list has been suppressed. You can list all projects with 'oc projects'
  4. Using project "default".

Verify configuration

Validate that your CRC setup was successful with the following command.

  1. $ kubectl cluster-info
  2. Kubernetes master is running at https://api.crc.testing:6443
  3. To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Deploy Consul

Consul on Kubernetes provides a Helm chart to deploy a Consul datacenter on Kubernetes in a highly customizable configuration. Review the docs on Helm chart configuration to learn more about the available options.

Create a new project

First, create an OpenShift project to install Consul on Kubernetes. Creating an OpenShift project also creates a Kubernetes namespace in which to deploy Kubernetes resources.

  1. $ oc new-project consul
  2. Now using project "consul" on server "https://api.crc.testing:6443".
  3. You can add applications to this project with the 'new-app' command. For example, try:
  4. oc new-app ruby~https://github.com/sclorg/ruby-ex.git
  5. to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:
  6. kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node

Create an image pull secret for a Red Hat Registry service account

To authenticate to the Red Hat Registry and pull images from it, you must create an image pull secret. First, create a registry service account on the Red Hat Customer Portal, and then apply the OpenShift secret associated with that registry service account as shown below:

  1. $ kubectl create -f openshift-secret.yml --namespace=consul
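The contents of openshift-secret.yml are not shown in this tutorial; it is the secret file you download from your registry service account page on the Red Hat Customer Portal. As a sketch (every value below is a placeholder), it is a standard kubernetes.io/dockerconfigjson pull secret:

```yaml
# Sketch of openshift-secret.yml -- all values are placeholders.
# Download the real file from your registry service account on the
# Red Hat Customer Portal.
apiVersion: v1
kind: Secret
metadata:
  name: 12345678-example-pull-secret
data:
  .dockerconfigjson: PGJhc2U2NC1lbmNvZGVkLWNyZWRlbnRpYWxzPg==
type: kubernetes.io/dockerconfigjson
```

The metadata.name of this secret is the value you reference as the image pull secret name in the Helm values file later.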

Import Consul and Consul on Kubernetes images from the Red Hat Catalog (Optional)

Instead of pulling images directly from the Red Hat Registry, you can pre-load the Consul and Consul on Kubernetes images into the internal OpenShift registry using the oc import-image command. Read more about importing images into the internal OpenShift registry in the Red Hat OpenShift cookbook.

  1. $ oc import-image hashicorp/consul:1.10.3-ubi --from=registry.connect.redhat.com/hashicorp/consul:1.10.3-ubi --confirm
  2. $ oc import-image hashicorp/consul-k8s-control-plane:0.34.0-ubi --from=registry.connect.redhat.com/hashicorp/consul-k8s-control-plane:0.34.0-ubi --confirm

Helm chart configuration

To customize your deployment, you can pass a YAML configuration file to be used during the deployment. Any values specified in this file override the Helm chart's default settings. The following example sets the global.openshift.enabled entry to true, which is required to operate Consul on OpenShift. Use this command to generate a file named config.yaml that you will reference in the helm install command later.

  1. $ cat > config.yaml << EOF
  2. global:
  3.   name: consul
  4.   datacenter: dc1
  5.   image: registry.connect.redhat.com/hashicorp/consul:1.10.3-ubi
  6.   imageK8S: registry.connect.redhat.com/hashicorp/consul-k8s-control-plane:0.34.0-ubi
  7.   imagePullSecrets:
  8.     - name: <Insert image pull secret name for RedHat Registry Service Account>
  9.   openshift:
  10.     enabled: true
  11. server:
  12.   replicas: 1
  13.   bootstrapExpect: 1
  14.   disruptionBudget:
  15.     enabled: true
  16.     maxUnavailable: 0
  17. client:
  18.   enabled: true
  19.   grpc: true
  20. ui:
  21.   enabled: true
  22. connectInject:
  23.   enabled: true
  24.   default: true
  25. controller:
  26.   enabled: true
  27. EOF

Install Consul with Helm

Now, issue the helm install command. The following command specifies that the installation should:

  • Use the custom values file you created in the last step
  • Use the hashicorp/consul chart you downloaded earlier
  • Set your Consul installation name to consul
  • Use consul-helm chart version 0.34.1
  1. $ helm install -f config.yaml consul hashicorp/consul --version "0.34.1" --wait

The output will be similar to the following.

  1. NAME: consul
  2. ...
  3. $ helm status consul
  4. $ helm get all consul
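As another quick check, you can list the Helm releases in the consul project and confirm the release reports a status of deployed. A sketch using the standard helm list subcommand, guarded so it is safe to run on a machine without helm or a reachable cluster:

```shell
# List Helm releases in the consul namespace; the consul release should
# report a status of "deployed". Guard and fallback keep the sketch harmless.
if command -v helm >/dev/null 2>&1; then
  helm list --namespace consul || echo "could not reach the cluster"
else
  echo "helm is not installed on this machine"
fi
```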

Verify installation

Use kubectl get pods to verify your installation.

  1. $ watch kubectl get pods
  2. NAME                                                         READY   STATUS    RESTARTS   AGE
  3. consul-c74zv                                                 1/1     Running   0          10m
  4. consul-connect-injector-webhook-deployment-5f7d4cd45-vxmsl   1/1     Running   0          10m
  5. consul-controller-7c884544f8-nj9lt                           1/1     Running   0          10m
  6. consul-server-0                                              1/1     Running   0          10m
  7. consul-webhook-cert-manager-7fcf99885-rqmm6                  1/1     Running   0          10m

Once all pods have a status of Running, enter CTRL-C to stop the watch.
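If you prefer a scripted wait over watching manually, kubectl wait can block until the server pod is Ready. A sketch assuming the consul project created earlier, guarded so it is harmless where kubectl or the cluster is unavailable:

```shell
# Block until the Consul server pod reports Ready, up to five minutes.
# Guard and fallback keep the sketch safe to run anywhere.
if command -v kubectl >/dev/null 2>&1; then
  kubectl wait --namespace consul --for=condition=Ready pod/consul-server-0 --timeout=300s \
    || echo "pod not Ready (or cluster unreachable)"
else
  echo "kubectl is not installed on this machine"
fi
```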

Accessing the Consul UI

Now that Consul has been deployed, you can access the Consul UI to verify that the Consul installation was successful, and that the environment is healthy.

Expose the UI service to the host

Since the cluster is running on your local development host, you can expose the Consul UI to the host using kubectl port-forward. The UI and the HTTP API server both run on the consul-server-0 pod. Issue the following command to expose the server endpoint at port 8500 on your local development host.

  1. $ kubectl port-forward consul-server-0 8500:8500
  2. Forwarding from 127.0.0.1:8500 -> 8500
  3. Forwarding from [::1]:8500 -> 8500

Open http://localhost:8500 in a new browser tab, and you should observe a page that looks similar to the following.

OpenShift Consul UI

Accessing Consul with the CLI and HTTP API

To access Consul with the CLI, set the following CONSUL_HTTP_ADDR environment variable on the development host so that the Consul CLI knows which Consul server to interact with.

  1. $ export CONSUL_HTTP_ADDR=http://127.0.0.1:8500

You should be able to issue the consul members command to view all available Consul datacenter members.

  1. $ consul members
  2. Node                Address            Status   Type     Build    Protocol   DC    Segment
  3. consul-server-0     10.116.0.78:8301   alive    server   1.10.3   2          dc1   <all>
  4. crc-j55b9-master-0  10.116.0.77:8301   alive    client   1.10.3   2          dc1   <default>

You can use the same URL to make HTTP API requests with your custom code.
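For example, the following sketch queries the catalog over HTTP with curl; /v1/catalog/nodes is a standard Consul HTTP API endpoint. It assumes the port-forward from the previous section is still running, with a fallback so the snippet is harmless when nothing is listening on port 8500:

```shell
# Query the Consul HTTP API through the port-forward. The fallback keeps
# the sketch harmless when no server is listening on 127.0.0.1:8500.
export CONSUL_HTTP_ADDR=http://127.0.0.1:8500
curl --silent --fail "${CONSUL_HTTP_ADDR}/v1/catalog/nodes" \
  || echo "no Consul server reachable at ${CONSUL_HTTP_ADDR}"
```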

Decommission the environment

Now that you have completed the tutorial, you should decommission the CRC environment. Enter CTRL-C in the terminal to stop the port forwarding process.

Stop CRC

First, stop the running cluster.

  1. $ crc stop

Example output:

  1. INFO Stopping the OpenShift cluster, this may take a few minutes...
  2. Stopped the OpenShift cluster

Delete CRC

Next, issue the following command to delete the cluster.

  1. $ crc delete

The CRC CLI will ask you to confirm that you want to delete the cluster.

Example prompt:

  1. Do you want to delete the OpenShift cluster? [y/N]:

Enter y to confirm.

Example output:

  1. Deleted the OpenShift cluster

Next steps

In this tutorial you created a Red Hat OpenShift cluster, and installed Consul to the cluster.

Specifically, you:

  • Deployed an OpenShift cluster
  • Deployed a Consul datacenter
  • Accessed the Consul UI
  • Used the Consul CLI to inspect your environment
  • Decommissioned the environment

It is highly recommended that you properly secure your Kubernetes cluster and that you understand and enable the recommended security features of Consul. Refer to the Secure Consul and Registered Services on Kubernetes tutorial to learn how you can deploy an example workload, and secure Consul on Kubernetes for production.

For more information on the Consul Helm chart configuration options, review the consul-helm chart documentation.