In this tutorial you will deploy a Consul datacenter to Amazon Elastic Kubernetes Service (EKS) on Amazon Web Services (AWS) with HashiCorp's official Helm chart or the Consul K8S CLI. You do not need to override any values in the Helm chart for a basic installation; however, in this tutorial you will create a config file with custom values to allow access to the Consul UI.

Security Warning: This tutorial is not for production use. By default, the chart installs an insecure configuration of Consul. Refer to the Kubernetes deployment guide to learn how to secure Consul on Kubernetes in production. Additionally, use a properly secured Kubernetes cluster, or make sure you understand and enable the recommended security features.

Prerequisites

Installing aws-cli, kubectl, and helm CLI tools

To follow this tutorial, you will need the aws-cli binary installed, as well as kubectl and helm.

Refer to the AWS documentation for instructions on installing and configuring aws-cli.

Refer to the Kubernetes and Helm documentation to download kubectl and helm, or install both with Homebrew as described below.

Installing helm and kubectl with Homebrew

Homebrew allows you to quickly install both Helm and kubectl on macOS and Linux.

Install kubectl with Homebrew.

  $ brew install kubernetes-cli

Install helm with Homebrew.

  $ brew install helm

VPC and security group creation

The AWS documentation for creating an EKS cluster assumes that you have a VPC and a dedicated security group created. Refer to the AWS documentation for instructions on creating these resources.

You will need the SecurityGroups, VpcId, and SubnetIds values for the EKS cluster creation step.
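
For example, if you created these resources with the Amazon EKS sample CloudFormation template suggested in the AWS guide, you can look up all three values from the stack outputs. This is a sketch; the stack name my-eks-vpc-stack is a placeholder for whatever you named yours.

  $ aws cloudformation describe-stacks --stack-name my-eks-vpc-stack --query "Stacks[0].Outputs"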

Create an EKS cluster

A three-node EKS cluster is the minimum required to deploy Consul using the official Consul Helm chart. Create a three-node cluster on EKS by following the EKS AWS documentation.

Note: If using eksctl, you can use this command to create a three-node cluster: eksctl create cluster --name=<YOUR CLUSTER NAME> --region=<YOUR REGION> --nodes=3
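
As a concrete sketch of that command, the following invocation (the cluster name, region, and instance type are illustrative placeholders; choose values appropriate for your account) creates a three-node cluster suitable for this tutorial:

  $ eksctl create cluster --name=my-consul-cluster --region=us-east-1 --nodes=3 --node-type=m5.large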

Configure kubectl to talk to your cluster

Setting up kubectl to talk to your EKS cluster should be as simple as running the following:

  $ aws eks update-kubeconfig --region <region where you deployed your cluster> --name <your cluster name>
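
For example, with the placeholder values from the eksctl sketch above:

  $ aws eks update-kubeconfig --region us-east-1 --name my-consul-cluster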

You can then run the command kubectl cluster-info to verify you are connected to your Kubernetes cluster:

  $ kubectl cluster-info
  Kubernetes master is running at https://<your K8s master location>.eks.amazonaws.com
  CoreDNS is running at https://<your CoreDNS location>.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

You can also review the AWS documentation for configuring kubectl to work with EKS.

Deploy Consul

You can deploy a complete Consul datacenter using the official Consul Helm chart or the Consul K8S CLI. By default, these methods will install a total of three Consul servers as well as one client per Kubernetes node into your EKS cluster. You can review the Consul Kubernetes installation documentation to learn more about these installation options.

Create a values file

To customize your deployment, you can pass a YAML values file during the deployment; its entries override the Helm chart's default values. The following values change your datacenter name and enable the Consul UI via a service.

helm-consul-values.yaml

  global:
    name: consul
    datacenter: hashidc1
  ui:
    enabled: true
    service:
      type: LoadBalancer

Install Consul in your cluster

You can now deploy a complete Consul datacenter in your Kubernetes cluster using the official Consul Helm chart or the Consul K8S CLI.


  $ helm repo add hashicorp https://helm.releases.hashicorp.com
  "hashicorp" has been added to your repositories

  $ helm install --values helm-consul-values.yaml consul hashicorp/consul --create-namespace --namespace consul --version "0.43.0"
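
Alternatively, if you prefer the Consul K8S CLI, a minimal sketch (assuming you have already installed the consul-k8s binary per the Consul documentation) points it at the same values file:

  $ consul-k8s install -config-file=helm-consul-values.yaml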

Note: You can review the official Helm chart values to learn more about the default settings.

Run the command kubectl get pods to verify three servers and three clients were successfully created.

  $ kubectl get pods --namespace consul
  NAME              READY   STATUS    RESTARTS   AGE
  consul-5fkt7      1/1     Running   0          69s
  consul-8zkjc      1/1     Running   0          69s
  consul-lnr74      1/1     Running   0          69s
  consul-server-0   1/1     Running   0          69s
  consul-server-1   1/1     Running   0          69s
  consul-server-2   1/1     Running   0          69s
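
If you are scripting this step, you can block until every pod reports ready instead of polling by hand. This sketch waits up to two minutes:

  $ kubectl wait --for=condition=Ready pods --all --namespace consul --timeout=120s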

Accessing the Consul UI

Since you enabled the Consul UI in your values file, you can run the command kubectl get services to find the load balancer DNS name or external IP of your UI service.

  $ kubectl get services --namespace consul
  NAME            TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)                                                                    AGE
  consul-dns      ClusterIP      172.20.39.92     <none>                                                                    53/TCP,53/UDP                                                              8m17s
  consul-server   ClusterIP      None             <none>                                                                    8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP   8m17s
  consul-ui       LoadBalancer   172.20.223.228   aabd04e592a324a369daf25df429accd-601998447.us-east-1.elb.amazonaws.com    80:32026/TCP                                                               8m17s
  kubernetes      ClusterIP      172.20.0.1       <none>                                                                    443/TCP                                                                    21m

In this case, the UI is exposed at http://aabd04e592a324a369daf25df429accd-601998447.us-east-1.elb.amazonaws.com over port 80. Navigate to the load balancer DNS name (or the external IP, if one is listed) in your browser to interact with the Consul UI.

Click the Nodes tab and you can observe several Consul servers and agents running.

Consul UI nodes tab

Accessing Consul with the CLI and API

In addition to accessing Consul with the UI, you can manage Consul by directly connecting to the pod with kubectl.

You can also use the Consul HTTP API by communicating with the local agent running on the Kubernetes node. Explore the Consul API documentation if you are interested in learning more about using the Consul HTTP API with Kubernetes.
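
For example, because the load balancer/UI service you created also exposes the HTTP API, you can list the nodes in the catalog with curl against the load balancer address from the earlier output:

  $ curl http://aabd04e592a324a369daf25df429accd-601998447.us-east-1.elb.amazonaws.com/v1/catalog/nodes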

Kubectl

To access the pod and its data directory, use kubectl to start a shell session in the pod.

  $ kubectl exec --stdin --tty consul-server-0 --namespace consul -- /bin/sh

This allows you to navigate the file system and run Consul CLI commands on the pod. For example, you can view the Consul members:

  $ consul members
  Node                         Address           Status   Type     Build    Protocol   DC         Segment
  consul-server-0              10.0.3.70:8301    alive    server   1.10.3   2          hashidc1   <all>
  consul-server-1              10.0.2.253:8301   alive    server   1.10.3   2          hashidc1   <all>
  consul-server-2              10.0.1.39:8301    alive    server   1.10.3   2          hashidc1   <all>
  ip-10-0-1-139.ec2.internal   10.0.1.148:8301   alive    client   1.10.3   2          hashidc1   <default>
  ip-10-0-2-47.ec2.internal    10.0.2.59:8301    alive    client   1.10.3   2          hashidc1   <default>
  ip-10-0-3-94.ec2.internal    10.0.3.225:8301   alive    client   1.10.3   2          hashidc1   <default>
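
Other Consul CLI commands work the same way from inside the pod. For example, you can list the services registered in the catalog; at this point, only Consul itself should be registered:

  $ consul catalog services
  consul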

When you have finished interacting with the pod, exit the shell.

  $ exit

Using Consul environment variables

You can also access the Consul datacenter with your local Consul binary by setting environment variables. You can read more about the supported environment variables in the Consul documentation.

In this case, since you are exposing HTTP via the load balancer/UI service, you can export the CONSUL_HTTP_ADDR variable to point to the load balancer DNS name (or external IP) of your Consul UI service:

  $ export CONSUL_HTTP_ADDR=http://aabd04e592a324a369daf25df429accd-601998447.us-east-1.elb.amazonaws.com:80
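
Rather than copying the hostname by hand, you can derive the same value with a jsonpath query. This is a sketch; the field path assumes an AWS load balancer, which reports a DNS hostname rather than an IP:

  $ export CONSUL_HTTP_ADDR=http://$(kubectl get services consul-ui --namespace consul --output jsonpath='{.status.loadBalancer.ingress[0].hostname}')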

You can now use your local installation of the Consul binary to run Consul commands:

  $ consul members
  Node                         Address           Status   Type     Build    Protocol   DC         Partition   Segment
  consul-server-0              10.0.3.70:8301    alive    server   1.10.3   2          hashidc1   default     <all>
  consul-server-1              10.0.2.253:8301   alive    server   1.10.3   2          hashidc1   default     <all>
  consul-server-2              10.0.1.39:8301    alive    server   1.10.3   2          hashidc1   default     <all>
  ip-10-0-1-139.ec2.internal   10.0.1.148:8301   alive    client   1.10.3   2          hashidc1   default     <default>
  ip-10-0-2-47.ec2.internal    10.0.2.59:8301    alive    client   1.10.3   2          hashidc1   default     <default>
  ip-10-0-3-94.ec2.internal    10.0.3.225:8301   alive    client   1.10.3   2          hashidc1   default     <default>

Next steps

In this tutorial, you deployed a Consul datacenter to Amazon Elastic Kubernetes Service using the official Helm chart or the Consul K8S CLI. You also configured access to the Consul UI. To learn more about deployment best practices, review the Kubernetes Reference Architecture tutorial.