Consul DNS on Kubernetes

One of the primary query interfaces to Consul is the DNS interface. You can configure Consul DNS in Kubernetes using a stub-domain configuration if using KubeDNS or a proxy configuration if using CoreDNS.

Once configured, DNS requests in the form <consul-service-name>.service.consul will resolve for services in Consul. This will work from all Kubernetes namespaces.
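For example, a service registered in Consul under the hypothetical name web would be reachable at the following DNS name (web is an assumption here; substitute any service in your Consul catalog):

```shell
# "web" is a hypothetical service name; substitute one registered in Consul.
SERVICE=web
echo "${SERVICE}.service.consul"
# From inside any pod, you could then resolve it with, e.g.:
#   dig web.service.consul
```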

Note: If you want requests to just <consul-service-name> (without the .service.consul) to resolve, then you’ll need to turn on Consul to Kubernetes Service Sync.

Consul DNS Cluster IP

To configure KubeDNS or CoreDNS you’ll first need the ClusterIP of the Consul DNS service created by the Helm chart.

The default name of the Consul DNS service will be consul-consul-dns. Use that name to get the ClusterIP:

```shell
$ kubectl get svc consul-consul-dns -o jsonpath='{.spec.clusterIP}'
10.35.240.78
```

For this installation the ClusterIP is 10.35.240.78.

Note: If you’ve installed Consul using a Helm release name other than consul, the DNS service name will be <release-name>-consul-dns.
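For example, with a hypothetical Helm release name of my-consul, the service name works out as follows (my-consul is an assumption; substitute your own release name):

```shell
# "my-consul" is a hypothetical Helm release name; substitute your own.
RELEASE=my-consul
echo "${RELEASE}-consul-dns"
# You would then fetch that service's ClusterIP with, e.g.:
#   kubectl get svc my-consul-consul-dns -o jsonpath='{.spec.clusterIP}'
```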

KubeDNS

If using KubeDNS, you need to create a ConfigMap that tells KubeDNS to use the Consul DNS service to resolve all domains ending with .consul:

Export the Consul DNS IP as an environment variable:

```shell
$ export CONSUL_DNS_IP=10.35.240.78
```

And create the ConfigMap:

```shell
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"consul": ["$CONSUL_DNS_IP"]}
EOF
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
configmap/kube-dns configured
```

Ensure that the ConfigMap was created successfully:

```shell
$ kubectl get configmap kube-dns -n kube-system -o yaml
apiVersion: v1
data:
  stubDomains: |
    {"consul": ["10.35.240.78"]}
kind: ConfigMap
...
```

Note: The stubDomain can only point to a static IP. If the cluster IP of the Consul DNS service changes, then it must be updated in the config map to match the new service IP for this to continue working. This can happen if the service is deleted and recreated, such as in full cluster rebuilds.

Note: If using a different zone than .consul, change the stub domain to that zone.
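For example, if your Consul datacenter were configured with the hypothetical alternate domain my-zone (my-zone is an assumption; use your actual Consul domain), the stubDomains entry would become:

```yaml
stubDomains: |
  {"my-zone": ["10.35.240.78"]}
```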

Now skip ahead to the Verifying DNS Works section.

CoreDNS Configuration

If using CoreDNS instead of KubeDNS in your Kubernetes cluster, you will need to update your existing coredns ConfigMap in the kube-system namespace to include a forward definition for consul that points to the cluster IP of the Consul DNS service.

Edit the ConfigMap:

```shell
$ kubectl edit configmap coredns -n kube-system
```

Add the consul block below the default .:53 block, replacing <consul-dns-service-cluster-ip> with the DNS service’s ClusterIP you found previously.

```diff
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        <Existing CoreDNS definition>
    }
+   consul {
+     errors
+     cache 30
+     forward . <consul-dns-service-cluster-ip>
+   }
```

Note: The forward definition can only point to a static IP. If the cluster IP of the consul-dns service changes, it must be updated to the new IP to continue working. This can happen if the service is deleted and recreated, such as in full cluster rebuilds.

Note: If using a different zone than .consul, change the key accordingly.
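For example, with the hypothetical alternate domain my-zone (an assumption; use your actual Consul domain), the added CoreDNS block would become:

```
my-zone {
  errors
  cache 30
  forward . <consul-dns-service-cluster-ip>
}
```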

Verifying DNS Works

To verify DNS works, run a simple job to query DNS. Save the following job to the file job.yaml and run it:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: dns
spec:
  template:
    spec:
      containers:
        - name: dns
          image: anubhavmishra/tiny-tools
          command: ['dig', 'consul.service.consul']
      restartPolicy: Never
  backoffLimit: 4
```

```shell
$ kubectl apply -f job.yaml
```

Then find the pod created by the job and check its logs. You should see output similar to the following, showing a successful DNS query. If you see any errors, DNS is not configured properly.

```shell
$ kubectl get pods --show-all | grep dns
dns-lkgzl   0/1   Completed   0   6m

$ kubectl logs dns-lkgzl
; <<>> DiG 9.11.2-P1 <<>> consul.service.consul
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4489
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 4

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;consul.service.consul.    IN  A

;; ANSWER SECTION:
consul.service.consul.  0  IN  A  10.36.2.23
consul.service.consul.  0  IN  A  10.36.4.12
consul.service.consul.  0  IN  A  10.36.0.11

;; ADDITIONAL SECTION:
consul.service.consul.  0  IN  TXT  "consul-network-segment="
consul.service.consul.  0  IN  TXT  "consul-network-segment="
consul.service.consul.  0  IN  TXT  "consul-network-segment="

;; Query time: 5 msec
;; SERVER: 10.39.240.10#53(10.39.240.10)
;; WHEN: Wed Sep 12 02:12:30 UTC 2018
;; MSG SIZE  rcvd: 206
```