NATS Cluster and Cert Manager

First, we need to install the cert-manager component from Jetstack:

  kubectl create namespace cert-manager
  kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
  kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.14.0/cert-manager.yaml

If you are running Kubernetes < 1.15, use cert-manager-legacy.yaml instead.
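
Before continuing, it is worth confirming that the cert-manager pods came up; pod names and counts will vary per install:

  kubectl get pods -n cert-manager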

Then create a self-signed ClusterIssuer, which will be used to bootstrap the CA:

  apiVersion: cert-manager.io/v1alpha2
  kind: ClusterIssuer
  metadata:
    name: selfsigning
  spec:
    selfSigned: {}
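
Save the manifest and apply it (the filename selfsigning.yaml is just an example); kubectl reports the resource as unchanged here because the issuer had already been applied:

  kubectl apply -f selfsigning.yaml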
  clusterissuer.certmanager.k8s.io/selfsigning unchanged

Next, let’s create the CA for the certs:

  ---
  apiVersion: cert-manager.io/v1alpha2
  kind: Certificate
  metadata:
    name: nats-ca
  spec:
    secretName: nats-ca
    duration: 8736h # 1 year
    renewBefore: 240h # 10 days
    issuerRef:
      name: selfsigning
      kind: ClusterIssuer
    commonName: nats-ca
    usages:
      - cert sign
    organization:
      - Your organization
    isCA: true
  ---
  apiVersion: cert-manager.io/v1alpha2
  kind: Issuer
  metadata:
    name: nats-ca
  spec:
    ca:
      secretName: nats-ca
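
After applying the manifests above, you can confirm that cert-manager issued the CA certificate and created its backing secret (resource names follow the manifests):

  kubectl get certificate nats-ca
  kubectl get secret nats-ca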

Now create the certs that match the DNS name used by the clients to connect. In this case the traffic stays within Kubernetes, so we use the name nats, which is backed by a headless service (a minimal example of such a service is sketched below).
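
For reference, here is what that headless service might look like; the selector label is an assumption, so adjust it to match your actual deployment:

  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: nats
  spec:
    clusterIP: None      # headless: DNS resolves straight to the pod IPs
    selector:
      app: nats          # assumed pod label; match it to your deployment
    ports:
      - name: client
        port: 4222       # NATS client port

With that service name in place, create the server certificate: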

  ---
  apiVersion: cert-manager.io/v1alpha2
  kind: Certificate
  metadata:
    name: nats-server-tls
  spec:
    secretName: nats-server-tls
    duration: 2160h # 90 days
    renewBefore: 240h # 10 days
    issuerRef:
      name: nats-ca
      kind: Issuer
    usages:
      - signing
      - key encipherment
      - server auth
    organization:
      - Your organization
    commonName: nats.default.svc.cluster.local
    dnsNames:
      - nats.default.svc
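
As with the issuer, apply the manifest and confirm the certificate was issued before moving on (the filename is illustrative):

  kubectl apply -f nats-server-tls.yaml
  kubectl get certificate nats-server-tls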

When using the NATS operator, the routes use a service named $YOUR_CLUSTER-mgmt (this may change in the future), so the route certificate needs wildcard names under that service:

  ---
  apiVersion: cert-manager.io/v1alpha2
  kind: Certificate
  metadata:
    name: nats-routes-tls
  spec:
    secretName: nats-routes-tls
    duration: 2160h # 90 days
    renewBefore: 240h # 10 days
    issuerRef:
      name: nats-ca
      kind: Issuer
    usages:
      - signing
      - key encipherment
      - server auth
      - client auth
    organization:
      - Your organization
    commonName: "*.nats-mgmt.default.svc.cluster.local"
    dnsNames:
      - "*.nats-mgmt.default.svc"

Now let’s create an example NATS cluster with the operator:

  apiVersion: "nats.io/v1alpha2"
  kind: "NatsCluster"
  metadata:
    name: "nats"
  spec:
    # Number of nodes in the cluster
    size: 3
    version: "2.1.4"
    tls:
      # Certificates to secure the NATS client connections:
      serverSecret: "nats-server-tls"
      # Name of the CA in serverSecret
      serverSecretCAFileName: "ca.crt"
      # Name of the key in serverSecret
      serverSecretKeyFileName: "tls.key"
      # Name of the certificate in serverSecret
      serverSecretCertFileName: "tls.crt"
      # Certificates to secure the routes.
      routesSecret: "nats-routes-tls"
      # Name of the CA in routesSecret
      routesSecretCAFileName: "ca.crt"
      # Name of the key in routesSecret
      routesSecretKeyFileName: "tls.key"
      # Name of the certificate in routesSecret
      routesSecretCertFileName: "tls.crt"
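
Save the manifest and apply it so that the operator creates the cluster (the filename is illustrative):

  kubectl apply -f nats-cluster.yaml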

Confirm that the pods were deployed:

  kubectl get pods -o wide
  NAME     READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE
  nats-1   1/1     Running   0          4s    172.17.0.8    minikube   <none>
  nats-2   1/1     Running   0          3s    172.17.0.9    minikube   <none>
  nats-3   1/1     Running   0          2s    172.17.0.10   minikube   <none>

Follow the logs:

  kubectl logs nats-1
  [1] 2019/12/18 12:27:23.920417 [INF] Starting nats-server version 2.1.4
  [1] 2019/12/18 12:27:23.920590 [INF] Git commit [not set]
  [1] 2019/12/18 12:27:23.921024 [INF] Listening for client connections on 0.0.0.0:4222
  [1] 2019/12/18 12:27:23.921047 [INF] Server id is NDA6JC3TGEADLLBEPFAQ4BN4PM3WBN237KIXVTFCY3JSTDOSRRVOJCXN
  [1] 2019/12/18 12:27:23.921055 [INF] Server is ready
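
To check that the routes also came up, you can filter the logs for route activity; the exact messages vary by server version, so the pattern below is just a starting point:

  kubectl logs nats-1 | grep -i route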