Deploying NATS with Helm

The NATS Helm charts can be used to deploy a StatefulSet of NATS servers using Helm templates that are easy to extend. Using Helm 3, you can add the NATS Helm repo and install the chart as follows:

  helm repo add nats https://nats-io.github.io/k8s/helm/charts/
  helm install my-nats nats/nats
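
After the install completes, you can confirm that the server pods are running. This assumes the default app.kubernetes.io/name=nats label applied by the chart (the same label used by the load balancer selector later on this page):

  kubectl get pods -l app.kubernetes.io/name=nats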

Configuration

Server Image

  nats:
    image: nats:2.4.0
    pullPolicy: IfNotPresent
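
Each of the snippets in this Configuration section is a fragment of the chart's values. One way to apply them, assuming you keep your overrides in a local values.yaml file, is to pass the file on install or upgrade:

  helm upgrade --install my-nats nats/nats -f values.yaml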

Limits

  nats:
    # The number of connect attempts against discovered routes.
    connectRetries: 30

    # How many seconds should pass before sending a PING
    # to a client that has no activity.
    pingInterval:

    # Server settings.
    limits:
      maxConnections:
      maxSubscriptions:
      maxControlLine:
      maxPayload:
      writeDeadline:
      maxPending:
      maxPings:
      lameDuckDuration:

    # Number of seconds to wait for client connections to end after the pod termination is requested
    terminationGracePeriodSeconds: 60

Logging

Note: Enabling trace or debug logging in production is not recommended, since it significantly degrades performance.

  nats:
    logging:
      debug:
      trace:
      logtime:
      connectErrorReports:
      reconnectErrorReports:
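
For example, to temporarily turn on debug logging for an existing release, a sketch using helm upgrade with --set (remember to turn it back off, per the note above):

  helm upgrade my-nats nats/nats --reuse-values --set nats.logging.debug=true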

TLS setup for client connections

You can find more on how to set up and troubleshoot TLS connections at: nats-server/configuration/securing_nats/tls

  nats:
    tls:
      secret:
        name: nats-client-tls
      ca: "ca.crt"
      cert: "tls.crt"
      key: "tls.key"
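
The referenced secret has to exist before installing the chart. A minimal sketch for creating it with kubectl, assuming the CA, certificate, and key are available locally as ca.crt, tls.crt, and tls.key:

  kubectl create secret generic nats-client-tls \
    --from-file=ca.crt --from-file=tls.crt --from-file=tls.key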

Clustering

If clustering is enabled, then a 3-node cluster will be set up. More info at: nats-server/configuration/clustering#nats-server-clustering

  cluster:
    enabled: true
    replicas: 3
    tls:
      secret:
        name: nats-server-tls
      ca: "ca.crt"
      cert: "tls.crt"
      key: "tls.key"

Example:

  $ helm install nats nats/nats --set cluster.enabled=true

Leafnodes

Leafnode connections can be used to extend a cluster. More info at: nats-server/configuration/leafnodes

  leafnodes:
    enabled: true
    remotes:
      - url: "tls://connect.ngs.global:7422"
        # credentials:
        #   secret:
        #     name: leafnode-creds
        #     key: TA.creds
        # tls:
        #   secret:
        #     name: nats-leafnode-tls
        #   ca: "ca.crt"
        #   cert: "tls.crt"
        #   key: "tls.key"

    #######################
    #                     #
    #  TLS Configuration  #
    #                     #
    #######################
    #
    # You can find more on how to set up and troubleshoot TLS connections at:
    #
    # https://docs.nats.io/nats-server/configuration/securing_nats/tls
    #
    tls:
      secret:
        name: nats-client-tls
      ca: "ca.crt"
      cert: "tls.crt"
      key: "tls.key"
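
If you uncomment the credentials block above, the referenced secret needs to exist first. A sketch of creating it, assuming the NGS credentials file is available locally as TA.creds:

  kubectl create secret generic leafnode-creds --from-file=TA.creds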

Websocket Configuration

  websocket:
    enabled: true
    port: 443
    tls:
      secret:
        name: nats-tls
      cert: "fullchain.pem"
      key: "privkey.pem"
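
Clients that support WebSocket can then connect over a wss:// URL. A sketch using the nats CLI (assuming it is built with WebSocket support and that nats.example.com is a placeholder for the external address of your deployment):

  nats pub -s wss://nats.example.com:443 greeting "hello"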

Setting up External Access

Using HostPorts

When both external access and advertisements are enabled, an initializer container is used to gather the public IPs. This container requires an RBAC policy sufficient to look up the public IP of the node where it is running.

For example, to set up external access for a cluster and advertise the public IP to clients:

  nats:
    # Toggle whether to enable external access.
    # This binds a host port for clients, gateways and leafnodes.
    externalAccess: true

    # Toggle to disable client advertisements (connect_urls),
    # in case of running behind a load balancer (which is not recommended)
    # it might be required to disable advertisements.
    advertise: true

    # In case both external access and advertise are enabled
    # then a service account would be required to be able to
    # gather the public IP from a node.
    serviceAccount: "nats-server"

The service account named nats-server would then need an RBAC policy such as the following:

  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: nats-server
    namespace: default
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: nats-server
  rules:
  - apiGroups: [""]
    resources:
    - nodes
    verbs: ["get"]
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: nats-server-binding
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: nats-server
  subjects:
  - kind: ServiceAccount
    name: nats-server
    namespace: default
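
Putting it together, with the RBAC manifest above saved locally (the file name rbac.yaml is only an example), external access could be enabled like this:

  kubectl apply -f rbac.yaml
  helm install my-nats nats/nats \
    --set nats.externalAccess=true \
    --set nats.advertise=true \
    --set nats.serviceAccount=nats-server \
    --set cluster.enabled=true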

The container image of the initializer can be customized via:

  bootconfig:
    image: natsio/nats-boot-config:latest
    pullPolicy: IfNotPresent

Using LoadBalancers

When using a load balancer for external access, it is recommended to enable the no_advertise option (noAdvertise: true) so that internal IPs from the NATS servers are not advertised to clients connecting through the load balancer.

  nats:
    image: nats:alpine

  cluster:
    enabled: true
    noAdvertise: true

  leafnodes:
    enabled: true
    noAdvertise: true

  natsbox:
    enabled: true

You could then use an L4-enabled load balancer to connect to NATS, for example:

  apiVersion: v1
  kind: Service
  metadata:
    name: nats-lb
  spec:
    type: LoadBalancer
    selector:
      app.kubernetes.io/name: nats
    ports:
      - protocol: TCP
        port: 4222
        targetPort: 4222
        name: nats
      - protocol: TCP
        port: 7422
        targetPort: 7422
        name: leafnodes
      - protocol: TCP
        port: 7522
        targetPort: 7522
        name: gateways
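
You can apply the Service and then wait for the cloud provider to allocate an external address (the file name nats-lb.yaml is only an example):

  kubectl apply -f nats-lb.yaml
  kubectl get svc nats-lb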

Gateways

A supercluster can be formed by pointing to remote gateways. You can find more about gateways in the NATS documentation: nats-server/configuration/gateways.

  gateway:
    enabled: false
    name: 'default'

    #############################
    #                           #
    #  List of remote gateways  #
    #                           #
    #############################
    # gateways:
    #   - name: other
    #     url: nats://my-gateway-url:7522

    #######################
    #                     #
    #  TLS Configuration  #
    #                     #
    #######################
    #
    # You can find more on how to set up and troubleshoot TLS connections at:
    #
    # https://docs.nats.io/nats-server/configuration/securing_nats/tls
    #
    # tls:
    #   secret:
    #     name: nats-client-tls
    #   ca: "ca.crt"
    #   cert: "tls.crt"
    #   key: "tls.key"
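
Since the remote gateways are a list, setting them from the command line uses Helm's indexed --set syntax. A sketch based on the commented values above:

  helm upgrade my-nats nats/nats --reuse-values \
    --set gateway.enabled=true \
    --set gateway.name=default \
    --set "gateway.gateways[0].name=other" \
    --set "gateway.gateways[0].url=nats://my-gateway-url:7522"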

Auth setup

Auth with a Memory Resolver

  auth:
    enabled: true

    # Reference to the Operator JWT.
    operatorjwt:
      configMap:
        name: operator-jwt
        key: KO.jwt

    # Public key of the System Account
    systemAccount:

    resolver:
      ##############################
      #                            #
      #  Memory resolver settings  #
      #                            #
      ##############################
      type: memory

      # Use a configmap reference which will be mounted
      # into the container.
      configMap:
        name: nats-accounts
        key: resolver.conf
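
The referenced ConfigMaps have to exist before the release is installed. A sketch of creating them with kubectl, assuming the operator JWT and the generated resolver preload are available locally as KO.jwt and resolver.conf:

  kubectl create configmap operator-jwt --from-file=KO.jwt
  kubectl create configmap nats-accounts --from-file=resolver.conf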

Auth using an Account Server Resolver

  auth:
    enabled: true

    # Reference to the Operator JWT.
    operatorjwt:
      configMap:
        name: operator-jwt
        key: KO.jwt

    # Public key of the System Account
    systemAccount:

    resolver:
      ###########################
      #                         #
      #  URL resolver settings  #
      #                         #
      ###########################
      type: URL
      url: "http://nats-account-server:9090/jwt/v1/accounts/"

JetStream

Setting up Memory and File Storage

  nats:
    image: nats:alpine

    jetstream:
      enabled: true

      memStorage:
        enabled: true
        size: 2Gi

      fileStorage:
        enabled: true
        size: 1Gi
        storageDirectory: /data/
        storageClassName: default
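
To confirm that JetStream is enabled, you can run the nats CLI from the nats-box container (see the NATS Box section below). A sketch assuming the release is named my-nats, so the nats-box Deployment is named my-nats-box:

  kubectl exec -it deployment/my-nats-box -- nats account info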

Using with an existing PersistentVolumeClaim

For example, given the following PersistentVolumeClaim:

  ---
  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: nats-js-disk
    annotations:
      volume.beta.kubernetes.io/storage-class: "default"
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 3Gi

You can start JetStream so that one pod is bound to it:

  nats:
    image: nats:alpine

    jetstream:
      enabled: true

      fileStorage:
        enabled: true
        storageDirectory: /data/
        existingClaim: nats-js-disk
        claimStorageSize: 3Gi

Clustering example

  nats:
    image: nats:alpine

    jetstream:
      enabled: true

      memStorage:
        enabled: true
        size: "2Gi"

      fileStorage:
        enabled: true
        size: "1Gi"
        storageDirectory: /data/
        storageClassName: default

  cluster:
    enabled: true
    # Cluster name is required; by default it will be the release name.
    # name: "nats"
    replicas: 3

Misc

NATS Box

A lightweight container with NATS and NATS Streaming utilities deployed alongside the cluster to confirm the setup. You can find the image at: https://github.com/nats-io/nats-box

  natsbox:
    enabled: true
    image: nats:alpine
    pullPolicy: IfNotPresent

    # credentials:
    #   secret:
    #     name: nats-sys-creds
    #     key: sys.creds
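
For example, assuming the release is named my-nats (so the deployment is named my-nats-box), you can open a shell in the nats-box pod and do a quick publish/subscribe round trip:

  # Open a shell inside the nats-box pod.
  kubectl exec -it deployment/my-nats-box -- /bin/sh -l

  # Then, from inside the pod:
  nats sub test &
  nats pub test "hello"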

Configuration Reload sidecar

The NATS config reloader image to use:

  reloader:
    enabled: true
    image: natsio/nats-server-config-reloader:latest
    pullPolicy: IfNotPresent

Prometheus Exporter sidecar

You can toggle whether to start the sidecar used to feed metrics to Prometheus:

  exporter:
    enabled: true
    image: natsio/prometheus-nats-exporter:latest
    pullPolicy: IfNotPresent
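
The exporter is assumed here to listen on port 7777 (the prometheus-nats-exporter default); you can spot-check the metrics endpoint with a port-forward, assuming the first server pod of a release named my-nats:

  kubectl port-forward pod/my-nats-0 7777:7777
  curl http://localhost:7777/metrics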

Prometheus operator ServiceMonitor support

You can enable the Prometheus operator ServiceMonitor:

  exporter:
    # You have to enable exporter first
    enabled: true
    serviceMonitor:
      enabled: true
      ## Specify the namespace where Prometheus Operator is running
      # namespace: monitoring
      # ...

Pod Customizations

Security Context

  # Toggle whether to set up a Pod Security Context.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
  securityContext:
    fsGroup: 1000
    runAsUser: 1000
    runAsNonRoot: true

Affinity

https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity

The matchExpressions must be configured according to your setup:

  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node.kubernetes.io/purpose
            operator: In
            values:
            - nats

    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nats
            - stan
        topologyKey: "kubernetes.io/hostname"

Service topology

Service topology is disabled by default but can be enabled by setting topologyKeys. For example:

  topologyKeys:
    - "kubernetes.io/hostname"
    - "topology.kubernetes.io/zone"
    - "topology.kubernetes.io/region"

CPU/Memory Resource Requests/Limits

Sets the pods' CPU/memory requests and limits:

  nats:
    resources:
      requests:
        cpu: 2
        memory: 4Gi
      limits:
        cpu: 4
        memory: 6Gi

No resources are set by default.

Annotations

https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations

  podAnnotations:
    key1: "value1"
    key2: "value2"

Name Overrides

You can change the name of the resources as needed with:

  nameOverride: "my-nats"

Image Pull Secrets

  imagePullSecrets:
  - name: myRegistry

Adds this to the StatefulSet:

  spec:
    imagePullSecrets:
    - name: myRegistry