Using YDB in Kubernetes

Monitoring

For convenience, YDB provides standard mechanisms for collecting logs and metrics.

Logs are written to the standard stdout and stderr streams and can be redirected with popular log collection solutions. We recommend using a combination of Fluentd and Elastic Stack.
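If you deploy Fluentd for log collection, a minimal configuration might look like the sketch below: a ConfigMap embedding a fluent.conf that tails the YDB container log files and forwards them to Elasticsearch. The resource names, namespace, log path, parser, and the elasticsearch.logging.svc endpoint are assumptions to adapt to your own Fluentd deployment (typically a DaemonSet).

apiVersion: v1
kind: ConfigMap
metadata:
  # hypothetical names; adjust to your logging setup
  name: fluentd-ydb-config
  namespace: logging
data:
  fluent.conf: |
    # tail the log files written by the container runtime for YDB pods
    <source>
      @type tail
      path /var/log/containers/ydb-*.log
      pos_file /var/log/fluentd-ydb.pos
      tag ydb.*
      read_from_head true
      <parse>
        # choose the parser that matches your container runtime log format
        @type none
      </parse>
    </source>
    # forward everything tagged ydb.* to Elasticsearch
    <match ydb.**>
      @type elasticsearch
      host elasticsearch.logging.svc
      port 9200
      logstash_format true
    </match>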

For collecting metrics, ydb-controller provides ServiceMonitor resources, which can be scraped by kube-prometheus-stack.
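Note that kube-prometheus-stack by default only selects ServiceMonitor objects labeled with its own Helm release, so the ServiceMonitor resources created by ydb-controller may be ignored out of the box. The values sketch below relaxes this restriction; the keys follow the kube-prometheus-stack chart and should be verified against the chart version you install.

# values.yaml fragment for the kube-prometheus-stack Helm chart (sketch)
prometheus:
  prometheusSpec:
    # select ServiceMonitor objects from any release, including those
    # created by ydb-controller
    serviceMonitorSelectorNilUsesHelmValues: false
    # an empty selector watches ServiceMonitors in all namespaces
    serviceMonitorNamespaceSelector: {}

After installing or upgrading kube-prometheus-stack with these values, the YDB metrics endpoints should appear among the Prometheus targets.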

Description of YDB controller resources

Storage resource

apiVersion: ydb.tech/v1alpha1
kind: Storage
metadata:
  # make sure you specify this name when creating a database
  name: storage-sample
spec:
  # you can specify either the YDB version or the container name
  # image:
  #   name: "cr.yandex/ydb/ydb:stable-21-4-14"
  version: 21.4.30
  # the number of cluster pods
  nodes: 8
  # https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
  # specifying disk resources for cluster pods
  dataStore:
    volumeMode: Block
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 80Gi
  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
  # limiting the YDB cluster pod resources
  resources:
    limits:
      cpu: 2
      memory: 8Gi
    requests:
      cpu: 2
      memory: 8Gi
  # https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
  # selector of the cluster nodes that the YDB pods can run on
  # nodeSelector:
  #   network: fast
  # https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
  # specifying which node taints the YDB pods can tolerate when being scheduled
  # tolerations:
  #   - key: "example-key"
  #     operator: "Exists"
  #     effect: "NoSchedule"
  # https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
  # instructing the scheduler to distribute the YDB pods evenly across the nodes
  # affinity:
  #   podAntiAffinity:
  #     preferredDuringSchedulingIgnoredDuringExecution:
  #       - weight: 100
  #         podAffinityTerm:
  #           labelSelector:
  #             matchExpressions:
  #               - key: app.kubernetes.io/instance
  #                 operator: In
  #                 values:
  #                   - ydb
  #           topologyKey: kubernetes.io/hostname


Database resource

apiVersion: ydb.tech/v1alpha1
kind: Database
metadata:
  # the name to be used when creating a database => `/root/database-sample`
  name: database-sample
spec:
  # you can specify either the YDB version or the container name
  # image:
  #   name: "cr.yandex/ydb/ydb:stable-21-4-14"
  version: 21.4.30
  # the number of database pods
  nodes: 6
  # reference to the YDB storage cluster; the name corresponds to the Helm release name
  storageClusterRef:
    name: ydb
    namespace: default
  # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
  # limiting the YDB dynamic pod resources
  resources:
    limits:
      cpu: 2
      memory: 8Gi
    requests:
      cpu: 2
      memory: 8Gi
  # https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
  # selector of the cluster nodes that the YDB pods can run on
  # nodeSelector:
  #   network: fast
  # https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
  # specifying which node taints the YDB pods can tolerate when being scheduled
  # tolerations:
  #   - key: "example-key"
  #     operator: "Exists"
  #     effect: "NoSchedule"
  # https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
  # instructing the scheduler to distribute the YDB pods evenly across the nodes
  # affinity:
  #   podAntiAffinity:
  #     preferredDuringSchedulingIgnoredDuringExecution:
  #       - weight: 100
  #         podAffinityTerm:
  #           labelSelector:
  #             matchExpressions:
  #               - key: app.kubernetes.io/instance
  #                 operator: In
  #                 values:
  #                   - ydb
  #           topologyKey: kubernetes.io/hostname


Allocating resources

You can limit resource consumption for each YDB pod. If the limit values are left empty, a pod can consume all of the CPU and RAM of its host VM, which may cause undesirable effects. We recommend that you always specify the resource limits explicitly.
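For example, the following spec fragment (mirroring the manifests above; the values are illustrative and should be sized for your workload) pins each YDB pod to 2 CPU cores and 8 GiB of RAM:

# fragment of a Storage or Database spec
resources:
  limits:
    cpu: 2
    memory: 8Gi
  requests:
    cpu: 2
    memory: 8Gi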

To learn more about resource allocation and limits, see the Kubernetes documentation.