Access TiDB Dashboard

TiDB Dashboard is a visualized tool introduced in TiDB v4.0 that helps you monitor and diagnose TiDB clusters. For details, see TiDB Dashboard.

This document describes how to access TiDB Dashboard in Kubernetes.

Note

Due to the special environment of Kubernetes, some features of TiDB Dashboard are not supported in TiDB Operator. See Unsupported TiDB Dashboard features for details.

This document shows how to access TiDB Dashboard through the Discovery service. TiDB Operator starts a Discovery service for each TiDB cluster. The Discovery service returns the startup parameters for each PD Pod to support the startup of the PD cluster, and it also proxies requests to TiDB Dashboard.

Warning

TiDB Dashboard is available under the /dashboard path of PD. Other PD paths outside /dashboard might not have access control.

Prerequisites

To access TiDB Dashboard smoothly in Kubernetes, you need TiDB Operator v1.1.1 or later and a TiDB cluster of v4.0.1 or later.

You need to configure the TidbCluster object file as follows to enable quick access to TiDB Dashboard:

```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic
spec:
  pd:
    enableDashboardInternalProxy: true
```

Method 1. Access TiDB Dashboard by port forward

Warning

This section shows how to quickly access TiDB Dashboard. Do NOT use this method in production environments. For production environments, refer to Access TiDB Dashboard by Ingress.

TiDB Dashboard is built into the PD component in TiDB 4.0 and later versions. You can refer to the following example to quickly deploy a TiDB cluster in Kubernetes.

  1. Deploy the following example .yaml file into the Kubernetes cluster by running the kubectl apply -f command:

    ```yaml
    apiVersion: pingcap.com/v1alpha1
    kind: TidbCluster
    metadata:
      name: basic
    spec:
      version: v5.4.0
      timezone: UTC
      pvReclaimPolicy: Delete
      pd:
        enableDashboardInternalProxy: true
        baseImage: pingcap/pd
        maxFailoverCount: 0
        replicas: 1
        requests:
          storage: "10Gi"
        config: {}
      tikv:
        baseImage: pingcap/tikv
        maxFailoverCount: 0
        replicas: 1
        requests:
          storage: "100Gi"
        config: {}
      tidb:
        baseImage: pingcap/tidb
        maxFailoverCount: 0
        replicas: 1
        service:
          type: ClusterIP
        config: {}
    ```
  2. After the cluster is created, expose TiDB Dashboard to the local machine by running the following command:

    ```shell
    kubectl port-forward svc/basic-discovery -n ${namespace} 10262:10262
    ```

    By default, port-forward binds to the IP address 127.0.0.1. To access the forwarded port from a machine other than the one running the port-forward command, add the --address option and specify the IP address to bind to (for example, `kubectl port-forward --address 0.0.0.0 svc/basic-discovery -n ${namespace} 10262:10262`).

  3. Visit http://localhost:10262/dashboard in your browser to access TiDB Dashboard.

Method 2. Access TiDB Dashboard by Ingress

In production environments, it is recommended to expose the TiDB Dashboard service using Ingress.

Prerequisites

Before using Ingress, install an Ingress controller in your Kubernetes cluster. Otherwise, creating Ingress resources alone has no effect.

To deploy the Ingress controller, refer to ingress-nginx. You can also choose from various Ingress controllers.

Use Ingress

You can expose the TiDB Dashboard service outside the Kubernetes cluster by using Ingress. In this way, the service can be accessed outside Kubernetes via http/https. For more details, see Ingress.

The following example shows how to access TiDB Dashboard using Ingress:

  1. Deploy the following .yaml file to the Kubernetes cluster by running the kubectl apply -f command:

    ```yaml
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: access-dashboard
      namespace: ${namespace}
    spec:
      rules:
      - host: ${host}
        http:
          paths:
          - backend:
              serviceName: ${cluster_name}-discovery
              servicePort: 10262
            path: /dashboard
    ```
  2. After Ingress is deployed, you can access TiDB Dashboard via http://${host}/dashboard outside the Kubernetes cluster.
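The example above uses the extensions/v1beta1 API, which is removed in Kubernetes v1.22. On newer clusters, an equivalent Ingress can be sketched with the networking.k8s.io/v1 API (assuming the same Discovery service and port):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: access-dashboard
  namespace: ${namespace}
spec:
  rules:
  - host: ${host}
    http:
      paths:
      - path: /dashboard
        pathType: Prefix    # matches /dashboard and all sub-paths
        backend:
          service:
            name: ${cluster_name}-discovery
            port:
              number: 10262
```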

Use Ingress with TLS

Ingress supports TLS. For details, see Ingress TLS. The following example shows how to use Ingress with TLS:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: access-dashboard
  namespace: ${namespace}
spec:
  tls:
  - hosts:
    - ${host}
    secretName: testsecret-tls
  rules:
  - host: ${host}
    http:
      paths:
      - backend:
          serviceName: ${cluster_name}-discovery
          servicePort: 10262
        path: /dashboard
```

In the above file, the testsecret-tls Secret contains the tls.crt and tls.key needed for ${host}. You can create such a Secret with kubectl create secret tls or define it manually.

This is an example of testsecret-tls:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: testsecret-tls
  namespace: default
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
type: kubernetes.io/tls
```

After the Ingress is deployed, visit https://${host}/dashboard to access TiDB Dashboard.
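The TLS example above also uses the extensions/v1beta1 API, which is removed in Kubernetes v1.22. On newer clusters, the same TLS Ingress can be sketched with the networking.k8s.io/v1 API:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: access-dashboard
  namespace: ${namespace}
spec:
  tls:
  - hosts:
    - ${host}
    secretName: testsecret-tls
  rules:
  - host: ${host}
    http:
      paths:
      - path: /dashboard
        pathType: Prefix
        backend:
          service:
            name: ${cluster_name}-discovery
            port:
              number: 10262
```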

Method 3. Use NodePort Service

Because Ingress can only be accessed with a domain name, it might be difficult to use Ingress in some scenarios. In this case, you can add a Service of the NodePort type to access and use TiDB Dashboard.

The following is a .yaml example of a NodePort Service that exposes TiDB Dashboard. Deploy it to the Kubernetes cluster by running the kubectl apply -f command:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: access-dashboard
  namespace: ${namespace}
spec:
  ports:
  - name: dashboard
    port: 10262
    protocol: TCP
    targetPort: 10262
  type: NodePort
  selector:
    app.kubernetes.io/component: discovery
    app.kubernetes.io/instance: ${cluster_name}
    app.kubernetes.io/name: tidb-cluster
```

After deploying the Service, you can access TiDB Dashboard via https://${nodeIP}:${nodePort}/dashboard. By default, nodePort is randomly assigned by Kubernetes. You can also specify an available port in the .yaml file.
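If you want a fixed port instead of a randomly assigned one, you can set the nodePort field explicitly. A sketch, assuming port 30262 is unused in your cluster:

```yaml
spec:
  type: NodePort
  ports:
  - name: dashboard
    port: 10262
    protocol: TCP
    targetPort: 10262
    # nodePort must be free and within the service-node-port-range (default 30000-32767)
    nodePort: 30262
```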

Note that if there is more than one PD Pod in the cluster, you need to set spec.pd.enableDashboardInternalProxy: true in the TidbCluster CR to ensure normal access to TiDB Dashboard.

Enable Continuous Profiling

With Continuous Profiling, you can collect continuous performance data of TiDB, PD, TiKV, and TiFlash instances, and have the nodes monitored day and night without restarting any of them. The data collected can be displayed in various forms, for example, on a flame graph or a directed acyclic graph. The data displayed visually shows what internal operations are performed on the instances during the performance profiling period and the corresponding proportions. With such data, you can quickly learn the CPU resource consumption of these instances.

To enable this feature, you need to deploy TidbNGMonitoring CR using TiDB Operator v1.3.0 or later versions.

  1. Deploy TidbMonitor CR.

  2. Deploy TidbNGMonitoring CR.

    Run the following command to deploy the TidbNGMonitoring CR. In this command, ${ns} is the namespace of the TidbNGMonitoring CR, ${name} is the name of the TidbNGMonitoring CR, ${cluster_name} is the name of the TidbCluster CR, and ${cluster_ns} is the namespace of that CR.

    ```shell
    cat << EOF | kubectl apply -n ${ns} -f -
    apiVersion: pingcap.com/v1alpha1
    kind: TidbNGMonitoring
    metadata:
      name: ${name}
    spec:
      clusters:
      - name: ${cluster_name}
        namespace: ${cluster_ns}
      ngMonitoring:
        requests:
          storage: 10Gi
        version: v5.4.0
        # storageClassName: default
        baseImage: pingcap/ng-monitoring
    EOF
    ```

    For more configuration items of the TidbNGMonitoring CR, see the example in tidb-operator.

  3. Enable Continuous Profiling.

    1. On TiDB Dashboard, click Advanced Debugging > Profiling Instances > Continuous Profiling.

    2. In the displayed window, click Open Settings. Switch on the button under Enable Feature on the right. Modify the value of Retention Duration as required or retain the default value.

    3. Click Save to enable this feature.


For more operations of the Continuous Profiling function, see TiDB Dashboard Instance Profiling - Continuous Profiling.

Unsupported TiDB Dashboard features

Due to the special environment of Kubernetes, some features of TiDB Dashboard are not supported in TiDB Operator, including:

  • In Overview -> Monitor & Alert -> View Metrics, the link does not direct to the Grafana monitoring dashboard. If you need to access Grafana, refer to Access the Grafana monitoring dashboard.

  • The log search feature is unavailable. If you need to view the log of a component, run kubectl logs ${pod_name} -n ${namespace}. You can also view logs using the log service of the Kubernetes cluster.

  • In Cluster Info -> Hosts, Disk Usage is not displayed correctly. You can view the disk usage of each component in the component dashboards of the TidbMonitor dashboard. You can also view the disk usage of Kubernetes nodes by deploying a Kubernetes host monitoring system.