Local SSD


This tutorial will cover how to deploy YugabyteDB on Kubernetes StatefulSets using locally mounted SSDs as the data disks.

1. Create a gcloud cluster

Each cluster brings up 3 nodes of type n1-standard-1 for the Kubernetes masters. You can directly create a cluster with the desired machine type using the --machine-type option. In this example, we are going to create a node pool with n1-standard-8 nodes for the YugabyteDB universe.
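
If you did want to create the cluster with the larger machine type directly (not the route this tutorial takes), a command along the following lines should work; the cluster name, zone, and node count here are illustrative:

  $ gcloud container clusters create yugabyte \
      --machine-type=n1-standard-8 \
      --num-nodes=3 \
      --zone us-west1-b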

  • Choose the zone

First, choose the zone in which you want to run the cluster. In this tutorial, we are going to deploy the Kubernetes masters using the default machine type n1-standard-1 in the zone us-west1-b, and then add a node pool with the desired node type and node count to deploy the YugabyteDB universe. You can view the list of zones by running the following command:

  $ gcloud compute zones list

  NAME        REGION    STATUS
  ...
  us-west1-b  us-west1  UP
  us-west1-c  us-west1  UP
  us-west1-a  us-west1  UP
  ...
  • Create the gcloud container cluster

Create a Kubernetes cluster on GKE in the desired zone by running the following command:

  $ gcloud container clusters create yugabyte --zone us-west1-b

  Created [https://container.googleapis.com/v1/projects/yugabyte/zones/us-west1-b/clusters/yugabyte].
  • List gcloud container clusters

You can list the available clusters by running the following command.

  $ gcloud container clusters list

  NAME      LOCATION    MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
  yugabyte  us-west1-b  1.8.7-gke.1     35.199.164.253  n1-standard-1  1.8.7-gke.1   3          RUNNING

2. Create a node pool

Create a node pool with 3 nodes, each with 8 CPUs and 2 local SSDs.

  $ gcloud container node-pools create node-pool-8cpu-2ssd \
      --cluster=yugabyte \
      --local-ssd-count=2 \
      --machine-type=n1-standard-8 \
      --num-nodes=3 \
      --zone=us-west1-b

  Created
  NAME                 MACHINE_TYPE   DISK_SIZE_GB  NODE_VERSION
  node-pool-8cpu-2ssd  n1-standard-8  100           1.8.7-gke.1

Note the --local-ssd-count option above, which tells gcloud to provision each node with 2 local SSDs.

You can list all the node pools by running the following command.

  $ gcloud container node-pools list --cluster yugabyte --zone=us-west1-b

  NAME                 MACHINE_TYPE   DISK_SIZE_GB  NODE_VERSION
  default-pool         n1-standard-1  100           1.8.7-gke.1
  node-pool-8cpu-2ssd  n1-standard-8  100           1.8.7-gke.1

You can view details of the node-pool just created by running the following command:

  $ gcloud container node-pools describe node-pool-8cpu-2ssd --cluster yugabyte --zone=us-west1-b

  config:
    diskSizeGb: 100
    imageType: COS
    localSsdCount: 2
    machineType: n1-standard-8
  initialNodeCount: 3
  name: node-pool-8cpu-2ssd

3. Create a YugabyteDB universe

If this is your only container cluster, kubectl automatically points to this cluster. However, if you have multiple clusters, you should switch kubectl to point to this cluster by running the following command:

  $ gcloud container clusters get-credentials yugabyte --zone us-west1-b

  Fetching cluster endpoint and auth data.
  kubeconfig entry generated for yugabyte.
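
If you are not sure which cluster kubectl is currently pointing at, the standard kubectl context commands will confirm it:

  $ kubectl config current-context
  $ kubectl config get-contexts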

You can launch a universe on this node pool to run on local SSDs by running the following command.

  $ kubectl apply -f https://raw.githubusercontent.com/yugabyte/yugabyte-db/master/cloud/kubernetes/yugabyte-statefulset-local-ssd-gke.yaml

  service "yb-masters" created
  service "yb-master-ui" created
  statefulset "yb-master" created
  service "yb-tservers" created
  statefulset "yb-tserver" created

You can inspect the YAML file used to launch a YugabyteDB Kubernetes universe on nodes with local disks.
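
For example, to review the file locally before applying it, you can fetch it with curl (assuming curl and less are available on your machine):

  $ curl -s https://raw.githubusercontent.com/yugabyte/yugabyte-db/master/cloud/kubernetes/yugabyte-statefulset-local-ssd-gke.yaml | less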

Note the following nodeSelector snippet in the YAML file, which instructs the Kubernetes scheduler to place the YugabyteDB pods on nodes that have local disks:

  nodeSelector:
    cloud.google.com/gke-local-ssd: "true"
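
To confirm that this selector can be satisfied, you can list only the nodes carrying that label with a standard kubectl label selector; this should show just the nodes from node-pool-8cpu-2ssd:

  $ kubectl get nodes -l cloud.google.com/gke-local-ssd=true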

Also, note that we instruct the scheduler to place the various pods of the yb-master and yb-tserver services on different physical nodes with the following podAntiAffinity hint:

  spec:
    affinity:
      # Set the anti-affinity selector scope to YB masters.
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - yb-master
            topologyKey: kubernetes.io/hostname
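
Once the pods are up (see the next step), you can check that this hint took effect by listing the pods along with the nodes they were scheduled on; each yb-master and each yb-tserver pod should land on a different node. This uses the standard wide output format of kubectl:

  $ kubectl get pods -o wide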

4. View the universe

You can verify that the YugabyteDB pods are running by doing the following:

  $ kubectl get pods

  NAME          READY  STATUS   RESTARTS  AGE
  yb-master-0   1/1    Running  0         49s
  yb-master-1   1/1    Running  0         49s
  yb-master-2   1/1    Running  0         49s
  yb-tserver-0  1/1    Running  0         48s
  yb-tserver-1  1/1    Running  0         48s
  yb-tserver-2  1/1    Running  0         48s

You can check all the services that are running by doing the following:

  $ kubectl get services

  NAME          TYPE          CLUSTER-IP   EXTERNAL-IP  PORT(S)                              AGE
  kubernetes    ClusterIP     10.7.240.1   <none>       443/TCP                              11m
  yb-master-ui  LoadBalancer  10.7.246.86  XX.XX.XX.XX  7000:30707/TCP                       1m
  yb-masters    ClusterIP     None         <none>       7000/TCP,7100/TCP                    1m
  yb-tservers   ClusterIP     None         <none>       9000/TCP,9100/TCP,9042/TCP,6379/TCP  1m

Note the yb-master-ui service above. It is a LoadBalancer service that exposes the YugabyteDB universe UI. You can view the UI by browsing to the URL http://XX.XX.XX.XX:7000 (substituting the external IP of the service). It should look as follows.

GKE YugabyteDB dashboard
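
If you prefer to fetch the external IP of the yb-master-ui service from the command line rather than reading it off the table above, a jsonpath query like the following should work once the load balancer has been assigned an IP:

  $ kubectl get svc yb-master-ui -o jsonpath='{.status.loadBalancer.ingress[0].ip}'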

5. Connect to the universe

You can connect to one of the tserver pods and verify that the local disks are mounted into the pod.

  $ kubectl exec -it yb-tserver-0 bash

You can observe the local disks by running the following command.

  [root@yb-tserver-0 yugabyte]# df -kh

  Filesystem  Size  Used  Avail  Use%  Mounted on
  ...
  /dev/sdb    369G  70M   350G   1%    /mnt/disk0
  /dev/sdc    369G  69M   350G   1%    /mnt/disk1
  ...
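
The YB-TServer data directories are configured through its --fs_data_dirs flag. Assuming the YAML above points that flag at /mnt/disk0 and /mnt/disk1 (consistent with the mounts shown above), you can confirm this from the pod description:

  $ kubectl describe pod yb-tserver-0 | grep fs_data_dirs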

You can connect to the cqlsh shell on this universe by running the following command.

  $ kubectl exec -it yb-tserver-0 bin/cqlsh

  Connected to local cluster at 127.0.0.1:9042.
  [cqlsh 5.0.1 | Cassandra 3.9-SNAPSHOT | CQL spec 3.4.2 | Native protocol v4]
  Use HELP for help.
  cqlsh> DESCRIBE KEYSPACES;

  system_schema  system_auth  system
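
As a quick sanity check, you can create a keyspace and a table and read a row back. This is a minimal YCQL sketch; the keyspace and table names are arbitrary:

  cqlsh> CREATE KEYSPACE demo;
  cqlsh> CREATE TABLE demo.users (id INT PRIMARY KEY, name TEXT);
  cqlsh> INSERT INTO demo.users (id, name) VALUES (1, 'alice');
  cqlsh> SELECT * FROM demo.users;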

6. Destroy the cluster (optional)

You can destroy the YugabyteDB universe by running the following command. Note that this does not destroy the data, and you may not be able to respawn the cluster because of data left behind on the persistent disks.

  $ kubectl delete -f https://raw.githubusercontent.com/yugabyte/yugabyte-db/master/cloud/kubernetes/yugabyte-statefulset-local-ssd-gke.yaml
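
Depending on how the YAML provisions storage, there may or may not be persistent volume claims left behind; it is harmless to check for leftovers with:

  $ kubectl get pvc
  $ kubectl get pv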

You can destroy the node-pool we created by running the following command:

  $ gcloud container node-pools delete node-pool-8cpu-2ssd --cluster yugabyte --zone=us-west1-b

Finally, you can destroy the entire gcloud container cluster by running the following:

  $ gcloud beta container clusters delete yugabyte --zone us-west1-b