Create Local Cluster

Attention: This page documents an earlier version. Go to the latest (v2.1) version.

After installing YugabyteDB, follow the instructions below to create a local cluster.

1. Create a 3-node cluster with replication factor 3

We will use the yb-ctl utility located in the bin directory of the YugabyteDB package to create and administer a local cluster. The default data directory is /tmp/yugabyte-local-cluster. You can change this directory with the --datadir option. Detailed output for the create command is available in the yb-ctl Reference.

  $ ./bin/yb-ctl create

You can now check /tmp/yugabyte-local-cluster to see the node-i directories that were created, where i represents the node_id of the node. Inside each such directory there are two disk directories, disk1 and disk2, which highlight the fact that YugabyteDB can work with multiple disks at the same time. Note that the IP address of node-i is set to 127.0.0.i by default.
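As a quick sanity check, you can list the data directory from the shell. This is a minimal sketch assuming the default /tmp/yugabyte-local-cluster location; the exact listing is illustrative and may include additional files.

  $ ls /tmp/yugabyte-local-cluster
  node-1  node-2  node-3
  $ ls /tmp/yugabyte-local-cluster/node-1
  disk1  disk2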

2. Check cluster status with yb-ctl

Run the command below to see that we now have 3 yb-master processes and 3 yb-tserver processes running on this localhost. The roles played by these processes in a YugabyteDB cluster (aka Universe) are explained in detail here.

  $ ./bin/yb-ctl status
  2018-05-10 09:06:58,821 INFO: Server is running: type=master, node_id=1, PID=5243, admin service=http://127.0.0.1:7000
  2018-05-10 09:06:58,845 INFO: Server is running: type=master, node_id=2, PID=5246, admin service=http://127.0.0.2:7000
  2018-05-10 09:06:58,871 INFO: Server is running: type=master, node_id=3, PID=5249, admin service=http://127.0.0.3:7000
  2018-05-10 09:06:58,897 INFO: Server is running: type=tserver, node_id=1, PID=5252, admin service=http://127.0.0.1:9000, cql service=127.0.0.1:9042, redis service=127.0.0.1:6379, pgsql service=127.0.0.1:5433
  2018-05-10 09:06:58,922 INFO: Server is running: type=tserver, node_id=2, PID=5255, admin service=http://127.0.0.2:9000, cql service=127.0.0.2:9042, redis service=127.0.0.2:6379, pgsql service=127.0.0.2:5433
  2018-05-10 09:06:58,945 INFO: Server is running: type=tserver, node_id=3, PID=5258, admin service=http://127.0.0.3:9000, cql service=127.0.0.3:9042, redis service=127.0.0.3:6379, pgsql service=127.0.0.3:5433

3. Check cluster status with Admin UI

Node 1’s master Admin UI is available at http://127.0.0.1:7000 and the tserver Admin UI is available at http://127.0.0.1:9000. You can visit the other nodes’ Admin UIs by using their corresponding IP addresses.
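If you prefer the command line, you can verify from the shell that both UIs respond before opening them in a browser. This is a minimal sketch using curl against the ports listed above; an HTTP 200 indicates the UI is reachable.

  $ curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:7000
  $ curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:9000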

3.1 Overview and Master status

Node 1’s master Admin UI home page shows that we have a cluster (aka Universe) with a Replication Factor of 3 and Num Nodes (TServers) of 3. Num User Tables is 0 since no user tables have been created yet. The YugabyteDB version number is also shown for your reference.

[master-home screenshot]

The Masters section highlights the 3 masters along with their corresponding cloud, region and zone placement.

3.2 TServer status

Clicking See all nodes takes us to the Tablet Servers page, where we can observe the 3 tservers along with the time since they last connected to this master via their regular heartbeats. Since no user tables have been created yet, the Load (Num Tablets) is 0 across all 3 tservers. As new tables get added, new tablets (aka shards) will automatically be created and distributed evenly across all the available tablet servers.

[master-home screenshot]

1. Create a 3-node cluster with replication factor 3

We will use the yb-docker-ctl utility downloaded in the previous step to create and administer a containerized local cluster. Detailed output for the create command is available in the yb-docker-ctl Reference.

  $ ./yb-docker-ctl create

Clients can now connect to YugabyteDB’s Cassandra-compatible YCQL API at localhost:9042 and to the Redis-compatible YEDIS API at localhost:6379.
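For example, if you have the standard cqlsh and redis-cli clients available on your machine (an assumption; they are not part of this setup), you can point them at the ports above.

  $ cqlsh localhost 9042
  $ redis-cli -h localhost -p 6379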

2. Check cluster status with yb-docker-ctl

Run the command below to see that we now have 3 yb-master containers (yb-master-n1, yb-master-n2, yb-master-n3) and 3 yb-tserver containers (yb-tserver-n1, yb-tserver-n2, yb-tserver-n3) running on this localhost. The roles played by these containers in a YugabyteDB cluster (aka Universe) are explained in detail here.

  $ ./yb-docker-ctl status
  PID     Type      Node    URL                       Status    Started At
  26132   tserver   n3      http://172.18.0.7:9000    Running   2017-10-20T17:54:54.99459154Z
  25965   tserver   n2      http://172.18.0.6:9000    Running   2017-10-20T17:54:54.412377451Z
  25846   tserver   n1      http://172.18.0.5:9000    Running   2017-10-20T17:54:53.806993683Z
  25660   master    n3      http://172.18.0.4:7000    Running   2017-10-20T17:54:53.197652566Z
  25549   master    n2      http://172.18.0.3:7000    Running   2017-10-20T17:54:52.640188158Z
  25438   master    n1      http://172.18.0.2:7000    Running   2017-10-20T17:54:52.084772289Z

3. Check cluster status with Admin UI

The yb-master-n1 Admin UI is available at http://localhost:7000 and the yb-tserver-n1 Admin UI is available at http://localhost:9000. Other masters and tservers do not have their admin ports mapped to localhost to avoid port conflicts.

NOTE: Clients connecting to the cluster will connect only to yb-tserver-n1. With Docker for Mac, routing traffic directly to the containers is not currently possible. Since only 1 node receives the incoming client traffic, the throughput of Docker-based local clusters can be significantly lower than that of binary-based local clusters.
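To see which container ports are actually published to the host, you can inspect the running containers. This is a minimal sketch using docker ps; only the n1 containers should show host-published ports.

  $ docker ps --format "table {{.Names}}\t{{.Ports}}"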

3.1 Overview and Master status

The yb-master-n1 home page shows that we have a cluster (aka Universe) with a Replication Factor of 3 and Num Nodes (TServers) of 3. Num User Tables is 0 since no user tables have been created yet. The YugabyteDB version number is also shown for your reference.

[master-home screenshot]

The Masters section highlights the 3 masters along with their corresponding cloud, region and zone placement.

3.2 TServer status

Clicking See all nodes takes us to the Tablet Servers page, where we can observe the 3 tservers along with the time since they last connected to this master via their regular heartbeats. Additionally, we can see that the Load (Num Tablets) is balanced across all 3 tservers. These tablets are the shards of the user tables currently managed by the cluster (in this case, the system_redis.redis table). As new tables get added, new tablets will automatically be created and distributed evenly across all the available tablet servers.

[master-home screenshot]

1. Create a 3-node cluster with replication factor 3

Run the following command to create the cluster.

  $ kubectl apply -f yugabyte-statefulset.yaml
  service "yb-masters" created
  statefulset "yb-master" created
  service "yb-tservers" created
  statefulset "yb-tserver" created

2. Check cluster status

Run the command below to see that we now have two StatefulSets with 3 pods each: 3 yb-master pods (yb-master-0, yb-master-1, yb-master-2) and 3 yb-tserver pods (yb-tserver-0, yb-tserver-1, yb-tserver-2). The roles played by these pods in a YugabyteDB cluster (aka Universe) are explained in detail here.

  $ kubectl get pods
  NAME           READY     STATUS              RESTARTS   AGE
  yb-master-0    0/1       ContainerCreating   0          5s
  yb-master-1    0/1       ContainerCreating   0          5s
  yb-master-2    1/1       Running             0          5s
  yb-tserver-0   0/1       ContainerCreating   0          4s
  yb-tserver-1   0/1       ContainerCreating   0          4s
  yb-tserver-2   0/1       ContainerCreating   0          4s

Eventually, all the pods will be in the Running state.

  $ kubectl get pods
  NAME           READY     STATUS    RESTARTS   AGE
  yb-master-0    1/1       Running   0          13s
  yb-master-1    1/1       Running   0          13s
  yb-master-2    1/1       Running   0          13s
  yb-tserver-0   1/1       Running   1          12s
  yb-tserver-1   1/1       Running   1          12s
  yb-tserver-2   1/1       Running   1          12s
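You can also confirm the two StatefulSets directly. The names yb-master and yb-tserver come from the manifest applied above; the output columns vary by kubectl version, but each StatefulSet should report 3 ready replicas.

  $ kubectl get statefulsets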

3. Initialize the Redis-compatible YEDIS API

Initialize the Redis-compatible YEDIS API in the YugabyteDB Universe we just set up by running the following yb-admin command.

  $ kubectl exec -it yb-master-0 /home/yugabyte/bin/yb-admin -- --master_addresses yb-master-0.yb-masters.default.svc.cluster.local:7100,yb-master-1.yb-masters.default.svc.cluster.local:7100,yb-master-2.yb-masters.default.svc.cluster.local:7100 setup_redis_table
  ...
  I0127 19:38:10.358551 115 client.cc:1292] Created table system_redis.redis of type REDIS_TABLE_TYPE
  I0127 19:38:10.358872 115 yb-admin_client.cc:400] Table 'system_redis.redis' created.

Clients can now connect to this YugabyteDB universe using the Cassandra and Redis APIs on ports 9042 and 6379 respectively.
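For example, you can exec into one of the tserver pods and use the client binaries shipped in the image. This is a minimal sketch that assumes cqlsh and redis-cli sit alongside yb-admin under /home/yugabyte/bin; adjust the paths if your image differs.

  $ kubectl exec -it yb-tserver-0 -- /home/yugabyte/bin/cqlsh yb-tserver-0
  $ kubectl exec -it yb-tserver-0 -- /home/yugabyte/bin/redis-cli -h yb-tserver-0 -p 6379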

4. Check cluster status via Kubernetes

You can see the status of the 3 services by running the following command.

  $ kubectl get services
  NAME           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                               AGE
  kubernetes     ClusterIP      10.96.0.1       <none>        443/TCP                               10m
  yb-master-ui   LoadBalancer   10.102.121.64   <pending>     7000:31283/TCP                        8m
  yb-masters     ClusterIP      None            <none>        7000/TCP,7100/TCP                     8m
  yb-tservers    ClusterIP      None            <none>        9000/TCP,9100/TCP,9042/TCP,6379/TCP   8m

5. Check cluster status with Admin UI

To do this, we need to access the UI on port 7000 exposed by any of the pods in the yb-master service (one of yb-master-0, yb-master-1, or yb-master-2). We do so by finding the URL for the yb-master-ui LoadBalancer service.

  $ minikube service yb-master-ui --url
  http://192.168.99.100:31283

The yb-master-0 Admin UI is now available at the above URL.
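If the minikube URL is not convenient, or you are not using minikube, you can instead forward the master UI port to your machine with kubectl port-forward (a minimal alternative sketch; the pod name comes from the output above) and then open http://localhost:7000 while the port-forward is running.

  $ kubectl port-forward yb-master-0 7000:7000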

5.1 Overview and Master status

The yb-master-0 home page shows that we have a cluster (aka Universe) with a Replication Factor of 3 and Num Nodes (TServers) of 3. Num User Tables is 0 since no user tables have been created yet. The YugabyteDB version is also shown for your reference.

[master-home screenshot]

The Masters section highlights the 3 masters along with their corresponding cloud, region and zone placement.

5.2 TServer status

Clicking See all nodes takes us to the Tablet Servers page, where we can observe the 3 tservers along with the time since they last connected to this master via their regular heartbeats. Additionally, we can see that the Load (Num Tablets) is balanced across all 3 tservers. These tablets are the shards of the user tables currently managed by the cluster (in this case, the system_redis.redis table). As new tables get added, new tablets will automatically be created and distributed evenly across all the available tablet servers.

[master-home screenshot]