Create universe - Multi-region

This section describes how to create a universe spanning multiple geographic regions. In this example, we are first going to deploy a universe across Oregon (US-West), Northern Virginia (US-East), and Tokyo (Asia-Pacific). Once ready, we are going to connect to each node and perform the following:

  • Run the CassandraKeyValue workload
  • Write data with global consistency (higher latencies because we chose nodes in faraway regions)
  • Read data from the local data center (low-latency, timeline-consistent reads)
  • Verify the latencies of the overall app

1. Create the universe

To create a multi-region universe on the GCP cloud provider, click Create Universe and enter the following intent.

  • Enter a universe name: helloworld2
  • Enter the set of regions: Oregon, Northern Virginia, Tokyo
  • Change instance type: n1-standard-8
  • Add the following G-Flag for Master and T-Server: leader_failure_max_missed_heartbeat_periods = 10. Since the data is globally replicated, RPC latencies are higher, so we use this flag to increase the failure detection interval for such a deployment (a rough calculation follows this list). See the screenshot below.
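
To get a feel for what this flag controls: a leader is considered failed after missing this many consecutive heartbeat periods. The following back-of-the-envelope calculation assumes the default raft_heartbeat_interval_ms of 500 ms, which you should verify for your build.

  # failure-detection window = heartbeat interval x missed periods (assumed defaults)
  $ echo $(( 500 * 10 ))   # 5000 ms, i.e. ~5 seconds before a leader election is triggered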

Click Create.

Create Multi-Region Universe on GCP

2. Examine the universe

Wait for the universe to be created. Note that YugaWare can manage multiple universes, as shown below.

Multiple Universes in YugaWare

Once the universe is created, you should see something like the screenshot below in the universe overview.

Nodes for a Pending Universe

Universe nodes

You can browse to the nodes tab of the universe to see a list of nodes. Note that the nodes are spread across the different geographic regions.

Nodes for a Pending Universe

Browse to the cloud provider’s instances page. In this example, since we are using Google Cloud Platform as the cloud provider, browse to Compute Engine -> VM Instances and search for instances that have helloworld2 in their name. You should see something like the following, making it easy to verify that the instances were created in the appropriate regions.

Instances for a Pending Universe
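
If you prefer the command line, roughly the following gcloud invocation should list the same instances, assuming the Cloud SDK is installed and authenticated against the correct project (the filter expression is illustrative):

  # List VM instances whose names contain "helloworld2", along with their zones.
  $ gcloud compute instances list --filter="name~helloworld2"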

3. Run a global app

In this section, we are going to connect to each node and perform the following:

  • Run the CassandraKeyValue workload
  • Write data with global consistency (higher latencies because we chose nodes in faraway regions)
  • Read data from the local data center (low-latency, timeline-consistent reads)

Browse to the nodes tab to find the nodes and click on the Connect button. This should bring up a dialog showing how to connect to the nodes.

Multi-region universe nodes

Connect to the nodes

Open three Bash terminals and connect to each of the nodes by running the commands shown in the dialog above. We are going to start a workload from each of the nodes. Below is a screenshot of the terminals.

Multi-region universe node terminals
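
The exact command comes from the Connect dialog; it is typically an ssh invocation of roughly the following shape. The key path, user, and IP below are placeholders, not values to copy.

  # Illustrative only - use the exact command shown in the Connect dialog.
  $ sudo ssh -i <path-to-universe-key>.pem centos@<node-private-ip>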

On each of the terminals, do the following.

  • Install Java.

    $ sudo yum install java-1.8.0-openjdk.x86_64 -y

  • Switch to the yugabyte user.

    $ sudo su - yugabyte

  • Export the YCQL_ENDPOINTS environment variable, as described below.

Export an environment variable containing the IP addresses of the nodes in the cluster. Browse to the universe overview tab in YugaWare and click on the YCQL Endpoints link. This should open a new tab with a list of IP addresses.

YCQL end points

Export this into a shell variable on each of the database nodes we connected to (yb-dev-helloworld2-n1 and so on). Remember to replace the IP addresses below with those shown by YugaWare.

  $ export YCQL_ENDPOINTS="10.138.0.3:9042,10.138.0.4:9042,10.138.0.5:9042"
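
As an optional sanity check before starting the workload, you can verify that each endpoint is reachable from the node. This one-liner assumes nc (netcat) is available on the host:

  # Probe each YCQL endpoint on its port; prints the endpoints that respond.
  $ for ep in ${YCQL_ENDPOINTS//,/ }; do nc -z -w 2 "${ep%:*}" "${ep#*:}" && echo "$ep reachable"; done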

Run the workload

Run the following command on each of the nodes. Remember to substitute <REGION> with the region code for each node.

  $ java -jar /home/yugabyte/tserver/java/yb-sample-apps.jar \
      --workload CassandraKeyValue \
      --nodes $YCQL_ENDPOINTS \
      --num_threads_write 1 \
      --num_threads_read 32 \
      --num_unique_keys 10000000 \
      --local_reads \
      --with_local_dc <REGION>

You can find the region codes for each of the nodes by browsing to the nodes tab for this universe in YugaWare. A screenshot is shown below. In this example, the value for <REGION> is:

  • us-east4 for node yb-dev-helloworld2-n1
  • asia-northeast1 for node yb-dev-helloworld2-n2
  • us-west1 for node yb-dev-helloworld2-n3

Region Codes For Universe Nodes
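
For example, on node yb-dev-helloworld2-n3 the fully substituted command is:

  $ java -jar /home/yugabyte/tserver/java/yb-sample-apps.jar \
      --workload CassandraKeyValue \
      --nodes $YCQL_ENDPOINTS \
      --num_threads_write 1 \
      --num_threads_read 32 \
      --num_unique_keys 10000000 \
      --local_reads \
      --with_local_dc us-west1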

4. Check the performance characteristics of the app

Recall that we expect the app to have the following characteristics based on its deployment configuration:

  • Global consistency on writes, which causes higher latencies because each write must be replicated across multiple geographic regions (a rough latency sketch follows this list).
  • Low latency reads from the nearest data center, which offers timeline consistency (similar to async replication).
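
To see why writes pay a cross-region penalty, recall that a write commits only once a majority of replicas acknowledge it. The sketch below is a rough, illustrative estimate; the RTT figure is an assumption, not a measurement from this universe.

  # With replicas in us-west1, us-east4, and asia-northeast1, a leader in
  # us-west1 reaches a 2-of-3 majority after one ack from the nearer remote
  # region (us-east4), so that round trip bounds the commit latency from below.
  $ CROSS_US_RTT_MS=65   # assumed typical us-west1 <-> us-east4 round trip
  $ echo "write latency >= ~${CROSS_US_RTT_MS} ms, plus replication and processing overhead"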

Let us verify this by browsing to the metrics tab of the universe in YugaWare to see the overall performance of the app. It should look similar to the screenshot below.

YCQL Load Metrics

Note the following:

  • Write latency is 139 ms because each write has to be replicated to a quorum of nodes across multiple geographic regions.
  • Read latency is 0.23 ms across all regions. Note that the app is performing about 100K reads/sec across the regions (roughly 33K reads/sec in each region).

You can repeat the same experiment with the RedisKeyValue app and expect similar results.
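
As a starting point, the Redis variant of the command might look like the sketch below. YEDIS_ENDPOINTS is a variable name introduced here for illustration; export it from the universe's Redis endpoints (default port 6379) the same way YCQL_ENDPOINTS was exported above. Flags controlling local reads may differ for this workload, so check the app's --help output.

  # Illustrative sketch; YEDIS_ENDPOINTS is a hypothetical variable holding
  # the universe's Redis endpoints, e.g. "10.138.0.3:6379,...".
  $ java -jar /home/yugabyte/tserver/java/yb-sample-apps.jar \
      --workload RedisKeyValue \
      --nodes $YEDIS_ENDPOINTS \
      --num_threads_write 1 \
      --num_threads_read 32 \
      --num_unique_keys 10000000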