Configure cloud providers

This section details how to configure cloud providers for YugabyteDB using the YugaWare Admin Console. If no cloud providers are configured in YugaWare yet, the main Dashboard page highlights the need to configure at least one cloud provider.

Configure Cloud Provider

Prerequisites

Public cloud

If you plan to run YugabyteDB nodes on public cloud providers, such as Amazon Web Services (AWS) or Google Cloud Platform (GCP), all you need to provide in the YugaWare UI is your cloud provider credentials. YugaWare uses those credentials to automatically provision and de-provision instances that run YugabyteDB. An instance for YugabyteDB includes a compute instance as well as local or remote disk storage attached to it.

Private cloud or on-premises data centers

The prerequisites for Yugabyte Platform data nodes are the same as those for YugabyteDB.

Configure cloud providers

Configuring YugaWare to deploy universes in AWS provides several knobs for you to tweak, depending on your preferences:

AWS Empty Provider

Provider name

This is an internal tag used for organizing your providers, so you know where you want to deploy your YugabyteDB universes.

Credentials

In order to actually deploy YugabyteDB nodes in your AWS account, YugaWare requires access to a set of cloud credentials. These can be provided in one of the following ways:

  • Directly provide your AWS Access Key ID and Secret Access Key.
  • Use an IAM role attached to the YugaWare host machine, if it is running on an EC2 instance.

KeyPairs

In order to be able to provision EC2 instances with YugabyteDB, YugaWare requires SSH access to them. To that end, there are two options to choose from:

  • Allow YugaWare to create and manage KeyPairs. In this mode, YugaWare creates KeyPairs across all the regions you choose to set up and stores the relevant private key locally, in order to SSH into future EC2 instances.
  • Use your own already existing KeyPairs. For this you will need to provide the name of the KeyPair, as well as the private key content and the corresponding SSH user. Note that currently, all of this information must be the same across all the regions you choose to provision!

Enabling Hosted Zones

Integrating with hosted zones can make YugabyteDB universes easily discoverable. YugaWare can integrate with Route53 to provide managed CNAME entries for your YugabyteDB universes; these entries are updated as you change the set of nodes, so that they always include all the relevant nodes for each of your universes.

Global deployment

For deployment, YugaWare aims to provide you with easy access to the many regions that AWS makes available globally. To that end, it allows you to select which regions you wish to deploy to and supports two different ways of configuring your setup, based on your environment:

YugaWare-managed configuration

If you choose to allow YugaWare to configure, own and manage a full cross-region deployment of VPCs, it will generate a YugabyteDB-specific VPC in each selected region, then interconnect them, as well as the VPC in which YugaWare was deployed, through VPC Peering. This mode also sets up all the other relevant sub-components in all regions, such as Subnets, Security Groups and Routing Table entries. Some notes:

  • You can optionally provide a custom CIDR block for each regional VPC (see the example following this list); otherwise we choose sensible defaults internally, aiming to not overlap across regions.
  • You can optionally provide a custom AMI ID to use in each region; otherwise we use a recent AWS Marketplace CentOS AMI.
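
For illustration only, the following is a minimal sketch of what non-overlapping custom CIDR choices could look like. The region names and blocks below are hypothetical examples, not the defaults YugaWare picks:

    # Hypothetical, non-overlapping CIDR blocks, one per selected region.
    # Any blocks work as long as they do not overlap with each other or
    # with the VPC in which YugaWare itself is deployed.
    us-west-2: "10.1.0.0/16"
    us-east-1: "10.2.0.0/16"
    eu-west-1: "10.3.0.0/16"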

New Region Modal

Self-managed configuration

If you wish to use your own custom VPCs, this is also supported. This gives you the greatest level of customization over your VPC setup:

  • You must provide a VPC ID to use for each region.
  • You must provide a Security Group ID to use for each region. This will be attached to all YugabyteDB nodes and must allow traffic from all other YugabyteDB nodes, even across regions, if you deploy across multiple regions.
  • You must provide the mapping of what Subnet IDs to use for each Availability Zone in which you wish to be able to deploy. This is required to ensure YugaWare can deploy nodes in the correct network isolation that you desire in your environment.
  • You can optionally provide a custom AMI ID to use in each region; otherwise we use a recent AWS Marketplace CentOS AMI.

Custom Region Modal

One really important note if you choose to provide your own VPC information: it is your responsibility to have preconfigured networking connectivity! In the case of a single-region deployment, this might simply be a matter of region- or VPC-local Security Groups. However, across regions, the setup can get quite complex. We suggest using the VPC Peering feature of AWS, such that you can set up private IP connectivity between nodes across regions (a sketch follows the list below):

  • VPC Peering Connections must be established in an N x N matrix, such that every VPC in every region you configure must be peered to every other VPC in every other region.
  • Routing Table entries in every regional VPC should route traffic to every other VPC CIDR block across the Peering Connection to that respective VPC. This must match the Subnets that you provided during the configuration step.
  • Security Groups in each VPC can be hardened by only opening up the relevant ports to the CIDR blocks of the VPCs from which you are expecting traffic.
  • Lastly, if you deploy YugaWare in a different VPC than the ones in which you intend to deploy YugabyteDB nodes, then YugaWare's own VPC must also be part of this cross-region VPC mesh. In addition, you must set up Routing Table entries in the source VPC (YugaWare's) and allow one further CIDR block (or public IP) ingress rule on the Security Groups for the YugabyteDB nodes, to allow traffic from YugaWare or its VPC.
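
To make the networking requirements above concrete, here is a minimal CloudFormation-style sketch of one direction of such a mesh: a peering connection from a VPC in one region to a VPC in another, a route to the peer's CIDR block, and a Security Group ingress rule for one YugabyteDB port. This is not something YugaWare generates for you, and all IDs, CIDR blocks and region names are hypothetical placeholders. You would repeat equivalent resources for every region pair and every port your deployment needs (for example 22, 7000, 7100, 9000, 9100, 5433, 9042 and 6379), and the peering request may still need to be accepted on the peer side depending on your account setup.

    # Hypothetical sketch: one direction of the cross-region mesh only.
    Resources:
      PeerUsWestToUsEast:
        Type: AWS::EC2::VPCPeeringConnection
        Properties:
          VpcId: vpc-0aaa0000000000aaa         # YugabyteDB VPC in us-west-2 (placeholder)
          PeerVpcId: vpc-0bbb0000000000bbb     # YugabyteDB VPC in us-east-1 (placeholder)
          PeerRegion: us-east-1                # cross-region peering request
      RouteToUsEast:
        Type: AWS::EC2::Route
        Properties:
          RouteTableId: rtb-0ccc0000000000ccc  # route table of the us-west-2 VPC (placeholder)
          DestinationCidrBlock: 10.2.0.0/16    # CIDR block of the us-east-1 VPC
          VpcPeeringConnectionId: !Ref PeerUsWestToUsEast
      AllowMasterRpcFromUsEast:
        Type: AWS::EC2::SecurityGroupIngress
        Properties:
          GroupId: sg-0ddd0000000000ddd        # Security Group on the us-west-2 nodes (placeholder)
          IpProtocol: tcp
          FromPort: 7100                       # yb-master RPC port; repeat for the other ports
          ToPort: 7100
          CidrIp: 10.2.0.0/16                  # only allow traffic from the peer VPC's CIDR block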

Final notes

If you allow YugaWare to manage KeyPairs for you and you deploy multiple YugaWare instances across your environment, then the AWS Provider name should be unique for each instance of YugaWare integrating with a given AWS Account.

Marketplace acceptance

Finally, if you did not provide your own custom AMI IDs, then before you can proceed to creating a universe, you need to check that you can actually spin up EC2 instances with our default AMIs. Our reference AMIs come from a Marketplace CentOS 7 product. Visit that link while logged into your AWS account and click the Continue to Subscribe button at the top right.

If you are not already subscribed and have thus not accepted the Terms and Conditions, then you should see something like this:

Marketplace accept

If so, please click the Accept Terms button and wait for the page to switch to a successful state. You should see the following once the operation completes, or if you had already subscribed and accepted the terms:

Marketplace success

Now we are ready to create a YugabyteDB universe on AWS.

Go to the Configuration nav on the left side and then click on the GCP tab. You should see something like this:

GCP Configuration -- empty

Fill in the required fields and you should see something like this:

GCP Configuration -- full

Take note of the following for configuring your GCP provider:

  • Give this provider a relevant name. We recommend something that contains Google or GCP in it, especially if you will be configuring other providers as well.

  • Upload the JSON file that you obtained when you created your service account as per the Initial Setup (a redacted skeleton of this file is shown after this list).

  • Assuming this is a new deployment, we recommend creating a new VPC specifically for YugabyteDB nodes. You must ensure that the YugaWare host machine is able to connect to your Google Cloud account where this new VPC will be created. Alternatively, you can choose to specify an existing VPC for YugabyteDB nodes. A third option, available only when your YugaWare host machine is also running on Google Cloud, is to use the same VPC that the YugaWare host machine runs in.

  • Finally, click Save and give it a couple of minutes, as it will need to do a bit of work in the background. This includes generating a new VPC, a network, subnetworks in all available regions, as well as a new firewall rule, VPC peering for network connectivity, and a custom SSH KeyPair for YugaWare-to-YugabyteDB connectivity.
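
For reference, the service account key you upload is the standard JSON key file that Google Cloud generates for a service account. A redacted skeleton looks roughly like the following; every value here is a placeholder:

    {
      "type": "service_account",
      "project_id": "my-gcp-project",
      "private_key_id": "<redacted>",
      "private_key": "-----BEGIN PRIVATE KEY-----\n<redacted>\n-----END PRIVATE KEY-----\n",
      "client_email": "yugaware@my-gcp-project.iam.gserviceaccount.com",
      "client_id": "<redacted>",
      "auth_uri": "https://accounts.google.com/o/oauth2/auth",
      "token_uri": "https://oauth2.googleapis.com/token",
      "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
      "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/yugaware%40my-gcp-project.iam.gserviceaccount.com"
    }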

Note: Choosing to use the same VPC as YugaWare is an advanced option, which currently assumes that you are in complete control over this VPC and will be responsible for setting up the networking, SSH access and firewall rules for it!

The following shows the steps involved in creating this cloud provider.

GCP Configuration -- in progress

If all went well, you should see something like:

GCP Configuration -- success

Now we are ready to create a YugabyteDB universe on GCP.

Support for Microsoft Azure in Yugabyte Platform is currently in the works. For now, we recommend treating Microsoft Azure as an On-Premises Datacenter.

Pick appropriate k8s tab

For Kubernetes, you have two options: Pivotal Container Service or Managed Kubernetes Service. Depending on which one you are using, click on the appropriate tab.

K8s Configuration -- Tabs

Once you go to the appropriate tab, you should see a configuration form something like this:

K8s Configuration -- empty

Select the Kubernetes provider type from the Type dropdown. In the case of Pivotal Container Service, this field defaults to that option.

Configure the provider

Take note of the following for configuring your K8s provider:

  • Give your config a meaningful name.

  • Service Account: provide the name of the service account which has the necessary access to manage the cluster (refer to Create Service Account).

  • Kube Config: there are two ways to specify the kube config for an Availability Zone.

    • Specify it at the provider level in the provider form as shown above. If specified, this config file will be used for all AZs in all regions.
    • Specify it at the zone level inside the region form as described below; this is especially needed for multi-AZ or multi-region deployments.
  • Image Registry: specifies where to pull the YugabyteDB image from. Leave this at the default unless you are hosting the registry yourself.

  • Pull Secret: the Enterprise YugabyteDB image is in a private repo, so you need to upload the pull secret in order to download the image. Your sales representative should have provided this secret.

A filled in form looks something like this:

K8s Configuration -- filled

Configure the region/zones

Click on Add Region to open the modal.

  • Specify a Region and the dialog will expand to show the zone form.

  • Zone: enter a zone label. Keep in mind that this label should match your failure domain zone label, failure-domain.beta.kubernetes.io/zone.

  • Storage Class is optional. It takes a comma-delimited value; if not specified, it defaults to standard. Please make sure this storage class exists in your Kubernetes cluster.

  • Kube Config is optional if specified at the provider level; otherwise it is required.

K8s Configuration -- zone config

  • Overrides is optional. If not specified, Yugabyte Platform uses the defaults specified inside the Helm chart. The following examples show some common overrides.

  • Overrides to add Service level annotations:

    serviceEndpoints:
      - name: "yb-master-service"
        type: "LoadBalancer"
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
        app: "yb-master"
        ports:
          ui: "7000"
      - name: "yb-tserver-service"
        type: "LoadBalancer"
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
        app: "yb-tserver"
        ports:
          ycql-port: "9042"
          yedis-port: "6379"
          ysql-port: "5433"
  • Overrides to disable LoadBalancer:

    enableLoadBalancer: False
  • Overrides to change the cluster domain name:

    domainName: my.cluster
  • Overrides to add annotations at StatefulSet level:

    networkAnnotation:
      annotation1: 'foo'
      annotation2: 'bar'

Add a new zone by clicking the Add Zone button at the bottom left of the zone form.

Your form may have multiple AZs, as shown below.

K8s Configuration -- region

Click Add Region to add the region and close the modal.

Hit Save to save the configuration. If successful, it will redirect you to the table view of all configs.

Configure On-Premises Datacenter Provider

On-Premises Datacenter Provider Configuration in Progress

On-Premises Datacenter Provider Configured Successfully

Next step

You are now ready to create YugabyteDB universes as outlined in the next section.