Getting Started with kops on AWS

Make sure you have installed kops and kubectl.

Setup your environment

AWS

In order to correctly prepare your AWS account for kops, we require you to install the AWS CLI tools, and have API credentials for an account that has the permissions to create a new IAM user for kops later in the guide.

Once you've installed the AWS CLI tools and have correctly set up your system to use the official AWS methods of registering security credentials as defined here, we'll be ready to run kops, as it uses the Go AWS SDK.

Setup IAM user

In order to build clusters within AWS, we'll create a dedicated IAM user for kops. This user requires API credentials in order to use kops. Create the user, and credentials, using the AWS console.

The kops user will require the following IAM permissions to function properly:

```
AmazonEC2FullAccess
AmazonRoute53FullAccess
AmazonS3FullAccess
IAMFullAccess
AmazonVPCFullAccess
```

You can create the kops IAM user from the command line using the following:

```shell
aws iam create-group --group-name kops

aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops

aws iam create-user --user-name kops

aws iam add-user-to-group --user-name kops --group-name kops

aws iam create-access-key --user-name kops
```

You should record the SecretAccessKey and AccessKeyID in the returned JSON output, and then use them below:

```shell
# configure the aws client to use your new IAM user
aws configure           # Use your new access and secret key here
aws iam list-users      # you should see a list of all your IAM users here

# Because "aws configure" doesn't export these vars for kops to use, we export them now
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
```
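If you prefer to script that step instead, the two values can be pulled out of the `aws iam create-access-key` JSON directly. A minimal sketch using sed and fake sample values (with jq installed, `jq -r .AccessKey.AccessKeyId` does the same job):

```shell
# Fake sample of the JSON returned by `aws iam create-access-key` (values are NOT real)
cat > /tmp/kops-access-key.json <<'EOF'
{ "AccessKey": { "UserName": "kops", "AccessKeyId": "AKIAEXAMPLEKEY", "SecretAccessKey": "examplesecret" } }
EOF

# Extract the two fields from the single-line JSON
KEY_ID=$(sed -n 's/.*"AccessKeyId": "\([^"]*\)".*/\1/p' /tmp/kops-access-key.json)
SECRET_KEY=$(sed -n 's/.*"SecretAccessKey": "\([^"]*\)".*/\1/p' /tmp/kops-access-key.json)
echo "$KEY_ID"
```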

Configure DNS

Note: If you are using kops 1.6.2 or later, then DNS configuration is optional. Instead, a gossip-based cluster can be easily created. The only requirement to trigger this is to have the cluster name end with .k8s.local. If a gossip-based cluster is created, then you can skip this section.
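The trigger is purely the name suffix. A tiny illustrative helper (not part of kops itself) showing the rule:

```shell
# Illustrative only: kops selects gossip mode when the cluster name ends in .k8s.local
is_gossip_cluster() {
  case "$1" in
    *.k8s.local) echo "gossip" ;;
    *)           echo "dns" ;;
  esac
}

is_gossip_cluster myfirstcluster.k8s.local    # -> gossip
is_gossip_cluster myfirstcluster.example.com  # -> dns
```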

In order to build a Kubernetes cluster with kops, we need to prepare somewhere to build the required DNS records. There are three scenarios below, and you should choose the one that most closely matches your AWS situation.

Scenario 1a: A Domain purchased/hosted via AWS

If you bought your domain with AWS, then you should already have a hosted zone in Route53. If you plan to use this domain then no more work is needed.

In this example you own example.com and your records for Kubernetes would look like etcd-us-east-1c.internal.clustername.example.com

Scenario 1b: A subdomain under a domain purchased/hosted via AWS

In this scenario you want to contain all Kubernetes records under a subdomain of a domain you host in Route53. This requires creating a second hosted zone in Route53, and then setting up route delegation to the new zone.

In this example you own example.com and your records for Kubernetes would look like etcd-us-east-1c.internal.clustername.subdomain.example.com

This entails copying the NS servers of your SUBDOMAIN up to the PARENT domain in Route53. To do this you should:

  • Create the subdomain, and note your SUBDOMAIN name servers (if you have already done this, you can also retrieve the values):

```shell
# Note: This example assumes you have jq installed locally.
ID=$(uuidgen) && aws route53 create-hosted-zone --name subdomain.example.com --caller-reference $ID | \
  jq .DelegationSet.NameServers
```

  • Note your PARENT hosted zone id:

```shell
# Note: This example assumes you have jq installed locally.
aws route53 list-hosted-zones | jq '.HostedZones[] | select(.Name=="example.com.") | .Id'
```

  • Create a new JSON file with your values (subdomain.json)

Note: The NS values here are for the SUBDOMAIN

```json
{
  "Comment": "Create a subdomain NS record in the parent domain",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "subdomain.example.com",
        "Type": "NS",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "ns-1.awsdns-1.co.uk" },
          { "Value": "ns-2.awsdns-2.org" },
          { "Value": "ns-3.awsdns-3.com" },
          { "Value": "ns-4.awsdns-4.net" }
        ]
      }
    }
  ]
}
```
  • Apply the SUBDOMAIN NS records to the PARENT hosted zone:

```shell
aws route53 change-resource-record-sets \
  --hosted-zone-id <parent-zone-id> \
  --change-batch file://subdomain.json
```

Now traffic to *.subdomain.example.com will be routed to the correct subdomain hosted zone in Route53.
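The manual JSON editing above can also be scripted. A sketch that assembles subdomain.json from a whitespace-separated list of name servers (the NS values here are placeholders; substitute the ones returned for your own subdomain):

```shell
# Placeholder NS values -- use the ones returned for YOUR subdomain
NS_SERVERS="ns-1.awsdns-1.co.uk ns-2.awsdns-2.org ns-3.awsdns-3.com ns-4.awsdns-4.net"

{
  printf '{"Comment":"Create a subdomain NS record in the parent domain","Changes":[{"Action":"CREATE",'
  printf '"ResourceRecordSet":{"Name":"subdomain.example.com","Type":"NS","TTL":300,"ResourceRecords":['
  sep=""
  for ns in $NS_SERVERS; do
    printf '%s{"Value":"%s"}' "$sep" "$ns"   # one ResourceRecords entry per name server
    sep=","
  done
  printf ']}}]}\n'
} > subdomain.json

cat subdomain.json
```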

Scenario 2: Setting up Route53 for a domain purchased with another registrar

If you bought your domain elsewhere, and would like to dedicate the entire domain to AWS, you should follow the guide here.

Scenario 3: Subdomain for clusters in Route53, leaving the domain at another registrar

If you bought your domain elsewhere, but only want to use a subdomain in AWS Route53, you must modify your registrar's NS (NameServer) records. We'll create a hosted zone in Route53, and then migrate the subdomain's NS records to your other registrar.

You might need to grab jq for some of these instructions.

  • Create the subdomain, and note your name servers (if you have already done this, you can also retrieve the values):

```shell
ID=$(uuidgen) && aws route53 create-hosted-zone --name subdomain.example.com --caller-reference $ID | jq .DelegationSet.NameServers
```
  • You will now go to your registrar's page and log in. You will need to create a new SUBDOMAIN, and use the 4 NS records received from the above command for the new SUBDOMAIN. This MUST be done in order to use your cluster. Do NOT change your top level NS record, or you might take your site offline.

  • Information on adding NS records with Godaddy.com

  • Information on adding NS records with Google Cloud Platform

Using Public/Private DNS (Kops 1.5+)

By default the assumption is that NS records are publicly available. If you require private DNS records, you should modify the commands we run later in this guide to include:

```shell
kops create cluster --dns private $NAME
```

If you have a mix of public and private zones, you will also need to include the --dns-zone argument with the hosted zone id you wish to deploy in:

```shell
kops create cluster --dns private --dns-zone ZABCDEFG $NAME
```

Testing your DNS setup

This section is not required if a gossip-based cluster is created.

You should now be able to dig your domain (or subdomain) and see the AWS name servers on the other end.

```shell
dig ns subdomain.example.com
```

Should return something similar to:

```
;; ANSWER SECTION:
subdomain.example.com.  172800  IN  NS  ns-1.awsdns-1.net.
subdomain.example.com.  172800  IN  NS  ns-2.awsdns-2.org.
subdomain.example.com.  172800  IN  NS  ns-3.awsdns-3.com.
subdomain.example.com.  172800  IN  NS  ns-4.awsdns-4.co.uk.
```

This is a critical component of setting up clusters. If you are experiencing problems with the Kubernetes API not coming up, chances are something is wrong with the cluster's DNS.

Please DO NOT MOVE ON until you have validated your NS records! This is not required if a gossip-based cluster is created.
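As a sketch, that validation can be scripted by checking for awsdns name servers in the dig answer. The helper below is illustrative only and runs on sample input rather than a live query (a real run would pipe in `dig +short ns subdomain.example.com`):

```shell
# Reads `dig +short ns <domain>` output on stdin; prints "ok" when the answer
# contains AWS name servers, "missing" otherwise
validate_ns_output() {
  grep -q 'awsdns' && echo "ok" || echo "missing"
}

# Sample output in place of a live dig query
printf 'ns-1.awsdns-1.net.\nns-2.awsdns-2.org.\n' | validate_ns_output
```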

Cluster State storage

In order to store the state of your cluster, and the representation of your cluster, we need to create a dedicated S3 bucket for kops to use. This bucket will become the source of truth for our cluster configuration. In this guide we'll call this bucket example-com-state-store, but you should add a custom prefix as bucket names need to be unique.

We recommend keeping the creation of this bucket confined to us-east-1; otherwise more work will be required.

```shell
aws s3api create-bucket \
  --bucket prefix-example-com-state-store \
  --region us-east-1
```

Note: S3 requires --create-bucket-configuration LocationConstraint=<region> for regions other than us-east-1.
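For example, creating the bucket in us-west-2 would need that extra flag. The sketch below only composes and prints the command rather than calling AWS, so you can see the full invocation:

```shell
# Compose (without executing) the create-bucket call for a non-us-east-1 region
REGION=us-west-2
BUCKET=prefix-example-com-state-store
CMD="aws s3api create-bucket --bucket $BUCKET --region $REGION --create-bucket-configuration LocationConstraint=$REGION"
echo "$CMD"
```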

Note: We STRONGLY recommend versioning your S3 bucket in case you ever need to revert or recover a previous state store.

```shell
aws s3api put-bucket-versioning --bucket prefix-example-com-state-store --versioning-configuration Status=Enabled
```

The cluster state store location must be provided when using the kops CLI; see state store for further information.

Using S3 default bucket encryption

kops supports default bucket encryption to encrypt the kops state in an S3 bucket. This way, whatever default server-side encryption is set for your bucket will be used for the kops state, too. You may want to use this AWS feature, e.g., to easily encrypt every written object by default, or when for compliance reasons you need to use specific encryption keys (KMS, CMK).

If your S3 bucket has default encryption set up, kops will use it:

```shell
aws s3api put-bucket-encryption --bucket prefix-example-com-state-store --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
```

If the default encryption is not set or it cannot be checked, kops will resort to using client side AES256 encryption.

Sharing an S3 bucket across multiple accounts

It is possible to use a single S3 bucket for storing kops state for clusters located in different accounts, by using cross-account bucket policies.

Kops will be able to use buckets configured with cross-account policies by default.
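As an illustration only (the account ID, bucket name, and action list below are assumptions for the sketch, not values kops requires), a cross-account bucket policy might be written out and attached like this:

```shell
# Hypothetical cross-account policy granting a second account (111111111111)
# access to the state bucket. Adjust the principal, bucket name, and actions
# to your own setup before attaching it.
cat > bucket-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "KopsStateCrossAccount",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": [
        "arn:aws:s3:::prefix-example-com-state-store",
        "arn:aws:s3:::prefix-example-com-state-store/*"
      ]
    }
  ]
}
EOF

# It would then be attached with:
#   aws s3api put-bucket-policy --bucket prefix-example-com-state-store --policy file://bucket-policy.json
```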

In this case you may want to override the object ACLs which kops places on the state files, as default AWS ACLs will make it possible for an account that has delegated access to write files that the bucket owner cannot read.

To do this you should set the environment variable KOPS_STATE_S3_ACL to the preferred object ACL, for example bucket-owner-full-control.
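For example (the ACL name is one of S3's canned ACLs):

```shell
# Have kops write state objects with an ACL the bucket owner can always read
export KOPS_STATE_S3_ACL=bucket-owner-full-control
echo "$KOPS_STATE_S3_ACL"
```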

For available canned ACLs please consult Amazon's S3 documentation.

Creating your first cluster

Prepare local environment

We're ready to start creating our first cluster! Let's first set up a few environment variables to make this process easier.

```shell
export NAME=myfirstcluster.example.com
export KOPS_STATE_STORE=s3://prefix-example-com-state-store
```

For a gossip-based cluster, make sure the name ends with .k8s.local. For example:

```shell
export NAME=myfirstcluster.k8s.local
export KOPS_STATE_STORE=s3://prefix-example-com-state-store
```

Note: You don't have to use environment variables here. You can always define the values using the --name and --state flags later.

Create cluster configuration

We will need to note which availability zones are available to us. In this example we will be deploying our cluster to the us-west-2 region.

```shell
aws ec2 describe-availability-zones --region us-west-2
```

Below is a create cluster command. We'll use the most basic example possible, with more verbose examples in high availability. The below command will generate a cluster configuration, but not start building it. Make sure that you have generated an SSH key pair before creating the cluster.

```shell
kops create cluster \
  --zones us-west-2a \
  ${NAME}
```

All instances created by kops will be built within Auto Scaling Groups (ASGs), which means each instance will be automatically monitored and rebuilt by AWS if it suffers any failure.

Customize Cluster Configuration

Now that we have a cluster configuration, we can look at every aspect that defines our cluster by editing the description.

```shell
kops edit cluster ${NAME}
```

This opens your editor (as defined by $EDITOR) and allows you to edit the configuration. The configuration is loaded from the S3 bucket we created earlier, and automatically updated when we save and exit the editor.

We'll leave everything set to the defaults for now, but the rest of the kops documentation covers additional settings and configuration you can enable.

Build the Cluster

Now we take the final step of actually building the cluster. This'll take a while. Once it finishes, you'll have to wait longer while the booted instances finish downloading Kubernetes components and reach a "ready" state.

```shell
kops update cluster ${NAME} --yes
```

Use the Cluster

Remember when you installed kubectl earlier? The configuration for your cluster was automatically generated and written to ~/.kube/config for you!

A simple Kubernetes API call can be used to check if the API is online and listening. Let's use kubectl to check the nodes.

```shell
kubectl get nodes
```

You will see a list of nodes that should match the --zones flag defined earlier. This is a great sign that your Kubernetes cluster is online and working.

kops also ships with a handy validation tool that can be run to ensure your cluster is working as expected.

```shell
kops validate cluster
```

You can look at all the system components with the following command.

```shell
kubectl -n kube-system get po
```

Delete the Cluster

Running a Kubernetes cluster within AWS obviously costs money, and so you may want to delete your cluster if you are finished running experiments.

You can preview all of the AWS resources that will be destroyed when the cluster is deleted by issuing the following command.

```shell
kops delete cluster --name ${NAME}
```

When you are sure you want to delete your cluster, issue the delete command with the --yes flag. Note that this command is very destructive, and will delete your cluster and everything contained within it!

```shell
kops delete cluster --name ${NAME} --yes
```

What's next?

We've barely scratched the surface of the capabilities of kops in this guide, and we recommend researching other interesting modes to learn more about generating Terraform configurations, or running your cluster in an HA (Highly Available) mode.

The cluster spec docs can help to configure these "other interesting modes". Also be sure to check out how to run a private network topology in AWS.

Feedback

There's an incredible team behind kops and we encourage you to reach out to the community on the Kubernetes Slack (http://slack.k8s.io/). Bring your questions, comments, and requests and meet the people behind the project!

Legal

AWS Trademark used with limited permission under the AWS Trademark Guidelines

Kubernetes Logo used with permission under the Kubernetes Branding Guidelines