Amazon EKS provides a managed control plane for your Kubernetes cluster. Amazon EKS runs the Kubernetes control plane instances across multiple Availability Zones to ensure high availability. Rancher provides an intuitive user interface for managing and deploying the Kubernetes clusters you run in Amazon EKS. With this guide, you will use Rancher to quickly and easily launch an Amazon EKS Kubernetes cluster in your AWS account. For more information on Amazon EKS, see this documentation.

Prerequisites in Amazon Web Services

Note: Deploying to Amazon Web Services (AWS) will incur charges. For more information, refer to the EKS pricing page.

To set up a cluster on EKS, you will need to set up an Amazon VPC (Virtual Private Cloud). You will also need to make sure that the account you will be using to create the EKS cluster has the appropriate permissions. For details, refer to the official guide on Amazon EKS Prerequisites.

Amazon VPC

An Amazon VPC is required to launch the EKS cluster. The VPC enables you to launch AWS resources into a virtual network that you’ve defined. You can set one up yourself and provide it during cluster creation in Rancher. If you do not provide one during creation, Rancher will create one. For more information, refer to the Tutorial: Creating a VPC with Public and Private Subnets for Your Amazon EKS Cluster.

IAM Policies

Rancher needs access to your AWS account in order to provision and administer your Kubernetes clusters in Amazon EKS. You’ll need to create a user for Rancher in your AWS account and define what that user can access.

  1. Create a user with programmatic access by following the steps here.

  2. Next, create an IAM policy that defines what this user has access to in your AWS account. It’s important to only grant this user minimal access within your account. The minimum permissions required for an EKS cluster are listed here. Follow the steps here to create an IAM policy and attach it to your user.

  3. Finally, follow the steps here to create an access key and secret key for this user.

Note: It’s important to regularly rotate your access and secret keys. See this documentation for more information.

For more detailed information on IAM policies for EKS, refer to the official documentation on Amazon EKS IAM Policies, Roles, and Permissions.

Architecture

The figure below illustrates the high-level architecture of Rancher 2.x. The figure depicts a Rancher Server installation that manages two Kubernetes clusters: one created by RKE and another created by EKS.

Figure: Managing Kubernetes Clusters through Rancher's Authentication Proxy

Create the EKS Cluster

Use Rancher to set up and configure your Kubernetes cluster.

  1. From the Clusters page, click Add Cluster.

  2. Choose Amazon EKS.

  3. Enter a Cluster Name.

  4. Use Member Roles to configure user authorization for the cluster. Click Add Member to add users that can access the cluster. Use the Role drop-down to set permissions for each user.

  5. Fill out the rest of the form. For help, refer to the configuration reference.

  6. Click Create.

Result:

Your cluster is created and assigned a state of Provisioning. Rancher is standing up your cluster.

You can access your cluster after its state is updated to Active.

Active clusters are assigned two Projects:

  • Default, containing the default namespace
  • System, containing the cattle-system, ingress-nginx, kube-public, and kube-system namespaces

EKS Cluster Configuration Reference

Changes in Rancher v2.5

More EKS options can be configured when you create an EKS cluster in Rancher, including the following:

  • Managed node groups
  • Desired size, minimum size, maximum size (requires the Cluster Autoscaler to be installed)
  • Control plane logging
  • Secrets encryption with KMS

The following capabilities have been added for configuring EKS clusters in Rancher:

  • GPU support
  • Exclusively use managed nodegroups that come with the most up-to-date AMIs
  • Add new nodes
  • Upgrade nodes
  • Add and remove node groups
  • Disable and enable private access
  • Add restrictions to public access
  • Use your cloud credentials to create the EKS cluster instead of passing in your access key and secret key

Due to the way that cluster data is synced with EKS, changes may be overwritten if the cluster is modified both from another source, such as the EKS console, and in Rancher within a five-minute window. For information about how the sync works and how to configure it, refer to this section.

Account Access

Complete each drop-down and field using the information obtained for your IAM policy.

| Setting | Description |
| --- | --- |
| Region | From the drop-down, choose the geographical region in which to build your cluster. |
| Cloud Credentials | Select the cloud credentials that you created for your IAM policy. For more information on creating cloud credentials in Rancher, refer to this page. |

Service Role

Choose a service role.

| Service Role | Description |
| --- | --- |
| Standard: Rancher generated service role | If you choose this role, Rancher automatically adds a service role for use with the cluster. |
| Custom: Choose from your existing service roles | If you choose this role, Rancher lets you choose from service roles that you've already created within AWS. For more information on creating a custom service role in AWS, see the Amazon documentation. |

Secrets Encryption

Optional: To encrypt secrets, select or enter a key created in AWS Key Management Service (KMS).
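The key is typically referenced by its ARN. As a rough illustration (the helper below is hypothetical, not something Rancher or AWS provides), a KMS key ARN has the shape `arn:aws:kms:<region>:<account-id>:key/<key-id>`:

```python
import re

# Illustrative pattern for a KMS key ARN, e.g.
# arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
KMS_KEY_ARN = re.compile(r"^arn:aws:kms:[a-z0-9-]+:\d{12}:key/[0-9a-f-]+$")

def looks_like_kms_key_arn(arn: str) -> bool:
    """Rough sanity check before entering a key into the cluster form."""
    return bool(KMS_KEY_ARN.match(arn))

print(looks_like_kms_key_arn(
    "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
))  # True
```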

API Server Endpoint Access

Configuring Public/Private API access is an advanced use case. For details, refer to the EKS cluster endpoint access control documentation.

Private-only API Endpoints

If you enable private and disable public API endpoint access when creating a cluster, then there is an extra step you must take in order for Rancher to connect to the cluster successfully. In this case, a pop-up will be displayed with a command that you will run on the cluster to register it with Rancher. Once the cluster is provisioned, you can run the displayed command anywhere you can connect to the cluster’s Kubernetes API.

There are two ways to avoid this extra manual step:

  • You can create the cluster with both private and public API endpoint access on cluster creation. You can disable public access after the cluster is created and in an active state, and Rancher will continue to communicate with the EKS cluster.
  • You can ensure that Rancher shares a subnet with the EKS cluster. Then security groups can be used to enable Rancher to communicate with the cluster's API endpoint. In this case, the command to register the cluster is not needed, and Rancher will be able to communicate with your cluster. For more information on configuring security groups, refer to the security groups documentation.

Public Access Endpoints

Optionally limit access to the public endpoint via explicit CIDR blocks.

If you limit access to specific CIDR blocks, it is recommended that you also enable private access to avoid losing network communication to the cluster.

One of the following is required for Rancher to retain access to the cluster:

  • Rancher's IP must be part of an allowed CIDR block, or
  • Private access should be enabled, and Rancher must share a subnet with the cluster and have network access to the cluster, which can be configured with a security group.

For more information about public and private access to the cluster endpoint, refer to the Amazon EKS documentation.
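The CIDR restriction above comes down to one question: does Rancher's egress IP fall inside an allowed block? A minimal sketch of that check, using Python's standard `ipaddress` module (the helper function itself is hypothetical):

```python
import ipaddress

def ip_in_allowed_cidrs(ip: str, cidr_blocks: list) -> bool:
    """Return True if `ip` falls inside any of the allowed CIDR blocks."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(block) for block in cidr_blocks)

# If Rancher's egress IP is not covered by any allowed block, it cannot
# reach an endpoint that only permits public access from those blocks.
print(ip_in_allowed_cidrs("203.0.113.10", ["203.0.113.0/24"]))  # True
print(ip_in_allowed_cidrs("198.51.100.7", ["203.0.113.0/24"]))  # False
```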

Subnet

| Option | Description |
| --- | --- |
| Standard: Rancher generated VPC and Subnet | While provisioning your cluster, Rancher generates a new VPC with 3 public subnets. |
| Custom: Choose from your existing VPC and Subnets | While provisioning your cluster, Rancher configures your Control Plane and nodes to use a VPC and Subnet that you've already created in AWS. |

For more information, refer to the AWS documentation for Cluster VPC Considerations. Follow one of the sets of instructions below based on your selection from the previous step.

Security Group

Amazon Documentation:

Logging

Configure control plane logs to send to Amazon CloudWatch. You are charged the standard CloudWatch Logs data ingestion and storage costs for any logs sent to CloudWatch Logs from your clusters.

Each log type corresponds to a component of the Kubernetes control plane. To learn more about these components, see Kubernetes Components in the Kubernetes documentation.

For more information on EKS control plane logging, refer to the official documentation.

Managed Node Groups

Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters.

For more information about how node groups work and how they are configured, refer to the EKS documentation.

Bring your own launch template

A launch template ID and version can be provided to configure the EC2 instances in a node group. If a launch template is provided, none of the settings below will be configurable in Rancher, so all necessary and desired settings from the list below must be specified in the launch template. Also note that if a launch template ID and version is provided, only the template version can be updated; using a new template ID requires creating a new managed node group.

| Option | Description | Required/Optional |
| --- | --- | --- |
| Instance Type | Choose the hardware specs for the instance you're provisioning. | Required |
| Image ID | Specify a custom AMI for the nodes. Custom AMIs used with EKS must be configured properly. | Optional |
| Node Volume Size | The launch template must specify an EBS volume with the desired size. | Required |
| SSH Key | A key to be added to the instances to provide SSH access to the nodes. | Optional |
| User Data | Cloud-init script in MIME multi-part format. | Optional |
| Instance Resource Tags | Tag each EC2 instance in the node group. | Optional |

Rancher-managed launch templates

If you do not specify a launch template, then you will be able to configure the above options in the Rancher UI and all of them can be updated after creation. In order to take advantage of all of these options, Rancher will create and manage a launch template for you. Each cluster in Rancher will have one Rancher-managed launch template and each managed node group that does not have a specified launch template will have one version of the managed launch template. The name of this launch template will have the prefix “rancher-managed-lt-” followed by the display name of the cluster. In addition, the Rancher-managed launch template will be tagged with the key “rancher-managed-template” and value “do-not-modify-or-delete” to help identify it as Rancher-managed. It is important that this launch template and its versions not be modified, deleted, or used with any other clusters or managed node groups. Doing so could result in your node groups being “degraded” and needing to be destroyed and recreated.
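The naming and tagging convention described above can be sketched as follows (an illustrative function, not Rancher's actual code; the name prefix and tag key/value are taken from the text):

```python
def rancher_managed_launch_template(cluster_display_name: str) -> dict:
    """Sketch of the Rancher-managed launch template's name and tag.

    The prefix "rancher-managed-lt-" and the identifying tag are described
    in the documentation; the request structure here is illustrative.
    """
    return {
        "LaunchTemplateName": f"rancher-managed-lt-{cluster_display_name}",
        "TagSpecifications": [{
            "ResourceType": "launch-template",
            "Tags": [{
                "Key": "rancher-managed-template",
                "Value": "do-not-modify-or-delete",
            }],
        }],
    }

lt = rancher_managed_launch_template("my-eks-cluster")
print(lt["LaunchTemplateName"])  # rancher-managed-lt-my-eks-cluster
```

The tag is what identifies the template as Rancher-managed, which is why modifying or deleting templates carrying it can degrade node groups.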

Custom AMIs

If you specify a custom AMI, whether in a launch template or in Rancher, then the image must be configured properly and you must provide user data to bootstrap the node. This is considered an advanced use case and understanding the requirements is imperative.

If you specify a launch template that does not contain a custom AMI, then Amazon will use the EKS-optimized AMI for the Kubernetes version and selected region. You can also select a GPU enabled instance for workloads that would benefit from it.

Note: The GPU-enabled instance setting in Rancher is ignored if a custom AMI is provided, either in the drop-down or in a launch template.

Spot instances

Spot instances are now supported by EKS. If a launch template is specified, Amazon recommends that the template not provide an instance type. Instead, Amazon recommends providing multiple instance types. If the “Request Spot Instances” checkbox is enabled for a node group, then you will have the opportunity to provide multiple instance types.

Note: Any selection you made in the instance type drop-down will be ignored in this situation, and you must specify at least one instance type in the "Spot Instance Types" section. Furthermore, a launch template used with EKS cannot request spot instances. Requesting spot instances must be part of the EKS configuration.

Node Group Settings

The following settings are also configurable. All of these except for the “Node Group Name” are editable after the node group is created.

| Option | Description |
| --- | --- |
| Node Group Name | The name of the node group. |
| Desired ASG Size | The desired number of instances. |
| Maximum ASG Size | The maximum number of instances. This setting won't take effect until the Cluster Autoscaler is installed. |
| Minimum ASG Size | The minimum number of instances. This setting won't take effect until the Cluster Autoscaler is installed. |
| Labels | Kubernetes labels applied to the nodes in the managed node group. |
| Tags | These are tags for the managed node group and do not propagate to any of the associated resources. |


Account Access

Complete each drop-down and field using the information obtained for your IAM policy.

| Setting | Description |
| --- | --- |
| Region | From the drop-down, choose the geographical region in which to build your cluster. |
| Access Key | Enter the access key that you created for your IAM policy. |
| Secret Key | Enter the secret key that you created for your IAM policy. |

Service Role

Choose a service role.

| Service Role | Description |
| --- | --- |
| Standard: Rancher generated service role | If you choose this role, Rancher automatically adds a service role for use with the cluster. |
| Custom: Choose from your existing service roles | If you choose this role, Rancher lets you choose from service roles that you've already created within AWS. For more information on creating a custom service role in AWS, see the Amazon documentation. |

Public IP for Worker Nodes

Your selection for this option determines what options are available for VPC & Subnet.

| Option | Description |
| --- | --- |
| Yes | When your cluster nodes are provisioned, they're assigned both a private and a public IP address. |
| No: Private IPs only | When your cluster nodes are provisioned, they're assigned only a private IP address. If you choose this option, you must also choose a VPC & Subnet that allow your instances to access the internet. This access is required so that your worker nodes can connect to the Kubernetes control plane. |

VPC & Subnet

The available options depend on the public IP for worker nodes.

| Option | Description |
| --- | --- |
| Standard: Rancher generated VPC and Subnet | While provisioning your cluster, Rancher generates a new VPC and Subnet. |
| Custom: Choose from your existing VPC and Subnets | While provisioning your cluster, Rancher configures your nodes to use a VPC and Subnet that you've already created in AWS. If you choose this option, complete the remaining steps below. |

For more information, refer to the AWS documentation for Cluster VPC Considerations. Follow one of the sets of instructions below based on your selection from the previous step.

If you choose to assign a public IP address to your cluster’s worker nodes, you have the option of choosing between a VPC that’s automatically generated by Rancher (i.e., Standard: Rancher generated VPC and Subnet), or a VPC that you’ve already created with AWS (i.e., Custom: Choose from your existing VPC and Subnets). Choose the option that best fits your use case.


If you’re using Custom: Choose from your existing VPC and Subnets:

(If you’re using Standard, skip to the instance options.)

  1. Make sure Custom: Choose from your existing VPC and Subnets is selected.

  2. From the drop-down that displays, choose a VPC.

  3. Click Next: Select Subnets. Then choose one of the Subnets that displays.

  4. Click Next: Select Security Group.

If your worker nodes have Private IPs only, you must also choose a VPC & Subnet that allow your instances to access the internet. This access is required so that your worker nodes can connect to the Kubernetes control plane.


Follow the steps below.

Tip: When using only private IP addresses, you can provide your nodes internet access by creating a VPC with two sets of subnets, a private set and a public set. The private set should have its route tables configured to point toward a NAT gateway in the public set. For more information on routing traffic from private subnets, please see the official AWS documentation.

  1. From the drop-down that displays, choose a VPC.

  2. Click Next: Select Subnets. Then choose one of the Subnets that displays.

Security Group

Amazon Documentation:

Instance Options

The instance type and size of your worker nodes affect how many IP addresses each worker node will have available. See this documentation for more information.

| Option | Description |
| --- | --- |
| Instance Type | Choose the hardware specs for the instance you're provisioning. |
| Custom AMI Override | If you want to use a custom Amazon Machine Image (AMI), specify it here. By default, Rancher will use the EKS-optimized AMI for the EKS version that you chose. |
| Desired ASG Size | The number of instances that your cluster will provision. |
| User Data | Custom commands can be passed to perform automated configuration tasks. Warning: Modifying this may cause your nodes to be unable to join the cluster. Note: Available as of v2.2.0. |

Troubleshooting

If your changes were overwritten, it could be due to the way the cluster data is synced with EKS. Changes shouldn’t be made to the cluster from another source, such as in the EKS console, and in Rancher within a five-minute span. For information on how this works and how to configure the refresh interval, refer to Syncing.

If an unauthorized error is returned while attempting to modify or register the cluster and the cluster was not created with the role or user that your credentials belong to, refer to Security and Compliance.

For any issues or troubleshooting details for your Amazon EKS Kubernetes cluster, please see this documentation.

AWS Service Events

To find information on any AWS Service events, please see this page.

Security and Compliance

By default, only the IAM user or role that created a cluster has access to it. Attempting to access the cluster with any other user or role without additional configuration will lead to an error. In Rancher, this means using a credential that maps to a user or role that was not used to create the cluster will cause an unauthorized error. For example, a cluster created with eksctl will not register in Rancher unless the credentials used to register the cluster match the role or user that eksctl used. Additional users and roles can be authorized to access a cluster by adding them to the aws-auth ConfigMap in the kube-system namespace. For a more in-depth explanation and detailed instructions, please see this documentation.

For more information on security and compliance with your Amazon EKS Kubernetes cluster, please see this documentation.

Tutorial

This tutorial on the AWS Open Source Blog will walk you through how to set up an EKS cluster with Rancher, deploy a publicly accessible app to test the cluster, and deploy a sample project to track real-time geospatial data using a combination of other open-source software such as Grafana and InfluxDB.

Minimum EKS Permissions

Documented here is the minimum set of permissions necessary to use all functionality of the EKS driver in Rancher. Additional permissions are required for Rancher to provision the Service Role and VPC resources. Optionally, these resources can be created before cluster creation and will be selectable when defining the cluster configuration.

| Resource | Description |
| --- | --- |
| Service Role | The service role provides Kubernetes the permissions it requires to manage resources on your behalf. Rancher can create the service role with the following Service Role Permissions. |
| VPC | Provides isolated network resources utilised by EKS and worker nodes. Rancher can create the VPC resources with the following VPC Permissions. |

Resource targeting uses * because the ARNs of many of the resources created cannot be known before creating the EKS cluster in Rancher.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EC2Permisssions",
      "Effect": "Allow",
      "Action": [
        "ec2:RunInstances",
        "ec2:RevokeSecurityGroupIngress",
        "ec2:RevokeSecurityGroupEgress",
        "ec2:DescribeVpcs",
        "ec2:DescribeTags",
        "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeRouteTables",
        "ec2:DescribeLaunchTemplateVersions",
        "ec2:DescribeLaunchTemplates",
        "ec2:DescribeKeyPairs",
        "ec2:DescribeInternetGateways",
        "ec2:DescribeImages",
        "ec2:DescribeAvailabilityZones",
        "ec2:DescribeAccountAttributes",
        "ec2:DeleteTags",
        "ec2:DeleteSecurityGroup",
        "ec2:DeleteKeyPair",
        "ec2:CreateTags",
        "ec2:CreateSecurityGroup",
        "ec2:CreateLaunchTemplateVersion",
        "ec2:CreateLaunchTemplate",
        "ec2:CreateKeyPair",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:AuthorizeSecurityGroupEgress"
      ],
      "Resource": "*"
    },
    {
      "Sid": "CloudFormationPermisssions",
      "Effect": "Allow",
      "Action": [
        "cloudformation:ListStacks",
        "cloudformation:ListStackResources",
        "cloudformation:DescribeStacks",
        "cloudformation:DescribeStackResources",
        "cloudformation:DescribeStackResource",
        "cloudformation:DeleteStack",
        "cloudformation:CreateStackSet",
        "cloudformation:CreateStack"
      ],
      "Resource": "*"
    },
    {
      "Sid": "IAMPermissions",
      "Effect": "Allow",
      "Action": [
        "iam:PassRole",
        "iam:ListRoles",
        "iam:ListRoleTags",
        "iam:ListInstanceProfilesForRole",
        "iam:ListInstanceProfiles",
        "iam:ListAttachedRolePolicies",
        "iam:GetRole",
        "iam:GetInstanceProfile",
        "iam:DetachRolePolicy",
        "iam:DeleteRole",
        "iam:CreateRole",
        "iam:AttachRolePolicy"
      ],
      "Resource": "*"
    },
    {
      "Sid": "KMSPermisssions",
      "Effect": "Allow",
      "Action": "kms:ListKeys",
      "Resource": "*"
    },
    {
      "Sid": "EKSPermisssions",
      "Effect": "Allow",
      "Action": [
        "eks:UpdateNodegroupVersion",
        "eks:UpdateNodegroupConfig",
        "eks:UpdateClusterVersion",
        "eks:UpdateClusterConfig",
        "eks:UntagResource",
        "eks:TagResource",
        "eks:ListUpdates",
        "eks:ListTagsForResource",
        "eks:ListNodegroups",
        "eks:ListFargateProfiles",
        "eks:ListClusters",
        "eks:DescribeUpdate",
        "eks:DescribeNodegroup",
        "eks:DescribeFargateProfile",
        "eks:DescribeCluster",
        "eks:DeleteNodegroup",
        "eks:DeleteFargateProfile",
        "eks:DeleteCluster",
        "eks:CreateNodegroup",
        "eks:CreateFargateProfile",
        "eks:CreateCluster"
      ],
      "Resource": "*"
    }
  ]
}
```
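Before creating this policy with the AWS console or CLI, it can be useful to sanity-check that a saved policy document actually grants the actions you expect. A minimal sketch (the helper function is hypothetical, and the inline policy is only a small excerpt of the document above):

```python
import json

def actions_in(policy: dict) -> set:
    """Collect every Action granted by an IAM policy document."""
    actions = set()
    for stmt in policy["Statement"]:
        act = stmt["Action"]
        # Action may be a single string or a list of strings.
        actions.update([act] if isinstance(act, str) else act)
    return actions

# Small excerpt of the minimum-permissions policy, for illustration:
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Sid": "KMSPermisssions", "Effect": "Allow",
     "Action": "kms:ListKeys", "Resource": "*"},
    {"Sid": "EKSPermisssions", "Effect": "Allow",
     "Action": ["eks:CreateCluster", "eks:DescribeCluster"], "Resource": "*"}
  ]
}
""")

print(sorted(actions_in(policy)))
```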

Service Role Permissions

Rancher will create a service role with the following trust policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
```

This role will also have two role policy attachments with the following policy ARNs:

  • arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
  • arn:aws:iam::aws:policy/AmazonEKSServicePolicy

The following permissions are required for Rancher to create the service role on the user's behalf during the EKS cluster creation process.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "IAMPermisssions",
      "Effect": "Allow",
      "Action": [
        "iam:AddRoleToInstanceProfile",
        "iam:AttachRolePolicy",
        "iam:CreateInstanceProfile",
        "iam:CreateRole",
        "iam:CreateServiceLinkedRole",
        "iam:DeleteInstanceProfile",
        "iam:DeleteRole",
        "iam:DetachRolePolicy",
        "iam:GetInstanceProfile",
        "iam:GetRole",
        "iam:ListAttachedRolePolicies",
        "iam:ListInstanceProfiles",
        "iam:ListInstanceProfilesForRole",
        "iam:ListRoles",
        "iam:ListRoleTags",
        "iam:PassRole",
        "iam:RemoveRoleFromInstanceProfile"
      ],
      "Resource": "*"
    }
  ]
}
```

VPC Permissions

The following permissions are required for Rancher to create a VPC and associated resources.

```json
{
  "Sid": "VPCPermissions",
  "Effect": "Allow",
  "Action": [
    "ec2:ReplaceRoute",
    "ec2:ModifyVpcAttribute",
    "ec2:ModifySubnetAttribute",
    "ec2:DisassociateRouteTable",
    "ec2:DetachInternetGateway",
    "ec2:DescribeVpcs",
    "ec2:DeleteVpc",
    "ec2:DeleteTags",
    "ec2:DeleteSubnet",
    "ec2:DeleteRouteTable",
    "ec2:DeleteRoute",
    "ec2:DeleteInternetGateway",
    "ec2:CreateVpc",
    "ec2:CreateSubnet",
    "ec2:CreateSecurityGroup",
    "ec2:CreateRouteTable",
    "ec2:CreateRoute",
    "ec2:CreateInternetGateway",
    "ec2:AttachInternetGateway",
    "ec2:AssociateRouteTable"
  ],
  "Resource": "*"
}
```

Syncing

Syncing is the feature that causes Rancher to update its EKS clusters’ values so they are up to date with their corresponding cluster object in the EKS console. This enables Rancher to not be the sole owner of an EKS cluster’s state. Its largest limitation is that processing an update from Rancher and another source at the same time or within 5 minutes of one finishing may cause the state from one source to completely overwrite the other.

How it works

Understanding how syncing works requires understanding two fields on the Rancher Cluster object:

  1. EKSConfig which is located on the Spec of the Cluster.
  2. UpstreamSpec which is located on the EKSStatus field on the Status of the Cluster.

Both are defined by the EKSClusterConfigSpec struct found in the eks-operator project: https://github.com/rancher/eks-operator/blob/master/pkg/apis/eks.cattle.io/v1/types.go

All fields with the exception of DisplayName, AmazonCredentialSecret, Region, and Imported are nillable on the EKSClusterConfigSpec.

The EKSConfig represents desired state for its non-nil values. Fields that are non-nil in the EKSConfig can be thought of as "managed". When a cluster is created in Rancher, all fields are non-nil and therefore "managed". When a pre-existing cluster is registered in Rancher, all nillable fields are nil and are not "managed". Those fields become managed once their value has been changed by Rancher.

UpstreamSpec represents the cluster as it is in EKS and is refreshed on an interval of 5 minutes. After the UpstreamSpec has been refreshed, Rancher checks whether the EKS cluster has an update in progress. If it is updating, nothing further is done. If it is not currently updating, any "managed" fields on EKSConfig are overwritten with their corresponding values from the recently refreshed UpstreamSpec.

The effective desired state can be thought of as the UpstreamSpec + all non-nil fields in the EKSConfig. This is what is displayed in the UI.

If Rancher and another source attempt to update an EKS cluster at the same time, or within the 5-minute refresh window of an update finishing, then any "managed" fields can be caught in a race condition. For example, a cluster may have PrivateAccess as a managed field. If PrivateAccess is false, is then enabled from the EKS console with the update finishing at 11:01, and tags are then updated from Rancher before 11:05, the PrivateAccess value will likely be overwritten. This would also occur if the tags were updated while the cluster was still processing the PrivateAccess update. If the cluster was registered and the PrivateAccess field was nil, this issue would not occur in the aforementioned case.
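The merge behavior described above can be sketched in a few lines. This is a simplification (plain dicts standing in for the EKSConfig and UpstreamSpec structs), not Rancher's actual implementation:

```python
def effective_state(upstream_spec: dict, eks_config: dict) -> dict:
    """Sketch of the sync merge: nil (None) fields in EKSConfig are
    unmanaged and follow upstream; non-nil fields are "managed" and
    represent desired state, so they win over the upstream value."""
    merged = dict(upstream_spec)
    for field, value in eks_config.items():
        if value is not None:  # managed field wins
            merged[field] = value
    return merged

# privateAccess is unmanaged (nil), so the upstream value is kept;
# tags are managed, so Rancher's value wins.
upstream = {"privateAccess": True, "tags": {"team": "ops"}}
config = {"privateAccess": None, "tags": {"team": "dev"}}
print(effective_state(upstream, config))
# {'privateAccess': True, 'tags': {'team': 'dev'}}
```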

Configuring the Refresh Interval

It is possible to change the refresh interval through the "eks-refresh-cron" setting. This setting accepts values in Cron format. The default is */5 * * * *. The shorter the refresh window, the less likely any race conditions will occur, but shorter windows increase the likelihood of encountering request limits that may be in place for AWS APIs.
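As a rough illustration of how the minute field of an "eks-refresh-cron" value maps to a refresh interval (a hypothetical helper; real cron parsing handles many more forms than step expressions):

```python
def refresh_interval_minutes(cron: str) -> int:
    """Estimate the refresh interval from a cron expression's minute field.

    Only handles "*" and "*/N" step expressions; anything else raises.
    """
    minute_field = cron.split()[0]
    if minute_field.startswith("*/"):
        return int(minute_field[2:])
    if minute_field == "*":
        return 1
    raise ValueError(f"unhandled minute field: {minute_field}")

print(refresh_interval_minutes("*/5 * * * *"))   # 5  (the default)
print(refresh_interval_minutes("*/15 * * * *"))  # 15 (fewer AWS API calls)
```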