Customizing Kubeflow on AWS

Tailoring an AWS deployment of Kubeflow

This guide describes how to customize your deployment of Kubeflow on Amazon EKS. These steps can be completed before you run the `kfctl apply -V -f ${CONFIG_FILE}` command. Please see the following sections for details. If you are unfamiliar with the deployment process, please see the deployment guide for details.
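For reference, a minimal sketch of that flow is shown below. The deployment name, directory layout, and file names are assumptions based on a typical kfctl setup; substitute your own values.

```bash
# Assumed environment; adjust names and paths for your setup.
export KF_NAME=my-kubeflow                    # hypothetical deployment name
export KF_DIR=${PWD}/${KF_NAME}
export CONFIG_FILE=${KF_DIR}/kfctl_aws.yaml   # your AWS KfDef manifest

cd ${KF_DIR}
# Generate the deployment files (including aws_config/) without applying them,
# so the customizations described below can be made first.
kfctl build -V -f ${CONFIG_FILE}
# Apply once you are done customizing.
kfctl apply -V -f ${CONFIG_FILE}
```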

Customizing Kubeflow

The following configuration parameters are available for kfctl on the AWS platform.

| Option | Description | Required |
| --- | --- | --- |
| awsClusterName | Name of your new or existing Amazon EKS cluster | YES |
| awsRegion | The AWS Region to launch in | YES |
| awsNodegroupRoleNames | The IAM role names for your worker nodes | YES for existing clusters / No for new clusters |
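For existing clusters, one way to discover the worker-node role names is to filter the IAM roles in your account. A minimal sketch, assuming the roles were created by eksctl (so their names contain `NodeInstanceRole`) and that the AWS CLI and `jq` are installed:

```bash
# List IAM roles whose names suggest they are EKS worker-node instance roles.
# The naming pattern is an assumption; verify against your own account.
aws iam list-roles \
  | jq -r '.Roles[] | select(.RoleName | contains("NodeInstanceRole")) | .RoleName'
```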

Customize your Amazon EKS cluster

Before you run `kfctl apply -V -f ${CONFIG_FILE}`, you can edit the cluster configuration file to change the specification of the cluster that will be created.

Cluster configuration is stored in `${KF_DIR}/aws_config/cluster_config.yaml`. Please see the eksctl documentation for configuration details.

For example, the following is a cluster manifest with one node group that has two p2.xlarge instances. You can easily enable SSH and configure a public key. All worker nodes will be in a single Availability Zone.

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  # AWS_CLUSTER_NAME and AWS_REGION will override `name` and `region` here.
  name: kubeflow-example
  region: us-west-2
  version: '1.14'

# If your region has multiple availability zones, you can specify 3 of them.
#availabilityZones: ["us-west-2b", "us-west-2c", "us-west-2d"]

# NodeGroup holds all configuration attributes that are specific to a nodegroup.
# You can have several node groups in your cluster.
nodeGroups:
  - name: eks-gpu
    instanceType: p2.xlarge
    availabilityZones: ["us-west-2b"]
    desiredCapacity: 2
    minSize: 0
    maxSize: 2
    volumeSize: 30
    ssh:
      allow: true
      sshPublicKeyPath: '~/.ssh/id_rsa.pub'

  # Example of a GPU node group:
  # - name: Tesla-V100
  #   # Choose your instance type for the node group.
  #   instanceType: p3.2xlarge
  #   # A GPU cluster can use a single availability zone to improve network performance.
  #   availabilityZones: ["us-west-2b"]
  #   # Autoscaling group settings.
  #   desiredCapacity: 0
  #   minSize: 0
  #   maxSize: 4
  #   # Node root disk size.
  #   volumeSize: 50
  #   # Enable SSH from outside your VPC.
  #   allowSSH: true
  #   sshPublicKeyPath: '~/.ssh/id_rsa.pub'
  #   # Customize labels.
  #   labels:
  #     'k8s.amazonaws.com/accelerator': 'nvidia-tesla-v100'
  #   # Set up predefined IAM roles for the node group.
  #   iam:
  #     withAddonPolicies:
  #       autoScaler: true
```
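kfctl invokes eksctl with this file when it creates a new cluster, but the manifest can also be exercised on its own. A minimal sketch, assuming eksctl is installed and AWS credentials are configured:

```bash
# Optional: create the cluster directly from the customized manifest,
# independently of kfctl, to validate the configuration.
eksctl create cluster --config-file=${KF_DIR}/aws_config/cluster_config.yaml
```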

Customize Authentication

Please see this section.

Customize IAM Role for Pods

Please see this section.

Customize Private Access

Please see this section.

Customize Logging

Please see this section.
