Configure Kubeflow Pipelines on AWS

Customize Kubeflow Pipelines to use AWS Services

Authenticate Kubeflow Pipeline using SDK inside cluster

In v1.1.0, in-cluster communication from a notebook to Kubeflow Pipelines is not supported. To use the kfp SDK as before, you need to pass a session cookie to KFP as a workaround.

You can follow the steps below to get the cookie from your browser after you log in to Kubeflow. The following example uses the Chrome browser.

Note: You have to use the images in AWS Jupyter Notebook because they include a critical SDK fix.

(Screenshot: KFP SDK browser cookie)

(Screenshot: KFP SDK browser cookie detail)

Once you have the cookie, you can authenticate kfp by passing it to the client. Use the code snippet that matches the manifest you deployed.

To get `<aws_alb_host>`, run `kubectl get ingress -n istio-system` and take the value of the ADDRESS field.
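If you prefer to extract the ALB hostname programmatically, you can parse the JSON output of `kubectl get ingress -n istio-system -o json`. A minimal sketch, assuming the typical ingress status shape (`alb_host` and the sample hostname below are illustrative, not from the Kubeflow docs):

```python
import json

def alb_host(ingress_json: str) -> str:
    """Extract the ALB hostname (the ADDRESS field) from the JSON output
    of `kubectl get ingress -n istio-system -o json`."""
    obj = json.loads(ingress_json)
    # Ingress lists put the objects under "items"; take the first ingress.
    item = obj["items"][0]
    return item["status"]["loadBalancer"]["ingress"][0]["hostname"]

# Example with a stubbed kubectl response (placeholder hostname):
sample = json.dumps({
    "items": [{"status": {"loadBalancer": {"ingress": [
        {"hostname": "1234567-istiosystem-istio-2af2-1234567890.us-west-2.elb.amazonaws.com"}
    ]}}}]
})
print(alb_host(sample))
```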

For the Dex manifest:

```python
import kfp

authservice_session = 'authservice_session=<cookie>'
client = kfp.Client(host='http://<aws_alb_host>/pipeline', cookies=authservice_session)
client.list_experiments(namespace="<your_namespace>")
```

For the Cognito manifest (behind an AWS Application Load Balancer):

```python
import kfp

alb_session_cookie = 'AWSELBAuthSessionCookie-0=<cookie>'
client = kfp.Client(host='https://<aws_alb_host>/pipeline', cookies=alb_session_cookie)
client.list_experiments(namespace="<your_namespace>")
```

Authenticate Kubeflow Pipeline using SDK outside cluster

To do programmatic authentication with Dex, refer to the following comments under the #140 issue in the kfctl repository: #140 (comment) and #140 (comment).

You can still retrieve the session cookie and pass it to the backend as described in [Authenticate Kubeflow Pipeline using SDK inside cluster](#authenticate-kubeflow-pipeline-using-sdk-inside-cluster).

If you are looking for an end-to-end experience, this is a work in progress. Once the feat(sdk): Enable AWS ALB authentication in KFP SDK Client PR is merged, users will be able to pass a Cognito username and password to authenticate to KFP through the AWS Application Load Balancer.

S3 Access from Kubeflow Pipelines

Currently, it is still recommended to use AWS credentials or kube2iam to manage S3 access from Kubeflow Pipelines. IAM Roles for Service Accounts requires applications to use a recent AWS SDK that supports the web identity assume-role flow; this is still in progress. Track progress in issue #3405.

A Kubernetes Secret is required by Kubeflow Pipelines and applications to access S3. Make sure it has S3 read and write access.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-secret
  namespace: kubeflow
type: Opaque
data:
  AWS_ACCESS_KEY_ID: <YOUR_BASE64_ACCESS_KEY>
  AWS_SECRET_ACCESS_KEY: <YOUR_BASE64_SECRET_ACCESS>
```

- `YOUR_BASE64_ACCESS_KEY`: Base64 string of `AWS_ACCESS_KEY_ID`
- `YOUR_BASE64_SECRET_ACCESS`: Base64 string of `AWS_SECRET_ACCESS_KEY`

Note: To get the base64 string, run `echo -n $AWS_ACCESS_KEY_ID | base64`.
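The same encoding can be done in Python if you are generating the Secret manifest from a script. A minimal sketch (`b64` is a hypothetical helper; the key id below is a placeholder, not a real credential):

```python
import base64

def b64(value: str) -> str:
    # Kubernetes Secret data values must be base64-encoded,
    # with no trailing newline (hence encode/decode, not CLI piping).
    return base64.b64encode(value.encode()).decode()

# Placeholder access key id, for illustration only:
print(b64("AKIAEXAMPLE"))  # -> QUtJQUVYQU1QTEU=
```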

Configure containers to use AWS credentials

If your application writes any files to S3, use `use_aws_secret` to attach the AWS secret so the container can access S3.

```python
import kfp
from kfp import components
from kfp import dsl
from kfp.aws import use_aws_secret

def iris_comp():
    return kfp.dsl.ContainerOp(
        .....
        output_artifact_paths={'mlpipeline-ui-metadata': '/mlpipeline-ui-metadata.json'}
    )

@dsl.pipeline(
    name='IRIS Classification pipeline',
    description='IRIS Classification using LR in SKLEARN'
)
def iris_pipeline():
    iris_task = iris_comp().apply(use_aws_secret('aws-secret', 'AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY'))
```
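Under the hood, `use_aws_secret` attaches environment variables to the container that reference the secret via `secretKeyRef`. A minimal stdlib sketch of the resulting env entries (illustrative only, not the kfp implementation; `secret_env` is a hypothetical helper):

```python
def secret_env(secret_name: str, keys) -> list:
    # Build the env entries that reference a Kubernetes Secret,
    # mirroring what use_aws_secret attaches to the container spec.
    return [
        {"name": k, "valueFrom": {"secretKeyRef": {"name": secret_name, "key": k}}}
        for k in keys
    ]

env = secret_env("aws-secret", ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"])
print(env[0]["name"])  # -> AWS_ACCESS_KEY_ID
```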

Support S3 Artifact Store

Kubeflow Pipelines supports different artifact viewers. You can create files in S3 and reference them in output artifacts in your application as follows:

```python
metadata = {
    'outputs': [
        {
            'source': 's3://bucket/kubeflow/README.md',
            'type': 'markdown',
        },
        {
            'type': 'confusion_matrix',
            'format': 'csv',
            'schema': [
                {'name': 'target', 'type': 'CATEGORY'},
                {'name': 'predicted', 'type': 'CATEGORY'},
                {'name': 'count', 'type': 'NUMBER'},
            ],
            'source': 's3://bucket/confusion_matrics.csv',
            # Convert vocab to string because for boolean values we want "True|False" to match csv data.
            'labels': list(map(str, vocab)),
        },
        {
            'type': 'tensorboard',
            'source': 's3://bucket/tb-events',
        }
    ]
}
with file_io.FileIO('/tmp/mlpipeline-ui-metadata.json', 'w') as f:
    json.dump(metadata, f)
```
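Note that `file_io` in the snippet above is commonly imported from `tensorflow.python.lib.io`; for a local path like `/tmp/mlpipeline-ui-metadata.json`, the stdlib `open` behaves the same. A minimal sketch with a trimmed-down metadata dict:

```python
import json

# Abbreviated metadata, same shape as the full example above.
metadata = {'outputs': [{'type': 'markdown', 'source': 's3://bucket/kubeflow/README.md'}]}

# Write the metadata file that ml-pipeline-ui picks up as an output artifact.
with open('/tmp/mlpipeline-ui-metadata.json', 'w') as f:
    json.dump(metadata, f)

# Round-trip check: the file must parse back to the same structure.
with open('/tmp/mlpipeline-ui-metadata.json') as f:
    assert json.load(f) == metadata
```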

In order for ml-pipeline-ui to read these artifacts:

  1. Create a Kubernetes secret `aws-secret` in the `kubeflow` namespace. Follow the instructions above.

  2. Update the `ml-pipeline-ui` deployment to use the AWS credential environment variables by running `kubectl edit deployment ml-pipeline-ui -n kubeflow`.

     ```yaml
     apiVersion: extensions/v1beta1
     kind: Deployment
     metadata:
       name: ml-pipeline-ui
       namespace: kubeflow
       ...
     spec:
       template:
         spec:
           containers:
           - env:
             - name: AWS_ACCESS_KEY_ID
               valueFrom:
                 secretKeyRef:
                   key: AWS_ACCESS_KEY_ID
                   name: aws-secret
             - name: AWS_SECRET_ACCESS_KEY
               valueFrom:
                 secretKeyRef:
                   key: AWS_SECRET_ACCESS_KEY
                   name: aws-secret
             ....
             image: gcr.io/ml-pipeline/frontend:0.2.0
             name: ml-pipeline-ui
     ```

Here’s an example of the Kubeflow Pipelines TensorBoard viewer:

(Screenshot: Kubeflow Pipelines TensorBoard viewer)

Support TensorBoard in Kubeflow Pipelines

TensorBoard needs some extra settings on AWS, as shown below:

  1. Create a Kubernetes secret `aws-secret` in the `kubeflow` namespace. Follow the instructions above.

  2. Create a ConfigMap to store the TensorBoard viewer configuration on your cluster. Replace `<your_region>` with your S3 region.

     ```yaml
     apiVersion: v1
     kind: ConfigMap
     metadata:
       name: ml-pipeline-ui-viewer-template
     data:
       viewer-tensorboard-template.json: |
         {
           "spec": {
             "containers": [
               {
                 "env": [
                   {
                     "name": "AWS_ACCESS_KEY_ID",
                     "valueFrom": {
                       "secretKeyRef": {
                         "name": "aws-secret",
                         "key": "AWS_ACCESS_KEY_ID"
                       }
                     }
                   },
                   {
                     "name": "AWS_SECRET_ACCESS_KEY",
                     "valueFrom": {
                       "secretKeyRef": {
                         "name": "aws-secret",
                         "key": "AWS_SECRET_ACCESS_KEY"
                       }
                     }
                   },
                   {
                     "name": "AWS_REGION",
                     "value": "<your_region>"
                   }
                 ]
               }
             ]
           }
         }
     ```
  3. Update the `ml-pipeline-ui` deployment to use the ConfigMap by running `kubectl edit deployment ml-pipeline-ui -n kubeflow`.

     ```yaml
     apiVersion: extensions/v1beta1
     kind: Deployment
     metadata:
       name: ml-pipeline-ui
       namespace: kubeflow
       ...
     spec:
       template:
         spec:
           containers:
           - env:
             - name: VIEWER_TENSORBOARD_POD_TEMPLATE_SPEC_PATH
               value: /etc/config/viewer-tensorboard-template.json
             ....
             volumeMounts:
             - mountPath: /etc/config
               name: config-volume
             .....
           volumes:
           - configMap:
               defaultMode: 420
               name: ml-pipeline-ui-viewer-template
             name: config-volume
     ```
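Because the viewer template is embedded as a JSON string inside the ConfigMap, a typo there only surfaces when a TensorBoard viewer pod is launched. A quick sanity check before applying the ConfigMap (a sketch; the template below is abbreviated, not the full example above):

```python
import json

# Abbreviated viewer template, as it would appear inside the ConfigMap's
# viewer-tensorboard-template.json key. json.loads raises
# json.JSONDecodeError if the embedded JSON is malformed.
template = '''
{
  "spec": {
    "containers": [
      {"env": [{"name": "AWS_REGION", "value": "<your_region>"}]}
    ]
  }
}
'''
spec = json.loads(template)
env = spec["spec"]["containers"][0]["env"]
print([e["name"] for e in env])  # -> ['AWS_REGION']
```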

Last modified 02.11.2020: Add reference to authenticate Python SDK with Dex (#2328) (1ad295f4)