E2E testing with kops clusters

Testing tips

If you are running kops as part of an e2e test, the following tips may be useful.

CI Kubernetes Build

Set the KubernetesVersion to an http:// or https:// base URL, such as https://storage.googleapis.com/kubernetes-release-dev/ci/v1.4.0-alpha.2.677+ea69570f61af8e/

We expect the base URL to have a bin/linux/amd64 directory containing:

  • kubelet
  • kubelet.sha1
  • kubectl
  • kubectl.sha1
  • kube-apiserver.docker_tag
  • kube-apiserver.tar
  • kube-apiserver.tar.sha1
  • kube-controller-manager.docker_tag
  • kube-controller-manager.tar
  • kube-controller-manager.tar.sha1
  • kube-proxy.docker_tag
  • kube-proxy.tar
  • kube-proxy.tar.sha1
  • kube-scheduler.docker_tag
  • kube-scheduler.tar
  • kube-scheduler.tar.sha1
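
Before pointing kops at a build, it can be worth sanity-checking that the base URL actually serves these artifacts. A minimal sketch (the URL is just the example from above; substitute your own build):

    # Sketch: verify a CI build base URL serves the expected artifacts
    BASE_URL=https://storage.googleapis.com/kubernetes-release-dev/ci/v1.4.0-alpha.2.677+ea69570f61af8e
    for f in kubelet kubectl kube-apiserver.tar kube-apiserver.tar.sha1; do
      curl -fsSLI "${BASE_URL}/bin/linux/amd64/${f}" > /dev/null && echo "ok: ${f}"
    done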

Do this with kops edit cluster <clustername>. The spec should look like:

    ...
    spec:
      kubernetesVersion: "https://storage.googleapis.com/kubernetes-release-dev/ci/v1.4.0-alpha.2.677+ea69570f61af8e/"
      cloudProvider: aws
      etcdClusters:
      - etcdMembers:
        - name: us-east-1c
          zone: us-east-1c
        name: main
    ...
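
If you would rather script this than use the interactive editor, the following sketch should work (assumes the cluster already exists; <clustername> and the temp file path are placeholders):

    # Non-interactive alternative to `kops edit cluster`
    kops get cluster <clustername> -o yaml > /tmp/cluster.yaml
    # set spec.kubernetesVersion in /tmp/cluster.yaml with your editor or a script
    kops replace -f /tmp/cluster.yaml
    kops update cluster <clustername> --yes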

Running the kubernetes e2e test suite

The simple way:

    # cd wherever you tend to put git repos
    git clone https://github.com/kubernetes/test-infra.git
    export KOPS_E2E_STATE_STORE=s3://your-kops-state-store # Change to your state store path
    export KOPS_E2E_CLUSTER_NAME=e2e.cluster.name # Change to an FQDN for your e2e cluster name
    test-infra/jobs/ci-kubernetes-e2e-kops-aws.sh |& tee /tmp/testlog

This:

  • Brings up a cluster using the latest kops build from master (see below for how to use your current build)
  • Runs the default series of tests (which the Kubernetes team is also running here; see below for how to override the test list)
  • Tears down the cluster
  • Pipes all output to /tmp/testlog

(Note: By default this script assumes that your AWS credentials are in ~/.aws/credentials, and the SSH keypair you want to use is ~/.ssh/kube_aws_rsa. You can override JENKINS_AWS_CREDENTIALS_FILE, JENKINS_AWS_SSH_PRIVATE_KEY_FILE and JENKINS_AWS_SSH_PUBLIC_KEY_FILE if you want to change this.)
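
For example, to point the job at a different set of credentials and keys (the paths below are hypothetical; use your own):

    # Hypothetical paths; adjust to wherever your credentials and SSH keys live
    export JENKINS_AWS_CREDENTIALS_FILE=${HOME}/.aws/e2e-credentials
    export JENKINS_AWS_SSH_PRIVATE_KEY_FILE=${HOME}/.ssh/e2e_rsa
    export JENKINS_AWS_SSH_PUBLIC_KEY_FILE=${HOME}/.ssh/e2e_rsa.pub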

This isn't yet terribly useful, though - it just shows how to replicate the existing job, but not with your custom code. To test a custom kops build, you can do the following:

To use S3:

    # cd to your kops repo
    export S3_BUCKET_NAME=kops-dev-${USER}
    make kops-install dev-upload UPLOAD_DEST=s3://${S3_BUCKET_NAME}

    KOPS_VERSION=`bazel run //cmd/kops version -- --short`
    export KOPS_BASE_URL=https://${S3_BUCKET_NAME}.s3.amazonaws.com/kops/${KOPS_VERSION}/

To use GCS:

    export GCS_BUCKET_NAME=kops-dev-${USER}
    make kops-install dev-upload UPLOAD_DEST=gs://${GCS_BUCKET_NAME}

    KOPS_VERSION=`bazel run //cmd/kops version -- --short`
    export KOPS_BASE_URL=https://${GCS_BUCKET_NAME}.storage.googleapis.com/kops/${KOPS_VERSION}/
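
In either case it can help to confirm the upload is reachable before kicking off a run. A sketch, assuming the usual linux/amd64/kops layout under the base URL:

    # Sanity check: the kops binary should be downloadable from the base URL
    echo "KOPS_VERSION=${KOPS_VERSION} KOPS_BASE_URL=${KOPS_BASE_URL}"
    curl -fsSLI "${KOPS_BASE_URL}linux/amd64/kops" > /dev/null && echo "kops binary found"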

Whether using GCS or S3, you probably want to upload dns-controller & kops-controller images if you have changed them:

For dns-controller:

    KOPS_VERSION=`bazel run //cmd/kops version -- --short`
    export DOCKER_IMAGE_PREFIX=${USER}/
    export DOCKER_REGISTRY=
    make dns-controller-push
    export DNSCONTROLLER_IMAGE=${DOCKER_IMAGE_PREFIX}dns-controller:${KOPS_VERSION}

For kops-controller:

    KOPS_VERSION=`bazel run //cmd/kops version -- --short`
    export DOCKER_IMAGE_PREFIX=${USER}/
    export DOCKER_REGISTRY=
    make kops-controller-push
    export KOPSCONTROLLER_IMAGE=${DOCKER_IMAGE_PREFIX}kops-controller:${KOPS_VERSION}
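
Optionally, confirm that both images were pushed and are pullable from the registry before creating the cluster:

    # Optional check: pull back the images that were just pushed
    docker pull ${DNSCONTROLLER_IMAGE}
    docker pull ${KOPSCONTROLLER_IMAGE}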

You can create a cluster using kops create cluster <clustername> --zones us-east-1b.
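
A minimal sketch of that step; with the variables above still exported, the custom kops build and controller images are picked up automatically (<clustername> is a placeholder):

    # Sketch: create the e2e cluster with the custom build in effect
    kops create cluster <clustername> --zones us-east-1b
    kops update cluster <clustername> --yes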

Then follow the test directions above.

To override the test list for the job, you need to be familiar with the ginkgo.focus and ginkgo.skip flags. Using these flags, you can do:

    export GINKGO_TEST_ARGS="--ginkgo.focus=\[Feature:Performance\]"
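
The two flags can also be combined; for instance (the regexes here are purely illustrative):

    # Illustrative only: run Conformance tests but skip Serial ones
    export GINKGO_TEST_ARGS="--ginkgo.focus=\[Conformance\] --ginkgo.skip=\[Serial\]"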

and follow the instructions above. Here are some other examples from the e2e.go documentation.

If you want to test against an existing cluster, you can do:

    export E2E_UP=false; export E2E_DOWN=false

and follow the instructions above. This is particularly useful for testing the myriad of kops configuration/topology options without having to modify the testing infrastructure. Note: This is also the only way currently to test a custom Kubernetes build (see kubernetes/test-infra#1454).
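
Putting that together, a sketch of a run against an already-created cluster, reusing the variables from the simple-way example above:

    # Sketch: reuse an existing cluster; skip bring-up and tear-down
    export E2E_UP=false
    export E2E_DOWN=false
    export KOPS_E2E_STATE_STORE=s3://your-kops-state-store
    export KOPS_E2E_CLUSTER_NAME=e2e.cluster.name
    test-infra/jobs/ci-kubernetes-e2e-kops-aws.sh |& tee /tmp/testlog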

Uploading a custom build

If you want to upload a custom Kubernetes build, here is a simple way (note: this assumes you've run make quick-release in the Kubernetes repo first):

    # cd wherever you tend to put git repos
    git clone https://github.com/kubernetes/release.git

    # cd back to your kubernetes repo
    /path/to/release/push-build.sh # Fix /path/to/release with wherever you cloned the release repo

That will upload the release to a GCS bucket and make it public. You can then use the URL it outputs in kops with --kubernetes-version.

If you need it private in S3, here's a manual way:

    make quick-release
    cd ./_output/release-tars/
    # ??? rm -rf kubernetes/
    tar zxf kubernetes-server-linux-amd64.tar.gz

    rm kubernetes/server/bin/federation*
    rm kubernetes/server/bin/hyperkube
    rm kubernetes/server/bin/kubeadm
    rm kubernetes/server/bin/kube-apiserver
    rm kubernetes/server/bin/kube-controller-manager
    rm kubernetes/server/bin/kube-discovery
    rm kubernetes/server/bin/kube-dns
    rm kubernetes/server/bin/kubemark
    rm kubernetes/server/bin/kube-proxy
    rm kubernetes/server/bin/kube-scheduler
    rm kubernetes/kubernetes-src.tar.gz

    find kubernetes/server/bin -type f -name "*.tar" | xargs -I {} /bin/bash -c "sha1sum {} | cut -f1 -d ' ' > {}.sha1"
    find kubernetes/server/bin -type f -name "kube???" | xargs -I {} /bin/bash -c "sha1sum {} | cut -f1 -d ' ' > {}.sha1"

    aws s3 sync --acl public-read kubernetes/server/bin/ s3://${S3_BUCKET_NAME}/kubernetes/dev/v1.6.0-dev/bin/linux/amd64/
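
The synced tree has the layout described in the CI Kubernetes Build section above, so you can point kubernetesVersion at it. A sketch (KUBERNETES_BASE_URL is just a local convenience variable; the bucket and version path match the sync command above):

    # Sketch: base URL corresponding to the aws s3 sync above
    export KUBERNETES_BASE_URL=https://${S3_BUCKET_NAME}.s3.amazonaws.com/kubernetes/dev/v1.6.0-dev/
    kops edit cluster <clustername>   # set kubernetesVersion to ${KUBERNETES_BASE_URL}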

Example e2e command

    go run hack/e2e.go -v -up -down -kops `which kops` -kops-cluster test.test-aws.k8s.io -kops-state s3://k8s-kops-state-store/ -kops-nodes=4 -deployment kops --kops-kubernetes-version https://storage.googleapis.com/kubernetes-release-dev/ci/$(curl -SsL https://storage.googleapis.com/kubernetes-release-dev/ci/latest-green.txt)

(Note the v1.6.0-dev in the S3 path above: we insert a Kubernetes version so that kops can automatically detect which k8s version is in use, which it uses to control flags that are not compatible between versions.)

Then: