Google Kubernetes Engine

Follow these instructions to prepare a GKE cluster for Istio.

  1. Create a new cluster.

    $ export PROJECT_ID=`gcloud config get-value project` && \
      export M_TYPE=n1-standard-2 && \
      export ZONE=us-west2-a && \
      export CLUSTER_NAME=${PROJECT_ID}-${RANDOM} && \
      gcloud services enable container.googleapis.com && \
      gcloud container clusters create $CLUSTER_NAME \
      --cluster-version latest \
      --machine-type=$M_TYPE \
      --num-nodes 4 \
      --zone $ZONE \
      --project $PROJECT_ID

    The default installation of Istio requires nodes with >1 vCPU. If you are installing with the demo configuration profile, you can remove the --machine-type argument to use the smaller n1-standard-1 machine size instead.
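
    For example, a demo-profile cluster could be created without the --machine-type flag so that GKE falls back to its default machine size. This is only a sketch that reuses the variables exported above; it is not part of the official instructions:

    $ gcloud container clusters create $CLUSTER_NAME \
      --cluster-version latest \
      --num-nodes 4 \
      --zone $ZONE \
      --project $PROJECT_ID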

    To use the Istio CNI feature on GKE, see the CNI installation guide for prerequisite cluster configuration steps.

    For private GKE clusters

    The automatically created firewall rule does not open port 15017, which is needed by the Pilot discovery validation webhook.

    To review this firewall rule for master access:

    $ gcloud compute firewall-rules list --filter="name~gke-${CLUSTER_NAME}-[0-9a-z]*-master"

    To replace the existing rule and allow master access:

    $ gcloud compute firewall-rules update <firewall-rule-name> --allow tcp:10250,tcp:443,tcp:15017
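
    As an optional check, not part of the original steps, you can describe the rule to confirm that port 15017 is now allowed (substitute your actual rule name):

    $ gcloud compute firewall-rules describe <firewall-rule-name> --format='value(allowed)'
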
  2. Retrieve your credentials for kubectl.

    $ gcloud container clusters get-credentials $CLUSTER_NAME \
      --zone $ZONE \
      --project $PROJECT_ID
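
    As a quick sanity check (not part of the original instructions), you can confirm that kubectl now points at the new cluster and can reach its nodes:

    $ kubectl config current-context
    $ kubectl get nodes
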
  3. Grant cluster administrator (admin) permissions to the current user. These permissions are required to create the necessary RBAC rules for Istio.

    $ kubectl create clusterrolebinding cluster-admin-binding \
      --clusterrole=cluster-admin \
      --user=$(gcloud config get-value core/account)
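
    If you want to verify that the binding took effect, one optional check (not part of the original steps) is to ask the API server whether the current user can perform any action in any namespace; it should answer yes:

    $ kubectl auth can-i '*' '*' --all-namespaces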

Multi-cluster communication

In some cases, a firewall rule must be explicitly created to allow cross-cluster traffic.

The following instructions will allow communication between all clusters in your project. Adjust the commands as needed.

  1. Gather information about your clusters’ network.

    $ function join_by { local IFS="$1"; shift; echo "$*"; }
    $ ALL_CLUSTER_CIDRS=$(gcloud --project $PROJECT_ID container clusters list --format='value(clusterIpv4Cidr)' | sort | uniq)
    $ ALL_CLUSTER_CIDRS=$(join_by , $(echo "${ALL_CLUSTER_CIDRS}"))
    $ ALL_CLUSTER_NETTAGS=$(gcloud --project $PROJECT_ID compute instances list --format='value(tags.items.[0])' | sort | uniq)
    $ ALL_CLUSTER_NETTAGS=$(join_by , $(echo "${ALL_CLUSTER_NETTAGS}"))
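
    Before creating the rule, it can help to confirm that the gathered values look reasonable (an optional check, not in the original instructions); both should be comma-separated lists:

    $ echo "${ALL_CLUSTER_CIDRS}"
    $ echo "${ALL_CLUSTER_NETTAGS}"
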
  2. Create the firewall rule.

    $ gcloud compute firewall-rules create istio-multicluster-pods \
      --allow=tcp,udp,icmp,esp,ah,sctp \
      --direction=INGRESS \
      --priority=900 \
      --source-ranges="${ALL_CLUSTER_CIDRS}" \
      --target-tags="${ALL_CLUSTER_NETTAGS}" --quiet
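
    To confirm the rule was created with the expected source ranges and target tags, you can describe it afterwards (an optional check, not part of the original instructions):

    $ gcloud compute firewall-rules describe istio-multicluster-pods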