Set up a Kubernetes Cluster

This is a work in progress; we will add its sections in pieces. Your feedback is welcome at discuss.istio.io.

In this module, you set up a Kubernetes cluster that has Istio installed and a namespace to use throughout the tutorial.

If you are in a workshop and the instructors provide a cluster for you, proceed to setting up your local computer.

  1. Ensure you have access to a Kubernetes cluster. You can use, for example, Google Kubernetes Engine or IBM Cloud Kubernetes Service.
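
    A quick way to confirm that kubectl can reach the cluster is to list its nodes; any command that returns without a connection or authentication error will do:

    $ kubectl get nodes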

  2. Create an environment variable to store the name of a namespace that you will use when you run the tutorial commands. You can use any name, for example tutorial.

    $ export NAMESPACE=tutorial
  3. Create the namespace:

    $ kubectl create namespace $NAMESPACE

    If you are an instructor, you should allocate a separate namespace for each participant. The tutorial supports multiple participants working simultaneously, each in their own namespace.

  4. Install Istio using the demo profile.
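
    The Istio installation docs are the authoritative reference for this step. As a minimal sketch, assuming the istioctl binary is already downloaded and on your PATH, the demo profile can be installed with:

    $ istioctl install --set profile=demo -y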

  5. This example uses the Kiali and Prometheus addons, which need to be installed. Install all the addons with:

    $ kubectl apply -f samples/addons

    If you see errors installing the addons, run the command again. There may be timing issues, which will be resolved when the command is re-run.
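
    Before continuing, you may want to confirm that the addon deployments become available in the istio-system namespace, for example (assuming the default addon deployment names):

    $ kubectl rollout status deployment/kiali -n istio-system
    $ kubectl rollout status deployment/prometheus -n istio-system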

  6. Next, enable Envoy’s access logging as described in Enable Envoy’s access logging. Skip the cleanup and delete steps, because you need the sleep application for later tutorial modules.
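
    The access logging task linked above is the authoritative reference; one approach it describes is to set the global meshConfig.accessLogFile option when installing Istio, for example:

    $ istioctl install --set profile=demo --set meshConfig.accessLogFile=/dev/stdout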

  7. Create a Kubernetes Ingress resource for these common Istio services using the kubectl command shown. It is not necessary to be familiar with each of these services at this point in the tutorial.

    The kubectl command can accept an in-line configuration to create the Ingress resources for each service:

    $ kubectl apply -f - <<EOF
    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: istio-system
      namespace: istio-system
    spec:
      rules:
      - host: my-istio-dashboard.io
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              serviceName: grafana
              servicePort: 3000
      - host: my-istio-tracing.io
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              serviceName: tracing
              servicePort: 9411
      - host: my-istio-logs-database.io
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              serviceName: prometheus
              servicePort: 9090
      - host: my-kiali.io
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              serviceName: kiali
              servicePort: 20001
    EOF
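
    You can verify that the Ingress resource was created and see the hosts it exposes:

    $ kubectl get ingress istio-system -n istio-system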
  8. Create a role to provide read access to the istio-system namespace. This role is required to limit the participants’ permissions in the steps below.

    $ kubectl apply -f - <<EOF
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: istio-system-access
      namespace: istio-system
    rules:
    - apiGroups: ["", "extensions", "apps"]
      resources: ["*"]
      verbs: ["get", "list"]
    EOF
  9. Create a service account for each participant:

    $ kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: ${NAMESPACE}-user
      namespace: $NAMESPACE
    EOF
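
    You can confirm that the service account was created:

    $ kubectl get serviceaccount ${NAMESPACE}-user -n $NAMESPACE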
  10. Limit each participant’s permissions. During the tutorial, participants only need to create resources in their own namespace and to read resources from the istio-system namespace. Even if you are using your own cluster, it is good practice to avoid interfering with other namespaces in your cluster.

    Create a role to allow read-write access to each participant’s namespace. Bind the participant’s service account to this role and to the role for reading resources from istio-system:

    $ kubectl apply -f - <<EOF
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: ${NAMESPACE}-access
      namespace: $NAMESPACE
    rules:
    - apiGroups: ["", "extensions", "apps", "networking.k8s.io", "networking.istio.io", "authentication.istio.io",
                  "rbac.istio.io", "config.istio.io", "security.istio.io"]
      resources: ["*"]
      verbs: ["*"]
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: ${NAMESPACE}-access
      namespace: $NAMESPACE
    subjects:
    - kind: ServiceAccount
      name: ${NAMESPACE}-user
      namespace: $NAMESPACE
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: ${NAMESPACE}-access
    ---
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: ${NAMESPACE}-istio-system-access
      namespace: istio-system
    subjects:
    - kind: ServiceAccount
      name: ${NAMESPACE}-user
      namespace: $NAMESPACE
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: istio-system-access
    EOF
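
    If you want to check the resulting permissions before distributing credentials, kubectl auth can-i can impersonate the service account. For example, the participant should be able to read pods in istio-system but not create them:

    $ kubectl auth can-i get pods -n istio-system --as=system:serviceaccount:${NAMESPACE}:${NAMESPACE}-user
    yes
    $ kubectl auth can-i create pods -n istio-system --as=system:serviceaccount:${NAMESPACE}:${NAMESPACE}-user
    no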
  11. Each participant needs to use their own Kubernetes configuration file. This configuration file specifies the cluster details, the service account, the credentials and the namespace of the participant. The kubectl command uses the configuration file to operate on the cluster.

    Generate a Kubernetes configuration file for each participant:

    This command assumes your cluster is named tutorial-cluster. If your cluster is named differently, replace all references with the name of your cluster.

    $ cat <<EOF > ./${NAMESPACE}-user-config.yaml
    apiVersion: v1
    kind: Config
    preferences: {}
    clusters:
    - cluster:
        certificate-authority-data: $(kubectl get secret $(kubectl get sa ${NAMESPACE}-user -n $NAMESPACE -o jsonpath={.secrets..name}) -n $NAMESPACE -o jsonpath='{.data.ca\.crt}')
        server: $(kubectl config view -o jsonpath="{.clusters[?(@.name==\"$(kubectl config view -o jsonpath="{.contexts[?(@.name==\"$(kubectl config current-context)\")].context.cluster}")\")].cluster.server}")
      name: ${NAMESPACE}-cluster
    users:
    - name: ${NAMESPACE}-user
      user:
        as-user-extra: {}
        client-key-data: $(kubectl get secret $(kubectl get sa ${NAMESPACE}-user -n $NAMESPACE -o jsonpath={.secrets..name}) -n $NAMESPACE -o jsonpath='{.data.ca\.crt}')
        token: $(kubectl get secret $(kubectl get sa ${NAMESPACE}-user -n $NAMESPACE -o jsonpath={.secrets..name}) -n $NAMESPACE -o jsonpath={.data.token} | base64 --decode)
    contexts:
    - context:
        cluster: ${NAMESPACE}-cluster
        namespace: ${NAMESPACE}
        user: ${NAMESPACE}-user
      name: ${NAMESPACE}
    current-context: ${NAMESPACE}
    EOF
  12. Set the KUBECONFIG environment variable for the ${NAMESPACE}-user-config.yaml configuration file:

    $ export KUBECONFIG=$PWD/${NAMESPACE}-user-config.yaml
  13. Verify that the configuration took effect by printing the current namespace:

    $ kubectl config view -o jsonpath="{.contexts[?(@.name==\"$(kubectl config current-context)\")].context.namespace}"
    tutorial

    You should see the name of your namespace in the output.

  14. If you are setting up the cluster for yourself, copy the ${NAMESPACE}-user-config.yaml file created in the previous steps to your local computer, where ${NAMESPACE} is the name of the namespace you chose, for example tutorial-user-config.yaml. You will need this file later in the tutorial.

    If you are an instructor, send the generated configuration files to each participant. The participants must copy their configuration file to their local computer.
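
    How you send the files depends on your environment; as one example, using scp with a hypothetical participant host:

    $ scp ./${NAMESPACE}-user-config.yaml participant-host:~/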

Congratulations, you configured your cluster for the tutorial!

You are ready to set up your local computer.