Virtual Machines in Single-Network Meshes

This example shows how to integrate a VM or a bare metal host into a single-network Istio mesh deployed on Kubernetes.

Prerequisites

  • You have already set up Istio on Kubernetes. If you haven’t done so, you can find out how in the Installation guide.

  • Virtual machines (VMs) must have IP connectivity to the endpoints in the mesh. This typically requires a VPC or a VPN, as well as a container network that provides direct routing (without NAT or firewall denial) to the endpoints. The machine is not required to have access to the cluster IP addresses assigned by Kubernetes.

  • VMs must have access to a DNS server that resolves names to cluster IP addresses. Options include exposing the Kubernetes DNS server through an internal load balancer (sketched below), using a CoreDNS server, or configuring the IPs in any other DNS server accessible from the VM.
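    For example, on GKE one way to expose the cluster DNS server through an internal load balancer is a Service along these lines — a minimal sketch, not part of this guide's required steps; the kube-dns selector, port, and internal load balancer annotation are assumptions that depend on your platform and version:

    $ cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: kube-dns-ilb
      namespace: kube-system
      annotations:
        cloud.google.com/load-balancer-type: "Internal"
    spec:
      type: LoadBalancer
      selector:
        k8s-app: kube-dns
      ports:
      - name: dns
        port: 53
        protocol: UDP
    EOF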

The following instructions:

  • Assume the expansion VM is running on GCE.
  • Use Google platform-specific commands for some steps.

Installation steps

Setup consists of preparing the mesh for expansion and installing and configuring each VM.

Preparing the Kubernetes cluster for VMs

The first step when adding non-Kubernetes services to an Istio mesh is to configure the Istio installation itself, and generate the configuration files that let VMs connect to the mesh. Prepare the cluster for the VM with the following commands on a machine with cluster admin privileges:

  1. Create a Kubernetes secret for your generated CA certificates using a command similar to the following. See Certificate Authority (CA) certificates for more details.

    The root and intermediate certificates from the samples directory are widely distributed and known. Do NOT use these certificates in production, as your clusters would then be open to security vulnerabilities and compromise.


    $ kubectl create namespace istio-system
    $ kubectl create secret generic cacerts -n istio-system \
        --from-file=samples/certs/ca-cert.pem \
        --from-file=samples/certs/ca-key.pem \
        --from-file=samples/certs/root-cert.pem \
        --from-file=samples/certs/cert-chain.pem
  2. For a simple setup, deploy the Istio control plane into the cluster:

    $ istioctl manifest apply

    For further details and customization options, refer to the installation instructions.

Alternatively, you can create an explicit service of type LoadBalancer and use the internal load balancer type. You can also deploy a separate ingress gateway, with the internal load balancer type, for both mesh expansion and multicluster. The main requirement is that the exposed address performs TCP load balancing to the Istiod deployment, and that the DNS name associated with the assigned load balancer address matches the certificate provisioned in the Istiod deployment, which defaults to istiod.istio-system.svc.
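As an illustration only — the service name, selector, and load balancer annotation are assumptions to adapt to your installation — an internal load balancer for Istiod on GKE could look like:

  $ cat <<EOF | kubectl apply -f -
  apiVersion: v1
  kind: Service
  metadata:
    name: istiod-ilb
    namespace: istio-system
    annotations:
      cloud.google.com/load-balancer-type: "Internal"
  spec:
    type: LoadBalancer
    selector:
      istio: pilot
    ports:
    - name: tcp-istiod
      port: 15012
      protocol: TCP
  EOF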

  3. Define the namespace the VM joins. This example uses the SERVICE_NAMESPACE environment variable to store the namespace. The value of this variable must match the namespace you use in the configuration files later on, and the identity encoded in the certificates.

    $ export SERVICE_NAMESPACE="vm"
  4. Determine and store the IP address of Istiod, since the VMs access Istiod through this IP address.

    $ export IstiodIP=$(kubectl get -n istio-system service istiod -o jsonpath='{.spec.clusterIP}')
    $ echo $IstiodIP
    10.55.240.12
  5. Generate a cluster.env configuration to deploy in the VMs. This file contains the Kubernetes cluster IP address ranges to intercept and redirect via Envoy; the range corresponds to the servicesIpv4Cidr value specified when the cluster was installed. Replace $K8S_CLUSTER, $MY_ZONE, and $MY_PROJECT in the following example commands with the appropriate values to obtain the CIDR after installation:

    $ ISTIO_SERVICE_CIDR=$(gcloud container clusters describe $K8S_CLUSTER --zone $MY_ZONE --project $MY_PROJECT --format "value(servicesIpv4Cidr)")
    $ echo -e "ISTIO_SERVICE_CIDR=$ISTIO_SERVICE_CIDR\n" > cluster.env

    It is also possible to intercept all traffic, as is done for pods. Depending on the vendor and installation mechanism, you may need different commands to determine the IP ranges used for services and pods; one possible approach is shown below. You can specify multiple ranges if the VM makes requests to multiple Kubernetes clusters.
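    For instance, on clusters where the API server flags are readable, a command like the following may surface the service range — a sketch; flag names and availability vary by distribution:

    $ kubectl cluster-info dump | grep -m 1 service-cluster-ip-range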

  6. Check the contents of the generated cluster.env file. It should be similar to the following example:

    $ cat cluster.env
    ISTIO_SERVICE_CIDR=10.55.240.0/20
  7. If the VM only calls services in the mesh, you can skip this step. Otherwise, add the ports the VM exposes to the cluster.env file with the following command. You can change the ports later if necessary.

    $ echo "ISTIO_INBOUND_PORTS=3306,8080" >> cluster.env
  8. In order to use mesh expansion, the VM must be provisioned with certificates signed by the same root CA as the rest of the mesh.

    It is recommended to follow the instructions in “Plugging in External CA Key and Certificates” and to use a separate intermediate CA for provisioning the VM. There are many tools and procedures for managing certificates for VMs; Istio's requirement is that the VM receives a certificate with an Istio-compatible SPIFFE SAN carrying the correct trust domain, namespace, and service account.

    As an example, for very simple demo setups, you can also use:

    $ go run istio.io/istio/security/tools/generate_cert \
        -client -host spiffe://cluster.local/vm/vmname --out-priv key.pem --out-cert cert-chain.pem -mode citadel
    $ kubectl -n istio-system get cm istio-ca-root-cert -o jsonpath='{.data.root-cert\.pem}' > root-cert.pem
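    To double-check the identity encoded in the generated certificate, you can inspect its subject alternative name — a quick sanity check, assuming openssl is available on the machine:

    $ openssl x509 -in cert-chain.pem -noout -text | grep -A1 "Subject Alternative Name"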

Setting up the VM

Next, run the following commands on each machine that you want to add to the mesh:

  1. Copy the previously created cluster.env and *.pem files to the VM. For example:

    $ export GCE_NAME="your-gce-instance"
    $ gcloud compute scp --project=${MY_PROJECT} --zone=${MY_ZONE} {key.pem,cert-chain.pem,cluster.env,root-cert.pem} ${GCE_NAME}:~
  2. Install the Debian package with the Envoy sidecar.

    $ gcloud compute ssh --project=${MY_PROJECT} --zone=${MY_ZONE} "${GCE_NAME}"
    $ curl -L https://storage.googleapis.com/istio-release/releases/1.6.0/deb/istio-sidecar.deb > istio-sidecar.deb
    $ sudo dpkg -i istio-sidecar.deb
  3. Add the IP address of Istiod to /etc/hosts. Revisit the preparing the cluster section to learn how to obtain the IP address. The following example updates the /etc/hosts file with the Istiod address:

    $ echo "${IstiodIP} istiod.istio-system.svc" | sudo tee -a /etc/hosts

A better option is to configure the DNS resolver of the VM to resolve the address, using a split-DNS server; /etc/hosts simply makes for an easy example. It is also possible to use a real DNS name and certificate for Istiod, but that is beyond the scope of this document.
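Whichever mechanism you use, you can confirm from the VM that the name now resolves to the expected address (the IP below is the example value from the preparation steps):

  $ getent hosts istiod.istio-system.svc
  10.55.240.12    istiod.istio-system.svc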

  4. Install root-cert.pem, key.pem and cert-chain.pem under /etc/certs/.

    $ sudo mkdir -p /etc/certs
    $ sudo cp {root-cert.pem,cert-chain.pem,key.pem} /etc/certs

  5. Install root-cert.pem under /var/run/secrets/istio/.

  6. Install cluster.env under /var/lib/istio/envoy/.

    $ sudo cp cluster.env /var/lib/istio/envoy

  7. Transfer ownership of the files in /etc/certs/, /var/lib/istio/envoy/ and /var/run/secrets/istio/ to the Istio proxy.

    $ sudo chown -R istio-proxy /etc/certs /var/lib/istio/envoy /var/run/secrets/istio/

  8. Start Istio using systemctl.

    $ sudo systemctl start istio
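    To have the sidecar start again after a reboot, you can also enable the unit — assuming the Debian package registered a standard systemd unit named istio:

    $ sudo systemctl enable istio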

Send requests from VM workloads to Kubernetes services

After setup, the machine can access services running in the Kubernetes cluster or on other VMs.

The following example shows accessing a service running in the Kubernetes cluster from a VM using /etc/hosts, in this case with a service from the Bookinfo example.

  1. First, on the cluster admin machine, get the virtual IP address (clusterIP) for the service:

    $ kubectl get svc productpage -o jsonpath='{.spec.clusterIP}'
    10.55.246.247
  2. Then, on the added VM, add the service name and address to its /etc/hosts file. You can then connect to the cluster service from the VM, as in the example below:

    $ echo "10.55.246.247 productpage.default.svc.cluster.local" | sudo tee -a /etc/hosts
    $ curl -v productpage.default.svc.cluster.local:9080
    < HTTP/1.1 200 OK
    < content-type: text/html; charset=utf-8
    < content-length: 1836
    < server: envoy
    ... html content ...

The server: envoy header indicates that the sidecar intercepted the traffic.

Running services on the added VM

  1. Set up an HTTP server on the VM instance to serve HTTP traffic on port 8080:

    $ gcloud compute ssh ${GCE_NAME}
    $ python -m SimpleHTTPServer 8080
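    SimpleHTTPServer is the Python 2 module; if the VM only has Python 3, the equivalent is:

    $ python3 -m http.server 8080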
  2. Determine the VM instance’s IP address. For example, find the IP address of the GCE instance with the following commands:

    $ export GCE_IP=$(gcloud --format="value(networkInterfaces[0].networkIP)" compute instances describe ${GCE_NAME})
    $ echo ${GCE_IP}
  3. Add the VM services to the mesh, using the IP address stored in the previous step:

    $ istioctl experimental add-to-mesh external-service vmhttp ${GCE_IP} http:8080 -n ${SERVICE_NAMESPACE}

    Ensure you have added the istioctl client to your path, as described in the download page.
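    To inspect what the command created — it generates a Kubernetes Service and a Service Entry, named as shown later in the cleanup output — you can run, for example:

    $ kubectl -n ${SERVICE_NAMESPACE} get serviceentry mesh-expansion-vmhttp -o yaml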

  4. Deploy a pod running the sleep service in the Kubernetes cluster, and wait until it is ready:


    $ kubectl apply -f samples/sleep/sleep.yaml
    $ kubectl get pod
    NAME                    READY   STATUS    RESTARTS   AGE
    sleep-88ddbcfdd-rm42k   2/2     Running   0          1s
    ...
  5. Send a request from the sleep service on the pod to the VM’s HTTP service:

    $ kubectl exec -it sleep-88ddbcfdd-rm42k -c sleep -- curl vmhttp.${SERVICE_NAMESPACE}.svc.cluster.local:8080

    You should see something similar to the output below.

    <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>
    <title>Directory listing for /</title>
    <body>
    <h2>Directory listing for /</h2>
    <hr>
    <ul>
    <li><a href=".bashrc">.bashrc</a></li>
    <li><a href=".ssh/">.ssh/</a></li>
    ...
    </body>
Congratulations! You successfully configured a service running in a pod within the cluster to send traffic to a service running on a VM outside of the cluster and tested that the configuration worked.

Cleanup

Run the following commands to remove the expansion VM from the mesh’s abstract model.

  $ istioctl experimental remove-from-mesh -n ${SERVICE_NAMESPACE} vmhttp
  Kubernetes Service "vmhttp.vm" has been deleted for external service "vmhttp"
  Service Entry "mesh-expansion-vmhttp" has been deleted for external service "vmhttp"

Troubleshooting

The following are some basic troubleshooting steps for common VM-related issues.

  • When making requests from a VM to the cluster, ensure you don’t run the requests as the root or istio-proxy user. By default, Istio excludes both users from traffic interception.

  • Verify the machine can reach the IPs of the workloads running in the cluster. For example:

    $ kubectl get endpoints productpage -o jsonpath='{.subsets[0].addresses[0].ip}'
    10.52.39.13

    $ curl 10.52.39.13:9080
    html output
  • Check the status of the Istio Agent and sidecar:

    $ sudo systemctl status istio
  • Check that the processes are running. The following is an example of the processes you should see on the VM if you run ps, filtered for istio:

    $ ps aux | grep istio
    root     6955  0.0  0.0  49344  3048 ?  Ss  21:32  0:00 su -s /bin/bash -c INSTANCE_IP=10.150.0.5 POD_NAME=demo-vm-1 POD_NAMESPACE=vm exec /usr/local/bin/pilot-agent proxy > /var/log/istio/istio.log istio-proxy
    istio-p+ 7016  0.0  0.1 215172 12096 ?  Ssl 21:32  0:00 /usr/local/bin/pilot-agent proxy
    istio-p+ 7094  4.0  0.3  69540 24800 ?  Sl  21:32  0:37 /usr/local/bin/envoy -c /etc/istio/proxy/envoy-rev1.json --restart-epoch 1 --drain-time-s 2 --parent-shutdown-time-s 3 --service-cluster istio-proxy --service-node sidecar~10.150.0.5~demo-vm-1.vm-vm.svc.cluster.local
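  • Check that traffic interception rules are installed. A minimal sanity check, assuming the default iptables-based redirection (15001 and 15006 are the default outbound and inbound capture ports):

    $ sudo iptables -t nat -L -n | grep -E '15001|15006'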
  • Check the Envoy access and error logs:

    $ tail /var/log/istio/istio.log
    $ tail /var/log/istio/istio.err.log

See also

Virtual Machine Installation

Deploy Istio and connect a workload running within a virtual machine to it.

Virtual Machines in Multi-Network Meshes

Learn how to add a service running on a virtual machine to your multi-network Istio mesh.

DNS Certificate Management

Provision and manage DNS certificates in Istio.

Secure Webhook Management

A more secure way to manage Istio webhooks.

Demystifying Istio’s Sidecar Injection Model

Demystify how Istio plugs its data-plane components into an existing deployment.

Bookinfo with a Virtual Machine

Run the Bookinfo application with a MySQL service running on a virtual machine within your mesh.