Collecting Metrics for TCP services with Mixer

Mixer is deprecated. The functionality provided by Mixer is being moved into the Envoy proxies. Use of Mixer with Istio will only be supported through the 1.7 release of Istio.

This task shows how to configure Istio to automatically gather telemetry for TCP services in a mesh. At the end of this task, a new metric will be enabled for calls to a TCP service within your mesh.

The Bookinfo sample application is used as the example application throughout this task.

Before you begin

  • Install Istio with Mixer enabled in your cluster and deploy an application.

    The custom configuration needed to use Mixer for telemetry is:

    ```yaml
    values:
      prometheus:
        enabled: true
      telemetry:
        v1:
          enabled: true
        v2:
          enabled: false
    components:
      citadel:
        enabled: true
      telemetry:
        enabled: true
    ```

    Please see the guide on Customizing the configuration for information on how to apply these settings.

    Once the configuration has been applied, confirm a telemetry-focused instance of Mixer is running:

    ```shell
    $ kubectl -n istio-system get service istio-telemetry
    NAME              TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                                  AGE
    istio-telemetry   ClusterIP   10.4.31.226   <none>        9091/TCP,15004/TCP,15014/TCP,42422/TCP   80s
    ```
  • This task assumes that the Bookinfo sample will be deployed in the default namespace. If you use a different namespace, you will need to update the example configuration and commands.

Collecting new telemetry data

  1. Apply a YAML file with configuration for the new metrics that Istio will generate and collect automatically.

    ```shell
    $ kubectl apply -f @samples/bookinfo/telemetry/tcp-metrics.yaml@
    ```

    If you are using Istio 1.1.2 or earlier, use the following configuration instead:

    ```shell
    $ kubectl apply -f @samples/bookinfo/telemetry/tcp-metrics-crd.yaml@
    ```
  2. Setup Bookinfo to use MongoDB.

    1. Install v2 of the ratings service.

      If you are using a cluster with automatic sidecar injection enabled, simply deploy the services using kubectl:

      ```shell
      $ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo-ratings-v2.yaml@
      ```

      If you are using manual sidecar injection, use the following command instead:

      ```shell
      $ kubectl apply -f <(istioctl kube-inject -f @samples/bookinfo/platform/kube/bookinfo-ratings-v2.yaml@)
      deployment "ratings-v2" configured
      ```
    2. Install the mongodb service:

      If you are using a cluster with automatic sidecar injection enabled, simply deploy the services using kubectl:

      ```shell
      $ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo-db.yaml@
      ```

      If you are using manual sidecar injection, use the following command instead:

      ```shell
      $ kubectl apply -f <(istioctl kube-inject -f @samples/bookinfo/platform/kube/bookinfo-db.yaml@)
      service "mongodb" configured
      deployment "mongodb-v1" configured
      ```
    3. The Bookinfo sample deploys multiple versions of each microservice, so you will start by creating destination rules that define the service subsets corresponding to each version, and the load balancing policy for each subset.

      ```shell
      $ kubectl apply -f @samples/bookinfo/networking/destination-rule-all.yaml@
      ```

      If you enabled mutual TLS, run the following command instead:

      ```shell
      $ kubectl apply -f @samples/bookinfo/networking/destination-rule-all-mtls.yaml@
      ```

      You can display the destination rules with the following command:

      ```shell
      $ kubectl get destinationrules -o yaml
      ```

      Since the subset references in virtual services rely on the destination rules, wait a few seconds for destination rules to propagate before adding virtual services that refer to these subsets.

    4. Create ratings and reviews virtual services:

      ```shell
      $ kubectl apply -f @samples/bookinfo/networking/virtual-service-ratings-db.yaml@
      Created config virtual-service/default/reviews at revision 3003
      Created config virtual-service/default/ratings at revision 3004
      ```
  3. Send traffic to the sample application.

    For the Bookinfo sample, visit http://$GATEWAY_URL/productpage in your web browser or issue the following command:

    ```shell
    $ curl http://$GATEWAY_URL/productpage
    ```
  4. Verify that the new metric values are being generated and collected.

    In a Kubernetes environment, set up port-forwarding for Prometheus by executing the following command:

    ```shell
    $ kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=prometheus -o jsonpath='{.items[0].metadata.name}') 9090:9090 &
    ```

    View values for the new metric in the Prometheus browser window. Select Graph, enter the istio_mongo_received_bytes metric, and select Execute. The table displayed in the Console tab includes entries similar to:

    ```shell
    istio_mongo_received_bytes{destination_version="v1",instance="172.17.0.18:42422",job="istio-mesh",source_service="ratings-v2",source_version="v2"}
    ```

Understanding TCP telemetry collection

In this task, you added Istio configuration that instructed Mixer to automatically generate and report a new metric for all traffic to a TCP service within the mesh.

As in the Collecting Metrics task, the new configuration consists of instances, a handler, and a rule. See that task for a complete description of the components of metric collection.

Metrics collection for TCP services differs only in the limited set of attributes that are available for use in instances.
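As a concrete illustration, the configuration applied in this task has roughly the following shape: a metric instance built from connection-level attributes, and a Prometheus handler that exports it. This is an abridged, hedged sketch only; the exact names, dimensions, and labels are defined authoritatively in the tcp-metrics.yaml sample file.

```yaml
# Illustrative sketch of a TCP metric instance and its Prometheus handler
# (abridged; see samples/bookinfo/telemetry/tcp-metrics.yaml for the real definitions).
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: mongoreceivedbytes
  namespace: istio-system
spec:
  compiledTemplate: metric
  params:
    # TCP services expose connection-level attributes rather than HTTP request attributes.
    value: connection.received.bytes | 0
    dimensions:
      source_service: source.workload.name | "unknown"
      source_version: source.labels["version"] | "unknown"
      destination_version: destination.labels["version"] | "unknown"
    monitored_resource_type: '"UNSPECIFIED"'
---
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: mongohandler
  namespace: istio-system
spec:
  compiledAdapter: prometheus
  params:
    metrics:
    - name: mongo_received_bytes   # surfaced in Prometheus as istio_mongo_received_bytes
      instance_name: mongoreceivedbytes.instance.istio-system
      kind: COUNTER
      label_names:
      - source_service
      - source_version
      - destination_version
```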

TCP attributes

Several TCP-specific attributes enable TCP policy and control within Istio. These attributes are generated by server-side Envoy proxies and forwarded to Mixer at three points: when the connection is established, periodically while the connection remains open (periodic reports), and when the connection closes (final report). The default interval for periodic reports is 10 seconds, and the configured interval must be at least 1 second. Additionally, context attributes provide the ability to distinguish between http and tcp protocols within policies.
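For example, a rule can use the context.protocol attribute to dispatch metric instances to a handler only for TCP traffic. The sketch below assumes the illustrative instance and handler names used elsewhere in this task (mongoreceivedbytes, mongohandler); the authoritative rule lives in the tcp-metrics.yaml sample.

```yaml
# Illustrative rule: send TCP metric instances to the Prometheus handler
# only when the connection is TCP (context.protocol distinguishes tcp from http).
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: mongoprom
  namespace: istio-system
spec:
  match: context.protocol == "tcp"
  actions:
  - handler: mongohandler
    instances:
    - mongoreceivedbytes
```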

Figure: TCP Attribute Flow — attribute generation for TCP services in an Istio mesh.

Cleanup

  • Remove the new telemetry configuration:

    ```shell
    $ kubectl delete -f @samples/bookinfo/telemetry/tcp-metrics.yaml@
    ```

    If you are using Istio 1.1.2 or earlier:

    ```shell
    $ kubectl delete -f @samples/bookinfo/telemetry/tcp-metrics-crd.yaml@
    ```
  • Remove the port-forward process:

    ```shell
    $ killall kubectl
    ```
  • If you are not planning to explore any follow-on tasks, refer to the Bookinfo cleanup instructions to shutdown the application.

See also

Collecting Metrics for TCP Services

This task shows you how to configure Istio to collect metrics for TCP services.

Classifying Metrics Based on Request or Response (Experimental)

This task shows you how to improve telemetry by grouping requests and responses by their type.

Collecting Metrics With Mixer

This task shows you how to configure Istio’s Mixer to collect and customize metrics.

Customizing Istio Metrics

This task shows you how to customize the Istio metrics.

Querying Metrics from Prometheus

This task shows you how to query for Istio Metrics using Prometheus.

Reworking our Addon Integrations

A new way to manage installation of telemetry addons.