Operator for Kubernetes


Understanding Operators

The Jaeger Operator is an implementation of a Kubernetes Operator. Operators are pieces of software that ease the operational complexity of running another piece of software. More technically, Operators are a method of packaging, deploying, and managing a Kubernetes application.

A Kubernetes application is an application that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl (Kubernetes) or oc (OKD) tooling. To make the most of Kubernetes, you need a set of cohesive APIs to extend in order to service and manage the applications that run on it. Think of Operators as the runtime that manages this type of application on Kubernetes.

Installing the Operator

The Jaeger Operator version tracks one version of the Jaeger components (Query, Collector, Agent). When a new version of the Jaeger components is released, a new version of the operator will be released that understands how running instances of the previous version can be upgraded to the new version.

Installing the Operator on Kubernetes

The following instructions will create the observability namespace and install the Jaeger Operator.

Make sure your kubectl command is properly configured to talk to a valid Kubernetes cluster. If you don’t have a cluster, you can create one locally using minikube.
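
If you need a local cluster, minikube can start one with a single command, for example:

  minikube start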

To install the operator, run:

  kubectl create namespace observability # <1>
  kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/crds/jaegertracing.io_jaegers_crd.yaml # <2>
  kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/service_account.yaml
  kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/role.yaml
  kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/role_binding.yaml
  kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/operator.yaml

<1> This creates the namespace used by default in the deployment files. If you want to install the Jaeger operator in a different namespace, you must edit the deployment files to change observability to the desired namespace value.

<2> This installs the “Custom Resource Definition” for the apiVersion: jaegertracing.io/v1
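
As a sketch for note <1>: if you prefer a namespace other than observability, you could rewrite the namespace in the manifests before applying them. The loop below is illustrative only; which files actually reference the namespace may vary between operator versions.

  NAMESPACE=mynamespace   # placeholder for your desired namespace
  kubectl create namespace "${NAMESPACE}"
  for f in service_account.yaml role.yaml role_binding.yaml operator.yaml; do
    curl -sL "https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/${f}" \
      | sed "s/observability/${NAMESPACE}/g" \
      | kubectl create -n "${NAMESPACE}" -f -
  done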

At this point, there should be a jaeger-operator deployment available. You can view it by running the following command:

  $ kubectl get deployment jaeger-operator -n observability
  NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
  jaeger-operator   1         1         1            1           48s

The operator is now ready to create Jaeger instances.

Installing the Operator on OKD/OpenShift

The instructions from the previous section also work for installing the operator on OKD or OpenShift. Make sure you are logged in as a privileged user when you install the role-based access control (RBAC) rules, the custom resource definition, and the operator.

  oc login -u <privileged user>
  oc new-project observability # <1>
  oc create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/crds/jaegertracing.io_jaegers_crd.yaml # <2>
  oc create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/service_account.yaml
  oc create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/role.yaml
  oc create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/role_binding.yaml
  oc create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/operator.yaml

<1> This creates the namespace used by default in the deployment files. If you want to install the Jaeger operator in a different namespace, you must edit the deployment files to change observability to the desired namespace value.

<2> This installs the “Custom Resource Definition” for the apiVersion: jaegertracing.io/v1

Once the operator is installed, grant the role jaeger-operator to users who should be able to install individual Jaeger instances. The following example creates a role binding allowing the user developer to create Jaeger instances:

  oc create \
    rolebinding developer-jaeger-operator \
    --role=jaeger-operator \
    --user=developer

After the role is granted, switch back to a non-privileged user.
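
For example, using the developer user from the role binding above:

  oc login -u developer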

Quick Start - Deploying the AllInOne image

The simplest possible way to create a Jaeger instance is by creating a YAML file like the following example. This will install the default AllInOne strategy, which deploys the “all-in-one” image (agent, collector, query, ingester, Jaeger UI) in a single pod, using in-memory storage by default.

This default strategy is intended for development, testing, and demo purposes, not for production.

  apiVersion: jaegertracing.io/v1
  kind: Jaeger
  metadata:
    name: simplest

The YAML file can then be used with kubectl:

  kubectl apply -f simplest.yaml

In a few seconds, a new in-memory all-in-one instance of Jaeger will be available, suitable for quick demos and development purposes. To check the instances that were created, list the jaeger objects:

  $ kubectl get jaegers
  NAME       CREATED AT
  simplest   28s

To get the pod name, query for the pods belonging to the simplest Jaeger instance:

  $ kubectl get pods -l app.kubernetes.io/instance=simplest
  NAME                        READY   STATUS    RESTARTS   AGE
  simplest-6499bb6cdd-kqx75   1/1     Running   0          2m

Similarly, the logs can be queried either from the pod directly using the pod name obtained from the previous example, or from all pods belonging to our instance:

  $ kubectl logs -l app.kubernetes.io/instance=simplest
  ...
  {"level":"info","ts":1535385688.0951214,"caller":"healthcheck/handler.go:133","msg":"Health Check state change","status":"ready"}

On OKD/OpenShift the container name must be specified.

  $ kubectl logs -l app.kubernetes.io/instance=simplest -c jaeger
  ...
  {"level":"info","ts":1535385688.0951214,"caller":"healthcheck/handler.go:133","msg":"Health Check state change","status":"ready"}

Deployment Strategies

When you create a Jaeger instance, it is associated with a strategy. The strategy is defined in the custom resource file, and determines the architecture to be used for the Jaeger backend. The default strategy is allInOne. The other possible values are production and streaming.

The available strategies are described in the following sections.

AllInOne (Default) strategy

This strategy is intended for development, testing, and demo purposes.

The main backend components (agent, collector, and query service) are all packaged into a single executable, which is configured by default to use in-memory storage.

Production strategy

The production strategy is intended (as the name suggests) for production environments, where long-term storage of trace data is important and a more scalable, highly available architecture is required. Each of the backend components is therefore deployed separately.

The agent can be injected as a sidecar on the instrumented application or deployed as a DaemonSet.

The query and collector services are configured with a supported storage type - currently Cassandra or Elasticsearch. Multiple instances of each of these components can be provisioned as required for performance and resilience purposes.

The main additional requirement is to provide the details of the storage type and options, for example:

  storage:
    type: elasticsearch
    options:
      es:
        server-urls: http://elasticsearch:9200
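
Putting it together, a minimal production-strategy custom resource might look like the following sketch (the instance name and the Elasticsearch URL are placeholders):

  apiVersion: jaegertracing.io/v1
  kind: Jaeger
  metadata:
    name: simple-prod
  spec:
    strategy: production
    storage:
      type: elasticsearch
      options:
        es:
          server-urls: http://elasticsearch:9200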

Streaming strategy

The streaming strategy is designed to augment the production strategy by providing a streaming capability that effectively sits between the collector and the backend storage (Cassandra or Elasticsearch). This reduces the pressure on the backend storage under high load and enables other trace post-processing capabilities to tap into the real-time span data directly from the streaming platform (Kafka).

The only additional information required is to provide the details for accessing the Kafka platform, which is configured in the collector component (as producer) and ingester component (as consumer):

  apiVersion: jaegertracing.io/v1
  kind: Jaeger
  metadata:
    name: simple-streaming
  spec:
    strategy: streaming
    collector:
      options:
        kafka: # <1>
          producer:
            topic: jaeger-spans
            brokers: my-cluster-kafka-brokers.kafka:9092
    ingester:
      options:
        kafka: # <1>
          consumer:
            topic: jaeger-spans
            brokers: my-cluster-kafka-brokers.kafka:9092
        ingester:
          deadlockInterval: 0 # <2>
    storage:
      type: elasticsearch
      options:
        es:
          server-urls: http://elasticsearch:9200

<1> Identifies the Kafka configuration used by the collector, to produce the messages, and the ingester to consume the messages.

<2> The deadlock interval can be disabled to avoid the ingester being terminated when no messages arrive within the default 1-minute period.

A Kafka environment can be configured using Strimzi’s Kafka operator.
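
As a rough, non-authoritative sketch, a small Strimzi-managed cluster matching the broker address used above (cluster my-cluster in the kafka namespace) could be declared with a Kafka custom resource similar to the following; the apiVersion and available fields depend on the Strimzi release you install, so treat every value here as an assumption to be checked against the Strimzi documentation:

  apiVersion: kafka.strimzi.io/v1beta1   # version differs between Strimzi releases
  kind: Kafka
  metadata:
    name: my-cluster
    namespace: kafka
  spec:
    kafka:
      replicas: 3
      listeners:
        plain: {}
      storage:
        type: ephemeral
    zookeeper:
      replicas: 3
      storage:
        type: ephemeral
    entityOperator:
      topicOperator: {}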

Understanding Custom Resource Definitions

In the Kubernetes API, a resource is an endpoint that stores a collection of API objects of a certain kind. For example, the built-in Pods resource contains a collection of Pod objects. A Custom Resource Definition (CRD) object defines a new, unique object Kind in the cluster and lets the Kubernetes API server handle its entire lifecycle.

To create Custom Resource (CR) objects, cluster administrators must first create a Custom Resource Definition (CRD). The CRDs allow cluster users to create CRs to add the new resource types into their projects. An Operator watches for custom resource objects to be created, and when it sees a custom resource being created, it creates the application based on the parameters defined in the custom resource object.

While only cluster administrators can create CRDs, developers can create the CR from an existing CRD if they have read and write permission to it.
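
For illustration, a namespaced Role granting permission to manage Jaeger custom resources could look like the following sketch (the role name and namespace are placeholders; bind it to a user or group with a RoleBinding):

  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: jaeger-editor        # placeholder name
    namespace: myproject       # placeholder namespace
  rules:
  - apiGroups: ["jaegertracing.io"]
    resources: ["jaegers"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]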

For reference, here’s how you can create a more complex all-in-one instance:

  apiVersion: jaegertracing.io/v1
  kind: Jaeger
  metadata:
    name: my-jaeger
  spec:
    strategy: allInOne # <1>
    allInOne:
      image: jaegertracing/all-in-one:latest # <2>
      options: # <3>
        log-level: debug # <4>
    storage:
      type: memory # <5>
      options: # <6>
        memory: # <7>
          max-traces: 100000
    ingress:
      enabled: false # <8>
    agent:
      strategy: DaemonSet # <9>
    annotations:
      scheduler.alpha.kubernetes.io/critical-pod: "" # <10>

<1> The default strategy is allInOne. The other possible values are production and streaming.

<2> The image to use, in regular Docker syntax.

<3> The (non-storage-related) options to be passed verbatim to the underlying binary. Refer to the Jaeger documentation and/or to the --help option of the related binary for all the available options.

<4> The options are defined as a simple key: value map. In this case, we want the option --log-level=debug to be passed to the binary.

<5> The storage type to be used. By default it will be memory, but can be any other supported storage type (Cassandra, Elasticsearch, Kafka).

<6> All storage related options should be placed here, rather than under the ‘allInOne’ or other component options.

<7> Some options are namespaced and we can alternatively break them into nested objects. We could have specified memory.max-traces: 100000.

<8> By default, an ingress object is created for the query service. It can be disabled by setting its enabled option to false. If deploying on OpenShift, this will be represented by a Route object.

<9> By default, the operator assumes that agents are deployed as sidecars within the target pods. Specifying the strategy as “DaemonSet” changes that and makes the operator deploy the agent as DaemonSet. Note that your tracer client will probably have to override the “JAEGER_AGENT_HOST” environment variable to use the node’s IP.

<10> Define annotations to be applied to all deployments (not services). These can be overridden by annotations defined on the individual components.

You can view example custom resources for different Jaeger configurations on GitHub.

Configuring the Custom Resource

You can use the simplest example (shown above) and create a Jaeger instance using the defaults, or you can create your own custom resource file.

Storage options

Cassandra storage

When the storage type is set to Cassandra, the operator will automatically create a batch job that creates the required schema for Jaeger to run. This batch job blocks the Jaeger installation, so that it starts only after the schema is successfully created. The creation of this batch job can be disabled by setting the enabled property to false:

  apiVersion: jaegertracing.io/v1
  kind: Jaeger
  metadata:
    name: cassandra-without-create-schema
  spec:
    strategy: allInOne
    storage:
      type: cassandra
      cassandraCreateSchema:
        enabled: false # <1>

<1> Defaults to true

Further aspects of the batch job can be configured as well. An example with all the possible options is shown below:

  apiVersion: jaegertracing.io/v1
  kind: Jaeger
  metadata:
    name: cassandra-with-create-schema
  spec:
    strategy: allInOne # <1>
    storage:
      type: cassandra
      options: # <2>
        cassandra:
          servers: cassandra
          keyspace: jaeger_v1_datacenter3
      cassandraCreateSchema: # <3>
        datacenter: "datacenter3"
        mode: "test"

<1> The same works for production and streaming.

<2> These options are for the regular Jaeger components, like collector and query.

<3> The options for the create-schema job.

The default create-schema job uses MODE=prod, which implies a replication factor of 2 using NetworkTopologyStrategy as the class, effectively meaning that at least 3 nodes are required in the Cassandra cluster. If a SimpleStrategy is desired, set the mode to test, which then sets the replication factor to 1. Refer to the create-schema script for more details.

Elasticsearch storage

By default, Elasticsearch storage does not require any initialization job to be run. However, Elasticsearch storage requires a cron job to be run to clean old data from the storage.

When rollover (es.use-aliases) is enabled, the Jaeger Operator also deploys a job to initialize the Elasticsearch storage and another two cron jobs to perform the required index management actions.

External Elasticsearch

Jaeger can be used with an external Elasticsearch cluster. The following example shows a Jaeger CR using an external Elasticsearch cluster with a TLS CA certificate mounted from a volume and user/password stored in a secret.

  apiVersion: jaegertracing.io/v1
  kind: Jaeger
  metadata:
    name: simple-prod
  spec:
    strategy: production
    storage:
      type: elasticsearch # <1>
      options:
        es:
          server-urls: https://elasticsearch.default.svc:9200 # <2>
          tls: # <3>
            ca: /es/certificates/root-ca.pem
      secretName: jaeger-secret # <4>
    volumeMounts: # <5>
    - name: certificates
      mountPath: /es/certificates/
      readOnly: true
    volumes:
    - name: certificates
      secret:
        secretName: quickstart-es-http-certs-public

<1> Storage type Elasticsearch.

<2> URL to the Elasticsearch service running in the default namespace.

<3> TLS configuration. In this case only the CA certificate, but it can also contain es.tls.key and es.tls.cert when using mutual TLS.

<4> Secret which defines the environment variables ES_PASSWORD and ES_USERNAME. Created by kubectl create secret generic jaeger-secret --from-literal=ES_PASSWORD=changeme --from-literal=ES_USERNAME=elastic

<5> Volume mounts and volumes which are mounted into all storage components.

Self provisioned

Under some circumstances, the Jaeger Operator can make use of the Elasticsearch Operator to provision a suitable Elasticsearch cluster.

This feature is supported only on OKD/OpenShift clusters. Spark dependencies are not supported with this feature (see Issue #294).

When there is no es.server-urls option as part of a Jaeger production instance and elasticsearch is set as the storage type, the Jaeger Operator creates an Elasticsearch cluster via the Elasticsearch Operator by creating a Custom Resource based on the configuration provided in storage section. The Elasticsearch cluster is meant to be dedicated for a single Jaeger instance.
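
For example, a sketch of a production instance that lets the operator provision Elasticsearch could look like the following; the elasticsearch node count, redundancy policy, and resource values are illustrative assumptions, so check the operator's bundled examples for the fields supported by your version:

  apiVersion: jaegertracing.io/v1
  kind: Jaeger
  metadata:
    name: simple-prod
  spec:
    strategy: production
    storage:
      type: elasticsearch     # no es.server-urls option, so the operator provisions a cluster
      elasticsearch:
        nodeCount: 3
        redundancyPolicy: SingleRedundancy
        resources:
          requests:
            memory: 1Gi
          limits:
            memory: 1Gi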

The self-provisioning of an Elasticsearch cluster can be disabled by setting the flag --es-provision to false. The default value is auto, which makes the Jaeger Operator query the Kubernetes cluster for its ability to handle an Elasticsearch custom resource. This capability is usually set up by the Elasticsearch Operator during its installation process, so if the Elasticsearch Operator is expected to run after the Jaeger Operator, the flag can be set to true.

At the moment there can be only one Jaeger instance with self-provisioned Elasticsearch per namespace.

Elasticsearch index cleaner job

When using Elasticsearch storage, a cron job is created by default to clean old traces from it. The options for this job are listed below so you can configure it for your use case.

  storage:
    type: elasticsearch
    esIndexCleaner:
      enabled: true             # turn the cron job deployment on and off
      numberOfDays: 7           # number of days to wait before deleting a record
      schedule: "55 23 * * *"   # cron expression for when to run

The connection configuration to storage is derived from storage options.

Elasticsearch rollover

This index management strategy is more complicated than using the default daily indices, and it requires an initialization job to prepare the storage and two cron jobs to manage the indices. The first cron job is used for rolling over to a new index and the second for removing indices from the read alias. The rollover feature is used when the storage option es.use-aliases is enabled.

To learn more about rollover index management in Jaeger refer to this article.

  storage:
    type: elasticsearch
    options:
      es:
        use-aliases: true
    esRollover:
      enabled: true                         # turn the cron job deployment on and off
      conditions: "{\"max_age\": \"2d\"}"   # conditions for rolling over to a new index
      readTTL: 7d                           # how long old data should remain available for reading
      schedule: "55 23 * * *"               # cron expression for when to run

The connection configuration to storage is derived from storage options.

Deriving dependencies

The processing to derive dependencies collects spans from storage, analyzes links between services, and stores them for later presentation in the UI. This job can only be used with the production strategy and storage type cassandra or elasticsearch.

  storage:
    type: elasticsearch
    dependencies:
      enabled: true             # turn the job deployment on and off
      schedule: "55 23 * * *"   # cron expression for when to run
      sparkMaster:              # Spark master connection string; when empty, Spark runs in embedded local mode

The connection configuration to storage is derived from storage options.

Auto-injecting Jaeger Agent Sidecars

The operator can inject Jaeger Agent sidecars in Deployment workloads, provided that the deployment has the annotation sidecar.jaegertracing.io/inject with a suitable value. The values can be either "true" (as string), or the Jaeger instance name, as returned by kubectl get jaegers. When "true" is used, there should be exactly one Jaeger instance for the same namespace as the deployment, otherwise, the operator can’t figure out automatically which Jaeger instance to use.

The following snippet shows a simple application that will get a sidecar injected, with the Jaeger Agent pointing to the single Jaeger instance available in the same namespace:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: myapp
    annotations:
      "sidecar.jaegertracing.io/inject": "true" # <1>
  spec:
    selector:
      matchLabels:
        app: myapp
    template:
      metadata:
        labels:
          app: myapp
      spec:
        containers:
        - name: myapp
          image: acme/myapp:myversion

<1> Either "true" (as string) or the Jaeger instance name.

A complete sample deployment is available at deploy/examples/business-application-injected-sidecar.yaml.

When the sidecar is injected, the Jaeger Agent can then be accessed at its default location on localhost.

Installing the Agent as DaemonSet

By default, the Operator expects the agents to be deployed as sidecars to the target applications. This is convenient for several purposes, like in a multi-tenant scenario or to have better load balancing, but there are scenarios where you might want to install the agent as a DaemonSet. In that case, set the Agent's strategy to DaemonSet, as follows:

  apiVersion: jaegertracing.io/v1
  kind: Jaeger
  metadata:
    name: my-jaeger
  spec:
    agent:
      strategy: DaemonSet

If you attempt to install two Jaeger instances on the same cluster with DaemonSet as the strategy, only one will end up deploying a DaemonSet, as the agent is required to bind to well-known ports on the node. Because of that, the second DaemonSet will fail to bind to those ports.

Your tracer client will then most likely need to be told where the agent is located. This is usually done by setting the environment variable JAEGER_AGENT_HOST to the value of the Kubernetes node’s IP, for example:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: myapp
  spec:
    selector:
      matchLabels:
        app: myapp
    template:
      metadata:
        labels:
          app: myapp
      spec:
        containers:
        - name: myapp
          image: acme/myapp:myversion
          env:
          - name: JAEGER_AGENT_HOST
            valueFrom:
              fieldRef:
                fieldPath: status.hostIP

OpenShift

In OpenShift, a HostPort can only be set when a special security context is set. A separate service account can be used by the Jaeger Agent with the permission to bind to HostPort, as follows:

  oc create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/examples/openshift/hostport-scc-daemonset.yaml # <1>
  oc new-project myappnamespace
  oc create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/examples/openshift/service_account_jaeger-agent-daemonset.yaml # <2>
  oc adm policy add-scc-to-user daemonset-with-hostport -z jaeger-agent-daemonset # <3>
  oc apply -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/examples/openshift/agent-as-daemonset.yaml # <4>

<1> The SecurityContextConstraints with the allowHostPorts policy

<2> The ServiceAccount to be used by the Jaeger Agent

<3> Adds the security policy to the service account

<4> Creates the Jaeger Instance using the serviceAccount created in the steps above

Without such a policy, errors like the following will prevent a DaemonSet from being created: Warning FailedCreate 4s (x14 over 45s) daemonset-controller Error creating: pods "agent-as-daemonset-agent-daemonset-" is forbidden: unable to validate against any security context constraint: [spec.containers[0].securityContext.containers[0].hostPort: Invalid value: 5775: Host ports are not allowed to be used

After a few seconds, the DaemonSet should be up and running:

  $ oc get daemonset agent-as-daemonset-agent-daemonset
  NAME                                 DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE
  agent-as-daemonset-agent-daemonset   1         1         1       1            1

Secrets Support

The Operator supports passing secrets to the Collector, Query and All-In-One deployments. This can be used, for example, to pass credentials (username/password) to access the underlying storage backend (for example, Elasticsearch). The secrets are available as environment variables in the (Collector/Query/All-In-One) nodes.

  storage:
    type: elasticsearch
    options:
      es:
        server-urls: http://elasticsearch:9200
    secretName: jaeger-secrets

The secret itself would be managed outside of the jaeger-operator custom resource.

Configuring the UI

Information on the various configuration options for the UI can be found here, defined in JSON format.

To apply UI configuration changes within the Custom Resource, the same information can be included in YAML format as shown below:

  ui:
    options:
      dependencies:
        menuEnabled: false
      tracking:
        gaID: UA-000000-2
      menu:
      - label: "About Jaeger"
        items:
          - label: "Documentation"
            url: "https://www.jaegertracing.io/docs/latest"
      linkPatterns:
      - type: "logs"
        key: "customer_id"
        url: /search?limit=20&lookback=1h&service=frontend&tags=%7B%22customer_id%22%3A%22#{customer_id}%22%7D
        text: "Search for other traces for customer_id=#{customer_id}"

Defining Sampling Strategies

This is not relevant if a trace was started by the Istio proxy, as the sampling decision is made there. Jaeger sampling decisions are only relevant when you are using the Jaeger tracer (client).

The operator can be used to define sampling strategies that will be supplied to tracers that have been configured to use a remote sampler:

  apiVersion: jaegertracing.io/v1
  kind: Jaeger
  metadata:
    name: with-sampling
  spec:
    strategy: allInOne
    sampling:
      options:
        default_strategy:
          type: probabilistic
          param: 50

This example defines a default sampling strategy that is probabilistic, with a 50% chance of the trace instances being sampled.

Refer to the Jaeger documentation on Collector Sampling Configuration to see how service and endpoint sampling can be configured. The JSON representation described in that documentation can be used in the operator by converting to YAML.
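
For example, the service_strategies portion of that JSON format could be expressed in the custom resource roughly as follows; the service names and parameter values are purely illustrative and follow the conventions of the collector sampling documentation:

  apiVersion: jaegertracing.io/v1
  kind: Jaeger
  metadata:
    name: with-service-sampling
  spec:
    sampling:
      options:
        default_strategy:
          type: probabilistic
          param: 0.5
        service_strategies:
        - service: foo
          type: probabilistic
          param: 0.8
          operation_strategies:
          - operation: op1
            type: probabilistic
            param: 0.2
        - service: bar
          type: ratelimiting
          param: 10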

Finer grained configuration

The custom resource can be used to define finer grained Kubernetes configuration applied to all Jaeger components or at the individual component level.

When a common definition (for all Jaeger components) is required, it is defined under the spec node. When the definition relates to an individual component, it is placed under the spec/<component> node.

The types of supported configuration include:

  apiVersion: jaegertracing.io/v1
  kind: Jaeger
  metadata:
    name: simple-prod
  spec:
    strategy: production
    storage:
      type: elasticsearch
      options:
        es:
          server-urls: http://elasticsearch:9200
    annotations:
      key1: value1
    labels:
      key2: value2
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/e2e-az-name
              operator: In
              values:
              - e2e-az1
              - e2e-az2
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
            - key: another-node-label-key
              operator: In
              values:
              - another-node-label-value
    tolerations:
    - key: "key1"
      operator: "Equal"
      value: "value1"
      effect: "NoSchedule"
    - key: "key1"
      operator: "Equal"
      value: "value1"
      effect: "NoExecute"
    serviceAccount: nameOfServiceAccount
    securityContext:
      runAsUser: 1000
    volumeMounts:
    - name: config-vol
      mountPath: /etc/config
    volumes:
    - name: config-vol
      configMap:
        name: log-config
        items:
        - key: log_level
          path: log_level

Accessing the Jaeger Console (UI)

Kubernetes

The operator creates a Kubernetes Ingress resource, which is the standard Kubernetes way of exposing a service to the outside world, but by default Kubernetes does not ship with an Ingress provider. Check the Kubernetes documentation for the most appropriate way to set up an Ingress provider for your platform. The following command enables the Ingress provider on minikube:

  minikube addons enable ingress

Once Ingress is enabled, the address for the Jaeger console can be found by querying the Ingress object:

  $ kubectl get ingress
  NAME             HOSTS   ADDRESS          PORTS   AGE
  simplest-query   *       192.168.122.34   80      3m

In this example, the Jaeger UI is available at http://192.168.122.34.

To enable TLS in the Ingress, pass a secretName with the name of a Secret containing the TLS certificate:

  apiVersion: jaegertracing.io/v1
  kind: Jaeger
  metadata:
    name: ingress-with-tls
  spec:
    ingress:
      secretName: my-tls-secret

OpenShift

When the Operator is running on OpenShift, it will automatically create a Route object for the query service. Use the following command to check the hostname/port:

  oc get routes

Make sure to use https with the hostname/port you get from the command above, otherwise you’ll see a message like: “Application is not available”.

By default, the Jaeger UI is protected with OpenShift’s OAuth service and any valid user is able to login. To disable this feature and leave the Jaeger UI unsecured, set the Ingress property security to none in the custom resource file:

  apiVersion: jaegertracing.io/v1
  kind: Jaeger
  metadata:
    name: disable-oauth-proxy
  spec:
    ingress:
      security: none

Custom SAR and Delegate URL values can be specified as part of .Spec.Ingress.OpenShift.SAR and .Spec.Ingress.OpenShift.DelegateURLs, as follows:

  apiVersion: jaegertracing.io/v1
  kind: Jaeger
  metadata:
    name: custom-sar-oauth-proxy
  spec:
    ingress:
      openshift:
        sar: '{"namespace": "default", "resource": "pods", "verb": "get"}'
        delegateUrls: '{"/":{"namespace": "default", "resource": "pods", "verb": "get"}}'

When the delegateUrls is set, the Jaeger Operator needs to create a new ClusterRoleBinding between the service account used by the UI Proxy ({InstanceName}-ui-proxy) and the role system:auth-delegator, as required by the OpenShift OAuth Proxy. Because of that, the service account used by the operator itself needs to have the same cluster role binding. To accomplish that, a ClusterRoleBinding such as the following has to be created:

  kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: jaeger-operator-with-auth-delegator
    namespace: observability
  subjects:
  - kind: ServiceAccount
    name: jaeger-operator
    namespace: observability
  roleRef:
    kind: ClusterRole
    name: system:auth-delegator
    apiGroup: rbac.authorization.k8s.io

Cluster administrators who are not comfortable letting users deploy Jaeger instances with this cluster role are free to not add it to the operator's service account. In that case, the Operator will auto-detect that the required permissions are missing and will log a message similar to: the requested instance specifies the delegateUrls option for the OAuth Proxy, but this operator cannot assign the proper cluster role to it (system:auth-delegator). Create a cluster role binding between the operator's service account and the cluster role 'system:auth-delegator' in order to allow instances to use 'delegateUrls'.

The Jaeger Operator also supports authentication using htpasswd files via the OpenShift OAuth Proxy. To make use of that, specify the htpasswdFile option within the OpenShift-specific entries, pointing to the htpasswd file location on the local disk. The htpasswd file can be created using the htpasswd utility:

  $ htpasswd -cs /tmp/htpasswd jdoe
  New password:
  Re-type new password:
  Adding password for user jdoe

This file can then be used as the input for the kubectl create secret command:

  $ kubectl create secret generic htpasswd --from-file=htpasswd=/tmp/htpasswd
  secret/htpasswd created

Once the secret is created, it can be specified in the Jaeger CR as a volume/volume mount:

  apiVersion: jaegertracing.io/v1
  kind: Jaeger
  metadata:
    name: with-htpasswd
  spec:
    ingress:
      openshift:
        sar: '{"namespace": "default", "resource": "pods", "verb": "get"}'
        htpasswdFile: /usr/local/data/htpasswd
    volumeMounts:
    - name: htpasswd-volume
      mountPath: /usr/local/data
    volumes:
    - name: htpasswd-volume
      secret:
        secretName: htpasswd

Upgrading the Operator and its managed instances

Each version of the Jaeger Operator follows one Jaeger version. Whenever a new version of the Jaeger Operator is installed, all the Jaeger instances managed by the operator will be upgraded to the Operator’s supported version. For example, an instance named simplest that was created with Jaeger Operator 1.12.0 will be running Jaeger 1.12.0. Once the Jaeger Operator is upgraded to 1.13.0, the instance simplest will be upgraded to the version 1.13.0, following the official upgrade instructions from the Jaeger project.

The Jaeger Operator can be upgraded manually by changing the deployment (kubectl edit deployment jaeger-operator), or via specialized tools such as the Operator Lifecycle Manager (OLM).
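
For example, a manual upgrade could be sketched as follows, assuming the deployment's container is named jaeger-operator and 1.13.0 is the target version:

  kubectl set image deployment/jaeger-operator jaeger-operator=jaegertracing/jaeger-operator:1.13.0 -n observability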

Updating a Jaeger instance (experimental)

A Jaeger instance can be updated by changing the CustomResource, either via kubectl edit jaeger simplest, where simplest is the Jaeger’s instance name, or by applying the updated YAML file via kubectl apply -f simplest.yaml.

The name of the Jaeger instance cannot be updated, as it is part of the identifying information for the resource.

Simpler changes such as changing the replica sizes can be applied without much concern, whereas changes to the strategy should be watched closely and might potentially cause an outage for individual components (collector/query/agent).

While changing the backing storage is supported, migration of the data is not.

Removing a Jaeger instance

To remove an instance, use the delete command with the custom resource file used when you created the instance:

  1. kubectl delete -f simplest.yaml

Alternatively, you can remove a Jaeger instance by running:

  1. kubectl delete jaeger simplest

Deleting the instance will not remove the data from any permanent storage used with this instance. Data from in-memory instances, however, will be lost.

Monitoring the operator

The Jaeger Operator starts a Prometheus-compatible endpoint on 0.0.0.0:8383/metrics with internal metrics that can be used to monitor the process.
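
To take a quick look at these metrics, you can port-forward to the operator and scrape the endpoint manually, for example:

  kubectl port-forward deployment/jaeger-operator 8383:8383 -n observability &
  curl -s http://localhost:8383/metrics | head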

The Jaeger Operator does not yet publish its own metrics. Rather, it makes available metrics reported by the components it uses, such as the Operator SDK.

Uninstalling the operator

To uninstall the operator, run the following commands:

  kubectl delete -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/operator.yaml
  kubectl delete -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/role_binding.yaml
  kubectl delete -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/role.yaml
  kubectl delete -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/service_account.yaml
  kubectl delete -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/crds/jaegertracing.io_jaegers_crd.yaml