Troubleshooting

This section provides resolution steps for common problems reported with the linkerd check command.

The “pre-kubernetes-cluster-setup” checks

These checks only run when the --pre flag is set. This flag is intended for use prior to running linkerd install, to verify your cluster is prepared for installation.

√ control plane namespace does not already exist

Example failure:

  1. × control plane namespace does not already exist
  2. The "linkerd" namespace already exists

By default, linkerd install will create a linkerd namespace. Prior to installation, that namespace should not exist. To check with a different namespace, run:

  1. linkerd check --pre --linkerd-namespace linkerd-test

√ can create Kubernetes resources

The subsequent checks in this section validate whether you have permission to create the Kubernetes resources required for Linkerd installation, specifically:

  1. can create Namespaces
  2. can create ClusterRoles
  3. can create ClusterRoleBindings
  4. can create CustomResourceDefinitions
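
If you want to pre-verify these permissions yourself, kubectl auth can-i reports whether your current credentials allow each operation. A minimal sketch, looping over the resource types listed above:

  1. # ask the API server whether the current user may create each resource type
  2. for resource in namespaces clusterroles clusterrolebindings customresourcedefinitions; do
  3. echo -n "can create ${resource}: "
  4. kubectl auth can-i create "${resource}"
  5. done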

The “pre-kubernetes-setup” checks

These checks only run when the --pre flag is set. This flag is intended for use prior to running linkerd install, to verify you have the correct RBAC permissions to install Linkerd.

  1. can create Namespaces
  2. can create ClusterRoles
  3. can create ClusterRoleBindings
  4. can create CustomResourceDefinitions
  5. can create PodSecurityPolicies
  6. can create ServiceAccounts
  7. can create Services
  8. can create Deployments
  9. can create ConfigMaps

√ no clock skew detected

This check verifies whether there is clock skew between the system running the linkerd install command and the Kubernetes node(s), causing potential issues.
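
One way to eyeball the skew yourself is to compare your local clock with the timestamps the nodes last reported; a minimal sketch, using the lastHeartbeatTime of each node's Ready condition:

  1. # local time, in UTC
  2. date -u
  3. # last heartbeat reported by each node
  4. kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].lastHeartbeatTime}{"\n"}{end}'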

The “pre-kubernetes-capability” checks

These checks only run when the --pre flag is set. This flag is intended for use prior to running linkerd install, to verify you have the correct Kubernetes capability permissions to install Linkerd.

√ has NET_ADMIN capability

Example failure:

  1. × has NET_ADMIN capability
  2. found 3 PodSecurityPolicies, but none provide NET_ADMIN
  3. see https://linkerd.io/checks/#pre-k8s-cluster-net-admin for hints

Linkerd installation requires the NET_ADMIN Kubernetes capability, to allow for modification of iptables.

For more information, see the Kubernetes documentation on Pod Security Policies, Security Contexts, and the man page on Linux Capabilities.
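
To see which capabilities your existing PodSecurityPolicies allow, you can list their allowedCapabilities fields; a minimal sketch:

  1. # print each PodSecurityPolicy and the capabilities it allows
  2. kubectl get podsecuritypolicies -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.allowedCapabilities}{"\n"}{end}'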

√ has NET_RAW capability

Example failure:

  1. × has NET_RAW capability
  2. found 3 PodSecurityPolicies, but none provide NET_RAW
  3. see https://linkerd.io/checks/#pre-k8s-cluster-net-raw for hints

Linkerd installation requires the NET_RAW Kubernetes capability, to allow for modification of iptables.

For more information, see the Kubernetes documentation on Pod Security Policies, Security Contexts, and the man page on Linux Capabilities.

The “pre-linkerd-global-resources” checks

These checks only run when the --pre flag is set. This flag is intended for use prior to running linkerd install, to verify you have not already installed the Linkerd control plane.

  1. no ClusterRoles exist
  2. no ClusterRoleBindings exist
  3. no CustomResourceDefinitions exist
  4. no MutatingWebhookConfigurations exist
  5. no ValidatingWebhookConfigurations exist
  6. no PodSecurityPolicies exist
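
To see whether any such resources are left over from a previous installation, a quick (if coarse) way is to search for linkerd across these cluster-wide resource types; a minimal sketch:

  1. # any output here means Linkerd cluster-wide resources already exist
  2. kubectl get clusterroles,clusterrolebindings,customresourcedefinitions,mutatingwebhookconfigurations,validatingwebhookconfigurations,podsecuritypolicies 2>/dev/null | grep linkerd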

The “pre-kubernetes-single-namespace-setup” checks

If you do not expect to have the permission for a full cluster install, try the --single-namespace flag, which validates whether Linkerd can be installed in a single namespace, with limited cluster access:

  1. linkerd check --pre --single-namespace

√ control plane namespace exists

Example failure:

  1. × control plane namespace exists
  2. The "linkerd" namespace does not exist

In --single-namespace mode, linkerd check assumes that the installer does not have permission to create a namespace, so the installation namespace must already exist.
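
If someone with the necessary permissions can create the namespace ahead of time, that is enough to satisfy this check; a minimal sketch for the default linkerd namespace:

  1. kubectl create namespace linkerd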

By default the linkerd namespace is used. To use a different namespace run:

  1. linkerd check --pre --single-namespace --linkerd-namespace linkerd-test

√ can create Kubernetes resources

The subsequent checks in this section validate whether you have permission to create the Kubernetes resources required for Linkerd --single-namespace installation, specifically:

  1. can create Roles
  2. can create RoleBindings
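
As with the cluster-wide permissions, you can pre-verify these yourself; a minimal sketch, assuming the default linkerd namespace:

  1. kubectl -n linkerd auth can-i create roles
  2. kubectl -n linkerd auth can-i create rolebindings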

For more information on cluster access, see the GKE Setup section above.

The “kubernetes-api” checks

Example failures:

  1. × can initialize the client
  2. error configuring Kubernetes API client: stat badconfig: no such file or directory
  3. × can query the Kubernetes API
  4. Get https://8.8.8.8/version: dial tcp 8.8.8.8:443: i/o timeout

Ensure that your system is configured to connect to a Kubernetes cluster. Validate that the KUBECONFIG environment variable is set properly, and/or ~/.kube/config points to a valid cluster.

For more information, see these pages in the Kubernetes documentation:

Also verify that these commands work:

  1. kubectl config view
  2. kubectl cluster-info
  3. kubectl version

Another example failure:

  1. can query the Kubernetes API
  2. Get REDACTED/version: x509: certificate signed by unknown authority

As an (unsafe) workaround to this, you may try:

  1. kubectl config set-cluster ${KUBE_CONTEXT} --insecure-skip-tls-verify=true \
  2. --server=${KUBE_CONTEXT}

The “kubernetes-version” checks

√ is running the minimum Kubernetes API version

Example failure:

  1. × is running the minimum Kubernetes API version
  2. Kubernetes is on version [1.7.16], but version [1.13.0] or more recent is required

Linkerd requires at least version 1.13.0. Verify your cluster version with:

  1. kubectl version

√ is running the minimum kubectl version

Example failure:

  1. × is running the minimum kubectl version
  2. kubectl is on version [1.9.1], but version [1.13.0] or more recent is required
  3. see https://linkerd.io/checks/#kubectl-version for hints

Linkerd requires at least version 1.13.0. Verify your kubectl version with:

  1. kubectl version --client --short

To fix, please update your kubectl version.

For more information on upgrading Kubernetes, see the page in the Kubernetes documentation on Upgrading a cluster.

The “linkerd-config” checks

This category of checks validates that Linkerd's cluster-wide RBAC and related resources have been installed. These checks run via a default linkerd check, and also in the context of a multi-stage setup, for example:

  1. # install cluster-wide resources (first stage)
  2. linkerd install config | kubectl apply -f -
  3. # validate successful cluster-wide resources installation
  4. linkerd check config
  5. # install Linkerd control plane
  6. linkerd install control-plane | kubectl apply -f -
  7. # validate successful control-plane installation
  8. linkerd check

√ control plane Namespace exists

Example failure:

  1. × control plane Namespace exists
  2. The "foo" namespace does not exist
  3. see https://linkerd.io/checks/#l5d-existence-ns for hints

Ensure the Linkerd control plane namespace exists:

  1. kubectl get ns

The default control plane namespace is linkerd. If you installed Linkerd into a different namespace, specify that in your check command:

  1. linkerd check --linkerd-namespace linkerdtest

√ control plane ClusterRoles exist

Example failure:

  1. × control plane ClusterRoles exist
  2. missing ClusterRoles: linkerd-linkerd-controller
  3. see https://linkerd.io/checks/#l5d-existence-cr for hints

Ensure the Linkerd ClusterRoles exist:

  1. $ kubectl get clusterroles | grep linkerd
  2. linkerd-linkerd-controller 9d
  3. linkerd-linkerd-identity 9d
  4. linkerd-linkerd-prometheus 9d
  5. linkerd-linkerd-proxy-injector 20d
  6. linkerd-linkerd-sp-validator 9d

Also ensure you have permission to create ClusterRoles:

  1. $ kubectl auth can-i create clusterroles
  2. yes

√ control plane ClusterRoleBindings exist

Example failure:

  1. × control plane ClusterRoleBindings exist
  2. missing ClusterRoleBindings: linkerd-linkerd-controller
  3. see https://linkerd.io/checks/#l5d-existence-crb for hints

Ensure the Linkerd ClusterRoleBindings exist:

  1. $ kubectl get clusterrolebindings | grep linkerd
  2. linkerd-linkerd-controller 9d
  3. linkerd-linkerd-identity 9d
  4. linkerd-linkerd-prometheus 9d
  5. linkerd-linkerd-proxy-injector 20d
  6. linkerd-linkerd-sp-validator 9d

Also ensure you have permission to create ClusterRoleBindings:

  1. $ kubectl auth can-i create clusterrolebindings
  2. yes

√ control plane ServiceAccounts exist

Example failure:

  1. × control plane ServiceAccounts exist
  2. missing ServiceAccounts: linkerd-controller
  3. see https://linkerd.io/checks/#l5d-existence-sa for hints

Ensure the Linkerd ServiceAccounts exist:

  1. $ kubectl -n linkerd get serviceaccounts
  2. NAME SECRETS AGE
  3. default 1 23m
  4. linkerd-controller 1 23m
  5. linkerd-grafana 1 23m
  6. linkerd-identity 1 23m
  7. linkerd-prometheus 1 23m
  8. linkerd-proxy-injector 1 7m
  9. linkerd-sp-validator 1 23m
  10. linkerd-web 1 23m

Also ensure you have permission to create ServiceAccounts in the Linkerd namespace:

  1. $ kubectl -n linkerd auth can-i create serviceaccounts
  2. yes

√ control plane CustomResourceDefinitions exist

Example failure:

  1. × control plane CustomResourceDefinitions exist
  2. missing CustomResourceDefinitions: serviceprofiles.linkerd.io
  3. see https://linkerd.io/checks/#l5d-existence-crd for hints

Ensure the Linkerd CRD exists:

  1. $ kubectl get customresourcedefinitions
  2. NAME CREATED AT
  3. serviceprofiles.linkerd.io 2019-04-25T21:47:31Z

Also ensure you have permission to create CRDs:

  1. $ kubectl auth can-i create customresourcedefinitions
  2. yes

√ control plane MutatingWebhookConfigurations exist

Example failure:

  1. × control plane MutatingWebhookConfigurations exist
  2. missing MutatingWebhookConfigurations: linkerd-proxy-injector-webhook-config
  3. see https://linkerd.io/checks/#l5d-existence-mwc for hints

Ensure the Linkerd MutatingWebhookConfiguration exists:

  1. $ kubectl get mutatingwebhookconfigurations | grep linkerd
  2. linkerd-proxy-injector-webhook-config 2019-07-01T13:13:26Z

Also ensure you have permission to create MutatingWebhookConfigurations:

  1. $ kubectl auth can-i create mutatingwebhookconfigurations
  2. yes

√ control plane ValidatingWebhookConfigurations exist

Example failure:

  1. × control plane ValidatingWebhookConfigurations exist
  2. missing ValidatingWebhookConfigurations: linkerd-sp-validator-webhook-config
  3. see https://linkerd.io/checks/#l5d-existence-vwc for hints

Ensure the Linkerd ValidatingWebhookConfiguration exists:

  1. $ kubectl get validatingwebhookconfigurations | grep linkerd
  2. linkerd-sp-validator-webhook-config 2019-07-01T13:13:26Z

Also ensure you have permission to create ValidatingWebhookConfigurations:

  1. $ kubectl auth can-i create validatingwebhookconfigurations
  2. yes

√ control plane PodSecurityPolicies exist

Example failure:

  1. × control plane PodSecurityPolicies exist
  2. missing PodSecurityPolicies: linkerd-linkerd-control-plane
  3. see https://linkerd.io/checks/#l5d-existence-psp for hints

Ensure the Linkerd PodSecurityPolicy exists:

  1. $ kubectl get podsecuritypolicies | grep linkerd
  2. linkerd-linkerd-control-plane false NET_ADMIN,NET_RAW RunAsAny RunAsAny MustRunAs MustRunAs true configMap,emptyDir,secret,projected,downwardAPI,persistentVolumeClaim

Also ensure you have permission to create PodSecurityPolicies:

  1. $ kubectl auth can-i create podsecuritypolicies
  2. yes

The “linkerd-existence” checks

√ ‘linkerd-config’ config map exists

Example failure:

  1. × 'linkerd-config' config map exists
  2. missing ConfigMaps: linkerd-config
  3. see https://linkerd.io/checks/#l5d-existence-linkerd-config for hints

Ensure the Linkerd ConfigMap exists:

  1. $ kubectl -n linkerd get configmap/linkerd-config
  2. NAME DATA AGE
  3. linkerd-config 3 61m

Also ensure you have permission to create ConfigMaps:

  1. $ kubectl -n linkerd auth can-i create configmap
  2. yes

√ control plane replica sets are ready

This failure occurs when one of Linkerd's ReplicaSets fails to schedule a pod.

For more information, see the Kubernetes documentation on Failed Deployments.
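
To investigate, a good first step is to look at the control plane ReplicaSets and their events; a minimal sketch:

  1. # list the control plane ReplicaSets and check the DESIRED/READY counts
  2. kubectl -n linkerd get rs
  3. # the events at the bottom of the describe output usually explain scheduling failures
  4. kubectl -n linkerd describe rs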

√ no unschedulable pods

Example failure:

  1. × no unschedulable pods
  2. linkerd-prometheus-6b668f774d-j8ncr: 0/1 nodes are available: 1 Insufficient cpu.
  3. see https://linkerd.io/checks/#l5d-existence-unschedulable-pods for hints

For more information, see the Kubernetes documentation on the Unschedulable Pod Condition.
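
The scheduler records the reason in the pod's events; a minimal sketch, using the pod name from the example failure above:

  1. # the Events section will show messages such as "Insufficient cpu"
  2. kubectl -n linkerd describe po linkerd-prometheus-6b668f774d-j8ncr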

√ controller pod is running

Example failure:

  1. × controller pod is running
  2. No running pods for "linkerd-controller"

Note, it takes a little bit for pods to be scheduled, images to be pulled and everything to start up. If this is a permanent error, you'll want to validate the state of the controller pod with:

  1. $ kubectl -n linkerd get po --selector linkerd.io/control-plane-component=controller
  2. NAME READY STATUS RESTARTS AGE
  3. linkerd-controller-7bb8ff5967-zg265 4/4 Running 0 40m

Check the controller's logs with:

  1. linkerd logs --control-plane-component controller

√ can initialize the client

Example failure:

  1. × can initialize the client
  2. parse http:// bad/: invalid character " " in host name

Verify that a well-formed --api-addr parameter was specified, if any:

  1. linkerd check --api-addr " bad"

√ can query the control plane API

Example failure:

  1. × can query the control plane API
  2. Post http://8.8.8.8/api/v1/Version: context deadline exceeded

This check indicates a connectivity failure between the cli and the Linkerd control plane. To verify connectivity, manually connect to the controller pod:

  1. kubectl -n linkerd port-forward \
  2. $(kubectl -n linkerd get po \
  3. --selector=linkerd.io/control-plane-component=controller \
  4. -o jsonpath='{.items[*].metadata.name}') \
  5. 9995:9995

…and then curl the /metrics endpoint:

  1. curl localhost:9995/metrics

The “linkerd-identity” checks

√ certificate config is valid

Example failures:

  1. × certificate config is valid
  2. key ca.crt containing the trust anchors needs to exist in secret linkerd-identity-issuer if --identity-external-issuer=true
  3. see https://linkerd.io/checks/#l5d-identity-cert-config-valid
  1. × certificate config is valid
  2. key crt.pem containing the issuer certificate needs to exist in secret linkerd-identity-issuer if --identity-external-issuer=false
  3. see https://linkerd.io/checks/#l5d-identity-cert-config-valid

Ensure that your linkerd-identity-issuer secret contains the correct keys for the scheme that Linkerd is configured with. If the scheme is kubernetes.io/tls, your secret should contain the tls.crt, tls.key, and ca.crt keys. Alternatively, if your scheme is linkerd.io/tls, the required keys are crt.pem and key.pem.
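
To see which keys the secret actually contains, without printing their values, you can describe it; a minimal sketch:

  1. # the Data section lists the key names (e.g. crt.pem, key.pem) and their sizes
  2. kubectl -n linkerd describe secret linkerd-identity-issuer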

√ trust roots are using supported crypto algorithm

Example failure:

  1. × trust roots are using supported crypto algorithm
  2. Invalid roots:
  3. * 165223702412626077778653586125774349756 identity.linkerd.cluster.local must use P-256 curve for public key, instead P-521 was used
  4. see https://linkerd.io/checks/#l5d-identity-roots-use-supported-crypto

You need to ensure that all of your roots use ECDSA P-256 for their public key algorithm.
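
If you have your trust anchors available as a local PEM file, openssl can show the key algorithm and curve; a minimal sketch, where ./your-roots.crt is a hypothetical path to that file:

  1. # look for "id-ecPublicKey" and "NIST CURVE: P-256" in the output
  2. openssl x509 -in ./your-roots.crt -noout -text | grep -A5 'Public Key Algorithm'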

√ trust roots are within their validity period

Example failure:

  1. × trust roots are within their validity period
  2. Invalid roots:
  3. * 199607941798581518463476688845828639279 identity.linkerd.cluster.local not valid anymore. Expired on 2019-12-19T13:08:18Z
  4. see https://linkerd.io/checks/#l5d-identity-roots-are-time-valid for hints

Failures of such nature indicate that your roots have expired. If that is the case, you will have to update both the root and issuer certificates at once. You can follow the process outlined in Replacing Expired Certificates to get your cluster back to a stable state.

√ trust roots are valid for at least 60 days

Example warnings:

  1. trust roots are valid for at least 60 days
  2. Roots expiring soon:
  3. * 66509928892441932260491975092256847205 identity.linkerd.cluster.local will expire on 2019-12-19T13:30:57Z
  4. see https://linkerd.io/checks/#l5d-identity-roots-not-expiring-soon for hints

This warning indicates that the expiry of some of your roots is approaching. In order to address this problem without incurring downtime, you can follow the process outlined in Rotating your identity certificates.

√ issuer cert is using supported crypto algorithm

Example failure:

  1. × issuer cert is using supported crypto algorithm
  2. issuer certificate must use P-256 curve for public key, instead P-521 was used
  3. see https://linkerd.io/checks/#5d-identity-issuer-cert-uses-supported-crypto for hints

You need to ensure that your issuer certificate uses ECDSA P-256 for its public key algorithm. You can refer to Generating your own mTLS root certificates to see how you can generate certificates that will work with Linkerd.

√ issuer cert is within its validity period

Example failure:

  1. × issuer cert is within its validity period
  2. issuer certificate is not valid anymore. Expired on 2019-12-19T13:35:49Z
  3. see https://linkerd.io/checks/#l5d-identity-issuer-cert-is-time-valid

This failure indicates that your issuer certificate has expired. In order to bring your cluster back to a valid state, follow the process outlined in Replacing Expired Certificates.

√ issuer cert is valid for at least 60 days

Example warning:

  1. issuer cert is valid for at least 60 days
  2. issuer certificate will expire on 2019-12-19T13:35:49Z
  3. see https://linkerd.io/checks/#l5d-identity-issuer-cert-not-expiring-soon for hints

This warning means that your issuer certificate is expiring soon. If you do not rely on an external certificate management solution such as cert-manager, you can follow the process outlined in Rotating your identity certificates.

√ issuer cert is issued by the trust root

Example error:

  1. × issuer cert is issued by the trust root
  2. x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "identity.linkerd.cluster.local")
  3. see https://linkerd.io/checks/#l5d-identity-issuer-cert-issued-by-trust-root for hints

This error indicates that the issuer certificate that is in the linkerd-identity-issuer secret cannot be verified with any of the roots that Linkerd has been configured with. Using the CLI install process, this should never happen. If Helm was used for installation, or the issuer certificates are managed by a malfunctioning certificate management solution, it is possible for the cluster to end up in such an invalid state. At that point, the best thing to do is to use the upgrade command to update your certificates:

  1. linkerd upgrade \
  2. --identity-issuer-certificate-file=./your-new-issuer.crt \
  3. --identity-issuer-key-file=./your-new-issuer.key \
  4. --identity-trust-anchors-file=./your-new-roots.crt \
  5. --force | kubectl apply -f -

Once the upgrade process is over, the output of linkerd check --proxy should be:

  1. linkerd-identity
  2. √ certificate config is valid
  3. √ trust roots are using supported crypto algorithm
  4. √ trust roots are within their validity period
  5. √ trust roots are valid for at least 60 days
  6. √ issuer cert is using supported crypto algorithm
  7. √ issuer cert is within its validity period
  8. √ issuer cert is valid for at least 60 days
  9. √ issuer cert is issued by the trust root
  10. linkerd-identity-data-plane
  11. √ data plane proxies certificate match CA

The “linkerd-identity-data-plane” checks

√ data plane proxies certificate match CA

Example warning:

  1. data plane proxies certificate match CA
  2. Some pods do not have the current trust bundle and must be restarted:
  3. * emojivoto/emoji-d8d7d9c6b-8qwfx
  4. * emojivoto/vote-bot-588499c9f6-zpwz6
  5. * emojivoto/voting-8599548fdc-6v64k
  6. see https://linkerd.io/checks/#l5d-identity-data-plane-proxies-certs-match-ca for hints

Observing this warning indicates that some of your meshed pods have proxies that have stale certificates. This is most likely to happen during upgrade operations that deal with cert rotation. In order to solve the problem you can use rollout restart to restart the pods in question. That should cause them to pick up the correct certs from the linkerd-config configmap.

When an upgrade is performed using the --identity-trust-anchors-file flag to modify the roots, the Linkerd components are restarted. While this operation is in progress the check --proxy command may output a warning, pertaining to the Linkerd components:

  1. data plane proxies certificate match CA
  2. Some pods do not have the current trust bundle and must be restarted:
  3. * linkerd/linkerd-sp-validator-75f9d96dc-rch4x
  4. * linkerd/linkerd-tap-68d8bbf64-mpzgb
  5. * linkerd/linkerd-web-849f74b7c6-qlhwc
  6. see https://linkerd.io/checks/#l5d-identity-data-plane-proxies-certs-match-ca for hints

If that is the case, simply wait for the upgrade operation to complete. The stale pods should terminate and be replaced by new ones, configured with the correct certificates.
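
To restart the meshed application pods from the first warning above, restarting their owning workloads is usually enough; a minimal sketch, using the emojivoto namespace from that example:

  1. # triggers a rolling restart of every deployment in the namespace
  2. kubectl -n emojivoto rollout restart deploy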

The “linkerd-api” checks

√ control plane pods are ready

Example failure:

  1. × control plane pods are ready
  2. No running pods for "linkerd-web"

Verify the state of the control plane pods with:

  1. $ kubectl -n linkerd get po
  2. NAME READY STATUS RESTARTS AGE
  3. pod/linkerd-controller-b8c4c48c8-pflc9 4/4 Running 0 45m
  4. pod/linkerd-grafana-776cf777b6-lg2dd 2/2 Running 0 1h
  5. pod/linkerd-prometheus-74d66f86f6-6t6dh 2/2 Running 0 1h
  6. pod/linkerd-web-5f6c45d6d9-9hd9j 2/2 Running 0 3m

√ control plane self-check

Example failure:

  1. × control plane self-check
  2. Post https://localhost:6443/api/v1/namespaces/linkerd/services/linkerd-controller-api:http/proxy/api/v1/SelfCheck: context deadline exceeded

Check the logs on the control-plane's public API:

  1. linkerd logs --control-plane-component controller --container public-api

√ [kubernetes] control plane can talk to Kubernetes

Example failure:

  1. × [kubernetes] control plane can talk to Kubernetes
  2. Error calling the Kubernetes API: FAIL

Check the logs on the control-plane's public API:

  1. linkerd logs --control-plane-component controller --container public-api

√ [prometheus] control plane can talk to Prometheus

Example failure:

  1. × [prometheus] control plane can talk to Prometheus
  2. Error calling Prometheus from the control plane: FAIL

Note: This will fail if you have changed your default cluster domain from cluster.local; see the associated issue for more information and potential workarounds.

Validate that the Prometheus instance is up and running:

  1. kubectl -n linkerd get all | grep prometheus

Check the Prometheus logs:

  1. linkerd logs --control-plane-component prometheus

Check the logs on the control-plane's public API:

  1. linkerd logs --control-plane-component controller --container public-api

The “linkerd-service-profile” checks

Example failure:

  1. no invalid service profiles
  2. ServiceProfile "bad" has invalid name (must be "<service>.<namespace>.svc.cluster.local")

Validate the structure of your service profiles:

  1. $ kubectl -n linkerd get sp
  2. NAME AGE
  3. bad 51s
  4. linkerd-controller-api.linkerd.svc.cluster.local 1m

Example failure:

  1. no invalid service profiles
  2. the server could not find the requested resource (get serviceprofiles.linkerd.io)

Validate that the Service Profile CRD is installed on your cluster and that its linkerd.io/created-by annotation matches your linkerd client version:

  1. kubectl get crd/serviceprofiles.linkerd.io -o yaml | grep linkerd.io/created-by

If the CRD is missing or out-of-date you can update it:

  1. linkerd upgrade | kubectl apply -f -

The “linkerd-version” checks

√ can determine the latest version

Example failure:

  1. × can determine the latest version
  2. Get https://versioncheck.linkerd.io/version.json?version=edge-19.1.2&uuid=test-uuid&source=cli: context deadline exceeded

Ensure you can connect to the Linkerd version check endpoint from the environment where the linkerd cli is running:

  1. $ curl "https://versioncheck.linkerd.io/version.json?version=edge-19.1.2&uuid=test-uuid&source=cli"
  2. {"stable":"stable-2.1.0","edge":"edge-19.1.2"}

√ cli is up-to-date

Example failure:

  1. cli is up-to-date
  2. is running version 19.1.1 but the latest edge version is 19.1.2

See the page on Upgrading Linkerd.

The “control-plane-version” checks

Example failures:

  1. control plane is up-to-date
  2. is running version 19.1.1 but the latest edge version is 19.1.2
  3. control plane and cli versions match
  4. mismatched channels: running stable-2.1.0 but retrieved edge-19.1.2

See the page on Upgrading Linkerd.

The “linkerd-data-plane” checks

These checks only run when the --proxy flag is set. This flag is intended for use after running linkerd inject, to verify the injected proxies are operating normally.

√ data plane namespace exists

Example failure:

  1. $ linkerd check --proxy --namespace foo
  2. ...
  3. × data plane namespace exists
  4. The "foo" namespace does not exist

Ensure the --namespace specified exists, or omit the parameter to check all namespaces.

√ data plane proxies are ready

Example failure:

  1. × data plane proxies are ready
  2. No "linkerd-proxy" containers found

Ensure you have injected the Linkerd proxy into your application via the linkerd inject command.

For more information on linkerd inject, see Step 5: Install the demo app in our Getting Started guide.
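
A common way to inject an existing workload is to pipe its manifests through linkerd inject and re-apply them; a minimal sketch, with <namespace> standing in for your application namespace:

  1. kubectl -n <namespace> get deploy -o yaml | linkerd inject - | kubectl apply -f -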

√ data plane proxy metrics are present in Prometheus

Example failure:

  1. × data plane proxy metrics are present in Prometheus
  2. Data plane metrics not found for linkerd/linkerd-controller-b8c4c48c8-pflc9.

Ensure Prometheus can connect to each linkerd-proxy via the Prometheus dashboard:

  1. kubectl -n linkerd port-forward svc/linkerd-prometheus 9090

…and then browse to http://localhost:9090/targets, and validate the linkerd-proxy section.

You should see all your pods here. If they are not:

  • Prometheus might be experiencing connectivity issues with the k8s api server. Check out the logs and delete the pod to flush any possible transient errors.
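
For example, to review the Prometheus logs and then delete the pod so it is recreated (assuming the standard linkerd.io/control-plane-component label):

  1. linkerd logs --control-plane-component prometheus
  2. kubectl -n linkerd delete po -l linkerd.io/control-plane-component=prometheus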

√ data plane is up-to-date

Example failure:

  1. data plane is up-to-date
  2. linkerd/linkerd-prometheus-74d66f86f6-6t6dh: is running version 19.1.2 but the latest edge version is 19.1.3

See the page on Upgrading Linkerd.

√ data plane and cli versions match

Example failure:

  1. data plane and cli versions match
  2. linkerd/linkerd-web-5f6c45d6d9-9hd9j: is running version 19.1.2 but the latest edge version is 19.1.3

See the page on Upgrading Linkerd.

The “linkerd-ha-checks” checks

These checks only run if Linkerd has been installed in HA mode.

√ pod injection disabled on kube-system

Example warning:

  1. pod injection disabled on kube-system
  2. kube-system namespace needs to have the label config.linkerd.io/admission-webhooks: disabled if HA mode is enabled
  3. see https://linkerd.io/checks/#l5d-injection-disabled for hints

Ensure the kube-system namespace has the config.linkerd.io/admission-webhooks: disabled label:

  1. $ kubectl get namespace kube-system -oyaml
  2. kind: Namespace
  3. apiVersion: v1
  4. metadata:
  5. name: kube-system
  6. annotations:
  7. linkerd.io/inject: disabled
  8. labels:
  9. config.linkerd.io/admission-webhooks: disabled
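
If the label is missing, you can add it directly; a minimal sketch:

  1. kubectl label namespace kube-system config.linkerd.io/admission-webhooks=disabled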

The “linkerd-cni-plugin” checks

These checks run if Linkerd has been installed with the --linkerd-cni-enabled flag. Alternatively they can be run as part of the pre-checks by providing the --linkerd-cni-enabled flag. Most of these checks verify that the required resources are in place. If any of them are missing, you can use linkerd install-cni | kubectl apply -f - to re-install them, as shown below.
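
For reference, that re-install command is:

  1. linkerd install-cni | kubectl apply -f -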

√ cni plugin ConfigMap exists

Example error:

  1. × cni plugin ConfigMap exists
  2. configmaps "linkerd-cni-config" not found
  3. see https://linkerd.io/checks/#cni-plugin-cm-exists for hints

Ensure that the linkerd-cni-config ConfigMap exists in the CNI namespace:

  1. $ kubectl get cm linkerd-cni-config -n linkerd-cni
  2. NAME DATA AGE
  3. linkerd-cni-config 1 52m

Also ensure you have permission to create ConfigMaps:

  1. $ kubectl auth can-i create ConfigMaps
  2. yes

√ cni plugin PodSecurityPolicy exists

Example error:

  1. × cni plugin PodSecurityPolicy exists
  2. missing PodSecurityPolicy: linkerd-linkerd-cni-cni
  3. see https://linkerd.io/checks/#cni-plugin-psp-exists for hint

Ensure that the pod security policy exists:

  1. $ kubectl get psp linkerd-linkerd-cni-cni
  2. NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES
  3. linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAsAny false hostPath,secret

Also ensure you have permission to create PodSecurityPolicies:

  1. $ kubectl auth can-i create PodSecurityPolicies
  2. yes

√ cni plugin ClusterRole exists

Example error:

  1. × cni plugin ClusterRole exists
  2. missing ClusterRole: linkerd-cni
  3. see https://linkerd.io/checks/#cni-plugin-cr-exists for hints

Ensure that the cluster role exists:

  1. $ kubectl get clusterrole linkerd-cni
  2. NAME AGE
  3. linkerd-cni 54m

Also ensure you have permission to create ClusterRoles:

  1. $ kubectl auth can-i create ClusterRoles
  2. yes

√ cni plugin ClusterRoleBinding exists

Example error:

  1. × cni plugin ClusterRoleBinding exists
  2. missing ClusterRoleBinding: linkerd-cni
  3. see https://linkerd.io/checks/#cni-plugin-crb-exists for hints

Ensure that the cluster role binding exists:

  1. $ kubectl get clusterrolebinding linkerd-cni
  2. NAME AGE
  3. linkerd-cni 54m

Also ensure you have permission to create ClusterRoleBindings:

  1. $ kubectl auth can-i create ClusterRoleBindings
  2. yes

√ cni plugin Role exists

Example error:

  1. × cni plugin Role exists
  2. missing Role: linkerd-cni
  3. see https://linkerd.io/checks/#cni-plugin-r-exists for hints

Ensure that the role exists in the CNI namespace:

  1. $ kubectl get role linkerd-cni -n linkerd-cni
  2. NAME AGE
  3. linkerd-cni 52m

Also ensure you have permission to create Roles:

  1. $ kubectl auth can-i create Roles -n linkerd-cni
  2. yes

√ cni plugin RoleBinding exists

Example error:

  1. × cni plugin RoleBinding exists
  2. missing RoleBinding: linkerd-cni
  3. see https://linkerd.io/checks/#cni-plugin-rb-exists for hints

Ensure that the role binding exists in the CNI namespace:

  1. $ kubectl get rolebinding linkerd-cni -n linkerd-cni
  2. NAME AGE
  3. linkerd-cni 49m

Also ensure you have permission to create RoleBindings:

  1. $ kubectl auth can-i create RoleBindings -n linkerd-cni
  2. yes

√ cni plugin ServiceAccount exists

Example error:

  1. × cni plugin ServiceAccount exists
  2. missing ServiceAccount: linkerd-cni
  3. see https://linkerd.io/checks/#cni-plugin-sa-exists for hints

Ensure that the CNI service account exists in the CNI namespace:

  1. $ kubectl get ServiceAccount linkerd-cni -n linkerd-cni
  2. NAME SECRETS AGE
  3. linkerd-cni 1 45m

Also ensure you have permission to create ServiceAccounts:

  1. $ kubectl auth can-i create ServiceAccounts -n linkerd-cni
  2. yes

√ cni plugin DaemonSet exists

Example error:

  1. × cni plugin DaemonSet exists
  2. missing DaemonSet: linkerd-cni
  3. see https://linkerd.io/checks/#cni-plugin-ds-exists for hints

Ensure that the CNI daemonset exists in the CNI namespace:

  1. $ kubectl get ds -n linkerd-cni
  2. NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
  3. linkerd-cni 1 1 1 1 1 beta.kubernetes.io/os=linux 14m

Also ensure you have permission to create DaemonSets:

  1. $ kubectl auth can-i create DaemonSets -n linkerd-cni
  2. yes

√ cni plugin pod is running on all nodes

Example failure:

  1. cni plugin pod is running on all nodes
  2. number ready: 2, number scheduled: 3
  3. see https://linkerd.io/checks/#cni-plugin-ready

Ensure that all the CNI pods are running:

  1. $ kubectl get po -n linkerd-cni
  2. NAME READY STATUS RESTARTS AGE
  3. linkerd-cni-rzp2q 1/1 Running 0 9m20s
  4. linkerd-cni-mf564 1/1 Running 0 9m22s
  5. linkerd-cni-p5670 1/1 Running 0 9m25s

Ensure that all pods have finished the deployment of the CNI config and binary:

  1. $ kubectl logs linkerd-cni-rzp2q -n linkerd-cni
  2. Wrote linkerd CNI binaries to /host/opt/cni/bin
  3. Created CNI config /host/etc/cni/net.d/10-kindnet.conflist
  4. Done configuring CNI. Sleep=true