Kubernetes Resources

The commands and steps listed on this page apply to Rancher Launched Kubernetes clusters and can be used to check the most important Kubernetes resources.

Make sure you have configured the correct kubeconfig (for example, export KUBECONFIG=$PWD/kube_config_cluster.yml for Rancher HA) or that you are using the embedded kubectl via the UI.
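As an optional sanity check (not part of the original steps), you can confirm which context kubectl is using and that the cluster is reachable:

    # Show the context kubectl is currently using
    kubectl config current-context
    # Confirm that the Kubernetes API is reachable
    kubectl cluster-info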

Nodes

Get nodes

Run the command below and check the following:

  • All nodes in your cluster should be listed; make sure none are missing.
  • All nodes should have the Ready status (if a node is not in Ready state, check the kubelet container logs on that node using docker logs kubelet; an example is shown after the output below).
  • Check if all nodes report the correct version.
  • Check if the OS/Kernel/Docker values are shown as expected (issues may be related to a recently upgraded OS, kernel or Docker version).
    kubectl get nodes -o wide

Example output:

    NAME             STATUS   ROLES          AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
    controlplane-0   Ready    controlplane   31m   v1.13.5   138.68.188.91   <none>        Ubuntu 18.04.2 LTS   4.15.0-47-generic   docker://18.9.5
    etcd-0           Ready    etcd           31m   v1.13.5   138.68.180.33   <none>        Ubuntu 18.04.2 LTS   4.15.0-47-generic   docker://18.9.5
    worker-0         Ready    worker         30m   v1.13.5   139.59.179.88   <none>        Ubuntu 18.04.2 LTS   4.15.0-47-generic   docker://18.9.5
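As referenced in the checklist above, if a node is not in Ready state, the kubelet container logs on that node usually show why. This has to be run on the affected node itself; the --tail value is just an example:

    # On the node that is not Ready, inspect the kubelet container logs
    docker logs kubelet --tail 100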

Get node conditions

Run the command below to list nodes with Node Conditions

    kubectl get nodes -o go-template='{{range .items}}{{$node := .}}{{range .status.conditions}}{{$node.metadata.name}}{{": "}}{{.type}}{{":"}}{{.status}}{{"\n"}}{{end}}{{end}}'

Run the command below to list nodes with Node Conditions that are active that could prevent normal operation.

    kubectl get nodes -o go-template='{{range .items}}{{$node := .}}{{range .status.conditions}}{{if ne .type "Ready"}}{{if eq .status "True"}}{{$node.metadata.name}}{{": "}}{{.type}}{{":"}}{{.status}}{{"\n"}}{{end}}{{else}}{{if ne .status "True"}}{{$node.metadata.name}}{{": "}}{{.type}}{{": "}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}{{end}}'

Example output:

    worker-0: DiskPressure:True
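If a condition such as DiskPressure is reported, kubectl describe node shows the same conditions together with messages and timestamps, which usually points to the underlying cause (using the node from the output above as an example):

    kubectl describe node worker-0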

Kubernetes leader election

Kubernetes Controller Manager leader

The leader is determined by a leader election process. After the leader has been determined, the leader (holderIdentity) is saved in the kube-controller-manager endpoint (in this example, controlplane-0).

    kubectl -n kube-system get endpoints kube-controller-manager -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'
    {"holderIdentity":"controlplane-0_xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx","leaseDurationSeconds":15,"acquireTime":"2018-12-27T08:59:45Z","renewTime":"2018-12-27T09:44:57Z","leaderTransitions":0}

Kubernetes Scheduler leader

The leader is determined by a leader election process. After the leader has been determined, the leader (holderIdentity) is saved in the kube-scheduler endpoint (in this example, controlplane-0).

    kubectl -n kube-system get endpoints kube-scheduler -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'
    {"holderIdentity":"controlplane-0_xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx","leaseDurationSeconds":15,"acquireTime":"2018-12-27T08:59:45Z","renewTime":"2018-12-27T09:44:57Z","leaderTransitions":0}

Ingress Controller

The default Ingress Controller is NGINX and is deployed as a DaemonSet in the ingress-nginx namespace. The pods are only scheduled to nodes with the worker role.

Check if the pods are running on all nodes:

    kubectl -n ingress-nginx get pods -o wide

Example output:

    kubectl -n ingress-nginx get pods -o wide
    NAME                                    READY   STATUS    RESTARTS   AGE   IP        NODE
    default-http-backend-797c5bc547-kwwlq   1/1     Running   0          17m   x.x.x.x   worker-1
    nginx-ingress-controller-4qd64          1/1     Running   0          14m   x.x.x.x   worker-1
    nginx-ingress-controller-8wxhm          1/1     Running   0          13m   x.x.x.x   worker-0
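Because the controller is deployed as a DaemonSet, you can also check that the desired, current and ready pod counts match the number of nodes with the worker role:

    kubectl -n ingress-nginx get daemonset nginx-ingress-controller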

If a pod is unable to run (its Status is not Running, its Ready column does not show 1/1, or it shows a high count of Restarts), check the pod details, logs and namespace events.

Pod details

    kubectl -n ingress-nginx describe pods -l app=ingress-nginx

Pod container logs

The command below shows the logs of all pods labeled app=ingress-nginx. Note that when a label selector is used, kubectl logs only returns the last 10 lines of each pod's log by default; refer to the --tail flag in kubectl logs -h for more information.

    kubectl -n ingress-nginx logs -l app=ingress-nginx

If the full log is needed, specify the pod name directly:

    kubectl -n ingress-nginx logs <pod name>

Namespace events

    kubectl -n ingress-nginx get events
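Events are not guaranteed to be listed chronologically; sorting on the timestamp can make recent failures easier to spot:

    kubectl -n ingress-nginx get events --sort-by='.lastTimestamp'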

Debug logging

To enable debug logging:

    kubectl -n ingress-nginx patch ds nginx-ingress-controller --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--v=5"}]'
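To verify that the argument was added to the DaemonSet, print the container arguments:

    kubectl -n ingress-nginx get ds nginx-ingress-controller -o jsonpath='{.spec.template.spec.containers[0].args}'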

Check configuration

Retrieve generated configuration in each pod:

    kubectl -n ingress-nginx get pods -l app=ingress-nginx --no-headers -o custom-columns=NAME:.metadata.name | while read pod; do kubectl -n ingress-nginx exec $pod -- cat /etc/nginx/nginx.conf; done
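To compare the configuration between pods, a variation of the loop above can write each pod's nginx.conf to a local file (a sketch; the /tmp paths are arbitrary):

    kubectl -n ingress-nginx get pods -l app=ingress-nginx --no-headers -o custom-columns=NAME:.metadata.name | while read pod; do
      # Save the generated configuration of each pod to a separate file
      kubectl -n ingress-nginx exec $pod -- cat /etc/nginx/nginx.conf > "/tmp/nginx-conf-$pod"
    done
    # The resulting /tmp/nginx-conf-* files can then be compared, for example with diff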

Rancher agents

Rancher agents handle communication to the cluster (Kubernetes API via cattle-cluster-agent) and communication to the nodes (cluster provisioning via cattle-node-agent).

cattle-node-agent

Check if the cattle-node-agent pods are present on each node, have status Running and don’t have a high count of Restarts:

    kubectl -n cattle-system get pods -l app=cattle-agent -o wide

Example output:

    NAME                      READY   STATUS    RESTARTS   AGE   IP        NODE
    cattle-node-agent-4gc2p   1/1     Running   0          2h    x.x.x.x   worker-1
    cattle-node-agent-8cxkk   1/1     Running   0          2h    x.x.x.x   etcd-1
    cattle-node-agent-kzrlg   1/1     Running   0          2h    x.x.x.x   etcd-0
    cattle-node-agent-nclz9   1/1     Running   0          2h    x.x.x.x   controlplane-0
    cattle-node-agent-pwxp7   1/1     Running   0          2h    x.x.x.x   worker-0
    cattle-node-agent-t5484   1/1     Running   0          2h    x.x.x.x   controlplane-1
    cattle-node-agent-t8mtz   1/1     Running   0          2h    x.x.x.x   etcd-2

Check the logs of all cattle-node-agent pods (a command for a specific pod is shown below):

    kubectl -n cattle-system logs -l app=cattle-agent
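To view the logs of a single cattle-node-agent pod instead, pass its name (as shown in the output above) directly:

    kubectl -n cattle-system logs <cattle-node-agent pod name>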

cattle-cluster-agent

Check if the cattle-cluster-agent pod is present in the cluster, has status Running and doesn’t have a high count of Restarts:

    kubectl -n cattle-system get pods -l app=cattle-cluster-agent -o wide

Example output:

    NAME                                    READY   STATUS    RESTARTS   AGE   IP        NODE
    cattle-cluster-agent-54d7c6c54d-ht9h4   1/1     Running   0          2h    x.x.x.x   worker-1

Check the logs of the cattle-cluster-agent pod:

    kubectl -n cattle-system logs -l app=cattle-cluster-agent

Jobs and Pods

Check that pods or jobs have status Running/Completed

To check, run the command:

    kubectl get pods --all-namespaces
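To narrow the output down to pods that are not healthy, a field selector can be used (note that pods shown as Completed report the Succeeded phase):

    kubectl get pods --all-namespaces --field-selector=status.phase!=Running,status.phase!=Succeeded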

If a pod is not in Running state, you can dig into the root cause by running:

Describe pod

    kubectl describe pod POD_NAME -n NAMESPACE

Pod container logs

    kubectl logs POD_NAME -n NAMESPACE

If a job is not in Completed state, you can dig into the root cause by running:

Describe job

    kubectl describe job JOB_NAME -n NAMESPACE

Logs from the containers of pods of the job

    kubectl logs -l job-name=JOB_NAME -n NAMESPACE

Evicted pods

Pods can be evicted based on eviction signals.

Retrieve a list of evicted pods (pod name and namespace):

    kubectl get pods --all-namespaces -o go-template='{{range .items}}{{if eq .status.phase "Failed"}}{{if eq .status.reason "Evicted"}}{{.metadata.name}}{{" "}}{{.metadata.namespace}}{{"\n"}}{{end}}{{end}}{{end}}'

To delete all evicted pods:

    kubectl get pods --all-namespaces -o go-template='{{range .items}}{{if eq .status.phase "Failed"}}{{if eq .status.reason "Evicted"}}{{.metadata.name}}{{" "}}{{.metadata.namespace}}{{"\n"}}{{end}}{{end}}{{end}}' | while read epod enamespace; do kubectl -n $enamespace delete pod $epod; done

Retrieve a list of evicted pods, scheduled node and the reason:

    kubectl get pods --all-namespaces -o go-template='{{range .items}}{{if eq .status.phase "Failed"}}{{if eq .status.reason "Evicted"}}{{.metadata.name}}{{" "}}{{.metadata.namespace}}{{"\n"}}{{end}}{{end}}{{end}}' | while read epod enamespace; do kubectl -n $enamespace get pod $epod -o=custom-columns=NAME:.metadata.name,NODE:.spec.nodeName,MSG:.status.message; done

Job does not complete

If you have enabled Istio and a Job you deployed is not completing, you will need to add an annotation to the Job's pod template as described below.

Since the Istio sidecar runs indefinitely, a Job cannot be considered complete even after its task has finished. Adding the annotation is a temporary workaround and disables Istio for any traffic to or from the annotated Pod. Keep in mind that this may prevent you from using a Job for integration testing, as the Job will not have access to the service mesh.
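The annotation in question is the standard Istio sidecar injection annotation, set on the Job's pod template (the pod template of an existing Job is immutable, so it must be set before the Job is created). A minimal sketch, with placeholder names and image:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: example-job
    spec:
      template:
        metadata:
          annotations:
            # Disable Istio sidecar injection so the Job can reach the Completed state
            sidecar.istio.io/inject: "false"
        spec:
          containers:
          - name: example
            image: busybox
            command: ["sh", "-c", "echo done"]
          restartPolicy: Never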