Hello Minikube

This tutorial shows you how to run a sample app on Kubernetes using minikube. The tutorial provides a container image that runs a small web server which echoes back the requests it receives.

Objectives

  • Deploy a sample application to minikube.
  • Run the app.
  • View application logs.

Before you begin

This tutorial assumes that you have already set up minikube. See minikube start for installation instructions.

You also need to install kubectl. See Install tools for installation instructions.
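
Before you continue, you can confirm that both tools are installed and on your PATH (a quick check; the version numbers you see will differ):

  minikube version
  kubectl version --client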

Create a minikube cluster

  minikube start
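
Once minikube start finishes, you can check that kubectl is pointed at the new cluster (a quick sanity check; the exact output depends on your minikube version and driver):

  kubectl get nodes
  kubectl cluster-info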

Open the Dashboard

Open the Kubernetes dashboard. You can do this in two different ways: have minikube launch a web browser for you, or ask minikube for a URL that you copy and paste into a browser yourself (described after the note below).

To have minikube launch a browser, open a new terminal, and run:

  # Start a new terminal, and leave this running.
  minikube dashboard

Now, switch back to the terminal where you ran minikube start.

Note:

The dashboard command enables the dashboard add-on and opens the proxy in the default web browser. You can create Kubernetes resources on the dashboard such as Deployment and Service.

If you would rather not have minikube launch a browser from the terminal, you can get a URL for the web dashboard instead; see the instructions that follow this note.

By default, the dashboard is only accessible from within the internal Kubernetes virtual network. The dashboard command creates a temporary proxy to make the dashboard accessible from outside the Kubernetes virtual network.

To stop the proxy, press Ctrl+C to exit the process. After the command exits, the dashboard remains running in the Kubernetes cluster. You can run the dashboard command again to create another proxy to access the dashboard.
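
The dashboard itself runs as ordinary Pods and Services inside the cluster. If you are curious about what the addon created, you can list those resources directly. This assumes a recent minikube release, which installs the addon into the kubernetes-dashboard namespace; adjust the namespace if yours differs:

  kubectl get pods,services -n kubernetes-dashboard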

If you don’t want minikube to open a web browser for you, run the dashboard subcommand with the --url flag. minikube outputs a URL that you can open in the browser you prefer.

Open a new terminal, and run:

  # Start a new terminal, and leave this running.
  minikube dashboard --url

Now, you can use this URL and switch back to the terminal where you ran minikube start.

Create a Deployment

A Kubernetes Pod is a group of one or more Containers, tied together for the purposes of administration and networking. The Pod in this tutorial has only one Container. A Kubernetes Deployment checks on the health of your Pod and restarts the Pod’s Container if it terminates. Deployments are the recommended way to manage the creation and scaling of Pods.
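
If you would like to inspect the Deployment object that the command in step 1 below generates, you can ask kubectl to print it as YAML instead of creating it. This is a sketch that assumes your kubectl release supports --dry-run=client:

  # Print the generated Deployment manifest without creating anything.
  kubectl create deployment hello-node \
    --image=registry.k8s.io/e2e-test-images/agnhost:2.39 \
    --dry-run=client -o yaml \
    -- /agnhost netexec --http-port=8080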

  1. Use the kubectl create command to create a Deployment that manages a Pod. The Pod runs a Container based on the provided Docker image.

     # Run a test container image that includes a webserver
     kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.39 -- /agnhost netexec --http-port=8080
  2. View the Deployment:

     kubectl get deployments

    The output is similar to:

     NAME         READY   UP-TO-DATE   AVAILABLE   AGE
     hello-node   1/1     1            1           1m
  3. View the Pod:

     kubectl get pods

    The output is similar to:

     NAME                          READY   STATUS    RESTARTS   AGE
     hello-node-5f76cf6ccf-br9b5   1/1     Running   0          1m
  4. View cluster events:

     kubectl get events
  5. View the kubectl configuration:

     kubectl config view
  6. View application logs for a container in a pod.

     kubectl logs hello-node-5f76cf6ccf-br9b5

    The output is similar to:

     I0911 09:19:26.677397       1 log.go:195] Started HTTP server on port 8080
     I0911 09:19:26.677586       1 log.go:195] Started UDP server on port 8081

Note: For more information about kubectl commands, see the kubectl overview.

Create a Service

By default, the Pod is only accessible by its internal IP address within the Kubernetes cluster. To make the hello-node Container accessible from outside the Kubernetes virtual network, you have to expose the Pod as a Kubernetes Service.
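
As with the Deployment, you can preview the Service that the kubectl expose command in step 1 below would create, without actually creating it. This sketch assumes --dry-run=client support in your kubectl; note that kubectl still contacts the cluster to read the Deployment's selector, so the hello-node Deployment from the previous section must already exist:

  # Print the generated Service manifest without creating anything.
  kubectl expose deployment hello-node --type=LoadBalancer --port=8080 \
    --dry-run=client -o yaml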

  1. Expose the Pod to the public internet using the kubectl expose command:

     kubectl expose deployment hello-node --type=LoadBalancer --port=8080

    The --type=LoadBalancer flag indicates that you want to expose your Service outside of the cluster.

    The application code inside the test image only listens on TCP port 8080. If you used kubectl expose to expose a different port, clients could not connect to that other port.

  2. View the Service you created:

     kubectl get services

    The output is similar to:

     NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
     hello-node   LoadBalancer   10.108.144.78   <pending>     8080:30369/TCP   21s
     kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP          23m

    On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On minikube, the LoadBalancer type makes the Service accessible through the minikube service command.

  3. Run the following command:

     minikube service hello-node

    This opens up a browser window that serves your app and shows the app’s response.
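
If a browser is not available (for example, when you are connected over SSH), minikube can print the Service URL instead of opening it, and you can test the endpoint with curl. The URL and the response you get back will vary:

  # Print the URL for the Service instead of launching a browser.
  minikube service hello-node --url
  # Then request it, substituting the URL that was printed:
  # curl <printed-url>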

Enable addons

The minikube tool includes a set of built-in addons that can be enabled, disabled and opened in the local Kubernetes environment.

  1. List the currently supported addons:

     minikube addons list

    The output is similar to:

     addon-manager: enabled
     dashboard: enabled
     default-storageclass: enabled
     efk: disabled
     freshpod: disabled
     gvisor: disabled
     helm-tiller: disabled
     ingress: disabled
     ingress-dns: disabled
     logviewer: disabled
     metrics-server: disabled
     nvidia-driver-installer: disabled
     nvidia-gpu-device-plugin: disabled
     registry: disabled
     registry-creds: disabled
     storage-provisioner: enabled
     storage-provisioner-gluster: disabled
  2. Enable an addon, for example, metrics-server:

     minikube addons enable metrics-server

    The output is similar to:

     The 'metrics-server' addon is enabled
  3. View the Pod and Service you created by installing that addon:

     kubectl get pod,svc -n kube-system

    The output is similar to:

     NAME                                   READY   STATUS    RESTARTS   AGE
     pod/coredns-5644d7b6d9-mh9ll           1/1     Running   0          34m
     pod/coredns-5644d7b6d9-pqd2t           1/1     Running   0          34m
     pod/metrics-server-67fb648c5           1/1     Running   0          26s
     pod/etcd-minikube                      1/1     Running   0          34m
     pod/influxdb-grafana-b29w8             2/2     Running   0          26s
     pod/kube-addon-manager-minikube        1/1     Running   0          34m
     pod/kube-apiserver-minikube            1/1     Running   0          34m
     pod/kube-controller-manager-minikube   1/1     Running   0          34m
     pod/kube-proxy-rnlps                   1/1     Running   0          34m
     pod/kube-scheduler-minikube            1/1     Running   0          34m
     pod/storage-provisioner                1/1     Running   0          34m

     NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
     service/metrics-server        ClusterIP   10.96.241.45    <none>        80/TCP              26s
     service/kube-dns              ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP       34m
     service/monitoring-grafana    NodePort    10.99.24.54     <none>        80:30002/TCP        26s
     service/monitoring-influxdb   ClusterIP   10.111.169.94   <none>        8083/TCP,8086/TCP   26s
  4. Check the output from metrics-server:

     kubectl top pods

    The output is similar to:

     NAME                         CPU(cores)   MEMORY(bytes)
     hello-node-ccf4b9788-4jn97   1m           6Mi

    If you see the following message, wait, and try again:

     error: Metrics API not available
  5. Disable metrics-server:

     minikube addons disable metrics-server

    The output is similar to:

     metrics-server was successfully disabled
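
While the metrics-server addon is enabled (that is, after step 2 and before step 5 above), the same Metrics API also serves node-level usage. A quick check, assuming the addon has finished starting:

  kubectl top nodes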

Clean up

Now you can clean up the resources you created in your cluster:

  kubectl delete service hello-node
  kubectl delete deployment hello-node
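
If you want to confirm the cleanup, list the resources again; the hello-node entries should be gone, and only the built-in kubernetes Service remains:

  kubectl get deployments
  kubectl get services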

Stop the Minikube cluster

  minikube stop

Optionally, delete the Minikube VM:

  # Optional
  minikube delete

If you want to use minikube again to learn more about Kubernetes, you don’t need to delete it.

What’s next