Hello Minikube

This tutorial shows you how to run a sample app on Kubernetes using minikube and Katacoda. Katacoda provides a free, in-browser Kubernetes environment.

Note: You can also follow this tutorial if you’ve installed minikube locally. See minikube start for installation instructions.

Objectives

  • Deploy a sample application to minikube.
  • Run the app.
  • View application logs.

Before you begin

This tutorial provides a container image that uses NGINX to echo back all requests it receives.

Create a minikube cluster

  1. Click Launch Terminal.

Note: If you installed minikube locally, run minikube start. Before you run minikube dashboard, open a new terminal, start minikube dashboard there, and then switch back to the main terminal (a sketch of this two-terminal flow follows these steps).

  2. Open the Kubernetes dashboard in a browser:

    minikube dashboard
  3. Katacoda environment only: At the top of the terminal pane, click the plus sign, and then click Select port to view on Host 1.

  4. Katacoda environment only: Type 30000, and then click Display Port.
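
As the note above says, local installs skip the Katacoda-specific steps. A minimal sketch of that flow, assuming minikube and kubectl are already installed locally:

  # Terminal 1: create and start the local cluster
  minikube start

  # Terminal 2: enable the dashboard addon and open it in your default browser.
  # This command keeps a proxy running, which is why it gets its own terminal.
  minikube dashboard

  # Back in terminal 1, continue with the kubectl commands in the sections below.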

Note:

The dashboard command enables the dashboard add-on and opens the proxy in the default web browser. You can create Kubernetes resources on the dashboard such as Deployment and Service.

If you are running in an environment as root, see Open Dashboard with URL.

By default, the dashboard is only accessible from within the internal Kubernetes virtual network. The dashboard command creates a temporary proxy to make the dashboard accessible from outside the Kubernetes virtual network.

To stop the proxy, press Ctrl+C to exit the process. After the command exits, the dashboard remains running in the Kubernetes cluster. You can run the dashboard command again to create another proxy to access the dashboard.
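
To confirm that the dashboard keeps running after you stop the proxy, you can look for its Pods; a minimal sketch (the namespace the dashboard runs in, kube-system or kubernetes-dashboard, depends on your minikube version):

  # After pressing Ctrl+C the proxy is gone, but the dashboard Pods keep running.
  # The namespace varies by minikube version, so search across all namespaces:
  kubectl get pods --all-namespaces | grep dashboard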

Open Dashboard with URL

If you don’t want to open a web browser, run the dashboard command with the --url flag to emit a URL:

  minikube dashboard --url

Create a Deployment

A Kubernetes Pod is a group of one or more Containers, tied together for the purposes of administration and networking. The Pod in this tutorial has only one Container. A Kubernetes Deployment checks on the health of your Pod and restarts the Pod’s Container if it terminates. Deployments are the recommended way to manage the creation and scaling of Pods.

  1. Use the kubectl create command to create a Deployment that manages a Pod. The Pod runs a Container based on the provided Docker image. (A dry-run sketch that previews the generated manifest follows these steps.)

    kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
  2. View the Deployment:

    kubectl get deployments

    The output is similar to:

    NAME         READY   UP-TO-DATE   AVAILABLE   AGE
    hello-node   1/1     1            1           1m
  3. View the Pod:

    kubectl get pods

    The output is similar to:

    NAME                          READY   STATUS    RESTARTS   AGE
    hello-node-5f76cf6ccf-br9b5   1/1     Running   0          1m
  4. View cluster events:

    kubectl get events
  5. View the kubectl configuration:

    kubectl config view

Note: For more information about kubectl commands, see the kubectl overview.
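
If you want to see the Deployment manifest that the kubectl create deployment command above generates, kubectl's client-side dry run can print it without changing the cluster. This is a minimal sketch; the --dry-run=client syntax is for recent kubectl releases (older ones use --dry-run=true), and the generated fields can vary by version:

  # Preview the Deployment object that `kubectl create deployment` would generate,
  # without creating anything in the cluster (client-side dry run):
  kubectl create deployment hello-node \
    --image=k8s.gcr.io/echoserver:1.4 \
    --dry-run=client -o yaml

  # Optionally, wait until the Deployment reports its Pod as ready:
  kubectl rollout status deployment/hello-node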

Create a Service

By default, the Pod is only accessible by its internal IP address within the Kubernetes cluster. To make the hello-node Container accessible from outside the Kubernetes virtual network, you have to expose the Pod as a Kubernetes Service.
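
As with the Deployment, you can preview the Service object that step 1 below will generate before creating it. A minimal sketch using kubectl's client-side dry run (flag syntax for recent kubectl releases):

  # Preview the Service that `kubectl expose` would generate, without creating it:
  kubectl expose deployment hello-node --type=LoadBalancer --port=8080 \
    --dry-run=client -o yaml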

  1. Expose the Pod to the public internet using the kubectl expose command:

    kubectl expose deployment hello-node --type=LoadBalancer --port=8080

    The --type=LoadBalancer flag indicates that you want to expose your Service outside of the cluster.

    The application code inside the image k8s.gcr.io/echoserver only listens on TCP port 8080. If you used kubectl expose to expose a different port, clients could not connect to that other port.

  2. View the Service you created:

    kubectl get services

    The output is similar to:

    NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
    hello-node   LoadBalancer   10.108.144.78   <pending>     8080:30369/TCP   21s
    kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP          23m

    On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On minikube, the LoadBalancer type makes the Service accessible through the minikube service command.

  3. Run the following command:

    minikube service hello-node
  4. Katacoda environment only: Click the plus sign, and then click Select port to view on Host 1.

  5. Katacoda environment only: Note the 5-digit port number displayed opposite 8080 in the services output. This port number is randomly generated, so it may be different for you. Type your number into the port number text box, then click Display Port. Using the example from earlier, you would type 30369.

    This opens a browser window that serves your app and shows the app's response. A command-line alternative is sketched below.
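
If you would rather stay in the terminal than open a browser, minikube can print the Service URL directly, and you can send a request to it with curl to see the echo behavior described at the start of this tutorial. A minimal sketch; on some drivers minikube service keeps a tunnel open, in which case run it in a separate terminal and copy the printed URL instead of capturing it:

  # Print the URL that minikube routes to the hello-node Service and store it:
  URL=$(minikube service hello-node --url)

  # The echoserver image replies with the details of the request it received:
  curl "$URL"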

Enable addons

The minikube tool includes a set of built-in addons that can be enabled, disabled, and opened in the local Kubernetes environment.

  1. List the currently supported addons:

    minikube addons list

    The output is similar to:

    addon-manager: enabled
    dashboard: enabled
    default-storageclass: enabled
    efk: disabled
    freshpod: disabled
    gvisor: disabled
    helm-tiller: disabled
    ingress: disabled
    ingress-dns: disabled
    logviewer: disabled
    metrics-server: disabled
    nvidia-driver-installer: disabled
    nvidia-gpu-device-plugin: disabled
    registry: disabled
    registry-creds: disabled
    storage-provisioner: enabled
    storage-provisioner-gluster: disabled
  2. Enable an addon, for example, metrics-server (a quick verification sketch follows these steps):

    minikube addons enable metrics-server

    The output is similar to:

    The 'metrics-server' addon is enabled
  3. View the Pod and Service you created:

    kubectl get pod,svc -n kube-system

    The output is similar to:

    NAME                                   READY   STATUS    RESTARTS   AGE
    pod/coredns-5644d7b6d9-mh9ll           1/1     Running   0          34m
    pod/coredns-5644d7b6d9-pqd2t           1/1     Running   0          34m
    pod/metrics-server-67fb648c5           1/1     Running   0          26s
    pod/etcd-minikube                      1/1     Running   0          34m
    pod/influxdb-grafana-b29w8             2/2     Running   0          26s
    pod/kube-addon-manager-minikube        1/1     Running   0          34m
    pod/kube-apiserver-minikube            1/1     Running   0          34m
    pod/kube-controller-manager-minikube   1/1     Running   0          34m
    pod/kube-proxy-rnlps                   1/1     Running   0          34m
    pod/kube-scheduler-minikube            1/1     Running   0          34m
    pod/storage-provisioner                1/1     Running   0          34m

    NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
    service/metrics-server        ClusterIP   10.96.241.45    <none>        80/TCP              26s
    service/kube-dns              ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP       34m
    service/monitoring-grafana    NodePort    10.99.24.54     <none>        80:30002/TCP        26s
    service/monitoring-influxdb   ClusterIP   10.111.169.94   <none>        8083/TCP,8086/TCP   26s
  4. Disable metrics-server:

    minikube addons disable metrics-server

    The output is similar to:

    metrics-server was successfully disabled
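
As a quick check while the metrics-server addon is still enabled (that is, between steps 2 and 4 above), you can query the metrics API for resource usage; a minimal sketch, keeping in mind that metrics usually take a minute or so to appear after the addon is enabled:

  # Resource usage reported by metrics-server; run these before disabling the addon.
  # Expect an error such as "metrics not available yet" during the first minute.
  kubectl top nodes
  kubectl top pods --all-namespaces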

Clean up

Now you can clean up the resources you created in your cluster:

  kubectl delete service hello-node
  kubectl delete deployment hello-node
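
To confirm that the cleanup worked, you can list what remains in the default namespace; a minimal sketch (only the built-in kubernetes Service should be left):

  # The hello-node Deployment and Service should no longer appear:
  kubectl get deployments,services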

Optionally, stop the Minikube virtual machine (VM):

  minikube stop

Optionally, delete the Minikube VM:

  minikube delete

What’s next