Although I’m familiar with Docker containers and have superficially played with Kubernetes in the past, I’ve never taken the time to learn the core concepts of Kubernetes well enough to really grasp all the possibilities the technology offers. This post is the first in a series geared towards a deeper understanding of the Kubernetes orchestration application, using the Minikube implementation of Kubernetes to simplify installation. The series will walk through a full implementation of a Python Flask application with a MySQL backend and explore concepts such as scaling, auto-healing, service discovery, and other beneficial features that Kubernetes offers out of the box.
Series
As noted, this is Part 1 of the 9-part series:
- Kubernetes Part 1: Core Concepts and Installation (Minikube)
- Kubernetes Part 2: Python Flask Application Deployment
- Kubernetes Part 3: MySQL Database Deployment
- Kubernetes Part 4: Application Deployments (The Smart Way - YAML Files)
- Kubernetes Part 5: Linking Application with Database (Discovery)
- Kubernetes Part 6: Rolling Updates
- Kubernetes Part 7: Secrets
- Kubernetes Part 8: Persistent Volumes
- Kubernetes Part 9: ConfigMaps
Core Concepts
First, let’s start with some basic object definitions for the Kubernetes application. I’ve placed an “Equivalent” description next to each core concept - these are not entirely accurate, but they offer a way of thinking about these new Kubernetes concepts if you are familiar with legacy/traditional VM-based load-balanced applications and micro-services:
- Pod: Basic building block of deployment. Typically 1:1 with a running process/Container,
but in more complicated scenarios, can be one-to-many.
- Equivalent: In a traditional load-balanced VM-based configuration, this can be thought of as a VM (or set of VMs) all running the same application.
- Service: Abstraction defining a logical set of Pods - typically a single IP address that routes
requests to one or more Pods via balancing mechanisms.
- Equivalent: In a traditional load-balanced VM-based configuration, this can be thought of as the load balancer in front of the VMs all running the same application, serving requests to each (single ingress point).
- Volume: File system/disk that enables cross-Container storage and reference.
- Equivalent: In legacy load-balanced VM-based deployments, can be thought of as an NFS mount across all VMs running the same application for persistent storage. The difference is in a Kubernetes/container-based environment, there is no persistent storage without a Volume, as without this, all locally stored data is lost when the containers die/are destroyed.
- Namespace: Construct to enable large-scale usage of the Kubernetes cluster by many users/teams, logically partitioning resources within a single cluster.
- Equivalent: Can be thought of as the permissions model in a traditional VMware vCenter-based environment, where certain groups of users can only see certain types of compute resources.
Next, there are several higher-level abstractions known as Controllers:
- ReplicaSet: Component which governs the creation and destruction of Pods dynamically based on scale specifications.
- Deployment: Construct to enable declarative state of a Pod/ReplicaSet. This concept enables features such as Blue/Green deployments, upgrades, rollbacks, pausing/resuming software upgrades, etc.
- StatefulSet: Similar to a deployment but maintains stickiness/identity for Pods.
- DaemonSet: Similar to a Deployment, but ensures that all (or a selected set of) Nodes run a copy of a Pod.
- Job: Runs one or more Pods to accomplish a specific task, maintaining state about each Pod utilized.
Finally, some additional terminology that may help:
- Node: A worker machine that is available for running Pods and servicing the various other types of objects Kubernetes affords the user.
This tutorial will not utilize all of the above concepts/abstractions, but they are useful to understand the capabilities of Kubernetes.
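As a rough preview of how these objects surface in practice - once the cluster below is installed and kubectl is configured against it - each object type can be listed and inspected. A few illustrative commands:

```bash
# list instances of the core object types described above
kubectl get pods
kubectl get services
kubectl get deployments

# show built-in documentation/schema for a given object type
kubectl explain pod
kubectl explain deployment
```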
Installation (Minikube)
Now that we’ve learned a few core concepts/terminology, let’s get started with implementing and experimenting with a Kubernetes setup.
Technology Ecosystem
The steps in this post are executed on an older physical server running the Ubuntu operating system (specifically, Ubuntu 16.04, 64-bit). Details related to the host machine are as follows:
Host
- Hostname: kubernetes.localhost
- OS: Ubuntu 16.04
- CPU: 2
- RAM: 3 GB (the device originally had 4 GB, but a failed memory module reduced the available memory)
- Network: Private Network
- IP: 192.168.1.233
Additionally, the following are the versions of software utilized at the time of this post (for reference):
- Docker: 18.06.1-ce
- Kubectl: 1.11.2
- Minikube: 0.28.2
Prerequisites
In order to install Minikube, we will first install the following prerequisites:
- Docker
- kubectl
First, let’s install Docker (per documentation):
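The commands below sketch the Docker CE installation steps documented for Ubuntu at the time of writing - the repository URL and package names come from the official Docker documentation and may have changed since:

```bash
# install packages that allow apt to use a repository over HTTPS
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common

# add Docker's official GPG key and the stable repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

# install Docker CE and verify it is running
sudo apt-get update
sudo apt-get install -y docker-ce
sudo docker version
```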
Next, we will install kubectl:
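One straightforward approach (matching the kubectl version noted above) is to download the release binary directly - the URL below follows the release pattern documented at the time and is illustrative:

```bash
# download the kubectl binary, make it executable, and place it in the PATH
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.11.2/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

# verify the client version
kubectl version --client
```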
Install Minikube
We’re now ready to install Minikube. This process will download the binary, move it to a location included in the standard PATH environment variable, and make it executable for use:
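Something along these lines accomplishes that (the URL follows the pattern the Minikube project published for release 0.28.2):

```bash
# download the Minikube binary, make it executable, and move it into the PATH
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.28.2/minikube-linux-amd64
chmod +x minikube
sudo mv minikube /usr/local/bin/

# confirm the installed version
minikube version
```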
Next, we will start Minikube. There are quite a few options you can provide, but for this tutorial we will stick to a few of the more common options, such as specifying the maximum memory footprint allowed for the local install, what type of bootstrap script to use, and the fact that we do not wish to use a hypervisor for the installation:
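The invocation below is representative - the memory cap and bootstrapper choice shown here are illustrative values, not requirements:

```bash
# start Minikube directly on the host (no hypervisor), capping memory usage
sudo minikube start \
  --vm-driver=none \
  --bootstrapper=localkube \
  --memory=2048
```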
One item to note - when running the above command, sudo
was necessary in order to ensure the binaries could
be written to the appropriate locations on the file system. This, however, creates a permission issue with
the kubectl configuration for the current user. To correct this issue, run the following:
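Assuming the configuration directories ended up under the current user’s home directory but owned by root, taking ownership of them is sufficient (if they were written under /root instead, move them into $HOME first):

```bash
# give the current user ownership of the kubectl and Minikube configuration
sudo chown -R $USER:$USER $HOME/.kube $HOME/.minikube
```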
Next, you can verify your Minikube installation to ensure it is running as expected:
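A few quick checks (any of these should confirm the cluster is up):

```bash
# check the status of the local cluster components
minikube status

# print the IP address of the running cluster
minikube ip

# confirm kubectl can talk to the cluster
kubectl get nodes
```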
If all went successfully, you should see the IP address of the local host in the output - you now have a running Kubernetes instance!
There are some additional items you can play around with - initially, Minikube does not install the heapster addon. This addon is required if you wish to see nice graphs and metrics associated with resource consumption of the Kubernetes installation, so let’s enable it:
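The addon is enabled through the Minikube addon manager (prefix with sudo if your environment still requires the elevated privileges used for the start command):

```bash
# enable the heapster addon (InfluxDB + Grafana for resource metrics)
minikube addons enable heapster
```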
It takes several minutes for the heapster addon (InfluxDB plus Grafana components) to fully install. There is a graphical/UI dashboard you can view about the Kubernetes installation as well:
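The dashboard is launched via Minikube as well - on a headless host like the one used here, printing the URL and browsing to it from another machine on the private network should be the more practical option:

```bash
# launch the Kubernetes dashboard (opens a browser where available)
minikube dashboard

# on a headless server, print the dashboard URL instead and browse to it remotely
minikube dashboard --url
```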
Note that if you do not see graphs/resource information displayed, it is possible the heapster addon has not yet finished installing - wait several minutes and you should eventually see graphs populate on the various UI screens in the browser.
Additionally, in future posts of this series, service discovery will be handled via the Kubernetes DNS service, “kube-dns”. Ensure this addon is enabled (it should be by default); if not, enable it now for the future service discovery posts:
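Checking the addon list and (if needed) enabling kube-dns is again handled through the addon manager:

```bash
# list addons and their current state - kube-dns should show as enabled
minikube addons list

# enable it if it is not already
minikube addons enable kube-dns
```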
Next Steps
You now have a fully-functioning Kubernetes instance via the Minikube implementation of Kubernetes and are ready to start building and deploying applications. Proceed to the next post in the series to learn how to create a basic Python “Hello World” application and deploy it to the cluster.
Credit
The above tutorial was pieced together with some information from the following sites/resources: