Quick multi-node Kubernetes Cluster Getting Started¶
I have not created a blog entry in some time, and this one may therefore overcompensate slightly with a fairly long technical entry.
In this entry I hope to demonstrate a couple of quick steps to get started with a Kubernetes cluster running on multiple nodes on a local development machine, ideal for quickly testing applications and tools.
Technologies and Stack¶
The stack consists of:
- k3s Kubernetes distribution
- multipass for quickly creating multiple VM nodes on a development machine running any operating system
- A Python Flask application, containerized with Docker
The following tools/applications/utilities are assumed to be installed:
- kubectl
- Either BASH or ZSH (examples will assume ZSH)
- Common GNU utilities such as `grep`, `awk`, etc. For Windows users, using WSL2 is highly recommended
- A Git client, should you wish to clone the repositories referenced (recommended)
System Requirements¶
The 3x Kubernetes nodes assume at least the following resources are available on the host system:
- 6x CPU cores (2x cores per VM)
- 8GB RAM allocated per VM (a system with 32GB RAM is recommended)
These settings can be adjusted downward; the minimum recommended spec for the host system is:
- 4 CPU cores on your host system (1x core for each VM)
- 16 GB RAM (4GB allocated per VM)
Deploying a K3s Cluster in Multipass in less than 5 minutes¶
Obtain the installation script from my Gist. I save these types of files on my system under the directory `~/opt/bin` (which is also in my `PATH`).
Based on your system configuration, you can adjust the core and RAM allocation as follows: in the second line, containing the text `multipass launch -n $node -c 2 -m 4G`, adjust the core count (the `-c` parameter) and the memory allocation (the `-m` parameter) as required.
Once done, simply run:
```bash
~/opt/bin/k3s-multipass.sh
```
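For reference, the script follows a common Multipass plus k3s bootstrap pattern: launch the VMs, install the k3s server on the first node, join the remaining nodes using the server's join token, and copy the kubeconfig to the host. The sketch below illustrates that pattern only - it is not the exact Gist contents, and the node names and output path are assumptions based on this post:

```bash
#!/usr/bin/env bash
# Sketch of a Multipass + k3s bootstrap - not the exact Gist contents.

# Launch the three VMs (adjust -c and -m as discussed above)
for node in node1 node2 node3; do
    multipass launch -n $node -c 2 -m 4G
done

# Install the k3s server on node1
multipass exec node1 -- bash -c 'curl -sfL https://get.k3s.io | sh -'

# Retrieve the join token and the server IP
TOKEN=$(multipass exec node1 -- sudo cat /var/lib/rancher/k3s/server/node-token)
SERVER_IP=$(multipass info node1 | grep IPv4 | awk '{print $2}')

# Join the remaining nodes as agents
for node in node2 node3; do
    multipass exec $node -- bash -c \
        "curl -sfL https://get.k3s.io | K3S_URL=https://${SERVER_IP}:6443 K3S_TOKEN=${TOKEN} sh -"
done

# Copy the kubeconfig to the host, pointing it at the server's IP
multipass exec node1 -- sudo cat /etc/rancher/k3s/k3s.yaml \
    | sed "s/127.0.0.1/${SERVER_IP}/" > ~/k3s.yaml
```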
Once all is deployed, the VMs can be listed with the command `multipass list`.
Configuring Kubectl¶
The Kubernetes client configuration will be stored in the file `~/k3s.yaml`. Set this as your default with the following command (adjust the path to match your own home directory):
```bash
export KUBECONFIG=/home/nicc777/k3s.yaml
```
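To avoid having to export the variable in every new shell session, you can persist it in your shell profile (the examples assume ZSH; the `$HOME` form simply generalizes the path above):

```bash
# Persist the kubeconfig location for future ZSH sessions
echo 'export KUBECONFIG=$HOME/k3s.yaml' >> ~/.zshrc
source ~/.zshrc
```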
And quickly check your installation with the command `kubectl get nodes`, which should give output similar to the following:
```
NAME    STATUS   ROLES                  AGE   VERSION
node2   Ready    <none>                 8h    v1.22.7+k3s1
node3   Ready    <none>                 8h    v1.22.7+k3s1
node1   Ready    control-plane,master   8h    v1.22.7+k3s1
```
The Demo Application¶
The demo application repository is at github.com/nicc777/pyk8sdemo-app
You can clone this repository and explore it, but the application is already containerized and available on Docker Hub.
This particular container includes some additional tools which can be very useful for troubleshooting and testing.
If you follow these instructions practically, you would need the following files:
```
.
├── kubernetes_manifests
│   ├── troubleshooting-app-pod-only.yaml
│   └── troubleshooting-app.yaml
```
These files point to the pre-existing image on Docker Hub, but you can change it to suit your needs.
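As an illustration, one way to point the deployment at your own image is to locate the `image:` reference in the manifest and edit it, or to patch the deployment once it is running later in this post. Note that the container name `flask-demo-app` below is a hypothetical placeholder - check the manifest for the actual name:

```bash
# Find the image reference in the manifest (assumes a standard "image:" field)
grep -n 'image:' kubernetes_manifests/troubleshooting-app.yaml

# Or patch a running deployment in place ("flask-demo-app" is a
# hypothetical container name - use the one from the manifest)
kubectl set image deployment/flask-demo-app-deployment \
    flask-demo-app=myrepo/myimage:latest -n test
```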
Kubernetes Deployments¶
Preparation - Create a Namespace¶
A namespace is a good way to isolate deployments for various purposes - for example to separate a development and test deployment for the same application.
Creating a namespace is really easy:
```bash
kubectl create namespace test
```
To view the current namespaces:
```bash
kubectl get namespaces
```
For the brand new k3s cluster the output may look like the following:
```
NAME              STATUS   AGE
kube-system       Active   24h
default           Active   24h
kube-public       Active   24h
kube-node-lease   Active   24h
test              Active   1s
```
To view any resources, like pods, in a namespace, simply append `-n test` to the `kubectl` command.
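For example:

```bash
# List only the pods in the test namespace
kubectl get pods -n test

# List all common resource types in the test namespace
kubectl get all -n test
```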
Simple Deployment - Pod Only¶
In the demo app project there are two Kubernetes manifests - one for deploying a pod only, and the other for deploying the application with an ingress. The ingress is required to connect to the pod from the outside world (outside, from the perspective of the Kubernetes cluster). But not all applications need to be reachable from the outside, and those may only require the pod to be deployed.
The simplest way to accomplish this with the demo project is the following command:
```bash
kubectl apply -f kubernetes_manifests/troubleshooting-app-pod-only.yaml -n test
```
When the deployment is done, you can view the newly created resources with the command `kubectl get all -n test`, which may produce output similar to the following:
```
NAME                                              READY   STATUS    RESTARTS   AGE
pod/flask-demo-app-deployment-67764d6679-hkkfg    1/1     Running   0          48s

NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/flask-demo-app-deployment   1/1     1            1           48s

NAME                                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/flask-demo-app-deployment-67764d6679   1         1         1       48s
```
No ingress has been created. You can access the pod with the following commands:
```bash
export POD_ID=`kubectl get pods -n test | grep flask | tail -1 | awk '{print "pod/"$1}'`
kubectl exec --stdin --tty -n test $POD_ID -- /bin/bash
```
The first command is just a way to capture the pod identifier in an environment variable, making it easier to reference multiple times, as may sometimes be necessary when testing/debugging.
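If you prefer to avoid the `grep`/`awk` pipeline, `kubectl` can produce the same `pod/...` reference directly with its `-o name` output option:

```bash
# -o name already prefixes each result with its resource type ("pod/")
export POD_ID=$(kubectl get pods -n test -o name | grep flask | tail -1)
kubectl exec --stdin --tty -n test $POD_ID -- /bin/bash
```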
Once you have a shell, you can run any command available in that container. Since the demo app was packaged with several utilities to aid in troubleshooting, we can test Internet connectivity (as an example) using the following command:
```bash
curl -vvv https://www.google.com | head -1
```
Provided your host system has access to the Internet and there are no firewall rules or other systems that may block Internet access, you should now see the Google home page retrieved successfully.
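Another useful in-cluster check is resolving the Kubernetes API service by its cluster DNS name, assuming a DNS utility such as `nslookup` is among the bundled troubleshooting tools:

```bash
# Resolve the in-cluster API service to confirm cluster DNS is working
# (assumes nslookup is included in the container image)
nslookup kubernetes.default.svc.cluster.local
```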
To delete the deployment, simply run:
```bash
kubectl delete -f kubernetes_manifests/troubleshooting-app-pod-only.yaml -n test
```
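You can confirm that everything was removed by listing the namespace contents again:

```bash
# Should report that no resources were found in the test namespace
kubectl get all -n test
```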
Deployment with Ingress¶
Similar to the pod only deployment, we only need to point to another manifest in order to deploy the same application with an ingress:
```bash
kubectl apply -f kubernetes_manifests/troubleshooting-app.yaml -n test
```
This time a service is also deployed, and if you view all the deployed objects with `kubectl get all -n test`, you may see something similar to the following:
```
NAME                                             READY   STATUS    RESTARTS   AGE
pod/svclb-flask-demo-app-service-bdbdh           1/1     Running   0          40s
pod/svclb-flask-demo-app-service-cg8jf           1/1     Running   0          40s
pod/svclb-flask-demo-app-service-7gxjh           1/1     Running   0          40s
pod/flask-demo-app-deployment-67764d6679-lv9s7   1/1     Running   0          41s
pod/flask-demo-app-deployment-67764d6679-bvjsx   1/1     Running   0          40s

NAME                             TYPE           CLUSTER-IP     EXTERNAL-IP                           PORT(S)          AGE
service/flask-demo-app-service   LoadBalancer   10.43.79.111   10.0.50.130,10.0.50.190,10.0.50.241   8880:30223/TCP   41s

NAME                                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/svclb-flask-demo-app-service    3         3         3       3            3           <none>          40s

NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/flask-demo-app-deployment   2/2     2            2           41s

NAME                                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/flask-demo-app-deployment-67764d6679   2         2         2       41s
```
K3s ships with a reverse proxy known as Traefik, as well as a built-in service load balancer that exposes LoadBalancer services on every node (the `svclb-*` pods in the output above). When you deploy an application with a Service, K3s will automatically wire up external access to your application.
In the above example, note the IP addresses under `EXTERNAL-IP` - they correspond to the IP addresses of our Multipass nodes, which we can retrieve by running the command `multipass list`:
```
Name                    State             IPv4             Image
node1                   Running           10.0.50.130      Ubuntu 20.04 LTS
                                          10.42.0.0
                                          10.42.0.1
node2                   Running           10.0.50.190      Ubuntu 20.04 LTS
                                          10.42.1.0
                                          10.42.1.1
node3                   Running           10.0.50.241      Ubuntu 20.04 LTS
                                          10.42.2.0
                                          10.42.2.1
```
You can therefore test access against any of those IP addresses. For example, the command `curl http://10.0.50.130:8880/` should give you the output from the application, looking something like this:
```
<html><head><title>Demo App</title></head><body><h3>Demo App</h3><hr /><p>Environment:</p><pre>PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=flask-demo-app-deployment-67764d6679-bvjsx
DEMO_ENV_VAR_1=Set by deployment
DEMO_ENV_VAR_2=My super cool app is working!
DEMO_ENV_VAR_3=VALUE_03
# lines omitted....
```
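Since the service is exposed on every node, the same request should succeed against any of the node IP addresses. A quick loop to verify this, using the example IPs from above (substitute your own from `multipass list`):

```bash
# Request the app via each node IP and print only the HTTP status code
for ip in 10.0.50.130 10.0.50.190 10.0.50.241; do
    curl -s -o /dev/null -w "HTTP %{http_code} from ${ip}\n" "http://${ip}:8880/"
done
```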
Since you may have many microservice pods that listen on the same port (8080, in this example), you have to ensure a unique port for each external endpoint, hence we use port 8880 in this example. If we start another deployment, also with a service, we may choose a port like 8881, and so on.
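To see which external ports are already in use before picking a new one, you can list the services across all namespaces:

```bash
# The PORT(S) column shows the externally exposed ports, e.g. 8880:30223/TCP
kubectl get services --all-namespaces
```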
You can also still access the pod's shell by using the technique described in the previous section:
```bash
export POD_ID=`kubectl get pods -n test | grep flask | tail -1 | awk '{print "pod/"$1}'`
kubectl exec --stdin --tty -n test $POD_ID -- /bin/bash
```
And once you are done, you can delete the deployment in a similar way by running the command `kubectl delete -f kubernetes_manifests/troubleshooting-app.yaml -n test`.
Quick review / Summary¶
This was a very quick look into how to get Kubernetes up and running in a very short time using a combination of `multipass` and `k3s`.
I believe this stack is ideal for the following use cases:
- Learning about Kubernetes
- Setting up a local Kubernetes development environment in order to test your applications and configurations
- Experimenting with features in a safe, localized environment that can be thrown away and recreated from scratch very quickly
- Using it within a pipeline to test Kubernetes deployments using throwaway mini-clusters
K3s is also ideal for running Kubernetes on older hardware and even Raspberry Pis, which makes it suitable for a host of other interesting use cases.
I hope this short introduction may help others discover the potential of the K3s Kubernetes distribution.
References¶
Below is the CTO of Civo explaining what K3s is and how it relates to Kubernetes. Civo is a cloud native service provider, but I am not sponsored or supported by them - this is merely a really short and very good explanation of what K3s is. I do have a Civo account and I have used them in a personal capacity, but that is not at all required to follow any of the steps in this blog to get K3s up and running on your own system.
Tags¶
kubernetes, k3s