K0s - Zero Friction Kubernetes
For small servers we need tiny Kubernetes distributions like K0s. This is how I repurposed an old, wrecked laptop as a tiny mini server running a Kubernetes cluster with K0s. #k0s
Kubernetes, a project under the Cloud Native Computing Foundation (CNCF), is a popular container orchestration platform for managing distributed systems. If you don't need the full power of upstream K8s, for example on a Raspberry Pi cluster or in a typical HomeLab, K0s is a great choice. For Kubernetes we have three main flavors to choose from: K0s from Mirantis, K3s from Rancher (SUSE), and CNCF's K8s, the enterprise workhorse of today.
K0s
K0s is a minimalist, certified Kubernetes distribution packaged into a single binary. It comes with no host OS dependencies, relying solely on the host OS kernel to function, hence providing an efficient, clean, and straightforward way to run Kubernetes.
K0s is an open source, all-inclusive Kubernetes distribution, which is configured with all of the features needed to build a Kubernetes cluster. Due to its simple design, flexible deployment options and modest system requirements, k0s is well suited for:
- Any cloud
- Bare metal
- Edge and IoT
K0s drastically reduces the complexity of installing and running a CNCF certified Kubernetes distribution. With K0s, new clusters can be bootstrapped in minutes and developer friction is reduced to zero, allowing anyone, even without special skills or expertise in Kubernetes, to easily get started.
K0s is distributed as a single binary with zero host OS dependencies besides the host OS kernel. It works with any Linux without additional software packages or configuration. Any security vulnerabilities or performance issues can be fixed directly in the k0s distribution, which makes it extremely straightforward to keep clusters up-to-date and secure.
System requirements
Minimum memory and CPU requirements
The minimum requirements for K0s detailed below are approximations, and thus your results may vary.
Role | Memory (RAM) | Virtual CPU (vCPU) |
---|---|---|
Controller node | 1 GB | 1 vCPU |
Worker node | 0.5 GB | 1 vCPU |
Controller + worker | 1 GB | 1 vCPU |
Controller node sizing depends on the number of workers and pods:
- Up to 10 workers and 1,000 pods: 1-2 GB RAM and 1-2 vCPU
- Up to 50 workers and 5,000 pods: 2-4 GB RAM and 2-4 vCPU
Storage consumption
It's recommended to use an SSD for optimal storage performance (cluster latency and throughput are sensitive to storage). More in the documentation.
Role | Storage (for k0s) |
---|---|
Controller node | ~0.5 GB + OS and apps |
Worker node | ~1.3 GB + OS and apps |
Controller + worker | ~1.7 GB + OS and apps |
Note: The operating system and application requirements must be considered in addition to the k0s footprint, e.g. about 6 GB for Ubuntu.
Exposing services
Kubernetes offers multiple options for exposing services to external networks. The main options are NodePort, LoadBalancer and Ingress controller.
- NodePort, as the name says, means that a port on a node is configured to route incoming requests to a certain service. The port range is limited to 30000-32767, so you cannot expose commonly used ports like 80 or 443 with NodePort.
- LoadBalancer is a service type, typically implemented by the cloud provider as an external service (with additional cost). Load balancers can also be installed inside the Kubernetes cluster with MetalLB, which is typically used for bare-metal deployments. A load balancer provides a single IP address to access your services, which can run on multiple nodes.
- Ingress controller helps to consolidate routing rules of multiple applications into one entity. Ingress controller is exposed to an external network with the help of NodePort, LoadBalancer or host network. You can also use Ingress controller to terminate TLS for your domain in one place, instead of terminating TLS for each application separately.
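As a concrete illustration of the first option, a minimal NodePort Service might look like the following sketch. The names and labels are hypothetical; it assumes a Deployment whose pods carry the label app: web and listen on port 80:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport        # illustrative name
spec:
  type: NodePort
  selector:
    app: web                # assumes pods labelled app: web
  ports:
    - port: 80              # Service port inside the cluster
      targetPort: 80        # container port
      nodePort: 30080       # must fall within 30000-32767
```

After applying this, the application answers on port 30080 of every node's IP address.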
GitOps with k0s
K0s doesn't come bundled with the many extensions and add-ons that some users might find useful (and some not). Instead, k0s ships 100% upstream Kubernetes and is compatible with all Kubernetes extensions. This makes it easy for k0s users to freely select the extensions that their applications and infrastructure need, without conflicting with any predefined options.
Now, GitOps is a perfect practice for deploying these extensions automatically, together with applications, by defining and configuring them directly in Git. This also helps with cluster security, as the cluster doesn't need to be accessed directly when application changes are needed. However, it puts more stress on Git access control, because changes in Git are propagated automatically to the cluster.
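To make this concrete, a GitOps setup with Flux might be sketched as the two manifests below: a GitRepository pointing at the repository that holds the desired state, and a Kustomization that applies a path from it to the cluster. The repository URL, names, and path are assumptions for illustration:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: cluster-config          # illustrative name
  namespace: flux-system
spec:
  interval: 5m                  # how often to poll Git
  url: https://github.com/example/cluster-config   # placeholder repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: cluster-config
  path: ./apps                  # directory of manifests in the repo
  prune: true                   # delete cluster objects removed from Git
```

With prune enabled, removing a manifest from Git also removes the corresponding object from the cluster, which is exactly the single-source-of-truth behavior described above.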
Extensions
K0s has an impressive set of extensions to choose from:
- Ambassador API Gateway
  Once your organization hits a certain scale, a reverse proxy or load balancer isn't enough for traffic management. If you're using Kubernetes, you're probably already at that scale and need support for advanced ingress and API management features like blue-green deployments. The AWS EKS Anywhere team recommends Emissary-Ingress.
- Flux for GitOps
  GitOps is a practice where you leverage Git as the single source of truth. It offers a declarative way to do Kubernetes cluster management and application delivery. The desired states, defined as Kubernetes manifests and Helm packages, are pulled from a Git repository and automatically deployed to the cluster. This also makes it quick to re-deploy and recover applications whenever needed.
- MetalLB Load Balancer
  Load balancers can be used for exposing applications to the external network. A load balancer provides a single IP address to route incoming requests to your app. In order to successfully create Kubernetes services of type LoadBalancer, you need a load balancer implementation available to Kubernetes. It can be provided by a cloud provider as an external service (with additional cost), or implemented internally in the Kubernetes cluster, as a pure software solution, with MetalLB.
- NGINX Ingress Controller
  NGINX is a very popular Ingress controller for Kubernetes. In many cloud environments, it can be exposed to an external network by using the load balancer offered by the cloud provider. However, cloud load balancers are not necessary: a load balancer can also be implemented with MetalLB, deployed in the same Kubernetes cluster. Another option to expose the Ingress controller to an external network is NodePort, and a third is to use the host network.
- Rook for Ceph Storage
  Ceph is a highly scalable, distributed storage solution. It offers object, block, and file storage, and it's designed to run on commodity hardware. Ceph replicates data across multiple volumes, making it fault-tolerant. Another clear advantage of Ceph in Kubernetes is dynamic provisioning: applications simply request storage with a persistent volume claim and Ceph provisions it automatically, without manual creation of a persistent volume each time.
- Traefik Ingress Controller
  Another major player is definitely Traefik. With the Traefik Ingress Controller it is possible to use 3rd party tools, such as ngrok, to go further and expose your load balancer to the world. In doing so you can enable dynamic certificate provisioning through Let's Encrypt, using either cert-manager or Traefik's own built-in ACME provider.
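As an example of wiring up one of these extensions, a minimal MetalLB layer-2 configuration might look like the sketch below. The address range is an assumption and must be adapted to a free range on your LAN:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # placeholder: pick unused IPs on your network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```

Once this is in place, Services of type LoadBalancer get an IP from the pool, and MetalLB answers ARP for it on the local network.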
Other Kubernetes distros
K3s
K3s is a lightweight, powerful Kubernetes distribution developed by Rancher Labs (acquired by SUSE in 2020). Compacted into a single binary, K3s stands apart due to its rich feature set and compatibility with varied environments. It's designed for minimal-resource environments and edge computing scenarios.
K8s
Kubernetes (K8s) is an open-source platform known for its robust feature set and complex deployments, making it suitable for large-scale and cloud deployments. It supports the full range of Kubernetes APIs and services, including service discovery, load balancing, and complex applications.
Today it is maintained by the Cloud Native Computing Foundation (CNCF); it was originally authored at Google.
The name Kubernetes originates from Ancient Greek, meaning helmsman or pilot. Kubernetes is often abbreviated as K8s, counting the eight letters between the K and the s (a numeronym).
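The counting behind the numeronym can be sketched in a line of shell (purely illustrative):

```shell
# Numeronym: first letter + number of letters in between + last letter.
word="Kubernetes"
echo "${word:0:1}$(( ${#word} - 2 ))${word: -1}"   # prints: K8s
```

The same scheme gives i18n for internationalization and l10n for localization.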
Background to why I got interested in K0s
I just made this 1U server from a badly, physically broken Lenovo Yoga laptop (it fell corner first onto the floor). My box is small; it barely had sufficient depth to fit the motherboard inside. The board didn't have much memory or disk, so it was too small for K8s. I already have K3s clusters, so why not test K0s too.
Install K0s
We need a set of 3 VMs: 1 controller and 2 workers. We can use any of the networks we have (you have more than one if you have SDN installed on the Proxmox server). I used the Ubuntu cloud image jammy-server-cloudimg-amd64.img.
Recommendations
The size of the 3 nodes can be minimally small.
Role | Memory (RAM) | Virtual CPU (vCPU) |
---|---|---|
Controller node | 1 GB | 1 vCPU |
Worker node | 0.5 GB | 1 vCPU |
Controller + worker | 1 GB | 1 vCPU |
The minimum disk space is 2 GiB, but I prefer 4 GiB as a bare minimum.
It all depends on what you are going to run on the cluster.
Refer to the documentation on how to set up a production K0s system.
The Cluster Network on SDN
Installing K0s and creating a Kubernetes cluster is easy: install the prerequisites, then download and run the install script, which pulls down the k0s binary.
sudo apt install -y containerd qemu-guest-agent
curl -sSLf https://get.k0s.sh | sudo sh
To create the cluster, first install and start the controller:
sudo k0s install controller
sudo k0s start
Then generate a join token for the workers:
sudo k0s token create --role=worker > k0s.token
Copy the token file k0s.token to all worker nodes.
Install the workers
On each worker node, do the following two steps. You need to have the token file k0s.token on the node.
sudo k0s install worker --token-file k0s.token
sudo k0s start
Check the Status of the K0s Cluster
When all workers are up and running and have joined, it's time to see if the cluster is working. On the controller node:
sudo k0s kubectl get nodes
sudo k0s kubectl get nodes -o wide