Deploying a Local Kubernetes Cluster
This tutorial will assume that nmaas is installed in a virtual machine that is completely isolated from any production environment. However, the discussed steps are applicable to bare-metal hardware as well, once the correct network strategy has been identified by the system administrator.
Virtual Machine Prerequisites
- Debian 12 or Ubuntu >= 22.04
- 12GB+ RAM
- 2+ VCPUs
- 60GB+ storage space
Virtual Machine Setup
Although we will focus on VirtualBox, any virtualization software can be used, depending on the user's preference. VirtualBox 7 is open-source virtualization software that can be downloaded for free from the official website.
After installation, additional network configuration needs to be done before a Kubernetes cluster can be set up. The following network configuration will make the nmaas deployment accessible by any host in the same local area network (bridged mode). nmaas can be isolated from the local network by altering the network strategy and using NAT, host-only network adapters, or a combination of the two. Such customization is beyond the scope of this tutorial.
Creating the Virtual Machine in VirtualBox
Create a regular virtual machine in VirtualBox, using the latest Debian 12 or Ubuntu 22.04 ISOs. Either the desktop or the server edition can be used. To conserve resources, it is recommended to use the server edition of Ubuntu. The following parameters need to be altered:
- Choose `Skip unattended installation` if you want to manually control the deployment process, similar to the default behavior in VirtualBox versions prior to 7.
- Allocate sufficient memory to the virtual machine. 12GB is the minimum amount which will support a complete nmaas installation, along with the possibility of deploying additional applications via the catalog.
- Allocate a sufficient number of CPU cores, depending on the performance of your system.
- After the VM has been created, using the `Settings` option, adjust the following parameters:
    - In the `Network` configuration tab, make sure to choose the `Bridged` adapter type.
    - If a Desktop version of Ubuntu is being installed, make sure to enable 3D acceleration in the `Display` tab.
Configuring the Guest Operating System
Once the guest operating system has been installed, it will automatically acquire an IP address from the local DHCP server.
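The acquired address can be checked from inside the guest, for example:

```shell
# List IPv4 addresses; the bridged adapter typically appears as enp0s3
# in VirtualBox guests (the interface name may differ on your system)
ip -4 addr show
```

Note this address down, as it will be used when installing K3s and when accessing the deployment from other hosts on the LAN.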
Kubernetes Cluster Setup
In this section we discuss how to quickly get a Kubernetes cluster up and running using the lightweight K3s Kubernetes distribution.
Kubernetes Deployment Using K3s
K3s is one of the many options for deploying a full-fledged Kubernetes cluster in a matter of minutes. K3s is more lightweight than other Kubernetes distributions, since it does not ship with unnecessary modules and is packaged as a single binary. K3s scales seamlessly across multiple nodes and can store the cluster state either in an embedded database or in a relational one, such as PostgreSQL or MySQL.
- K3s can be installed with the following command:
    - `--tls-san` – can be specified multiple times to add additional names for which the automatically generated Kubernetes API certificates will be valid. If using a static IP address on your VM, make sure to replace the IP address with the IP address of your VM.
    - `--disable=traefik` – Traefik needs to be explicitly disabled since it ships by default with new K3s installations. We will use ingress-nginx as our ingress controller and will install it manually in a later step.
    - `--flannel-backend=none` – the Flannel CNI needs to be explicitly disabled, since we will manually install Calico.
    - `--disable-network-policy` – we do not need the default network policy addon that enables the use of Kubernetes NetworkPolicy objects, since Calico has built-in support for network policies.
    - `--disable=servicelb` – the preconfigured implementation for LoadBalancer service objects should be disabled, since we will manually install MetalLB.
    - `--write-kubeconfig-mode 664` – more permissive permissions are needed for the automatically generated kubeconfig file so that regular users, apart from root, can use the kubectl client as well.
    - `--cluster-cidr=10.136.0.0/16` – a free subnet range which will be used as the pod network. Write it down, since it will be required in the Calico deployment as well.
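Assembled from the flags above, the installation command might look like this (the IP address is a placeholder for your VM's address):

```shell
# Install K3s with the flags described above; replace 192.168.1.100
# with the IP address of your VM
curl -sfL https://get.k3s.io | sh -s - server \
  --tls-san 192.168.1.100 \
  --disable=traefik \
  --flannel-backend=none \
  --disable-network-policy \
  --disable=servicelb \
  --write-kubeconfig-mode 664 \
  --cluster-cidr=10.136.0.0/16
```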
- Another way of providing `kubectl` access to different users is to make a copy of the original kubeconfig file, located in `/etc/rancher/k3s/k3s.yaml`, into a directory and change its permissions. Then, by exporting the `KUBECONFIG` environment variable, the kubectl client will be forced to use the newly created configuration:
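A sketch of these steps (the target directory is illustrative):

```shell
# Copy the generated kubeconfig to a user-writable location
mkdir -p ~/k3s
sudo cp /etc/rancher/k3s/k3s.yaml ~/k3s/config
sudo chown "$USER" ~/k3s/config

# Point kubectl at the copied configuration
export KUBECONFIG=~/k3s/config
```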
- Our cluster is still not in a `Ready` state, since we do not have a CNI plugin installed yet.
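The node status can be verified with kubectl:

```shell
# The node will report a NotReady status until a CNI plugin is installed
kubectl get nodes
```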
Addons Setup
CNI
- Calico can be manually installed by downloading the manifest file and setting the `CALICO_IPV4POOL_CIDR` parameter to the value set when deploying K3s.
- Edit the downloaded `custom-resources.yaml` file (`~/nmaas-deployment/manifests/calico/custom-resources.yaml`) and change the `cidr` and `encapsulation` properties as below:
- Once Calico has been installed, the node should transition to a `Ready` state.
DNS
CoreDNS is installed by default with K3s, so no manual installation or configuration is needed. Once the Calico CNI has been deployed and the cluster has entered a `Ready` state, DNS resolution can be tested using the `dnsutils` pod, as described in the official Kubernetes documentation.
Once the Pod enters a ready state, we can open a shell session:
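Following the Kubernetes DNS debugging guide, the test might look like this (the manifest URL comes from the official documentation):

```shell
# Deploy the dnsutils test pod and wait until it is ready
kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl wait --for=condition=Ready pod/dnsutils --timeout=120s

# Test in-cluster DNS resolution, then open an interactive shell session
kubectl exec -i -t dnsutils -- nslookup kubernetes.default
kubectl exec -i -t dnsutils -- /bin/sh
```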
Storage
An instance of local path provisioner is automatically installed when deploying K3s, which is sufficient for development single-node clusters such as ours.
Helm
To install Helm, we need to first download the latest binary for our architecture and extract it to a location which is in the PATH
system variable.
- Visit https://github.com/helm/helm/releases and copy the download link for the latest release.
- Download the latest release locally and extract it.
- Test whether Helm has been successfully installed by executing `helm version`.
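The download and extraction steps above might look like this (the version number is illustrative; substitute the one copied from the releases page):

```shell
# Download and extract the Helm binary for the amd64 architecture
wget https://get.helm.sh/helm-v3.14.0-linux-amd64.tar.gz
tar -zxvf helm-v3.14.0-linux-amd64.tar.gz

# Move the binary to a location included in PATH
sudo mv linux-amd64/helm /usr/local/bin/helm
helm version
```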
Warning
For Helm to function properly, the kubeconfig file must be copied (or linked) to `~/.kube/config`. This can be done like so:
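A sketch of the copy, assuming the default K3s kubeconfig location:

```shell
# Copy the K3s kubeconfig to the default location Helm expects
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown "$USER" ~/.kube/config
```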
Ingress Nginx
The last application that needs to be installed before we can move on to installing the nmaas components is Ingress Nginx. Since we have already configured Helm, the Ingress Nginx installation is simple.
- Customize the `values.yaml` file according to the local environment. In our case we have opted to use a Deployment instead of a DaemonSet for the deployment strategy. Additionally, we have selected a service type of `ClusterIP` and enabled `hostPort` so that the ingress controller can be reached using the VM's LAN IP address. In this way we avoid using LoadBalancer addons, simplifying the single-node nmaas deployment.
- Add the `ingress-nginx` Helm repository and install the application. We have chosen to install `ingress-nginx` in the `nmaas-system` namespace, which will house all the other nmaas components as well.

Note About Helm Errors

When running the helm install command, Helm might throw an error about the cluster being unreachable. This is most likely because Helm looks for the kubeconfig file in the default location, but `--write-kubeconfig-mode 664` has been specified during the K3s installation, and the actual location is `/etc/rancher/k3s/k3s.yaml`. This can be fixed by simply executing:
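The fix, together with the full repository setup and installation, might look like this (the release name and values file are assumptions matching the steps above):

```shell
# Point Helm at the K3s kubeconfig
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# Add the ingress-nginx repository and install into nmaas-system
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace nmaas-system --create-namespace \
  -f values.yaml
```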
- We can test the installed ingress by directly visiting the VM's LAN IP address in a browser. We should be presented with a generic `404 Not Found` page.
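The same check from the command line (the IP address is a placeholder for your VM's address):

```shell
# Expect an HTTP 404 response from the ingress controller's default backend
curl -i http://192.168.1.100/
```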