3v-Hosting Blog

How to Build a Kubernetes Cluster from Three VPS Servers

Administration

10 min read


If you're looking to get hands-on experience with container orchestration, building a Kubernetes cluster from three Virtual Private Servers (VPS) is a great way to start. Kubernetes is a powerful open-source system for automating the deployment, scaling, and management of containerized applications, and it is widely used in production environments. This article walks you through creating a Kubernetes cluster on three VPS servers, step by step.

What is Kubernetes?

Kubernetes - often called K8s - is a container orchestration platform that automates application deployment, scaling, and management across multiple machines. Whether you're running microservices or a monolithic application, Kubernetes handles much of the complexity of operating containerized environments for you.

The primary components of Kubernetes include:

    Nodes: Machines (physical or virtual) that run Kubernetes workloads.
    Pods: The smallest deployable units that contain one or more containers.
    Control Plane: The brain of the cluster, which makes global decisions about the cluster (e.g., scheduling, monitoring).
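
To make "Pods" concrete: you describe a Pod declaratively in YAML and hand it to the control plane. A minimal, illustrative manifest for a single-container Pod (the name and image here are arbitrary examples, not anything this guide depends on):

```yaml
# minimal-pod.yaml - an illustrative single-container Pod
apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  containers:
    - name: web
      image: nginx:alpine   # any container image would work here
      ports:
        - containerPort: 80   # port the container listens on
```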

In this guide, we will create a Kubernetes cluster with three VPS servers. These VPS servers will serve different roles: one as the master node (control plane) and the other two as worker nodes (data plane).

Prerequisites

Before you begin, ensure you have the following:

    - Three VPS instances with Ubuntu 20.04 or 22.04 (or another compatible Linux distribution).
    - At least 2 GB of RAM and 2 CPUs per VPS.
    - SSH access to each VPS.
    - Basic knowledge of Linux commands and networking.

You will also need to install kubeadm, kubelet, and kubectl on each of the VPS servers. Kubeadm bootstraps the cluster; the kubelet is the agent that runs on every node and makes sure containers are running in Pods; kubectl is the command-line interface for interacting with the cluster.

Step 1: Prepare the Servers

Update and Upgrade the Servers

On each VPS, update the package list and upgrade installed packages.

    sudo apt update && sudo apt upgrade -y

Install a Container Runtime

Kubernetes relies on a container runtime. Docker is the most familiar container tool, but since Kubernetes 1.24 removed the dockershim, the kubelet talks to a CRI-compatible runtime such as containerd. On Ubuntu, installing the docker.io package pulls in containerd as well, so you get a runtime Kubernetes can use plus the docker CLI for building and testing images.

To install Docker (and containerd with it):

    sudo apt install -y docker.io

Start and enable Docker to run on boot:

    sudo systemctl enable docker
    sudo systemctl start docker

Add your user to the docker group so you can run Docker commands without sudo (log out and back in, or run newgrp docker, for the group change to take effect):

    sudo usermod -aG docker $USER


Disable Swap

Kubernetes requires swap to be disabled to function correctly. Run the following commands to disable swap on each server:

    sudo swapoff -a
    sudo sed -i '/ swap / s/^/#/' /etc/fstab
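
The sed command comments out every fstab line containing " swap " so swap stays disabled after a reboot. If you want to see exactly what it does before touching the real /etc/fstab, you can try it on a throwaway copy:

```shell
# Build a demo copy of an fstab with one filesystem line and one swap line
printf '/dev/sda1 / ext4 defaults 0 1\n/swapfile none swap sw 0 0\n' > /tmp/fstab.demo

# The same substitution the guide runs on /etc/fstab:
# prefix '#' to every line that contains ' swap '
sed -i '/ swap / s/^/#/' /tmp/fstab.demo

cat /tmp/fstab.demo
# The root filesystem line is untouched; the swap line is now commented out:
#   /dev/sda1 / ext4 defaults 0 1
#   #/swapfile none swap sw 0 0
```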


Configure Networking

Kubernetes uses a set of well-known ports for communication between the nodes. Make sure the following ports are open in your firewall:

    - 6443: Kubernetes API server (control plane)
    - 10250: Kubelet API (all nodes)
    - 10259: kube-scheduler (control plane; 10251 on older releases)
    - 10257: kube-controller-manager (control plane; 10252 on older releases)
    - 2379-2380: etcd server client API (control plane)
    - 30000-32767: NodePort Services (worker nodes, for exposing applications)

If you're using UFW for firewall management, first make sure SSH stays reachable, then open the ports each node needs. On the master node:

    sudo ufw allow OpenSSH
    sudo ufw allow 6443,10250,10259,10257,2379:2380/tcp

On the worker nodes:

    sudo ufw allow OpenSSH
    sudo ufw allow 10250,30000:32767/tcp

Step 2: Install Kubernetes Components

Install Kubeadm, Kubelet, and Kubectl

On each VPS, install the Kubernetes tools.

First, add the Kubernetes package repository. (The legacy apt.kubernetes.io / packages.cloud.google.com repository has been frozen and taken offline; current packages are served from the community-owned pkgs.k8s.io. Replace v1.30 below with the minor release you want to track.)

    sudo apt update && sudo apt install -y apt-transport-https ca-certificates curl gpg
    sudo mkdir -p -m 755 /etc/apt/keyrings
    curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
    echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Then, install the components and hold their versions so a routine apt upgrade doesn't unexpectedly bump the cluster:

    sudo apt update
    sudo apt install -y kubeadm kubelet kubectl
    sudo apt-mark hold kubeadm kubelet kubectl

Enable and start the Kubelet:

    sudo systemctl enable kubelet
    sudo systemctl start kubelet

Step 3: Initialize the Master Node

On the VPS that will serve as the master node, initialize the Kubernetes cluster using kubeadm. This command will set up the control plane and generate a token for worker nodes to join the cluster.

Run the following on the master node:

    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

This will initiate the cluster and provide a command with a token that worker nodes will use to join the cluster. The output will look something like this:

    kubeadm join 192.168.1.100:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
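
Copy this join command somewhere safe. The token it contains expires after 24 hours by default; if it expires or you lose the output, you can print a fresh join command on the master node at any time:

```shell
# Generates a new bootstrap token and prints the full 'kubeadm join ...' line
sudo kubeadm token create --print-join-command
```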

Step 4: Set Up Kubectl Access

After the cluster has been initialized, set up kubectl on the master node so your (non-root) user can talk to the API server.

Copy the admin kubeconfig into your home directory and take ownership of it:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

Test the configuration by running:

    kubectl get nodes

You should see the master node in a NotReady state at this point: no pod network add-on has been installed yet, so the node cannot run regular pods (we fix this in Step 5).
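
The output will look roughly like this (the node name, age, and version depend on your setup):

```
NAME     STATUS     ROLES           AGE   VERSION
master   NotReady   control-plane   2m    v1.30.0
```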

Step 5: Set Up a Network Plugin

Kubernetes requires a network plugin to manage pod networking. Flannel is a simple and popular choice for this.

On the master node, install Flannel (the project now lives under the flannel-io organization; the old coreos path is stale):

    kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

Wait a few moments for the network to come up. Recent Flannel manifests deploy into their own kube-flannel namespace; check that those pods and the control-plane pods all reach the Running state:

    kubectl get pods --all-namespaces

Step 6: Join Worker Nodes to the Cluster

On each of the two worker nodes, use the token provided earlier to join the cluster. Run the following command (replacing with the actual token and hash):

    sudo kubeadm join 192.168.1.100:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

After successfully joining, check the status on the master node:

    kubectl get nodes

Within a minute or two, the worker nodes should appear in the list and move from NotReady to Ready.

Step 7: Verify the Cluster

Run the following command to check the status of your cluster:

    kubectl get nodes

You should see all three nodes (master and two workers) listed as Ready.

Step 8: Deploy Applications

Now that your Kubernetes cluster is up and running, you can begin deploying applications. Start with a simple Nginx deployment to test everything:

    kubectl create deployment nginx --image=nginx
    kubectl expose deployment nginx --port=80 --type=NodePort

To access Nginx, connect to any node's IP address on the NodePort assigned to the Service.
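
To find the assigned port, inspect the Service. A quick sketch (the node IP and port below are placeholders for whatever your cluster assigns):

```shell
# The NodePort shows up in the PORT(S) column as something like 80:3XXXX/TCP
kubectl get svc nginx

# Then fetch the page through any node's public IP and that port, e.g.:
# curl http://<node-ip>:<node-port>
```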

Conclusion

Congratulations! You've successfully built a simple Kubernetes cluster with three VPS servers. This setup is great for learning, since it shows you how Kubernetes manages and orchestrates containers across multiple nodes. With this foundation, you can start experimenting with deploying more complex applications, scaling your services, and exploring Kubernetes features like persistent storage, Helm charts, and more.