Prerequisites for a Highly Available (HA) K8s Deployment on Debian
Before starting, ensure the following requirements are met to avoid configuration issues:
At least three Debian machines for the control plane (an odd number preserves etcd quorum) plus one or more worker nodes, each with a minimum of 2 CPUs and 2 GB of RAM (kubeadm's preflight minimums).
A virtual IP (VIP) or load balancer address, reachable from every node, to serve as the kube-apiserver endpoint.
Root or sudo access on every node and full network connectivity between all nodes.
Unique hostnames and disabled swap on every node (both handled in Step 1).
Step 1: Prepare All Nodes
Disable swap on all nodes (the kubelet will not start with swap enabled by default):
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
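To confirm that no swap devices remain active, the following should print nothing:
swapon --show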
Set hostnames and update /etc/hosts: assign unique hostnames (e.g., k8s-master-01, k8s-worker-01) and add IP-hostname mappings to /etc/hosts on all nodes:
sudo hostnamectl set-hostname <your-hostname>
echo "<node-ip> <hostname>" | sudo tee -a /etc/hosts
Install containerd (the default container runtime for K8s) on all nodes:
sudo apt update && sudo apt install -y containerd
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
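# Optional: kubeadm configures the kubelet to use the systemd cgroup driver by default, so containerd should match.
# Assumption: the generated config still contains "SystemdCgroup = false" (the containerd 1.x default).
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml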
sudo systemctl restart containerd
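kubeadm's preflight checks also expect IP forwarding to be enabled, and most CNI plugins need the br_netfilter module. A minimal sketch of the commonly used kernel settings, applied on all nodes:
cat <<'EOF' | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system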
Install kubelet, kubeadm, and kubectl. The legacy apt.kubernetes.io / packages.cloud.google.com repository has been shut down, so use the community-owned pkgs.k8s.io repository instead (replace v1.30 with the minor version you want):
sudo apt update && sudo apt install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update && sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl # Prevent accidental upgrades
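A quick check that the tooling installed correctly, and at matching versions on every node:
kubeadm version
kubelet --version
kubectl version --client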
Step 2: Initialize the Master Node with HA Support
Use kubeadm to initialize the first master node, specifying a pod network CIDR (e.g., 10.244.0.0/16; make sure your CNI's IP pool matches this range) and the control-plane endpoint:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint <VIP>:6443
Replace <VIP> with a virtual IP (e.g., 192.168.1.100) that will route traffic to the master nodes.
Configure kubectl for the current user to manage the cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
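If you administer the cluster as root, you can instead point kubectl at the admin kubeconfig directly:
export KUBECONFIG=/etc/kubernetes/admin.conf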
Install the Calico pod network add-on:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Verify the network pods are running (the manifest-based Calico install deploys into kube-system, not calico-system):
kubectl get pods -n kube-system
Step 3: Join Worker Nodes to the Cluster
Run the kubeadm join command (printed in the kubeadm init output on the master) on each worker node to add it to the cluster:
sudo kubeadm join <VIP>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
Replace <VIP>, <token>, and <hash> with values from the master initialization output.
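If the bootstrap token has expired (tokens are valid for 24 hours by default), generate a fresh join command on a master node:
sudo kubeadm token create --print-join-command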
Step 4: Configure HA for Control Plane Components
The control plane (kube-apiserver, etcd, kube-scheduler, kube-controller-manager) requires redundancy to avoid single points of failure.
Etcd is the key-value store for K8s cluster state. Deploy a 3-node etcd cluster (recommended for quorum) using kubeadm:
Add --control-plane-endpoint and --upload-certs to the kubeadm init command from Step 2 to enable stacked etcd clustering:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint <VIP>:6443 --upload-certs
On each additional master node, run the kubeadm join command with the --control-plane flag (printed during the initial master setup):
sudo kubeadm join <VIP>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --control-plane --certificate-key <certificate-key>
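The certificate key used by the join command above expires two hours after --upload-certs runs. If it has expired, re-upload the control-plane certificates and note the new key:
sudo kubeadm init phase upload-certs --upload-certs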
Joining each master this way deploys etcd on every master node, forming a Raft cluster.
Use a load balancer (e.g., HAProxy, Nginx, or a cloud-native LB) to distribute traffic across the kube-apiserver instances running on each master. Example HAProxy config (/etc/haproxy/haproxy.cfg):
frontend k8s-api
    bind <VIP>:6443
    mode tcp
    default_backend k8s-api-backend

backend k8s-api-backend
    mode tcp
    balance roundrobin
    server master-01 <master-01-ip>:6443 check
    server master-02 <master-02-ip>:6443 check
    server master-03 <master-03-ip>:6443 check
Restart HAProxy to apply changes:
sudo systemctl restart haproxy
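The VIP itself must also survive the loss of a load balancer host, or HAProxy becomes the new single point of failure. One common approach, sketched here as an assumption rather than a required part of this guide, is to run HAProxy on two or more hosts and let keepalived float the VIP between them (eth0 and the password are placeholders):
sudo apt install -y keepalived
cat <<'EOF' | sudo tee /etc/keepalived/keepalived.conf
vrrp_instance K8S_API_VIP {
    state MASTER            # BACKUP on the standby load balancer
    interface eth0          # adjust to this host's network interface
    virtual_router_id 51
    priority 150            # use a lower priority on the standby
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass <password>
    }
    virtual_ipaddress {
        <VIP>
    }
}
EOF
sudo systemctl restart keepalived
You can then confirm the API server answers through the VIP with curl -k https://<VIP>:6443/version (the /version and /healthz endpoints are readable without authentication on a default kubeadm cluster).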
The kube-scheduler and kube-controller-manager run on each master node and use leader election (--leader-elect=true, enabled by default) to ensure only one instance is active at a time. No additional configuration is needed during initialization.
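On recent Kubernetes versions the leader-election record is stored in Lease objects, so you can see which master currently holds each lock:
kubectl get lease kube-scheduler kube-controller-manager -n kube-system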
Step 5: Validate the HA Cluster
Check that all nodes report a status of Ready:
kubectl get nodes
Verify that the control plane pods (kube-apiserver, etcd, kube-scheduler, kube-controller-manager) are running on every master:
kubectl get pods -n kube-system
Optionally, check component health (kubectl get componentstatuses is deprecated in recent Kubernetes releases but still works as a quick check):
kubectl get componentstatuses
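As an optional HA smoke test (assuming at least three masters have joined), shut down one master, or stop its kubelet, and confirm the cluster still responds; the kubeconfig already targets <VIP>:6443, so the request goes through the VIP:
kubectl get nodes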
Step 6: Implement Monitoring & Backup
Back up etcd regularly with etcdctl (install it via sudo apt install -y etcd-client, or run etcdctl inside the etcd static pod), or use a tool like Velero for full cluster backups:
# Example: snapshot etcd data (run on any master)
sudo ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /tmp/etcd-snapshot.db
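To sanity-check the snapshot file (etcdctl snapshot status is deprecated in etcd 3.5+ in favor of etcdutl, but still works):
sudo ETCDCTL_API=3 etcdctl snapshot status /tmp/etcd-snapshot.db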
Store snapshots in a secure, off-cluster location.
Key Best Practices