Kubernetes Cluster Install


I have a number of Intel NUC systems that I purchased on eBay a while back, and I wanted to use them to create my own personal Kubernetes cluster. I'll be using the cluster to test out new container-based products and to practice securing the installation against the CIS Benchmark for Kubernetes.

Install Fedora Server 31

I am using Fedora 31 as my base OS instead of something like CoreOS for a number of reasons, most notably that I want to install some other software that is not container-based. I installed the standard Fedora Server version.

Install docker-ce

I am using Docker as my container runtime. Add the docker repository using this command:

sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo

Then install docker-ce:

sudo dnf install docker-ce docker-ce-cli containerd.io

There is a problem with Docker CE 19.03 on Fedora 31: Fedora 31 uses cgroups v2 by default, and Docker does not yet support cgroups v2.
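To confirm you are affected, check the installed Docker package and which cgroup hierarchy the kernel is using (a quick check on my part, not an official step):

rpm -q docker-ce               # shows the installed Docker CE version
stat -fc %T /sys/fs/cgroup/    # cgroup2fs means cgroups v2; tmpfs means v1

After the grub change and reboot below, the second command should print tmpfs.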

You have to revert to the old cgroups v1 behavior by adding a kernel boot parameter. Edit /etc/default/grub and append systemd.unified_cgroup_hierarchy=0 to the existing GRUB_CMDLINE_LINUX arguments (keep whatever is already on that line):

sudo nano /etc/default/grub
GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=0"

Then regenerate the grub configuration and reboot:

sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
sudo reboot

If you want DNS resolution to work inside your containers, disable the firewall on Fedora. Then enable and start Docker, add your user to the docker group, and configure the Docker daemon to use the systemd cgroup driver:

sudo systemctl disable --now firewalld
sudo systemctl enable --now docker
sudo usermod -aG docker $USER
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo systemctl daemon-reload
sudo systemctl restart docker
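Note that the usermod group change only takes effect on a new login, so log out and back in (or use newgrp docker) before running docker without sudo. To verify Docker is now using the systemd cgroup driver:

docker info --format '{{.CgroupDriver}}'    # should print: systemd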

Kubernetes

Fedora 31 defaults to the nftables backend for iptables, which kube-proxy does not support, so we have to switch to the legacy iptables version:

sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
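You can verify the switch took effect; if you use IPv6, the ip6tables alternative presumably needs the same change (my assumption, the step above only switches iptables):

iptables --version                                                    # should report "(legacy)"
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy   # assumption: IPv6 counterpart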

Set SELinux to permissive mode (effectively disabling it, which kubeadm currently requires):

sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
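Verify the change:

getenforce    # should print: Permissive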

Install Kubernetes

Add the Kubernetes repository by placing the contents below in /etc/yum.repos.d/kubernetes.repo:

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl

Then install the packages and enable the kubelet. The exclude line keeps routine dnf updates from upgrading the Kubernetes packages unexpectedly, which is why the install needs --disableexcludes:

sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
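At this point the kubelet will restart in a crash loop; that is expected, since it is waiting for kubeadm init or kubeadm join to tell it what to do. A quick sanity check of the installed tooling:

kubeadm version -o short
kubectl version --client --short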

Add the contents below to /etc/sysctl.d/k8s.conf so that bridged container traffic is visible to iptables:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

These settings require the br_netfilter kernel module, so load it and then apply:

sudo modprobe br_netfilter
sudo sysctl --system
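Confirm the settings are active:

sysctl net.bridge.bridge-nf-call-iptables    # should print: net.bridge.bridge-nf-call-iptables = 1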

Install tc

kubeadm's preflight checks warn if the tc utility is missing. On Fedora, tc is in the iproute-tc package (there is no package named simply tc):

sudo dnf install -y iproute-tc

Disable Swap

Swap must be disabled or the kubelet will refuse to run.

sudo swapoff -a

To make this permanent across reboots, comment out the swap entry (the /dev/mapper/* line) in /etc/fstab.
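To confirm no swap is active (the command prints nothing when swap is fully off):

swapon --show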

Repeat Above

Repeat the process outlined above for each node in the cluster.

Create Master Control Node

I have just one master control node in my cluster (I only have 3 NUCs). I'll show you a trick below for letting the master node also act as a worker node. I will be using flannel as my CNI, which expects the 10.244.0.0/16 pod network CIDR. To create the master control node:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Fix any issues or warnings that are displayed before proceeding. You can use the --dry-run flag to kubeadm to see any errors without modifying the system.
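For example, a dry run of the same init reports problems without changing anything on the node:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --dry-run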

Install CNI Network

Follow the directions for your favorite CNI. As I indicated above, I used flannel, applied after configuring kubectl access for my user. The kubeadm init output prints the commands for that:
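mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Then apply the flannel manifest: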

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
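Give the pods a minute to start, then check that everything is running and the master node reports Ready:

kubectl get pods --all-namespaces
kubectl get nodes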

Add Worker Nodes

The output from kubeadm init includes a join command; execute that command on each worker node to add it to the cluster. It will look like this, with your master's address in place of <control-plane-ip>:

sudo kubeadm join <control-plane-ip>:6443 --token 2kdqcd.s5oh7e0ykph8wk99 \
    --discovery-token-ca-cert-hash sha256:9cf5070b009674ec43153a488889891dd9380fb43585f231cf79852616e4b0c4
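Back on the master, confirm the workers joined; they should show Ready once flannel is running on them. If your bootstrap token has expired (tokens last 24 hours by default), you can generate a fresh join command on the master:

kubectl get nodes
sudo kubeadm token create --print-join-command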