An easy guide to installing a Kubernetes cluster with kubeadm
Kubernetes can be installed with the following deployment tools:
- Bootstrapping clusters with kubeadm
- Installing Kubernetes with kops
- Installing Kubernetes with Kubespray
In this article, we will install a Kubernetes cluster with kubeadm.
Prepare the cluster nodes
We have three CentOS nodes on which to install the Kubernetes cluster.
$ cat /etc/centos-release
CentOS Linux release 7.9.2009 (Core)
$ uname -r
3.10.0-1160.11.1.el7.x86_64
Configuring the network, firewall and SELinux
We disable the firewall and SELinux to make the deployment easier; this is for study purposes only.
You can configure the network by following the official documentation.
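For reference, on a CentOS 7 lab node the firewall and SELinux can be disabled and the kernel settings recommended by the official kubeadm guide applied roughly as follows (a sketch for study purposes only; production clusters should keep SELinux enabled and open the required ports instead):
$ sudo systemctl stop firewalld && sudo systemctl disable firewalld
$ sudo setenforce 0
$ sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
$ sudo swapoff -a    # kubelet requires swap to be off; also remove any swap entry from /etc/fstab
$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
$ sudo modprobe br_netfilter
$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sudo sysctl --system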
Installing container runtime
To run containers in Pods, Kubernetes uses a container runtime. We need to install a container runtime into each node in the cluster so that Pods can run there. The following are common container runtimes used with Kubernetes on Linux:
- containerd
- CRI-O
- Docker
Install Docker runtime
On each node, install Docker Engine as below:
$ yum install -y yum-utils
$ yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ yum install docker-ce docker-ce-cli containerd.io
$ systemctl start docker
$ systemctl status docker
Configure Docker daemon
On each node, configure the Docker daemon, in particular to use systemd for the management of the container’s cgroups.
$ sudo mkdir /etc/docker
$ cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
$ sudo systemctl enable docker
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
Note: overlay2 is the preferred storage driver for systems running Linux kernel version 4.0 or higher, or RHEL or CentOS using version 3.10.0-514 and above.
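To confirm that Docker picked up the systemd cgroup driver, you can query the daemon (a quick sanity check; the --format template assumes a reasonably recent Docker CLI). After the configuration above it should print systemd:
$ docker info --format '{{.CgroupDriver}}'
systemd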
Installing kubeadm, kubelet and kubectl
We need to install the following packages on all cluster nodes:
- kubeadm: the command to bootstrap the cluster.
- kubelet: the component that runs on all of the machines in the cluster and does things like starting pods and containers.
- kubectl: the command line util to talk to the cluster.
$ cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
$ sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Installed:
kubeadm.x86_64 0:1.22.1-0 kubectl.x86_64 0:1.22.1-0 kubelet.x86_64 0:1.22.1-0
$ sudo systemctl enable --now kubelet
Configuring a cgroup driver
Both the container runtime and the kubelet have a property called “cgroup driver”, which is important for the management of cgroups on Linux machines.
kubeadm allows you to pass a KubeletConfiguration structure during kubeadm init. This KubeletConfiguration can include the cgroupDriver field which controls the cgroup driver of the kubelet.
A minimal example of configuring the field explicitly:
[root@node1 ~]# cat kubeadm-config.yaml
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.22.1
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
Such a configuration file can then be passed to the kubeadm command:
[root@node1 ~]# kubeadm init --config kubeadm-config.yaml
[init] Using Kubernetes version: v1.22.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 <node1-ip>]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node1 localhost] and IPs [<node1-ip> 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node1 localhost] and IPs [<node1-ip> 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.002713 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: un7mhw.i9enhg84xl2tpgup
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join <node1-ip>:6443 --token un7mhw.i9enhg84xl2tpgup \
--discovery-token-ca-cert-hash sha256:5553ba3acbbec95383fc4a274e4f21126ac8101c39dfe5262718a9f0fd1b3c32
Creating a cluster with kubeadm
Using kubeadm, you can create a minimum viable Kubernetes cluster that conforms to best practices.
The kubeadm tool is good if you need:
- A simple way for you to try out Kubernetes, possibly for the first time.
- A way for existing users to automate setting up a cluster and test their application.
- A building block in other ecosystem and/or installer tools with a larger scope.
Initializing the control-plane node
The control-plane node is the machine where the control plane components run, including etcd (the cluster database) and the API Server (which the kubectl command line tool communicates with).
- (Recommended) If you have plans to upgrade this single control-plane kubeadm cluster to high availability, you should specify the --control-plane-endpoint to set the shared endpoint for all control-plane nodes (see the combined example after this list). Such an endpoint can be either a DNS name or an IP address of a load balancer.
- Choose a Pod network add-on, and verify whether it requires any arguments to be passed to kubeadm init. Depending on which third-party provider you choose, you might need to set the --pod-network-cidr to a provider-specific value.
- (Optional) Since version 1.14, kubeadm tries to detect the container runtime on Linux by using a list of well-known domain socket paths. To use a different container runtime, or if more than one is installed on the provisioned node, specify the --cri-socket argument to kubeadm init.
- (Optional) Unless otherwise specified, kubeadm uses the network interface associated with the default gateway to set the advertise address for this particular control-plane node's API server. To use a different network interface, specify the --apiserver-advertise-address=<ip-address> argument to kubeadm init. To deploy an IPv6 Kubernetes cluster using IPv6 addressing, you must specify an IPv6 address, for example --apiserver-advertise-address=fd00::101.
- (Optional) Run kubeadm config images pull prior to kubeadm init to verify connectivity to the gcr.io container image registry.
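For reference, the optional flags above could be combined in a single invocation along the following lines (a sketch only; the endpoint name, advertise address, and CIDR are placeholder assumptions, not values used in this walkthrough):
$ kubeadm init \
    --control-plane-endpoint=k8s-api.example.local:6443 \
    --apiserver-advertise-address=<node1-ip> \
    --pod-network-cidr=192.168.0.0/16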
To initialize the control-plane node, run "kubeadm init".
$ kubeadm init --pod-network-cidr=192.168.0.0/16
kubeadm init first runs a series of prechecks to ensure that the machine is ready to run Kubernetes. These prechecks expose warnings and exit on errors. kubeadm init then downloads and installs the cluster control plane components. This may take several minutes.
In the previous section, Configuring a cgroup driver, we already ran the command to initialize the control-plane node.
If you need to run kubeadm init again, you must first tear down the cluster.
[root@node1 ~]# kubeadm reset
If you are the root user, execute the following command (also printed by kubeadm init) to configure kubectl.
[root@node1 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
Installing a Pod network add-on
In this walkthrough, we install Calico, an open source networking and network security solution for containers, virtual machines, and native host-based workloads.
[root@node1 ~]# kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
[root@node1 ~]# kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml
Note: Before creating this manifest, read its contents and make sure its settings are correct for your environment. For example, you may need to change the default IP pool CIDR to match your pod network CIDR.
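For example, if you initialized the cluster with a different --pod-network-cidr, download custom-resources.yaml, adjust the cidr field of the Installation resource, and apply your edited copy instead. The snippet below is a sketch of the relevant section; the values shown reflect the manifest's defaults at the time of writing and may differ in your copy:
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 192.168.0.0/16    # change this to match the --pod-network-cidr passed to kubeadm init
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()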
[root@node1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1 Ready control-plane,master 11m v1.22.1
[root@node1 ~]# kubectl get pods -n calico-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-868b656ff4-gv2tq 1/1 Running 0 2m41s
calico-node-wclb2 1/1 Running 0 2m41s
calico-typha-d8c5c85c5-kldfh 1/1 Running 0 2m42s
[root@node1 ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-apiserver calico-apiserver-554fbf9554-45d6l 1/1 Running 0 15m
calico-system calico-kube-controllers-868b656ff4-gv2tq 1/1 Running 0 16m
calico-system calico-node-wclb2 1/1 Running 0 16m
calico-system calico-typha-d8c5c85c5-kldfh 1/1 Running 0 16m
kube-system coredns-78fcd69978-lq9pp 1/1 Running 0 18m
kube-system coredns-78fcd69978-nm29f 1/1 Running 0 18m
kube-system etcd-node1 1/1 Running 1 19m
kube-system kube-apiserver-node1 1/1 Running 1 19m
kube-system kube-controller-manager-node1 1/1 Running 0 19m
kube-system kube-proxy-m48qn 1/1 Running 0 18m
kube-system kube-scheduler-node1 1/1 Running 1 19m
tigera-operator tigera-operator-698876cbb5-dghgv 1/1 Running 0 17m
You can install only one Pod network per cluster.
Control plane node isolation
Untaint the control-plane (master) node so that it is available for scheduling workloads:
[root@node1 ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
node/node1 untainted
Joining your nodes
The nodes are where your workloads (containers and Pods, etc.) run. To add new nodes to your cluster, do the following for each machine:
- SSH to the machine.
- Become root (e.g. sudo su -).
- Run the command that was output by kubeadm init.
[root@node2 ~]# kubeadm join <node1-ip>:6443 --token un7mhw.i9enhg84xl2tpgup \
    --discovery-token-ca-cert-hash sha256:5553ba3acbbec95383fc4a274e4f21126ac8101c39dfe5262718a9f0fd1b3c32
[root@node3 ~]# kubeadm join <node1-ip>:6443 --token un7mhw.i9enhg84xl2tpgup \
    --discovery-token-ca-cert-hash sha256:5553ba3acbbec95383fc4a274e4f21126ac8101c39dfe5262718a9f0fd1b3c32
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
- Certificate signing request was sent to apiserver and a response was received.
- The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
If you do not have the token, you can get it by running the following command on the control-plane node:
$ kubeadm token list
By default, tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the control-plane node:
$ kubeadm token create
If you don't have the value of --discovery-token-ca-cert-hash, you can get it by running the following command chain on the control-plane node:
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
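Alternatively, a single command on the control-plane node prints a complete, ready-to-use join command, including a fresh token and the CA certificate hash:
$ kubeadm token create --print-join-command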
We can check the cluster nodes as below.
[root@node1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1 Ready control-plane,master 37m v1.22.1
node2 Ready <none> 4m56s v1.22.1
node3 Ready <none> 7m49s v1.22.1
[root@node1 ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-apiserver calico-apiserver-554fbf9554-45d6l 1/1 Running 0 34m
calico-system calico-kube-controllers-868b656ff4-gv2tq 1/1 Running 0 35m
calico-system calico-node-cl5kt 1/1 Running 0 7m51s
calico-system calico-node-rtgcs 1/1 Running 0 4m58s
calico-system calico-node-wclb2 1/1 Running 0 35m
calico-system calico-typha-d8c5c85c5-7knvv 1/1 Running 0 7m46s
calico-system calico-typha-d8c5c85c5-kldfh 1/1 Running 0 35m
calico-system calico-typha-d8c5c85c5-qflvv 1/1 Running 0 4m56s
kube-system coredns-78fcd69978-lq9pp 1/1 Running 0 37m
kube-system coredns-78fcd69978-nm29f 1/1 Running 0 37m
kube-system etcd-node1 1/1 Running 1 37m
kube-system kube-apiserver-node1 1/1 Running 1 37m
kube-system kube-controller-manager-node1 1/1 Running 0 37m
kube-system kube-proxy-d55xr 1/1 Running 0 7m51s
kube-system kube-proxy-m48qn 1/1 Running 0 37m
kube-system kube-proxy-m7drg 1/1 Running 0 4m58s
kube-system kube-scheduler-node1 1/1 Running 1 37m
tigera-operator tigera-operator-698876cbb5-dghgv 1/1 Running 0 35m
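As a quick check that scheduling works across the new worker nodes, you can create a small test deployment and confirm that its Pods land on node2 and node3 (the nginx image here is just an example workload, not part of the cluster setup):
[root@node1 ~]# kubectl create deployment nginx --image=nginx --replicas=3
[root@node1 ~]# kubectl get pods -o wide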