One of the things I noticed while studying for my CKAD exam is that my test cluster was a bit behind. It’s been up and running for over 200 days at this point and I’m several versions behind. Version 1.17 is out, the exam was based on 1.16, and I’m running 1.13.
NAME                         STATUS   ROLES    AGE    VERSION
runlevl41c.mylabserver.com   Ready    master   207d   v1.13.5
runlevl42c.mylabserver.com   Ready    <none>   207d   v1.13.5
runlevl43c.mylabserver.com   Ready    <none>   207d   v1.13.5
At this point, I haven’t updated anything. So let’s see what my options are with sudo kubeadm upgrade plan.
I0112 00:57:07.499711   16540 version.go:237] remote version is much newer: v1.17.0; falling back to: stable-1.13

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     3 x v1.13.5   v1.13.12

Upgrade to the latest version in the v1.13 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.13.7   v1.13.12
Controller Manager   v1.13.7   v1.13.12
Scheduler            v1.13.7   v1.13.12
Kube Proxy           v1.13.7   v1.13.12
CoreDNS              1.2.6     1.2.6
Etcd                 3.2.24    3.2.24
I’ve culled out the boring bits. The key takeaways are that I’m four minor versions behind and that I can currently only upgrade to another 1.13 dot release. So let’s update kubeadm and see what happens.
Now for the downside. You really need to keep your cluster(s) current, since you can’t jump multiple versions at once. Per the official documentation, you can only move between consecutive minor versions, or to a newer patch of the same minor. In my case, that means I can go to the latest 1.13 patch or from 1.13 to 1.14, but not straight from 1.13 to 1.17.
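To make the path concrete, getting this cluster current means four full passes of the procedure below, one per minor series (the later patch versions are placeholders I haven’t looked up yet):

# One complete upgrade cycle per hop: control plane first, then each worker.
# v1.13.5 -> v1.14.x -> v1.15.x -> v1.16.x -> v1.17.x
sudo kubeadm upgrade plan   # at each hop, only the next minor series is offered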
Let’s start by upgrading kubeadm. First we’ll find the latest version to upgrade to.
$ apt update
$ apt-cache policy kubeadm
...
   1.15.0-00 500
      500 https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   1.14.10-00 500
      500 https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   1.14.9-00 500
      500 https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
...
Since we can only hop one minor version at a time, our target is 1.14.10, the latest 1.14 release.
sudo apt-mark unhold kubeadm kubelet && \
sudo apt-get update && sudo apt-get install -y kubeadm=1.14.10-00 && \
sudo apt-mark hold kubeadm
Now if we check our upgrade plan again, we’ll see 1.14 is available.
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     3 x v1.13.5   v1.14.10

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.13.7   v1.14.10
Controller Manager   v1.13.7   v1.14.10
Scheduler            v1.13.7   v1.14.10
Kube Proxy           v1.13.7   v1.14.10
CoreDNS              1.2.6     1.3.1
Etcd                 3.2.24    3.3.10
We’ll use the command provided by the plan.

sudo kubeadm upgrade apply v1.14.10
Hopefully, you’ll get the following message:

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.14.10". Enjoy!
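As a quick sanity check (a habit of mine, not part of the documented procedure), you can confirm the control plane really is on the new version:

kubectl version --short         # Server Version should now report v1.14.10
kubectl -n kube-system get po   # the control plane pods are recreated during the apply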
Next we’ll upgrade the kubelet.
sudo apt-mark unhold kubelet kubectl && \
sudo apt-get update && sudo apt-get install -y kubelet=1.14.10-00 kubectl=1.14.10-00 && \
sudo apt-mark hold kubelet kubectl
Now we’ll update kubeadm on our worker nodes using the same command we used above.
sudo apt-mark unhold kubeadm kubelet && \
sudo apt-get update && sudo apt-get install -y kubeadm=1.14.10-00 && \
sudo apt-mark hold kubeadm
From a master node, we’ll start draining the worker nodes so that the pods which aren’t needed on them (e.g. app pods) get evicted. To preserve the kube system pods, we’ll ignore DaemonSets.
kubectl drain worker_node --ignore-daemonsets
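One caveat worth knowing: drain refuses to evict pods that use emptyDir storage or that aren’t managed by a controller. If you hit that, the flags below override it in kubectl of this vintage (double-check against your version, and note that any emptyDir data is lost):

kubectl drain worker_node --ignore-daemonsets --delete-local-data --force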
Before we actually drain anything, though, let’s see what pods we have deployed and where.
$ k get po -o custom-columns=NAME:{.metadata.name},NODE:{.spec.nodeName}
NAME                     NODE
bash                     runlevl42c.mylabserver.com
nginx                    runlevl42c.mylabserver.com
nginx-7cdbd8cdc9-qhhh2   runlevl42c.mylabserver.com
nginx-7cdbd8cdc9-s6dvp   runlevl42c.mylabserver.com
nginx-7cdbd8cdc9-xbdb2   runlevl43c.mylabserver.com
pod-calc                 runlevl43c.mylabserver.com
pod-calc-47chz           runlevl43c.mylabserver.com
pod-calc-4pff2           runlevl42c.mylabserver.com
pod1                     runlevl42c.mylabserver.com
repel-pod                runlevl42c.mylabserver.com
secrets                  runlevl43c.mylabserver.com
xmas                     runlevl42c.mylabserver.com
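(If you’d rather not type out the custom-columns spec, plain old wide output includes a NODE column too; the custom columns just keep things narrow.)

k get po -o wide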
Once we’ve drained the node, we only have pods on runlevl43c. Note that the Deployment-managed nginx replicas were recreated under new names, while the bare pods that had been on the drained node were simply evicted and not rescheduled, since nothing owned them.
NAME                     NODE
nginx-7cdbd8cdc9-7wbnq   runlevl43c.mylabserver.com
nginx-7cdbd8cdc9-ghggf   runlevl43c.mylabserver.com
nginx-7cdbd8cdc9-xbdb2   runlevl43c.mylabserver.com
pod-calc                 runlevl43c.mylabserver.com
pod-calc-47chz           runlevl43c.mylabserver.com
pod-calc-r7ct5           runlevl43c.mylabserver.com
secrets                  runlevl43c.mylabserver.com
From each worker node, let’s update the kubelet config.
sudo kubeadm upgrade node config --kubelet-version v1.14.10
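If you’re curious what that command actually does: kubeadm clusters of this vintage store the kubelet configuration for each minor version in a ConfigMap in kube-system, and the node config step pulls it down locally. You can peek at it from the master (the kubelet-config-1.14 name follows kubeadm’s kubelet-config-<minor> convention; treat that as an assumption for your cluster):

kubectl -n kube-system get configmap kubelet-config-1.14 -o yaml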
And the kubelet itself.
sudo apt-mark unhold kubelet kubectl && \
sudo apt-get update && sudo apt-get install -y kubelet=1.14.10-00 kubectl=1.14.10-00 && \
sudo apt-mark hold kubelet kubectl
Restart the kubelet.
sudo systemctl restart kubelet
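Before moving on, it doesn’t hurt to confirm the kubelet actually came back up (another sanity check of mine, not a required step):

systemctl status kubelet   # should show active (running)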
Uncordon each worker.
kubectl uncordon node_name
Once you’re done, check the nodes.
$ k get no
NAME                         STATUS   ROLES    AGE    VERSION
runlevl41c.mylabserver.com   Ready    master   207d   v1.14.10
runlevl42c.mylabserver.com   Ready    <none>   207d   v1.14.10
runlevl43c.mylabserver.com   Ready    <none>   207d   v1.14.10
Now to keep upgrading until I’m current. 🙂
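The remaining hops are the same procedure, once per minor series. On the master, the 1.15 hop would start something like this; 1.15.0-00 was the newest version in the earlier apt-cache output, but check apt-cache policy kubeadm again first in case a later 1.15 patch has landed:

sudo apt-mark unhold kubeadm kubelet && \
sudo apt-get update && sudo apt-get install -y kubeadm=1.15.0-00 && \
sudo apt-mark hold kubeadm
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.15.0

Then the kubelet and worker steps above, and the same again for the 1.16 and 1.17 series.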