This page describes a bare-metal deployment of Kubernetes.
The configuration directory layout used throughout this page is as follows:
kubernetes
├── cert    <- certificates for the external etcd (CA, client certificate, and key)
├── cni     <- Calico configuration
├── config  <- kubeadm configuration
└── vip     <- kube-vip manifests
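The layout can be created up front. This is only a convenience sketch; the parent path ~/kubernetes is an assumption, adjust it to wherever the working copy lives:
# Hypothetical helper: create the directory layout shown above
mkdir -p ~/kubernetes/{cert,cni,config,vip}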
On each control plane node, /etc/hosts
has been updated as follows:
--- a/etc/hosts 2023-12-28 22:42:56.049585065 +0700
+++ b/etc/hosts 2023-12-28 22:42:56.049585065 +0700
@@ -7,3 +7,8 @@
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
+
+10.0.0.5 etcd-one
+10.0.0.6 etcd-two
+10.0.0.7 etcd-three
+10.0.0.11 k8s-cp-endpoint
On each worker node, /etc/hosts
has been updated as follows:
--- a/etc/hosts 2023-12-28 22:42:56.049585065 +0700
+++ b/etc/hosts 2023-12-28 22:42:56.049585065 +0700
@@ -7,3 +7,5 @@
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
+
+10.0.0.11 k8s-cp-endpoint
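Before initializing the cluster, it can help to confirm that each node resolves the names it needs and can reach them: control plane nodes need the etcd endpoints, every node needs k8s-cp-endpoint. A hedged check (nc from netcat is assumed to be installed):
# On control plane nodes: etcd names must resolve and the etcd client port must be reachable
for h in etcd-one etcd-two etcd-three; do getent hosts "$h" && nc -zv "$h" 2379; done
# On every node: the control plane endpoint must resolve
# (port 6443 only answers after the first control plane has been initialized)
getent hosts k8s-cp-endpoint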
Initialize Kubernetes Cluster
Initialize First Control Plane
Kubernetes provides official instructions for creating a cluster with kubeadm; to initialize the cluster, follow these steps:
-
Prepare certificates
sudo mkdir /etc/cert
sudo cp ~/cert/ca.crt /etc/cert/
sudo cp ~/cert/k8s-cp-endpoint.crt /etc/cert/
sudo cp ~/cert/k8s-cp-endpoint.key /etc/cert/
-
Review cluster configuration
vi config/kube.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.0.11
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: k8s-cp-endpoint
etcd:
  external:
    endpoints:
      - https://etcd-one:2379
      - https://etcd-two:2379
      - https://etcd-three:2379
    caFile: /etc/cert/ca.crt
    certFile: /etc/cert/k8s-cp-endpoint.crt
    keyFile: /etc/cert/k8s-cp-endpoint.key
networking:
  podSubnet: "10.244.0.0/16"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
serverTLSBootstrap: true
-
Initialize Kubernetes
sudo kubeadm init --config config/kube.yaml --upload-certs
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-cp-endpoint k8s-cp-one kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.11]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.001839 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key: 8e93e48a526f0e6758f4f8ad5acede40f7518db1946a629c539a7d3d192dbaef
[mark-control-plane] Marking the node k8s-cp-one as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-cp-one as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: i4yfia.87n0pz9f0buuqso8
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join k8s-cp-endpoint:6443 --token i4yfia.87n0pz9f0buuqso8 \
    --discovery-token-ca-cert-hash sha256:7ff3b26a6108abde89eaaf583941509ae399fdafd18c788a0fd6b6ee7c95be20 \
    --control-plane --certificate-key 8e93e48a526f0e6758f4f8ad5acede40f7518db1946a629c539a7d3d192dbaef

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-cp-endpoint:6443 --token i4yfia.87n0pz9f0buuqso8 \
    --discovery-token-ca-cert-hash sha256:7ff3b26a6108abde89eaaf583941509ae399fdafd18c788a0fd6b6ee7c95be20
-
Setup kubectl cluster access as a regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
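At this point the control plane is up, but the node will typically report NotReady and the CoreDNS pods stay Pending until a CNI is installed in the next section. A quick sanity check using standard kubectl commands:
# The node appears, usually NotReady until Calico is deployed
kubectl get nodes
# Control plane pods should be Running; CoreDNS stays Pending without a CNI
kubectl get pods -n kube-system -o wide
# The API server should answer on the control plane endpoint
kubectl cluster-info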
Deploy Container Network Interface (CNI)
Calico is a networking and security solution that enables Kubernetes workloads and non-Kubernetes/legacy workloads to communicate seamlessly and securely.
Deployment of Calico is described here; follow these steps to deploy Calico as the CNI:
-
Deploy the Tigera operator by applying its CRDs and deployment
wget -O cni/tigera-operator.yaml https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml
kubectl create -f cni/tigera-operator.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
-
Deploy Calico installation
wget -O cni/calico-tigera.yaml https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml
vi cni/calico-tigera.yaml
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
      - blockSize: 26
        cidr: 10.244.0.0/16
        encapsulation: VXLANCrossSubnet
        natOutgoing: Enabled
        nodeSelector: all()
---
# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
kubectl apply -f cni/calico-tigera.yaml
installation.operator.tigera.io/default created apiserver.operator.tigera.io/default created
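Calico takes a minute or two to roll out. A hedged way to watch it converge, using the tigerastatus resource installed by the operator above:
# apiserver and calico should eventually report AVAILABLE=True
kubectl get tigerastatus
# CNI pods live in the calico-system namespace created by the operator
kubectl get pods -n calico-system
# Nodes move from NotReady to Ready once calico-node is running on them
kubectl get nodes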
Join Other Control Nodes
-
From the second and third control plane nodes, join the cluster
sudo kubeadm join k8s-cp-endpoint:6443 --token i4yfia.87n0pz9f0buuqso8 \
    --discovery-token-ca-cert-hash sha256:7ff3b26a6108abde89eaaf583941509ae399fdafd18c788a0fd6b6ee7c95be20 \
    --control-plane --certificate-key 8e93e48a526f0e6758f4f8ad5acede40f7518db1946a629c539a7d3d192dbaef
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[download-certs] Saving the certificates to the folder: "/etc/kubernetes/pki"
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-cp-endpoint k8s-cp-two kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.12]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Skipping etcd check in external mode
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[control-plane-join] Using external etcd - no local stacked instance added
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node k8s-cp-two as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-cp-two as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.

To start administering your cluster from this node, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
-
Setup kubectl cluster access as a regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
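The token and certificate key printed by kubeadm init expire: the bootstrap token after 24 hours and the uploaded certificates after two hours, as noted in the output above. If they have lapsed before the remaining nodes join, they can be regenerated on the first control plane with standard kubeadm subcommands:
# Print a fresh worker join command (creates a new bootstrap token)
kubeadm token create --print-join-command
# Re-upload the control plane certificates and print a new certificate key
sudo kubeadm init phase upload-certs --upload-certs
# For control plane joins, append --control-plane --certificate-key <new-key> to the printed join command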
Join Worker Nodes
-
From each worker node, join the cluster
sudo kubeadm join k8s-cp-endpoint:6443 --token i4yfia.87n0pz9f0buuqso8 \
    --discovery-token-ca-cert-hash sha256:7ff3b26a6108abde89eaaf583941509ae399fdafd18c788a0fd6b6ee7c95be20
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
-
After all control plane and worker nodes have joined the cluster, issue this command from a control plane to verify that all nodes are present
kubectl get node -o wide
NAME             STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
k8s-cp-one       Ready    control-plane   39m     v1.26.1   10.0.0.11     <none>        Ubuntu 22.04.1 LTS   5.15.0-58-generic   containerd://1.6.16
k8s-cp-three     Ready    control-plane   27m     v1.26.1   10.0.0.13     <none>        Ubuntu 22.04.1 LTS   5.15.0-58-generic   containerd://1.6.16
k8s-cp-two       Ready    control-plane   28m     v1.26.1   10.0.0.12     <none>        Ubuntu 22.04.1 LTS   5.15.0-58-generic   containerd://1.6.16
k8s-node-five    Ready    <none>          3m45s   v1.26.1   10.0.0.18     <none>        Ubuntu 22.04.1 LTS   5.15.0-58-generic   containerd://1.6.16
k8s-node-four    Ready    <none>          6m5s    v1.26.1   10.0.0.17     <none>        Ubuntu 22.04.1 LTS   5.15.0-58-generic   containerd://1.6.16
k8s-node-one     Ready    <none>          13m     v1.26.1   10.0.0.14     <none>        Ubuntu 22.04.1 LTS   5.15.0-58-generic   containerd://1.6.16
k8s-node-three   Ready    <none>          7m57s   v1.26.1   10.0.0.16     <none>        Ubuntu 22.04.1 LTS   5.15.0-58-generic   containerd://1.6.16
k8s-node-two     Ready    <none>          10m     v1.26.1   10.0.0.15     <none>        Ubuntu 22.04.1 LTS   5.15.0-58-generic   containerd://1.6.16
The cluster configuration above sets serverTLSBootstrap: true,
so kubelet serving certificate CSRs must be approved. To list them, issue kubectl get csr,
then approve each one with kubectl certificate approve <csr-name>.
To automate CSR approval, the following command can be used:
kubectl get csr | grep Pending | awk '/csr\-[a-z0-9]+/{print $1}' | xargs /usr/bin/kubectl certificate approve
For more information see here.
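The one-liner above has to be re-run whenever a kubelet requests a new serving certificate, so it is convenient to schedule it. A hedged sketch that writes a root cron entry on one control plane; the file name, schedule, and kubeconfig path are assumptions:
# Hypothetical cron entry: approve pending CSRs every 5 minutes
sudo tee /etc/cron.d/approve-kubelet-csr <<'EOF'
*/5 * * * * root KUBECONFIG=/etc/kubernetes/admin.conf /usr/bin/kubectl get csr | grep Pending | awk '/csr\-[a-z0-9]+/{print $1}' | xargs -r /usr/bin/kubectl certificate approve
EOF
Note that this approves every pending CSR without inspection, which is convenient but coarse; a more selective approver may be preferable in stricter environments.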
Upgrade Cluster
Upgrading the cluster to the latest version can be done by following the official guide here.
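The official procedure first brings the kubeadm package itself to the target version on the node, then runs the plan/apply steps below. On the Ubuntu nodes used here that is roughly as follows; the exact version pin is an assumption and depends on the configured package repository:
# On the first control plane: upgrade kubeadm to the target version
sudo apt-get update
sudo apt-mark unhold kubeadm
sudo apt-get install -y kubeadm='1.28.3-*'
sudo apt-mark hold kubeadm
kubeadm version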
Upgrade Plan
-
Verify whether an upgrade is available
sudo kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.28.1
[upgrade/versions] kubeadm version: v1.28.3
[upgrade/versions] Target version: v1.28.3
[upgrade/versions] Latest version in the v1.28 series: v1.28.3

Upgrade to the latest version in the v1.28 series:

COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.28.1   v1.28.3
kube-controller-manager   v1.28.1   v1.28.3
kube-scheduler            v1.28.1   v1.28.3
kube-proxy                v1.28.1   v1.28.3
CoreDNS                   v1.10.1   v1.10.1

You can now apply the upgrade by executing the following command:

  kubeadm upgrade apply v1.28.3

_____________________________________________________________________

The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________
-
Apply upgrade plan
sudo kubeadm upgrade apply v1.28.3
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.28.3"
[upgrade/versions] Cluster version: v1.28.1
[upgrade/versions] kubeadm version: v1.28.3
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
W1024 17:08:59.606816 15647 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.28.3" (timeout: 5m0s)...
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests2874241714"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-10-24-17-08-59/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-10-24-17-08-59/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-10-24-17-08-59/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config1422056288/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[upgrade/addons] skip upgrade addons because control plane instances [k8s-cp-three k8s-cp-two] have not been upgraded

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.28.3". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
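kubeadm upgrade apply only replaces the control plane components; as the last line above says, the kubelet on this node still has to be upgraded separately. A hedged sketch for the first control plane, assuming Ubuntu packages and the same version pin as before:
kubectl drain k8s-cp-one --ignore-daemonsets
sudo apt-mark unhold kubelet kubectl
sudo apt-get update && sudo apt-get install -y kubelet='1.28.3-*' kubectl='1.28.3-*'
sudo apt-mark hold kubelet kubectl
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl uncordon k8s-cp-one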
Upgrade Other Control Planes
To upgrade the rest of the control planes, do:
-
On the first control plane, drain the node to be upgraded
kubectl drain k8s-cp-two --ignore-daemonsets
-
On the node being upgraded, apply the upgrade
sudo kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W1024 17:13:39.714933 8495 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.28.3"...
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests3719544563"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-10-24-17-13-39/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-10-24-17-13-39/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-10-24-17-13-39/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/addons] skip upgrade addons because control plane instances [k8s-cp-three] have not been upgraded
[upgrade] The control plane instance for this node was successfully updated!
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config2167940923/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
-
Once completed, uncordon the node
kubectl uncordon k8s-cp-two
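As the log above suggests, each remaining node also needs its packages brought to the target version: kubeadm before running kubeadm upgrade node, and kubelet/kubectl afterwards. A hedged per-node sketch (Ubuntu packages, version pin assumed):
# On the node being upgraded, before 'kubeadm upgrade node'
sudo apt-mark unhold kubeadm && sudo apt-get update && sudo apt-get install -y kubeadm='1.28.3-*' && sudo apt-mark hold kubeadm
# ...and after 'kubeadm upgrade node' has finished
sudo apt-mark unhold kubelet kubectl && sudo apt-get install -y kubelet='1.28.3-*' kubectl='1.28.3-*' && sudo apt-mark hold kubelet kubectl
sudo systemctl daemon-reload && sudo systemctl restart kubelet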
Upgrade Worker Nodes
To upgrade worker nodes, do:
-
On a control plane, drain the worker node to be upgraded
kubectl drain k8s-node-one --ignore-daemonsets
-
On the worker node, apply the upgrade
sudo kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config2604754615/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
-
Once completed, uncordon the node
kubectl uncordon k8s-node-one
-
Verify that all nodes have been upgraded
kubectl get node -o wide
NAME             STATUS   ROLES           AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
k8s-cp-one       Ready    control-plane   262d   v1.28.3   10.0.0.11     <none>        Ubuntu 22.04.3 LTS   5.15.0-87-generic   containerd://1.6.24
k8s-cp-three     Ready    control-plane   262d   v1.28.3   10.0.0.13     <none>        Ubuntu 22.04.3 LTS   5.15.0-87-generic   containerd://1.6.24
k8s-cp-two       Ready    control-plane   262d   v1.28.3   10.0.0.12     <none>        Ubuntu 22.04.3 LTS   5.15.0-87-generic   containerd://1.6.24
k8s-node-five    Ready    <none>          262d   v1.28.3   10.0.0.18     <none>        Ubuntu 22.04.3 LTS   5.15.0-87-generic   containerd://1.6.24
k8s-node-four    Ready    <none>          262d   v1.28.3   10.0.0.17     <none>        Ubuntu 22.04.3 LTS   5.15.0-87-generic   containerd://1.6.24
k8s-node-one     Ready    <none>          262d   v1.28.3   10.0.0.14     <none>        Ubuntu 22.04.3 LTS   5.15.0-87-generic   containerd://1.6.24
k8s-node-three   Ready    <none>          262d   v1.28.3   10.0.0.16     <none>        Ubuntu 22.04.3 LTS   5.15.0-87-generic   containerd://1.6.24
k8s-node-two     Ready    <none>          262d   v1.28.3   10.0.0.15     <none>        Ubuntu 22.04.3 LTS   5.15.0-87-generic   containerd://1.6.24
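For future upgrades, the per-worker sequence above (drain, upgrade on the node, uncordon) lends itself to a small wrapper. This is only a sketch: it assumes passwordless SSH and sudo to each worker, that the worker names from the table above are reachable over SSH, and the same Ubuntu package pin as before.
#!/bin/bash
# Hypothetical helper: roll the worker upgrade across nodes one at a time.
set -euo pipefail
TARGET='1.28.3-*'
for node in k8s-node-one k8s-node-two k8s-node-three k8s-node-four k8s-node-five; do
  kubectl drain "$node" --ignore-daemonsets
  ssh "$node" "sudo apt-mark unhold kubeadm kubelet kubectl && \
    sudo apt-get update && sudo apt-get install -y kubeadm=$TARGET && \
    sudo kubeadm upgrade node && \
    sudo apt-get install -y kubelet=$TARGET kubectl=$TARGET && \
    sudo apt-mark hold kubeadm kubelet kubectl && \
    sudo systemctl daemon-reload && sudo systemctl restart kubelet"
  kubectl uncordon "$node"
done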
High Availability
Kube-vip is used to provide a virtual IP for high availability. The steps to install kube-vip in the cluster are described below:
-
Apply RBAC
wget -O vip/rbac.yaml https://kube-vip.io/manifests/rbac.yaml
kubectl apply -f vip/rbac.yaml
-
Create daemonset manifest
vi vip/kube-vip.sh
#!/bin/bash
CD=$(dirname $0)

# https://kube-vip.io/docs/installation/daemonset/
VIP=10.0.0.10
INTERFACE=eth0
KVVERSION=$(curl -sL https://api.github.com/repos/kube-vip/kube-vip/releases | jq -r ".[0].name")
IMG=ghcr.io/kube-vip/kube-vip:$KVVERSION

ctr image pull $IMG
ctr run --rm --net-host $IMG vip /kube-vip manifest daemonset \
    --interface $INTERFACE \
    --address $VIP \
    --inCluster \
    --taint \
    --controlplane \
    --services \
    --arp \
    --leaderElection | tee ${CD}/kube-vip.yaml
Then run the script to generate the manifest:
sudo ./vip/kube-vip.sh
-
Apply generated manifest
vi vip/kube-vip.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  creationTimestamp: null
  labels:
    app.kubernetes.io/name: kube-vip-ds
    app.kubernetes.io/version: v0.6.3
  name: kube-vip-ds
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-vip-ds
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/name: kube-vip-ds
        app.kubernetes.io/version: v0.6.3
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists
            - matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: Exists
      containers:
      - args:
        - manager
        env:
        - name: vip_arp
          value: "true"
        - name: port
          value: "6443"
        - name: vip_interface
          value: eth0.49
        - name: vip_cidr
          value: "32"
        - name: cp_enable
          value: "true"
        - name: cp_namespace
          value: kube-system
        - name: vip_ddns
          value: "false"
        - name: svc_enable
          value: "true"
        - name: svc_leasename
          value: plndr-svcs-lock
        - name: vip_leaderelection
          value: "true"
        - name: vip_leasename
          value: plndr-cp-lock
        - name: vip_leaseduration
          value: "5"
        - name: vip_renewdeadline
          value: "3"
        - name: vip_retryperiod
          value: "1"
        - name: address
          value: 10.0.0.10
        - name: prometheus_server
          value: :2112
        image: ghcr.io/kube-vip/kube-vip:v0.6.3
        imagePullPolicy: Always
        name: kube-vip
        resources: {}
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
      hostNetwork: true
      serviceAccountName: kube-vip
      tolerations:
      - effect: NoSchedule
        operator: Exists
      - effect: NoExecute
        operator: Exists
  updateStrategy: {}
status:
  currentNumberScheduled: 0
  desiredNumberScheduled: 0
  numberMisscheduled: 0
  numberReady: 0
kubectl apply -f vip/kube-vip.yaml
-
Update /etc/hosts on all nodes to reflect the new virtual IP
--- a/etc/hosts 2023-12-28 22:42:56.049585065 +0700
+++ b/etc/hosts 2023-12-28 22:42:56.049585065 +0700
@@ -8,4 +8,4 @@
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

-10.0.0.11 k8s-cp-endpoint
+10.0.0.10 k8s-cp-endpoint
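Once kube-vip is running and /etc/hosts points at the virtual IP, it is worth confirming that the VIP answers and that a leader has been elected. A hedged check (nc from netcat assumed):
# The VIP should respond on the API server port
nc -zv k8s-cp-endpoint 6443
# kube-vip pods run on the control plane nodes; their logs show ARP and leader-election activity
kubectl -n kube-system get pods -l app.kubernetes.io/name=kube-vip-ds -o wide
kubectl -n kube-system logs -l app.kubernetes.io/name=kube-vip-ds --tail=20
# kubectl itself now goes through the VIP, since admin.conf targets k8s-cp-endpoint:6443
kubectl get nodes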