In general, it’s recommended to upgrade to the latest version of Cluster API to take advantage of bug fixes, new features and improvements.
If moving between different API versions, there may be additional tasks that you need to complete. See below for instructions for moving between v1alpha2 and v1alpha3.
Ensure that the version of Cluster API is compatible with the Kubernetes version of the management cluster.
For detailed information about the changes from
v1alpha2 to v1alpha3, please refer to the Cluster API v1alpha2 compared to v1alpha3 section.
Delete the cabpk-system namespace by running:

```shell
kubectl delete namespace cabpk-system
```
Delete the capi-controller-manager deployment from the capi-system namespace by running:

```shell
kubectl delete deployment capi-controller-manager -n capi-system
```
Depending on your infrastructure provider, delete the controller-manager deployment.
For example, if you are using the AWS provider, delete the capa-controller-manager deployment from the capa-system namespace by running:

```shell
kubectl delete deployment capa-controller-manager -n capa-system
```
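If you are unsure which provider deployments exist in your management cluster, one way to enumerate them is sketched below; it assumes provider deployments follow the usual `<prefix>-controller-manager` naming convention:

```shell
# List deployments across all namespaces and keep the controller managers.
# Assumes provider deployments follow the "<prefix>-controller-manager" naming.
kubectl get deployments --all-namespaces | grep -- '-controller-manager'
```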
This should be the case for all infrastructure providers using conversion webhooks to allow upgrading from v1alpha2 to v1alpha3. This can be verified by running
`kubectl get crd <crd name>.infrastructure.cluster.x-k8s.io -o yaml` for all the
infrastructure provider CRDs.
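As a sketch of that verification, you can print just the conversion strategy for each CRD, where `Webhook` indicates a conversion webhook is configured. The CRD names below are AWS-provider examples and are assumptions; substitute the CRDs of your own provider:

```shell
# Print the conversion strategy for each infrastructure provider CRD.
# "Webhook" means a conversion webhook is configured for that CRD.
# The CRD names below are AWS-provider examples; replace with your provider's.
for crd in awsclusters awsmachines awsmachinetemplates; do
  kubectl get crd "${crd}.infrastructure.cluster.x-k8s.io" \
    -o jsonpath='{.metadata.name}{": "}{.spec.conversion.strategy}{"\n"}'
done
```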
Install the v1alpha3 provider components by running clusterctl init with your infrastructure provider, for example:

```shell
clusterctl init --infrastructure aws
```
You should now be able to manage your resources using the
v1alpha3 version of the Cluster API components.
If your cluster has existing machines labeled with
cluster.x-k8s.io/control-plane, you may opt in to management of those machines by
creating a new KubeadmControlPlane object and updating the associated Cluster object’s
controlPlaneRef like so:
```yaml
---
apiVersion: "cluster.x-k8s.io/v1alpha3"
kind: Cluster
...
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    kind: KubeadmControlPlane
    name: controlplane
    namespace: default
...
```
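For reference, a minimal KubeadmControlPlane object matching the `controlPlaneRef` above might look like the following. This is a sketch only: the replica count, version, and infrastructure template values are placeholder assumptions that you must adapt to your cluster.

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: controlplane        # must match the Cluster's controlPlaneRef.name
  namespace: default        # must match the Cluster's controlPlaneRef.namespace
spec:
  replicas: 3               # placeholder: number of control plane machines
  version: v1.17.3          # placeholder: Kubernetes version of the control plane
  infrastructureTemplate:   # placeholder: your provider's machine template
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSMachineTemplate
    name: controlplane-template
  kubeadmConfigSpec: {}     # kubeadm configuration for the control plane
```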
- The KCP controller will refuse to adopt any control plane Machines not bootstrapped with the kubeadm bootstrapper.
- The KCP controller may immediately begin upgrading Machines post-adoption if they’re out of date.
- The KCP controller attempts to behave intelligently when adopting existing Machines, but because the bootstrapping process sets various fields in a Machine's KubeadmConfig, it is not always obvious what the original user-supplied
KubeadmConfig for that Machine would have been. The controller attempts to guess this intent so as not to replace Machines unnecessarily, so if it guesses wrongly, the consequence is that the KCP controller will effect an "upgrade" to its current config.
- If the cluster’s PKI materials were generated by an initial KubeadmConfig reconcile, they’ll be owned by the KubeadmConfig bound to that machine. The adoption process re-parents these resources to the KCP so they’re not lost during an upgrade, but deleting the KCP post-adoption will destroy those materials.
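To see where those PKI materials ended up, you can inspect the owner references on the cluster's certificate Secrets. A sketch, assuming the default naming where the CA Secret is called `<cluster-name>-ca` (here `my-cluster` is a placeholder):

```shell
# Print the kind/name of each owner of the cluster CA secret.
# After adoption this should list the KubeadmControlPlane rather than a KubeadmConfig.
# "my-cluster" is a placeholder cluster name; substitute your own.
kubectl get secret my-cluster-ca \
  -o jsonpath='{range .metadata.ownerReferences[*]}{.kind}/{.name}{"\n"}{end}'
```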
- The `ClusterConfiguration` is only partially reconciled with its ConfigMap in the workload cluster, and
`kubeadm` considers the ConfigMap authoritative. Fields which are reconciled include:
- Further information can be found in issue 2083