# Upgrading Cluster API components

## When to upgrade

In general, it is recommended to upgrade to the latest version of Cluster API to take advantage of bug fixes, new features, and improvements.
## Considerations

If moving between different API versions, there may be additional tasks that you need to complete. See below for instructions on moving between v1alpha2 and v1alpha3.

Ensure that the version of Cluster API is compatible with the Kubernetes version of the management cluster.
## Upgrading to newer versions of 0.3.x

It is recommended to use `clusterctl` to upgrade between versions of Cluster API 0.3.x.
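As a sketch of what that flow can look like (the management group name below is illustrative; `clusterctl upgrade plan` prints the exact command for your cluster):

```bash
# Show which providers installed in the management cluster can be
# upgraded, and to which versions.
clusterctl upgrade plan

# Apply the suggested upgrade for a management group; the group name
# here is an example taken from a default capi-system installation.
clusterctl upgrade apply --management-group capi-system/cluster-api --contract v1alpha3
```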
## Upgrading from Cluster API v1alpha2 (0.2.x) to Cluster API v1alpha3 (0.3.x)
We will be using the `clusterctl init` command to upgrade an existing management cluster from v1alpha2 to v1alpha3.

For detailed information about the changes from v1alpha2 to v1alpha3, please refer to the Cluster API v1alpha2 compared to v1alpha3 section.
### Prerequisites

There are a few preliminary steps needed to be able to run `clusterctl init` on a management cluster with v1alpha2 components installed.

#### Delete the cabpk-system namespace

Delete the `cabpk-system` namespace by running:

```bash
kubectl delete namespace cabpk-system
```
#### Delete the core and infrastructure provider controller-manager deployments

Delete the `capi-controller-manager` deployment from the `capi-system` namespace:

```bash
kubectl delete deployment capi-controller-manager -n capi-system
```

Depending on your infrastructure provider, delete its controller-manager deployment. For example, if you are using the AWS provider, delete the `capa-controller-manager` deployment from the `capa-system` namespace:

```bash
kubectl delete deployment capa-controller-manager -n capa-system
```
#### Optional: Ensure preserveUnknownFields is set to `false` in the infrastructure provider CRDs' spec

This should be the case for all infrastructure providers using conversion webhooks to allow upgrading from v1alpha2 to v1alpha3.

This can be verified by running `kubectl get crd <crd name>.infrastructure.cluster.x-k8s.io -o yaml` for all the infrastructure provider CRDs.
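A quicker spot check is to print only the relevant field with a JSONPath expression. The CRD name below assumes the AWS provider; substitute the CRDs of your own provider:

```bash
# Print only the preserveUnknownFields setting for one example CRD;
# repeat for each infrastructure provider CRD. Expect "false".
kubectl get crd awsclusters.infrastructure.cluster.x-k8s.io \
  -o jsonpath='{.spec.preserveUnknownFields}'
```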
### Upgrade Cluster API components using clusterctl

Run `clusterctl init` with the relevant infrastructure flag. For the AWS provider you would run:

```bash
clusterctl init --infrastructure aws
```

You should now be able to manage your resources using the v1alpha3 version of the Cluster API components.
### Adopting existing machines into KubeadmControlPlane management

If your cluster has existing machines labeled with `cluster.x-k8s.io/control-plane`, you may opt in to management of those machines by creating a new KubeadmControlPlane object and updating the associated Cluster object's `controlPlaneRef` like so:

```yaml
---
apiVersion: "cluster.x-k8s.io/v1alpha3"
kind: Cluster
...
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    kind: KubeadmControlPlane
    name: controlplane
    namespace: default
...
```
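A minimal KubeadmControlPlane object to pair with that `controlPlaneRef` might look like the following sketch. The replica count, Kubernetes version, and infrastructure template are illustrative assumptions (an AWS provider and an existing `AWSMachineTemplate` named `controlplane-template` are assumed here):

```yaml
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: controlplane      # must match the Cluster's controlPlaneRef.name
  namespace: default
spec:
  replicas: 3             # example value; match your existing machine count
  version: v1.17.3        # example value; match your cluster's version
  infrastructureTemplate:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSMachineTemplate          # provider-specific; AWS assumed
    name: controlplane-template       # hypothetical template name
  kubeadmConfigSpec:
    clusterConfiguration: {}
    initConfiguration: {}
    joinConfiguration: {}
```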
Caveats:

- The KCP controller will refuse to adopt any control plane Machines not bootstrapped with the kubeadm bootstrapper.
- The KCP controller may immediately begin upgrading Machines post-adoption if they're out of date.
- The KCP controller attempts to behave intelligently when adopting existing Machines, but because the bootstrapping process sets various fields in the KubeadmConfig of a machine, it's not always obvious what the original user-supplied KubeadmConfig would have been for that machine. The controller attempts to guess this intent in order to not replace Machines unnecessarily; if it guesses wrongly, the consequence is that the KCP controller will effect an "upgrade" to its current config.
- If the cluster's PKI materials were generated by an initial KubeadmConfig reconcile, they'll be owned by the KubeadmConfig bound to that machine. The adoption process re-parents these resources to the KCP so they're not lost during an upgrade, but deleting the KCP post-adoption will destroy those materials.
- The `ClusterConfiguration` is only partially reconciled with its ConfigMap in the workload cluster, and kubeadm considers the ConfigMap authoritative. Fields which are reconciled include:
  - `kubeadmConfigSpec.clusterConfiguration.etcd.local.imageRepository`
  - `kubeadmConfigSpec.clusterConfiguration.etcd.local.imageTag`
  - `kubeadmConfigSpec.clusterConfiguration.dns.imageRepository`
  - `kubeadmConfigSpec.clusterConfiguration.dns.imageTag`
- Further information can be found in issue 2083.