Creating Clusters on Huawei Cloud Stack
This document provides comprehensive instructions for creating Kubernetes clusters on the Huawei Cloud Stack platform using Cluster API.
TOC
- Prerequisites
  1. Required Plugin Installation
  2. HCS Infrastructure Input Preparation
- Cluster Creation Overview
- Control Plane Configuration
  - Configure HCS Authentication
  - Configure Machine Configuration Pool
  - Configure Machine Template
  - Configure KubeadmControlPlane
  - Configure HCSCluster
  - Configure Cluster
- Cluster Verification
  - Using kubectl
  - Expected Results
- Adding Worker Nodes
- Upgrading Clusters
Prerequisites
Before creating clusters, ensure all of the following prerequisites are met:
1. Required Plugin Installation
Install the following plugins on the platform's global cluster:
- Alauda Container Platform Kubeadm Provider
- Alauda Container Platform HCS Infrastructure Provider
For detailed installation instructions, refer to the Installation Guide.
2. HCS Infrastructure Input Preparation
Prepare all HCS-specific inputs before writing any YAML in this document:
- HCS credential Secret values
- Provider-recognized compute values such as imageName, flavorName, and availabilityZone
- Cluster network inventory, including the subnets and free IP ranges used by the cluster
- Control plane ELB address planning, including vipAddress, vipSubnetName, and fixed L4 and L7 IPs
- Static IP pool planning for control plane and worker nodes
See Infrastructure Resources for Huawei Cloud Stack for the complete checklist, source information, and constraints.
Cluster Creation Overview
At a high level, you'll create the following Cluster API resources in the platform's global cluster to provision infrastructure and bootstrap a functional Kubernetes cluster.
Before you write any YAML in this page, complete the preparation checklist in Infrastructure Resources for Huawei Cloud Stack. This checklist covers the values that the provider expects, where to get them, and which values must be planned before you fill the manifests.
Important Namespace Requirement
To ensure proper integration with the platform as business clusters, all resources must be deployed in the cpaas-system namespace. Deploying resources in other namespaces may result in integration issues.
The cluster creation process follows this order:
- Configure HCS authentication (Secret)
- Create machine configuration pool (HCSMachineConfigPool)
- Configure machine template (HCSMachineTemplate)
- Configure KubeadmControlPlane
- Configure HCSCluster
- Create the Cluster
Control Plane Configuration
The control plane manages cluster state, scheduling, and the Kubernetes API. This section shows how to configure a highly available control plane.
Configuration Parameter Guidelines
When configuring resources, exercise caution with parameter modifications:
- Replace only values enclosed in
<>with your environment-specific values - Preserve all other parameters as they represent optimized or required configurations
- Modifying non-placeholder parameters may result in cluster instability or integration issues
Configure HCS Authentication
HCS authentication information is stored in a Secret resource.
You can reuse an existing HCS credential Secret. Its name does not need to match the cluster name, but HCSCluster.spec.identityRef.name must reference this Secret.
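A minimal Secret sketch is shown below. The name, namespace, and the identityRef relationship follow this document; the stringData keys are hypothetical placeholders, because the exact credential keys expected by the HCS provider are not specified here. Confirm the required keys against the provider documentation before applying.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <hcs-credential-secret-name>   # referenced by HCSCluster.spec.identityRef.name
  namespace: cpaas-system              # required namespace for platform integration
type: Opaque
stringData:
  # Hypothetical keys shown for illustration only; replace them with the
  # credential keys the HCS infrastructure provider actually expects.
  username: <hcs-username>
  password: <hcs-password>
  authUrl: <hcs-iam-endpoint>
  projectId: <hcs-project-id>
```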
Configure Machine Configuration Pool
The HCSMachineConfigPool defines pre-configured hostnames and static IP addresses for VMs.
Pool Size Requirement
The configuration pool must include at least as many entries as the number of control plane nodes you plan to deploy.
Use one subnet selector per networks[] entry. For new manifests, set either subnetName or subnetId, but not both. Existing manifests may keep the deprecated subenetName field; if you also add subnetName while updating that manifest, its value must exactly match subenetName. Do not supply conflicting values across subenetName, subnetName, and subnetId.
If you use subnetName in the machine configuration pool, include the same subnet name in HCSCluster.spec.network.subnets.
For the initial cluster create flow, listing an existing subnet by name is enough because the controller resolves subnet metadata before the cluster becomes Ready. If you later add another subnet to an existing Ready HCSCluster, do not append only name. Patch the parent HCSCluster.spec.network.subnets entry with the full subnet object so later machine or ELB operations can reuse the resolved subnet metadata.
Note: The CRD schema lists subnetName, subenetName, and subnetId as optional fields and does not express their allowed combinations. Follow the provider-level rules above when writing manifests.
Note: To attach multiple NICs to one node, add multiple networks[] entries. The provider only uses these entries to attach NICs and assign subnet selectors plus static IPs. It does not support declaring per-NIC roles, default gateways, static routes, or per-NIC DNS settings.
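To make the subnet selector rules above concrete, here is a sketch of an HCSMachineConfigPool for a three-node control plane. The apiVersion and the top-level list field name (machineConfigs) are assumptions and may differ from the installed CRD; verify both against the CRD schema. Hostnames, subnet names, and IPs are placeholders.

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1   # assumed; check the installed CRD
kind: HCSMachineConfigPool
metadata:
  name: <cluster-name>-control-plane-pool
  namespace: cpaas-system
spec:
  machineConfigs:                       # hypothetical field name; one entry per planned node
    - hostname: <control-plane-node-1>
      networks:
        - subnetName: <subnet-name>     # for new manifests, set subnetName OR subnetId, not both
          ip: <static-ip-1>
    - hostname: <control-plane-node-2>
      networks:
        - subnetName: <subnet-name>
          ip: <static-ip-2>
    - hostname: <control-plane-node-3>
      networks:
        - subnetName: <subnet-name>
          ip: <static-ip-3>
```

The pool lists three entries to satisfy the pool size requirement for a three-replica control plane; add more entries before scaling up.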
Configure Machine Template
The HCSMachineTemplate defines the VM specifications for control plane nodes.
Storage Requirements
The following data disk mount points are recommended for control plane nodes:
- /var/lib/etcd - etcd data (10GB+)
- /var/lib/kubelet - kubelet data (100GB+)
- /var/lib/containerd - container runtime data (100GB+)
- /var/cpaas - platform data and logs (40GB+)
Note: Each data disk's mount point is required when dataVolumes is specified.
Note: Do not set runtime identity fields such as providerID or serverId in HCSMachineTemplate manifests. The provider assigns these values when it creates HCS instances.
Note: Tenant administrators cannot retrieve the provider-recognized flavorName and availabilityZone values from the HCS UI. Get the exact values from the HCS administrator before you apply the manifest.
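The following HCSMachineTemplate sketch ties the notes above together. The apiVersion and the dataVolumes field layout are assumptions; align them with the installed CRD. The imageName, flavorName, and availabilityZone values must be provider-recognized values obtained from the HCS administrator.

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1   # assumed; check the installed CRD
kind: HCSMachineTemplate
metadata:
  name: <cluster-name>-control-plane
  namespace: cpaas-system
spec:
  template:
    spec:
      imageName: <image-name>                 # provider-recognized value
      flavorName: <flavor-name>               # obtain from the HCS administrator
      availabilityZone: <availability-zone>   # obtain from the HCS administrator
      # Hypothetical dataVolumes layout; sizes follow the recommended minimums above.
      dataVolumes:
        - mountPoint: /var/lib/etcd
          size: 10
        - mountPoint: /var/lib/kubelet
          size: 100
        - mountPoint: /var/lib/containerd
          size: 100
        - mountPoint: /var/cpaas
          size: 40
      # Do not set runtime identity fields such as providerID or serverId;
      # the provider assigns them when it creates the HCS instances.
```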
Configure KubeadmControlPlane
The KubeadmControlPlane defines the Kubernetes control plane configuration.
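Below is a trimmed KubeadmControlPlane sketch for a three-replica control plane. The Cluster API fields are standard; the placeholder values, the encryption-provider file path, and the imageRepository must come from your environment and the approved release baseline, and the infrastructureRef apiVersion is assumed to match your HCSMachineTemplate.

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: <cluster-name>-control-plane
  namespace: cpaas-system
spec:
  replicas: 3
  version: <kubernetes-version>
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1   # assumed; match your HCSMachineTemplate
      kind: HCSMachineTemplate
      name: <cluster-name>-control-plane
  rolloutStrategy:
    rollingUpdate:
      maxSurge: 0            # omit the whole rolloutStrategy block when replicas is 1
  kubeadmConfigSpec:
    clusterConfiguration:
      imageRepository: <image-repository>   # from the approved release baseline
      apiServer:
        # Configure extraArgs and extraVolumes together so kube-apiserver
        # can read files written under /etc/kubernetes.
        extraArgs:
          encryption-provider-config: /etc/kubernetes/encryption-provider.conf
        extraVolumes:
          - name: encryption-provider
            hostPath: /etc/kubernetes/encryption-provider.conf
            mountPath: /etc/kubernetes/encryption-provider.conf
            readOnly: true
```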
The HCS controller also injects files while resolving cloud-init data. It writes /etc/kubernetes/pki/kubelet.crt, /etc/kubernetes/pki/kubelet.key, and /etc/kubernetes/encryption-provider.conf for control plane machines. For the first control plane machine, the controller generates the encryption provider configuration. After the control plane is initialized, it tries to reuse the existing kube-apiserver encryption provider configuration. If you include a bootstrap file at /etc/kubernetes/encryption-provider.conf, treat it as a placeholder because the controller-generated or synchronized file takes precedence.
Note: Configure apiServer.extraArgs and apiServer.extraVolumes together. If the volume is not mounted, kube-apiserver cannot read the files written under /etc/kubernetes.
Note: The rolloutStrategy.rollingUpdate.maxSurge: 0 setting is intended for highly available static-IP control planes. Keep it for fixed-size control plane pools with at least three replicas so replacements happen in a scale-down-then-scale-up order. If you create a single-control-plane HCS cluster (spec.replicas: 1), do not copy the rolloutStrategy block into the create manifest: KubeadmControlPlane validation rejects that scale-in style rollout configuration for a single replica.
Note: HCS also supports creating a single-control-plane cluster by setting spec.replicas: 1 and preparing one control plane config entry in the referenced HCSMachineConfigPool. Treat this as a creation-only topology, and leave the rollout strategy unset in that create manifest. The upgrade flow in this documentation does not support single-control-plane HCS clusters.
Use the OS Support Matrix only for the component versions it explicitly lists, such as coredns and etcd image tags for supported MicroOS images. It is not a complete source for all HCS manifest values. Before you apply this YAML, also use the approved release baseline for values such as imageRepository, DNS image repository, Kube-OVN version, Kube-OVN join CIDR, Pod CIDR, and Service CIDR.
Configure HCSCluster
The HCSCluster resource defines the HCS infrastructure configuration.
The HCS provider creates an Elastic Load Balance (ELB) on the HCS platform for the Kubernetes API server. This ELB must keep Hybrid Load Balancing enabled so cluster nodes can also reach the API server through the ELB address.
For the documented HCS workflow, provide vipAddress, elbVirsubnetL4Ips, and elbVirsubnetL7Ips. Each elbVirsubnetL4Ips[].ips and elbVirsubnetL7Ips[].ips entry must contain two IPs.
If you set vipDomainName, configure HCS Cloud DNS Private Zones so that the domain resolves to vipAddress.
List every cluster subnet in spec.network.subnets before you reference it anywhere else. vipSubnetName, elbVirsubnetL4Ips[].subnetName, elbVirsubnetL7Ips[].subnetName, and the subnetName values used by HCSMachineConfigPool must all exist in spec.network.subnets.
For the initial cluster create flow, the controller can resolve existing subnet metadata from name. For an existing Ready cluster, append a full subnet object instead of only name. Include id, and include neutronSubnetId for any subnet that the control plane ELB will use. Keep cidr, gatewayIp, primaryDNS, and secondaryDNS in the subnet inventory as well.
Do not disable Hybrid Load Balancing on the provider-created ELB after the cluster is created. The cluster depends on that ELB mode so nodes can reach the API server through the ELB address.
Do not include spec.controlPlaneEndpoint in the create manifest. In the HCS create flow, the controller derives and populates this field from spec.controlPlaneLoadBalancer after the HCSCluster is created. Do not set controlPlaneEndpoint manually, and do not add an empty controlPlaneEndpoint object. If controlPlaneEndpoint is explicitly present in the manifest, it must include both host and port.
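The sketch below assembles the HCSCluster rules from this section. The apiVersion and the exact nesting under spec are assumptions to verify against the installed CRD; the field names vipAddress, vipSubnetName, elbVirsubnetL4Ips, and elbVirsubnetL7Ips follow this document, and all values are placeholders.

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1   # assumed; check the installed CRD
kind: HCSCluster
metadata:
  name: <cluster-name>
  namespace: cpaas-system
spec:
  identityRef:
    name: <hcs-credential-secret-name>   # must reference the HCS credential Secret
  network:
    subnets:
      - name: <subnet-name>              # list every subnet referenced anywhere else
  controlPlaneLoadBalancer:              # assumed field layout for the ELB settings
    vipAddress: <vip-address>
    vipSubnetName: <subnet-name>
    elbVirsubnetL4Ips:
      - subnetName: <subnet-name>
        ips:                             # each ips list must contain exactly two IPs
          - <l4-ip-1>
          - <l4-ip-2>
    elbVirsubnetL7Ips:
      - subnetName: <subnet-name>
        ips:
          - <l7-ip-1>
          - <l7-ip-2>
  # Do not set spec.controlPlaneEndpoint; the controller populates it
  # from the control plane load balancer settings after creation.
```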
Configure Cluster
The Cluster resource in Cluster API declares the cluster and references the control plane and infrastructure resources.
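A standard Cluster API Cluster manifest wiring the pieces together might look like the following. The controlPlaneRef and infrastructureRef names are assumed to match the resources created earlier, and the Pod and Service CIDRs must come from the approved release baseline.

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: <cluster-name>
  namespace: cpaas-system
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - <pod-cidr>        # from the approved release baseline
    services:
      cidrBlocks:
        - <service-cidr>    # from the approved release baseline
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: <cluster-name>-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1   # assumed; match your HCSCluster
    kind: HCSCluster
    name: <cluster-name>
```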
Cluster Verification
After deploying all cluster resources, verify that the cluster has been created successfully.
Using kubectl
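The following illustrative commands, run with kubectl against the platform's global cluster, inspect the resources created in this document. The hcscluster resource name assumes the CRD's plural form; adjust names and namespaces to your environment.

```
# Check the Cluster API cluster and its phase
kubectl get cluster -n cpaas-system <cluster-name>

# Check control plane machines; all should reach the Running phase
kubectl get machines -n cpaas-system

# Check the KubeadmControlPlane rollout status
kubectl get kubeadmcontrolplane -n cpaas-system

# Check the HCS infrastructure resource
kubectl get hcscluster -n cpaas-system <cluster-name>
```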
Expected Results
A successfully created cluster should show:
- Cluster status: Running or Provisioned
- All control plane machines: Running
- Kubernetes nodes: Ready
- Cluster Module Status: Completed
Adding Worker Nodes
For instructions on adding worker nodes to the cluster, refer to Managing Nodes.
Upgrading Clusters
For instructions on upgrading cluster components, refer to Upgrading Clusters.