// Module included in the following assemblies:
//
// * scalability_and_performance/ztp_far_edge/ztp-deploying-far-edge-sites.adoc

:_mod-docs-content-type: REFERENCE
[id="ztp-clusterinstance-config-reference_{context}"]
= ClusterInstance CR installation reference

The following tables describe the cluster-level and node-level `ClusterInstance` custom resource (CR) fields used for cluster installation.

== Cluster-level ClusterInstance CR fields

.ClusterInstance CR cluster-level fields
[cols="1,3", options="header"]
|====
|ClusterInstance CR field
|Description

|`spec.clusterName`
|The name of the cluster.

|`spec.baseDomain`
|The base domain to use for the deployed cluster.

|`spec.pullSecretRef.name`
|The name of the secret containing the pull secret to use when pulling images. The secret must exist in the same namespace as the `ClusterInstance` CR.

|`spec.clusterImageSetNameRef`
|The name of the `ClusterImageSet` resource indicating which {product-title} version to deploy.

|`spec.sshPublicKey`
|Optional. The public SSH key to authenticate SSH access to the cluster nodes.

|`spec.templateRefs`
a|A list of references to cluster-level templates. A cluster-level template consists of a `ConfigMap` in which the keys of the data field represent the kinds of the installation manifests. Cluster-level templates are instantiated once per cluster.

|`spec.extraLabels`
a|Optional. Additional cluster-wide labels to be applied to the rendered templates. This is a nested map structure where the outer key is the resource type (for example, `ManagedCluster`) and the inner map contains the label key-value pairs.

|`spec.extraAnnotations`
|Optional. Additional cluster-wide annotations to be applied to the rendered templates. Uses the same nested map structure as `extraLabels`.

|`spec.extraManifestsRefs`
a|Optional. A list of `ConfigMap` references containing additional manifests to be applied to the cluster at install time. Manifests must be bundled in `ConfigMap` resources.

|`spec.suppressedManifests`
|Optional. A list of manifest names to be excluded from the template rendering process by the SiteConfig Operator.

|`spec.pruneManifests`
a|Optional. A list of manifests to remove. Each entry requires `apiVersion` and `kind`.

|`spec.installConfigOverrides`
a|Optional. A JSON formatted string that provides a generic way of passing `install-config` parameters.

[IMPORTANT]
====
Use the reference configuration as specified in the example `ClusterInstance` CR. Adding additional components back into the system might require additional reserved CPU capacity.
====
// TODO: Is this note still relevant?

|`spec.cpuPartitioningMode`
|Optional. Configure workload partitioning by setting the value to `AllNodes`. The default is `None`. To complete the configuration, specify the `isolated` and `reserved` CPUs in the `PerformanceProfile` CR.

|`spec.networkType`
|Optional. The Container Network Interface (CNI) plug-in to install. Valid values are `OpenShiftSDN` or `OVNKubernetes`. The default is `OVNKubernetes`.

|`spec.clusterNetwork`
a|Optional. The list of IP address pools for pods.

|`spec.machineNetwork`
a|Optional. The list of IP address pools for machines.

|`spec.serviceNetwork`
a|Optional. The list of IP address pools for services.

|`spec.apiVIPs`
|Optional. The virtual IPs used to reach the OpenShift cluster API. Enter one IP address for single-stack clusters, or up to two for dual-stack clusters.

|`spec.ingressVIPs`
|Optional. The virtual IPs used for cluster ingress traffic. Enter one IP address for single-stack clusters, or up to two for dual-stack clusters.

|`spec.additionalNTPSources`
|Optional. A list of NTP sources (hostname or IP) to be added to all cluster hosts.

|`spec.diskEncryption`
a|Optional. Configure this field to enable disk encryption for cluster nodes.

|`spec.diskEncryption.type`
|Set the disk encryption type.

|`spec.diskEncryption.tang`
|Optional. Configure Tang server settings for disk encryption.

|`spec.proxy`
|Optional. The proxy settings to use for the install config.

|`spec.caBundleRef`
|Optional. A reference to a `ConfigMap` that contains the bundle of trusted certificates for the host. Only image-based installations use this field.
// TODO: Is this correct? I got it from a matrix in the slidedeck.

|`spec.platformType`
|Optional. The platform type for the installation. Valid values are `BareMetal`, `None`, `VSphere`, `Nutanix`, or `External`.

|`spec.cpuArchitecture`
|Optional. The software architecture used for nodes that do not have an architecture defined. Valid values are `x86_64`, `aarch64`, or `multi`. The default is `x86_64`.

|`spec.clusterType`
|Optional. The type of cluster. Valid values are `SNO`, `HighlyAvailable`, `HostedControlPlane`, or `HighlyAvailableArbiter`.

|`spec.holdInstallation`
|Optional. When set to `true`, holds the installation. Inspection and validation proceed as usual, but the installation does not begin until you set this field to `false`. The default is `false`.

|`spec.ignitionConfigOverride`
|Optional. A JSON formatted string containing the user overrides for the ignition config.

|`spec.reinstall`
|Optional. Configuration for reinstallation of the cluster. Includes `generation` and `preservationMode` fields.
|====
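
The following snippet is a minimal sketch of how the cluster-level fields map onto a `ClusterInstance` CR for a single-node cluster. The `apiVersion` value, the template `ConfigMap` reference, and all other values are placeholders or assumptions, not a validated reference configuration.

.Cluster-level ClusterInstance fields (illustrative sketch)
[source,yaml]
----
apiVersion: siteconfig.open-cluster-management.io/v1alpha1  # assumed API group and version
kind: ClusterInstance
metadata:
  name: example-sno
  namespace: example-sno
spec:
  clusterName: example-sno                  # spec.clusterName
  baseDomain: example.com                   # spec.baseDomain
  pullSecretRef:
    name: example-pull-secret               # Secret in the same namespace as the ClusterInstance CR
  clusterImageSetNameRef: example-imageset  # name of a ClusterImageSet resource
  sshPublicKey: "ssh-rsa AAAA... user@example.com"  # optional SSH access to cluster nodes
  clusterType: SNO                          # SNO, HighlyAvailable, HostedControlPlane, or HighlyAvailableArbiter
  networkType: OVNKubernetes
  machineNetwork:
    - cidr: 192.0.2.0/24                    # placeholder machine network
  templateRefs:                             # cluster-level installation templates
    - name: example-cluster-templates       # placeholder ConfigMap name
      namespace: example-templates-ns       # placeholder namespace
----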

== Node-level ClusterInstance CR fields

.ClusterInstance CR node-level fields
[cols="1,3", options="header"]
|====
|ClusterInstance CR field
|Description

|`spec.nodes`
|A list of node objects defining the hosts in the cluster.

|`spec.nodes[].hostName`
|The desired hostname for the host. For single-node deployments, define a single host. For three-node deployments, define three hosts. For standard deployments, define three hosts with `role: master` and two or more hosts with `role: worker`.
// TODO: Is the additional naming guidance still valid? It was in the SiteConfig resource reference for this field previously.

|`spec.nodes[].role`
|Optional. The role of the node. Valid values are `master`, `worker`, or `arbiter`. The default is `master`.

|`spec.nodes[].bmcAddress`
a|The BMC address used to access the host. {ztp} supports iPXE and virtual media booting by using Redfish or IPMI protocols. For more information about BMC addressing, see the "Additional resources" section.

[NOTE]
====
In far edge Telco use cases, only virtual media is supported for use with {ztp}.
====

|`spec.nodes[].bmcCredentialsName.name`
|The name of the `Secret` CR containing the BMC credentials. When creating the secret, use the same namespace as the `ClusterInstance` CR.

|`spec.nodes[].bootMACAddress`
|The MAC address used for PXE boot.

|`spec.nodes[].bootMode`
|Optional. The boot mode for the host. The default value is `UEFI`. Use `UEFISecureBoot` to enable secure boot on the host.

|`spec.nodes[].rootDeviceHints`
|Optional. Specifies the device for deployment. Use disk identifiers that are stable across reboots. For example, `wwn: <disk_wwn>` or `deviceName: /dev/disk/by-path/<device_path>`. For a detailed list of stable identifiers, see the "About root device hints" section.

|`spec.nodes[].automatedCleaningMode`
|Optional. When set to `disabled`, the provisioning service does not automatically clean the disk during provisioning and deprovisioning. Set the value to `metadata` to remove the disk's partitioning table only, without fully wiping the disk. The default value is `disabled`.

|`spec.nodes[].nodeNetwork`
|Optional. Configure the network settings for the node.

|`spec.nodes[].nodeNetwork.interfaces`
|Optional. Configure the network interfaces for the node.

|`spec.nodes[].nodeNetwork.config`
|Optional. Configure the NMState network configuration for the node, including interfaces, DNS, and routes.

|`spec.nodes[].nodeLabels`
|Optional. Specify custom roles for your nodes in your managed clusters. These are additional roles not used by any {product-title} components. When you add a custom role, it can be associated with a custom machine config pool that references a specific configuration for that role. Adding custom labels or roles during installation makes the deployment process more effective and prevents the need for additional reboots after the installation is complete.

|`spec.nodes[].ignitionConfigOverride`
|Optional. A JSON formatted string containing the overrides for the host's ignition config. Use this field to assign partitions for persistent storage. Adjust the disk ID and size to the specific hardware.

|`spec.nodes[].installerArgs`
|Optional. A JSON formatted string containing the user overrides for the host's CoreOS installer args.

|`spec.nodes[].templateRefs`
a|A list of references to node-level templates. A node-level template consists of a `ConfigMap` in which the keys of the data field represent the kinds of the installation manifests. Node-level templates are instantiated once for each node.

|`spec.nodes[].extraLabels`
|Optional. Additional node-level labels to be applied to the rendered templates. Uses the same nested map structure as the cluster-level `extraLabels`.

|`spec.nodes[].extraAnnotations`
|Optional. Additional node-level annotations to be applied to the rendered templates.

|`spec.nodes[].suppressedManifests`
|Optional. A list of node-level manifest names to be excluded from the template rendering process.

|`spec.nodes[].pruneManifests`
|Optional. A list of node-level manifests to remove. Each entry requires `apiVersion` and `kind`.

|`spec.nodes[].cpuArchitecture`
|Optional. The software architecture of the node. If you do not define a value, the value is inherited from `spec.cpuArchitecture`. Valid values are `x86_64` or `aarch64`.

|`spec.nodes[].ironicInspect`
|Optional. Disable automatic introspection during registration of the `BareMetalHost` (BMH) resource by setting this field to `disabled`. Automatic introspection by the provisioning service is enabled by default.

|`spec.nodes[].hostRef`
|Optional. A reference to an existing `BareMetalHost` resource located in another namespace. Includes `name` and `namespace` fields.
|====
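
To show how the node-level fields fit under `spec.nodes`, the following snippet is a minimal sketch for a single host, continuing the same assumed `ClusterInstance` CR as the cluster-level example. The BMC address, MAC address, secret name, root device hint, and template references are placeholders.

.Node-level ClusterInstance fields (illustrative sketch)
[source,yaml]
----
spec:
  nodes:
    - hostName: example-node1.example.com   # spec.nodes[].hostName
      role: master                          # master, worker, or arbiter
      bmcAddress: redfish-virtualmedia://192.0.2.10/redfish/v1/Systems/1  # placeholder BMC address
      bmcCredentialsName:
        name: example-node1-bmc-secret      # Secret in the same namespace as the ClusterInstance CR
      bootMACAddress: "AA:BB:CC:DD:EE:11"   # placeholder MAC address
      bootMode: UEFI
      automatedCleaningMode: disabled
      rootDeviceHints:
        deviceName: /dev/disk/by-path/<device_path>  # use a stable disk identifier
      nodeLabels:
        node-role.kubernetes.io/example-role: ""     # example custom role label
      templateRefs:                         # node-level installation templates
        - name: example-node-templates      # placeholder ConfigMap name
          namespace: example-templates-ns   # placeholder namespace
----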