Commit 335ebb9

TELCODOCS-2484: Replacing SiteConfig with ClusterInstance
1 parent aa32cb6 commit 335ebb9

15 files changed: +627 -294 lines

_topic_maps/_topic_map.yml

Lines changed: 1 addition & 1 deletion
@@ -3490,7 +3490,7 @@ Topics:
       File: ztp-preparing-the-hub-cluster
     - Name: Updating GitOps ZTP
       File: ztp-updating-gitops
-    - Name: Installing managed clusters with RHACM and SiteConfig resources
+    - Name: Installing managed clusters with RHACM and ClusterInstance resources
       File: ztp-deploying-far-edge-sites
     - Name: Manually installing a single-node OpenShift cluster with GitOps ZTP
       File: ztp-manual-install

edge_computing/ztp-deploying-far-edge-sites.adoc

Lines changed: 6 additions & 4 deletions
@@ -1,6 +1,6 @@
 :_mod-docs-content-type: ASSEMBLY
 [id="ztp-deploying-far-edge-sites"]
-= Installing managed clusters with {rh-rhacm} and SiteConfig resources
+= Installing managed clusters with {rh-rhacm} and ClusterInstance resources
 include::_attributes/common-attributes.adoc[]
 :context: ztp-deploying-far-edge-sites

@@ -35,7 +35,7 @@ include::modules/ztp-deploying-a-site.adoc[leveloffset=+1]
 [role="_additional-resources"]
 .Additional resources

-* xref:../edge_computing/ztp-deploying-far-edge-sites.adoc#ztp-sno-siteconfig-config-reference_ztp-deploying-far-edge-sites[{sno-caps} SiteConfig CR installation reference]
+* xref:../edge_computing/ztp-deploying-far-edge-sites.adoc#ztp-clusterinstance-config-reference_ztp-deploying-far-edge-sites[{sno-caps} ClusterInstance CR installation reference]

 include::modules/ztp-sno-accelerated-ztp.adoc[leveloffset=+2]

@@ -67,7 +67,7 @@ include::modules/ztp-configuring-ipsec-using-ztp-and-siteconfig-for-mno.adoc[leveloffset=+2]

 include::modules/ztp-verifying-ipsec.adoc[leveloffset=+2]

-include::modules/ztp-sno-siteconfig-config-reference.adoc[leveloffset=+2]
+include::modules/ztp-clusterinstance-config-reference.adoc[leveloffset=+2]

 [role="_additional-resources"]
 .Additional resources

@@ -112,7 +112,9 @@ include::modules/ztp-site-cleanup.adoc[leveloffset=+1]
 [role="_additional-resources"]
 .Additional resources

-* For information about removing a cluster, see link:https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.9/html/clusters/cluster_mce_overview#remove-managed-cluster[Removing a cluster from management].
+* link:https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.9/html/clusters/cluster_mce_overview#remove-managed-cluster[Removing a cluster from management]
+
+* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.15/html/multicluster_engine_operator_with_red_hat_advanced_cluster_management/ibio-intro#deprovision-clusters[Deprovisioning clusters]

 include::modules/ztp-removing-obsolete-content.adoc[leveloffset=+1]

modules/ztp-clusterinstance-config-reference.adoc

Lines changed: 203 additions & 0 deletions
@@ -0,0 +1,203 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/ztp_far_edge/ztp-deploying-far-edge-sites.adoc

:_mod-docs-content-type: REFERENCE
[id="ztp-clusterinstance-config-reference_{context}"]
= ClusterInstance CR installation reference

The following tables describe the cluster-level and node-level `ClusterInstance` custom resource (CR) fields used for cluster installation.

== Cluster-level ClusterInstance CR fields

.ClusterInstance CR cluster-level fields
[cols="1,3", options="header"]
|====
|ClusterInstance CR field
|Description

|`spec.clusterName`
|The name of the cluster.

|`spec.baseDomain`
|The base domain to use for the deployed cluster.

|`spec.pullSecretRef.name`
|The name of the secret containing the pull secret to use when pulling images. The secret must exist in the same namespace as the `ClusterInstance` CR.

|`spec.clusterImageSetNameRef`
|The name of the `ClusterImageSet` resource indicating which {product-title} version to deploy.

|`spec.sshPublicKey`
|Optional. The public SSH key to authenticate SSH access to the cluster nodes.

|`spec.templateRefs`
a|A list of references to cluster-level templates. A cluster-level template consists of a `ConfigMap` in which the keys of the data field represent the kind of the installation manifests. Cluster-level templates are instantiated once per cluster.

|`spec.extraLabels`
a|Optional. Additional cluster-wide labels to be applied to the rendered templates. This is a nested map structure where the outer key is the resource type (for example, `ManagedCluster`) and the inner map contains the label key-value pairs.

|`spec.extraAnnotations`
|Optional. Additional cluster-wide annotations to be applied to the rendered templates. Uses the same nested map structure as `extraLabels`.

|`spec.extraManifestsRefs`
a|Optional. A list of `ConfigMap` references containing additional manifests to be applied to the cluster at install time. Manifests must be bundled in `ConfigMap` resources.

|`spec.suppressedManifests`
|Optional. A list of manifest names to be excluded from the template rendering process by the SiteConfig Operator.

|`spec.pruneManifests`
a|Optional. A list of manifests to remove. Each entry requires `apiVersion` and `kind`.

|`spec.installConfigOverrides`
a|Optional. A JSON-formatted string that provides a generic way of passing `install-config` parameters.

[IMPORTANT]
====
Use the reference configuration as specified in the example `ClusterInstance` CR. Adding additional components back into the system might require additional reserved CPU capacity.
====
// TODO: Is this note still relevant?

|`spec.cpuPartitioningMode`
|Optional. Configure workload partitioning by setting the value to `AllNodes`. The default is `None`. To complete the configuration, specify the `isolated` and `reserved` CPUs in the `PerformanceProfile` CR.

|`spec.networkType`
|Optional. The Container Network Interface (CNI) plug-in to install. Valid values are `OpenShiftSDN` or `OVNKubernetes`. The default is `OVNKubernetes`.

|`spec.clusterNetwork`
a|Optional. The list of IP address pools for pods.

|`spec.machineNetwork`
a|Optional. The list of IP address pools for machines.

|`spec.serviceNetwork`
a|Optional. The list of IP address pools for services.

|`spec.apiVIPs`
|Optional. The virtual IPs used to reach the OpenShift cluster API. Enter one IP address for single-stack clusters, or up to two for dual-stack clusters.

|`spec.ingressVIPs`
|Optional. The virtual IPs used for cluster ingress traffic. Enter one IP address for single-stack clusters, or up to two for dual-stack clusters.

|`spec.additionalNTPSources`
|Optional. A list of NTP sources (hostname or IP address) to be added to all cluster hosts.

|`spec.diskEncryption`
a|Optional. Configure this field to enable disk encryption for cluster nodes.

|`spec.diskEncryption.type`
|Set the disk encryption type.

|`spec.diskEncryption.tang`
|Optional. Configure Tang server settings for disk encryption.

|`spec.proxy`
|Optional. Configure proxy settings that you want to use for the install config.

|`spec.caBundleRef`
|Optional. Reference to a `ConfigMap` containing the bundle of trusted certificates for the host. This field is referenced by image-based installations only.
// TODO: Is this correct? I got it from a matrix in the slidedeck.

|`spec.platformType`
|Optional. The name of the platform for the installation. Valid values are `BareMetal`, `None`, `VSphere`, `Nutanix`, or `External`.

|`spec.cpuArchitecture`
|Optional. The software architecture used for nodes that do not have an architecture defined. Valid values are `x86_64`, `aarch64`, or `multi`. The default is `x86_64`.

|`spec.clusterType`
|Optional. The type of cluster. Valid values are `SNO`, `HighlyAvailable`, `HostedControlPlane`, or `HighlyAvailableArbiter`.

|`spec.holdInstallation`
|Optional. When set to `true`, prevents installation from happening. Inspection and validation proceed as usual, but installation does not begin until this field is set to `false`. The default is `false`.

|`spec.ignitionConfigOverride`
|Optional. A JSON-formatted string containing the user overrides for the ignition config.

|`spec.reinstall`
|Optional. Configuration for reinstallation of the cluster. Includes `generation` and `preservationMode` fields.
|====
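
To illustrate how these cluster-level fields fit together, the following is a minimal sketch of a `ClusterInstance` CR. All names, versions, template references, and network ranges are hypothetical placeholder values, not a validated reference configuration:

[source,yaml]
----
apiVersion: siteconfig.open-cluster-management.io/v1alpha1
kind: ClusterInstance
metadata:
  name: "example-sno"
  namespace: "example-sno"
spec:
  clusterName: "example-sno"
  baseDomain: "example.com"
  pullSecretRef:
    name: "pull-secret" # must exist in the example-sno namespace
  clusterImageSetNameRef: "openshift-4.16"
  sshPublicKey: "ssh-rsa AAAA..."
  clusterType: "SNO"
  networkType: "OVNKubernetes"
  cpuPartitioningMode: "AllNodes"
  extraLabels:
    ManagedCluster: # outer key is the resource type
      site: "example-sno"
  machineNetwork:
  - cidr: "192.0.2.0/24"
  additionalNTPSources:
  - "ntp.example.com"
  templateRefs: # hypothetical cluster-level template ConfigMap
  - name: "ai-cluster-templates-v1"
    namespace: "open-cluster-management"
----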

== Node-level ClusterInstance CR fields

.ClusterInstance CR node-level fields
[cols="1,3", options="header"]
|====
|ClusterInstance CR field
|Description

|`spec.nodes`
|A list of node objects defining the hosts in the cluster.

|`spec.nodes[].hostName`
|The desired hostname for the host. For single-node deployments, define a single host. For three-node deployments, define three hosts. For standard deployments, define three hosts with `role: master` and two or more hosts with `role: worker`.
// TODO: Is the additional naming guidance still valid? It was in the SiteConfig resource reference for this field previously.

|`spec.nodes[].role`
|Optional. The role of the node. Valid values are `master`, `worker`, or `arbiter`. The default is `master`.

|`spec.nodes[].bmcAddress`
a|The BMC address used to access the host. {ztp} supports iPXE and virtual media booting by using Redfish or IPMI protocols. For more information about BMC addressing, see the "Additional resources" section.

[NOTE]
====
In far edge Telco use cases, only virtual media is supported for use with {ztp}.
====

|`spec.nodes[].bmcCredentialsName.name`
|The name of the `Secret` CR containing the BMC credentials. When creating the secret, use the same namespace as the `ClusterInstance` CR.

|`spec.nodes[].bootMACAddress`
|The MAC address used for PXE boot.

|`spec.nodes[].bootMode`
|Optional. The boot mode for the host. The default value is `UEFI`. Use `UEFISecureBoot` to enable secure boot on the host.

|`spec.nodes[].rootDeviceHints`
|Optional. Specifies the device for deployment. Use disk identifiers that are stable across reboots. For example, `wwn: <disk_wwn>` or `deviceName: /dev/disk/by-path/<device_path>`. For a detailed list of stable identifiers, see the "About root device hints" section.

|`spec.nodes[].automatedCleaningMode`
|Optional. When set to `disabled`, the provisioning service does not automatically clean the disk during provisioning and deprovisioning. Set the value to `metadata` to remove only the disk's partitioning table, without fully wiping the disk. The default value is `disabled`.

|`spec.nodes[].nodeNetwork`
|Optional. Configure the network settings for the node.

|`spec.nodes[].nodeNetwork.interfaces`
|Optional. Configure the network interfaces for the node.

|`spec.nodes[].nodeNetwork.config`
|Optional. Configure the NMState network configuration for the node, including interfaces, DNS, and routes.

|`spec.nodes[].nodeLabels`
|Optional. Specify custom roles for your nodes in your managed clusters. These are additional roles that are not used by any {product-title} components. When you add a custom role, it can be associated with a custom machine config pool that references a specific configuration for that role. Adding custom labels or roles during installation makes the deployment process more effective and prevents the need for additional reboots after the installation is complete.

|`spec.nodes[].ignitionConfigOverride`
|Optional. A JSON-formatted string containing the overrides for the host's ignition config. Use this field to assign partitions for persistent storage. Adjust the disk ID and size to the specific hardware.

|`spec.nodes[].installerArgs`
|Optional. A JSON-formatted string containing the user overrides for the host's CoreOS installer arguments.

|`spec.nodes[].templateRefs`
a|A list of references to node-level templates. A node-level template consists of a `ConfigMap` in which the keys of the data field represent the kind of the installation manifests. Node-level templates are instantiated once for each node.

|`spec.nodes[].extraLabels`
|Optional. Additional node-level labels to be applied to the rendered templates. Uses the same nested map structure as the cluster-level `extraLabels`.

|`spec.nodes[].extraAnnotations`
|Optional. Additional node-level annotations to be applied to the rendered templates.

|`spec.nodes[].suppressedManifests`
|Optional. A list of node-level manifest names to be excluded from the template rendering process.

|`spec.nodes[].pruneManifests`
|Optional. A list of node-level manifests to remove. Each entry requires `apiVersion` and `kind`.

|`spec.nodes[].cpuArchitecture`
|Optional. The software architecture of the node. If you do not define a value, the value is inherited from `spec.cpuArchitecture`. Valid values are `x86_64` or `aarch64`.

|`spec.nodes[].ironicInspect`
|Optional. Disable automatic introspection during registration of the BMH by specifying `disabled` for this field. Automatic introspection by the provisioning service is enabled by default.

|`spec.nodes[].hostRef`
|Optional. Reference to an existing `BareMetalHost` resource located in another namespace. Includes `name` and `namespace` fields.
|====
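
A matching sketch of the node-level fields, again with hypothetical BMC address, MAC address, credential, and template names:

[source,yaml]
----
spec:
  nodes:
  - hostName: "example-node1.example.com"
    role: "master"
    bmcAddress: "redfish-virtualmedia+https://203.0.113.10/redfish/v1/Systems/1"
    bmcCredentialsName:
      name: "example-node1-bmc-secret" # Secret in the ClusterInstance namespace
    bootMACAddress: "AA:BB:CC:DD:EE:11"
    bootMode: "UEFI"
    automatedCleaningMode: "disabled"
    rootDeviceHints:
      deviceName: "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0"
    nodeNetwork:
      interfaces:
      - name: "eno1"
        macAddress: "AA:BB:CC:DD:EE:11"
      config:
        interfaces:
        - name: eno1
          type: ethernet
          state: up
    templateRefs: # hypothetical node-level template ConfigMap
    - name: "ai-node-templates-v1"
      namespace: "open-cluster-management"
----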

modules/ztp-configuring-host-firmware-with-gitops-ztp.adoc

Lines changed: 3 additions & 2 deletions
@@ -13,8 +13,9 @@ Tune hosts with specific hardware profiles in your lab and ensure they are optimized
 When you have completed host tuning to your satisfaction, you extract the host profile and save it in your {ztp} repository.
 Then, you use the host profile to configure firmware settings in the managed cluster hosts that you deploy with {ztp}.

-You specify the required hardware profiles in `SiteConfig` custom resources (CRs) that you use to deploy the managed clusters.
-The {ztp} pipeline generates the required `HostFirmwareSettings` (`HFS`) and `BareMetalHost` (`BMH`) CRs that are applied to the hub cluster.
+You specify the required hardware profiles by creating custom node templates that include `HostFirmwareSettings` CRs, and reference them in the `spec.nodes[].templateRefs` field of your `ClusterInstance` CR.
+The {ztp} pipeline generates the required `HostFirmwareSettings` and `BareMetalHost` CRs that are applied to the hub cluster.
+//TODO: Is this true for ClusterInstance workflow too?

 Use the following best practices to manage your host firmware profiles.
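
The template-reference pattern described in this change can be sketched as follows, assuming a hypothetical `ConfigMap` named `firmware-node-templates` that bundles a `HostFirmwareSettings` manifest as a node-level template; the default template name and namespace shown are also assumptions:

[source,yaml]
----
apiVersion: siteconfig.open-cluster-management.io/v1alpha1
kind: ClusterInstance
metadata:
  name: "example-site"
  namespace: "example-site"
spec:
  # ...
  nodes:
  - hostName: "example-node1.example.com"
    templateRefs:
    # default node templates plus the custom firmware template
    - name: "ai-node-templates-v1"
      namespace: "open-cluster-management"
    - name: "firmware-node-templates"
      namespace: "open-cluster-management"
----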

modules/ztp-configuring-ipsec-using-ztp-and-siteconfig-for-mno.adoc

Lines changed: 53 additions & 19 deletions
@@ -4,7 +4,7 @@

 :_mod-docs-content-type: PROCEDURE
 [id="ztp-configuring-ipsec-using-ztp-and-siteconfig-for-mno_{context}"]
-= Configuring IPsec encryption for multi-node clusters using {ztp} and SiteConfig resources
+= Configuring IPsec encryption for multi-node clusters using {ztp} and ClusterInstance resources

 You can enable IPsec encryption in managed multi-node clusters that you install using {ztp} and {rh-rhacm-first}.
 You can encrypt traffic between the managed cluster and IPsec endpoints external to the managed cluster. All network traffic between nodes on the OVN-Kubernetes cluster network is encrypted with IPsec in Transport mode.

@@ -15,6 +15,8 @@ You can encrypt traffic between the managed cluster and IPsec endpoints external

 * You have logged in to the hub cluster as a user with `cluster-admin` privileges.

+* You have installed the SiteConfig Operator in the hub cluster.
+
 * You have configured {rh-rhacm} and the hub cluster for generating the required installation and policy custom resources (CRs) for managed clusters.

 * You have created a Git repository where you manage your custom site configuration data.

@@ -119,40 +121,72 @@ out
 <1> The `ipsec/import-certs.sh` script generates the Butane and endpoint configuration CRs.
 <2> Add the `ca.pem` and `left_server.p12` certificate files that are relevant to your network.

-. Create a `custom-manifest/` folder in the repository where you manage your custom site configuration data and add the `enable-ipsec.yaml` and `99-ipsec-*` YAML files to the directory.
+. Create an `ipsec-manifests/` folder in the repository where you manage your custom site configuration data and add the `enable-ipsec.yaml` and `99-ipsec-*` YAML files to the directory.
 +
-.Example `siteconfig` directory
+.Example site configuration directory
 [source,terminal]
 ----
-siteconfig
-├── site1-mno-du.yaml
-├── extra-manifest/
-└── custom-manifest
-    ├── enable-ipsec.yaml
-    ├── 99-ipsec-master-import-certs.yaml
-    └── 99-ipsec-worker-import-certs.yaml
+site-configs/
+├── hub-1/
+│   └── clusterinstance-site1-mno-du.yaml
+├── ipsec-manifests/
+│   ├── enable-ipsec.yaml
+│   ├── 99-ipsec-master-import-certs.yaml
+│   └── 99-ipsec-worker-import-certs.yaml
+└── kustomization.yaml
 ----

-. In your `SiteConfig` CR, add the `custom-manifest/` directory to the `extraManifests.searchPaths` field, as in the following example:
+. Create a `kustomization.yaml` file that uses `configMapGenerator` to package your IPsec manifests into a `ConfigMap`:
 +
 [source,yaml]
 ----
-clusters:
-- clusterName: "site1-mno-du"
+apiVersion: kustomize.config.k8s.io/v1beta1
+kind: Kustomization
+resources:
+- hub-1/clusterinstance-site1-mno-du.yaml
+configMapGenerator:
+- name: ipsec-manifests-cm
+  namespace: site1-mno-du <1>
+  files:
+  - ipsec-manifests/enable-ipsec.yaml
+  - ipsec-manifests/99-ipsec-master-import-certs.yaml
+  - ipsec-manifests/99-ipsec-worker-import-certs.yaml
+generatorOptions:
+  disableNameSuffixHash: true <2>
+----
+<1> The namespace must match the `ClusterInstance` namespace.
+<2> Disables the hash suffix so the `ConfigMap` name is predictable.
+
+. In your `ClusterInstance` CR, reference the `ConfigMap` in the `extraManifestsRefs` field:
++
+[source,yaml]
+----
+apiVersion: siteconfig.open-cluster-management.io/v1alpha1
+kind: ClusterInstance
+metadata:
+  name: "site1-mno-du"
+  namespace: "site1-mno-du"
+spec:
+  clusterName: "site1-mno-du"
   networkType: "OVNKubernetes"
-  extraManifests:
-    searchPaths:
-    - extra-manifest/
-    - custom-manifest/
+  extraManifestsRefs:
+  - name: ipsec-manifests-cm <1>
+# ...
 ----
+<1> Reference to the `ConfigMap` containing the IPsec certificate import manifests.
++
+[NOTE]
+====
+If you have other extra manifests, you can either include them in the same `ConfigMap` or create multiple `ConfigMap` resources and reference them all in `extraManifestsRefs`.
+====

 . Include the `ipsec-config-policy.yaml` config policy file in the `source-crs` directory in GitOps and reference the file in one of the `PolicyGenerator` CRs.

-. Commit the `SiteConfig` CR changes and updated files in your Git repository and push the changes to provision the managed cluster and configure IPsec encryption.
+. Commit the `ClusterInstance` CR, IPsec manifest files, and `kustomization.yaml` changes in your Git repository and push the changes to provision the managed cluster and configure IPsec encryption.
 +
 The Argo CD pipeline detects the changes and begins the managed cluster deployment.
 +
-During cluster provisioning, the {ztp} pipeline appends the CRs in the `custom-manifest/` directory to the default set of extra manifests stored in the `extra-manifest/` directory.
+During cluster provisioning, the SiteConfig Operator applies the CRs contained in the referenced `ConfigMap` resources as extra manifests. The IPsec configuration policy is applied as a Day 2 operation after the cluster is provisioned.

 .Verification
