2 changes: 1 addition & 1 deletion _topic_maps/_topic_map.yml
@@ -3490,7 +3490,7 @@ Topics:
File: ztp-preparing-the-hub-cluster
- Name: Updating GitOps ZTP
File: ztp-updating-gitops
- Name: Installing managed clusters with RHACM and SiteConfig resources
- Name: Installing managed clusters with RHACM and ClusterInstance resources
File: ztp-deploying-far-edge-sites
- Name: Manually installing a single-node OpenShift cluster with GitOps ZTP
File: ztp-manual-install
10 changes: 6 additions & 4 deletions edge_computing/ztp-deploying-far-edge-sites.adoc
@@ -1,6 +1,6 @@
:_mod-docs-content-type: ASSEMBLY
🤖 [error] AsciiDocDITA.ShortDescription: Assign [role="_abstract"] to a paragraph to use it as in DITA.

[id="ztp-deploying-far-edge-sites"]
= Installing managed clusters with {rh-rhacm} and SiteConfig resources
= Installing managed clusters with {rh-rhacm} and ClusterInstance resources
include::_attributes/common-attributes.adoc[]
:context: ztp-deploying-far-edge-sites

@@ -35,7 +35,7 @@ include::modules/ztp-deploying-a-site.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources

* xref:../edge_computing/ztp-deploying-far-edge-sites.adoc#ztp-sno-siteconfig-config-reference_ztp-deploying-far-edge-sites[{sno-caps} SiteConfig CR installation reference]
* xref:../edge_computing/ztp-deploying-far-edge-sites.adoc#ztp-clusterinstance-config-reference_ztp-deploying-far-edge-sites[{sno-caps} ClusterInstance CR installation reference]
include::modules/ztp-sno-accelerated-ztp.adoc[leveloffset=+2]

@@ -67,7 +67,7 @@ include::modules/ztp-configuring-ipsec-using-ztp-and-siteconfig-for-mno.adoc[lev
include::modules/ztp-verifying-ipsec.adoc[leveloffset=+2]

include::modules/ztp-sno-siteconfig-config-reference.adoc[leveloffset=+2]
include::modules/ztp-clusterinstance-config-reference.adoc[leveloffset=+2]

[role="_additional-resources"]
.Additional resources
@@ -112,7 +112,9 @@ include::modules/ztp-site-cleanup.adoc[leveloffset=+1]
[role="_additional-resources"]
.Additional resources

* For information about removing a cluster, see link:https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.9/html/clusters/cluster_mce_overview#remove-managed-cluster[Removing a cluster from management].
* link:https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.9/html/clusters/cluster_mce_overview#remove-managed-cluster[Removing a cluster from management].
🤖 [error] AsciiDocDITA.RelatedLinks: Content other than links cannot be mapped to DITA related-links.

* link:https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.15/html/multicluster_engine_operator_with_red_hat_advanced_cluster_management/ibio-intro#deprovision-clusters[Deprovisioning clusters]
include::modules/ztp-removing-obsolete-content.adoc[leveloffset=+1]

2 changes: 1 addition & 1 deletion edge_computing/ztp-manual-install.adoc
@@ -31,7 +31,7 @@ include::modules/ztp-generating-install-and-config-crs-manually.adoc[leveloffset
* xref:../installing/installing_with_agent_based_installer/preparing-to-install-with-agent-based-installer.adoc#root-device-hints_preparing-to-install-with-agent-based-installer[About root device hints]
* xref:../edge_computing/ztp-deploying-far-edge-sites.adoc#ztp-sno-siteconfig-config-reference_ztp-deploying-far-edge-sites[{sno-caps} SiteConfig CR installation reference]
* xref:../edge_computing/ztp-deploying-far-edge-sites.adoc#ztp-clusterinstance-config-reference_ztp-deploying-far-edge-sites[{sno-caps} ClusterInstance CR installation reference]
include::modules/ztp-creating-the-site-secrets.adoc[leveloffset=+1]

203 changes: 203 additions & 0 deletions modules/ztp-clusterinstance-config-reference.adoc
@@ -0,0 +1,203 @@
// Module included in the following assemblies:
🤖 [error] AsciiDocDITA.ShortDescription: Assign [role="_abstract"] to a paragraph to use it as in DITA.

//
// * scalability_and_performance/ztp_far_edge/ztp-deploying-far-edge-sites.adoc

:_mod-docs-content-type: REFERENCE
[id="ztp-clusterinstance-config-reference_{context}"]
= ClusterInstance CR installation reference

The following tables describe the cluster-level and node-level `ClusterInstance` custom resource (CR) fields used for cluster installation.

== Cluster-level ClusterInstance CR fields

.ClusterInstance CR cluster-level fields
[cols="1,3", options="header"]
|====
|ClusterInstance CR field
|Description

|`spec.clusterName`
|The name of the cluster.

|`spec.baseDomain`
|The base domain to use for the deployed cluster.

|`spec.pullSecretRef.name`
|The name of the secret containing the pull secret to use when pulling images. The secret must exist in the same namespace as the `ClusterInstance` CR.

|`spec.clusterImageSetNameRef`
|The name of the `ClusterImageSet` resource indicating which {product-title} version to deploy.

|`spec.sshPublicKey`
|Optional. The public SSH key to authenticate SSH access to the cluster nodes.

|`spec.templateRefs`
a|A list of references to cluster-level templates. A cluster-level template consists of a `ConfigMap` in which the keys of the data field represent the kinds of the installation manifests. Cluster-level templates are instantiated once per cluster.

|`spec.extraLabels`
a|Optional. Additional cluster-wide labels to be applied to the rendered templates. This is a nested map structure where the outer key is the resource type (for example, `ManagedCluster`) and the inner map contains the label key-value pairs.

|`spec.extraAnnotations`
|Optional. Additional cluster-wide annotations to be applied to the rendered templates. Uses the same nested map structure as `extraLabels`.

|`spec.extraManifestsRefs`
a|Optional. A list of `ConfigMap` references containing additional manifests to be applied to the cluster at install time. Manifests must be bundled in `ConfigMap` resources.

|`spec.suppressedManifests`
|Optional. A list of manifest names to be excluded from the template rendering process by the SiteConfig Operator.

|`spec.pruneManifests`
a|Optional. A list of manifests to remove. Each entry requires `apiVersion` and `kind`.

|`spec.installConfigOverrides`
a|Optional. A JSON formatted string that provides a generic way of passing `install-config` parameters.

[IMPORTANT]
====
Use the reference configuration as specified in the example `ClusterInstance` CR. Adding components back into the system might require additional reserved CPU capacity.
====
// TODO: Is this note still relevant?

|`spec.cpuPartitioningMode`
|Optional. Configure workload partitioning by setting the value to `AllNodes`. The default is `None`. To complete the configuration, specify the `isolated` and `reserved` CPUs in the `PerformanceProfile` CR.

|`spec.networkType`
|Optional. The Container Network Interface (CNI) plug-in to install. Valid values are `OpenShiftSDN` or `OVNKubernetes`. The default is `OVNKubernetes`.

|`spec.clusterNetwork`
a|Optional. The list of IP address pools for pods.

|`spec.machineNetwork`
a|Optional. The list of IP address pools for machines.

|`spec.serviceNetwork`
a|Optional. The list of IP address pools for services.

|`spec.apiVIPs`
|Optional. The virtual IPs used to reach the OpenShift cluster API. Enter one IP address for single-stack clusters, or up to two for dual-stack clusters.

|`spec.ingressVIPs`
|Optional. The virtual IPs used for cluster ingress traffic. Enter one IP address for single-stack clusters, or up to two for dual-stack clusters.

|`spec.additionalNTPSources`
|Optional. A list of NTP sources (hostname or IP address) to be added to all cluster hosts.

|`spec.diskEncryption`
a|Optional. Configure this field to enable disk encryption for cluster nodes.

|`spec.diskEncryption.type`
|Set the disk encryption type.

|`spec.diskEncryption.tang`
|Optional. Configure Tang server settings for disk encryption.

|`spec.proxy`
|Optional. Configure proxy settings that you want to use for the install config.

|`spec.caBundleRef`
|Optional. Reference to a `ConfigMap` containing the bundle of trusted certificates for the host. This field is referenced by image-based installations only.
// TODO: Is this correct? I got it from a matrix in the slidedeck.

|`spec.platformType`
|Optional. The name for the platform for the installation. Valid values are `BareMetal`, `None`, `VSphere`, `Nutanix`, or `External`.

|`spec.cpuArchitecture`
|Optional. The software architecture used for nodes that do not have an architecture defined. Valid values are `x86_64`, `aarch64`, or `multi`. The default is `x86_64`.

|`spec.clusterType`
|Optional. The type of cluster. Valid values are `SNO`, `HighlyAvailable`, `HostedControlPlane`, or `HighlyAvailableArbiter`.

|`spec.holdInstallation`
|Optional. When set to `true`, prevents installation from happening. Inspection and validation proceed as usual, but installation does not begin until this field is set to `false`. The default is `false`.

|`spec.ignitionConfigOverride`
|Optional. A JSON formatted string containing the user overrides for the ignition config.

|`spec.reinstall`
|Optional. Configuration for reinstallation of the cluster. Includes `generation` and `preservationMode` fields.
|====
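
The following minimal sketch combines several of these cluster-level fields in a single `ClusterInstance` CR. The cluster name, base domain, secret name, image set, network ranges, and template references are placeholder values; substitute the values for your environment.

[source,yaml]
----
apiVersion: siteconfig.open-cluster-management.io/v1alpha1
kind: ClusterInstance
metadata:
  name: "example-sno"
  namespace: "example-sno"
spec:
  clusterName: "example-sno"
  baseDomain: "example.com"
  pullSecretRef:
    name: "assisted-deployment-pull-secret"
  clusterImageSetNameRef: "openshift-4.17"
  sshPublicKey: "ssh-rsa AAAA<public_key>"
  clusterType: "SNO"
  networkType: "OVNKubernetes"
  clusterNetwork:
    - cidr: "10.128.0.0/14"
      hostPrefix: 23
  machineNetwork:
    - cidr: "192.168.1.0/24"
  serviceNetwork:
    - cidr: "172.30.0.0/16"
  templateRefs:
    - name: "ai-cluster-templates-v1"
      namespace: "open-cluster-management"
----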

== Node-level ClusterInstance CR fields

.ClusterInstance CR node-level fields
[cols="1,3", options="header"]
|====
|ClusterInstance CR field
|Description

|`spec.nodes`
|A list of node objects defining the hosts in the cluster.

|`spec.nodes[].hostName`
|The desired hostname for the host. For single-node deployments, define a single host. For three-node deployments, define three hosts. For standard deployments, define three hosts with `role: master` and two or more hosts with `role: worker`.
// TODO: Is the additional naming guidance still valid? It was in the SiteConfig resource reference for this field previously.

|`spec.nodes[].role`
|Optional. The role of the node. Valid values are `master`, `worker`, or `arbiter`. The default is `master`.

|`spec.nodes[].bmcAddress`
a|The BMC address used to access the host. {ztp} supports iPXE and virtual media booting by using Redfish or IPMI protocols. For more information about BMC addressing, see the "Additional resources" section.

[NOTE]
====
In far edge Telco use cases, only virtual media is supported for use with {ztp}.
====

|`spec.nodes[].bmcCredentialsName.name`
|The name of the `Secret` CR containing the BMC credentials. When creating the secret, use the same namespace as the `ClusterInstance` CR.

|`spec.nodes[].bootMACAddress`
|The MAC address used for PXE boot.

|`spec.nodes[].bootMode`
|Optional. The boot mode for the host. The default value is `UEFI`. Use `UEFISecureBoot` to enable secure boot on the host.

|`spec.nodes[].rootDeviceHints`
|Optional. Specifies the device for deployment. Use disk identifiers that are stable across reboots. For example, `wwn: <disk_wwn>` or `deviceName: /dev/disk/by-path/<device_path>`. For a detailed list of stable identifiers, see the "About root device hints" section.

|`spec.nodes[].automatedCleaningMode`
|Optional. When set to `disabled`, the provisioning service does not automatically clean the disk during provisioning and deprovisioning. Set the value to `metadata` to remove the disk's partitioning table only, without fully wiping the disk. The default value is `disabled`.

|`spec.nodes[].nodeNetwork`
|Optional. Configure the network settings for the node.

|`spec.nodes[].nodeNetwork.interfaces`
|Optional. Configure the network interfaces for the node.

|`spec.nodes[].nodeNetwork.config`
|Optional. Configure the NMState network configuration for the node, including interfaces, DNS, and routes.

|`spec.nodes[].nodeLabels`
|Optional. Specify custom roles for your nodes in your managed clusters. These are additional roles not used by any {product-title} components. When you add a custom role, it can be associated with a custom machine config pool that references a specific configuration for that role. Adding custom labels or roles during installation makes the deployment process more effective and prevents the need for additional reboots after the installation is complete.

|`spec.nodes[].ignitionConfigOverride`
|Optional. A JSON formatted string containing the overrides for the host's ignition config. Use this field to assign partitions for persistent storage. Adjust the disk ID and size to the specific hardware.

|`spec.nodes[].installerArgs`
|Optional. A JSON formatted string containing the user overrides for the host's CoreOS installer args.

|`spec.nodes[].templateRefs`
a|A list of references to node-level templates. A node-level template consists of a `ConfigMap` in which the keys of the data field represent the kind of the installation manifests. Node-level templates are instantiated once for each node.

|`spec.nodes[].extraLabels`
|Optional. Additional node-level labels to be applied to the rendered templates. Uses the same nested map structure as the cluster-level `extraLabels`.

|`spec.nodes[].extraAnnotations`
|Optional. Additional node-level annotations to be applied to the rendered templates.

|`spec.nodes[].suppressedManifests`
|Optional. A list of node-level manifest names to be excluded from the template rendering process.

|`spec.nodes[].pruneManifests`
|Optional. A list of node-level manifests to remove. Each entry requires `apiVersion` and `kind`.

|`spec.nodes[].cpuArchitecture`
|Optional. The software architecture of the node. If you do not define a value, the value is inherited from `spec.cpuArchitecture`. Valid values are `x86_64` or `aarch64`.

|`spec.nodes[].ironicInspect`
|Optional. Disable automatic introspection during registration of the `BareMetalHost` (BMH) resource by specifying `disabled` for this field. Automatic introspection by the provisioning service is enabled by default.

|`spec.nodes[].hostRef`
|Optional. Reference to an existing `BareMetalHost` resource located in another namespace. Includes `name` and `namespace` fields.
|====
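
The following sketch shows how a single entry under `spec.nodes` might use these fields. The host name, BMC address, credentials secret, MAC address, disk path, label, and template reference are placeholder values.

[source,yaml]
----
spec:
  nodes:
    - hostName: "example-node1.example.com"
      role: "master"
      bootMode: "UEFI"
      bmcAddress: "idrac-virtualmedia+https://192.168.1.100/redfish/v1/Systems/System.Embedded.1"
      bmcCredentialsName:
        name: "example-node1-bmc-secret"
      bootMACAddress: "AA:BB:CC:DD:EE:11"
      automatedCleaningMode: "disabled"
      rootDeviceHints:
        deviceName: "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0"
      nodeLabels:
        node-role.kubernetes.io/example-role: ""
      templateRefs:
        - name: "ai-node-templates-v1"
          namespace: "open-cluster-management"
----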

5 changes: 3 additions & 2 deletions modules/ztp-configuring-host-firmware-with-gitops-ztp.adoc
@@ -13,8 +13,9 @@ Tune hosts with specific hardware profiles in your lab and ensure they are optim
When you have completed host tuning to your satisfaction, you extract the host profile and save it in your {ztp} repository.
Then, you use the host profile to configure firmware settings in the managed cluster hosts that you deploy with {ztp}.

You specify the required hardware profiles in `SiteConfig` custom resources (CRs) that you use to deploy the managed clusters.
The {ztp} pipeline generates the required `HostFirmwareSettings` (`HFS`) and `BareMetalHost` (`BMH`) CRs that are applied to the hub cluster.
You specify the required hardware profiles by creating custom node templates that include `HostFirmwareSettings` CRs, and reference them in the `spec.nodes[].templateRefs` field of your `ClusterInstance` CR.
The {ztp} pipeline generates the required `HostFirmwareSettings` and `BareMetalHost` CRs that are applied to the hub cluster.
//TODO: Is this true for ClusterInstance workflow too?
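
For example, a custom node-level template that bundles a `HostFirmwareSettings` CR might look like the following sketch. The `ConfigMap` name, namespace, template variables, and BIOS attribute are illustrative assumptions; the exact attribute names depend on your hardware vendor.

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-node-templates-with-hfs
  namespace: example-cluster
data:
  HostFirmwareSettings: |-
    apiVersion: metal3.io/v1alpha1
    kind: HostFirmwareSettings
    metadata:
      name: "{{ .SpecialVars.CurrentNode.HostName }}"
      namespace: "{{ .Spec.ClusterName }}"
    spec:
      settings:
        ProcTurboMode: "Disabled"
----

You then reference the `ConfigMap` from the node entry in the `ClusterInstance` CR, for example:

[source,yaml]
----
spec:
  nodes:
    - hostName: "example-node1.example.com"
      templateRefs:
        - name: custom-node-templates-with-hfs
          namespace: example-cluster
----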

Use the following best practices to manage your host firmware profiles.

72 changes: 53 additions & 19 deletions modules/ztp-configuring-ipsec-using-ztp-and-siteconfig-for-mno.adoc
@@ -4,7 +4,7 @@

:_mod-docs-content-type: PROCEDURE
[id="ztp-configuring-ipsec-using-ztp-and-siteconfig-for-mno_{context}"]
= Configuring IPsec encryption for multi-node clusters using {ztp} and SiteConfig resources
= Configuring IPsec encryption for multi-node clusters using {ztp} and ClusterInstance resources

You can enable IPsec encryption in managed multi-node clusters that you install using {ztp} and {rh-rhacm-first}.
You can encrypt traffic between the managed cluster and IPsec endpoints external to the managed cluster. All network traffic between nodes on the OVN-Kubernetes cluster network is encrypted with IPsec in Transport mode.
@@ -15,6 +15,8 @@ You can encrypt traffic between the managed cluster and IPsec endpoints external
* You have logged in to the hub cluster as a user with `cluster-admin` privileges.
* You have installed the SiteConfig Operator in the hub cluster.
* You have configured {rh-rhacm} and the hub cluster for generating the required installation and policy custom resources (CRs) for managed clusters.
* You have created a Git repository where you manage your custom site configuration data.
@@ -119,40 +121,72 @@ out
<1> The `ipsec/import-certs.sh` script generates the Butane and endpoint configuration CRs.
<2> Add the `ca.pem` and `left_server.p12` certificate files that are relevant to your network.

. Create a `custom-manifest/` folder in the repository where you manage your custom site configuration data and add the `enable-ipsec.yaml` and `99-ipsec-*` YAML files to the directory.
. Create an `ipsec-manifests/` folder in the repository where you manage your custom site configuration data and add the `enable-ipsec.yaml` and `99-ipsec-*` YAML files to the directory.
+
.Example `siteconfig` directory
.Example site configuration directory
[source,terminal]
----
siteconfig
├── site1-mno-du.yaml
├── extra-manifest/
└── custom-manifest
├── enable-ipsec.yaml
├── 99-ipsec-master-import-certs.yaml
└── 99-ipsec-worker-import-certs.yaml
site-configs/
├── hub-1/
│ └── clusterinstance-site1-mno-du.yaml
├── ipsec-manifests/
│ ├── enable-ipsec.yaml
│ ├── 99-ipsec-master-import-certs.yaml
│ └── 99-ipsec-worker-import-certs.yaml
└── kustomization.yaml
----

. In your `SiteConfig` CR, add the `custom-manifest/` directory to the `extraManifests.searchPaths` field, as in the following example:
. Create a `kustomization.yaml` file that uses `configMapGenerator` to package your IPsec manifests into a `ConfigMap`:
+
[source,yaml]
----
clusters:
- clusterName: "site1-mno-du"
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- hub-1/clusterinstance-site1-mno-du.yaml
configMapGenerator:
- name: ipsec-manifests-cm
namespace: site1-mno-du <1>
files:
- ipsec-manifests/enable-ipsec.yaml
- ipsec-manifests/99-ipsec-master-import-certs.yaml
- ipsec-manifests/99-ipsec-worker-import-certs.yaml
generatorOptions:
disableNameSuffixHash: true <2>
----
<1> The namespace must match the `ClusterInstance` namespace.
🤖 [error] AsciiDocDITA.CalloutList: Callouts are not supported in DITA.

<2> Disables the hash suffix so the `ConfigMap` name is predictable.

. In your `ClusterInstance` CR, reference the `ConfigMap` in the `extraManifestsRefs` field:
+
[source,yaml]
----
apiVersion: siteconfig.open-cluster-management.io/v1alpha1
kind: ClusterInstance
metadata:
name: "site1-mno-du"
namespace: "site1-mno-du"
spec:
clusterName: "site1-mno-du"
networkType: "OVNKubernetes"
extraManifests:
searchPaths:
- extra-manifest/
- custom-manifest/
extraManifestsRefs:
- name: ipsec-manifests-cm <1>
# ...
----
<1> Reference to the `ConfigMap` containing the IPsec certificate import manifests.
🤖 [error] AsciiDocDITA.CalloutList: Callouts are not supported in DITA.

+
[NOTE]
====
If you have other extra manifests, you can either include them in the same `ConfigMap` or create multiple `ConfigMap` resources and reference them all in `extraManifestsRefs`.
====
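+
For example, with an assumed second `ConfigMap` named `extra-manifests-cm`, the `extraManifestsRefs` field would list both references:
+
[source,yaml]
----
  extraManifestsRefs:
    - name: ipsec-manifests-cm
    - name: extra-manifests-cm
----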

. Include the `ipsec-config-policy.yaml` config policy file in the `source-crs` directory in GitOps and reference the file in one of the `PolicyGenerator` CRs.
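+
A minimal sketch of such a `PolicyGenerator` entry follows; the generator name, policy name, and namespace are placeholders:
+
[source,yaml]
----
apiVersion: policy.open-cluster-management.io/v1
kind: PolicyGenerator
metadata:
  name: example-policies
policyDefaults:
  namespace: ztp-policies
  remediationAction: inform
policies:
  - name: ipsec-config-policy
    manifests:
      - path: source-crs/ipsec-config-policy.yaml
----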

. Commit the `SiteConfig` CR changes and updated files in your Git repository and push the changes to provision the managed cluster and configure IPsec encryption.
. Commit the `ClusterInstance` CR, IPsec manifest files, and `kustomization.yaml` changes in your Git repository and push the changes to provision the managed cluster and configure IPsec encryption.
+
The Argo CD pipeline detects the changes and begins the managed cluster deployment.
+
During cluster provisioning, the {ztp} pipeline appends the CRs in the `custom-manifest/` directory to the default set of extra manifests stored in the `extra-manifest/` directory.
During cluster provisioning, the SiteConfig Operator applies the CRs contained in the referenced `ConfigMap` resources as extra manifests. The IPsec configuration policy is applied as a Day 2 operation after the cluster is provisioned.

.Verification
