Description
Hello,
We have started experimenting with deploying nested clusters in OpenShift using HyperShift.
We tried to deploy a cluster using OpenShift Virtualization, and the nodes (VMs) that were created failed to provision because they could not reach the ignition server.
After some research, we concluded that the virt-launcher NetworkPolicy is responsible for the connectivity issue. To verify this, we removed the hypershift.openshift.io/infra-id label (the label the policy uses to select the VM pods), and the nodes provisioned successfully.
We analyzed the NetworkPolicy and saw that traffic is allowed only to the default IngressController:

```yaml
- podSelector:
    matchLabels:
      ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
```
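To make the shape concrete, here is a minimal sketch of a policy along those lines, assuming the quoted selector is an egress peer and the policy selects the VM pods via the infra-id label mentioned above (names and placeholder values are illustrative, not the exact generated object):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: virt-launcher               # illustrative name
  namespace: <hosted-cluster-ns>    # placeholder
spec:
  # Selects the virt-launcher (VM) pods via the label we removed above.
  podSelector:
    matchLabels:
      hypershift.openshift.io/infra-id: <infra-id>  # placeholder
  policyTypes:
  - Egress
  egress:
  # Only the default router pods are allowed as a destination, so traffic
  # through any other IngressController's routers is dropped.
  - to:
    - podSelector:
        matchLabels:
          ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
```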
We use ingress sharding in our hosting cluster, which means traffic is forwarded to our sharded-default IngressController and not the default one.
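For context, our shard looks roughly like the following (the domain and route selector values are illustrative). The ingress operator labels the shard's router pods with ingresscontroller.operator.openshift.io/deployment-ingresscontroller: sharded-default, which the rule above never matches:

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: sharded-default
  namespace: openshift-ingress-operator
spec:
  domain: apps-sharded.example.com  # illustrative domain
  routeSelector:
    matchLabels:
      type: sharded                 # illustrative route label
```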
Right now, we cannot modify the NetworkPolicy manually, as the controller just reconciles it back to its original state.
I would like to suggest adding a field to the HostedCluster resource that allows changing the name of the IngressController used for the virt-launcher NetworkPolicy.
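To illustrate, something along these lines could work; the ingressControllerName field below is purely hypothetical and does not exist today:

```yaml
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: my-cluster  # illustrative name
spec:
  platform:
    type: KubeVirt
    kubevirt:
      # Hypothetical field: name of the IngressController whose router pods
      # the virt-launcher NetworkPolicy should allow (defaulting to "default").
      ingressControllerName: sharded-default
```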
Best regards,
Or Rener