---
title: "Deploying Knative to remote clusters with the Operator"
linkTitle: "Deploying Knative to remote clusters with the Operator"
author: "kahirokunn"
author handle: https://github.com/kahirokunn
date: 2026-05-07
description: "How to deploy Knative Serving and Eventing to remote clusters with Knative Operator v1.22"
type: "blog"
---

# Deploying Knative to remote clusters with the Operator

**Author: [kahirokunn](https://github.com/kahirokunn)**

_In this blog post you will learn how to use Knative Operator v1.22 to deploy Knative Serving and Eventing components to remote Kubernetes clusters from a hub cluster._

Platform teams often manage more than one Kubernetes cluster. Some clusters are split by region, some by environment, and some by tenant or business unit. Until now, installing Knative across those clusters usually meant running a separate Operator in every cluster, or building extra automation around the installation manifests.

Starting with Knative Operator v1.22, a single Operator can deploy Knative components to a different Kubernetes cluster. The cluster that runs the Operator acts as the **hub cluster**. The cluster that receives Knative Serving or Knative Eventing acts as a **spoke cluster**. You select the spoke by setting `spec.clusterProfileRef` on the `KnativeServing` or `KnativeEventing` custom resource.

This feature uses the SIG-Multicluster Cluster Inventory API, introduced through [KEP-5339](https://github.com/kubernetes/enhancements/issues/5339). The Operator reads a `ClusterProfile` resource to discover the remote cluster endpoint and access provider, then runs the normal Knative installation pipeline against that remote cluster.

If `spec.clusterProfileRef` is not set, nothing changes. The Operator keeps deploying Knative to the local cluster exactly as it did before.

## Why this matters

Multi-cluster support makes the Knative Operator a better fit for fleet-oriented platforms. You can keep the Knative installation API on the hub while still placing Serving and Eventing components close to the workloads that need them.

This is useful when you want to:

- manage Knative installations for many spoke clusters from one control point;
- integrate with a fleet manager that publishes `ClusterProfile` resources;
- keep using the same `KnativeServing` and `KnativeEventing` APIs for local and remote installs;
- apply consistent configuration across regions or environments; and
- delete a hub-side CR and let the Operator clean up the remote installation.

The feature is intentionally based on the Cluster Inventory API instead of a specific fleet manager. A fleet system such as Open Cluster Management can publish the `ClusterProfile` resources, and proof-of-concept environments can register them manually.

!!! important
    Multi-cluster deployment is a beta feature in Knative Operator v1.22 and later. It depends on the SIG-Multicluster Cluster Inventory API, which is still in alpha. The API schema and recommended access provider plugins might change between releases.

## How the Operator targets a remote cluster

The Operator resolves the target cluster at the start of reconciliation. When `spec.clusterProfileRef` is present, it reads the referenced `ClusterProfile`, invokes the configured access provider plugin, and builds a Kubernetes client for the spoke cluster.

After that, the existing install stages continue to run as usual, but they use the spoke client. This means the Operator still applies the standard Knative manifests: CRDs, ConfigMaps, Deployments, Services, and RBAC resources. The difference is where those resources land.

For cleanup, the Operator uses a helper resource on the spoke cluster to anchor ownership of namespace-scoped Knative resources, which are then garbage collected through Kubernetes owner references. Cluster-scoped resources, such as ClusterRoles and CRDs, are removed explicitly by the hub-side finalizer.

Always uninstall a remote Knative deployment by deleting the `KnativeServing` or `KnativeEventing` CR on the hub. If the spoke cluster is temporarily unreachable, the finalizer retries until the spoke can be reached again.

## What you need before enabling it

Before deploying Knative to a remote cluster, prepare the following pieces:

- Knative Operator v1.22 or later.
- A hub cluster running Kubernetes v1.35 or later. Knative v1.22 supports Kubernetes v1.34 for standard installations, but remote deployment uses Kubernetes image volumes on the hub to mount credential plugin binaries. That volume type is enabled by default starting in Kubernetes v1.35.
- The Cluster Inventory API `ClusterProfile` CRD installed on the hub.
- Network connectivity from the hub cluster to the spoke cluster API server.
- A credential plugin that implements the Cluster Inventory API access provider interface.
- Spoke-cluster RBAC permissions that let the returned credential manage Knative resources.

The upstream Cluster Inventory API project publishes credential plugins such as `secretreader` and `kubeconfig-secretreader`. Use the plugin that matches how you want to store spoke credentials.

For full prerequisites and setup steps, see [Deploy Knative to a remote cluster](https://knative.dev/docs/install/operator/multi-cluster-deployment/).

## Enable multi-cluster support

Multi-cluster support is disabled by default. If you install or upgrade the Operator with Helm, enable it through the `knative_operator.multicluster.*` values:

```yaml
knative_operator:
  multicluster:
    enabled: true
    accessProvidersConfig:
      providers:
        - name: secretreader
          execConfig:
            apiVersion: client.authentication.k8s.io/v1
            command: /credential-plugin/secretreader-plugin
            provideClusterInfo: true
    plugins:
      - name: secretreader
        image: registry.k8s.io/cluster-inventory-api/secretreader:v0.1.1
        mountPath: /credential-plugin
    remoteDeploymentsPollInterval: 10s
```

The provider name in `accessProvidersConfig.providers[].name` must match the plugin name in `plugins[].name`. The Operator uses that name to connect the `ClusterProfile` access provider entry to the credential plugin configuration.

If you do not use Helm, you can patch an existing Operator Deployment. The important pieces are the same: mount the access provider configuration, mount the credential plugin binary, and start the Operator with `--clusterprofile-provider-file`.

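As a rough sketch of that non-Helm route, the Deployment changes might look like the following. The volume and ConfigMap names here are illustrative assumptions, not names the Operator requires; only the `--clusterprofile-provider-file` flag and the image-volume approach come from the feature itself:

```yaml
# Illustrative excerpt of a patched knative-operator Deployment.
# Volume, mount, and ConfigMap names are assumptions for this sketch.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: knative-operator
  namespace: knative-operator
spec:
  template:
    spec:
      containers:
        - name: knative-operator
          args:
            - --clusterprofile-provider-file=/etc/access-providers/config.yaml
          volumeMounts:
            - name: access-providers-config      # access provider configuration file
              mountPath: /etc/access-providers
            - name: secretreader-plugin          # credential plugin binary
              mountPath: /credential-plugin
      volumes:
        - name: access-providers-config
          configMap:
            name: access-providers-config
        - name: secretreader-plugin
          image:                                 # Kubernetes image volume (default-on in v1.35+)
            reference: registry.k8s.io/cluster-inventory-api/secretreader:v0.1.1
```

The image volume is why the hub needs Kubernetes v1.35 or later: it lets the Operator mount the plugin binary straight from a container image without an init container.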
## Register the spoke cluster

The hub discovers spoke clusters through `ClusterProfile` resources. In production, a fleet manager can publish and update those resources. For a local test or proof of concept, you can create the `ClusterProfile` yourself and patch its status.

A `ClusterProfile` includes the cluster manager identity in `spec`, and the remote access information in `status.accessProviders`. The Operator also checks that the `ControlPlaneHealthy` condition is `True`.

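A manually registered profile might look roughly like this. Treat it as a sketch: the Cluster Inventory API is alpha, so the exact schema (especially the `accessProviders` entry shape) may differ in your version, and the server URL and reason strings below are placeholders:

```yaml
# Illustrative ClusterProfile for a proof of concept (alpha API; schema may change).
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ClusterProfile
metadata:
  name: spoke-cluster-1
  namespace: fleet-system
spec:
  displayName: spoke-cluster-1
  clusterManager:
    name: manual                        # registered by hand, no fleet manager
status:
  conditions:
    - type: ControlPlaneHealthy         # the Operator requires this to be "True"
      status: "True"
      reason: ClusterReachable          # illustrative reason
      message: Spoke API server is reachable
  accessProviders:
    - name: secretreader                # must match a provider configured on the Operator
      cluster:
        server: https://spoke-cluster-1.example.com:6443   # placeholder endpoint
```

Because `status` is a subresource, a manual setup patches it separately, for example with `kubectl patch --subresource=status`.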
The important relationship is:

- `KnativeServing` or `KnativeEventing` points to a `ClusterProfile` by name and namespace.
- The `ClusterProfile` advertises an access provider by name.
- The Operator has a provider configuration with the same name.
- The credential plugin returns credentials for the spoke cluster.

After those pieces line up, the Operator can build a client for the spoke and reconcile Knative there.

## Deploy Knative Serving to a spoke

After enabling multi-cluster support and registering a `ClusterProfile`, set `spec.clusterProfileRef` on the `KnativeServing` CR:

```yaml
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  clusterProfileRef:
    name: spoke-cluster-1
    namespace: fleet-system
  ingress:
    kourier:
      enabled: true
  config:
    network:
      ingress-class: kourier.ingress.networking.knative.dev
```

Apply this CR on the hub cluster. The Operator runs on the hub, but the Serving resources are created on the spoke cluster described by `spoke-cluster-1`.

For ingress, choose the implementation that fits the spoke. Kourier is the simplest path because the Operator can deploy it as part of the Serving installation. If you use Istio or Gateway API ingress, prepare the ingress implementation and gateway resources on the spoke before applying the `KnativeServing` CR.

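For example, if the spoke already runs Istio, the standard Operator ingress fields select it instead of Kourier. This sketch only swaps the ingress-related fields of the CR above:

```yaml
# Illustrative spec fragment: same remote targeting, Istio ingress instead of Kourier.
spec:
  clusterProfileRef:
    name: spoke-cluster-1
    namespace: fleet-system
  ingress:
    istio:
      enabled: true
  config:
    network:
      ingress-class: istio.ingress.networking.knative.dev
```

The difference from the Kourier path is operational: here the Operator configures Knative to use Istio, but Istio itself must already be installed on the spoke.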
## Deploy Knative Eventing to a spoke

Knative Eventing uses the same targeting field:

```yaml
apiVersion: operator.knative.dev/v1beta1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
spec:
  clusterProfileRef:
    name: spoke-cluster-1
    namespace: fleet-system
```

Apply this CR on the hub, then check the spoke cluster for the Eventing components.

## Operating at fleet scale

Each `KnativeServing` or `KnativeEventing` CR targets exactly one `ClusterProfile`. To deploy to several spokes, create one CR per spoke and use a naming convention that makes the target clear, such as `knative-serving-us-east` or `knative-eventing-prod-eu`.

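Assuming the Operator accepts multiple remote-targeting CRs side by side, as that naming convention suggests, a two-spoke setup might look like this sketch (the `ClusterProfile` names are placeholders):

```yaml
# Illustrative: one hub-side CR per spoke, named after its target.
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving-us-east
  namespace: knative-serving
spec:
  clusterProfileRef:
    name: us-east-1            # placeholder ClusterProfile name
    namespace: fleet-system
---
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving-eu-west
  namespace: knative-serving
spec:
  clusterProfileRef:
    name: eu-west-1            # placeholder ClusterProfile name
    namespace: fleet-system
```

Shared configuration (ingress choice, `config` overrides) can be templated across these CRs with whatever manifest tooling you already use.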
The `spec.clusterProfileRef` field is immutable after the CR is created. To move an installation to a different spoke, delete the existing CR and create a new one with the new `clusterProfileRef`.

The Operator reports remote targeting through the `TargetClusterResolved` condition. If targeting fails, the condition reason points to the next action. For example, `ClusterProfileNotFound` means the referenced profile does not exist, `MulticlusterDisabled` means the Operator was not started with a provider file, and `AccessProviderFailed` means the credential plugin returned an error.

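On the hub, these conditions surface in the CR status. The excerpt below is illustrative (the reasons come from the list above, but the exact message wording will vary):

```yaml
# Illustrative status excerpt from the hub-side KnativeServing CR
# when the referenced ClusterProfile is missing.
status:
  conditions:
    - type: TargetClusterResolved
      status: "False"
      reason: ClusterProfileNotFound
      message: ClusterProfile fleet-system/spoke-cluster-1 was not found
```

Once the profile exists and the plugin returns working credentials, `TargetClusterResolved` flips to `True` and the normal install conditions take over.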
For larger fleets, tune the remote readiness polling interval. The default is `10s`, but you can increase it with `--remote-deployments-poll-interval` or the Helm value `knative_operator.multicluster.remoteDeploymentsPollInterval`.

## Conclusion

Knative Operator v1.22 adds a new way to run Knative across a fleet: keep the Operator and installation API on a hub cluster, and deploy Serving or Eventing components to remote spoke clusters through `spec.clusterProfileRef`.

This keeps the single-cluster path unchanged while giving platform teams a declarative API for remote installations, cleanup, and fleet integration through the Cluster Inventory API.

To try it end to end, follow the [multi-cluster deployment guide](https://knative.dev/docs/install/operator/multi-cluster-deployment/).