
Commit 1951c69

Add blog post for deploying Knative to remote clusters via Cluster Inventory API (#6634)
* Add blog post for deploying Knative to remote clusters via Cluster Inventory API
* Add metadata to Gateway API ingress blog post
* Fix gateway API blog link styling

Signed-off-by: kahirokunn <okinakahiro@gmail.com>
1 parent d0a4e4f commit 1951c69

4 files changed

Lines changed: 188 additions & 9 deletions

File tree

docs/blog/.nav.yml

Lines changed: 1 addition & 0 deletions
@@ -45,6 +45,7 @@ nav:
       - releases/announcing-knative-v0-3-release.md
       - releases/announcing-knative-v0-2-release.md
   - Articles:
+      - articles/deploying-knative-to-remote-clusters-with-operator.md
       - articles/gateway-api-ingress-with-knative-operator.md
       - articles/Enhancing-func-cli-ux.md
       - articles/knative-eventing-eda-agents.md
docs/blog/articles/deploying-knative-to-remote-clusters-with-operator.md

Lines changed: 168 additions & 0 deletions
@@ -0,0 +1,168 @@
---
title: "Deploying Knative to remote clusters with the Operator"
linkTitle: "Deploying Knative to remote clusters with the Operator"
author: "kahirokunn"
author handle: https://github.com/kahirokunn
date: 2026-05-07
description: "How to deploy Knative Serving and Eventing to remote clusters with Knative Operator v1.22"
type: "blog"
---

# Deploying Knative to remote clusters with the Operator

**Author: [kahirokunn](https://github.com/kahirokunn)**

_In this blog post you will learn how to use Knative Operator v1.22 to deploy Knative Serving and Eventing components to remote Kubernetes clusters from a hub cluster._

Platform teams often manage more than one Kubernetes cluster. Some clusters are split by region, some by environment, and some by tenant or business unit. Until now, installing Knative across those clusters usually meant running a separate Operator in every cluster, or building extra automation around the installation manifests.

Starting with Knative Operator v1.22, a single Operator can deploy Knative components to a different Kubernetes cluster. The cluster that runs the Operator acts as the **hub cluster**. The cluster that receives Knative Serving or Knative Eventing acts as a **spoke cluster**. You select the spoke by setting `spec.clusterProfileRef` on the `KnativeServing` or `KnativeEventing` custom resource.

This feature uses the SIG-Multicluster Cluster Inventory API, introduced through [KEP-5339](https://github.com/kubernetes/enhancements/issues/5339). The Operator reads a `ClusterProfile` resource to discover the remote cluster endpoint and access provider, then runs the normal Knative installation pipeline against that remote cluster.

If `spec.clusterProfileRef` is not set, nothing changes. The Operator keeps deploying Knative to the local cluster exactly as it did before.

## Why this matters

Multi-cluster support makes the Knative Operator a better fit for fleet-oriented platforms. You can keep the Knative installation API on the hub while still placing Serving and Eventing components close to the workloads that need them.

This is useful when you want to:

- manage Knative installations for many spoke clusters from one control point;
- integrate with a fleet manager that publishes `ClusterProfile` resources;
- keep using the same `KnativeServing` and `KnativeEventing` APIs for local and remote installs;
- apply consistent configuration across regions or environments; and
- delete a hub-side CR and let the Operator clean up the remote installation.

The feature is intentionally based on the Cluster Inventory API instead of a specific fleet manager. A fleet system such as Open Cluster Management can publish the `ClusterProfile` resources, and proof-of-concept environments can register them manually.

!!! important
    Multi-cluster deployment is a beta feature in Knative Operator v1.22 and later. It depends on the SIG-Multicluster Cluster Inventory API, which is still in alpha. The API schema and recommended access provider plugins might change between releases.

## How the Operator targets a remote cluster

The Operator resolves the target cluster at the start of reconciliation. When `spec.clusterProfileRef` is present, it reads the referenced `ClusterProfile`, invokes the configured access provider plugin, and builds a Kubernetes client for the spoke cluster.

After that, the existing install stages continue to run as usual, but they use the spoke client. This means the Operator still applies the standard Knative manifests: CRDs, ConfigMaps, Deployments, Services, and RBAC resources. The difference is where those resources land.

For cleanup, the Operator uses a helper resource on the spoke cluster to anchor ownership of namespace-scoped Knative resources. Namespace-scoped resources are garbage collected through Kubernetes owner references. Cluster-scoped resources, such as ClusterRoles and CRDs, are removed explicitly by the hub-side finalizer.

Always uninstall a remote Knative deployment by deleting the `KnativeServing` or `KnativeEventing` CR on the hub. If the spoke cluster is temporarily unreachable, the finalizer retries until the spoke can be reached again.

## What you need before enabling it

Before deploying Knative to a remote cluster, prepare the following pieces:

- Knative Operator v1.22 or later.
- A hub cluster running Kubernetes v1.35 or later. Knative v1.22 supports Kubernetes v1.34 for standard installations, but remote deployment uses Kubernetes image volumes on the hub to mount credential plugin binaries. That volume type is enabled by default starting in Kubernetes v1.35.
- The Cluster Inventory API `ClusterProfile` CRD installed on the hub.
- Network connectivity from the hub cluster to the spoke cluster API server.
- A credential plugin that implements the Cluster Inventory API access provider interface.
- Spoke-cluster RBAC permissions that let the returned credential manage Knative resources.

The upstream Cluster Inventory API project publishes credential plugins such as `secretreader` and `kubeconfig-secretreader`. Use the plugin that matches how you want to store spoke credentials.
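
For the spoke-side RBAC item, a proof-of-concept setup can simply bind the identity behind the plugin-returned credential to a broad ClusterRole. This is only a sketch: the ServiceAccount name and namespace below are illustrative, and production installs should use a role scoped to the resources the Operator actually manages:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: knative-operator-remote        # illustrative name
subjects:
  - kind: ServiceAccount
    name: knative-operator-remote      # hypothetical ServiceAccount whose token the plugin reads
    namespace: kube-system             # illustrative namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin                  # broad for a PoC; scope down for production
```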

For full prerequisites and setup steps, see [Deploy Knative to a remote cluster](https://knative.dev/docs/install/operator/multi-cluster-deployment/).
## Enable multi-cluster support

Multi-cluster support is disabled by default. If you install or upgrade the Operator with Helm, enable it through the `knative_operator.multicluster.*` values:

```yaml
knative_operator:
  multicluster:
    enabled: true
    accessProvidersConfig:
      providers:
        - name: secretreader
          execConfig:
            apiVersion: client.authentication.k8s.io/v1
            command: /credential-plugin/secretreader-plugin
            provideClusterInfo: true
    plugins:
      - name: secretreader
        image: registry.k8s.io/cluster-inventory-api/secretreader:v0.1.1
        mountPath: /credential-plugin
    remoteDeploymentsPollInterval: 10s
```

The provider name in `accessProvidersConfig.providers[].name` must match the plugin name in `plugins[].name`. The Operator uses that name to connect the `ClusterProfile` access provider entry to the credential plugin configuration.

If you do not use Helm, you can patch an existing Operator Deployment. The important pieces are the same: mount the access provider configuration, mount the credential plugin binary, and start the Operator with `--clusterprofile-provider-file`.
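
A patched Deployment can combine a Kubernetes image volume for the plugin binary with the provider-file flag. The fragment below is a sketch only: the container name, file paths, and flag value are assumptions that must match your own Operator Deployment and provider configuration:

```yaml
# Hypothetical fragment of the knative-operator Deployment spec.
spec:
  template:
    spec:
      containers:
        - name: knative-operator
          args:
            - --clusterprofile-provider-file=/etc/access-providers/config.json  # path is illustrative
          volumeMounts:
            - name: credential-plugin
              mountPath: /credential-plugin
      volumes:
        - name: credential-plugin
          image:   # Kubernetes image volume, enabled by default since v1.35
            reference: registry.k8s.io/cluster-inventory-api/secretreader:v0.1.1
            pullPolicy: IfNotPresent
```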

## Register the spoke cluster

The hub discovers spoke clusters through `ClusterProfile` resources. In production, a fleet manager can publish and update those resources. For a local test or proof of concept, you can create the `ClusterProfile` yourself and patch its status.

A `ClusterProfile` includes the cluster manager identity in `spec`, and the remote access information in `status.accessProviders`. The Operator also checks that the `ControlPlaneHealthy` condition is `True`.

The important relationship is:

- `KnativeServing` or `KnativeEventing` points to a `ClusterProfile` by name and namespace.
- The `ClusterProfile` advertises an access provider by name.
- The Operator has a provider configuration with the same name.
- The credential plugin returns credentials for the spoke cluster.

After those pieces line up, the Operator can build a client for the spoke and reconcile Knative there.
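
As an illustration, a manually registered profile could look like the sketch below. The `ClusterProfile` schema is alpha and may change between releases; the endpoint, CA data, and `clusterManager` name are placeholders, and the nested field names under `status.accessProviders` are assumptions modeled on upstream cluster-inventory-api examples:

```yaml
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ClusterProfile
metadata:
  name: spoke-cluster-1
  namespace: fleet-system
spec:
  displayName: spoke-cluster-1
  clusterManager:
    name: manual            # illustrative; a fleet manager would set its own identity
status:
  accessProviders:
    - name: secretreader    # must match a provider name in the Operator's configuration
      cluster:
        server: https://spoke-1.example.com:6443        # placeholder endpoint
        certificate-authority-data: <base64-encoded-CA> # placeholder
  conditions:
    - type: ControlPlaneHealthy
      status: "True"
```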

## Deploy Knative Serving to a spoke

After enabling multi-cluster support and registering a `ClusterProfile`, set `spec.clusterProfileRef` on the `KnativeServing` CR:

```yaml
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  clusterProfileRef:
    name: spoke-cluster-1
    namespace: fleet-system
  ingress:
    kourier:
      enabled: true
  config:
    network:
      ingress-class: kourier.ingress.networking.knative.dev
```

Apply this CR on the hub cluster. The Operator runs on the hub, but the Serving resources are created on the spoke cluster described by `spoke-cluster-1`.

For ingress, choose the implementation that fits the spoke. Kourier is the simplest path because the Operator can deploy it as part of the Serving installation. If you use Istio or Gateway API ingress, prepare the ingress implementation and gateway resources on the spoke before applying the `KnativeServing` CR.

## Deploy Knative Eventing to a spoke

Knative Eventing uses the same targeting field:

```yaml
apiVersion: operator.knative.dev/v1beta1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
spec:
  clusterProfileRef:
    name: spoke-cluster-1
    namespace: fleet-system
```

Apply this CR on the hub, then check the spoke cluster for the Eventing components.

## Operating at fleet scale

Each `KnativeServing` or `KnativeEventing` CR targets exactly one `ClusterProfile`. To deploy to several spokes, create one CR per spoke and use a naming convention that makes the target clear, such as `knative-serving-us-east` or `knative-eventing-prod-eu`.

The `spec.clusterProfileRef` field is immutable after the CR is created. To move an installation to a different spoke, delete the existing CR and create a new one with the new `clusterProfileRef`.

The Operator reports remote targeting through the `TargetClusterResolved` condition. If targeting fails, the condition reason points to the next action. For example, `ClusterProfileNotFound` means the referenced profile does not exist, `MulticlusterDisabled` means the Operator was not started with a provider file, and `AccessProviderFailed` means the credential plugin returned an error.
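
On the hub, a failed resolution surfaces in the CR status roughly like the following sketch (the exact condition layout and message text are assumptions):

```yaml
status:
  conditions:
    - type: TargetClusterResolved
      status: "False"
      reason: ClusterProfileNotFound
```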

For larger fleets, tune the remote readiness polling interval. The default is `10s`, but you can increase it with `--remote-deployments-poll-interval` or the Helm value `knative_operator.multicluster.remoteDeploymentsPollInterval`.
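
For example, relaxing the interval through Helm values is a one-line change (the `60s` value shown is illustrative):

```yaml
knative_operator:
  multicluster:
    remoteDeploymentsPollInterval: 60s
```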

## Conclusion

Knative Operator v1.22 adds a new way to run Knative across a fleet: keep the Operator and installation API on a hub cluster, and deploy Serving or Eventing components to remote spoke clusters through `spec.clusterProfileRef`.

This keeps the single-cluster path unchanged while giving platform teams a declarative API for remote installations, cleanup, and fleet integration through the Cluster Inventory API.

To try it end to end, follow the [multi-cluster deployment guide](https://knative.dev/docs/install/operator/multi-cluster-deployment/).

docs/blog/articles/gateway-api-ingress-with-knative-operator.md

Lines changed: 12 additions & 2 deletions
@@ -1,12 +1,22 @@
+---
+title: "Managing Gateway API ingress with the Knative Operator"
+linkTitle: "Managing Gateway API ingress with the Knative Operator"
+author: "kahirokunn"
+author handle: https://github.com/kahirokunn
+date: 2026-05-07
+description: "How to use the Knative Operator to manage Knative Serving with Gateway API ingress"
+type: "blog"
+---
+
 # Managing Gateway API ingress with the Knative Operator
 
-**Author: kahirokunn**
+**Author: [kahirokunn](https://github.com/kahirokunn)**
 
 _In this blog post you will learn how to use the Knative Operator to manage Knative Serving with Gateway API ingress._
 
 Starting with Knative v1.22, the Knative Operator supports `net-gateway-api` as an ingress option for Knative Serving. If your platform already standardizes on Kubernetes Gateway API resources such as `GatewayClass`, `Gateway`, and `HTTPRoute`, you can now keep the Knative ingress choice and gateway configuration together in the Operator-managed `KnativeServing` custom resource.
 
-Gateway API support in Knative is currently in beta. You must install a Gateway API implementation in your cluster before using `net-gateway-api`. Knative currently tests `net-gateway-api` against the Istio, Contour, and Envoy Gateway implementations. For tested versions, see the [`net-gateway-api` test version documentation](https://github.com/knative-extensions/net-gateway-api/blob/release-1.22/docs/test-version.md).
+Gateway API support in Knative is currently in beta. You must install a Gateway API implementation in your cluster before using `net-gateway-api`. Knative currently tests `net-gateway-api` against the Istio, Contour, and Envoy Gateway implementations. For tested versions, see the `net-gateway-api` [test version documentation](https://github.com/knative-extensions/net-gateway-api/blob/release-1.22/docs/test-version.md).
 
 ## What the Operator manages
docs/versioned/install/operator/multi-cluster-deployment.md

Lines changed: 7 additions & 7 deletions
@@ -33,12 +33,12 @@ The Operator continues to manage the lifecycle of the CR from the hub. To delete
 Before you deploy Knative to a remote cluster, you must have:
 
 - Knative Operator v1.22 or later.
-- A hub cluster running Kubernetes v1.35 or later. The Operator loads credential plugin binaries through an image volume, a Kubernetes feature that became generally available in v1.35. No other delivery method for credential plugins is supported.
+- A hub cluster running Kubernetes v1.35 or later. Knative v1.22 supports Kubernetes v1.34 for standard installations, but multi-cluster support uses Kubernetes image volumes to mount credential plugin binaries on the hub. That volume type is enabled by default starting in Kubernetes v1.35. No other delivery method for credential plugins is supported.
 - The Cluster Inventory API `ClusterProfile` CRD installed on the hub cluster. See the installation instructions in the [kubernetes-sigs/cluster-inventory-api](https://github.com/kubernetes-sigs/cluster-inventory-api) repository.
 - Network connectivity from the hub cluster to each spoke cluster's API server. If the hub cannot reach a spoke directly, use a reverse tunnel such as the OCM cluster-proxy.
 - A credential plugin that implements the Cluster Inventory API access provider interface. The upstream `kubernetes-sigs/cluster-inventory-api` project publishes two plugins:
-    - `registry.k8s.io/cluster-inventory-api/secretreader:v0.1.0` reads a bearer token from a `Secret`'s `data.token` field.
-    - `registry.k8s.io/cluster-inventory-api/kubeconfig-secretreader:v0.1.0` reads a complete kubeconfig from a `Secret`.
+    - `registry.k8s.io/cluster-inventory-api/secretreader:v0.1.1` reads a bearer token from a `Secret`'s `data.token` field.
+    - `registry.k8s.io/cluster-inventory-api/kubeconfig-secretreader:v0.1.1` reads a complete kubeconfig from a `Secret`.
 
   Pick whichever matches the credential format you intend to use, or use a plugin from another source.
 - RBAC permissions on each spoke cluster that let the credential returned by the plugin create and manage Knative resources. See [Spoke RBAC requirements](#spoke-rbac-requirements).
@@ -76,11 +76,11 @@ knative_operator:
         - name: secretreader
           execConfig:
             apiVersion: client.authentication.k8s.io/v1
-            command: /credential-plugin/plugin-binary
+            command: /credential-plugin/secretreader-plugin
             provideClusterInfo: true
     plugins:
       - name: secretreader
-        image: registry.k8s.io/cluster-inventory-api/secretreader:v0.1.0
+        image: registry.k8s.io/cluster-inventory-api/secretreader:v0.1.1
         mountPath: /credential-plugin
     remoteDeploymentsPollInterval: 10s
 ```
@@ -120,7 +120,7 @@ If you do not use Helm, add multi-cluster support to an Operator that is already
       "name": "secretreader",
       "execConfig": {
         "apiVersion": "client.authentication.k8s.io/v1",
-        "command": "/credential-plugin/plugin-binary",
+        "command": "/credential-plugin/secretreader-plugin",
         "provideClusterInfo": true
       }
     }
@@ -175,7 +175,7 @@ A `ClusterProfile` resource on the hub describes one spoke. Register it in one o
 
 ### Register a ClusterProfile manually
 
-Manual registration prepares the spoke first, then publishes its endpoint and credentials on the hub. The examples below use the `secretreader` plugin (`registry.k8s.io/cluster-inventory-api/secretreader:v0.1.0`); replace image references and configuration fields with those required by the plugin you choose.
+Manual registration prepares the spoke first, then publishes its endpoint and credentials on the hub. The examples below use the `secretreader` plugin (`registry.k8s.io/cluster-inventory-api/secretreader:v0.1.1`); replace image references and configuration fields with those required by the plugin you choose.
 
 1. On the spoke cluster, create a `ServiceAccount`, the required permissions, and a token `Secret`: