
Commit a2d4f84

Add CCM quickstart
1 parent fa9f875

File tree

2 files changed: +167 −1 lines changed

README.md

Lines changed: 3 additions & 1 deletion

```diff
@@ -11,7 +11,9 @@
 This repository implements the [cloud provider](https://github.com/kubernetes/cloud-provider) interface for [Google Cloud Platform (GCP)](https://cloud.google.com/).
 It provides components for Kubernetes clusters running on GCP and is maintained primarily by the Kubernetes team at Google.
 
-To see all available commands in this repository, run `make help`.
+To get started with the GCP CCM, see the **[CCM Quickstart](docs/ccm-quickstart.md)**.
+
+For local development, use `make help` to see all available commands.
 
 ## Components
```

docs/ccm-quickstart.md

Lines changed: 164 additions & 0 deletions

# GCP Cloud Controller Manager (CCM) Quickstart

This guide provides a quickstart for building and deploying the GCP Cloud Controller Manager (CCM) to a self-managed Kubernetes cluster.

## Prerequisites

1. **Kubernetes Cluster**: A Kubernetes cluster running on Google Cloud Platform.
   * The cluster's components (`kube-apiserver`, `kube-controller-manager`, and `kubelet`) must run with the `--cloud-provider=external` flag.

   For a simple example, use `make kops-up` to create a cluster:

   ```sh
   # Enable required GCP APIs
   gcloud services enable compute.googleapis.com
   gcloud services enable artifactregistry.googleapis.com

   # Set environment variables
   export GCP_PROJECT=$(gcloud config get-value project) # or set manually
   export GCP_LOCATION=us-central1
   export GCP_ZONES=${GCP_LOCATION}-a
   export KOPS_CLUSTER_NAME=kops.k8s.local
   export KOPS_STATE_STORE=gs://${GCP_PROJECT}-kops-state

   # Create the state store bucket if it doesn't already exist
   gcloud storage buckets create ${KOPS_STATE_STORE} --location=${GCP_LOCATION} || true

   # Run the cluster creation target; this may take several minutes
   make kops-up
   ```

2. **GCP Service Account**: The nodes (or the CCM pod itself) must have access to a GCP IAM service account with sufficient permissions to manage compute resources (e.g. instances, load balancers, and routes).
3. **Docker & gcloud CLI**: Authorized and configured for pushing images to GCP Artifact Registry.

> [!NOTE]
> **If you used `make kops-up` to provision your cluster, you can skip Step 1 and Step 2.**
> The `make kops-up` target is an end-to-end workflow that automatically builds the CCM image, pushes it to your registry, and deploys it (along with all required RBAC) to the cluster. You can proceed directly to **Step 3: Verification**.
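If you are provisioning a cluster manually instead, the `--cloud-provider=external` prerequisite for the control-plane components can be sketched as a kubeadm configuration fragment (a hypothetical example using the `kubeadm.k8s.io/v1beta3` API; adjust it to your setup):

```yaml
# Sketch: set --cloud-provider=external on the control-plane components.
# The kubelet flag is typically set separately, e.g. via
# nodeRegistration.kubeletExtraArgs in InitConfiguration or KUBELET_EXTRA_ARGS.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: external
controllerManager:
  extraArgs:
    cloud-provider: external
```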
## Step 1: Build and Push the CCM Image (Manual Clusters)

If you are using a manually provisioned cluster (e.g. `kubeadm`), build the `cloud-controller-manager` Docker image and push it to your registry:

```sh
# Google Cloud project ID, registry location, and repository name
GCP_PROJECT=$(gcloud config get-value project)
GCP_LOCATION=us-central1
REPO=my-repo

# Create an Artifact Registry repository (if it doesn't already exist)
gcloud artifacts repositories create ${REPO} \
  --project=${GCP_PROJECT} \
  --repository-format=docker \
  --location=${GCP_LOCATION} \
  --description="Docker repository for CCM"

# Grant the cluster nodes permission to read from the newly created Artifact Registry.
# This extracts your GCE node's service account using kubectl and gcloud.
NODE_NAME=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
NODE_ZONE=$(kubectl get node $NODE_NAME -o jsonpath='{.metadata.labels.topology\.kubernetes\.io/zone}')
NODE_SA=$(gcloud compute instances describe $NODE_NAME \
  --zone=$NODE_ZONE --project=${GCP_PROJECT} \
  --format="value(serviceAccounts[0].email)")

gcloud artifacts repositories add-iam-policy-binding ${REPO} \
  --project=${GCP_PROJECT} \
  --location=${GCP_LOCATION} \
  --member="serviceAccount:${NODE_SA}" \
  --role="roles/artifactregistry.reader"

# Configure Docker to authenticate with Artifact Registry
gcloud auth configure-docker ${GCP_LOCATION}-docker.pkg.dev

# Build and push
IMAGE_REPO=${GCP_LOCATION}-docker.pkg.dev/${GCP_PROJECT}/${REPO} IMAGE_TAG=v0 make publish
```

*Note: If `IMAGE_TAG` is omitted, the Makefile uses a combination of the current Git commit SHA and the build date.*

## Step 2: Deploy the CCM to your Cluster (Manual Clusters)

Once the image is pushed, deploy the necessary RBAC permissions and the CCM pod itself to the Kubernetes cluster.

For native Kubernetes clusters, avoid the legacy `deploy/cloud-controller-manager.manifest` (a SaltStack template used by the legacy `kube-up`). Instead, use the kustomize-ready DaemonSet, which correctly includes the RBAC roles and deployment:

1. Update the image to your newly pushed tag:
   ```sh
   (cd deploy/packages/default && kustomize edit set image k8scloudprovidergcp/cloud-controller-manager=$IMAGE_REPO:$IMAGE_TAG)
   ```
2. The `manifest.yaml` DaemonSet intentionally ships with no execution flags (`args: []`). You **must** provide the necessary command-line arguments to the `cloud-controller-manager` container. For a typical Kops or GCE cluster, you can supply these arguments by creating a kustomize patch.

> [!NOTE]
> If you skipped building your own image in Step 1 and chose to deploy the public upstream image (`k8scloudprovidergcp/cloud-controller-manager:latest`), you **must** also include `command: ["/cloud-controller-manager"]` in your patch's `containers` block. Locally built Dockerfile images set the correct `ENTRYPOINT`, so they do not require this override.
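A minimal sketch of such a `command` override, assuming the DaemonSet name and namespace used by the default package:

```yaml
# Sketch: explicit entrypoint, needed only when running the upstream image
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cloud-controller-manager
  namespace: kube-system
spec:
  template:
    spec:
      containers:
      - name: cloud-controller-manager
        command: ["/cloud-controller-manager"]
```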
> [!IMPORTANT]
> Be sure to update the `--cluster-cidr` and `--cluster-name` arguments below to match your specific cluster's configuration. Note that GCP resource names cannot contain dots (`.`), so if your cluster name is `my.cluster.net`, you **must** use a sanitized form such as `my-cluster-net` here.
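The dot-to-dash sanitization can be done in plain shell, for example:

```shell
# Replace dots with dashes to produce a GCP-safe cluster name
CLUSTER_NAME="my.cluster.net"
SANITIZED_NAME=$(echo "${CLUSTER_NAME}" | tr '.' '-')
echo "${SANITIZED_NAME}"   # my-cluster-net
```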
```sh
cat << EOF > deploy/packages/default/args-patch.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cloud-controller-manager
  namespace: kube-system
spec:
  template:
    spec:
      containers:
      - name: cloud-controller-manager
        args:
        - --cloud-provider=gce
        - --allocate-node-cidrs=true
        - --cluster-cidr=10.4.0.0/14
        - --cluster-name=kops-k8s-local
        - --configure-cloud-routes=true
        - --leader-elect=true
        - --use-service-account-credentials=true
        - --v=2
EOF
(cd deploy/packages/default && kustomize edit add patch --path args-patch.yaml)

# Deploy the configured package (this applies the DaemonSet and its required roles):
kubectl apply -k deploy/packages/default
```

```sh
# Clean up the local patch file and reset all changes to kustomization.yaml
rm deploy/packages/default/args-patch.yaml
git checkout deploy/packages/default/kustomization.yaml
```

### Alternative: Apply Standalone RBAC Roles

If you prefer to deploy the RBAC rules independently of the base DaemonSet package, apply them directly:

```sh
kubectl apply -f deploy/cloud-node-controller-role.yaml
kubectl apply -f deploy/cloud-node-controller-binding.yaml
kubectl apply -f deploy/pvl-controller-role.yaml
```

## Step 3: Verification

To verify that the Cloud Controller Manager is running successfully:

1. **Check the Pod Status**: Verify the pod is `Running` in the `kube-system` namespace.
   ```sh
   kubectl get pods -n kube-system -l component=cloud-controller-manager
   ```

2. **Check Pod Logs**: Look for errors, or access and authentication issues with the GCP API.
   ```sh
   kubectl logs -n kube-system -l component=cloud-controller-manager
   ```

3. **Check Node Initialization**: The `kubelet` initially applies a `node.cloudprovider.kubernetes.io/uninitialized` taint when bound to an external cloud provider. The CCM should remove this taint once it successfully fetches the node's properties from the GCP API.
   ```sh
   # Ensure no nodes have the uninitialized taint; output should be empty.
   kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints | grep uninitialized
   ```

4. **Verify External IPs and ProviderID**: Check that your nodes are populated with GCP-specific data (e.g. a `ProviderID` in the format `gce://...`).
   ```sh
   kubectl describe nodes | grep "ProviderID:"
   ```
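As an aside, a `gce://` ProviderID encodes the project, zone, and instance name, which can be split with plain POSIX shell (a sketch using a hypothetical value):

```shell
# Split a gce:// ProviderID into its project/zone/instance components
PROVIDER_ID="gce://my-project/us-central1-a/my-node"
REST=${PROVIDER_ID#gce://}       # my-project/us-central1-a/my-node
PROJECT=${REST%%/*}              # my-project
ZONE_AND_NODE=${REST#*/}
ZONE=${ZONE_AND_NODE%%/*}        # us-central1-a
INSTANCE=${ZONE_AND_NODE#*/}     # my-node
echo "project=${PROJECT} zone=${ZONE} instance=${INSTANCE}"
```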
If you used `make kops-up` to provision your cluster, you can use `make kops-down` to tear down the cluster.
