# Kubeconfig Provider
The Kubeconfig provider lets multicluster-runtime discover and engage clusters from kubeconfig-bearing `Secret`s in a Kubernetes cluster.
Instead of talking directly to Kind or Cluster API, this provider treats each `Secret` as one cluster: it builds a `cluster.Cluster` from the kubeconfig and wires it into the Multi-Cluster Manager.
At a high level the Kubeconfig provider:
- watches `Secret`s in a single namespace, filtered by a label (by default `sigs.k8s.io/multicluster-runtime-kubeconfig: "true"`),
- extracts kubeconfig bytes from a specific data key (by default `kubeconfig`),
- creates and runs a `cluster.Cluster` per Secret,
- engages each cluster with the Multi-Cluster Manager under the name `<secret-name>`.
For an overview of providers in general, see [Core Concepts — Providers](03-core-concepts--providers.md).
This chapter focuses on how the Kubeconfig provider is configured, how it behaves, and how to use it effectively.
## When to use the Kubeconfig provider
Use the Kubeconfig provider when:
- Your source of truth is “a list of kubeconfigs”:
  - You already have kubeconfigs for each member cluster.
  - You are comfortable materializing them as `Secret`s in a management cluster.
- You want to onboard clusters incrementally:
  - Creating a new Secret is all it takes to add a cluster to the fleet.
  - Deleting the Secret removes the cluster.
- You prefer a simple, explicit model:
  - No additional CRDs (`ClusterProfile`, `Cluster`) are required.
  - You can control RBAC and credential scopes per cluster via the kubeconfig content.
Other providers may be a better fit when:
- You already have a Cluster Inventory API / ClusterProfile‑based inventory → prefer the Cluster Inventory API provider, which aligns with KEP‑4322 and KEP‑5339.
- You use Cluster API as your lifecycle manager → prefer the Cluster API provider, which consumes CAPI `Cluster` objects and their kubeconfig Secrets.
- You only need local development against Kind fleets → prefer the Kind provider.
- You want to manage clusters from static files on disk rather than Secrets → prefer the File provider.
Conceptually, the Kubeconfig provider corresponds to the “Push Model via Credentials in Secret (Not Recommended)” described in the ClusterProfile KEP (KEP‑4322/KEP‑5339): it is simple and explicit, but long‑lived kubeconfig Secrets have security and rotation drawbacks you should be aware of.
## Topology: where the Kubeconfig provider runs
The typical deployment looks like this:
- A management cluster:
  - hosts the controller Pod that runs your multi-cluster controllers,
  - hosts `Secret` objects containing kubeconfigs for each member cluster,
  - hosts the Multi-Cluster Manager (`mcmanager.Manager`).
- One or more member clusters:
  - any Kubernetes clusters reachable from the management cluster’s network,
  - each with a service account / identity whose RBAC is appropriate for your controllers,
  - each exposing a kubeconfig that the management cluster can use.
The provider itself is not a separate process. It is:
- constructed as a Go value in your `main`,
- wired into `mcmanager.New(...)`,
- set up as a controller on the local manager (`mgr.GetLocalManager()`),
- driven by the Multi-Cluster Manager and local manager when you call `mgr.Start(ctx)`.
From a reconciler’s perspective, this provider looks just like any other:
- `mcreconcile.Request.ClusterName` is the Secret name,
- `mgr.GetCluster(ctx, req.ClusterName)` returns a `cluster.Cluster` backed by that kubeconfig,
- controllers don’t need to know anything about Secrets.
## Configuration: `kubeconfig.Options`
The Kubeconfig provider is implemented in `providers/kubeconfig/provider.go` and constructed via:
```go
import kubeconfig "sigs.k8s.io/multicluster-runtime/providers/kubeconfig"

provider := kubeconfig.New(kubeconfig.Options{
	Namespace:             "default",
	KubeconfigSecretLabel: "sigs.k8s.io/multicluster-runtime-kubeconfig",
	KubeconfigSecretKey:   "kubeconfig",
})
```

The `Options` type controls how Secrets are discovered and how clients are built:
```go
type Options struct {
	// Namespace is the namespace where kubeconfig secrets are stored.
	Namespace string

	// KubeconfigSecretLabel is the label used to identify secrets containing kubeconfig data.
	KubeconfigSecretLabel string

	// KubeconfigSecretKey is the key in the secret data that contains the kubeconfig.
	KubeconfigSecretKey string

	// ClusterOptions is the list of options to pass to the cluster object.
	ClusterOptions []cluster.Option

	// RESTOptions is the list of options to pass to the rest client.
	RESTOptions []func(cfg *rest.Config) error
}
```

Defaults:

- If `KubeconfigSecretLabel` is empty, it defaults to `sigs.k8s.io/multicluster-runtime-kubeconfig`.
- If `KubeconfigSecretKey` is empty, it defaults to `kubeconfig`.
Field descriptions:

- `Namespace`:
  - Only `Secret`s in this namespace are considered.
  - Typically you dedicate a namespace (for example `multicluster-kubeconfigs`) and restrict access to it via RBAC.
- `KubeconfigSecretLabel`:
  - Only `Secret`s with this label key set to `"true"` are watched.
  - This lets you keep other Secrets in the same namespace without them becoming clusters.
- `KubeconfigSecretKey`:
  - The data key that holds the kubeconfig bytes.
  - Must be present and non-empty; otherwise the Secret is ignored with a log message.
- `ClusterOptions`:
  - Options forwarded to `cluster.New(restConfig, ...)` for each member cluster.
  - Use this to tune cache or client behaviour, register additional schemes, or attach health probes or metrics.
- `RESTOptions`:
  - Mutators applied to the generated `*rest.Config` before constructing the cluster.
  - Typical use: adjust QPS/Burst, the user agent, or TLS settings (see the sketch below).
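As an illustration of the last two fields, here is a sketch (an assumption-laden example, not repository code) that raises client rate limits via `RESTOptions` and swaps in a custom scheme via `ClusterOptions`; `myScheme` is a hypothetical `*runtime.Scheme` and the numeric values are arbitrary:

```go
provider := kubeconfig.New(kubeconfig.Options{
	Namespace: "multicluster-kubeconfigs",
	// Mutate each generated *rest.Config before the cluster is built.
	RESTOptions: []func(cfg *rest.Config) error{
		func(cfg *rest.Config) error {
			cfg.QPS = 50
			cfg.Burst = 100
			return nil
		},
	},
	// Forwarded to cluster.New for each member cluster.
	ClusterOptions: []cluster.Option{
		func(o *cluster.Options) {
			o.Scheme = myScheme // hypothetical scheme with your API types registered
		},
	},
})
```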
In tests (`kubeconfig_suite_test.go`), the provider is exercised with non‑default values, confirming that these knobs are honoured.
## Secret layout and cluster naming
The provider treats each matching Secret as one cluster.
By default, a cluster is defined as:
- a `Secret` in `Options.Namespace`,
- with label `<KubeconfigSecretLabel>: "true"`,
- with kubeconfig bytes in `data[<KubeconfigSecretKey>]`.
Cluster name:
- `ClusterName` is exactly the `Secret`’s name.
- This is the string your controllers will see in `mcreconcile.Request.ClusterName`.
- It is local to this provider and namespace; for global identity, see KEP‑2149 and the About API.
For example, a minimal Secret using defaults:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cluster-a
  namespace: multicluster-kubeconfigs
  labels:
    sigs.k8s.io/multicluster-runtime-kubeconfig: "true"
data:
  kubeconfig: <base64-encoded kubeconfig>
```

will appear to your reconcilers as a cluster named `"cluster-a"`.
You are responsible for:
- choosing Secret names that are unique within the namespace,
- ensuring each Secret’s kubeconfig contains only one logical cluster (or at least that the default context points to the intended target),
- provisioning credentials in each kubeconfig with appropriate RBAC on the member cluster.
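If you prefer to create these Secrets programmatically rather than via scripts, a minimal sketch looks like the following; the helper name, `mgmtClient` (a controller-runtime client against the management cluster), and `kubeconfigBytes` are all hypothetical:

```go
import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// createKubeconfigSecret materializes a member cluster's kubeconfig as a
// Secret the provider will discover, using the default label and data key.
func createKubeconfigSecret(ctx context.Context, mgmtClient client.Client, name string, kubeconfigBytes []byte) error {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name:      name,
			Namespace: "multicluster-kubeconfigs",
			Labels: map[string]string{
				"sigs.k8s.io/multicluster-runtime-kubeconfig": "true",
			},
		},
		Data: map[string][]byte{
			// Raw bytes: the API server base64-encodes Secret data on the wire.
			"kubeconfig": kubeconfigBytes,
		},
	}
	return mgmtClient.Create(ctx, secret)
}
```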
For more structured inventories (for example, when you expose ClusterProfile objects as in KEP‑4322), consider using the Cluster Inventory API provider instead of manually managing Secrets.
## How discovery and engagement work
The provider implements:
- `multicluster.Provider` (for `Get` and `IndexField`), and
- a standard controller-runtime `Reconciler` for `corev1.Secret`.
Discovery is driven through `SetupWithManager`:

```go
func (p *Provider) SetupWithManager(ctx context.Context, mgr mcmanager.Manager) error
```

`SetupWithManager`:

- stores a reference to the `mcmanager.Manager`,
- obtains the local manager via `mgr.GetLocalManager()`,
- registers a controller on that local manager:
  - `For(&corev1.Secret{}, ...)`,
  - filtered to Secrets in `Options.Namespace` with the selected label (sketched below).
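The registration is roughly equivalent to the following builder sketch; the `p.opts` field and the predicate details are assumptions about the provider’s internals, not its actual code:

```go
// Sketch: watch labeled Secrets in the configured namespace on the
// local (management-cluster) manager and reconcile them with the provider.
return ctrl.NewControllerManagedBy(mgr.GetLocalManager()).
	For(&corev1.Secret{}, builder.WithPredicates(
		predicate.NewPredicateFuncs(func(obj client.Object) bool {
			return obj.GetNamespace() == p.opts.Namespace &&
				obj.GetLabels()[p.opts.KubeconfigSecretLabel] == "true"
		}),
	)).
	Complete(p)
```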
Once this controller and the Multi-Cluster Manager are running, the reconcile loop handles all lifecycle transitions.
## Reconcile loop for Secrets
The core logic lives in `Reconcile(ctx, req)`:

1. **Load the Secret.**
   - Uses the local manager’s client to `Get` the `Secret` by `req.NamespacedName`.
   - If it is not found, calls `removeCluster(req.Name)` to delete the cluster from the provider’s map and cancel the per-cluster context, then returns success.
2. **Handle deletion timestamps.**
   - If `secret.DeletionTimestamp` is non-nil, the provider removes the cluster (if any) and returns.
   - This covers cases where finalizers delay the actual Secret deletion.
3. **Extract kubeconfig bytes.**
   - Reads `secret.Data[Options.KubeconfigSecretKey]`.
   - If the key is missing or empty, logs a message and returns success (the Secret is effectively ignored).
4. **Detect changes using a hash** (see the sketch after this list).
   - Computes a SHA‑256 hash of the kubeconfig bytes.
   - If a cluster with this name already exists and the hash is unchanged, logs “Cluster already exists and has the same kubeconfig, skipping” and returns success.
   - If the cluster exists but the hash changed, logs “Cluster already exists, updating it” and calls `removeCluster(clusterName)` to shut down the old instance.
5. **Create and engage the cluster.**
   - Calls `createAndEngageCluster(ctx, clusterName, kubeconfigData, hashStr, log)`, which:
     - parses the kubeconfig into a `*rest.Config` (`clientcmd.RESTConfigFromKubeConfig`),
     - applies `RESTOptions` to the config,
     - constructs a `cluster.Cluster` via `cluster.New(restConfig, ClusterOptions...)`,
     - applies all stored field indexers to the new cluster’s cache,
     - creates a per-cluster context with `context.WithCancel(ctx)`,
     - starts the cluster in a goroutine (`cl.Start(clusterCtx)`),
     - calls `mgr.Engage(clusterCtx, clusterName, cl)`,
     - waits for `cl.GetCache().WaitForCacheSync(clusterCtx)` to succeed,
     - stores the `cluster.Cluster`, the cancel func, and the hash in an internal map.
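The change-detection step boils down to a few lines. The following is a simplified sketch, not the provider’s literal code; the `p.clusters` map, its `hash` field, and the lock are named illustratively:

```go
// Simplified sketch of step 4: hash the kubeconfig bytes and decide
// whether to reuse, replace, or create the cluster.
sum := sha256.Sum256(kubeconfigData)
hashStr := hex.EncodeToString(sum[:])

p.lock.RLock()
entry, exists := p.clusters[clusterName]
p.lock.RUnlock()

if exists && entry.hash == hashStr {
	return ctrl.Result{}, nil // same kubeconfig: keep the running cluster
}
if exists {
	p.removeCluster(clusterName) // kubeconfig rotated: shut down the old instance
}
if err := p.createAndEngageCluster(ctx, clusterName, kubeconfigData, hashStr, log); err != nil {
	return ctrl.Result{}, err
}
return ctrl.Result{}, nil
```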
Concurrency is handled via a RW mutex:
- read operations (`Get`, `ListClusters`) take a read lock,
- writes (`setCluster`, `removeCluster`, indexer registration) take a write lock,
- indexers are applied under the appropriate lock to avoid races (see tests under “Provider race condition”).
This design ensures that:
- adding a Secret creates and engages a new cluster,
- updating a Secret either leaves the cluster untouched (if the kubeconfig is unchanged) or tears it down and recreates it (if the kubeconfig changed),
- deleting a Secret cleanly removes the cluster from the fleet.
## `Get` and `IndexField`
The provider implements the `multicluster.Provider` contract:

- `Get(ctx, clusterName)`:
  - looks up the name in its internal `clusters` map,
  - returns `multicluster.ErrClusterNotFound` if absent,
  - otherwise returns the `cluster.Cluster` (see the sketch below).
- `IndexField(ctx, obj, field, extractValue)`:
  - appends an `index{object, field, extractValue}` entry to an internal `indexers` slice (for future clusters),
  - applies the index immediately to all existing clusters’ caches.
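A minimal sketch of what `Get` amounts to, assuming a simplified internal map from name to `cluster.Cluster` guarded by the provider’s RW mutex (field names are illustrative):

```go
// Sketch: Get is a read-locked map lookup.
func (p *Provider) Get(ctx context.Context, clusterName string) (cluster.Cluster, error) {
	p.lock.RLock()
	defer p.lock.RUnlock()
	if cl, ok := p.clusters[clusterName]; ok {
		return cl, nil
	}
	return nil, multicluster.ErrClusterNotFound
}
```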
This matches the behaviour documented in Core Concepts — Providers:
- controllers can register field indexes once, at startup, via `mgr.GetFieldIndexer().IndexField(...)`,
- the provider ensures that all clusters, current and future, share the same indexes.
## Using the provider: end-to-end example
The example program in `examples/kubeconfig/main.go` demonstrates a complete setup. In outline:
1. Parse flags and configure logging:

   ```go
   var namespace, kubeconfigSecretLabel, kubeconfigSecretKey string

   flag.StringVar(&namespace, "namespace", "default",
   	"Namespace where kubeconfig secrets are stored")
   flag.StringVar(&kubeconfigSecretLabel, "kubeconfig-label",
   	"sigs.k8s.io/multicluster-runtime-kubeconfig",
   	"Label used to identify secrets containing kubeconfig data")
   flag.StringVar(&kubeconfigSecretKey, "kubeconfig-key", "kubeconfig",
   	"Key in the secret data that contains the kubeconfig")

   // configure zap logger...
   ```

2. Create the Kubeconfig provider:

   ```go
   providerOpts := kubeconfigprovider.Options{
   	Namespace:             namespace,
   	KubeconfigSecretLabel: kubeconfigSecretLabel,
   	KubeconfigSecretKey:   kubeconfigSecretKey,
   }
   provider := kubeconfigprovider.New(providerOpts)
   ```

3. Create a Multi-Cluster Manager:

   ```go
   mgr, err := mcmanager.New(ctrl.GetConfigOrDie(), provider, mcmanager.Options{
   	Metrics: metricsserver.Options{
   		BindAddress: "0", // disable metrics server in this example
   	},
   })
   ```

4. Register the provider’s controller:

   ```go
   if err := provider.SetupWithManager(ctx, mgr); err != nil {
   	// handle error
   }
   ```

5. Register a multi-cluster controller:

   ```go
   if err := mcbuilder.ControllerManagedBy(mgr).
   	Named("multicluster-configmaps").
   	For(&corev1.ConfigMap{}).
   	Complete(mcreconcile.Func(
   		func(ctx context.Context, req mcreconcile.Request) (ctrl.Result, error) {
   			log := ctrllog.FromContext(ctx).WithValues("cluster", req.ClusterName)
   			log.Info("Reconciling ConfigMap")

   			cl, err := mgr.GetCluster(ctx, req.ClusterName)
   			if err != nil {
   				return reconcile.Result{}, err
   			}

   			cm := &corev1.ConfigMap{}
   			if err := cl.GetClient().Get(ctx, req.Request.NamespacedName, cm); err != nil {
   				if apierrors.IsNotFound(err) {
   					return ctrl.Result{}, nil
   				}
   				return ctrl.Result{}, err
   			}

   			log.Info("ConfigMap found",
   				"namespace", cm.Namespace,
   				"name", cm.Name,
   				"cluster", req.ClusterName)
   			return ctrl.Result{}, nil
   		},
   	)); err != nil {
   	// handle error
   }
   ```

6. Start the manager:

   ```go
   if err := mgr.Start(ctx); err != nil && !errors.Is(err, context.Canceled) {
   	// handle error
   }
   ```
At runtime:
- you create one Secret per member cluster in the chosen namespace,
- the provider discovers those Secrets and engages clusters accordingly,
- your controller receives `mcreconcile.Request` events for each cluster and resource,
- deleting or updating Secrets dynamically shrinks or reshapes the fleet.
For practical scripts that generate kubeconfig Secrets and RBAC on the member clusters, see `examples/kubeconfig/README.md` and the helper in `examples/kubeconfig/scripts/create-kubeconfig-secret.sh`.
## Field indexing behaviour
The Kubeconfig provider’s indexing behaviour mirrors the other providers:
- every call to `mgr.GetFieldIndexer().IndexField(...)` is forwarded to `Provider.IndexField`,
- the provider:
  - remembers the index definition for future clusters,
  - applies the index immediately to all current clusters.
During cluster creation, `createAndEngageCluster` calls `applyIndexers` before engaging the cluster. This ensures that, by the time the cluster is visible to the Multi-Cluster Manager and your controllers, its cache already has all registered indexes.
For consumers, the rule of thumb is:
- register indexes once, at manager setup time,
- assume they are available in every engaged cluster, regardless of when the Secret appeared (see the sketch below).
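For example, a sketch of registering an index at setup time, assuming the standard controller-runtime `FieldIndexer` signature; the `spec.nodeName` index on Pods is purely illustrative:

```go
// Registered once against the multi-cluster manager; the provider applies
// it to every current and future member cluster's cache.
if err := mgr.GetFieldIndexer().IndexField(ctx, &corev1.Pod{}, "spec.nodeName",
	func(obj client.Object) []string {
		pod := obj.(*corev1.Pod)
		return []string{pod.Spec.NodeName}
	},
); err != nil {
	// handle error
}
```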
## Security and RBAC considerations
There are two security planes to consider: the management cluster and the member clusters.
On the management cluster:
- The ServiceAccount used by your controller Pod must be able to `get`, `list`, and `watch` Secrets in `Options.Namespace`, specifically those Secrets that hold kubeconfigs and are labeled for this provider.
- If you use the example scripts:
  - they create ServiceAccounts, Roles/ClusterRoles, and bindings in the member clusters for your operator,
  - they also create the kubeconfig Secrets in the management cluster with the expected labels and keys.
On member clusters:
- The kubeconfig in each Secret determines what your controllers can do:
  - for production, prefer least-privilege service accounts per controller and per cluster,
  - keep tokens short‑lived where possible, or plan for rotation.
- Because the provider copies the kubeconfig bytes verbatim into a `rest.Config`, any mis‑scoped RBAC or overly broad role in the remote cluster is immediately available to your controllers.
Relation to KEP‑5339 (credentials plugins):
- The Kubeconfig provider uses static kubeconfig Secrets, not the ClusterProfile credentials-plugin model.
- For higher security and better rotation, consider:
  - using ClusterProfile plus credentials plugins with the Cluster Inventory API provider, or
  - layering an external mechanism that periodically refreshes the kubeconfig Secrets through a secure workflow.
## Operational notes and troubleshooting
Some common behaviours and how to reason about them:
- **Cluster never appears in `ListClusters` or `GetCluster`**
  - Check that:
    - the Secret is in `Options.Namespace`,
    - the Secret has the correct label key and value,
    - the Secret’s data contains the expected kubeconfig key.
  - Look at logs from the `kubeconfig-provider` logger for parse or RBAC errors.
- **`ErrClusterNotFound` in reconcilers**
  - This means the Secret never existed, was deleted, or existed but the provider never successfully created/engaged the cluster.
  - By default, controllers created with `mcbuilder` treat `ErrClusterNotFound` as a non‑fatal condition and drop the work item.
- **Secret updated, but behaviour still uses old credentials**
  - The provider compares the hash of the kubeconfig:
    - if the bytes changed, it tears down the old cluster, creates a new `cluster.Cluster` with the updated config, and re‑applies indexers before re‑engaging it,
    - if only metadata (annotations, unrelated data keys) changed, the kubeconfig hash is identical and the cluster is reused.
  - If you changed the kubeconfig but see no effect:
    - verify that the kubeconfig bytes really differ (for example, a changed context or token),
    - check logs for errors during the re-engagement path.
- **High fleet turnover**
  - Creating and deleting many Secrets in quick succession creates and tears down many `cluster.Cluster` instances, and starts and stops many caches.
  - For large fleets or frequent churn, consider more structured inventories (ClusterProfile), or providers built on `pkg/clusters.Clusters`, which can reuse infrastructure more aggressively.
## Summary
The Kubeconfig provider offers a straightforward bridge between kubeconfig Secrets and the multicluster-runtime fleet model: each labeled Secret becomes a `cluster.Cluster` engaged under the Secret’s name, and updates to those Secrets transparently rotate credentials and endpoints.
It is ideal for environments where kubeconfigs are already the primary integration surface, or where you need a simple, explicit on-ramp to multi-cluster controllers, while leaving room to evolve towards the more structured inventories and credential plugins described in KEP‑4322 and KEP‑5339.