# Providers

This chapter dives deeper into Providers, the components that tell multicluster-runtime which clusters exist and how to connect to them.

Where the Architecture, Key Concepts, and Multi-Cluster Manager chapters introduced Providers at a high level, this chapter focuses on:

- the `multicluster.Provider` and `ProviderRunnable` interfaces,
- how Providers discover clusters and engage them with the Multi-Cluster Manager,
- how the built-in Providers map to real-world inventory and identity systems,
- what to keep in mind when writing your own Provider.
## What a Provider is responsible for

Conceptually, a Provider answers two questions for the Multi-Cluster Manager:

- Discovery: Which clusters are part of my fleet right now?
  - Add clusters when they appear, remove them when they go away.
  - React to changes in cluster health, inventory, or credentials.
- Connectivity: How do I talk to each cluster?
  - Construct a `cluster.Cluster` (client, cache, indexer) for each member.
  - Keep per-cluster caches running and in sync.
The separation of concerns is:

- The Multi-Cluster Manager (`mcmanager.Manager`) owns:
  - the host cluster manager and global controller pipeline,
  - the list of multi-cluster-aware components that must be notified when new clusters appear.
- The Provider owns:
  - the source of truth for the fleet (Cluster API, ClusterProfile, kubeconfig, Kind, files, namespaces, …),
  - the lifecycle of per-cluster `cluster.Cluster` objects,
  - fan-out of field indexes across all engaged clusters.

Providers do not implement business logic; they are infrastructure that other controllers build on.
## The `multicluster.Provider` interface

At the core of multicluster-runtime is a small interface in `pkg/multicluster/multicluster.go`:

```go
type Provider interface {
	// Get returns a cluster for the given name, or ErrClusterNotFound if unknown.
	Get(ctx context.Context, clusterName string) (cluster.Cluster, error)

	// IndexField registers a field index on all engaged clusters, current and future.
	IndexField(ctx context.Context, obj client.Object, field string, extract client.IndexerFunc) error
}
```

Key points:
- `Get`
  - Returns an existing `cluster.Cluster` if the Provider has already created or engaged it.
  - May lazily create a new `cluster.Cluster` (for example, from a kubeconfig or API call) and start its cache.
  - Returns `multicluster.ErrClusterNotFound` when the name is not known; reconcilers and helpers like `ClusterNotFoundWrapper` treat this as a signal that the cluster has left the fleet.
  - The cluster name is a logical identifier (string) and should be stable over time; it often aligns with:
    - the `ClusterProperty`-based Cluster ID from KEP‑2149,
    - or a `ClusterProfile` name / property from KEP‑4322.
- `IndexField`
  - Registers a field index on all clusters now and in the future.
  - The Multi-Cluster Manager's `GetFieldIndexer()` stores definitions and delegates to `Provider.IndexField`, so controller authors can keep using `mgr.GetFieldIndexer().IndexField(ctx, &MyType{}, "spec.foo", extractFoo)` without worrying about new clusters that appear later.
  - Concrete Providers typically:
    - store the index definitions internally,
    - apply them to any existing `cluster.Cluster` instances immediately,
    - re-apply them to new clusters as they are created.
Because the interface is so small, it is straightforward to implement custom Providers that integrate with any inventory system, cluster registry, or bespoke source of truth.
## ProviderRunnable: driving discovery loops

Many Providers need to run a watch loop or background process to discover and track clusters over time.
For that, multicluster-runtime defines an optional interface:

```go
type ProviderRunnable interface {
	// Start is a long-running discovery loop.
	// It must block until the Provider should shut down.
	Start(ctx context.Context, aware multicluster.Aware) error
}
```

Responsibilities of `Start`:
- Observe the source of truth:
  - watch CAPI `Cluster` objects,
  - watch `ClusterProfile` objects,
  - poll Kind clusters or scan kubeconfig files,
  - watch Namespaces, Secrets, or other resources.
- Create and manage per-cluster `cluster.Cluster` instances:
  - build `*rest.Config` and `cluster.Cluster` objects,
  - call `cl.Start(clusterCtx)` and wait for `cl.GetCache().WaitForCacheSync(...)`,
  - cancel per-cluster contexts when clusters are removed.
- Engage clusters with the Manager:
  - call `aware.Engage(clusterCtx, name, cl)` once a cluster is ready,
  - ensure `Engage` is non-blocking from the Provider's perspective (the Manager takes care of wiring multi-cluster Sources and controllers).
When you construct a Multi-Cluster Manager via `mcmanager.New(config, provider, opts)`:

- if `provider` implements `ProviderRunnable`, the Manager will start it automatically on `mgr.Start(ctx)`;
- you should not call `Start` on such providers manually.

This pattern mirrors controller-runtime's `manager.Runnable` but is specific to cluster discovery and engagement.
## How Providers discover and engage clusters

Although Providers share a common interface, they fall into a few discovery patterns.
- Pre-constructed clusters
  - Example: `providers/single`, `providers/clusters`.
  - You create `cluster.Cluster` instances yourself (for example in tests), then:
    - feed them into the Provider,
    - or wrap a single cluster under a chosen name.
  - Useful in unit tests or when you want to reuse multicluster-runtime types in a mostly single-cluster setting.
- Polling-based discovery
  - Example: Kind Provider.
  - Periodically lists all Kind clusters using the Kind CLI library.
  - For each matching cluster name:
    - builds a kubeconfig and `cluster.Cluster`,
    - starts its cache,
    - calls `Engage(ctx, clusterName, cl)` on the Manager.
  - When a Kind cluster disappears, the Provider cancels its context and drops it from its internal map.
- File- and kubeconfig-based discovery
  - Example: File Provider, Kubeconfig Provider.
  - File Provider:
    - scans known kubeconfig files and directories (with glob patterns),
    - watches them with `fsnotify`,
    - creates/updates/removes clusters as files or contexts appear and disappear.
  - Kubeconfig Provider:
    - runs as a controller in the host cluster,
    - watches Secrets in a namespace with a specific label,
    - whenever a Secret with kubeconfig data appears or changes:
      - parses the kubeconfig into a `rest.Config`,
      - constructs / updates a `cluster.Cluster`,
      - engages it with the Multi-Cluster Manager.
- Kubernetes API–driven discovery
  - Example: Cluster API Provider, Cluster Inventory API Provider, Namespace Provider.
  - Cluster API Provider:
    - runs a controller that watches CAPI `Cluster` objects,
    - when a `Cluster` is provisioned, fetches its kubeconfig Secret,
    - constructs a `cluster.Cluster` and engages it.
  - Cluster Inventory API Provider:
    - watches `ClusterProfile` objects (KEP‑4322) on a hub cluster,
    - uses a pluggable kubeconfig strategy to obtain a `rest.Config`:
      - often implemented via the credential plugins defined in KEP‑5339,
      - for example, calling an external binary that uses Workload Identity Federation.
    - creates/updates a `cluster.Cluster` and engages it when the profile is healthy.
  - Namespace Provider:
    - watches `Namespace` objects in a single cluster,
    - treats each namespace as a virtual cluster backed by a shared `cluster.Cluster`,
    - engages a `NamespacedCluster` wrapper per namespace, mapping all operations into that namespace.
Across all of these, the engagement contract is the same:

- Providers are free to choose how and when clusters appear and disappear.
- Once they call `Engage(ctx, name, cl)`:
  - the Manager and multi-cluster Sources take over wiring,
  - controllers begin to receive `mcreconcile.Request{ClusterName: name, ...}` for that cluster.
## Built-in Providers at a glance

multicluster-runtime ships several Providers that cover common environments:
- Kind Provider (`providers/kind`)
  - Discovers local Kind clusters, optionally filtered by a name prefix.
  - Ideal for development, testing, and the Quickstart.
- File Provider (`providers/file`)
  - Discovers clusters from kubeconfig files on disk (paths and directories, with globs).
  - Watches files and directories and updates the fleet when kubeconfigs change.
- Kubeconfig Provider (`providers/kubeconfig`)
  - Runs in a management cluster and watches Secrets labelled as containing kubeconfigs.
  - Each Secret corresponds to a member cluster; the Provider builds a `cluster.Cluster` from the kubeconfig and engages it.
  - Pairs well with scripts or controllers that generate kubeconfig Secrets (for example, the example `create-kubeconfig-secret.sh` script).
- Cluster API Provider (`providers/cluster-api`)
  - Integrates with Cluster API (`cluster.x-k8s.io`).
  - Watches CAPI `Cluster` resources, obtains their kubeconfig from Secrets, and creates clusters when they reach the Provisioned phase.
  - Suitable when CAPI is already your source of truth for workload clusters.
- Cluster Inventory API Provider (`providers/cluster-inventory-api`)
  - Integrates with the ClusterProfile API from KEP‑4322.
  - Watches `ClusterProfile` resources on a hub cluster and:
    - uses `status.conditions` (for example `ControlPlaneHealthy`, `Joined`) to decide when a cluster is ready,
    - uses `status.properties` (including `cluster.clusterset.k8s.io` and `clusterset.k8s.io` from KEP‑2149) for identity and ClusterSet membership,
    - uses `status.credentialProviders` and plugins (per KEP‑5339) to obtain credentials.
  - This makes it a natural fit for platforms that expose a standardized cluster inventory.
- Namespace Provider (`providers/namespace`)
  - Treats each Namespace as a virtual cluster backed by a shared API server.
  - Useful for simulating multi-cluster behaviour on a single physical cluster.
- Composition and utility Providers
  - Multi Provider (`providers/multi`):
    - Allows you to register multiple Providers under name prefixes (for example, `kind#dev-cluster`, `capi#prod-eu1`).
    - Splits `ClusterName` into `providerPrefix` + `innerClusterName` and forwards calls appropriately.
  - Clusters Provider (`providers/clusters`):
    - Example Provider built on `pkg/clusters.Clusters`, used mainly for tests and demonstrations.
  - Single Provider (`providers/single`):
    - Wraps a single pre-constructed `cluster.Cluster` under a fixed name.
  - Nop Provider (`providers/nop`):
    - Implements the interface but never returns clusters; useful for tests or for explicitly single-cluster setups that still use multi-cluster types.
Details for each Provider (configuration flags, RBAC, and deployment patterns) are covered in the Providers Reference chapters.
## Cluster identity, inventory, and credentials (KEPs 2149, 4322, 5339)

Providers are where multicluster-runtime connects to the broader SIG‑Multicluster ecosystem:
- Cluster identity — KEP‑2149 (ClusterId for ClusterSet identification)
  - Defines `ClusterProperty` CRDs and well-known properties:
    - `cluster.clusterset.k8s.io`: a stable, unique cluster ID within a ClusterSet.
    - `clusterset.k8s.io`: identifies the ClusterSet membership.
  - Providers that integrate with ClusterSets (for example via ClusterProfile) should:
    - use these properties to derive stable `ClusterName` values,
    - treat them as the canonical identity across tools.
- Cluster inventory — KEP‑4322 (ClusterProfile API)
  - Defines the `ClusterProfile` resource as a portable representation of clusters in an inventory.
  - The Cluster Inventory API Provider reads:
    - `status.version` for Kubernetes versions,
    - `status.properties` for attributes like location and ClusterSet,
    - `status.conditions` (health, joined-ness),
    - `status.credentialProviders` for how to obtain credentials.
  - This lets multi-cluster controllers work across different inventory implementations (OCM, Clusternet, Fleet, Karmada, …) through a single API.
- Credentials — KEP‑5339 (Plugin for Credentials in ClusterProfile)
  - Introduces a plugin-based mechanism to turn a `ClusterProfile` into a `rest.Config`:
    - `credentialProviders` in the ClusterProfile's status describe what credential types a cluster accepts (for example `"google"` for GKE).
    - Controllers use a library that:
      - calls an external binary (plugin) per credentials type,
      - reuses the `exec` protocol from client-go's external credential providers,
      - returns a `rest.Config` to the Provider.
  - The Cluster Inventory API Provider is designed to plug into this model: it delegates all credential retrieval to the credential plugins and focuses on wiring the resulting `rest.Config` into `cluster.Cluster` instances.
By aligning Providers with these KEPs, multicluster-runtime makes cluster identity, inventory, and authentication pluggable and standardized, instead of baking cloud- or vendor-specific logic into each controller.
## Writing your own Provider

Implementing a custom Provider usually follows these steps:
1. Choose a cluster naming scheme
   - Decide how to map your inventory's notion of "cluster" to a stable, unique string.
   - Prefer IDs that:
     - are unique within the fleet (and ideally within a ClusterSet),
     - are stable across restarts and over the lifetime of the cluster,
     - can be mapped back to whatever ClusterProperty / ClusterProfile you use.
2. Decide how you will obtain `rest.Config` and credentials
   - For strongly-typed APIs like ClusterProfile, reuse the credential plugin mechanisms from KEP‑5339.
   - For simpler setups, you can:
     - read kubeconfigs from files, Secrets, or other CRDs,
     - or construct `rest.Config` programmatically (for example, for in-cluster or sidecar scenarios).
3. Construct `cluster.Cluster` instances
   - Use `cluster.New(restConfig, options...)` from controller-runtime.
   - Apply options as needed (for example, custom schemes, rate limits, health checks).
   - Start each cluster with its own `clusterCtx`:
     - `go cl.Start(clusterCtx)`,
     - wait for `cl.GetCache().WaitForCacheSync(clusterCtx)` before calling `Engage`.
4. Implement the Provider interfaces
   - Implement `Get` and `IndexField` directly, or embed the reusable `clusters.Clusters[T]` helper to handle:
     - storing clusters in a map,
     - starting them in goroutines,
     - re-applying indexes to new clusters.
   - If you have a long-running discovery loop, also implement `ProviderRunnable.Start(ctx, aware)`:
     - watch or poll your source of truth,
     - create/update/remove `cluster.Cluster` instances as needed,
     - call `aware.Engage(clusterCtx, name, cl)` for new or updated clusters.
5. Integrate with the Multi-Cluster Manager
   - Pass your Provider to `mcmanager.New(config, provider, opts)` (or `WithMultiCluster`).
   - Use `mcbuilder.ControllerManagedBy(mgr)` to register controllers; they will automatically start receiving `mcreconcile.Request`s for your clusters.
For more advanced scenarios:
- Use the Multi Provider to combine multiple cluster sources under different prefixes.
- Use the Clusters helper or Single/Nop Providers for tests and experimental setups.
With these pieces, you can evolve from simple, static fleets to rich, KEP-aligned multi-cluster inventories without changing your controller business logic.