Kind Provider
The Kind provider lets multicluster-runtime discover and connect to multiple local Kind clusters from a single controller process.
It is optimised for development, testing, and demos where you manage a small fleet of Docker‑backed clusters on your workstation or CI runner.
This chapter explains:
- what the Kind provider does and when to use it,
- how discovery and lifecycle management work,
- how to configure it via `kind.Options`, and
- concrete usage patterns and troubleshooting tips.
When to use the Kind provider
Use the Kind provider when:
- Developing locally:
  - You want quick feedback loops against a handful of ephemeral clusters.
  - You can create and destroy Kind clusters freely without impacting shared environments.
- Testing multi-cluster logic:
  - You want to validate your reconcilers across multiple clusters before plugging into a real inventory (Cluster API, ClusterProfile, kubeconfig Secrets, etc.).
  - You want deterministic, scriptable clusters for CI jobs.
- Learning multicluster-runtime:
  - You are following the Quickstart or experimenting with controller patterns, and you need a simple fleet source.
You should not use the Kind provider:
- for production fleets,
- when clusters live outside the local container runtime (for example, managed cloud clusters), or
- when you already have a stronger source of truth such as Cluster API or the Cluster Inventory API.
In those cases, prefer the Cluster API, Cluster Inventory API, Kubeconfig, or File providers instead.
How the Kind provider discovers clusters
The Kind provider implementation lives in `sigs.k8s.io/multicluster-runtime/providers/kind` and implements both `multicluster.Provider` and `multicluster.ProviderRunnable`.
At runtime it behaves as follows:
- Discovery loop
  - Internally, it uses the Kind Go library (`sigs.k8s.io/kind/pkg/cluster`) to call `kind.List()` in a loop.
  - The loop runs periodically (every few seconds) until the top‑level context is cancelled (for example, when the manager stops).
  - On each iteration it calculates:
    - new clusters that appeared since last time, and
    - clusters that disappeared (were deleted with `kind delete cluster`).
- Name filtering with `Prefix`
  - Each discovered Kind cluster has a name such as `fleet-alpha` or `dev-cluster`.
  - The provider exposes an optional `Prefix` field in `kind.Options`:
    - if `Prefix` is empty: all Kind clusters are considered part of the fleet,
    - if `Prefix` is non‑empty: only clusters whose names start with that prefix are included (for example, `"fleet-"`).
  - This makes it easy to share a host with other Kind clusters (for example, personal experiments) while letting your controller focus only on the clusters it owns.
- Constructing `cluster.Cluster` instances
  - For each matching Kind cluster name, the provider:
    - obtains its kubeconfig via `provider.KubeConfig(clusterName, false)`,
    - turns that kubeconfig into a `*rest.Config`,
    - applies any custom REST options (QPS, Burst, impersonation, etc.),
    - creates a `cluster.Cluster` via `cluster.New(restConfig, options...)`,
    - applies any previously registered field indexers to its cache,
    - starts the cluster’s cache in its own goroutine and waits for it to sync,
    - engages the cluster with the Multi-Cluster Manager.
- Removing clusters
  - If a cluster disappears from Kind’s `List` output:
    - the provider cancels the context associated with that cluster,
    - the cluster’s cache and informers stop,
    - the cluster is removed from the provider’s internal map,
    - subsequent `Get` calls for that name return `multicluster.ErrClusterNotFound`.
  - Controllers typically rely on the `ClusterNotFound` wrapper so these stale requests are treated as successful and are not requeued.
In short, the Kind provider turns whatever Kind clusters currently exist on your machine into a dynamic fleet for multicluster-runtime.
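To make the discover-and-filter step concrete, here is a minimal sketch using the same Kind Go library the provider wraps (`sigs.k8s.io/kind/pkg/cluster`, where listing happens via a `Provider` value). The `"fleet-"` prefix is illustrative, and the real provider additionally diffs results against its internal map, builds `cluster.Cluster` instances, and handles removals:

```go
package main

import (
    "fmt"
    "strings"

    kindcluster "sigs.k8s.io/kind/pkg/cluster"
)

func main() {
    provider := kindcluster.NewProvider()

    // List every Kind cluster known to the local container runtime.
    names, err := provider.List()
    if err != nil {
        panic(err)
    }

    // Keep only clusters whose names match the fleet prefix,
    // mirroring the behaviour of Options.Prefix.
    const prefix = "fleet-" // illustrative value
    for _, name := range names {
        if !strings.HasPrefix(name, prefix) {
            continue
        }
        // The real provider turns this kubeconfig into a *rest.Config
        // and constructs a cluster.Cluster from it.
        kubeconfig, err := provider.KubeConfig(name, false)
        if err != nil {
            continue
        }
        fmt.Printf("discovered %s (%d bytes of kubeconfig)\n", name, len(kubeconfig))
    }
}
```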
Configuration: `kind.Options`
You create a Kind provider with the `Options` struct:
```go
type Options struct {
    // Prefix is an optional prefix applied to filter kind clusters by name.
    Prefix string

    // ClusterOptions is the list of options to pass to the cluster object.
    ClusterOptions []cluster.Option

    // RESTOptions is the list of options to pass to the rest client.
    RESTOptions []func(cfg *rest.Config) error
}
```

- `Prefix`
  - Filters which Kind cluster names are considered part of the fleet.
  - Recommended in almost all real projects so you do not accidentally pick up unrelated clusters.
  - Examples:
    - `Prefix: "fleet-"` → matches `fleet-alpha`, `fleet-beta`, etc.
    - `Prefix: ""` → matches all Kind clusters on the host.
- `ClusterOptions`
  - Options forwarded to `cluster.New` from controller-runtime.
  - Use this to:
    - register additional API schemes,
    - tune cache behaviour or rate limits,
    - apply health probes or other cluster‑level options.
  - In many simple examples this slice is left empty, relying on defaults.
- `RESTOptions`
  - Functions that mutate the generated `*rest.Config` before it is used.
  - Typical uses:
    - adjusting `QPS`/`Burst` limits for heavy test workloads,
    - setting custom user agents for telemetry,
    - configuring TLS details if you have a customised Kind setup.
All three fields are optional; you can start with a minimal configuration and introduce advanced options later as needed.
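When you do need more than the defaults, a fuller configuration might look like the following sketch. The scheme registration, the `QPS`/`Burst` values, and the user agent are illustrative assumptions, not requirements:

```go
package main

import (
    "k8s.io/apimachinery/pkg/runtime"
    clientgoscheme "k8s.io/client-go/kubernetes/scheme"
    "k8s.io/client-go/rest"
    "sigs.k8s.io/controller-runtime/pkg/cluster"

    "sigs.k8s.io/multicluster-runtime/providers/kind"
)

func main() {
    // A scheme holding the types your controllers work with.
    scheme := runtime.NewScheme()
    _ = clientgoscheme.AddToScheme(scheme)

    provider := kind.New(kind.Options{
        // Only clusters named fleet-* belong to this fleet.
        Prefix: "fleet-",

        // Forwarded to cluster.New for each discovered cluster.
        ClusterOptions: []cluster.Option{
            func(o *cluster.Options) {
                o.Scheme = scheme
            },
        },

        // Mutate each generated *rest.Config before it is used.
        RESTOptions: []func(cfg *rest.Config) error{
            func(cfg *rest.Config) error {
                cfg.QPS = 50 // arbitrary values for heavier test workloads
                cfg.Burst = 100
                cfg.UserAgent = "fleet-controller"
                return nil
            },
        },
    })
    _ = provider // wire into mcmanager.New as shown in the next section
}
```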
Wiring the Kind provider into a Multi-Cluster Manager
In code, you typically:
1. Create the provider with the desired options.
2. Create a Multi-Cluster Manager using `mcmanager.New`.
3. Register controllers using `mcbuilder.ControllerManagedBy`.
4. Start the manager, which will start the provider automatically.
The following is a minimal example (very similar to the Quickstart) that:
- discovers Kind clusters whose names start with `fleet-`,
- watches `ConfigMap` objects in all engaged clusters,
- logs and emits an Event when a `ConfigMap` is found.
```go
package main

import (
    "context"
    "errors"
    "os"

    corev1 "k8s.io/api/core/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    ctrl "sigs.k8s.io/controller-runtime"
    ctrllog "sigs.k8s.io/controller-runtime/pkg/log"
    "sigs.k8s.io/controller-runtime/pkg/log/zap"
    "sigs.k8s.io/controller-runtime/pkg/manager/signals"
    "sigs.k8s.io/controller-runtime/pkg/reconcile"

    mcbuilder "sigs.k8s.io/multicluster-runtime/pkg/builder"
    mcmanager "sigs.k8s.io/multicluster-runtime/pkg/manager"
    mcreconcile "sigs.k8s.io/multicluster-runtime/pkg/reconcile"
    "sigs.k8s.io/multicluster-runtime/providers/kind"
)

func main() {
    ctrllog.SetLogger(zap.New(zap.UseDevMode(true)))
    log := ctrllog.Log.WithName("kind-example")
    ctx := signals.SetupSignalHandler()

    // 1. Discover Kind clusters whose names start with "fleet-".
    provider := kind.New(kind.Options{Prefix: "fleet-"})

    // 2. Create a Multi-Cluster Manager wired to the Kind provider.
    mgr, err := mcmanager.New(ctrl.GetConfigOrDie(), provider, mcmanager.Options{})
    if err != nil {
        log.Error(err, "unable to create manager")
        os.Exit(1)
    }

    // 3. Register a simple uniform reconciler for ConfigMaps.
    if err := mcbuilder.ControllerManagedBy(mgr).
        Named("multicluster-configmaps").
        For(&corev1.ConfigMap{}).
        Complete(mcreconcile.Func(
            func(ctx context.Context, req mcreconcile.Request) (ctrl.Result, error) {
                log := ctrllog.FromContext(ctx).WithValues("cluster", req.ClusterName)
                log.Info("Reconciling ConfigMap")

                cl, err := mgr.GetCluster(ctx, req.ClusterName)
                if err != nil {
                    return reconcile.Result{}, err
                }

                cm := &corev1.ConfigMap{}
                if err := cl.GetClient().Get(ctx, req.Request.NamespacedName, cm); err != nil {
                    if apierrors.IsNotFound(err) {
                        return ctrl.Result{}, nil
                    }
                    return ctrl.Result{}, err
                }

                cl.GetEventRecorderFor("kind-multicluster-configmaps").Event(
                    cm,
                    corev1.EventTypeNormal,
                    "ConfigMapFound",
                    "ConfigMap found in cluster "+req.ClusterName,
                )

                log.Info("ConfigMap found",
                    "namespace", cm.Namespace,
                    "name", cm.Name,
                    "cluster", req.ClusterName,
                )
                return ctrl.Result{}, nil
            },
        )); err != nil {
        log.Error(err, "unable to create controller")
        os.Exit(1)
    }

    // 4. Start the manager (and, transitively, the Kind provider).
    if err := mgr.Start(ctx); ignoreCanceled(err) != nil {
        log.Error(err, "manager exited with error")
        os.Exit(1)
    }
}

func ignoreCanceled(err error) error {
    if errors.Is(err, context.Canceled) {
        return nil
    }
    return err
}
```

Key points:
- You do not call `Start` on the Kind provider yourself; `mcmanager.Manager` detects that it implements `ProviderRunnable` and starts it automatically when you call `mgr.Start(ctx)`.
- Reconcilers receive `mcreconcile.Request`, which includes `ClusterName` plus the inner `reconcile.Request`.
- `mgr.GetCluster(ctx, req.ClusterName)` returns a per‑cluster `cluster.Cluster` that behaves just like a standard controller-runtime cluster, with its own client and cache.
For a step‑by‑step walkthrough of this example, including how to create Kind clusters and inspect Events, see Getting Started — Quickstart.
Runtime behaviour and lifecycle details
Some additional details that matter when operating the Kind provider:
- Polling interval
  - The provider uses a fixed polling interval (currently a few seconds) to re‑list Kind clusters.
  - This is sufficient for development and CI; if you create or delete clusters, they will be recognised on the next poll.
- Concurrency and safety
  - Internal maps of clusters and cancel functions are protected by a read‑write mutex.
  - `Get` is safe to call from multiple reconcilers concurrently.
- Field indexing
  - When you call `mgr.GetFieldIndexer().IndexField(...)`, the Multi-Cluster Manager forwards the registration to the provider.
  - The Kind provider:
    - records index definitions in memory, and
    - applies them to:
      - all existing clusters immediately, and
      - any newly discovered clusters when they are created.
  - This guarantees consistent indexing semantics even as clusters are added or removed (see the sketch after this list).
- Cluster naming
  - `ClusterName` in `mcreconcile.Request` is exactly the Kind cluster name (for example, `"fleet-alpha"`).
  - The provider does not add extra prefixes beyond whatever you set in `Options.Prefix`.
  - Your reconcilers should treat `ClusterName` as an opaque string and avoid baking in Kind‑specific assumptions.
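As an illustration of the field-indexing flow, here is a hedged sketch. The `metadata.labels.team` key, the `team` label, and the `registerTeamIndex` helper are hypothetical; it assumes the standard `FieldIndexer` signature from controller-runtime behind `mgr.GetFieldIndexer()`:

```go
package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    "sigs.k8s.io/controller-runtime/pkg/client"

    mcmanager "sigs.k8s.io/multicluster-runtime/pkg/manager"
)

// registerTeamIndex registers an index over a hypothetical "team" label on
// ConfigMaps. The Kind provider applies it to all existing clusters
// immediately and to newly discovered clusters as they appear.
func registerTeamIndex(ctx context.Context, mgr mcmanager.Manager) error {
    return mgr.GetFieldIndexer().IndexField(ctx, &corev1.ConfigMap{}, "metadata.labels.team",
        func(obj client.Object) []string {
            cm := obj.(*corev1.ConfigMap)
            if team, ok := cm.Labels["team"]; ok {
                return []string{team}
            }
            return nil
        },
    )
}
```

Once a cluster’s cache has synced, per‑cluster `List` calls can select on that key with `client.MatchingFields{"metadata.labels.team": "..."}`.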
Prerequisites and environment assumptions
Because the Kind provider talks directly to the local container runtime and kubeconfig files:
- The controller process must run in an environment where:
  - the Kind CLI (`kind`) is installed and available in `PATH`,
  - the underlying container runtime (typically Docker) is reachable.
- This usually means:
  - running the controller on your workstation during development, or
  - running it in a CI job container that has access to Docker‑in‑Docker or a similar setup.
You can still deploy a Kind‑backed controller into a Kubernetes cluster, but only if:
- the Pod’s container image includes the `kind` CLI, and
- the cluster nodes have access to the same Docker daemon where the Kind clusters live.

Most users start with a local binary (`go run ./...`) rather than an in‑cluster deployment when using the Kind provider.
Troubleshooting
- The manager fails to start with “failed to list kind clusters”
  - The Kind provider performs an initial `List` before entering the polling loop.
  - Check that:
    - `kind` is installed and in `PATH`,
    - your process has permission to talk to the container runtime.
  - Running `kind get clusters` manually in the same environment is a good sanity check.
- New Kind clusters are not discovered
  - Ensure their names match the configured `Prefix` (if any).
  - Wait at least one polling interval after creating the clusters.
  - Verify discovery by looking at controller logs for messages like `"Added new cluster"` and by running `kind get clusters`.
- Reconcilers see `ErrClusterNotFound`
  - This usually means:
    - the Kind cluster was deleted after events were queued, or
    - the cluster name never matched the provider’s prefix.
  - By default, controllers created with `mcbuilder` use a `ClusterNotFound` wrapper that:
    - treats this error as non‑fatal, and
    - does not requeue the request.
  - If you need custom metrics or logging, you can disable that wrapper and handle `ErrClusterNotFound` explicitly (see the sketch after this list).
- Load testing many Kind clusters
  - Kind clusters are relatively heavy; running too many in parallel can exhaust local CPU or memory.
  - Consider:
    - keeping fleets small for local development (a handful of clusters),
    - using the Namespace provider or other lightweight providers for large‑scale simulations,
    - moving to a more realistic provider (Cluster API or Cluster Inventory API) when testing higher cluster counts.
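For the `ErrClusterNotFound` case above, explicit handling might look like the following sketch. It assumes the default wrapper is disabled; the `reconcileOne` helper is hypothetical, and the `pkg/multicluster` import path is the one implied by the `multicluster.Provider` and `multicluster.ErrClusterNotFound` references earlier in this chapter:

```go
package main

import (
    "context"
    "errors"

    ctrl "sigs.k8s.io/controller-runtime"

    mcmanager "sigs.k8s.io/multicluster-runtime/pkg/manager"
    "sigs.k8s.io/multicluster-runtime/pkg/multicluster"
    mcreconcile "sigs.k8s.io/multicluster-runtime/pkg/reconcile"
)

// reconcileOne resolves the target cluster itself so it can observe
// ErrClusterNotFound directly, e.g. to record a custom metric before
// dropping the request.
func reconcileOne(ctx context.Context, mgr mcmanager.Manager, req mcreconcile.Request) (ctrl.Result, error) {
    cl, err := mgr.GetCluster(ctx, req.ClusterName)
    if err != nil {
        if errors.Is(err, multicluster.ErrClusterNotFound) {
            // Cluster was deleted (or never matched the prefix):
            // treat as success so the request is not requeued.
            return ctrl.Result{}, nil
        }
        return ctrl.Result{}, err
    }

    _ = cl.GetClient() // ... reconcile against this cluster's client ...
    return ctrl.Result{}, nil
}
```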
With these behaviours and caveats in mind, the Kind provider is an excellent starting point for exploring multicluster-runtime and validating multi-cluster controller designs on your laptop before integrating with production‑grade inventory systems.