File Provider
The File provider discovers clusters from kubeconfig files on the local filesystem.
It is a small, self-contained provider that is ideal when you already have kubeconfigs for your fleet and want to point multicluster-runtime at them without standing up a management API.
This chapter explains:
- what the File provider does and when to use it,
- how discovery, naming, and live reloading work,
- how to configure it via `file.Options`, and
- a concrete usage example based on `examples/file`.
For a conceptual overview of Providers, see Core Concepts — Providers.
When to use the File provider
Use the File provider when:
- You already manage clusters via kubeconfigs:
- You have one or more kubeconfig files checked into a repo, generated by a provisioning tool, or handed to you by operators.
- You want a controller to reconcile across those clusters without also depending on Cluster API or the ClusterProfile API.
- You want a very lightweight inventory:
- You are building a prototype or running in a lab environment.
- “The list of clusters” is “whatever kubeconfig files exist in these paths”.
- You are migrating to multicluster-runtime from existing kubeconfig-based tooling:
  - Existing tooling already reads `~/.kube/config` or a directory of kubeconfigs.
  - You want to reuse that layout as the initial source of truth.
You might prefer other providers when:
- you already have a Kubernetes-native inventory:
  - Cluster API `Cluster` objects → Cluster API provider,
  - ClusterProfile API (`ClusterProfile`) → Cluster Inventory API provider.
- you want kubeconfigs stored as Secrets in a hub cluster → Kubeconfig provider.
- you are doing local development with Kind → Kind provider.
- you want to simulate multi-cluster on a single cluster → Namespace provider.
How the File provider discovers and manages clusters
The File provider implementation lives in providers/file and implements both `multicluster.Provider` and `multicluster.ProviderRunnable`.
At runtime, it behaves like a filesystem-backed inventory:
- Kubeconfig sources
  - You configure:
    - `KubeconfigFiles`: explicit paths to kubeconfig files,
    - `KubeconfigDirs`: directories to search,
    - `KubeconfigGlobs`: glob patterns used inside those directories.
  - If both `KubeconfigFiles` and `KubeconfigDirs` are empty, defaults are applied:
    - If `$KUBECONFIG` points to a readable file, that path is used.
    - Else, if `$HOME/.kube/config` exists, it is used.
    - Else, the current working directory is added as a directory to search.
- Cluster creation from kubeconfigs
  - The provider first collects all kubeconfig file paths:
    - all existing `KubeconfigFiles`, and
    - for each `KubeconfigDir`, every file that matches one of `KubeconfigGlobs` (by default: `kubeconfig.yaml`, `kubeconfig.yml`, `*.kubeconfig`, `*.kubeconfig.yaml`, `*.kubeconfig.yml`).
  - For each kubeconfig file, it:
    - loads the file using `clientcmd.LoadFromFile`,
    - iterates over all contexts defined in that kubeconfig,
    - for each context:
      - builds a `*rest.Config` via `clientcmd.NewNonInteractiveClientConfig(...).ClientConfig()`,
      - turns it into a `cluster.Cluster` using `cluster.New(restConfig, opts.ClusterOptions...)`.
  - This yields a `map[string]cluster.Cluster` covering all contexts in all configured kubeconfig files.
- Cluster naming
  - Each context becomes a separate cluster identified by:
    - `ClusterName = "<absolute-or-relative-filepath><Separator><context-name>"`,
    - where `Separator` defaults to `"+"`.
  - For example, if:
    - file: `/home/user/.kube/config`,
    - context: `kind-dev`,
    - separator: `"+"`,
    - then: `ClusterName = "/home/user/.kube/config+kind-dev"`.
  - This makes cluster names:
    - unique as long as you do not reuse the same `(file path, context name)` pair,
    - deterministic and easy to relate back to the on-disk configuration.
  - Your reconcilers should continue to treat `ClusterName` as an opaque string:
    - log it,
    - use it for metrics and routing,
    - but avoid depending on the exact path format.
  - If you need a long-lived, globally unique cluster identity, use the About API / `ClusterProperty` (`cluster.clusterset.k8s.io`) in each member cluster as described in the Cluster Identification chapters.
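The naming scheme is simple enough to express directly; the helper below is illustrative, not part of the provider's API:

```go
package main

import "fmt"

// clusterName joins a kubeconfig path and a context name with the
// provider's separator, which defaults to "+".
func clusterName(path, context, separator string) string {
	if separator == "" {
		separator = "+"
	}
	return path + separator + context
}

func main() {
	fmt.Println(clusterName("/home/user/.kube/config", "kind-dev", "+"))
	// → /home/user/.kube/config+kind-dev
}
```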
- Initial synchronization and updates
  - When the provider starts (`Start(ctx, aware)`):
    - It performs an initial `run`:
      - loads all kubeconfigs,
      - builds all `cluster.Cluster` instances,
      - calls `AddOrReplace` for each cluster via the embedded `pkg/clusters.Clusters`,
      - and for each new cluster, `Clusters.AddOrReplace`:
        - starts the cluster’s cache in its own goroutine,
        - waits for `WaitForCacheSync`,
        - calls `aware.Engage(clusterCtx, clusterName, cl)` to hand the cluster to the Multi-Cluster Manager.
    - It records the set of known cluster names.
  - On later updates (see next section), the provider:
    - recomputes the current set of clusters from disk,
    - for each cluster in the new set:
      - logs “adding or updating cluster”,
      - calls `AddOrReplace` (replacing the existing cluster if its underlying config changed),
    - for any cluster that used to exist but is no longer present in the new set:
      - logs “removing cluster”,
      - calls `Remove(clusterName)`, which cancels its context and removes it from the fleet.

This model is very similar to the custom provider pattern described in Custom Providers: the on-disk kubeconfig layout is the “inventory”, and the provider reconciles that inventory with an in-memory map of `cluster.Cluster` instances.
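The inventory diff at the heart of that reconciliation can be sketched as a plain set comparison. The types and names here are illustrative; the real provider tracks `cluster.Cluster` values and engages them via the manager:

```go
package main

import "fmt"

// reconcileFleet compares the freshly discovered cluster set against the
// previously known names and reports what to AddOrReplace and what to
// Remove — the same inventory-diff pattern the provider applies on each run.
func reconcileFleet(known map[string]bool, discovered []string) (addOrReplace, remove []string) {
	seen := make(map[string]bool, len(discovered))
	for _, name := range discovered {
		seen[name] = true
		// Every discovered cluster is (re-)added; AddOrReplace makes
		// this safe even if the cluster already exists.
		addOrReplace = append(addOrReplace, name)
	}
	for name := range known {
		if !seen[name] {
			remove = append(remove, name)
		}
	}
	return addOrReplace, remove
}

func main() {
	known := map[string]bool{"cfg+dev": true, "cfg+old": true}
	add, del := reconcileFleet(known, []string{"cfg+dev", "cfg+new"})
	fmt.Println(add, del)
}
```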
Live reloading with filesystem watches
The File provider does not just load kubeconfigs once; it also watches relevant paths and keeps the fleet in sync as files change.
- Watcher setup
  - After the initial `run`, the provider creates an `fsnotify` watcher.
  - For each configured kubeconfig file:
    - it watches the parent directory rather than the file itself (to cope with editors or tools that replace files atomically).
  - For each configured kubeconfig directory:
    - it adds a watch on that directory.
- Event handling
  - The provider enters a loop:
    - if `ctx.Done()` is closed, it stops and returns.
    - if it receives a filesystem event:
      - logs `"received fsnotify event"` with the event data,
      - calls `run(ctx, aware)` again to recompute and apply the fleet state.
    - if it receives an error from the watcher:
      - logs `"file watcher error"` and continues, unless the error channel is closed.
- What changes are detected?
  - The provider recalculates all clusters from the configured files and directories after each event, so it reacts to:
    - new kubeconfig files created in a watched directory that match the glob patterns,
    - existing kubeconfig files being edited (for example, new contexts added or credentials rotated),
    - kubeconfig files or directories being deleted.
  - It does not try to determine which specific file changed; instead it treats the filesystem as the source of truth and reconciles against that.
Operationally this means:
- adding a new kubeconfig or context automatically engages a new cluster;
- removing a kubeconfig or context automatically removes the cluster from the fleet;
- editing endpoints or credentials in-place will cause the provider to recreate or replace cluster clients, thanks to `AddOrReplace`.
Configuration: file.Options
You construct a File provider via:
```go
provider, err := file.New(file.Options{
	KubeconfigFiles: []string{"/path/to/kubeconfig"},
	KubeconfigDirs:  []string{"/path/to/kubeconfig-dir"},
	KubeconfigGlobs: []string{"*.kubeconfig"},
	Separator:       "+",
	ClusterOptions:  []cluster.Option{/* ... */},
})
```

The `Options` type is:
```go
type Options struct {
	// Explicit kubeconfig file paths.
	KubeconfigFiles []string
	// Directories to search for kubeconfig files.
	KubeconfigDirs []string
	// Glob patterns to match kubeconfig files inside directories.
	// Defaults to a small set of common kubeconfig names.
	KubeconfigGlobs []string
	// String between file path and context name in the ClusterName.
	// Default: "+".
	Separator string
	// Options forwarded to controller-runtime cluster.New.
	ClusterOptions []cluster.Option
}
```

- `KubeconfigFiles`
  - Use this when you know the exact paths to kubeconfig files.
  - Each file may contain multiple contexts; each context yields a separate cluster.
  - Non-existent files are skipped with an informational log; other `os.Stat` errors abort the sync.
- `KubeconfigDirs` + `KubeconfigGlobs`
  - Use this when you want to treat “all kubeconfigs in this directory tree” as your fleet.
  - For each directory, the provider evaluates `filepath.Glob(filepath.Join(dir, glob))` for each glob.
  - By default (`KubeconfigGlobs` empty in `Options`), the provider searches for: `kubeconfig.yaml`, `kubeconfig.yml`, `*.kubeconfig`, `*.kubeconfig.yaml`, `*.kubeconfig.yml`.
  - Non-existent directories are skipped with an informational log; other errors abort the sync.
- `Separator`
  - Sets how the file path and context name are joined in the final `ClusterName`.
  - The default `"+"` avoids ambiguity with normal filesystem characters.
  - You can change it if:
    - you prefer a different delimiter (for example `"#"`), or
    - you want to line up with existing naming conventions in your metrics / logs.
- `ClusterOptions`
  - Extra options passed into `cluster.New`.
  - Use these to:
    - register additional schemes,
    - tune cache behaviour or resync periods,
    - adjust client QPS/Burst for heavy workloads.
If you call `file.New` with empty `KubeconfigFiles` and `KubeconfigDirs`, the provider automatically applies `defaultKubeconfigPaths` as described above; this is convenient for local development, where `~/.kube/config` is almost always present.
Using the File provider in a multi-cluster controller
The example program in examples/file/main.go demonstrates a complete setup that:
- discovers clusters from kubeconfig files/directories,
- uses a uniform reconciler to watch `ConfigMap`s in each cluster,
- logs when a `ConfigMap` is found.
In outline, the program:
- Parses CLI flags for kubeconfig locations:
  - `-kubeconfigs` — comma-separated list of kubeconfig file paths,
  - `-kubeconfig-dirs` — comma-separated list of directories to search,
  - `-globs` — comma-separated list of glob patterns (optional).
- Cleans up flag values:
  - converts empty `""` values into empty slices for files and dirs.
- Constructs the File provider:

  ```go
  provider, err := file.New(file.Options{
  	KubeconfigFiles: kubeconfigFiles,
  	KubeconfigDirs:  kubeconfigDirs,
  	KubeconfigGlobs: strings.Split(*fGlobs, ","),
  })
  ```

- Creates a Multi-Cluster Manager with the File provider:

  ```go
  mgr, err := mcmanager.New(ctrl.GetConfigOrDie(), provider, mcmanager.Options{})
  ```

- Registers a multi-cluster controller for `corev1.ConfigMap`:

  ```go
  if err := mcbuilder.ControllerManagedBy(mgr).
  	Named("multicluster-configmaps").
  	For(&corev1.ConfigMap{}).
  	Complete(mcreconcile.Func(func(ctx context.Context, req mcreconcile.Request) (ctrl.Result, error) {
  		log := ctrllog.FromContext(ctx).WithValues("cluster", req.ClusterName)
  		log.Info("Reconciling ConfigMap")

  		cl, err := mgr.GetCluster(ctx, req.ClusterName)
  		if err != nil {
  			return reconcile.Result{}, err
  		}

  		cm := &corev1.ConfigMap{}
  		if err := cl.GetClient().Get(ctx, req.Request.NamespacedName, cm); err != nil {
  			if apierrors.IsNotFound(err) {
  				return ctrl.Result{}, nil
  			}
  			return ctrl.Result{}, err
  		}

  		log.Info("ConfigMap found", "namespace", cm.Namespace, "name", cm.Name, "cluster", req.ClusterName)
  		return ctrl.Result{}, nil
  	})); err != nil {
  	// handle setup error
  }
  ```

- Starts the manager:

  ```go
  if err := mgr.Start(ctx); ignoreCanceled(err) != nil {
  	entryLog.Error(err, "unable to start")
  }
  ```
At runtime:
- the provider loads all configured kubeconfig files and directories,
- clusters are engaged under names like `/home/user/.kube/config+kind-kind`,
- the reconciler receives `mcreconcile.Request` values for each `ConfigMap` in each cluster.
This is structurally identical to the Kind example from Quickstart, but with the filesystem as the source of truth instead of Kind’s cluster registry.
Operational notes and best practices
- Cluster identity vs. file layout
  - The File provider’s `ClusterName` is derived from the file path and context name, not from a stable, cross-provider ID.
  - If you later adopt the About API (`ClusterProperty`) and ClusterProfile API (KEP‑2149, KEP‑4322), treat:
    - `ClusterName` as a routing key inside multicluster-runtime,
    - `cluster.clusterset.k8s.io` and `clusterset.k8s.io` as cross-system, long-lived identities.
- Security considerations
  - The File provider uses whatever kubeconfigs you point it at: if those kubeconfigs contain cluster-admin credentials, your controllers will have full power across all those clusters.
  - Good practice:
    - generate dedicated service accounts and kubeconfigs for your multi-cluster controllers,
    - store only the minimal necessary permissions in each kubeconfig,
    - use filesystem permissions or OS-level secrets management to protect the files.
- Fleet changes and error handling
  - If a kubeconfig file becomes unreadable or invalid, the provider logs an error, skips that file, and continues.
  - If `Get` is called for a cluster that no longer exists on disk:
    - `multicluster.ErrClusterNotFound` is returned via the `Clusters` helper.
    - The default controller wrapper treats this as a non-fatal, non-requeued condition.
- Use in CI and automation
  - The File provider is a good fit for CI jobs that:
    - generate short-lived kind or managed clusters,
    - write kubeconfigs to a known directory,
    - then run multi-cluster tests against that directory.
  - When combined with the Multi provider, you can:
    - mount a directory of kubeconfigs for one environment,
    - also use other providers (Cluster API, ClusterProfile) for additional fleets,
    - and run all of them under a single set of controllers.
In summary, the File provider offers a straightforward way to turn “a bunch of kubeconfig files” into a dynamic multi-cluster fleet, with live reloading and consistent engagement semantics shared with all other providers in multicluster-runtime.