Expected Repo Structure

Fleet will create bundles from a git repository. This happens either explicitly by specifying paths, or when a fleet.yaml is found.

Each bundle is created from paths in a GitRepo and modified further by reading the discovered fleet.yaml file. Bundle lifecycles are tracked between releases by the Helm releaseName field added to each bundle. If the releaseName is not specified in fleet.yaml, it is generated from the GitRepo name plus the path, with path separators replaced by "-" (for example, a GitRepo named one with the path multi-cluster/hello-world yields one-multi-cluster-hello-world). Long names are truncated and a -<hash> suffix is added.

The git repository has no explicitly required structure. It is important to realize that the scanned resources will be stored as resources in Kubernetes, so make sure the directories you are scanning in git do not contain arbitrarily large resources. Right now there is a limitation that the deployed resources must gzip to less than 1MB.

How repos are scanned

Multiple paths can be defined for a GitRepo and each path is scanned independently. Internally each scanned path will become a bundle that Fleet will manage, deploy, and monitor independently.
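
For illustration only, a minimal GitRepo that scans two paths might look like the sketch below (the repository URL and path names are hypothetical); each listed path is scanned on its own and produces its own bundle:

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: example-repo
  namespace: fleet-default
spec:
  repo: https://github.com/example/fleet-config
  branch: main
  paths:
    # Each path is scanned independently and becomes its own bundle.
    - apps/guestbook
    - infra/monitoring
```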

The following files are looked for to determine how the resources will be deployed.

| File | Location | Meaning |
|------|----------|---------|
| Chart.yaml | / relative to path or custom path from fleet.yaml | The resources will be deployed as a Helm chart. Refer to the fleet.yaml for more options. |
| kustomization.yaml | / relative to path or custom path from fleet.yaml | The resources will be deployed using Kustomize. Refer to the fleet.yaml for more options. |
| fleet.yaml | Any subpath | If any fleet.yaml is found a new bundle will be defined. This allows mixing charts, kustomize, and raw YAML in the same repo. |
| *.yaml | Any subpath | If a Chart.yaml or kustomization.yaml is not found, then any .yaml or .yml file will be assumed to be a Kubernetes resource and will be deployed. |
| overlays/{name} | / relative to path | When deploying using raw YAML (not Kustomize or Helm) overlays is a special directory for customizations. |
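
As a purely hypothetical layout, a repository mixing these styles could look like the following; each path listed in the GitRepo (or each directory with its own fleet.yaml) becomes a separate bundle:

```
helm-app/
  fleet.yaml
  Chart.yaml            # deployed as a Helm chart
  templates/
kustomize-app/
  fleet.yaml
  kustomization.yaml    # deployed with Kustomize
raw-app/
  fleet.yaml
  deployment.yaml       # plain YAML resources
  overlays/
    prod/
      deployment_patch.yaml
```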

fleet.yaml

The fleet.yaml is an optional file that can be included in the git repository to change how the resources are deployed and customized. The fleet.yaml is always at the root relative to the path of the GitRepo. If a subdirectory containing a fleet.yaml is found, a new bundle is defined that will then be configured differently from the parent bundle.

Caution

Helm chart dependencies: It is up to the user to fulfill the dependency list for the Helm charts. As such, you must manually run helm dependencies update $chart or helm dependencies build $chart prior to install. See the Fleet docs in Rancher for more information.

Reference

Info

How changes are applied to values.yaml:

- Note that the most recently applied changes to the values.yaml will override any previously existing values.

- When changes are applied to the values.yaml from multiple sources at the same time, the values will update in the following order: helm.values -> helm.valuesFiles -> helm.valuesFrom.
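
As a hypothetical illustration of that ordering: if helm.values, a file listed in helm.valuesFiles, and a ConfigMap referenced via helm.valuesFrom all set the same key, the valuesFrom entry wins because it is applied last (the file name and ConfigMap name below are made up):

```yaml
helm:
  values:
    replicas: 1          # applied first
  valuesFiles:
    - values-prod.yaml   # e.g. sets replicas: 2, overriding helm.values
  valuesFrom:
    - configMapKeyRef:   # e.g. sets replicas: 3, applied last and wins
        name: example-values
        namespace: default
        key: values.yaml
```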

```yaml
# The default namespace to be applied to resources. This field is not used to
# enforce or lock down the deployment to a specific namespace, but instead
# provide the default value of the namespace field if one is not specified
# in the manifests.
# Default: default
defaultNamespace: default

# All resources will be assigned to this namespace and if any cluster scoped
# resource exists the deployment will fail.
# Default: ""
namespace: default

kustomize:
  # Use a custom folder for kustomize resources. This folder must contain
  # a kustomization.yaml file.
  dir: ./kustomize

helm:
  # Use a custom location for the Helm chart. This can refer to any go-getter URL or
  # OCI registry based helm chart URL e.g. "oci://ghcr.io/fleetrepoci/guestbook".
  # This allows one to download charts from most any location. Also know that
  # go-getter URL supports adding a digest to validate the download. If repo
  # is set below, this field is the name of the chart to look up.
  chart: ./chart
  # A https URL to a Helm repo to download the chart from. It's typically easier
  # to just use the `chart` field and refer to a tgz file. If repo is used the
  # value of `chart` will be used as the chart name to look up in the Helm repository.
  repo: https://charts.rancher.io
  # A custom release name to deploy the chart as. If not specified a release name
  # will be generated by combining the invoking GitRepo.name + GitRepo.path.
  releaseName: my-release
  # The version of the chart or semver constraint of the chart to find. If a constraint
  # is specified it is evaluated each time git changes.
  # The version also determines which chart to download from OCI registries.
  version: 0.1.0
  # Any values that should be placed in the `values.yaml` and passed to helm during
  # install.
  values:
    any-custom: value
    # All labels on Rancher clusters are available using global.fleet.clusterLabels.LABELNAME
    # These can now be accessed directly as variables
    variableName: global.fleet.clusterLabels.LABELNAME
  # Path to any values files that need to be passed to helm during install
  valuesFiles:
    - values1.yaml
    - values2.yaml
  # Allow to use values files from configmaps or secrets defined in the downstream clusters
  valuesFrom:
    - configMapKeyRef:
        name: configmap-values
        # defaults to the namespace of the bundle
        namespace: default
        key: values.yaml
      secretKeyRef:
        name: secret-values
        namespace: default
        key: values.yaml
  # Override immutable resources. This could be dangerous.
  force: false
  # Set the Helm --atomic flag when upgrading
  atomic: false

# A paused bundle will not update downstream clusters but instead mark the bundle
# as OutOfSync. One can then manually confirm that a bundle should be deployed to
# the downstream clusters.
# Default: false
paused: false

rolloutStrategy:
  # A number or percentage of clusters that can be unavailable during an update
  # of a bundle. This follows the same basic approach as a deployment rollout
  # strategy. Once the number of unavailable clusters reaches this value the update
  # will be paused. Default value is 100% which doesn't take effect on update.
  # default: 100%
  maxUnavailable: 15%
  # A number or percentage of cluster partitions that can be unavailable during
  # an update of a bundle.
  # default: 0
  maxUnavailablePartitions: 20%
  # A number or percentage used to automatically partition clusters if no
  # specific partitioning strategy is configured.
  # default: 25%
  autoPartitionSize: 10%
  # A list of definitions of partitions. If any target clusters do not match
  # the configuration they are added to partitions at the end following the
  # autoPartitionSize.
  partitions:
    # A user-friendly name given to the partition, used for display (optional).
    # default: ""
    - name: canary
      # A number or percentage of clusters that can be unavailable in this
      # partition before this partition is treated as done.
      # default: 10%
      maxUnavailable: 10%
      # Selector matching cluster labels to include in this partition
      clusterSelector:
        matchLabels:
          env: prod
      # A cluster group name to include in this partition
      clusterGroup: agroup
      # Selector matching cluster group labels to include in this partition
      clusterGroupSelector: agroup

# Target customizations are used to determine how resources should be modified per target.
# Targets are evaluated in order and the first one to match a cluster is used for that cluster.
targetCustomizations:
  # The name of the target. If not specified a default name of the format "target000"
  # will be used. This value is mostly for display.
  - name: prod
    # Custom namespace value overriding the value at the root
    namespace: newvalue
    # Custom defaultNamespace value overriding the value at the root
    defaultNamespace: newdefaultvalue
    # Custom kustomize options overriding the options at the root
    kustomize: {}
    # Custom Helm options overriding the options at the root
    helm: {}
    # If using raw YAML these are names that map to overlays/{name} that will be used
    # to replace or patch a resource. If you wish to customize the file ./subdir/resource.yaml
    # then a file ./overlays/myoverlay/subdir/resource.yaml will replace the base file.
    # A file named ./overlays/myoverlay/subdir/resource_patch.yaml will patch the base file.
    # A patch can be in JSON Patch or JSON Merge format or a strategic merge patch for builtin
    # Kubernetes types. Refer to "Raw YAML Resource Customization" below for more information.
    yaml:
      overlays:
        - custom2
        - custom3
    # A selector used to match clusters. The structure is the standard
    # metav1.LabelSelector format. If clusterGroupSelector or clusterGroup is specified,
    # clusterSelector will be used only to further refine the selection after
    # clusterGroupSelector and clusterGroup are evaluated.
    clusterSelector:
      matchLabels:
        env: prod
    # A selector used to match a specific cluster by name.
    clusterName: dev-cluster
    # A selector used to match cluster groups.
    clusterGroupSelector:
      matchLabels:
        region: us-east
    # A specific clusterGroup by name that will be selected
    clusterGroup: group1

# dependsOn allows you to configure dependencies to other bundles. The current bundle
# will only be deployed after all dependencies are deployed and in a Ready state.
dependsOn:
  # Format: <GITREPO-NAME>-<BUNDLE_PATH> with all path separators replaced by "-"
  # Example: GitRepo name "one", Bundle path "/multi-cluster/hello-world" => "one-multi-cluster-hello-world"
  - name: one-multi-cluster-hello-world
```

Private Helm Repositories

For a private Helm repo, users can reference a secret from the git repo resource. See Using Private Helm Repositories for more information.
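
As a rough sketch (the secret name and repository URL are hypothetical), the GitRepo references such a secret through its helmSecretName field:

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: example-private
  namespace: fleet-default
spec:
  repo: https://github.com/example/fleet-config
  # Secret in the same namespace holding credentials for the private Helm repo.
  helmSecretName: helm-repo-creds
```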

Using ValuesFrom

These examples showcase the style and format for using valuesFrom. ConfigMaps and Secrets should be created in downstream clusters.

Example ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-values
  namespace: default
data:
  values.yaml: |-
    replication: true
    replicas: 2
    serviceType: NodePort
```

Example Secret:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-values
  namespace: default
stringData:
  values.yaml: |-
    replication: true
    replicas: 2
    serviceType: NodePort
```
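
A fleet.yaml could then reference both of them; a minimal sketch mirroring the reference above:

```yaml
helm:
  valuesFrom:
    - configMapKeyRef:
        name: configmap-values
        namespace: default
        key: values.yaml
    - secretKeyRef:
        name: secret-values
        namespace: default
        key: values.yaml
```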

Per Cluster Customization

The GitRepo defines which clusters a git repository should be deployed to and the fleet.yaml in the repository determines how the resources are customized per target.

All clusters and cluster groups in the same namespace as the GitRepo will be evaluated against all targets of that GitRepo. The targets list is evaluated one by one and if there is a match the resource will be deployed to the cluster. If no match is made against the target list on the GitRepo then the resources will not be deployed to that cluster. Once a target cluster is matched the fleet.yaml from the git repository is then consulted for customizations. The targetCustomizations in the fleet.yaml will be evaluated one by one and the first match will define how the resource is to be configured. If no match is made the resources will be deployed with no additional customizations.
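
For example, a GitRepo's targets list might look like the following sketch (the cluster labels and group name are hypothetical); only clusters matching one of these entries receive the resources, and the targetCustomizations in fleet.yaml are then evaluated for each matched cluster:

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: example-repo
  namespace: fleet-default
spec:
  repo: https://github.com/example/fleet-config
  targets:
    # Evaluated one by one; a cluster matching any entry is targeted.
    - name: prod
      clusterSelector:
        matchLabels:
          env: prod
    - name: dev
      clusterGroup: dev-group
```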

There are three approaches to matching clusters for both GitRepo targets and fleet.yaml targetCustomizations. One can use cluster selectors, cluster group selectors, or an explicit cluster group name. All criteria are additive, so the final match is evaluated as “clusterSelector && clusterGroupSelector && clusterGroup”. If any of the three has the default value it is dropped from the criteria. The default value is either null or “”. It is important to realize that the value {} for a selector means “match everything.”

```yaml
# Match everything
clusterSelector: {}
# Selector ignored
clusterSelector: null
```

Raw YAML Resource Customization

When using Kustomize or Helm, the kustomization.yaml or the helm.values will control how the resources are customized per target cluster. If you are using raw YAML, the following simple mechanism is built in and can be used. The overlays/ folder in the git repo is treated specially as a folder containing folders that can be selected to overlay on top per target cluster. The resource overlay content uses a file-name-based approach. This is different from kustomize, which uses a resource-based approach. In kustomize the resource Group, Kind, Version, Name, and Namespace identify resources, which are then merged or patched. For Fleet, the overlay resources will override or patch content with a matching file name.

```
# Base files
deployment.yaml
svc.yaml
# Overlay files
# The following file will be added
overlays/custom/configmap.yaml
# The following file will replace svc.yaml
overlays/custom/svc.yaml
# The following file will patch deployment.yaml
overlays/custom/deployment_patch.yaml
```

A file named foo will replace a file called foo from the base resources or a previous overlay. In order to patch the contents of a file, the convention of adding _patch. (notice the trailing period) to the filename is used. The string _patch. will be replaced with . to determine the target file; for example, deployment_patch.yaml will target deployment.yaml. The patch will be applied using JSON Merge, Strategic Merge Patch, or JSON Patch. Which strategy is used is based on the file content. Even though JSON strategies are used, the files can be written using YAML syntax.
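
For instance, a hypothetical overlays/custom/deployment_patch.yaml targeting deployment.yaml could be a small merge patch written in YAML syntax; only the fields it lists are changed in the base file:

```yaml
# overlays/custom/deployment_patch.yaml (hypothetical)
spec:
  replicas: 3
```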

Cluster and Bundle state

See Cluster and Bundle state.