Frequently Asked Questions

This page provides help with the most common questions about Helm.

We’d love your help making this document better. To add, correct, or remove information, file an issue or send us a pull request.

Changes since Helm 2

Here’s an exhaustive list of all the major changes introduced in Helm 3.

Removal of Tiller

During the Helm 2 development cycle, we introduced Tiller. Tiller played an important role for teams working on a shared cluster - it made it possible for multiple different operators to interact with the same set of releases.

With role-based access controls (RBAC) enabled by default in Kubernetes 1.6, locking down Tiller for use in a production scenario became more difficult to manage. Due to the vast number of possible security policies, our stance was to provide a permissive default configuration. This allowed first-time users to start experimenting with Helm and Kubernetes without having to dive headfirst into the security controls. Unfortunately, this permissive configuration could grant a user a broad range of permissions they weren’t intended to have. DevOps and SREs had to learn additional operational steps when installing Tiller into a multi-tenant cluster.

After hearing how community members were using Helm in certain scenarios, we found that Tiller’s release management system did not need to rely upon an in-cluster operator to maintain state or act as a central hub for Helm release information. Instead, we could simply fetch information from the Kubernetes API server, render the Charts client-side, and store a record of the installation in Kubernetes.

Tiller’s primary goal could be accomplished without Tiller, so one of the first decisions we made regarding Helm 3 was to completely remove Tiller.

With Tiller gone, the security model for Helm is radically simplified. Helm 3 now supports all the modern security, identity, and authorization features of modern Kubernetes. Helm’s permissions are evaluated using your kubeconfig file. Cluster administrators can restrict user permissions at whatever granularity they see fit. Releases are still recorded in-cluster, and the rest of Helm’s functionality remains.

Improved Upgrade Strategy: 3-way Strategic Merge Patches

Helm 2 used a two-way strategic merge patch. During an upgrade, it compared the most recent chart’s manifest against the proposed chart’s manifest (the one supplied during helm upgrade). It compared the differences between these two charts to determine what changes needed to be applied to the resources in Kubernetes. If changes were applied to the cluster out-of-band (such as during a kubectl edit), those changes were not considered. This resulted in resources being unable to roll back to their previous state: because Helm only considered the last applied chart’s manifest as its current state, if there were no changes in the chart’s state, the live state was left unchanged.

In Helm 3, we now use a three-way strategic merge patch. Helm considers the old manifest, its live state, and the new manifest when generating a patch.

Examples

Let’s go through a few common examples of what this change impacts.

Rolling back where live state has changed

Your team just deployed their application to production on Kubernetes using Helm. The chart contains a Deployment object where the number of replicas is set to three:

  $ helm install myapp ./myapp

A new developer joins the team. On their first day while observing the production cluster, a horrible coffee-spilling-on-the-keyboard accident happens and they kubectl scale the production deployment from three replicas down to zero.

  $ kubectl scale --replicas=0 deployment/myapp

Another developer on your team notices that the production site is down and decides to roll back the release to its previous state:

  $ helm rollback myapp

What happens?

In Helm 2, it would generate a patch, comparing the old manifest against the new manifest. Because this is a rollback, it’s the same manifest. Helm would determine that there is nothing to change because there is no difference between the old manifest and the new manifest. The replica count stays at zero. Panic ensues.

In Helm 3, the patch is generated using the old manifest, the live state, and the new manifest. Helm recognizes that the old state was at three, the live state is at zero, and the new manifest wishes to change it back to three, so it generates a patch to change the state back to three.

Upgrades where live state has changed

Many service meshes and other controller-based applications inject data into Kubernetes objects. This can be something like a sidecar, labels, or other information. Previously, if you had the given manifest rendered from a Chart:

  containers:
  - name: server
    image: nginx:2.0.0

And the live state was modified by another application to:

  containers:
  - name: server
    image: nginx:2.0.0
  - name: my-injected-sidecar
    image: my-cool-mesh:1.0.0

Now, you want to upgrade the nginx image tag to 2.1.0. So, you upgrade to a chart with the given manifest:

  containers:
  - name: server
    image: nginx:2.1.0

What happens?

In Helm 2, Helm generates a patch of the containers object between the old manifest and the new manifest. The cluster’s live state is not considered during the patch generation.

The cluster’s live state is modified to look like the following:

  containers:
  - name: server
    image: nginx:2.1.0

The sidecar container is removed from the live state. More panic ensues.

In Helm 3, Helm generates a patch of the containers object between the old manifest, the live state, and the new manifest. It notices that the new manifest changes the image tag to 2.1.0, but the live state contains a sidecar container.

The cluster’s live state is modified to look like the following:

  containers:
  - name: server
    image: nginx:2.1.0
  - name: my-injected-sidecar
    image: my-cool-mesh:1.0.0

Release Names are now scoped to the Namespace

With the removal of Tiller, the information about each release had to go somewhere. In Helm 2, this was stored in the same namespace as Tiller. In practice, this meant that once a name was used by a release, no other release could use that same name, even if it was deployed in a different namespace.

In Helm 3, release information about a particular release is now stored in the same namespace as the release itself. This means that users can now helm install wordpress stable/wordpress in two separate namespaces, and each can be referred to with helm list by changing the current namespace context (e.g. helm list --namespace foo).

Secrets as the default storage driver

Helm 2 used ConfigMaps by default to store release information. In Helm 3, Secrets are now used as the default storage driver.

Go import path changes

In Helm 3, Helm switched the Go import path over from k8s.io/helm to helm.sh/helm/v3. If you intend to upgrade to the Helm 3 Go client libraries, make sure to change your import paths.

Capabilities

The .Capabilities built-in object available during the rendering stage has been simplified. See Built-in Objects for more information.

Validating Chart Values with JSONSchema

A JSON Schema can now be imposed upon chart values. This ensures that values provided by the user follow the schema laid out by the chart maintainer, providing better error reporting when the user provides an incorrect set of values for a chart.

Validation occurs when any of the following commands are invoked:

  • helm install
  • helm upgrade
  • helm template
  • helm lint

See the documentation on Schema files for more information.
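As an illustrative sketch (values.schema.json is the file name Helm looks for in the chart root; the replicaCount key is a hypothetical value used for illustration), a schema requiring an integer replica count might look like:

```json
{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "replicaCount": {
      "type": "integer",
      "minimum": 1,
      "description": "Number of replicas for the Deployment"
    }
  },
  "required": ["replicaCount"]
}
```

With this file placed alongside values.yaml in the chart, helm install would reject a values file that omits replicaCount or sets it to a non-integer.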

Consolidation of requirements.yaml into Chart.yaml

The Chart dependency management system moved from requirements.yaml and requirements.lock to Chart.yaml and Chart.lock. We recommend that new charts meant for Helm 3 use the new format. However, Helm 3 still understands Chart API version 1 (v1) and will load existing requirements.yaml files.

In Helm 2, this is how a requirements.yaml looked:

  dependencies:
  - name: mariadb
    version: 5.x.x
    repository: https://kubernetes-charts.storage.googleapis.com/
    condition: mariadb.enabled
    tags:
      - database

In Helm 3, the dependency is expressed the same way, but now from your Chart.yaml:

  dependencies:
  - name: mariadb
    version: 5.x.x
    repository: https://kubernetes-charts.storage.googleapis.com/
    condition: mariadb.enabled
    tags:
      - database

Charts are still downloaded and placed in the charts/ directory, so subcharts vendored into the charts/ directory will continue to work without modification.
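Putting the pieces together, a minimal Helm 3 Chart.yaml carrying the dependency above might look like this sketch (the chart name, description, and version are hypothetical):

```yaml
apiVersion: v2          # Chart API version 2: the Helm 3 package format
name: myapp             # hypothetical chart name
description: An example application chart
version: 0.1.0
dependencies:
- name: mariadb
  version: 5.x.x
  repository: https://kubernetes-charts.storage.googleapis.com/
  condition: mariadb.enabled
```

Running helm dependency update against such a chart resolves the dependency into charts/ and writes a Chart.lock file.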

Name (or --generate-name) is now required on install

In Helm 2, if no name was provided, an auto-generated name would be given. In production, this proved to be more of a nuisance than a helpful feature. In Helm 3, Helm will throw an error if no name is provided with helm install.

For those who still wish to have a name auto-generated, you can use the --generate-name flag to create one for you.

Pushing Charts to OCI Registries

This is an experimental feature introduced in Helm 3. To use, set the environment variable HELM_EXPERIMENTAL_OCI=1.

At a high level, a Chart Repository is a location where Charts can be stored and shared. The Helm client packs and ships Helm Charts to a Chart Repository. Simply put, a Chart Repository is a basic HTTP server that houses an index.yaml file and some packaged charts.

While there are several benefits to the Chart Repository API meeting the most basic storage requirements, a few drawbacks have started to show:

  • Chart Repositories have a very hard time abstracting most of the security implementations required in a production environment. Having a standard API for authentication and authorization is very important in production scenarios.
  • Helm’s Chart provenance tools used for signing and verifying the integrity and origin of a chart are an optional piece of the Chart publishing process.
  • In multi-tenant scenarios, the same Chart can be uploaded by another tenant, costing twice the storage cost to store the same content. Smarter chart repositories have been designed to handle this, but it’s not a part of the formal specification.
  • Using a single index file for search, metadata information, and fetching Charts has made it difficult or clunky to design around in secure multi-tenant implementations.

Docker’s Distribution project (also known as Docker Registry v2) is the successor to the Docker Registry project. Many major cloud vendors have a product offering of the Distribution project, and with so many vendors offering the same product, the Distribution project has benefited from many years of hardening, security best practices, and battle-testing.

Please have a look at helm help chart and helm help registry for more information on how to package a chart and push it to a Docker registry.

For more info, please see this page.

Removal of helm serve

helm serve ran a local Chart Repository on your machine for development purposes. However, it didn’t receive much uptake as a development tool and had numerous issues with its design. In the end, we decided to remove it and split it out as a plugin.

Library chart support

Helm 3 supports a class of chart called a “library chart”. This is a chart that is shared by other charts, but does not create any release artifacts of its own. A library chart’s templates can only declare define elements. Globally scoped non-define content is simply ignored. This allows users to share snippets of code that can be re-used across many charts, avoiding redundancy and keeping charts DRY.

Library charts are declared in the dependencies directive in Chart.yaml, and areinstalled and managed like any other chart.

  dependencies:
  - name: mylib
    version: 1.x.x
    repository: quay.io

We’re very excited to see the use cases this feature opens up for chart developers, as well as any best practices that arise from consuming library charts.
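As a sketch of what a library chart might contain (the chart name mylib, the define name, and the file name are all hypothetical), a file such as templates/_labels.tpl could hold nothing but define blocks:

```yaml
{{- define "mylib.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
```

A consuming chart would then call {{ include "mylib.labels" . }} from its own templates; since the library chart declares no globally scoped non-define content, it renders no release artifacts of its own.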

Chart.yaml apiVersion bump

With the introduction of library chart support and the consolidation of requirements.yaml into Chart.yaml, clients that understood Helm 2’s package format won’t understand these new features. So, we bumped the apiVersion in Chart.yaml from v1 to v2.

helm create now creates charts using this new format, so the default apiVersion was bumped there as well.

Clients wishing to support both versions of Helm charts should inspect the apiVersion field in Chart.yaml to understand how to parse the package format.

XDG Base Directory Support

The XDG Base Directory Specification is a portable standard defining where configuration, data, and cached files should be stored on the filesystem.

In Helm 2, Helm stored all this information in ~/.helm (affectionately known as helm home), which could be changed by setting the $HELM_HOME environment variable, or by using the global flag --home.

In Helm 3, Helm now respects the following environment variables as per the XDG Base Directory Specification:

  • $XDG_CACHE_HOME
  • $XDG_CONFIG_HOME
  • $XDG_DATA_HOME

Helm plugins are still passed $HELM_HOME as an alias to $XDG_DATA_HOME for backwards compatibility with plugins looking to use $HELM_HOME as a scratchpad environment.

Several new environment variables are also passed in to the plugin’s environment to accommodate this change:

  • $HELM_PATH_CACHE for the cache path
  • $HELM_PATH_CONFIG for the config path
  • $HELM_PATH_DATA for the data path

Helm plugins looking to support Helm 3 should consider using these new environment variables instead.
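A plugin script might resolve its data directory by preferring the new variable and falling back to the legacy alias. A minimal POSIX shell sketch (the paths here are hypothetical; in a real plugin, Helm itself sets these variables):

```shell
#!/bin/sh
# Simulate the environment Helm would hand to a plugin (hypothetical paths).
HELM_PATH_DATA="/tmp/helm-data"
HELM_HOME="/tmp/legacy-helm-home"

# Prefer the Helm 3 variable; fall back to $HELM_HOME for Helm 2 compatibility.
plugin_data_dir="${HELM_PATH_DATA:-$HELM_HOME}"
echo "$plugin_data_dir"
```

With $HELM_PATH_DATA set, the script uses it; were it unset, the `${VAR:-fallback}` expansion would use $HELM_HOME instead.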

CLI Command Renames

To better align with the verbiage of other package managers, helm delete was renamed to helm uninstall. helm delete is still retained as an alias to helm uninstall, so either form can be used.

In Helm 2, in order to purge the release ledger, the --purge flag had to be provided. This functionality is now enabled by default. To retain the previous behavior, use helm uninstall --keep-history.

Additionally, several other commands were re-named to accommodate the sameconventions:

  • helm inspect -> helm show
  • helm fetch -> helm pull

These commands have also retained their older verbs as aliases, so you can continue to use them in either form.

Automatically creating namespaces

When creating a release in a namespace that does not exist, Helm 2 created the namespace. Helm 3 follows the behavior of other Kubernetes tooling and returns an error if the namespace does not exist.

Installing

Why aren’t there Debian/Fedora/… native packages of Helm?

We’d love to provide these or point you toward a trusted provider. If you’re interested in helping, we’d love it. This is how the Homebrew formula was started.

Why do you provide a curl …|bash script?

There is a script in our repository (scripts/get-helm-3) that can be executed as a curl ..|bash script. The transfers are all protected by HTTPS, and the script does some auditing of the packages it fetches. However, the script has all the usual dangers of any shell script.

We provide it because it is useful, but we suggest that users carefully read the script first. What we’d really like, though, are better packaged releases of Helm.

How do I put the Helm client files somewhere other than their defaults?

Helm uses the XDG structure for storing files. There are environment variables you can use to override these locations:

  • $XDG_CACHE_HOME: set an alternative location for storing cached files.
  • $XDG_CONFIG_HOME: set an alternative location for storing Helmconfiguration.
  • $XDG_DATA_HOME: set an alternative location for storing Helm data.

Note that if you have existing repositories, you will need to re-add them with helm repo add….

Uninstalling

I want to delete my local Helm. Where are all its files?

Along with the helm binary, Helm stores some files in the following locations:

  • $XDG_CACHE_HOME
  • $XDG_CONFIG_HOME
  • $XDG_DATA_HOME

The following table gives the default folder for each of these, by OS:

| Operating System | Cache Path                | Configuration Path             | Data Path               |
|------------------|---------------------------|--------------------------------|-------------------------|
| Linux            | $HOME/.cache/helm         | $HOME/.config/helm             | $HOME/.local/share/helm |
| macOS            | $HOME/Library/Caches/helm | $HOME/Library/Preferences/helm | $HOME/Library/helm      |
| Windows          | %TEMP%\helm               | %APPDATA%\helm                 | %APPDATA%\helm          |
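On Linux, the defaults in the table follow directly from the XDG fallback rules: when an XDG variable is unset, the spec's default directory is used, with a helm subdirectory appended. A quick shell sketch of the derivation (the home directory is hypothetical):

```shell
#!/bin/sh
# With the XDG variables unset, the XDG spec's Linux defaults apply.
unset XDG_CACHE_HOME XDG_CONFIG_HOME XDG_DATA_HOME
HOME="/home/user"   # hypothetical home directory for illustration

echo "${XDG_CACHE_HOME:-$HOME/.cache}/helm"        # cache path
echo "${XDG_CONFIG_HOME:-$HOME/.config}/helm"      # configuration path
echo "${XDG_DATA_HOME:-$HOME/.local/share}/helm"   # data path
```

Setting any of the XDG variables before running Helm replaces the corresponding default in the table above.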

Troubleshooting

On GKE (Google Container Engine) I get “No SSH tunnels currently open”

  Error: Error forwarding ports: error upgrading connection: No SSH tunnels currently open. Were the targets able to accept an ssh-key for user "gke-[redacted]"?

Another variation of the error message is:

  Unable to connect to the server: x509: certificate signed by unknown authority

The issue is that your local Kubernetes config file must have the correct credentials.

When you create a cluster on GKE, it will give you credentials, including SSL certificates and certificate authorities. These need to be stored in a Kubernetes config file (default: ~/.kube/config) so that kubectl and helm can access them.