
How to Rename a Helm Release

Savithru Lokanath
Aug 24

Problem

The process for migrating from Helm v2 to v3, the latest stable major release, was pretty straightforward. However, while performing the migration, we encountered an anomaly with how one of the application charts had been deployed, thus introducing additional challenges.

One of our application’s Helm v2 releases did not adhere to the standard naming convention when we installed it in our pre-production and, later, production environments. This had gone unnoticed (we do make mistakes!!!) and only surfaced when we decided to migrate from Helm v2 to v3. Some of the automation in our pipeline relies on the naming convention of the Helm release to sign off the deployment, and it naturally started to fail for the application in scope. So our first task, before we could start migrating the charts to Helm v3, was to fix the release name.

A quick internet search showed that many in the Kubernetes community had faced similar problems and that there wasn’t a simple solution yet. The closest answer we found was to delete the release and install it again with the correct name. Unfortunately, that wouldn’t work for us: the service is customer-facing, serves traffic in real time, and we couldn’t afford any downtime. Hence, we had to consider other ways to solve this problem.


Options

There were two theoretically possible solutions that would allow us to rename an existing release without causing any service disruption.

Let’s take a closer look at the approaches:

  1. Trick the datastore

The first option was to modify the datastore (i.e., a ConfigMap in Helm v2, a Secret in Helm v3) that holds the release record, replacing the existing (incorrect) release name string with the desired (correct) value. Helm v3 stores the release record, including the resource manifests, as a gzip-compressed, double base64-encoded Secret in the release’s namespace.

## GET RELEASE INFO
$ kubectl get secret -n <NAMESPACE> sh.helm.release.v1.<RELEASE-NAME>.v1 -o json | jq -r ".data.release" | base64 -D | base64 -D | gzip -d > release.json

## REPLACE RELEASE NAME WITH DESIRED NAME & ENCODE
$ DATA=`cat release.json | gzip -c | base64 | base64`

## PATCH THE RELEASE
$ kubectl patch secret -n <NAMESPACE> sh.helm.release.v1.<RELEASE-NAME>.v1 --type='json' -p="[{\"op\":\"replace\",\"path\":\"/data/release\",\"value\":\"$DATA\"}]"
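To see what the Secret actually stores (and whether a later patch took effect), the release name can be read straight out of the decoded payload. This is a minimal sketch, assuming macOS's base64 -D (use base64 -d on Linux) and that the decoded Helm v3 release object carries a top-level .name field:

## INSPECT THE RELEASE NAME STORED IN THE SECRET (sketch)
$ kubectl get secret -n <NAMESPACE> sh.helm.release.v1.<RELEASE-NAME>.v1 -o json | jq -r ".data.release" | base64 -D | base64 -D | gzip -d | jq -r ".name"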

With this approach, on multiple attempts, we noticed that the decoding/encoding was off due to escape characters, binary data, etc., and we couldn’t upgrade the release after changing the name; in another instance, we lost a release’s info entirely and had to restore from backup. The unpredictable results did not inspire confidence, so we dropped this approach, keeping it only as a last resort.

  2. Orphan & Adopt

The second approach that we experimented with was more deterministic and, in a way, simpler; it didn’t require us to go through the complex process of modifying the datastore. Instead, we disconnect the Kubernetes resources from the incorrectly named Helm release (orphan) and later have the new, correctly named Helm release start managing those resources (adopt). Sounds simple, right? Voila!!!

Let’s walk through the steps with an example. Assume that we have an incorrectly named release called “world-hello.” We’ll have to rename this to something more meaningful, such as “hello-world.”

  • First things first, we use the Helm release name in the labelSelector that determines which backend pods the Kubernetes Service (kube-proxy) directs traffic to. Since we are renaming the release, the moment the correctly named new release is installed, the Service would immediately start proxying traffic to the new ReplicaSet’s pods while they are still booting.
    The service would be unavailable to our customers during that window. The application pods typically take about 20–30s to boot, and we can’t afford a disruption that long. To prevent this, we decided to remove the release name from the labelSelector field in the Service spec; a quick way to verify the live Service follows the diff below.
Fig1. Remove the release label from the service’s selector field
## REMOVE RELEASE LABEL
$ git diff templates/service.yaml
  app: {{ .Values.app.name }}
- release: {{ .Release.Name }}
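The diff above changes the chart template, so it only takes effect the next time the release is upgraded. If you want to confirm (or apply) the selector change on the live Service directly, something along these lines should work; a sketch, assuming the Service object is named hello-world in the dev namespace:

## VERIFY THE LIVE SERVICE SELECTOR (sketch)
$ kubectl get svc hello-world -n dev -o jsonpath='{.spec.selector}'

## DROP THE release KEY FROM THE LIVE SELECTOR, IF IT IS STILL PRESENT (sketch)
$ kubectl patch svc hello-world -n dev --type='json' -p='[{"op":"remove","path":"/spec/selector/release"}]'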
  • Next, let us follow the official steps to migrate the release from Helm v2 to Helm v3 without correcting the name. Once done, issue an upgrade using the new client to validate that the resources are now managed by Helm v3.
    The upgrade step will also add the label app.kubernetes.io/managed-by=Helm to the resources managed by the release. Without this label on the resources, the release renaming will fail (a quick check for the label follows the log below).
## MIGRATE RELEASE FROM HELM v2 TO HELM v3
$ helm3 2to3 convert world-hello --release-versions-max 1 -n dev
2020/11/12 19:06:44 Release "world-hello" will be converted from Helm v2 to Helm v3.
2020/11/12 19:06:44 [Helm 3] Release "world-hello" will be created.
2020/11/12 19:06:46 [Helm 3] ReleaseVersion "world-hello.v1" will be created.
2020/11/12 19:06:47 [Helm 3] ReleaseVersion "world-hello.v1" created.
2020/11/12 19:06:47 [Helm 3] Release "world-hello" created.
2020/11/12 19:06:47 Release "world-hello" was converted successfully from Helm v2 to Helm v3.
2020/11/12 19:06:47 Note: The v2 release information still remains and should be removed to avoid conflicts with the migrated v3 release.
2020/11/12 19:06:47 v2 release information should only be removed using `helm 2to3` cleanup and when all releases have been migrated over

## LIST HELM v3 RELEASE
$ helm3 ls -n dev
NAME         NAMESPACE  REVISION
world-hello  dev        1

## UPGRADE HELM v3 RELEASE
$ helm3 upgrade --install world-hello <CHART> -n dev
Release "world-hello" has been upgraded. Happy Helming!
NAME: world-hello
LAST DEPLOYED: Thu Nov 12 20:06:02 2020
NAMESPACE: dev
STATUS: deployed
REVISION: 2
TEST SUITE: None
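Before moving on, it is worth confirming that the upgrade really did stamp the resources with the managed-by label, since the adoption step below depends on it. A quick check, assuming the same resource kinds used later in this post:

## CONFIRM THE managed-by LABEL IS PRESENT (sketch)
$ kubectl get deploy,cm,sa,svc,role,rolebinding -n dev -l app.kubernetes.io/managed-by=Helm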
  • Now that we’ve validated that the resources can be managed by Helm v3, let’s begin adopting the existing resources. We need to add two annotations and a label to every resource that should be adopted by the new (correctly named) Helm v3 release; together, they tell Helm v3 that the new release should now manage these resources. (In our case the label was already applied by the earlier upgrade; a sketch for adding it manually follows the annotation commands below.)

NOTE: Up to this point, the Kubernetes resources have been managed by the incorrectly named Helm release that we migrated from v2 to v3.

## LABELS TO BE ADDED
app.kubernetes.io/managed-by=Helm

## ANNOTATIONS TO BE ADDED
meta.helm.sh/release-name=<NEW-RELEASE-NAME>
meta.helm.sh/release-namespace=<NAMESPACE>

## ADD RELEASE NAME ANNOTATION
$ for i in deploy cm sa svc role rolebinding; do kubectl annotate -n dev $i hello-world meta.helm.sh/release-name=hello-world --overwrite; done
deployment.extensions/hello-world annotated
configmap/hello-world annotated
serviceaccount/hello-world annotated
service/hello-world annotated
role.rbac.authorization.k8s.io/hello-world annotated
rolebinding.rbac.authorization.k8s.io/hello-world annotated

## ADD RELEASE NAMESPACE ANNOTATION
$ for i in deploy cm sa svc role rolebinding; do kubectl annotate -n dev $i hello-world meta.helm.sh/release-namespace=dev --overwrite; done
deployment.extensions/hello-world annotated
configmap/hello-world annotated
serviceaccount/hello-world annotated
service/hello-world annotated
role.rbac.authorization.k8s.io/hello-world annotated
rolebinding.rbac.authorization.k8s.io/hello-world annotated
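In our case the app.kubernetes.io/managed-by=Helm label was already applied by the earlier upgrade; if it were missing, a loop mirroring the annotation commands above could add it. A sketch, assuming the same resource kinds and names:

## ADD THE managed-by LABEL IF IT IS MISSING (sketch)
$ for i in deploy cm sa svc role rolebinding; do kubectl label -n dev $i hello-world app.kubernetes.io/managed-by=Helm --overwrite; done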
  • Once the annotations and label are in place on the Kubernetes resources, install the release with the correct name to sign off on the adoption process. Once that install completes, all the resources are actively managed by the correctly named release “hello-world.”
  • Because we have rolling deployments, the ReplicaSet managed by the incorrectly named release will be orphaned and will need to be cleaned up manually.
## INSTALL HELM v3 RELEASE WITH CORRECT NAME
$ helm3 upgrade --install hello-world <CHART> -n dev
Release "hello-world" does not exist. Installing it now.
NAME: hello-world
LAST DEPLOYED: Thu Nov 12 20:06:02 2020
NAMESPACE: dev
STATUS: deployed
REVISION: 1
TEST SUITE: None

## LIST HELM v3 RELEASE
$ helm3 ls -n dev
NAME         NAMESPACE  REVISION
world-hello  dev        2
hello-world  dev        1

## LIST REPLICASET MANAGED BY INCORRECTLY NAMED RELEASE
$ kubectl get rs -n dev -l release=world-hello
NAME                    DESIRED  CURRENT  READY  AGE
hello-world-8c5959d67   2        2        2      30m

## LIST REPLICASET MANAGED BY CORRECTLY NAMED RELEASE
$ kubectl get rs -n dev -l release=hello-world
NAME                     DESIRED  CURRENT  READY  AGE
hello-world-7f88445494   2        2        2      2m
  • Since we also removed the release label from the Service’s labelSelector, traffic is proxied to the ReplicaSets (pods) managed by both the correctly named and incorrectly named releases, i.e., “hello-world” and “world-hello” (see the endpoint check after this list).
  • Now we can start cleaning up orphaned resources and the datastore containing the incorrectly named release.
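To see the dual proxying for yourself before cleaning up, the Service’s endpoints can be compared against the pods of both releases. A sketch, assuming the Service is named hello-world and the pod templates still carry the release label:

## LIST ENDPOINTS CURRENTLY BEHIND THE SERVICE (sketch)
$ kubectl get endpoints hello-world -n dev -o wide

## LIST PODS FROM BOTH RELEASES (sketch)
$ kubectl get pods -n dev -l release=world-hello
$ kubectl get pods -n dev -l release=hello-world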

Cleanup

First, let’s add the release label that we initially removed back to the service’s labelSelector field. Once that change is deployed, the Kubernetes Service (kube-proxy) will send traffic only to pods managed by the new release “hello-world.”

Next, delete the orphaned ReplicaSet and the incorrectly named Helm v2 and v3 releases.

## ADD RELEASE LABEL
$ git diff templates/service.yaml
  app: {{ .Values.app.name }}
+ release: {{ .Release.Name }}
## DELETE REPLICASET MANAGED BY INCORRECTLY NAMED RELEASE
$ kubectl get rs -n dev -l release=world-hello
NAME                    DESIRED  CURRENT  READY  AGE
hello-world-8c5959d67   2        2        2      30m
$ kubectl delete rs hello-world-8c5959d67 -n dev

## LIST INCORRECTLY NAMED RELEASE DATASTORE (Helm v3)
$ kubectl get secret -n dev | grep "sh.helm.release.v1.world-hello"
sh.helm.release.v1.world-hello.v1
sh.helm.release.v1.world-hello.v2

## DELETE INCORRECTLY NAMED RELEASE DATASTORE (Helm v3)
$ kubectl delete secret sh.helm.release.v1.world-hello.v1 -n dev
$ kubectl delete secret sh.helm.release.v1.world-hello.v2 -n dev

## DELETE INCORRECTLY NAMED RELEASE DATASTORE (Helm v2)
$ helm3 2to3 cleanup --name world-hello
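Once the cleanup commands finish, a final sanity check should show only the correctly named release, its datastore, and its ReplicaSet left in the namespace; a sketch:

## CONFIRM ONLY THE CORRECTLY NAMED RELEASE REMAINS (sketch)
$ helm3 ls -n dev
$ kubectl get secret -n dev | grep "sh.helm.release"
$ kubectl get rs -n dev -l release=world-hello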

Finally, redeploy the application chart one more time through your deployment pipeline and verify that the upgrade goes through smoothly.

NOTE: Throughout this exercise, we had a traffic generator making continuous requests to the service endpoint and we didn’t see a single failure (not one non-2XX response code), indicating a seamless and successful migration/renaming of the Helm release.

## UPGRADE HELM v3 RELEASE WITH CORRECT NAME
$ helm3 upgrade --install hello-world <CHART> -n dev
Release "hello-world" has been upgraded. Happy Helming!
NAME: hello-world
LAST DEPLOYED: Thu Nov 12 20:40:06 2020
NAMESPACE: dev
STATUS: deployed
REVISION: 2
TEST SUITE: None
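The traffic generator mentioned in the note above can be as simple as a shell loop that flags anything other than a 2XX response. A minimal sketch, assuming a hypothetical health endpoint for the service:

## CONTINUOUSLY PROBE THE SERVICE AND FLAG non-2XX RESPONSES (sketch)
$ while true; do code=$(curl -s -o /dev/null -w "%{http_code}" https://hello-world.example.com/healthz); [[ $code == 2* ]] || echo "$(date) non-2XX: $code"; sleep 1; done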

On a closing note, renaming a Helm release without downtime is not a simple task; there is a ton of prep work and experimentation involved. In hindsight, we got to learn interesting details about how Helm functions internally (around migrations, executions, etc.) and found yet another way to rename a release… but this time, with documentation!

We hope these learnings are useful and help the community alleviate some of the problems we faced during our migration!


If you’re interested in solving problems like these, join our Talent Portal to check out open roles and get periodic updates from our recruiting team!
