I am supporting CandidateX

CandidateX is a startup that focuses on creating inclusion-focused hiring solutions, designed to increase access to job opportunities for underestimated talent. Check them out if you have a few minutes to spare. They need visibility!

Sharing secrets between the local development environment and the target platform can often be complex. Kubernetes secrets are a really simple solution once you are running in a cluster, but hard to get hold of in a local development environment; GCP secrets are hard to get at from inside Kubernetes, but handy to pull into the local development environment. Doppler is a flexible and neat solution for injecting secrets into local builds. Let’s look at these 3 secret managers working together.

GCP secrets

I’ve written about the Secret Manager API a few times – for example SuperFetch Plugin: Cloud Manager Secrets and Apps Script. Its role in this article is to provide the secrets required for build and configuration scripting.

Kubernetes secrets

If you are running in a Kubernetes cluster, injecting Kubernetes secrets into the env variables is the simplest solution. Its role here is to provide secrets to containers running on the Kubernetes cluster.

Doppler secrets

These are my source of truth. Doppler secrets are set up per project, and within a project by config – eg dev, staging and production.

Getting started with Doppler

Visit doppler.com and register. It’s free up to a point. Create a project and it’ll automatically set up 3 configs for you.

example doppler project

Let’s add a couple of secrets to the dev config – don’t forget to SAVE when done.

Doppler cli

Next we’ll install the Doppler CLI so we can get at these secrets. You’ll need to pick the installation method that suits your mac/linux/windows environment. It works on them all.

doppler --version
Confirm it's installed properly

Login to doppler

This is actually pretty slick. The login opens a browser dialog and copies an authorization code into the paste buffer, which you can then paste into the browser dialog.

doppler login
login

Set up doppler locally

This will allow you to pick a project and config, and will install a token so you can access the secrets from the CLI.
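The command itself is doppler setup. By default it prompts you to choose; it can also be given the project and config directly – the names below are just the example ones from earlier:

```shell
# interactive – prompts you to pick a project and config
doppler setup

# or non-interactive – project and config named explicitly
doppler setup --project my-project --config dev
```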

These are known as ‘command line tokens’. There are other types of tokens, some of which I’ll cover later.

You should now be able to access those secrets locally.
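For example (MY_URL and MY_KEY being the example secrets added earlier):

```shell
# list all the secret names in the selected project/config
doppler secrets

# print one raw value
doppler secrets get MY_URL --plain

# download everything in env-file format
doppler secrets download --no-file --format env
```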

GCP secrets

To minimize the visibility of secret (and config) info, I have just one file locally, which looks like this. This could be expanded for different configs, but let’s keep it to a single config for now. The project is my GCP project id, and the configSecret is the name of a GCP secret containing anything required for local scripts – for example the region, or, if you are doing CI/CD as I am, trigger names, artifact details and so on. In other words, everything to do with GCP that I’ll need to access locally in a script.

{
"project": "my-cloud-project",
"configSecret": "kgd-config"
}
gcp.json

GCP Secret manager console

You can create a GCP secret, with a name matching the gcp.json configSecret property, in the cloud console secret manager, containing things like the below – anything you need to be able to build your project. Note that at this point I’m not referencing any of the secrets held in Doppler, but I do have a doppler service token. Earlier, in the section on command line tokens, I mentioned other types of tokens. This service token can be created there, and is the doppler equivalent of a Google service account. It allows me to delegate access to my doppler secrets – in my case I want Cloud Build to be able to access them on my behalf, so I can use these as build substitutions in my cloudbuild file.

{
"project": "my-project",
"source": "my-source",
"branch": "dev",
"region": "europe-west2",
"triggerName": "my-trigger",
"artifactsName": "my-artifacts",
"dopplerServiceToken": "my_service_token",
"nodeVersion": "18.15.0",
"nodeFlavor": "alpine",
"hostImage": "my-host-image",
"pnpmStore": "/workspace/.pnpm-store/v3",
"kubeDopplerSecrets": "doppler-secrets",
"kubeNamespace": "ns-dev"
}
contents of GCP secret kgd-config

Using GCP secrets in scripts

First you’ll need to install jq, a super handy command line utility for extracting property values from JSON within a bash script.
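If jq is new to you, here’s a quick self-contained sanity check, using an inline JSON string in the same shape as the gcp.json file above:

```shell
# -r outputs the raw string value, without surrounding quotes
CONFIG=$(echo '{"project":"my-cloud-project","configSecret":"kgd-config"}' | jq -r .configSecret)
echo $CONFIG   # kgd-config
```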

Here’s an example script using the secrets above to create an artifact registry. The name of the GCP secret is picked up from the local gcp.json, and the other required values come from the value of the GCP secret.

CONFIG=$(jq -r .configSecret private/gcp.json)
SECRETS=$(gcloud secrets versions access latest --secret=${CONFIG} | tr -d '\n')

REGION=$(echo $SECRETS | jq -r .region)
ARTIFACTSNAME=$(echo $SECRETS | jq -r .artifactsName)
gcloud artifacts repositories \
create $ARTIFACTSNAME \
--repository-format=docker \
--location=$REGION \
--description="artifact registry for kgd"
create an artifact registry

Creating a build trigger

Here’s another example, this time creating a build trigger.

CONFIG=$(jq -r .configSecret private/gcp.json)
SECRETS=$(gcloud secrets versions access latest --secret=${CONFIG} | tr -d '\n')

REPO=$(echo $SECRETS | jq -r .source)
PROJECT=$(echo $SECRETS | jq -r .project)
BRANCH=$(echo $SECRETS | jq -r .branch)
TRIGGERNAME=$(echo $SECRETS | jq -r .triggerName)
REGION=$(echo $SECRETS | jq -r .region)
VERSION=$(echo $SECRETS | jq -r .nodeVersion)
ARCH=$(echo $SECRETS | jq -r .nodeFlavor)
ARTIFACTS=$(echo $SECRETS | jq -r .artifactsName)
HOST_IMAGE=$(echo $SECRETS | jq -r .hostImage)
DOPPLER_TOKEN=$(echo $SECRETS | jq -r .dopplerServiceToken)
GCPSECRETS=$(echo $SECRETS | jq -r .kubeDopplerSecrets)
STORE=$(echo $SECRETS | jq -r .pnpmStore)
SUBSTITUTIONS=_HOST_IMAGE=$HOST_IMAGE,_DOPPLER_TOKEN=$DOPPLER_TOKEN,_ARTIFACTS=$ARTIFACTS,_REGION=$REGION,_VERSION=$VERSION,_ARCH=$ARCH,_GCPSECRETS=$GCPSECRETS,_STORE=$STORE
gcloud builds triggers delete $TRIGGERNAME --region=$REGION
gcloud builds triggers create cloud-source-repositories \
--repo=$REPO \
--branch-pattern=$BRANCH \
--description="cx dev" \
--name=$TRIGGERNAME \
--build-config=cloudbuild.yaml \
--region=$REGION \
--substitutions=$SUBSTITUTIONS
create build trigger
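For context, here’s a sketch of how those substitutions might be consumed in cloudbuild.yaml – this is an illustrative step, not my actual build file. The _DOPPLER_TOKEN substitution becomes the DOPPLER_TOKEN env variable, which the doppler CLI picks up automatically:

```yaml
# sketch only – illustrative step and image, not a complete build file
steps:
  - id: 'get-doppler-secrets'
    name: 'ubuntu'
    env:
      - 'DOPPLER_TOKEN=${_DOPPLER_TOKEN}'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        # install the doppler cli, then download the secrets in env format
        apt-get update -qq && apt-get install -y -qq curl gnupg
        curl -Ls https://cli.doppler.com/install.sh | sh
        doppler secrets download --no-file --format env > /workspace/.doppler.env
```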

Doppler GCP integrations

Back in the section on the doppler config dashboard, you’ll see a tab called Integrations. It’s possible to create a synced relationship between GCP secrets and Doppler – in other words, the doppler secrets are copied over to a matching GCP secret when updated. This is certainly another option, but I found it a little fiddly, with service account and IAM changes required, and in any case I like the idea of keeping build secrets separate from run secrets, while providing a way to access both.

Kubernetes secrets

Some of these secrets are really configuration items rather than secrets, but I have just lumped them all together as secrets – you could split them into configmaps and secrets if you wanted to differentiate visibility. One of the great things about Kubernetes is how easy it is to inject secrets (and configmaps) into your app environment. Now we’ll see how to create Kubernetes secrets in a similar way to the scripts used for building.

Bash versions

If you are using a Mac: because of licensing, macOS ships with an old version of Bash (3.2) that can’t run these scripts, so you’ll need to make sure you pick up a later version.

If you have a Mac with this problem, you can install a later version with

brew install bash

# check which bash is found first
# (it should point to /opt/homebrew/bin/bash rather than /bin/bash)
which bash

# check it's v5
bash --version

# bring up a bash shell
bash
install later version of bash on the Mac

Create kubernetes secret

This will work on both minikube and a full Kubernetes cluster, whichever your kubectl context is set to. It’s going to create a kubernetes secret from the doppler secrets, referencing the GCP build secrets to discover what to call everything and where to put it. This can be part of the kubernetes deployment process, as we are setting desired state – changes will only happen if there’s been a change back in doppler.


# run this with bash rather than sh – the Mac version of /bin/bash is too old
CONFIG=$(jq -r .configSecret private/gcp.json)
SECRETS=$(gcloud secrets versions access latest --secret=${CONFIG} | tr -d '\n')

KUBEDOPPLERSECRETS=$(echo $SECRETS | jq -r .kubeDopplerSecrets)
KUBENAMESPACE=$(echo $SECRETS | jq -r .kubeNamespace)
# we use dry-run to generate yaml without applying it, then pipe it to kubectl apply

# create the namespace if it doesn't exist
kubectl create namespace ${KUBENAMESPACE} --dry-run=client -o yaml | kubectl apply -f -

# create secret
kubectl create secret generic ${KUBEDOPPLERSECRETS} \
--save-config \
--dry-run=client \
--from-env-file <(doppler secrets download --no-file --format docker) \
-o yaml | \
kubectl apply -n ${KUBENAMESPACE} -f -
create kube secret

Checking the kube secret

You can check it’s been created correctly by selecting a secret to see its value:

kubectl get secret doppler-secrets -n ns-dev -o jsonpath='{.data.MY_URL}' | base64 --decode
verifying kube secret

Using the kube secret

Finally, we can inject these Kubernetes secrets from doppler as env variables into whichever deployment specs need them.

envFrom:
  - secretRef:
      name: doppler-secrets
spec env
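In context, that sits under the container spec of a deployment – a minimal sketch, with illustrative names:

```yaml
# sketch only – app and image names are illustrative
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: ns-dev        # matches kubeNamespace in the build config
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-image
          envFrom:
            - secretRef:
                name: doppler-secrets
```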

Local injection of doppler secrets

You’ll still want access to the doppler secrets locally, and you can see there are a number of ways to achieve this in the example scripts above. However, Doppler also allows injection of values into env variables locally.

You could even use it to substitute directly from the shell – for example

curl "$(doppler secrets get --plain MY_URL)?key=$(doppler secrets get --plain MY_KEY)"
direct substitution

However, let’s focus on env injection.

Doppler run <command>

The Doppler run command injects all the secrets it knows about into the env, then runs the command. Let’s test it like this:
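A minimal sketch of such a test, using one of the example secrets from earlier:

```shell
# MY_URL is only visible to printenv because doppler run injected it
doppler run -- printenv MY_URL
```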

This means of course that we can access those same values in a node app. Let’s try this:

console.log(process.env.MY_URL);
console.log(process.env.MY_KEY);
index.js
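Running it under doppler run injects the secrets before node starts – no .env file needed:

```shell
doppler run -- node index.js
```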

Summary

These 3 secret managers, combined to fit your environment, make a very flexible secret management solution – with not a single .env file or local copy of secrets necessary.

Of course this won’t be for everybody – larger teams may want more fine-grained access control – but it’s just fine for small teams, who will likely benefit from the free tier in each of these solutions.