In Getting cockroachDB running with Kubernetes I covered how to get CockroachDB going on Kubernetes, but that was in insecure mode – fine for playing around inside the Kubernetes cluster, but not good enough for production. Here’s how to create a secure instance. It’s always best to create scripts for each of the steps so you can repeat them.

Creating the Kubernetes cluster

You may have created one already if you followed along with Getting cockroachDB running with Kubernetes. You can create a cluster from the Google Cloud console or using gcloud; I’m using gcloud. I’ll be using SSD disks for performance, but pre-emptible nodes (they can be shut down at any time) to keep the costs down. As a rule, most people don’t use pre-emptibles for production environments – especially for a stateful set like CockroachDB – but a few have jumped in and created various techniques to avoid issues, and I’ll go into some of these in a later post. The savings can be quite substantial. I’m doing all this from the Cloud Shell, which gives me a free VM with the Google APIs already set up.

1. Use the gcloud CLI to create a cluster with pre-emptible nodes and autoscaling.
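Something along these lines – the cluster name, zone, machine type and node counts here are placeholders, so adjust them to suit:

```shell
# Sketch only: name, zone, machine type and node counts are assumptions.
# --preemptible and --disk-type=pd-ssd give the cheap, fast nodes described above.
gcloud container clusters create cockroach \
  --zone=europe-west2-a \
  --machine-type=n1-standard-2 \
  --disk-type=pd-ssd \
  --preemptible \
  --num-nodes=3 \
  --enable-autoscaling --min-nodes=3 --max-nodes=5
```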

2. Set the kubectl context to the newly created cluster.
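Fetching the credentials sets the current kubectl context – use the same name and zone you created the cluster with:

```shell
# Points kubectl at the new cluster (name and zone assumed from the previous step).
gcloud container clusters get-credentials cockroach --zone=europe-west2-a
```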

3. Give yourself cluster-admin rights in RBAC.
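A minimal sketch – the binding name is arbitrary, and the account is taken from your active gcloud config:

```shell
# Bind your own Google account to the built-in cluster-admin role.
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user="$(gcloud config get-value account)"
```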

4. Create an SSD StorageClass using this .yaml file.
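Applied inline here for brevity; the `gce-pd` provisioner assumes you’re on GKE, and the `ssd` name is what the StatefulSet will reference:

```shell
# Create a StorageClass named "ssd" backed by GCE persistent SSD disks.
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
EOF
```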

Installing CockroachDB

You can get the secure StatefulSet .yaml file from the CockroachDB GitHub repo, but it needs changing to use SSD disks and to set the database size. Here’s a script that does both in one go – just change it to the size of db you need.
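A sketch of such a script – the upstream path and the sed patterns are assumptions based on the layout of the upstream file, so check them against the version you download:

```shell
#!/bin/bash
# Fetch the secure StatefulSet definition and patch it in place.
# SIZE defaults to 100Gi; pass the size of db you need as the first argument.
SIZE=${1:-100Gi}
curl -sO https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset-secure.yaml
# Set the requested volume size (upstream default is assumed to be 1Gi)...
sed -i "s/storage: 1Gi/storage: ${SIZE}/" cockroachdb-statefulset-secure.yaml
# ...and point the volume claim templates at the ssd StorageClass created earlier.
sed -i "s/accessModes:/storageClassName: ssd\n      accessModes:/" cockroachdb-statefulset-secure.yaml
```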

1. Create the stateful set and persistent storage
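With the modified file in hand, this is a single command (filename assumed from the upstream repo):

```shell
# Creates the CockroachDB pods and their persistent volume claims.
kubectl create -f cockroachdb-statefulset-secure.yaml
```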

2. Approve the node CSRs
The CockroachDB nodes communicate securely with each other using certificates. The previous step created some certificate signing requests (CSRs) which you’ll need to approve. They may take a minute or two to come through, so you may need to repeat this step a few times till they are all done.
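The node CSRs are named `default.node.<pod-name>`; the pod names below assume the default three-node StatefulSet. Re-run until they all show as Approved,Issued:

```shell
# List outstanding certificate signing requests...
kubectl get csr
# ...and approve each node's request.
for i in 0 1 2; do
  kubectl certificate approve "default.node.cockroachdb-${i}"
done
```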

3. Check everything is running
Make sure the pods are running and persistent storage is allocated
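A quick sanity check:

```shell
# All cockroachdb pods should be Running, and each should have a Bound claim.
kubectl get pods
kubectl get persistentvolumeclaims
```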

4. Initialize the cluster
This next step generates a CSR for the root user. Users need certificates to be allowed to access the database.
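The init job from the CockroachDB repo handles this – a sketch, assuming the upstream file path hasn’t moved:

```shell
# Runs a one-shot job that initializes the cluster and requests a root client cert.
kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init-secure.yaml
```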

5. Approve the root CSR
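The root client request follows the same naming scheme as the node requests:

```shell
# Approve the certificate signing request for the root user.
kubectl certificate approve default.client.root
```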

6. Create a secure pod
You’ll need a secure pod from which to access the database from inside the cluster. This one will use the approved root user, and you can use it to run SQL commands against the database.
echo "create a permanently running pod to access the db using root"
kubectl create -f client-secure.yaml
kubectl get pods

7. Create a generic SQL script
It’s handy to have a generic script to execute SQL commands, using the secure pod just created, so create this.
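A minimal sketch – the script name `sql.sh` is my invention, but the pod name, certs directory and `cockroachdb-public` service come from the client-secure.yaml setup:

```shell
#!/bin/bash
# sql.sh (name assumed): run a SQL statement against the cluster
# via the secure client pod. Usage: ./sql.sh "SHOW DATABASES;"
kubectl exec -it cockroachdb-client-secure \
  -- ./cockroach sql \
  --certs-dir=/cockroach-certs \
  --host=cockroachdb-public \
  -e "$1"
```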

8. Create your database
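Either through the generic script, or directly like this – `mydb` is just an example name:

```shell
# Create the application database from the secure client pod.
kubectl exec -it cockroachdb-client-secure \
  -- ./cockroach sql --certs-dir=/cockroach-certs --host=cockroachdb-public \
  -e "CREATE DATABASE IF NOT EXISTS mydb;"
```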

9. Create a script to add users
You’ll probably want to create more users than just root, so create this. The password is a fallback, as the user will normally be connecting using a certificate, which we’ll create later. Grant the relevant access to your users.
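One way to sketch it – the script name and argument order are assumptions:

```shell
#!/bin/bash
# create-user.sh (name assumed): create a user with a fallback password
# and grant them access to a database.
# Usage: ./create-user.sh <user> <password> <database>
kubectl exec -it cockroachdb-client-secure \
  -- ./cockroach sql --certs-dir=/cockroach-certs --host=cockroachdb-public \
  -e "CREATE USER IF NOT EXISTS $1 WITH PASSWORD '$2'; GRANT ALL ON DATABASE $3 TO $1;"
```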

10. Create users
Use the script just created to add users to the DB

11. Create a script to generate user CSRs
You’ll want a script that modifies the client-secure.yaml from CockroachDB to generate a CSR for each of your users, along with a secure pod so you can access the database as each of them – and to generate certificates and keys if you need to expose the database to any kind of access outside the cluster.
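A rough sketch of that script – the sed substitutions assume the upstream client-secure.yaml requests its certificate for `root` and names its pod `cockroachdb-client-secure`, so verify both against your copy:

```shell
#!/bin/bash
# make-user-pod.sh (name assumed): take CockroachDB's client-secure.yaml,
# swap root for the given user, and create a secure pod for them.
# The pod's init container then generates a CSR named default.client.<user>.
USER=$1
curl -s https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/client-secure.yaml \
  | sed -e "s/cockroachdb-client-secure/cockroachdb-client-${USER}/" \
        -e "s/root/${USER}/g" > "client-secure-${USER}.yaml"
kubectl create -f "client-secure-${USER}.yaml"
```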

12. Create CSRs for the users
Call the script for each of the users

13. Create a script to approve user CSRs
You’ll need a script to approve each of the CSRs for each user.
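Barely more than one line, but worth scripting since you’ll run it per user (script name assumed):

```shell
#!/bin/bash
# approve-user.sh (name assumed): approve the CSR for a given user.
# Usage: ./approve-user.sh <user>
kubectl certificate approve "default.client.$1"
```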

14. Approve the user CSRs and export certificates and keys
Use the script for each of the users.

15. Create a script to export certs and keys
You’ll need a script to export certs and keys for each user – in this case to a local ../certs directory.
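A sketch – the per-user pod name matches the substitution made when the user pods were created, and the cert filenames follow CockroachDB’s `client.<user>.crt` convention:

```shell
#!/bin/bash
# export-certs.sh (name assumed): copy a user's cert and key, plus the
# cluster's ca.crt, out of that user's secure pod into ../certs.
# Usage: ./export-certs.sh <user>
USER=$1
mkdir -p ../certs
kubectl cp "cockroachdb-client-${USER}:/cockroach-certs/client.${USER}.crt" "../certs/client.${USER}.crt"
kubectl cp "cockroachdb-client-${USER}:/cockroach-certs/client.${USER}.key" "../certs/client.${USER}.key"
kubectl cp "cockroachdb-client-${USER}:/cockroach-certs/ca.crt" ../certs/ca.crt
```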

16. Export the user certs and keys
Use for each user, and you’ll also need the ca.crt for the cluster.

Next steps

Creating services, ingresses and so on is described in Getting an API running in Kubernetes.

Why not join our forum, follow the blog, or follow me on Twitter to ensure you get updates when they are available.