Creating the Kubernetes cluster
You may have created one already if you followed along with Getting CockroachDB running with Kubernetes. You can create a cluster from the Google Cloud console or using gcloud; I’m using gcloud. I’ll be using SSD disks for performance, but preemptible nodes (they can be shut down at any time) to keep the costs down. Most people don’t use preemptibles for production environments – especially for a stateful set like CockroachDB – but a few have jumped in and come up with various techniques to avoid issues; I’ll go into some of these in a later post. The savings can be quite substantial. I’m doing all of this from Cloud Shell in the cloud console, which gives me a free VM with the Google APIs already set up.
1. gcloud CLI for creating a cluster with preemptible nodes and autoscaling.
gcloud beta container --project "your-project" clusters create "your-cluster" \
  --zone "europe-west2-b" \
  --username "admin" \
  --cluster-version "1.9.7-gke.6" \
  --machine-type "n1-standard-2" \
  --image-type "COS" \
  --disk-type "pd-standard" \
  --disk-size "100" \
  --scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
  --preemptible \
  --num-nodes "3" \
  --enable-cloud-logging \
  --enable-cloud-monitoring \
  --network "projects/your-project/global/networks/default" \
  --subnetwork "projects/your-project/regions/europe-west2/subnetworks/default" \
  --enable-autoscaling --min-nodes "2" --max-nodes "6" \
  --addons HorizontalPodAutoscaling,HttpLoadBalancing,KubernetesDashboard \
  --enable-autoupgrade \
  --enable-autorepair
2. set the context to the newly created cluster
gcloud container clusters get-credentials your-cluster --zone europe-west2-b
kubectl config get-contexts
echo "check that nodes are for your-cluster - europe-west2-b"
kubectl get nodes
3. give yourself cluster admin rights in rbac
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=your@email.com
4. create an ssd storageClass
kubectl apply -f ssd.yaml
kubectl get storageclasses
using this .yaml file
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
Installing cockroachdb
You can get the secure stateful set yaml file from the CockroachDB GitHub repo, but it needs changing to use SSD disks and to set the database size. Here’s a script that does both in one go – just change the storage size to whatever your database needs.
1. Create the stateful set and persistent storage
curl https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset-secure.yaml | \
  sed -E -e '/- "ReadWriteOnce"/a\' -e '      storageClassName: fast' -e 's/storage: 1Gi/storage: 32Gi/' > csdb.yaml
kubectl apply -f csdb.yaml
2. Approve the node csrs
The cockroach nodes communicate securely with each other using certificates. The previous step created some certificate signing requests which you’ll need to approve. They may take a minute or two to come through, so you may need to repeat this step a few times until they are all done.
kubectl certificate approve default.node.cockroachdb-0
kubectl certificate approve default.node.cockroachdb-1
kubectl certificate approve default.node.cockroachdb-2
kubectl get csr
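If you’d rather not keep retrying by hand, here’s a minimal sketch of a loop that waits for each node csr to appear and then approves it – it assumes the default.node.cockroachdb-N naming used above.

for i in 0 1 2; do
  # wait for the csr to be created, then approve it
  until kubectl get csr "default.node.cockroachdb-$i" >/dev/null 2>&1; do
    echo "waiting for default.node.cockroachdb-$i"
    sleep 10
  done
  kubectl certificate approve "default.node.cockroachdb-$i"
done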
3. Check everything is running
Make sure the pods are running and persistent storage is allocated
kubectl get pods
kubectl get persistentvolumes
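If you’d rather block until everything is up than poll by eye, something like this should work – assuming the stateful set is named cockroachdb (as in the yaml above) and uses a RollingUpdate update strategy.

# wait until all cockroachdb pods are ready
kubectl rollout status statefulset/cockroachdb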
4. Initializing the cluster
This next step generates a csr for the root user. Users need certificates to be allowed to access the database.
kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init-secure.yaml
5. approving the root csr
kubectl certificate approve default.client.root
6. creating a secure pod
You’ll need a secure pod from which to access the database from inside the cluster. This one will run as the approved root user, and you can use it to run SQL commands against the database.
echo "create a permanently running pod to access the db using root"
kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/client-secure.yaml
kubectl get pods
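As a quick check – just a sketch of the kind of thing this pod is for – you can open an interactive SQL shell through it:

kubectl exec -it cockroachdb-client-secure -- ./cockroach sql --certs-dir=/cockroach-certs --host=cockroachdb-public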
7. execsql.sh
It’s handy to have a generic script to execute sql commands, using the secure pod just created, so create this.
echo "executing $1" kubectl exec -i cockroachdb-client-secure -- ./cockroach sql -d=your_database --certs-dir=/cockroach-certs --host=cockroachdb-public -e "$1"
8. create your database
echo "creating database" bash execsql.sh "create database your_database;"
9. createuser.sh
You’ll probably want to create more users than just root, so create this. The password is a fallback, as users will normally connect using a certificate, which we’ll create later. The script also grants the relevant access to each user.
echo "provide password for $1" kubectl exec -it cockroachdb-client-secure -- ./cockroach user set $1 --password --certs-dir=/cockroach-certs --host=cockroachdb-public bash execsql.sh "GRANT update ON database fid TO $1;" bash execsql.sh "GRANT insert ON database fid TO $1;" bash execsql.sh "GRANT delete ON database fid TO $1;" bash execsql.sh "GRANT select ON database fid TO $1;"
10. create users
Use the script just created to add users to the DB
sh createuser.sh usera
sh createuser.sh userb
sh createuser.sh userc
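To confirm the grants took effect, a quick sanity check using the execsql.sh script from earlier:

bash execsql.sh "SHOW GRANTS ON DATABASE your_database;"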
11. signuser.sh
You’ll want a script that modifies the client-secure.yaml from cockroach to generate a csr for each of your users, along with a secure pod so you can access the database as each of them. The resulting certificates and keys can also be exported if you need to expose the database to any kind of access from outside the cluster.
echo "creating a signing request for $1" https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/client-secure.yaml | sed -e "s/ -user=root/ -user=$1/" -e "s/name: cockroachdb-client-secure/name: $1-csr/" signusers.yaml > s$1.yaml kubectl apply -f s$1.yaml kubectl get csr
12. create csrs for users
Call the script for each of the users
sh signuser.sh usera
sh signuser.sh userb
sh signuser.sh userc
13. usercsr.sh
You’ll need a script to approve the csr for each user.
kubectl get pod $1-csr
kubectl get csr default.client.$1
echo "approving csr for $1"
kubectl certificate approve default.client.$1
14. approve the usercsrs and export certificates and keys
Use the script for each of the users.
echo "approving and exporting certs" bash usercsr.sh usera bash usercsr.sh userb bash usercsr.sh userc
15. getca.sh
You’ll need a script to export the cert and key for each user – in this case to a local ../certs directory.
echo "exporting crt for $1 to client.$1.crt" kubectl exec $1-csr -i -- cat /cockroach-certs/client.$1.crt > ../certs/client.$1.crt echo "exporting key for $1 to client.$1.key" kubectl exec $1-csr -i -- cat /cockroach-certs/client.$1.key > ../certs/client.$1.key echo "here is the cert in ../certs/client.$1.crt" cat ../certs/client.$1.crt echo "here is the key in ../certs/client.$1.key" cat ../certs/client.$1.key
16. export the user certs and keys
Use getca.sh for each user, and you’ll also need the ca.crt for the cluster.
bash getca.sh usera
bash getca.sh userb
bash getca.sh userc
echo "getting ca.crt"
kubectl exec cockroachdb-client-secure -it -- cat /cockroach-certs/ca.crt > ../certs/ca.crt
echo "here is ../certs/ca.crt"
cat ../certs/ca.crt
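As a quick way to test the exported certs from outside the cluster – a sketch, assuming you have the cockroach binary installed locally and usera’s cert and key sitting in ../certs alongside ca.crt:

# forward the SQL port from one of the cockroach pods to localhost
kubectl port-forward cockroachdb-0 26257 &
# cockroach refuses to use keys that are world-readable
chmod 0600 ../certs/client.usera.key
# connect as usera using the exported certs
cockroach sql --certs-dir=../certs --user=usera --host=localhost --port=26257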
Next steps
Creating services, ingresses and so on is described in Getting an API running in Kubernetes.