This is a step in Getting an API running in Kubernetes. You should first have created an app to be deployed, as described in Building your App ready for Kubernetes deployment, deployed it as described in Creating a Kubernetes deployment, and have a cluster up and running as described in Getting CockroachDB running with Kubernetes. This also assumes you are using the Google Cloud console shell with the kubectl CLI.

I recommend that you save your commands in various scripts so you can repeat them or modify them later.

Creating a service

A Kubernetes service is the “product” of one or more pods. The pods run your app(s), and the service is an abstraction of the final result and the communication path to it. In the workflow for the demo playback app, the service part is marked below.

Yaml files

The configuration of a Kubernetes resource can become quite involved, so unlike the deployment example, where everything was specified through flags to the kubectl CLI, this time the configuration will go in a file. These .yaml files use a syntax similar to, but more concise than, JSON. You can read about yaml here, but the layout is fairly intuitive, so that's probably not necessary.

The service yaml file

The main points of interest are

  • Type ClusterIP flags the service as one that will only be accessible within the cluster. The diagram above shows that it’s not yet being exposed to the outside world.
  • The port is the port the service will use to communicate with the next stage in the workflow, and the targetPort is the one on which it will communicate with the pods.
  • The selector is a label that the service will use to select which pods it is supposed to be servicing, by checking their labels.

You can see the connection to the pods by checking the command below, which is how the deployment was specified.

kubectl run playback --replicas=2 --image=gcr.io/fid-sql/playback --port=8080 --labels="run=playback-app"
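If you want to confirm that the selector will match those pods, you can list them by that label, for example

kubectl get pods -l run=playback-app --show-labels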

server-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: playback-service
  labels:
    app:  playback-service
spec:
  type:  ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    run: playback-app

A .yaml file can be passed to kubectl with the -f flag and either the verb apply (handy for making changes to an existing resource) or create. I generally use apply for both creating and updating. As a point of interest, a resource can also be deleted by referencing its yaml file along with the delete verb.
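For example, the service defined above could later be removed by referencing the same file.

kubectl delete -f server-service.yaml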

service-service.sh

kubectl apply -f server-service.yaml

Check the service

kubectl get service playback-service
NAME               TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
playback-service   ClusterIP   10.3.251.79   <none>        80/TCP    5m

Notice that there is a cluster-ip (an internal network address), but no external-ip yet.
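Since the service is only reachable from inside the cluster, one way to sanity check it at this stage is to run a throwaway pod and call the service from there. Something like this should work, assuming your app responds to a plain HTTP GET on its root path and cluster DNS is enabled (it is by default on Google Cloud).

kubectl run testclient --image=busybox --restart=Never -it --rm -- wget -qO- http://playback-service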

Exposing the service

At this point you could expose the service directly, since Kubernetes on Google Cloud Platform has a LoadBalancer type that you could attach this service to with a command like the one below. However, this is not what we’re going to do; I only show it to complete the picture.

kubectl expose service playback-service --port=80 --target-port=80 --name=playback-temp --type=LoadBalancer

After a time, this would be allocated an external IP address. You could then head over and open the firewall for that IP address and port.

kubectl get service playback-temp
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
playback-temp   LoadBalancer   10.3.247.213   35.193.13.32   80:30609/TCP   1m
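If you did go down this route, once the external IP appeared you could test the temporary service from anywhere with something like the below, assuming your app serves something on its root path (the address is just the one allocated in the example above).

curl http://35.193.13.32/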

But this is not what we need to do, as it would only give an insecure connection, so let’s delete that service again.

kubectl delete service playback-temp
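You can confirm it has gone, and that the internal service is still there, with

kubectl get services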

Next step

Now there is an internal service running, offering up the services of your app, but only within the cluster. The next step is to work on an ingress controller. See Getting an API running in Kubernetes for how.

Why not join our forum, follow the blog or follow me on Twitter to ensure you get updates when they are available.