Background workers in Cloud Run


Cloud Run is ideal if you have a stateless service that you need to host somewhere. Just put your code in a container, throw the container over to Cloud Run, and let Cloud Run figure out the tricky things like seamless deployments, autoscaling, networking, load balancing, logging, and so on. And because you only pay while containers are actually handling HTTP requests, the costs are minimal.

But sometimes you have a service that needs to run continuously in the background. What do you do then?

Spin up a VM?

You could, but who will look after that VM?

Use Kubernetes?

That’s a lot better – especially with GKE autopilot – however doing Kubernetes right is far from trivial, and just having a Kubernetes cluster lying around is not free, even before you start scheduling workloads on it.

No, the best way to go about it is to use the new Cloud Run feature: always-allocated CPU.

Normally, when a Cloud Run instance isn’t processing HTTP requests, its CPU is throttled and background network activity is unreliable. With always-allocated CPU, your background service always has access to CPU and the network, even when it isn’t processing HTTP requests. Set the autoscaling to both a minimum and a maximum of 1 instance and voilà: you’ve got yourself an always-on background service running in Cloud Run!

Do make sure that your background worker can handle restarts, especially around deployments. Normally Cloud Run lets the old revision handle any in-flight HTTP requests before it shuts down the container instance. Your background worker probably isn’t handling a lot of HTTP traffic, so Cloud Run will just shut it down when deploying a new revision.


Let’s build an example

We’re going to build a simple service in Go that does some really important counting in the background.

We’re also adding some HTTP endpoints, so we can monitor our background service while it’s running. You can check out the code here.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
	"sync/atomic"
	"time"
)

// workCount is read by the HTTP handler while the background goroutine
// increments it, so we use atomic operations to avoid a data race.
var workCount int64

func main() {
	startBackgroundWork()
	startMetricsServer()
}

func startBackgroundWork() {
	go func() {
		for {
			n := atomic.AddInt64(&workCount, 1)
			fmt.Printf("Doing important background work round %d\n", n)
			time.Sleep(5 * time.Second)
		}
	}()
}

func startMetricsServer() {
	log.Print("starting server...")
	http.HandleFunc("/", statusHandler)

	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
		log.Printf("defaulting to port %s", port)
	}

	log.Printf("listening on port %s", port)
	if err := http.ListenAndServe(":"+port, nil); err != nil {
		log.Fatal(err)
	}
}

func statusHandler(w http.ResponseWriter, r *http.Request) {
	status := map[string]interface{}{"healthy": true, "workCount": atomic.LoadInt64(&workCount)}
	jsonStatus, err := json.Marshal(status)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	if _, err = w.Write(jsonStatus); err != nil {
		// A failed write shouldn't take the whole worker down, so don't log.Fatal here.
		log.Printf("failed to write response: %v", err)
	}
}


We will also need a Dockerfile to containerize our new background worker.

FROM golang:1.17 AS build-api
WORKDIR /api
COPY main.go .
RUN CGO_ENABLED=0 GOOS=linux go build -v -o background-worker main.go

FROM gcr.io/distroless/base:latest
COPY --from=build-api /api/background-worker /background-worker
CMD ["/background-worker"]

Now that we have the code for a background worker, let’s define a place for it to run.

First, let’s set our GCP project and region, replacing <your-project> with your actual GCP project name and <your-region> with your actual GCP region. Let’s also enable the services that we need, in case they’re not enabled yet.

export GCP_PROJECT="<your-project>"
export GCP_REGION="<your-region>"
gcloud services enable artifactregistry.googleapis.com --project=${GCP_PROJECT}
gcloud services enable run.googleapis.com --project=${GCP_PROJECT}


Next, we’ll create a Docker repository in Artifact Registry and configure Docker to authenticate to it:

gcloud artifacts repositories create docker-repo --repository-format=docker --location=${GCP_REGION} --project=${GCP_PROJECT}
gcloud auth configure-docker ${GCP_REGION}-docker.pkg.dev

Now that we have a place to send our container, let’s compile our code, wrap it in a container and send it there!

docker build -t ${GCP_REGION}-docker.pkg.dev/${GCP_PROJECT}/docker-repo/background-worker .
docker push ${GCP_REGION}-docker.pkg.dev/${GCP_PROJECT}/docker-repo/background-worker


The only thing left is to tell Cloud Run to run our container:

gcloud run deploy background-foo --region=${GCP_REGION} --project=${GCP_PROJECT} \
--image=${GCP_REGION}-docker.pkg.dev/${GCP_PROJECT}/docker-repo/background-worker \
--min-instances=1 --max-instances=1 --allow-unauthenticated --no-cpu-throttling

All done!

Let’s use the service URL that the last command returned to check the status of our new background worker.

curl <service-url>

You should see the JSON status response from the Go service we just defined, with healthy set to true and the current workCount.

Summary

Cloud Run is great for stateless, autoscaling HTTP endpoints, but by using always-allocated CPU and setting both min and max instances to one, we get an always-on background worker that keeps doing its work even when no HTTP calls are being processed. The example code we used can be found here.