Deploying Go + React to Heroku using Docker, Part 2 (the database)
In this article, we will follow on from Part 1 of this series, and add a database to our Heroku stack.
For best results, complete Part 1, or at least check out the code from here and get your environment working. The completed solution can be found in the database branch of the repo.
What you’ll build
Extending the app we built in Part 1, we will add the capability for the /ping endpoint to return the duration since the last request, so the longer you wait between pings, the larger the value becomes. There will be no changes to the client application.
What you’ll need
Everything from the previous tutorial, plus a local Postgres instance (I recommend Docker!).
Getting Started
First, let’s provision a Postgres instance by pulling the image down from Docker Hub. Note that the following is a single line:
$ docker run -p 5432:5432 --name go-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
Now we have a Postgres instance running locally with port 5432 exposed. Let’s create the database we will be using for this tutorial.
# Log into the container using the postgres user and start psql
$ docker exec -it -u postgres go-postgres psql

# Create the database using the create command
postgres=# create database gotutorial;

# Exit psql and the container
postgres=# \q

# You should be back in your terminal.
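As a side note, the same database can be created non-interactively with the createdb client tool, which ships in the postgres image:

```shell
$ docker exec -u postgres go-postgres createdb gotutorial
```

Either way gets you an empty gotutorial database ready for migrations.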
Database migrations
Now that we have a database at our disposal, we need a robust way to migrate the schema up (and down if needed). Ideally, we don’t want to write this logic ourselves, so for this tutorial I have chosen the Goose database migration tool, which has a nice way of specifying SQL- and Go-based migrations.
Make sure that your $GOPATH/bin directory is in your $PATH:
$ export PATH=$PATH:$(go env GOPATH)/bin

# Install the tool
$ go get -u github.com/pressly/goose/cmd/goose
Now we should be able to invoke the goose binary from wherever we like. In our case, let’s keep all of our migrations in a directory called migrations in the root of our project. Let’s create our first migration.
$ mkdir migrations
$ goose -dir migrations create initial_seed sql
This command will create a new timestamped file for us in the /migrations directory. I like the fact that a single migration file holds the command(s) to migrate the database both up and down. Open the newly created migration file and fill in the SQL to migrate the schema up and down.
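A minimal version of the migration might look like the following. The exact columns are an assumption here (a single created_at timestamp is enough for this feature), so compare against the completed solution in the repo if yours differs:

```sql
-- +goose Up
-- SQL in this section is executed when the migration is applied.
CREATE TABLE ping_timestamp (
    id SERIAL PRIMARY KEY,
    created_at TIMESTAMP NOT NULL DEFAULT now()
);

-- +goose Down
-- SQL in this section is executed when the migration is rolled back.
DROP TABLE ping_timestamp;
```

The -- +goose Up and -- +goose Down annotations are how goose knows which statements belong to which direction.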
Now let’s run the thing from the root of our project. Note that the following is a single line:
$ goose -dir migrations postgres "postgres://postgres:mysecretpassword@localhost:5432/gotutorial?sslmode=disable" up
To break this down:
- we are telling goose to run the migrations found in the migrations directory
- the driver will be postgres
- the database connection string is postgres://postgres:mysecretpassword@localhost:5432/gotutorial?sslmode=disable
- and we want to migrate the database up
Let’s see how the database looks now using the psql tool in the container.
$ docker exec -it -u postgres go-postgres psql

# \c will connect to the database we have created
postgres=# \c gotutorial
You are now connected to database "gotutorial" as user "postgres".
gotutorial=# \dt
List of relations
Schema | Name | Type | Owner
--------+------------------+-------+----------
public | goose_db_version | table | postgres
public | ping_timestamp | table | postgres
(2 rows)

gotutorial=# \q
As you can see, there are two tables in our database, but we only wrote the SQL to create ping_timestamp. That is because goose keeps its own table, goose_db_version, to ensure that it only runs each migration once, so we can migrate up as many times as we like without any trouble. We could run goose again with a down command and destroy ping_timestamp.
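For example, rolling back the most recent migration uses the same connection string with the down command (again, a single line):

```shell
$ goose -dir migrations postgres "postgres://postgres:mysecretpassword@localhost:5432/gotutorial?sslmode=disable" down
```

Running \dt in psql afterwards should show that ping_timestamp is gone; run the up command again to restore it before continuing.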
Extend the API
Let’s get the API code talking to the database. To do this, we will use the standard database/sql package and the lib/pq Postgres driver. We will not be covering any ORM-style capability of the pq driver, but you are welcome to have a look and see what you think.
Enhance main.go
We will add two new functions to our main.go server. Let’s have a look at how it might look now.
Alright, there are a few things to go over here. First, let's focus on the main method.
Setting up the database
This is where we are good little 12-factor application developers and fetch the database details from the environment. DATABASE_URL is the standard variable name that Heroku will inject into our runtime for us to access the database. We then use the standard database/sql library to open a connection to the Postgres database (the lib/pq driver is loaded back in the imports).
Application logic
Great, now we have the ability to interact with a database. Let's register a simple ping timestamp so we can calculate the time elapsed since the last request.
Our new function pingFunc takes in a reference to our connection and does two things:
- Defer a call to the registerPing function that will eventually insert a row into our ping_timestamp table to capture the time this call was invoked. Read more about defer here.
- Select the latest entry from ping_timestamp and calculate the time elapsed, so we can show the end user.
That’s about it! There is no change needed for the client at this stage since there is no change to the server side contract.
Run it
Now we should be in a position to run the application locally. Again, we will start the client and server in individual terminal windows. We will have to tell the process where the database is, so we will set DATABASE_URL in the environment at startup time.
# From the /server directory
$ DATABASE_URL=postgres://postgres:mysecretpassword@localhost:5432/gotutorial?sslmode=disable go run main.go

# From the /client directory
$ npm start
Deploying to Heroku
Now that our local environment is taken care of, we can migrate our database up and down, and the client and server are working well together. Next, let’s provision a Postgres add-on for our Heroku app using the Heroku CLI.
# From the root directory
$ heroku addons:create heroku-postgresql:hobby-dev
Creating heroku-postgresql:hobby-dev on ⬢ shielded-caverns-93486... free
Database has been created and is available
! This database is empty. If upgrading, you can transfer
! data from another database with pg:copy
Created postgresql-tetrahedral-94851 as DATABASE_URL
Use heroku addons:docs heroku-postgresql to view documentation
You will now be able to see that we have successfully provisioned a database to our environment.
$ heroku addons
Add-on Plan Price State
───────────────── ───────── ───── ───────
heroku-postgresql hobby-dev free created
 └─ as DATABASE_URL

The table above shows add-ons and the attachments to the current app (glacial-tor-13081) or other apps.
To get our new database up to scratch, you guessed it, we are going to have to run our database migrations against it. For this, we are going to hook into the Heroku release phase of the build; you can read more about it here. Essentially, the release phase allows us to run things like a database migration before the code is deployed. If there is an error in the migration, the code will not be deployed.
If a release phase task fails, the new release is not deployed, leaving your current release unaffected.
The Release Phase
You might recall in the previous article our multi-stage docker build. The first container that we create is a build container that has everything necessary to build our Go API. Heroku lets us hook into this interim container at release time if we choose to reuse it, and that is exactly what we will do.
First, let’s ensure that goose is installed in the container, so modify your Dockerfile to look like the one below.
# Build the Go API
FROM golang:latest AS builder
ADD . /app
WORKDIR /app/server
RUN go mod download
RUN go get -u github.com/pressly/goose/cmd/goose
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags "-w" -a -o /main
Now we will create a script that allows us to execute the migration. Create a file called migrate.sh in the /server directory with the following contents:
#!/bin/sh

echo "$DATABASE_URL"
goose -dir ../migrations postgres "$DATABASE_URL" up
Notice that we are referencing $DATABASE_URL here; it is no coincidence that we use the same variable in main.go. This is the environment variable that Heroku injects into our runtime for us to use, and Heroku makes any and all environment variables available to us in the release phase. We also reference ../migrations, since we created this directory in the root of the project and the working directory of this container is /server.
Make it executable:
$ chmod +x migrate.sh
By adding this file to the /server directory, it will be made available in the docker image (see the WORKDIR command in the Dockerfile). We can now tell Heroku how we want to migrate by updating the heroku.yml file located in the project root. Please update it so it looks like the following:
build:
  docker:
    web: Dockerfile
    worker:
      dockerfile: Dockerfile
      target: builder
release:
  image: worker
  command:
    - ./migrate.sh
Notice how we create a reference to the builder container, which we will call worker in this context. We then tell Heroku to run our migration script from that container to complete the release phase.
That is a whole lot to take in, but let's go ahead and push this up to Heroku and see how it behaves. For this, I recommend using two terminals so we can tail the logs of our instance.
# Add and commit all of our changes, then push.
$ git add .
$ git commit -m 'Adding database support'
$ git push heroku master

# In a new terminal, tail the container logs
$ heroku logs --tail
Hopefully, you should see interesting things in each terminal window. Once the release is complete, navigate to your production site and see the numbers tick up!
Tip: use $ heroku apps:info to find your production URL and other details.
Summary
Congratulations on making it this far! We now have an application that can store data in Postgres, which is a great second step. Go ahead and play around with some database migrations and databasey things. Try to migrate a database up and then down again to see how it behaves. Go nuts!
In part 3 we will look at securing our application using Auth0, a popular IDAM product. At the time of writing this article, you can use a free account for up to 1000 users, which should be enough to get your MVP up and running. If you exceed that threshold then congratulations, you’ve probably got some good problems to solve :)