Using Make to Improve Your Test Suite


I'm a giant Make fan to the point where I've been accused of abusing it. It's an incredibly useful tool and, when used with care, can replace a mountain of cruft and nonsense in your projects.

Let's see how you can use Make to help you build your database as well as clean and reset your test database before each test run.

Setting Up Your Database Scripts

I know people dig migrations, which is fine as they give you a versioned way of working with your schema. Hopefully you'll see that there's a simple way to use them with this strategy as we go on. For now I'm going to take a super simple first step and discuss handling changes later on (a better option, if you ask me). I'll be using PostgreSQL running locally (surprise!) and Node.js for these samples.

Create a Scripts Directory

We're going to create a SQL script that will run before each test, so let's pop that in a /scripts directory to keep things clean. Let's start out by cleaning everything in the database with our build script:

    --drop everything... bye bye
    drop schema if exists public cascade;
    
    --recreate
    create schema public;

Careful now! Be sure there's nothing you care about in your Postgres db before running this script - a quick \dt in psql will show you any existing tables. Dropping the public schema with cascade is scorched earth and will drop all extensions, tables, views - everything. Using the cascade flag tells Postgres to drop all dependent objects in order as well, so you don't need to worry about integrity problems. In a brand new database this isn't all that bad but in an existing one... ouch.

Let's run this to be sure it works. I'll create a Postgres database called shadetree which will be our test db and then run this script using psql, the Postgres CLI:

    createdb shadetree
    psql shadetree < scripts/db.sql

You should see a couple of terse command tags - DROP SCHEMA and CREATE SCHEMA - which is psql telling you everything's fine :).

Creating the Makefile

Now let's create our Makefile. Make is a build utility that works with a file called a Makefile, where each task is a short shell-like script. It can be used to orchestrate the output of any project that requires a build phase. It ships with (or is a one-line install on) just about every Unix-like system and it's easy to use.

The power of Make comes from its super simple orchestration. We'll see that in a minute - for now let's hook up our command. Create your Makefile using touch Makefile or File > New in your favorite editor. Keep it in your project root and add this:

    db:
        psql shadetree < ./scripts/db.sql

That's it! We create our "target" (the db part with the colon) and our "action" (or "command") is the second part, which is the script we ran before. One gotcha: the action line has to be indented with a tab, not spaces, or Make will complain. The action isn't strictly bash - it just looks a lot like it. I'm not going to get into that here (it's a bit detailed) but there's a lot more we can do to standardize this Makefile with variables and shorthand... for the sake of staying on target (ha ha get it!) let's keep going.

The whole structure - the target, action, etc - is called a rule. This Makefile contains the rules for building our app.
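If it helps to see the shape in isolation, here's about the smallest rule you could write - say_hello is a made-up name and has nothing to do with our project, it's just here to show the anatomy:

    # "say_hello" is the target; the indented line below is the action
    # (remember: that indent has to be a real tab)
    say_hello:
        echo "Make ran the say_hello rule"

Run make say_hello and you'd see the echo command printed first and then its output right after - the same kind of chatter we'll quiet down later on.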

Let's run and see what happens:

    make db
    psql shadetree < ./scripts/db.sql
    DROP SCHEMA
    CREATE SCHEMA

Executing the Makefile

To run a Makefile just use the make command in the same directory as your Makefile. You pass in the target you want to run and it will execute. You can see each command and its output as it's executed.

Since we only have a single rule, we could also just run make without any arguments and the same thing will happen. By default, the first rule in a Makefile is the one that runs if you don't specify a target.

Creating Our Tests

I'm using Node and for testing I'll use my favorite test tool: Mocha.js. Let's quickly set that up - if you don't know what you're seeing have a Google to get familiar:

    mkdir test
    touch test/user_test.js

This is a pretty standard way of setting up Mocha in a project but if you don't use Mocha that's fine - pretend we're using your favorite library.
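If Mocha is new to you, here's a minimal sketch of what test/user_test.js might hold to start - it assumes you've installed Mocha locally (npm install --save-dev mocha) and nothing else; the assertion is just a placeholder:

    // test/user_test.js - a placeholder so Mocha has something to run
    const assert = require("assert");

    describe("users", () => {
      it("will have real tests soon", () => {
        assert.ok(true);
      });
    });

Running npx mocha from the project root should pick it up, since Mocha looks in the test directory by default.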

OK, let's fast-forward a bit and pretend I have created some tests for my users and I want to run them. Let's put this all together and see how we can build our database and test suite in harmony.

Putting It All Together

We could just use JSONB to save a document, but we want to get relational! We'll be working with users so let's crack open our /scripts/db.sql file and define what a user looks like together with some roles:

    --drop everything... bye bye
    drop schema if exists public cascade;

    --recreate
    create schema public;

    create table users(
      id bigserial primary key,
      name text,
      email text not null unique,
      profile jsonb,
      created_at timestamp not null default now()
    );

    create table roles(
      id serial primary key,
      name text not null,
      privilege int not null default 0
    );

    create table users_roles(
      role_id int not null references roles(id),
      user_id bigint not null references users(id),
      primary key (user_id, role_id)
    );

    --let's add some test data
    insert into roles(name, privilege) values('Admin',99);
    insert into roles(name, privilege) values('User',10);

    insert into users(name, email) values('Admin Kim','admin@test.com');
    insert into users(name, email) values('Test Jones','test@test.com');

    insert into users_roles(role_id, user_id) values (1, 1);
    insert into users_roles(role_id, user_id) values (2, 2);

Definition for users and roles

Right here is why I love working with SQL. I can do so much to protect my data, which saves quite a few lines of validation code and tests. For instance: the unique constraint on email, the composite primary key on my many-to-many join table and the bigint (int64) primary key on my users.

You'll also notice that I've added some test data in here for use with my tests. Yes, yes - I could do this in a setup step but that's extra code! This is a bit easier, don't you think?
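To show how that seed data gets used, here's a sketch of what one of my user tests might look like. It assumes the pg package (npm install pg) and a locally running shadetree database with default connection settings - none of that is required by anything above, it's just one way to wire a test to the data:

    // test/user_test.js - a sketch; assumes the pg package and a local shadetree db
    const assert = require("assert");
    const { Pool } = require("pg");

    // connection defaults (local socket, current OS user) are assumptions here
    const db = new Pool({ database: "shadetree" });

    describe("users", () => {
      after(() => db.end());

      it("seeds Admin Kim with the Admin role", async () => {
        const { rows } = await db.query(
          `select u.name, r.name as role
             from users u
             join users_roles ur on ur.user_id = u.id
             join roles r on r.id = ur.role_id
            where u.email = $1`,
          ["admin@test.com"]
        );
        assert.strictEqual(rows.length, 1);
        assert.strictEqual(rows[0].role, "Admin");
      });
    });

Because the database gets rebuilt and reseeded before Mocha runs, a test like this can rely on that data being there every single time.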

We can run this script using make:

    make
    psql shadetree < ./scripts/db.sql
    NOTICE:  drop cascades to 3 other objects
    DETAIL:  drop cascades to table users
    drop cascades to table roles
    drop cascades to table users_roles
    DROP SCHEMA
    CREATE SCHEMA
    CREATE TABLE
    CREATE TABLE
    CREATE TABLE
    INSERT 0 1
    INSERT 0 1
    INSERT 0 1
    INSERT 0 1
    INSERT 0 1
    INSERT 0 1

Hooorah! Nice clean database with test data planted. The only thing I don't care for is the chatter - I don't want to see this every time I run my tests and I can fix that quickly:

    db:
        @psql shadetree < ./scripts/db.sql -q

Here I've updated my Makefile to 1) stop Make from echoing the command itself by prepending @ to it and 2) tell psql I don't want to see the results using the -q flag (quiet mode).

Now let's wire up our tests with another command:

    test: db
        mocha

    db:
        @psql shadetree < ./scripts/db.sql -q

I've created a new target called test, but this time I've specified a dependency that must run to completion before the target's action executes. You do that by listing the dependency just to the right of the colon, as you can see. Now my db target will run first and off we go.

But there's a bit more we have to do before we're done.

Phony Commands

Make's primary use case is compiling code, which can be a lengthy process. So before running a rule, Make checks whether a file with the target's name already exists and is up to date - if it is, Make skips the rule entirely. Our targets don't actually produce files, so a stray file (or directory) named test or db could confuse Make into thinking there's nothing to do. You can override that check by declaring the rules "phony":

    .PHONY: test db

The last line of our Makefile

Cleaning Up

We also want to be sure we can clean things up if things go wrong, which they just might. When compiling software, you typically want to start from a clean state if there's an error - same goes for what we're trying to do. By convention, every Makefile should have a clean rule (some people call this rollback - it's up to you).

We can add this quickly, which brings our rounded out Makefile to a solid state:

    test: db
        mocha

    db:
        @psql shadetree < ./scripts/db.sql -q

    clean:
        psql shadetree -c "drop schema if exists public cascade; create schema public;" -q

    .PHONY: test db clean

To run this properly we can execute a simple command:

    make || make clean

If a rule returns an error Make will stop running and return an error back to the shell. We can capture that and clean things up with an "or" - || - if anything fails, clean it up!

We can now build our database and add test data before each test run, stopping everything if we have a problem. Yay!

Going a Bit Further

If you're a zsh user (which I hope you are) and use Oh My Zsh! (which I hope you do!) make sure you have the dotenv plugin active. This will load a .env file automatically if it's present in a given directory.

We can use this to make executing our test suite a little easier on the fingers:

    alias mt="make || make clean"

Our .env file, loaded into our shell using the dotenv plugin
Now we're jam slammin!

Handling Changes

This approach works great for getting things off the ground - but what about handling changes as the project matures? This is where migrations can be helpful - but they can also cause some serious headaches. I'll sidestep the discussion on migrations - I don't particularly care for them since I'm a SQL fan - but if you like them that's OK... I'm not judging...

Go Ahead, Migrate!

You can create a rule to run your migrations before each test if you like, instead of executing an entire SQL file. It'll work just the same as long as your migration tool has a CLI component - just call it from your Make rule, as in the sketch below.
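For instance - and this is purely a sketch, assuming you happen to be using knex and have it installed as a project dependency - the db rule could call its CLI instead of psql:

    # a sketch only - swap in whatever your migration tool's CLI command is
    db:
        @npx knex migrate:latest

Anything with a command-line runner slots in the same way.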

Me? I like change scripts.

Change Scripts

Once you're deployed, feel free to empty out the ./scripts/db.sql file. Don't delete the file - just remove the code! The old version will hopefully be versioned (and tagged properly) in Git, which is where it belongs.

Moving forward, this file will contain your alter table scripts or, more likely, the SQL you need for your next iteration. The trick here is resetting your database to the state it was in prior to executing your change script which, thankfully, is pretty easy using pg_dump.

The entire command you want is pg_dump -d shadetree > ./scripts/reset.sql. This will generate a SQL file that recreates your tables and data for you. There are a few options you can send in if you need them - use --help if you want to see them all.

Now that you have your reset script, just run it prior to your change script:

    test: db
        mocha

    db: clean reset_db
        @psql shadetree < ./scripts/db.sql -q

    reset_db:
        @psql shadetree < ./scripts/reset.sql -q

    clean:
        psql shadetree -c "drop schema if exists public cascade; create schema public;" -q

    .PHONY: test db reset_db clean

Notice that I now have two dependencies for my db rule: clean and reset_db. They're executed in order - you just need to separate them with spaces!

Now you can build out your change script as you need and when you deploy, erase it and let Git do its job versioning your database with reset.sql and db.sql.

Summary and Some Caveats

Using a few simple lines in a Makefile and 30 or so lines of SQL, we've replaced the need for quite a few external dependencies: database cleaners, for one, and ORM validators (along with the tests to be sure they work as we want). I know it's not for everyone, but I find it very helpful.

Now for the caveats! If you're a Make fan you might be cringing right now at my lack of convention and hard-coded names in place of variables. This is a conscious decision on my part as the point of this whole post was how to automate database clean & build with testing.

If you want to see what those conventions look like together with a "clean" Makefile you can hit the subscribe button above and support my efforts :).