Using Make to Improve Your Test Suite

No video with this one - just a post with lots of code. Make has been around forever and is often overlooked in favor of tools that recreate precisely what it does, but in crappier ways. Let's see how you can use Make to help your testing process.


I’m a giant Make fan to the point where I’ve been accused of abusing it. It’s an incredibly useful tool and, when used with care, can replace a mountain of cruft and nonsense in your projects.

Let’s see how you can use Make to help you build your database as well as clean and reset your test database before each test run.

Setting Up Your Database Scripts

I know people dig migrations, which is fine as it gives you a versioned way of working with data. Hopefully you’ll see that there’s a simple way to use them with this strategy as we go on. For now I’m going to do a super simple first step and discuss changes later on (better options if you ask me). I’ll be using PostgreSQL running locally (surprise!) and NodeJS for these samples.

Create a Scripts Directory

We’re going to create a SQL script that will run before each test, so let’s pop that in a /scripts directory to keep things clean. Let’s start out by cleaning everything in the database with our build script:

    --drop everything... bye bye
    drop schema if exists public cascade;
    create schema public;

Careful now! Be sure there’s nothing you care about in your Postgres db before running this script! Dropping the public schema with cascade is scorched earth and will drop all extensions, tables, views - everything. The cascade flag tells Postgres to drop all dependent objects in order as well, so you don’t need to worry about integrity problems. In a brand new database this isn’t all that bad, but in an existing one… ouch.

Let’s run this to be sure it works. I’ll create a Postgres database called shadetree which will be our test db and then run this script using psql, the Postgres CLI:

    createdb shadetree
    psql shadetree < scripts/db.sql

You should see absolutely nothing, which is the shell’s way of telling you everything’s fine :).
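If silence makes you nervous, you can also check the exit code explicitly. Here’s a quick sketch of the convention, using `true` and `false` as stand-ins for a passing and a failing `psql` run:

```shell
# $? holds the exit code of the last command:
# 0 means success, anything else means failure
true
echo "exit code: $?"    # exit code: 0
false
echo "exit code: $?"    # exit code: 1
```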

Creating the Makefile

Now let’s create our Makefile. Make is a build utility that works with a file called a Makefile and basic shell scripts. It can be used to orchestrate the output of any project that requires a build phase. It’s part of Linux and it’s easy to use.

The power of Make comes from its super simple orchestration. We’ll see that in a minute; for now, let’s hook up our command. Create your Makefile using touch Makefile or File > New in your favorite editor. Keep it in your project root and add this:

    db:
    	psql shadetree < ./scripts/db.sql

That’s it! We create our “target” (the db thing with a colon) and our “action” (or “command”) is the second part, which is the script we ran before. This isn’t strictly bash - it just looks a lot like it. I’m not going to get into that here (it’s a bit detailed) but there’s a lot more we can do to standardize this Makefile with variables and shorthand… for the sake of staying on target (ha ha get it!) let’s keep going.

The whole structure - the target, action, etc - is called a rule. This Makefile contains the rules for building our app.

Let’s run and see what happens:

    make db
    psql shadetree < ./scripts/db.sql

Executing the Makefile

To run a Makefile, just use the make command in the same directory as your Makefile. You pass in the target you want to run and it will execute. You can see each command and its output as it’s executed.

Since we only have a single rule, we could also just run make without any arguments and the same thing will happen. By convention, the first rule in a Makefile will be executed if you don’t specify your target during execution.
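You can prove this to yourself with a throwaway Makefile - a hypothetical sketch, using a temp directory so it stays out of your project:

```shell
# a one-rule Makefile; running make with no arguments
# executes the first rule by default
dir=$(mktemp -d)
printf 'db:\n\t@echo building db\n' > "$dir/Makefile"
(cd "$dir" && make)    # prints: building db
```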

Creating Our Tests

I’m using Node and for testing I’ll use my favorite test tool: Mocha.js. Let’s quickly set that up - if you don’t know what you’re seeing have a Google to get familiar:

    mkdir test
    touch test/user_test.js

This is a pretty standard way of setting up Mocha in a project but if you don’t use Mocha that’s fine - pretend we’re using your favorite library.

OK, let’s fast-forward a bit and pretend I have created some tests for my users and I want to run them. Let’s put this all together and see how we can build our database and test suite in harmony.

Putting It All Together

We could just use JSONB to save a document, but we want to get relational! We’ll be working with users so let’s crack open our /scripts/db.sql file and define what a user looks like together with some roles:

    --drop everything... bye bye
    drop schema if exists public cascade;

    create schema public;

    create table users(
      id bigserial primary key,
      name text,
      email text not null unique,
      profile jsonb,
      created_at timestamp not null default now()
    );

    create table roles(
      id serial primary key,
      name text not null,
      privilege int not null default 0
    );

    create table users_roles(
      role_id int not null references roles(id),
      user_id bigint not null references users(id),
      primary key (user_id, role_id)
    );

    --let's add some test data
    insert into roles(name, privilege) values('Admin',99);
    insert into roles(name, privilege) values('User',10);

    insert into users(name, email) values('Admin Kim','[email protected]');
    insert into users(name, email) values('Test Jones','[email protected]');

    insert into users_roles(role_id, user_id) values (1, 1);
    insert into users_roles(role_id, user_id) values (2, 2);

Definition for users and roles

Right here is why I love working with SQL. I can do so much to protect my data, which saves quite a few lines of validation code and tests. For instance: the unique constraint on email, the composite primary key on my many-to-many join table, and the bigint (int64) primary key on my users.

You’ll also notice that I’ve added some test data in here for use with my tests. Yes, yes - I could do this in a setup block, but that’s extra code! This is a bit easier, don’t you think?

We can run this script using make:

    psql shadetree < ./scripts/db.sql
    NOTICE:  drop cascades to 3 other objects
    DETAIL:  drop cascades to table users
    drop cascades to table roles
    drop cascades to table users_roles

Hooorah! Nice clean database with test data planted. The only thing I don’t care for is the chatter - I don’t want to see this every time I run my tests and I can fix that quickly:

    db:
    	@psql shadetree < ./scripts/db.sql -q

Here I’ve updated my Makefile to 1) silence the action output using @ prepended to the command and 2) tell psql I don’t want to see the results using the -q flag (quiet mode).

Now let’s wire up our tests with another command:

    test: db
    	@mocha

    db:
    	@psql shadetree < ./scripts/db.sql -q

I’ve created a new target called test, but this time I’ve specified a dependency that must run to completion before the target executes. You do that by putting the dependency’s name to the right of the colon, as you see. Now my db target will run first, and off we go.

But there’s a bit more we have to do before we’re done.

Phony Commands

Make is used to compile code - that’s its primary use case. Compiling can be a lengthy process, so before running a rule Make checks whether a file matching the target’s name already exists and is newer than its dependencies. If it is, Make will skip the entire rule. Our targets don’t produce files, so we tell Make to always run them by declaring the rules “phony”:

.PHONY: test db

The last line of our Makefile
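To see what .PHONY buys you, here’s a hypothetical sketch in a temp directory: a file named test exists, which would normally convince Make there’s nothing to do - the phony declaration forces the rule to run anyway:

```shell
# the test rule is declared phony, so Make runs it even though
# a file named "test" already exists and looks "up to date"
dir=$(mktemp -d)
printf '.PHONY: test\ntest:\n\t@echo running tests\n' > "$dir/Makefile"
(cd "$dir" && touch test && make test)    # prints: running tests
```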

Cleaning Up

We also want to be sure we can clean things up if things go wrong, which they just might. When compiling software, you typically want to start from a clean state if there’s an error - same goes for what we’re trying to do. By convention, every Makefile should have a clean rule (some people call this rollback - it’s up to you).

We can add this quickly, which brings our rounded out Makefile to a solid state:

    test: db
    	@mocha

    db:
    	@psql shadetree < ./scripts/db.sql -q

    clean:
    	psql shadetree -c "drop schema if exists public cascade; create schema public;" -q

    .PHONY: test db clean

To run this properly we can execute a simple command:

make || make clean

If a rule returns an error Make will stop running and return an error back to the shell. We can capture that and clean things up with an “or” - || - if anything fails, clean it up!
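That short-circuit is plain shell behavior, so you can see it without Make at all - here false stands in for a failing make run:

```shell
# cmd1 || cmd2 runs cmd2 only when cmd1 exits non-zero
false || echo "first command failed - cleaning up"    # prints the message
true  || echo "this never prints"                     # prints nothing
```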

We can now build our database and add test data before each test run, stopping everything if we have a problem. Yay!

Going a Bit Further

If you’re a zsh user (which I hope you are) and use Oh My Zsh! (which I hope you do!) make sure you have the dotenv plugin active. This will load a .env file automatically if it’s present in a given directory.

We can use this to make executing our test suite a little easier on the fingers:

    alias mt="make || make clean"

Our .env file, loaded into our shell using the dotenv plugin. Now we’re jam slammin!

Handling Changes

This approach works great for getting things off the ground - but what about handling changes as the project matures? This is where migrations can be helpful - but they can also cause some serious headaches. I’ll sidestep the migration debate - I don’t particularly care for them since I’m a SQL fan - but if you like them, that’s OK… I’m not judging…

Go Ahead, Migrate!

You can create a rule to run your migrations before each test, if you like, instead of executing an entire SQL file. It’ll work just the same as long as your migration tool has a CLI component - just call it from your Make rule.

Me? I like change scripts.

Change Scripts

Once you’re deployed, feel free to wipe out the contents of the ./scripts/db.sql file. Don’t delete the file - just remove the code! It will hopefully be versioned (and tagged properly) in Git, which is where it belongs.
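One way to empty the file without deleting it is shell truncation - a quick sketch using a temp directory and a stand-in change script:

```shell
# : > file truncates in place - the file survives, its contents don't
dir=$(mktemp -d)
echo "create table users(...);" > "$dir/db.sql"
: > "$dir/db.sql"
test -s "$dir/db.sql" || echo "file exists, but is empty"
```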

Moving forward, this file will contain your alter table scripts or, more likely, the SQL you need for your next iteration. The trick here is resetting your database to the state it was in prior to executing your change script which, thankfully, is pretty easy using pg_dump.

The entire command you want is pg_dump -d shadetree > ./scripts/reset.sql. This generates a SQL file that will recreate your database for you. There are a few options you can pass if you need them - use --help if you want to see them all.

Now that you have your reset script, just run it prior to your change script:

    test: db
    	@mocha

    db: clean reset_db
    	@psql shadetree < ./scripts/db.sql -q

    reset_db:
    	@psql shadetree < ./scripts/reset.sql -q

    clean:
    	psql shadetree -c "drop schema if exists public cascade; create schema public;" -q

    .PHONY: test db reset_db clean

Notice that I now have two dependencies for my db rule: clean and reset_db. They’re executed in order - you just separate them with spaces!

Now you can build out your change script as you need and when you deploy, erase it and let Git do its job versioning your database with reset.sql and db.sql.

Summary and Some Caveats

Using a few simple lines in a Makefile and 30 or so lines of SQL, we’ve replaced the need for quite a few external dependencies: database cleaners, for one, and ORM validators along with the tests that check them. I know it’s not for everyone, but I find it very helpful.

Now for the caveats! If you’re a Make fan you might be cringing right now at my lack of convention and hard-coded names in place of variables. This is a conscious decision on my part, as the point of this whole post is automating database cleanup and builds for testing.

If you want to see what those conventions look like together with a “clean” Makefile you can hit the subscribe button above and support my efforts :).


The Code

Code for this post (and for the entire series) can be found in the production GitHub repo.
