Brandur Leach lays out the argument:
> The textbook example of this is the database mock. Here’s a rough articulation of the bull case for this idea: CPUs are fast. Memory is fast. Disks are slow. Why should tests have to store data to a full relational database with all its associated bookkeeping when that could be swapped out for an ultra-fast, in-memory key/value store? Think of all the time that could be saved by skipping that pesky fsync, not having to update that plethora of indexes, and foregoing all that expensive WAL accounting. Database operations measured in hundreds of microseconds or even *gasp*, milliseconds, could plausibly be knocked down to 10s of microseconds instead.
Prior to reading the article, my stance was as follows: use database mocks for unit tests where you aren’t testing the actual data processing or retrieval, so those tests can run on an isolated build server with no access to a database. But you also need proper integration tests that cover how you interact with the database, and those should make up the majority of your test suite. The database should be in a known state before each test run (which is where Docker containers or database snapshots become extremely helpful), and passing the database tests should be an early gate in the CI/CD process.
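To make that concrete, here is a minimal sketch of such an integration-test fixture using pytest and psycopg2. It assumes a Postgres instance is already running and reachable (for example, in a Docker container started by the CI pipeline), and the DSN, `app_test` database, and `users` table are purely illustrative:

```python
# Sketch of an integration-test fixture. Assumes a Postgres instance is
# already reachable (e.g. a Docker container started by CI); all names and
# credentials below are illustrative.
import psycopg2
import pytest

DSN = "host=localhost port=5432 dbname=app_test user=app password=app"

@pytest.fixture
def db():
    """Yield a connection to a database reset to a known state."""
    conn = psycopg2.connect(DSN)
    with conn, conn.cursor() as cur:
        # Rebuild the schema so every test starts from the same known state;
        # a snapshot restore or a migration runner could stand in here instead.
        cur.execute("DROP SCHEMA public CASCADE; CREATE SCHEMA public")
        cur.execute("CREATE TABLE users (id serial PRIMARY KEY, email text UNIQUE)")
    yield conn
    conn.close()

def test_insert_and_fetch_user(db):
    # Exercises the real database (constraints, indexes, transactions),
    # which is exactly what a mock would skip.
    with db, db.cursor() as cur:
        cur.execute("INSERT INTO users (email) VALUES (%s) RETURNING id",
                    ("a@example.com",))
        (user_id,) = cur.fetchone()
        cur.execute("SELECT email FROM users WHERE id = %s", (user_id,))
        assert cur.fetchone() == ("a@example.com",)
```

Resetting the schema inside the fixture is just one way to get back to a known state; restoring a database snapshot or rerunning your migration tooling serves the same purpose at larger scale.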
After reading the article, my priors remain unchanged: there’s still scope for database mocks, but not as a replacement for proper integration testing against the database.