2020-11-11 · ~5 min read · 885 words
Continuing my foray into becoming a better tester, today I’m exploring the Test Object Factory. This pattern shares some similarities with the case-factory approach I wrote about in Using Test Factories To Increase Confidence: both are about highlighting what’s different between tests. The key distinction is that while case factories focus on highlighting the differences between assertions, object factories are most useful in test setup.
Sometimes called an object mother (presumably because the factory produces the objects used by tests), the test object factory can be thought of as shared setup for tests. The factory is deliberately generic: it creates an object common to all scenarios (though this shouldn’t be taken as dogma - it’s entirely possible to have multiple factories within a single test file to address different scenarios).
By delivering a common foundation to multiple tests, the object factory highlights the differences, the places where a test must diverge from the common path. Caveat emptor: While the factory can be used to highlight differences, it’s also possible to bury critical information in the setup function, which can be considered an anti-pattern.1
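As a minimal sketch of the object-mother idea (the names here are my own, not from the tests below), a factory might hand out the same defaults to every scenario while letting a test override only what it cares about:

```js
// Hypothetical "object mother" for a blog's posts. Every scenario
// starts from the same defaults; a test overrides only what matters.
function makePost(overrides = {}) {
  return {
    id: "post-1",
    title: "Hello World",
    body: "Lorem ipsum",
    published: true,
    ...overrides,
  }
}

// A draft-specific scenario spells out only the one field that differs:
const draft = makePost({ published: false })
console.log(draft.published) // false
console.log(draft.title) // "Hello World" (inherited default)
```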
It’s also worth calling out how the object factory differs from using a `beforeEach` hook. The intent behind `beforeEach` is to create a consistent, independent testing environment for each test (cleanup, etc.). So, while it’s technically possible to create a similar type of experience through `beforeEach`, I prefer the explicitness of the object factory and the ease with which the provided foundation can be extended to suit the specific needs of a test.
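To make that contrast concrete, here’s a small runnable sketch of my own (where `beforeEachHook` stands in for Jest’s real `beforeEach`): the hook style hides where the shared state comes from, while the factory style names its origin right in the test body.

```js
// beforeEach style: shared mutable state at module scope, reset by the
// framework before every test (beforeEachHook stands in for the hook).
let sharedReq

function beforeEachHook() {
  sharedReq = { body: {}, params: {} }
}

// Factory style: each test explicitly asks for a fresh object.
function setup() {
  return { req: { body: {}, params: {} } }
}

// With the hook, the test body never shows where `sharedReq` came from:
beforeEachHook()
sharedReq.params = { id: "abc" }

// With the factory, the origin of `req` is visible in the test itself,
// and every call returns an independent object:
const { req } = setup()
req.params = { id: "abc" }
console.log(req.params.id) // "abc"
```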
While theory is all fine and dandy, let’s look at a small example of how we can refactor two tests so that they can benefit from a test object factory.
For the purposes of this example, we’ll be examining a blog’s API and validating that it behaves as expected when interacting with posts. While I’m using Jest, the principles are framework agnostic:
test("calling a getPosts without parameters returns all posts", async () => {
// arrange
const req = {}
const res = { json: jest.fn() }
// act
await postController.getPosts(req, res)
// assert
expect(res.json).toHaveBeenCalledTimes(1)
const firstCall = res.json.mock.calls[0]
const firstArg = firstCall[0]
const { posts } = firstArg
const actualPosts = await db.getPosts()
expect(actualPosts).toEqual(posts)
})
test("calling getPost with an postId returns that post", async () => {
// Arrange
const testPost = await db.insertPost(generate.postData())
const req = { params: { id: testPost.id } }
const res = { json: jest.fn() }
// Act
await postsController.getPost(req, res)
// Assert
expect(res.json).toHaveBeenCalledTimes(1)
const firstCall = res.json.mock.calls[0]
const firstArg = firstCall[0]
const { post } = firstArg
expect(post).toEqual(testPost)
})
//... and many more tests that all use some similar variants of `req` and `res`
Both of these tests require `req` and `res` variables, but they’re slightly different. Imagine for a moment that the number of properties they share is significant (i.e., instead of only a few properties on the `req` and the `res`, there are dozens), making the differences subtle and hard to detect - particularly if the setup is done inline as above.
To mitigate the detective work involved, we can abstract away the shared components by adding a `setup` function to the top of the file.2 Each test then invokes this new function, which returns the components common to the tests. Notably, with this pattern each test can decide which components it needs and modify them accordingly. For example:
```js
function setup() {
  const req = {
    body: {},
    params: {},
  }
  const res = {
    json: jest.fn(),
  }
  return { req, res }
}
```
test("calling a getPosts without parameters returns all posts", async () => {
const {req, res} = setup()
//... carry on
}
test("calling getPost with an postId returns that post", async () => {
const {req, res} = setup()
req.params = { id: testPost.id }
//... carry on
}
With this pattern, understanding what is unique to a test is much easier. While both tests are set up with a `req` and a `res` object, the latter needs more than what’s provided by default, so it overrides the `params` key on its request. This divergence, however, is no longer buried in lines of code that all look the same!
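One variation I find useful (my own extension, not part of the tests above) is letting `setup` accept overrides, so a test’s divergence is declared right at the call site. In this sketch, `res.json` is a plain recording stub standing in for `jest.fn()` so the example runs on its own:

```js
// Hypothetical variant of setup() that merges per-test overrides into
// the defaults; res.json is a plain recorder in place of jest.fn().
function setup(overrides = {}) {
  const req = { body: {}, params: {}, ...overrides.req }
  const res = {
    calls: [],
    json(payload) {
      this.calls.push(payload)
    },
  }
  return { req, res }
}

// The test's divergence ("I need an id param") is now a one-liner:
const { req, res } = setup({ req: { params: { id: "abc" } } })
console.log(req.params.id) // "abc"
```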
Testing is a form of communication. We are trying to communicate - to ourselves, now and in the future, as well as to our colleagues - that our code works as expected. The test object factory pattern helps to this end by focusing on what’s relevant. The danger that tests will be copied along with unnecessary information remains; however, this pattern makes it more obvious when that happens by highlighting variance from the baseline. When a test adds a property that wasn’t accounted for in a setup method, the natural question is why. Is it actually needed? By drawing attention to it, you and future developers will be able to peel away the cruft more readily and focus each test on only what needs to be there.
Hi there and thanks for reading! My name's Stephen. I live in Chicago with my wife, Kate, and dog, Finn. Want more? See about and get in touch!