Ben Brazier

Posted on • Originally published at torvo.com.au

Why Mocks Are Considered Harmful

Automated testing during software development involves many different techniques, but one that shouldn’t be used is mocking. Mocks are a distraction at best and provide false confidence at worst.

Mocks Considered Harmful

What is Mocking?

It is common for software developers to use mocks to simulate the behaviour of code that makes network calls to other services or accesses a database. This enables unit tests that are both:

  • Fast because they don’t need to rely on additional services.
  • Stable because they avoid availability issues.

This means that mocks are generally used for code with side effects, which is code that relies on or modifies something outside its parameters. This lets us classify functions as:

  • Pure: A function without any side effects.
  • Impure: A function that contains one or more side effects.

Pure vs Impure Functions
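To make the classification concrete, here is a minimal sketch (the function names and the SQLite table are invented for illustration; the article does not prescribe a language):

```python
import sqlite3

def add_tax(price, rate):
    """Pure: the result depends only on the arguments and there are no side effects."""
    return price * (1 + rate)

def save_price(conn, price):
    """Impure: writes to a database, i.e. modifies something outside its parameters."""
    conn.execute("INSERT INTO prices (value) VALUES (?)", (price,))
    conn.commit()

# The pure function can be unit tested directly, no mocks required:
assert round(add_tax(100.0, 0.1), 2) == 110.0

# The impure function needs a real (here, in-memory) database to exercise:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (value REAL)")
save_price(conn, add_tax(100.0, 0.1))
```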

The Problems with Mocks

Mocks aren’t equivalent to the integrations they replace. If you mock a database client then you haven’t tested the integration with the real client. This means that your code may work with the mock but you will still need to do integration testing to make sure it works without mocks.

Feature Parity Is Not Feasible. If you make a quick mock then it won’t return useful data. The more time you spend improving the mock the more useful the data will be. However it can never be a true representation.

Mocks that aren’t used are a waste of time and effort. If you mock out a database client and don’t use it then there is no point mocking it. This can occur if some code requires valid configuration to initialise but doesn’t use it.

Mocks Are Not Equivalent

How Do We Replace Mocks?

Mocks are used to provide speed and stability but we can manage this in other ways.

Refactor your code! We can replace the need for mocks by separating the pure from the impure functions. Pure functions can be unit tested without mocks and impure functions should only be integration tested.

Code Refactoring Example
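As a sketch of the refactoring idea (names and schema are hypothetical): before the split, pricing logic and the database write would live in one function, forcing a mock into any unit test. After the split, the pure part is trivially testable and only the thin impure wrapper needs an integration test:

```python
import sqlite3

def discounted_total(prices, discount):
    """Pure business logic: unit test this directly, no mocks needed."""
    return sum(prices) * (1 - discount)

def save_total(conn, total):
    """Thin impure wrapper: cover this with an integration test instead."""
    conn.execute("INSERT INTO totals (value) VALUES (?)", (total,))
    conn.commit()

# The pure part is unit tested without any test doubles:
assert discounted_total([10.0, 20.0], 0.5) == 15.0

# The impure part is exercised against a real (here, in-memory) database:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE totals (value REAL)")
save_total(conn, discounted_total([10.0, 20.0], 0.5))
```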

Improve Your Automation! By automating software packaging, deployment, and testing, we can get to integration testing faster instead of relying on unit tests. This also enables continuous delivery and reduces the impact of “it works on my machine”, both of which are beneficial in modern software development.

Automated Build, Deploy, & Test

Summary

Mocking is a short term solution and a long term problem. If you want to deliver software faster then you should spend less time on mocks and more time on refactoring and automation.

If you would like to see more content like this, follow me on Medium.

Let me know your thoughts on Twitter @BenTorvo or by Email ben@torvo.com.au

Discussion (5)

JoelBonetR • Edited on

The reason behind mocks is not to enable tests at an early stage but to conceptually decouple the dependencies between backend and frontend teams.

They have nothing to do with unit tests (unit tests validate that a function returns the expected output for a given input; good tests do that multiple times with different inputs to cover every possible situation).
They also have nothing to do with end-to-end tests (by definition, those need to be done AFTER the integration).

1. Let's say the backend team estimates its tasks at a week and the frontend team estimates theirs at another week. The delivery would then be in 2 weeks.

2. Instead, both teams agree on a model, schema, or response structure; they mock it up as the contract and use it to perform further development until both parts are finished (and unit tested... hopefully).

This work of defining the needs that lead to a contract is necessary anyway, so no time is wasted here. The mock can also be auto-generated from the contract, so again, no time wasted.
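The "auto-generated from the contract" step can be sketched in a few lines (the contract, field names, and placeholder values here are all invented for illustration; real teams would typically derive this from an OpenAPI or JSON Schema document):

```python
# Hypothetical contract both teams agreed on: field name -> expected type.
USER_CONTRACT = {"id": int, "name": str, "active": bool}

# Placeholder value per type; real generators produce richer fake data.
DEFAULTS = {int: 0, str: "example", bool: True}

def mock_from_contract(contract):
    """Generate a mock response whose shape matches the agreed contract."""
    return {field: DEFAULTS[ftype] for field, ftype in contract.items()}

mock_user = mock_from_contract(USER_CONTRACT)
# The mock has exactly the contract's fields, each with the right type:
assert set(mock_user) == set(USER_CONTRACT)
assert all(isinstance(v, USER_CONTRACT[k]) for k, v in mock_user.items())
```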

We've only used a week at this point; then you just need, let's say, one more day to integrate (usually less).

3. The integration process begins. If the backend and frontend implementations are correct from the contract's point of view, it will be OK on the first try; otherwise one team or the other will need to make further changes to adapt, which is why integrations are estimated according to the model's complexity.

Note that we reduced the delivery time of that [feature or whatever] by 4 labour days by using mocks.

4. Once it's finished, the QA team will apply integration tests, also called end-to-end tests (which are by no means replaceable by unit tests) and which are not usually a developer responsibility.



The main reasons for using mocks are:
  • Clients can't understand why both teams can't work in parallel to meet their need of having the features ready by a given deadline, because investors are pushing on them as well.
  • It's quite hard to keep backend and frontend in sync so everyone has things to do; if one team is waiting for the other to finish (blockers), the people paying you don't like that either, for obvious reasons.

Nothing more, nothing less. It's convenience.



On the other hand, I don't really know how you plan to call my "pure functions" or "methods" when they are private to their context.
import { schema } from 'whatever';

// not callable from outside this context
// (assuming a synchronous validator that returns { error } or the parsed data)
const validateInput = (input) => schema.validate(input);

// what the unit test will actually test
export default function doSomething(input) {
  const validation = validateInput(input);
  if (validation.error) return;

  // also not callable from outside this context
  const doMagic = (data) => {
    return data.specs;
  };

  const result = doMagic(validation);
  return JSON.stringify(result);
}

Do you plan to export every single function so you can test them individually? That's ridiculous!


Other reasons for using mocks:

  • Mocks can either be deleted once the integration is done or stored in a tool so you can see differences between them for future updates; they are a good place for a quick search to find when a change was made, so you can check the email to see the client request 😂

  • They are also good when doing PoCs, because usually the user will require changes to them before they qualify as "valid" to start developing on top of. If you build the DB model/schema, migrations, validations, CRUD functions, etc. for real, then when the customer requires a change you'll need to edit all of those steps instead of a single mock, which is clearly a loss of time (and, more often than we would like to admit, garbage from those changes will be left in the code).

  • The tests performed on functions that use mocks will also work after the integration. If a test fails after the integration step, chances are the issue is either a contract break by one of the teams or in the data (or lack of it), and that is usually the first thing to look at after double-checking the contract implementation.




TLDR;
  • The industry won't stop using mocks and there are good reasons for that.
  • No one considers mocks harmful.
Jesse Warden • Edited on

2nd post from you in my feed, and I wholeheartedly agree with this one as well. I still think having basic stubs/mocks for unit tests is good if you practice Test Driven Development. The point is design, not just "does the code work". The key is just to use dependency injection in OOP, or "passing parameters to functions" in FP. In your example above, that'd be database_write being a function passed in; stub it in unit tests:

def database_write_stub():
  pass

Then use it:

def test_logic():
  assert logic(database_write_stub) == 5

And a real function in integration.

I hope you keep writing articles like these.

Jesse Phillips

I need to start by stating that there is a difference between unit-test mocking and integration-test mocking.

Bringing logic into pure functions for unit testing is definitely recommended over mocking. Trying to reduce the dependency graph for any task is great in this regard. Unit testing can't connect to external systems or services, as that is integration.

Then you have integration testing, where different systems can be tested together. You cannot avoid integrating with the real system.

If you build out a good mock system, it not only provides faster, more reliable test automation; it makes it possible to run tests that aren't possible when using a live system (namely, against a non-live system).

Is maintaining mocks more work? Yes. Is it always worth it? I think it usually is.

Here is what that speed gets you: you can refactor code and be confident it operates as expected.

I have to be honest that I have no interest in mocking a database; this is likely down to a combination of the complexity of what a database does and its stability as a reliable service.

Keff

Holy shit, you're on a mission aren't you? Last 3 posts I've seen from you are quite controversial xD I'm not complaining though

Michael Mangialardi • Edited on

**Mocks aren’t equivalent to the integrations they replace.**

True, but that is intentional. You can ensure that they are returning the right data via schema validation based on the contract of the service.

In a word, you can ensure that mocks return the same thing (that is, the same schema, not the same data per se) without the complexity of integrating with the real thing.
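That schema-level check can be sketched in a few lines (the schema, field names, and payloads below are invented for illustration; real projects would typically use a JSON Schema or OpenAPI validator):

```python
def validate_against_schema(payload, schema):
    """Check that a payload has exactly the fields and types the contract demands."""
    if set(payload) != set(schema):
        return False
    return all(isinstance(payload[field], ftype) for field, ftype in schema.items())

# Hypothetical contract shared by the real service and the mock.
ORDER_SCHEMA = {"order_id": int, "total": float}

real_response = {"order_id": 41, "total": 99.95}
mock_response = {"order_id": 1, "total": 10.0}

# Both the mock and the real service can be held to the same schema,
# even though the concrete values differ.
assert validate_against_schema(real_response, ORDER_SCHEMA)
assert validate_against_schema(mock_response, ORDER_SCHEMA)
```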

**If you mock a database client then you haven’t tested the integration with the real client. This means that your code may work with the mock but you will still need to do integration testing to make sure it works without mocks.**

If you don't mock, then you have to set up your testing environment to speak to the real service. This requires extra maintenance, and when there is an issue (which is definitely not uncommon), it can be very confusing to know what needs to be fixed.

Without mocking, you need to add more configuration to your testing environment, increase your knowledge to know how the service works under the hood, and learn how to debug. This leads to change amplification, increased cognitive load, and unknown unknowns which are the three main causes of software complexity.

Lastly, if the connection with the database isn't working, you can catch that via manual testing in the browser, or by a "smoke test" before releasing to production. In both these cases, you can test the database connection implicitly. Integration tests don't need to test every single connection works, just that everything is working as expected from the vantage point of the user.
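The smoke-test idea above can be sketched as a tiny check against a health endpoint (the `/health` path and the local server here are made up for illustration; a real handler would ping the database before answering 200):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A real /health handler would check the database connection here.
        self.send_response(200 if self.path == "/health" else 404)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet

def smoke_test(base_url):
    """Return True if the service (and implicitly its DB connection) responds."""
    try:
        with urllib.request.urlopen(base_url + "/health", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

# Spin up a stand-in service locally so the sketch is self-contained:
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
assert smoke_test("http://127.0.0.1:%d" % server.server_port)
server.shutdown()
```

The point is that one coarse pass/fail check before release covers the database connection implicitly, without per-connection integration tests.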

**Feature Parity Is Not Feasible.**

You can still have feature parity. Mocks can behave however you desire, and that means they can behave exactly like the real service.

However, the point of mocks is to not work just like the service. They should have the same signature and behavior but more sensible data.

**If you make a quick mock then it won’t return useful data. The more time you spend improving the mock the more useful the data will be. However it can never be a true representation.**

A mock isn't supposed to be a true representation. It's meant to be more sensible and easier to work with. However, as mentioned above, you still can ensure that the sensible data is still valid data via schema validation.

**Mocks that aren’t used are a waste of time and effort.**

Then delete them.

**If you mock out a database client and don’t use it then there is no point mocking it. This can occur if some code requires valid configuration to initialise but doesn’t use it.**

Same as above. Also, you don't have to mock out a database client in its entirety.