Recently I was appointed to lead the test automation and TDD efforts in Retalix’s development group. As part of this effort, I realized that most people lack the knowledge of how to write good tests, and their mistakes lead to tests that are a pain to maintain or give doubtful ROI. So I ended up putting these short guidelines on the internal Wiki. However, I thought it would be useful to share them with everyone.
What kind of tests do these guidelines refer to?
Any functional automated tests – that is, automated tests that verify specific functionality of the system, as opposed to load testing, UX testing, manual exploratory testing, etc. Note that functional tests can be written at various scopes – for example: by calling the API of the system, by sending SOAP or REST requests to the server, by simulating user actions through the client’s View Model, or by using UI automation. In addition, some parts of the system (e.g. the database) can be mocked in some cases, or the tests can use all the real parts of the system. All of these options are valid for functional tests (though each has its own pros and cons).
Usually, the vast majority of the testing effort should concentrate on functional tests.
First and foremost, good automated tests are good, clean code. This means that all of the good coding practices are relevant for automated tests as well, and are sometimes even more important in test code than in production code. In particular, good tests:
- Have a single responsibility (i.e. test only a single business rule)
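As a minimal sketch of what single responsibility looks like in practice, here are two tests against a tiny hypothetical `Cart` class (the class and its discount rule are illustrative, not from the post) – each test verifies exactly one business rule, so a failure points directly at the rule that broke:

```python
class Cart:
    """Tiny illustrative domain object (hypothetical example)."""
    def __init__(self):
        self.items = []

    def add(self, price):
        self.items.append(price)

    def total(self):
        subtotal = sum(self.items)
        # Business rule: orders of 100 or more get a 10% discount.
        return subtotal * 0.9 if subtotal >= 100 else subtotal


def test_total_sums_item_prices():
    # Verifies only the summing rule.
    cart = Cart()
    cart.add(30)
    cart.add(20)
    assert cart.total() == 50


def test_orders_of_100_or_more_get_10_percent_discount():
    # Verifies only the discount rule.
    cart = Cart()
    cart.add(100)
    assert cart.total() == 90
```

A test that asserted both the sum and the discount at once would fail for either reason, forcing you to debug which rule actually broke.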
In addition to these general coding best practices, here are some guidelines that are more specific to automated tests:
In order to achieve both maintainability and readability, the tests must verify business functionality, and not rely on any implementation detail! If your tests rely on specific implementation details, then once those details change, your tests will break.
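A hedged sketch of the difference, using a hypothetical in-memory order store (all names here are illustrative): the same behavior can be asserted through the public API, which survives refactoring, or against the private storage layout, which does not.

```python
class InMemoryOrders:
    """Hypothetical system under test."""
    def __init__(self):
        self._rows = {}      # internal storage - an implementation detail
        self._next_id = 1

    def place_order(self, item):
        order_id = self._next_id
        self._rows[order_id] = {"item": item, "status": "placed"}
        self._next_id += 1
        return order_id

    def status_of(self, order_id):
        return self._rows[order_id]["status"]


def test_placed_order_is_reported_as_placed():
    orders = InMemoryOrders()
    order_id = orders.place_order("book")
    # Good: assert through the public API the user actually depends on.
    assert orders.status_of(order_id) == "placed"
    # Fragile (avoid): asserting on the private storage layout would break
    # the moment the internal dict structure changes, e.g.:
    # assert orders._rows[1]["status"] == "placed"
```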
Good tests must describe a simple scenario from the eyes of the user (or users). A single test can describe a scenario that spans different users (roles) at different times, but it should always describe the needs or actions of one or more users, and not some technical aspect of the state of the system, like the existence of some data in the database for example. Note that even if you write the tests against the API or service layers, the actions that the test describes should clearly correlate to the way a user interacts with the system.
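For example, a test spanning two user roles might read like this sketch (the `ExpenseSystem` class and role names are hypothetical, invented only to illustrate the scenario style):

```python
class ExpenseSystem:
    """Hypothetical system under test."""
    def __init__(self):
        self._reports = {}
        self._next_id = 1

    def submit_report(self, employee, amount):
        report_id = self._next_id
        self._reports[report_id] = {"by": employee, "amount": amount,
                                    "state": "submitted"}
        self._next_id += 1
        return report_id

    def approve(self, manager, report_id):
        self._reports[report_id]["state"] = "approved by " + manager

    def state_of(self, report_id):
        return self._reports[report_id]["state"]


def test_manager_approves_submitted_expense_report():
    system = ExpenseSystem()
    # Step 1: an employee submits an expense report.
    report_id = system.submit_report("alice", 120)
    # Step 2: later, a manager approves it.
    system.approve("bob", report_id)
    assert system.state_of(report_id) == "approved by bob"
```

The test reads as a sequence of user actions, not as manipulation of database rows.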
For the tests to be readable and reflect a user scenario, always write your tests in a top-down manner! Even start by writing the scenario with pen and paper, focusing on WHAT you want to test rather than HOW. Then translate your words to code, which should look very similar to what you wrote on paper. In addition to making your tests more readable, this will help you identify which entities, classes and methods you need. Only after you have written the body of the test method, start implementing all the helper methods and/or classes that you need in order to make the test work.
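A sketch of that top-down order of work, under the assumption of a simple login scenario (all names here are illustrative): the test body was written first, mirroring the pen-and-paper scenario, and the helpers beneath it were implemented only afterwards.

```python
# Written FIRST: the test body, reading like the scenario on paper.
def test_registered_user_can_log_in():
    create_registered_user("dana", password="s3cret")
    session = log_in("dana", "s3cret")
    assert session["user"] == "dana"


# Written SECOND: the helpers the test body turned out to need.
_USERS = {}

def create_registered_user(name, password):
    _USERS[name] = password

def log_in(name, password):
    assert _USERS.get(name) == password, "login failed"
    return {"user": name}
```

Notice that the helper names fell directly out of the scenario wording, which is exactly the readability benefit the top-down order buys you.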
Much like clean code, maintainability also means avoiding duplication (AKA "DRY" – Don’t Repeat Yourself). That means no copy & paste of code or values! (Use Extract Method and other refactoring techniques to avoid duplication.)
(If you’re avoiding copy & paste, then the following guideline is irrelevant. But unfortunately, I often find that people do it): If you’re using a framework or library that someone else wrote, don’t use any method or class if you don’t know what it does! You don’t have to know how it is implemented, or exactly how it works, but you must understand what it does – don’t use it just because "everyone uses it"!
According to all best development practices, all code must go through a code review, and automated tests are no exception! Or even better – do pair programming! If you wish to learn more about the benefits of pair programming, read this blog post.
Tests should create all the data and environment conditions that are relevant to them at the beginning, and clean them up afterward. This means that a test shouldn’t rely on the existence of previous data in the database. However, a suite that requires the environment to be set up in a way that is relevant for nearly all of its tests can set up the environment at the beginning of the suite (instead of at the beginning of each test), and clean up when it ends.
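This pattern can be sketched with Python’s `unittest` lifecycle hooks (the `CustomerTests` suite and its dict-backed "database" are illustrative stand-ins, not from the post): `setUp`/`tearDown` create and clean the data each test needs, while `setUpClass`/`tearDownClass` handle environment shared by the whole suite.

```python
import unittest

class CustomerTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Suite-wide environment, set up once for all tests in the suite.
        # A plain dict stands in for a real database here.
        cls.db = {}

    def setUp(self):
        # Each test creates the data it relies on...
        self.db["customer:1"] = {"name": "Ada"}

    def tearDown(self):
        # ...and cleans it up, so no test depends on leftovers.
        self.db.clear()

    def test_customer_can_be_renamed(self):
        self.db["customer:1"]["name"] = "Grace"
        self.assertEqual(self.db["customer:1"]["name"], "Grace")

    def test_customer_starts_with_created_name(self):
        # Passes regardless of test order, because setUp recreated the data.
        self.assertEqual(self.db["customer:1"]["name"], "Ada")
```

Because every test recreates its own data, the two tests pass in any order, which is exactly the independence the guideline is after.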
- Tests should not depend on anything outside of them that may change. This includes: the order of the tests, date and time, random generators, etc.
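One common way to achieve this for date and time is to inject the clock instead of reading it. A minimal sketch, assuming a hypothetical `is_expired` function that accepts its "now" source as a parameter:

```python
from datetime import datetime

def is_expired(expiry, now=datetime.now):
    """Returns True if the expiry time has passed.
    'now' is injectable so tests need not touch the real clock."""
    return now() > expiry


def test_token_expired_one_second_after_expiry():
    expiry = datetime(2024, 1, 1, 12, 0, 0)
    fixed_now = lambda: datetime(2024, 1, 1, 12, 0, 1)
    assert is_expired(expiry, now=fixed_now)


def test_token_not_expired_before_expiry():
    expiry = datetime(2024, 1, 1, 12, 0, 0)
    fixed_now = lambda: datetime(2024, 1, 1, 11, 59, 59)
    assert not is_expired(expiry, now=fixed_now)
```

The same injection idea applies to random generators: pass in a seeded generator (or the values themselves) rather than calling a global one inside the code under test.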
Writing good automated tests is not that hard, and not very different from writing code. However, without this basic knowledge, one may write tests that are very difficult to maintain, and whose ROI is very low (or even negative!).
Make sure to refresh your memory and check yourself against these guidelines every once in a while to make sure that you’re on the right track. If you’re not, you will probably also notice that you constantly have to maintain your tests, and that writing, running and fixing them takes more effort than the peace of mind they should give you when they work.
As always, I would love to hear your comments!