Where there’s Smoke there’s Fire!

The first sign of smoke is an early indicator that fire is not far away; it’s the first warning sign. So too in testing: having Smoke Tests that quickly step through the main pathways of a feature will let you know if there’s a significant problem in your codebase.

What makes a good Smoke Test?

There are three factors that make a good smoke test: it should be fast, realistic and informative.

Speed

If there is a fire coming, you want to know about it as early as possible, so you have the most time to respond. Your tests must be fast enough that running them with every build/iteration will not cause frustration or significantly hold up progress.

If there is a problem, these tests will tell you, quickly, that you need to stop whatever else you might’ve been doing and respond to the issue.

Realism

If smoke is coming from a contained, outdoor BBQ in your neighbour’s backyard, it doesn’t matter; you don’t need to do anything about it. But if a tree in your neighbour’s backyard were on fire, that would be much more concerning. Smoke tests should only look for realistic and common usages of your feature, where you care if something breaks. If the background colour selector on your form creation tool is broken, that’s probably not as crucial as the form itself not working.

So, prioritise your tests to only cater for the key user workflows through your feature, and leave everything else for other tests to cover. This will also go hand in hand with designing tests for speed.

Informative

If you see smoke, the first thing you will want to know is how far away it is and how big it is. This will dictate how devastating it will be and what sort of action you need to take. So too, the tests you write must be informative about where the problem is, how you can recreate it and how important it is to fix.

Smoke tests are much more useful if they can not only tell you that something is wrong, but also where the problem is, how important a problem it is and how to fix it.

An Example

The company I work for, Campaign Monitor, is an email marketing company, so a key scenario or User Story we would want to target in a Smoke Test is:
“As a user, I want to create a campaign, send it to my subscriber list, check the email got through and that any recipient interactions are reported”

So, the steps required for this smoke test are:

  1. Create a campaign, set the email content and the recipients
  2. Send the campaign to some email recipients
  3. Open one of the received emails and click on a link
  4. Check the report shows the email was opened and clicked
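
The steps above can be sketched as a single automated check. This is illustrative Python, with a hypothetical `CampaignClient` standing in for whatever driver your app provides; every name here is made up for the sketch:

```python
class CampaignClient:
    """Hypothetical stand-in for a real app driver; all names are illustrative."""

    def __init__(self):
        self.campaigns = {}

    def create_campaign(self, name, content, recipients):
        self.campaigns[name] = {"content": content, "recipients": recipients,
                                "sent": False, "opens": 0, "clicks": 0}

    def send_campaign(self, name):
        self.campaigns[name]["sent"] = True

    def open_and_click(self, name):
        # A real test would drive an inbox; the stub just records the interaction.
        self.campaigns[name]["opens"] += 1
        self.campaigns[name]["clicks"] += 1

    def report(self, name):
        c = self.campaigns[name]
        return {"opens": c["opens"], "clicks": c["clicks"]}


def smoke_test_create_send_report():
    client = CampaignClient()
    # 1. Create a campaign, set the email content and the recipients
    client.create_campaign("monthly-news", "<a href='#'>Read more</a>",
                           ["a@example.com"])
    # 2. Send the campaign
    client.send_campaign("monthly-news")
    # 3. Open a received email and click on a link
    client.open_and_click("monthly-news")
    # 4. Check the report shows the email was opened and clicked
    report = client.report("monthly-news")
    assert report["opens"] == 1, f"expected 1 open, got {report['opens']}"
    assert report["clicks"] == 1, f"expected 1 click, got {report['clicks']}"
```

Each failed assertion names the step and the values involved, which keeps the test informative as well as fast.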

This test covers an end-to-end user story that is a vital part of our software, so if any part of it fails, we need to know immediately and fix it. It is fast, running in under a minute, so it doesn’t interrupt workflow. It does not add any extra steps or checks that aren’t helpful to the target user story. And it is written with logging and helpful error messages so that any failures point as close as possible to the problem area.

Putting it together

Once you’ve created your smoke tests, put them into use! Run them every time you change something in your feature and they will quickly tell you if anything major is wrong. It’s so much better to find a major issue as early as possible and reduce time lost searching for the problem later on or building more and more problems on top.

Smoke tests shouldn’t be your only test approach, as they only cover the ‘happy path’ scenarios and there are plenty of other places where your feature can have problems. But they are certainly a good starting point!

I’d love to hear how you use (or why you don’t use) Smoke Tests in your company!

What makes a good automated test?

Testers are increasingly being asked to write, maintain and/or review automated tests for their projects. Automated tests can be very valuable in providing quick feedback on the quality of your project but they can also be misused or poorly written, reducing their value. This post will highlight a few factors that I think make the difference between a ‘bad’ automated test and a ‘good’ automated test.

1. The code

People vary in how they like to write their code, for example in the names they use and the structure they prefer. That variation is fine, but there are still aspects to consider to make sure the test will stay useful into the future.

Readable & Self-explanatory
Someone should be able to read through your code and easily figure out what is being done at each step of the process. Extracting logical chunks of code into methods helps with this, but be wary of having methods within methods within methods: deep nesting adds unnecessary complexity and reduces maintainability by increasing coupling. Use comments sparingly, as methods and variables should be descriptive enough in most cases and comments quickly get outdated. Format your code so it’s easy to read, with logical line breaks, and be wary of ‘clever code’ that compresses 4 lines into 1: it might be more efficient, but it must still be readable and understandable to the next person looking at it in 5 months’ time. Make your test readable and self-explanatory.

Clear purpose
In order to make it easier to get a picture of test coverage, it helps to have tests with a clear purpose: what they are and aren’t testing. This also means people reading through the test, trying to understand it or fix it, know exactly what it’s attempting to do. Similarly, it helps not to have your tests covering multiple test cases at once. When such a test fails, which test case did it fail on? You’ll have to investigate every time just to know where the bug is, before you even start getting the reproduction steps. Covering multiple test cases also makes naming tests harder, and can leave those extra cases hidden underneath other tests, duplicated elsewhere, or giving a false picture of test coverage. Make your test have a clear purpose.

2. Execution

When you actually run your tests, there are a few more attributes to look for that contribute to the usefulness of the test.

Speed
If your tests take too long to run, people won’t want to wait around for the results, or will avoid running them at all to save time, which makes them completely useless. What ‘too long’ means will vary with context, but speed is always important. You will reduce reluctance to run your tests by setting realistic expectations of how long they take. There will also be times when you want a smaller, faster subset of tests for a quick overview, and other times when you are happy with a longer, more thorough run because you aren’t as worried about the time it takes. Make your test fast.
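
One way to get that smaller, faster subset is to tag tests and filter on the tag at run time. Test frameworks offer this out of the box (pytest markers, for instance); below is a hand-rolled Python sketch of the idea, with made-up test names:

```python
def tag(name):
    """Attach a tag to a test function so subsets can be selected at run time."""
    def wrap(fn):
        fn.tag = name
        return fn
    return wrap


@tag("smoke")
def test_core_page_loads():
    assert True  # placeholder for a fast, broad check


@tag("full")
def test_all_locale_variants():
    assert True  # placeholder for a slower, thorough check


def run_suite(tests, only=None):
    """Run every test, or only those carrying the given tag."""
    ran = []
    for t in tests:
        if only is None or getattr(t, "tag", None) == only:
            t()
            ran.append(t.__name__)
    return ran
```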

Consistency
In most cases, depending on the context of your project, running your test 10 times in a row should give you the same 10 results; likewise for 100 runs. Flaky tests that pass one minute, then fail the next are the bane of automated testing. Make sure your tests are robust and will only fail for genuine failures. Add wait loops for criteria to be met, with timeouts reflecting what you would consider a reasonable wait for an end user. Do partial matches if a full match isn’t necessary. Make your test consistent.
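
A wait loop like this (a Python sketch; the timeout values are illustrative) is the usual cure for timing-related flakiness:

```python
import time


def wait_until(condition, timeout=10.0, interval=0.25):
    """Poll `condition` until it returns a truthy value, or fail after
    `timeout` seconds. The timeout should reflect a reasonable wait
    for an end user, not an arbitrary sleep."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")


# Partial match: assert only on the part you care about, not the whole string
page_title = "Dashboard - Campaign Monitor (3 new)"
assert "Dashboard" in page_title
```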

Parallelism
The fastest way to run tests is in parallel instead of in sequence. This changes the way you write tests. Global variables and states can’t be trusted. Shared services likewise can’t be trusted. To run in parallel, your tests need to be independent, not relying on the output of other tests or on shared data. Wherever possible, run your tests in parallel, finding the optimum number of parallel streams, and you will have your tests running much faster, giving you quicker feedback on the state of the system under test. Make your test parallelisable.

Summary

There is plenty more that could be said about what makes a good Automated Test, but this should make a good start. Having code that is readable, self-explanatory and clear in purpose, and written in a way that it runs fast, consistently and in parallel, will get you a long way to having an effective, efficient and above all useful set of automated tests.

I’d love to hear in the comments below about the other factors you find most important in writing a good automated test.

Lessons learnt from writing Automated Tests

The purpose of this article is to share some of the big lessons I’ve learnt in my 5+ years of writing automated tests for Campaign Monitor, an online service to send beautiful emails. If you are writing automated tests, it’s worth keeping these lessons in mind to ensure you end up with fast, maintainable and useful tests.

1 – Website testing is hard

Websites can be slow to load, both at a whole-page level and an element level. This means you are constantly writing ‘waits’ into your code to make sure the element you want to test is available. As JavaScript techniques improve, more animations, hovers, fade-ins, call-outs, etc. are being added to websites, which results in a much nicer experience for the user but can be a nightmare for writing a matching automated test. With these things in mind, I suggest the following:

  1. Evaluate whether this test case is worth automating. You can test functionality via an automated test (e.g. does clicking button X cause action Y to happen?), but automated tests are less useful at telling you whether a page animation looks correct visually, so you are better off keeping that as a manual regression test.
  2. Do as much of the automated test outside of the UI as possible. If you can utilise an API call or make direct database updates to get your system into the state required or to verify the UI action performed in the app you will save a lot of time in executing the test and avoid possible test failures before you even get to the point of interest for the test.
  3. Use Smart Waits. As often as practical, you are better off doing your assert of an element state within a short loop. You usually don’t want your test to fail because your app took 4 seconds to login instead of the usual 2 seconds. Always opt for waiting for a specific element state rather than Thread.Sleep!
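
A smart wait can be as simple as retrying the assertion itself in a short loop. A Python sketch (the login example and timings are illustrative):

```python
import threading
import time


def assert_eventually(check, timeout=5.0, interval=0.2):
    """Retry an assertion until it passes or `timeout` seconds elapse.
    The test then tolerates a login taking 4 seconds instead of the
    usual 2, but still fails promptly on a genuine problem."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            check()
            return
        except AssertionError:
            if time.monotonic() >= deadline:
                raise
            time.sleep(interval)


# Simulate an app that finishes logging in asynchronously after 0.3 seconds
state = {"logged_in": False}
threading.Timer(0.3, lambda: state.update(logged_in=True)).start()


def check_logged_in():
    assert state["logged_in"], "not logged in yet"


assert_eventually(check_logged_in, timeout=2.0)  # passes once login completes
```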

2 – Make your tests understandable

When you first write your test, you have all the context you need: you know why you chose each method over the others, and you know what you are attempting to do at each step along the way. But what about the next person to look at that test in a year’s time, when the requirements of that test have changed? Writing a test that is easy to understand for all readers will save valuable time later on and will mean the test continues to provide value for a long time. How do we do this?

  1. Use meaningful variable and method names. You should be able to read through your test without diving down any deeper into the methods and know what is going on. To do this, you need your variables and methods to accurately and succinctly explain what they are for. For example:
    int outputRowNumber = GetTableRowForCustomer("customer name");
    compared to:
    int x = GetCustomer("customer name");
  2. Add comments when necessary. Your colleagues aren’t idiots, so they won’t need comments on every line of code, but if, for example, you have a line of code calculating the number of seconds left in the day, it might be worth a few words to explain what you are doing.
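
For instance, a Python sketch of that seconds-left-in-the-day example:

```python
from datetime import datetime, timedelta


def seconds_left_in_day(now):
    # Midnight at the start of tomorrow, minus now: non-obvious date
    # arithmetic like this is worth a few words of comment.
    midnight = datetime.combine(now.date() + timedelta(days=1),
                                datetime.min.time())
    return int((midnight - now).total_seconds())
```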

3 – Run your tests often

The purpose of creating and running these tests is to find bugs in your application, so you should be running all of your tests at least once a week, and most tests daily or more often. This will make it much easier to identify the change that resulted in a failed test. How often you run the tests will depend on how often you update the code under test: the more often you update, the more often you should run the tests. You will probably have a mix of tests that take varying amounts of time, so be smart about when you run them; run the fast ones during the day, but leave the slow ones to run overnight so you don’t have to wait for the results.

We’ve had great success in having a small and quick subset of tests that do a basic check of a broad range of features across the application. This gives us quick feedback to know that there aren’t any major issues with core functionality. We call them ‘Smoke tests’, because if there’s smoke, there’s fire!

4 – Be regular in maintenance

Anyone who has tried automated testing will know that your tests get out of date as the features they are testing change. But if you keep an eye on the results of your tests regularly, and act speedily on anything that needs updating, you will save yourself a lot of pain down the track. It’s much better to tackle the cause of 2 failing tests now than to deal with multiple causes of 30 failures in a month’s time. If you can address any maintenance tasks for updated feature behaviour within a week of it happening, the context will be fresh and you will be able to get on top of it quickly, before it spreads and merges with other issues and suddenly you have a sea of failed tests and don’t know where to start.

5 – Be technology independent

We previously used a technology called WatiN to drive our test automation and are now in the process of moving over to Selenium WebDriver. We’ve used other technologies in the past too, and each time a change is needed, it takes a lot of time and effort to convert our solution. If/when the time comes that the tool you are using no longer meets your needs, the cost to change can be huge. If every method you’ve written depends on your current technology choices, that either rules out ever changing, or means a steep curve is involved in changing over.

So instead, add an interface layer on top of your code which just says “I want the ‘browser’ to ‘click’ on ‘this link’”. Then the implementation can figure out how to get the Browser to click on the given link. Then when you decide you want to try a new technology, just add a new implementation to the interface and nothing else needs to change.

I will probably add a follow up post as summarising 5 years of learning into 1 post is hard!