Australian Testing Days 2016 Reflection – Day 1

On May 20-21 I went to the inaugural Australian Testing Days Conference in Melbourne. The first day involved a series of talks, mostly sharing people’s experiences in testing, and the second day was an all-day workshop on test leadership. This post outlines the key messages from the sessions I attended and the key things I learnt from each one.

Day 1

Part 1 – What you meant to say (keynote)

First up, Michael Bolton discussed how the language we, as testers, use around testing can be quite unhelpful and cause confusion for those involved. For example, automated testing does not exist. You can certainly automate checking, which is mainly regression tests of existing behaviour. But you cannot automate testing, which is everything someone does to understand more about a feature, giving them knowledge to decide on how risky it is to release it. The way we communicate what we do will impact what others understand it as and then expect of us. Similarly, it is important to understand the language others use when asking us to do something.

Another helpful lesson was that customer desires are more important than customer expectations. If they are happy with your product, it doesn’t matter if it met expectations. If I don’t expect the Apple Watch will be of much use to me, but then I try it and discover that I love it, my expectations weren’t met, but my desires were, and it’s a good result. Similarly, users might expect something totally different to what you produce, but if they discover that what you made is actually better than their expectations, it is likewise a good result.

Lessons learnt:

  • Be clear in communication of testing activities to avoid ambiguity and misalignment.
  • Seek the underlying mentality behind people’s testing questions

Part 2 – Transforming an offshore QA team (elective)

Next up, Michele Cross shared the challenges she is facing in transforming an offshore, traditional and highly structured testing team into a more agile, context-driven one. The primary way to achieve any big change like this is to create an environment of trust, which comes in two forms. Cognitive trust is based on ability: you trust someone because of their skills and attributes, such as trusting a doctor you have just met to diagnose you because they have been studying and practising medicine for 20 years. Affective trust is based on relationships: you trust someone because of how well you know them and how you have interacted with them in the past. For example, you trust a friend’s movie recommendation because of shared interests and experiences, not their skills as a movie reviewer.

To help establish this trust and initiate change, three C’s were discussed.

  • Culture – Understanding people and their differences. The context that has brought people to where they are now will greatly shape how they interact with people. Do they desire structure or independence? Are they open to conflict or desire harmony? Knowing this can help inform decisions and approaches.
  • Communication – Relating to other people can be just as hard as it is important. Large, distributed teams bring with them challenges of language, timezones, video conferencing etc… It is crucial to find ways to address these concerns so that everyone is kept in the loop, aligned on direction and is able to build relationships with each other.
  • Coaching – Teaching new skills through example and instruction. Create an environment where it is safe to fail so people feel comfortable to grow. Use practical scenarios to teach skills and get involved yourself.

Lessons learnt:

  • Consider the cultural context of the people you are interacting with, as it will shape how to be most effective in those interactions
  • Learning through doing, alongside someone else, is a great way to pick up new skills
  • Trust is built by a combination of personal relationships and technical abilities

Part 3 – It takes a village to raise a tester (elective)

Catherine Karena works at WorkVentures, which is all about helping underprivileged people develop life skills and technology skills so they can enter the tech workforce. She talked about figuring out what skills to teach by looking at where the most jobs were in the market and the skills they commonly required. This includes both technical and relational skills, since graduates will need to interact with company structures and other staff.

On a more general note, a number of characteristics of what makes a great tester were highlighted to focus on teaching these skills as well. A great tester is: curious, a learner, an advocate, a good communicator, tech savvy, a critical thinker, accountable and a high achiever. When it comes to the learning side, a few more tips were shared around teaching through doing as much as possible, making it safe to fail, using industry experts and building up the learning over time.

Some interesting statistics showed that, when rated by employers, those trained by WorkVentures over six months were equal to or greater in performance and value than comparable university graduates.

Lessons learnt:

  • Relational skills can be just as, if not more, important than technical skills in hiring new talent.
  • Learning in small steps with practical examples greatly improves the outcome.

Part 4 – Context Driven Testing: Uncut (elective)

Brian Osman talked about his experience growing in knowledge and abilities as a tester and how greatly that experience was shaped by testing communities. He explained how a community of like-minded people can help drive learning as they challenge each other and bring different viewpoints.

As a side note, he introduced the term ‘Possum Testing’, which he described as “testing that you don’t value, motivated by a fear of some kind” – for example, avoiding a form of testing because you don’t understand it or how to use it. This is an idea many people would recognise but might find hard to articulate and discuss. Giving it a name instantly provides a way to bring it up in conversation with a shared understanding of the context and common ground in thinking.

Lessons learnt:

  • Naming ideas or common problems is a helpful way to direct future conversations and bring along the original context
  • When looking to improve in a certain area/skill, find a community of others looking to do the same thing.
  • Use these communities to present ideas, defend them and challenge others’ ideas. Debates are encouraged

Part 5 – Testing web services and microservices (elective)

Katrina Clokie (who also mentors me in conference speaking) spoke about her experience testing web services and microservices; a previous version of the talk is available online if you are interested. Starting with web services, she pointed out that each service will have different test needs based on who uses it and how they use it. Service virtualisation is a common technique used in service testing to isolate the front-end from inconsistent and potentially unstable back-ends. Microservice testing adds another layer to this model.

Some key guidelines for creating microservices automation were presented: it should be fit for purpose, remove duplication, make changes easy to merge, run continuously and be visible across teams.

An interesting learning technique called Pathways was presented, which can be found on her website: each pathway lists a whole bunch of resources for learning about a new topic. They are a helpful way of directing your learning time with a specific goal in mind.

Lessons learnt:

  • Make use of Katrina’s pathways for learning about a new area (for myself or as recommendations to others)
  • Get involved in code creation as early as possible to help influence a culture of testability
  • Write any automation with re-usability and visibility in mind

Part 6 – Test Management Revisited (keynote)

Anne-Marie Charrett finished up the day sharing some reflections and approaches from her time as Test Lead at Tyro Payments. She started by asking a question that has already been raised a few times in the community: do we still need test management? Her response was that we do need a testing voice to go with all the new roles and technology coming through, like microservices and DevOps. This doesn’t mean we need Test Managers who provide stability, but rather Test Leaders who can direct change. She talked about using the Satir Change Model to describe the process of change and its effect on performance.

She set out to give Tyro the best test practice in Australia, and was not interested in blindly copying others. There are certainly benefits in learning from the approaches others take, but they should be assessed against your company’s environment. She discussed a number of testing-related strategies you might have to deal with: continuous delivery, testing in production, microservices, risk-based automation, business engagement, embedding testing, performance testing, operational testing, test environments, training and growth.

The next question was how to motivate people to learn. Hand-holding certainly isn’t ideal, but you also can’t expect people to spontaneously learn all the skills you’d like them to have. This calls for coaching! The coaching should be focused around a task that you can offer feedback on afterwards; then challenge them to try it again on their own.

An important question when identifying which skills to teach is what makes a good tester at your company, because your needs will be different to those of other places. She finished with a few guidelines around coaching: give people responsibilities, improve the environment they work in, and continue to adapt as different needs and challenges arise.

Lessons learnt:

  • Any practice/process being used by others should be analysed and adapted to fit your context, not blindly copied.
  • Be a voice for testing and lead others to make changes in areas they need to improve on
  • Think about what makes a good tester at my company and how I measure up
  • Help prepare the organization/team for change and help them cope as they struggle through it

That’s a wrap for Day 1. Find my review of Day 2 here, where I took part in a workshop on Coaching Testers with Anne-Marie Charrett.

Where there’s Smoke there’s Fire!

The first sign of smoke is an early indicator that fire is not far away. It’s the first warning sign. So too in testing: having Smoke Tests that quickly step through the main pathways of a feature will let you know if there’s a significant problem in your codebase.

What makes a good Smoke Test?

There are three factors that make a good smoke test: speed, realism and informativeness.

Speed

If there is a fire coming, you want to know about it as fast as possible so you have the most time possible to respond. Your tests must be fast enough that running them with every build/iteration will not cause frustration or significantly hold up progress.

If there is a problem, these tests will tell you, quickly, that you need to stop whatever else you might’ve been doing and respond to the issue.

Realism

If smoke is coming from a contained, outdoor BBQ in your neighbour’s backyard, it doesn’t matter; you don’t need to do anything about it. But if a tree in your neighbour’s backyard is on fire, that is much more concerning. Smoke tests must only look for realistic and common usages of your feature, where you care if something breaks. If the background colour selector on your form creation tool is broken, that’s far less crucial than the form itself not working.

So, prioritise your tests to only cater for the key user workflows through your feature, and leave everything else for other tests to cover. This will also go hand in hand with designing tests for speed.

Informative

If you see smoke, the first thing you will want to know is how far away it is and how big it is. This will dictate how devastating it will be and what sort of action you need to take. So too, the tests you write must be informative about where the problem is, how you can recreate it and how important it is to fix.

Smoke tests are much more useful if they can not only tell you that something is wrong, but also where the problem is, how important a problem it is and how to fix it.

An Example

The company I work for (Campaign Monitor) is an email marketing company, so a key scenario or User Story we would want to target in a Smoke Test is:
“As a user, I want to create a campaign, send it to my subscriber list, check the email got through and that any recipient interactions are reported”

So, the steps required for this smoke test are:

  1. Create a campaign, set the email content and the recipients
  2. Send the campaign to some email recipients
  3. Open one of the received emails and click on a link
  4. Check the report shows the email was opened and clicked

This test covers an end-to-end user story that is a vital part of our software, so if any part of it fails we need to know immediately and fix it. It is fast, running in under a minute, so it doesn’t interrupt workflow. It does not add any extra steps or checks that aren’t helpful to the target user story. And it is written with logging and helpful error messages so that any failures point as close as possible to the problem area.
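As a rough sketch, the automated version of this smoke test might look something like the following in C# with NUnit. The helper classes (CampaignHelper, MailboxHelper, ReportHelper) are hypothetical stand-ins for whatever your own test framework provides, not real Campaign Monitor APIs.

using NUnit.Framework;

[TestFixture]
public class CampaignSmokeTests
{
    [Test]
    public void SendCampaign_EndToEnd_IsDeliveredAndReported()
    {
        // 1. Create a campaign, set the email content and the recipients (hypothetical helpers)
        var campaign = CampaignHelper.CreateCampaign("Smoke Test Campaign", "smoke-subscriber-list");

        // 2. Send the campaign to some email recipients
        CampaignHelper.Send(campaign);

        // 3. Open one of the received emails and click on a link
        var email = MailboxHelper.WaitForEmail(campaign.Subject, timeoutSeconds: 60);
        Assert.IsNotNull(email, "Campaign email was not received within 60 seconds");
        email.Open();
        email.ClickFirstLink();

        // 4. Check the report shows the email was opened and clicked
        var report = ReportHelper.GetReport(campaign);
        Assert.That(report.Opens, Is.GreaterThanOrEqualTo(1), "Report did not record the open");
        Assert.That(report.Clicks, Is.GreaterThanOrEqualTo(1), "Report did not record the click");
    }
}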

Putting it together

Once you’ve created your smoke tests, put them into use! Run them every time you change something in your feature and they will quickly tell you if anything major is wrong. It’s so much better to find a major issue as early as possible and reduce time lost searching for the problem later on or building more and more problems on top.

Smoke tests shouldn’t be your only test approach, as they only cover the ‘happy path’ scenarios and there are plenty of other areas where your feature can have problems. But they are certainly a good starting point!

I’d love to hear how you use (or why you don’t use) Smoke Tests in your company!

What makes a good automated test?

Testers are increasingly being asked to write, maintain and/or review automated tests for their projects. Automated tests can be very valuable in providing quick feedback on the quality of your project but they can also be misused or poorly written, reducing their value. This post will highlight a few factors that I think make the difference between a ‘bad’ automated test and a ‘good’ automated test.

1. The code

People vary in how they like to write their code, for example in the names they use and the structure they prefer. These differences are fine, but there are still aspects to consider to make sure the test will be useful into the future.

Readable & Self-explanatory
Someone should be able to read through your code and easily figure out what is being done at each part of the process. Extracting logical chunks of code into methods helps with this, but be wary of having methods within methods within methods: deep nesting can add unnecessary complexity and reduce maintainability, since the coupling of code is increased. Use comments sparingly, as the methods and variables should be descriptive enough in most cases and comments quickly get outdated. Format your code so it’s easy to read, with logical line breaks, and be wary of ‘clever code’: it might be more efficient and might squeeze 4 lines of code into 1, but it must still be readable and understandable to the next person looking at it in 5 months’ time. Make your test readable and self-explanatory.
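As a small, hypothetical illustration of the difference this makes (the higher-level helper methods are made up for the example; the low-level calls are standard Selenium WebDriver):

// Hard to follow: the intent is buried in low-level browser calls
driver.FindElement(By.Id("email")).SendKeys("user@example.com");
driver.FindElement(By.Id("password")).SendKeys("secret");
driver.FindElement(By.Id("login-button")).Click();
driver.FindElement(By.LinkText("Campaigns")).Click();

// Easier to follow: the test reads as a sequence of intentions
LogInAs("user@example.com", "secret");
OpenCampaignList();
Assert.That(VisibleCampaignCount(), Is.GreaterThan(0));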

Clear purpose
In order to make it easier to get a picture of test coverage, it helps to have tests with a clear purpose of what they are and aren’t testing. This also means people reading through the test trying to understand it or fix it know exactly what it’s attempting to do. Similarly it helps to not have your tests covering multiple test cases at once. When the test fails, which test case did it fail on? You’ll have to investigate every time just to know where the bug is before you even start getting the reproduction steps. Having multiple test cases also makes naming tests harder and can result in these extra test cases being hidden underneath other tests and perhaps duplicated elsewhere or giving a false picture of test coverage. Make your test have a clear purpose.
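For example, rather than one test that signs up a subscriber and also checks the validation message for a bad email address, a hypothetical sketch with two focused tests makes each failure self-explanatory:

[Test]
public void Subscribe_WithValidEmail_AddsSubscriberToList()
{
    // Arrange, act and assert for the successful path only
}

[Test]
public void Subscribe_WithInvalidEmail_ShowsValidationError()
{
    // Arrange, act and assert for the validation path only
}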

2. Execution

When you actually run your tests, there are a few more attributes to look for that contribute to the usefulness of the test.

Speed
If your tests take too long to run, people won’t want to wait around for the results, or will avoid running them at all to save time, which makes them completely useless. What ‘too long’ looks like will vary in each context, but speed is always important. You will reduce reluctance to run your tests by setting realistic expectations of how long they take. There will also be times when you want a smaller, faster subset of tests for a quick overview, whereas at other times you are happy with a longer, more thorough set when you aren’t as worried about the time it takes. Make your test fast.

Consistency
In most cases, depending on the context of your project, running your test 10 times in a row should give you the same 10 results, and likewise for 100 times in a row. Flaky tests that pass one minute and fail the next are the bane of automated testing. Make sure your tests are robust and will only fail for genuine failures. Add wait loops for criteria to be met, with timeouts that reflect what you consider a reasonable wait for an end user. Do partial matches if a full match isn’t necessary. Make your test consistent.
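A minimal sketch of this kind of wait loop, assuming NUnit (the SubjectOfNewestEmail helper in the usage comment is hypothetical):

using System;
using System.Threading;
using NUnit.Framework;

public static class Wait
{
    // Retry a condition until it passes or a realistic timeout expires,
    // instead of failing on the first slow response.
    public static void Until(Func<bool> condition, TimeSpan timeout)
    {
        var deadline = DateTime.UtcNow + timeout;
        while (!condition())
        {
            if (DateTime.UtcNow > deadline)
                Assert.Fail($"Condition was not met within {timeout.TotalSeconds} seconds");
            Thread.Sleep(500); // poll interval, not a blind sleep
        }
    }
}

// Example usage inside a test, with a partial match on the subject line:
// Wait.Until(() => SubjectOfNewestEmail().Contains("Monthly Newsletter"),
//            TimeSpan.FromSeconds(30));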

Parallelism
The fastest way to run tests is in parallel instead of in sequence. This changes the way you write tests: global variables and state can’t be trusted, and shared services likewise can’t be trusted. To run in parallel, your tests need to be independent, not relying on the output of other tests or on shared data. Wherever possible, run your tests in parallel; find the optimum number of parallel streams and you will have your tests running much faster, giving you quicker feedback on the state of the system under test. Make your test parallelisable.

Summary

There is plenty more that could be said about what makes a good automated test, but this is a good start. Having code that is readable, self-explanatory and clear in purpose, and written so that it runs fast, consistently and in parallel, will get you a long way towards an effective, efficient and, above all, useful set of automated tests.

I’d love to hear in the comments below about any other factors you find most important in writing a good automated test.

Running Parallel Automation Tests Using NUnit v3

With the version 3 release of NUnit, the ability to run your automated tests in parallel was introduced (a long-running feature request). This brings with it the power to speed up your test execution by 2-3 times, depending on the average length of your tests. Faster feedback is crucial in keeping tests relevant and useful as part of your software development cycle.

As parallel execution is new to v3, the support is still somewhat limited, but I’ve managed to set up our automated test solution at my work, Campaign Monitor, to run tests in parallel and wanted to share my findings.

Technology

We are using the following technologies, so you may have to change some factors to match your setup, but it should provide a good starting point.

  • Visual Studio
  • C#
  • Selenium Webdriver
  • TeamCity

The Setup

Step 1 – Install the latest NUnit package

Within Visual Studio, install or upgrade the NUnit v3 package for your solution via Nuget Package Manager (or your choice of package management tool) using:

Install-Package NUnit
OR Update-Package NUnit

Step 2 – Choose the tests that will be run in parallel

In the current release, tests can only be configured to run in parallel at the TestFixture level. So, to set your tests up to run in parallel, choose the fixtures you want to run in parallel and put a [Parallelizable] attribute at the top of each TestFixture.
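A minimal sketch of what that looks like (the fixture and test names are made up):

using NUnit.Framework;

[TestFixture]
[Parallelizable] // this fixture may run in parallel with other parallelizable fixtures
public class CampaignReportTests
{
    [Test]
    public void Report_ShowsOpensAndClicks()
    {
        // test body as normal
    }
}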

Step 3 – Choose the number of parallel threads to run

To specify how many threads will run in parallel to execute your tests, add a ‘LevelOfParallelism’ attribute to the AssemblyInfo.cs file within each test project, with whatever value you desire. How many threads your tests can handle will be linked to the number of cores on the machine running the tests; I recommend either 3 or 4.
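For example, in AssemblyInfo.cs (the value 4 here is just a sensible default for a 4-core build agent):

using NUnit.Framework;

[assembly: LevelOfParallelism(4)] // number of worker threads used for parallel execution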

Step 4 – Install a test runner to run the tests in parallel

Since this is new to NUnit version 3, some test runners do not support it yet. There are two methods I’ve found which work.

1 – NUnit Console is a Nuget package that can be installed and runs as a command line. Install it using:

Install-Package NUnit.Console

then open a cmd window and navigate to the location of the nunit-console.exe file installed with the package. Run the tests with this command:

nunit-console.exe <path_to_project_dll> --workers=<number_of_threads>

This will run the tests in the specified location, using the specified number of worker threads, and output the results in the command window.

2 – NUnit Test Runner is an extension for Visual Studio. Search for ‘NUnit3 Test Adapter’ (I used version 3.0.4, created by Charlie Poole). Once installed, build your solution to populate the tests in the Test Explorer view. You can filter the visible tests using the search bar to find the subset you want (as decided in Step 2), then right-click and run the tests. This will also run your tests in parallel, using the ‘LevelOfParallelism’ attribute defined in Step 3 to determine the number of worker threads. This gives you nicer output to digest than the console runner, but still feels a bit clunky to use.

You’re now set up and running your tests in parallel! Pretty easy, right? The tricky part I found was getting these tests to run in parallel through TeamCity, our continuous integration software.

(Optional) Step 5 – Configure TeamCity to run your tests in parallel

We use TeamCity to run our automated tests against our continuous integration builds, so the biggest benefit of this project was enabling the TeamCity builds to run the tests in parallel. Here’s how I did it.

Note: First, I tried using MSBuild to run the NUnit tests as detailed here, since this was the way we previously ran our tests before the NUnit v3 beta. However, this didn’t work, as it requires you to supply the NUnit version in the build script, and that doesn’t support NUnit 3, 3.0.0, 3.0.0-beta-4 or any other variation I tried. So that was a no-go.

Second, I tried using the NUnit test build step and choosing the v3 type (only available in TeamCity v9 onwards). This led me through a whole string of errors with conflicting references and unavailable methods and, despite my best efforts, would not run the tests. So that was a no-go as well.

The method I decided upon was to use a command line step and run the NUnit console exe directly. I first set up an MSBuild step that copies the NUnit console files and the test project files to a local directory on the build agent running the tests. Then I set up a command line step with these settings:

Run: Executable with parameters
Command Executable: <path_to_nunit-console.exe>
Command parameters: <path_to_test_dlls> --workers=<thread_count>

And with this, I was able to run parallel tests through TeamCity! I’m sure the setup will get easier once support improves, but for now, it’s a good solution that delivers our test suite’s results 3-5 times faster 🙂

Did you find this helpful, or have any tips you want to share? Please comment below!

Lessons learnt from writing Automated Tests

The purpose of this article is to share some of the big lessons I’ve learnt in my 5+ years of writing automated tests for Campaign Monitor, an online service to send beautiful emails. If you are writing automated tests, it’s worth keeping these lessons in mind to ensure you end up with fast, maintainable and useful tests.

1 – Website testing is hard

Websites can be slow to load, both at the whole-page level and at the element level. This means you are constantly writing ‘waits’ into your code to make sure the element you want to test is available. As JavaScript techniques improve, more animations, hovers, fade-ins, call-outs, etc. are being added to websites, which results in a much nicer experience for the user but can be a nightmare when writing a matching automated test. With these things in mind, I suggest the following:

  1. Evaluate whether the test case is worth automating. You can test functionality via an automated test (e.g. does clicking button X cause action Y to happen?), but automated tests are less useful at telling you whether a page animation looks correct visually, so you are better off keeping that as a manual regression test.
  2. Do as much of the automated test outside of the UI as possible. If you can use an API call or direct database updates to get your system into the required state, or to verify the UI action performed in the app, you will save a lot of execution time and avoid possible test failures before you even get to the point of interest for the test.
  3. Use smart waits. As often as practical, you are better off asserting an element’s state within a short wait loop. You usually don’t want your test to fail because your app took 4 seconds to log in instead of the usual 2. Always opt for waiting for a specific element state rather than Thread.Sleep! (See the sketch after this list.)
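As a minimal sketch of a smart wait with Selenium WebDriver in C# (requires the Selenium Support package; the element ID is hypothetical):

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

// Inside a test, where 'driver' is your IWebDriver instance:
// wait up to 10 seconds for the dashboard heading to appear after login,
// rather than sleeping for a fixed amount of time.
var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
IWebElement heading = wait.Until(d => d.FindElement(By.Id("dashboard-heading")));

// The blind alternative wastes time when the page is fast and still fails when it is slow:
// Thread.Sleep(4000);  // avoid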

2 – Make your tests understandable

When you first write your test, you have all the context you need: you know why you chose each method over the alternatives and what you are attempting to do at each step along the way. But what about the next person to look at that test in a year’s time, when the requirements of that test have changed? Writing a test that is easy to understand for all readers will save valuable time later on and will mean the test continues to provide value for a long time. How do we do this?

  1. Use meaningful variable and method names. You should be able to read through your test without diving any deeper into the methods and know what is going on. To do this, you need your variables and methods to accurately and succinctly explain what they are for. For example:
    int outputRowNumber = GetTableRowForCustomer("customer name");
    compared to:
    int x = GetCustomer("customer name");
  2. Add comments when necessary. Your colleagues aren’t idiots, so they won’t need comments on every line of code, but if, for example, you have a line of code calculating the number of seconds left in the day, it might be worth a few words to explain what you are doing.

3 – Run your tests often

The purpose of creating and running these tests is to find bugs in your application, so you should be running all of your tests at least once a week, and most tests daily or more often. This will make it much easier to identify the change that caused a test to fail. How often you run the tests will depend on how often you update the code under test: the more often you update, the more often you should run the tests. You will probably have a mix of tests that take varying amounts of time, so be smart about when you run them: run the fast ones during the day, but leave the slow ones to run overnight so you don’t have to wait for the results.

We’ve had great success in having a small and quick subset of tests that do a basic check of a broad range of features across the application. This gives us quick feedback to know that there aren’t any major issues with core functionality. We call them ‘Smoke tests’, because if there’s smoke, there’s fire!

4 – Be regular in maintenance

Anyone who has tried automated testing will know that tests get out of date as the features they cover change. But if you keep an eye on your test results regularly, and act promptly on anything that needs updating, you will save yourself a lot of pain down the track. It’s much better to tackle the cause of 2 failing tests now than to deal with multiple causes of 30 failures in a month’s time. If you can address any maintenance for updated feature behaviour within a week of it happening, the context will be fresh and you will be able to get on top of it quickly, before it spreads and merges with other issues and you suddenly have a sea of failed tests and don’t know where to start.

5 – Be technology independent

We previously used a technology called WatiN to drive our test automation and are now in the process of moving over to Selenium WebDriver. We’ve also used other technologies in the past, and each time a change is needed it takes a lot of time and effort to convert our solution. If/when the time comes that you feel the tool you are using no longer meets your needs, the cost to change can be huge. It’s possible every method you’ve written depends on your current technology choices, which either rules out ever changing, or means there is a steep curve involved in changing over.

So instead, add an interface layer on top of your code which just says “I want the ‘browser’ to ‘click’ on ‘this link’”. The implementation can then figure out how to get the browser to click on the given link. When you decide you want to try a new technology, just add a new implementation of the interface and nothing else needs to change.
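A minimal sketch of such an interface layer, with a Selenium WebDriver implementation behind it (the interface shape is just an example, not our actual framework):

using OpenQA.Selenium;

// Tests depend only on this interface; swapping WatiN for WebDriver (or the
// next tool) means writing a new implementation, not rewriting every test.
public interface IBrowser
{
    void NavigateTo(string url);
    void ClickLink(string linkText);
    string PageTitle { get; }
}

public class SeleniumBrowser : IBrowser
{
    private readonly IWebDriver _driver;

    public SeleniumBrowser(IWebDriver driver)
    {
        _driver = driver;
    }

    public void NavigateTo(string url) => _driver.Navigate().GoToUrl(url);

    public void ClickLink(string linkText) => _driver.FindElement(By.LinkText(linkText)).Click();

    public string PageTitle => _driver.Title;
}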

I will probably add a follow up post as summarising 5 years of learning into 1 post is hard!