The Champion for Quality

Being the Champion for Quality doesn’t mean being the only one who cares about it, the only one who does anything to improve it, or even the one who is best at improving quality (though that might be true). The Champion for Quality is the voice of motivation and direction for the team to create a high-quality product. That’s how I picture the role of a QA/Tester.

You’re the Voice

In the words of Aussie Legend John Farnham, “You’re the Voice”:

ASK what people consider important for quality so that you know which areas to focus on. Is it more important for the product to be fast or to be accurate? Is it expected to handle a large number of users? What sorts of failures are acceptable, and which are unacceptable?

TEACH people that quality matters. To achieve the level of quality desired in the target areas identified above, the whole team needs to be on board. Show people the full story: how a lapse of quality in one area eventually leads to a negative customer experience, and then a negative outcome for the company. Teach them how cutting corners now leads to more work later to do it right. This shows them the importance of their work, makes them feel a bigger part of the company, and encourages them to put in the extra effort now, not later.

TRAIN people in how to build a quality product (by your definition of quality) by giving them the tools they need: the test cases, the conditions to experiment with, a peer to review their work, and a clear understanding of what they need to do to ensure they are creating a high-quality product.

Be the Champion for Quality in your team: Ask, Teach and Train!


2015 in review

The WordPress.com stats helper monkeys prepared a 2015 annual report for this blog.

Here’s an excerpt:

A San Francisco cable car holds 60 people. This blog was viewed about 330 times in 2015. If it were a cable car, it would take about 6 trips to carry that many people.


The Power of Interns

I started out in my current job as an intern, and my team has just said goodbye to an intern who was with us for the last six months, so it seemed like a great time to share my thoughts on interns.

1. It’s great experience for the Intern

The first point is the most obvious, so I won’t say much. Getting hands-on experience with a new skill is a really effective and efficient way to learn. It takes initiative and curiosity from the intern, but seeing what real work looks like, and trying it out for themselves, is something that can’t be taught from a book.

2. It challenges the rest of the team

As Katrina Clokie states in her blog (http://katrinatester.blogspot.com.au/2015/12/why-you-should-hire-junior-testers.html), the junior (or intern, in this case) can push the rest of the team by making them think hard about how to explain the work they do, how they do it and why they do it that way. You really need to understand something to be able to teach it, so the rest of the team strengthen their own knowledge as they attempt to pass it on to the intern.

Another benefit is that in answering the ‘Why do we do it that way?’ question, other team members might see in retrospect that the process they have been using could be improved.

There will obviously be time taken away from the job to train the intern, but this growth of knowledge through explaining, and the chance to rethink current processes, makes up for it.

3. You create a Spokesperson

If you treat your intern well, in the variety and importance of the work you give them and in workplace benefits, they will speak well about your company to others. They may interact with people doing internships at other companies, or with their classmates, and you will gain some very useful word-of-mouth marketing about what it’s like to work at your company, possibly attracting future employees. (The intern may also turn into a future employee!)

4. They bring fresh ideas

The intern is usually new to the profession and is often completing, or has just completed, their studies. This means they have minimal preconceptions, from time in the field, about how things should be done or what tools should be used. They might also be aware of new tools and technologies from their studies, or from other fields they are coming from. Combined, these factors could lead to an innovative approach to problems your company is facing that the rest of the team hadn’t thought of.

What do you think?

Have you had good or bad experiences with hiring interns? What makes a good or bad intern, and how do you hire the right type to start with? Do you like the idea of internships, or are you against them? I’d love to hear your thoughts!

Bug Report: The Baby is crying


What happens if you treat a crying baby as a bug you need to write a bug report for? A few weeks ago I welcomed my first child into the world, and as with all babies, he cries a lot so I thought I’d try writing a bug report for when he cries.


Title: My baby is crying
Priority: Major

Description:
Baby has wide open mouth, loud noises are heard coming from the mouth, a watery liquid appears from edges of the eyes, limbs may flail around wildly and body may experience moments of tension with arched back.
Commonly known as crying.

Steps to reproduce:
1 - Have a baby (or find existing one, much easier and quicker)
2 - Sit and watch them for somewhere between 1 minute and 12 hours (the wait gets longer as the baby gets older)
3 - Baby will cry for anywhere between a few seconds and several hours

Variables which may speed up process:
- Baby is too warm
- Baby is too cold
- Baby has gone to the toilet
- Baby heard a loud noise
- Baby realised parents aren't nearby/holding them
- Baby needs to burp
- Baby needs to fart
- Baby is tired
- Baby was taken out of the bath
- Baby is in an uncomfortable position
- X other reasons unknown to man

Recommendation:
There are many reasons a baby might cry, and babies have been crying for as long as there have been babies. Due to the very large scope of solving this problem, the recommended action is "Will not fix". Many workarounds exist, such as cuddling, rocking, swaying, patting, talking, singing, changing, etc., so stick with these for now.

Getting started testing Microservices

Overview

Microservices involve breaking functional areas of code into separate services that run independently of each other. However, there is still a dependency on the type and format of the data being passed around, and we need to take it into consideration. If that data changes, and other services were depending on the previous format, they will break. So you need to test the changes between services!

To do this, you can either have brittle end-to-end integration tests that regularly need updating and are semi-removed from the process, or you can be smarter and just test that the individual services continue to provide and accept data as expected, highlighting when changes are needed. This approach leads to much quicker identification of problems, is more adaptive, won’t be as brittle as integration tests, and should be a lot faster to run as well.

The Solution

What I’m proposing is to introduce contract-based testing. (Note: we are only in the early stages of trying this out at my work.)
Here’s how it works:

Service A -- Data X --> Service B

Service A provides Service B with some sort of data payload in JSON format X. We call Service A the provider and Service B the consumer. We start with the consumer (Service B) and determine what expectations it has for data package X; we call this the contract. We would then have standard unit tests, run with every build of that service, stubbing out the data coming in from a pretend ‘Service A’. This means that as long as Service B gets its data X in the format it expects, it will do what it should with it.
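As a rough sketch of that consumer-side test, here is a plain C#/NUnit version with a hand-rolled stub playing the part of Service A. The service names, URL and payload fields are all hypothetical, and this is not any particular contract-testing library, just the underlying idea:

```csharp
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using NUnit.Framework;

// Pretend 'Service A': returns a canned payload in the agreed format X.
public class StubServiceAHandler : HttpMessageHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var response = new HttpResponseMessage(HttpStatusCode.OK)
        {
            // The contract: Service B expects an id and a name.
            Content = new StringContent("{\"id\": 1, \"name\": \"example\"}")
        };
        return Task.FromResult(response);
    }
}

[TestFixture]
public class ServiceBConsumerTests
{
    [Test]
    public async Task ServiceB_HandlesDataX_FromStubbedServiceA()
    {
        // Service B's HTTP client talks to the stub instead of the real Service A.
        var client = new HttpClient(new StubServiceAHandler());
        var json = await client.GetStringAsync("http://service-a.example/data");

        // Assert Service B's expectations of format X (the contract).
        StringAssert.Contains("\"id\"", json);
        StringAssert.Contains("\"name\"", json);
    }
}
```

Tools like Pact formalise this by recording those expectations as a contract file that the provider can then be verified against.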

The next part is to make sure Service A knows that Service B depends on it to provide data X in a given format, or with given data, so that if a change is needed, Service B (or any other service dependent on X) can be updated in line, or a non-breaking change can be made instead.

This is consumer-driven contract testing. It means we can guarantee that Service A is providing the kind of data that Service B expects, without having to test their actual connections. Spread this out to a larger scale, with 5 services dependent on Service A’s data and Service A giving out 5 types of data to different subsets of services, and you can see how this makes things simpler without compromising effectiveness.

A variation of this is for Service B to continue stubbing out the data from Service A for the CI builds, but instead of testing on Service A that it still meets Service B’s expected data format, we put that test on Service B as well: it also checks the stub being used to simulate Service A against what is actually coming in from Service A on a daily basis. When it finds a change, the stub is updated and/or a change request is made to Service A.
Both types have advantages and disadvantages.

In Practice

Writing these sorts of tests can be done manually, but there are tools that make it easier. Two such products are Pacto and Pact. Both are written in Ruby; Pacto is by ThoughtWorks and Pact is by REA (realestate.com.au). Of the two, I think Pact is the better option, as it appears to be more regularly updated and has better documentation. PactNet is a .NET version of Pact, written by SEEK Jobs; .NET is the language used at my work, so PactNet is the solution we’re looking into.

These tools provide a few different options along the lines of the concepts described above. In one use case, you provide the tool with an HTTP endpoint; it hits the endpoint and creates a contract from the response (describing what a response should look like). In subsequent tests, the same endpoint is hit and the result compared with the previously saved contract, so the tool can tell if there have been any breaking changes.
I’m not sure how well these tools handle specifying that there might be only part of the response you really care about, while the rest can change without breaking anything. That would be a more useful implementation.

Further reading

Note that most of the writing available online about these tools refers to the Ruby implementation, but it’s transferable to the .NET version.

Influential people

People that are big contributors to this space worth following or listening to:

  • Beth Skurrie – Major contributor and speaker on Pact from REA
  • Martin Fowler – Writes a lot on microservices, how to build them and test them, on a theory level, not about particular tools.
  • Neil Campbell – Works on the PactNet library

Got any experience testing microservices and lessons to share? Other resources worth including? Please comment below and I’ll include them.

The Art of Problem Solving

Solving problems is something I love doing. My job presents me with problems to solve every day (e.g. how to test ‘invisible’ features, or how to simulate a customer using our product), and outside of work I’m also faced with problems to solve on a regular basis (e.g. why the hot water tap doesn’t heat up, or why the TV reception is bad for one channel only). Whatever the problem, there are ways of approaching it that apply in most cases. Here’s the approach I use.

Step 1 – Gathering Intel

First up, we need to understand the problem we are trying to solve. So start by gathering all the information you can about the problem. Here are some starter questions worth asking:

  • What steps are needed to make the problem occur? (So you can determine if your fix worked)
  • What is the expected outcome and what is the actual outcome? (So you can tell when it’s been fixed)
  • Is it something that happens every time, or just sometimes? If so, how often? (This might tell you something about why it’s happening)
  • Does anything you do seem to influence whether it happens, or how severe it is? (Another potential cause of the problem)
  • When did the problem start? Is it recent, or has it always happened? Can you trace it back to a particular event? (Did that event cause the problem, or maybe it just made it visible?)

Once you’ve got these questions (or similar ones) answered, you should have a good idea of the problem you are trying to solve, and a way to test whether your fix has worked.

Step 2 – Creating a hypothesis

Now that you know the problem, step 2 is to create a hypothesis on how you might fix it. This will be based on your findings from step 1 about which factors seem to contribute to the problem, with an idea of what change needs to be made to fix it.

For example, a simple case might be that last month someone changed the type of paper the photocopier was using, and over the past few weeks there appears to have been an increase in paper jams. So your hypothesis is that changing back to the previous paper should decrease the occurrence of paper jams. You’ve seen the change in behaviour, you’ve noticed an event which possibly contributed to the problem, and you’ve come up with a method to fix the problem, including a way to tell if it worked.

Perhaps the problem is a bit trickier. You’ve been using your iPhone for months as a calendar, and it sends you email notifications when you have an appointment coming up. You recently upgraded to a newer model iPhone, at the same time as changing the email address you receive notifications on. The iPhone calendar app has been updated, and you decide that instead of checking those emails on your iPhone, you are going to start reading them on your new iPad. But the emails have the wrong times for your appointments.

Lots of variables and no obvious cause for the change. But the same rules still apply: find out all you can, and create your best-guess hypothesis of what might fix the problem.

Step 3 – Test out your hypothesis

Finally, test your hypothesis: make the change(s) required and see if the problem still happens. This might take a lot of testing, depending on how often and how predictably the problem reproduced. Eventually, you’ll figure out whether your hypothesis was correct.

Hypothesis didn’t solve the problem? Repeat steps 2 and 3 with the new information you’ve just learnt, until you find your answer.

Applying it to my job as a tester

How does this fit in with being a tester? Well, there are two main types of problems I come across in my work.

  1. How to test tricky-to-see or tricky-to-verify areas of the project
  2. How to create automated tests to cover those areas that are also useful, readable and maintainable (see my previous blog post for more on writing good automated tests)

In both these cases, the same rules apply as I’ve outlined above. It might take a while to gather the right data and come up with a solution, but I’ve found this simple, general approach quite helpful in my work. In my role as a tester for the online SaaS company Campaign Monitor (which is awesome, by the way!), I have lots of things to consider in how to test our product and, where appropriate, automate tests to cover our various UI elements, so I constantly find myself with problems to solve: from how to run automated tests in parallel, to figuring out how customer feedback should shape the way I test and the things I look for.

I hope this simple approach might help others solve their problems too!

What makes a good automated test?

Testers are increasingly being asked to write, maintain and/or review automated tests for their projects. Automated tests can be very valuable in providing quick feedback on the quality of your project but they can also be misused or poorly written, reducing their value. This post will highlight a few factors that I think make the difference between a ‘bad’ automated test and a ‘good’ automated test.

1. The code

People vary in how they like to write their code, for example in the names they use and the structure they prefer. It’s fine to be different, but there are still aspects that should be considered to make sure the test is going to be useful into the future.

Readable & Self-explanatory
Someone should be able to read through your code and easily figure out what is being done at each part of the process. Extracting logical chunks of code into methods helps with this, but be wary of having methods within methods within methods… Deep nesting can add unnecessary complexity and reduce maintainability, since coupling of code is increased. Use comments sparingly, as methods and variables should be descriptive enough in most cases, and comments quickly get outdated. Format your code so it’s easy to read, with logical line breaks, and be wary of ‘clever code’: it might be more efficient, and might get 4 lines of code into 1, but it must still be readable and understandable to the next person looking at it in five months’ time. Make your test readable and self-explanatory.

Clear purpose
To make it easier to get a picture of test coverage, it helps to have tests with a clear purpose: what they are and aren’t testing. This also means people reading through the test, trying to understand or fix it, know exactly what it’s attempting to do. Similarly, it helps not to have your tests covering multiple test cases at once. When the test fails, which test case did it fail on? You’ll have to investigate every time just to know where the bug is, before you even start getting the reproduction steps. Having multiple test cases also makes naming tests harder, and can result in the extra test cases being hidden underneath other tests, perhaps duplicated elsewhere, or giving a false picture of test coverage. Make your test have a clear purpose.

2. Execution

When you actually run your tests, there are a few more attributes to look for that contribute to the usefulness of the test.

Speed
If your tests take too long to run, people won’t want to wait around for the results, or will avoid running them at all to save time, which makes them completely useless. What ‘too long’ looks like will vary in each context, but speed is always important. You will reduce reluctance to use your tests by setting a realistic expectation of how long they take to run. There will also be times when you want a smaller, faster subset of tests for a quick overview, whereas at other times you are happy with a longer, more thorough set when you aren’t as worried about the time it takes. Make your test fast.

Consistency
In most cases, depending on the context of your project, running your test 10 times in a row should give you the same 10 results; similarly for 100 times in a row. Flaky tests that pass one minute and fail the next are the bane of automated testing. Make sure your tests are robust and will only fail for genuine failures. Add wait loops for criteria to be met, with realistic timeouts reflecting what you consider a reasonable wait for an end user. Do partial matches if a full match isn’t necessary. Make your test consistent.

Parallelism
The fastest way to run tests is in parallel instead of in sequence. This changes the way you write tests: global variables and state can’t be trusted, and neither can shared services. To run in parallel, your tests need to be independent, not relying on the output of other tests or on shared data. Wherever possible, run your tests in parallel, finding the optimum number of streams, and you will have your tests running much faster, giving you quicker feedback on the state of the system being tested. Make your test parallelisable.

Summary

There is plenty more that could be said about what makes a good automated test, but this should make a good start. Having code that is readable, self-explanatory and clear of purpose, written in a way that runs fast and consistently and can be run in parallel, will get you a long way towards an effective, efficient and, above all, useful set of automated tests.

I’d love to hear in the comments below about which other factors you find most important in writing a good automated test.

Running Parallel Automation Tests Using NUnit v3

With the version 3 release of NUnit, the ability to run your automated tests in parallel was introduced (a long-running feature request). This brings with it the power to greatly speed up your test execution time, by 2–3 times depending on the average length of your tests. Faster feedback is crucial in keeping tests relevant and useful as part of your software development cycle.

As parallel execution is new in v3, support is still somewhat limited, but I’ve managed to set up our automated test solution at my work, Campaign Monitor, to run tests in parallel, and wanted to share my findings.

Technology

We are using the following technologies, so you may have to change some factors to match your setup, but it should provide a good starting point.

  • Visual Studio
  • C#
  • Selenium Webdriver
  • TeamCity

The Setup

Step 1 – Install the latest NUnit package

Within Visual Studio, install or upgrade the NUnit v3 package for your solution via Nuget Package Manager (or your choice of package management tool) using:

Install-Package NUnit
OR Update-Package NUnit

Step 2 – Choose the tests that will be run in parallel

In the current release, tests can be configured to run in parallel at the TestFixture level only. So, to set up your tests to run in parallel, choose the fixtures you want to parallelise and put a [Parallelizable] attribute at the start of each TestFixture.
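For example, a fixture opted in to parallel execution looks like this (the fixture and test names are illustrative):

```csharp
using NUnit.Framework;

[TestFixture]
[Parallelizable] // this fixture may run in parallel with other parallelizable fixtures
public class LoginPageTests
{
    [Test]
    public void UserCanLogInWithValidCredentials()
    {
        // ... test body ...
    }
}
```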

Step 3 – Choose the number of parallel threads to run

To specify how many threads will run in parallel to execute your tests, add a ‘LevelOfParallelism’ attribute at the assembly level (e.g. in AssemblyInfo.cs) in each test project, with whatever value you desire. How many threads your tests can handle will be linked to the number of cores on the machine running the tests. I recommend 3 or 4.
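Since it is an assembly-level attribute, it goes in AssemblyInfo.cs (or any file in the test project):

```csharp
using NUnit.Framework;

// Run up to 3 worker threads in parallel across this test assembly.
[assembly: LevelOfParallelism(3)]
```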

Step 4 – Install a test runner to run the tests in parallel

Since this is new to NUnit version 3, some test runners do not support it yet. There are two methods I’ve found which work.

1 – NUnit Console is a NuGet package that can be installed and run from the command line. Install it using:

Install-Package NUnit.Console

then open a cmd window and navigate to the location of the nunit-console.exe file installed with the package. Run the tests with this command:

nunit-console.exe <path_to_project_dll> --workers=<number_of_threads>

This will run the tests in the location specified, using the number of worker threads specified, and output the results to the command window.

2 – NUnit Test Runner is an extension for Visual Studio. Search for ‘NUnit3 Test Adapter’ (I used version 3.0.4, created by Charlie Poole). Once installed, build your solution to populate the tests into the Test Explorer view. You can filter the visible tests using the search bar to find the subset you want (as decided in Step 2), then right-click and run the tests. This will also run your tests in parallel, using the ‘LevelOfParallelism’ attribute defined in Step 3 to determine the number of worker threads. This gives you nicer output to digest than the console runner, but still feels a bit clunky to use.

You’re now set up and running your tests in parallel! Pretty easy, right? The tricky part I found was getting these tests to run in parallel through TeamCity, our continuous integration software.

(Optional) Step 5 – Configure TeamCity to run your tests in parallel

We use TeamCity to run our automated tests against our continuous integration builds, so the biggest benefit of this project was enabling the TeamCity builds to run in parallel. Here’s how I did it.

Note: first, I tried using MSBuild to run the NUnit tests, as detailed here, since this was the way we previously ran our tests before the NUnit v3 beta. However, this didn’t work, as it requires you to supply the NUnit version in the build script, and that doesn’t support NUnit 3, or 3.0.0, or 3.0.0-beta-4, or any other variation I tried. So that was a no-go.

Second, I tried using the NUnit test build step and choosing the v3 type (only available in TeamCity v9 onwards). This led me through a whole string of errors with conflicting references and unavailable methods and, despite my best efforts, would not run the tests. So that was a no-go as well.

The method I decided upon was to use a command line step and run the NUnit console exe directly. I first set up an MSBuild step that would copy the NUnit console files and the test project files to a local directory on the build agent running the tests. Then I set up a command line step with these settings:

Run: Executable with parameters
Command Executable: <path_to_nunit-console.exe>
Command parameters: <path_to_test_dlls> --workers=<thread_count>

And with this, I was able to run parallel tests through TeamCity! I’m sure the setup will get easier once support improves, but for now, it’s a good solution that gives our test suite results 3–5 times faster 🙂

Did you find this helpful, or have any tips you want to share? Please comment below!

Lessons learnt from writing Automated Tests

The purpose of this article is to share some of the big lessons I’ve learnt in my 5+ years of writing automated tests for Campaign Monitor, an online service for sending beautiful emails. If you are writing automated tests, it’s worth keeping these lessons in mind to ensure you end up with fast, maintainable and useful tests.

1 – Website testing is hard

Websites can be slow to load, both at the whole-page level and at the element level. This means you are constantly writing ‘waits’ into your code to make sure the element you want to test is available. As JavaScript techniques improve, more animations, hovers, fade-ins, call-outs, etc. are being added to websites, which results in a much nicer experience for the user, but can be a nightmare for writing a matching automated test. With these things in mind, I suggest the following:

  1. Evaluate whether the test case is worth automating. You can test functionality via an automated test (e.g. does clicking button X cause action Y to happen?), but automated tests are less useful at telling you whether a page animation looks correct visually, so you are better off keeping that as a manual regression test.
  2. Do as much of the automated test outside the UI as possible. If you can use an API call or make direct database updates to get your system into the required state, or to verify the UI action performed in the app, you will save a lot of execution time and avoid possible test failures before you even get to the point of interest for the test.
  3. Use smart waits. As often as practical, do your assert of an element state within a short loop. You usually don’t want your test to fail because your app took 4 seconds to log in instead of the usual 2. Always opt for waiting for a specific element state rather than Thread.Sleep!
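As a sketch of the ‘smart wait’ idea with Selenium WebDriver in C# (the element id here is hypothetical):

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

// Poll for up to 10 seconds for the dashboard element to appear after
// login, instead of a fixed Thread.Sleep. The wait returns as soon as
// the condition is met, so a fast login doesn't pay the full timeout.
var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
IWebElement dashboard = wait.Until(d => d.FindElement(By.Id("dashboard")));
```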

2 – Make your tests understandable

When you first write your test, you have all the context you need: you know why you chose each method over the others, and you know what you are attempting to do at each step along the way. But what about the next person to look at that test in a year’s time, when the requirements of that test have changed? Writing a test that is easy for all readers to understand will save valuable time later on, and will mean the test continues to provide value for a long time. How do we do this?

  1. Use meaningful variable and method names. You should be able to read through your test, without diving down any deeper into the methods, and know what is going on. To do this, you need your variables and methods to accurately and succinctly explain what they are for. For example:
    int outputRowNumber = GetTableRowForCustomer(“customer name”);
    compared to:
    int x = GetCustomer(“customer name”);
  2. Add comments when necessary. Your colleagues aren’t idiots, so they won’t need comments on every line of code, but if, for example, you have a line of code calculating the number of seconds left in the day, it might be worth a few words to explain what you are doing.

3 – Run your tests often

The purpose of creating and running these tests is to find bugs in your application, so you should be running all of your tests at least once a week, and most tests daily or more often. This makes it much easier to identify the change that resulted in a failed test. How often you run the tests will depend on how often you update the code under test: the more often you update, the more often you should run the tests. You will probably have a mix of tests that take varying amounts of time, so be smart about when you run them: run the fast ones during the day, but leave the slow ones to run overnight so you don’t have to wait for the results.

We’ve had great success in having a small and quick subset of tests that do a basic check of a broad range of features across the application. This gives us quick feedback to know that there aren’t any major issues with core functionality. We call them ‘Smoke tests’, because if there’s smoke, there’s fire!

4 – Be regular in maintenance

Anyone who has tried automated testing will know that tests get out of date as the features they cover change. But if you keep an eye on your test results regularly, and act quickly on anything that needs updating, you will save yourself a lot of pain down the track. It’s much better to tackle the cause of 2 failing tests now than to deal with multiple causes of 30 failures in a month’s time. If you can address any maintenance tasks for updated feature behaviour within a week of the change, the context will be fresh and you will be able to get on top of it quickly, before it spreads and merges with other issues and suddenly you have a sea of failed tests and don’t know where to start.

5 – Be technology independent

We previously used a technology called WatiN to drive our test automation, and are now in the process of moving over to Selenium WebDriver. We’ve used other technologies in the past too, and each time a change is needed, it takes a lot of time and effort to convert our solution. If and when you feel the tool you are using no longer meets your needs, the cost of changing can be huge. Potentially every method you’ve written depends on your current technology choices, which either rules out ever changing, or means there is a steep curve involved in changing over.

So instead, add an interface layer on top of your code which just says “I want the ‘browser’ to ‘click’ on ‘this link’”. The implementation can then figure out how to get the browser to click on the given link. When you decide you want to try a new technology, just add a new implementation of the interface and nothing else needs to change.
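A minimal sketch of that interface layer in C# (the names are illustrative, not our actual code):

```csharp
using OpenQA.Selenium;

// Tests depend only on this abstraction...
public interface IBrowser
{
    void GoTo(string url);
    void ClickLink(string linkText);
}

// ...while each automation tool gets its own implementation.
public class SeleniumBrowser : IBrowser
{
    private readonly IWebDriver _driver;
    public SeleniumBrowser(IWebDriver driver) => _driver = driver;

    public void GoTo(string url) => _driver.Navigate().GoToUrl(url);
    public void ClickLink(string linkText) =>
        _driver.FindElement(By.LinkText(linkText)).Click();
}
```

Swapping from WatiN to WebDriver then means writing one new IBrowser implementation, rather than touching every test.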

I will probably add a follow up post as summarising 5 years of learning into 1 post is hard!

Bug Bash vs Usability Testing Session

The Scenario

Your project is nearing completion and you are getting close to release date. You think you’ve found the majority of the bugs there are to find in this product, and are now ready to hand it over to others in the company to get their eyes over it. The question is: do you organise a ‘Bug Bash’ or a ‘Usability Testing Session’?

I think the answer depends on what you are seeking to get out of it. I’ll outline the differences between the two, when you should use each one, and some tips for making them useful.

Bug Bash

My Definition: A group of people trying to find as many bugs as they can within a given feature in a short amount of time.

Preparation:

  1. Organise to get a group of people to meet in the same room for a designated amount of time (recommend 1 hour). Make sure there is a computer available to everyone, with any prior setup data ready to go.
  2. Plan what you are going to tell the group about the product, what areas you consider most risky, how to record any bugs they find and any further setup information they will need.
  3. Organise helpers from your team to verify and record bugs with you so that people aren’t all waiting for you.

Execution:

  1. Welcome and thank everyone for their attendance
  2. Explain what the product is, and any particular areas you would like people to focus on.
  3. Explain how to record any bugs people find
  4. Start the timer (rec. 1 hour)
  5. Encourage interaction between everyone to help the test ideas flow: food and drinks on the table, light background music, etc.
  6. With your helpers, record any bugs and answer any questions raised. Resist giving too much information away; if a user can't figure something out, it may be a usability bug.
  7. At the end, thank everyone for their participation and let them know you will send out the results soon (the follow up helps reinforce that you value their input)

Variations:

  • Put people in teams of 2 working on the same machine: 1 person 'drives' and the other 'navigates' by suggesting test cases to try, led by what they see the 'driver' doing.
  • Have prizes for the person/team that finds the most bugs and the best bug.

Who should I invite? Anyone and everyone! It's good to get a mix of people from different departments (QA, Design, Development, Marketing…), some from your team and some from other teams. I like to get as many testers on board as possible, as realistically they will be most likely to find bugs, but at the same time, a developer or a designer will approach the problem from a different viewpoint and is likely to find different types of bugs, so a mix is important.

Usability Testing Session

My Definition: Observing the user-interactions as a group of people attempt to complete a certain set of tasks with your product, whilst they gain familiarity with it.

Preparation:

  1. Organise to get a group of people to meet in the same room for a designated amount of time (recommend 1 hour). Make sure there is a computer available to everyone, with any prior setup data ready to go.
  2. Plan the tasks that the people must complete during the session. The tasks should be top-level only to allow the user freedom to explore the feature and figure out how to do it (even if it means getting lost along the way), eg. Submit a product review of your last purchased item.
  3. Organise helpers from your product team to record observations and answer questions from the group as they complete the challenge.

Execution:

  1. Welcome and thank everyone for their attendance
  2. Explain the tasks that each person is to complete during the session
  3. Explain the purpose of the session, that it is to observe their interactions, and make note of any situations where they are confused by the product, or unclear of what to do next. (ie usability bugs)
  4. Start the session; it's best to keep a time limit on it to keep everyone on track.
  5. Create a relaxed vibe with food and drinks on the table, light background music, etc..
  6. Together with your helpers, observe the group's interactions with the feature, recording any time they become confused or unclear on what to do. Answer any questions they have along the way, as it's also an opportunity for the group to get familiar with the product (it's often helpful for the whole company to understand the new product being developed).
  7. Finish up, thank the participants and let them know that you will send out the results soon. (To help reinforce that you value their input)

Variations:
  • Put people in teams, it will depend on your product for whether this is a likely scenario for actual end-users of your product. If end-users will be working solo, don’t use teams in the session either.
  • Have gifts for the participants as thanks, or perhaps prizes for the people who complete the tasks the fastest. (Being fast means nothing was too confusing for them)

Who should I invite? People who satisfy the following criteria:
  • Have not worked closely on the development of the product (they will already know how to use it)
  • Are interested in learning about your new product OR
  • Are a good representation of who your end-users will be (whether that is marketers, developers, non-tech savvy, etc..)

How to choose

Now that we have looked at each of our options (of course there are more out there, but we're just focusing on 2 here), how do we pick which one to use?

Pick ‘Bug Bash’ if…

  • You are trying to find as many issues with your product as you can to give greater confidence in releasing it
  • You are not trying to teach people how to use the product
  • You don’t have access to people who meet the ‘Usability Testing Session’ group criteria

Pick ‘Usability Testing Session’ if…

  • You are trying to familiarise people in your company with the product you are soon to release
  • You are trying to find usability bugs (perhaps you are worried your product is confusing to use at times)

Summary

This was a brief overview of 2 common techniques for getting other people in your company to look over the product your team has been working on, to help find the issues you have become blind to from working on the product every day. The technique you choose should be based on what you hope to get out of the session.

Feedback

I'd love to hear from you if you've found either of these techniques helpful in the past, or if you have any further tips to enhance the usefulness and enjoyment of these sessions. Or perhaps you prefer a different technique entirely? Put it all in the comments below :)