Australian Testing Days 2016 Reflection – Day 2

For Part 1 of this series, see Australian Testing Days 2016 Reflection – Day 1

Day 2

Coaching Testers (workshop)

I chose Anne-Marie Charrett's workshop for day 2 to learn about coaching testers. Even though I'm not directly in a coaching role, I thought there would be lots of great things to learn and a number of things that would apply to teaching non-testers in my team about testing. The day started with discovering why each of us had come and what we were hoping to learn from the day, so we could focus our discussions accordingly.

First up we thought about coaching sessions and how they should not transform into a “How are you going” session. Much more effective is to make the sessions task-based (e.g. let’s work through an activity and I’ll offer assistance along the way) or for the student to come with a question.

The next lesson was a great approach to teaching new techniques or skills by starting with applying it to a familiar, non-testing situation. For example, creating a mind-map of your family. This takes away the fear of getting it wrong since it’s known and familiar, and greater focus can be put on the technique or skill being learnt.

In understanding what things to coach and how to go about coaching them, we need to understand the context of both the coach and the student. They will each have two images of themselves: one of how they view themselves as a tester, and the other of how they view themselves as a coach/student. There are also many other factors to consider, keeping in mind that different approaches work for different people.

The day was broken up with a number of movie clips to show different coaching styles. First up was the scene in The Matrix where Morpheus asks Neo to show him the Kung-Fu he has just learnt. Throughout the scene, Morpheus adds more pressure and direction as Neo progresses through the task. It starts off easy, then gets harder and more physical as Neo is pushed further and further to learn the lesson Morpheus was teaching him (the rules can be broken).

Next we watched a clip from The Blind Side where Sandra Bullock's character was able to teach her foster son how to defend in gridiron better than the team coach could. This was because she better understood the needs of the student and what motivated him (in this case, family). The question he needed answered was "Why", not "How". Trust was also a big factor in the success of the coaching efforts.

Taking this lesson further, as a coach you want to find out what people's drive is: why they are in testing, how they got here and where they want to be. Then the coach can focus on directing them on how to get there and where they need the most help, based on the journey they've had so far.

Our next video was from The Magnificent Seven, where a young character was challenged to match his speed at drawing his weapon against the team he hoped to join. This was more about putting people out of their comfort zones and seeing how they will respond to pressure. It was a way to show this man that his skills weren’t up to scratch, and that he would need to lift his game.

Following this we talked about Socratic questioning, which is all about not directly answering people's questions but instead asking questions in return to help lead them to discover the answer for themselves. This is a very powerful tool in coaching: an answer you come to by yourself is much more powerful and memorable than one you were told. But it can also be taken the wrong way and be demotivating when people feel like they never get straight answers.

The coach will need to help manage the way people feel about themselves and base their coaching on the student's needs, energy and context. There isn't a one-size-fits-all approach.

Our final movie clip was from Million Dollar Baby, where Hilary Swank's character is trying to convince Clint Eastwood's character to coach her. There was certainly reluctance in this coaching partnership, but biases had to be overcome and the student had to reflect and show she did indeed have the passion and desire to not give up, even though it would be hard. Sometimes we too might not feel comfortable at first about coaching someone or being coached.

Two final points were made around managing expectations and reflecting on progress. Manage people's expectations of learning something quickly by showing that confusion is just a learning state, not a weakness or lack of ability. It's also powerful if you can show someone how much they have grown already, to help give them motivation to continue on that journey.

The day finished with a practical exercise simulating a coaching session between a coach, a tester and a developer, trying to come up with acceptance criteria for a payroll system. This was great for putting into practice some of the things we had been learning throughout the day, and for seeing how, even in controlled environments, people have quite different needs, opinions and expectations, so coaching will need to adapt all the time to suit.

Summary

I had a great time at Australian Testing Days 2016 and am looking forward to going again next year! I'd also love to hear what other people got out of the conference or if you have a different viewpoint on anything discussed in either of my posts about it.

Australian Testing Days 2016 Reflection – Day 1

On May 20-21 I went to the inaugural Australian Testing Days Conference in Melbourne. The first day involved a series of talks, mostly on sharing experiences people had in testing, and the second day was an all-day workshop on test leadership. This post outlines the key messages from the sessions I attended and the key things I learnt from each one.

Day 1

Part 1 – What you meant to say (keynote)

First up, Michael Bolton discussed how the language we, as testers, use around testing can be quite unhelpful and cause confusion for those involved. For example, automated testing does not exist. You can certainly automate checking, which is mainly regression tests of existing behaviour. But you cannot automate testing, which is everything someone does to understand more about a feature, giving them knowledge to decide on how risky it is to release it. The way we communicate what we do will impact what others understand it as and then expect of us. Similarly, it is important to understand the language others use when asking us to do something.

Another helpful lesson was that customer desires are more important than customer expectations. If they are happy with your product, it doesn’t matter if it met expectations. If I don’t expect the Apple Watch will be of much use to me, but then I try it and discover that I love it, my expectations weren’t met, but my desires were, and it’s a good result. Similarly, users might expect something totally different to what you produce, but if they discover that what you made is actually better than their expectations, it is likewise a good result.

Lessons learnt:

  • Be clear in communication of testing activities to avoid ambiguity and misalignment.
  • Seek the underlying mentality behind people’s testing questions

Part 2 – Transforming an offshore QA team (elective)

Next up, Michele Cross shared the challenges she is facing in transforming an offshore, traditional and highly structured testing team into a more agile, context-driven testing team. The primary way to achieve any big change like this is to create an environment of trust, which comes in two forms. Cognitive trust is based on ability: you trust someone because of their skills and attributes. For example, you trust a doctor you have just met to diagnose you because they have been studying and practising medicine for 20 years. Affective trust is based on relationships: you trust someone because of how well you know them and how you have interacted with them in the past. For example, you trust a friend's movie recommendation because of shared interests and experiences, not their skills as a movie reviewer.

To help establish this trust and initiate change, three C’s were discussed.

  • Culture – Understanding people and their differences. The context that has brought people to where they are now will greatly shape how they interact with people. Do they desire structure or independence? Are they open to conflict or desire harmony? Knowing this can help inform decisions and approaches.
  • Communication – Relating to other people can be just as hard as it is important. Large, distributed teams bring with them challenges of language, timezones, video conferencing etc… It is crucial to find ways to address these concerns so that everyone is kept in the loop, aligned on direction and is able to build relationships with each other.
  • Coaching – Teaching new skills through example and instruction. Create an environment where it is safe to fail so people feel comfortable to grow. Use practical scenarios to teach skills and get involved yourself.

Lessons learnt:

  • Consider the cultural context of people you are interacting with, as it will shape how to be most effective in those interactions
  • Learning through doing, alongside someone else, is a great way to learn
  • Trust is built by a combination of personal relationships and technical abilities

Part 3 – It takes a village to raise a tester (elective)

Catherine Karena works at WorkVentures, which is all about helping underprivileged people develop life skills and technological skills to help them enter the tech workforce. She talked about how to figure out what skills to teach by looking at where the most jobs were in the market and the common skills required. This includes both technical and relational skills, as trainees will interact with structures and other staff in companies.

On a more general note, a number of characteristics of a great tester were highlighted, so these skills can be a focus of teaching as well. A great tester is: curious, a learner, an advocate, a good communicator, tech savvy, a critical thinker, accountable and a high achiever. When it comes to the learning side, a few more tips were shared around teaching through doing as much as possible, making it safe to fail, using industry experts and building up the learning over time.

Some interesting statistics were raised showing that those trained by WorkVentures over six months were rated by employers as equal to or greater in performance and value than comparable university graduates.

Lessons learnt:

  • Relational skills can be just as important as, if not more important than, technical skills when hiring new talent.
  • Learning in small steps with practical examples greatly improves the outcome.

Part 4 – Context Driven Testing: Uncut (elective)

Brian Osman talked about his experience growing in knowledge and ability as a tester and how greatly that experience was shaped by testing communities. He explained how a community of like-minded people can help drive learning as they challenge each other and bring different viewpoints across.

A side note he introduced was a term called 'Possum Testing', which he described as "testing that you don't value, motivated by a fear of some kind", for example, avoiding a form of testing because you don't understand it or how to use it. This is an idea that many people would recognise but could perhaps find hard to articulate and discuss. Giving it a name instantly provides a means to bring it up in conversation and have people already share a good idea of the context and any common ground in thinking.

Lessons learnt:

  • Naming ideas or common problems is a helpful way to direct future conversations and bring along the original context
  • When looking to improve in a certain area/skill, find a community of others looking to do the same thing.
  • Use these communities to present ideas, defend them and challenge others' ideas. Debates are encouraged.

Part 5 – Testing web services and microservices (elective)

Katrina Clokie (who also mentors me in conference speaking) spoke about her experience testing web services and microservices; a previous version of the talk is available online if you are interested. Starting with web services, she pointed out that each service will have different test needs based on who uses it and how they use it. Service virtualisation is a common technique used in service testing to isolate the front-end from inconsistent and potentially unstable back-ends. Microservice testing puts another layer in this model.

Some key guidelines for creating microservices automation were presented, claiming that it should be fit for purpose, remove duplication, be easy to merge changes, have continuous execution and be visible across teams.

An interesting learning technique was presented called Pathways, which can be found on her website; each pathway lists a whole bunch of resources for learning about a new topic. They are a helpful way of directing your learning time with a specific goal in mind.

Lessons learnt:

  • Make use of Katrina’s pathways for learning about a new area (for myself or as recommendations to others)
  • Get involved in code creation as early as possible to help influence a culture of testability
  • Write any automation with re-usability and visibility in mind

Part 6 – Test Management Revisited (keynote)

Anne-Marie Charrett finished up the day sharing some reflections and approaches she implemented during her time as Test Lead at Tyro Payments. She started by asking the question: do we still need test management? This has been asked a few times in the community already. The response was that we do still need a testing voice to go with all the new roles and technology coming through, like microservices and DevOps. This doesn't mean we need Test Managers who focus on providing stability, but rather Test Leaders who can direct change. She talked about using the Satir Change Model to describe the process of change and its effect on performance.

She came to Tyro with the goal of building the best test practice in Australia, and was not interested in blindly copying others. There are certainly benefits in learning from the approaches others take, but they should be assessed against your company's environment. She discussed a number of testing-related strategies that you might have to deal with: continuous delivery, testing in production, microservices, risk-based automation, business engagement, embedding testing, performance testing, operational testing, test environments, training and growth.

The next question was how to motivate people to learn. Hand-holding certainly isn't ideal, but you also probably can't expect people to spontaneously learn all the skills you'd like them to have. This needs coaching! And the coaching should be focused around a task that you can then offer feedback on afterwards. Then challenge them to try it again on their own.

An important question to ask when identifying what skills to teach is what makes a good tester at your company, because your needs will be different from those of other places. She then finished with a few guidelines around coaching based on giving people responsibilities, improving the environment they work in and continuing to adapt as different needs and challenges arise.

Lessons learnt:

  • Any practice/process being used by others should be analysed and adapted to fit your context, not blindly copied.
  • Be a voice for testing and lead others to make changes in areas they need to improve on
  • Think about what makes a good tester at my company and how I measure up
  • Help prepare the organization/team for change and help them cope as they struggle through it

That's a wrap for Day 1. Find my review of Day 2 here, where I took part in a workshop on Coaching Testers with Anne-Marie Charrett.

Where there’s Smoke there’s Fire!

The first sign of smoke is always an early indicator that fire is not far away. It's the first warning sign. So too in testing, having Smoke Tests that quickly step through the main pathways of a feature will let you know if there's a significant problem in your codebase.

What makes a good Smoke Test?

There are three factors that make a good smoke test: it must be fast, realistic and informative.

Speed

If there is a fire coming, you want to know about it as fast as possible so you have as much time as possible to respond. Your tests must be fast enough that running them with every build/iteration will not cause frustration or significantly hold up progress.

If there is a problem, these tests will tell you, quickly, that you need to stop whatever else you might’ve been doing and respond to the issue.

Realism

If smoke is coming from a contained, outdoor BBQ in your neighbour's backyard, it doesn't matter; you don't need to do anything about it. But if a tree in your neighbour's backyard was on fire, that would be much more concerning. Smoke tests must only look for realistic and common usages of your feature, where you care if something breaks. If the background colour selector on your form creation tool is broken, that's probably not as crucial to worry about as the form itself not working.

So, prioritise your tests to only cater for the key user workflows through your feature, and leave everything else for other tests to cover. This will also go hand in hand with designing tests for speed.

Informative

If you see smoke, the first thing you will want to know is how far away it is and how big it is. This will dictate how devastating it will be and what sort of action you need to take. So too, the tests you write must be informative about where the problem is, how you can recreate it and how important it is to fix.

Smoke tests are much more useful if they can not only tell you that something is wrong, but also where the problem is, how important a problem it is and how to fix it.

An Example

The company I work for (Campaign Monitor) is an email marketing company, so a key scenario or user story we would want to target in a smoke test is:
“As a user, I want to create a campaign, send it to my subscriber list, check the email got through and that any recipient interactions are reported”

So, the steps required for this smoke test are:

  1. Create a campaign, set the email content and the recipients
  2. Send the campaign to some email recipients
  3. Open one of the received emails and click on a link
  4. Check the report shows the email was opened and clicked

This test covers an end-to-end user story that is a vital part of our software, hence if any part of it fails, we need to know immediately and fix it. It is fast, running in under a minute, so it doesn't interrupt workflow. It does not add any extra steps or checks that aren't helpful to the target user story. And it is written with logging and helpful error messages so that any failures point as close as possible to the problem area.
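To give a feel for the shape of such a test, here is a minimal sketch in Python. Everything in it is illustrative: FakeCampaignClient is a stand-in I've invented so the example stays self-contained and runnable, not Campaign Monitor's actual API. In a real smoke test those calls would go through the product itself, and the assertion messages are what make a failure point at the broken step.

```python
# A minimal sketch of the smoke test above, written in Python purely for
# illustration. FakeCampaignClient is a placeholder so the sketch stays
# self-contained and runnable; it is NOT Campaign Monitor's actual API.
import unittest


class FakeCampaignClient:
    """Stand-in for a real product client."""

    def __init__(self):
        self.reports = {}

    def create_campaign(self, subject, html_body, recipients):
        campaign_id = "campaign-1"
        self.reports[campaign_id] = {"opens": 0, "clicks": 0}
        return campaign_id

    def send_campaign(self, campaign_id):
        pass  # a real client would trigger the send here

    def open_and_click(self, campaign_id):
        # Stands in for a recipient opening the email and clicking a link.
        self.reports[campaign_id]["opens"] += 1
        self.reports[campaign_id]["clicks"] += 1

    def get_report(self, campaign_id):
        return self.reports[campaign_id]


class CampaignSmokeTest(unittest.TestCase):
    def test_create_send_open_click_and_report(self):
        client = FakeCampaignClient()

        # 1. Create a campaign, set the email content and the recipients
        campaign_id = client.create_campaign(
            subject="Smoke test",
            html_body="<a href='https://example.test/offer'>Offer</a>",
            recipients=["smoke@example.test"],
        )

        # 2. Send the campaign
        client.send_campaign(campaign_id)

        # 3. Open one of the received emails and click on a link
        client.open_and_click(campaign_id)

        # 4. Check the report shows the email was opened and clicked
        report = client.get_report(campaign_id)
        self.assertEqual(report["opens"], 1, "the open was not reported")
        self.assertEqual(report["clicks"], 1, "the click was not reported")


if __name__ == "__main__":
    unittest.main()
```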

Putting it together

Once you’ve created your smoke tests, put them into use! Run them every time you change something in your feature and they will quickly tell you if anything major is wrong. It’s so much better to find a major issue as early as possible and reduce time lost searching for the problem later on or building more and more problems on top.

Smoke tests shouldn't be your only test approach as they only cover the 'happy path' scenarios, and there are plenty of other places where your feature can have problems. But they are certainly a good starting point!

I’d love to hear how you use (or why you don’t use) Smoke Tests in your company!

Getting started testing Microservices

Overview

Microservices involve breaking down functional areas of code into separate services that run independently of each other. However, there is still a dependency in the type and format of the data that is getting passed around that we need to take into consideration. If that data changes, and other services were depending on the previous format, then they will break. So, you need to test changes between services!

To do this, you can either have brittle end-to-end integration tests that regularly need updating and are semi-removed from the process, or you can be smarter and just test that the individual services continue to provide and accept data as expected, to highlight when changes are needed. This leads to much quicker identification of problems, is adaptive, won't be as brittle as integration tests and should be a lot faster to run as well.

The Solution

What I’m proposing is to integrate contract-based testing. (Note, we are only in the early stages of trying this out at my work)
Here’s how it works:

Service A -- Data X --> Service B

Service A is providing Service B with some sort of data payload, X, in JSON format. We call Service A the provider and Service B the consumer. We start with the consumer (Service B) and determine what expectations it has for the data package X; we call this the contract. We would then have standard unit tests that run with every build of that service, stubbing out the data coming in from a pretend 'Service A'. This means that as long as Service B gets its data X in the format it expects, it will do what it should with it.
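As a rough illustration, here is what that consumer-side test might look like in Python. The service names, the shape of data X and the summarise_orders function are all assumptions made up for this sketch; the structure is the point: the expected payload acts as the contract, and the test stubs out Service A entirely.

```python
# A minimal, illustrative sketch of the consumer-side test described above.
# "Service A", the shape of data X (a list of orders) and summarise_orders
# are all made up for this example; the idea is that Service B's unit test
# pins down the contract it expects and runs against a stub, not the live
# Service A.
import json
import unittest
from unittest import mock


def summarise_orders(fetch_payload):
    """Service B's logic: consumes data X from Service A and sums the totals."""
    payload = json.loads(fetch_payload())
    return sum(order["total"] for order in payload["orders"])


# The contract: the shape Service B expects data X to arrive in.
STUBBED_SERVICE_A_RESPONSE = json.dumps({
    "orders": [
        {"id": 1, "total": 10.0},
        {"id": 2, "total": 5.5},
    ]
})


class ServiceBContractTest(unittest.TestCase):
    def test_consumes_data_x_in_expected_format(self):
        # Stub out the real call to Service A with the agreed payload.
        stub = mock.Mock(return_value=STUBBED_SERVICE_A_RESPONSE)
        self.assertEqual(summarise_orders(stub), 15.5)


if __name__ == "__main__":
    unittest.main()
```

Because this runs with every build and never touches the real Service A, it stays fast and stable while still documenting exactly what Service B relies on.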

The next part is to make sure that Service A knows that Service B is depending on it to provide data X in a given format or with given data, so that if a change is needed, Service B (or any other service dependent on X) can be updated in line, or a non-breaking change can be made instead.

This is consumer-driven contract testing. This is nice: it means that we can guarantee that Service A is providing the right kind of data that Service B is expecting, without having to test their actual connections. Spread this out to a larger scale, with five services dependent on Service A's data and Service A giving out five types of data to different subsets of services, and you can certainly see how this makes things simpler without compromising effectiveness.
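The other half is a check on the provider's side. A hand-rolled version, again illustrative only and not how tools like Pact actually store or verify contracts, might look like this:

```python
# A complementary sketch of the provider side. The consumer's expectations
# are captured here as a simple hand-rolled contract (field names and types);
# real tools generate and verify much richer contract files, but the idea is
# the same: Service A's build runs the consumer's expectations against
# Service A's own output.
import json
import unittest

# Contract published by Service B: the fields (and types) it relies on in data X.
CONSUMER_CONTRACT = {"orders": [{"id": int, "total": float}]}


def service_a_build_payload():
    """Service A's logic that produces data X (made up for this example)."""
    return json.dumps({"orders": [{"id": 7, "total": 42.0, "currency": "AUD"}]})


class ServiceAHonoursContractTest(unittest.TestCase):
    def test_payload_satisfies_service_b_contract(self):
        payload = json.loads(service_a_build_payload())
        expected_fields = CONSUMER_CONTRACT["orders"][0]
        for order in payload["orders"]:
            for field, field_type in expected_fields.items():
                self.assertIn(field, order)
                self.assertIsInstance(order[field], field_type)
        # The extra "currency" field doesn't fail the test: additions the
        # consumer doesn't rely on are non-breaking.


if __name__ == "__main__":
    unittest.main()
```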

A variation of this is to have Service B continue to stub out actually getting data from Service A for the CI builds. But instead of testing on Service A that it still meets the expected data format of Service B, we can put that test on Service B as well, so it also checks the stub being used to simulate Service A against what is actually coming in from Service A on a daily basis. When it finds a change, the stub is updated and/or a change request is made to Service A.
Both approaches have advantages and disadvantages.

In Practice

Writing these sorts of tests can be done manually, but there are also tools which make it easier. Two such products are Pacto and Pact, both written in Ruby: Pacto is by Thoughtworks and Pact is by RealEstate.com.au. Out of these two, I think Pact is the better option as it appears to be more regularly updated and to have better documentation. PactNet is a .NET version of Pact written by SEEK Jobs; .NET is the language used at my work, so that is the solution we're looking into.

These tools provide a few different options along the lines of the concepts described above. One use case is that you provide the tool with an HTTP endpoint; it hits the endpoint and makes a contract out of the response (saying what a response should look like). Then in subsequent tests the same endpoint is hit and the result is compared with the contract saved previously, so it can tell if there have been any breaking changes.
I'm not sure how well these tools handle specifying that only part of the response really matters, so that the rest can change without breaking anything. That would be a more useful implementation.
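In plain Python, the core of that record-then-compare loop might look something like the sketch below. The endpoint and file name are placeholders, it deliberately ignores the partial-matching question above, and it is only the concept, not how Pact or Pacto are actually driven.

```python
# A rough sketch of the record-then-compare flow just described, assuming a
# plain JSON endpoint (the URL below is a placeholder). Real contract-testing
# tools wrap this in mock servers and richer comparison logic.
import json
import pathlib
import urllib.request

CONTRACT_FILE = pathlib.Path("service_a_contract.json")
ENDPOINT = "http://localhost:8080/orders"  # placeholder endpoint


def fetch_response():
    with urllib.request.urlopen(ENDPOINT) as resp:
        return json.loads(resp.read())


def check_against_contract():
    response = fetch_response()
    if not CONTRACT_FILE.exists():
        # First run: record the current response as the contract.
        CONTRACT_FILE.write_text(json.dumps(response, indent=2, sort_keys=True))
        return True
    # Subsequent runs: flag any difference from the recorded contract.
    recorded = json.loads(CONTRACT_FILE.read_text())
    return recorded == response


if __name__ == "__main__":
    print("contract satisfied" if check_against_contract() else "breaking change detected")
```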

Further reading

Note that most of the writing available online about these tools refers to the Ruby implementation, but it is transferable to the .NET version.

Influential people

People that are big contributors to this space worth following or listening to:

  • Beth Skurrie – Major contributor and speaker on Pact from REA
  • Martin Fowler – Writes a lot on microservices, how to build them and test them, on a theory level, not about particular tools.
  • Neil Campbell – Works on the PactNet library

Got any experience testing microservices and lessons to share? Other resources worth including? Please comment below and I’ll include them

The Art of Problem Solving

Solving problems is something I love doing. My job presents me with problems to solve every day (e.g. how to test 'invisible' features or how to simulate a customer using our product), and outside of work I'm also faced with problems to solve on a regular basis (e.g. why the hot water tap doesn't heat up or why the TV reception is bad for one channel only). Whatever the problem, there are ways of approaching it that apply in most cases. Here's the approach I use.

Step 1 – Gathering Intel

First up, we need to understand the problem we are trying to solve. So start by gathering all the information you can about the problem. Here are some starter questions worth asking:

  • What steps are needed to see the problem occur? (So you can determine if your fix worked)
  • What is the expected outcome and what is the actual outcome? (So you can tell when it’s been fixed)
  • Is it something that happens every time or just sometimes? And if so, how often? (This might tell you something about why it’s happening)
  • Does anything you do seem to influence whether it happens or not or how severe it is? (Another potential cause to the problem)
  • When did the problem start? Is it recent or has it always happened? Can you trace its start back to a particular event? (Did that event cause the problem? Maybe it just made it visible)

Once you’ve got these questions (or similar ones) answered, you should have a good idea of what the problem is that you are trying to solve and a way to test out if your fix for the problem has worked.

Step 2 – Creating a hypothesis

Now that you know the problem, step 2 is to create a hypothesis on how you might be able to fix it. This will be based on your findings from step 1 about which factors seem to contribute to the problem, along with an idea of what change needs to be made to fix it.

For example, a simple case might be that last month someone changed the type of paper the photocopier was using, and over the past few weeks there appears to have been an increase in paper jams. So your hypothesis is that by changing back to the previous paper, the occurrence of paper jams should decrease. You've seen the change in behaviour, noticed an event which possibly contributed to the problem and come up with a method to fix it, including a way to tell if it worked.

Perhaps the problem is a bit trickier. You’ve been using your iPhone for months as a calendar and it sends you email notifications when you have an appointment coming up. You recently upgraded to a newer model iPhone, at the same time as changing your email address that you receive notifications on. The iPhone calendar app has been updated and you decide that instead of checking those emails on your iPhone, you are going to start using your new iPad to read them. But the emails have the wrong times for your appointment.

Lots of variations and no obvious cause to the change. But the same rules still apply, find out all you can, and create your best guess hypothesis of what might help fix the problem.

Step 3 – Test out your hypothesis

Finally, you test out your hypothesis: make the change(s) required and see if the problem still happens. This might take a lot of testing, depending on how often and how predictably the problem reproduces. Eventually, you'll figure out whether your hypothesis was correct or not.

Hypothesis didn’t solve the problem? Repeat Steps 2 and 3 again with the new information you’ve just learnt until you find your answer.

Applying it to my job as a tester

How does this fit in with being a tester? Well, there are two main types of problems I come across in my work.

  1. How to test tricky-to-see or tricky-to-verify areas of the project?
  2. How to create automated tests to cover those areas that are also useful, readable and maintainable (see my previous blog post for more on writing good automated tests)

In both these cases, the same rules apply as I've outlined above. It might take a while to gather the right data and come up with a solution, but I've found following this simple and general approach quite helpful in my work. In my role as a tester for the online SaaS company Campaign Monitor (which is awesome, by the way!) I have lots of things to consider in how to test our product and, where appropriate, automate tests to cover our various UI elements, so I constantly find myself with problems to solve, from how to run automated tests in parallel to figuring out how customer feedback should shape the way I test and the things I look for.

I hope this simple approach might help others solve their problems too!

Bug Bash vs Usability Testing Session

The Scenario

Your project is nearing completion and you are getting close to release date. You think you've found the majority of the bugs there are to find in this product and are now ready to hand it over to others in the company to get their eyes over it. The question is, do you organise a 'Bug Bash' or a 'Usability Testing Session'?

I think the answer depends on what you are seeking to get out of it. I'll outline the differences between the two, when you should use each one and some tips for making them useful.

Bug Bash

My Definition: A group of people trying to find as many bugs as they can within a given feature in a short amount of time.

Preparation:

  1. Organise to get a group of people to meet in the same room for a designated amount of time (recommend 1 hour). Make sure there is a computer available to everyone, with any prior setup data ready to go.
  2. Plan what you are going to tell the group about the product, what areas you consider most risky, how to record any bugs they find and any further setup information they will need.
  3. Organise helpers from your team to verify and record bugs with you so that people aren’t all waiting for you.

Execution:

  1. Welcome and thank everyone for their attendance
  2. Explain what the product is, and any particular areas you would like people to focus on.
  3. Explain how to record any bugs people find
  4. Start the timer (rec. 1 hour)
  5. Encourage interaction between everyone to help the test ideas flow: food and drinks on the table, light background music, etc.
  6. With your helpers, record any bugs and answer any questions raised. Resist giving too much information away; if the user can't figure something out, it may be a usability bug.
  7. At the end, thank everyone for their participation and let them know you will send out the results soon (the follow up helps reinforce that you value their input)

Variations:

  • Put people in teams of two working on the same machine: one person 'drives' and the other 'navigates' by offering suggestions of test cases to try, guided by what they see the 'driver' doing.
  • Have prizes for the person/team that finds the most bugs and the best bug.

Who should I invite? Anyone and everyone! It's good to get a mix of people from different departments (QA, Design, Development, Marketing…), some from your team and some from other teams. I like to get as many testers on board as possible, as realistically they will be the most likely to find bugs, but at the same time, a developer or a designer will approach the product from a different viewpoint and is likely to find different types of bugs, so a mix is important.

Usability Testing Session

My Definition: Observing the user interactions as a group of people attempt to complete a certain set of tasks with your product, whilst they gain familiarity with it.

Preparation:

  1. Organise to get a group of people to meet in the same room for a designated amount of time (recommend 1 hour). Make sure there is a computer available to everyone, with any prior setup data ready to go.
  2. Plan the tasks that the people must complete during the session. The tasks should be top-level only, to allow the user freedom to explore the feature and figure out how to do it (even if it means getting lost along the way), e.g. submit a product review of your last purchased item.
  3. Organise helpers from your product team to record observations and answer questions from the group as they complete the challenge.

Execution:

  1. Welcome and thank everyone for their attendance
  2. Explain the tasks that each person is to complete during the session
  3. Explain the purpose of the session: that it is to observe their interactions, and to make note of any situations where they are confused by the product or unclear on what to do next (i.e. usability bugs).
  4. Start the session; it's best to keep a time limit on the session to keep everyone on track.
  5. Create a relaxed vibe with food and drinks on the table, light background music, etc..
  6. Together with your helpers, observe the group's interactions with the feature, recording any time they became confused or were unclear on what to do. Answer any questions they have along the way, as it's also an opportunity for the group to get familiar with the product (since it's often helpful for the whole company to understand the new product being developed).
  7. Finish up, thank the participants and let them know that you will send out the results soon. (To help reinforce that you value their input)

Variations:

  • Put people in teams; whether this is a realistic scenario will depend on how actual end-users of your product work. If end-users will be working solo, don't use teams in the session either.
  • Have gifts for the participants as thanks, or perhaps prizes for the people who complete the tasks the fastest. (Being fast means there wasn't anything too confusing for them)

Who should I invite? People who satisfy the following criteria:
  • Have not worked closely on the development of the product (they will already know how to use it)
  • Are interested in learning about your new product OR
  • Are a good representation of who your end-users will be (whether that is marketers, developers, non-tech savvy, etc..)

How to choose

Now that we have looked at what each of our options is (of course there are more options out there, but we're just focusing on two here), how do we pick which one to use?

Pick ‘Bug Bash’ if…

  • You are trying to find as many issues with your product as you can to give greater confidence in releasing it
  • You are not trying to teach people how to use the product
  • You don’t have access to people who meet the ‘Usability Testing Session’ group criteria

Pick ‘Usability Testing Session’ if…

  • You are trying to familiarise people in your company with the product you are soon to release
  • You are trying to find usability bugs (perhaps you are worried your product is confusing to use at times)

Summary

This was a brief overview of two common techniques for getting other people in your company to look over the product your team has been working on, to help find issues you have been blinded to by working on it every day. The technique you choose should be based on what you are hoping to get out of the session.

Feedback

I’d love to hear from you if you’ve found either of these techniques helpful for you in the past or if you have any further tips to enhance the usefulness and enjoyment of these sessions. Or perhaps you have a different technique completely that you prefer? Put it all in the comments below :)