TestBash Sydney 2018 Reflection – Part 1

On 19 October 2018 I attended TestBash Sydney, the first time this event had come to Australia. I spoke on “Advocating for Quality by turning developers into Quality Champions”. I’ll share more on that topic in a different post, and instead focus this post series on what I got out of the other talks presented that day.

These are the things that stood out to me, my own reflections and some paraphrasing of content, to help share the lessons with others.

Next Level Teamwork: Pairing and Mobbing

by Maaret Pyhäjärvi – @maaretp | http://visible-quality.blogspot.co.uk/

  • Things are better together!
  • When we pair or mob test on one computer, we all bring our different views and knowledge to the table.
  • With mob programming, only the best of the whole group goes into the code, rather than one person’s best and worst.
    • Plus, we get those “how did you do that!” moments for sharing knowledge.
  • Experts aren’t the ones who know the most, but who learn the fastest.
  • Traditional pairing has one person watching what the other is doing, double-checking everything. This is boring for the observer and breeds an “I’ve got an idea, give me the keyboard!” mentality.
  • Keep rotating the keyboard every 2-4 minutes to keep everyone engaged (a simple timer helps; see the sketch after this list).
  • Strong-style pairing shifts the focus to an “I’ve got an idea, you take the keyboard” mentality, where you explain your idea to the other person and get them to try it out.
    • You are reviewing what is happening with what you want to happen, rather than guessing someone else’s mindset.
  • It can be hard to pair when skill sets are unequal, e.g. a developer and a tester: you can feel you are slowing them down, or being forced into things you don’t want to do. Strong-style pairing helps with this.
  • Some pairing pitfalls
    • Hijacking the session: only doing what one person wants to try, or missing the point of pairing.
    • Giving in to impatience: you may not see the value at first, so persist anyway.
    • Working with ‘them’: pairing with someone you find uncomfortable; consider mobbing rather than pairing here.
  • At Agile Conf 2011, an 11-year-old girl was able to participate in a mob session by stating her intent and having others follow it through, and by listening to others and doing what they said to do.
  • Mobbing basics:
    • Driver (no thinking): instruct them by giving “Intent -> Location -> Details”
    • Designated navigator, to make decisions on behalf of the group.
    • Conduct a retrospective.
    • Typically use 6-8 people, but if everyone is contributing or learning, it’s the right size.
    • Need people similar enough to get along, but diverse enough to bring different ideas.
  • Mobbing lets you contribute when you have a good idea but don’t know how to code/test it yourself.
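
To keep those 2-4 minute rotations honest, a timer that announces the next driver works well. Here is a minimal sketch in TypeScript; the names and the 3-minute interval are my own illustrative choices, not details from the talk:

```typescript
// Hypothetical mob rotation timer. Names and interval are illustrative.
const mobsters = ["Ada", "Grace", "Alan", "Margaret"];
const intervalMinutes = 3;

let turn = 0;
function rotate(): void {
  const driver = mobsters[turn % mobsters.length];
  const navigator = mobsters[(turn + 1) % mobsters.length];
  // In strong-style, the person with the idea navigates; this simplified
  // version just cycles both roles through the group.
  console.log(`Driver: ${driver} | Designated navigator: ${navigator}`);
  turn += 1;
}

rotate(); // announce the first pair immediately
setInterval(rotate, intervalMinutes * 60 * 1000);
```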

My thoughts

We already use pairing a lot at work, so I didn’t really learn anything new to introduce here. Mobbing doesn’t really appeal to me: for it to be beneficial, you have to justify the time spent by a whole group working on one task, which to me only works when the work produced by individuals is far inferior. Mobbing should produce a better outcome, but it will be quite slow, and we get most of the way there by reviewing people’s code to reach a solid overall solution.

Well presented, but I can’t see mobbing working outside of some particular scenarios.

How Automated E2E Testing Enables a Consistently Great User Experience on an Ever Changing WordPress.com

by Alister Scott – @watirmelon | https://watirmelon.blog/

Note: A full transcript of Alister’s talk, including slides, is available on his website.

I didn’t really understand the start of Alister’s talk about hobbies, or why it was included, until it finished with the phrase: “Problems don’t stop, they get exchanged or upgraded into new problems. Happiness comes from solving problems”. This seems to be what the introduction was getting at. The rest of the talk followed a pattern: present a problem around automation, then a solution to that problem, then the new problem that came out of the solution, then the next solution, and so on.

  • Problem: Customer flows were breaking in production (They were dogfooding, but this didn’t include Signup)
  • Solution: Automated e2e test of signup in production
  • P: Non-deterministic tests due to A/B tests on signup
  • S: Override A/B tests during testing (see the sketch after this list)
    • Including a bot that detects new A/B tests in a PR and prompts the author to update the e2e tests
  • P: Too slow, too late, too hidden (since running in prod)
  • S: Parallel tests, canaries on merge before prod, direct pings
  • P: Still have to revert merges, slow local runs (parallel only in docker)
  • S: Live branch tests with canaries on every PR
  • P: Canaries don’t find all the problems (want to find before prod)
  • S: (Optional) Live branch full suite tests (use for large changes via a github label)
  • P: IE11 & Safari 10 issues
  • S: IE11 & Safari 10 canaries
  • (All these builds report back directly into the github PR) – NICE!
  • P: People still break e2e tests
  • S: You have to let them make their own mistakes
    • Get the PR author that broke the test logic to update the test, don’t just fix it
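
On the A/B override step: the talk didn’t share implementation details I can reproduce, so this is a generic sketch of the idea using selenium-webdriver for Node. The trick is to pin every experiment to a known variation before the flow under test starts, so the test is deterministic. The localStorage key, test name and URL are all assumptions, not WordPress.com’s actual mechanism:

```typescript
import { Builder } from "selenium-webdriver";

async function signupWithPinnedVariations(): Promise<void> {
  const driver = await new Builder().forBrowser("chrome").build();
  try {
    await driver.get("https://example.com/start"); // hypothetical signup URL
    // Force a known variation so the UI under test never changes mid-run.
    // The 'ABTests' key and variation name are assumptions for illustration.
    await driver.executeScript(
      `localStorage.setItem('ABTests', JSON.stringify({ signupFlow: 'control' }));`
    );
    await driver.navigate().refresh(); // reload so the app reads the override
    // ... continue asserting the signup happy path here ...
  } finally {
    await driver.quit();
  }
}

signupWithPinnedVariations();
```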

Takeaways:

  • Backwards law: acceptance of non-ideal circumstances is a positive experience
  • Solving problems creates new problems
  • Happiness comes from solving problems
  • Think of what you ‘can’ do over what you ‘should’ do
  • Tools can’t solve problems you don’t have
  • Continuous delivery only possible with no manual regression testing
  • Think in AND not OR

My thoughts

I thought this was a clever way to present a talk, and the challenges are familiar, so it was interesting to see how Alister’s team had been addressing them. Being a talk about the ‘what’ and not the ‘how’ meant there were fewer direct actions to take out of it. I already know the importance of automated tests: running them as early and often as possible, running in parallel, and similar tips that came up in the talk. So for me, I’m interested in exploring an optional build into a test environment or docker build where e2e tests can be run by setting an optional label on a github PR (sketched below).
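
As a starting point for that exploration, the gate itself can be tiny. A sketch of a CI step in TypeScript, assuming the CI system exposes the PR’s labels in a PR_LABELS environment variable and that an `e2e:full` npm script exists; both are my assumptions, not details from the talk:

```typescript
// ci-e2e-gate.ts: run the full e2e suite only when the PR is labelled.
// PR_LABELS and the label name "run-full-e2e" are hypothetical.
import { execSync } from "child_process";

const labels = (process.env.PR_LABELS ?? "")
  .split(",")
  .map((label) => label.trim())
  .filter((label) => label.length > 0);

if (labels.includes("run-full-e2e")) {
  console.log("Label present: running full e2e suite against the live branch");
  execSync("npm run e2e:full", { stdio: "inherit" }); // fails the build if tests fail
} else {
  console.log("Label absent: skipping full e2e suite");
}
```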

A Tester’s guide to changing hearts and minds

by Michelle Playfair – @micheleplayfair

  • Testing is changing, and testers are generally on board, but not everyone else is.
    • Some confusion around what testers actually do
    • Most devs probably don’t read testing blogs, attend testing conferences or follow thought leaders etc. (and vice versa)
  • 4 P’s of marketing for testers to consider about marketing themselves:
    • Product, place, price and promotion
  • Product: How can you add value
    • What do you do here? (you need to have a good answer)
    • Now you know what value you bring, how do you sell it?
  • Promotion: Build relationships, grow network, reveal value
    • You need competence and confidence to be good at your job
    • Trust is formed either cognitively or affectively, depending on culture/background
    • You need to speak up and be willing to fail. People can’t follow you if they don’t know where you want to take them
    • Learned helpfulness
      • Think about how you talk to yourself
      • Internal vs external. “I can’t do that” -> Yes it’s hard, but it’s not that you are bad at it, you just need to learn.
      • Permanent vs temporary. “I could never do that” -> Maybe later you can
      • Global vs situational. Some of this vs all of this
  • Step 1: Please ask questions, put your hand up! Tell people what you know
    • Develop your own way of sharing, in a way that is suitable for you.
  • If you can’t change your environment, change your environment (ie find somewhere else).

My thoughts

Michelle presented very well, and the topic of ‘selling testing’ is particularly relevant given changes in how testing is viewed within modern organisations. This was a helpful overview to start thinking through how to tackle the problem. The hard work is still on the tester to figure things out, but the 4 P’s marketing approach will help structure this communication.

My talk

My talk “Advancing Quality by Turning Developers into Quality Champions” was next, but I’ll talk more about that in a separate blog post later.

A Spectrum of Difference – Creating EPIC software testers

by Paul Seaman & Lee Hawkins – @beaglesays | https://beaglesays.blog/ & @therockertester | https://therockertester.wordpress.com/

Paul and Lee talked about the program they have set up to teach adults on the autism spectrum about software testing through the EPIC Testability Academy. An interesting note early on regarding language: their clients indicated they preferred identity-first language (“autistic person”) over phrases like “person with autism”.

They had identified that it’s crucial to focus their content on testing skills directly applicable in a workplace, to keep iterating that content, and to cater for differences within the group by rearranging content, changing approaches, etc. They found it really helpful to reflect on and adapt content over time, while making sure to give the content enough of a chance to work first. Homework for the students also proved quite useful, despite initial hesitation to include it in the course.

My thoughts

It’s encouraging to hear about Paul and Lee’s work: a really important way to improve the diversity of our workforce, and to give valuable skills and opportunities to people who can often be overlooked in society. It was also helpful to think a little about structuring a training course in general, and what they learnt from doing so.

I did find the paired nature of the presentation interfered with the flow a little, but it wasn’t a big problem.

Exploratory Testing: LIVE

by Adam Howard – @adammhoward | https://solavirtusinvicta.wordpress.com/

This was an interesting idea for a talk: a developer at Adam’s work had hidden some bugs in a feature of the company website and put it behind a feature flag for Adam to test against, without making it public to anyone else. Adam then did some exploratory testing live, trying to find the issues the developer had hidden for him, demonstrating some exploratory testing techniques for us at the same time. Adam also had access to the database to do further investigation if needed.

The purpose of the exercise was to show that exploratory testing is a learned skill, and to practise explaining yourself as you test. By marketing yourself and your skills like this, others can and will want to learn too.

  • Draw a mindmap as we go to document learnings
  • Consider using the “unboxing” heuristic, systematically working through the feature to build understanding.
    • E.g. Testing a happy path, learn something, test that out, make a hypothesis and try again. Thinking about how to validate or invalidate our observation.
    • Sometimes you might dive into the rabbit hole a little when something stands out.
    • Make a note of any questions to follow up or if something doesn’t feel right.
  • It can be helpful to look at help docs to see what claims we are making about the feature and what it should do. (Depends on what comes first, the help doc or the feature).

My thoughts

An interesting way to present a talk on how to do exploratory testing: by seeing it in action. There were a few times I could see the crowd wanted to participate but didn’t get much chance; it felt like kind of an interactive session, but kind of not, so I wasn’t quite clear on the intention there. Seeing something in action is a great way to learn, though I would’ve liked to see him try this on a website he didn’t already know, or at least one more people would be familiar with, so we could all be on a more level playing field. It made for interesting viewing regardless.

Part 2 of my summary: TestBash Sydney 2018 Reflection – Part 2

Australian Testing Days 2016 Reflection – Day 1

On May 20-21 I attended the inaugural Australian Testing Days conference in Melbourne. The first day involved a series of talks, mostly sharing people’s experiences in testing, and the second day was an all-day workshop on test leadership. This post outlines the key messages from the sessions I attended and the key things I learnt from each one.

Day 1

Part 1 – What you meant to say (keynote)

First up, Michael Bolton discussed how the language we, as testers, use around testing can be quite unhelpful and cause confusion for those involved. For example, automated testing does not exist. You can certainly automate checking, which is mainly regression checks of existing behaviour; you cannot automate testing, which is everything someone does to understand more about a feature, giving them the knowledge to decide how risky it is to release. The way we communicate what we do shapes what others understand it as, and then expect of us. Similarly, it is important to understand the language others use when asking us to do something.
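
To make the distinction concrete, here is what an automatable “check” looks like: a machine-decidable assertion about behaviour we have already decided matters. A small TypeScript illustration of my own, not Bolton’s:

```typescript
import assert from "assert";

// A trivial function whose expected behaviour we decided on in advance.
function applyDiscount(price: number, percent: number): number {
  return price - (price * percent) / 100;
}

// The machine can confirm facts we chose up front (checking)...
assert.strictEqual(applyDiscount(100, 10), 90);
assert.strictEqual(applyDiscount(0, 50), 0);
console.log("checks passed");
// ...but deciding which inputs matter, noticing a confusing result, and
// judging release risk all remain human activities (testing).
```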

Another helpful lesson was that customer desires are more important than customer expectations. If they are happy with your product, it doesn’t matter if it met expectations. If I don’t expect the Apple Watch will be of much use to me, but then I try it and discover that I love it, my expectations weren’t met, but my desires were, and it’s a good result. Similarly, users might expect something totally different to what you produce, but if they discover that what you made is actually better than their expectations, it is likewise a good result.

Lessons learnt:

  • Be clear in communication of testing activities to avoid ambiguity and misalignment.
  • Seek the underlying mentality behind people’s testing questions

Part 2 – Transforming an offshore QA team (elective)

Next up, Michele Cross shared the challenges she is facing in transforming an offshore, traditional and highly structured testing team into a more agile, context-driven testing team. The primary way to achieve any big change like this is to create an environment of trust, which comes in two forms. Cognitive trust is based on ability: you trust someone because of their skills and attributes, for example trusting a doctor you have just met to diagnose you because they have been studying and practising medicine for 20 years. Affective trust is based on relationships: you trust someone because of how well you know them and how you have interacted with them in the past, for example trusting a friend’s movie recommendation because of shared interests and experiences, not their skills as a movie reviewer.

To help establish this trust and initiate change, three C’s were discussed.

  • Culture – Understanding people and their differences. The context that has brought people to where they are now will greatly shape how they interact with people. Do they desire structure or independence? Are they open to conflict or desire harmony? Knowing this can help inform decisions and approaches.
  • Communication – Relating to other people can be just as hard as it is important. Large, distributed teams bring with them challenges of language, timezones, video conferencing etc… It is crucial to find ways to address these concerns so that everyone is kept in the loop, aligned on direction and is able to build relationships with each other.
  • Coaching – Teaching new skills through example and instruction. Create an environment where it is safe to fail so people feel comfortable to grow. Use practical scenarios to teach skills and get involved yourself.

Lessons learnt:

  • Consider the cultural context of the people you are interacting with, as it will shape how to be most effective in those interactions
  • Learning through doing, and doing it alongside someone, is a great way of learning
  • Trust is built by a combination of personal relationships and technical abilities

Part 3 – It takes a village to raise a tester (elective)

Catherine Karena works at WorkVentures, which is all about helping underprivileged people develop life skills and technological skills to help them enter the tech workforce. She talked about figuring out what skills to teach by looking at where the most jobs were in the market and the common skills they required. This includes both technical and relational skills, as trainees will interact with structures and other staff in companies.

On a more general note, she highlighted a number of characteristics of a great tester, so those skills can be taught too. A great tester is: curious, a learner, an advocate, a good communicator, tech savvy, a critical thinker, accountable and a high achiever. On the learning side, a few more tips were shared: teach through doing as much as possible, make it safe to fail, use industry experts and build up the learning over time.

Some interesting statistics showed that those trained by WorkVentures over six months were rated by employers as equal to or greater in performance and value than comparable uni graduates.

Lessons learnt:

  • Relational skills can be just as, if not more, important than technical skills in hiring new talent.
  • Learning in small steps with practical examples greatly improves the outcome.

Part 4 – Context Driven Testing: Uncut (elective)

Brian Osman talked about his experience growing in knowledge and ability as a tester, and how greatly that experience was shaped by testing communities. He explained how a community of like-minded people can help drive learning as they challenge each other and bring different viewpoints across.

A side note was a term he introduced called ‘Possum Testing’, which he described as “testing that you don’t value, motivated by a fear of some kind”, for example avoiding a form of testing because you don’t understand it or how to use it. This is an idea many people would recognise but could find hard to articulate and discuss. Giving it a name instantly provides a means to bring it up in conversation, with people already sharing a good idea of the context and common ground in thinking.

Lessons learnt:

  • Naming ideas or common problems is a helpful way to direct future conversations and bring along the original context
  • When looking to improve in a certain area/skill, find a community of others looking to do the same thing.
  • Use these communities to present ideas, defend them and challenge others’ ideas. Debates are encouraged

Part 5 – Testing web services and microservices (elective)

Katrina Clokie (who also mentors me in conference speaking) spoke about her experience testing web services and microservices; a previous version of the talk is available online if you are interested. Starting with web services, she pointed out that each service will have different test needs based on who uses it and how they use it. Service virtualisation is a common technique in service testing, used to isolate the front-end from inconsistent and potentially unstable back-ends (a minimal sketch of the idea is below). Microservice testing adds another layer to this model.
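
A minimal sketch of service virtualisation in TypeScript using Node’s http module: a stand-in service returns a canned payload, so front-end tests never depend on the real, flaky back-end. The port, payload and service name are illustrative only:

```typescript
import { createServer } from "http";

// Canned response standing in for the real back-end service.
const cannedOrder = JSON.stringify({ id: 42, status: "PAID" });

createServer((req, res) => {
  // A fuller virtual service would match on req.url and req.method;
  // here every request gets the same stable payload.
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(cannedOrder);
}).listen(8080, () => {
  console.log("Virtual order service listening on http://localhost:8080");
});
```

Point the front-end’s service URL at a stub like this during tests, and the back-end’s instability disappears from the equation.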

Some key guidelines for creating microservices automation were presented: it should be fit for purpose, remove duplication, make changes easy to merge, execute continuously and be visible across teams.

An interesting learning technique called Pathways was presented, which can be found on her website. Each pathway lists a whole bunch of resources for learning about a new topic; they are a helpful way of directing your learning time with a specific goal in mind.

Lessons learnt:

  • Make use of Katrina’s pathways for learning about a new area (for myself or as recommendations to others)
  • Get involved in code creation as early as possible to help influence a culture of testability
  • Write any automation with re-usability and visibility in mind

Part 6 – Test Management Revisited (keynote)

Anne-Marie Charrett finished up the day sharing some reflections and approaches she implemented during her time as Test Lead at Tyro Payments. She started by asking: do we still need test management? This question has been asked a few times in the community already. Her response was that we do need a testing voice to go with all the new roles and technology coming through, like microservices and DevOps. This doesn’t mean we need Test Managers who provide stability, but rather Test Leaders who can direct change. She talked about using the Satir Change Model to describe the process of change and its effect on performance.

She brought to Tyro a mentality of building the best test practice in Australia, and was not interested in blindly copying others. There are certainly benefits to learning from the approaches others take, but they should be assessed against your company’s environment. She discussed a number of testing-related strategies you might have to deal with: continuous delivery, testing in production, microservices, risk-based automation, business engagement, embedding testing, performance testing, operational testing, test environments, training and growth.

The next question was how to motivate people to learn. Hand-holding certainly isn’t ideal, but you also probably can’t expect people to spontaneously learn all the skills you’d like them to have. This needs coaching! Coaching should be focused around a task you can offer feedback on afterwards; then challenge them to try it again on their own.

An important question to ask when identifying what skills to teach is what makes a good tester at your company, because your needs will be different to other places’. She finished with a few guidelines around coaching, based on giving people responsibilities, improving the environment they work in, and continuing to adapt as different needs and challenges arise.

Lessons learnt:

  • Any practice/process being used by others should be analysed and adapted to fit your context, not blindly copied.
  • Be a voice for testing and lead others to make changes in areas they need to improve on
  • Think about what makes a good tester at my company and how I measure up
  • Help prepare the organization/team for change and help them cope as they struggle through it

That’s a wrap for Day 1. Find my review of Day 2 here, where I took part in a workshop on Coaching Testers with Anne-Marie Charrett.

Bug Bash vs Usability Testing Session

The Scenario

Your project is nearing completion and you are getting close to the release date. You think you’ve found the majority of the bugs there are to find in this product, and are now ready to hand it over to others in the company to get their eyes over it. The question is: do you organise a ‘Bug Bash’ or a ‘Usability Testing Session’?

I think the answer depends on what you are seeking to get out of it. I’ll outline the differences between the two, when you should use each one, and some tips for making them useful.

Bug Bash

My Definition: A group of people trying to find as many bugs as they can within a given feature in a short amount of time.

Preparation:

  1. Organise to get a group of people to meet in the same room for a designated amount of time (recommend 1 hour). Make sure there is a computer available to everyone, with any prior setup data ready to go.
  2. Plan what you are going to tell the group about the product, what areas you consider most risky, how to record any bugs they find and any further setup information they will need.
  3. Organise helpers from your team to verify and record bugs with you so that people aren’t all waiting for you.

Execution:

  1. Welcome and thank everyone for their attendance
  2. Explain what the product is, and any particular areas you would like people to focus on.
  3. Explain how to record any bugs people find
  4. Start the timer (rec. 1 hour)
  5. Encourage interaction between everyone to help the test ideas flow: food and drinks on the table, light background music, etc.
  6. With your helpers, record any bugs and answer any questions raised. Resist giving too much information away; if the user can’t figure something out, it may be a usability bug.
  7. At the end, thank everyone for their participation and let them know you will send out the results soon (the follow-up helps reinforce that you value their input)

Variations:

  • Put people in teams of two working on the same machine: one person ‘drives’ and the other ‘navigates’ by suggesting test cases to try, guided by what they see the ‘driver’ doing.
  • Have prizes for the person/team that finds the most bugs and the best bug.

Who should I invite? Anyone and everyone! It’s good to get a mix of people from different departments (QA, Design, Development, Marketing…), some from your team and some from other teams. I like to get as many testers on board as possible, as realistically they will be the most likely to find bugs, but a developer or a designer will approach the product from a different viewpoint and is likely to find different types of bugs, so a mix is important.

Usability Testing Session

My Definition: Observing the user interactions as a group of people attempt to complete a certain set of tasks with your product, while they gain familiarity with it.

Preparation:

  1. Organise to get a group of people to meet in the same room for a designated amount of time (recommend 1 hour). Make sure there is a computer available to everyone, with any prior setup data ready to go.
  2. Plan the tasks the participants must complete during the session. The tasks should be top-level only, to allow the user freedom to explore the feature and figure out how to do it (even if it means getting lost along the way), e.g. “Submit a product review of your last purchased item”.
  3. Organise helpers from your product team to record observations and answer questions from the group as they complete the challenge.

Execution:

  1. Welcome and thank everyone for their attendance
  2. Explain the tasks that each person is to complete during the session
  3. Explain the purpose of the session, that it is to observe their interactions, and make note of any situations where they are confused by the product, or unclear of what to do next. (ie usability bugs)
  4. Start the session; it’s best to keep a time limit on the session to keep everyone on track.
  5. Create a relaxed vibe with food and drinks on the table, light background music, etc.
  6. Together with your helpers, observe the group’s interactions with the feature, recording any time they become confused or are unclear on what to do. Answer any questions they have along the way, as it’s also an opportunity for the group to get familiar with the product (it’s often helpful for the whole company to understand the new product being developed).
  7. Finish up, thank the participants and let them know that you will send out the results soon. (To help reinforce that you value their input)

Variations:

  • Put people in teams. Whether this fits depends on whether teams reflect a likely scenario for actual end-users of your product; if end-users will be working solo, don’t use teams in the session either.
  • Have gifts for the participants as thanks, or perhaps prizes for the people who complete the tasks the fastest. (Being fast means there wasn’t anything too confusing for them)

Who should I invite? People who satisfy the following criteria:

  • Have not worked closely on the development of the product (they will already know how to use it)
  • Are interested in learning about your new product OR
  • Are a good representation of who your end-users will be (whether that is marketers, developers, non-tech savvy, etc.)

How to choose

Now that we have looked at what each of our options is (of course there are more options out there, but I’m focusing on two here), how do we pick which one to use?

Pick ‘Bug Bash’ if…

  • You are trying to find as many issues with your product as you can to give greater confidence in releasing it
  • You are not trying to teach people how to use the product
  • You don’t have access to people who meet the ‘Usability Testing Session’ group criteria

Pick ‘Usability Testing Session’ if…

  • You are trying to familiarise people in your company with the product you are soon to release
  • You are trying to find usability bugs (perhaps you are worried your product is confusing to use at times)

Summary

This was a brief overview of two common techniques for getting other people in your company to look over the product your team has been working on, to help find issues you have been blinded to by working on it every day. The technique you choose should be based on what you are hoping to get out of the session.

Feedback

I’d love to hear from you if you’ve found either of these techniques helpful in the past, or if you have any further tips to enhance the usefulness and enjoyment of these sessions. Or perhaps you prefer a different technique entirely? Put it all in the comments below :)