TestBash Sydney 2018 Reflection – Part 2

The second half of my summary and reflections on the TestBash 2018 conference in Sydney. For the first half: TestBash Sydney 2018 Reflection – Part 1

Avoid Sleepwalking to Failure! On Abstractions and Keeping it Real in Software Teams

by Paul Maxwell-Walters – @TestingRants | http://testingrants.blogspot.com.au

The main focus of the talk seemed to be on Hypernormalisation, where you are so much a part of the system that you can’t see beyond it. For example, the message of society is that everything is OK, when really it isn’t, and you know it, but perhaps can’t explain why.

This can apply to testers, where we just accept things the way they are and don’t try and make things better, perhaps because we can’t see/imagine how it could be better.

  • Hyper-reality – A world that you interact with that isn't real (e.g. Disney World). This is how we sometimes relate to software testing.

Abstraction #1 – Quality

The definition of quality is different for everyone. James Bach said in 2009 that Quality is dead – a mini-hypernormalisation in itself. He didn't like the way things were going, and had somewhat accepted that.

Abstraction #2 – Measurement

Rich Rogers noted in his 2017 book "Changing Times: Quality for Humans in a Digital Age" that we assume that once we have tested all of our criteria, we have good quality (but this is wrong!). If a team's definition of quality or its measurements aren't true to reality, then why persist with them?

Therefore, we don’t have to accept the way things are just because we don’t see how it could be any better.

My thoughts

Personally I found this a really hard talk to follow (as is probably evident in my notes above), between the old news videos of Russia and the sheer amount and complexity of content. I felt the point of the talk, that we shouldn't just accept what is thrown at us in quality, could have been made much more simply, with more time spent on how to apply it, or on which areas of the job we should be actively looking to do better.

I do strongly agree with the idea of not accepting bad practices just because that's how it is now, and striving to make things better (i.e. being agile), but I found the delivery missed the mark a little bit.

Test Representatives – An Alternative Approach to Test Practice Management

by Georgia de Pont – @georgia_chunn

Georgia started by giving some context on the patterns and techniques used at Tyro: pairing (XP), TDD and the Spotify model of squads. We then followed the journey the testing team at Tyro has gone through, starting with the move from a disconnected team to embedded testers.
This created some challenges:

  • Loss of alignment and knowledge sharing within the testing group
  • Lack of consistency in test approaches and in how testers were used within each team
  • Lack of a test manager (it was a flat structure)

The question arose of how to address these issues, should they hire a test manager or take a grassroots approach? (Evidently, they took the grassroots approach).

Each product tribe selected a test representative. They would support their tribe's testers, gather information on issues and be a point of contact for that tribe. The representatives came up with a pipeline of improvement initiatives to work through, with ideas coming in a variety of ways. The representative group, the "rep group", would share information back to their test teams and would have regular meetings with the TestOps team.
Some of the initiatives:

  • Clarity of role of embedded tester, e.g. what are the test practices they can offer
  • Improving the recruitment process. Candidates were given a take home test, which wasn’t reflective of the workplace because people usually pair. So they created a pairing exercise instead.
  • Onboarding new test engineers and probation expectations
  • Performance criteria for test engineers for them and their leads to use
  • Upskilling test engineers (an ongoing effort): making sure training is available, organising internal and external speakers, conferences, etc.
  • Co-ordinating engineering-wide test efforts
  • Developing a quality engineering strategy, involving various stakeholders, to identify any current roadblocks in the testing efforts and work to remove them

Some steps for success to make a representative group work in other workplaces:

  • Start small. Think about the ratio of reps to teams (aim for 1 rep per 2-4 teams)
  • In forming the rep group, consider whether best to select people or ask for volunteers. (they selected people to start with)
  • Communicate the work of this group to the rest of the organisation
    • Maybe hook into existing communications
    • Include ways to get involved
  • Ask for feedback (e.g. surveys)
    • Now they do this more informally
    • Asking: Is it working/effective?
  • Run Rep group retrospectives
  • Ensure support from engineering leadership
    • Maybe include them in meetings
    • Get their feedback/input
    • Give them an awareness of upcoming testing concerns
    • Get budget support

My thoughts

Georgia did quite well for her first talk: it was clear she had practiced presenting it before, she had helpful slides to guide the content, and she carried confidence in her delivery. With more practice presenting she will only get better, and it will be interesting to see her speak again in a few years with more experience.

The content was relevant, describing a smart solution to problems that many organisations are now feeling with the recent direction testing roles are taking. Even for companies not in a position to form a 'rep group', there were definite takeaways in how a testing group can, and should, interact with other stakeholders within Engineering. The initiatives the group came up with look really interesting, and I would've liked to hear much more about those, as there appeared to be some directly applicable opportunities in there; perhaps they will be the subject of future talks?

Enchanting Experiences – The Future of Mobile Apps

by Parimala Hariprasad – @PariHariprasad | http://curioustester.blogspot.co.uk/

Pari finished out the day as the final keynote, speaking on where she sees the design of applications and devices heading in a heavily connected world and what opportunities could be unlocked.

It started with Machine Input, defined as information that devices can find on their own.

Products are developing all the time, getting more complex, with more interfaces, but perhaps what we need is to get simpler again, and automatically detect more of the options we would otherwise have to choose (through machine input).
We then looked at a range of machine input types:

  • Camera – e.g. auto detect details of a credit card held in front of the camera and pre-enter
  • Location – detect where you are and make suggestions, including smart defaults like detecting the country and local currency
  • History – Each time you fill out forms, remember what data you put in, particularly between site visits.
  • There are lots of other sensors in our phones that can be hooked into.

Could we detect which floor someone is on, which shop they are in, what they have recently searched for and offer them a discount as they walk around the shop, without prompting?

The internet of things is leading to a more and more connected home and we looked at some ideas companies like Google are developing in this space.

  • Designing great products is about great experiences.
  • Security and privacy are really important with all this data being available
    • Devices should not dictate to us how we live; they should make things easier and work for us
  • Learn about design and usability to see how it can impact your testing efforts/plan.

My thoughts

No particular takeaways from this one, as I felt it wasn't really a talk targeted at testers, more one for developers or designers who can more directly incorporate Pari's design thoughts into their work. It was interesting to think through some of the options machine learning could make possible in various parts of our lives, from the point of view of "that would be cool to see!". There will definitely be challenges in testing these types of features, which wasn't covered much beyond acknowledging that security will be key. The content did prompt some thinking about what would need to be considered in testing this sort of technology.

99 Second talks

99 second talks then wrapped up the day, with about 20 people speaking. For most people it was a chance to practice public speaking in a somewhat non-threatening environment. For others it was a chance to promote their company/group, or just a chance to introduce a topic/technique. It was hard to take much in given the format, and I didn't note down any takeaways, but some interesting topics were raised.

Overall thoughts on the conference

One of the big selling points leading up to TestBash was the community feel of the event, particularly seen in the pre and post conference meetups. I was unable to attend either, so I didn’t get the full experience and can’t really comment on how the conference lived up to that hype. The conference day felt much like most other conferences I’ve been to in terms of interactions between sessions and general flow of the day.

There wasn't much chance for questions with most of the talks, which was unfortunate, as some great discussion can come out of this as people dive into the bits that were missed or are really interesting to them. Though, in saying that, question time can be hit and miss depending on which direction the conversation gets steered towards, so it's not a big concern.

The talk selection was pretty good considering it was a single track, so every talk had to be chosen for being applicable to a wide audience, covering a generally applicable topic, while still trying to make that topic interesting enough for anyone to learn from. These are pretty hard criteria to meet, so hats off to the selection team. If you want a deep dive on particular topics, this is perhaps not the best type of conference to attend, but it could still happen on the right topic.

Speakers are well looked after by TestBash, in communications and compensation, though as a local speaker, there was nothing to be compensated for, so that is definitely more of a perk for non-local speakers.

The swag was better than most conferences, aiming for practical items that can be re-used beyond the day.

Overall I found the day enjoyable; I got good feedback on my talk, and I have some renewed ideas and techniques from other talks to take back into my workplace, which makes for a successful conference in my book. This is helped by sharing my thoughts afterwards, which I recommend others try after any conference or training, even if just with peers in their workplace.

Would I go to the next one? Maybe; if I wasn't speaking I would need to look at the schedule first and see what talks are being covered. I'm definitely glad we have another testing conference in Australia. We seem to be adding more to the mix every year at the moment, and each one lifts the overall quality of presentations as people get more practice, we get more international speakers, and there is a wider range of topics and conference setups to choose from. Each conference will have to work harder and harder to stand out and provide a good experience to attendees, which is a win for everyone! So I'm glad TestBash has come to Australia, and that it's coming back next year!

TestBash Sydney 2018 Reflection – Part 1

On the 19th October, 2018 I attended TestBash Sydney, the first time this event has come to Australia. I spoke on “Advocating for Quality by turning developers into Quality Champions”. I’ll share some more on that topic in a different post, and instead focus this post series on what I got out of the other talks presented that day.

These are the things that stood out to me, my own reflections and some paraphrasing of content, to help share the lessons with others.

Next Level Teamwork: Pairing and Mobbing

by Maaret Pyhäjärvi – @maaretp | http://visible-quality.blogspot.co.uk/

  • Things are better together!
  • When we pair or mob test on one computer, we all bring our different views and knowledge to the table.
  • With mob programming, only the best of the whole group goes into the code, rather than one person giving their best and worst.
    • Plus, we get those “how did you do that!” moments for sharing knowledge.
  • Experts aren’t the ones who know the most, but who learn the fastest.
  • Traditional pairing is one person watching what the other is doing, double checking everything. This is boring for the observer, and leads to an "I've got an idea, give me the keyboard!" mentality.
  • You should keep rotating the keyboard every 2-4 minutes to keep everyone engaged (see the timer sketch after this list).
  • Strong-style pairing shifts the focus, to a “I’ve got an idea, you take the keyboard” mentality, where you explain the idea to the other person and get them to try it out.
    • You are reviewing what is happening with what you want to happen, rather than guessing someone else’s mindset.
  • It can be hard to pair when skillsets are unequal, e.g. a developer and a tester, feeling that you are slowing them down or forcing them into things they don't want to do. Strong-style pairing helps with this.
  • Some pairing pitfalls
    • Hijacking the session: only doing what one person wants to try, or missing the point of pairing
    • Giving in to impatience: you might not see the value at first, but persist anyway
    • Working with 'them': pairing with someone you're uncomfortable with; consider mobbing instead of pairing here
  • At Agile Conf 2011, an 11-year-old girl was able to participate in a mob session by stating her intent and having others follow it through, and by listening to others and doing what they asked.
  • Mobbing basics:
    • Driver (no thinking). Instruct them by giving “Intent -> Location -> Details”
    • Designated navigator, to make decisions on behalf of group.
    • Conduct a retrospective.
    • Typically use 6-8 people, but if everyone is contributing or learning, it’s the right size.
    • Need people similar enough to get along, but diverse enough to bring different ideas.
  • You might have a good idea but not know how to code/test it yourself; in a mob you can state your intent and let others carry it out.
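
As a tiny illustration of the rotation mechanic mentioned above, here is a minimal sketch of a rotation timer in Python. The participant names and the 3-minute interval are just example values, not something from the talk.

import itertools
import time

MOB = ["Alex", "Bree", "Chen", "Dana"]   # hypothetical participants
ROTATION_MINUTES = 3                     # somewhere in the suggested 2-4 minute range


def run_rotation_timer(members, minutes):
    """Announce a new driver and navigator each rotation until interrupted (Ctrl+C)."""
    order = itertools.cycle(members)
    driver = next(order)
    while True:
        navigator = next(order)
        print(f"Driver: {driver} | Navigator: {navigator} - rotate in {minutes} minutes")
        time.sleep(minutes * 60)
        driver = navigator


if __name__ == "__main__":
    run_rotation_timer(MOB, ROTATION_MINUTES)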

My thoughts

We already use pairing a lot at work, and I didn't really learn anything new to introduce here. Mobbing doesn't really appeal to me: for it to be beneficial, you have to justify the time spent by a whole group working on one task, which to me only works when the work produced by individuals is far inferior. Mobbing should produce a better outcome, but it will be quite slow, and we get most of the way there by reviewing people's code to reach a solid overall solution.

Well presented, but I can't see mobbing working outside of some particular scenarios.

How Automated E2E Testing Enables a Consistently Great User Experience on an Ever Changing WordPress.com

by Alister Scott – @watirmelon | https://watirmelon.blog/

Note: A full transcript of Alister’s talk, including slides, is available on his website.

I didn’t really understand the start of Alister’s talk about hobbies or why it was included. It finished with the phrase: “Problems don’t stop, they get exchanged or upgraded into new problems. Happiness comes from solving problems”. This seems to be what the introduction was getting at. The rest of the talk then followed a pattern of presenting a problem around automation, a solution to the problem, and the problem that came out of the solution, and then the next solution, and so on.

  • Problem: Customer flows were breaking in production (They were dogfooding, but this didn’t include Signup)
  • Solution: Automated e2e test of signup in production
  • P: Non-deterministic tests due to A/B tests on signup
  • S: Override A/B test during testing
    • Including a bot to detect new A/B tests in the PR, with a prompt to update e2e tests
  • P: Too slow, too late, too hidden (since running in prod)
  • S: Parallel tests, canaries on merge before prod, direct pings
  • P: Still have to revert merges, slow local runs (parallel only in docker)
  • S: Live branch tests with canaries on every PR
  • P: Canaries don’t find all the problems (want to find before prod)
  • S: (Optional) Live branch full suite tests (use for large changes via a GitHub label)
  • P: IE11 & Safari 10 issues
  • S: IE11 & Safari 10 canaries
  • (All these builds report back directly into the GitHub PR) – NICE!
  • P: People still break e2e tests
  • S: You have to let them make their own mistakes
    • Get the PR author that broke the test logic to update the test, don’t just fix it

Takeaways:

  • Backwards law: acceptance of non-ideal circumstances is a positive experience
  • Solving problems creates new problems
  • Happiness comes from solving problems
  • Think of what you ‘can’ do over what you ‘should’ do
  • Tools can’t solve problems you don’t have
  • Continuous delivery only possible with no manual regression testing
  • Think in AND not OR

My thoughts

I thought this was a clever way to present a talk, and the challenges are familiar, so it was interesting to see how Alister's team had been addressing them. Being a talk about the 'what' and not the 'how' meant there were fewer direct actions to take out of it. I already know the importance of automated tests, running them as often and as early as possible, running them in parallel, and similar tips that came up in the talk. So for me, I'm interested in exploring setting up an optional build into a test environment or docker build where e2e tests can be run by setting an optional label on a GitHub PR.
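
To make that idea concrete, here is a minimal sketch of how such a label-gated run could work in a CI job that has a GitHub token available. The label name ("run-full-e2e"), the PR_NUMBER variable and the npm command are hypothetical placeholders, not Alister's actual setup.

import os
import subprocess
import sys

import requests


def pr_has_label(repo: str, pr_number: int, label: str, token: str) -> bool:
    # PRs are issues in the GitHub REST API, so labels live under the issues endpoint.
    url = f"https://api.github.com/repos/{repo}/issues/{pr_number}/labels"
    response = requests.get(url, headers={"Authorization": f"token {token}"}, timeout=10)
    response.raise_for_status()
    return any(item["name"] == label for item in response.json())


if __name__ == "__main__":
    repo = os.environ["GITHUB_REPOSITORY"]    # e.g. "my-org/my-repo" (set by GitHub Actions)
    pr_number = int(os.environ["PR_NUMBER"])  # hypothetical: exported by the CI job
    token = os.environ["GITHUB_TOKEN"]

    if pr_has_label(repo, pr_number, "run-full-e2e", token):
        # Hypothetical command that runs the full e2e suite against the branch build.
        sys.exit(subprocess.call(["npm", "run", "e2e:full"]))
    print("No 'run-full-e2e' label on this PR; skipping the full e2e suite.")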

A Tester’s guide to changing hearts and minds

by Michelle Playfair – @micheleplayfair

  • Testing is changing, and testers are generally on board, but not everyone else is.
    • Some confusion around what testers actually do
    • Most devs probably don’t read testing blogs, attend testing conferences or follow thought leaders etc. (and vice versa)
  • 4 P’s of marketing for testers to consider about marketing themselves:
    • Product, place, price and promotion
  • Product: How can you add value
    • What do you do here? (you need to have a good answer)
    • Now you know what value you bring, how do you sell it?
  • Promotion: Build relationships, grow network, reveal value
    • You need competence and confidence to be good at your job
    • Trust is formed either cognitively or affectively based on culture/background
    • You need to speak up and be willing to fail. People can’t follow you if they don’t know where you want to take them
    • Learned helpfulness
      • Think about how you talk to yourself
      • Internal vs external. “I can’t do that” -> Yes it’s hard, but it’s not that you are bad at it, you just need to learn.
      • Permanent vs temporary. “I could never do that” -> Maybe later you can
      • Global vs situational. Some of this vs all of this
  • Step 1: Please ask questions, put your hand up! Tell people what you know
    • Develop your own way of sharing, in a way that is suitable for you.
  • If you can't change your environment, change your environment (i.e. find somewhere else).

My thoughts

Michelle presented very well, and the topic of 'selling testing' is particularly relevant given changes to the way testing is viewed within modern organisations. This was a helpful overview to start thinking through how to tackle this problem. The hard work is still on the tester to figure things out, but using the 4 P's marketing approach is going to help structure this communication.

My talk

My talk “Advancing Quality by Turning Developers into Quality Champions” was next, but I’ll talk more about that in a separate blog post later.

A Spectrum of Difference – Creating EPIC software testers

by Paul Seaman & Lee Hawkins – @beaglesays | https://beaglesays.blog/ & @therockertester | https://therockertester.wordpress.com/

Paul and Lee talked about the program they have set up to teach adults on the autism spectrum about software testing through the EPIC testability academy. An interesting note early on regarding language was that their clients indicated they preferred identity-first language ("autistic person") over phrases like "person with autism".

They had identified that it's crucial to focus their content on testing skills directly applicable in a workplace, to keep iterating that content, and to cater for differences within the group by rearranging content, changing approaches, etc. They found it really helpful to reflect on and adapt content over time, while making sure they gave the content enough of a chance to work first. Homework for the students also proved quite useful, despite initial hesitation about including it in the course.

My thoughts

It’s encouraging to hear about Paul and Lee’s work, a really important way to improve the diversity of our workforce, and give some valuable skills and opportunities to people who can often be overlooked in society. It was also helpful to think a little bit about structuring a training course in general and what they got out of that.

I did find the paired nature of the presentation interfered with the flow a little, but it wasn't a big problem.

Exploratory Testing: LIVE

by Adam Howard – @adammhoward | https://solavirtusinvicta.wordpress.com/

This was an interesting idea for a talk: a developer at Adam's work had hidden some bugs in a feature of the company website and put it behind a feature flag for Adam to test against, without making it public to anyone else. Adam then did some exploratory testing live, trying to find some of the issues the dev had hidden for him, demonstrating exploratory testing techniques for us at the same time. Adam also had access to the database to do some further investigation if needed.

The purpose of doing this exercise was to show that exploratory testing is a learned skill, and to help with learning to explain yourself. By marketing yourself and your skills like this, others can and will want to learn too.

  • Draw a mindmap as we go to document learnings
  • Consider using the “unboxing” heuristic, systematically working through the feature to build understanding.
    • E.g. Testing a happy path, learn something, test that out, make a hypothesis and try again. Thinking about how to validate or invalidate our observation.
    • Sometimes you might dive into the rabbit hole a little when something stands out.
    • Make a note of any questions to follow up or if something doesn’t feel right.
  • It can be helpful to look at help docs to see what claims we are making about the feature and what it should do. (Depends on what comes first, the help doc or the feature).

My thoughts

An interesting way to present a talk on how to do exploratory testing: by seeing it in action. There were a few times I could see the crowd wanted to participate but didn't get much chance, and it felt like it was kind of an interactive session, but kind of not, so I wasn't quite clear on the intention there. Seeing something in action is a great way to learn, though I would've liked to see him try this on a website he didn't already know, or at least one that more people would be familiar with, so we could all be on a more level playing field. It made for interesting viewing regardless.

Part 2 of my summary: TestBash Sydney 2018 Reflection – Part 2

Australian Testing Days 2016 Reflection – Day 2

For Part 1 of this series, see Australian Testing Days 2016 Reflection – Day 1

Day 2

Coaching Testers (workshop)

I chose Anne-Marie Charrett's workshop for day 2 to learn about coaching testers. Even though I'm not directly in a coaching role, I thought there would be lots of great things to learn, and a number of things that would apply to teaching non-testers in my team about testing. The day started with discovering why each of us had come and what we were hoping to learn from the day, so we could focus our discussions accordingly.

First up we thought about coaching sessions and how they should not transform into a “How are you going” session. Much more effective is to make the sessions task-based (e.g. let’s work through an activity and I’ll offer assistance along the way) or for the student to come with a question.

The next lesson was a great approach to teaching new techniques or skills by starting with applying it to a familiar, non-testing situation. For example, creating a mind-map of your family. This takes away the fear of getting it wrong since it’s known and familiar, and greater focus can be put on the technique or skill being learnt.

In understanding what things to coach and how to go about coaching them, we need to understand the context of both the coach and the student. They will each have two images of themselves: one of how they view themselves as a tester, and one of how they view themselves as a coach/student. There are also many other factors to consider, knowing that different approaches work for different people.

The day was broken up with a number of movie clips to show different coaching styles. First up was the scene in The Matrix where Morpheus asks Neo to show him the Kung-Fu he has just learnt. Throughout the scene, more pressure and direction is added by Morpheus as Neo progresses through the task. It starts off easy, then gets harder and more physical as Neo is pushed further and further to learn the lesson Morpheus was teaching him (the rules can be broken).

Next we watched a clip from The Blind Side, where Sandra Bullock's character was able to teach her foster son how to defend in gridiron better than the team coach could. This was because she better understood the needs of the student and what motivated him (in this case, family). The question he needed answered was "Why", not "How". Trust was also a big factor in the success of the coaching efforts.

Taking this lesson further, as a coach you want to find out what drives people: why they are in testing, how they got here and where they want to be. Then the coach can focus on directing them on how to get there and where they need the most help, based on the journey they've had so far.

Our next video was from The Magnificent Seven, where a young character was challenged to match his speed at drawing his weapon against the team he hoped to join. This was more about putting people out of their comfort zones and seeing how they will respond to pressure. It was a way to show this man that his skills weren’t up to scratch, and that he would need to lift his game.

Following this we talked about Socratic questioning, which is all about not directly answering people's questions but instead asking questions in return to help lead them to discover the answer for themselves. This is a very powerful tool in coaching; an answer you come to by yourself is much more powerful and memorable than one you were told. But it can also be taken the wrong way and be demotivating when people feel like they never get straight answers.

The coach will need to help manage the way people feel about themselves and base their coaching on the student's needs, energy and context. There isn't a one-size-fits-all approach.

Our final movie clip was from Million Dollar Baby where Hillary Swank’s character is trying to convince Clint Eastwood’s character to coach her. There was certainly reluctance in this coaching partnership, but biases had to be overcome and the student had to reflect and show they did indeed have the passion and desire to not give up, even though it would be hard. Sometimes we too might not feel comfortable at first about coaching someone or being coached.

Two final points were made around managing expectations and reflecting on progress. Manage people's expectations about learning something quickly by showing that confusion is just a learning state, not a weakness or lack of ability. It's also powerful if you can show someone how much they have grown already, to help give them motivation to continue on that journey.

The day finished with a practical exercise of simulating a coaching session between a coach, a tester and a developer, trying to come up with acceptance criteria for a payroll system. This was a great way to put into practice some of the things we had been learning throughout the day, and to see how, even in controlled environments, people have quite different needs, opinions and expectations, so coaching needs to adapt all the time to suit.

Summary

I had a great time at Australian Testing Days 2016 and am looking forward to going again next year! I’d also love to hear what other people got out of the conference or if you have a different view point to anything discussed in either of my posts about it.

Australian Testing Days 2016 Reflection – Day 1

On May 20-21 I went to the inaugural Australian Testing Days Conference in Melbourne. The first day involved a series of talks, mostly sharing experiences people have had in testing, and the second day was an all-day workshop on test leadership. This post outlines the key messages from the sessions I attended and the key things I learnt from each one.

Day 1

Part 1 – What you meant to say (keynote)

First up, Michael Bolton discussed how the language we, as testers, use around testing can be quite unhelpful and cause confusion for those involved. For example, automated testing does not exist. You can certainly automate checking, which is mainly regression tests of existing behaviour. But you cannot automate testing, which is everything someone does to understand more about a feature, giving them knowledge to decide on how risky it is to release it. The way we communicate what we do will impact what others understand it as and then expect of us. Similarly, it is important to understand the language others use when asking us to do something.

Another helpful lesson was that customer desires are more important than customer expectations. If they are happy with your product, it doesn’t matter if it met expectations. If I don’t expect the Apple Watch will be of much use to me, but then I try it and discover that I love it, my expectations weren’t met, but my desires were, and it’s a good result. Similarly, users might expect something totally different to what you produce, but if they discover that what you made is actually better than their expectations, it is likewise a good result.

Lessons learnt:

  • Be clear in communication of testing activities to avoid ambiguity and misalignment.
  • Seek the underlying mentality behind people’s testing questions

Part 2 – Transforming an offshore QA team (elective)

Next up, Michele Cross shared the challenges she is facing in transforming an offshore, traditional and highly structured testing team into a more agile, context-driven testing team. The primary way to achieve any big change like this is by creating an environment of trust, which comes in two forms. Cognitive trust is based on ability: you trust someone because of their skills and attributes. For example, you trust a doctor you have just met, who has been studying and practising medicine for 20 years, to diagnose you. Affective trust is based on relationships: you trust someone because of how well you know them and how you have interacted with them in the past. For example, you trust a friend's movie recommendation because of shared interests and experiences, not their skills as a movie reviewer.

To help establish this trust and initiate change, three C’s were discussed.

  • Culture – Understanding people and their differences. The context that has brought people to where they are now will greatly shape how they interact with people. Do they desire structure or independence? Are they open to conflict or desire harmony? Knowing this can help inform decisions and approaches.
  • Communication – Relating to other people can be just as hard as it is important. Large, distributed teams bring with them challenges of language, timezones, video conferencing etc… It is crucial to find ways to address these concerns so that everyone is kept in the loop, aligned on direction and is able to build relationships with each other.
  • Coaching – Teaching new skills through example and instruction. Create an environment where it is safe to fail so people feel comfortable to grow. Use practical scenarios to teach skills and get involved yourself.

Lessons learnt:

  • Consider the cultural context of people you are interacting with, as it will shape how to be most effective in those interactions
  • Learning through doing, and doing alongside someone is a great way of learning
  • Trust is built by a combination of personal relationships and technical abilities

Part 3 – It takes a village to raise a tester (elective)

Catherine Karena works at WorkVentures, which is all about helping underprivileged people develop life skills and technological skills to help them enter the tech workforce. She talked about how to figure out what skills to teach by looking at where the most jobs are in the market and the common skills required. This includes both technical and relational skills, as graduates will interact with structures and other staff in companies.

On a more general note, a number of characteristics of what makes a great tester were highlighted to focus on teaching these skills as well. A great tester is: curious, a learner, an advocate, a good communicator, tech savvy, a critical thinker, accountable and a high achiever. When it comes to the learning side, a few more tips were shared around teaching through doing as much as possible, making it safe to fail, using industry experts and building up the learning over time.

Some interesting statistics were raised showing that those trained by WorkVentures over 6 months were equal to or greater in performance and value compared to relevant Uni graduates when rated by employers.

Lessons learnt:

  • Relational skills can be just as, if not more, important than technical skills in hiring new talent.
  • Learning in small steps with practical examples greatly improves the outcome.

Part 4 – Context Driven Testing: Uncut (elective)

Brian Osman talked about his experience growing in knowledge and abilities as a tester and how greatly that experience was shaped by testing communities. He explained how a community of like-minded people can help drive learning as they challenge each other and bring different viewpoints across.

A side note he introduced was the term 'Possum Testing', which he described as "testing that you don't value, motivated by a fear of some kind", for example avoiding a form of testing because you don't understand it or how to use it. This is an idea that many people would recognise, but could perhaps find hard to articulate and discuss. Giving it a name instantly provides a means to bring it up in conversation and have people already have a good idea of the context and any common ground in thinking.

Lessons learnt:

  • Naming ideas or common problems is a helpful way to direct future conversations and bring along the original context
  • When looking to improve in a certain area/skill, find a community of others looking to do the same thing.
  • Use these communities to present ideas, defend them and challenge others' ideas. Debates are encouraged

Part 5 – Testing web services and microservices (elective)

Katrina Clokie (who also mentors me in conference speaking) spoke about her experience testing web services and microservices; a previous version of the talk is available online if you are interested. Starting with web services, she pointed out that each service will have different test needs based on who uses it and how they use it. Service virtualisation is a common technique used in service testing to isolate the front-end from inconsistent and potentially unstable back-ends. Microservice testing puts another layer in this model.

Some key guidelines for creating microservices automation were presented, claiming that it should be fit for purpose, remove duplication, be easy to merge changes, have continuous execution and be visible across teams.

An interesting learning technique called Pathways was presented; these can be found on her website and each lists a whole bunch of resources for learning about a new topic. They are a helpful way of directing your learning time with a specific goal in mind.

Lessons learnt:

  • Make use of Katrina’s pathways for learning about a new area (for myself or as recommendations to others)
  • Get involved in code creation as early as possible to help influence a culture of testability
  • Write any automation with re-usability and visibility in mind

Part 6 – Test Management Revisited (keynote)

Anne-Marie Charrett finished up the day sharing some reflections and approaches she implemented during her time as Test Lead at Tyro Payments. She started by asking the question: do we still need test management? It has been asked a few times in the community already. The response was that we do need a testing voice to go with all the new roles and technology coming through, like microservices and DevOps. This doesn't mean we need Test Managers who deal with providing stability, but rather Test Leaders who can direct change. She talked about using the Satir Change Model to describe the process of change and its effect on performance.

She brought a mentality of transforming Tyro to have the best test practice in Australia, and was not interested in blindly copying others. There are certainly benefits in learning from the approaches others take, but they should be assessed against your company's environment. She discussed a number of testing-related strategies that you might have to deal with: continuous delivery, testing in production, microservices, risk-based automation, business engagement, embedding testing, performance testing, operational testing, test environments, training and growth.

The next question was how to motivate people to learn? Hand-holding certainly isn’t ideal, but you also probably can’t expect people to spontaneously learn all the skills you’d like them to have. This needs coaching! And the coaching should be focused around a task that you can then offer feedback on afterwards. Then challenge them to try it again on their own.

An important question to ask when identifying what skills to teach is what makes a good tester at your company, because your needs will be different to other places'. She then finished with a few guidelines around coaching based on giving people responsibilities, improving the environment they work in and continuing to adapt as different needs and challenges arise.

Lessons learnt:

  • Any practice/process being used by others should be analysed and adapted to fit your context, not blindly copied.
  • Be a voice for testing and lead others to make changes in areas they need to improve on
  • Think about what makes a good tester at my company and how I measure up
  • Help prepare the organization/team for change and help them cope as they struggle through it

That's a wrap for Day 1. Find my review of Day 2 here, where I took part in a workshop on Coaching Testers with Anne-Marie Charrett.

Where there’s Smoke there’s Fire!

The first sign of smoke is always an early indicator that fire is not far away. It's the first warning sign. So too in testing, having Smoke Tests that quickly step through the main pathways of a feature will let you know if there's a significant problem in your codebase.

What makes a good Smoke Test?

There are three factors which make a good smoke test: speed, realism and informativeness.

Speed

If there is a fire coming, you want to know about it as fast as possible so you have the most time possible to respond. Your tests must be fast enough that running them with every build/iteration will not cause frustration or significantly hold up progress.

If there is a problem, these tests will tell you, quickly, that you need to stop whatever else you might’ve been doing and respond to the issue.

Realism

If smoke is coming from a contained, outdoor BBQ in your neighbour's backyard, it doesn't matter; you don't need to do anything about it. But if a tree in your neighbour's backyard were on fire, that would be much more concerning. Smoke tests must only look for realistic and common usages of your feature, where you care if something breaks. If the background colour selector on your form creation tool is broken, that's probably not as crucial to worry about as the form itself not working.

So, prioritise your tests to only cater for the key user workflows through your feature, and leave everything else for other tests to cover. This will also go hand in hand with designing tests for speed.

Informative

If you see smoke, the first thing you will want to know is how far away it is and how big it is. This will dictate how devastating it will be and what sort of action you need to take. So too, the tests you write must be informative about where the problem is, how you can recreate it and how important it is to fix.

Smoke tests are much more useful if they can not only tell you that something is wrong, but also where the problem is, how important a problem it is and how to fix it.

An Example

The company I work for (Campaign Monitor) is an email marketing company, so a key scenario or User Story we would want to target in a Smoke Test is:
“As a user, I want to create a campaign, send it to my subscriber list, check the email got through and that any recipient interactions are reported”

So, the steps required for this smoke test are:

  1. Create a campaign, set the email content and the recipients
  2. Send the campaign to some email recipients
  3. Open one of the received emails and click on a link
  4. Check the report shows the email was opened and clicked

This test covers an end-to-end user story that is a vital part of our software, hence if any part of it fails, we need to know immediately and fix it. It is fast, running in under a minute, so it doesn't interrupt workflow. It does not add any extra steps or checks that aren't helpful to the target user story. And it is written with logging and helpful error messages so that any failures point as close as possible to the problem area.
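
To make the shape of such a test concrete, here is a minimal sketch in Python using pytest and requests. The endpoints, payload fields and base URL are hypothetical placeholders (not Campaign Monitor's real API); the point is simply that the four steps above map onto one short, fast, happy-path test.

import pytest
import requests

BASE_URL = "https://app.example.com/api"   # hypothetical base URL
session = requests.Session()


@pytest.mark.smoke   # register the 'smoke' marker in pytest.ini and run with: pytest -m smoke
def test_create_send_and_report_on_campaign():
    # 1. Create a campaign, set the email content and the recipients (hypothetical endpoint).
    campaign = session.post(f"{BASE_URL}/campaigns", json={
        "subject": "Smoke test campaign",
        "html": "<a href='https://example.com'>Click me</a>",
        "list_id": "smoke-test-list",
    }).json()

    # 2. Send the campaign to the test recipients.
    assert session.post(f"{BASE_URL}/campaigns/{campaign['id']}/send").ok

    # 3. Simulate a recipient opening the email and clicking the link
    #    (a real suite might drive a test mailbox instead).
    session.get(f"{BASE_URL}/t/open/{campaign['id']}")
    session.get(f"{BASE_URL}/t/click/{campaign['id']}")

    # 4. Check the report shows the email was opened and clicked.
    report = session.get(f"{BASE_URL}/campaigns/{campaign['id']}/report").json()
    assert report["opens"] >= 1
    assert report["clicks"] >= 1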

Putting it together

Once you’ve created your smoke tests, put them into use! Run them every time you change something in your feature and they will quickly tell you if anything major is wrong. It’s so much better to find a major issue as early as possible and reduce time lost searching for the problem later on or building more and more problems on top.

Smoke tests shouldn't be your only test approach, as they only cover the 'happy path' scenarios and there are plenty of other areas where your feature could have problems. But they are certainly a good starting point!

I’d love to hear how you use (or why you don’t use) Smoke Tests in your company!

Getting started testing Microservices

Overview

Microservices involve breaking down functional areas of code into separate services that run independently of each other. However, there is still a dependency in the type and format of the data that is getting passed around that we need to take into consideration. If that data changes, and other services were depending on the previous format, then they will break. So, you need to test changes between services!

To do this, you can either have brittle end-to-end integration tests that will regularly need updating and are semi-removed from the process, or you can be smarter and just test that the individual services continue to provide and accept data as expected, to highlight when changes are needed. This leads to much quicker identification of problems, and is an adaptive approach that won't be as brittle as integration tests and should be a lot faster to run as well.

The Solution

What I’m proposing is to integrate contract-based testing. (Note, we are only in the early stages of trying this out at my work)
Here’s how it works:

Service A -- Data X --> Service B

Service A provides Service B with some sort of data payload in JSON format, X. We call Service A the provider and Service B the consumer. We start with the consumer (Service B) and determine what expectations it has of the data package X; we call this the contract. We would then have standard unit tests that run with every build on that service, stubbing out the data coming in from a pretend 'Service A'. This means that as long as Service B gets its data X in the format it expects, it will do what it should with it.

The next part is to make sure that Service A knows that Service B is depending on it to provide data X in a given format or with given data, so that if a change is needed, Service B (or any other service dependent on X) can be updated in line, or a non-breaking change can be made instead.

This is consumer-driven contract testing. It's nice because it means we can guarantee that Service A is providing the right kind of data that Service B is expecting, without having to test their actual connections. Spread this out to a larger scale, with 5 services dependent on Service A's data and Service A giving out 5 types of data to different subsets of services, and you can certainly see how this makes things simpler without compromising effectiveness.
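
To make the idea concrete, here is a minimal hand-rolled sketch in Python (deliberately not using the Pact library, so no tool-specific API is implied). The field names, services and the build_data_x_response helper are all hypothetical; the "contract" is simply the set of fields and types of data X that the consumer relies on.

# The contract: the fields of data X that Service B (the consumer) depends on.
CONTRACT_FOR_DATA_X = {
    "customer_id": str,
    "amount_cents": int,
    "currency": str,
}


def satisfies_contract(payload: dict, contract: dict) -> bool:
    """True if every field the consumer depends on is present with the expected type."""
    return all(
        field in payload and isinstance(payload[field], expected_type)
        for field, expected_type in contract.items()
    )


# Consumer side (Service B): unit tests run against a stubbed Service A on every build.
def test_consumer_handles_data_x_from_stub():
    stubbed_data_x = {"customer_id": "c-123", "amount_cents": 995, "currency": "AUD"}
    assert satisfies_contract(stubbed_data_x, CONTRACT_FOR_DATA_X)
    # ...then exercise Service B's own logic against stubbed_data_x here.


# Provider side (Service A): verify its output still honours the consumer's contract.
def test_provider_still_meets_consumer_contract():
    actual_data_x = build_data_x_response()  # hypothetical call into Service A's own code
    assert satisfies_contract(actual_data_x, CONTRACT_FOR_DATA_X)


def build_data_x_response() -> dict:
    """Stand-in for Service A's real serialiser, included only so the sketch runs."""
    return {"customer_id": "c-123", "amount_cents": 995, "currency": "AUD", "note": "extras are fine"}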

A variation of this is to have Service B continue to stub out actually getting data from Service A for the CI builds. But instead of testing on Service A that it still meets Service B's expected data format, we can put that test on Service B as well, so it also checks the stub being used to simulate Service A against what is actually coming in from Service A on a daily basis. When it finds a change, the stub is updated and/or a change request is made to Service A.
Both types have advantages and disadvantages.

In Practice

Writing these sorts of tests can be done manually, but there are tools which help with this as well, making it easier. Two such products are Pacto and Pact. They are both written in Ruby; Pacto is by Thoughtworks and Pact by RealEstate.com.au. Of these two I think Pact is the better option, as it appears to be more regularly updated and to have better documentation. PactNet is a .NET version of Pact written by SEEK Jobs; .NET is the language used at my work, so that is the solution we're looking into.

These tools provide a few different options along the lines of the concepts described above. One such use case is that you provide the tool with an HTTP endpoint; it hits the endpoint and makes a contract out of the response (describing what a response should look like). Then in subsequent tests the same endpoint is hit and the result is compared with the previously saved contract, so it can tell if there have been any breaking changes.
I'm not sure how well these tools handle specifying that only part of the response really matters to you, and that the rest can change without breaking anything. That would be a more useful implementation.
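
As a rough sketch of that record-then-compare flow (same caveats as above: the endpoint and file path are hypothetical, and this is hand-rolled rather than any particular tool's API), a first run could save the shape of the response as the contract, and later runs could compare only the recorded fields, ignoring anything extra:

import json
import pathlib

import requests

CONTRACT_FILE = pathlib.Path("contracts/service_a_data_x.json")
ENDPOINT = "https://service-a.internal/data-x"   # hypothetical endpoint


def record_contract() -> None:
    """First run: save the shape of the response (field -> type name) as the contract."""
    response = requests.get(ENDPOINT, timeout=10).json()
    shape = {field: type(value).__name__ for field, value in response.items()}
    CONTRACT_FILE.parent.mkdir(parents=True, exist_ok=True)
    CONTRACT_FILE.write_text(json.dumps(shape, indent=2))


def test_endpoint_matches_recorded_contract():
    """Later runs: flag any recorded field that has disappeared or changed type."""
    contract = json.loads(CONTRACT_FILE.read_text())
    response = requests.get(ENDPOINT, timeout=10).json()
    for field, type_name in contract.items():
        assert field in response, f"breaking change: '{field}' is missing"
        assert type(response[field]).__name__ == type_name, f"'{field}' changed type"
    # Fields not in the recorded contract are ignored, so only the parts of the
    # response we said we care about can break the build.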

Further reading

Note that most of the writing available online about these tools is referring to the Ruby implementation, but it’s transferable to the .Net version.

Influential people

People that are big contributors to this space worth following or listening to:

  • Beth Skurrie – Major contributor and speaker on Pact from REA
  • Martin Fowler – Writes a lot on microservices, how to build them and test them, on a theory level, not about particular tools.
  • Neil Campbell – Works on the PactNet library

Got any experience testing microservices and lessons to share? Other resources worth including? Please comment below and I’ll include them

The Art of Problem Solving

Solving problems is something I love doing. My job presents me with problems to solve every day (e.g. how to test 'invisible' features or how to simulate a customer using our product), and outside of work I'm also faced with problems to solve on a regular basis (e.g. why the hot water tap doesn't heat up, or why the TV reception is bad for one channel only). Whatever the problem, there are ways of approaching it that apply in most cases. Here's the approach I use.

Step 1 – Gathering Intel

First up, we need to understand the problem we are trying to solve. So start by gathering all the information you can about the problem. Here are some starter questions worth asking:

  • What steps are needed to see the problem occur? (So you can determine if your fix worked)
  • What is the expected outcome and what is the actual outcome? (So you can tell when it’s been fixed)
  • Is it something that happens every time or just sometimes? And if so, how often? (This might tell you something about why it’s happening)
  • Does anything you do seem to influence whether it happens or not or how severe it is? (Another potential cause to the problem)
  • When did the problem start? Is it recent or always? Can you trace it back to start with a particular event? (Did that event cause the problem? Maybe it just made it visible)

Once you’ve got these questions (or similar ones) answered, you should have a good idea of what the problem is that you are trying to solve and a way to test out if your fix for the problem has worked.

Step 2 – Creating a hypothesis

Now that you know the problem, step 2 is to create a hypothesis on how you might be able to fix it. This will be based on your findings from step 1 about what factors seem to contribute to the problem, along with an idea of what change needs to be made to fix it.

For example, a simple case might be that last month someone changed the type of paper the photocopier was using, and over the past few weeks there appears to have been an increase in paper jams. So your hypothesis is that by changing the paper back to the previous type, the occurrence of paper jams should decrease. You've seen the change in behaviour, you noticed an event which possibly contributed to the problem, and you came up with a method to fix the problem, including a way to tell if it worked.

Perhaps the problem is a bit trickier. You’ve been using your iPhone for months as a calendar and it sends you email notifications when you have an appointment coming up. You recently upgraded to a newer model iPhone, at the same time as changing your email address that you receive notifications on. The iPhone calendar app has been updated and you decide that instead of checking those emails on your iPhone, you are going to start using your new iPad to read them. But the emails have the wrong times for your appointment.

Lots of variations and no obvious cause for the change. But the same rules still apply: find out all you can, and create your best-guess hypothesis of what might help fix the problem.

Step 3 – Test out your hypothesis

Finally, you test out your hypothesis: make the change(s) required and see if the problem still happens. This might take a lot of testing, depending on how often the problem occurred and how predictably it could be reproduced. Eventually, you'll figure out whether your hypothesis was correct or not.

Hypothesis didn’t solve the problem? Repeat Steps 2 and 3 again with the new information you’ve just learnt until you find your answer.

Applying it to my job as a tester

How does this fit in with being a tester? Well, there are two main types of problems I come across in my work.

  1. How to test tricky to see/verify areas of the project?
  2. How to create automated tests to cover those areas that are also useful, readable and maintainable (see my previous blog post for more on writing good automated tests)

In both of these cases, the same rules apply as I've outlined above. It might take a while to gather the right data and come up with a solution, but I've found following this simple, general approach quite helpful in my work. In my role as a tester for the online SaaS company Campaign Monitor (which is awesome, by the way!) I have lots of things to consider in how to test our product and, where appropriate, automate tests to cover our various UI elements, so I constantly find myself with problems to solve: from how to run automated tests in parallel to figuring out how customer feedback should shape the way I test and the things I look for.

I hope this simple approach might help others solve their problems too!

Bug Bash vs Usability Testing Session

The Scenario

Your project is nearing completion and you are getting close to release date. You think you've found the majority of the bugs there are to find in this product and are now ready to hand it over to others in the company to get their eyes over it. The question is, do you organise a 'Bug Bash' or a 'Usability Testing Session'?

I think the answer depends on what you are seeking to get out of it. I'll outline the differences between the two, when you should use each one and some tips for making them useful.

Bug Bash

My Definition: A group of people trying to find as many bugs as they can within a given feature in a short amount of time.

Preparation:

  1. Organise to get a group of people to meet in the same room for a designated amount of time (recommend 1 hour). Make sure there is a computer available to everyone, with any prior setup data ready to go.
  2. Plan what you are going to tell the group about the product, what areas you consider most risky, how to record any bugs they find and any further setup information they will need.
  3. Organise helpers from your team to verify and record bugs with you so that people aren’t all waiting for you.

Execution:

  1. Welcome and thank everyone for their attendance
  2. Explain what the product is, and any particular areas you would like people to focus on.
  3. Explain how to record any bugs people find
  4. Start the timer (rec. 1 hour)
  5. Encourage interaction between everyone to help the test ideas flow, food and drinks on the table, light background music, etc..
  6. With your helpers, record any bugs and answer any questions raised. Resist giving too much information away; if the user can't figure it out, it may be a usability bug.
  7. At the end, thank everyone for their participation and let them know you will send out the results soon (the follow up helps reinforce that you value their input)

Variations:

  • Put people in teams of 2 working on the same machine: 1 person 'drives' and the other 'navigates' by offering suggestions of test cases to try out, guided by what they see the 'driver' doing.
  • Have prizes for the person/team that finds the most bugs and the best bug.

Who should I invite? Anyone and everyone! It's good to get a mix of people from different departments (QA, Design, Development, Marketing…), some people from your team, some people from other teams. I like to get as many testers on board as possible, as realistically they will be most likely to find bugs, but at the same time, a developer or a designer will approach the problem from a different viewpoint and is likely to find different types of bugs, so a mix is important.

Usability Testing Session

My Definition: Observing the user interactions as a group of people attempt to complete a certain set of tasks with your product, whilst they gain familiarity with it.

Preparation:

  1. Organise to get a group of people to meet in the same room for a designated amount of time (recommend 1 hour). Make sure there is a computer available to everyone, with any prior setup data ready to go.
  2. Plan the tasks that the people must complete during the session. The tasks should be top-level only, to allow the user freedom to explore the feature and figure out how to do it (even if it means getting lost along the way), e.g. submit a product review of your last purchased item.
  3. Organise helpers from your product team to record observations and answer questions from the group as they complete the challenge.

Execution:

  1. Welcome and thank everyone for their attendance
  2. Explain the tasks that each person is to complete during the session
  3. Explain the purpose of the session: that it is to observe their interactions, and to make note of any situations where they are confused by the product or unclear on what to do next (i.e. usability bugs).
  4. Start the session, best to keep a time-limit to the session to keep everyone on track.
  5. Create a relaxed vibe with food and drinks on the table, light background music, etc..
  6. Together with your helpers, observe the group's interactions with the feature, recording any time they became confused or were unclear on what to do. Answer any questions they have along the way, as it's also an opportunity for the group to get familiar with the product (since it's often helpful for the whole company to understand the new product being developed).
  7. Finish up, thank the participants and let them know that you will send out the results soon. (To help reinforce that you value their input)

Variations:

  • Put people in teams; whether this makes sense will depend on whether actual end-users of your product would use it together. If end-users will be working solo, don't use teams in the session either.
  • Have gifts for the participants as thanks, or perhaps prizes for the people who complete the tasks the fastest. (Being fast means there wasn't anything that confusing for them.)

Who should I invite? People who satisfy the following criteria:
  • Have not worked closely on the development of the product (they will already know how to use it)
  • Are interested in learning about your new product OR
  • Are a good representation of who your end-users will be (whether that is marketers, developers, non-tech savvy, etc..)

How to choose

Now that we have looked at what each of our options is (of course there are more options out there, but we're just focusing on two here), how do we pick which one to use?

Pick ‘Bug Bash’ if…

  • You are trying to find as many issues with your product as you can to give greater confidence in releasing it
  • You are not trying to teach people how to use the product
  • You don’t have access to people who meet the ‘Usability Testing Session’ group criteria

Pick ‘Usability Testing Session’ if…

  • You are trying to familiarise people in your company with the product you are soon to release
  • You are trying to find usability bugs (perhaps you are worried your product is confusing to use at times)

Summary

This was a brief overview of two common techniques for getting other people in your company to look over the product your team has been working on, to help find some issues you have been blinded to by working on the product every day. The technique you choose should be based on what you are hoping to get out of the session.

Feedback

I’d love to hear from you if you’ve found either of these techniques helpful for you in the past or if you have any further tips to enhance the usefulness and enjoyment of these sessions. Or perhaps you have a different technique completely that you prefer? Put it all in the comments below :)