TestBash Sydney 2018 Reflection – Part 2

The second half of my summary and reflections on the TestBash 2018 conference in Sydney. For the first half: TestBash Sydney 2018 Reflection – Part 1

Avoid Sleepwalking to Failure! On Abstractions and Keeping it Real in Software Teams

by Paul Maxwell-Walters – @TestingRants | http://testingrants.blogspot.com.au

The main focus of the talk seemed to be on Hypernormalisation, where you are so much a part of the system that you can’t see beyond it. For example, the message of society is that everything is OK, when really it isn’t, and you know it, but perhaps can’t explain why.

This can apply to testers: we just accept things the way they are and don't try to make things better, perhaps because we can't see or imagine how it could be better.

  • Hyper-reality – A world that you interact with that isn't real (e.g. Disney World). This is how we sometimes relate to software testing.

Abstraction #1 – Quality

The definition of quality is different for everyone. James Bach said in 2009 that quality is dead, which is a mini-hypernormalisation of its own: he didn't like the way things were going, and had somewhat accepted it.

Abstraction #2 – Measurement

In his 2017 book “Changing Times: Quality for Humans in a Digital Age”, Rich Rogers noted that we assume that once we have tested all of our criteria we have good quality (but this is wrong!). If a team's definition of quality or measurement isn't true to reality, then why persist with it?

Therefore, we don’t have to accept the way things are just because we don’t see how it could be any better.

My thoughts

Personally I found this a really hard talk to follow (as might be evident in my notes above), between the old news footage of Russia and the sheer amount and complexity of content. I felt the point of the talk, not just accepting what is being thrown at us in quality, could have been made much more simply, with more time spent on how to apply it, or on which areas of the job we should actively be looking to do better.

I do strongly agree with the idea of not accepting bad practices just because that's how things are now, and striving to make things better (i.e. being agile), but I found the delivery missed the mark a little.

Test Representatives – An Alternative Approach to Test Practice Management

by Georgia de Pont – @georgia_chunn

Georgia started by giving some of the context of the patterns/techniques used at Tyro: XP-style pairing, TDD and the Spotify model of squads. We then followed the journey the testing team at Tyro has been through, starting with the move from a disconnected team to embedded testers.
This created some challenges:

  • Loss of alignment and knowledge sharing within the testing group
  • Lack of consistency in test approaches and usage of test members within the team
  • Lack of a test manager (it was a flat structure)

The question arose of how to address these issues: should they hire a test manager or take a grassroots approach? (Evidently, they took the grassroots approach.)

Each product tribe selected a test representative. They would support their tribe's testers, gather information on issues and be a point of contact for that tribe. The representatives came up with a pipeline of improvement initiatives to work through, with ideas coming in a variety of ways. The representative group (“rep group”) would share information back to their test teams and would have regular meetings with the TestOps team.
Some of the initiatives:

  • Clarity of role of embedded tester, e.g. what are the test practices they can offer
  • Improving the recruitment process. Candidates were given a take home test, which wasn’t reflective of the workplace because people usually pair. So they created a pairing exercise instead.
  • Onboarding new test engineers and probation expectations
  • Performance criteria for test engineers for them and their leads to use
  • Upskilling test engineers (an ongoing effort): making sure training is available, organising internal and external speakers, conferences, etc.
  • Co-ordinating engineering-wide test efforts
  • Developing a quality engineering strategy, involving various stakeholders, to identify any current roadblocks in the testing efforts and work to remove them

Some steps for success to make a representative group work in other workplaces:

  • Start small. Think about how many teams per rep (aim for 1 rep per 2-4 teams)
  • In forming the rep group, consider whether it's best to select people or ask for volunteers. (They selected people to start with.)
  • Communicate the work of this group to the rest of the organisation
    • Maybe hook into existing communications
    • Include ways to get involved
  • Ask for feedback (e.g. surveys)
    • Now they do this more informally
    • Asking: Is it working/effective?
  • Run Rep group retrospectives
  • Ensure support from engineering leadership
    • Maybe include them in meetings
    • Get their feedback/input
    • Give them an awareness of upcoming testing concerns
    • Get budget support

My thoughts

Georgia did quite well for her first talk. It was clear that she had practised presenting it beforehand, she had helpful slides to guide the content, and she carried confidence in her presentation. With more practice she will only get better, and it will be interesting to see her speak again in a few years with more experience.

The content was relevant, describing a smart solution to problems that many organisations are now feeling with the recent direction testing roles are taking. Even for companies not in a position to form a ‘rep group’, there were definite takeaways in how a testing group can, and should, interact with other stakeholders within engineering. The initiatives the group came up with look really interesting, and I would've liked to hear much more about those, as there appeared to be some directly applicable opportunities in there. Perhaps this will be the subject of future talks?

Enchanting Experiences – The Future of Mobile Apps

by Parimala Hariprasad – @PariHariprasad | http://curioustester.blogspot.co.uk/

Pari finished out the day as the final keynote, speaking on where she sees the design of applications and devices heading in a heavily connected world and what opportunities could be unlocked.

It started with machine input, defined as information that devices can find on their own.

Products are developing all the time, getting more complex, with more interfaces, but perhaps what we need is to get simpler again, and automatically detect more of the options we would otherwise have to choose (through machine input).
We then looked at a range of machine input types:

  • Camera – e.g. auto detect details of a credit card held in front of the camera and pre-enter
  • Location – detect where you are and make suggestions, including smart defaults like detecting the country and local currency
  • History – Each time you fill out forms, remember what data you put in, particularly between site visits.
  • There are lots of other sensors in our phones that can be hooked into.

Could we detect which floor someone is on, which shop they are in, what they have recently searched for and offer them a discount as they walk around the shop, without prompting?
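The “smart defaults” idea from the list above can be sketched in a few lines: given a detected country, pre-fill locale-appropriate values instead of making the user choose. The mapping and field names below are illustrative examples, not any real product's implementation.

```python
# Illustrative sketch of location-based smart defaults: map a detected
# country code to pre-filled form values. Mapping is a hypothetical sample.
DEFAULTS_BY_COUNTRY = {
    "AU": {"currency": "AUD", "date_format": "DD/MM/YYYY"},
    "US": {"currency": "USD", "date_format": "MM/DD/YYYY"},
    "GB": {"currency": "GBP", "date_format": "DD/MM/YYYY"},
}

def smart_defaults(detected_country: str) -> dict:
    """Return form defaults for a detected country.

    Falls back to a neutral default when the country is unknown, so the
    user is still asked rather than handed a wrong guess.
    """
    return DEFAULTS_BY_COUNTRY.get(
        detected_country, {"currency": None, "date_format": "YYYY-MM-DD"}
    )
```

The fallback branch matters for testing: a smart default that guesses wrongly is worse than no default at all.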

The internet of things is leading to a more and more connected home and we looked at some ideas companies like Google are developing in this space.

  • Designing great products is about great experiences.
  • Security and privacy is really important with all this data being available
    • Devices should not dictate to us how we live; they should make things easier and work for us
  • Learn about design and usability to see how it can impact your testing efforts/plan.

My thoughts

No particular takeaways from this one, as I felt it wasn't really a talk targeted at testers; it was more for developers or designers, who can more directly incorporate Pari's design thoughts into their work. It was interesting to think through some of the options machine learning makes possible in various parts of our lives, from the point of view of “that would be cool to see!”. There will definitely be challenges in testing any of these types of features, which wasn't covered much beyond acknowledging that security will be key, but the content did spark thoughts on what would need to be considered in testing these sorts of technologies.

99 Second talks

99 second talks then wrapped up the day, with about 20 people speaking. For most it was a chance to practice public speaking in a somewhat non-threatening environment. For others it was a chance to promote their company or group, or to introduce a topic or technique. It was hard to take much in given the format, and I didn't note down any takeaways, but some interesting topics were raised.

Overall thoughts on the conference

One of the big selling points leading up to TestBash was the community feel of the event, particularly seen in the pre and post conference meetups. I was unable to attend either, so I didn’t get the full experience and can’t really comment on how the conference lived up to that hype. The conference day felt much like most other conferences I’ve been to in terms of interactions between sessions and general flow of the day.

There wasn't much chance for questions with most of the talks, which was unfortunate, as great discussion can come out of people diving into the bits that were missed or that really interested them. In saying that, question time can be hit and miss depending on where the conversation gets steered, so it's not a big concern.

The talk selection was pretty good considering it was a single track, so every talk had to be applicable to a wide audience, covering a generally applicable topic while still being interesting enough for anyone to learn from. That is a pretty hard set of criteria to meet, so hats off to the selection team. If you want a deep dive on a particular topic, this is perhaps not the best type of conference to attend, though it could still happen on the right topic.

Speakers are well looked after by TestBash, in both communications and compensation, though as a local speaker there was nothing for me to be compensated for, so that is definitely more of a perk for non-local speakers.

The swag was better than most conferences, aiming for practical items that can be re-used beyond the day.

Overall I found the day enjoyable. I got good feedback on my talk, and I have some renewed ideas and techniques to take back into my workplace from the other talks, which makes for a successful conference in my book. Sharing my thoughts afterwards helps with this, and I recommend others try it after any conference or training, even if just with peers in their workplace.

Would I go to the next one? Maybe. If I wasn't speaking I would need to look at the schedule first and see what talks are being covered. I'm definitely glad we have another testing conference in Australia. We seem to be adding more to the mix every year at the moment, and each one lifts the overall quality of presentations as people get more practice, and as we get more international speakers and a wider range of topics and conference setups to choose from. Each conference will have to work harder and harder to stand out and provide a good experience to attendees, which is a win for everyone! So I'm glad TestBash has come to Australia, and is coming back next year!


TestBash Sydney 2018 Reflection – Part 1

On 19 October 2018 I attended TestBash Sydney, the first time this event had come to Australia. I spoke on “Advocating for Quality by turning developers into Quality Champions”. I'll share more on that topic in a different post, and instead focus this post series on what I got out of the other talks presented that day.

These are the things that stood out to me, my own reflections and some paraphrasing of content, to help share the lessons with others.

Next Level Teamwork: Pairing and Mobbing

by Maaret Pyhäjärvi – @maaretp | http://visible-quality.blogspot.co.uk/

  • Things are better together!
  • When we pair or mob test on one computer, we all bring our different views and knowledge to the table.
  • With mob programming, only the best of the whole group goes into the code, rather than one person giving their best and worst.
    • Plus, we get those “how did you do that!” moments for sharing knowledge.
  • Experts aren’t the ones who know the most, but who learn the fastest.
  • Traditional pairing is one person watching what the other is doing, double-checking everything. This is boring for the observer: an “I’ve got an idea, give me the keyboard!” mentality.
  • You should keep rotating the keyboard every 2-4 minutes to keep everyone engaged.
  • Strong-style pairing shifts the focus, to a “I’ve got an idea, you take the keyboard” mentality, where you explain the idea to the other person and get them to try it out.
    • You are reviewing what is happening with what you want to happen, rather than guessing someone else’s mindset.
  • It can be hard to pair when skillsets are unequal, e.g. a developer and a tester, with the feeling that you are slowing them down or forcing them into things they don’t want to do. Strong-style pairing helps with this.
  • Some pairing pitfalls
    • Hijacking the sessions. Only doing what one person wants to try, or not getting the point
    • Giving in to impatience. You may not see the value at first, but persist anyway.
    • Working with ‘them’. Pairing with someone you are uncomfortable with; maybe use mobbing over pairing for this.
  • At Agile Conf 2011, an 11-year-old girl was able to participate in a mob session by stating her intent and having others follow it through, and by listening to others and doing what they said.
  • Mobbing basics:
    • Driver (no thinking). Instruct them by giving “Intent -> Location -> Details”
    • Designated navigator, to make decisions on behalf of group.
    • Conduct a retrospective.
    • Typically use 6-8 people, but if everyone is contributing or learning, it’s the right size.
    • Need people similar enough to get along, but diverse enough to bring different ideas.
  • You might have a good idea but not know how to code/test it; mobbing lets you contribute it anyway.
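The rotation mechanics above (keyboard every few minutes, a driver who types without deciding, a navigator who decides) can be sketched as a simple schedule generator. This is my own illustration of the idea, not anything from the talk; the names and the 4-minute interval are placeholders.

```python
# Sketch of a mob/pair rotation schedule: rotate the driver every few
# minutes so everyone stays engaged. The navigator here is simply the
# next person in the rotation; a real mob may designate one instead.
from itertools import cycle, islice

def rotation_schedule(members, turns, minutes_per_turn=4):
    """Return (start_minute, driver, navigator) tuples for a session.

    The driver types without making decisions; the navigator voices
    intent for the driver to follow.
    """
    order = list(islice(cycle(members), turns + 1))
    return [
        (turn * minutes_per_turn, order[turn], order[turn + 1])
        for turn in range(turns)
    ]
```

For example, `rotation_schedule(["Ana", "Ben", "Cai"], turns=3)` hands the keyboard to each person once before wrapping around.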

My thoughts

We already use pairing a lot at work, and I didn't really learn anything new to introduce here. Mobbing doesn't really appeal to me: for it to be beneficial, you have to justify the time spent by a whole group working on one task, which to me only works when the work produced by individuals is far inferior. Mobbing should produce a better outcome, but it will be quite slow, and we get most of the way there by reviewing people's code to reach a solid overall solution.

Well presented, but I can’t see mobbing working outside of some particular scenarios.

How Automated E2E Testing Enables a Consistently Great User Experience on an Ever Changing WordPress.com

by Alister Scott – @watirmelon | https://watirmelon.blog/

Note: A full transcript of Alister’s talk, including slides, is available on his website.

I didn't really understand the start of Alister's talk about hobbies, or why it was included, until it finished with the phrase: “Problems don't stop, they get exchanged or upgraded into new problems. Happiness comes from solving problems”. This seems to be what the introduction was getting at. The rest of the talk then followed a pattern of presenting a problem around automation, a solution to that problem, then the problem that came out of the solution, the next solution, and so on.

  • Problem: Customer flows were breaking in production (They were dogfooding, but this didn’t include Signup)
  • Solution: Automated e2e test of signup in production
  • P: Non-deterministic tests due to A/B tests on signup
  • S: Override A/B test during testing
    • Including a bot to detect new A/B tests in the PR, with a prompt to update e2e tests
  • P: Too slow, too late, too hidden (since running in prod)
  • S: Parallel tests, canaries on merge before prod, direct pings
  • P: Still have to revert merges, slow local runs (parallel only in docker)
  • S: Live branch tests with canaries on every PR
  • P: Canaries don’t find all the problems (want to find before prod)
  • S: (Optional) Live branch full suite tests (use for large changes via a GitHub label)
  • P: IE11 & Safari 10 issues
  • S: IE11 & Safari 10 canaries
  • (All these builds report back directly into the GitHub PR) – NICE!
  • P: People still break e2e tests
  • S: You have to let them make their own mistakes
    • Get the PR author that broke the test logic to update the test, don’t just fix it

Takeaways:

  • Backwards law: acceptance of non-ideal circumstances is a positive experience
  • Solving problems creates new problems
  • Happiness comes from solving problems
  • Think of what you ‘can’ do over what you ‘should’ do
  • Tools can’t solve problems you don’t have
  • Continuous delivery only possible with no manual regression testing
  • Think in AND not OR

My thoughts

I thought this was a clever way to present a talk, and the challenges are familiar, so it was interesting to see how Alister's team had been addressing them. Being a talk about ‘what’ and not ‘how’, there were fewer direct actions to take out of it. I already know the importance of automated tests, running them as often and as early as possible, running in parallel, and similar tips that came up in the talk. So for me, I'm interested in exploring setting up an optional build, into a test environment or docker build, where e2e tests can be run by setting an optional label on a GitHub PR.
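The gating logic for that optional build is simple enough to sketch: the fast canary tests always run, and the expensive full e2e suite runs only when the PR carries an opt-in label. The label and build names below are placeholders I've invented for illustration; in practice this check would live in CI configuration rather than application code.

```python
# Sketch of a label-gated build selection: always run fast canaries,
# add the full e2e suite only on opt-in. Label/build names are placeholders.
FULL_SUITE_LABEL = "run-full-e2e"

def builds_to_run(pr_labels):
    """Given a PR's labels, return which test builds to trigger.

    The full suite is opt-in (e.g. for large or risky changes), keeping
    the default feedback loop fast.
    """
    builds = ["canary-e2e"]
    if FULL_SUITE_LABEL in pr_labels:
        builds.append("full-e2e")
    return builds
```

The appeal of the label approach is that the decision is visible on the PR itself, and removing the label is all it takes to go back to the fast path.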

A Tester’s guide to changing hearts and minds

by Michelle Playfair – @micheleplayfair

  • Testing is changing, and testers are generally on board, but not everyone else is.
    • Some confusion around what testers actually do
    • Most devs probably don’t read testing blogs, attend testing conferences or follow thought leaders etc. (and vice versa)
  • 4 P’s of marketing for testers to consider about marketing themselves:
    • Product, place, price and promotion
  • Product: How can you add value
    • What do you do here? (you need to have a good answer)
    • Now you know what value you bring, how do you sell it?
  • Promotion: Build relationships, grow network, reveal value
    • You need competence and confidence to be good at your job
    • Trust is formed either cognitively or affectively based on culture/background
    • You need to speak up and be willing to fail. People can’t follow you if they don’t know where you want to take them
    • Learned helpfulness
      • Think about how you talk to yourself
      • Internal vs external. “I can’t do that” -> Yes it’s hard, but it’s not that you are bad at it, you just need to learn.
      • Permanent vs temporary. “I could never do that” -> Maybe later you can
      • Global vs situational. Some of this vs all of this
  • Step 1: Please ask questions, put your hand up! Tell people what you know
    • Develop your own way of sharing, in a way that is suitable for you.
  • If you can’t change your environment, change your environment (i.e. find somewhere else).

My thoughts

Michelle presented very well, and the topic of ‘selling testing’ is particularly relevant given changes to the way testing is viewed within modern organisations. This was a helpful overview to start thinking through how to tackle the problem. The hard work is still on the tester to figure things out, but the 4 P's marketing approach will help structure that communication.

My talk

My talk “Advancing Quality by Turning Developers into Quality Champions” was next, but I’ll talk more about that in a separate blog post later.

A Spectrum of Difference – Creating EPIC software testers

by Paul Seaman & Lee Hawkins – @beaglesays | https://beaglesays.blog/ & @therockertester | https://therockertester.wordpress.com/

Paul and Lee talked about the program they have set up to teach adults on the autism spectrum about software testing through the EPIC testability academy. An interesting note early on regarding language: their clients indicated they preferred identity-first language (“autistic person”) over phrases like “person with autism”.

They had identified that it's crucial to focus their content on testing skills directly applicable in a workplace, to keep iterating that content, and to cater for differences within the group by rearranging content, changing approaches, etc. They found it really helpful to reflect on and adapt content over time, while giving the content enough of a chance to work first. Homework for the students also proved quite useful, despite initial hesitation about including it in the course.

My thoughts

It's encouraging to hear about Paul and Lee's work: a really important way to improve the diversity of our workforce and to give valuable skills and opportunities to people who can often be overlooked in society. It was also helpful to think a little about structuring a training course in general and what they learned from doing so.

I did find the paired nature of the presentation interfered with the flow a little, but it wasn't a big problem.

Exploratory Testing: LIVE

by Adam Howard – @adammhoward | https://solavirtusinvicta.wordpress.com/

This was an interesting idea for a talk. A developer at Adam's work had hidden some bugs in a feature of their company website, putting it behind a feature flag so Adam could test against it without it being public to anyone else. Adam then did some exploratory testing live, trying to find the issues the developer had hidden for him, demonstrating some exploratory testing techniques for us at the same time. Adam also had access to the database for further investigation if needed.

The purpose of the exercise was to show that exploratory testing is a learned skill, and to help with learning to explain yourself. By marketing yourself and your skills like this, others can and will want to learn too.

  • Draw a mindmap as we go to document learnings
  • Consider using the “unboxing” heuristic, systematically working through the feature to build understanding.
    • E.g. Testing a happy path, learn something, test that out, make a hypothesis and try again. Thinking about how to validate or invalidate our observation.
    • Sometimes you might dive into the rabbit hole a little when something stands out.
    • Make a note of any questions to follow up or if something doesn’t feel right.
  • It can be helpful to look at help docs to see what claims we are making about the feature and what it should do. (Depends on what comes first, the help doc or the feature).

My thoughts

An interesting way to present a talk on how to do exploratory testing: seeing it in action. There were a few times I could see the crowd wanted to participate but didn't get much chance; it felt like it was kind of an interactive session, but kind of not, so I wasn't quite clear on the intention there. Seeing something in action is a great way to learn, though I would've liked to see him try this on a website he didn't already know, or at least one more people would be familiar with, so we could all be on a more level playing field. It made for interesting viewing regardless.

Part 2 of my summary: TestBash Sydney 2018 Reflection – Part 2