TestBash Sydney 2018 Reflection – Part 1

On the 19th of October 2018 I attended TestBash Sydney, the first time the event had come to Australia. I spoke on “Advocating for Quality by turning developers into Quality Champions”. I’ll share more on that topic in a different post, and instead focus this post series on what I got out of the other talks presented that day.

These are the things that stood out to me, my own reflections and some paraphrasing of content, to help share the lessons with others.

Next Level Teamwork: Pairing and Mobbing

by Maaret Pyhäjärvi – @maaretp | http://visible-quality.blogspot.co.uk/

  • Things are better together!
  • When we pair or mob test on one computer, we all bring our different views and knowledge to the table.
  • With mob programming, only the best of the whole group goes into the code, rather than one person giving their best and worst.
    • Plus, we get those “how did you do that!” moments for sharing knowledge.
  • Experts aren’t the ones who know the most, but who learn the fastest.
  • Traditional pairing is one person watching what the other is doing and double-checking everything. This is boring for the observer and breeds an “I’ve got an idea, give me the keyboard!” mentality.
  • You should keep rotating the keyboard every 2-4 minutes to keep everyone engaged.
  • Strong-style pairing shifts the focus to an “I’ve got an idea, you take the keyboard” mentality, where you explain the idea to the other person and get them to try it out.
    • You are reviewing what is happening with what you want to happen, rather than guessing someone else’s mindset.
  • It can be hard to pair when skill sets are unequal, e.g. a developer and a tester, where you feel you are slowing them down or forcing them into things they don’t want to do. Strong-style pairing helps with this.
  • Some pairing pitfalls
    • Hijacking the session: only doing what one person wants to try, or missing the point.
    • Giving in to impatience: you don’t see the value yet, but persist anyway.
    • Working with ‘them’: pairing with someone you find uncomfortable; a mob may work better than a pair here.
  • At Agile Conf 2011, an 11-year-old girl was able to participate in a mob session by stating her intent and having others follow it through, and by listening to others and doing what they said.
  • Mobbing basics:
    • Driver (no thinking). Instruct them by giving “Intent -> Location -> Details”
    • Designated navigator, to make decisions on behalf of group.
    • Conduct a retrospective.
    • Typically use 6-8 people, but if everyone is contributing or learning, it’s the right size.
    • Need people similar enough to get along, but diverse enough to bring different ideas.
  • You might have a good idea but not know how to code or test it yourself; strong-style pairing and mobbing let you contribute it anyway.

My thoughts

We already use pairing a lot at work, and I didn’t really learn anything new to introduce here. Mobbing doesn’t really appeal to me: for it to be beneficial, you have to justify the time spent by a whole group working on one task, which to me only works when the work produced by individuals is far inferior. Mobbing should produce a better outcome, but it will be quite slow, and we get most of the way there by reviewing people’s code to arrive at a solid overall solution.

Well presented, but I can’t see mobbing working outside of some particular scenarios.

How Automated E2E Testing Enables a Consistently Great User Experience on an Ever Changing WordPress.com

by Alister Scott – @watirmelon | https://watirmelon.blog/

Note: A full transcript of Alister’s talk, including slides, is available on his website.

I didn’t really understand the start of Alister’s talk about hobbies or why it was included. It finished with the phrase: “Problems don’t stop, they get exchanged or upgraded into new problems. Happiness comes from solving problems”. This seems to be what the introduction was getting at. The rest of the talk then followed a pattern of presenting a problem around automation, a solution to the problem, and the problem that came out of the solution, and then the next solution, and so on.

  • Problem: Customer flows were breaking in production (They were dogfooding, but this didn’t include Signup)
  • Solution: Automated e2e test of signup in production
  • P: Non-deterministic tests due to A/B tests on signup
  • S: Override A/B test during testing (see the sketch after this list)
    • Including a bot to detect new A/B tests in the PR, with a prompt to update e2e tests
  • P: Too slow, too late, too hidden (since running in prod)
  • S: Parallel tests, canaries on merge before prod, direct pings
  • P: Still have to revert merges, slow local runs (parallel only in docker)
  • S: Live branch tests with canaries on every PR
  • P: Canaries don’t find all the problems (want to find before prod)
  • S: (Optional) Live branch full suite tests (use for large changes via a github label)
  • P: IE11 & Safari 10 issues
  • S: IE11 & Safari 10 canaries
  • (All these builds report back directly into the github PR) – NICE!
  • P: People still break e2e tests
  • S: You have to let them make their own mistakes
    • Get the PR author that broke the test logic to update the test, don’t just fix it
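
As an aside on the A/B override idea: a generic way to make experiments deterministic in an e2e run is to pin the variation before starting the flow, assuming the application under test honours some kind of override. This is not necessarily how Alister’s team did it, and the cookie name and URLs below are invented; it is just a minimal Selenium sketch of the concept in C#:

    using System;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    class SignupAbOverrideSketch
    {
        static void Main()
        {
            using (IWebDriver driver = new ChromeDriver())
            {
                // Load any page on the site first so the cookie can be set for its domain.
                driver.Navigate().GoToUrl("https://example.com/");

                // Hypothetical override cookie: pin the signup experiment to a known
                // variation so the e2e flow is deterministic.
                driver.Manage().Cookies.AddCookie(new Cookie("ab_override_signup", "control"));

                // Now exercise the signup flow knowing which variation will be served.
                driver.Navigate().GoToUrl("https://example.com/start");
                // ... fill in the form, submit, assert the account was created ...
            }
        }
    }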

Takeaways:

  • Backwards law: acceptance of non-ideal circumstances is a positive experience
  • Solving problems creates new problems
  • Happiness comes from solving problems
  • Think of what you ‘can’ do over what you ‘should’ do
  • Tools can’t solve problems you don’t have
  • Continuous delivery only possible with no manual regression testing
  • Think in AND not OR

My thoughts

I thought this was a clever way to present a talk, and the challenges are familiar, so it was interesting to see how Alister’s team had been addressing them. Being a talk about ‘what’ rather than ‘how’ meant there were fewer direct actions to take out of it. I already know the importance of automated tests, running them as early and often as possible, running them in parallel, and similar tips that came up in the talk. For me, I’m interested in exploring an optional build into a test environment or Docker container where e2e tests can be run by setting an optional label on a GitHub PR.

A Tester’s guide to changing hearts and minds

by Michelle Playfair – @micheleplayfair

  • Testing is changing, and testers are generally on board, but not everyone else is.
    • Some confusion around what testers actually do
    • Most devs probably don’t read testing blogs, attend testing conferences or follow thought leaders etc. (and vice versa)
  • The 4 P’s of marketing, for testers to consider when marketing themselves:
    • Product, place, price and promotion
  • Product: How can you add value
    • What do you do here? (you need to have a good answer)
    • Now you know what value you bring, how do you sell it?
  • Promotion: Build relationships, grow network, reveal value
    • You need competence and confidence to be good at your job
    • Trust is formed either cognitively or affectively based on culture/background
    • You need to speak up and be willing to fail. People can’t follow you if they don’t know where you want to take them
    • Learned helpfulness
      • Think about how you talk to yourself
      • Internal vs external. “I can’t do that” -> Yes it’s hard, but it’s not that you are bad at it, you just need to learn.
      • Permanent vs temporary. “I could never do that” -> Maybe later you can
      • Global vs situational. Some of this vs all of this
  • Step 1: Please ask questions, put your hand up! Tell people what you know
    • Develop your own way of sharing, in a way that is suitable for you.
  • If you can’t change your environment, change your environment (i.e. find somewhere else).

My thoughts

Michelle presented very well, and the topic of ‘selling testing’ is particularly relevant given changes to the way testing is viewed within modern organisations. This was a helpful overview to start thinking through how to tackle the problem. The hard work is still on the tester to figure things out, but using the 4 P’s marketing approach will help structure that communication.

My talk

My talk “Advancing Quality by Turning Developers into Quality Champions” was next, but I’ll talk more about that in a separate blog post later.

A Spectrum of Difference – Creating EPIC software testers

by Paul Seaman & Lee Hawkins – @beaglesays | https://beaglesays.blog/ & @therockertester | https://therockertester.wordpress.com/

Paul and Lee talked about the program they have set up to teach adults on the autism spectrum about software testing through the EPIC Testability Academy. An interesting note early on regarding language: their clients indicated they prefer identity-first language (“autistic person”) over phrases like “person with autism”.

They had identified that it’s crucial to focus their content on testing skills directly applicable in a workplace, to keep iterating that content, and to cater for differences within the group by rearranging content, changing approaches and so on. They found it really helpful to reflect on and adapt content over time, while making sure to give the content enough of a chance to work first. Homework for the students also proved quite useful, despite initial hesitation about including it in the course.

My thoughts

It’s encouraging to hear about Paul and Lee’s work: a really important way to improve the diversity of our workforce and to give valuable skills and opportunities to people who are often overlooked in society. It was also helpful to think a little about structuring a training course in general and what they learnt from doing so.

I did find the paired nature of the presentation interfered with the flow a little, but it wasn’t a big problem.

Exploratory Testing: LIVE

by Adam Howard – @adammhoward | https://solavirtusinvicta.wordpress.com/

This was an interesting idea for a talk: a developer at Adam’s work had hidden some bugs in a feature of the company website and put it behind a feature flag for Adam to test against, without making it public to anyone else. Adam then did some exploratory testing live, trying to find the issues the developer had hidden for him, demonstrating some exploratory testing techniques for us at the same time. Adam also had access to the database to do some further investigation if needed.

The purpose of doing this exercise was to show that exploratory testing is a learned skill, and to help with learning to explain yourself. By marketing yourself and your skills like this, others can and will want to learn too.

  • Draw a mindmap as we go to document learnings
  • Consider using the “unboxing” heuristic, systematically working through the feature to build understanding.
    • E.g. Testing a happy path, learn something, test that out, make a hypothesis and try again. Thinking about how to validate or invalidate our observation.
    • Sometimes you might dive into the rabbit hole a little when something stands out.
    • Make a note of any questions to follow up or if something doesn’t feel right.
  • It can be helpful to look at help docs to see what claims we are making about the feature and what it should do. (Depends on what comes first, the help doc or the feature).

My thoughts

An interesting way to present a talk on how to do exploratory testing: by seeing it in action. There were a few times I could see the crowd wanted to participate but didn’t get much chance, and it felt like it was kind of an interactive session, but kind of not, so I wasn’t quite clear on the intention there. Seeing something in action is a great way to learn, though I would’ve liked to see him try this on a website he didn’t already know, or at least one that more people would be familiar with, so we could all be on a more level playing field. It made for interesting viewing regardless.

Part 2 of my summary: TestBash Sydney 2018 Reflection – Part 2


Where there’s Smoke there’s Fire!

The first sign of smoke is an early indicator that fire is not far away; it’s the first warning sign. So too in testing: having smoke tests that quickly step through the main pathways of a feature will let you know if there’s a significant problem in your codebase.

What makes a good Smoke Test?

There are three factors which make a good smoke test: speed, realism and informativeness.

Speed

If there is a fire coming, you want to know about it as fast as possible so you have the most time possible to respond. Your tests must be fast enough that running them with every build/iteration will not cause frustration or significantly hold up progress.

If there is a problem, these tests will tell you, quickly, that you need to stop whatever else you might’ve been doing and respond to the issue.

Realism

If smoke is coming from a contained, outdoor BBQ in your neighbour’s backyard, it doesn’t matter; you don’t need to do anything about it. But if a tree in your neighbour’s backyard were on fire, that would be much more concerning. Smoke tests must only look at realistic and common usages of your feature, where you care if something breaks. If the background colour selector on your form creation tool is broken, that’s probably not as crucial as the form itself not working.

So, prioritise your tests to only cater for the key user workflows through your feature, and leave everything else for other tests to cover. This will also go hand in hand with designing tests for speed.

Informative

If you see smoke, the first thing you will want to know is how far away it is and how big it is. This will dictate how devastating it will be and what sort of action you need to take. So too, the tests you write must be informative about where the problem is, how you can recreate it and how important it is to fix.

Smoke tests are much more useful if they can not only tell you that something is wrong, but also where the problem is, how important a problem it is and how to fix it.

An Example

The company I work for (Campaign Monitor) is an email marketing company, so a key scenario or user story we would want to target in a smoke test is:
“As a user, I want to create a campaign, send it to my subscriber list, check the email got through and that any recipient interactions are reported”

So, the steps required for this smoke test are:

  1. Create a campaign, set the email content and the recipients
  2. Send the campaign to some email recipients
  3. Open one of the received emails and click on a link
  4. Check the report shows the email was opened and clicked

This test covers an end-to-end user story that is a vital part of our software, so if any part of it fails we need to know immediately and fix it. It is fast, running in under a minute, so it doesn’t interrupt workflow. It does not add any extra steps or checks that aren’t helpful to the target user story. And it is written with logging and helpful error messages so that any failures point as close as possible to the problem area.
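
As a rough illustration only, a skeleton for this smoke test might look like the sketch below. CampaignBuilder, TestInbox and the report helpers are hypothetical names, not our actual code; the point is the shape: one step per user action, with an informative message on each assertion.

    using NUnit.Framework;

    [TestFixture]
    public class CampaignSmokeTest
    {
        [Test]
        public void CreateSendAndReportOnACampaign()
        {
            // NOTE: CampaignBuilder, TestInbox and the report object are hypothetical
            // helpers used only to illustrate the structure of the smoke test.

            // 1. Create a campaign, set the email content and the recipients.
            var campaign = CampaignBuilder.Create("Smoke test campaign")
                                          .WithContent("Hello from the smoke test")
                                          .WithRecipients("smoke-test-list");

            // 2. Send the campaign to some email recipients.
            var sendResult = campaign.Send();
            Assert.That(sendResult.Succeeded, Is.True, "Campaign failed to send: " + sendResult.Error);

            // 3. Open one of the received emails and click on a link.
            var email = TestInbox.WaitForEmail(campaign.Subject);
            Assert.That(email, Is.Not.Null, "Email was not received within the timeout");
            email.Open();
            email.ClickFirstLink();

            // 4. Check the report shows the email was opened and clicked.
            var report = campaign.GetReport();
            Assert.That(report.Opens, Is.GreaterThan(0), "Report did not record the open");
            Assert.That(report.Clicks, Is.GreaterThan(0), "Report did not record the click");
        }
    }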

Putting it together

Once you’ve created your smoke tests, put them into use! Run them every time you change something in your feature and they will quickly tell you if anything major is wrong. It’s so much better to find a major issue as early as possible and reduce time lost searching for the problem later on or building more and more problems on top.

Smoke tests shouldn’t be your only test approach, as they only cover the ‘happy path’ scenarios and there are plenty of other areas for your feature to have problems. But they are certainly a good starting point!

I’d love to hear how you use (or why you don’t use) Smoke Tests in your company!

What makes a good automated test?

Testers are increasingly being asked to write, maintain and/or review automated tests for their projects. Automated tests can be very valuable in providing quick feedback on the quality of your project but they can also be misused or poorly written, reducing their value. This post will highlight a few factors that I think make the difference between a ‘bad’ automated test and a ‘good’ automated test.

1. The code

People vary in how they like to write their code, for example the names they use and the structure they prefer. It’s fine to be different, but there are still aspects that should be considered to make sure the test will be useful into the future.

Readable & Self-explanatory
Someone should be able to read through your code and easily figure out what is being done at each part of the process. Extracting logical chunks of code into methods helps with this, but be wary of having methods within methods within methods… Deep nesting can add unnecessary complexity and reduce maintainability, since coupling between code increases. Use comments sparingly, as the methods and variables should be descriptive enough in most cases and comments quickly get outdated. Format your code so it’s easy to read, with logical line breaks, and be wary of ‘clever code’: it might be more efficient and might get 4 lines of code into 1, but it must still be readable and understandable to the next person looking at it in 5 months’ time. Make your test readable and self-explanatory.

Clear purpose
In order to make it easier to get a picture of test coverage, it helps to have tests with a clear purpose of what they are and aren’t testing. This also means people reading through the test trying to understand it or fix it know exactly what it’s attempting to do. Similarly it helps to not have your tests covering multiple test cases at once. When the test fails, which test case did it fail on? You’ll have to investigate every time just to know where the bug is before you even start getting the reproduction steps. Having multiple test cases also makes naming tests harder and can result in these extra test cases being hidden underneath other tests and perhaps duplicated elsewhere or giving a false picture of test coverage. Make your test have a clear purpose.
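
To make both points concrete, here is a minimal sketch (the page objects are invented for illustration): the test name states the single behaviour being checked, and the steps are extracted into well-named methods so the intent reads top to bottom without diving deeper.

    using NUnit.Framework;

    [TestFixture]
    public class SubscribeFormTests
    {
        [Test]
        public void SubmittingTheFormWithAValidEmailAddsTheSubscriber()
        {
            // NOTE: SubscribeFormPage and SubscriberListPage are hypothetical page
            // objects, used only to illustrate readable, single-purpose tests.
            var form = SubscribeFormPage.Open();

            form.EnterEmail("jane@example.com");
            form.Submit();

            var subscribers = SubscriberListPage.Open();
            Assert.That(subscribers.Contains("jane@example.com"), Is.True,
                "Expected the new subscriber to appear in the list after submitting the form");
        }
    }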

2. Execution

When you actually run your tests, there are a few more attributes to look for that contribute to the usefulness of the test.

Speed
If your tests take too long to run, people won’t want to wait around for the results, or will avoid running them at all to save time, which makes them completely useless. What ‘too long’ looks like will vary in each context, but speed is always important. You will reduce reluctance to use your tests by setting realistic expectations of how long they take to run. There will also be times where you want a smaller, faster subset of tests for a quick overview, and other times where you are happy with a longer, more thorough set of tests because you aren’t as worried about the time they take. Make your test fast.

Consistency
In most cases, depending on the context of your project, running your test 10 times in a row should give you the same 10 results. The same goes for running it 100 times in a row. Flaky tests that pass one minute and fail the next are the bane of automated tests. Make sure your tests are robust and will only fail for genuine failures. Add wait loops for criteria to be met, with timeouts that reflect what you consider a reasonable wait for an end user. Do partial matches if a full match isn’t necessary. Make your test consistent.
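
For example, with Selenium WebDriver in C# you can wait for a specific element state with an explicit, realistic timeout instead of a fixed sleep. A minimal sketch (the element ID in the usage comment is a placeholder):

    using System;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Support.UI;

    public static class Waits
    {
        // Poll until the element is present and visible, up to a realistic end-user
        // timeout, rather than failing immediately or sleeping for a fixed period.
        public static IWebElement WaitUntilVisible(IWebDriver driver, By locator, int timeoutSeconds = 10)
        {
            var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(timeoutSeconds));
            wait.IgnoreExceptionTypes(typeof(NoSuchElementException));
            return wait.Until(d =>
            {
                var element = d.FindElement(locator);
                return element.Displayed ? element : null;
            });
        }
    }

    // Usage (the "report-summary" ID is a placeholder):
    //   var summary = Waits.WaitUntilVisible(driver, By.Id("report-summary"));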

Parallelism
The fastest way to run tests is in parallel instead of in sequence. This changes the way you write tests: global variables and state can’t be trusted, and neither can shared services. To run in parallel, your tests need to be independent, not relying on the output of other tests or on shared data. Wherever possible, run your tests in parallel and find the optimum number of parallel streams; your tests will run much faster, giving you quicker feedback on the state of the system being tested. Make your test parallelisable.
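
In NUnit, for instance, opting in to parallel execution is largely declarative; the tests themselves just have to avoid shared state. A minimal sketch:

    using NUnit.Framework;

    // Run independent test fixtures in parallel, on up to four worker threads.
    [assembly: Parallelizable(ParallelScope.Fixtures)]
    [assembly: LevelOfParallelism(4)]

    namespace ParallelExamples
    {
        [TestFixture]
        public class LoginTests
        {
            [Test]
            public void CanLogIn()
            {
                // Each fixture sets up its own browser and data, so nothing is
                // shared with the other fixtures running at the same time.
                Assert.Pass();
            }
        }

        [TestFixture]
        public class ReportTests
        {
            [Test]
            public void CanViewReport()
            {
                Assert.Pass();
            }
        }
    }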

Summary

There is plenty more that could be said about what makes a good automated test, but this should make a good start. Having code that is readable, self-explanatory and clear in purpose, and written so that it runs fast, consistently and in parallel, will get you a long way towards an effective, efficient and, above all, useful set of automated tests.

I’d love to hear in the comments below what other factors you find most important in writing a good automated test.

Lessons learnt from writing Automated Tests

The purpose of this article is to share some of the big lessons I’ve learnt in my 5+ years of writing automated tests for Campaign Monitor, an online service to send beautiful emails. If you are writing automated tests, it’s worth keeping these lessons in mind to ensure you end up with fast, maintainable and useful tests.

1 – Website testing is hard

Websites can be slow to load, both at a whole-page level and at an element level. This means you are constantly writing ‘waits’ into your code to make sure the element you want to test is available. As JavaScript techniques improve, more animations, hovers, fade-ins, call-outs and so on are being added to websites, which results in a much nicer experience for the user but can be a nightmare for writing a matching automated test. With these things in mind, I suggest the following:

  1. Evaluate whether this test case is worth automating. You can test functionality via an automated test (e.g. does clicking button X cause action Y to happen?), but automated tests are less useful at telling you whether a page animation looks correct visually, so you are better off keeping that as a manual regression check.
  2. Do as much of the automated test outside of the UI as possible. If you can use an API call or make direct database updates to get your system into the required state, or to verify the UI action performed in the app, you will save a lot of time executing the test and avoid possible failures before you even reach the point of interest for the test (see the sketch after this list).
  3. Use smart waits. As often as practical, you are better off doing your assert of an element state within a short loop. You usually don’t want your test to fail because your app took 4 seconds to log in instead of the usual 2 seconds. Always opt for waiting for a specific element state rather than Thread.Sleep!
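
For point 2, the idea is that anything that isn’t the behaviour under test goes through a faster channel. A rough sketch of the shape of this (the API client and page object below are hypothetical, not our actual code):

    using NUnit.Framework;

    [TestFixture]
    public class CampaignReportUiTest
    {
        [Test]
        public void ReportPageShowsASentCampaign()
        {
            // Arrange: create and send the campaign through a (hypothetical) API client
            // rather than clicking through the UI, so the test starts at the point of interest.
            var api = new TestApiClient();
            var campaignId = api.CreateAndSendCampaign("Report smoke data");

            // Act: only the behaviour under test, viewing the report, goes through the UI.
            var reportPage = ReportPage.OpenFor(campaignId);

            // Assert: verify what the user actually sees.
            Assert.That(reportPage.Title, Does.Contain("Report smoke data"),
                "Report page did not show the campaign that was created via the API");
        }
    }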

2 – Make your tests understandable

When you first write your test, you have all the context you need: you know why you chose each method over the alternatives, and you know what you are attempting to do at each step along the way. But what about the next person to look at that test in a year’s time, when the requirements of that test have changed? Writing a test that is easy to understand for all readers will save valuable time later on and will mean the test continues to provide value for a long time. How do we do this?

  1. Use meaningful variable and method names. You should be able to read through your test without diving down any deeper into the methods and know what is going on. To do this, you need your variables and methods to accurately and succinctly explain what they are for. For example:
    int outputRowNumber = GetTableRowForCustomer("customer name");
    compared to:
    int x = GetCustomer("customer name");
  2. Add comments when necessary. Your colleagues aren’t idiots, so they won’t need comments on every line of code, but if, for example, you have a line of code calculating the number of seconds left in the day, it might be worth a few words to explain what you are doing.

3 – Run your tests often

The purpose of creating and running these tests is to find bugs in your application, so you should be running all of your tests at least once a week, and most tests daily or more often. This will make it much easier to identify the change that caused a test to fail. How often you run the tests will depend on how often you are updating the code under test: the more often you update, the more often you should run the tests. You will probably have a mix of tests that take varying amounts of time, so be smart about when you run them; run the fast ones during the day, but leave the slow ones to run overnight so you don’t have to wait for the results.

We’ve had great success in having a small and quick subset of tests that do a basic check of a broad range of features across the application. This gives us quick feedback to know that there aren’t any major issues with core functionality. We call them ‘Smoke tests’, because if there’s smoke, there’s fire!

4 – Be regular in maintenance

Anyone who has tried automated testing will know that your tests get out of date as the features they are testing change. But if you keep an eye on the results of your tests regularly, and act on anything that needs updating in a speedy fashion, you will save yourself a lot of pain down the track. It’s much better to tackle the cause of 2 failing tests now than to deal with multiple causes of 30 failures in a month’s time. If you can address any maintenance tasks for updated feature behaviour within a week of it happening, the context will be fresh and you will be able to get on top of it quickly, before it spreads and merges with other issues and suddenly you have a sea of failed tests and don’t know where to start.

5 – Be technology independent

We previously used a technology called WatiN to drive our test automation and are now in the process of moving over to Selenium WebDriver. We’ve also used other technologies in the past, and each time a change is needed it takes a lot of time and effort to convert our solution. If/when the time comes that you feel the tool you are using is no longer meeting your needs, the cost to change can be huge. It’s possible every method you’ve written depends on your current technology choices, which either rules out ever changing or means there is a steep curve involved in changing over.

So instead, add an interface layer on top of your code which just says “I want the ‘browser’ to ‘click’ on ‘this link’”. The implementation can then figure out how to get the browser to click on the given link. When you decide you want to try a new technology, just add a new implementation of the interface and nothing else needs to change.
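
A minimal sketch of what that layer might look like (the names here are illustrative, not our actual code), with Selenium WebDriver as one possible implementation:

    using OpenQA.Selenium;

    // The tests only ever talk to this interface.
    public interface IBrowser
    {
        void GoTo(string url);
        void ClickLink(string linkText);
        string PageTitle { get; }
    }

    // One implementation per driving technology; swapping tools means adding a
    // new implementation, not rewriting every test.
    public class SeleniumBrowser : IBrowser
    {
        private readonly IWebDriver _driver;

        public SeleniumBrowser(IWebDriver driver)
        {
            _driver = driver;
        }

        public void GoTo(string url) => _driver.Navigate().GoToUrl(url);

        public void ClickLink(string linkText) => _driver.FindElement(By.LinkText(linkText)).Click();

        public string PageTitle => _driver.Title;
    }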

I will probably add a follow-up post, as summarising 5 years of learning into one post is hard!