Australian Testing Days 2016 Reflection – Day 1

On May 20-21 I went to the inaugural Australian Testing Days Conference in Melbourne. The first day involved a series of talks, mostly people sharing their experiences in testing, and the second day was an all-day workshop on test leadership. This post outlines the key messages from the sessions I attended on Day 1 and the key things I learnt from each one.

Day 1

Part 1 – What you meant to say (keynote)

First up, Michael Bolton discussed how the language we, as testers, use around testing can be quite unhelpful and cause confusion for those involved. For example, automated testing does not exist. You can certainly automate checking, which is mainly regression testing of existing behaviour. But you cannot automate testing, which is everything someone does to understand more about a feature, giving them the knowledge to decide how risky it is to release it. The way we communicate what we do shapes what others understand it to be and then expect of us. Similarly, it is important to understand the language others use when asking us to do something.

Another helpful lesson was that customer desires are more important than customer expectations. If customers are happy with your product, it doesn’t matter whether it met their expectations. If I don’t expect the Apple Watch will be of much use to me, but then I try it and discover that I love it, my expectations weren’t met but my desires were, and that’s a good result. Similarly, users might expect something totally different to what you produce, but if they discover that what you made is actually better than what they expected, that is likewise a good result.

Lessons learnt:

  • Be clear in communication of testing activities to avoid ambiguity and misalignment.
  • Seek the underlying mentality behind people’s testing questions.

Part 2 – Transforming an offshore QA team (elective)

Next up, Michele Cross shared the challenges she is facing in transforming an offshore, traditional and highly structured testing team into a more agile, context-driven testing team. The primary way to achieve any big change like this is to create an environment of trust, which comes in two forms. Cognitive trust is based on ability: you trust someone because of their skills and attributes, for example trusting a doctor you have just met to diagnose you because they have been studying and practising medicine for 20 years. Affective trust is based on relationships: you trust someone because of how well you know them and how you have interacted with them in the past, for example trusting a friend’s movie recommendation because of shared interests and experiences, not their skills as a movie reviewer.

To help establish this trust and initiate change, three C’s were discussed.

  • Culture – Understanding people and their differences. The context that has brought people to where they are now will greatly shape how they interact with people. Do they desire structure or independence? Are they open to conflict or desire harmony? Knowing this can help inform decisions and approaches.
  • Communication – Relating to other people can be just as hard as it is important. Large, distributed teams bring with them challenges of language, time zones, video conferencing and so on. It is crucial to find ways to address these concerns so that everyone is kept in the loop, aligned on direction and able to build relationships with each other.
  • Coaching – Teaching new skills through example and instruction. Create an environment where it is safe to fail so people feel comfortable to grow. Use practical scenarios to teach skills and get involved yourself.

Lessons learnt:

  • Consider the cultural context of the people you are interacting with, as it will shape how to be most effective in those interactions.
  • Learning through doing, and doing it alongside someone else, is a great way to learn.
  • Trust is built by a combination of personal relationships and technical abilities

Part 3 – It takes a village to raise a tester (elective)

Catherine Karena works at WorkVentures, which is all about helping underprivileged people develop life skills and technology skills so they can enter the tech workforce. She talked about figuring out what skills to teach by looking at where the most jobs are in the market and the common skills they require. This includes both technical and relational skills, as trainees will have to interact with the structures and other staff in companies.

On a more general note, a number of characteristics of a great tester were highlighted as skills to focus on teaching as well. A great tester is: curious, a learner, an advocate, a good communicator, tech savvy, a critical thinker, accountable and a high achiever. On the learning side, a few more tips were shared: teach through doing as much as possible, make it safe to fail, use industry experts and build up the learning over time.

Some interesting statistics were raised showing that those trained by WorkVentures over six months were rated by employers as equal to or greater in performance and value than relevant university graduates.

Lessons learnt:

  • Relational skills can be just as important as, if not more important than, technical skills when hiring new talent.
  • Learning in small steps with practical examples greatly improves the outcome.

Part 4 – Context Driven Testing: Uncut (elective)

Brian Osman talked about his experience growing in knowledge and ability as a tester and how greatly that experience was shaped by testing communities. He explained how a community of like-minded people can help drive learning as they challenge each other and bring different viewpoints across.

As a side note, he introduced the term ‘Possum Testing’, which he described as “testing that you don’t value, motivated by a fear of some kind” – for example, avoiding a form of testing because you don’t understand it or how to use it. This is an idea many people would recognise but perhaps find hard to articulate and discuss. Giving it a name instantly provides a means to bring it up in conversation, with people already sharing a good idea of the context and common ground in thinking.

Lessons learnt:

  • Naming ideas or common problems is a helpful way to direct future conversations and bring along the original context.
  • When looking to improve in a certain area or skill, find a community of others looking to do the same thing.
  • Use these communities to present ideas, defend them and challenge others’ ideas. Debates are encouraged.

Part 5 – Testing web services and microservices (elective)

Katrina Clokie (who also mentors me in conference speaking) spoke about her experience testing web services and microservices; a previous version of the talk is available online if you are interested. Starting with web services, she pointed out that each service will have different test needs based on who uses it and how they use it. Service virtualisation is a common technique used in service testing to isolate the front-end from inconsistent and potentially unstable back-ends. Microservice testing puts another layer in this model.

Some key guidelines for creating microservices automation were presented: it should be fit for purpose, remove duplication, make changes easy to merge, run continuously and be visible across teams.

An interesting learning technique called Pathways was presented. Pathways can be found on her website and each lists a whole bunch of resources for learning about a new topic. They are a helpful way of directing your learning time with a specific goal in mind.

Lessons learnt:

  • Make use of Katrina’s pathways for learning about a new area (for myself or as recommendations to others)
  • Get involved in code creation as early as possible to help influence a culture of testability
  • Write any automation with re-usability and visibility in mind

Part 6 – Test Management Revisited (keynote)

Anne-Marie Charrett finished up the day sharing some reflections and approaches she implemented during her time as Test Lead at Tyro Payments. She started by asking a question that has come up a few times in the community already: do we still need test management? Her response was that we do need a testing voice to go with all the new roles and technology coming through, like microservices and DevOps. This doesn’t mean we need Test Managers, who deal with providing stability, but rather Test Leaders, who can direct change. She talked about using the Satir Change Model to describe the process of change and its effect on performance.

She came in with the mentality of transforming Tyro to have the best test practice in Australia, and was not interested in blindly copying others. There are certainly benefits in learning from the approaches others take, but they should be assessed against your company’s environment. She discussed a number of testing-related areas a strategy might have to deal with: continuous delivery, testing in production, microservices, risk-based automation, business engagement, embedding testing, performance testing, operational testing, test environments, training and growth.

The next question was how to motivate people to learn. Hand-holding certainly isn’t ideal, but you also probably can’t expect people to spontaneously learn all the skills you’d like them to have. This needs coaching! And the coaching should be focused around a task that you can offer feedback on afterwards, then challenge them to try it again on their own.

An important question to ask when identifying what skills to teach is what makes a good tester at your company, because your needs will be different to those of other places. She then finished with a few guidelines around coaching, based on giving people responsibilities, improving the environment they work in and continuing to adapt as different needs and challenges arise.

Lessons learnt:

  • Any practice/process being used by others should be analysed and adapted to fit your context, not blindly copied.
  • Be a voice for testing and lead others to make changes in areas they need to improve on
  • Think about what makes a good tester at my company and how I measure up
  • Help prepare the organization/team for change and help them cope as they struggle through it

That’s a wrap for Day 1. You can find my review of Day 2 here, where I took part in a workshop on Coaching Testers with Anne-Marie Charrett.

Getting started testing Microservices

Overview

Microservices involve breaking down functional areas of code into separate services that run independently of each other. However, there is still a dependency on the type and format of the data being passed around that we need to take into consideration. If that data changes, and other services were depending on the previous format, then they will break. So, you need to test the changes between services!

To do this, you can either have brittle end-to-end integration tests that regularly need updating and are semi-removed from the process, or you can be smarter and just test that the individual services continue to provide and accept data as expected, highlighting when changes are needed. This approach identifies problems much more quickly, is more adaptive and less brittle than integration tests, and should be a lot faster to run as well.

The Solution

What I’m proposing is to integrate contract-based testing. (Note: we are only in the early stages of trying this out at my work.)
Here’s how it works:

Service A -- Data X --> Service B

Service A is providing Service B with some sort of data payload in JSON format X. We call Service A the provider and Service B the consumer. We start with the consumer (Service B) and determine what expectations it has for the data payload X; we call this the contract. We would then have standard unit tests that run with every build on that service, stubbing out the data coming in from a pretend ‘Service A’. This means that as long as Service B gets its data X in the format it expects, it will do what it should with it.
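
To make this a little more concrete, here is a rough sketch of what that consumer-side check might look like in C#. It is hand-rolled with NUnit and Json.NET rather than a contract-testing tool, and the field names and payload are made up purely for illustration:

```csharp
using Newtonsoft.Json.Linq;
using NUnit.Framework;

[TestFixture]
public class ServiceBConsumerContractTests
{
    // The contract: the shape of data X that Service B expects from Service A.
    // In a real setup this would live in a shared, versioned contract file.
    private const string StubbedDataX = @"{
        ""customerId"": 42,
        ""name"": ""Jane Example"",
        ""balance"": 100.50
    }";

    [Test]
    public void ServiceB_CanProcess_DataX_InTheAgreedFormat()
    {
        // Stub out 'Service A' entirely: parse the canned payload directly,
        // so this runs as a fast unit test on every build of Service B.
        var dataX = JObject.Parse(StubbedDataX);

        // Assert that the fields Service B actually relies on are present
        // and have the expected types.
        Assert.That(dataX["customerId"].Type, Is.EqualTo(JTokenType.Integer));
        Assert.That(dataX["name"].Type, Is.EqualTo(JTokenType.String));
        Assert.That(dataX["balance"].Type, Is.EqualTo(JTokenType.Float));

        // From here Service B's own processing logic would be exercised with the
        // stubbed payload, e.g. new CustomerProcessor().Process(dataX)
        // (a hypothetical class standing in for whatever Service B does with X).
    }
}
```

The key point is that the expectations live with the consumer and run as ordinary unit tests on every build.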

The next part is to make sure that Service A knows Service B is depending on it to provide data X in a given format or with given data, so that if a change is needed, Service B (or any other service dependent on X) can be updated in line with it, or a non-breaking change can be made instead.

This is consumer-driven contract testing. This is nice: it means we can guarantee that Service A is providing the kind of data Service B expects, without having to test their actual connections. Spread this out to a larger scale, with five services dependent on Service A’s data and Service A giving out five types of data to different subsets of services, and you can certainly see how this makes things simpler without compromising effectiveness.
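
On the provider side, the same contract gets checked against what Service A actually produces, so a breaking change fails Service A’s build before it ever reaches Service B. A minimal sketch of that verification (again hand-rolled, with a made-up test-environment endpoint) might look like this:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;
using NUnit.Framework;

[TestFixture]
public class ServiceAProviderVerificationTests
{
    // Hypothetical test-environment endpoint where Service A serves data X.
    private const string ServiceAEndpoint = "http://localhost:5000/api/customers/42";

    [Test]
    public async Task ServiceAResponse_SatisfiesServiceBsContract()
    {
        using (var client = new HttpClient())
        {
            var body = JObject.Parse(await client.GetStringAsync(ServiceAEndpoint));

            // Verify only what the consumer (Service B) declared it needs.
            // Extra fields are fine; missing fields break the contract.
            Assert.That(body["customerId"], Is.Not.Null, "Contract broken: customerId is missing");
            Assert.That(body["name"], Is.Not.Null, "Contract broken: name is missing");
            Assert.That(body["balance"], Is.Not.Null, "Contract broken: balance is missing");
        }
    }
}
```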

A variation of this is to have Service B continue to stub out actually getting data from Service A for the CI builds. But instead of testing on Service A that it still meets Service B’s expected data format, we can put that test on Service B as well, so that it also checks the stub being used to simulate Service A against what is actually coming in from Service A on a daily basis. When it finds a change, the stub is updated and/or a change request is made to Service A.
Both types have advantages and disadvantages.
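
For that stub-checking variation, a sketch of the daily test on Service B’s side might look something like the following. The endpoint and fields are again made up; the idea is simply that the stub itself is what gets compared against the real response:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;
using NUnit.Framework;

[TestFixture]
public class ServiceAStubFreshnessTests
{
    // The same stubbed payload Service B uses in its CI builds.
    private const string StubbedDataX =
        @"{ ""customerId"": 42, ""name"": ""Jane Example"", ""balance"": 100.50 }";

    // Hypothetical address of the real Service A, hit on a daily schedule
    // rather than on every build.
    private const string ServiceAEndpoint = "http://servicea.example.internal/api/customers/42";

    [Test]
    public async Task Stub_StillMatchesWhatServiceAActuallyReturns()
    {
        using (var client = new HttpClient())
        {
            var real = JObject.Parse(await client.GetStringAsync(ServiceAEndpoint));
            var stub = JObject.Parse(StubbedDataX);

            // Compare only the fields present in the stub, i.e. the ones Service B
            // actually cares about. Anything else in the real response is ignored.
            foreach (var field in stub.Properties())
            {
                Assert.That(real[field.Name], Is.Not.Null,
                    $"Service A no longer returns '{field.Name}' - update the stub or raise a change request.");
                Assert.That(real[field.Name].Type, Is.EqualTo(field.Value.Type),
                    $"The type of '{field.Name}' in Service A's response has changed.");
            }
        }
    }
}
```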

In Practice

Writing these sorts of tests can be done manually, but there are tools that make it easier as well. Two such products are Pacto and Pact. They are both written in Ruby; Pacto is by Thoughtworks and Pact by RealEstate.com.au. Of these two I think Pact is the better option, as it appears to be more regularly updated and to have better documentation. PactNet is a .NET version of Pact written by SEEK Jobs; .NET is the language used at my work, so PactNet is the solution we’re looking into.

These tools provide a few different options along the lines of the concepts described above. One such use case is that you provide the tool with an HTTP endpoint; it hits the endpoint and makes a contract out of the response (saying what a response should look like). Then in subsequent tests the same endpoint is hit and the result compared with the previously saved contract, so it can tell if there have been any breaking changes.
I’m not sure how well these tools handle specifying that only part of the response really matters, with the rest free to change without breaking anything; that would be a more useful implementation.
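
As a rough illustration of that record-then-compare idea (hand-rolled, not the actual Pact or Pacto API), here is a sketch that limits the comparison to the fields named in the recorded contract:

```csharp
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

// A hand-rolled sketch of 'record a contract, then compare against it'.
// Only the fields named in the recorded contract are checked, so the rest
// of the response is free to change without failing anything.
public static class ContractChecker
{
    // First run: hit the endpoint and save its response as the contract.
    public static async Task RecordAsync(string endpoint, string contractFile)
    {
        using (var client = new HttpClient())
        {
            var response = await client.GetStringAsync(endpoint);
            File.WriteAllText(contractFile, response);
        }
    }

    // Later runs: hit the endpoint again and fail if any field from the
    // recorded contract has disappeared or changed type.
    public static async Task<bool> VerifyAsync(string endpoint, string contractFile)
    {
        var contract = JObject.Parse(File.ReadAllText(contractFile));

        using (var client = new HttpClient())
        {
            var current = JObject.Parse(await client.GetStringAsync(endpoint));

            foreach (var field in contract.Properties())
            {
                var actual = current[field.Name];
                if (actual == null || actual.Type != field.Value.Type)
                {
                    return false; // breaking change detected
                }
            }
        }

        return true;
    }
}
```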

Further reading

Note that most of the writing available online about these tools refers to the Ruby implementation, but it’s transferable to the .NET version.

Influential people

People that are big contributors to this space worth following or listening to:

  • Beth Skurrie – Major contributor and speaker on Pact from REA
  • Martin Fowler – Writes a lot on microservices, how to build them and test them, on a theory level, not about particular tools.
  • Neil Campbell – Works on the PactNet library

Got any experience testing microservices and lessons to share? Other resources worth including? Please comment below and I’ll include them.