TestBash Sydney 2018 Reflection – Part 2

The second half of my summary and reflections on the TestBash 2018 conference in Sydney. For the first half: TestBash Sydney 2018 Reflection – Part 1

Avoid Sleepwalking to Failure! On Abstractions and Keeping it Real in Software Teams

by Paul Maxwell-Walters – @TestingRants | http://testingrants.blogspot.com.au

The main focus of the talk seemed to be on Hypernormalisation, where you are so much a part of the system that you can’t see beyond it. For example, the message of society is that everything is OK, when really it isn’t, and you know it, but perhaps can’t explain why.

This can apply to testers, where we just accept things the way they are and don't try to make things better, perhaps because we can't see or imagine how it could be better.

  • Hyper-reality – A world that you interact with that isn't real (e.g. DisneyWorld). This is how we sometimes relate to software testing.

Abstraction #1 – Quality

The definition of quality is different for everyone. James Bach declared in 2009 that quality is dead – a mini-hypernormalisation of sorts. He didn't like the way things were going, and had somewhat accepted it.

Abstraction #2 – Measurement

In his 2017 book "Changing Times: Quality for Humans in a Digital Age", Rich Rogers notes that we assume that once we have tested all of our criteria, we have good quality (but this is wrong!). If a team's definition of quality or measurement isn't true to reality, then why persist with it?

Therefore, we don’t have to accept the way things are just because we don’t see how it could be any better.

My thoughts

Personally I found this a really hard talk to follow (as might be evident in my notes above), between the old Russian news footage and the sheer amount and complexity of content. I felt the core point – don't just accept what is thrown at us in the name of quality – could have been made much more simply, with more time spent on how to apply it, or on which areas of the job we should actively be looking to do better.

I do strongly agree with the idea of not accepting bad practices just because that's how it is now, and striving to make things better (i.e. being agile), but I found the delivery missed the mark a little bit.

Test Representatives – An Alternative Approach to Test Practice Management

by Georgia de Pont – @georgia_chunn

Georgia started by giving some of the context of the patterns and techniques used at Tyro: pairing (XP), TDD and the Spotify squad model. We then followed the journey the testing team at Tyro has gone through, starting with the move from a disconnected test team to embedded testers.
This created some challenges:

  • Loss of alignment and knowledge sharing within the testing group
  • Lack of consistency in test approaches and in how testers were utilised within each team
  • Lack of a test manager (it was a flat structure)

The question arose of how to address these issues: should they hire a test manager or take a grassroots approach? (Evidently, they took the grassroots approach.)

Each product tribe selected a test representative. They would support their tribe's testers, gather information on issues and be a point of contact for that tribe. The representatives came up with a pipeline of improvement initiatives to work through, with ideas coming in a variety of ways. The representative group, or "rep group", would share information back to their test teams and would have regular meetings with the TestOps team.
Some of the initiatives:

  • Clarity of role of embedded tester, e.g. what are the test practices they can offer
  • Improving the recruitment process. Candidates were given a take-home test, which wasn't reflective of the workplace because people usually pair, so they created a pairing exercise instead.
  • Onboarding new test engineers and probation expectations
  • Performance criteria for test engineers for them and their leads to use
  • Upskilling test engineers (an ongoing effort): making sure training is available, organising internal and external speakers, conferences, etc.
  • Co-ordinating engineering-wide test efforts
  • Developing a quality engineering strategy, involving various stakeholders, to identify any current roadblocks in the testing efforts and work to remove them

Some steps for success to make a representative group work in other workplaces:

  • Start small. Think about how many teams each rep covers (aim for 1 rep per 2-4 teams)
  • In forming the rep group, consider whether it's best to select people or ask for volunteers (they selected people to start with)
  • Communicate the work of this group to the rest of the organisation
    • Maybe hook into existing communications
    • Include ways to get involved
  • Ask for feedback (e.g. surveys)
    • Now they do this more informally
    • Asking: Is it working/effective?
  • Run Rep group retrospectives
  • Ensure support from engineering leadership
    • Maybe include them in meetings
    • Get their feedback/input
    • Give them an awareness of upcoming testing concerns
    • Get budget support

My thoughts

Georgia did quite well for her first talk; it was clear that she had practiced presenting it, she had helpful slides to guide the content, and she carried confidence in her delivery. With more practice presenting she will only get better, and it will be interesting to see her speak again in a few years with more experience.

The content was relevant, describing a smart solution to problems that many organisations are now feeling with the recent direction testing roles are taking. Even for companies not in a position to form a 'rep group', there were definite takeaways in how a testing group can, and should, interact with other stakeholders within engineering. The initiatives the group came up with look really interesting, and I would've liked to hear much more about those, as there appeared to be some directly applicable opportunities in there – perhaps they will be the subject of future talks?

Enchanting Experiences – The Future of Mobile Apps

by Parimala Hariprasad – @PariHariprasad | http://curioustester.blogspot.co.uk/

Pari finished out the day as the final keynote, speaking on where she sees the design of applications and devices heading in a heavily connected world and what opportunities could be unlocked.

The talk started with machine input, defined as information that devices can find on their own.

Products are developing all the time, getting more complex, with more interfaces, but perhaps what we need is to get simpler again, and automatically detect more of the options we would otherwise have to choose (through machine input).
We then looked at a range of machine input types:

  • Camera – e.g. auto detect details of a credit card held in front of the camera and pre-enter
  • Location – detect where you are and make suggestions, including smart defaults like detecting the country and local currency
  • History – Each time you fill out forms, remember what data you put in, particularly between site visits.
  • There are lots of other sensors in our phones that can be hooked into.

Could we detect which floor someone is on, which shop they are in, what they have recently searched for and offer them a discount as they walk around the shop, without prompting?

The internet of things is leading to a more and more connected home and we looked at some ideas companies like Google are developing in this space.

  • Designing great products is about great experiences.
  • Security and privacy is really important with all this data being available
    • Devices should not dictate how we live; they should make things easier and work for us
  • Learn about design and usability to see how it can impact your testing efforts/plan.

My thoughts

No particular takeaways from this one, as I felt it wasn't really a talk targeted at testers – more a talk for developers or designers who can directly incorporate Pari's design thoughts into their work. It was interesting to think through some of the options machine learning could unlock in various parts of our lives, from the point of view of "that would be cool to see!". There will definitely be challenges in testing any of these types of features, which wasn't covered much beyond acknowledging that security will be key, but the content did spark thoughts on what would need to be considered when testing these sorts of technologies.

99 Second talks

99 second talks then wrapped up the day, with about 20 people speaking. For most it was a chance to practice public speaking in a somewhat non-threatening environment; for others it was a chance to promote their company or group, or just to introduce a topic or technique. It was hard to take much in given the format, and I didn't note down any takeaways, but some interesting topics were raised.

Overall thoughts on the conference

One of the big selling points leading up to TestBash was the community feel of the event, particularly seen in the pre and post conference meetups. I was unable to attend either, so I didn’t get the full experience and can’t really comment on how the conference lived up to that hype. The conference day felt much like most other conferences I’ve been to in terms of interactions between sessions and general flow of the day.

There wasn't much chance for questions with most of the talks, which was unfortunate, as great discussion can come out of this as people dive into the bits that were missed or that really interest them. That said, question time can be hit and miss depending on which direction the conversation gets steered, so it's not a big concern.

The talk selection was pretty good considering it was a single track, so every talk had to be applicable to a wide audience, covering a generally relevant topic while still being interesting enough for anyone to learn from. That's a hard set of criteria to meet, so hats off to the selection team. If you want a deep dive on a particular topic, this is perhaps not the best type of conference to attend, though it could still happen on the right topic.

Speakers are well looked after by TestBash, in both communications and compensation, though as a local speaker there was little for me to be compensated for, so that is definitely more of a perk for non-local speakers.

The swag was better than most conferences, aiming for practical items that can be re-used beyond the day.

Overall I found the day enjoyable. I got good feedback on my talk, and I have some renewed ideas and techniques to take back into my workplace from other talks, which makes for a successful conference in my book. This is helped by sharing my thoughts afterwards, which I recommend others try after any conference or training, even if just with peers in their workplace.

Would I go to the next one? Maybe. If I wasn't speaking, I would need to look at the schedule first and see what talks are being covered. I'm definitely glad we have another testing conference in Australia; we seem to be adding more to the mix every year at the moment, and each one lifts the overall quality of presentations as people get more practice, we get more international speakers, and there is a wider range of topics and conference formats to choose from. Each conference will have to work harder and harder to stand out and provide a good experience to attendees, which is a win for everyone! So I'm glad TestBash has come to Australia, and is coming back next year!

TestBash Sydney 2018 Reflection – Part 1

On the 19th of October 2018 I attended TestBash Sydney, the first time this event has come to Australia. I spoke on "Advocating for Quality by turning developers into Quality Champions". I'll share more on that topic in a different post, and instead focus this post series on what I got out of the other talks presented that day.

These are the things that stood out to me, my own reflections and some paraphrasing of content, to help share the lessons with others.

Next Level Teamwork: Pairing and Mobbing

by Maaret Pyhäjärvi – @maaretp | http://visible-quality.blogspot.co.uk/

  • Things are better together!
  • When we pair or mob test on one computer, we all bring our different views and knowledge to the table.
  • With mob programming, only the best of the whole group goes into the code, rather than one person giving their best and worst.
    • Plus, we get those “how did you do that!” moments for sharing knowledge.
  • Experts aren’t the ones who know the most, but who learn the fastest.
  • Traditional pairing is one person watching what the other is doing and double-checking everything. This is boring for the observer and leads to an "I've got an idea, give me the keyboard!" mentality.
  • You should keep rotating the keyboard every 2-4 minutes to keep everyone engaged.
  • Strong-style pairing shifts the focus to an "I've got an idea, you take the keyboard" mentality, where you explain the idea to the other person and get them to try it out.
    • You are reviewing what is happening with what you want to happen, rather than guessing someone else’s mindset.
  • It can be hard to pair when skillsets are unequal, e.g. a developer and a tester, with the feeling that you are slowing them down or forcing them into things they don't want to do. Strong-style pairing helps with this.
  • Some pairing pitfalls
    • Hijacking the sessions. Only doing what one person wants to try, or not getting the point
    • Giving up out of impatience. You might not see the value at first, but persist anyway.
    • Working with 'them'. Pairing with someone you're uncomfortable with – maybe use mobbing rather than pairing for this.
  • At Agile Conf 2011, an 11-year-old girl was able to participate in a mob session by stating her intent and having others follow it through, and by listening to others and doing what they said to do.
  • Mobbing basics:
    • Driver (no thinking). Instruct them by giving “Intent -> Location -> Details”
    • Designated navigator, to make decisions on behalf of group.
    • Conduct a retrospective.
    • Typically use 6-8 people, but if everyone is contributing or learning, it’s the right size.
    • Need people similar enough to get along, but diverse enough to bring different ideas.
  • I might have a good idea, but don’t know how to code/test it.

My thoughts

We use pairing at work a lot already, and I didn't really learn anything new to introduce here. Mobbing doesn't really appeal to me: for it to be beneficial, you have to justify the time spent having a whole group work on one task, which to me only works when the work produced by individuals is far inferior. Mobbing should produce a better outcome, but it will be quite slow, and we get most of the way there by reviewing people's code to get a solid overall solution.

Well presented, but I can't see mobbing working outside of some particular scenarios.

How Automated E2E Testing Enables a Consistently Great User Experience on an Ever Changing WordPress.com

by Alister Scott – @watirmelon | https://watirmelon.blog/

Note: A full transcript of Alister’s talk, including slides, is available on his website.

I didn't really understand the start of Alister's talk about hobbies, or why it was included, until it finished with the phrase: "Problems don't stop, they get exchanged or upgraded into new problems. Happiness comes from solving problems." This seems to be what the introduction was getting at. The rest of the talk followed a pattern of presenting a problem around automation, a solution to that problem, the new problem that came out of the solution, then the next solution, and so on.

  • Problem: Customer flows were breaking in production (They were dogfooding, but this didn’t include Signup)
  • Solution: Automated e2e test of signup in production
  • P: Non-deterministic tests due to A/B tests on signup
  • S: Override A/B test during testing
    • Including a bot to detect new A/B tests in the PR, with a prompt to update e2e tests
  • P: Too slow, too late, too hidden (since running in prod)
  • S: Parallel tests, canaries on merge before prod, direct pings
  • P: Still have to revert merges, slow local runs (parallel only in docker)
  • S: Live branch tests with canaries on every PR
  • P: Canaries don’t find all the problems (want to find before prod)
  • S: (Optional) Live branch full suite tests (use for large changes via a github label)
  • P: IE11 & Safari 10 issues
  • S: IE11 & Safari 10 canaries
  • (All these builds report back directly into the GitHub PR) – NICE!
  • P: People still break e2e tests
  • S: You have to let them make their own mistakes
    • Get the PR author that broke the test logic to update the test, don’t just fix it

Takeaways:

  • Backwards law: acceptance of non-ideal circumstances is a positive experience
  • Solving problems creates new problems
  • Happiness comes from solving problems
  • Think of what you ‘can’ do over what you ‘should’ do
  • Tools can’t solve problems you don’t have
  • Continuous delivery is only possible without manual regression testing
  • Think in AND not OR

My thoughts

I thought this was a clever way to present a talk, and the challenges are familiar, so it was interesting to see how Alister's team has been addressing them. Being a talk about 'what' rather than 'how' meant there were fewer direct actions to take out of it. I already know the importance of automated tests, running them as early and often as possible, running them in parallel, and similar tips that came up in the talk. So for me, I'm interested in exploring an optional build into a test environment or Docker container where e2e tests can be run by setting an optional label on a GitHub PR.

A Tester’s guide to changing hearts and minds

by Michelle Playfair – @micheleplayfair

  • Testing is changing, and testers are generally on board, but not everyone else is.
    • Some confusion around what testers actually do
    • Most devs probably don’t read testing blogs, attend testing conferences or follow thought leaders etc. (and vice versa)
  • 4 P’s of marketing for testers to consider about marketing themselves:
    • Product, place, price and promotion
  • Product: How can you add value
    • What do you do here? (you need to have a good answer)
    • Now you know what value you bring, how do you sell it?
  • Promotion: Build relationships, grow network, reveal value
    • You need competence and confidence to be good at your job
    • Trust is formed either cognitively or affectively based on culture/background
    • You need to speak up and be willing to fail. People can’t follow you if they don’t know where you want to take them
    • Learned helpfulness
      • Think about how you talk to yourself
      • Internal vs external. “I can’t do that” -> Yes it’s hard, but it’s not that you are bad at it, you just need to learn.
      • Permanent vs temporary. “I could never do that” -> Maybe later you can
      • Global vs situational. Some of this vs all of this
  • Step 1: Please ask questions, put your hand up! Tell people what you know
    • Develop your own way of sharing, in a way that is suitable for you.
  • If you can't change your environment, change your environment (i.e. find somewhere else).

My thoughts

Michelle presented very well, and the topic of 'selling testing' is particularly relevant given changes to the way testing is viewed within modern organisations. This was a helpful overview to start thinking through how to tackle the problem. The hard work is still on the tester to figure things out, but using the 4 P's marketing approach will help structure that communication.

My talk

My talk “Advancing Quality by Turning Developers into Quality Champions” was next, but I’ll talk more about that in a separate blog post later.

A Spectrum of Difference – Creating EPIC software testers

by Paul Seaman & Lee Hawkins – @beaglesays | https://beaglesays.blog/ & @therockertester | https://therockertester.wordpress.com/

Paul and Lee talked about the program they have set up to teach adults on the autism spectrum about software testing through the EPIC testability academy. An interesting note early on regarding language is that their clients indicated they preferred an Identity-first language of “autistic person” over phrases like “person with autism”.

They had identified that it's crucial to focus their content on testing skills directly applicable in a workplace, to keep iterating on that content, and to cater for differences within the group by rearranging content, changing approaches and so on. They found it really helpful to reflect on and adapt the content over time, while making sure to give it enough of a chance to work first. Homework for the students also proved quite useful, despite initial hesitations about including it in the course.

My thoughts

It's encouraging to hear about Paul and Lee's work – a really important way to improve the diversity of our workforce and give valuable skills and opportunities to people who can often be overlooked in society. It was also helpful to think a little about structuring a training course in general and what they learnt from doing so.

I did find the paired nature of the presentation interfered with the flow a little, but it wasn't a big problem.

Exploratory Testing: LIVE

by Adam Howard – @adammhoward | https://solavirtusinvicta.wordpress.com/

This was an interesting idea for a talk: a developer at Adam's work had hidden some bugs in a feature of their company website and put it behind a feature flag for Adam to test against, without making it public to anyone else. Adam then did some exploratory testing live on stage, trying to find the issues the developer had hidden for him, demonstrating some exploratory testing techniques for us at the same time. Adam also had access to the database to do some further investigation if needed.

The purpose of doing this exercise was to show that exploratory testing is a learned skill, and to help with learning to explain yourself. By marketing yourself and your skills like this, others can and will want to learn too.

  • Draw a mindmap as we go to document learnings
  • Consider using the “unboxing” heuristic, systematically working through the feature to build understanding.
    • E.g. Testing a happy path, learn something, test that out, make a hypothesis and try again. Thinking about how to validate or invalidate our observation.
    • Sometimes you might dive into the rabbit hole a little when something stands out.
    • Make a note of any questions to follow up or if something doesn’t feel right.
  • It can be helpful to look at help docs to see what claims we are making about the feature and what it should do. (Depends on what comes first, the help doc or the feature).

My thoughts

An interesting way to present a talk on how to do exploratory testing – by seeing it in action. There were a few times I could see the crowd wanted to participate but didn't get much chance; it felt like it was kind of an interactive session, but kind of not, so I wasn't quite clear on the intention there. Seeing something in action is a great way to learn, though I would've liked to see him try this on a website he didn't already know, or at least one that more people would be familiar with, so we could all be on a more level playing field. It made for interesting viewing regardless.

Part 2 of my summary: TestBash Sydney 2018 Reflection – Part 2

Australian Testing Days 2016 Reflection – Day 1

On May 20-21 I went to the inaugural Australian Testing Days conference in Melbourne. The first day involved a series of talks, mostly sharing experiences people have had in testing, and the second day was an all-day workshop on test leadership. This post outlines the key messages from the sessions I attended and the key things I learnt from each one.

Day 1

Part 1 – What you meant to say (keynote)

First up, Michael Bolton discussed how the language we, as testers, use around testing can be quite unhelpful and cause confusion for those involved. For example, automated testing does not exist. You can certainly automate checking, which is mainly regression tests of existing behaviour. But you cannot automate testing, which is everything someone does to understand more about a feature, giving them knowledge to decide on how risky it is to release it. The way we communicate what we do will impact what others understand it as and then expect of us. Similarly, it is important to understand the language others use when asking us to do something.

Another helpful lesson was that customer desires are more important than customer expectations. If they are happy with your product, it doesn’t matter if it met expectations. If I don’t expect the Apple Watch will be of much use to me, but then I try it and discover that I love it, my expectations weren’t met, but my desires were, and it’s a good result. Similarly, users might expect something totally different to what you produce, but if they discover that what you made is actually better than their expectations, it is likewise a good result.

Lessons learnt:

  • Be clear in communication of testing activities to avoid ambiguity and misalignment.
  • Seek the underlying mentality behind people’s testing questions

Part 2 – Transforming an offshore QA team (elective)

Next up, Michele Cross shared the challenges she is facing in transforming an offshore, traditional and highly structured testing team into a more agile, context-driven testing team. The primary way to achieve any big change like this is to create an environment of trust, which comes in two forms. Cognitive trust is based on ability: you trust someone because of their skills and attributes, for example trusting a doctor you have just met to diagnose you because they have been studying and practising medicine for 20 years. Affective trust is based on relationships: you trust someone because of how well you know them and how you have interacted with them in the past, for example trusting a friend's movie recommendation because of shared interests and experiences, not their skills as a movie reviewer.

To help establish this trust and initiate change, three C’s were discussed.

  • Culture – Understanding people and their differences. The context that has brought people to where they are now will greatly shape how they interact with people. Do they desire structure or independence? Are they open to conflict or desire harmony? Knowing this can help inform decisions and approaches.
  • Communication – Relating to other people can be just as hard as it is important. Large, distributed teams bring with them challenges of language, timezones, video conferencing etc… It is crucial to find ways to address these concerns so that everyone is kept in the loop, aligned on direction and is able to build relationships with each other.
  • Coaching – Teaching new skills through example and instruction. Create an environment where it is safe to fail so people feel comfortable to grow. Use practical scenarios to teach skills and get involved yourself.

Lessons learnt:

  • Consider the cultural context of the people you are interacting with, as it will shape how to be most effective in those interactions
  • Learning by doing, and doing alongside someone, is a great way to learn
  • Trust is built by a combination of personal relationships and technical abilities

Part 3 – It takes a village to raise a tester (elective)

Catherine Karena works at WorkVentures, which is all about helping underprivileged people develop life skills and technology skills to help them enter the tech workforce. She talked about how to figure out what skills to teach by looking at where the most jobs are in the market and the common skills required. This includes both technical and relational skills, as trainees will interact with structures and other staff in companies.

On a more general note, a number of characteristics of a great tester were highlighted as things to focus on teaching as well. A great tester is: curious, a learner, an advocate, a good communicator, tech savvy, a critical thinker, accountable and a high achiever. On the learning side, a few more tips were shared: teach through doing as much as possible, make it safe to fail, use industry experts and build up the learning over time.

Some interesting statistics were shared showing that those trained by WorkVentures over 6 months were rated by employers as equal to or greater in performance and value than comparable university graduates.

Lessons learnt:

  • Relational skills can be just as, if not more, important than technical skills in hiring new talent.
  • Learning in small steps with practical examples greatly improves the outcome.

Part 4 – Context Driven Testing: Uncut (elective)

Brian Osman talked about his experience growing in knowledge and ability as a tester and how greatly that experience was shaped by testing communities. He explained how a community of like-minded people can help drive learning as they challenge each other and bring different viewpoints across.

A side note he introduced was the term 'Possum Testing', which he described as "testing that you don't value, motivated by a fear of some kind" – for example, avoiding a form of testing because you don't understand it or how to use it. This is an idea that many people would recognise, but could perhaps find hard to articulate and discuss. Giving it a name instantly provides a means to bring it up in conversation, with people already sharing a good idea of the context and common ground in thinking.

Lessons learnt:

  • Naming ideas or common problems is a helpful way to direct future conversations and bring along the original context
  • When looking to improve in a certain area/skill, find a community of others looking to do the same thing.
  • Use these communities to present ideas, defend them and challenge others' ideas. Debate is encouraged

Part 5 – Testing web services and microservices (elective)

Katrina Clokie (who also mentors me in conference speaking) spoke about her experience testing web services and microservices; a previous version of the talk is available online if you are interested. Starting with web services, she pointed out that each service will have different test needs based on who uses it and how they use it. Service virtualisation is a common technique used in service testing to isolate the front-end from inconsistent and potentially unstable back-ends. Microservice testing adds another layer to this model.

Some key guidelines for creating microservices automation were presented: it should be fit for purpose, remove duplication, make it easy to merge changes, run continuously and be visible across teams.

An interesting learning technique called Pathways was presented; these can be found on her website and list a whole bunch of resources for learning about a new topic. They are a helpful way of directing your learning time with a specific goal in mind.

Lessons learnt:

  • Make use of Katrina’s pathways for learning about a new area (for myself or as recommendations to others)
  • Get involved in code creation as early as possible to help influence a culture of testability
  • Write any automation with re-usability and visibility in mind

Part 6 – Test Management Revisited (keynote)

Anne-Marie Charrett finished up the day sharing some reflections and approaches she implemented in her time as Test Lead at Tyro Payments. She started by asking: do we still need test management? It's a question that has been asked a few times in the community already. The response is that we do need a testing voice to go with all the new roles and technology coming through, like microservices and DevOps. This doesn't mean we need Test Managers who provide stability, but rather Test Leaders who can direct change. She talked about using the Satir Change Model to describe the process of change and its effect on performance.

She brought to Tyro a mentality of building the best test practice in Australia, and was not interested in blindly copying others. There are certainly benefits to learning from the approaches others take, but they should be assessed against your company's environment. She discussed a number of testing-related strategies that you might have to deal with: continuous delivery, testing in production, microservices, risk-based automation, business engagement, embedding testing, performance testing, operational testing, test environments, training and growth.

The next question was how to motivate people to learn? Hand-holding certainly isn’t ideal, but you also probably can’t expect people to spontaneously learn all the skills you’d like them to have. This needs coaching! And the coaching should be focused around a task that you can then offer feedback on afterwards. Then challenge them to try it again on their own.

An important question to ask when identifying what skills to teach is what makes a good tester at your company, because your needs will be different to other places. She finished with a few guidelines around coaching, based on giving people responsibilities, improving the environment they work in and continuing to adapt as different needs and challenges arise.

Lessons learnt:

  • Any practice/process being used by others should be analysed and adapted to fit your context, not blindly copied.
  • Be a voice for testing and lead others to make changes in areas they need to improve on
  • Think about what makes a good tester at my company and how I measure up
  • Help prepare the organization/team for change and help them cope as they struggle through it

That's a wrap for Day 1. Find my review of Day 2 here, where I took part in a workshop on Coaching Testers with Anne-Marie Charrett.

Getting started testing Microservices

Overview

Microservices involve breaking down functional areas of code into separate services that run independently of each other. However, there is still a dependency on the type and format of the data being passed around that we need to take into consideration. If that data changes, and other services were depending on the previous format, then they will break. So you need to test for changes between services!

To do this, you can either have brittle end-to-end integration tests that regularly need updating and are somewhat removed from the process, or you can be smarter and just test that the individual services continue to provide and accept data as expected, highlighting when changes are needed. This approach leads to much quicker identification of problems, won't be as brittle as integration tests, and should be a lot faster to run as well.

The Solution

What I'm proposing is to integrate contract-based testing. (Note: we are only in the early stages of trying this out at my work.)
Here's how it works:

Service A -- Data X --> Service B

Service A is providing Service B with some sort of data payload in the JSON format X. We call Service A the provider and Service B the consumer. We start with the consumer (Service B) and determine what expectations it has for the data package X; we call this the contract. We would then have standard unit tests that run with every build of that service, stubbing out the data coming in from a pretend 'Service A'. This means that as long as Service B gets its data X in the format it expects, it will do what it should with it.
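
To make that concrete, here is a minimal, library-free sketch of what such a consumer-side test could look like in C# with NUnit: Service B's expectations about data X are checked against a stubbed payload standing in for Service A. The service names, endpoint and payload fields are all hypothetical, it assumes the Newtonsoft.Json package for JSON parsing, and a real implementation would more likely lean on a contract-testing tool (see "In Practice" below).

using Newtonsoft.Json.Linq;  // assumes the Newtonsoft.Json NuGet package
using NUnit.Framework;

[TestFixture]
public class ServiceBContractExpectations
{
    // A canned payload standing in for Service A – this is effectively the contract:
    // the shape of data X that Service B says it needs.
    private const string StubbedDataX =
        @"{ ""accountId"": 42, ""amount"": 100.0, ""currency"": ""AUD"" }";

    [Test]
    public void DataX_ContainsTheFieldsServiceBDependsOn()
    {
        var dataX = JObject.Parse(StubbedDataX);

        // Assert only on the fields Service B actually consumes; anything else in the
        // payload is free to change without this contract breaking.
        Assert.That(dataX["accountId"], Is.Not.Null);
        Assert.That(dataX["amount"], Is.Not.Null);
        Assert.That(dataX["currency"], Is.Not.Null);
    }
}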

The next part is to make sure that Service A knows that Service B is depending on it to provide data X in a given format or with given data, so that if a change is needed, Service B (or any other service dependent on X) can be updated in line, or a non-breaking change can be made instead.

This is consumer-driven contract testing. It means that we can guarantee Service A is providing the kind of data Service B expects, without having to test their actual connections. Spread this out to a larger scale, with 5 services dependent on Service A's data, and Service A giving out 5 types of data to different subsets of services, and you can certainly see how this makes things simpler without compromising effectiveness.

A variation of this is to have Service B continue to stub out actually getting data from Service A for the CI builds. But instead of testing on Service A that it still meets the expected data format of Service B, we can put that test on Service B as well, so it also checks the stub being used to simulate Service A against what is actually coming in from Service A on a daily basis. When it finds a change, the stub is updated and/or a change request is made to Service A.
Both types have advantages and disadvantages.

In Practice

Writing these sorts of tests can be done manually, but there are tools which make it easier. Two such products are Pacto and Pact, both written in Ruby; Pacto is by Thoughtworks and Pact by RealEstate.com.au. Of the two, I think Pact is the better option, as it appears to be more regularly updated and better documented. PactNet is a .NET version of Pact written by SEEK Jobs; .NET is the language used at my work, so that is the solution we're looking into.

These tools provide a few different options along the lines of the concepts described above. One such use case is that you provide the tool with an HTTP endpoint; it hits the endpoint and makes a contract out of the response (saying what a response should look like). In subsequent tests the same endpoint is hit and the result compared with the previously saved contract, so it can tell if there have been any breaking changes.
I'm not sure how well these tools handle specifying that only part of the response really matters, with the rest free to change without breaking anything – that would be a more useful implementation.
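
For a flavour of what the consumer side looks like with PactNet itself, here is a rough sketch: the consumer (Service B) spins up a mock provider, declares the interaction it expects – including only the response fields it cares about – and the library records that as a pact file for Service A to verify later. This is based on my reading of an early version of the PactNet API, so treat the exact class and method names as an approximation; the endpoint, port and payload fields are made up for illustration.

using NUnit.Framework;
using PactNet;
using PactNet.Mocks.MockHttpService;
using PactNet.Mocks.MockHttpService.Models;

[TestFixture]
public class ServiceAPactTests
{
    [Test]
    public void ServiceA_ProvidesDataX_InTheShapeServiceBExpects()
    {
        // Consumer (Service B) side: define what we expect Service A to provide.
        IPactBuilder pactBuilder = new PactBuilder()
            .ServiceConsumer("ServiceB")
            .HasPactWith("ServiceA");

        IMockProviderService mockServiceA = pactBuilder.MockService(1234); // arbitrary local port

        mockServiceA
            .UponReceiving("a request for data X")
            .With(new ProviderServiceRequest
            {
                Method = HttpVerb.Get,
                Path = "/data/x" // hypothetical endpoint
            })
            .WillRespondWith(new ProviderServiceResponse
            {
                Status = 200,
                Body = new // only the fields Service B actually cares about
                {
                    accountId = 42,
                    amount = 100.0,
                    currency = "AUD"
                }
            });

        // ... point Service B's HTTP client at http://localhost:1234 and exercise it here ...

        mockServiceA.VerifyInteractions(); // did Service B make the call it declared?
        pactBuilder.Build();               // writes the pact (contract) file for Service A to verify later
    }
}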

Further reading

Note that most of the writing available online about these tools is referring to the Ruby implementation, but it’s transferable to the .Net version.

Influential people

People who are big contributors in this space and are worth following or listening to:

  • Beth Skurrie – Major contributor and speaker on Pact from REA
  • Martin Fowler – Writes a lot on microservices, how to build them and test them, on a theory level, not about particular tools.
  • Neil Campbell – Works on the PactNet library

Got any experience testing microservices and lessons to share? Other resources worth including? Please comment below and I'll include them.

Running Parallel Automation Tests Using NUnit v3

With the version 3 release of NUnit, the ability to run your automated tests in parallel was introduced (a long-running feature request). This brings with it the power to speed up your test execution time by 2-3 times, depending on the average length of your tests. Faster feedback is crucial in keeping tests relevant and useful as part of your software development cycle.

As parallel test execution is new to v3, support is still somewhat limited, but I've managed to set up our automated test solution at my work, Campaign Monitor, to run tests in parallel and wanted to share my findings.

Technology

We are using the following technologies, so you may have to change some factors to match your setup, but it should provide a good starting point.

  • Visual Studio
  • C#
  • Selenium Webdriver
  • TeamCity

The Setup

Step 1 – Install the latest NUnit package

Within Visual Studio, install or upgrade the NUnit v3 package for your solution via the NuGet Package Manager (or your choice of package management tool) using:

Install-Package NUnit
OR Update-Package NUnit

Step 2 – Choose the tests that will be run in parallel

In the current release, your tests can be configured to run in parallel at the TestFixture level only. So to set up your tests to run in parallel, choose the fixtures that you want to run in parallel and put a [Parallelizable] attribute at the start of the TestFixture.
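
For example (the fixture and test names here are made up, just to show where the attribute goes):

using NUnit.Framework;

[TestFixture]
[Parallelizable] // this fixture may now run alongside other parallelizable fixtures
public class CheckoutTests
{
    [Test]
    public void CustomerCanCompleteCheckout()
    {
        // ... Selenium WebDriver steps for this flow would go here ...
        Assert.Pass();
    }
}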

Step 3 – Choose the number of parallel threads to run

To specify how many threads will run in parallel to execute your tests, add a 'LevelOfParallelism' attribute to the AssemblyInfo.cs file within each project, with whatever value you desire. How many threads your tests can handle will be linked to the number of cores on the machine running the tests. I recommend either 3 or 4.
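
In code that's a single assembly-level attribute, along these lines (the value of 3 is just an example):

using NUnit.Framework;

// Run up to 3 worker threads in parallel for the tests in this assembly.
[assembly: LevelOfParallelism(3)]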

Step 4 – Install a test runner to run the tests in parallel

Since this is new to NUnit version 3, some test runners do not support it yet. There are two methods I've found which work.

1 – NUnit Console is a NuGet package that can be installed and run from the command line. Install it using:

Install-Package NUnit.Console

Then open a cmd window and navigate to the location of the nunit-console.exe file installed with the package. Run the tests with this command:

nunit-console.exe <path_to_project_dll> --workers=<number_of_threads>

This will run the tests in the location specified, using the number of worker threads specified, and output the results to the command window.

2 – NUnit Test Runner is an extension for Visual Studio. Search for 'NUnit3 Test Adapter' (I used version 3.0.4, created by Charlie Poole). Once installed, build your solution to populate the tests into the Test Explorer view. You can filter the tests visible using the search bar to find the subset of tests you want (as decided in Step 2), then right-click and run the tests. This will also run your tests in parallel, using the 'LevelOfParallelism' attribute defined in Step 3 to determine the number of worker threads. This gives you nicer output to digest than the console runner, but still feels a bit clunky to use.

You're now set up and running your tests in parallel! Pretty easy, right? The tricky part I found was then getting these tests to run in parallel through TeamCity, our continuous integration software.

(Optional) Step 5 – Configure TeamCity to run your tests in parallel

We use TeamCity to run our automated tests against our continuous integration builds, so the biggest benefit of this project was enabling the TeamCity builds to run in parallel. Here's how I did it.

Note: First, I tried using MSBuild to run the NUnit tests as detailed here, since this was the way we previously ran our tests before the NUnit v3 beta. However, this didn't work, as it requires you to supply the NUnit version in the build script, and that doesn't support NUnit 3 (or 3.0.0, or 3.0.0-beta-4, or any other variation I tried). So that was a no-go.

Second, I tried using the NUnit test build step and choosing the v3 type (only available in TeamCity v9 onwards). This led me through a whole string of errors with conflicting references and unavailable methods, and despite my best efforts it would not run the tests. So that was a no-go as well.

The method I decided upon was to use a command line step and run the NUnit console exe directly. I first set up an MSBuild step that would just copy the NUnit console files and the test project files to a local directory on the build agent running the tests. Then I set up a command line step with these settings:

Run: Executable with parameters
Command Executable: <path_to_nunit-console.exe>
Command parameters: <path_to_test_dlls> --workers=<thread_count>

And with this, I was able to run parallel tests through TeamCity! I'm sure the setup will get easier as support improves, but for now it's a good solution that gives us our test suite results 3-5 times faster 🙂

Did you find this helpful, or have any tips you want to share? Please comment below!