By Gaurav Singh

Here’s a question you might have pondered over already:

Do you think devs, PMs, designers, or business folks in your team can test as well as a dedicated tester?

If your answer is ‘no’, this blog’s here to try and change your perspective.

From my own experience, I’ve found that when given sufficient context, ‘non-testers’ can test so well that the team’s tester can actually take a vacation and come back to BAU with awesome products and features having shipped in their absence.

If you’re thinking this is just wishful thinking or a pipe dream at best, I come bearing facts and real-life examples.

Here’s a simple practice we can adopt to ensure that the team collectively gets better at testing, and the process becomes fun to a certain degree: Mob testing.

Mob testing is an effective way to encourage entire teams to participate in testing.

What is mob testing?

Mob testing is a natural extension to pair testing, where, instead of just 2 people, we have a larger group of people testing the app/system together. It’s a wonderful way to get the whole team engaged in the testing process while sharing knowledge and context about the problem at hand.

Contrary to popular belief, too many cooks do not spoil the broth in the case of mob testing. The process increases efficiency, and working in a group actually works as an advantage: extra pairs of unbiased eyes are extremely helpful in catching bugs, spotting design issues, and checking different aspects of the system.

Below is the sample format I follow within my team at Gojek, and it has worked quite well. I have tried this mostly while testing consumer-facing mobile apps, but the format/technique is generic enough to be applied easily in other contexts.

The format

🔦 Guiding principles

  • A session should not be more than 1.5 hours. Anything more might not be too engaging for the group. If the session goes beyond this time, try to take breaks in between.
  • Everyone takes a turn on the keyboard/mouse/device for a fixed duration, guided by a timer
  • This approach follows strong-style navigation, wherein the driver is not allowed to act on their own ideas; instead, their actions are guided by instructions from navigators in the group, who take turns
  • End the session with an optional and concise retrospection which goes for about 10 minutes, to gather feedback on what could be improved for the next session
  • Remember to be kind, considerate, and respectful to everyone in the group. Even if some do not understand much about the feature under review, they’re guaranteed to take away valuable context. Also, try to clarify if anyone has questions about the System Under Test (SUT) during the session
  • Hear everyone’s voice actively and be inclusive
  • Try to build on top of what’s already tested by earlier navigators and check different aspects of the app

🎲 Rules of the game

  • The person on the keyboard is not allowed to decide what happens next
  • Instructions coming from the group should be at the highest possible level of abstraction. 3 steps form this abstraction:
      • Intent: Explain what you want, in simple terms
      • Location: If the intent is not enough, then specify the location where the action has to be taken
      • Details: Low-level details, if needed
  • Only the driver has permission to do actions, no one else should touch the keyboard/device
  • Take turns and ensure every driver is on the keyboard for not more than 10 minutes
  • Start with one person as the designated navigator and this should be rotated
  • Someone should take the role of facilitator/time cop and ensure the group remains focused on the task at hand. This person or someone else designated may take notes on any observations made by the group and summarise them at the end
  • The session should be largely exploratory in nature, but you may give it more focus by coming up with charters or tours to explore in the session
  • Once everyone is done with their turn, rinse and repeat until time runs out
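To make the rotation concrete, here’s a minimal sketch (a hypothetical helper, not something from our team’s tooling) that turns the rules above into a driver schedule: each driver gets a fixed 10-minute turn, and the rotation repeats until the 1.5-hour session budget runs out.

```python
from datetime import datetime, timedelta

def mob_schedule(participants, turn_minutes=10, session_minutes=90, start=None):
    """Build a driver rotation for a mob testing session.

    Each participant drives for `turn_minutes`; once everyone has had a
    turn, the rotation repeats ("rinse and repeat") until the session
    budget of `session_minutes` is used up.
    """
    start = start or datetime(2024, 1, 1, 10, 0)  # arbitrary example start time
    end = start + timedelta(minutes=session_minutes)
    schedule, t, i = [], start, 0
    # Keep handing over the keyboard while a full turn still fits.
    while t + timedelta(minutes=turn_minutes) <= end:
        driver = participants[i % len(participants)]
        schedule.append((t.strftime("%H:%M"), driver))
        t += timedelta(minutes=turn_minutes)
        i += 1
    return schedule

# Example: a four-person mob gets 9 turns of 10 minutes in a 90-minute session.
team = ["dev", "PM", "designer", "tester"]
for slot, driver in mob_schedule(team):
    print(slot, driver)
```

A facilitator could print this at the start of a session so everyone knows when their turn on the keyboard comes up; in practice a simple phone timer does the same job.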

Does it work?

Absolutely. I’ve tried this with my team of devs, PMs, and designers, and have received positive feedback from everyone.

This works wonders while developing a feature that has to ship soon, since everyone in the team sees the ground realities.

The practice allows the stakeholders to have a clearer idea about release readiness. I’ve observed this work much better than even a sprint demo since you actually get stakeholder involvement in the testing process.

It’s very useful for onboarding new folks in the team, since they get to understand the SUT quickly and can use the live session to ask questions.

On average, we’ve found over 20 bugs per mob testing session in our team, along with noteworthy corner cases suggested by devs, PMs, or designers.

The icing on the cake is that everyone on the team gets a peek into the mind of the tester. The exercise helps the team understand that testing is a skill one can learn with an open mind, not a mysterious, unattainable dark art.

Footnotes

Give mob testing a shot to realize the value it brings. To learn more, check out this blog by Maaret Pyhäjärvi.

If you have thoughts to share, hmu on Twitter or in the comments.

Want to be hooked to reading more about how we build our #SuperApp? Here’s a start.
Want to build it with us? Join us.