A while back, Gojko Adzic wrote a blog post asking for readers’ suggestions on techniques to visualise the software quality of a system in development. I’ve recently been giving this some thought and came up with the following idea.
Early last year at a client site, I met a genuinely lovely person working as a tester. She did traditional manual testing of a large complex system being developed in a non-iterative (big bang) manner.
I noticed her clear red pen had a small label on it: a piece of paper with a date sticky taped on, so I asked her about what it meant. She told me how she hates waste and loves to use a pen in its entirety, and the date is a way to keep track of how long she’s been using the particular pen for.
We went on talking so I could understand what she used the red pen for. What she’d do was create lots of manual test cases in a template on her computer, and then print them all to create a large pile of paper when it came time to execute the tests. As she executed these tests, her red pen would be used to mark failures on these test case printouts, and to write notes about what the defects were. As the pen was clear, and you could see how much red ink remained, she joked about how the pen was an indicator of how good the system she worked on was. She’d used lots of red ink since the start of the year, so the system wasn’t good! Aha!
I started reading some suggestions for visualising software quality. I see two problems with most of them: firstly, most are far too complex, and secondly, most rely on capturing detailed metrics, which creates overhead in itself.
What if you could have a lo-fidelity way to visualise software quality without creating any overhead? Perfect. Enter red and green ink.
A proposal for visualising software quality using red and green ink
Let me start by saying that this idea is freshly baked, possibly half cooked: I haven’t even tried it and I don’t know if it’ll work at all. But I think it’s cool and that’s why I am sharing it.
Imagine you’re working in a small cross-functional team developing a piece of software. You work as the tester on the team and have varied responsibilities: work with the business analyst and SME to define acceptance criteria, work with a developer to automate these acceptance criteria, and conduct exploratory (session-based) testing on individual user stories as they are completed.
At the start of the project, you’ll need two additional things:
- Two brand new matching red and green pens with clear barrels (so you can see the ink)
- A ream of blank white paper: roughly A4 or A3 sized (or whatever you can get your hands on)
Now you’re ready to visualise software quality
Each story has a set amount of time allocated to it for exploratory (session-based) testing. When you are about to start an exploratory testing session, grab the two pens and a couple of blank white sheets of paper. As you test, write your thoughts on the paper in either ink: good thoughts (me likey) in green, bad thoughts (bugs, crashes, poor design etc.) in red.
Instant feedback on software quality
As soon as the session is complete, stick these sheets of paper on your wall, and talk to the team about them, explaining each red and green thought. The paper will instantly show what you think of the quality of the system: a predominantly green sheet is good, a predominantly red one is bad.
Longer term feedback on software quality
Over time, the ink remaining in each pen will paint a picture (excuse the pun) of the quality of your system. Are you using loads of red ink and not much green? That’s a signal worth talking about.
As I mentioned, this is just an idea I recently had, and I have no idea whether it’d be successful in visualising software quality. But I reckon it’d be fun to try.