Notes from the 2015 ANZTB Conference in Auckland

I was lucky enough to make my first trans-Tasman journey to Auckland last week to attend the 2015 ANZTB Conference. It was an enjoyable single-stream event (a format I personally like) with some memorable talks. Here are some of my favorites:

Secure by Design – Laura Bell – slides

I loved the essence of this talk, which was basically (in my own words) ‘take security testing off the pedestal’. Laura shared five simple tools and techniques to make security more accessible to developers and testers alike. One key takeaway for me was to focus on getting the language right: ‘security vulnerabilities hide behind acronyms, jargon and assumptions’. For example, most people understand the difference between authentication (proving identity) and authorization (access rights), but both terms are commonly shortened to ‘auth’, which many people use interchangeably (and confusingly). A great talk.

Innovation through Collaboration – Wil McLellan

This was probably my favorite talk of the day: a well-told story about building a collaborative co-working space called ‘EPIC’ for IT start-ups in Christchurch following the 2011 earthquake. The theme was how collaboration encourages innovation, and how even companies in competition benefit from collaborating. My key takeaway was how the design of a space can encourage collaboration: for example, EPIC has only a single kitchen for the whole building, and each tenancy doesn’t have its own water. So if someone wants a drink or something to eat, they need to visit a communal area. Doing this enough times means you start interacting with people in the building you wouldn’t normally meet in your day-to-day work.

Through a different lens – Sarah Pulis – slides

Sarah is the Head of Accessibility Services at PwC in Sydney, and she shared some good background on why accessibility is important, along with some of the key resources for analysing/evaluating and improving the accessibility of systems. Whilst I knew most of the resources she mentioned, I thought her talk was very well put together.

Well done to the team that organized the conference.

Auckland is a beautiful city, by the way; here are a couple of pics I took:

Test your web apps in production? Stylebot can help.

I test in production way too much for my liking (more details in an upcoming blog post).


Testing in production is risky, especially because I test in a lot of different environments and they all look the same. The only way I could tell which environment I was in was by looking closely at the URL. This was problematic, as it led to me doing things in a production environment thinking I was in a pre-production or test environment – oops.

I initially thought about putting some environment-specific code/CSS into our apps to make the background colour different for each environment, but the solution was complex and it still couldn’t tell me at a glance that I was using production.

I recently found the Stylebot extension for Chrome, which allows you to locally tweak styles on any website you visit. I installed the extension and set the background colour of our production sites to bright red, so now I immediately know when I’m using production: it’s bright red, be extra careful.
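As a sketch, a Stylebot rule boils down to plain CSS applied to pages on a given domain (the selector and colour here are just an illustration, not my actual config):

```css
/* Illustrative Stylebot rule for a production domain,
   e.g. myapp.example.com: make production unmissable. */
body {
    background-color: #ff0000 !important; /* bright red = production, be careful */
}
```

`!important` helps the local rule win over the site’s own styles.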

Stylebot Example

I’ve also set some other environments to contrasting bright colours (purple, yellow, etc.) so I know at a glance which environment I am using.

I like this solution as I haven’t had to change any of our apps and it works in all environments, which is just what I needed.

Do you do something similar? Leave a comment below.

Do you even need a software tester on your agile team?

This post is part of the Pride & Paradev series.

I admit this topic is a little strange to have in a book about software testing. But I thought I would include it nonetheless as it’s relevant to our industry and it’s good for you to have some background information about the reasoning behind hiring testers.

Do you even need a software tester on your agile team?

You don’t need a software tester on your agile team

You’ve probably heard the story. Facebook, one of the most popular sites on the whole Internet (as of writing), has no testers. The Facebook engineer responsible for a feature is responsible for testing it.

This is because Facebook, by and large, does not need to produce high quality software. They ship quickly and as a result, they ship bugs. Sure, they’ve got a lot of automated tests, but we all know there is still a need for human testing.

Evan Priestley, an ex-Facebook engineer, explains that Facebook gets by without testers by doing a few things:

  • They rely on extensive dogfooding by internal engineers;
  • They have extensive real time production monitoring for faults;
  • They release code to a beta site 24 hours before release where major clients are forced to do QA testing (to avoid integration problems); and
  • They provide channels for ex-employees to report bugs.

What can we learn from this? If you don’t particularly care about quality, have good production monitoring, and can get internal engineers and major partners to do your QA then you may get away with not having a tester on your agile team.

You definitely need a software tester on your agile team

Most agile teams and product companies sooner or later realize they need a software tester.

Software testers provide a unique questioning perspective which is critical to finding problems before go-live. Even with solid automated testing in place: nothing can replicate the human eye and human judgement.

I’ve noticed that a lot of organizations that typically didn’t have any software testers have started to hire or dedicate staff as testers, as they have begun to see the benefits of testers or have started feeling the pain of their absence.

Take 37signals, the web development company founded in 1999, which only created a dedicated QA role in 2013. This news was fairly well hidden in a 37signals blog post introducing a new support team member:

“…You may have noticed a picture of Michael’s new working environment a few months ago. We recently integrated QA testing into our development process, with Michael taking the lead. His effort has prevented potential problems and bugs in every new feature in Basecamp. Look for more details about this in the future.”

~ Joan Stewart, 37signals

Another example of the relatively recent introduction of testing roles is the Wikimedia Foundation, which only recently hired a tester (or two) to take the lead on testing Wikimedia products, including Wikipedia.

If your team is too small to support a full-time dedicated tester, then look for somebody who can do both software testing and business analysis. That way you still have somebody who can be responsible for advocating quality, but you don’t have to grow your team unnecessarily.

Are software testers the gatekeepers or guardians of quality?

This post is part of the Pride & Paradev series.

Are software testers the gatekeepers or guardians of quality?

Software Testers are the Guardians and Gatekeepers of Quality

“Quality is value to some person.”

~ Jerry Weinberg

Working as the sole tester in a small agile team, you are the guardian of quality. You care about quality, and it’s your job to fight the right fight to ensure it prevails. As Jerry Weinberg says, quality is value to some person: you need to ensure that value is realized.

User stories aren’t ‘done’ until you’ve tested each of them, which means you get to provide information to the Product Owner about each of them. You define the quality bar and you work closely with your team and product owner to strive for it.

You’ll soon realize that it’s better to build quality in than to test it in, so you’ll make sure there are clearly defined acceptance criteria that can be checked off, which keeps testing focused, efficient and effective. You’ll work with the programmers to make sure that as many acceptance tests as possible are automated alongside the code, so that the regression testing burden is lessened each time a story is delivered.

One way to build quality into your development process is to introduce some humorous, passive-aggressive yet lighthearted signage around your story wall:

Tick the Acceptance Criteria

Whilst your Product Owner ultimately wants a great product, it’s the programmers you work with closely, day to day, to make this happen. You’re the guardian of quality, and they’ll respect you for making them look good.

Software Testers aren’t the Guardians and Gatekeepers of Quality

Whilst you may think you define the quality of the system, it’s actually the development team as a whole that does: developers are the ones writing good or poor quality code.

Whilst you can provide information and suggestions about problems, product owners can and should overrule you: it’s their product for their business that you’re building. You won’t always get what you consider to be important; business decisions often trump technical ones.

You’re not perfect. Everyone is under pressure to deliver and if you act like an unreasonable gatekeeper of quality, you’ll quickly gain enemies or have people simply go around or above you.

And that’s no fun.

Do software testers need technical skills?

This post is part of the Pride & Paradev series.

Do software testers need technical skills?

Software Testers Need Technical Skills

“Man is a tool-using animal. Without tools he is nothing, with tools he is all.”
~ Thomas Carlyle

You’re testing software day in and day out, so it makes sense to have an idea of how that software works internally, which requires a deep technical understanding of the application. The better your understanding of the application, the better the bugs you raise will be. If you can understand what a stack trace is and why it’s happening, you’ll be more effective at communicating what has happened and why.

“Most good testers have some measure of technical skill such as system administration, databases, networks, etc. that lends itself to gray box testing.”

~ Elizabeth Hendrickson – Do Testers Have to Write Code?

As you’re testing, you can easily dive into the database and run some SQL queries to make sure things actually did what they were meant to, or discover and test an exposed web service with different input combinations, as it’ll be quicker and provide the same results.
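As a minimal sketch of that kind of database check (the schema, table and values here are hypothetical, not from any real application), a tester might verify behind the UI that an action really persisted:

```python
import sqlite3

# Hypothetical example: an in-memory database standing in for the app's real one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO orders (status) VALUES ('SUBMITTED')")  # what the app would do

# The kind of query a tester might run after submitting an order in the UI,
# to confirm the screen's "success" message matches what was actually stored:
status = conn.execute("SELECT status FROM orders WHERE id = ?", (1,)).fetchone()[0]
assert status == "SUBMITTED", f"unexpected status: {status}"
```

The point isn’t the query itself but the habit: don’t trust the UI’s word for it when the database is one query away.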

You’ll know IE7 JavaScript quirks and will be able to communicate these to a programmer and work on a solution that gracefully degrades.

Gone are the days when you’d be emailed a link to a test environment somewhere, use it to conduct some manual testing, and provide some feedback. More often than not, you’ll start by setting up your own integrated development environment on your own machine so that you can pull changes as they’re committed by programmers and find issues sooner.

You’ll also probably be asked to build a test environment that other people can use, and a continuous deployment pipeline to automatically update that environment when appropriate.

Without technical skills you’re going to struggle with this, as it’s not just a matter of testing the functionality of the application, but of testing the entire system: that it can be built, deployed, internationalized, scaled, etc.

Soon you’ll start coming across other testing challenges, such as how to test internationalization, localization and accessibility, and how to locate or generate appropriate test data. This may involve writing your own scripts that take field labels and translate them to a test locale, to check screens for hard-coded data. Again, these activities require technical skills.
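As a rough sketch of that label-translation idea (the labels and accent map are my own illustration, not from any real project), a script can pseudo-localize known labels so that any text still appearing in plain English on a translated screen is probably hard coded:

```python
# Sketch: pseudo-localize field labels so untranslated (hard-coded)
# text stands out on screen. Labels and the accent map are illustrative.
ACCENTED = str.maketrans("aeiouAEIOU", "áéíóúÁÉÍÓÚ")

def pseudo_localize(label: str) -> str:
    """Accent and bracket a label; on-screen text NOT in this form is suspect."""
    return f"[{label.translate(ACCENTED)}]"

labels = ["First name", "Last name", "Submit"]
for label in labels:
    print(label, "->", pseudo_localize(label))  # e.g. Submit -> [Súbmít]
```

Loading these pseudo-localized strings into the test locale makes hard-coded screens jump out at a glance, much like the bright-red production background does.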

Often programmers will show disdain for testers without any technical skills as they won’t understand the technical challenges a programmer faces, and won’t be able to communicate issues in a technical way.

The more technical skills you have in your toolbelt, the more effective you can be as a software tester.

But having strong technical skills and wanting to do nothing but programming as the sole tester on a small agile team is a recipe for disaster.

Software Testers Don’t Need Technical Skills

“A particularly terrible idea is to offer testing jobs to the programmers who apply for jobs at your company and aren’t good enough to be programmers. Testers don’t have to be programmers, but if you spend long enough acting like a tester is just an incompetent programmer, eventually you’re building a team of incompetent programmers, not a team of competent testers.”
~ Joel on Software on Testers

Hiring testers with technical skills over testing ability is a common mistake. A tester who primarily spends his/her time writing automated tests will spend more time getting his/her own code working instead of testing the code that your customers will use.

In a small agile team of, say, seven programmers and one tester, the tester will spend nearly all his/her time conducting exploratory and story testing, so there will be no time to write automated tests; that will need to be done by the programmers as part of developing a story. Hiring a tester who expects to predominantly write code on a small agile team is a big mistake.

“Since testing can be taught on the job, but general intelligence can’t, you really need very smart people as testers, even if they don’t have relevant experience.”

~ Joel on Software on Testers

What technical skills a tester lacks can be made up for with intelligence and curiosity. Even if a tester has no deep underlying knowledge of a system, they can still be very effective at finding bugs through skilled exploratory and story testing. Often non-technical testers have better shoshin: ‘a lack of preconceptions’ when testing a system. A technical tester may take technical limitations into consideration, but a non-technical tester can be better at questioning why things are the way they are and rejecting technical complacency.

Often non-technical testers will have a better understanding of the subject matter and be able to communicate with business representatives more effectively about issues.

You can be very effective as a non-technical tester, but it’s harder work and you’ll need to develop strong collaboration skills with the development team to provide support and guidance for more technical tasks such as automated testing and test data discovery or creation.

Pride and Paradev: a collection of software testing contradictions

The test of a first rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.

— F. Scott Fitzgerald

It was the best of times, it was the worst of times

– Charles Dickens: A Tale of Two Cities

As I recently said, I will be writing a series of articles here about agile software testing, focused on the theme of how to survive as a solo tester on an agile team. The idea is that these articles will eventually become a book titled Pride and Paradev (read this article if you don’t know what a paradev is).

I had been struggling to decide upon a creative approach for this series/book when I recently had an epiphany in the shower (they can happen at the best of times).

Most software testing books are a verbose collection of best practices: what you should do as a tester to be successful, sometimes in the form of lessons learned. I particularly dislike best practices. They’re black and white, and I prefer to see the world in shades of grey: a best practice in one context may make no sense in another.

“The color of truth is gray.”
~ André Gide

Acknowledging this is easy, but writing articles according to this doctrine isn’t. It’s much easier to write a best-practice book, as you can squarely sit on one side of the fence and write from there.

One approach to writing a book about grey areas is to make things sufficiently nebulous/generalist that they could appear to work in different contexts. But not only is this hard, the outcome for the reader is weak and ineffective.

What I am proposing instead is “Pride and Paradev: a collection of software testing contradictions”, which, as the title states, will be a collection of contradictory statements, with supporting evidence, about agile software testing. The added benefit of writing contradictions is that no one can really argue with you, as you’re already arguing with yourself.

Pride and Paradev: a collection of software testing contradictions by Alister Scott

I haven’t done a lot of planning as yet, but here are some examples of contradictions that may feature as articles and in the final book:

  • Generate test data for all your testing
  • Use production data for all your testing
  • Keep a record of bugs as they are found
  • Don’t keep a record of bugs at all
  • Test everything in IE
  • Don’t test anything in IE
  • Automate your acceptance tests
  • Don’t automate your acceptance tests
  • Your automated acceptance tests should be in the same language as your codebase
  • Your automated acceptance tests can be in whatever language the testers want
  • Testers should automate acceptance tests
  • Developers should automate acceptance tests
  • Test extensively before going to production
  • Test most things after go-live in production
  • Testers should write the acceptance criteria
  • Testers shouldn’t write the acceptance criteria
  • Involve real users in your testing
  • Don’t involve real users in your testing

One of the key objectives is to keep things lightweight, that’s why I think the blog article format will nicely translate into a chapter in the final book. As Dr Seuss famously said: “So the writer who breeds, more words than he needs, is making a chore, for the reader who reads.”

I’m looking forward to writing the first article soon. I am also looking forward to the great feedback that I will receive before collating these into an eBook on Leanpub.

Please feel free to leave a comment below about what you’d like to see covered in a collection of agile software testing contradictions.

Yet another software testing pyramid

A fellow ThoughtWorker, James Crisp, recently wrote an interesting article about his take on an automated test pyramid.

Some of the terminology he used was interesting, which is what I believe led to some questioning comments and a follow up article by another fellow ThoughtWorker, Dean Cornish, who stated the pyramid “oversimplifies a complex problem of how many tests you need to reach a point of feeling satisfied about your test coverage“.

I believe that one of the most unclear areas of James’s pyramid is the use of the term ‘acceptance tests’, which James equates to roughly 10% of the automated test suite. One commenter stated these should instead be called functional tests but, as James points out, aren’t all tests functional in nature? I would also argue that all tests are about acceptance (to different people), so I would rephrase the term to express what is being tested, which in his case is the GUI.

The other fundamental issue I see with James’s testing pyramid is that it is missing exploratory/session-based testing. The only mention of exploratory testing is when James states ‘if defects come to light from exploratory testing, then discover how they slipped through the testing net’, but I feel this could be better represented on the pyramid. Exploratory, or session-based, testing builds confidence in the automated tests that are being developed and run. Without it, an automated testing strategy is fundamentally flawed. That’s why I include it in my automated testing pyramid as the Eye of Providence (I originally got the ‘eye’ idea from another ThoughtWorker, Darren Smith).

Show me the Pyramid

Without further ado, here’s my automated test pyramid. It shows what the automated tests test: the GUI, APIs, integration points, components and units. I’ve put dotted lines between components, integration points and APIs, as these are similar and you may not need to test all of them.

Another way of looking at this is to consider the intention of the tests. Manual exploratory tests and automated GUI tests are business facing, in that they strive to answer the question: “are we building the right system?”. Unit, integration and component tests are technology facing, in that they strive to answer the question: “are we building the system right?”. So another version of the automated testing pyramid could simply plot these two styles of tests, showing that you’ll need more technology-facing than business-facing automated tests, as the business-facing tests are more difficult to maintain.


By removing the term acceptance, and showing what the automated tests test, I believe the first automated test pyramid shows a solid approach to automated testing. Acceptance tests and functional tests can be anywhere in the pyramid, but you should limit your GUI tests, often by increasing your unit test coverage.

The second pyramid is another way to view the intention of the tests, but I believe both resolve most of the issues Dean has with James’s pyramid. Additionally, they both include manual session-based testing, a key ingredient in an automated test strategy that should be shown on the pyramid so it is not forgotten.

I welcome your feedback.

Upcoming agile testing meetup in Brisbane, Australia

I just got this email from ThoughtWorks, and have registered to attend. See you there if you’re in Brisbane.

You are invited to the launch meeting of the Brisbane chapter of the Agile Alliance Australia (AAA). Last year’s inaugural Agile Australia conference established the AAA and it is envisaged that local chapters will provide ongoing education and knowledge-sharing.

ThoughtWorks Global Testing Practice Lead Kristan Vingrys will be our guest speaker, discussing the Changing Role of a Tester. Kristan has over 10 years’ experience in software testing, encompassing a wide range of testing practices for both products and applications within a diverse range of industries. He has presented on agile testing and written articles, including a chapter in The ThoughtWorks Anthology. He has been involved in solution delivery, coaching and mentoring teams with a focus on testing, agile processes and how the two complement each other.

The Changing Role of a Tester

IT development is getting faster: business now expects applications in months, not years. Testing has to keep up and cannot afford to be seen as a bottleneck. Traditional ways of testing no longer work with the rapid software development methodologies being used; there is no time for big up-front test case design or multiple-cycle test executions. Testers have to become more agile and embrace change, while still providing quality information about the application being developed.

Testing on an agile project is different to a waterfall project; it is not just about doing waterfall in smaller iterations. Nor is it focused on finding as many defects as possible; instead the goal is to work as a team delivering quality working software that satisfies customer needs. There are some new skills testers will need to learn, but they do not need to throw away everything they already know. Changing their mindset about the how, when and why of testing will help testers adapt their existing skills and become invaluable on any agile team.

When: 5:30pm, Thursday 13th May
Where: See the meetup site

After the discussion, we will head somewhere local for drinks, food and networking for those interested; location TBA.

Please register at the Brisbane Chapter site: Brisbane Chapter – First Meeting

Look forward to seeing you there!

Update 19 July 2010:

Link to a great write up by Craig Smith is here, plus slides are here (pdf).