100,000 e2e selenium tests? Sounds like a nightmare!

This story begins with a promo email I received from Sauce Labs…

“Ever wondered how an Enterprise company like Salesforce runs their QA tests? Learn about Salesforce’s inventory of 100,000 Selenium tests, how they run them at scale, and how to architect your test harness for success”

Sauce Labs email

100,000 end-to-end selenium tests and success in the same sentence? WTF? Sounds like a nightmare to me!

I dug further and got burnt by the molten lava: the slides confirmed my nightmare was indeed real:

Salesforce Selenium Slide

“We test end to end on almost every action.”

Ouch! (and yes, that is an uncredited image from my blog used in the completely wrong context)

But it gets worse. Salesforce have 7500 unique end-to-end WebDriver tests which are run on 10 browsers (IE6, IE7, IE8, IE9, IE10, IE11, Chrome, Firefox, Safari & PhantomJS) on 50,000 client VMs that cost multiple millions of dollars, totaling 1 million browser tests executed per day (which equals 20 selenium tests per day, per machine, or over 1 hour to execute each test).
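
For those checking the arithmetic, those numbers work out like this:

1,000,000 browser tests per day ÷ 50,000 VMs = 20 tests per VM per day
24 hours ÷ 20 tests = 1.2 hours per test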

Salesforce UI Testing Portfolio

My head explodes! (and yes, another uncredited image from this blog used out of context and with my title removed).

But surely that’s only one place right? Not everyone does this?

A few weeks later I watched David Heinemeier Hansson say this:

“We recently had a really bad bug in Basecamp where we actually lost some data for real customers and it was incredibly well tested at the unit level, and all the tests passed, and we still lost data. How the f*#% did this happen? It happened because we were so focused on driving our design from the unit test level we didn’t have any system tests for this particular thing.
…And after that, we sort of thought, wait a minute, all these unit tests are just focusing on these core objects in the system, these individual unit pieces, it doesn’t say anything about whether the whole system works.”

~ David Heinemeier Hansson – Ruby on Rails creator

and read that he had written this:

“…layered on top is currently a set of controller tests, but I’d much rather replace those with even higher level system tests through Capybara or similar. I think that’s the direction we’re heading. Less emphasis on unit tests, because we’re no longer doing test-first as a design practice, and more emphasis on, yes, slow, system tests (Which btw do not need to be so slow any more, thanks to advances in parallelization and cloud runner infrastructure).”

~ David Heinemeier Hansson – Ruby on Rails creator

I started to get very worried. David is the creator of Ruby on Rails and very well respected within the ruby community (despite being known to be very provocative and anti-intellectual: the ‘Fox News’ of the ruby world).

But here is dhh telling us to replace lower level tests with higher level ‘system’ (end to end) tests that use something like Capybara to drive a browser because unit tests didn’t find a bug and because it’s now possible to parallelize these ‘slow’ tests? Seriously?

Speed has always been seen as the Achilles' heel of end to end tests because everyone knows that fast feedback is good. But parallelization solves this, right? We just need 50,000 VMs like Salesforce?

No.

Firstly, parallelization of end to end tests actually introduces its own problems, such as what to do with tests that you can’t run in parallel (for example, ones that change global state of a system such as a system message that appears to all users), and it definitely makes test data management trickier. You’ll be surprised the first time you run an existing suite of sequential e2e tests in parallel, as a lot will fail for unknown reasons.

Secondly, the test feedback to someone who's made a change still isn't fast enough to enable confidence in making a change (by the time your app has been deployed and the parallel end-to-end tests have run, the person who made the change has most likely moved on to something else).

But the real problem with end to end tests isn’t actually speed. The real problem with end to end tests is that when end to end tests fail, most of the time you have no idea what went wrong so you spend a lot of time trying to find out why. Was it the server? Was it the deployment? Was it the data? Was it the actual test? Maybe a browser update that broke Selenium? Was the test flaky (non-deterministic or non-hermetic)?

Rachel Laycock and Chirag Doshi from ThoughtWorks explain this really well in their recent post on broken UI tests:

“…unlike unit tests, the functional tests don’t tell you what is broken or where to locate the failure in the code base. They just tell you something is broken. That something could be the test, the browser, or a race condition. There is no way to tell because functional tests, by definition of being end-to-end, test everything.”

So what's the answer? On one hand you have David's FUD about unit testing not catching a major bug in Basecamp. On the other hand, you face the issue that a large suite of end to end tests will most likely result in you spending all your time investigating test failures instead of delivering new features quickly.

If I had to choose just one, I would definitely choose a comprehensive suite of automated unit tests over a comprehensive suite of end-to-end/system tests any day of the week.

Why? Because it’s much easier to supplement comprehensive unit testing with human exploratory end-to-end system testing (and you should anyway!) than trying to manually verify units function from the higher system level, and it’s much easier to know why a unit test is broken as explained above. And it’s also much easier to add automated end-to-end tests later than trying to retrofit unit tests later (because your code probably won’t be testable and making it testable after-the-fact can introduce bugs).

To answer our question, let’s imagine for a minute that you were responsible for designing and building a new plane. You obviously need to test that your new plane works. You build a plane by creating parts (units), putting these together into components, and then putting all the components together to build the (hopefully) working plane (system).

If you only focused on unit tests, like David mentioned in his Basecamp example, you could be pretty confident that each piece of the plane had been tested well and works correctly, but you wouldn't be confident it would fly!

If you only focussed on end to end tests, you’d need to fly the plane to check the individual units and components actually work (which is expensive and slow), and even then, if/when it crashed, you’d need to examine the black-box to hopefully understand which unit or component didn’t work, as we currently do when end-to-end tests fail.

But, obviously we don’t need to choose just one. And that’s exactly what Airbus does when it’s designing and building the new Airbus A350:

As with any new plane, the early design phases were riddled with uncertainty. Would the materials be light enough and strong enough? Would the components perform as Airbus desired? Would parts fit together? Would it fly the way simulations predicted? To produce a working aircraft, Airbus had to systematically eliminate those risks using a process it calls a “testing pyramid.” The fat end of the pyramid represents the beginning, when everything is unknown. By testing materials, then components, then systems, then the aircraft as a whole, ever-greater levels of complexity can be tamed. “The idea is to answer the big questions early and the little questions later,” says Stefan Schaffrath, Airbus’s vice president for media relations.

The answer, which has been the answer all along, is to have a balanced set of automated tests across all levels, with a disciplined approach to having a larger number of smaller specific automated unit/component tests and a smaller number of larger general end-to-end automated tests to ensure all the units and components work together. (My diagram below with attribution)

Automated Testing Pyramid

Having just one level of tests, as shown by the stories above, doesn’t work (but if it did I would rather automated unit tests). Just like having a diet of just chocolate doesn’t work, nor does a diet that deprives you of anything sweet or enjoyable (but if I had to choose I would rather a diet of healthy food only than a diet of just chocolate).

Now if only we could convince Salesforce to be more like Airbus and not fly a complete plane (or 50,000 planes) to test everything every time they make a change, and stop David from continuing his anti-unit, pro-system-testing, anti-intellectual rampage, which will do more damage to our industry than it's worth.

My thoughts on tddGate

If you’ve somehow managed to miss the keynote, blog post and subsequent shitstorm about it, David Heinemeier Hansson (dhh), creator of ruby on rails, has recently come out and declared test-driven development (TDD) dead. I’ve dubbed it ‘tddGate‘.

I find it rather ironic that David advocates the importance of clarity of code in his keynote, yet his objections to TDD through his keynote and posts are anything but clear (to me at least).

For example:

  • I don’t fully comprehend his science/pseudoscience/diet analogy in his keynote: he claims TDD is science-based because it uses metrics and coverage, but it’s also like a diet in that most people can’t make it work so it’s pseudoscience, but he also believes information system development isn’t science because it’s actually more like writing French poetry? Very confusing.
  • He interchangeably uses TDD to mean Test Driven Development and Test Driven Design.
  • He seems to imply you can only do TDD if you’re writing unit tests and you can only write unit tests if they are isolated by using dependency injection (DI) and mocks. He also seems fairly negative on unit testing, DI and mocks, therefore negative on TDD, and wants it dead so he can write (slower) system tests without using TDD, mocks or DI.
  • David gives an example of why unit tests aren't valuable because they didn't catch a Basecamp bug to do with attachments (hint: the issue isn't to do with unit testing per se, but with having only one style of tests).
  • Because David thinks TDD is about unit testing, he sees driving system design from units as bad because people don't care about units, they care about the whole thing, and he doesn't see the importance of testability.
  • Most importantly, he seems to not fully understand TDD (or at least doesn’t communicate his understanding very well):

 “TDD was what I was supposed to do. With TDD I was supposed to write all my tests first and then I would be allowed to write my code. It just didn’t work.” 25:29

The one subject that I wholeheartedly agree with David on is the importance of reading other people’s code. Writers read much more than they write, so should programmers.

So, here’s some of my current thoughts on TDD:

  • I have met few programmers who write unit tests, let alone who practice TDD.
  • Self testing code (eg. automated testing) is critically important to the health of a codebase as it allows someone to confidently make changes and/or perform refactoring without worrying they may have inadvertently broken something.
  • One way to achieve self testing code is via TDD, but it’s by no means the only way. You can easily achieve a self testing codebase by writing tests after code (or even having someone else write tests).
  • There are circumstances where it doesn’t make sense to write tests first (see some examples here).
  • It’s common to practice TDD by writing unit tests but it’s not the only way to practice TDD (for example: you could write an integration test first or an acceptance test first).
  • It's common to write 'isolated' unit tests using DI and test doubles (so they're fast and decoupled) but it's not the only way to write unit tests (you can interact with your database and you can test real dependencies; they're not isolated unit tests, but they are still unit tests nonetheless).
  • I personally find practicing TDD and writing unit tests first does result in a clearer, more well designed API as you’re calling your own API and you can design it how you like, but it isn’t the only way to achieve a clear API.
  • I also find practicing TDD is very effective for bug fixes as it's easy to write a failing test and have confidence you've fixed the problem (and not created any others) when the test finally passes (see the sketch after this list).
  • I don't trust a test I haven't seen fail, and this is much easier to do with TDD. You can also achieve this after the fact by (temporarily) changing your code to not work.
  • Unlike David, I strongly believe in the value of testability.
  • I believe it’s important to have the right mix of different types of automated tests for your context. Most often this means more unit tests and less end to end tests, but there are some cases where this is skewed. A diet of just one, like eating only chocolate, or completely banning sweet foods, is unhealthy and unsustainable.
  • Do what works for you personally and in your context. If you love the flow you achieve doing TDD that’s great, if you can get self testing code another way, that’s equally good.
  • If you don’t enjoy it and it doesn’t work for you, don’t make yourself do something like TDD just because someone else says to do it. But don’t stop something like TDD if you like it just because someone else declares it ‘dead’.
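
To make the bug-fix workflow above concrete, here's a minimal sketch using minitest; the slugify method and the bug itself are invented purely for illustration:

require 'minitest/autorun'

# Hypothetical bug report: slugify('a   b') returned 'a---b' because the
# old implementation used gsub(' ', '-'). Step one: reproduce the bug with
# a failing test. Step two: fix the code (the one-liner below); when the
# test passes you know the bug is fixed, and the test stays in the suite
# as a regression guard.
def slugify(title)
  title.strip.downcase.gsub(/\s+/, '-') # was: gsub(' ', '-')
end

class SlugifyBugFixTest < Minitest::Test
  def test_consecutive_spaces_collapse_to_a_single_hyphen
    assert_equal 'a-b', slugify('a   b')
  end
end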

A ruby testing framework, from scratch, in 15 minutes

As part of my talk last week at the Brisbane Testers Meetup, I gave a live demo (no pre-recorded or pre-written code) of writing a ruby testing framework from scratch in 15 minutes. The idea was to show that most testing frameworks contain so much functionality ‘you ain’t gonna need’, so why not try writing one from scratch and see how we go? It was also a chance to show the testers who hadn’t done automated testing that programming/automated testing is not rocket science.

Since I promised to talk about selenium, I used watir-webdriver, but I would have preferred to just show testing a simple app/class that I would have written from scratch in ruby.

Our testing problem

I wrote a beautifully simple website to welcome the testers to the first ever Brisbane Testers Meetup, and wanted to write some tests to make sure it worked. The site is accessible at data:text/html,<h1 id="welcome">Welcome BNE Testers!</h1> and looks something like this:

BNE Testers welcome page

First I'll give you a few moments to get over how amazing that web site is… okay, that's long enough. Now, what we need is a couple of tests for it:

  1. Make sure the welcome message exists
  2. Make sure the welcome message is visible
  3. Make sure the welcome message content is correct

Iteration Zero

Do the simplest thing that could possibly work. In our case: print the three things we want to check to the screen so we can manually verify them.

require 'watir-webdriver'

b = Watir::Browser.new
b.goto 'data:text/html,<h1 id="welcome">Welcome BNE Testers!</h1>'
puts b.h1(id: 'welcome').exists?
puts b.h1(id: 'welcome').visible?
puts b.h1(id: 'welcome').text

which outputs:

true
true
Welcome BNE Testers!

A good start but not quite a testing framework.

Iteration One

I think it’s time to introduce a method to assert a value is true.

I like to start by writing how I want my tests to look before I write any ‘implementation’ code:

require 'watir-webdriver'

b = Watir::Browser.new
b.goto 'data:text/html,<h1 id="welcome">Welcome BNE Testers!</h1>'

assert('that the welcome message exists') { b.h1(id: 'welcome').exists? }
assert('that the welcome message is visible') { b.h1(id: 'welcome').visible? }
assert('that the welcome message text is correct') { b.h1(id: 'welcome').text == 'Welcome BNE Testers!' }

b.close

I usually run my tests to give me a 'clue' about what I need to do next. In our case:

 undefined method `assert' for main:Object (NoMethodError)

In our case it's simple: we need to write an assert method. Luckily we know exactly what we need: a method that takes a description string and a block of code that should return true; anything else (including an exception) counts as a failure. We can simply write this method above our existing tests:

def assert message, &block
	begin
		if (block.call)
			puts "Assertion PASSED for #{message}"
		else
			puts "Assertion FAILED for #{message}"
		end
	rescue => e
		puts "Assertion FAILED for #{message} with exception '#{e}'"
	end
end

which gives us this output when we run:

Assertion PASSED for that the welcome message exists
Assertion PASSED for that the welcome message is visible
Assertion PASSED for that the welcome message text is correct

This is awesome, but it makes me nervous that all of our three tests passed the first time we ran them. Perhaps we hard coded them to pass? Will they ever fail?

There's an old saying, source unknown: 'never trust a test you didn't first see fail'. Let's apply this here by making all our tests fail. I usually do this by changing the source system; that way you keep the integrity of your tests intact.

This is fairly easy to do in our case by changing the id of our welcome element.

data:text/html,<h1 id="hello">Welcome BNE Testers!</h1>

When we do so, all our tests fail: yippee.

Assertion FAILED for that the welcome message exists
Assertion FAILED for that the welcome message is visible with exception 'unable to locate element, using {:id=>"welcome", :tag_name=>"h1"}'
Assertion FAILED for that the welcome message text is correct with exception 'unable to locate element, using {:id=>"welcome", :tag_name=>"h1"}'

We change it back and they pass again: double yippee.

Iteration Two

So far all the text output has been in the same color, and everyone knows a good test framework uses color. Luckily I know a gem, colorize, that does color output easily; all we do is:

require 'colorize'

def assert message, &block
	begin
		if (block.call)
			puts "Assertion PASSED for #{message}".green
		else
			puts "Assertion FAILED for #{message}".red
		end
	rescue => e
		puts "Assertion FAILED for #{message} with exception '#{e}'".red
	end
end

which gives us some pretty output:

Assertion PASSED for that the welcome message exists
Assertion PASSED for that the welcome message is visible
Assertion PASSED for that the welcome message text is correct

and a fail now looks like this:

Assertion FAILED for that the welcome message exists

Sweet.

We can put the assert method in its own file which leaves our test file cleaner and easier to read:

require 'watir-webdriver'
require './assertions.rb'

b = Watir::Browser.new
b.goto 'data:text/html,<h1 id="welcome">Welcome BNE Testers!</h1>'

assert('that the welcome message exists') { b.h1(id: 'welcome').exists? }
assert('that the welcome message is visible') { b.h1(id: 'welcome').visible? }
assert('that the welcome message text is correct') { b.h1(id: 'welcome').text == 'Welcome BNE Testers!' }

b.close

Iteration Three

Our final iteration involves making the tests even easier to read by abstracting away the browser. This is typically done using ‘page objects’ and again we’ll write how we would like it to look before implementing that functionality:

require 'watir-webdriver'
require './assertions.rb'

homepage = Homepage.visit

assert('that the welcome message exists') { homepage.welcome.exists? }
assert('that the welcome message is visible') { homepage.welcome.visible? }
assert('that the welcome message text is correct') { homepage.welcome.text == 'Welcome BNE Testers!' }

homepage.close

When we run this, it gives us a hint at what we need to do next:

uninitialized constant Homepage (NameError)

We need to create a Homepage class with a visit class method, plus welcome and close instance methods.

We can simply add this to our tests file to get it working:

class Homepage
	# a simple 'page object': the browser is created and owned here,
	# so the tests never have to touch it directly
	def initialize
		@browser = Watir::Browser.new
		@browser.goto 'data:text/html,<h1 id="welcome">Welcome BNE Testers!</h1>'
	end

	def self.visit
		new
	end

	def welcome
		@browser.h1(id: 'welcome')
	end

	def close
		@browser.close
	end
end

After we’re confident it is working okay, we simply move it to a file named homepage.rb and our resulting tests look a lot neater:

require './assertions.rb'
require './homepage.rb'

homepage = Homepage.visit

assert('that the welcome message exists') { homepage.welcome.exists? }
assert('that the welcome message is visible') { homepage.welcome.visible? }
assert('that the welcome message text is correct') { homepage.welcome.text == 'Welcome BNE Testers!' }

homepage.close

and when we run them, they’re green as cucumbers:

Assertion PASSED for that the welcome message exists
Assertion PASSED for that the welcome message is visible
Assertion PASSED for that the welcome message text is correct

Summary

In a very short period of time, we've been able to write a fully functional (but not fully featured) testing framework and build on it as necessary. As I mentioned in my talk, so many test frameworks out there are bloated and complex; sometimes all we need is simple. So if you've been put off automated testing because you find the frameworks too complex, start without one and see how you go!

Do software testers need technical skills?

This post is part of the Pride & Paradev series.


Software Testers Need Technical Skills

“Man is a tool-using animal. Without tools he is nothing, with tools he is all.”
~ Thomas Carlyle

You're testing software day in and day out, so it makes sense to have an idea of how that software works internally, and that requires a deep technical understanding of the application. The better your understanding of the application, the better your bug reports will be. If you understand what a stack trace is and why it's happening, you'll be more effective at communicating what has happened and why.

“Most good testers have some measure of technical skill such as system administration, databases, networks, etc. that lends itself to gray box testing.”

~ Elizabeth Hendrickson – Do Testers Have to Write Code?

As you're testing, you can easily dive into the database and run some SQL queries to make sure things actually did what they were meant to, or discover and test an exposed web service with different combinations of inputs, as it'll be quicker and provide the same results.
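
For example, here's the kind of quick check I mean, sketched with the pg gem; the database, table and column names are all hypothetical:

require 'pg'

# After creating an order through the UI, verify a matching row
# actually landed in the database
conn = PG.connect(dbname: 'app_test')  # hypothetical database
count = conn.exec_params(
  'SELECT COUNT(*) FROM orders WHERE customer_id = $1', [42]
).getvalue(0, 0).to_i
puts count == 1 ? 'order persisted correctly' : 'order missing!'
conn.close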

You’ll know IE7 JavaScript quirks and will be able to communicate these to a programmer and work on a solution that gracefully degrades.

Gone are the days when you'd be emailed a link to a test environment somewhere that you'd use to conduct some manual testing and provide some feedback. More often than not, you'll start by setting up your own integrated development environment on your own machine so that you can pull changes as they're committed by programmers and find issues sooner.

You’ll also probably be asked to build a test environment that other people can use, and a continuous deployment pipeline to automatically update that environment when appropriate.

Without technical skills you're going to struggle with this, as it's not just a matter of testing the functionality of the application, but testing the entire system: that it can be built, deployed, internationalized, scaled etc.

Soon you'll start coming across other testing challenges, such as how to test internationalization and localization, accessibility, and how to locate or generate appropriate test data. This may involve writing your own SQL scripts that take field labels and translate them to a test locale to check screens for hard coded data (see the sketch below). Again, these activities require technical skills.
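
As a sketch of that last idea (here over a Rails-style YAML locale file rather than SQL, with invented file names), you can generate a pseudo-locale where every label is bracketed and accented, so any hard coded string left on screen immediately stands out:

require 'yaml'

# Turn 'Welcome' into '[Wélcômé]', recursing through nested labels
def pseudolocalize(value)
  return value.each_with_object({}) { |(k, v), h| h[k] = pseudolocalize(v) } if value.is_a?(Hash)
  "[#{value.to_s.tr('aeiou', 'àéîôü')}]"
end

labels = YAML.load_file('config/locales/en.yml') # hypothetical locale file
File.write('config/locales/xx.yml', { 'xx' => pseudolocalize(labels['en']) }.to_yaml)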

Often programmers will show disdain for testers without any technical skills as they won’t understand the technical challenges a programmer faces, and won’t be able to communicate issues in a technical way.

The more technical skills you have in your toolbelt, the more effective you can be as a software tester.

But having strong technical skills and wanting to do nothing but programming as the sole tester on a small agile team is a recipe for disaster.


Software Testers Don’t Need Technical Skills

“A particularly terrible idea is to offer testing jobs to the programmers who apply for jobs at your company and aren’t good enough to be programmers. Testers don’t have to be programmers, but if you spend long enough acting like a tester is just an incompetent programmer, eventually you’re building a team of incompetent programmers, not a team of competent testers.”
~ Joel on Software on Testers

Hiring testers with technical skills over testing ability is a common mistake. A tester who primarily spends his/her time writing automated tests will spend more time getting his/her own code working instead of testing the code that your customers will use.

In a small agile team of, say, seven programmers and one tester, the tester will spend nearly all his/her time conducting exploratory and story testing, so there will be no time for the tester to write automated tests; that will need to be done by the programmers as part of developing a story. Hiring a tester who expects to predominantly write code on a small agile team is a big mistake.

“Since testing can be taught on the job, but general intelligence can’t, you really need very smart people as testers, even if they don’t have relevant experience.”

~ Joel on Software on Testers

What technical skills a tester lacks can be made up for with intelligence and curiosity. Even if a tester has no deep underlying knowledge of a system, they can still be very effective at finding bugs through skilled exploratory and story testing. Often non-technical testers have better shoshin, 'a lack of preconceptions', when testing a system. A technical tester may take technical limitations into consideration, but a non-technical tester can be better at questioning why things are the way they are and rejecting technical complacency.

Often non-technical testers will have a better understanding of the subject matter and be able to communicate with business representatives more effectively about issues.

You can be very effective as a non-technical tester, but it’s harder work and you’ll need to develop strong collaboration skills with the development team to provide support and guidance for more technical tasks such as automated testing and test data discovery or creation.

Automated WCAG 2.0 accessibility testing in the build pipeline

The web application I am currently working on must meet accessibility standards: WCAG 2.0 AA to be precise. WCAG 2.0 is a collection of guidelines that ensure your site is accessible for people with disabilities.

Examples of poor accessibility design are a missing alt attribute on an image, or not specifying the language of a document, which should be declared like this:

<html lang="fr">

Building Accessibility In

Whilst we'll later be doing accessibility assessments with people who are blind or have low vision, we first need to make sure we build accessibility in. To do this, we need automated accessibility tests as part of our continuous integration build pipeline. My challenge this week was to do just this.

Automated Accessibility Tests

First I needed to find a tool to validate against the spec. We're developing the web application locally, so the tool needs to run locally too. There's a tool called TotalValidator which offers a command line tool; the only downside is the licensing cost, as to use the command line version you need to buy at least 6 copies of the tool at £25 per copy, approximately US$240 in total. There's no trial of the command line tool unless you buy at least one copy of the pro tool at £25. I didn't want to spend money on something that might not work, so I kept looking for alternatives.

There are two sites I found that validate accessibility by URL or supplied HTML code: AChecker and WAVE.

AChecker: this tool works really well. It even supplies a REST API, but I couldn't find a way to call the API to validate HTML code (instead of by URL), which is what I would like to do. The software behind AChecker is open source (PHP), so you can actually install your own version behind your firewall if you wish.

WAVE: a new tool recently released by WebAIM (Web Accessibility in Mind). Again this is an online checker that allows you to validate by URL or supplied HTML code, but unfortunately there's no API (yet) and the results aren't as easy to programmatically read.

My Solution

The final solution I came up with is a specific tagged web accessibility feature in our acceptance tests project. This has scenarios that use WebDriver to navigate through our application capturing the HTML source code from each page visited. Finally, it visits the AChecker tool online and validates each piece of HTML source code it collected and fails the build if any accessibility problems are found.
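
In rough outline it works like this (a simplified sketch, not our actual project code; the pages, AChecker form locators and pass message are assumptions for illustration):

require 'watir-webdriver'

browser = Watir::Browser.new
captured = {}

# Step 1: walk the application, capturing the HTML of each page visited
['/', '/about'].each do |path|                 # hypothetical pages
  browser.goto "http://localhost:3000#{path}"  # hypothetical local app
  captured[path] = browser.html
end

# Step 2: submit each page's HTML to the AChecker online form and
# collect any pages that report known problems
failures = captured.reject do |_path, html|
  browser.goto 'http://achecker.ca/checker/index.php'
  browser.textarea(name: 'pastehtml').set html  # assumed field name
  browser.button(value: 'Check It').click       # assumed button label
  browser.text.include?('Congratulations! No known problems') # assumed pass message
end

browser.close
abort "Accessibility check failed for: #{failures.keys.join(', ')}" unless failures.empty?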

AChecker Results

Build Light

We have a specific build light that runs all the automated acceptance tests and accessibility tests. If any of these fail, the light goes red.

Build Status Lights

It’s much better if it looks like this:

Build Lights All Green

Summary

It was fairly easy to use an existing free online accessibility checker to validate all the HTML in our local application, and to make this status highly visible to the development team. By building accessibility in, we're reducing the expected number of issues when we conduct more formal accessibility testing.

Bonus Points: faster accessibility feedback

Ideally, as a page is being developed, the developer/tester should be able to check accessibility (rather than waiting for the build to fail). The easiest way I have found behind a firewall is to use the WAVE Firefox extension, which displays errors as you use your site. Fantastic!

(Apple and Microsoft each have one known accessibility problem, Google has nine!)

Apple.com accessibility

Mobile apps still need automated tests

Jonathan Rasmusson recently wrote what I consider to be quite a contentious blog post about iOS application development titled “It’s not about the unit tests”.

“…imagine my surprise when I entered a community responsible for some of the worlds most loved mobile applications, only to discover they don’t unit test. Even more disturbing, they seem to be getting away with it!”

Whilst I agree with the general theme of the blog post, which is to change your mind and challenge assumptions:

“All I can say is to keep growing sometimes we need to challenge our most cherished assumptions. It doesn’t always feel good, but that’s how we grow, gain experience, and turn knowledge into wisdom.”

“The second you think you’ve got it all figured out you’ve stopped living.”

I don’t agree with the content.

Jonathan's basic premise is that you can get away with little or no unit testing for your iOS application for a number of reasons, including developing for a smaller screen size, no legacy, one language, visual development and developing on a mature platform. But the real reason iOS developers get away with it is by caring.

“These people simply cared more about their craft, and what they were doing, than their contemporaries. They ‘out cared’ the competition. And that is what I see in the iOS community.”

But in writing this post, I believe he missed two critical factors when deciding whether to have automated tests for your iOS app.

iOS users are unforgiving

If you accidentally release an app with a bug, see how quickly you’ll start getting one star reviews and nasty comments in the App Store. See how quickly new users will uninstall your app and never use it again.

The App Store approval process is not capable of supporting quick bug fixes

Releasing a new version of your app that fixes a critical bug may take you 2 minutes (you don’t even need to fix a broken test or write a new test for it!) but it then takes Apple 5-10 business days to release it to your users. This doesn’t stop the one star reviews and comments destroying your reputation in the meantime.

Case in Point: Lasoo iPhone app

I love the Lasoo iPhone app, because it allows me to read store catalogs on my phone (I live in an apartment block and we don’t get them delivered). Recently I upgraded the app and then tried to use it but it wouldn’t even start. I tried the usual close/reopen, delete/reinstall but still nothing. I then checked the app store:

Lasoo iPhone app reviews

Oh boy, hundreds of one star reviews within a couple of days: the app is stuffed! I then checked twitter to make sure they knew it was broken, and to my surprise they’d fixed it immediately but were waiting for Apple to approve the fix.

I can’t speculate on whether Lasoo care or not about their app, but imagine for a second if they had just one automated test, one automated test that launched the app to make sure it worked, and it was run every time a change, no matter how small, was made. That one automated test would have saved them from hundreds of one star reviews and having to apologize to customers on twitter whilst they waited for Apple to approve the fix.
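
For illustration, a minimal launch test might look like the sketch below, using the appium_lib gem; the capabilities, app path and element name are all assumptions, not Lasoo's actual code:

require 'appium_lib'

opts = {
  caps: {
    platformName: 'iOS',
    deviceName:   'iPhone Simulator',   # hypothetical device
    app:          '/path/to/Lasoo.app'  # hypothetical app bundle
  }
}

driver = Appium::Driver.new(opts, true).start_driver

begin
  # If the app crashed on launch, there'll be no UI to find
  driver.find_element(accessibility_id: 'Home') # hypothetical element
  puts 'Smoke test PASSED: the app launched'
rescue Selenium::WebDriver::Error::NoSuchElementError
  abort 'Smoke test FAILED: the app did not launch'
ensure
  driver.quit
end

Run on every change, that single test would catch an app that won't start before it ever reaches the App Store.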

Which raises another point:

“[Apple] curate and block apps that don’t meet certain quality or standards.”

The Lasoo app was so broken it wouldn’t even start, so how did it get through Apple’s approval process for certain quality or standards?

Just caring isn’t enough to protect you from introducing bugs

We all make mistakes, even if we care. That’s why we have automated tests, to catch those mistakes.

Not having automated tests is a bit like having unprotected sex. You can probably get away with it forever if you’re careful, but the consequences of getting it wrong are pretty dramatic. And just because you can get away with it doesn’t mean that other people will be able to.