Category Archives: Test Automation

A Ruby testing framework, from scratch, in 15 minutes

As part of my talk last week at the Brisbane Testers Meetup, I gave a live demo (no pre-recorded or pre-written code) of writing a ruby testing framework from scratch in 15 minutes. The idea was to show that most testing frameworks contain so much functionality ‘you ain’t gonna need’, so why not try writing one from scratch and see how we go? It was also a chance to show the testers who hadn’t done automated testing that programming/automated testing is not rocket science.

Since I promised to talk about Selenium, I used watir-webdriver, but I would have preferred to just show testing a simple app/class written from scratch in Ruby.

Our testing problem

I wrote a beautifully simple website to welcome the testers to the first ever Brisbane Testers Meetup, and wanted to write some tests to make sure it worked. The site is accessible at data:text/html,<h1 id="welcome">Welcome BNE Testers!</h1> and looks something like this:

[Screenshot: the welcome page]

First I’ll give you a few moments to get over how amazing that web site is… OK, that’s long enough. Now, what we need is a few tests for it:

  1. Make sure the welcome message exists
  2. Make sure the welcome message is visible
  3. Make sure the welcome message content is correct

Iteration zero

Do the simplest thing that could possibly work. In our case, that means printing the three things we want to check to the screen so we can verify them manually.

require 'watir-webdriver'

b = Watir::Browser.new
b.goto 'data:text/html,<h1 id="welcome">Welcome BNE Testers!</h1>'
puts b.h1(id: 'welcome').exists?
puts b.h1(id: 'welcome').visible?
puts b.h1(id: 'welcome').text

which outputs:

true
true
Welcome BNE Testers!

A good start but not quite a testing framework.

Iteration One

I think it’s time to introduce a method to assert a value is true.

I like to start by writing how I want my tests to look before I write any ‘implementation’ code:

require 'watir-webdriver'

b = Watir::Browser.new
b.goto 'data:text/html,<h1 id="welcome">Welcome BNE Testers!</h1>'

assert('that the welcome message exists') { b.h1(id: 'welcome').exists? }
assert('that the welcome message is visible') { b.h1(id: 'welcome').visible? }
assert('that the welcome message text is correct') { b.h1(id: 'welcome').text == 'Welcome BNE Testers!' }

b.close

I usually run my tests to give me a ‘clue’ about what I need to do next. In our case:

 undefined method `assert' for main:Object (NoMethodError)

In our case it’s simple: we need to write an assert method. Luckily we know exactly what we need: a method that takes a description string and a block of code that should return true; if the block returns false or raises an exception, the assertion fails. We can simply write this method above our existing tests:

def assert message, &block
	begin
		if (block.call)
			puts "Assertion PASSED for #{message}"
		else
			puts "Assertion FAILED for #{message}"
		end
	rescue => e
		puts "Assertion FAILED for #{message} with exception '#{e}'"
	end
end

which gives us this output when we run:

Assertion PASSED for that the welcome message exists
Assertion PASSED for that the welcome message is visible
Assertion PASSED for that the welcome message text is correct

This is awesome, but it makes me nervous that all three of our tests passed the first time we ran them. Perhaps we hard-coded them to pass? Will they ever fail?

There’s an old saying, source unknown: ‘never trust a test you didn’t first see fail‘. Let’s apply it here by making all our tests fail. I usually do this by changing the system under test; that way you keep the integrity of your tests intact.

This is fairly easy to do in our case by changing the id of our welcome element.

data:text/html,<h1 id="hello">Welcome BNE Testers!</h1>

When we do so, all our tests fail: yippee.

Assertion FAILED for that the welcome message exists
Assertion FAILED for that the welcome message is visible with exception 'unable to locate element, using {:id=>"welcome", :tag_name=>"h1"}'
Assertion FAILED for that the welcome message text is correct with exception 'unable to locate element, using {:id=>"welcome", :tag_name=>"h1"}'

We change it back and they pass again: double yippee.

Iteration Two

So far all the text output has been in the same color, and everyone knows a good test framework uses color. Luckily, I know a gem that does colored output easily; all we do is:

require 'colorize'

def assert message, &block
	begin
		if (block.call)
			puts "Assertion PASSED for #{message}".green
		else
			puts "Assertion FAILED for #{message}".red
		end
	rescue => e
		puts "Assertion FAILED for #{message} with exception '#{e}'".red
	end
end

which gives us some pretty output:

Assertion PASSED for that the welcome message exists
Assertion PASSED for that the welcome message is visible
Assertion PASSED for that the welcome message text is correct

and a fail now looks like this:

Assertion FAILED for that the welcome message exists

Sweet.

We can put the assert method in its own file which leaves our test file cleaner and easier to read:

require 'watir-webdriver'
require './assertions.rb'

b = Watir::Browser.new
b.goto 'data:text/html,<h1 id="welcome">Welcome BNE Testers!</h1>'

assert('that the welcome message exists') { b.h1(id: 'welcome').exists? }
assert('that the welcome message is visible') { b.h1(id: 'welcome').visible? }
assert('that the welcome message text is correct') { b.h1(id: 'welcome').text == 'Welcome BNE Testers!' }

b.close

Iteration Three

Our final iteration involves making the tests even easier to read by abstracting away the browser. This is typically done using ‘page objects’ and again we’ll write how we would like it to look before implementing that functionality:

require 'watir-webdriver'
require './assertions.rb'

homepage = Homepage.visit

assert('that the welcome message exists') { homepage.welcome.exists? }
assert('that the welcome message is visible') { homepage.welcome.visible? }
assert('that the welcome message text is correct') { homepage.welcome.text == 'Welcome BNE Testers!' }

homepage.close

When we run this, it provides us a hint at what we need to do:

uninitialized constant Homepage (NameError)

We need to create a Homepage class with three methods: visit, welcome and close.

We can simply add this to our test file to get it working:

class Homepage
	def initialize
		@browser = Watir::Browser.new
		@browser.goto 'data:text/html,<h1 id="welcome">Welcome BNE Testers!</h1>'
	end

	def self.visit
		new
	end

	def welcome
		@browser.h1(id: 'welcome')
	end

	def close
		@browser.close
	end
end

After we’re confident it is working okay, we simply move it to a file named homepage.rb and our resulting tests look a lot neater:

require './assertions.rb'
require './homepage.rb'

homepage = Homepage.visit

assert('that the welcome message exists') { homepage.welcome.exists? }
assert('that the welcome message is visible') { homepage.welcome.visible? }
assert('that the welcome message text is correct') { homepage.welcome.text == 'Welcome BNE Testers!' }

homepage.close

and when we run them, they’re green as cucumbers:

Assertion PASSED for that the welcome message exists
Assertion PASSED for that the welcome message is visible
Assertion PASSED for that the welcome message text is correct

Summary

In a very short period of time, we’ve been able to write a fully functional (but not fully featured) testing framework and build on it as necessary. As I mentioned in my talk, many test frameworks out there are bloated and complex; sometimes all we need is simple. So if you’re putting test frameworks on a pedestal because you find them too complex, start without one and see how you go!

Do software testers need technical skills?

This post is part of the Pride & Paradev series.


Do software testers need technical skills?


Software Testers Need Technical Skills

“Man is a tool-using animal. Without tools he is nothing, with tools he is all.”
~ Thomas Carlyle

You’re testing software day in and day out, so it makes sense to have an idea of how that software works internally, and that requires a deep technical understanding of the application. The better your understanding of the application, the better your bug reports will be. If you can understand what a stack trace is and why it’s happening, you’ll be far more effective at communicating what has happened and why.

“Most good testers have some measure of technical skill such as system administration, databases, networks, etc. that lends itself to gray box testing.”

~ Elisabeth Hendrickson – Do Testers Have to Write Code?

As you’re testing, you can easily dive into the database and run some SQL queries to make sure things actually did what they were meant to, or discover and test an exposed web service directly with different input combinations, since it’s quicker and provides the same results.
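For example, here’s a minimal sketch of that kind of database check in Ruby, assuming a PostgreSQL database and the pg gem; the database, table and column names are all invented for illustration:

require 'pg'

# Assumed database, table and column names, purely for illustration.
conn = PG.connect(dbname: 'app_test')
result = conn.exec_params('SELECT status FROM orders WHERE id = $1', [42])

# Confirm the UI action we just performed really updated the record.
status_ok = result.ntuples == 1 && result[0]['status'] == 'SHIPPED'
puts status_ok ? 'PASS' : 'FAIL'
conn.close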

You’ll know IE7 JavaScript quirks and will be able to communicate these to a programmer and work on a solution that gracefully degrades.

Gone are the days when you’d be emailed a link to a test environment somewhere, conduct some manual testing and provide some feedback. More often than not, you’ll start by setting up your own integrated development environment on your own machine so that you can pull changes as programmers commit them and find issues sooner.

You’ll also probably be asked to build a test environment that other people can use, and a continuous deployment pipeline to automatically update that environment when appropriate.

Without technical skills you’re going to struggle with this, as it’s not just a matter of testing the functionality of the application, but testing the entire system: that it can be built, deployed, internationalized, scaled, etc.

Soon you’ll start coming across other testing challenges, such as how to test internationalization, localization and accessibility, and how to locate or generate appropriate test data. This may involve writing your own SQL scripts that translate field labels into a test locale so you can check screens for hard-coded data. Again, these activities require technical skills.

Often programmers will show disdain for testers without any technical skills as they won’t understand the technical challenges a programmer faces, and won’t be able to communicate issues in a technical way.

The more technical skills you have in your toolbelt, the more effective you can be as a software tester.

But having strong technical skills and wanting to do nothing but programming as the sole tester on a small agile team is a recipe for disaster.


Software Testers Don’t Need Technical Skills

“A particularly terrible idea is to offer testing jobs to the programmers who apply for jobs at your company and aren’t good enough to be programmers. Testers don’t have to be programmers, but if you spend long enough acting like a tester is just an incompetent programmer, eventually you’re building a team of incompetent programmers, not a team of competent testers.”
~ Joel on Software on Testers

Hiring testers for their technical skills over their testing ability is a common mistake. A tester who primarily spends his/her time writing automated tests will spend more time getting his/her own code working than testing the code your customers will use.

In a small agile team of, say, seven programmers and one tester, the tester will spend nearly all his/her time conducting exploratory and story testing, so there will be no time left for the tester to write automated tests; that will need to be done by the programmers as part of developing a story. Hiring a tester who expects to predominantly write code on a small agile team is a big mistake.

“Since testing can be taught on the job, but general intelligence can’t, you really need very smart people as testers, even if they don’t have relevant experience.”

~ Joel on Software on Testers

What technical skills a tester lacks can be made up for with intelligence and curiosity. Even if a tester has no deep underlying knowledge of a system, they can still be very effective at finding bugs through skilled exploratory and story testing. Often non-technical testers have better shoshin, ‘a lack of preconceptions’, when testing a system. A technical tester may take technical limitations into consideration, but a non-technical tester can be better at questioning why things are the way they are and rejecting technical complacency.

Often non-technical testers will have a better understanding of the subject matter and be able to communicate with business representatives more effectively about issues.

You can be very effective as a non-technical tester, but it’s harder work, and you’ll need to develop strong collaboration skills with the development team, which can provide support and guidance for more technical tasks such as automated testing and test data discovery or creation.

Automated WCAG 2.0 accessibility testing in the build pipeline

The web application I am currently working on must meet accessibility standards: WCAG 2.0 AA to be precise. WCAG 2.0 is a collection of guidelines that ensure your site is accessible for people with disabilities.

Examples of poor accessibility design include an image missing its alt attribute, or a document that doesn’t declare its language. The corresponding fixes look like this:

<img src="logo.png" alt="Company logo">
<html lang="fr">

Building Accessibility In

Whilst we’ll later be doing accessibility assessments with people who are blind or have low vision, we need to make sure we build accessibility in from the start. To do this, we need automated accessibility tests as part of our continuous integration build pipeline. My challenge this week was to do just that.

Automated Accessibility Tests

First I needed to find a tool to validate against the spec. We’re developing the web application locally, so the tool needs to run locally too. There’s a tool called TotalValidator which offers a command line version; the only downside is the licensing cost, as to use the command line version you need to buy at least six copies of the tool at £25 per copy, approximately US$240 in total. There’s no trial of the command line tool unless you buy at least one copy of the pro tool at £25. I didn’t want to spend money on something that might not work, so I kept looking for alternatives.

I found two sites that validate accessibility by URL or supplied HTML code: AChecker and WAVE.

AChecker: this tool works really well. It even supplies a REST API, but I couldn’t find a way to call the API with HTML code (instead of a URL), which is what I would like to do. The software behind AChecker is open source (PHP), so you can install your own instance behind your firewall if you wish.

WAVE: a new tool recently released by WebAIM (Web Accessibility in Mind). Again, this is an online checker that allows you to validate by URL or supplied HTML code, but unfortunately there’s no API (yet) and the results aren’t as easy to read programmatically.

My Solution

The final solution I came up with is a specifically tagged web accessibility feature in our acceptance tests project. Its scenarios use WebDriver to navigate through our application, capturing the HTML source code of each page visited. Finally, it visits the AChecker tool online, validates each piece of HTML source code it collected, and fails the build if any accessibility problems are found.

[Screenshot: AChecker results]
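To give an idea of the shape of this, here is a minimal sketch only, not the actual project code: the AChecker element ids, the button label and the local application URL below are all assumptions.

require 'watir-webdriver'

# Submit a piece of captured HTML to the online AChecker form and
# return whatever the results area reports.
def accessibility_errors_for(html, checker)
  checker.goto 'http://achecker.ca/checker/index.php'
  checker.textarea(id: 'checker_input').set html  # hypothetical element id
  checker.button(value: 'Check It').click         # hypothetical button label
  checker.div(id: 'errors').text                  # hypothetical results container
end

# Navigate through the application, capturing each page's rendered HTML.
browser = Watir::Browser.new
browser.goto 'http://localhost:3000'  # assumed local application URL
captured_html = [browser.html]
browser.close

# Validate everything we captured, and fail loudly (failing the build).
checker = Watir::Browser.new
captured_html.each do |html|
  errors = accessibility_errors_for(html, checker)
  raise "Accessibility problems found: #{errors}" unless errors.empty?
end
checker.close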

Build Light

We have a specific build that runs all the automated acceptance tests and accessibility tests, with its own build light. If any of these tests fail, the light goes red.

[Photo: build status light showing red]

It’s much better if it looks like this:

[Photo: build lights all green]

Summary

It was fairly easy to use an existing free online accessibility checker to validate all the HTML code in our local application, and to make the status highly visible to the development team. By building accessibility in, we’re reducing the expected number of issues when we conduct more formal accessibility testing.

Bonus Points: faster accessibility feedback

Ideally, the developer/tester should be able to check accessibility as a page is being developed (rather than waiting for the build to fail). The easiest way I have found to do this behind a firewall is the WAVE Firefox extension, which displays errors as you use your site. Fantastic!

(Apple and Microsoft each have one known accessibility problem; Google has nine!)

[Screenshot: Apple.com accessibility results]

Mobile apps still need automated tests

Jonathan Rasmusson recently wrote what I consider to be quite a contentious blog post about iOS application development titled “It’s not about the unit tests”.

“…imagine my surprise when I entered a community responsible for some of the worlds most loved mobile applications, only to discover they don’t unit test. Even more disturbing, they seem to be getting away with it!”

Whilst I agree with the general theme of the blog post, which is to change your mind and challenge assumptions:

“All I can say is to keep growing sometimes we need to challenge our most cherished assumptions. It doesn’t always feel good, but that’s how we grow, gain experience, and turn knowledge into wisdom.”

“The second you think you’ve got it all figured out you’ve stopped living.”

I don’t agree with the content.

Jonathan’s basic premise is that you can get away with little or no unit testing for your iOS application for a number of reasons, including developing for a smaller screen size, no legacy code, one language, visual development and a mature platform. But the real reason iOS developers get away with it is by caring.

“These people simply cared more about their craft, and what they were doing, than their contemporaries. They ‘out cared’ the competition. And that is what I see in the iOS community.”

But in writing this post, I believe he missed two critical factors when deciding whether to have automated tests for your iOS app.

iOS users are unforgiving

If you accidentally release an app with a bug, see how quickly you’ll start getting one-star reviews and nasty comments in the App Store. See how quickly new users will uninstall your app and never use it again.

The App Store approval process is not capable of supporting quick bug fixes

Releasing a new version of your app that fixes a critical bug may take you two minutes (you don’t even need to fix a broken test or write a new test for it!), but it then takes Apple 5–10 business days to release it to your users. This doesn’t stop the one-star reviews and comments destroying your reputation in the meantime.

Case in Point: Lasoo iPhone app

I love the Lasoo iPhone app, because it allows me to read store catalogs on my phone (I live in an apartment block and we don’t get them delivered). Recently I upgraded the app and then tried to use it, but it wouldn’t even start. I tried the usual close/reopen and delete/reinstall, but still nothing. I then checked the App Store:

[Screenshot: Lasoo iPhone app reviews]

Oh boy, hundreds of one-star reviews within a couple of days: the app is stuffed! I then checked Twitter to make sure they knew it was broken, and to my surprise they’d fixed it immediately but were waiting for Apple to approve the fix.

I can’t speculate on whether Lasoo care about their app or not, but imagine for a second if they had just one automated test: one automated test that launched the app to make sure it worked, run every time a change was made, no matter how small. That one automated test would have saved them from hundreds of one-star reviews and from having to apologize to customers on Twitter whilst they waited for Apple to approve the fix.

Which raises another point:

“[Apple] curate and block apps that don’t meet certain quality or standards.”

The Lasoo app was so broken it wouldn’t even start, so how did it get through Apple’s approval process for ‘certain quality or standards’?

Just caring isn’t enough to protect you from introducing bugs

We all make mistakes, even if we care. That’s why we have automated tests, to catch those mistakes.

Not having automated tests is a bit like having unprotected sex. You can probably get away with it forever if you’re careful, but the consequences of getting it wrong are pretty dramatic. And just because you can get away with it doesn’t mean that other people will be able to.

Shoshin: the Sudoku robot

I really enjoyed writing Einstein (my Minesweeper robot) recently, so much so that I wrote another. Introducing Shoshin: a Sudoku robot.

Shoshin has come along fairly nicely. I pretty much followed the same implementation strategy as I did with Einstein: write failing specs and make them pass until I have a robot that can actually win, or in this case solve, Sudoku. I tend to use large pieces of blank white paper and draw lots of diagrams and logic when I am trying to figure something out (and take an Archimedes break and eat a Pink Lady apple if I get really stuck – see sticker top right).

Shoshin was written in Ruby, so I used both RSpec and Cucumber to write the executable specifications. I tend to write lots of low-level specifications in RSpec that run really quickly (40 specs in ~1 second), and then a handful of high-level end-to-end specifications in Cucumber that I run less frequently but that ultimately specify what I am trying to achieve at a high level. I find the combination works very nicely, as I get fast feedback while always knowing what I am ultimately trying to achieve.

To solve some of the more difficult sudoku problems, I printed some strategy diagrams from the web and wrote failing specs for them. It was then a matter of making them pass!
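To give a flavour of the style (a hypothetical sketch, not Shoshin’s actual specs), a spec for the simplest strategy, the ‘naked single’, might start out like this:

describe 'naked single strategy' do
  it 'finds the one digit missing from a row with eight cells filled' do
    row = [5, 3, nil, 7, 9, 1, 2, 8, 6]     # nil marks the empty cell
    candidates = (1..9).to_a - row.compact  # digits not yet placed in the row
    candidates.should == [4]                # only one candidate remains
  end
end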

The outcome is Shoshin (see the source on GitHub), who can win easy, medium and hard games on websudoku.com. She doesn’t presently win evil games, as they involve guessing/brute force, which I haven’t implemented yet. Maybe one day when I get time…

Oh, and Shoshin (初心) is a concept in Zen Buddhism meaning “beginner’s mind”.

Writing your own WebDriver selectors in C#

I am working on a C# project at the moment, writing tests using WebDriver, and one of the things I miss most about Watir-WebDriver is its variety of selectors, for example being able to select an element by its value attribute. Since the application I am testing uses value attributes heavily, I started using a CSS selector for each element I was interacting with:

var driver = new FirefoxDriver();
driver.Navigate().GoToUrl("data:text/html,<div value=\"Home\">Home</div>");
var divByCss = driver.FindElement(By.CssSelector("[value=\"Home\"]"));

But I got sick of typing out this fairly unattractive CSS selector each time I came across a new element.

I wanted to write an extension method so that I could use something like By.Value("Home") instead of By.CssSelector(…), but I soon realized that you can’t write extension methods for static classes in C#, as you need an instance of the class to extend.

So, instead of extending the original By class, I wrote my own custom MyBy class that I can use in addition to the original.

namespace WebDriverExtensions
{
  using Microsoft.VisualStudio.TestTools.UnitTesting;
  using OpenQA.Selenium;
  using OpenQA.Selenium.Firefox;

  public static class MyBy
  {
    public static By Value(string text)
    {
      return By.CssSelector("[value=\"" + text + "\"]");
    }
  }

  [TestClass]
  public class WebElementExtensionTests
  {
    [TestMethod]
    public void ByValue()
    {
      var driver = new FirefoxDriver();
      driver.Navigate().GoToUrl("data:text/html,<div value=\"Home\">Home</div>");
      var divByValue = driver.FindElement(MyBy.Value("Home"));
      var divByCss = driver.FindElement(By.CssSelector("[value=\"Home\"]"));
      Assert.AreEqual(divByCss, divByValue);
      Assert.AreEqual("Home", divByValue.Text);
      Assert.AreEqual("Home", divByValue.GetAttribute("value"));
      driver.Quit();
    }
  }
}

I think this solves the problem fairly nicely. Do you do something similar? Is there a better way?

Five page object anti-patterns

I’ve observed some page object anti-patterns which commonly arise when starting out designing end-to-end automated tests. Chris McMahon recently asked for some feedback on his initial test spike for Wikipedia, and some of these anti-patterns were present.

Anti-pattern one: frequently opening and closing browsers

I often see both RSpec and Cucumber tests frequently opening and closing browsers. This slows down test execution times, and should be avoided unless absolutely necessary.

You can clear cookies between tests if you’re worried about state.
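For example, with watir-webdriver you can clear cookies in a Cucumber Before hook rather than restarting the browser:

Before do
  @browser.cookies.clear  # reset state without paying the browser startup cost
end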

To open and close the browser only once in Cucumber, specify this in your env.rb file:

browser = Watir::Browser.new

Before do
  @browser = browser
end

at_exit do
  browser.close
end

To open and close the browser only once in RSpec:

browser = Watir::Browser.new

RSpec.configure do |config|
  config.before(:each) { @browser = browser }
  config.after(:suite) { browser.close }
end

Anti-pattern two: hard coding URLs on page classes

Chances are you’ll at some point run your automated tests in different environments, even if only to verify that production has been updated correctly. If you’ve hard-coded URLs in page classes, this can be problematic.

Fortunately it’s easy to avoid by creating a module that contains base URLs which page classes can access. These base URLs can be stored in YAML files and switched per environment (see the sketch after the example below).

module Wikipedia
  BASE_URL = 'https://en.wikipedia.org'
end

class BogusPage
  include PageObject
  page_url "#{Wikipedia::BASE_URL}/wiki/Bogus_page"
end
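And a sketch of the YAML approach; the config/urls.yml layout and the TEST_ENV variable here are assumptions for illustration:

# config/urls.yml (hypothetical):
#   test: https://test.wikipedia.org
#   production: https://en.wikipedia.org
require 'yaml'

module Wikipedia
  urls = YAML.load_file('config/urls.yml')
  BASE_URL = urls[ENV['TEST_ENV'] || 'production']
end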

Anti-pattern three: pages stored as instance variables in steps or RSpec specs

I don’t like seeing pages stored as instance variables (those starting with an @) in Cucumber steps or RSpec specs, as it introduces state and thus more room for error.

If you’re using the page-object gem, there are two methods available to access pages directly without using instance variables: visit_page and on_page (also available as visit and on from 0.6.4 onwards). Both of these take blocks, so you can perform multiple actions within these methods.

visit LoginPage do |page|
  page.login_with('foo', 'badpass')
  page.text.should include "Login error"
  page.text.should include "Secure your account"
end

Anti-pattern four: checking the entire page contains some text somewhere

I often see people checking that the entire web page contains some expected text. Even if the text was at the very bottom of the page, hidden in the footer, the test would probably pass.

You should check that the text is where it should be, using a container it belongs to. Ideally a span or a div exists that contains the exact text, but even a slightly larger container is still better than asserting the text exists somewhere on the page.

class BogusPage
  include PageObject
  cell :main_text, :class => 'mbox-text'
end

visit_page BogusPage do |page|
  page.main_text.should include 'Wikipedia does not have an article with this exact name'
  page.main_text.should include 'Other reasons this message may be displayed'
end

Anti-pattern five: using RSpec for end-to-end tests

This one is contentious, and I am sure I’ll get lots of opinions to the contrary, but I believe that RSpec is best suited to unit/integration tests, while Cucumber is best suited to end-to-end tests.

I find I create duplication when trying to do end-to-end tests in RSpec, which is exactly where Cucumber step definitions come in. Trying to do unit tests in Cucumber adds too much overhead, and in my opinion unit testing is more suited to RSpec.

Visible content locators and i18n in automated tests

I recently read a rebuttal to my post about death to XPath selectors, which raises the point that you shouldn’t use user-visible strings in (or as) selectors to identify elements. The reasoning is that if the time comes to internationalize your site, your selectors will be brittle because they’re written in a specific language.

Fair point, but if you’re not testing the location of user-visible content, then what are you testing? In Australia, I have found it rare (one project out of the thirty or so I’ve worked on) that additional languages are supported. But on that one project I used user-visible strings to locate elements, and they weren’t brittle whatsoever. But how? Adam says you can’t do it!

Well, I translate my locators too. That way I am testing the functionality of the site, the content of the site, and the internationalized content of the site, all at once! No hands.

So how would I do it for the poor example I used previously?

I’d wrap any visible-content selector with something like translate:

  @browser.link(:text => translate('Buy')).click

and have a translate method defined in a mix-in:

def translate phrase
  # translate the phrase here, using the same method as the application under test
  phrase
end
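A slightly fuller sketch of that mix-in, assuming the translations live in per-locale YAML dictionaries (a hypothetical layout), might look like:

require 'yaml'

module Translation
  def translate phrase
    # load the same dictionary the application under test uses (assumed layout)
    @translations ||= YAML.load_file("locales/#{ENV['LOCALE'] || 'en'}.yml")
    @translations[phrase] || phrase  # fall back to the original phrase
  end
end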

Yet another software testing pyramid

A fellow ThoughtWorker, James Crisp, recently wrote an interesting article about his take on an automated test pyramid.

Some of the terminology he used was interesting, which I believe led to some questioning comments and a follow-up article by another fellow ThoughtWorker, Dean Cornish, who stated that the pyramid “oversimplifies a complex problem of how many tests you need to reach a point of feeling satisfied about your test coverage“.

I believe one of the most unclear areas of James’s pyramid is the use of the term acceptance tests, which James equates to roughly 10% of the automated test suite. One commenter stated these should instead be called functional tests, but as James points out, aren’t all tests functional in nature? I would also argue that all tests are about acceptance (to different people), so I would rephrase the term to express what is being tested, which in his case is the GUI.

The other fundamental issue I see with James’s testing pyramid is that it is missing exploratory/session-based testing. The only mention of exploratory testing is when James states ‘if defects come to light from exploratory testing, then discover how they slipped through the testing net’, but I feel this could be better represented on the pyramid. Exploratory, or session-based, testing builds confidence in the automated tests that are being developed and run. Without it, an automated testing strategy is fundamentally flawed. That’s why I include it in my automated testing pyramid as the Eye of Providence (I originally got the ‘eye’ idea from another ThoughtWorker, Darren Smith).

Show me the Pyramid

Without further ado, here’s my automated test pyramid. It shows what the automated tests test: the GUI, APIs, integration points, components and units. I’ve put dotted lines between components, integration points and APIs, as these are similar and you may not need to test all of them.

Another way of looking at this is to consider the intention of the tests. Manual exploratory tests and automated GUI tests are business facing, in that they strive to answer the question “are we building the right system?”. Unit, integration and component tests are technology facing, in that they strive to answer the question “are we building the system right?”. So another version of the automated testing pyramid could simply plot these two styles of tests, showing that you’ll need more technology-facing than business-facing automated tests, as the business-facing tests are more difficult to maintain.

Summary

By removing the term acceptance and showing what the automated tests test, I believe the first automated test pyramid shows a solid approach to automated testing. Acceptance tests and functional tests can sit anywhere in the pyramid, but you should limit your GUI tests, often by increasing your unit test coverage.

The second pyramid is another way to view the intention of the tests, but I believe both resolve most of the issues Dean has with James’s pyramid. Additionally, both include manual session-based testing, a key ingredient in an automated test strategy that should be shown on the pyramid so it is not forgotten.

I welcome your feedback.