100,000 e2e selenium tests? Sounds like a nightmare!

This story begins with a promo email I received from Sauce Labs…

“Ever wondered how an Enterprise company like Salesforce runs their QA tests? Learn about Salesforce’s inventory of 100,000 Selenium tests, how they run them at scale, and how to architect your test harness for success”

saucelabs email

100,000 end-to-end selenium tests and success in the same sentence? WTF? Sounds like a nightmare to me!

I dug further and got burnt by the molten lava: the slides confirmed my nightmare was indeed real:

Salesforce Selenium Slide

“We test end to end on almost every action.”

Ouch! (and yes, that is an uncredited image from my blog used in the completely wrong context)

But it gets worse. Salesforce have 7500 unique end-to-end WebDriver tests which are run on 10 browsers (IE6, IE7, IE8, IE9, IE10, IE11, Chrome, Firefox, Safari & PhantomJS) on 50,000 client VMs that cost multiple millions of dollars, totaling 1 million browser tests executed per day (which equals 20 selenium tests per day, per machine, or over 1 hour to execute each test).

Salesforce UI Testing Portfolio

My head explodes! (and yes, another uncredited image from this blog used out of context and with my title removed).

But surely that’s only one place right? Not everyone does this?

A few weeks later I watched David Heinemeier Hansson say this:

“We recently had a really bad bug in Basecamp where we actually lost some data for real customers and it was incredibly well tested at the unit level, and all the tests passed, and we still lost data. How the f*#% did this happen? It happened because we were so focused on driving our design from the unit test level we didn’t have any system tests for this particular thing.
…And after that, we sort of thought, wait a minute, all these unit tests are just focusing on these core objects in the system, these individual unit pieces, it doesn’t say anything about whether the whole system works.”

~ David Heinemeier Hansson – Ruby on Rails creator

and read that he had written this:

“…layered on top is currently a set of controller tests, but I’d much rather replace those with even higher level system tests through Capybara or similar. I think that’s the direction we’re heading. Less emphasis on unit tests, because we’re no longer doing test-first as a design practice, and more emphasis on, yes, slow, system tests (Which btw do not need to be so slow any more, thanks to advances in parallelization and cloud runner infrastructure).”

~ David Heinemeier Hansson – Ruby on Rails creator

I started to get very worried. David is the creator of Ruby on Rails and very well respected within the ruby community (despite being known to be very provocative and anti-intellectual: the ‘Fox News’ of the ruby world).

But here is dhh telling us to replace lower level tests with higher level ‘system’ (end to end) tests that use something like Capybara to drive a browser because unit tests didn’t find a bug and because it’s now possible to parallelize these ‘slow’ tests? Seriously?

Speed has always been seen as the Achilles' heel of end-to-end tests because everyone knows that fast feedback is good. But parallelization solves this, right? We just need 50,000 VMs like Salesforce?

No.

Firstly, parallelization of end to end tests actually introduces its own problems, such as what to do with tests that you can’t run in parallel (for example, ones that change global state of a system such as a system message that appears to all users), and it definitely makes test data management trickier. You’ll be surprised the first time you run an existing suite of sequential e2e tests in parallel, as a lot will fail for unknown reasons.

Secondly, the test feedback to someone who's made a change still isn't fast enough to enable confidence in making that change (by the time your app has been deployed and the parallel end-to-end tests have run, the person who made the change has most likely moved on to something else).

But the real problem with end to end tests isn’t actually speed. The real problem with end to end tests is that when end to end tests fail, most of the time you have no idea what went wrong so you spend a lot of time trying to find out why. Was it the server? Was it the deployment? Was it the data? Was it the actual test? Maybe a browser update that broke Selenium? Was the test flaky (non-deterministic or non-hermetic)?

Rachel Laycock and Chirag Doshi from ThoughtWorks explain this really well in their recent post on broken UI tests:

“…unlike unit tests, the functional tests don’t tell you what is broken or where to locate the failure in the code base. They just tell you something is broken. That something could be the test, the browser, or a race condition. There is no way to tell because functional tests, by definition of being end-to-end, test everything.”

So what's the answer? On one hand you have David's FUD about unit testing not catching a major bug in Basecamp. On the other hand, you face the issue that a large suite of end-to-end tests will most likely result in you spending all your time investigating test failures instead of delivering new features quickly.

If I had to choose just one, I would definitely choose a comprehensive suite of automated unit tests over a comprehensive suite of end-to-end/system tests any day of the week.

Why? Because it's much easier to supplement comprehensive unit testing with human exploratory end-to-end system testing (and you should anyway!) than it is to manually verify that individual units work from the system level, and, as explained above, it's much easier to work out why a unit test is broken. It's also much easier to add automated end-to-end tests later than to retrofit unit tests (because your code probably won't be testable, and making it testable after the fact can introduce bugs).

To answer our question, let’s imagine for a minute that you were responsible for designing and building a new plane. You obviously need to test that your new plane works. You build a plane by creating parts (units), putting these together into components, and then putting all the components together to build the (hopefully) working plane (system).

If you only focused on unit tests, like David mentioned in his Basecamp example, you could be pretty confident that each piece of the plane had been tested well and works correctly, but you wouldn't be confident that it would fly!

If you only focused on end-to-end tests, you'd need to fly the plane to check that the individual units and components actually work (which is expensive and slow), and even then, if/when it crashed, you'd need to examine the black box to hopefully understand which unit or component didn't work, just as we currently do when end-to-end tests fail.

But, obviously we don’t need to choose just one. And that’s exactly what Airbus does when it’s designing and building the new Airbus A350:

As with any new plane, the early design phases were riddled with uncertainty. Would the materials be light enough and strong enough? Would the components perform as Airbus desired? Would parts fit together? Would it fly the way simulations predicted? To produce a working aircraft, Airbus had to systematically eliminate those risks using a process it calls a “testing pyramid.” The fat end of the pyramid represents the beginning, when everything is unknown. By testing materials, then components, then systems, then the aircraft as a whole, ever-greater levels of complexity can be tamed. “The idea is to answer the big questions early and the little questions later,” says Stefan Schaffrath, Airbus’s vice president for media relations.

The answer, which has been the answer all along, is to have a balanced set of automated tests across all levels, with a disciplined approach to having a larger number of smaller specific automated unit/component tests and a smaller number of larger general end-to-end automated tests to ensure all the units and components work together. (My diagram below with attribution)

Automated Testing Pyramid

Having just one level of tests, as shown by the stories above, doesn't work (but if it did, I would rather have automated unit tests). Just like a diet of only chocolate doesn't work, nor does a diet that deprives you of anything sweet or enjoyable (but if I had to choose, I would rather have a diet of only healthy food than a diet of just chocolate).

Now if we could just convince Salesforce to be more like Airbus and stop flying a complete plane (or 50,000 planes) to test everything every time they make a change, and stop David from continuing his anti-unit, pro-system-testing, anti-intellectual rampage, which will do more damage to our industry than it's worth.

Avoid using case statements in your cucumber/specflow/jbehave step definitions

I quite frequently come across a scenario that looks something like this:

Scenario: Create some animals
  Given I am a zoo keeper
  When I create a giraffe
  And I create a lion
  And I create a pony
  And I create a unicorn
  Then I should have a zoo

and step definitions that implement the When steps with a single step definition:

When /^I create a (\D+)$/ do |animal|
  case animal
    when 'lion'
      create_a_lion()
    when 'giraffe'
      create_a_giraffe()
    when 'pony'
      create_a_pony()
    else
      raise 'Unknown animal'
  end
end

I don’t like having case statements in steps for a number of reasons:

  • For readability and maintainability reasons I try to keep my step definitions as short as possible (usually a couple of lines), and using a case statement violates this principle;
  • Raising an exception to catch invalid usage of the step (in an else clause) replicates what these BDD frameworks already do: provide feedback about unimplemented steps;
  • IDEs that support step auto completion (such as RubyMine & Visual Studio) will not suggest valid steps as they don’t understand how you’ve implemented a case statement; and
  • If used inappropriately (such as our unicorn step), the test will only fail at run-time whereas most IDEs will highlight non-matching steps as you’re writing.

For example, you could change our steps to look something like this:

When /^I create a lion$/ do
  create_a_lion()
end

When /^I create a giraffe$/ do
  create_a_giraffe()
end

When /^I create a pony$/ do
  create_a_pony()
end

Even though this is three times as many step definitions, it is actually less code (9 lines compared to 12).

By using this approach, it is obvious that we can't currently create a unicorn, as RubyMine tells us before we even run our tests. And we don't need to raise any exceptions ourselves.

rubymine highlights unimplemented steps
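If we did run the scenario anyway, Cucumber itself would also flag the undefined step and suggest a snippet for it; the output looks roughly like this (exact wording varies between Cucumber versions):

Undefined step: "I create a unicorn" (Cucumber::Undefined)

You can implement step definitions for undefined steps with these snippets:

When /^I create a unicorn$/ do
  pending # express the regexp above with the code you wish you had
end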

Whilst a lot of people use case statements in steps to reduce the number of step definitions, it is actually counterintuitive: you end up writing more code to do so, and the outcome is less usable when writing scenarios. So, please avoid putting case statements in your step definitions.

A tale of three ruby automated testing APIs (redux)

Redux Note: I originally wrote a similar article to this before going on parental leave about six weeks ago. Whilst I didn’t intend to offend, it seemed that a few people took my article the wrong way. I understand that a lot of effort goes into creating a web testing API, but that doesn’t mean that everyone will agree with what you’ve made.

Sadly, an anonymous coward attacked me and the company I work for (even though I don't mention that company on this blog), so for the first time in this blog's history, I have had to turn comment moderation on. I am sorry to the other genuine commenters whose comments were lost in the transition, and who now have to wait for their new comments to be approved.

Since then I have received numerous emails asking where my article went, and commenting that people found it interesting and worthwhile. So I have decided to repost this article, hopefully with a little less contention this time around, making it clear that this is my opinion and experience: YMMV.

Intro

As a consultant I get to see and work on a lot of automated testing solutions using different automated web testing APIs. Lately I’ve been thinking about how these APIs are different and what makes them so.

My main interest is in ruby, and fortunately ruby has three solid examples of three different kinds of web testing APIs, two of which extend the lowest level API: selenium-webdriver.

I’ll (try to) explain here what I consider to be the three kinds of automated web testing APIs, where I consider the sweet spot to be, and why.

A meaty example

As a carnivore, I thought I would explain my concept in terms I can relate to. If you’re a beef eater, there are many different kinds of beef that you can use to make some tasty food. I’ll use three kinds of beef for my example. The first (rawest) kind would involve getting a beef carcass and filleting it yourself to eventually make some edible food. The second kind is beef that is already in a slightly more usable form, which you can then use yourself to make some edible food: for example, you can buy minced beef at a butcher, and then make your own hamburger patties, taco fillings etc. from it. The final kind is beef that has already been prepared so you can consume it directly: for example, sausages, which can be cooked and eaten as is.

I consider these three kinds of beef to roughly correlate to the three kinds of automated web testing APIs.

The first is a Web Driver API, which is the rawest form of API: its job is to drive a browser by issuing it commands. It provides a high level of user control, but like filleting a beef carcass, it’s more ‘work’. An example in ruby of this API is the selenium-webdriver API, which controls the browser using the webdriver drivers.

The second kind of automated web testing API is the Browser API, which is a higher level API but still provides user control. This is the minced beef of APIs: whilst it’s in a more usable form than a carcass, you still have a lot of control (and potential in what you can do with it). An example in ruby of this API is the watir-webdriver API, which uses the underlying selenium-webdriver carcass to control the browser.

The final kind of automated web testing API is the Web Form DSL (Domain Specific Language), which is a very high level API that provides users with specific methods to automate web forms and their elements. This is the beef sausage of APIs: sometimes you feel like eating something other than sausages, but it’s difficult to make anything edible other than sausages out of sausages. An example in ruby of this Web Form DSL is the Capybara DSL.

Visually, this looks something like this:

Show me the code™

So exactly what do these APIs look like?

I knew you’d ask, that’s why I came prepared.

Say I want to accomplish a fairly basic scenario on my example Google Doc form:

  • Start a browser
  • Navigate to the watir-webdriver-demo form
  • Check whether text field with id ‘entry_0’ exists (this should exist)
  • Check whether text field with id ‘entry_99’ exists (this shouldn’t exist)
  • Set a text field with id ‘entry_0’ to ‘1’
  • Set a text field with id ‘entry_0’ to ‘2’
  • Select ‘Ruby’ from select list with id ‘entry_1’
  • Click the Submit button

This is how I would do it in the three different APIs:

# * Start browser
# * Navigate to watir-webdriver-demo form
# * Check whether text field with id 'entry_0' exists
# * Check whether text field with id 'entry_99' exists
# * Set text field with id 'entry_0' to '1'
# * Set text field with id 'entry_0' to '2'
# * Select 'Ruby' from select list with id 'entry_1'
# * Click the Submit button

require 'bench'

benchmark 'selenium-webdriver' do
  require 'selenium-webdriver'

  driver = Selenium::WebDriver.for :firefox
  driver.navigate.to 'http://bit.ly/watir-webdriver-demo'
  begin
    driver.find_element(:id, 'entry_0')
  rescue Selenium::WebDriver::Error::NoSuchElementError
    # doesn't exist
  end
  begin
    driver.find_element(:id, 'entry_99').displayed?
  rescue Selenium::WebDriver::Error::NoSuchElementError
    # doesn't exist
  end
  driver.find_element(:id, 'entry_0').clear
  driver.find_element(:id, 'entry_0').send_keys '1'
  driver.find_element(:id, 'entry_0').clear
  driver.find_element(:id, 'entry_0').send_keys '2'
  driver.find_element(:id, 'entry_1').find_element(:tag_name => 'option', :value => 'Ruby').click
  driver.find_element(:name, 'submit').click
  driver.quit
end

benchmark 'watir-webdriver' do
  require 'watir-webdriver'
  b = Watir::Browser.start 'bit.ly/watir-webdriver-demo', :firefox
  b.text_field(:id => 'entry_0').exists?
  b.text_field(:id => 'entry_99').exists?
  b.text_field(:id => 'entry_0').set '1'
  b.text_field(:id => 'entry_0').set '2'
  b.select_list(:id => 'entry_1').select 'Ruby'
  b.button(:name => 'submit').click
  b.close
end

benchmark 'capybara' do
  require 'capybara'
  session = Capybara::Session.new(:selenium)
  session.visit('http://bit.ly/watir-webdriver-demo')
  session.has_field?('entry_0') # => true
  session.has_no_field?('entry_99') # => true
  session.fill_in('entry_0', :with => '1')
  session.fill_in('entry_0', :with => '2')
  session.select('Ruby', :from => 'entry_1')
  session.click_button 'Submit'
  session.driver.quit
end

run 10

This is how long they took for me to run:

                        user     system      total        real
selenium-webdriver  1.810000   0.840000  22.130000 ( 73.123340)
watir-webdriver     1.940000   0.870000  24.380000 ( 79.388494)
capybara            1.950000   0.890000  24.080000 ( 79.920051)

Note: Capybara doesn’t always require a ‘session’; sessions are only needed for non-Rack applications. But since my example (Google) is not a Rack application, like most of the applications I test, my example must use a session.

When using ruby, why Watir-WebDriver is my sweet spot

I personally find Watir-WebDriver to be the most elegant solution, as the API is high enough for me to be highly readable/usable, but low enough to be powerful and for me to feel like I’m in control.

For example, being able to select an element by an explicit identifier (name, class name, id, anything) is a huge deal to me. I personally don’t like relying on the API to determine which selector to use: for example, Capybara’s fill_in only supports name, id and label, and you can’t tell it which specific one to use: it appears to try each selector in turn until it finds a match.

I have found that Watir-WebDriver also provides lots of flexibility and neatness. For example, it’s the only API shown here that allows URLs without an ‘http://’ prefix (how many people do you know who type http:// into a browser?).

In my opinion, the high level APIs like Capybara don’t provide enough control (for example, being able to specify the explicit selector), while the low level APIs like webdriver don’t provide enough functionality. This is evident when I am using a language other than ruby (like C#), where I find myself writing a large number of web element extension methods because webdriver doesn’t provide them. A .set method is a classic example: even Simon Stewart, who wrote webdriver, writes a clearAndType method in his examples because webdriver sadly misses it (you must call .clear, then .send_keys).
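In ruby terms, that means writing your own small helper. Here is a rough sketch of a hypothetical set-style wrapper (the name set_text_field is mine, not part of selenium-webdriver):

# a hypothetical helper: webdriver has no single 'set' call,
# so you clear the field and then type the new value yourself
def set_text_field(driver, id, value)
  element = driver.find_element(:id, id)
  element.clear
  element.send_keys value
end

set_text_field(driver, 'entry_0', '1')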

My biggest concern about high level field APIs

But my biggest issue with the high level APIs is that I’ve frequently seen them used to write test scripts that are step-by-step interactions with a web form. Instead of thinking about what a business application actually does, people see it as a series of forms that you ‘fill in’. This means people create scenarios like the one Aslak Hellesøy included in his recent post about cucumber web steps (which use Capybara) and the problems they have created:

Scenario: Successful login
  Given a user "Aslak" with password "xyz"
  And I am on the login page
  And I fill in "User name" with "Aslak"
  And I fill in "Password" with "xyz"
  When I press "Log in"
  Then I should see "Welcome, Aslak"

I’m not saying it’s impossible to end up with something as ugly as the above using other APIs, but I am saying the web form DSL style naturally leads to it: the API looks so similar to this style because that’s what the DSL was designed for: filling in forms. I’ve frequently seen people write generic, reusable cucumber steps to match the web form DSL, like:

When /^I fill in "(.+)" with "(.+)"$/ do |field, value|
  fill_in field, :with => value
end

But this means you end up with less readable, less maintainable test scripts rather than business readable executable specifications.
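A more declarative scenario pushes the form-filling detail down into the step definitions. Here is a rough sketch (the login_page helper and @users hash are hypothetical, not anyone's real code):

Scenario: Successful login
  Given a user "Aslak" with password "xyz"
  When I log in as "Aslak"
  Then I should see a welcome message for "Aslak"

with the detail living in a step definition such as:

When /^I log in as "([^"]*)"$/ do |username|
  # @users is a hypothetical hash of user objects populated by the Given step
  login_page.log_in_as(username, @users[username])
end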

Summary

Ultimately, what I am looking for in an automated web testing API is simplicity and full control. I personally find browser APIs like Watir-WebDriver and Watir give me this, and this is why I love them so. Your mileage may vary; you may like other styles of API better. But I’ve seen APIs badly abused by people who haven’t thought about what they’re doing, so it makes sense to think about what you’re trying to achieve and whether the way you’re doing it is the right way.

Use cucumber feature folders for functional organization, tags for non-functional classification

Originally posted on testingwithvision, moving content here for completeness.

Cucumber allows you to organize your features in a directory structure, as well as label features with tags (keywords). A common question I am asked on projects is what you would use tags for, as opposed to using directory organization.

Use directories for Cucumber feature functional organization

I suggest creating a feature directory structure to organize Cucumber features by functionality, as opposed to using the suggested convention of applying tags for organization. The reason I use directories over tags for functional organization is that I find it more efficient to organize features into folders than to tag them, as you don’t have to tag every feature just to organize it functionally. You also get the benefit of being able to run a subset of scenarios organized into a lower level directory. For example:

Given a folder structure of:

  • features
    • online_ordering
      • shopping_cart.feature
      • check_out.feature
      • check_order_status.feature
    • order_fulfillment
      • modify_orders
        • cancel_order.feature
        • edit_order.feature
      • ship_order.feature
    • admin
      • update_catalog.feature

you could run all online ordering features using:

cucumber features/online_ordering

or all modify order features etc.

cucumber features/order_fulfillment/modify_orders

This allows you maximum flexibility without having to tag each feature as to how it fits functionally with the other features. It also means you don’t end up with a large features directory full of a large number of .feature files that are hard to sort.

The Cucumber feature directory structure you create allows you to see how functionality of your application relates to other functionality through the hierarchical nature of directory organization.

Use tags for non-functional feature classification

I suggest using tags for Cucumber feature and scenario non-functional classification: storing attributes or metadata about a feature or scenario alongside it.

Some useful ways you can use Cucumber tags to classify features and scenarios:

  • Tagging the status of the feature/scenario: for example, @wip is commonly used to indicate work in progress. These can be run in a continuous integration system so that the build fails if any @wip scenarios pass (they are, after all, work in progress and shouldn’t pass);
  • Tagging the size of the feature/scenario. I have recently seen some great examples on the web on how to classify scenarios/tests. Simon Stewart from Google suggests t-shirt styled sizes (Small, Medium, Large), Dave Astels from Engine Yard suggests speed of execution (Fast, Slow, Glacial or Hare, Tortoise, Sloth) which is sort of the same thing;
  • Tagging how often a feature or scenario should be run (every build, hourly, daily etc);
  • Tagging whether the scenario is destructive or invasive (or indeed non-destructive or non-invasive: good for running in production!); and
  • Tagging the feature or scenario with some form of meta-data for requirements traceability, or symbolic link to some other document or system.

Once you have started using certain tags, it is easy to run only scenarios that meet certain characteristics. For example, you could run non-invasive scenarios in production that are fast to run:

cucumber --tags @non-invasive --tags @fast
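
For those filters to find anything, the tags of course need to sit on the features and scenarios themselves. A small hypothetical example:

@non-invasive
Feature: Check order status

  @fast
  Scenario: Customer views a shipped order
    Given I have an order that has been shipped
    When I view the status of that order
    Then I should see that the order has been shipped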

Or you can combine both functionally organized directories and non-functional tags. For example, you could run all ‘online ordering’ scenarios that are ‘non-invasive’ in production, but only those that are ‘small’ or ‘medium’, so they’re fast to run.

cucumber features/online_ordering --tags @non-invasive --tags @small,@medium

Summary

By combining the power of storing your features functionally in directories, and classifying them with non-functional tags, it gives you the power to run scenarios both by functionality, and classification, or both.

Update – 17 Jan 2011

If you’re running directories below the features directory, you should require the features directory so it knows where to look for step definitions. For example:

cucumber -r features features/online_ordering 

The elements of (cucumber) style

Originally posted on testingwithvision, moving content here for completeness.

I’m rather pedantic when it comes to writing English-like Cucumber steps, so here are some of the style guidelines I’ve developed over the last few months. Some of my sources of inspiration include: You’re Cuking it Wrong, (My) Cucumber best practices and tips, and You’re Almost Cuking it.

Never write code in steps

When I click button with id "NameEditForm.Save Button" within table with id "NameDetails00001"

instead write something like

When I click the "Save" button in the "Name Details" section
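The locating detail then lives in the step definition rather than the feature. Here is a rough sketch using watir-webdriver (section_id is a hypothetical helper that maps the friendly section name to a DOM id):

When /^I click the "([^"]*)" button in the "([^"]*)" section$/ do |button_label, section|
  # section_id is a hypothetical helper mapping a friendly section name to its DOM id
  browser.div(:id => section_id(section)).button(:value => button_label).click
end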

Use double quotes in steps to indicate it’s an imperative (literal) step

As a rule of thumb, anything that is literally specified should be in double quotes, otherwise it should be specified within the sentence.

For example – an imperative style step

I set the field labeled "First Name" to "John"

As opposed to a declarative style step that still has parameters but doesn’t use double quotes.

I enter the details for user John

Cater for a/an where needed

Given I have found an active user
Given I have found a cancelled user

Matching step definition

^I have found an? (\w+ \w+)$

Use multi-line step arguments when checking multiple things

For example use:

Then I should see a form with fields:
  | First Name    |
  | Surname       |
  | Date of Birth |

instead of

Then I should see a form with fields "First Name, Surname, Date of Birth"
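
A step definition can then consume the multi-line table directly, iterating over the rows rather than splitting a comma-separated string. A rough sketch (field_present? is a hypothetical helper that checks the page):

Then /^I should see a form with fields:$/ do |table|
  table.raw.flatten.each do |field_label|
    # field_present? is a hypothetical helper that looks up a labelled field on the page
    raise "Missing field: #{field_label}" unless field_present?(field_label)
  end
end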

Make tables look like real tables

  • Line up the columns;
  • Put a single space at the beginning of each cell;
  • Don’t use double quotes for contents; and
  • Put a single colon (:) on the end of the preceding line.

Then I should see a form with fields:
  | First Name    | Joe                      |
  | Surname       | Heinzenburgersteinington |
  | Date of Birth | 01/01/1900               |

Instead of

Then I should see a form with fields:
|"First Name"|"Joe"|
|"Surname"|"Heinzenburgersteinington"|
|"Date of Birth"|"01/01/1900"|

Cater for 1st, 2nd, 3rd etc. values in step definitions

For example:

I click the 3rd button labelled "Save"

Matching step definition

^I click the (\d+)\w\w button labelled "([^"]*)"$
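
A full step definition for this might convert the captured ordinal into a collection index. A rough sketch using watir-webdriver (the locator choice is an assumption):

When /^I click the (\d+)\w\w button labelled "([^"]*)"$/ do |ordinal, label|
  # ordinals in the step are 1-based; watir element collections are 0-based
  browser.buttons(:value => label)[ordinal.to_i - 1].click
end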

Summary

These are some elements of style that I have found to be useful. What has worked for you? Do you have your own elements of style?

An automated testing journey

I did a presentation this evening in Brisbane on an automated testing journey that one may embark on. The whole thing was centered around this tube map style diagram I came up with recently: (download in PDF)

Here’s a link to my prezi slides, which should appear below (if you have Flash enabled, that is). You can also download them as a very printable PDF if you so choose.

I feel the presentation was well received, but I really shouldn’t have tried to squeeze three days of thinking into 30 mins. Oh well.

As always, I welcome your feedback.

Specification by Example: a love story

I attended a three day Advanced Agile Acceptance Testing/Specification by Example workshop in Melbourne last week run by Gojko Adzic, which I enjoyed immensely.

I took copious amounts of notes so I could regurgitate them here, but instead I decided to do something a bit different and write a story. It’s a story about how a team can apply what he taught us to improve their software development practices. It’s entirely fictitious (although I’d love to live in the Byron Bay hinterland!) and any resemblance to any real person or project is purely coincidental.

So here it is: Specification by Example: a Love Story (PDF). Enjoy.