A ruby testing framework, from scratch, in 15 minutes

As part of my talk last week at the Brisbane Testers Meetup, I gave a live demo (no pre-recorded or pre-written code) of writing a ruby testing framework from scratch in 15 minutes. The idea was to show that most testing frameworks contain so much functionality ‘you ain’t gonna need’, so why not try writing one from scratch and see how we go? It was also a chance to show the testers who hadn’t done automated testing that programming/automated testing is not rocket science.

Since I promised to talk about Selenium, I used watir-webdriver, but I would have preferred to just show testing a simple app/class that I would have written from scratch in ruby.

Our testing problem

I wrote a beautifully simple website to welcome the testers to the first ever Brisbane Testers Meetup, and wanted to write some tests to make sure it worked. The site is accessible at data:text/html,<h1 id="welcome">Welcome BNE Testers!</h1> and looks something like this:

[screenshot: the gloriously simple BNE Testers welcome page]

First I’ll give you a few moments to get over how amazing that web site is… OK, that’s long enough. Now, what we need are a few tests for it:

  1. Make sure the welcome message exists
  2. Make sure the welcome message is visible
  3. Make sure the welcome message content is correct

Iteration Zero

Do the simplest thing that could possibly work. In our case, that means printing the three things we want to check to the screen so we can verify them manually.

require 'watir-webdriver'

b = Watir::Browser.new
b.goto 'data:text/html,<h1 id="welcome">Welcome BNE Testers!</h1>'
puts b.h1(id: 'welcome').exists?
puts b.h1(id: 'welcome').visible?
puts b.h1(id: 'welcome').text

which outputs:

true
true
Welcome BNE Testers!

A good start but not quite a testing framework.

Iteration One

I think it’s time to introduce a method to assert a value is true.

I like to start by writing how I want my tests to look before I write any ‘implementation’ code:

require 'watir-webdriver'

b = Watir::Browser.new
b.goto 'data:text/html,<h1 id="welcome">Welcome BNE Testers!</h1>'

assert('that the welcome message exists') { b.h1(id: 'welcome').exists? }
assert('that the welcome message is visible') { b.h1(id: 'welcome').visible? }
assert('that the welcome message text is correct') { b.h1(id: 'welcome').text == 'Welcome BNE Testers!' }

b.close

I usually run my tests to give me a ‘clue’ to what I need to do next. In our case:

 undefined method `assert' for main:Object (NoMethodError)

In our case it’s simple: we need to write an assert method. Luckily we know exactly what we need: a method that takes a description string and a block of code; the block should return true when called, otherwise the assertion fails. We can simply write this method above our existing tests:

def assert message, &block
	begin
		if (block.call)
			puts "Assertion PASSED for #{message}"
		else
			puts "Assertion FAILED for #{message}"
		end
	rescue => e
		puts "Assertion FAILED for #{message} with exception '#{e}'"
	end
end

which gives us this output when we run:

Assertion PASSED for that the welcome message exists
Assertion PASSED for that the welcome message is visible
Assertion PASSED for that the welcome message text is correct

This is awesome, but it makes me nervous that all three of our tests passed the first time we ran them. Perhaps we hard coded them to pass? Will they ever fail?

There’s an old saying, source unknown: ‘never trust a test you didn’t first see fail’. Let’s apply it here by making all our tests fail. I usually do this by changing the system under test rather than the tests; that way you keep the integrity of your tests intact.

This is fairly easy to do in our case by changing the id of our welcome element.

data:text/html,<h1 id="hello">Welcome BNE Testers!</h1>

When we do so, all our tests fail: yippee.

Assertion FAILED for that the welcome message exists
Assertion FAILED for that the welcome message is visible with exception 'unable to locate element, using {:id=>"welcome", :tag_name=>"h1"}'
Assertion FAILED for that the welcome message text is correct with exception 'unable to locate element, using {:id=>"welcome", :tag_name=>"h1"}'

We change it back and they pass again: double yippee.

Iteration Two

So far all the text output has been in the same color, and everyone knows a good test framework uses color. Luckily I know a gem, colorize, that makes colored output easy (install it first with gem install colorize); all we do is:

require 'colorize'

def assert message, &block
	begin
		if (block.call)
			puts "Assertion PASSED for #{message}".green
		else
			puts "Assertion FAILED for #{message}".red
		end
	rescue => e
		puts "Assertion FAILED for #{message} with exception '#{e}'".red
	end
end

which gives us some pretty output:

Assertion PASSED for that the welcome message exists
Assertion PASSED for that the welcome message is visible
Assertion PASSED for that the welcome message text is correct

and a fail now looks like this:

Assertion FAILED for that the welcome message exists

Sweet.

We can put the assert method in its own file, assertions.rb, which leaves our test file cleaner and easier to read:

require 'watir-webdriver'
require './assertions.rb'

b = Watir::Browser.new
b.goto 'data:text/html,<h1 id="welcome">Welcome BNE Testers!</h1>'

assert('that the welcome message exists') { b.h1(id: 'welcome').exists? }
assert('that the welcome message is visible') { b.h1(id: 'welcome').visible? }
assert('that the welcome message text is correct') { b.h1(id: 'welcome').text == 'Welcome BNE Testers!' }

b.close

Iteration Three

Our final iteration involves making the tests even easier to read by abstracting away the browser. This is typically done using ‘page objects’ and again we’ll write how we would like it to look before implementing that functionality:

require 'watir-webdriver'
require './assertions.rb'

homepage = Homepage.visit

assert('that the welcome message exists') { homepage.welcome.exists? }
assert('that the welcome message is visible') { homepage.welcome.visible? }
assert('that the welcome message text is correct') { homepage.welcome.text == 'Welcome BNE Testers!' }

homepage.close

When we run this, it gives us a hint about what we need to do next:

uninitialized constant Homepage (NameError)

We need to create a Homepage class with three methods: visit, welcome and close.

We can simply add this to our tests file to get it working:

class Homepage
	def initialize
		@browser = Watir::Browser.new
		@browser.goto 'data:text/html,<h1 id="welcome">Welcome BNE Testers!</h1>'
	end

	def self.visit
		new
	end

	def welcome
		@browser.h1(id: 'welcome')
	end

	def close
		@browser.close
	end
end

After we’re confident it is working okay, we simply move it to a file named homepage.rb and our resulting tests look a lot neater:

require './assertions.rb'
require './homepage.rb'

homepage = Homepage.visit

assert('that the welcome message exists') { homepage.welcome.exists? }
assert('that the welcome message is visible') { homepage.welcome.visible? }
assert('that the welcome message text is correct') { homepage.welcome.text == 'Welcome BNE Testers!' }

homepage.close

and when we run them, they’re green as cucumbers:

Assertion PASSED for that the welcome message exists
Assertion PASSED for that the welcome message is visible
Assertion PASSED for that the welcome message text is correct

Summary

In a very short period of time, we’ve been able to write a fully functional (but not fully featured) testing framework and build on it as necessary. As I mentioned in my talk, many of the test frameworks out there are bloated and complex; sometimes all we need is simple. So if you’re putting test frameworks on pedestals because you find them too complex, start without one and see how you go!

Pedestals

I gave a talk at the first ever Brisbane Testers Meetup last night. It was a fairly good turnout despite some very inclement conditions (very wet and windy).

Every time I give a talk I try to provide a key message, a key takeaway question and ultimately aim to make sure everyone learns something that will make them capable of kicking ass in some way when they get back to work (hat tip to Kathy Sierra).

My key message last night is that we, as testers, put things on pedestals, and we need to stop doing it; we need to push over those pedestals. We put automated testing on pedestals because it’s about programming. We put programming on pedestals often because it’s about frameworks. And we put frameworks on pedestals as they are overly complicated and complex and offer far more than we ever need.

So I tried to knock over those pedestals by showing how you can write a ruby testing framework from scratch in 15 minutes. Crash. Bang.

My key takeaway last night was “what can you take down from your pedestal?” I personally think we all put things on pedestals: we greatly or uncritically admire them. We need to stop it.

The aim of my coding exercise was to show the 20 or so testers in the room who hadn’t done automated testing but wanted to that it’s not that hard. Ignore the complex frameworks, focus on programming, and build the simplest thing that could possibly work. The bar to learning programming and automated testing has never been lower.

My slides are available here if you’re interested in taking a look.

Waterfall, Agile & Hyperbole

Hyperbole. Love it or hate it, it’s been around for centuries and is here to stay. And, as someone pointed out this week, I’m guilty as charged of using (abusing?) it on this blog. You need only flick through my recent posts to find melodramatic titles such as ‘Do you REALLY need to run your WebDriver tests in IE?’, ‘UI automation of vendor delivered products always leads to trouble’, and ‘Five signs you’re not agile; you’re actually mini-waterfall’. Hyperbole supports my motto for this blog and my life: strong opinions, weakly held.

But it’s not just me who likes hyperbole mixed into their blog posts. Only this morning I read the catchily titled ‘Waterfall Is Never the Right Approach’, followed quickly by a similarly catchily titled rebuttal: ‘Why waterfall kicks ass’ (I personally would have capitalized ‘NEVER’ and ‘ASS’).

While I found both articles interesting, I think they both missed the key difference between waterfall and agile (and the reason waterfall rarely works in these fickle times): waterfall is sequential whereas agile is (at least meant to be) iterative.

I personally don’t care whether you do SCRUM or XP, whether you write your requirements in Word™ or on the back of an index card, or even if you stand around in a circle talking about what card you’re working on.

What I do care about is whether you’re delivering business value frequently and adjusting to the feedback you get.

Sequential ‘big bang’ development such as waterfall, by its nature, delivers business value less frequently, and chances are that by the time the value is realized the original problem has changed (depending on how long ago that was), because, as I said, we live in fickle times.

Iterative development addresses this by developing/releasing small fully functional pieces of business value iteratively and adjusting to feedback/circumstance.

Just because an organization practices what it calls ‘agile’ doesn’t mean it’s delivering business value iteratively. I’ve seen plenty of ‘agile’ projects deliver business value very infrequently: they put a sequential process into agile ‘sprints’, followed by a large period of end-to-end, business and user acceptance testing, with a ‘big bang’ go live.

Whilst I believe iterative development is the best way to work, I’m not dogmatic (enough) to believe it’s the only way to work. Whilst I believe you could build and test parts of, say, an aeroplane iteratively, I still hope there’s a sequential process with a whole heap of testing at the end on a fully complete aeroplane before I take my next flight in it.

Do you REALLY need to run your WebDriver tests in IE?

I recently read that Microsoft are now on board to officially support Selenium WebDriver from Internet Explorer (IE) 11 onwards.

Whilst I welcome the news, I try to avoid running WebDriver tests in Internet Explorer completely for the following reasons:

  • Internet Explorer is a very non-testable browser. Whilst everyone agrees testability of your app is paramount, testability of its run-time container, the browser, is equally important. Settings such as security zones, proxies and auto-complete in IE must be manually configured on each machine, instead of being programmatically specified by profiles in Firefox and Chrome (see the sketch after this list); and
  • Because IE has historically been so hard to test, WebDriver’s support for IE is much less mature, stable and efficient than its support for Firefox and Chrome.
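
To illustrate the profile point, here’s a minimal sketch (not production code) of configuring Firefox programmatically with watir-webdriver and a Selenium profile; the preference values here are just examples:

require 'watir-webdriver'

# Build a Firefox profile in code instead of configuring each machine by hand
profile = Selenium::WebDriver::Firefox::Profile.new
profile['network.proxy.type'] = 0          # example: direct connection, no proxy
profile['browser.formfill.enable'] = false # example: disable auto-complete

b = Watir::Browser.new :firefox, :profile => profile
b.goto 'data:text/html,<h1 id="welcome">Welcome BNE Testers!</h1>'
b.close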

The only way automated UI tests can succeed (and the chances of success aren’t high to begin with) is if they are fast and consistent. WebDriver against IE is neither (I see this as more of a problem with IE than with WebDriver). So if you want to use WebDriver, don’t test against IE; test against Firefox or Chrome.

But, in my role as a consultant, I continually hear managers say that we must run our WebDriver automated tests in Internet Explorer. There’s usually one or two reasons given:

  1. Our web app is for internal staff only and our only supported browser is IE (which is usually IE8); and/or
  2. Our web app (or the one we pay for) has been specifically coded to work only in IE and therefore it’s not possible to test in another browser.

You need to explain that your WebDriver automated tests aren’t the only tests you’ll run against your app. In a corporate environment (such as those who only support IE8), chances are you’ll have a period of business acceptance testing or user acceptance testing. This will be conducted by users in the browser they use, so this straight away mitigates the risk of only running your automated tests against a non-IE browser.

From my experience testing many applications against older versions of IE, the thing that doesn’t work well (and causes web apps to break) is not the HTML but the JavaScript support. If your app contains a decent amount of JavaScript, you could write some JavaScript tests in a tool like js-test-driver and run these automatically against older versions of IE. That way you can be assured your JavaScript is working without having to deal with IE/WebDriver issues (and slow running tests).

As for applications specifically coded to work in IE: web standards exist for a reason, and in my opinion it’s crazy to develop a web app that is tied to the browser implementation of a single vendor. Microsoft purposely made IE11 report itself to web servers as not being IE precisely to avoid this situation happening in the future.

Chances are if your app is hard-coded to only work in IE then it won’t work in IE11 anyway. If it works in IE11, then it’ll work in Chrome and Firefox as they all follow web standards, and you can run your WebDriver tests reliably now.

I believe you’re better off not having any automated UI tests if there’s a mandate in place that you must run them against IE. If you can’t automatically test your app in Firefox or Chrome, you’re better off spending your time manually testing your app in IE than trying to maintain a test suite that will never be efficient or reliable.

Tips for great brown bag lunches

I’m a big fan of brown bag seminars, also called brown bag lunches or just brown bags. I’ve seen them used very successfully to share knowledge and increase team bonding. Here are some tips to make them successful for you.

Commit to a date and lock in a topic and presenter

Since a brown bag lunch is just as much about discussion as content, I find it’s good to commit to a date and lock in a topic and presenter. This puts pressure on the presenter to make time to get their content ready, and also not to worry about having it ‘perfect’.

Give everyone an opportunity to present: try to avoid having the same person present over and over again. A good way to harvest ideas is to have a spot near your team wall (or a Trello board) where people can suggest topics they would like to hear or present.

Don’t limit the audience

Resist the temptation to make a brown bag lunch only for programmers, or only for business analysts etc. Even if the topic is aimed at programmers or testers, it’s good to have a goal to make your content interesting enough that it’ll appeal to the programmer or tester in anybody.

Don’t limit yourself to content that is directly aligned with your current work

Whilst content that is directly aligned to work is good as it’s a good way to get buy in, it’s also good to present content loosely related to what people are working on. For example, you could present a brown bag on distributed version control systems (such as git) to a team purely used to working with centralized version control (such as Subversion or TFS).

If you have a couple of short presentations during a single brown bag lunch, you could even have one that isn’t related to work. This is a little risky of course, but it can also be fun (I’m sure everyone would love to hear about arid plants!). It’s also a good way to break the information filters we have.

Provide lunch

When I first started organizing brown bags, I couldn’t work out whether the term brown bag seminars came from people bringing along their own lunch in a brown bag or being provided lunch in a brown bag. But through experience I have found providing a good lunch is a key contributor to a successful brown bag seminar: ‘chimpanzees who share are chimpanzees who care‘. It also provides a good motivator for people to give up their lunch break and come along because who can resist a free lunch, right?

Make sure everyone knows each other

If you’ve got a new team, or people from different areas who don’t know each other, start with a quick icebreaker where you go around the room and get everyone to introduce themselves. I usually follow the format of ‘name’, ‘role’, ‘a fun fact’ and another random tidbit such as ‘my biggest fear’ or ‘what I’m looking forward to’.

Make sure everyone takes something away

I follow the icebreaker with a question to the audience: ‘what do you expect to get out of today’s session?’ I bring a bunch of Post-it notes and Sharpies along and get each person to write a few things they want to get out of the session and stick them to the wall. Ten minutes before the end of the session, the presenter reads out each objective and confirms with whoever wrote it that it has been met. If there’s something that wasn’t covered, it can be discussed, or it could even become the topic of a future brown bag.

I’ve seen lots of great objectives written, ranging from “learn more about automated mobile testing” to “have a nice lunch with my colleagues”.

Always leave plenty of time for discussion

The discussion generated by a brown bag seminar is as important as the content. Make sure you leave plenty of time to discuss what is being presented.

Summary

I thoroughly recommend brown bag lunches as an effective information sharing and team bonding technique, and if you get them right people can really enjoy them and look forward to them.

What’s your experience been with brown bag lunches? Good? Bad? Do you have any tips yourself?

Answer ‘Will it work?’ over ‘Does it work?’

Software teams must continually answer two key questions to ensure they deliver a quality product:

  1. Are we building the correct thing?
  2. Are we building the thing correctly?

In recent times, I’ve noticed a seismic shift in the tester’s role on an agile software team: from testing that the team is building the thing correctly to helping the team build the correct thing. That thing can be a user story, a product or even an entire company.

As Trish Khoo recently wrote:

“The more effort I put into testing the product conceptually at the start of the process, the less effort I had to put into manually testing the product at the end”

It’s more valuable for a tester to answer ‘will it work?‘ at the start than ‘does it work?‘ at the end. If you can determine that something isn’t the correct thing before development starts, you save all the development, testing and rework that would otherwise go into building the wrong thing.

But how do we know it actually does work if we’re focused on will it work? How do we know that we’re building the thing correctly? The answer is automated tests.

Automated tests, written by programmers alongside testers during the engineering process, validate that the software does what it’s meant to do. Behavior driven approaches help translate acceptance criteria directly into automated tests.
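
As a rough sketch of that translation, here’s what one acceptance criterion might look like as a Cucumber scenario backed by a Ruby step definition (the Homepage page object is borrowed from the framework post above; everything else is illustrative):

# welcome.feature: the acceptance criterion, in plain language
#   Scenario: Welcoming visitors
#     When I visit the homepage
#     Then I should see the welcome message

# welcome_steps.rb: the automated test behind it
When(/^I visit the homepage$/) do
	@homepage = Homepage.visit
end

Then(/^I should see the welcome message$/) do
	raise 'Welcome message is not visible' unless @homepage.welcome.visible?
end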

So, how can a tester be involved to make sure a team is building the correct thing?

  • get involved in writing the acceptance criteria for every story;
  • ensure a kick off for each story happens so the programmer(s) understand(s) what is expected and any edge cases or queries are discussed;
  • work with the programmer(s) to automate tests based upon the acceptance criteria;
  • ensure a handover/walk-through happens as soon as a story is finished in development to ensure that all the acceptance criteria are met and tests have been written;
  • showcase the finished product every iteration to the business.

You’ll soon find you can provide much greater value as a tester determining whether something will work and then working alongside the development team to ensure it works as it is developed.

Free yourself from your filters

One of the most interesting articles I have read recently was ‘It’s time to engineer some filter failure’ by Jon Udell:

“The problem isn’t information overload, Clay Shirky famously said, it’s filter failure. Lately, though, I’m more worried about filter success. Increasingly my filters are being defined for me by systems that watch my behavior and suggest More Like This. More things to read, people to follow, songs to hear. These filters do a great job of hiding things that are dissimilar and surprising. But that’s the very definition of information! Formally it’s the one thing that’s not like the others, the one that surprises you.”

Our sophisticated community based filters have created echo chambers around the software testing profession.

“An echo chamber is a situation in which information, ideas, or beliefs are amplified or reinforced by transmission and repetition inside an “enclosed” system, often drowning out different or competing views.” ~ Wikipedia

I’ve seen a few echo chambers evolve:

  • The context driven testing echo chamber where the thoughts of a couple of the leaders are amplified and reinforced by the followers (eg. checking isn’t testing)
  • The broader software testing echo chamber where testers define themselves as testers and are only interested in hearing things from other testers (eg. developers are evil and can’t test)
  • The agile echo chamber where anything agile is good and anything waterfall is bad (eg. if you’re not doing continuous delivery you’re not agile)

So how do we break free of these echo chambers we’ve built using our sophisticated filters? We break those filters!

Jon has some great suggestions in his article (eg. dump all your regular news sources and view the world through a different lens for a week) and I have some specific to software testing:

  • attend a user group or meetup that isn’t about software testing – maybe a programming user group or one for business analysts: I attend programming user groups here in Brisbane;
  • learn to program, or to manage a project, or to write CSS;
  • attend a conference that isn’t about context driven testing: I’m attending two conferences this year, neither are context driven testing conferences (ANZTB Sydney and JSConf Melbourne);
  • follow people on twitter who you don’t agree with;
  • read blogs from people who you don’t agree with or have different approaches;
  • don’t immediately agree (or retweet, or ‘like’) something a ‘leader’ says until you validate it actually makes sense and you agree with it;
  • don’t be afraid to change your mind about something and publicize that you’ve changed your mind; and
  • avoid the ‘daily me‘ apps like the plague.

You’ll soon be able to break yourself free from your filters and start thinking for yourself. Good luck.

You probably don’t need a specification framework

I think plain language specification frameworks like SpecFlow and Cucumber are great, but have a lot of overhead and are way overused.

If you don’t have non-technical folk collaborating with you on your specifications, try writing plain automated tests instead. This means using plain NUnit/MSTest over SpecFlow in C# or minitest over Cucumber in Ruby. You’ll avoid the overhead of maintaining a plain language specification framework and be able to focus on developing a great set of tests instead.
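
For instance, here’s a minimal sketch of plain tests in minitest, reusing the welcome page from the framework post above (a new browser per test keeps the sketch simple, at the cost of speed):

require 'minitest/autorun'
require 'watir-webdriver'

class WelcomePageTest < Minitest::Test
	def setup
		@browser = Watir::Browser.new
		@browser.goto 'data:text/html,<h1 id="welcome">Welcome BNE Testers!</h1>'
	end

	def teardown
		@browser.close
	end

	def test_welcome_message_exists
		assert @browser.h1(id: 'welcome').exists?
	end

	def test_welcome_message_is_visible
		assert @browser.h1(id: 'welcome').visible?
	end

	def test_welcome_message_text_is_correct
		assert_equal 'Welcome BNE Testers!', @browser.h1(id: 'welcome').text
	end
end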

It’s easier than you think to add a plain language specification layer to a set of well structured plain tests. So only add the specification layer when you need it, because chances are you ain’t gonna.

Take control of your own career

During my career, I’ve come across numerous testing colleagues with no experience in automated testing who say things like “I’d love to do automated testing”. They expect to be put into an automated testing role so they can learn automated testing.

I don’t think it should work like that. Your employer shouldn’t be solely responsible for you enhancing your skills and progressing your career.

And, the thing is, it’s never been easier to pick up some new technical skills.

If you want to learn programming start by learning something like Ruby. If you want to learn about automated web testing learn Watir. If you want to learn about behavior driven development tools learn Cucumber.

I taught myself Ruby. I taught myself Watir. I taught myself C#, Python, Selenium, Cucumber and Jenkins. The list goes on.

The barrier to entry has never been lower. Try Codecademy, try Ruby Koans, download the free Watir book, or buy Cheezy’s cheap eBook about Watir & Cucumber.

So, instead of watching television or going out for drinks, spend your nights and weekends learning some new skills and taking control of your career instead of expecting your employer to hand it to you on a plate.

You’ll then be able to say “I’m learning all about Watir at the moment and I would love to apply that on a project” instead of “I’d love to do automated testing”.

UI automation of vendor delivered products always leads to trouble

I’ve got three darling boys, and they love a show on ABC4Kids called ‘Ben & Holly’s Little Kingdom’. It’s a cartoon from the makers of Peppa Pig about tiny elves and fairies, and it has a character called the wise old elf who doesn’t like the fairies’ magic and whose catchphrase is ‘magic always leads to trouble‘. That catchphrase has become a bit of a meme in our household, where we replace the word ‘magic’ with something else that’s perilous. Which leads me to the point of this article, something I have a strong opinion about:

“UI automation of vendor delivered products always leads to trouble”

Why do I believe that? Succeeding at UI automation involves some critical elements which are missing when you write automated tests against a black-box, vendor-delivered product (such as a customized CRM solution). To be successful in UI automation:

  • you need collaboration between testers and developers to write the code needed to write robust and efficient user interface tests;
  • you need opportunities to include testability into the user interface, whether this be test specific navigation controllers, or ensuring that page elements are appropriately identified and structured; and
  • you need to be able to identify areas which can be tested below the UI, whether this be through APIs and web services, or hitting the database directly (see the sketch after this list). Vendors seldom provide services and almost never allow direct access to the database, particularly if it’s a SaaS product.
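
As a quick illustration of what testing below the UI can look like when you do control the system, here’s a minimal plain-Ruby sketch of an API-level check (the endpoint and response shape are entirely hypothetical):

require 'net/http'
require 'json'
require 'uri'

# Hypothetical endpoint: vendor products rarely expose anything like this
uri = URI('http://localhost:3000/api/customers/42')
response = Net::HTTP.get_response(uri)

raise "Expected HTTP 200, got #{response.code}" unless response.code == '200'

customer = JSON.parse(response.body)
raise 'Unexpected customer name' unless customer['name'] == 'Jane Tester'

puts 'API check PASSED'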

I advise anyone who will listen that it’s a bad idea to write automated tests against the UI of a vendor-delivered product. Either the vendor should be doing their own automated testing, or they should provide a more robust way to automatically verify that changes have been correctly applied to their product.

I also try to avoid any career opportunities that would put me in a situation where this is required of me, because I don’t believe in it and haven’t seen it done successfully.

As the wise old elf says: “UI automation of vendor delivered products always leads to trouble”.