I read a 2015 blog post by Keqiu Hu from LinkedIn about flaky UI tests. He explains how they fixed the flaky UI tests for the LinkedIn app. Among other things, they implemented what they called the “Trunk Guardian service”, which runs automated UI tests on the last known good build twice: if a test passes on the first run but fails on the second, it is marked as ‘flaky’ and disabled, and the owner is notified to fix it or get rid of it. I wondered what your thoughts were on such a “Trunk Guardian service” – if the culture/process was in place to solve the other issues that create flaky tests, could such a thing be worth the effort to implement? Article: Test Stability – How We Make UI Tests Stable
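For clarity, here is a rough sketch in JavaScript of the flow the reader describes. The helpers are stubs and this is my own interpretation of the blog post, not LinkedIn’s actual implementation:

// A sketch of the 'Trunk Guardian' flow described above. All helpers are stubs.
async function runTest(test, build) {
  // Stub: execute the UI test against the given build and report the outcome.
  return { passed: true };
}

async function disableTest(test) { /* Stub: remove the test from the active suite. */ }
async function notifyOwner(test) { /* Stub: ask the owner to fix or delete the test. */ }

async function guardTest(test, lastKnownGoodBuild) {
  const firstRun = await runTest(test, lastKnownGoodBuild);
  const secondRun = await runTest(test, lastKnownGoodBuild);
  if (firstRun.passed && !secondRun.passed) {
    // Same test, same known-good build, different outcomes: the test itself is unreliable.
    await disableTest(test);
    await notifyOwner(test);
  }
}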
We actually don’t run any tests in Internet Explorer anymore, since these weren’t finding any browser-specific bugs (we do exploratory testing in Internet Explorer instead).
// 'arguments[0]' refers to the first argument passed after the script (here, webElement).
this.driver.executeScript( 'return arguments[0].click();', webElement );
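For context, here is a fuller sketch of the same workaround using the selenium-webdriver Node.js bindings; the URL and selector are placeholders:

const { Builder, By } = require('selenium-webdriver');

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com'); // placeholder URL
    const webElement = await driver.findElement(By.css('#submit')); // placeholder selector
    // Click via JavaScript when the native WebDriver click misbehaves.
    await driver.executeScript('arguments[0].click();', webElement);
  } finally {
    await driver.quit();
  }
})();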
I hope this solution helps!
What is the difference between iterative and incremental models?
Fortunately I have written an entire post on this exact topic here.
My conclusion was:
We can’t build anything without iterating to some degree: no code is written perfectly the second that it is typed or committed. Even if it looks like a company is incrementally building their software, they’re iteratively building it inside.
We can’t release anything without incrementing to some degree: no matter how small a release is, it’s still an incremental change over the last release. Some increments are bigger because they’ve already been internally iterated upon more, some are smaller as they’re less developed and will evolve over time.
So, we develop software iteratively and release incrementally in various sizes over time.
Data migration testing from one application to another: what is the best and easiest way to test? The new application will be Salesforce.
This is quite a generic question but I’ll try to answer it the best I can. I usually look at data migrations as three separate steps:
Extract data from the old system
Transform the data to fit the new system
Load the data into the new system
I would test that each step has worked correctly by verifying the data, starting in the deepest parts of the system (database tables), moving up into APIs, and finally into any user interfaces. I know some CRMs such as Salesforce don’t allow access to database tables, so sometimes you can only use APIs or user interfaces to ‘spot check’ data.
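As an illustration, here is a sketch of an API-level spot check in JavaScript (Node.js 18+ for the built-in fetch). The endpoints, field names and mapping are entirely hypothetical:

const assert = require('assert');

// Hypothetical endpoints: substitute your old system's API and the new
// system's API (Salesforce exposes records via its REST API).
async function spotCheckContact(contactId) {
  const oldRecord = await (await fetch(`https://old-crm.example.com/api/contacts/${contactId}`)).json();
  const newRecord = await (await fetch(`https://new-crm.example.com/api/contacts/${contactId}`)).json();

  // Verify the 'transform' step: each old field should land in the expected
  // new field. This mapping is a made-up example.
  assert.strictEqual(newRecord.FirstName, oldRecord.first_name);
  assert.strictEqual(newRecord.LastName, oldRecord.last_name);
  assert.strictEqual(newRecord.Email, oldRecord.email_address);
}

(async () => {
  // Spot check a sample of migrated records rather than every row.
  for (const id of [101, 102, 103]) {
    await spotCheckContact(id);
  }
  console.log('Spot checks passed');
})();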
I hope this helps you Nathan.
I’ve never personally found the return on investment of getting automated tests running across Internet Explorer and Safari to be worthwhile, as in my experience it took more effort than the bugs it found were worth. So I personally stick to running our full e2e test suite in our most used browser (Chrome) and supplement this with exploratory testing in all other browsers.
In saying that, the reason you won’t be able to use Docker containers for these purposes is that they’re Linux-based, while Internet Explorer requires Microsoft Windows and Safari requires Apple macOS to run. To use these browsers with your existing automated tests you can sign up to an on-demand browser service like Sauce Labs and use the remote WebDriver protocol to execute your tests.
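Here is a sketch of what that looks like with the selenium-webdriver Node.js bindings; the capability values are illustrative, so check your provider’s documentation for the exact browser and platform names:

const { Builder } = require('selenium-webdriver');

(async () => {
  // Credentials come from the on-demand service's account page.
  const username = process.env.SAUCE_USERNAME;
  const accessKey = process.env.SAUCE_ACCESS_KEY;

  const driver = await new Builder()
    // Point WebDriver at the remote service instead of a local browser.
    .usingServer(`https://${username}:${accessKey}@ondemand.saucelabs.com/wd/hub`)
    .withCapabilities({ browserName: 'safari', platformName: 'macOS 13' }) // illustrative values
    .build();

  try {
    await driver.get('https://example.com');
    console.log(await driver.getTitle());
  } finally {
    await driver.quit();
  }
})();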
I wondered if you could tell me what sets exceptional QA testers apart? Not just personality or work ethic traits, but specific skills and programming knowledge that will be very valuable to a team?
I think exceptional QA testers, as explained recently, aren’t people who are exceptional at just one thing (e.g. testing), but people who are good at lots of things.
So an exceptional QA tester, in my opinion, will typically have (at least good) skills in the following things:
- Skills in human exploratory testing: an exceptional QA tester has the ability to effectively find the most important bugs fast. Whilst this skill can be developed, I have found it’s mostly a mindset.
- Skills in developing automated tests: an exceptional QA tester will have the programming skills needed to develop automated tests, and I would recommend these typically match the programming language(s) that programmers in your organization use: for example, skills in automated testing in .NET if your company primarily uses Microsoft .NET. That said, someone with strong programming skills in one language (e.g. Ruby) should be able to transfer these skills to another language (e.g. Python).
- Knowledge/experience in your business domain: an exceptional QA tester will fully understand your business domain and keep this context in mind whilst testing a product and raising issues. An exceptional tester is always testing your system – just as I am testing WordPress.com by publishing this post.
- An empathetic mindset: we design and develop software for real people and real life. An exceptional QA tester will test with this in mind.
Have you set up (inexpensive) infrastructure to store data collected in your automated tests? We are currently using Selenium WebDriver in Java to automate our tests and IntelliJ as our IDE. We create data from scratch for each and every test case :(
I’m a little confused by the question and whether it’s about test data (data that is needed by the automated tests) or test results data (insights into the results of our automated tests), so I’ll answer both 😀
Infrastructure to manage test data
Our tests run on specific test accounts and sites on production databases. Since our tests are end-to-end in nature, we try to give them as few dependencies as possible on existing data. Often an end-to-end scenario will involve creating, viewing, editing and deleting something; if we don’t do all of this via our UI, we can use hooks that either use services or database jobs to clean up the data. I explained this in more detail previously.
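A minimal sketch of such a clean-up hook in Mocha, assuming Node.js 18+ and a hypothetical REST endpoint for deleting leftover data:

describe('publishing a post end-to-end', function () {
  let createdPostId;

  it('creates, views, edits and deletes a post via the UI', async function () {
    // ... drive the UI here, recording the ID of anything the test creates
    // in createdPostId so the hook below can find it.
  });

  after(async function () {
    // If the UI flow didn't delete what it created (e.g. the test failed
    // part-way through), remove it via a service call so the shared test
    // account doesn't accumulate leftover data.
    if (createdPostId) {
      await fetch(`https://example.com/api/posts/${createdPostId}`, { method: 'DELETE' }); // hypothetical endpoint
    }
  });
});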
Infrastructure to manage test results data
We use CircleCI for automated end-to-end tests. We have a number of projects that run different types of end-to-end tests from the same code repository for different purposes (canary tests, visual-diff tests and full regression tests, for example).
We generate xUnit test results (from Mocha/Magellan), which CircleCI uses to provide insights into our test results.
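If you want to do something similar, Mocha can emit xUnit-style XML itself (the exact flags may vary by version), and the output directory can then be handed to CircleCI’s store_test_results step:

mocha --reporter xunit --reporter-option output=test-results/results.xml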
You can also drill down into the slowest tests, the most failed tests, and so on.
Since all our tests are open source, you can view these build insights yourself!
We’re pretty happy with the insights we get from CircleCI at the moment, so we don’t currently see a need to develop anything ourselves.