Eight thoughts on my Apple Watch

I’m a fairly late adopter: I bought an Apple Watch just a few weeks ago after the hype had settled down a bit and I could just walk in, try one on and buy one.

I bought the 42mm ‘sport’ model because I’ve got big wrists and my main intention with the watch is to measure various aspects of exercise I do.

Here are some initial thoughts:

  1. The water resistance is really cool: whilst the touch screen doesn’t work well under water, I wear it in the shower, and I’ve worn it swimming in the pool and in the surf without any issues. It makes me wonder why we can’t make all our portable devices this waterproof. (Apparently it’s not actually waterproof, see the comment below, and this isn’t recommended.)
  2. The battery isn’t that bad: I charge it overnight, and monitor an hour or so of exercise most days, and I still get to the end of the day with 50-60% of battery remaining. It could be better and last multiple days, but since I don’t wear it overnight it doesn’t bother me.
  3. The notifications are awesome: The best part for me was that by default the notifications mirror your iPhone. I have minimal notifications set up (none for email etc) so I get minimal notifications on the watch. And apps don’t need to support the watch or be installed on the watch to send notifications on the watch. Plus if you’re using your phone your watch doesn’t notify you and vice versa. They’ve done really well with this.
  4. I don’t really use watch apps: There will probably be better ones with the new WatchOS that supports native apps, but the main purpose for me is glancing at my watch face and quickly seeing notifications. The only app I really use is the exercise one from Apple which monitors your heart rate, distance etc when you’re exercising.
  5. I use the modular watch face: it offers a good range of information I can glance at. Some of the other watch faces are fancy, but I can only see myself using them as a one-off.
  6. The activity rings are a good idea: especially the standing ring which notifies you towards the end of an hour when you haven’t stood up. Great.
  7. Transferring anything to the watch is really slow: and updates are really slow to install. But these happen so infrequently it doesn’t really matter that much.
  8. Nightstand mode is half done: I’d like it to be like an old school alarm clock and always show the time in the dark, but unfortunately it only shows anything when tapped, which kinda defeats the purpose. Maybe a future update will add an option to keep it always on.

Do you own an Apple Watch or another smart watch? What do you think of it?

My New Topic for CukeUp! Australia 2015

My change in circumstances means I’ll be doing a slightly different topic for CukeUp! 2015 in Sydney from 19-20 November.

Conference Discount Code

If you would like to attend you can use the following code: SPEAKER-10-AS to get an extra 10% off the early bird price until 18th September.

New Talk Topic

My new talk is titled ‘The 10 Do’s and 500* Don’ts of Automated Acceptance Testing’

Automated acceptance tests/executable specifications are a key part of sustainable software delivery, but most teams struggle to implement these in an efficient, productive way without hindering velocity. Alister will share a few ways to move towards successful automated acceptance testing, and many traps to avoid, so you can achieve business confidence in every commit in a continuous delivery environment. *Note: talk may or may not include 500 don’ts.

If you’re a Simpsons fan like me, you may recognize the title from here:


Hoping to see those from down under there.

An Automattician To Be

I am excited to announce I will be starting as a full time Excellence Wrangler for Automattic working on WordPress.com from this coming Monday.

The role is described as:

“Improve the quality of the WordPress.com experience through testing and triage. Your work will inform product teams to act on the top priority issues facing our users. Tasks include automated UI testing, creating and executing test plans, effective issue tracking and triage, and identifying and monitoring quality metrics.”

I’ve dreamed about working for Automattic/WordPress.com for a long time (I first wrote about working for Automattic in 2008), and with their newly created Excellence Wrangler roles this really is a dream come true.

WordPress is superbly simple yet beautifully powerful software that powers 24% of the Internet (including this blog): not only blogs, but also sites for businesses, artists’ portfolios, hobbyists and giant media organizations like CNN and TIME.

Some amazing facts about Automattic and how I was hired:

  1. Automattic are 100% distributed with 395 staff across 36 countries all working from home or wherever they choose.
  2. I have already worked for Automattic for almost 3 months on a paid trial, where I was given a real project to work on in my spare time. This is a requirement for all new hires at Automattic. I can’t overstate how great this is, as it gave both Automattic and myself real exposure to each other before committing to a full time job. It now makes taking on a new job without a trial seem too daunting.
  3. Automattic does their entire interviewing/trial/hiring process via asynchronous text chat (Skype/Slack), including the final hiring discussion with Matt, so I have never spoken to a person from Automattic. Whilst this may seem unusual at first, it’s representative of how the company works in such a distributed way, and it’s a great way to eliminate all prejudice/bias from a hiring process as it’s all about what value someone can add, not what they look or sound like.
  4. Everyone who joins Automattic full time spends their first 3 weeks on support, regardless of their position. I am looking forward to this next week as it will give me broad insight into how WordPress.com is used by real customers by working as a ‘Happiness Engineer’: Genchi Genbutsu.

I can’t wait to be a part of the future of WordPress.com, so stay tuned for more updates as I begin this exciting next stage of my career.

The more I think about it the bigger it gets

Warning: this post is the most personal I have written and is about the sensitive subject of mental health. I encourage comments if they’re constructive and helpful. This is a summary of my experience only, and shouldn’t be taken as specific advice or a specific treatment for any condition. Please find links/phone numbers at the bottom of this post if you require immediate help.

Managing dependencies between automated tests

I was at a meetup recently when someone asked the presenter how to manage dependencies between tests. The presenter gave a list of tools that allow test execution ordering, so you can ensure tests are executed in a specific order to satisfy dependencies, and explained how to pass data around using external sources.

But I don’t think this is a good idea at all.

I believe the best way to manage dependencies between automated tests is to not have automated tests dependent on each other at all.

I have found avoiding something is often better than trying to manage something. Need a storage management solution for your clutter? Avoid clutter. Need a way to manage dependencies between tests? Write independent tests.

As soon as you begin having tests that require other tests to have passed, you create a complex test spiderweb which makes it hard to work out the true status of any particular test. Not only does it make tests harder to write and debug, it also makes it difficult if not impossible to run these tests in parallel.

Not having inter-test dependencies doesn’t mean not having any dependencies at all. Targeted acceptance tests will still often rely on things like test data (or create it quickly via scripts in test pre-conditions), but this should be minimised as much as possible. The small number of true end-to-end tests that you have should avoid any dependencies almost completely.
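
To make this concrete, here’s a minimal sketch of what an independent test looks like, using Python and pytest, with hypothetical helpers standing in for the scripts or API calls that would create test data as a pre-condition:

```python
import pytest

# Hypothetical helpers: in a real suite these would call an API or a
# data script to quickly set up exactly the data a test needs.
def create_customer(name):
    return {"name": name, "orders": []}

def place_order(customer, item):
    customer["orders"].append(item)
    return customer

@pytest.fixture
def customer():
    # Every test gets its own freshly created customer, so no test
    # depends on another test having run (or passed) before it.
    return create_customer("Test Customer")

def test_new_customer_has_no_orders(customer):
    assert customer["orders"] == []

def test_placing_an_order_records_it(customer):
    place_order(customer, "widget")
    assert customer["orders"] == ["widget"]
```

Because each test creates (and could clean up) its own data, the two tests above can run in any order, in isolation, or in parallel.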

CukeUp! Australia 2015

The first ever CukeUp! Australia is being held in Sydney on November 19 and 20, 2015.

I have been selected to speak and my talk is titled ‘Establishing a Self-Sustaining Culture of Quality at Domino’s Digital’.

Just 12 months ago Domino’s had a dedicated manual testing team who performed testing during a dedicated testing phase at the end of each project. Not only did this substantially slow down projects, but releases were big and introduced lots of risk despite having been independently tested. Fast forward to today: Domino’s Digital consists of multiple cross-functional teams who are wholly responsible and accountable for quality into and beyond production through regular releases: no testing team, no testing phases, no testing manager. Alister will share the journey of moving to a self-sustaining culture of quality and detail the cosmic benefits the business has received in increasing both quality and velocity across all digital delivery initiatives.

Early-bird tickets are available now. Hoping to see those from down under there.

How can open source projects deliver high quality software without dedicated testers?

I recently received the following email from a WatirMelon reader, Kiran, and was about to reply with my answer when I instead asked whether I could reply via a blog post, as I think it’s an interesting topic.

“I see most of the Open source projects do not have a dedicated manual QA team to perform any kind of testing. But every Organization has dedicated manual QA teams to validate their products before release, yet they fail to meet quality standards.

How does these open source projects manage to deliver stuff with great quality without manual testers? (One reason i can think of is, developers of these projects have great technical skills and commitment than developers in Organizations).

Few things I know about open source projects is that they all have Unit tests and some automated tests which they run regularly. But still I can’t imagine delivering something without manual testing…Is it possible?”

I’ll start by stating that not all organizations have dedicated manual QA teams to validate their products before release. I used the example of Facebook in my book, and I presently work in an organization where there isn’t a dedicated testing team. But generally speaking I agree that most medium to large organizations have testers of some form, whereas most open source projects do not.

I think the quality of open source comes down to two key factors which are essential to high quality software: peer reviews and automated tests.

Open source projects by their very nature need to be open to contribution from various people. This brings about great benefit, as you get diversity of input and skills, and are able to utilize a global pool of talent, but with this comes the need for a safety net to ensure quality of the software is maintained.

Open source projects typically work on a fork/pull request model where all work is done in small increments in ‘forks’ which are provided as pull requests to be merged into the main repository. Distributed version control systems allow this to happen very easily and facilitate a code review system of pull requests before they are merged into the main repository.

Whilst peer reviews are good, these aren’t a replacement for testing, and this is where open source projects need to be self-tested via automated tests. Modern continuous integration systems like CircleCI and TravisCI allow automatic testing of all new pull requests to an open source project before they are even considered to be merged.

How TravisCI Pull Requests Work (image from TravisCI)

If you have a look at most open source project pages you will most likely see prominent ‘build status’ badges indicating the real-time quality of the software.

Bootstrap’s Github Page

Peer reviews and automated tests cover contributions and regression testing, but how does an open source project test new features?

Most open source projects test new changes in the wild through dogfooding (open source projects often exist to fill a need and open source developers are often consumers of their own products), and through pre-release testing like alpha and beta distributions. For example, the Chromium project has multiple channels (canary, dev, beta, stable) where anyone can test upcoming Chromium/Chrome features before they are released to the general public (this isn’t limited to open source software: Apple does the same with OS X and iOS releases).

By using a combination of peer reviews, extensive automated regression testing, dogfooding and making pre-release candidates available I believe open source projects can release very high quality software without having dedicated testers.

If an organization would like to move away from having a dedicated, separate test team to smaller self-sustaining delivery teams responsible for quality into production (as my present organization has done), they would need to follow practices such as peer reviews and maintaining a very high level of automated test coverage. I still believe there’s a role for a tester on such a team: advocating quality, making sure that new features/changes are appropriately tested, and ensuring that the automated regression test coverage is sufficient.

Notes from the 2015 ANZTB Conference in Auckland

I was lucky enough to make my first trans-Tasman journey to Auckland last week to attend the 2015 ANZTB Conference. The conference was enjoyable and there were some memorable talks (I personally like single-stream events). Here are some favorites:

Secure by Design – Laura Bell – slides

I loved the essence of this talk, which was basically (in my own words) ‘take security testing off the pedestal’. Laura shared five simple tools and techniques to make security more accessible for developers and testers alike. One key takeaway for me was to focus on getting the language right: ‘security vulnerabilities hide behind acronyms, jargon and assumptions’. For example, most people understand the difference between authentication (proving identity) and authorization (access rights), but both these terms are commonly shortened to ‘auth’, which most people use interchangeably (and confusingly). A great talk.

Innovation through Collaboration – Wil McLellan

This was probably my favorite talk of the day, as it was a well told story about building a collaborative co-working space called ‘EPIC’ for IT start-ups in Christchurch following the 2011 earthquake. The theme was how collaboration encourages innovation, and how even companies in competition benefit through collaboration. My key takeaway was how the design of a space can encourage collaboration: for example, in EPIC there’s only a single kitchen for the whole building, and each tenancy doesn’t have its own water, so if someone wants a drink or something to eat they need to visit a communal area. Do this enough times and you start interacting with people in the building you wouldn’t normally encounter in your day-to-day work.

Through a different lens – Sarah Pulis – slides

Sarah is the Head of Accessibility Services at PwC in Sydney, and she shared some good background about why accessibility is important, along with some of the key resources for analysing/evaluating and improving the accessibility of systems. Whilst I knew most of the resources she mentioned, I thought her talk was very well put together.

Well done to the team that organized the conference.

Auckland is a beautiful city BTW; here are a couple of pics I took:

A tale of working from trunk

Let me share with you a story about how we went from long lived feature/release branches to trunk based development, why it was really hard and whether this is something I would recommend you try.


I’m familiar with three main approaches to code branching for a shared code-base:

  1. Long lived feature/release branches
  2. Short lived feature branches
  3. Trunk based development

Long lived feature/release branches

Most teams will start out using long lived feature/release branches. This is where each new project or feature branches from trunk, and at the point where the branch is feature ready/stable those changes are merged into trunk and released. The benefit of this approach is that changes are contained within a branch, so there’s little risk of unfinished changes inadvertently leaking into the main trunk, which is what is used for releases to production. The biggest downside to this approach, and why many teams move away from it, is the merging that has to happen: each long lived feature branch needs to ultimately combine its changes with every other long lived feature branch and into trunk, and the longer the branch exists, the more it can diverge and the harder this becomes. Some people call this ‘merge hell’.

Short lived feature branches

Another version of feature branching is to have short lived feature branches which exist to introduce a single change or feature and are merged (often automatically) into trunk as soon as the change is reviewed and tested. This is typically done using a distributed version control system (such as Git) and a pull request system. Since branches are ad-hoc and short lived, you need a continuous integration system that supports running against all branches (ours doesn’t); otherwise you’d need to create a new build configuration every time you created a short lived feature branch.

Trunk Based Development

This is probably the simplest (and most risky) approach, in that everyone works from and commits directly to trunk. This avoids the need for merges, but also means that trunk should be production ready at any point in time.

A story of moving from long lived feature/release branches to trunk based development

We have anywhere from 2-5 concurrent projects (each with a team of 8 or so developers) working off the same code base that is released to production anywhere from once to half a dozen times per week.

These project teams started out using long-lived feature/release branches specific to projects, but the teams increasingly found merging/divergence difficult – and issues would arise where a merge wasn’t done correctly, so a regression would be inadvertently released. The teams also found there would be manual effort involved in setting up our CI server to run against a new feature/release branch when it was created, and removing it when the feature/release branch was finished.

Since we don’t use a workflow based/distributed version control system, and our CI tools don’t support running against every branch, we couldn’t move to using short lived feature branches, so we decided to move to trunk-based development.

Stage One – Trunk Based Development without a Release Branch

Initially we had pure trunk based development. Everyone committed to trunk. Our CI build ran against trunk, and each build from trunk could be promoted right through to production.

Trunk Based Development without a Release Branch

Almost immediately two main problems arose with our approach:

  1. Feature leakage: people would commit code that wasn’t behind a feature toggle, which was then inadvertently released to production. This happened a number of times no matter how many times I would tell people ‘use toggles!’ (a sketch of what a toggle looks like follows this list).
  2. Hotfix changes using trunk: since we could only deploy from trunk, each hotfix had to be done via trunk, and this meant the hotfix would include every change made between it and the last release (so, in the above diagram, if we wanted to hotfix revision T4 and there were another three revisions, we would have to release T7 and everything else it contained). Trying to get a suitable build was often a case of one step forward, two steps back, with other unintended changes in the mix. This was very stressful for the team and often led to temporary ‘code freezes’ whilst someone committed a hotfix into trunk and got it ready.
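
As an aside, here’s a minimal, hypothetical sketch (in Python, not our actual code) of what ‘behind a feature toggle’ means in practice: unfinished work can live on trunk, but its code path only runs if a toggle, off by default, is switched on.

```python
# Toggles default to off, so half-finished work committed to trunk can
# ship to production without being exposed to users.
FEATURE_TOGGLES = {
    "new_checkout_flow": False,
}

def is_enabled(name):
    return FEATURE_TOGGLES.get(name, False)

def legacy_checkout(cart):
    return {"flow": "legacy", "items": cart}

def new_checkout(cart):
    # Unfinished work: safe on trunk because the toggle guards it.
    return {"flow": "new", "items": cart}

def checkout(cart):
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)

print(checkout(["pizza"]))  # => {'flow': 'legacy', 'items': ['pizza']}
```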

Stage Two – Trunk Based Development with a Release Branch

Pure trunk based development wasn’t working, so we needed some strategies to address our two biggest problems.

  1. Feature leakage: whilst this was more of a cultural/mindset change for the team, learning and knowing that every commit would have to be production suitable, one great idea we did implement was TDC: test driven configuration. Since tests act as a safety net against unintended code changes (similar to double entry book-keeping), why not apply the same thinking to config? Basically we wrote unit tests against configuration settings, so if a toggle was turned on without a test expecting it to be on, the build would fail and couldn’t be promoted to production (a sketch follows this list).
  2. Hotfixing changes from trunk: whilst we wanted to develop and deploy from a constantly verified trunk, we needed a way to quickly provide a hotfix without including every other change in trunk. We decided to create a release branch, not to release a new feature per se, but purely for production releases. A release therefore involves deleting and recreating the release branch from trunk, to avoid any divergence. If a hotfix is needed, it can be applied directly to the release branch and the change merged into trunk (or the other way around), knowing that the next release will delete the release branch and start again from trunk. This alone has made the entire release process much less stressful: if a last minute change is needed for a release, or a hotfix is required, it’s now much quicker and simpler than releasing a whole new version from trunk, although that is still an option. I would say that nine out of ten of our releases are done by taking a whole new cut, whereas one or so out of ten is done via a change to the release branch.
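
For illustration, here’s a minimal sketch (hypothetical toggle names, using Python’s unittest) of the test driven configuration idea: the expected state of each toggle is asserted by a test, so flipping a toggle without also updating the expectation fails the build.

```python
import unittest

# The toggle configuration that would ship with the next release.
# In a real system this would be loaded from the deployed config file.
PRODUCTION_TOGGLES = {
    "new_checkout_flow": False,
    "loyalty_points": True,
}

class TestDrivenConfiguration(unittest.TestCase):
    """If a toggle is flipped without updating these expectations,
    the build goes red and can't be promoted to production."""

    def test_new_checkout_flow_is_off(self):
        self.assertFalse(PRODUCTION_TOGGLES["new_checkout_flow"])

    def test_loyalty_points_are_on(self):
        self.assertTrue(PRODUCTION_TOGGLES["loyalty_points"])

if __name__ == "__main__":
    unittest.main()
```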

Trunk Based Development with a Release Branch

Lessons Learned

It’s certainly been a ride, but I definitely feel more comfortable with our approach now we’ve ironed out a lot of the kinks.

So, the big question is whether I would recommend teams do trunk based development? Well, it depends.

I believe you should only consider working from trunk if:

  • you have very disciplined teams who see every-single-commit as production ready code that could be in production in an hour;
  • you have a release branch that you recreate for each release and can use for hotfixes;
  • your teams constantly check the build monitors and don’t commit on a red build (broken commits pile upon broken commits);
  • your teams put every new/non-complete feature/change behind a feature toggle that is toggled off by default, and tested that it is so; and
  • you have a comprehensive regression test suite that can tell you immediately if any regressions have been introduced into a build.

Then, and only then, should you work all off trunk.

What have your experiences been with branching?

Microservices: a real world story

Everywhere I turn I hear people talking about microservice architectures: it definitely feels like the latest, over-hyped, fad in software development. According to Martin Fowler:

“…the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.”


But what does this mean for software testing? And how does it work in the real world?

Well, my small team is responsible for maintaining/supporting a system that was developed from scratch using a microservices architecture. I must highlight that I wasn’t involved in the initial development of the system, but I am responsible for maintaining/expanding/keeping the system running.

The system consists of 30-40 REST microservices, each with its own codebase, git repository, database schema and deployment mechanism. A single page web application (built in AngularJS) provides a user interface to these microservices.

Whilst there are already many microservices evangelists on board the monolith hate-train, my personal experience with this architectural style has been less than pleasant, for a number of reasons:

  • There is a much, much greater overhead (efficiency tax) involved in automating the integration, versioning and dependency management of so many moving parts.
  • Since each microservice has its own codebase, each microservice needs appropriate infrastructure to automatically build, version, test, deploy, run and monitor it.
  • Whilst it’s easy to write tests that test a particular microservice, these individual tests don’t find problems between the services or from a user experience point of view, particularly as they will often use fake service endpoints (see the sketch after this list).
  • Microservices are meant to be fault tolerant, as they are essentially distributed systems that are naturally erratic. However, since they are micro, there are lots of them, which means the overhead of testing the various combinations of volatility of each microservice is too high (n factorial).
  • Monolithic applications, especially written in strongly typed/static programming languages, generally have a higher level of application/database integrity at compile time. Since microservices are independent units, this integrity can’t be verified until run time. This means more testing in later development/test environments, which I am not keen on.
  • Since a lot of problems can’t be found in testing, microservices put a huge amount of emphasis on monitoring over testing. I’d personally much rather have confidence in testing something than rely on constant monitoring/fixing in production. Firefighting in production by development teams isn’t sustainable and reduces efficiency on future enhancements.
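
To illustrate the fake-endpoint point above, here’s a minimal, hypothetical sketch (Python, illustrative names only) of a per-service test: it passes happily against a faked dependency, which is exactly why it can’t catch integration or user-experience problems between the real services.

```python
import unittest
from unittest import mock

# Hypothetical client code for one microservice.
class OrderService:
    def __init__(self, pricing_client):
        self.pricing = pricing_client

    def total(self, items):
        return sum(self.pricing.price_of(item) for item in items)

class OrderServiceTest(unittest.TestCase):
    def test_total_with_fake_pricing_endpoint(self):
        # The real pricing microservice is replaced with a fake, so this
        # test stays green even if the real service changes its contract,
        # runs a different version, or is down entirely.
        fake_pricing = mock.Mock()
        fake_pricing.price_of.return_value = 5
        service = OrderService(fake_pricing)
        self.assertEqual(service.total(["pizza", "drink"]), 10)

if __name__ == "__main__":
    unittest.main()
```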

I can understand some of the reasoning behind breaking applications down into smaller, manageable chunks but I personally believe that microservices, like any evangelist driven approach, has taken this way too far.

I’ll finish by giving a real world metric that shows just how much overhead and maintenance is involved in maintaining our microservices architected system.

A change that would typically take us 2 hours to patch/test/deploy on our ‘monolithic’ strongly typed/static programming language system typically takes 2 days to patch/test/deploy on our microservices built system. And even then I am much less confident that the change will actually work when it gets to production.

Don’t believe the hype.

Addendum: Martin Fowler seems to have had a change of heart in his recently published ‘Microservice Premium’ article about when to use microservices:

“…my primary guideline would be don’t even consider microservices unless you have a system that’s too complex to manage as a monolith. The majority of software systems should be built as a single monolithic application. Do pay attention to good modularity within that monolith, but don’t try to separate it into separate services.”