Howdy! First, thanks to Alister for asking me to guest-post on his blog. I’m always excited to talk about Docker and its potential to solve all the world’s problems 😉. He asked me to take on the following question from his AMA:
Alister, What are your thoughts on how containerization should fit into a great development and testing workflow? Have you got behind using Docker in your day to day? Thanks!
One of the oldest problems in software development and testing is that a developer writes code on their desktop, where everything works flawlessly, but when it’s shipped to the test or production environments it mysteriously breaks. Maybe their desktop was running a different version of a specific library, or they had unique file permissions enabled. When used correctly, Docker eliminates the “works on my machine” concern. By packaging the runtime environment configuration along with the source code you ensure that the application executes the same in every instance. And just as important, changes to that configuration are logged and can be easily reverted.
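To make that concrete, here's a minimal sketch of what "packaging the runtime environment configuration along with the source code" looks like in practice. This isn't our actual Dockerfile; the base image tag and commands are illustrative:

```dockerfile
# Pin an exact runtime version so every environment matches
FROM node:8.9.4

WORKDIR /app

# Install dependencies from a locked manifest before copying the rest
# of the source, so the dependency layer is cached between builds
COPY package.json package-lock.json ./
RUN npm install

# Copy the application source itself
COPY . .

CMD ["npm", "start"]
```

Because this file lives in the repository next to the code, any change to the runtime configuration shows up in the project history and can be reverted like any other commit.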
Another concern is how to test specific behavior that only gets executed when your application is running in the production environment. By putting all of your application and test servers in individual containers, you can easily connect them on their own private network and just tell the application that it’s running in production. Obviously this is application-specific, and building a copy of the production environment presents its own challenges, but at its core it’s definitely doable within a Docker infrastructure. The important thing is that a network of containers is isolated, so you can do things like set machine hostnames to exactly match their production counterparts without worrying about conflicts.
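As a sketch of that idea, a Docker Compose file can wire an app container and a test container onto their own private network. All of the names here (services, hostname, environment variable) are illustrative, not taken from a real project:

```yaml
# docker-compose.yml -- service names and hostname are illustrative
version: "3"
services:
  app:
    build: .
    # The hostname can match the production host exactly, because
    # this Compose network is isolated from everything else
    hostname: app01.example.com
    environment:
      # Tell the application it's "running in production"
      - NODE_ENV=production
  tests:
    build: ./tests
    depends_on:
      - app
```

Compose creates a private bridge network for these services by default, so the test container can reach the app by its service name without any of it leaking outside the network.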
It all just boils down to consistency…if you can ensure that your developer is writing code against the same configuration as your test environment, which is the same configuration as production – everybody wins.
The second part of the question is a little trickier – Have you got behind using Docker in your day to day?
In some ways the answer is yes. The main application that we test is the Calypso front-end to WordPress.com, which itself is built and runs inside Docker. Our core end-to-end tests also run in a custom Docker container on CircleCI 2.0, so we can define exactly which versions of NodeJS and Chrome we’re testing with. However, some of our other test sets (such as certain WooCommerce and Jetpack tests) still run using the default CircleCI container. And as far as I know, nobody on our team actually uses that container for developing tests locally; we typically just run the tests directly on our laptops. The CI server is the first place that actually executes them via Docker.
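For illustration, pinning the test environment on CircleCI 2.0 looks roughly like this. This is not our actual config; the image name and commands are placeholders:

```yaml
# .circleci/config.yml -- image name and run steps are placeholders
version: 2
jobs:
  e2e-tests:
    docker:
      # A custom image that bakes in exact NodeJS and Chrome versions
      - image: example/e2e-runner:node-8-chrome-64
    steps:
      - checkout
      - run: npm install
      - run: npm run test:e2e
```

The point is that the `image` line fixes the browser and runtime versions for every build, instead of inheriting whatever the default CI container happens to ship.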
The other piece that’s missing for a full Dockerization of our test setup is that our Canary tests run against the custom https://calypso.live setup (https://github.com/Automattic/calypso-live-branches) rather than building and running Calypso side by side in a container. It’s something I’d like to pursue updating at some point, but in the interim the existing setup works great…and most importantly it’s already built and working, allowing us to focus on other things.
So, long story short: containerization is a great technology, and it has a ton of potential for solving problems in the dev/test world. We’re just scratching the surface of that potential at Automattic, but even the limited use we’re giving it right now is beneficial, and I plan on continuing to dig deeper.