Continuous Integration (CI) has been an area of interest for me for many years. For all projects, big and small, there are benefits to be gained from doing CI. A fast cycle of quick sanity checks, static analysis, building and packaging, and running various sets of automated tests gives the project team a safety net with which to quickly build new features and keep regressions at bay.
In order for CI to do its magic and be a useful practice for the team, the team must invest in automation. At AtlasCamp 2015, there was an interesting talk about automation by Holly Cummins titled “Confessions of an automation addict” (an older set of her slides covering the same topic is available on SlideShare). Based on her presentation and my years spent automating processes big and small, I can say that automation is not always easy, and it is sometimes not even worth your while, but sometimes it pays off handsomely.
By saying “invest in automation” I don’t mean that every single developer and every development team must go all-in at once. Instead, one can, and should, gradually move towards more automation in small (but frequent!) steps. Every single step reduces the amount of manual work and takes you closer and closer to the end goal where all your errors (and there will be errors!) are systematic; you fix them once and they are fixed for good.
But automation all by itself can only get you so far. Sometimes the environment you work in and the systems you implement are difficult to set up and even more difficult to test effectively. Unit tests are important, some might even say strictly required, but the really tricky tests to set up are those that require all or most of the actual infrastructure to be in place before they can be executed. Not everything can be mocked effectively, and mocking defeats the purpose when you are testing the system as a whole rather than at the unit level.
Database servers, cache servers, application servers of many kinds, REST API endpoints: all of these processes, and a number of others, might need to be properly configured and running in order for you to test a particular piece of your whole software stack. Things get really interesting, and really difficult to set up, when the piece of software you are working on needs to be tested and verified against several different releases of all those database servers, cache servers, and so on. There are a lot of combinations to cover, and even though you will probably limit the options to a selected few of each, you are still left with at least a handful of important configurations that you need to test against.
This brings an interesting problem for you to solve when it comes to Continuous Integration. Should you set up (and maintain) all these configurations separately on different machines, even though the systems would sit idle most of the time, just waiting for new tests to be executed? Or should you rather invest in automating the infrastructure setup so that systems can be orchestrated and configured just-in-time, right before the tests execute, and be torn down right after? Keeping systems up and running all the time only for the purpose of testing is unnecessarily costly, but on the other hand, orchestrating and configuring systems just-in-time can be overly time-consuming; it can easily take several minutes just to get the virtual machine images up and running.
There was a training workshop at AtlasCamp where a complicated testing scenario like the one described above was set up with Docker and Atlassian Bamboo. Docker has been gaining a lot of traction over the last couple of years. With Docker containers, we can isolate the process we want to execute (or a number of processes) and all of its dependencies into an immutable image that can be set up and torn down very quickly, perhaps in a few milliseconds. As it turns out, this is exactly what we need for our CI testing scenario. In the training workshop, we used Docker Compose (previously Fig) to build the necessary containers, run them, and wire them together into a working test environment.
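As a minimal sketch of what such a Compose file can look like (the service names, image, and version tag here are hypothetical, not taken from the workshop; this uses the version 1 Compose file format of the time):

```yaml
# docker-compose.yml -- hypothetical sketch of a disposable test environment.
# "app" and "db" are illustrative names, not from the workshop.
app:
  build: .              # build the application image from the local Dockerfile
  links:
    - db                # wire the app container to the database container
  environment:
    - DATABASE_HOST=db  # the app reaches the database via its link name
db:
  image: postgres:9.4   # pin an immutable, versioned database image
```

With a file like this in place, `docker-compose up -d` starts the linked containers in the background, and after the test run `docker-compose stop` and `docker-compose rm -f` tear the whole set down again.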
So, using Docker and Docker Compose, we can build immutable images of all the components, separately for each major release version. We can very quickly spin up a set of these images, link them together, and execute tests against this set of infrastructure components. In the end, we can tear down the system and be ready for a new test run. What we want next is to execute all of these steps, either in a loop or in parallel, for all of the configurations we care about, and to do that after each meaningful change in our code repositories. This is where Bamboo enters the scene.
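One simple way to sketch the “loop over configurations” idea with the tools of the time is to keep one Compose file per configuration, identical except for the pinned versions (all the file and image names below are hypothetical):

```yaml
# docker-compose.pg93.yml -- hypothetical per-configuration file
app:
  build: .
  links:
    - db
db:
  image: postgres:9.3   # the only line that differs between configurations

# A sibling file, docker-compose.pg94.yml, would pin image: postgres:9.4
# instead; the CI plan then runs the same test suite once per file, selecting
# the configuration with `docker-compose -f docker-compose.pg93.yml up -d`.
```

Each CI job picks its configuration with the `-f` flag, so the test steps themselves stay identical across all combinations.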
Atlassian Bamboo has excellent support for this kind of Continuous Integration scenario with its Docker Task. Using the Docker Task, we have a convenient way to set up a series of actions to 1) build Docker images, 2) run those images as containers, and 3) upload the images we built into a Docker registry (to keep track of them and to share them with others). The Docker registry can be either the one hosted by Docker, Inc. at https://registry.hub.docker.com/ or a custom registry hosted by you or your company. Bamboo’s Docker Task makes running and linking the containers a lot more convenient, compared with having to do all of this configuration directly on the command line. When it comes to Docker and Bamboo, also remember that it is possible to execute your Bamboo agents as Docker containers.
As anyone who has been following the IT scene already knows, Docker is big and seems to be getting even bigger at an ever-accelerating pace. It is not a silver bullet (nothing is, really), but it will change, and in many ways has already changed, the way we think about systems and deployment. I suggest you take a good look at Docker now, learn about it, and think about how it might apply to your situation. Wire it up with Bamboo and give both of these tools a shot.
Do you have a place for Docker and Bamboo in your Continuous Integration system? I believe you do.