A Recipe for Continuous Integration
In the last five years, continuous integration (CI) has taken off as a development strategy. If you are a software development shop, it is easy to see how to bake CI into your process. If you are a small to mid-sized business, however, it is not always easy to see where to begin. This recipe gives an overview of the tasks required to transition to CI:
- Implement a Continuous Integration Server
- Prepare one or more Application Version/Schema Management Strategies
- Mix in Automated Data Cloning Strategies
- Prepare Procedures for Merge and Deployment Requests
- Frost with Automated Testing
Implement a Continuous Integration Server
There are several CI servers to choose from. When I interviewed others who had recently taken the plunge, I discovered that the choice you make here is less critical than you might think: much of adopting CI is focused on scripting builds and schema management, and the scripts you develop for one CI product are not likely to need much revision if you later transition to another.
We settled on GitLab, which provides both a version control system and continuous integration. It is open source, and CI build jobs are defined using .yml files, a format that will be familiar to those building on Drupal 8.
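As a rough sketch of what those .yml files look like, here is a minimal `.gitlab-ci.yml` with a build stage and a test stage. The stage names and script paths are illustrative, not our actual configuration:

```yaml
# Minimal .gitlab-ci.yml sketch; job names and script paths are illustrative.
stages:
  - build
  - test

build_site:
  stage: build
  script:
    - ./scripts/build.sh        # build and upgrade scripting lives here

run_tests:
  stage: test
  script:
    - ./scripts/run-tests.sh    # automated test invocation lives here
```

GitLab runs each job on the branch that triggered the pipeline, which is what lets the same scripts serve development, testing, and production branches.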
Prepare one or more Application Version/Schema Management Strategies
Managing any application that uses a database requires upgrade scripts that must run exactly once against a particular installation, whether to alter the database schema or to pull down new modules or plugins. Implementations are always application specific, so don't expect your continuous integration product to provide them for you. We elected to write bash scripts that invoke `drush vset` commands to store a site revision number in the database. Each script was named with the revision number it stores upon completion, thereby ensuring that upgrades, migration scripts, and feature enables and reversions only ever execute once.
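The gating pattern can be sketched as follows. A flat file stands in here for the site variable we actually store with `drush vset`; the file path, the variable name, and revision number 42 are all illustrative:

```shell
#!/usr/bin/env bash
# Sketch of run-once upgrade gating. A flat file stands in for the
# DB-stored site revision; path and revision number are illustrative.
set -e
REV_FILE="${REV_FILE:-$(mktemp)}"              # stand-in for the stored revision
CURRENT=$(cat "$REV_FILE" 2>/dev/null || true)
CURRENT=${CURRENT:-0}                          # fresh installs start at revision 0
TARGET=42                                      # each upgrade script carries its own number

if [ "$CURRENT" -ge "$TARGET" ]; then
  echo "revision $TARGET already applied, skipping"
else
  # ... schema changes, module enables, migrations would run here ...
  echo "applying upgrade $TARGET"
  echo "$TARGET" > "$REV_FILE"                 # real version: drush vset site_revision 42
fi
```

Because the stored number only advances on success, re-running the full set of scripts (as a CI build does on every trigger) is harmless: already-applied revisions are skipped.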
Mix in Automated Data Cloning Strategies
Repeatedly testing your build, upgrade, and deployment methodologies is the hallmark of a successful continuous integration strategy. For us, the promise of nightly copies from our production systems into our development environment was the "carrot" needed to get buy-in for continuous integration from our developers.
Again, for our Drupal systems we leveraged bash scripts executing drush commands to export copies of databases and move files and data from production systems to development environments. These cloning scripts were scheduled via cron and always end with an HTTP request to the continuous integration server to trigger a build of the development branch. Now every upgrade script or schema change we develop runs nightly in the development branch, so upgrades to applications like Drupal have executed every day during the testing period, building confidence in the upgrade methodology.
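A sketch of such a nightly clone job follows. The host names, paths, project id, and trigger token are placeholders rather than our real infrastructure, and the `run` helper echoes each command instead of executing it, so the sequence can be read without a production site attached:

```shell
#!/usr/bin/env bash
# Sketch of a nightly production-to-dev clone. Hosts, paths, and the
# trigger token are placeholders; `run` echoes rather than executes.
set -e
run() { CMDS="$CMDS$* ; "; echo "+ $*"; }      # swap the body for `"$@"` to run for real

PROD=prod.example.org
run ssh "$PROD" drush sql-dump --gzip --result-file=/tmp/nightly.sql
run rsync -az "$PROD":/tmp/nightly.sql.gz /tmp/
run rsync -az --delete "$PROD":/var/www/files/ /var/www/files/
run gunzip -f /tmp/nightly.sql.gz
run drush sql-query --file=/tmp/nightly.sql    # load the dump into the dev database
# final step: kick the CI server so the dev branch rebuilds against fresh data
run curl -X POST "https://gitlab.example.org/api/v4/projects/1/trigger/pipeline?token=TOKEN&ref=dev"
```

The closing `curl` uses GitLab's pipeline trigger API, which is what turns a plain cron job into the nightly CI build described above.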
Prepare procedures for merge and deployment requests
Continuous integration servers often come with tools to assist in code review, processing of merge requests, and monitoring the success or failure of builds. Adopting these tools is likely to change how you think about the decision to deploy code into testing and production environments. You can use a straightforward "approved by the developer or QA team" approach, but the tools may also support other approaches, such as requiring a number of upvotes on a merge request.
We elected to have all dev-to-test merge requests reviewed by lead developers, but allowed all developers to push tested code into production, provided another developer performed the merge. Rather than configure the systems so that only a couple of developers could push code into production, we allowed all developers to push to all branches, and added policy-compliance reporting against our CI server to ensure that developers followed the established procedures for merge requests.
Frost with Automated Testing
Once your continuous integration environment is baking in the oven, and you have seasoned it with healthy amounts of code review and automated deployment, you are ready to shift your development paradigm to one that leverages the power of testing frameworks. For custom code and applications, unit testing with tools like PHPUnit provides the most payback, while for managed applications developed by others, integration test frameworks (e.g. SimpleTest) or behavioral tests (e.g. Behat) provide more complete coverage of code you did not write.
Working with Drupal provides a real advantage here, as testing frameworks are built in and existing Drupal tests are very scriptable using drush. Because we have a lot of custom code, we are working toward unit testing all newly developed Drupal 8 code, and incorporating unit tests of specific test groups for in-house software using Drupal 8's PHPUnit integration. We will likely supplement that with some minimal behavioral testing (Behat) in the future, but this is where we've elected to focus our energies to date.
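Running specific test groups per branch is easy to wire into a CI job. The sketch below picks PHPUnit groups based on the branch being built; the group names and the branch-to-group mapping are assumptions, not our exact configuration, and the final command is echoed rather than executed:

```shell
#!/usr/bin/env bash
# Sketch: choose PHPUnit test groups per branch in a CI job.
# Group names and branch mapping are illustrative assumptions.
set -e
BRANCH="${CI_COMMIT_REF_NAME:-dev}"    # provided by GitLab CI; defaults for local runs
case "$BRANCH" in
  master) GROUPS_TO_RUN="custom_modules,integration" ;;   # full suite before production
  *)      GROUPS_TO_RUN="custom_modules" ;;               # fast unit groups elsewhere
esac
CMD="vendor/bin/phpunit -c core/phpunit.xml.dist --group $GROUPS_TO_RUN"
echo "$CMD"    # in the real job this line would execute the command instead of printing it
```

Drupal 8 ships a `core/phpunit.xml.dist` configuration, so the same invocation works on developer machines and in CI.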
Enjoy!
Adopting continuous integration in our development shop paid dividends very quickly. Within the first month, we started catching small flaws in code, either through the asynchronous code reviews facilitated by GitLab's merge request tools or by noticing build failures. Having developers review these deployments before they go out is serving as an excellent cross-training tool. Adding unit testing to our environment is proceeding slowly as we figure out the best approaches to unit testing enterprise code designed to talk directly to Oracle databases, but that was to be expected, and I can see the light at the end of the tunnel. Let's just hope it's not a train!