A tale of robots, gold stars, and code coverage.
At Gridium, we have a Hubot named Gort running in our Slack channels. He does all sorts of useful stuff for us: deploying code, printing the company mailing address, showing the state of our analytics instances, alerting on the state of our sites and servers, making glitter-text graphics… He’s pretty good about telling us when something goes wrong (build failed! page took too long to load!). As a parent, I know that it’s a good idea to notice when things are going well too.
One of our goals from our quarterly camps is to improve test coverage for our front-end Ember apps. In this post, I’ll describe how I hooked up Slack, CircleCI, and Blanket.js to award gold stars whenever a developer increases code coverage, and point out when a build reduces code coverage.
Step 1: Start measuring code coverage with Blanket.js
Blanket.js is “an easy to install, easy to configure, and easy to use JavaScript code coverage library.” Since I wanted to use it with Ember apps, I installed the ember-cli-blanket addon with ember install ember-cli-blanket. This adds Blanket config to tests/index.html and creates a tests/blanket-options.js config file. I added ?coverage to our testem config to make sure that every test run includes a coverage report.
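For reference, here’s roughly what that looks like in a testem config (an illustrative sketch; the framework and launchers shown are just the Ember CLI defaults of the time):

```json
{
  "framework": "qunit",
  "test_page": "tests/index.html?hidepassed&coverage",
  "launch_in_ci": ["PhantomJS"],
  "launch_in_dev": ["PhantomJS", "Chrome"]
}
```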
Now, running the tests creates a coverage.json file, including a total coverage percentage:
{ "coverage": { "statementsTotal": 1289, "statementsCovered": 946, "percentage": 73.39 } }
Step 2: Save coverage reports
Next, I need to save these coverage reports so that I can compare them from one build to the next. CircleCI has a concept of build artifacts that makes this easy. In the circle.yml build config file, I tell Circle to save the coverage report as an artifact so I can get to it later through their API:
```yaml
general:
  artifacts:
    - "coverage.json"
```
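Once a build finishes, CircleCI’s artifacts API lists everything saved for that build. The response is a list of artifact records, roughly like this (values illustrative, heavily abbreviated):

```json
[
  {
    "path": "coverage.json",
    "pretty_path": "$CIRCLE_ARTIFACTS/coverage.json",
    "node_index": 0,
    "url": "https://circleci.com/.../coverage.json"
  }
]
```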
Step 3: Compare coverage between builds
CircleCI has an option to specify a webhook that gets called whenever a build completes. It POSTs a JSON packet to the specified URL, including the repo, build number, previous build number, branch, and the username of whoever triggered the build.
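Setting that up is one more stanza in circle.yml (the URL here is a placeholder for our internal endpoint):

```yaml
notify:
  webhooks:
    - url: https://ops.example.com/blanket
```

The fields the handler cares about look roughly like this (heavily abbreviated, values illustrative):

```json
{
  "payload": {
    "reponame": "some-repo",
    "build_num": 42,
    "branch": "master",
    "username": "octocat",
    "previous": { "build_num": 41 }
  }
}
```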
I configured a webhook to send build reports to our internal ops API. This is a Python app, and the handler for the blanket endpoint finds the current and previous build numbers, fetches the coverage.json for each build via the CircleCI API, and extracts the total coverage percentage. It’s important to get the previous build number from the build report instead of just subtracting one: the previous integer build may be on a different branch, and I want to compare incremental changes on a single branch.
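Here’s a condensed, hypothetical sketch of that handler, not our production code: the Flask app, the /blanket route, and the gridium org are stand-ins, and it assumes a CircleCI API token with access to the project’s artifacts.

```python
import requests
from flask import Flask, request

app = Flask(__name__)

CIRCLE_TOKEN = "..."  # CircleCI API token; loaded from config in real life
API = "https://circleci.com/api/v1/project/gridium/{repo}/{build}"

def coverage_for(repo, build_num):
    """Fetch a build's coverage.json artifact and return its total percentage."""
    artifacts = requests.get(
        (API + "/artifacts").format(repo=repo, build=build_num),
        params={"circle-token": CIRCLE_TOKEN}).json()
    # Pick out the coverage report from the build's artifact list.
    url = next(a["url"] for a in artifacts
               if a["path"].endswith("coverage.json"))
    report = requests.get(url, params={"circle-token": CIRCLE_TOKEN}).json()
    return report["coverage"]["percentage"]

@app.route("/blanket", methods=["POST"])
def blanket():
    build = request.get_json()["payload"]
    current = coverage_for(build["reponame"], build["build_num"])
    previous = coverage_for(build["reponame"],
                            build["previous"]["build_num"])
    report_coverage(build["username"], current, previous)  # see step 4
    return "ok"
```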
Step 4: Post to Slack
We’d already enabled Slack integration with CircleCI via Project Settings / Notifications / Chat Notifications. I used the same webhook URL to post coverage updates to Slack. To make Slack mentions work, I need to know the user’s Slack username, which may be different from their GitHub username. I put this mapping in the ops API script config file, along with the Slack webhook and CircleCI API keys.
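The config file is nothing fancy; if it were YAML, a hypothetical sketch (with placeholder values) might look like:

```yaml
circle_token: "xxxx"
slack_webhook: "https://hooks.slack.com/services/T000/B000/XXXX"
users:
  jane-gh: jane        # GitHub login -> Slack username
  bsmith: brian.smith
```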
To post a message in Slack, I just need to make an HTTP POST to the webhook URL with the content of the message.
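In Python that’s a single call with requests (SLACK_WEBHOOK here stands in for the incoming-webhook URL from the config above; link_names asks Slack to linkify the @mention):

```python
def post_to_slack(text):
    # Slack incoming webhooks accept a simple JSON payload.
    requests.post(SLACK_WEBHOOK, json={"text": text, "link_names": 1})
```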
If a build increases coverage, the script posts a message with a gold star. If a build decreases coverage, the script posts a less-glowing message.
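Tying it together, the comparison and message choice look something like this (the exact wording and emoji are hypothetical, and USERS is the GitHub-to-Slack mapping from the config):

```python
def report_coverage(github_user, current, previous):
    """Announce a coverage change in Slack; stay quiet if nothing moved."""
    slack_user = USERS.get(github_user, github_user)
    if current > previous:
        text = ":star: @{} increased coverage from {:.2f}% to {:.2f}%".format(
            slack_user, previous, current)
    elif current < previous:
        text = "@{} decreased coverage from {:.2f}% to {:.2f}%".format(
            slack_user, previous, current)
    else:
        return  # no change, no message
    post_to_slack(text)
```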
Conclusion
Connecting our code coverage measurements to Slack via CircleCI was relatively easy. I like that it reinforces the value of writing tests, and it’s nice to get immediate public credit for improvements (or not).
Of course, measuring code coverage isn’t super useful by itself. There’s definitely some noise to the measurements. For example, commenting out a chunk of code will likely increase the coverage number, even without any change in tests. We don’t put much value on the absolute coverage numbers, but it’s good to see an upward trend.