
Mark is a graph advocate and field engineer for Neo Technology, the company behind the Neo4j graph database. As a field engineer, Mark helps customers embrace graph data and Neo4j, building sophisticated solutions to challenging data problems. When he's not with customers, Mark is a developer on Neo4j and writes about his experiences as a graphista on a popular blog at http://markhneedham.com/blog. He tweets at @markhneedham.

The Tracer Bullet Approach: An Example

01.03.2013

A few weeks ago my former colleague Kief Morris wrote a blog post describing the tracer bullet approach he's used to set up a continuous delivery pipeline on his current project.

The idea is to get the simplest implementation of a pipeline in place, prioritizing a fully working skeleton that stretches across the full path to production over fully featured, final-design functionality for each stage of the pipeline.

Kief goes on to explain in detail how we can go about executing this, and it reminded me of a project I worked on almost three years ago where we took a similar approach.

We were building an internal application for an insurance company and had no idea how difficult it was going to be to put something into production, so we decided to find out on the first day of the project.

We started small – our initial goal was to work out what the process would be to get a ‘Hello world’ text file onto production hardware.

Although we were only putting a text file into production, we wanted to make the pipeline as similar as possible to how it would eventually be, so we set up a script to package the text file into a ZIP file. We then wired up a continuous integration server to generate this artifact on each run of the build.
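That packaging step is simple enough to sketch. The following is only an illustration of the idea, not the script the project actually used; the file and artifact names are invented:

```python
# Minimal sketch of the first tracer bullet: produce the 'Hello world'
# text file and zip it up as the build artifact a CI server would
# publish on each run. Names here are illustrative.
import zipfile

ARTIFACT = "hello-1.0.zip"

# The 'application' at this stage is just a text file.
with open("hello.txt", "w") as f:
    f.write("Hello world\n")

# Package it the same way a real release would be packaged.
with zipfile.ZipFile(ARTIFACT, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("hello.txt")

# Sanity-check the artifact's contents, as a CI step might.
with zipfile.ZipFile(ARTIFACT) as zf:
    print(zf.namelist())  # → ['hello.txt']
```

The point is not the ZIP file itself but that every later deployment flows through the same package-then-publish shape.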

What we learnt from this initial process was how far we’d be able to automate things. We were working closely with one of the guys in the operations team and he showed us where we should deploy the artifact so that he could pick it up and put it into production.

Our next step after this was to do the same thing, but this time with a web application just serving a 'Hello world' response from one of the endpoints.
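A 'Hello world' endpoint of that kind can be sketched with Python's standard-library HTTP server; the project itself would have used whatever web stack the team had chosen, so treat this purely as an illustration of the step:

```python
# Sketch of the second tracer bullet: a web app whose only job is to
# answer 'Hello world', so the deployment path can be exercised before
# any real functionality exists.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello world"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the output quiet
        pass

# Port 0 asks the OS for any free port.
server = HTTPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Hit the endpoint once, the way a smoke test in the pipeline might.
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    response_text = resp.read().decode()
server.shutdown()

print(response_text)  # → Hello world
```

A smoke test like the request at the end is often the first automated check a tracer-bullet pipeline grows.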

This was relatively painless but we learnt some other intricacies of the process when we wanted to deploy a script that would make changes to a database.

Since these changes had to be verified by a different person, they preferred that we put the SQL scripts in a separate artifact which they could pick up.

We found all these things out within the first couple of weeks, which made our lives much easier when we put the application live a couple of months down the line.

Although there were a few manual steps in the process I've described, we still found driving out the path to production early a useful exercise.

Read Kief’s post for ideas about how to handle some of the problems you’ll come across when it’s all a bit more automated!

Published at DZone with permission of Mark Needham, author and DZone MVB. (source)
