
Matthew is a Linux Consultant and Systems Administrator specialising in removing human error from configuration and deployment through the use of automation. He has supported systems of all sizes, from small businesses to home entertainment and major cloud providers. Matthew is a DZone MVB (not an employee of DZone) and has posted 9 posts at DZone. You can read more at his website.

Deploy and Roll-back System Configs with Capistrano, mcollective and Puppet - Part 2

02.28.2012

Table of contents for Continuous Delivery of Server Configurations


  1. Turning a 5 Hour Manual Build and Deploy Routine Into a Single Code Commit - Part 1
  2. Part 2
  3. Putting the butler to the test - Part 3

I’ve been playing around with Capistrano over the past few weeks, and I’ve recently created a way to use the power of Capistrano’s “deploy” and “rollback” features with Puppet and MCollective to give me complete control over the deployment of my system configurations.

We’ll start with Capistrano, as it’s the key to all of this. You’ll need the following gems installed:

  • capistrano
  • capistrano-ext
  • railsless-deploy

I’ve taken to having minimalist cap files, so “cd” into your Puppet manifests directory and type the following:

capify .


Now edit the Capfile so it looks as follows:

load 'deploy' if respond_to?(:namespace) # cap2 differentiator
require 'capistrano'
require 'rubygems'
require 'railsless-deploy'
load 'config/deploy'

We’re also going to use the ‘multistage’ extension to make sure that we only deploy to our production environment deliberately, so update the config/deploy.rb file and make it look like this:

set :stages, %w[staging production]
set :deploy_to, "/usr/share/puppet/configuration"
set :deploy_via, :export
set :application, "Puppet Manifests"
set :repository, "git://gitserver/puppet.git"
set :scm, :git
set :default_stage, "staging"
set :use_sudo, false

require 'capistrano/ext/multistage'

Note that the stages need to be set before you include the multistage extension, otherwise they won’t get set up.
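A library-free sketch of why the order matters (this is not Capistrano’s actual internals, just the same pitfall in miniature): anything that reads a setting at load time only sees values defined before it was loaded.

```ruby
# A stand-in for the multistage extension (NOT Capistrano's real code):
# it builds one task per stage it can see at the moment it is "loaded".
config = {}
load_multistage = lambda { (config[:stages] || []).map { |s| "cap #{s}" } }

# "Load" before setting :stages and you get no stage tasks at all...
too_early = load_multistage.call            # => []

# ...set :stages first and the extension sees them.
config[:stages] = %w[staging production]
tasks = load_multistage.call                # => ["cap staging", "cap production"]
puts tasks.inspect
```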

If you now run cap -vT then you should see the following amongst the other tasks:

cap production    # Set the target stage to `production'.
cap staging       # Set the target stage to `staging'.


This means that we can now run “cap staging deploy” or “cap production deploy” depending on what we want to do.  Note also that we set the default stage to staging so that we have to explicitly state when we want to deploy to production – this will hopefully cut down on any accidental incidents of untested configs making it to the live servers!

Finally, note that we’ve disabled sudo – this makes things more secure (you can’t have your deploy user executing random code on the servers!) and also makes it easier to configure the server (no editing of /etc/sudoers).

The next step is to create the user account on the puppetmaster for deployment.  For simplicity’s sake, we’ll create a user called “deploy”:

root@puppetmaster # useradd -m deploy


And assign it ownership of the puppet manifest directory (/usr/share/puppet/configuration in our case):

root@puppetmaster # chown -Rvf deploy: /usr/share/puppet/configuration


As far as I can tell, Puppet doesn’t care about write permissions on the manifests/modules directories, so as long as the “puppet” user can read the manifests/modules, we’re all good.

Now set up access for the user. I have deliberately not set a password for this account, as I use SSH keys, which are not as easily brute-forced!

root@puppetmaster # su - deploy

deploy@puppetmaster > vim ~/.ssh/authorized_keys


Place the public SSH key of the user who will be running the “cap production deploy” command into the authorized_keys file listed above.
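If that user doesn’t have a dedicated key pair yet, generating one is quick. The path and hostname below are just examples, not from my setup; adjust to suit:

```shell
# Generate a dedicated, passphrase-less key pair for the deploy workflow.
# (Example path only -- put it wherever your tooling expects.)
rm -f /tmp/deploy_key /tmp/deploy_key.pub
ssh-keygen -t rsa -b 2048 -N "" -C "deploy" -f /tmp/deploy_key

# Then append the public key to ~deploy/.ssh/authorized_keys on the
# puppetmaster (and on the gateway too, if you use one), for example:
# ssh-copy-id -i /tmp/deploy_key.pub deploy@puppetmaster
```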

A quick gotcha: if you’re going to use a staging server with SSH keys, make sure that the key of the user account running “cap production deploy” is on both the gateway server and the puppetmaster, otherwise this will fail!


SSH to the puppetmaster as your deploy user from the account that will be running “cap production deploy” so that you don’t get any SSH key errors.

Now setup your config for the staging environment in config/deploy/staging.rb:

set :user, "deploy"
role :web, "staging"
after 'deploy:symlink', 'puppet:run'

and do the same for config/deploy/production.rb:

set :gateway, "deploy@support-gateway"
set :user, "deploy"
set :deploy_via, :copy
role :web, "puppetmaster" # set this to the fully qualified domain name of your puppetmaster
after 'deploy:symlink', 'puppet:run'
after 'deploy:rollback', 'puppet:run'

This means that we can override the “defaults” from deploy.rb for each environment.
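Conceptually, each stage file just re-runs “set” on top of what deploy.rb already defined, so later values win. A rough sketch of that behaviour (this is a model, not Capistrano’s real internals):

```ruby
# Rough model of per-stage overrides: Capistrano's `set` is
# last-write-wins, so a stage file layered on deploy.rb acts like a merge.
defaults   = { user: "deploy", deploy_via: :export }
production = { deploy_via: :copy, gateway: "deploy@support-gateway" }

effective = defaults.merge(production)
puts effective[:deploy_via]   # :copy overrides the default :export
```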

You may have noticed the following line:

after 'deploy:symlink', 'puppet:run'


This executes a custom task in a custom namespace once the “current” symlink has been updated to force a puppet run.  The issue at the moment is that this task doesn’t exist yet!

Update config/deploy.rb to look like the following:

set :stages, %w[staging production]
set :application, "Puppet Manifests"
set :repository, "git://localhost/puppet.git"

set :scm, :git

role :web, "puppetmaster" # set this to the fully qualified domain name of your puppetmaster

require 'capistrano/ext/multistage'
require 'mcollective'

#### MCOLLECTIVE STUFF ####

class MCProxy
    include MCollective::RPC

    def initialize(agent)
        @agent = rpcclient(agent)
    end

    def runaction(action, args)
        printrpc @agent.send(action, args)
    end
end

namespace :puppet do
    desc <<-DESC
Run Puppet to pull the latest versions
DESC
    task :run do
        puppet = MCProxy.new("puppetd")
        puppet.runaction("runonce", :concurrency => '2')
    end
end

The important bits to pay attention to are the “require 'mcollective'” line (which loads the MCollective libraries) and the MCProxy class (thanks to @ripienaar for helping me with that!).

The MCProxy class enables us to create the puppet:run task and call the “puppetd” agent – note that you could call any agent with any argument you want here, we’re just calling the puppet one.
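If you want to see the dispatch trick in isolation, here’s a stand-in sketch that doesn’t need MCollective installed at all. FakeAgent is made up for illustration; only the pattern is the same, with the action name forwarded via Ruby’s “send”:

```ruby
# FakeAgent is a made-up stand-in for rpcclient("puppetd"); only the
# dispatch pattern is real: the action name is forwarded with `send`.
class FakeAgent
  def runonce(args)
    "would run puppet once (args: #{args.inspect})"
  end
end

class Proxy
  def initialize(agent)
    @agent = agent
  end

  def runaction(action, args)
    @agent.send(action, args)   # the same trick MCProxy uses
  end
end

proxy = Proxy.new(FakeAgent.new)
puts proxy.runaction("runonce", :concurrency => '2')
```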

Now, I’ll leave the rest of the configuration to you (setting up SSH keys, users, etc.); however, when you run “cap production deploy”, you should see your puppet configs get checked out of git and SCP’d via the gateway to your puppet master, followed by MCollective executing a puppet run across your entire server estate.

The final task is to configure puppet to read the new configs.  I’m assuming that you have all your manifests in one huge repo here, so just update your puppet config to point to the correct directory:

[puppetmasterd]
        vardir = /var/lib/puppet
        logdir = /var/log/puppet
        rundir = /var/run/puppet
        ssldir = $vardir/ssl
        modulepath = /usr/share/puppet/configuration/current/manifests

If you don’t know Capistrano, “current” is a symlink which gets updated every time you successfully deploy or roll back. Puppet won’t let you set /etc/puppet as a symlink, so you have to adapt the configs to point to the “current” release of your configs.
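You can simulate the deploy/rollback symlink dance in a few lines of Ruby (illustrative paths only; real Capistrano releases are timestamped directories):

```ruby
require 'tmpdir'
require 'fileutils'

# Illustrative layout only -- real Capistrano releases use timestamps.
root = Dir.mktmpdir
%w[releases/v1 releases/v2].each { |d| FileUtils.mkdir_p(File.join(root, d)) }

current = File.join(root, 'current')
File.symlink(File.join(root, 'releases/v2'), current)   # deploy: current -> v2

File.unlink(current)                                    # rollback is just...
File.symlink(File.join(root, 'releases/v1'), current)   # ...repointing the link
puts File.readlink(current)
```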

Now do the initial setup:

cap production deploy:setup # creates the directories required

root@puppetmaster # service puppetmaster restart


and deploy the first version of your modules:

cap production deploy


Let me know how you get on…

Next article in the series

Source:  http://www.threedrunkensysadsonthe.net/2011/05/deploy-and-roll-back-system-configs-with-capistrano-mcollective-and-puppet/

Published at DZone with permission of Matthew Macdonald-wallace, author and DZone MVB.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)

Comments

Jeroen Rosenberg replied on Fri, 2012/03/02 - 3:56am

Nice article. It got me thinking about using it within our company. Currently we have just one puppetmaster which holds the configuration for our entire test, acceptance and production environments. The puppetmaster is automatically updated by pulling from github via a cron job. So, I guess with this setup using capistrano wouldn't make much sense.

However, I could imagine that we use a different puppetmaster for test, acceptance and production. Then capistrano could deploy approved puppet changes to the environment of your choice. But then what's the benefit over using different branches to accomplish this form of staging?

The mcollective part also looks very concise and nice. However, the initial setup of mcollective is quite a lot of work and seems like a bit of overkill in this matter. Theoretically you could just run puppet as a daemon to pull the approved configuration (whether directly from github or through capistrano). Besides, if you want this kind of 'push' strategy to roll out puppet changes you'd rather use a tool like fabric, since puppet is more 'pull' based imho.

Also, the title of your article suggests you could benefit from the rollback capability of capistrano. I gave this a little thought, because clearly puppet is missing this functionality. However, when you think about it, there's no easy way to do a full rollback of your system. Imagine you remove a file or upgrade a package to the latest version. Reverting to a previous version of the puppet repository and running puppet will not undo those changes.

I wonder what are your thoughts about this. Anyway, thanks for the input.
