
Patrick Debois has been working on closing the gap between development and operations for many years. In 2009 he organized the first conference, and since then the world has been stuck with the term 'devops'. He is always seeking opportunities to optimize global IT instead of making local optimizations. Patrick is a DZone MVB.

Provisioning Workflow - Using vSphere and Puppet


On a recent project we explored how to further integrate Puppet and vSphere to get EC2-like provisioning, all command-line based.

We leveraged the vijava (VI Java) interface for vSphere. For the interested reader, I also wrote another blog post on programming options for VMware vSphere, and why libvirt for ESX was not (yet) an option.

The resulting proof-of-concept code can be found in the jvspherecontrol project on GitHub.

The premise for starting the workflow is that the server name is added to DNS first.

  • The name: <apptype>-<environment>-<instance>
    • Example: web-prod-1.<domain>
  • The IP: <IP-prefix>-<vlan-id>-<local ip>
    • Example: VLAN 30
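The naming convention can be sketched as a pair of helper functions. This is a minimal illustration; the function names and the example domain are assumptions, not part of the actual tooling.

```python
# Hypothetical helpers illustrating the naming convention above:
# <apptype>-<environment>-<instance>, e.g. web-prod-1.<domain>.

def build_hostname(apptype, environment, instance, domain="example.com"):
    """Compose a server name following the convention."""
    return "%s-%s-%s.%s" % (apptype, environment, instance, domain)

def parse_hostname(fqdn):
    """Split an FQDN back into its convention parts."""
    shortname, _, domain = fqdn.partition(".")
    apptype, environment, instance = shortname.split("-")
    return {"apptype": apptype, "environment": environment,
            "instance": int(instance), "domain": domain}

print(build_hostname("web", "prod", 1))  # web-prod-1.example.com
```

Keeping the convention in code like this means every later step (DNS, VLAN placement, load-balancer mapping) can be derived from the name alone.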

In our situation, a typical server would have no default route, but would communicate with the outside world only through services mapped through a load balancer. This means that all VLAN and load-balancing mappings would have been created beforehand (that could be automated as well). We standardized DNS entries per VLAN for these kinds of services: proxy-30, ntp-30, dns-30.
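The per-VLAN service entries follow a simple pattern, which a short sketch can make explicit. The function and the service list are illustrative assumptions based on the examples above.

```python
# Sketch: derive the standardized per-VLAN service DNS entries
# (proxy-30, ntp-30, dns-30) for a given VLAN id.

SERVICES = ("proxy", "ntp", "dns")

def vlan_service_names(vlan_id, services=SERVICES):
    """Return the standardized service names for one VLAN."""
    return ["%s-%d" % (svc, vlan_id) for svc in services]

print(vlan_service_names(30))  # ['proxy-30', 'ntp-30', 'dns-30']
```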

We didn't want to run a DHCP/boot option in each VLAN, so we decided that a newly created machine would boot in a separate 'boot VLAN' to do the initial kickstart. We would then disable (put into a disconnected state in vSphere) that boot network interface after the provisioning was done. The rest of the workflow is pretty standard.
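The boot-VLAN step can be sketched as a small state change. The VM model and function below are hypothetical stand-ins; in practice the NIC disconnect goes through the vSphere API (as jvspherecontrol does via vijava).

```python
# Minimal, self-contained sketch of the boot-VLAN workflow described
# above. The dict-based VM model is an assumption for illustration.

def provision(vm):
    """Kickstart the VM via the boot VLAN, then disconnect that NIC."""
    vm["nics"]["boot-vlan"]["connected"] = True   # boot + kickstart happen here
    vm["provisioned"] = True
    vm["nics"]["boot-vlan"]["connected"] = False  # disable once provisioning is done
    return vm

vm = {"name": "web-prod-1",
     "provisioned": False,
     "nics": {"boot-vlan": {"connected": False},
              "vlan-30": {"connected": True}}}
provision(vm)
```

The key property is the end state: the production NIC stays connected while the boot NIC ends up disconnected, so the machine never lingers on the boot VLAN.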

Recently I've heard of an alternative way of tackling this problem: it involves creating JEOS ISO images on the fly for each server, with the required network settings baked in. The newly created ISO would be mounted on the virtual machine, and it would boot from there. This avoids the need for a separate boot network interface that you have to disable afterwards.
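For the ISO-per-server approach, the per-server piece is really just rendering the network settings into the image. A hedged sketch, assuming a kickstart-style `network` line; the template, IP, and hostname values are illustrative, not the actual tooling:

```python
# Sketch: render the static network settings that would be baked into
# an on-the-fly JEOS ISO for one server. Values are example assumptions.

KICKSTART_NET = ("network --bootproto=static --ip={ip} "
                 "--netmask={netmask} --hostname={hostname}")

def render_network(ip, netmask, hostname):
    """Produce the network line for one server's generated ISO."""
    return KICKSTART_NET.format(ip=ip, netmask=netmask, hostname=hostname)

line = render_network("10.0.30.5", "255.255.255.0", "web-prod-1.example.com")
print(line)
```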

Because of the VLAN-ed approach, the puppetmaster could not contact the Puppet clients directly. To make this work, we leveraged MCollective to have the clients listen to an AMQP server.
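The inversion here is that clients subscribe to a queue and the master publishes requests to it, rather than connecting in. A rough sketch of such a request envelope; the field names are loosely modeled on MCollective requests and are assumptions, not its actual wire format:

```python
# Illustrative sketch of a publish-style request: instead of the master
# contacting each client, clients listening on AMQP receive messages
# like this and decide (via the filter) whether to act.

import json

def build_request(agent, action, data, filters=None):
    """Serialize a request that subscribed clients can match against."""
    return json.dumps({"agent": agent, "action": action,
                       "data": data, "filter": filters or {}})

msg = build_request("puppetd", "runonce", {},
                    {"fact": [{"fact": "vlan", "value": "30"}]})
```

Filtering on facts (here, a hypothetical `vlan` fact) is what lets one published message target only the machines in a given VLAN.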

I'd love to hear about your provisioning approach! Are you doing something similar? Totally different? Any tricks to improve this? Thanks for sharing!

Published at DZone with permission of Patrick Debois, author and DZone MVB.
