Article summary
Often, I want to develop a Chef configuration that can be applied to a whole cluster of systems. During development, I may not have access to the final virtual (or physical) machines that will make up the cluster. To resolve this problem, I construct a Vagrant cluster that allows me to develop locally.
Instead of using a single Vagrant, the Vagrant cluster contains at least one Vagrant for each role I am developing for. I tweak my Vagrantfile so that it will construct the cluster based on the contents of the standard JSON files used to define Chef nodes. This integrates everything nicely into the Chef server environment and allows me to easily work with a representation of the final production systems.
1. Expanding the Chef Repository Layout
I start with the standard Chef repository layout for use with Chef server (or Solo, if you’re determined).
To this, I add a nodes directory. While node configuration data is usually kept on the Chef server, it is needed locally for Chef Solo, and I like to store it in source control. Data for existing nodes on a Chef server can be easily dumped to JSON:
knife node show NODE_NAME --format json
And, of course, node data on a Chef server can be updated from JSON:
knife node from file nodes/NODE_NAME.json
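For example, using the node name from the examples below, the round trip between the Chef server and the local repository looks like this:
# Dump the node from the Chef server into the local repository
knife node show myapp-vagrant-app-1 --format json > nodes/myapp-vagrant-app-1.json
# Push local changes back up to the Chef server
knife node from file nodes/myapp-vagrant-app-1.json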
The final directory structure for my repository is as follows:
chef-repo/
├── LICENSE
├── README.md
├── Rakefile
├── certificates
├── chefignore
├── config
├── cookbooks
├── data_bags
├── environments
├── nodes
└── roles
2. Updating the Node JSON
A basic JSON file to describe an application server node might look something like:
{
  "name": "myapp-vagrant-app-1",
  "chef_environment": "_default",
  "json_class": "Chef::Node",
  "automatic": {
  },
  "normal": {
  },
  "chef_type": "node",
  "default": {
    "myapp": {
      "hostnames": [
        "foo.example.com"
      ]
    }
  },
  "override": {
  },
  "run_list": [
    "role[vagrant]",
    "role[app-server]"
  ]
}
To this, I add some extra attributes that won’t interfere with Chef, but that I can use later with Vagrant. I specifically add a value to indicate that the node is intended to be a Vagrant, the private IP address to use, and a name (if it’s different from the actual node name):
{
  "normal": {
    "is_vagrant": "true",
    "vagrant_ip": "192.168.0.2",
    "vagrant_name": "myapp-vagrant-app-1"
  }
}
This results in:
{
  "name": "myapp-vagrant-app-1",
  "chef_environment": "_default",
  "json_class": "Chef::Node",
  "automatic": {
  },
  "normal": {
    "is_vagrant": "true",
    "vagrant_ip": "192.168.0.2",
    "vagrant_name": "myapp-vagrant-app-1"
  },
  "chef_type": "node",
  "default": {
    "myapp": {
      "hostnames": [
        "foo.example.com"
      ]
    }
  },
  "override": {
  },
  "run_list": [
    "role[vagrant]",
    "role[app-server]"
  ]
}
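As a quick sanity check, the merged file can be parsed from the command line to confirm that the JSON is valid and the Vagrant attributes are in place (this one-liner is just an illustration, not part of the workflow):
ruby -rjson -e 'n = JSON.parse(File.read("nodes/myapp-vagrant-app-1.json")); p n["normal"].values_at("is_vagrant", "vagrant_ip", "vagrant_name")'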
3. A Revamped Vagrantfile
I modify the default Vagrantfile to add code to locate my node JSON files, parse them to detect the ones that are for use with Vagrant ("is_vagrant":"true"), and then define the individual Vagrants based on the parsed JSON.
Locating the node JSON files in the nodes directory:
root_dir = File.dirname(File.expand_path(__FILE__))
nodes = Dir[File.join(root_dir, 'nodes', '*.json')]
Parsing JSON for Vagrant-compatible nodes and defining the Vagrants:
nodes.each do |file|
  node_json = JSON.parse(File.read(file))
  # Only define Vagrants for nodes explicitly marked as such
  if node_json["normal"]["is_vagrant"] == "true"
    # Fall back to the Chef node name if no Vagrant-specific name is given
    vagrant_name = node_json["normal"]["vagrant_name"] || node_json["name"]
    vagrant_ip = node_json["normal"]["vagrant_ip"]
    config.vm.define vagrant_name do |vagrant|
      vagrant.vm.hostname = vagrant_name
      vagrant.vm.network :private_network, ip: vagrant_ip
    end
  end
end
Conveniently, I can also tell Vagrant to immediately provision the new Vagrant cluster using my Chef server, making use of the same node configuration described in the JSON file stored in my Chef repository:
config.vm.provision :chef_client do |chef|
  chef.chef_server_url = "https://chef.example.com"
  chef.validation_key_path = "chef-validator.pem"
  chef.delete_client = true
end
Something very similar could be done using the Chef Solo provisioner and the local Chef repository (see the sketch after the note below).
Note – I have Vagrant delete the client authorization on the Chef server when the Vagrant is destroyed, so that when I bootstrap the Vagrant anew, a new client authorization will be created.
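Here is a minimal sketch of the Chef Solo alternative, assuming the provisioner moves inside the per-node define block (so each machine gets its own run list) and that cookbooks, roles, and data bags come from the repository layout above:
config.vm.define vagrant_name do |vagrant|
  vagrant.vm.hostname = vagrant_name
  vagrant.vm.network :private_network, ip: vagrant_ip
  vagrant.vm.provision :chef_solo do |chef|
    # Paths are relative to the Vagrantfile, matching the repository layout
    chef.cookbooks_path = "cookbooks"
    chef.roles_path = "roles"
    chef.data_bags_path = "data_bags"
    # Reuse the run list and default attributes from the node JSON
    chef.run_list = node_json["run_list"]
    chef.json = node_json["default"]
  end
end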
4. Putting It All Together
The final Vagrantfile looks something like this:
# -*- mode: ruby -*-
# vi: set ft=ruby :

require 'json'

root_dir = File.dirname(File.expand_path(__FILE__))
nodes = Dir[File.join(root_dir, 'nodes', '*.json')]

Vagrant.configure("2") do |config|
  config.ssh.forward_agent = true

  # Whichever is appropriate…
  #
  # config.vm.box = "centos6"
  #
  config.vm.box = "precise64"

  nodes.each do |file|
    node_json = JSON.parse(File.read(file))
    if node_json["normal"]["is_vagrant"] == "true"
      vagrant_name = node_json["normal"]["vagrant_name"] || node_json["name"]
      vagrant_ip = node_json["normal"]["vagrant_ip"]
      config.vm.define vagrant_name do |vagrant|
        vagrant.vm.hostname = vagrant_name
        vagrant.vm.network :private_network, ip: vagrant_ip
      end
    end
  end

  config.vm.provision :chef_client do |chef|
    chef.chef_server_url = "https://chef.example.com"
    chef.validation_key_path = "chef-validator.pem"
    chef.delete_client = true
  end
end
Now, I can easily check the status of all of my Vagrant nodes:
vagrant status
And, when I want, spin up (or down) the Vagrant cluster:
vagrant up
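Because each node becomes a named machine in a multi-machine Vagrant environment, individual cluster members can also be targeted by name (using the vagrant_name from the node JSON):
vagrant up myapp-vagrant-app-1
vagrant destroy myapp-vagrant-app-1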
And voila! I have my very own Vagrant cluster for developing a Chef configuration for a new project’s infrastructure.
Note – This method of defining Vagrants is specifically tailored to local Vagrant providers (e.g. VirtualBox and VMware Fusion) which support defining private IP addresses. Alternatively, you could make use of Chef server’s attribute store to dynamically query a node’s IP address. This could then be used by Vagrant for local Vagrant providers, or cloud providers.
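For example (a rough sketch, assuming the node has converged at least once so that the Chef server holds its automatic attributes), knife can query the server for the address it has recorded:
knife search node "role:app-server" -a ipaddress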