Using Vagrant AWS with Capistrano

Vagrant 1.1 was recently released, adding support for virtualization providers other than VirtualBox. Among the providers now available is one for AWS. In switching my Vagrant workflow from VirtualBox to AWS, I ran into a problem; and in solving it, I discovered a better way to integrate Vagrant with Capistrano.

1. Vagrant Setup

Vagrant 1.1 was released recently. This release adds support for provider plugins, including a new, freely available provider for AWS. Rather than using VirtualBox on your local machine as the virtualization provider, you can now provision Vagrant-managed VMs in the cloud. This makes it much easier to experiment with setups that need more resources than your workstation can offer, such as multi-VM environments or VMs that require lots of RAM.

No Longer Distributed as a Gem

While Vagrant was initially distributed as a Ruby gem, Vagrant 1.0 introduced packages as the preferred installation method. With Vagrant 1.1+, it is no longer distributed as a gem at all. For more on why this change was made, see Mitchell’s blog.

1.1.x Installer Downloads

To get Vagrant, download an installer for your platform from the Vagrant downloads page. Once you’ve installed Vagrant, you can install the vagrant-aws provider: $ vagrant plugin install vagrant-aws

2. AWS Setup

Set up Account / Billing Info

To use the vagrant-aws provider, you’ll need an AWS account. If you don’t have one, you can set one up here.

Create an IAM User

You’ll probably also want to create an IAM user specifically for use with Vagrant. This allows you to limit access and revoke it if the account is compromised. You can find more info on IAM users and best practices for managing access to your AWS account here.


Once you’ve created an IAM user and downloaded the API keys, you’ll also want to generate a new SSH keypair from the EC2 management console. Name the keypair ‘vagrant’, too, and save it to ~/.ssh/aws/vagrant.pem.

Security Group

Finally, you should also set up a separate Security Group to use with these VMs. You’ll need access to at least port 22 for SSH, though opening ports 80 and 443 for HTTP(S) traffic might also be useful depending on your particular needs.

3. Project Setup

To demonstrate how Vagrant (and this new provider) might be used in the context of building infrastructure with Chef, let’s start with the shell of a project laid out as Justin and I described previously in our posts Chef Solo with Capistrano and Simplifying Chef Solo Cookbook Management with Berkshelf.

Vagrant Box Setup

We’ll assume we already have Ruby 1.9.3 and Bundler installed, and that we have an existing project laid out as described above.

Let’s start by adding a dummy box to Vagrant that will let us work with the AWS provider. The new box format requires only that boxes contain a metadata.json file specifying which provider to use. Beyond that, each provider is free to require other files of its own. The VirtualBox provider’s box format, for example, includes VMDK disk images. The AWS provider does not require any disk images, but allows for the inclusion of a Vagrantfile specifying some default values. Here we can use the empty example box provided in the vagrant-aws project repo — we’ll specify all the values we need in our project’s Vagrantfile.
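Under the hood, such a dummy box is little more than that metadata.json in a tarball. As an illustrative sketch (not a required step, since the vagrant-aws repo already provides the box), you could generate the file yourself:

```ruby
require 'json'

# An AWS box's metadata.json only needs to name its provider.
metadata = { "provider" => "aws" }
File.write("metadata.json", JSON.pretty_generate(metadata))

# Packaging it into a box is then just:
#   tar czf dummy.box metadata.json
```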

Add the dummy box to Vagrant: $ vagrant box add dummy

And then see that it’s now available for use: $ vagrant box list

Now that we’ve imported our dummy box, we can add a Vagrantfile to our project. $ vagrant init dummy

Let’s edit this Vagrantfile to include our API keys, an AMI to use, and a few other details:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "dummy"

  config.vm.provider :aws do |aws|
    aws.access_key_id = ENV['AWS_ACCESS_KEY_ID']
    aws.secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
    aws.keypair_name = "vagrant"
    aws.ssh_private_key_path = "~/.ssh/aws/vagrant.pem"

    aws.ami = "ami-7747d01e"
    aws.ssh_username = "ubuntu"
  end
end
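Since the Vagrantfile reads the API keys from the environment, a missing variable otherwise surfaces as an opaque authentication error at vagrant up time. As a small sketch of my own (not part of vagrant-aws), you could fail fast with a helper like this near the top of the Vagrantfile:

```ruby
# Report which AWS credential variables are unset or empty.
# Pass ENV in real use; a plain Hash works for testing.
def missing_aws_credentials(env = ENV)
  %w(AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY).reject do |var|
    env[var] && !env[var].empty?
  end
end

# In the Vagrantfile, before Vagrant.configure:
#   missing = missing_aws_credentials
#   abort "Please set: #{missing.join(', ')}" unless missing.empty?
```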

We should now be able to start it up: $ vagrant up --provider=aws

And log in: $ vagrant ssh

We can terminate the instance by running vagrant destroy. (We should be able to see all of this from the AWS Console.)

4. Capistrano Changes

In order to incorporate Vagrant in our project, we’ll make a new Capistrano stage for it. Let’s create a file config/deploy/vagrant.rb with the contents:

set :environment_name, "vagrant"
set :ssh_config, "var/#{environment_name}_ssh_config"
set :ssh_host, "default"
ssh_options[:config] = [ssh_config]
server ssh_host, :app, :web, :db
set :server_ip, ssh_host

set :chef_binary, "/usr/bin/chef-solo"
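For reference, the file that ssh_options[:config] points at is ordinary OpenSSH client configuration. vagrant ssh-config emits something like the following (the hostname and paths here are illustrative):

```
Host default
  HostName ec2-203-0-113-10.compute-1.amazonaws.com
  User ubuntu
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  IdentityFile /Users/you/.ssh/aws/vagrant.pem
```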

Let’s also make a few changes to config/deploy.rb so it looks like:

require "bundler/capistrano"
require "capistrano/ext/multistage"
set :stages, %w(vagrant)
set :default_stage, "vagrant"
default_run_options[:pty] = true

set :application, "bootstrapper"
set :repository,  "."
set :scm, :none

namespace :ssh do

  desc "Generate var/vagrant_ssh_config."
  task :generate_config do
    puts "Generating #{ssh_config}..."
    system("vagrant ssh-config > #{ssh_config}")
  end

  desc "Destroy var/vagrant_ssh_config."
  task :destroy_config do
    puts "Destroying #{ssh_config}..."
    system("rm #{ssh_config}")
  end

  desc "Pretty-print SSH config."
  task :show_config do
    require 'pp'
    netssh_config = Net::SSH::Config.for(ssh_host, [ssh_config])
    pp netssh_config
  end

  desc "SSH in."
  task :default do
    system("ssh -F #{ssh_config} #{ssh_host}")
  end
end
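The show_config task leans on Net::SSH::Config.for, which parses OpenSSH config files into a symbol-keyed hash (e.g. :host_name, :user, :keys). As a rough stdlib-only illustration of that mapping (the real Net::SSH also handles Host wildcards, multiple files, and many more keywords), the core idea looks like:

```ruby
# Minimal sketch of what Net::SSH::Config.for does with our generated
# file: find the matching Host block and symbolize its settings.
def ssh_settings_for(host, config_text)
  settings = {}
  current  = nil
  config_text.each_line do |line|
    key, value = line.strip.split(/\s+/, 2)
    next if key.nil? || key.empty?
    if key.casecmp("Host").zero?
      current = value
    elsif current == host
      case key.downcase
      when "hostname"     then settings[:host_name] = value
      when "user"         then settings[:user]      = value
      when "identityfile" then (settings[:keys] ||= []) << value
      end
    end
  end
  settings
end
```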

namespace :bootstrap do

  desc "Install Chef."
  task :default do
    set :default_shell, "bash"
    set :user, Net::SSH::Config.for(ssh_host, [ssh_config])[:user]
    set :id_file, Net::SSH::Config.for(ssh_host, [ssh_config])[:keys][0]
    set :hostname, Net::SSH::Config.for(ssh_host, [ssh_config])[:host_name]
    if exists?(:id_file)
      system("cd chef && knife bootstrap --bootstrap-version '10.16.2' -d chef-solo -x #{user} -i ../#{id_file} --sudo #{hostname}")
    else
      system("cd chef && knife bootstrap --bootstrap-version '10.16.2' -d chef-solo -x #{user} --sudo #{hostname}")
    end
  end
end

namespace :berks do

  desc "Install cookbooks from the Berksfile to chef/cookbooks/."
  task :install do
    system("berks install --path chef/cookbooks/")
  end
end

namespace :chef do

  desc "Upload chef/ and run chef solo against it."
  task :default do
    set :user, Net::SSH::Config.for(ssh_host, [ssh_config])[:user]
    set :default_shell, "bash"
    system("tar czf 'chef.tar.gz' -C chef/ .")
    upload("chef.tar.gz", "/home/#{user}", :via => :scp)
    run("rm -rf /home/#{user}/chef")
    run("mkdir -p /home/#{user}/chef")
    run("tar xzf 'chef.tar.gz' -C /home/#{user}/chef")
    sudo("/bin/bash -c 'cd /home/#{user}/chef && #{chef_binary} -c solo.rb -j vagrant.json'")
  end
end
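The -j vagrant.json flag hands chef-solo a JSON attributes file kept in chef/. Its exact contents depend on your cookbooks; the run list entry below is purely illustrative:

```json
{
  "run_list": ["recipe[apt]"]
}
```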

namespace :vg do

  desc "Boot Vagrant VM & generate SSH config."
  task :up do
    system("vagrant up --provider=aws")
    ssh.generate_config
  end

  desc "Destroy Vagrant VM & remove SSH config."
  task :destroy do
    system("vagrant destroy -f")
    ssh.destroy_config
  end
end


By using Vagrant’s ssh-config command, we can dynamically generate the SSH config file our vagrant Capistrano stage is based on. Because Capistrano uses Net::SSH, which can parse SSH config files, this lets us point the stage at whatever VM Vagrant is currently running.