
Setting up a multi-machine environment in Vagrant with Ansible

Setting up a multi-VM environment with master and worker IPs is a bit tricky with Vagrant. With Chef, we'd have to set up chef-zero (which I've had many issues with under Vagrant) or a local Chef server to deploy a multi-machine cluster. It works, but I was never enamored with it. However, this isn't a post about Chef. With Ansible there are a few gotchas, which are covered in the Vagrantfile below, but it's actually quite a bit easier. Due to its architecture, Ansible needs to gather information about hosts before their facts can be used in a playbook.

For instance, if we were using EC2, before using facts about hosts that are not in the run, we’d need to do something like:

- name: setup the facts for later in the playbook
  hosts: ec2
  tasks:
   - action: ec2_facts
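
Once that play has run, later plays can reach those facts through hostvars, even for hosts that aren't part of the current play. As a minimal sketch (the webservers group name is hypothetical; ansible_ec2_local_ipv4 is one of the facts ec2_facts registers):

```yaml
- hosts: webservers
  tasks:
    - name: use the private IP of the first ec2 host
      debug:
        msg: "db lives at {{ hostvars[groups['ec2'][0]]['ansible_ec2_local_ipv4'] }}"
```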

Now, for a generic example, here's a Vagrantfile for a master and two workers. Note that it builds the Ansible groups from the box definitions and only runs the provisioner once, on the last box, against all hosts.

$boxes = [
  {
    :name => :master,
    :group => "master",
    :forwards => { 80 => 1080, 443 => 1443 }
  },
  {
    :name => :worker1,
    :group => "worker",
  },
  {
    :name => :worker2,
    :group => "worker",
  }
]
lxc_snapshot_suffix = "none"

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

  # haven't tested virtualbox, but these settings are most likely correct
  #config.vm.provider :virtualbox do |vb, override|
  #  override.vm.box = "trusty64"
  #  override.vm.box_url = "https://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box"
  #end

  # https://github.com/mitchellh/vagrant/issues/4967
  # Updated post 05-01
  config.ssh.insert_key = false

  config.vm.provider :lxc do |lxc, override|
    override.vm.box = "fgrehm/trusty64-lxc"
    lxc.backingstore = "btrfs"
    #lxc.snapshot_suffix = lxc_snapshot_suffix
  end
  $groups = { "all" => [] }
  $boxes.each do | opts |
    if ! $groups.has_key?(opts[:group])
      $groups[opts[:group]] = [ opts[:name] ]
    else
      $groups[opts[:group]].push(opts[:name])
    end
    $groups["all"].push(opts[:name])
  end

  $boxes.each_with_index do | opts, index |
     config.vm.define(opts[:name]) do |config|
       config.vm.hostname = opts[:name].to_s
       opts[:forwards].each do |guest_port,host_port|
         config.vm.network :forwarded_port, guest: guest_port, host: host_port
       end if opts[:forwards]

       # provision with ansible only on the last box, for all the hosts at once
       if index == $boxes.size - 1
         config.vm.provision :ansible do |ansible|
           #ansible.verbose = "vvvv"
           ansible.playbook = "ansible/playbook.yml"
           ansible.groups = $groups
           ansible.sudo = true
           ansible.limit = "all"
           ansible.extra_vars = { ansible_ssh_user: 'vagrant' }
         end
       end
     end if ! opts[:disabled]
   end
end
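
To see what that group-building loop hands to the provisioner, here is the same logic extracted as a standalone Ruby sketch (condensed slightly with ||=, but behaviorally identical):

```ruby
# Given the box definitions, build the hash passed to ansible.groups.
boxes = [
  { :name => :master,  :group => "master" },
  { :name => :worker1, :group => "worker" },
  { :name => :worker2, :group => "worker" },
]

groups = { "all" => [] }
boxes.each do |opts|
  # Create the group's array on first sight, then append the box name.
  (groups[opts[:group]] ||= []).push(opts[:name])
  groups["all"].push(opts[:name])
end

p groups
# => {"all"=>[:master, :worker1, :worker2], "master"=>[:master], "worker"=>[:worker1, :worker2]}
```

Vagrant writes this hash into the generated inventory, which is what lets the playbook below target hosts: master and hosts: worker.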

A sample playbook which uses the IP of the master would look something like:

---
- hosts: all
  gather_facts: true

- hosts: master
  gather_facts: true
  sudo: true
  vars:
    master_ip: "{{hostvars[groups['master'][0]]['ansible_eth0']['ipv4']['address']}}"
  roles: 
    - master

- hosts: worker
  gather_facts: true
  sudo: true
  vars:
    master_ip: "{{hostvars[groups['master'][0]]['ansible_eth0']['ipv4']['address']}}"
  roles: 
    - worker
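
Inside the worker role, master_ip can then be consumed like any other variable. A hypothetical task (the config path and line format are made up for illustration):

```yaml
- name: point the worker at the master
  lineinfile:
    dest: /etc/worker.conf
    line: "master_address={{ master_ip }}"
```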

tl;dr - setting up multi-machine clusters with Ansible on Vagrant is easier than with Chef
