
Automated Ansible Lab Setup With Vagrant And VirtualBox In Linux

How To Set Up An Ansible Lab Using Vagrant In Linux

By Karthick

Ansible is an automation platform used for orchestration, configuration management, deployment, provisioning, and more. If you are a beginner who wants to learn Ansible, or you are planning to take an Ansible certification, you need a home lab to practice on. Setting up a home lab manually is time-consuming. There are a few automated options, such as Docker, Vagrant, and cloud solutions, that can be used to build an Ansible lab. In this guide, we will learn an automated way to set up an Ansible lab with Vagrant and VirtualBox in Linux.

Vagrant is an excellent tool to quickly set up your development environment. If you are new to Vagrant, I suggest you take a look at our introduction to Vagrant guide.

For the purpose of this guide, we will use Vagrant with VirtualBox as the provider to build our Ansible lab. You can also use KVM instead of VirtualBox. If you wish to use KVM as the provider, refer to the article below on how to use Vagrant with KVM.
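If you decide to go the KVM route, the usual approach is to install the vagrant-libvirt plugin and select the libvirt provider when bringing the machines up, for example:

$ vagrant plugin install vagrant-libvirt
$ vagrant up --provider=libvirt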

Ansible Lab Setup

As a prerequisite, you need to have Vagrant and VirtualBox installed on your Linux machine. If you haven't installed Vagrant yet, please refer to the following guide to install Vagrant on different Linux distributions.
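For example, on a Debian/Ubuntu-based system the distribution packages are usually enough to get started (a quick sketch; package names and available versions depend on your distribution):

$ sudo apt update
$ sudo apt install -y virtualbox vagrant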

We will build a three-node Ansible lab. One node will act as the master/controller node and two nodes will act as managed nodes. For demonstration purposes, I am using the ubuntu/focal64 Vagrant box.

Here are the details of my Ansible lab setup.

Node Type       Node Name               IP Address      OS Flavor
Control Node    controller.anslab.com   192.168.10.3    ubuntu/focal64
Managed Node    managed1.anslab.com     192.168.10.4    ubuntu/focal64
Managed Node    managed2.anslab.com     192.168.10.5    ubuntu/focal64
Three Node Ansible Lab Setup

Here I am setting up only three nodes for my lab, but you can add as many managed nodes as you wish when setting up your own lab.

Clone Project Repository

I have hosted all the files required to set up the Ansible lab in my GitHub repository. Run the following command to clone the repository locally.

$ git clone --recursive https://github.com/KarthickSudhakar/Ansible_lab_vagrant_virtualbox.git

Let us navigate inside the project directory to see what files are present.

Project Structure
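If you prefer the command line, you can list the files after entering the directory:

$ cd Ansible_lab_vagrant_virtualbox
$ ls

You should see the Vagrantfile, bootstrap.sh, and key_gen.sh described below (the repository may also contain a README or other helper files).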

Let me give you a brief introduction to each file.

1. Vagrantfile

All configurations related to the VMs are stored in this file. Here are its contents:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  
  config.vm.provider "virtualbox" do |rs|
    rs.memory = 2048
    rs.cpus = 2
  end

  # Will not check for box updates during every startup.
  config.vm.box_check_update = false


  # Master node where ansible will be installed
  config.vm.define "controller" do |controller|
    controller.vm.box = "ubuntu/focal64"
    controller.vm.hostname = "controller.anslab.com"
    controller.vm.network "private_network", ip: "192.168.10.3"
    controller.vm.provision "shell", path: "bootstrap.sh"
    controller.vm.provision "file", source: "key_gen.sh", destination: "/home/vagrant/"
  end

  # Managed node 1.
  config.vm.define "m1" do |m1|
    m1.vm.box = "ubuntu/focal64"
    m1.vm.hostname = "managed1.anslab.com"
    m1.vm.network "private_network", ip: "192.168.10.4"
    m1.vm.provision "shell", path: "bootstrap.sh"
  end

  # Managed node 2.
  config.vm.define "m2" do |m2|
    m2.vm.box = "ubuntu/focal64"
    m2.vm.hostname = "managed2.anslab.com"
    m2.vm.network "private_network", ip: "192.168.10.5"
    m2.vm.provision "shell", path: "bootstrap.sh"
  end

end

2. bootstrap.sh

This shell script is responsible for installing packages and modifying system configuration on all three nodes, and for installing Ansible on the controller node.

The contents of this file are given below:

#!/usr/bin/env bash

# Vagrant by default creates its own key pair for each machine and disables password-based SSH authentication. Enable it here so password-based auth can also be used.

sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
sudo systemctl restart sshd

# Suppress the banner message shown every time you connect to the Vagrant box.

touch /home/vagrant/.hushlogin

# Updating the hosts file on all 3 nodes with the IPs given in the Vagrantfile:

# 192.168.10.3 controller.anslab.com controller
# 192.168.10.4 managed1.anslab.com managed1
# 192.168.10.5 managed2.anslab.com managed2

echo -e "192.168.10.3 controller.anslab.com controller\n192.168.10.4 managed1.anslab.com managed1\n192.168.10.5 managed2.anslab.com managed2" >> /etc/hosts

# Installing necessary packages 

sudo apt update && sudo apt -y install curl wget net-tools iputils-ping python3-pip sshpass

# Install ansible using pip only in controller node

if [[ $(hostname) = "controller" ]]; then
  sudo pip3 install ansible
fi

3. key_gen.sh

This script should be triggered manually once all three VMs are built. It takes care of generating an SSH key pair and distributing it across all three nodes. It also runs a sample Ansible ad-hoc command for validation.

The contents of this file are given below:

#!/usr/bin/env bash

# THIS SCRIPT WILL CREATE SSH KEYPAIR AND DISTRIBUTE ACROSS ALL NODES

ssh-keygen -b 2048 -t rsa -f /home/vagrant/.ssh/id_rsa -q -N ""

# LOOPING THROUGH AND DISTRIBUTING THE KEY

for val in controller managed1 managed2; do 
	echo "-------------------- COPYING KEY TO ${val^^} NODE ------------------------------"
	sshpass -p 'vagrant' ssh-copy-id -o "StrictHostKeyChecking=no" vagrant@$val 
done

# CREATE THE INVENTORY FILE

PROJECT_DIRECTORY="/home/vagrant/ansible_project/"

mkdir -p $PROJECT_DIRECTORY
cd $PROJECT_DIRECTORY

# Creating the inventory file for all 3 nodes to run some adhoc command.

echo -e "controller\n\n[ubuntu1]\nmanaged1\n\n[ubuntu2]\nmanaged2" > inventory
echo -e "[defaults]\ninventory = inventory" > ansible.cfg
echo -e "-------------------- RUNNING ANSBILE ADHOC COMMAND - UPTIME ------------------------------"
echo

# running adhoc command to see if everything is fine

ansible all -i inventory -m "shell" -a "uptime"
echo

All three files are hosted in my GitHub repository. Feel free to contribute and improve them.

Understanding Vagrantfile Configuration

Before building the Ansible lab, you have to understand the configurations inside the Vagrantfile and shell scripts.

1. Memory And Vcore Allocation

For all three Vagrant boxes, the memory is set to 2 GB and the CPU count is set to 2. If you wish to increase or decrease these limits, simply adjust the rs.memory and rs.cpus parameters in the Vagrantfile.

Memory And Vcore

2. OS Flavor

All three nodes (controller and managed) use the Ubuntu 20.04 LTS image. When you run the "vagrant up" command, Vagrant checks the box name (ubuntu/focal64) and pulls the image if it is not available locally.

OS Flavor

3. Network Settings

By default, Vagrant uses NAT on the first interface (Adapter 1) and connects to the virtual machine through port forwarding over NAT. Here we set the hostname and a static IP address for all three VMs on a private network.

A separate interface (Adapter 2) will be created and the static IP address will be assigned to it. VMs that are part of the same private network can communicate with each other.
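Once the machines are up, you can confirm from inside a guest that the second adapter received its static address. A quick check (not part of the original setup) using Vagrant's --command option:

$ vagrant ssh m1 -c "ip -4 addr show"

The output should list the NAT interface along with a second interface carrying 192.168.10.4.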

In a multi-VM environment, Vagrant automatically resolves SSH port collisions, as shown in the output below.

==> m2: Fixed port collision for 22 => 2222. Now on port 2201.
==> m2: Clearing any previously set network interfaces...
==> m2: Preparing network interfaces based on configuration...
    m2: Adapter 1: nat
    m2: Adapter 2: hostonly
==> m2: Forwarding ports...
    m2: 22 (guest) => 2201 (host) (adapter 1)
==> m2: Running 'pre-boot' VM customizations...
==> m2: Booting VM...
==> m2: Waiting for machine to boot. This may take a few minutes...
    m2: SSH address: 127.0.0.1:2201
Network And HostName

4. Username And SSH Communication

There is a default user called "vagrant" with the password "vagrant". The vagrant user has passwordless sudo privileges configured in the VM by default.

By default, password-based authentication is disabled for the VMs. Vagrant creates an SSH key pair and uses the private key to connect to the VM when you run the "vagrant ssh" command.

$ vagrant ssh controller
$ vagrant ssh m1
$ vagrant ssh m2
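If you want to see exactly how Vagrant connects to a box (address, port, user, and the private key it uses), the ssh-config sub-command prints the generated SSH client configuration. This is also handy if you prefer to connect with a plain ssh client:

$ vagrant ssh-config controller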

Password-based authentication is enabled through the bootstrap.sh script, so you can also connect to a node using its IP address and a password instead of key-based authentication.

5. Bootstrap Script

The script bootstrap.sh is responsible for:

  • Enabling password-based authentication.
  • Creating a .hushlogin file to suppress the default banner message.
  • Adding the host entries for all three nodes to the /etc/hosts file.
  • Installing the required packages.
  • Installing Ansible through the Python package manager (pip) only on the controller node.

I am using the shell provisioner, which copies bootstrap.sh to the /tmp/ location on all three VMs and runs the script with root privileges.

Shell Provisioner

Heads Up: If you are building a RHEL-based lab, edit the package installation command in bootstrap.sh to use dnf or yum. Everything else is similar across distributions.

6. Generate Key Pair

Ansible uses an SSH key pair to communicate with the managed nodes and run tasks. A new key should be generated on the controller node and shared with all the managed nodes so Ansible can communicate with them without prompting for a password every time.

The script key_gen.sh takes care of creating the SSH keys and distributing them to all the nodes. The script also creates a project directory with an ansible.cfg file and an inventory file. An ad-hoc command is triggered as part of the script to validate connectivity.
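For reference, here is what the two generated files look like, based on the echo commands in key_gen.sh above:

inventory:

controller

[ubuntu1]
managed1

[ubuntu2]
managed2

ansible.cfg:

[defaults]
inventory = inventory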

Heads Up: This script has to be triggered manually from the controller node once all three VMs are provisioned.

Build Ansible Lab Setup

Go to the project directory and run the "vagrant up" command; the rest will be taken care of by Vagrant and the bootstrap script.

$ cd Ansible_lab_vagrant_virtualbox
$ vagrant up

Sample output:

Bringing machine 'controller' up with 'virtualbox' provider…
Bringing machine 'm1' up with 'virtualbox' provider…
Bringing machine 'm2' up with 'virtualbox' provider…
………

Post-Install Script

Once all three VMs are provisioned, log in to the controller node and run /home/vagrant/key_gen.sh to create the SSH key pair and validate the setup with an Ansible ad-hoc command.

$ vagrant ssh controller
$ cd /home/vagrant/
$ bash key_gen.sh
Keygen-Script
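Once the script finishes, you can re-run the connectivity check at any time from the project directory it created, since the generated ansible.cfg already points to the inventory file. For example, Ansible's built-in ping module is a quick way to confirm connectivity:

$ cd /home/vagrant/ansible_project
$ ansible all -m ping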

Vagrant Commands To Manage VM

The following commands will help you manage the life cycle of the Vagrant machines.

To start building the virtual machines, run the following command from the directory where the Vagrantfile is located.

$ vagrant up

If you want to bring up only one node, add the machine name to the "vagrant up" command.

$ vagrant up controller

To check the machine state, run the following command.

$ vagrant status
Current machine states:

controller running (virtualbox)
m1 running (virtualbox)
m2 running (virtualbox)

You can also use the following command, which gives more information about all the Vagrant machines on your system.

$ vagrant global-status --prune

id       name       provider   state    directory
---------------------------------------------------------------------------------------------------------------------
6095cc7  controller virtualbox running  /home/karthick/Karthick_Root/Work/Vagrant/Lab/Ansible_lab_vagrant_virtualbox 
cf2e302  m1         virtualbox running  /home/karthick/Karthick_Root/Work/Vagrant/Lab/Ansible_lab_vagrant_virtualbox 
af10f7d  m2         virtualbox running  /home/karthick/Karthick_Root/Work/Vagrant/Lab/Ansible_lab_vagrant_virtualbox 

To connect to a virtual machine, run the "vagrant ssh" command. You have to pass the VM name; otherwise, it will throw the following error.

$ vagrant ssh
This command requires a specific VM name to target in a multi-VM environment.

To ssh into m1 VM, the command would be:

$ vagrant ssh m1

Or,

$ vagrant ssh cf2e302

You can also connect directly with the username and password, since password-based authentication was enabled by the bootstrap script.

$ ssh vagrant@192.168.10.4
vagrant@192.168.10.4's password:
vagrant@managed1:~$

To stop a particular virtual machine, run the halt command with the VM name.

$ vagrant halt controller

To stop all the virtual machines, run the following command.

$ vagrant halt
==> m2: Attempting graceful shutdown of VM…
==> m1: Attempting graceful shutdown of VM…
==> controller: Attempting graceful shutdown of VM…

To destroy all the VMs, including their disks, run the following command.

$ vagrant destroy -f
==> m2: Destroying VM and associated drives…
==> m1: Destroying VM and associated drives…
==> controller: Destroying VM and associated drives…

Conclusion

In this article, I have shown an automated way to set up an Ansible lab using Vagrant and VirtualBox. The lab is created with an Ubuntu 20.04 LTS stack.

If you are planning to take an Ansible certification and want a lab to practice on, I suggest editing the Vagrantfile to point the box name to a RHEL flavor and replacing the apt command in bootstrap.sh with the appropriate dnf/yum command.
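As a rough sketch, the change in bootstrap.sh would look something like this (exact package names can vary between RHEL flavors, and sshpass may require the EPEL repository):

# Replace the apt line in bootstrap.sh with a dnf equivalent, for example:
sudo dnf -y install curl wget net-tools python3-pip sshpass

Also point the box name in the Vagrantfile to a RHEL-family box of your choice instead of ubuntu/focal64.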



2 comments

praveen June 17, 2023 - 9:36 pm

key_gen.sh seem to be giving me an error :

vagrant@controller:~$ bash key_gen.sh
key_gen.sh: line 2: $'\r': command not found
key_gen.sh: line 4: $'\r': command not found
/home/vagrant/.ssh/id_rsa already exists.
Overwrite (y/n)? y
key_gen.sh: line 6: $'\r': command not found
key_gen.sh: line 8: $'\r': command not found
key_gen.sh: line 32: syntax error: unexpected end of file

sk June 19, 2023 - 12:46 pm

Please file an issue at the author’s github repository. https://github.com/KarthickSudhakar/Ansible_lab_vagrant_virtualbox

