Deploying OpenStack with OpenStack-Ansible

April 13, 2016

Deploying OpenStack with OpenStack-Ansible will have your organization running a production-ready OpenStack cloud in a matter of minutes. In this tutorial, I’m going to break down the steps of using OpenStack-Ansible to get you up and running in no time!

Disclosure:  I contribute minor changes to the OpenStack-Ansible project.  I am not a core developer of the project.


Let’s talk briefly about what OpenStack-Ansible (OSA) is.  If it’s not obvious from the name, it’s a set of Ansible roles and plays that allow you to deploy an OpenStack cloud.  These playbooks also make some design assumptions about the OpenStack cloud you are deploying.  These design assumptions aren’t unproven, demo-only cloud deployments; they are modeled closely after Rackspace’s popular offering, Rackspace Private Cloud (RPC).


OpenStack-Ansible sets up the services most often deployed in a cloud, including the necessary infrastructure components such as MariaDB and RabbitMQ.  The following diagram displays the location of the services inside of the cloud:

OpenStack-Ansible Host Layout

The red components are the infrastructure services, which the OpenStack services require in order to run.  The services in light blue are the OpenStack services themselves.  These are the heart of your cloud, translating user input into instances, block volumes, and networks.

What is not really shown in this diagram is how the services are deployed on each of the hosts.  On the “Infrastructure Control Plane Host,” each service is deployed into an LXC container.  LXC containers provide a level of isolation between the various services.  In production, you’ll deploy multiple infrastructure hosts, and the LXC containers will be deployed across them in a highly available configuration.

Setup Requirements

You’ll need somewhere to run these plays.  A VirtualBox or KVM VM will suffice.  Ubuntu 14.04 is required at this time; support for other distributions is upcoming.  Suggested VM size: 4 GB RAM, 4 vCPUs, 80 GB disk.
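As a quick sanity check, you can confirm the VM meets those minimums from a shell (the thresholds in the comments are the suggested sizes above):

```shell
# Rough check against the suggested minimums (4 GB RAM, 4 vCPUs, 80 GB disk)
grep MemTotal /proc/meminfo   # want roughly 4,000,000 kB or more
nproc                         # want 4 or more
df -h /                       # want around 80G total
```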

You should be familiar with git.  I put off learning git for too long myself, but over the last couple of years it has reached a point of critical mass, and is now every bit as important as your system’s package manager.  If you are not putting your work into a git repository, you are wrong.  If your organization does not already have an internal git repository, I suggest looking into GitLab.  GitLab is free software, and offers many of the same features as GitHub, as well as features that GitHub doesn’t have, like branch permissions.

Let’s Get Started

There are some references that you will need while deploying OSA:

1) OSA Quickstart

2) OSA Install Guide

These two guides will go into greater detail than I do here, but I will be outlining some steps to make the procedures a little more clear.

In this tutorial, we’ll be using the Quickstart AIO guide; this is so we can get a better understanding of how the software works and what the outcome will be.  What this essentially does is deploy all the OpenStack services and agents onto one host, similar to devstack.  The majority of the services will be in LXC containers on that host, while some (the neutron ml2 agent and the nova compute agent) will be directly on the host.

Step 1: Clone OSA and Checkout a Tagged Release

We’ll start with the Liberty release, currently tagged 12.0.5.  Tagged releases give you a stable snapshot of the fixes and enhancements made to OpenStack-Ansible, and each is associated with a specific OpenStack release.  If you prefer Kilo or Mitaka, check out the 11.x and 13.x tags, respectively.

git clone https://github.com/openstack/openstack-ansible /opt/openstack-ansible && cd /opt/openstack-ansible
git checkout 12.0.5

If you forget to check out a tagged release, you’ll get the tip of the development master branch, which is likely not going to produce the results you are looking for.
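If you are ever unsure whether you are on a tag or on master, `git describe --tags` will tell you.  Here is a minimal illustration in a throwaway repository (a stand-in for /opt/openstack-ansible, so the commands are safe to copy-paste anywhere):

```shell
# Throwaway repo standing in for /opt/openstack-ansible
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git tag 12.0.5
git checkout -q 12.0.5   # detached HEAD at the tag, as in Step 1
git describe --tags      # prints: 12.0.5
```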

Step 2: Bootstrap Your Deployment

OpenStack-Ansible ships with a variety of helpful scripts to get your system up and running.  You’ll need to run the following from /opt/openstack-ansible:

scripts/bootstrap-ansible.sh

This will download Ansible and use ansible-galaxy to clone dependency roles for OSA.  It will also move some files into /etc/openstack_deploy for you, as well as deploy an executable, openstack-ansible, to your PATH.

Next, run:

scripts/bootstrap-aio.sh

This is going to set up various files in /etc/openstack_deploy for your AIO deployment, as well as configure some settings on your host.

The most important files in that directory are openstack_user_config.yml, user_variables.yml, and user_secrets.yml.

Step 3: Config File Setup

In the previous step, three YAML files were listed.  These files are the heart of your OpenStack deployment configuration; they contain the information OSA needs to deploy your cloud.  It will take some time to review these files, and unfortunately I cannot cover everything you need to know for a production deploy here.  But I will point out some things that are less than obvious at first.

openstack_user_config.yml is where your hosts, cidr_networks, global_overrides, and used_ips are defined.  The names of the host types are defined in various YAML files in /etc/openstack_deploy/env.d.  These YAML files are parsed by the dynamic inventory script in /opt/openstack-ansible/playbooks/inventory/.  That script is called dynamically by Ansible when you run openstack-ansible later in this guide, and it generates the file /etc/openstack_deploy/openstack_inventory.json.  This JSON inventory file is fed to Ansible and used during the deployment process.
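Once the dynamic inventory has run, you can pretty-print the generated JSON to see exactly which hosts and containers Ansible will target.  Shown here against a tiny stand-in file so the commands run anywhere; on a real deployment host, point json.tool at /etc/openstack_deploy/openstack_inventory.json instead:

```shell
# Stand-in for /etc/openstack_deploy/openstack_inventory.json
# (the container name follows OSA's <host>_<service>_container-<hash> pattern)
cat > /tmp/openstack_inventory.json <<'EOF'
{"all_containers": {"hosts": ["aio1_galera_container-abc12345"]}}
EOF
python3 -m json.tool /tmp/openstack_inventory.json
```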

For the other sections of the files, please refer to the full install guide.

Step 4: Run openstack-ansible

If you are experimenting with modifications to plays or config files, now would be a good time to take a snapshot of your VM.

cd /opt/openstack-ansible/playbooks

This is the directory that all plays should be run from, using the openstack-ansible command.  The easiest way to proceed is to run openstack-ansible setup-everything.yml, but I prefer to run the individual plays included inside that file (there are only three).  The steps are broken down into three logical areas:

setup-hosts.yml, setup-infrastructure.yml, and setup-openstack.yml

setup-hosts.yml configures the various physical hosts (or VMs, etc.) with the settings and packages necessary and common to all hosts.  For instance, each infra_host will be configured with the LXC software, as well as container caches with images provided by Rackspace.

setup-infrastructure.yml sets up the shared infrastructure services, the red ones in our diagram above.  If you have multiple infrastructure hosts, most (if not all) services will be automatically deployed in an HA configuration.

setup-openstack.yml should be no mystery at this point.  It deploys various OpenStack services such as Horizon, Keystone, Cinder, and more.  An important note about these steps is that the services are typically deployed directly from pip or stable upstream sources, rather than from a distribution’s repositories.

As I said, I prefer to run these plays individually.  The reason is that some of the plays need a second run due to timeouts, such as waiting for SSH to come up or for MariaDB to start.  I suspect this is more an issue of trying to do too many things at once in a relatively slow VM than something you’d hit on actual hardware.  Running them individually also gives you the opportunity to inspect your various systems between plays to ensure everything is going to plan.
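Run individually, the sequence looks like this (if a play fails on one of the timeouts mentioned above, simply re-run the same play before moving on):

```shell
cd /opt/openstack-ansible/playbooks
openstack-ansible setup-hosts.yml           # base host config, LXC containers
openstack-ansible setup-infrastructure.yml  # MariaDB, RabbitMQ, and friends
openstack-ansible setup-openstack.yml       # Keystone, Horizon, Cinder, etc.
```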

Step 5: Full Deployment

If you have successfully deployed an AIO as outlined above, you should be ready to move on to the next phase: deploying a production-ready configuration to multiple hosts.  I will have to refer you to the deployment guide linked above, as there is a lot to cover and it is well documented there.  It should be much easier to understand what is happening if you have followed this guide, so the settings should be a little clearer.  You mostly need to worry about the networking bits, specifically the target host setup.  You will need to manually configure specific interfaces and bridges on each of your hosts, depending on each host’s role and your physical network topology.
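To give a flavor of what that target-host networking involves, a management bridge on Ubuntu 14.04 looks roughly like this in /etc/network/interfaces style.  This is an illustrative sketch only: the interface name, VLAN, and addressing are assumptions you must adapt to your own topology, and the full set of required bridges is in the install guide.

```
# Illustrative br-mgmt bridge (adjust interface, VLAN, and addresses)
auto br-mgmt
iface br-mgmt inet static
    bridge_ports eth1.236    # VLAN sub-interface carrying container traffic
    bridge_stp off
    address 172.29.236.10    # within OSA's default 172.29.236.0/22 mgmt net
    netmask 255.255.252.0
```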

For a typical deployment, your cloud should have 3 or more infrastructure hosts for HA/Clustering reasons.  This will allow you to more easily perform maintenance on your service hosts without impacting users.

If you are looking to modify the options of specific services, the install guide is also a good start, followed by the developer guide; or, if you’re a stubborn person like me, the specific Ansible role is a great place to see how things are implemented.

Step 6: Contribute!

While you are using this software, you may run into a bug or two, or want a feature that doesn’t exist yet.  OpenStack is a huge moving target, and with a release cycle every six months, there is always something that could be added or changed.  Check out the Developer Docs for more information on contributing.


I hope you found this tutorial useful.  Successfully deploying OpenStack is still one of the largest barriers to entry for the project, and I feel OpenStack-Ansible deserves far more attention than it typically receives.  Since it is now an official OpenStack project, I personally feel it should be the preferred method of deploying OpenStack in production.

