Best Practices

Here are some tips for making the most of Ansible and Ansible playbooks.

You can find some example playbooks illustrating these best practices in our ansible-examples repository. (NOTE: these may not use all of the features in the latest release, but are still an excellent reference!)

Content Organization

The following section shows one of many possible ways to organize playbook content.

Your usage of Ansible should fit your needs, however, not ours, so feel free to modify this approach and organize as you see fit.

One crucial way to organize your playbook content is Ansible’s “roles” organization feature, which is documented as part of the main playbooks page. You should take the time to read and understand the roles documentation, which is available here: Roles.

Directory Layout

The top level of the directory would contain files and directories like so:

    production                 # inventory file for production servers
    staging                    # inventory file for staging environment

    group_vars/
       group1.yml              # here we assign variables to particular groups
       group2.yml
    host_vars/
       hostname1.yml           # here we assign variables to particular systems
       hostname2.yml

    library/                   # if any custom modules, put them here (optional)
    module_utils/              # if any custom module_utils to support modules, put them here (optional)
    filter_plugins/            # if any custom filter plugins, put them here (optional)

    site.yml                   # master playbook
    webservers.yml             # playbook for webserver tier
    dbservers.yml              # playbook for dbserver tier

    roles/
        common/                # this hierarchy represents a "role"
            tasks/             #
                main.yml       #  <-- tasks file can include smaller files if warranted
            handlers/          #
                main.yml       #  <-- handlers file
            templates/         #  <-- files for use with the template resource
                ntp.conf.j2    #  <------- templates end in .j2
            files/             #
                bar.txt        #  <-- files for use with the copy resource
                foo.sh         #  <-- script files for use with the script resource
            vars/              #
                main.yml       #  <-- variables associated with this role
            defaults/          #
                main.yml       #  <-- default lower priority variables for this role
            meta/              #
                main.yml       #  <-- role dependencies
            library/           # roles can also include custom modules
            module_utils/      # roles can also include custom module_utils
            lookup_plugins/    # or other types of plugins, like lookup in this case

        webtier/               # same kind of structure as "common" was above, done for the webtier role
        monitoring/            # ""
        fooapp/                # ""

Alternative Directory Layout

Alternatively you can put each inventory file with its group_vars/host_vars in a separate directory. This is particularly useful if your group_vars/host_vars don’t have much in common across different environments. The layout could look something like this:

    inventories/
       production/
          hosts               # inventory file for production servers
          group_vars/
             group1.yml       # here we assign variables to particular groups
             group2.yml
          host_vars/
             hostname1.yml    # here we assign variables to particular systems
             hostname2.yml

       staging/
          hosts               # inventory file for staging environment
          group_vars/
             group1.yml       # here we assign variables to particular groups
             group2.yml
          host_vars/
             stagehost1.yml   # here we assign variables to particular systems
             stagehost2.yml

    library/
    module_utils/
    filter_plugins/

    site.yml
    webservers.yml
    dbservers.yml

    roles/
       common/
       webtier/
       monitoring/
       fooapp/

This layout gives you more flexibility for larger environments, as well as a total separation of inventory variables between different environments. The downside is that it is harder to maintain, because there are more files.

Use Dynamic Inventory With Clouds

If you are using a cloud provider, you should not be managing your inventory in a static file. See Working With Dynamic Inventory.

This does not just apply to clouds – if you have another system maintaining a canonical list of systems in your infrastructure, usage of dynamic inventory is a great idea in general.

How to Differentiate Staging vs Production

If managing static inventory, a frequent question is how to differentiate different types of environments. The following example shows a good way to do this. Similar methods of grouping could be adapted to dynamic inventory (for instance, consider applying the AWS tag “environment:production”, and you’ll get a group of systems automatically discovered named “ec2_tag_environment_production”).

Let’s show a static inventory example though. Below, the production file contains the inventory of all of your production hosts.

It is suggested that you define groups based on the purpose of the host (roles) and also geography or datacenter location (if applicable):

    # file: production

    [atlanta-webservers]
    www-atl-1.example.com
    www-atl-2.example.com

    [boston-webservers]
    www-bos-1.example.com
    www-bos-2.example.com

    [atlanta-dbservers]
    db-atl-1.example.com
    db-atl-2.example.com

    [boston-dbservers]
    db-bos-1.example.com

    # webservers in all geos
    [webservers:children]
    atlanta-webservers
    boston-webservers

    # dbservers in all geos
    [dbservers:children]
    atlanta-dbservers
    boston-dbservers

    # everything in the atlanta geo
    [atlanta:children]
    atlanta-webservers
    atlanta-dbservers

    # everything in the boston geo
    [boston:children]
    boston-webservers
    boston-dbservers

Group And Host Variables

This section builds on the previous example.

Groups are nice for organization, but that’s not all groups are good for. You can also assign variables to them! For instance, atlanta has its own NTP servers, so when setting up ntp.conf, we should use them. Let’s set those now:

    ---
    # file: group_vars/atlanta
    ntp: ntp-atlanta.example.com
    backup: backup-atlanta.example.com

Variables aren’t just for geographic information either! Maybe the webservers have some configuration that doesn’t make sense for the database servers:

    ---
    # file: group_vars/webservers
    apacheMaxRequestsPerChild: 3000
    apacheMaxClients: 900

If we had any default values, or values that were universally true, we would put them in a file called group_vars/all:

    ---
    # file: group_vars/all
    ntp: ntp-boston.example.com
    backup: backup-boston.example.com

We can define specific hardware variance in systems in a host_vars file, but avoid doing this unless you need to:

    ---
    # file: host_vars/db-bos-1.example.com
    foo_agent_port: 86
    bar_agent_port: 99

Again, if we are using dynamic inventory sources, many dynamic groups are automatically created. So a tag like “class:webserver” would load variables from the file “group_vars/ec2_tag_class_webserver” automatically.
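As a sketch of how that looks on disk (the group name follows from the EC2 tag example above; the variable is reused from the earlier webservers example, purely for illustration):

```yaml
---
# file: group_vars/ec2_tag_class_webserver
# Variables here apply to every host the dynamic inventory placed
# into the automatically-created ec2_tag_class_webserver group.
apacheMaxClients: 900
```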

Top Level Playbooks Are Separated By Role

In site.yml, we import a playbook that defines our entire infrastructure. This is a very short example, because it’s just importing some other playbooks:

    ---
    # file: site.yml
    - import_playbook: webservers.yml
    - import_playbook: dbservers.yml

In a file like webservers.yml (also at the top level), we map the configuration of the webservers group to the roles performed by the webservers group:

    ---
    # file: webservers.yml
    - hosts: webservers
      roles:
        - common
        - webtier

The idea here is that we can choose to configure our whole infrastructure by “running” site.yml or we could just choose to run a subset by running webservers.yml. This is analogous to the “--limit” parameter to ansible but a little more explicit:

    ansible-playbook site.yml --limit webservers
    ansible-playbook webservers.yml

Task And Handler Organization For A Role

Below is an example tasks file that explains how a role works. Our common role here just sets up NTP, but it could do more if we wanted:

    ---
    # file: roles/common/tasks/main.yml

    - name: be sure ntp is installed
      yum:
        name: ntp
        state: installed
      tags: ntp

    - name: be sure ntp is configured
      template:
        src: ntp.conf.j2
        dest: /etc/ntp.conf
      notify:
        - restart ntpd
      tags: ntp

    - name: be sure ntpd is running and enabled
      service:
        name: ntpd
        state: started
        enabled: yes
      tags: ntp
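As the directory layout noted, a tasks file can include smaller files if warranted. A minimal sketch of that split (the included file names here are illustrative, not from the example role):

```yaml
---
# file: roles/common/tasks/main.yml
# Hypothetical refactor: delegate groups of related tasks to
# smaller files kept alongside main.yml in the tasks/ directory.
- include_tasks: ntp.yml
- include_tasks: users.yml
```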

Here is an example handlers file. As a review, handlers are only fired when certain tasks report changes, and are run at the end of each play:

    ---
    # file: roles/common/handlers/main.yml
    - name: restart ntpd
      service:
        name: ntpd
        state: restarted

See Roles for more information.

What This Organization Enables (Examples)

Above we’ve shared our basic organizational structure.

Now what sort of use cases does this layout enable? Lots! If I want to reconfigure my whole infrastructure, it’s just:

    ansible-playbook -i production site.yml

To reconfigure NTP on everything:

    ansible-playbook -i production site.yml --tags ntp

To reconfigure just my webservers:

    ansible-playbook -i production webservers.yml

For just my webservers in Boston:

    ansible-playbook -i production webservers.yml --limit boston

For just the first 10, and then the next 10:

    ansible-playbook -i production webservers.yml --limit boston[0:9]
    ansible-playbook -i production webservers.yml --limit boston[10:19]

And of course just basic ad-hoc stuff is also possible:

    ansible boston -i production -m ping
    ansible boston -i production -m command -a '/sbin/reboot'

And there are some useful commands to know:

    # confirm what task names would be run if I ran this command and said "just ntp tasks"
    ansible-playbook -i production webservers.yml --tags ntp --list-tasks

    # confirm what hostnames might be communicated with if I said "limit to boston"
    ansible-playbook -i production webservers.yml --limit boston --list-hosts

Deployment vs Configuration Organization

The above setup models a typical configuration topology. When doing multi-tier deployments, there are going to be some additional playbooks that hop between tiers to roll out an application. In this case, ‘site.yml’ may be augmented by playbooks like ‘deploy_exampledotcom.yml’ but the general concepts can still apply.

Consider “playbooks” as a sports metaphor – you don’t have to just have one set of plays to use against your infrastructure all the time – you can have situational plays that you use at different times and for different purposes.

Ansible allows you to deploy and configure using the same tool, so you would likely reuse groups and just keep the OS configuration in separate playbooks from the app deployment.

Staging vs Production

As also mentioned above, a good way to keep your staging (or testing) and production environments separate is to use a separate inventory file for staging and production. This way you pick with -i what you are targeting. Keeping them all in one file can lead to surprises!

Testing things in a staging environment before trying in production is always a great idea. Your environments need not be the same size, and you can use group variables to control the differences between those environments.

Rolling Updates

Understand the ‘serial’ keyword. If updating a webserver farm, you really want to use it to control how many machines you are updating at once in the batch.

See Delegation, Rolling Updates, and Local Actions.
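A minimal sketch of ‘serial’ in a play (the host pattern and role are taken from the earlier examples; the batch size of 2 is illustrative):

```yaml
---
# file: webservers.yml (sketch)
- hosts: webservers
  serial: 2          # only update two hosts per batch; the play completes
                     # on each batch before the next batch begins
  roles:
    - common
    - webtier
```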

Always Mention The State

The ‘state’ parameter is optional to a lot of modules. Whether ‘state=present’ or ‘state=absent’, it’s always best to leave that parameter in your playbooks to make it clear, especially as some modules support additional states.
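For instance (a sketch reusing the NTP task from the role example above; the first form relies on the module default, the second says what it means):

```yaml
# Implicit: many package modules default to installing the package.
- name: be sure ntp is installed
  yum:
    name: ntp

# Explicit: the reader doesn't have to know the module's default.
- name: be sure ntp is installed
  yum:
    name: ntp
    state: present
```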

Group By Roles

We’re somewhat repeating ourselves with this tip, but it’s worth repeating. A system can be in multiple groups. See Working with Inventory and Working with Patterns. Having groups named after things like webservers and dbservers is repeated in the examples because it’s a very powerful concept.

This allows playbooks to target machines based on role, as well as to assign role-specific variables using the group variable system.
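A sketch of the idea in YAML inventory form (equivalent to a fragment of the INI inventory earlier: the same host sits in a role group and, through the parent groups, in a geo group too):

```yaml
# Sketch: www-atl-1.example.com is in 'atlanta-webservers', and via the
# parent groups it is also in 'webservers' (role) and 'atlanta' (geo),
# so it picks up group variables from all three.
all:
  children:
    atlanta-webservers:
      hosts:
        www-atl-1.example.com:
    webservers:
      children:
        atlanta-webservers:
    atlanta:
      children:
        atlanta-webservers:
```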

See Roles.

Operating System and Distribution Variance

When dealing with a parameter that is different between two different operating systems, a great way to handle this is by using the group_by module.

This makes a dynamic group of hosts matching certain criteria, even if that group is not defined in the inventory file:

    ---

    - name: talk to all hosts just so we can learn about them
      hosts: all
      tasks:
        - name: Classify hosts depending on their OS distribution
          group_by:
            key: os_{{ ansible_facts['distribution'] }}

    # now just on the CentOS hosts...

    - hosts: os_CentOS
      gather_facts: False
      tasks:
        - # tasks that only happen on CentOS go here

This will throw all systems into a dynamic group based on the operating system name.

If group-specific settings are needed, this can also be done. For example:

    ---
    # file: group_vars/all
    asdf: 10

    ---
    # file: group_vars/os_CentOS
    asdf: 42

In the above example, CentOS machines get the value of ‘42’ for asdf, but other machines get ‘10’. This can be used not only to set variables, but also to apply certain roles to only certain systems.

Alternatively, if only variables are needed:

    - hosts: all
      tasks:
        - name: Set OS distribution dependent variables
          include_vars: "os_{{ ansible_facts['distribution'] }}.yml"
        - debug:
            var: asdf

This will pull in variables based on the OS name.

Bundling Ansible Modules With Playbooks

If a playbook has a ./library directory relative to its YAML file, this directory can be used to add Ansible modules that will automatically be in the Ansible module path. This is a great way to keep modules that go with a playbook together. This is shown in the directory structure example at the start of this section.

Whitespace and Comments

Generous use of whitespace to break things up, and use of comments (which start with ‘#’), is encouraged.

Always Name Tasks

It is possible to leave off the ‘name’ for a given task, though it is recommended to provide a description about why something is being done instead. This name is shown when the playbook is run.
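To illustrate (a sketch reusing the service task from the role example; without a name, the playbook output can only show the module and its arguments):

```yaml
# Unnamed: output reads something like "TASK [service]"
- service:
    name: ntpd
    state: started

# Named: output explains the intent of the task
- name: be sure ntpd is running and enabled
  service:
    name: ntpd
    state: started
```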

Keep It Simple

When you can do something simply, do something simply. Do not reach to use every feature of Ansible together, all at once. Use what works for you. For example, you will probably not need vars, vars_files, vars_prompt and --extra-vars all at once, while also using an external inventory file.

If something feels complicated, it probably is, and may be a good opportunity to simplify things.

Version Control

Use version control. Keep your playbooks and inventory file in git (or another version control system), and commit when you make changes to them. This way you have an audit trail describing when and why you changed the rules that are automating your infrastructure.

Variables and Vaults

For general maintenance, it is often easier to use grep, or similar tools, to find variables in your Ansible setup. Since vaults obscure these variables, it is best to work with a layer of indirection. When running a playbook, Ansible finds the variables in the unencrypted file and all sensitive variables come from the encrypted file.

A best practice approach for this is to start with a group_vars/ subdirectory named after the group. Inside of this subdirectory, create two files named vars and vault. Inside of the vars file, define all of the variables needed, including any sensitive ones. Next, copy all of the sensitive variables over to the vault file and prefix these variables with vault_. You should adjust the variables in the vars file to point to the matching vault_ variables using Jinja2 syntax, and ensure that the vault file is vault encrypted.

This best practice has no limit on the amount of variable and vault files or their names.
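A sketch of the indirection described above, for a hypothetical database password on the webservers group (the variable names are illustrative; the second file is the one you would encrypt with ansible-vault):

```yaml
---
# file: group_vars/webservers/vars  (unencrypted, grep-friendly)
db_user: app
db_password: "{{ vault_db_password }}"   # indirection to the vaulted value

---
# file: group_vars/webservers/vault  (encrypt this file with ansible-vault)
vault_db_password: s3cr3t
```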

See also