Since the last update, I have been working to set up a basic way to automate common configuration and management tasks with Ansible. The basic Ansible setup uses a primary controller host with the Ansible software installed, which has access to the various nodes to be managed via Secure Shell (SSH).

The managed hosts have minimal requirements. Namely, they need to be able to receive incoming SSH connections, have a Python interpreter to execute the commands, and usually an authentication mechanism such as SSH keys to allow logins without typing in a password.
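Once those prerequisites are in place, you can check everything works from the controller with an ad-hoc ping (assuming an inventory file named "inventory"):

    ansible all -i inventory -m ping

The ping module confirms both that SSH login works and that a usable Python interpreter was found on each host.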

I have put all of the files for my Ansible installation into a git repository so I can manage them with source control and make them portable.

The basic directory structure looks like:

  • Ansible
    • inventory
    • roles
    • bootstrap.yml
    • site.yml
    • test.yml


Playbooks are the files which tell Ansible specifically what tasks to complete on a given run. They are written in YAML (YAML Ain’t Markup Language), which is super simple to read once you understand how it’s structured.

Playbooks are executed with the command “ansible-playbook playbook.yml”.

I have three basic playbooks so far. Each playbook defines which hosts will be assigned which role. Each role executes the specific configuration tasks for that role. More on that later.

The first is the test playbook which I use for isolating the roles that I’m currently working on developing so they can be executed standalone. The bootstrap playbook will take a brand new host to which I have root (or another user) access, creates the ansible user, and grants it root access through sudo. The ansible user allows me to use a consistent user for all other playbooks. The site playbook is the primary “production” playbook.
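As a rough sketch, the bootstrap playbook boils down to something like this (simplified, and the sudoers fragment here is illustrative rather than copied from my actual playbook):

- name: Bootstrap new hosts
  hosts: all
  remote_user: root
  become: yes
  tasks:
    - name: Create the ansible user
      user:
        name: ansible
        shell: /bin/bash

    # Drop a sudoers fragment so the ansible user can escalate without a password
    - name: Grant the ansible user passwordless sudo
      copy:
        dest: /etc/sudoers.d/ansible
        content: "ansible ALL=(ALL) NOPASSWD: ALL\n"
        mode: '0440'
        validate: /usr/sbin/visudo -cf %s

The validate option makes sure a broken sudoers fragment never gets installed, which would otherwise lock you out of sudo entirely.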

Here is an excerpt of the site playbook:

- name: Docker Hosts
  hosts:
    - lachlandev
  remote_user: ansible
  become: yes
  roles:
    - docker-host


The hosts under Ansible management are contained in the inventory. The most basic inventory files contain specific information about the hosts and how to access them.

If you are using supported cloud providers like AWS or Azure you can configure a dynamic inventory plugin. I just went with manually maintaining inventory files for now.

The inventory directory is organized like this:

  • Inventory
    • group_vars
      • group1
      • group2
    • host_vars
      • host1
      • host2
      • host3
    • inventory1
    • inventory2

An example inventory file looks like:

[dev]
lachlandev

[blog]
lachlanblog

dev and blog are groups, while lachlandev and lachlanblog are hosts. Groups can contain subgroups. Under the host_vars directory, you’ll find a lachlanblog file which gives the parameters specific to that host:

ansible_user: root 

Refer to the Ansible docs for information on how group vars and host vars are merged.
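For illustration, a group_vars file is just plain YAML. A hypothetical group_vars/dev file (not copied from my actual setup) might look like:

# group_vars/dev - applies to every host in the dev group
ansible_user: ansible
ansible_python_interpreter: /usr/bin/python3

Anything defined here applies to every host in the group, with host_vars taking precedence over group_vars where the two overlap.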


Roles are where the real magic happens. Each role defines the specific set of tasks to be completed on the hosts it is assigned to. So far I have defined the following roles:

  • common - tasks to be completed on all hosts
  • debian - tasks to be completed on Debian hosts such as updating packages
  • docker-host - install and configure the Docker engine
  • fail2ban - install and configure fail2ban which blocks multiple failed attempts to any service such as SSH
  • multicraft - configure the kids’ Minecraft server
  • restic - install restic, configure it to use Backblaze as the storage backend, and schedule regular backups and pruning of the repository.
  • sshkeys - deploy my ssh public keys to every host in the inventory
  • ufw - configure a basic firewall on every host which allows only ssh and blocks all other ports.

Again, refer to the Ansible docs for how roles work, but the basic structure I went with is:

  • roles
    • role
      • tasks
        • main.yml

The tasks file lists the individual tasks to be completed for that role. For example, here are the tasks for the ufw role:

    - name: Allow outgoing traffic and enable UFW
      community.general.ufw:
        state: enabled
        direction: outgoing
        policy: allow

    - name: Set logging
      community.general.ufw:
        logging: 'on'

    - name: Rate-limit ssh port
      community.general.ufw:
        rule: limit
        port: ssh
        proto: tcp

    - name: Allow ssh
      community.general.ufw:
        rule: allow
        name: OpenSSH

    - name: Deny all incoming by default
      community.general.ufw:
        direction: incoming
        policy: deny
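To run just this role while I’m working on it, the test playbook is essentially a thin wrapper (a sketch of mine, targeting one host):

- name: Test roles under development
  hosts: lachlandev
  remote_user: ansible
  become: yes
  roles:
    - ufw

Swapping the role name in and out lets me iterate on a single role without touching the rest of the site playbook.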

That’s my Ansible setup so far, in a nutshell. There are many other things I want to (and can) do, such as deploying and orchestrating Docker containers. This gives me enough of a starting point, though, that I can look at setting up Nextcloud.