AWX is the open source upstream project of Ansible Tower, the automation and management platform based on Ansible. Both Ansible and Ansible Tower are provided by Red Hat. I plan to use AWX to automate management and maintenance of various services: configuration, software updates, monitoring, and consistent backups using Restic.

Following the installation instructions, there are several options for deploying AWX: OpenShift, Kubernetes, and Docker Compose. For obvious reasons, I’ve chosen Kubernetes. This requires that kubectl is configured and working. It also requires a functioning Helm installation if deployment of the PostgreSQL database will also be automated.
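A quick, hedged sanity check for these prerequisites (this only verifies the tools are on the PATH, not cluster connectivity or versions):

```shell
# Confirm the tools this walkthrough relies on are installed.
# Presence check only; it does not verify cluster access or versions.
for tool in kubectl helm git ansible-playbook; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```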

NFS Client Storage

I had been manually creating NFS volumes and claiming them, but since the AWX installer needs to create volumes automatically using a storage class, I needed that capability as well. Fortunately, I found an NFS client provisioner that was easily installable through Helm.

$ helm repo add stable https://kubernetes-charts.storage.googleapis.com
"stable" has been added to your repositories
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "jetstack" chart repository
...Successfully got an update from the "traefik" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈ 
$ helm install nfs-provisioner stable/nfs-client-provisioner --set nfs.server=192.168.xx.xx --set nfs.path=/Container/k8s
NAME: nfs-provisioner
LAST DEPLOYED: Fri May  8 17:12:28 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

I also configured this storage class as the default under Cluster > Storage > Storage Classes.
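For reference, the same default-class setting can be expressed directly on the StorageClass object itself; the class and provisioner names below are assumptions based on the chart’s defaults:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client                      # name created by the chart (assumed)
  annotations:
    # This annotation is what marks a StorageClass as the cluster default.
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: cluster.local/nfs-provisioner-nfs-client-provisioner  # assumed, derived from the release name
```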

Update 6/9/2021: this provisioner has since been deprecated; see the update at the end of this post.

Clone the Repository

The first task is to clone the AWX repository:

$ git clone -b 11.2.0 https://github.com/ansible/awx.git

Where a specific release branch is specified from the available releases; at the time of this post, it’s 11.2.0.

Modify Inventory File

The inventory file in the installer directory needs to be updated to configure how the AWX deployment will be completed using Ansible.

  • dockerhub_base=ansible (use Docker Hub images from ansible/awx_web:tag and ansible/awx_task:tag)
  • dockerhub_version=11.2.0 (deploy this specific version, 11.2.0 as of this post; defaults to latest if unset)
  • kubernetes_context=domain (name of Kubernetes cluster)
  • kubernetes_namespace=awx (name of Kubernetes namespace to use)
  • kubernetes_ingress_hostname=awx.domain.tld (hostname for ingress)
  • kubernetes_ingress_annotations={'cert-manager.io/cluster-issuer': 'letsencrypt'} (any annotations needed for the ingress)
  • pg_password=password (new random password)
  • awx_password=password (new random password)
  • secret_key=awxsecret (new random key)
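One way to generate the random passwords and secret key above (assumes openssl is available; any random generator would do):

```shell
# Generate random credentials for the inventory file.
# openssl rand -hex N emits 2*N hex characters.
pg_password=$(openssl rand -hex 16)
awx_password=$(openssl rand -hex 16)
secret_key=$(openssl rand -hex 32)
printf 'pg_password=%s\nawx_password=%s\nsecret_key=%s\n' \
  "$pg_password" "$awx_password" "$secret_key"
```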

I was getting errors about not enough memory to start the pod, so I also lowered the reservations by adding these settings to the inventory file.

  • web_mem_request=1
  • web_cpu_request=500
  • task_mem_request=1
  • task_cpu_request=1500
  • redis_mem_request=1
  • redis_cpu_request=500
  • memcached_mem_request=1
  • memcached_cpu_request=500
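Collected together, the settings above would appear in the installer’s inventory file roughly like this (the hostnames, passwords, and issuer name are the placeholders used in this post):

```ini
; installer/inventory (fragment)
dockerhub_base=ansible
dockerhub_version=11.2.0
kubernetes_context=domain
kubernetes_namespace=awx
kubernetes_ingress_hostname=awx.domain.tld
kubernetes_ingress_annotations={'cert-manager.io/cluster-issuer': 'letsencrypt'}
pg_password=password
awx_password=password
secret_key=awxsecret
; lowered resource reservations
web_mem_request=1
web_cpu_request=500
task_mem_request=1
task_cpu_request=1500
redis_mem_request=1
redis_cpu_request=500
memcached_mem_request=1
memcached_cpu_request=500
```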

Helm

Helm should first be configured and functioning according to its installation instructions, and it needs RBAC (role-based access control) enabled with a service account.

Execute Ansible Playbook

Unsurprisingly, AWX is installed by executing the install playbook:

$ ansible-playbook -i inventory install.yml

Post-Install

Following the execution of the install playbook, I verified that the pods deployed successfully, either through kubectl or the Rancher UI.

$ kubectl get pods --namespace awx
NAME                          READY   STATUS    RESTARTS   AGE
awx-579dd4f7cc-k2hqr          4/4     Running   0          59m
awx-postgresql-postgresql-0   1/1     Running   0          81m

Of course, I was able to access the AWX GUI at https://awx.domain.tld using the username and password specified in the inventory file.

AWX CLI

The AWX CLI allows configuring and launching jobs and playbooks, checking the status or output of job runs, and managing users, teams, etc.

$ python3 -m pip install "https://github.com/ansible/awx/archive/11.2.0.tar.gz#egg=awxkit&subdirectory=awxkit"
$ python3 -m pip install sphinx sphinxcontrib-autoprogram


Next, build the documentation from the awx/awxkit/awxkit/cli/docs directory:

$ TOWER_HOST=https://awx.domain.tld TOWER_USERNAME=admin TOWER_PASSWORD= make clean html


Then to view the documentation, run this and view in the browser at http://localhost:8000

$ cd build/html/ && python -m http.server
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...


The documentation explains how to configure the command-line for the host, username and password.

Now log in and obtain a token for subsequent commands. Wrapping the login command in backticks sets the TOWER_OAUTH_TOKEN environment variable.

$ export TOWER_HOST=https://awx.domain.tld
$ awx --conf.host https://awx.domain.tld --conf.username admin --conf.password <password> login -f human

Then verify that it works:

$ awx users list
{
     "count": 1,
     "next": null,
     "previous": null,
     "results": [
          {
               "id": 1,
               "type": "user",
               "url": "/api/v2/users/1/",
               "related": {
                    "teams": "/api/v2/users/1/teams/",
                    "organizations": "/api/v2/users/1/organizations/",
                    "admin_of_organizations": "/api/v2/users/1/admin_of_organizations/",
                    "projects": "/api/v2/users/1/projects/",
                    "credentials": "/api/v2/users/1/credentials/",
                    "roles": "/api/v2/users/1/roles/",
                    "activity_stream": "/api/v2/users/1/activity_stream/",
                    "access_list": "/api/v2/users/1/access_list/",
                    "tokens": "/api/v2/users/1/tokens/",
                    "authorized_tokens": "/api/v2/users/1/authorized_tokens/",
                    "personal_tokens": "/api/v2/users/1/personal_tokens/"
               },
               "summary_fields": {
                    "user_capabilities": {
                         "edit": true,
                         "delete": false
                    }
               },
               "created": "2020-05-09T16:33:19.549972Z",
               "username": "admin",
               "first_name": "",
               "last_name": "",
               "email": "root@localhost",
               "is_superuser": true,
               "is_system_auditor": false,
               "ldap_dn": "",
               "last_login": "2020-05-10T11:37:37.398967Z",
               "external_account": null,
               "auth": []
          }
     ]
}

In the next post, I will set up Keycloak to provide single sign-on for AWX.

_Update 6/9/2021_

1) AWX is no longer deployed or upgraded using Ansible playbooks; deployment is now handled by the AWX Operator. I will cover migrating AWX to the AWX Operator in a separate post.

2) The NFS provisioner stopped working, and I discovered that it had been deprecated and renamed to nfs-subdir-external-provisioner. I had to use Helm to remove the old release and install the new chart:

$ helm uninstall nfs-provisioner
release "nfs-provisioner" uninstalled
$ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
"nfs-subdir-external-provisioner" has been added to your repositories
$ helm install nfs-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=192.168.xx.xx --set nfs.path=/Container/k8s
NAME: nfs-provisioner
LAST DEPLOYED: Wed Jun  9 14:19:44 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None