I recently updated one of my old posts about AWX after discovering that the nfs-client storage provisioner, which automatically creates persistent volumes from an existing NFS mount, had stopped working. Not only had it stopped working, it had been deprecated, and I had to find a replacement that worked.

In the process, I noticed that I had never updated that post to reflect how AWX is now installed and updated on Kubernetes. As you can read for yourself in that post, AWX was previously installed by cloning the awx GitHub repository, customizing the inventory file, and executing the included Ansible playbook.

The new, Kubernetes-native way to deploy and update AWX is through the AWX Operator. Fortunately, I had originally set up AWX with an external PostgreSQL database using the PostgreSQL Helm chart, so the migration happened with little impact to my existing projects/templates/jobs. The new process is to apply awx-operator.yaml directly from the AWX Operator release:

```shell
$ kubectl apply -f https://raw.githubusercontent.com/ansible/awx-operator/<TAG>/deploy/awx-operator.yaml
customresourcedefinition.apiextensions.k8s.io/awxs.awx.ansible.com created
customresourcedefinition.apiextensions.k8s.io/awxbackups.awx.ansible.com created
customresourcedefinition.apiextensions.k8s.io/awxrestores.awx.ansible.com created
clusterrole.rbac.authorization.k8s.io/awx-operator created
clusterrolebinding.rbac.authorization.k8s.io/awx-operator created
serviceaccount/awx-operator created
deployment.apps/awx-operator created
```

Once the operator is installed in the cluster, you can deploy AWX by creating a manifest of kind AWX along with the values needed to customize it:


```yaml
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
  namespace: awx
spec:
  tower_admin_user: admin
  tower_admin_password_secret: awx-admin-password
  tower_create_preload_data: true
  tower_garbage_collect_secrets: false
  tower_hostname: "{{ awx_hostname }}"
  tower_image_pull_policy: IfNotPresent
  tower_ingress_annotations: "{'cert-manager.io/cluster-issuer': 'letsencrypt'}"
  tower_ingress_tls_secret: awx-cert
  tower_ingress_type: Ingress
  tower_loadbalancer_port: 80
  tower_loadbalancer_protocol: http
  tower_postgres_storage_class: nfs-client
  tower_projects_persistence: false
  tower_projects_storage_access_mode: ReadWriteMany
  tower_projects_storage_size: 8Gi
  tower_replicas: 1
  tower_route_tls_termination_mechanism: Edge
  tower_task_privileged: false
```

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: awx-postgres-configuration
  namespace: awx
stringData:
  host: awx-postgresql
  port: "5432"
  database: awx
  username: "{{ awx_postgres_user }}"
  password: "{{ awx_postgres_pw }}"
type: Opaque
```

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: awx-admin-password
  namespace: awx
stringData:
  password: "{{ awx_admin_pw }}"
```

The values of {{ awx_postgres_pw }} and {{ awx_admin_pw }} are actually Ansible variables. This is because I still wanted the ability to deploy changes with Ansible, so I created an Ansible role called awx.
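Since the manifest is rendered through Ansible's template lookup, those {{ }} placeholders are substituted with variable values before anything reaches the cluster. Here is a rough pure-Python sketch of that substitution step (the variable name comes from the secret above; real rendering uses Jinja2, so this is only an illustration of the idea, not how Ansible implements it):

```python
import re

def render(template: str, variables: dict) -> str:
    """Tiny stand-in for Jinja2 rendering: replace {{ name }} markers."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables[m.group(1)]),
        template,
    )

# Render the admin-password secret the way the role's template lookup would.
manifest = "stringData:\n  password: {{ awx_admin_pw }}"
print(render(manifest, {"awx_admin_pw": "s3cret"}))
```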


```yaml
# Module names below are assumed (kubernetes.core.k8s; on older Ansible
# releases this module lived in community.kubernetes).
- name: Get AWX Operator definition from Github
  ansible.builtin.get_url:
    url: "https://raw.githubusercontent.com/ansible/awx-operator/{{ awx_operator_version }}/deploy/awx-operator.yaml"
    dest: /tmp/awx-operator-{{ awx_operator_version }}.yaml

- name: Upgrade or Install AWX Operator
  kubernetes.core.k8s:
    state: present
    namespace: default
    kubeconfig: "{{ kube_config }}"
    context: "{{ kube_context }}"
    src: /tmp/awx-operator-{{ awx_operator_version }}.yaml
    wait: true

- name: Deploy AWX Manifest
  kubernetes.core.k8s:
    state: present
    kubeconfig: "{{ kube_config }}"
    context: "{{ kube_context }}"
    namespace: awx
    definition: "{{ lookup('template', 'manifests/awx-deployment.yaml') }}"
    validate:
      fail_on_error: yes
    wait: true
```
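The post doesn't show the playbook that wires this role in; assuming a standard layout, it could be as small as the following (the playbook name is a guess, not from the original):

```yaml
---
# site.yaml -- hypothetical playbook applying the awx role. It targets
# localhost because the k8s tasks talk to the cluster via kubeconfig, not SSH.
- hosts: localhost
  connection: local
  gather_facts: false
  roles:
    - awx
```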

In the global Ansible vars, {{ awx_operator_version }} is defined as the most recent version of the AWX Operator. As new versions are released, it's just a matter of changing that value and running the playbook containing the awx role. In fact, the playbook is idempotent: it can be run again and again, and AWX will always end up configured the same way unless something in the playbook changes. Here's what happens each time:

* Download the specified version of awx-operator.yaml from GitHub
* Apply awx-operator.yaml to the cluster
* Apply the awx-deployment.yaml manifest containing the specific customizations for my environment

## Collections

Collections are a new concept for me, and one of the old modules I was using to deploy restic was moved from the core Ansible modules into the community.general collection.

I was familiar with [Ansible Galaxy](https://galaxy.ansible.com) as the place to share Ansible roles and collections, but I had been directly integrating things I found there into my Ansible project. I don't want to do that with modules, so I need to embrace collections.

To do this, I created a new directory in my project called collections, which contains a requirements.yaml:
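The contents of that requirements.yaml aren't shown here; based on the modules used in this post, a plausible version would be the following (community.general is named in the text, while kubernetes.core for the k8s tasks is my assumption):

```yaml
---
collections:
  # for the restic module that moved out of core
  - name: community.general
  # for the k8s module used in the awx role (assumed)
  - name: kubernetes.core
```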


When running from the command line, I can use ansible-galaxy to install the collections:

```shell
ansible-galaxy collection install -r collections/requirements.yaml
```

Since I don't want the installed collections to become part of the repository, I also need to exclude them in the .gitignore while keeping the requirements.yaml:

```
collections/*
!collections/requirements.yaml
```

I also needed to create a new ansible.cfg to tell Ansible where to look for collections and which Galaxy server to use:

```ini
[defaults]
stdout_callback = yaml
inventory = inventory/home
collections_paths = ./collections

[galaxy]
server_list = release_galaxy

[galaxy_server.release_galaxy]
url = https://galaxy.ansible.com/
token = <your-api-key>
```

The Ansible Galaxy token is obtained by logging into Ansible Galaxy and revealing the API key in user preferences.
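If you'd rather not write the API key into ansible.cfg at all, Ansible can also read a per-server token from the environment; the variable name is derived from the server id in ansible.cfg (release_galaxy here), and the key value below is a placeholder:

```shell
# Ansible maps ANSIBLE_GALAXY_SERVER_<SERVER_ID>_TOKEN onto the token option
# of the matching [galaxy_server.<server_id>] section in ansible.cfg.
export ANSIBLE_GALAXY_SERVER_RELEASE_GALAXY_TOKEN='<api-key-from-user-preferences>'
```

With that exported, ansible-galaxy collection install -r collections/requirements.yaml authenticates without a token line in the config file.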

So, this all worked great from the command line. I also added Ansible Galaxy as a credential within AWX, but I noticed afterwards that it seems to have been imported from ansible.cfg. Refer to the latest documentation, because things still seem to be evolving.