A Proxmox cluster requires an odd number of nodes so that, in the event of a node going down, a quorum can be maintained. If the quorum is lost, the cluster goes into a read-only mode and no cluster changes can be made. A quorum is defined as “the minimum number of votes that a distributed transaction has to obtain in order to be allowed to perform an operation in a distributed system”.
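To make the arithmetic concrete: quorum is a strict majority of the expected votes, i.e. floor(votes / 2) + 1. A quick sketch of the numbers for this two-node-plus-qdevice setup:

```shell
# Quorum is a strict majority of expected votes: floor(votes / 2) + 1
votes=3                         # two Proxmox nodes + one external qdevice vote
quorum=$(( votes / 2 + 1 ))
echo "quorum: $quorum"          # prints "quorum: 2"
```

With three votes, quorum is two, so the cluster survives the loss of any single voter. With only two nodes, quorum would also be two, and losing either node would leave the cluster read-only.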

I originally started with a single Proxmox node and added a second one. In order to create a Proxmox cluster and still be able to make changes if a node went down, I needed a way to maintain quorum. Fortunately, Proxmox has support for using Corosync as an external quorum voter and I happened to have an extra Raspberry Pi 3 not doing anything.

Once I installed Raspbian on it, I created an Ansible playbook to install and configure Corosync to provide quorum for both the Proxmox cluster and the GlusterFS cluster.

Ansible Playbook

The playbook, qdevice.yml, installs the corosync-qdevice package and configures glusterd across all three nodes.

- name: Quorum devices
  hosts:
    - proxmox1
    - proxmox2
    - qdevice
  roles:
    - debian
    - sshkeys
    - postfix
  tasks:
    - name: Install Corosync Qdevice tools
      apt:
        name: corosync-qdevice
        state: latest

    - name: Install GlusterFS tools
      apt:
        name: glusterfs-server
        state: latest

    - name: Determine Gluster cluster nodes
      set_fact:
        nodelist: "{{ ( nodelist | default([]) ) + [ hostvars[item].ansible_host ] }}"
      loop: "{{ ansible_play_hosts }}"
      run_once: true

    - debug:
        var: nodelist | join(',')

    - name: Enable Glusterd service
      service:
        name: glusterd
        state: started
        enabled: yes

    - name: Create a Gluster storage pool
      gluster_peer:
        state: present
        nodes: "{{ nodelist }}"
      run_once: true
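Running the playbook is the usual one-liner; the inventory path below is a placeholder for wherever your inventory lives:

```shell
# Apply the playbook to all three hosts ("inventory" is a placeholder path)
ansible-playbook -i inventory qdevice.yml

# Or re-run against just the Raspberry Pi
ansible-playbook -i inventory qdevice.yml --limit qdevice
```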


I didn’t create a specific role for the quorum device, but I do utilize a few standard roles on most hosts:

  • debian - a generic role which installs, configures, and updates the packages I want on all Debian hosts, such as git, bzip2, htop, etc.
  • sshkeys - deploys my ssh public key as the authorized_keys file. It’s also used in bootstrap.yml to get a brand-new host configured as an Ansible target.
  • postfix - configures Postfix to relay outgoing e-mail through my Mailcow server.

If there’s interest, I may cover these roles in detail in future posts. Let me know.

Configure Proxmox

Adding the new quorum device to the Proxmox cluster is a one-liner:

root@proxmox1:~# pvecm qdevice setup <QDEVICE-IP>
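One prerequisite worth noting: pvecm qdevice setup expects the external device to be running corosync-qnetd (the vote server), while the cluster nodes themselves need corosync-qdevice (the client, which the playbook above installs). On the Pi that means:

```shell
# On the Raspberry Pi: install and start the QNet daemon that serves the vote
apt install corosync-qnetd
systemctl enable --now corosync-qnetd
```

The setup command also needs root SSH access from the Proxmox node to the Pi so it can exchange certificates.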

Then check the status:

root@proxmox1:~# pvecm status
Cluster information
Name:             home
Config Version:   3
Transport:        knet
Secure auth:      on

Quorum information
Date:             Sat Sep 17 05:25:10 2022
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000001
Ring ID:          1.3cba
Quorate:          Yes

Votequorum information
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2  
Flags:            Quorate Qdevice 

Membership information
    Nodeid      Votes    Qdevice Name
0x00000001          1    A,V,NMW proxmox1 (local)
0x00000002          1    A,V,NMW proxmox2
0x00000000          1            Qdevice


A Proxmox cluster allows VMs to migrate between hosts if there is shared storage between the nodes, such as GlusterFS. That flexibility helps when hardware fails, updates need to be applied, or load needs to be rebalanced. You just need to make sure the cluster has an odd number of votes, and an external quorum device on a Raspberry Pi is a great, economical way to do that.
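As a sketch of that flexibility, a live migration from the command line looks like this (the VM ID 100 is a placeholder):

```shell
# Live-migrate VM 100 from the current node to proxmox2
qm migrate 100 proxmox2 --online
```

With shared GlusterFS storage, only the VM's memory state has to move, so migrations are quick.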