How to configure HA-LVM Cluster using tagging & volume_list in RHEL 8?
Updated May 20 2022
Environment
• Red Hat Enterprise Linux 8 (with the High Availability Add-on)
Issue
• What are the steps to configure active/passive shared storage using the volume_list and tagging features in RHEL 8?
Resolution
In RHEL 8, HA-LVM is configured with the LVM-activate resource agent, which replaces the LVM resource agent used in earlier releases.
For more information see:
• 1.6. LVM logical volumes in a Red Hat high availability cluster | Configuring and
managing high availability clusters Red Hat Enterprise Linux 8
• 5.3. Creating the resources and resource groups | Configuring and managing high
availability clusters Red Hat Enterprise Linux 8
The following example assumes:
• Cluster node names are node1 and node2, while their respective hostnames are rh8-nd1 and rh8-nd2.
• The local VG on both cluster nodes is named rhel.
• Shared storage is assigned to both cluster nodes. In the following article the shared storage is mpatha using multipath.
• Basic cluster setup is complete, with the fence resource in Started state and tested.
[rh8-nd1]# multipath -ll
mpatha (36001405dfade64e579642dca302af4f3) dm-2 LIO-ORG,2node_rhel8
size=1.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 2:0:0:0 sde 8:64 active ready running
`-+- policy='service-time 0' prio=50 status=enabled
`- 3:0:0:0 sdb 8:16 active ready running
[rh8-nd2]# multipath -ll
mpatha (36001405dfade64e579642dca302af4f3) dm-2 LIO-ORG,2node_rhel8
size=1.0G features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 2:0:0:0 sde 8:64 active ready running
`-+- policy='service-time 0' prio=50 status=enabled
`- 3:0:0:0 sdb 8:16 active ready running
lvm.conf
• Update the /etc/lvm/lvm.conf file on all the cluster nodes, setting volume_list to the names of all local VGs. If a cluster node has multiple local VGs, include them all under the volume_list parameter, separated by commas.
# vi /etc/lvm/lvm.conf
[.....]
volume_list = [ "rhel" ]
[.....]
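If a node had a second local VG, say vg_local (a hypothetical name for illustration), the entry would read volume_list = [ "rhel", "vg_local" ]. To confirm what LVM actually parses, you can query the running configuration with lvmconfig:
# lvmconfig activation/volume_list
volume_list=["rhel"]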
• Take a backup of the current initramfs file on all of the cluster nodes.
# cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.$(date
+%m-%d-%H%M%S).bak
• Rebuild the initramfs file on all of the cluster nodes.
# dracut -f -v
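To verify that the rebuilt image carries the updated configuration, the initramfs contents can be inspected with lsinitrd (a quick sanity check, not part of the original procedure):
# lsinitrd -f /etc/lvm/lvm.conf | grep volume_list
volume_list = [ "rhel" ]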
• Reboot all of the cluster nodes.
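The article does not mandate a particular order; a cautious approach is to reboot one node at a time so the cluster keeps quorum, for example:
[rh8-nd1]# reboot
(wait for rh8-nd1 to rejoin the cluster before rebooting the next node)
[rh8-nd2]# reboot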
• Once all the cluster nodes are back online, proceed with creating the PV/VG/LV from any one of the cluster nodes. In the following example the VG name is cluster_vg and the LV name is cluster_lv.
[rh8-nd1]# pvcreate /dev/mapper/mpatha
Physical volume "/dev/mapper/mpatha" successfully created.
[rh8-nd1]# vgcreate cluster_vg /dev/mapper/mpatha
Volume group "cluster_vg" successfully created
[rh8-nd1]# lvcreate -l 100%FREE -n cluster_lv cluster_vg
Logical volume "cluster_lv" created.
• Validate the newly created LVM device.
[rh8-nd1]# vgs -ao+tags
VG #PV #LV #SN Attr VSize VFree VG Tags
cluster_vg 1 1 0 wz--n- 1020.00m 0
rhel 1 2 0 wz--n- <11.00g 0
[rh8-nd2]# vgs -ao+tags
VG #PV #LV #SN Attr VSize VFree VG Tags
cluster_vg 1 1 0 wz--n- 1020.00m 0
rhel 1 2 0 wz--n- <11.00g 0
• Create a filesystem of your choice (ext4/xfs) on the newly created LVM device.
# mkfs.xfs /dev/cluster_vg/cluster_lv
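Optionally, confirm the new filesystem before handing the device over to the cluster (a quick check, not part of the original article; UUID elided):
# blkid /dev/cluster_vg/cluster_lv
/dev/cluster_vg/cluster_lv: UUID="..." TYPE="xfs"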
pacemaker
• Add a pacemaker cluster resource to manage the VG. The resource agent to be used is ocf::heartbeat:LVM-activate. The value of the tag parameter is the tag that will be added to the VG.
# pcs resource create cluster_vg ocf:heartbeat:LVM-activate
vgname=cluster_vg activation_mode=exclusive vg_access_mode=tagging
tag=rhel8 --group HA-LVM
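To review the resulting resource definition, pcs resource config can be used (output abbreviated and may vary by pcs version):
# pcs resource config cluster_vg
Resource: cluster_vg (class=ocf provider=heartbeat type=LVM-activate)
Attributes: activation_mode=exclusive tag=rhel8 vg_access_mode=tagging vgname=cluster_vg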
• Create a cluster resource with the resource agent ocf:heartbeat:Filesystem so the cluster will control the mounting of the filesystem and make it available on one of the cluster nodes.
# pcs resource create cluster-fs ocf:heartbeat:Filesystem device=/dev/cluster_vg/cluster_lv directory=/test fstype=xfs --group HA-LVM
• Verify that the newly created resources start up.
# pcs status | grep HA-LVM -A 2
Resource Group: HA-LVM
cluster_vg (ocf::heartbeat:LVM-activate): Started node2
cluster-fs (ocf::heartbeat:Filesystem): Started node2
• Validate that the tag on the volume group is set on all the nodes. In this example, all nodes should see the tag rhel8 for the volume group cluster_vg when the corresponding resource is activated. The logical volume in cluster_vg should only show as active (the a attribute bit) on the node that the LVM-activate resource is running on.
[rh8-nd1]# vgs -ao+tags cluster_vg
VG #PV #LV #SN Attr VSize VFree VG Tags
cluster_vg 1 1 0 wz--n- 1020.00m 0 rhel8
[rh8-nd2]# vgs -ao+tags cluster_vg
VG #PV #LV #SN Attr VSize VFree VG Tags
cluster_vg 1 1 0 wz--n- 1020.00m 0 rhel8
[rh8-nd1]# lvs -o lv_full_name,lv_attr cluster_vg/cluster_lv
LV Attr
cluster_vg/cluster_lv -wi-------
[rh8-nd2]# lvs -o lv_full_name,lv_attr cluster_vg/cluster_lv
LV Attr
cluster_vg/cluster_lv -wi-a-----
• Test failover of the resources by placing the active node in standby, as sketched below. This ensures that the VG and filesystem can start on the passive node as well.
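A minimal failover test, assuming node2 is currently running the HA-LVM group (remember to clear the standby afterwards):
# pcs node standby node2
# pcs status | grep HA-LVM -A 2
Resource Group: HA-LVM
cluster_vg (ocf::heartbeat:LVM-activate): Started node1
cluster-fs (ocf::heartbeat:Filesystem): Started node1
# pcs node unstandby node2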
For more information see:
• Configuring and managing high availability clusters Red Hat Enterprise Linux 8 | 5.1. Configuring an LVM volume with an ext4 file system in a Pacemaker cluster
• How to use the resource-agents `lvmlockd` and `LVM-activate` with pacemaker on RHEL 7?
• How do I manually activate a volume group managed by an LVM-activate resource with system_id on another node?
• How to activate a VG with 'system_id' locally on a cluster node?
This solution is for active/passive mounting ( LVM-activate with activation_mode set to exclusive ) of a shared storage device that contains a local filesystem (ext4, xfs, etc.) that is not cluster aware. For this reason, never create multiple LVM-activate resources with activation_mode=exclusive and the same vgname, because corruption will occur if they are activated on different nodes. Additionally, do not configure an LVM-activate resource as a cloned resource when the LVM-activate resource is using activation_mode=exclusive.