- Story
- Resolution: Can't Do
- Minor
- None
- rhel-8.6.0
- None
- Major
- sst_high_availability
- ssg_filesystems_storage_and_HA
- 2
- False
- x86_64
What were you trying to do that didn't work?
Rebooted the Corosync host that is the preferred location for a Pacemaker remote resource. During the reboot, the Pacemaker remote resource migrated over and ran on the other Corosync host, and then migrated back to the preferred host when the preferred host came back online. During the migration back, the node was fenced, which was not expected.
crm_report collection (to be uploaded): pcmk-Fri-16-Feb-2024.tar.bz2
Key events in crm_report:
Time of reboot: Feb 16 11:49:36
Node rebooted: devlnxps01
Time host came back: Feb 16 11:52:20
Time remote resource migrated back and triggered node fence: Feb 16 11:52:27
Please provide the package NVR for which the bug is seen:
How reproducible: Intermittently
Steps to reproduce
- Configure a Corosync+Pacemaker cluster with 3 Corosync nodes and 2 Pacemaker remote nodes.
- For the Pacemaker remote resource, configure a preferred location using a location constraint on one of the Corosync nodes.
- Reboot the Corosync host that is the preferred location for the Pacemaker remote resource. During the reboot, the Pacemaker remote resource migrates over to run on the other Corosync host, and then migrates back to the preferred host when the preferred host comes back online.
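The setup in the steps above can be sketched with pcs. All node names (node1), remote resource names (remote1, remote2), server addresses, and the constraint score below are hypothetical placeholders, not taken from the affected cluster:

```shell
# Create two Pacemaker remote connection resources
# (hypothetical resource names and remote host addresses).
pcs resource create remote1 ocf:pacemaker:remote server=remotehost1.example.com
pcs resource create remote2 ocf:pacemaker:remote server=remotehost2.example.com

# Prefer running remote1's connection resource on one Corosync node
# (hypothetical score of 100; a finite score allows failover while
# still causing failback when the preferred node returns).
pcs constraint location remote1 prefers node1=100
```

With a configuration along these lines, rebooting node1 makes the remote1 connection resource fail over, and its return triggers the failback during which the unexpected fence was observed.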
Expected results
It is expected that when the Pacemaker remote resource migrates back to run on the preferred host, the remote node remains active and the resources running on it are not stopped and restarted.
Actual results
The corresponding remote node was fenced and all resources on that host were stopped and then restarted. This was not desirable.