Description of problem:
Cluster rollback failed in the nightly-4.12-e2e-aws-ovn-upgrade-rollback-oldest-supported job. The job tests z-stream rollback by installing 4.12.0, updating towards a recent 4.12 nightly, and then, at some random point during that update, rolling back to 4.12.0. The error is:

Sep 13 11:42:09.619 E ns/openshift-machine-config-operator pod/machine-config-daemon-6g2hx node/ip-10-0-180-199.us-east-2.compute.internal uid/f416428e-88d6-4e8f-938f-a730f442a402 container/machine-config-daemon reason/ContainerExit code/255 cause/Error
I0913 11:42:05.167798 2340 start.go:112] Version: v4.12.0-202301070015.p0.g2b3eba7.assembly.stream-dirty (2b3eba74dd9e4371f35ab41dbda02642f60707ec)
I0913 11:42:05.186936 2340 start.go:125] Calling chroot("/rootfs")
I0913 11:42:05.189323 2340 update.go:2089] Running: systemctl daemon-reload
I0913 11:42:05.388884 2340 rpm-ostree.go:86] Enabled workaround for bug 2111817
F0913 11:42:06.380886 2340 start.go:153] Failed to initialize single run daemon: error reading osImageURL from rpm-ostree: exit status 1

Sep 13 11:59:15.898 E clusteroperator/machine-config condition/Available status/False reason/MachineConfigDaemonFailed changed: Cluster not available for [{operator 4.12.0-0.nightly-2023-09-12-091728}]: failed to apply machine config daemon manifests: error during waitForDaemonsetRollout: [timed out waiting for the condition, daemonset machine-config-daemon is not ready. status: (desired: 6, updated: 6, ready: 4, unavailable: 2)]
Version-Release number of selected component (if applicable):
4.12.0 (rollback target), rolling back from 4.12.0-0.nightly-2023-09-12-091728
How reproducible:
Flaky
Steps to Reproduce:
1. Install 4.12.0
2. Begin an update to a recent 4.12 nightly
3. At some point mid-update, roll back to 4.12.0
Actual results:
Cluster rollback fails: the machine-config-daemon DaemonSet does not become ready (desired: 6, ready: 4, unavailable: 2) and the machine-config ClusterOperator reports Available=False
Expected results:
Cluster rollback to 4.12.0 completes successfully
Additional info: