OpenShift API for Data Protection / OADP-542

[MTC] Migrations get stuck at the StageBackup stage for indirect runs


    • Type: Bug
    • Resolution: Done
    • Priority: Blocker
    • Affects Version: OADP 1.0.3
    • Fix Version: OADP 1.0.3

      Description of problem: Migrations get stuck at the StageBackup stage when triggered in indirect mode. The same migrations complete fine in direct mode.

      Version-Release number of selected component (if applicable):
      Source: GCP 4.6, MTC 1.7.2 + OADP 1.0.3
      Target: GCP 4.10, MTC 1.7.2 + OADP 1.0.3
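
      (Editor's sketch, not from the original report: the installed operator versions can be cross-checked on each cluster by listing the ClusterServiceVersions; the grep pattern below is illustrative.)

      $ oc get csv -A | grep -iE 'mtc|migration|oadp'   # shows the MTC and OADP operator CSVs and their versions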

      How reproducible:
      Always

      Steps to Reproduce:
      1. Deploy an application in the source cluster.
      2. Trigger a migration in indirect mode (a sketch of the relevant MigPlan fields follows these steps).
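
      Indirect migration is selected on the referenced MigPlan rather than on the MigMigration itself. A minimal sketch of the relevant fields follows; the cluster, storage, and namespace names are placeholders and were not taken from this environment:

      apiVersion: migration.openshift.io/v1alpha1
      kind: MigPlan
      metadata:
        name: test4                        # plan referenced by migration-52827 below
        namespace: openshift-migration
      spec:
        indirectImageMigration: true       # push images through the migration registry instead of copying directly between clusters
        indirectVolumeMigration: true      # move PV data via the replication repository (stage backup/restore) instead of direct volume migration
        namespaces:
        - <app-namespace>                  # placeholder: namespace of the application deployed in step 1
        srcMigClusterRef:
          name: <source-cluster>           # placeholder
          namespace: openshift-migration
        destMigClusterRef:
          name: <target-cluster>           # placeholder (typically the host cluster)
          namespace: openshift-migration
        migStorageRef:
          name: <replication-repository>   # placeholder
          namespace: openshift-migration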

      Actual results: The migration gets stuck at the StageBackup stage.

       

      $ oc get migmigration migration-52827 -o yaml
      spec:
        migPlanRef:
          name: test4
          namespace: openshift-migration
        quiescePods: true
        stage: false
      status:
        conditions:
        - category: Advisory
          lastTransitionTime: "2022-05-31T10:15:05Z"
          message: 'Step: 30/49'
          reason: StageBackupCreated
          status: "True"
          type: Running
        - category: Required
          lastTransitionTime: "2022-05-31T10:13:31Z"
          message: The migration is ready.
          status: "True"
          type: Ready
        - category: Required
          durable: true
          lastTransitionTime: "2022-05-31T10:14:05Z"
          message: The migration registries are healthy.
          status: "True"
          type: RegistriesHealthy
        - category: Advisory
          durable: true
          lastTransitionTime: "2022-05-31T10:14:37Z"
          message: '[1] Stage pods created.'
          status: "True"
          type: StagePodsCreated
        itinerary: Final
        observedDigest: 6a51be85e3b968769b1713084a928b5114ec8e9b3c26662cf534ade8ed78b794
        phase: StageBackupCreated
        pipeline:
        - completed: "2022-05-31T10:14:06Z"
          message: Completed
          name: Prepare
          started: "2022-05-31T10:13:31Z"
        - completed: "2022-05-31T10:14:26Z"
          message: Completed
          name: Backup
          progress:
          - 'Backup openshift-migration/migration-52827-initial-nrqvg: 41 out of estimated total of 41 objects backed up (17s)'
          started: "2022-05-31T10:14:06Z"
        - message: Waiting for stage backup to complete.
          name: StageBackup
          phase: StageBackupCreated
          progress:
          - 'Backup openshift-migration/migration-52827-stage-z8w4d: 0 out of estimated total of 5 objects backed up (52m56s)'
          - 'PodVolumeBackup openshift-migration/migration-52827-stage-z8w4d-f76h9: 0 bytes out of 0 bytes backed up (52m40s)'
          started: "2022-05-31T10:14:26Z"
        - message: Not started
          name: StageRestore
        - message: Not started
          name: Restore
        - message: Not started
          name: Cleanup
        startTimestamp: "2022-05-31T10:13:31Z"
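
      While the migration is stuck, the underlying Velero resources on the source cluster (where the stage backup runs) can be inspected directly. The resource names below are taken from the status above; the commands are standard oc usage:

      $ oc -n openshift-migration get backup migration-52827-stage-z8w4d -o yaml                 # Velero Backup driving the StageBackup step; check .status.phase and .status.progress
      $ oc -n openshift-migration get podvolumebackup migration-52827-stage-z8w4d-f76h9 -o yaml  # PodVolumeBackup stuck at 0 bytes; check .status.phase and any error message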

       

      $ oc logs migration-log-reader-5d6d95499b-72bvn -c color
      openshift-migration velero-57c48b4bb-n9s4x velero time="2022-05-31T11:08:24Z" level=info msg="Found 1 backups in the backup location that do not exist in the cluster and need to be synced" backupLocation=automatic-c6mbt controller=backup-sync logSource="pkg/controller/backup_sync_controller.go:204"
      openshift-migration velero-57c48b4bb-n9s4x velero time="2022-05-31T11:08:24Z" level=info msg="Attempting to sync backup into cluster" backup=migration-58d98-initial-rsrdt backupLocation=automatic-c6mbt controller=backup-sync logSource="pkg/controller/backup_sync_controller.go:212"
      openshift-migration velero-57c48b4bb-n9s4x velero time="2022-05-31T11:08:24Z" level=error msg="Error getting backup metadata from backup store" backup=migration-58d98-initial-rsrdt backupLocation=automatic-c6mbt controller=backup-sync error="rpc error: code = Unknown desc = storage: object doesn't exist" error.file="/remote-source/src/github.com/vmware-tanzu/velero/pkg/persistence/object_store.go:289" error.function="github.com/vmware-tanzu/velero/pkg/persistence.(*objectBackupStore).GetBackupMetadata" logSource="pkg/controller/backup_sync_controller.go:216"
      openshift-migration velero-57c48b4bb-n9s4x velero time="2022-05-31T11:08:24Z" level=info msg="Validating backup storage location" backup-storage-location=automatic-c6mbt controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:114"
      openshift-migration velero-57c48b4bb-n9s4x velero time="2022-05-31T11:08:24Z" level=info msg="Found 1 backups in the backup location that do not exist in the cluster and need to be synced" backupLocation=automatic-gt8v9 controller=backup-sync logSource="pkg/controller/backup_sync_controller.go:204"
      openshift-migration velero-57c48b4bb-n9s4x velero time="2022-05-31T11:08:24Z" level=info msg="Attempting to sync backup into cluster" backup=migration-58d98-initial-rsrdt backupLocation=automatic-gt8v9 controller=backup-sync logSource="pkg/controller/backup_sync_controller.go:212"
      openshift-migration velero-57c48b4bb-n9s4x velero time="2022-05-31T11:08:24Z" level=info msg="Backup storage location valid, marking as available" backup-storage-location=automatic-c6mbt controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:121"
      openshift-migration velero-57c48b4bb-n9s4x velero time="2022-05-31T11:08:24Z" level=info msg="Validating backup storage location" backup-storage-location=automatic-gt8v9 controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:114"
      openshift-migration velero-57c48b4bb-n9s4x velero time="2022-05-31T11:08:24Z" level=error msg="Error getting backup metadata from backup store" backup=migration-58d98-initial-rsrdt backupLocation=automatic-gt8v9 controller=backup-sync error="rpc error: code = Unknown desc = storage: object doesn't exist" error.file="/remote-source/src/github.com/vmware-tanzu/velero/pkg/persistence/object_store.go:289" error.function="github.com/vmware-tanzu/velero/pkg/persistence.(*objectBackupStore).GetBackupMetadata" logSource="pkg/controller/backup_sync_controller.go:216"
      openshift-migration velero-57c48b4bb-n9s4x velero time="2022-05-31T11:08:24Z" level=info msg="Backup storage location valid, marking as available" backup-storage-location=automatic-gt8v9 controller=backup-storage-location logSource="pkg/controller/backup_storage_location_controller.go:121"
      openshift-migration migration-controller-56d764884-7fxkd mtc {"level":"info","ts":1653995305.3079288,"logger":"migration","msg":"Checking registry health","migMigration":"migration-52827"}
      openshift-migration migration-controller-56d764884-7fxkd mtc {"level":"info","ts":1653995305.389897,"logger":"migration","msg":"Found 2/2 registries in healthy condition.","migMigration":"migration-52827","message":""}
      openshift-migration migration-controller-56d764884-7fxkd mtc {"level":"info","ts":1653995305.390091,"logger":"migration","msg":"[RUN] (Step 30/49) Waiting for stage backup to complete.","migMigration":"migration-52827","phase":"StageBackupCreated"}
      openshift-migration migration-controller-56d764884-7fxkd mtc {"level":"info","ts":1653995305.8250961,"logger":"migration","msg":"Velero Backup progress report","migMigration":"migration-52827","phase":"StageBackupCreated","backup":"openshift-migration/migration-52827-stage-z8w4d","backupProgress":["Backup openshift-migration/migration-52827-stage-z8w4d: 0 out of estimated total of 5 objects backed up (53m21s)","PodVolumeBackup openshift-migration/migration-52827-stage-z8w4d-f76h9: 0 bytes out of 0 bytes backed up (53m5s)"]}
      openshift-migration migration-controller-56d764884-7fxkd mtc {"level":"info","ts":1653995305.8251326,"logger":"migration","msg":"Stage Backup on source cluster is incomplete. Waiting.","migMigration":"migration-52827","phase":"StageBackupCreated","backup":"openshift-migration/migration-52827-stage-z8w4d","backupPhase":"InProgress","backupProgress":"0/5","backupWarnings":0,"backupErrors":0}
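
      Since the PodVolumeBackup reports 0 bytes after roughly 53 minutes, the restic pods on the source cluster are the next place to look. A hedged example, assuming the default restic DaemonSet deployed by MTC/Velero in openshift-migration:

      $ oc -n openshift-migration get pods -o wide | grep restic   # confirm a restic pod is running on the node hosting the stage pod
      $ for p in $(oc -n openshift-migration get pods -o name | grep restic); do oc -n openshift-migration logs "$p" | grep migration-52827-stage; done   # scan all restic pods for messages about the stuck PodVolumeBackup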
      

       

      Expected results: The migration should complete successfully.

            Assignee: Tiger Kaovilai (tkaovila@redhat.com)
            Reporter: Prasad Joshi (rhn-support-prajoshi)