OpenShift Bugs / OCPBUGS-19766

[4.13] disruption_tests: [sig-apps] job-upgrade panic in the 4.13-e2e-aws-ovn-upgrade-rollback-oldest-supported job


Details

    • Bug
    • Resolution: Won't Do
    • Undefined
    • None
    • 4.13.z
    • Test Framework
    • Important
    • No
    • False

    Description

      Description of problem:

disruption_tests: [sig-apps] job-upgrade panics in the nightly-4.13-e2e-aws-ovn-upgrade-rollback-oldest-supported job. The job tests z-stream rollback by installing 4.13.0, upgrading toward a recent 4.13 nightly, and then, at a random point during that upgrade, rolling back to 4.13.0. The error message is:
      
{Your Test Panicked
      k8s.io/kubernetes@v1.26.1/test/e2e/upgrades/apps/job.go:65
        When you, or your assertion library, calls Ginkgo's Fail(),
        Ginkgo panics to prevent subsequent assertions from running.
      
        Normally Ginkgo rescues this panic so you shouldn't see it.
      
        However, if you make an assertion in a goroutine, Ginkgo can't capture the
        panic.
        To circumvent this, you should call
      
        	defer GinkgoRecover()
      
        at the top of the goroutine that caused this panic.
      
        Alternatively, you may have made an assertion outside of a Ginkgo
        leaf node (e.g. in a container node or some out-of-band function) - please
        move your assertion to
        an appropriate Ginkgo node (e.g. a BeforeSuite, BeforeEach, It, etc...).
      
Learn more at:
        http://onsi.github.io/ginkgo/#mental-model-how-ginkgo-handles-failure
      
      
      goroutine 240 [running]:
      runtime/debug.Stack()
      	runtime/debug/stack.go:24 +0x65
      github.com/openshift/origin/test/extended/util/disruption.finalizeTest({0x0?, 0x4?, 0xd7b41e0?}, {0x8a9a198, 0x16}, {0x8a69f5c, 0x10}, 0xc006b13680, 0xc00566ab40)
      	github.com/openshift/origin/test/extended/util/disruption/disruption.go:267 +0x448
      panic({0x849d280, 0xc000031ea0})
      	runtime/panic.go:884 +0x212
      github.com/onsi/ginkgo/v2.Fail({0xc0056e5d80, 0x3b}, {0xc002397d10?, 0x8a2d519?, 0xc002397d30?})
      	github.com/onsi/ginkgo/v2@v2.4.0/core_dsl.go:352 +0x225
      k8s.io/kubernetes/test/e2e/framework.Fail({0xc006d8b710, 0x26}, {0xc002397da8?, 0xc006d8b710?, 0xc002397dd0?})
      	k8s.io/kubernetes@v1.26.1/test/e2e/framework/log.go:61 +0x145
      k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, {0x988bcc0, 0xc001b65a70}, {0x0?, 0xc006b17ff3?, 0x3?})
      	k8s.io/kubernetes@v1.26.1/test/e2e/framework/expect.go:76 +0x267
      k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
      	k8s.io/kubernetes@v1.26.1/test/e2e/framework/expect.go:43
      k8s.io/kubernetes/test/e2e/upgrades/apps.(*JobUpgradeTest).Test(0xc00096b398, 0xc00566ab40, 0xc00096b398?, 0x0?)
      	k8s.io/kubernetes@v1.26.1/test/e2e/upgrades/apps/job.go:65 +0x96
      github.com/openshift/origin/test/extended/util/disruption.(*chaosMonkeyAdapter).Test(0xc0002d7860, 0xc001775c68)
      	github.com/openshift/origin/test/extended/util/disruption/disruption.go:204 +0x4a2
      k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1()
      	k8s.io/kubernetes@v1.26.1/test/e2e/chaosmonkey/chaosmonkey.go:94 +0x6a
      created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
      	k8s.io/kubernetes@v1.26.1/test/e2e/chaosmonkey/chaosmonkey.go:91 +0x8b
        }
      
The failed job run is here: https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-nightly-4.13-e2e-aws-ovn-upgrade-rollback-oldest-supported/1706489502383476736
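      For context, below is a minimal, self-contained sketch (not the actual origin/kubernetes e2e code) of the pattern the panic message asks for: any goroutine that makes assertions needs defer GinkgoRecover() at its top so Ginkgo can rescue the Fail() panic. The suite bootstrap and the doWork helper are hypothetical placeholders standing in for the jobs-client call that fails at job.go:65.

      package e2e_test

      import (
          "testing"

          . "github.com/onsi/ginkgo/v2"
          . "github.com/onsi/gomega"
      )

      func TestExample(t *testing.T) {
          RegisterFailHandler(Fail)
          RunSpecs(t, "GinkgoRecover example")
      }

      // doWork is a hypothetical stand-in for the call that fails at
      // k8s.io/kubernetes test/e2e/upgrades/apps/job.go:65.
      func doWork() error { return nil }

      var _ = Describe("assertions made from a goroutine", func() {
          It("lets Ginkgo rescue a failing assertion", func() {
              done := make(chan struct{})
              go func() {
                  // Without this deferred call, Fail()'s panic escapes the
                  // goroutine instead of being recorded as a normal failure.
                  defer GinkgoRecover()
                  defer close(done)
                  Expect(doWork()).To(Succeed())
              }()
              <-done
          })
      })

      The stack trace above shows the same shape: chaosmonkey.Do starts the test in its own goroutine (chaosmonkey.go:91), ExpectNoError at job.go:65 calls Fail(), and the resulting panic is only caught later by disruption.finalizeTest.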
       

      Version-Release number of selected component (if applicable):

      4.13.0

      How reproducible:

      Flaky

      Steps to Reproduce:

1. Run the periodic-ci-openshift-release-master-nightly-4.13-e2e-aws-ovn-upgrade-rollback-oldest-supported job (install 4.13.0, upgrade toward a recent 4.13 nightly, roll back to 4.13.0 partway through).
      2. Watch the disruption_tests: [sig-apps] job-upgrade test.
      3. Intermittently, the test panics as shown above.
      

      Actual results:

      disruption_tests: [sig-apps] job-upgrade panic

      Expected results:

      No panic

      Additional info:

       

      Attachments

        Activity

          People

            rhn-engineering-dgoodwin Devan Goodwin
            yanyang@redhat.com Yang Yang
            Weibin Liang Weibin Liang
Votes: 0
            Watchers: 5

            Dates

              Created:
              Updated:
              Resolved: