OpenShift Logging / LOG-3407

[release-5.5] Cluster logging pod crashes repeatedly when some cluster capabilities are disabled


    • Before this change, the operator would install the console view plugin regardless of whether the console was enabled on the cluster, causing the operator to crash. This change accounts for clusters that do not have the console enabled by handling the case where the call to the API server fails. (A sketch of such a guard follows the field list below.)
    • Sprint: Log Collection - Sprint 228, Log Collection - Sprint 229
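
      The release note above describes guarding the console view plugin installation against clusters where the console is not enabled. The Go sketch below only illustrates that kind of guard; it is not the actual cluster-logging-operator code, and the package name, the discovery-based check, and the console.openshift.io/v1 group version are assumptions made for illustration.

      // consoleguard.go: skip console-plugin reconciliation when the console is
      // not enabled on the cluster. Hypothetical sketch, not the operator's code.
      package consoleguard

      import (
          "k8s.io/apimachinery/pkg/api/errors"
          "k8s.io/apimachinery/pkg/api/meta"
          "k8s.io/client-go/discovery"
      )

      // consoleAPIAvailable reports whether the console.openshift.io/v1 group is
      // served by the API server. On clusters installed with the Console
      // capability disabled the group is absent, so the lookup failure is treated
      // as "console not available" instead of being propagated as a fatal error.
      func consoleAPIAvailable(dc discovery.DiscoveryInterface) (bool, error) {
          _, err := dc.ServerResourcesForGroupVersion("console.openshift.io/v1")
          switch {
          case err == nil:
              return true, nil
          case errors.IsNotFound(err) || meta.IsNoMatchError(err):
              // Group not served: the console is disabled on this cluster.
              return false, nil
          default:
              // A genuine API-server failure; report it without crashing.
              return false, err
          }
      }

      With a check like this, the reconciler can skip creating the console plugin resource when the console API is not served, rather than treating the failed API call as fatal and crash-looping.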

      Description of problem:

      Cluster logging pods crash every 5 minutes when some cluster capabilities are disabled. (https://docs.openshift.com/container-platform/4.11/post_installation_configuration/cluster-capabilities.html)
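
      For context, the set of enabled capabilities is reported in the ClusterVersion status. The Go program below is a minimal, hypothetical sketch for inspecting it; it assumes the github.com/openshift/client-go config clientset and a kubeconfig resolvable through the default loading rules, and is not part of the reproducer. On a cluster installed with baseline capabilities set to None, capabilities such as Console are expected to be absent from the enabled list.

      // capcheck.go: print the capabilities known to and enabled on the cluster.
      package main

      import (
          "context"
          "fmt"

          metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
          "k8s.io/client-go/tools/clientcmd"

          configclient "github.com/openshift/client-go/config/clientset/versioned"
      )

      func main() {
          // Build a rest.Config from the usual kubeconfig locations.
          cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
              clientcmd.NewDefaultClientConfigLoadingRules(),
              &clientcmd.ConfigOverrides{},
          ).ClientConfig()
          if err != nil {
              panic(err)
          }

          client, err := configclient.NewForConfig(cfg)
          if err != nil {
              panic(err)
          }

          // The cluster-scoped ClusterVersion object is always named "version".
          cv, err := client.ConfigV1().ClusterVersions().Get(context.TODO(), "version", metav1.GetOptions{})
          if err != nil {
              panic(err)
          }

          fmt.Println("known capabilities:  ", cv.Status.Capabilities.KnownCapabilities)
          fmt.Println("enabled capabilities:", cv.Status.Capabilities.EnabledCapabilities)
      }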

      Version-Release number of selected component (if applicable):

      4.12.0-0.nightly-2022-09-28-204419 

      How reproducible:

      Always

      Steps to Reproduce:

      1. Install OCP 4.12 via IPI on vSphere using the versioned-installer-vmc7-ovn-lgw-baselinecaps-none-additionalcaps-marketplace-baremetal-ci profile (baseline capabilities set to None), then run some test cases.
      
      

      Actual results:
      The cluster-logging-operator pod is in CrashLoopBackOff:

      oc get po -A --no-headers | awk '$5 > 5 {print}'
      openshift-cluster-version                          cluster-version-operator-78dc9df974-kmmf5                         1/1     Running            7 (10h ago)     10h
      openshift-logging                                  cluster-logging-operator-76cf8fff45-2wnhq                         0/1     CrashLoopBackOff   16 (113s ago)   95m

      The operator log shows repeated client-side throttling messages:

      oc logs cluster-logging-operator-76cf8fff45-2wnhq  -n openshift-logging

      {"_ts":"2022-09-29T14:05:27.263134514Z","_level":"0","_component":"cluster-logging-operator","_message":"starting up...","go_arch":"amd64","go_os":"linux","go_version":"go1.17.12","operator_version":"5.6"}
      
      I0929 14:05:28.314289       1 request.go:665] Waited for 1.03784617s due to client-side throttling, not priority and fairness, request: GET:[https://172.30.0.1:443/apis/autoscaling.openshift.io/v1?timeout=32s|https://172.30.0.1/apis/autoscaling.openshift.io/v1?timeout=32s]
      
      {"_ts":"2022-09-29T14:05:29.924293272Z","_level":"0","_component":"cluster-logging-operator","_message":"Registering Components."}
      
      {"_ts":"2022-09-29T14:05:29.924485802Z","_level":"0","_component":"cluster-logging-operator","_message":"Starting the Cmd."}
      
      I0929 14:05:45.122805       1 request.go:665] Waited for 1.045292985s due to client-side throttling, not priority and fairness, request: GET:[https://172.30.0.1:443/apis/monitoring.coreos.com/v1alpha1?timeout=32s|https://172.30.0.1/apis/monitoring.coreos.com/v1alpha1?timeout=32s]
      
      I0929 14:05:55.173542       1 request.go:665] Waited for 1.095761795s due to client-side throttling, not priority and fairness, request: GET:[https://172.30.0.1:443/apis/events.k8s.io/v1beta1?timeout=32s|https://172.30.0.1/apis/events.k8s.io/v1beta1?timeout=32s]
      
      I0929 14:06:05.223262       1 request.go:665] Waited for 1.1458086s due to client-side throttling, not priority and fairness, request: GET:[https://172.30.0.1:443/apis/metrics.k8s.io/v1beta1?timeout=32s|https://172.30.0.1/apis/metrics.k8s.io/v1beta1?timeout=32s]
      
      I0929 14:06:15.273345       1 request.go:665] Waited for 1.195082902s due to client-side throttling, not priority and fairness, request: GET:[https://172.30.0.1:443/apis/operators.coreos.com/v1?timeout=32s|https://172.30.0.1/apis/operators.coreos.com/v1?timeout=32s]
      
      I0929 14:06:25.273413       1 request.go:665] Waited for 1.196744141s due to client-side throttling, not priority and fairness, request: GET:[https://172.30.0.1:443/apis/apps/v1?timeout=32s|https://172.30.0.1/apis/apps/v1?timeout=32s]
      
      I0929 14:06:35.273517       1 request.go:665] Waited for 1.19481789s due to client-side throttling, not priority and fairness, request: GET:[https://172.30.0.1:443/apis/logging.openshift.io/v1?timeout=32s|https://172.30.0.1/apis/logging.openshift.io/v1?timeout=32s]
      
      I0929 14:06:45.323090       1 request.go:665] Waited for 1.246474995s due to client-side throttling, not priority and fairness, request: GET:[https://172.30.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1beta1?timeout=32s|https://172.30.0.1/apis/flowcontrol.apiserver.k8s.io/v1beta1?timeout=32s]
      
      

       

      Expected results:

      Logging pods should be running, without the errors above.

      Additional info:

       

       

       

       

            jcantril@redhat.com Jeffrey Cantrill
            rhn-support-rgangwar Rahul Gangwar
            Anping Li Anping Li
            Votes: 0
            Watchers: 8
