OpenShift Request For Enhancement: RFE-4088

Option to scale router pods on the Multi-AZ ROSA or OSD cluster


    • Type: Feature Request
    • Resolution: Unresolved
    • Priority: Undefined
    • Component: Cluster Infrastructure

      1. Proposed title of this feature request

      Option to scale router (ingress-controller) pods on the Multi-AZ ROSA (or OSD) cluster

      2. What is the nature and description of the request?

      ref: https://access.redhat.com/solutions/6903041

      Even if the cluster has 3 infra nodes, there will currently not be 3 router replicas in an OSD/ROSA cluster: the default number of router replicas in the OSD/ROSA environment is 2, and this default configuration cannot be changed. The customer wants an option to change this default in ROSA.
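
      As a rough illustration of the current behaviour (the exact output is illustrative, and on a managed ROSA/OSD cluster the IngressController spec is controlled by the managed configuration), the default replica count can be inspected via the ingress operator's resources:

        # Show the replica count on the default IngressController
        oc -n openshift-ingress-operator get ingresscontroller default \
          -o jsonpath='{.spec.replicas}'
        # On an OSD/ROSA cluster this is 2, even with 3 infra nodes

        # List the router pods backing the default IngressController
        oc -n openshift-ingress get pods \
          -l ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default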

      In on-prem OCP, most customers run 3 replicas of the router pods for high availability, per the Red Hat Professional Services recommendation.

      Also, it is possible to scale the ingress controller in OCP (see the docs linked below and the example sketches that follow them), but not in ROSA.

      Manual Scaling: https://docs.openshift.com/container-platform/4.12/networking/ingress-operator.html#nw-ingress-controller-configuration_configuring-ingress  
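
      For reference, a minimal sketch of the manual scaling procedure described in the doc above, as it works on self-managed OCP (this is the operation the customer cannot perform on ROSA/OSD today):

        # Scale the default IngressController to 3 replicas
        oc -n openshift-ingress-operator patch ingresscontroller/default \
          --type=merge --patch '{"spec":{"replicas": 3}}'

        # Verify the new replica count
        oc -n openshift-ingress-operator get ingresscontroller/default \
          -o jsonpath='{.spec.replicas}'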

      Autoscaling: https://docs.openshift.com/container-platform/4.12/networking/ingress-operator.html#nw-autoscaling-ingress-controller_configuring-ingress 
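
      And a rough sketch of what the autoscaling procedure above sets up, assuming the Custom Metrics Autoscaler (KEDA) Operator is installed; the trigger query, threshold, replica bounds, and the TriggerAuthentication name are illustrative assumptions rather than an exact copy of the documented example:

        # scaledobject.yaml -- apply with: oc apply -f scaledobject.yaml
        apiVersion: keda.sh/v1alpha1
        kind: ScaledObject
        metadata:
          name: ingress-scaler
          namespace: openshift-ingress-operator
        spec:
          scaleTargetRef:
            apiVersion: operator.openshift.io/v1
            kind: IngressController
            name: default
            envSourceContainerName: ingress-operator
          minReplicaCount: 1
          maxReplicaCount: 3
          triggers:
          - type: prometheus
            metricType: AverageValue
            metadata:
              serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091
              namespace: openshift-ingress-operator
              # Illustrative query: track the number of worker nodes
              query: 'kube_node_role{role="worker",service="kube-state-metrics"}'
              threshold: '1'
              authModes: "bearer"
            authenticationRef:
              name: keda-trigger-auth-prometheus  # assumed pre-created TriggerAuthentication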

      3. Why does the customer need this? (List the business requirements here)

      • Better high availability, and
      • Cost: the customer pays for 3 infra nodes but gets only 2 router pods

      4. List any affected packages or components.

      Networking/ingress-controller

            Assignee: rhn-support-dhardie Duncan Hardie
            Reporter: rh-ee-asaket Ashish Saket