Red Hat OpenShift AI Engineering
RHOAIENG-561

[Feature Request]: Enable Model Serving route per Project


      Feature description

      As it stands right now, we create a Route object for every InferenceService instance we deploy in a Project. That can lead to serious performance issues.

      We are working with the model serving team to adapt the controller for this specific scenario:

      1. When a ServingRuntime is created in a namespace, the odh-model-controller will create a single Route with a deterministic name based on the namespace (project).
      2. Each time a new InferenceService is created, the controller will add a path with a deterministic format to that Route if the external route is enabled (see the sketch after this list).
      3. This logic will apply both to InferenceServices that are already deployed and to new ones.
      4. For existing InferenceServices, the old Route will remain available in case it is being used in production.
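
      A minimal sketch of what "deterministic" naming could look like is below. The function names and the exact route-name and path formats are illustrative assumptions for discussion, not the actual odh-model-controller implementation:

{code:go}
package main

import "fmt"

// routeNameForNamespace derives the single, per-project Route name from the
// namespace. The "<namespace>-router" format is a hypothetical example, not
// the format the odh-model-controller will necessarily use.
func routeNameForNamespace(namespace string) string {
	return fmt.Sprintf("%s-router", namespace)
}

// pathForInferenceService derives the deterministic path added to the shared
// Route when an InferenceService has its external route enabled. Again, the
// format shown here is only an assumption.
func pathForInferenceService(isvcName string) string {
	return fmt.Sprintf("/%s", isvcName)
}

func main() {
	// Example: two InferenceServices in the same project share one Route
	// and are distinguished only by their paths.
	ns := "my-project"
	for _, isvc := range []string{"fraud-detector", "image-classifier"} {
		fmt.Printf("route=%s path=%s\n",
			routeNameForNamespace(ns), pathForInferenceService(isvc))
	}
}
{code}

      The key point is that the number of Route objects scales with the number of projects rather than with the number of InferenceServices; individual models are reachable only through per-model paths on the shared Route.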

            Assignee: Unassigned
            Reporter: Dana Gutride (dgutride@redhat.com)
            Component: RHOAI Dashboard
