
Expose Infinispan cluster deployed in OpenShift


    • Type: Enhancement
    • Resolution: Obsolete
    • Priority: Minor
    • Component/s: Cloud

          [ISPN-7793] Expose Infinispan cluster deployed in OpenShift

          Sebastian Łaskawiec (Inactive) added a comment:

          PoC has been done. Now it's a matter of adjusting the approach and scheduling it. It's up to NadirX.

          Sebastian Łaskawiec (Inactive) added a comment:

          > Each datagrid node should get assigned a resolvable hostname which is used as external-host in the hot rod server configuration.

          This is actually a pretty hard thing to do. Whenever you scale up your deployment, a new LB Service needs to be created. Provisioning an LB Service is vendor dependent and takes quite a while (up to 5 minutes on GCP). After the Service gets its external address (an IP or a DNS name), you need to update the Infinispan server configuration and restart it. Note that users sometimes embed the configuration directly into their container image, so sometimes it just cannot be done.
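
          As a rough illustration of what such a configuration update would touch, here is a minimal sketch of the programmatic counterpart of external-host / external-port (the proxyHost / proxyPort settings of HotRodServerConfigurationBuilder), assuming a hypothetical hostname obtained from the LB Service:

          import org.infinispan.server.hotrod.configuration.HotRodServerConfiguration;
          import org.infinispan.server.hotrod.configuration.HotRodServerConfigurationBuilder;

          public class ExternalHostSketch {
             public static void main(String[] args) {
                // Hypothetical external address; in the flow described above it is only known
                // after the cloud provider has provisioned the per-node LB Service.
                String externalHost = "datagrid-0.example.com";

                HotRodServerConfiguration cfg = new HotRodServerConfigurationBuilder()
                      .host("0.0.0.0").port(11222)   // address the endpoint binds to inside the pod
                      .proxyHost(externalHost)       // programmatic counterpart of external-host
                      .proxyPort(11222)              // programmatic counterpart of external-port
                      .build();
                System.out.println("Built Hot Rod endpoint configuration: " + cfg);
             }
          }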

          So as you can see, the main problem is feeding the information about the external IP (or DNS name) back to the server.

          The only way I can see this working is having an external discovery mechanism: a custom container with a customized DNS server (let's call it the Discovery Service for now). Each server could publish its hostname (and internal address, the one that is not routable?) to this service. In the next step, the Discovery Service would scan the Kubernetes API and create a matching LB Service. Once it's up, it could expose it via DNS.
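
          A minimal sketch of the "create a matching LB Service" step, using the fabric8 kubernetes-client and assuming the datagrid pods are managed by a StatefulSet (so each pod carries the statefulset.kubernetes.io/pod-name label); the namespace and pod name are hypothetical:

          import io.fabric8.kubernetes.api.model.Service;
          import io.fabric8.kubernetes.api.model.ServiceBuilder;
          import io.fabric8.kubernetes.client.DefaultKubernetesClient;
          import io.fabric8.kubernetes.client.KubernetesClient;

          public class DiscoveryServiceSketch {
             public static void main(String[] args) {
                String namespace = "myproject";   // hypothetical project/namespace
                String podName = "datagrid-0";    // pod that published itself to the Discovery Service

                try (KubernetesClient client = new DefaultKubernetesClient()) {
                   // One LoadBalancer Service per published pod, selecting that single pod.
                   Service lb = new ServiceBuilder()
                         .withNewMetadata()
                            .withName(podName + "-external")
                            .withNamespace(namespace)
                         .endMetadata()
                         .withNewSpec()
                            .withType("LoadBalancer")
                            .addToSelector("statefulset.kubernetes.io/pod-name", podName)
                            .addNewPort().withPort(11222).withNewTargetPort(11222).endPort()
                         .endSpec()
                         .build();
                   client.services().inNamespace(namespace).createOrReplace(lb);

                   // The external address shows up asynchronously in status.loadBalancer.ingress
                   // (minutes on GCP); the Discovery Service would poll it before publishing via DNS.
                }
             }
          }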

          Once we have the mapping from the internal hostname (and internal IP address) to the external IP, we are ready to do the translation.
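
          A minimal sketch of that translation, assuming the Discovery Service simply keeps a registry from the published internal address to the external one and falls back to the internal address when no mapping is known yet:

          import java.util.Map;
          import java.util.concurrent.ConcurrentHashMap;

          public class AddressTranslation {
             private final Map<String, String> internalToExternal = new ConcurrentHashMap<>();

             // Called once a server has published itself and its LB Service has an external address.
             public void register(String internalHost, String externalHost) {
                internalToExternal.put(internalHost, externalHost);
             }

             // Called when answering a lookup (e.g. from the customized DNS server); clients with
             // direct visibility simply get the internal address back.
             public String resolve(String internalHost) {
                return internalToExternal.getOrDefault(internalHost, internalHost);
             }
          }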

          > Client nodes will receive topology information through Hot Rod containing these host names.
          > Client nodes which have direct visibility of the datagrid nodes will resolve their addresses directly.
          > Client nodes which do not have direct visibility should have their DNS configured to remap the datagrid node names to the proxy addresses.

          This could work, although I'm not sure whether we need a project (namespace) prefix or not. Also, the client code would need to be modified to tolerate non-existent servers.


          Tristan Tarrant added a comment:

          My preference here is to support this transparently using DNS.
          Each datagrid node should get assigned a resolvable hostname which is used as external-host in the hot rod server configuration.
          Client nodes will receive topology information through Hot Rod containing these host names.
          Client nodes which have direct visibility of the datagrid nodes will resolve their addresses directly.
          Client nodes which do not have direct visibility should have their DNS configured to remap the datagrid node names to the proxy addresses.
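
          For reference, a minimal sketch of a Hot Rod client bootstrapping against such a resolvable hostname (the hostname is hypothetical); the rest of the topology then arrives through Hot Rod as host names which are either resolved directly or remapped by the client-side DNS:

          import org.infinispan.client.hotrod.RemoteCacheManager;
          import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

          public class ClientSketch {
             public static void main(String[] args) {
                ConfigurationBuilder builder = new ConfigurationBuilder();
                builder.addServer()
                       .host("datagrid-0.example.com")   // resolvable datagrid hostname (hypothetical)
                       .port(11222);

                RemoteCacheManager rcm = new RemoteCacheManager(builder.build());
                rcm.getCache("default").put("k", "v");   // subsequent requests follow the advertised host names
                rcm.stop();
             }
          }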


            Assignee: slaskawi@redhat.com Sebastian Łaskawiec (Inactive)
            Reporter: slaskawi@redhat.com Sebastian Łaskawiec (Inactive)
            Archiver: rhn-support-adongare Amol Dongare
