Red Hat Data Grid / JDG-4797

HotRod client manual cluster switch not working


      I have two clusters running in different networks, on different OpenShift clusters, using this HotRod configuration:

      import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
      import org.infinispan.client.hotrod.impl.ConfigurationProperties;

      ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.addServer()
                .host("hostA")
                .port(ConfigurationProperties.DEFAULT_HOTROD_PORT)
             .addCluster("ClusterB")
                .addClusterNode("hostB", ConfigurationProperties.DEFAULT_HOTROD_PORT);

      The primary/default cluster goes down and, after max retries, the client switches to "ClusterB". The primary cluster then comes back online (no automatic switch happens here; the documentation says it must be done manually). Using the RemoteCacheManager created with the configuration builder, I use the manual approach, the "switchToDefaultCluster" method. These are the logs from the HotRod client when this happens:

      ISPN004014: New server added(hostA:11222), adding to the pool.
      ISPN004016: Server not in cluster anymore(hostB:11222), removing from the pool.
      ISPN004053: Manually switched back to main cluster
      
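      For reference, the manual switch is invoked roughly like this (a minimal sketch; the cache name "testCache" and the variable names are my own, and builder is the ConfigurationBuilder from above):

      import org.infinispan.client.hotrod.RemoteCache;
      import org.infinispan.client.hotrod.RemoteCacheManager;

      RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());
      RemoteCache<String, String> cache = cacheManager.getCache("testCache");

      // ClusterA is back online at this point; switch back from ClusterB.
      boolean switched = cacheManager.switchToDefaultCluster();
      // ISPN004053 is logged, but the write below still lands in ClusterB
      // instead of the primary cluster.
      cache.put("key", "value");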

      After seeing this log I try to insert a new entry into a cache. The same cache exists in both clusters (there is no cross-site replication). The entry goes to ClusterB; the primary cluster does not receive any value.

      I've tried this with the HotRod application running locally, using BASIC client intelligence, and also with the application running on the same OpenShift cluster as ClusterA, using HASH_DISTRIBUTION_AWARE intelligence.
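      Programmatically, setting the intelligence looks roughly like this (a sketch using the standard ClientIntelligence enum; the Quarkus client may set this through configuration properties instead):

      import org.infinispan.client.hotrod.configuration.ClientIntelligence;

      // Local run:
      builder.clientIntelligence(ClientIntelligence.BASIC);
      // Run inside the same OpenShift cluster as ClusterA:
      builder.clientIntelligence(ClientIntelligence.HASH_DISTRIBUTION_AWARE);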

      The automatic switch works but the manual one does not.

      It is also possible to switch to a cluster even when that cluster is down. I think the switch method should throw an exception or return false in that case; currently it returns true even with the target cluster down.
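      A minimal sketch of that second issue (assuming switchToCluster(String), the companion of switchToDefaultCluster on RemoteCacheManager, and that ClusterB has been stopped before the call):

      // ClusterB is down at this point.
      boolean ok = cacheManager.switchToCluster("ClusterB");
      // Observed: ok == true even though no ClusterB node is reachable.
      // Expected: false, or an exception from switchToCluster().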


      DataGrid Operator 8.2.2
      io.quarkus.quarkus-infinispan-client-2.1.2.Final

      If this is not enough information, please let me know.



      Manually switching clusters through the HotRod client is not working as expected. I'm running a few experiments to test the failover. In automatic mode, after max retries are reached, the cluster switch happens and works as expected, but after bringing back the cluster that was down I cannot manually switch to it: no exception is thrown, and the behaviour is as if I had not changed anything. For example, ClusterA is down, the client fails over to ClusterB, ClusterA comes back, I manually switch to ClusterA, I add something to a cache, and the value still gets stored in ClusterB. I'm using Quarkus.

            Assignee: dberinde@redhat.com Dan Berindei (Inactive)
            Reporter: andre.adam Andre Adam (Inactive)
