Infinispan / ISPN-1466

Async configuration tag affects communication between cache and HotRod client

This issue belongs to an archived project. You can view it, but you can't modify it.

    • Type: Bug
    • Resolution: Won't Do
    • Priority: Major
    • Fix Version/s: None
    • Affects Version/s: 5.1.0.ALPHA1, 5.1.0.BETA1
    • Component/s: None
    • Labels: None

      When using a REPL (replicated) cache configured with <async useReplQueue="true" replQueueMaxElements="3" replQueueInterval="1000" />, the HotRod client cannot see a cache entry that was just stored in the cache. The entry does not become visible until the replication queue is flushed (either because the replQueueMaxElements limit is reached or the replQueueInterval elapses). I'll attach a testcase, but here's a test snippet that fails at the first assert:

      @Test
      public void testQueueSize() throws Exception {
          RemoteCache<String, String> asyncCache1 = rcm1.getCache(asyncCacheSize);
          RemoteCache<String, String> asyncCache2 = rcm2.getCache(asyncCacheSize);
          asyncCache1.clear();
          asyncCache1.put("k1", "v1");
          assertTrue(null != asyncCache1.get("k1"));
          assertTrue(null == asyncCache2.get("k1"));
          asyncCache1.put("k2", "v2");
          // k3 fills up the queue -> flush
          asyncCache1.put("k3", "v3");
          Thread.sleep(1000); // wait for the queue to be flushed
          assertTrue(null != asyncCache1.get("k1"));
          assertTrue(null != asyncCache2.get("k1"));
      }

      IMO when I have caches A and B in a cluster and an entry is stored in cache A, it should be visible in A but not yet in B. After the queue is flushed it should also be visible in B.
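For reference, the <async> settings quoted above sit inside the cache's clustering element. This is a minimal sketch against the 5.1-era XML schema (the surrounding element names follow the standard clustered configuration from memory, and the cache name is made up; only the <async> attributes are taken from the report):

```xml
<infinispan>
   <namedCache name="asyncReplQueueCache">
      <clustering mode="replication">
         <!-- Updates are queued and flushed either when the queue
              reaches 3 elements or every 1000 ms, whichever comes first. -->
         <async useReplQueue="true"
                replQueueMaxElements="3"
                replQueueInterval="1000"/>
      </clustering>
   </namedCache>
</infinispan>
```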

      Here's the test:

      https://svn.devel.redhat.com/repos/jboss-qa/edg/infinispan-functional-tests/trunk/xml-configuration/clustered-cache

      (To run it, install infinispan-arquillian-container into the local Maven repository and run "mvn clean verify -Dnode0.ispnhome=${server1.home} -Dnode1.ispnhome=${server2.home}", e.g. mvn clean verify -Dnode0.ispnhome=/home/mgencur/Java/infinispan/infinispan-5.1.0.BETA1 -Dnode1.ispnhome=/home/mgencur/Java/infinispan/infinispan-5.1.0.BETA1-2.)

            JBoss JIRA Server <jira-update@redhat.com> made a comment on bug 801519

            Galder Zamarreño <galder.zamarreno@redhat.com> made a comment on jira ISPN-1466

            Martin, I've created ISPN-1916.

            JBoss JIRA Server <jira-update@redhat.com> made a comment on bug 801519

            Martin Gencur <mgencur@redhat.com> made a comment on jira ISPN-1466

            Yes, it would be nice to have something like the sticky-per-key policy for ASYNC caches in REPL mode.

            JBoss JIRA Server <jira-update@redhat.com> made a comment on bug 801519

            Galder Zamarreño <galder.zamarreno@redhat.com> made a comment on jira ISPN-1466

            @Martin, sure it's expected, but it's rather confusing for the user. If the cache is configured with ASYNC (repl or dist), it should use a sticky-node-like load balance policy, where all requests for the same key get directed to the same node. The problem is that the client is not aware of the cache configuration, but it'd be nice in the future for the server to be able to provide hints to the client.

            In the absence of these hints, if the client developer knows that it's gonna talk to an async cache, it should be able to configure the load balance policy to be sticky-per-key in order to avoid the issue you had. I'm not aware of such an LBP for REPL (dist does it by default, I think, by always going to the 1st owner of the key), so maybe we should implement it. Also, not sure if the LBP can be configured on a per-cache basis either.
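The sticky-per-key policy described here can be sketched independently of the client plumbing. The class and method names below are hypothetical, not the Infinispan HotRod API; the sketch only shows the core idea of routing each key to a server chosen by a stable hash:

```java
import java.util.List;

// Illustrative sticky-per-key selection: the same key always maps to the
// same server, no matter how requests interleave. (Names are made up;
// this is not the Infinispan client's RequestBalancingStrategy.)
class StickyPerKey {
    private final List<String> servers;

    StickyPerKey(List<String> servers) {
        this.servers = servers;
    }

    // A stable hash of the key picks the server, so a put("k1", ...) and a
    // subsequent get("k1") are routed to the same node.
    String serverFor(String key) {
        return servers.get(Math.floorMod(key.hashCode(), servers.size()));
    }
}
```

With such a policy the first assert in the test above would pass even before the replication queue flushes, because the get is routed to the node that received the put.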

            JBoss JIRA Server <jira-update@redhat.com> made a comment on bug 801519

            Martin Gencur <mgencur@redhat.com> made a comment on jira ISPN-1466

            No... now that I know that round-robin is used by default with the HotRod client and REPL mode, the failure I got makes sense.

            I was storing a key/value via HotRod and subsequently did this assert: assertTrue(null != asyncCache1.get("k1")), but since the cache was configured for ASYNC and the client used round-robin, the assertion actually looked for the key on a different node than the one where I stored it (and replication had not taken place yet), so I got a null value. And this is expected, IMO.
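This interleaving can be reproduced with a toy model that involves no Infinispan code at all (every name here is made up): two maps stand in for the cluster nodes, replication is simply never applied, and a counter plays the role of the client's default round-robin policy.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a two-node async REPL cluster behind a round-robin client.
class RoundRobinDemo {
    final Map<String, String> nodeA = new HashMap<>();
    final Map<String, String> nodeB = new HashMap<>();
    int next = 0; // round-robin request counter

    // Alternate between the two nodes, like the default balance policy.
    Map<String, String> nextNode() {
        return (next++ % 2 == 0) ? nodeA : nodeB;
    }

    // The put lands on one node; replication to the other never happens
    // here, standing in for a replication queue that has not flushed yet.
    void put(String k, String v) {
        nextNode().put(k, v);
    }

    String get(String k) {
        return nextNode().get(k);
    }
}
```

Request 0 (the put) lands on nodeA; request 1 (the get) lands on nodeB, which has not seen the entry yet, so null comes back, exactly as in the failing assert.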

            JBoss JIRA Server <jira-update@redhat.com> made a comment on bug 801519

            Galder Zamarreño <galder.zamarreno@redhat.com> made a comment on jira ISPN-1466

            @Martin, for async caches, it might make sense to have a first available, or sticky, load balance policy because round-robin could throw funny results. Isn't that what you meant?

            JBoss JIRA Server <jira-update@redhat.com> made a comment on bug 801519

            Manik Surtani <manik.surtani@jboss.com> updated the status of jira ISPN-1466 to Resolved

            JBoss JIRA Server <jira-update@redhat.com> made a comment on bug 801519

            Manik Surtani <manik.surtani@jboss.com> updated the status of jira ISPN-1466 to Reopened

            JBoss JIRA Server <jira-update@redhat.com> made a comment on bug 801519

            Martin Gencur <mgencur@redhat.com> made a comment on jira ISPN-1466

            The test was indeed incorrect, but using the HotRod client in conjunction with REPL mode makes sense. Requests are then dispatched to the servers in a round-robin manner (as described in the Request Balancing section of https://docs.jboss.org/author/display/ISPN/Java+Hot+Rod+client).

            JBoss JIRA Server <jira-update@redhat.com> made a comment on bug 801519

            Martin Gencur <mgencur@redhat.com> made a comment on jira ISPN-1466

            See the explanation in RH Bugzilla integration comment.

            JBoss JIRA Server <jira-update@redhat.com> made a comment on bug 801519

            Martin Gencur <mgencur@redhat.com> updated the status of jira ISPN-1466 to Closed

              Assignee: Manik Surtani (Inactive)
              Reporter: Martin Gencur
              Archiver: Amol Dongare