
[ISPN-1160] fetchInMemoryState doesn't work without FLUSH protocol for UDP


    • Type: Bug
    • Resolution: Done
    • Priority: Major
    • Fix Version/s: 5.0.0.CR6
    • Affects Version/s: 5.0.0.CR4
      I have attached a Java file that has a mainline. If you change the location of the configuration file to point to your Infinispan-distributed jgroups-udp.xml, you should see that a new peer gets stuck trying to retrieve state from the coordinator and keeps retrying.

      I was testing with a replicated cache in Infinispan, and in an attempt to try it in 5.0.0.CR4 I found that I cannot use a replicated or invalidation cache (async or sync) with fetchInMemoryState set to true unless the FLUSH protocol is present in the UDP stack. I was able to reproduce this using the distributed jgroups-udp.xml file, which has FLUSH removed. When I tried with jgroups-tcp.xml it worked without the FLUSH protocol, as expected.

      I have attached the test Java file, which uses only Infinispan, is very basic, and reproduces this every time I try. I will also attach the log files from both the coordinator and the joining peer that show this issue.
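
      For reference, a minimal sketch of such a reproducer is shown below. This is not the attached TestInfinispan.java; the class name, the "configurationFile" transport property value, and the use of the Infinispan 5.0-era legacy configuration API are illustrative assumptions. Run it twice: the second instance joins the cluster and attempts the in-memory state transfer.

      import java.util.Properties;

      import org.infinispan.Cache;
      import org.infinispan.config.Configuration;
      import org.infinispan.config.GlobalConfiguration;
      import org.infinispan.manager.DefaultCacheManager;
      import org.infinispan.manager.EmbeddedCacheManager;

      public class StateTransferRepro {
          public static void main(String[] args) throws Exception {
              // Point the JGroups transport at the distributed jgroups-udp.xml (no FLUSH in the stack).
              GlobalConfiguration global = GlobalConfiguration.getClusteredDefault();
              Properties transportProps = new Properties();
              transportProps.setProperty("configurationFile", "jgroups-udp.xml");
              global.setTransportProperties(transportProps);

              // Replicated cache that fetches in-memory state when a node joins.
              Configuration cfg = new Configuration();
              cfg.setCacheMode(Configuration.CacheMode.REPL_SYNC);
              cfg.setFetchInMemoryState(true);

              EmbeddedCacheManager manager = new DefaultCacheManager(global, cfg);
              Cache<String, String> cache = manager.getCache();
              cache.put("node-" + manager.getAddress(), "joined");

              // Keep the node alive; with the UDP stack and no FLUSH, the second
              // instance hangs here retrying state retrieval from the coordinator.
              Thread.sleep(Long.MAX_VALUE);
          }
      }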

        1. infinispan.test (176 kB)
        2. infinispan2.test (177 kB)
        3. producer.txt (96 kB)
        4. receiver.txt (96 kB)
        5. TestInfinispan.java (2 kB)


            Vladimir Blagojevic (Inactive) added a comment -

            Found another solution. See git pull request.

            Bela Ban added a comment -

            Vladimir, can you elaborate on what that patch would be? Is it related to STREAMING_STATE_TRANSFER creating its own TCP socket connections?

            Vladimir Blagojevic (Inactive) added a comment -

            I do, but the fix might be at the JGroups level and require a patch release of JGroups. In that case it might not be ready on time. Anyway, at least we'll know more about the fix!

            Manik Surtani (Inactive) added a comment -

            OK, do you want to take this on for CR5?

            Vladimir Blagojevic (Inactive) added a comment -

            I think it would make our life easier, TBH. Having both options working would be great! This issue is unrelated to the FLUSH protocol per se, but rather to how digests are calculated in the presence/absence of FLUSH. I'll investigate this at the JGroups level.

            Manik Surtani (Inactive) added a comment -

            Vladimir, what's the solution to this? Should our default configurations have <pbcast.STREAMING_STATE_TRANSFER use_default_transport="true"/>?

            William Burns (Inactive) added a comment -

            I can confirm that works for me as well, using the configured TP instead of a separate socket for state transfer.

            Vladimir Blagojevic (Inactive) added a comment -

            It seems that a workaround exists: use <pbcast.STREAMING_STATE_TRANSFER use_default_transport="true"/>.
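
            For reference, a sketch of how that workaround might be applied in a copy of jgroups-udp.xml is shown below; the rest of the protocol stack is assumed to stay as shipped and is represented only by a comment here.

            <config xmlns="urn:org:jgroups">
                <!-- UDP transport and the remaining stock protocols go here, unchanged. -->

                <!-- Stream state over the shared UDP transport instead of opening a
                     separate TCP socket; this avoids the hang when FLUSH is absent. -->
                <pbcast.STREAMING_STATE_TRANSFER use_default_transport="true"/>
            </config>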

            Vladimir Blagojevic (Inactive) added a comment -

            Confirmed! I was able to reproduce this scenario.

            William Burns (Inactive) added a comment -

            infinispan.test - this is the log from the coordinator (first to run)

            infinispan2.test - this is the log from the peer that is attempting to retrieve state from the coordinator

            TestInfinispan.java - this is the main class that I ran twice to observe this behavior.

              Assignee: Vladimir Blagojevic (vblagoje)
              Reporter: William Burns (rpwburns)
              Archiver: Amol Dongare (rhn-support-adongare)