  Infinispan / ISPN-3183

HotRod RollUps from 5.2 to 5.3 -- target can't obtain formerly stored data from RCS

    • Type: Bug
    • Resolution: Done
    • Priority: Critical
    • Fix Version/s: 6.0.0.Final

      Scenario (typical for rollups):

      Start the source node and put entries.
      Start the target node, which points to the source (the source is now its RemoteCacheStore), and try to get the entries.

      For 5.2 to 5.2 this works perfectly.
      For a 5.2 source and a 5.3 target we have problems here.
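
      For illustration, a minimal client-side sketch of this scenario, using the HotRod Java client API of 5.3/6.0 (the 5.2 client is configured via Properties instead); the hosts, ports (11222 for the source, 11223 for the target), cache name and class name are assumptions, not taken from the actual test:

          import org.infinispan.client.hotrod.RemoteCache;
          import org.infinispan.client.hotrod.RemoteCacheManager;
          import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

          public class RollupScenarioSketch {
              public static void main(String[] args) {
                  // Phase 1: put entries into the source (5.2) server over HotRod.
                  ConfigurationBuilder sourceCfg = new ConfigurationBuilder();
                  sourceCfg.addServer().host("127.0.0.1").port(11222);
                  RemoteCacheManager sourceRcm = new RemoteCacheManager(sourceCfg.build());
                  RemoteCache<String, String> sourceCache = sourceRcm.getCache();
                  sourceCache.put("key1", "value1");

                  // Phase 2: the target (5.3) server is started with the source configured as
                  // its RemoteCacheStore; a get() through the target should lazily fetch the
                  // formerly stored entry from the source.
                  ConfigurationBuilder targetCfg = new ConfigurationBuilder();
                  targetCfg.addServer().host("127.0.0.1").port(11223);
                  RemoteCacheManager targetRcm = new RemoteCacheManager(targetCfg.build());
                  RemoteCache<String, String> targetCache = targetRcm.getCache();
                  System.out.println("via target: " + targetCache.get("key1")); // expected: value1

                  sourceRcm.stop();
                  targetRcm.stop();
              }
          }

      In the failing 5.2-to-5.3 case the final get() on the target comes back null, which matches the trace excerpt quoted in the summary below.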

      Sorry that I can't provide any valuable info besides the TRACE logs.

      4 TRACE logs – rollups from 5.2 to 5.2 (source log and target log) plus rollups from 5.2 to 5.3 (source log and target log).

      Very quick summary:
      5.2 to 5.2 on target: Entry exists in loader? true

      5.2 to 5.3 on target:
      16:21:41,508 TRACE [org.infinispan.container.EntryFactoryImpl] (HotRodServerWorker-2) Exists in context? null
      16:21:41,508 TRACE [org.infinispan.container.EntryFactoryImpl] (HotRodServerWorker-2) Retrieved from container null

      What changed in RemoteCacheStore? What changed in HotRod? Any ideas? Let me know, thank you!
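
      For context, a rough sketch of how the target cache can be pointed at the source over HotRod, assuming the 6.0-style programmatic persistence API (in 5.2/5.3 the equivalent lived under the loaders configuration and package names differ); the host, port and remote cache name are illustrative assumptions:

          import org.infinispan.configuration.cache.Configuration;
          import org.infinispan.configuration.cache.ConfigurationBuilder;
          import org.infinispan.manager.DefaultCacheManager;
          import org.infinispan.persistence.remote.configuration.RemoteStoreConfigurationBuilder;

          public class TargetPointingAtSource {
              public static void main(String[] args) {
                  // Target-side cache configuration: the old (source) server acts as a
                  // RemoteCacheStore, so reads on the target fall back to the source.
                  ConfigurationBuilder builder = new ConfigurationBuilder();
                  builder.persistence()
                          .addStore(RemoteStoreConfigurationBuilder.class)
                              .remoteCacheName("default")   // cache name on the source server (assumed)
                              .hotRodWrapping(true)         // keep HotRod wrapping for the later migration
                              .addServer()
                                  .host("127.0.0.1")        // source server address (assumed)
                                  .port(11222);
                  Configuration cfg = builder.build();

                  DefaultCacheManager cm = new DefaultCacheManager();
                  cm.defineConfiguration("default", cfg);
                  cm.getCache("default");                   // a get() here should hit the remote store
              }
          }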

        1. 52to52sourceTrace
          61 kB
        2. 52to52targetTrace
          50 kB
        3. 52to53sourceTrace
          2 kB
        4. 52to53targetTrace
          27 kB

            Tomas Sykora <tsykora@redhat.com> changed the Status of bug 986307 from ON_QA to VERIFIED

            Tomas Sykora <tsykora@redhat.com> changed the Status of bug 986307 from ASSIGNED to ON_QA

            Tomas Sykora <tsykora@redhat.com> made a comment on bug 986307

            I'm pretty sure that this should be ON_QA.

            Setting ON_QA + target milestone back to ER2, and immediately setting it as VERIFIED (in the 6.2 ER2 build) because this no longer seems to be an issue as of this build.

            Martin Gencur <mgencur@redhat.com> made a comment on bug 986307

            Shouldn't this be ON_QA?

            Tomas Sykora added a comment -

            Tristan's PR should fix this problem.

            I got it working and successfully migrated data using the HotRod migrator from JDG 6.1 (ISPN schema 5.2) to the latest ISPN server (ISPN schema 6.0).

            Nice work!
            Thank you

            Tomas Sykora added a comment -

            Not absolutely sure whether this is the proper way of testing this... but I consider that this scenario should work:
            2 JDG 6.1.GA servers as the source cluster, 2 latest ispn-servers with this PR as the target cluster:

            08:17:19,579 WARN [org.jgroups.protocols.UDP] (multicast receiver,shared=udp) [JGRP00010] packet from 192.168.2.103:45688 has different version (3.4.0) than ours (3.3.4); packet is discarded (received 13 identical messages from 192.168.2.103:45688 in the last 66549 ms)

            Looks like we need to deal somehow with different versions of JGroups as well. What do you think? Any simple workaround for that?

            Thanks!

            Tristan Tarrant <ttarrant@redhat.com> changed the Status of bug 986307 from NEW to ASSIGNED

            Tomas Sykora added a comment - - edited

            I've tried to remove the check for accessing old data from the new node (old data = data stored into the remote cache store before the new node that connects to it was started) to find out what happens then.

            The recordKnownGlobalKeyset operation issued on the source node looks OK, but I'm really not sure because of the exception below.

            The test then tries to perform synchronizeData on the target node, with this result:

            testRollingUpgrades(org.infinispan.test.rollingupdates.IspnRollingUpdatesTest) Time elapsed: 36.699 sec <<< ERROR!
            javax.management.MBeanException
            at org.infinispan.jmx.ResourceDMBean.invoke(ResourceDMBean.java:271)
            at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836)
            at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761)
            at org.jboss.as.jmx.PluggableMBeanServerImpl$TcclMBeanServer.invoke(PluggableMBeanServerImpl.java:527)
            at org.jboss.as.jmx.PluggableMBeanServerImpl.invoke(PluggableMBeanServerImpl.java:263)
            at org.jboss.remotingjmx.protocol.v1.ServerProxy$InvokeHandler.handle(ServerProxy.java:1058)
            at org.jboss.remotingjmx.protocol.v1.ServerProxy$MessageReciever$1.run(ServerProxy.java:225)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
            at java.lang.Thread.run(Thread.java:662)
            Caused by: java.lang.reflect.InvocationTargetException
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.infinispan.jmx.ResourceDMBean.invoke(ResourceDMBean.java:269)
            ... 9 more
            Caused by: org.infinispan.commons.CacheException: ISPN020004: Could not find migration data in cache default
            at org.infinispan.upgrade.hotrod.HotRodTargetMigrator.synchronizeData(HotRodTargetMigrator.java:94)
            at org.infinispan.upgrade.RollingUpgradeManager.synchronizeData(RollingUpgradeManager.java:59)
            ... 14 more

            Maybe this can help a little bit.
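
            For reference, recordKnownGlobalKeyset and synchronizeData are JMX operations on the cache's RollingUpgradeManager component. A minimal sketch of driving them remotely, assuming a JBoss remoting-jmx endpoint on port 9999 (matching the stack trace above) and an illustrative ObjectName; the hosts, the port and the exact ObjectName pattern depend on the actual server configuration:

                import javax.management.MBeanServerConnection;
                import javax.management.ObjectName;
                import javax.management.remote.JMXConnector;
                import javax.management.remote.JMXConnectorFactory;
                import javax.management.remote.JMXServiceURL;

                public class RollingUpgradeJmxSketch {
                    public static void main(String[] args) throws Exception {
                        // Illustrative ObjectName; the domain, cache name and cache-manager name
                        // must match the actual server (check them in JConsole first).
                        ObjectName upgradeMgr = new ObjectName(
                                "jboss.infinispan:type=Cache,name=\"default(dist_sync)\","
                                + "manager=\"default\",component=RollingUpgradeManager");

                        // 1) On the SOURCE node: record the known global key set so the
                        //    target's migrator knows which keys to pull.
                        JMXConnector source = JMXConnectorFactory.connect(
                                new JMXServiceURL("service:jmx:remoting-jmx://sourceHost:9999"));
                        MBeanServerConnection sourceMbs = source.getMBeanServerConnection();
                        sourceMbs.invoke(upgradeMgr, "recordKnownGlobalKeyset",
                                new Object[0], new String[0]);
                        source.close();

                        // 2) On the TARGET node: pull the remaining data through the HotRod
                        //    migrator, then disconnect the RemoteCacheStore from the source.
                        JMXConnector target = JMXConnectorFactory.connect(
                                new JMXServiceURL("service:jmx:remoting-jmx://targetHost:9999"));
                        MBeanServerConnection targetMbs = target.getMBeanServerConnection();
                        targetMbs.invoke(upgradeMgr, "synchronizeData",
                                new Object[]{"hotrod"}, new String[]{"java.lang.String"});
                        targetMbs.invoke(upgradeMgr, "disconnectSource",
                                new Object[]{"hotrod"}, new String[]{"java.lang.String"});
                        target.close();
                    }
                }

            The ISPN020004 error above appears to be thrown by HotRodTargetMigrator.synchronizeData when it cannot read the dumped migration key set from the source cache, which matches the symptom that the target cannot see the source's formerly stored data.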

            Tomas Sykora <tsykora@redhat.com> made a comment on bug 986307

            Please see this JIRA: https://issues.jboss.org/browse/ISPN-3183

            The issue is the same when trying rolling upgrades on 2 (JDG-6.2.0-DR1) servers (one old, the source node, and a second new one, the target node).

            Trace logs are attached in the aforementioned JIRA.

            Tomas Sykora added a comment - Assigning to Tristan, who will probably be interested.

              Assignee: Tristan Tarrant (ttarrant@redhat.com)
              Reporter: Tomas Sykora (tsykora@redhat.com)
              Archiver: Amol Dongare (rhn-support-adongare)
