WildFly / WFLY-10755

ISPN000208: No live owners found for segments


    • Type: Bug
    • Resolution: Not a Bug
    • Priority: Major
    • Affects Version: 14.0.0.Beta2
    • Component: Clustering
    • the job ran on a lab segment with UDP problems; the TCP-stack counterpart is going to be run

      The error "ISPN000208: No live owners found for segments" was observed in scenario eap-7x-db-failover-db-session-shutdown-repl-sync-postgres-9-4.

      The scenario runs a four-node cluster whose "web" cache container defines, among other caches, an invalidation cache backed by a PostgreSQL database via a jdbc-store:

      <cache-container name="web" default-cache="repl" module="org.wildfly.clustering.web.infinispan">
        <transport lock-timeout="60000"/>
        <distributed-cache owners="2" name="dist">
          <locking isolation="REPEATABLE_READ"/>
          <transaction mode="BATCH"/>
          <file-store/>
        </distributed-cache>
        <replicated-cache name="repl">
          <locking isolation="REPEATABLE_READ"/>
          <transaction mode="BATCH"/>
          <file-store/>
        </replicated-cache>
        <invalidation-cache name="offload">
          <locking isolation="REPEATABLE_READ"/>
          <transaction mode="BATCH"/>
          <jdbc-store data-source="testDS" fetch-state="false" passivation="false" purge="false" shared="true" dialect="POSTGRES">
            <table prefix="s">
              <id-column name="id" type="VARCHAR(255)"/>
              <data-column name="datum" type="BYTEA"/>
              <timestamp-column name="version" type="BIGINT"/>
            </table>
          </jdbc-store>
        </invalidation-cache>
      </cache-container>
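For reference, a sketch of the PostgreSQL table the jdbc-store configuration above implies. The table name used here is an assumption for illustration only: Infinispan derives the real name from the configured prefix and the cache name at runtime.

```python
# Hypothetical reconstruction of the table behind the "offload" jdbc-store.
# "s_offload" is an assumed name; Infinispan derives the actual table name
# from the configured prefix ("s") plus the runtime cache name.
prefix, cache_name = "s", "offload"
table = f"{prefix}_{cache_name}"
ddl = (
    f"CREATE TABLE {table} ("
    "id VARCHAR(255) NOT NULL PRIMARY KEY, "  # <id-column name="id">
    "datum BYTEA, "                           # <data-column name="datum">
    "version BIGINT)"                         # <timestamp-column name="version">
)
print(ddl)
```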
      

      First run: eap-7x-db-failover-db-session-shutdown-repl-sync-postgres-9-4, run 20

      The error was observed twice on node dev212.

      The first time, right after node dev214 left the cluster:

      [JBossINF] 09:08:34,196 INFO  [org.infinispan.CLUSTER] (thread-22,ejb,dev212) ISPN000094: Received new cluster view for channel ejb: [dev212|8] (3) [dev212, dev213, dev215]
      [JBossINF] 09:08:34,197 INFO  [org.infinispan.CLUSTER] (thread-22,ejb,dev212) ISPN100001: Node dev214 left the cluster
      [JBossINF] 09:08:34,362 WARN  [org.infinispan.interceptors.impl.InvalidationInterceptor] (timeout-thread--p10-t1) ISPN000268: Unable to broadcast evicts as a part of the prepare phase. Rolling back.: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 33 from dev215
      [JBossINF] 	at org.infinispan.remoting.transport.impl.MultiTargetRequest.onTimeout(MultiTargetRequest.java:167)
      [JBossINF] 	at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:87)
      [JBossINF] 	at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:22)
      [JBossINF] 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      [JBossINF] 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
      [JBossINF] 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
      [JBossINF] 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      [JBossINF] 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      [JBossINF] 	at java.lang.Thread.run(Thread.java:748)
      [JBossINF] 
      ...
      [JBossINF] 09:08:52,772 ERROR [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread--p15-t15) ISPN000208: No live owners found for segments {4 7-9 12-13 30-31 37 49 59 76-77 88-89 92 118-120 156-157 196 205 235 251} of cache clusterbench-ee7.ear/clusterbench-ee7-ejb.jar. Excluded owners: [dev214]
      

      and the second time, right after node dev215 left the cluster:

      [JBossINF] 09:11:32,029 INFO  [org.infinispan.CLUSTER] (thread-24,ejb,dev212) ISPN100000: Node dev214 joined the cluster
      [JBossINF] 09:11:32,029 INFO  [org.infinispan.CLUSTER] (thread-24,ejb,dev212) ISPN100001: Node dev213 left the cluster
      [JBossINF] 09:11:32,029 INFO  [org.infinispan.CLUSTER] (thread-24,ejb,dev212) ISPN100001: Node dev215 left the cluster
      [JBossINF] 09:11:32,030 ERROR [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread--p15-t15) ISPN000208: No live owners found for segments {2-3 36 48 55-58 65 75 90 93 108-109 126 150 172 176-177 179-180 204 229-230} of cache clusterbench-ee7.ear/clusterbench-ee7-ejb.jar. Excluded owners: [dev215, dev214]
      [JBossINF] 09:11:32,030 ERROR [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread--p15-t15) ISPN000208: No live owners found for segments {2-4 7-9 12-13 30-31 36-37 48-49 55-59 65 75-77 88-90 92-93 108-109 118-120 126 150 156-157 172 176-177 179-180 196 204-205 229-230 235 251} of cache clusterbench-ee7.ear/clusterbench-ee7-ejb.jar. Excluded owners: [dev215, dev214]
      [JBossINF] 09:12:29,829 INFO  [org.infinispan.CLUSTER] (thread-21,ejb,dev212) ISPN000093: Received new, MERGED cluster view for channel ejb: MergeView::[dev214|10] (4) [dev214, dev212, dev213, dev215], 2 subgroups: [dev212|8] (3) [dev212, dev213, dev215], [dev214|9] (2) [dev214, dev212]
      

      Please note that the message saying node dev213 left the cluster looks suspicious: node dev213 was started at 08:59:53 and restarted at 09:12:29, so a "left the cluster" event for dev213 at 09:11:32 does not match its lifecycle.
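As a cross-check on the two ISPN000208 messages logged at 09:11:32 above: the second segment set is exactly the union of the first 09:11:32 set and the set from the earlier 09:08:52 error, i.e. after dev215 left, the unowned segments accumulated rather than being recovered. A small parser (segment-set syntax inferred from the log format) confirms this:

```python
def parse_segments(s):
    """Parse an Infinispan segment-set string like '{2-4 7-9 12}' into a set of ints."""
    out = set()
    for tok in s.strip("{}").split():
        if "-" in tok:
            lo, hi = map(int, tok.split("-"))
            out.update(range(lo, hi + 1))
        else:
            out.add(int(tok))
    return out

# Segment sets copied verbatim from the three ISPN000208 log lines.
earlier = parse_segments("{4 7-9 12-13 30-31 37 49 59 76-77 88-89 92 118-120 156-157 196 205 235 251}")
first   = parse_segments("{2-3 36 48 55-58 65 75 90 93 108-109 126 150 172 176-177 179-180 204 229-230}")
second  = parse_segments("{2-4 7-9 12-13 30-31 36-37 48-49 55-59 65 75-77 88-90 92-93 108-109 118-120 "
                         "126 150 156-157 172 176-177 179-180 196 204-205 229-230 235 251}")
assert second == earlier | first  # the second error reports the union of both prior sets
```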

      This run already used modified JGroups timeouts:

      <protocol type="FD_ALL">
        <property name="timeout">10000</property>
        <property name="interval">2000</property>
        <property name="timeout_check_interval">1000</property>
      </protocol>
      <protocol type="VERIFY_SUSPECT">
        <property name="timeout">1000</property>
      </protocol>
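Rough arithmetic on these values, under the usual FD_ALL/VERIFY_SUSPECT model (FD_ALL suspects a member once no heartbeat has arrived within `timeout`, its checker only fires every `timeout_check_interval`, and VERIFY_SUSPECT then holds the suspicion for up to its own `timeout` before confirming): worst-case failure detection is on the order of 12 s.

```python
def worst_case_detection_ms(fd_timeout, timeout_check_interval, verify_timeout):
    # FD_ALL: suspect after fd_timeout ms without a heartbeat, plus up to one
    # timeout_check_interval of slack before the checker notices.
    # VERIFY_SUSPECT: up to verify_timeout ms more before confirming the suspect.
    return fd_timeout + timeout_check_interval + verify_timeout

# Values from the FD_ALL / VERIFY_SUSPECT snippet above.
print(worst_case_detection_ms(10000, 1000, 1000))  # 12000 ms
```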
      

      Second run: eap-7x-db-failover-db-session-shutdown-repl-sync-postgres-9-4_JJB, run 18

      The error was also observed in an earlier run in which the FD_ALL and VERIFY_SUSPECT values were left unmodified.

      Third run: eap-7x-db-failover-db-session-shutdown-repl-sync-postgres-9-4_JJB, run 21

      The error was also observed in a run in which FD_ALL was set back to what this JIRA states the previous values were:

             <FD_ALL timeout="60000"
                     interval="15000"
                     timeout_check_interval="5000"
             />
      

      In this run, the error was observed on node dev212:

      [JBossINF] 03:56:59,728 WARN  [org.jgroups.protocols.UDP] (TQ-Bundler-4,ejb,dev212) JGRP000032: dev212: no physical address for 2806f77e-ee15-45dc-283d-683a4828e878, dropping message
      [JBossINF] 03:58:02,336 INFO  [org.infinispan.CLUSTER] (thread-30,ejb,dev212) ISPN000093: Received new, MERGED cluster view for channel ejb: MergeView::[dev212|10] (3) [dev212, dev213, dev214], 1 subgroups: [dev214|0] (1) [dev214]
      [JBossINF] 03:58:02,336 INFO  [org.infinispan.CLUSTER] (thread-30,ejb,dev212) ISPN100000: Node dev214 joined the cluster
      [JBossINF] 03:58:02,337 INFO  [org.infinispan.CLUSTER] (thread-30,ejb,dev212) ISPN100001: Node dev215 left the cluster
      [JBossINF] 03:58:02,337 INFO  [org.infinispan.CLUSTER] (thread-30,ejb,dev212) ISPN100001: Node dev214 left the cluster
      [JBossINF] 03:58:02,337 INFO  [org.infinispan.CLUSTER] (thread-30,ejb,dev212) ISPN000093: Received new, MERGED cluster view for channel ejb: MergeView::[dev212|10] (3) [dev212, dev213, dev214], 1 subgroups: [dev214|0] (1) [dev214]
      [JBossINF] 03:58:02,337 INFO  [org.infinispan.CLUSTER] (thread-30,ejb,dev212) ISPN100000: Node dev214 joined the cluster
      [JBossINF] 03:58:02,337 INFO  [org.infinispan.CLUSTER] (thread-30,ejb,dev212) ISPN100001: Node dev215 left the cluster
      [JBossINF] 03:58:02,337 INFO  [org.infinispan.CLUSTER] (thread-30,ejb,dev212) ISPN100001: Node dev214 left the cluster
      [JBossINF] 03:58:02,337 INFO  [org.infinispan.CLUSTER] (thread-30,ejb,dev212) ISPN000093: Received new, MERGED cluster view for channel ejb: MergeView::[dev212|10] (3) [dev212, dev213, dev214], 1 subgroups: [dev214|0] (1) [dev214]
      [JBossINF] 03:58:02,338 INFO  [org.infinispan.CLUSTER] (thread-30,ejb,dev212) ISPN100000: Node dev214 joined the cluster
      [JBossINF] 03:58:02,338 INFO  [org.infinispan.CLUSTER] (thread-30,ejb,dev212) ISPN100001: Node dev215 left the cluster
      [JBossINF] 03:58:02,338 INFO  [org.infinispan.CLUSTER] (thread-30,ejb,dev212) ISPN100001: Node dev214 left the cluster
      [JBossINF] 03:58:02,338 INFO  [org.infinispan.CLUSTER] (thread-30,ejb,dev212) ISPN000093: Received new, MERGED cluster view for channel ejb: MergeView::[dev212|10] (3) [dev212, dev213, dev214], 1 subgroups: [dev214|0] (1) [dev214]
      [JBossINF] 03:58:02,338 INFO  [org.infinispan.CLUSTER] (thread-30,ejb,dev212) ISPN100000: Node dev214 joined the cluster
      [JBossINF] 03:58:02,338 INFO  [org.infinispan.CLUSTER] (thread-30,ejb,dev212) ISPN100001: Node dev215 left the cluster
      [JBossINF] 03:58:02,338 INFO  [org.infinispan.CLUSTER] (thread-30,ejb,dev212) ISPN100001: Node dev214 left the cluster
      [JBossINF] 03:58:02,339 ERROR [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread--p15-t15) ISPN000208: No live owners found for segments {2-3 21-26 30 46 53-54 58-59 64 69 75 82-83 88 142 150 233 236} of cache clusterbench-ee7.ear/clusterbench-ee7-ejb.jar. Excluded owners: [dev215]
      [JBossINF] 03:58:02,339 ERROR [org.infinispan.statetransfer.StateConsumerImpl] (transport-thread--p15-t15) ISPN000208: No live owners found for segments {2-3 21-26 30 46 53-54 58-59 64 69 75 82-83 88 142 150 233 236} of cache clusterbench-ee7.ear/clusterbench-ee7-ejb.jar. Excluded owners: [dev215]
      [JBossINF] 03:58:02,340 WARN  [org.infinispan.statetransfer.InboundTransferTask] (stateTransferExecutor-thread--p20-t14) ISPN000210: Failed to request state of cache clusterbench-ee7.ear/clusterbench-ee7-ejb.jar from node dev214, segments {47-48 65 87 102 157 163 187-188 190-191 221-223 228 232}: org.infinispan.remoting.transport.jgroups.SuspectException: ISPN000400: Node dev214 was suspected
      

      but the logs on dev214 show the node wasn't down: it had just been restarted and was logging the following:

      [JBossINF] 03:56:14,093 INFO  [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0212: Resuming server
      [JBossINF] 03:56:14,095 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://10.16.176.60:9990/management
      [JBossINF] 03:56:14,095 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://10.16.176.60:9990
      [JBossINF] 03:56:14,095 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full 14.0.0.Beta2-SNAPSHOT (WildFly Core 6.0.0.Alpha4) started in 8533ms - Started 1156 of 1353 services (511 services are lazy, passive or on-demand)
      2018/07/29 03:56:14:095 EDT [DEBUG][Thread-89] HOST dev220.mw.lab.eng.bos.redhat.com:rootProcess:test - JBossStartup, server started!
      [JBossINF] 03:57:13,441 WARN  [org.jgroups.protocols.pbcast.NAKACK2] (thread-8,ejb,dev214) JGRP000011: dev214: dropped message 43 from non-member dev213 (view=[dev214|0] (1) [dev214]) (received 17 identical messages from dev213 in the last 61714 ms)
      [JBossINF] 03:57:15,289 WARN  [org.jgroups.protocols.pbcast.NAKACK2] (thread-8,ejb,dev214) JGRP000011: dev214: dropped message 90 from non-member dev215 (view=[dev214|0] (1) [dev214]) (received 3 identical messages from dev215 in the last 61551 ms)
      [JBossINF] 03:57:57,334 WARN  [org.jgroups.protocols.UDP] (TQ-Bundler-4,ejb,dev214) JGRP000032: dev214: no physical address for 710670e7-7bb0-9e01-743e-abad40b595ec, dropping message
      [JBossINF] 03:57:59,339 WARN  [org.jgroups.protocols.UDP] (TQ-Bundler-4,ejb,dev214) JGRP000032: dev214: no physical address for 710670e7-7bb0-9e01-743e-abad40b595ec, dropping message
      [JBossINF] 03:58:01,342 WARN  [org.jgroups.protocols.UDP] (TQ-Bundler-4,ejb,dev214) JGRP000032: dev214: no physical address for 710670e7-7bb0-9e01-743e-abad40b595ec, dropping message
      [JBossINF] 03:58:02,337 INFO  [org.infinispan.CLUSTER] (thread-13,ejb,dev214) ISPN000093: Received new, MERGED cluster view for channel ejb: MergeView::[dev212|10] (3) [dev212, dev213, dev214], 1 subgroups: [dev214|0] (1) [dev214]
      [JBossINF] 03:58:02,338 INFO  [org.infinispan.CLUSTER] (thread-13,ejb,dev214) ISPN100000: Node dev212 joined the cluster
      [JBossINF] 03:58:02,339 INFO  [org.infinispan.CLUSTER] (thread-13,ejb,dev214) ISPN100000: Node dev213 joined the cluster
      [JBossINF] 03:58:02,340 INFO  [org.infinispan.CLUSTER] (thread-13,ejb,dev214) ISPN000093: Received new, MERGED cluster view for channel ejb: MergeView::[dev212|10] (3) [dev212, dev213, dev214], 1 subgroups: [dev214|0] (1) [dev214]
      [JBossINF] 03:58:02,340 INFO  [org.infinispan.CLUSTER] (thread-13,ejb,dev214) ISPN100000: Node dev212 joined the cluster
      [JBossINF] 03:58:02,341 INFO  [org.infinispan.CLUSTER] (thread-13,ejb,dev214) ISPN100000: Node dev213 joined the cluster
      [JBossINF] 03:58:02,341 INFO  [org.infinispan.CLUSTER] (thread-13,ejb,dev214) ISPN000093: Received new, MERGED cluster view for channel ejb: MergeView::[dev212|10] (3) [dev212, dev213, dev214], 1 subgroups: [dev214|0] (1) [dev214]
      [JBossINF] 03:58:02,341 INFO  [org.infinispan.CLUSTER] (thread-13,ejb,dev214) ISPN100000: Node dev212 joined the cluster
      [JBossINF] 03:58:02,342 INFO  [org.infinispan.CLUSTER] (thread-13,ejb,dev214) ISPN100000: Node dev213 joined the cluster
      [JBossINF] 03:58:02,342 INFO  [org.infinispan.CLUSTER] (thread-13,ejb,dev214) ISPN000093: Received new, MERGED cluster view for channel ejb: MergeView::[dev212|10] (3) [dev212, dev213, dev214], 1 subgroups: [dev214|0] (1) [dev214]
      [JBossINF] 03:58:02,343 INFO  [org.infinispan.CLUSTER] (thread-13,ejb,dev214) ISPN100000: Node dev212 joined the cluster
      [JBossINF] 03:58:02,344 INFO  [org.infinispan.CLUSTER] (thread-13,ejb,dev214) ISPN100000: Node dev213 joined the cluster
      [JBossINF] 03:58:03,345 WARN  [org.jgroups.protocols.UDP] (TQ-Bundler-4,ejb,dev214) JGRP000032: dev214: no physical address for 710670e7-7bb0-9e01-743e-abad40b595ec, dropping message
      [JBossINF] 03:58:05,347 WARN  [org.jgroups.protocols.UDP] (TQ-Bundler-4,ejb,dev214) JGRP000032: dev214: no physical address for 710670e7-7bb0-9e01-743e-abad40b595ec, dropping message
      [JBossINF] 03:58:07,350 WARN  [org.jgroups.protocols.UDP] (TQ-Bundler-4,ejb,dev214) JGRP000032: dev214: no physical address for 710670e7-7bb0-9e01-743e-abad40b595ec, dropping message
      ...
      

      Fourth run: eap-7x-db-failover-db-session-shutdown-repl-sync-postgres-9-4_JJB, run 23

      We made an attempt on the same lab segment (perf27-eap) with the following settings, the idea being to allow a longer timeout for verifying suspected nodes:

      <protocol type="FD_ALL">
        <property name="timeout">5000</property>
        <property name="interval">1000</property>
        <property name="timeout_check_interval">2000</property>
      </protocol>
      <protocol type="VERIFY_SUSPECT">
        <property name="timeout">5000</property>
      </protocol>
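Under the rough model of FD_ALL timeout plus one timeout_check_interval of slack plus the VERIFY_SUSPECT timeout, this run's worst-case detection budget totals the same ~12 s as run 1's settings, but shifts most of the waiting from heartbeat detection into VERIFY_SUSPECT, which matches the stated intent:

```python
def worst_case_detection_ms(fd_timeout, timeout_check_interval, verify_timeout):
    # Worst case: miss heartbeats for fd_timeout ms, wait up to one check
    # interval for the suspicion to fire, then up to verify_timeout ms in
    # VERIFY_SUSPECT before the node is finally declared dead.
    return fd_timeout + timeout_check_interval + verify_timeout

run1 = worst_case_detection_ms(10000, 1000, 1000)  # first run's settings
run4 = worst_case_detection_ms(5000, 2000, 5000)   # this run's settings
print(run1, run4)  # both 12000 ms; run 4 spends 5 s of it verifying suspects
```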
      

      We did not observe the ISPN000208 error, but the following ERROR was logged on node dev215:

      [JBossINF] 13:42:40,794 ERROR [org.infinispan.CLUSTER] (transport-thread--p14-t5) ISPN000196: Failed to recover cluster state after the current node became the coordinator (or after merge): java.util.concurrent.ExecutionException: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 34 from dev214
      [JBossINF] 	at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
      [JBossINF] 	at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915)
      [JBossINF] 	at org.infinispan.util.concurrent.CompletableFutures.await(CompletableFutures.java:93)
      [JBossINF] 	at org.infinispan.topology.ClusterTopologyManagerImpl.executeOnClusterSync(ClusterTopologyManagerImpl.java:585)
      [JBossINF] 	at org.infinispan.topology.ClusterTopologyManagerImpl.recoverClusterStatus(ClusterTopologyManagerImpl.java:450)
      [JBossINF] 	at org.infinispan.topology.ClusterTopologyManagerImpl.becomeCoordinator(ClusterTopologyManagerImpl.java:334)
      [JBossINF] 	at org.infinispan.topology.ClusterTopologyManagerImpl.handleClusterView(ClusterTopologyManagerImpl.java:313)
      [JBossINF] 	at org.infinispan.topology.ClusterTopologyManagerImpl.access$500(ClusterTopologyManagerImpl.java:87)
      [JBossINF] 	at org.infinispan.topology.ClusterTopologyManagerImpl$ClusterViewListener.lambda$handleViewChange$0(ClusterTopologyManagerImpl.java:731)
      [JBossINF] 	at org.infinispan.executors.LimitedExecutor.runTasks(LimitedExecutor.java:175)
      [JBossINF] 	at org.infinispan.executors.LimitedExecutor.access$100(LimitedExecutor.java:37)
      [JBossINF] 	at org.infinispan.executors.LimitedExecutor$Runner.run(LimitedExecutor.java:227)
      [JBossINF] 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      [JBossINF] 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      [JBossINF] 	at org.wildfly.clustering.service.concurrent.ClassLoaderThreadFactory.lambda$newThread$0(ClassLoaderThreadFactory.java:47)
      [JBossINF] 	at java.lang.Thread.run(Thread.java:748)
      [JBossINF] Caused by: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 34 from dev214
      [JBossINF] 	at org.infinispan.remoting.transport.impl.MultiTargetRequest.onTimeout(MultiTargetRequest.java:167)
      [JBossINF] 	at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:87)
      [JBossINF] 	at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:22)
      [JBossINF] 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      [JBossINF] 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
      [JBossINF] 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
      [JBossINF] 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      [JBossINF] 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      [JBossINF] 	... 1 more
      [JBossINF] 
      [JBossINF] 13:42:40,798 FATAL [org.infinispan.CLUSTER] (transport-thread--p16-t24) ISPN100004: After merge (or coordinator change), the coordinator failed to recover cluster. Cluster members are [dev215, dev214].
      [JBossINF] 13:42:40,798 FATAL [org.infinispan.CLUSTER] (transport-thread--p13-t5) ISPN100004: After merge (or coordinator change), the coordinator failed to recover cluster. Cluster members are [dev215, dev214].
      [JBossINF] 13:42:40,798 FATAL [org.infinispan.CLUSTER] (transport-thread--p15-t2) ISPN100004: After merge (or coordinator change), the coordinator failed to recover cluster. Cluster members are [dev215, dev214].
      [JBossINF] 13:42:40,798 FATAL [org.infinispan.CLUSTER] (transport-thread--p14-t5) ISPN100004: After merge (or coordinator change), the coordinator failed to recover cluster. Cluster members are [dev215, dev214].
      

            Assignee: pferraro@redhat.com Paul Ferraro
            Reporter: tborgato@redhat.com Tommaso Borgato