JBoss Enterprise Application Platform
JBEAP-17166

[Unexpected Warning] JGRP000011: hsc-1-8nffb: dropped message 79 from non-member hsc-1-tf4nh


Details

    • Type: Bug
    • Resolution: Won't Do
    • Priority: Major
    • Fix Version/s: None
    • Affects Version/s: 7.3.0.CD16
    • Component/s: Clustering, OpenShift
    • Labels: None

    Description

      There is an unexpected warning when 3 EAP CD16 pods are started on OCP 4 (though it does not appear to be OCP 4 specific):

      Expecting empty but was:<"Suspicious error in pod hsc-1-8nffb log '21:28:08,946 WARN  [org.jgroups.protocols.pbcast.NAKACK2] (thread-3,null,null) JGRP000011: hsc-1-8nffb: dropped message 79 from non-member hsc-1-tf4nh (view=[hsc-1-gf6m2|1] (2) [hsc-1-gf6m2, hsc-1-8nffb])'">
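
      For context, JGRP000011 is NAKACK2's standard reaction to a message whose sender is not in the receiver's current view; here hsc-1-tf4nh had already been excluded from view [hsc-1-gf6m2|1]. A minimal sketch of how a JGroups member observes these views (hypothetical cluster name, default stack rather than the test's):

      import org.jgroups.JChannel;
      import org.jgroups.ReceiverAdapter;
      import org.jgroups.View;

      // Logs every installed view. NAKACK2 checks each message's sender
      // against the most recently installed view; senders outside it
      // trigger the JGRP000011 warning quoted above.
      public class ViewWatcher {
          public static void main(String[] args) throws Exception {
              try (JChannel ch = new JChannel()) {
                  ch.setReceiver(new ReceiverAdapter() {
                      @Override
                      public void viewAccepted(View view) {
                          // e.g. [hsc-1-gf6m2|1] (2) [hsc-1-gf6m2, hsc-1-8nffb]
                          // = coordinator|view id, member count, member list
                          System.out.println("New view: " + view);
                      }
                  });
                  ch.connect("hsc-cluster"); // hypothetical cluster name
                  Thread.sleep(60_000);      // stay in the view for a while
              }
          }
      }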
      

      The warning is followed by:

      Expecting empty but was:<"Suspicious error in pod hsc-1-gf6m2 log '21:27:25,000 ERROR [org.jgroups.protocols.ASYM_ENCRYPT] (thread-4,null,null) ignoring secret key sent by hsc-1-tf4nh which is not in current view [hsc-1-gf6m2|1] (2) [hsc-1-gf6m2, hsc-1-8nffb]'">
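
      This is the same membership check one layer up: with ASYM_ENCRYPT the coordinator acts as key server and only accepts or serves secret keys for members of the current view, so anything sent by the evicted hsc-1-tf4nh is rejected. For reference, a rough programmatic sketch of a TCP + DNS_PING + ASYM_ENCRYPT stack of the kind the test in [1] exercises; the protocol order and the dns_query value are assumptions, not the test's actual configuration:

      import org.jgroups.JChannel;
      import org.jgroups.protocols.*;
      import org.jgroups.protocols.dns.DNS_PING;
      import org.jgroups.protocols.pbcast.*;

      public class AsymEncryptStack {
          static JChannel build() throws Exception {
              return new JChannel(
                  new TCP(),
                  // discovery via the OpenShift ping service (hypothetical name)
                  new DNS_PING().setValue("dns_query", "hsc-ping.myproject.svc.cluster.local"),
                  new MERGE3(),
                  new FD_ALL(),
                  new VERIFY_SUSPECT(),
                  // coordinator = key server; key messages from nodes outside
                  // the current view are ignored, producing the ERROR above
                  new ASYM_ENCRYPT(),
                  new NAKACK2(),
                  new UNICAST3(),
                  new STABLE(),
                  new GMS());
          }
      }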
      

      and later by this Infinispan error:

      21:28:20,972 ERROR [org.infinispan.CLUSTER] (transport-thread--p8-t2) ISPN000196: Failed to recover cluster state after the current node became the coordinator (or after merge): java.util.concurrent.ExecutionException: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 11 from hsc-1-gf6m2,hsc-1-8nffb
      	at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
      	at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1915)
      	at org.infinispan.util.concurrent.CompletableFutures.await(CompletableFutures.java:105)
      	at org.infinispan.topology.ClusterTopologyManagerImpl.executeOnClusterSync(ClusterTopologyManagerImpl.java:612)
      	at org.infinispan.topology.ClusterTopologyManagerImpl.recoverClusterStatus(ClusterTopologyManagerImpl.java:452)
      	at org.infinispan.topology.ClusterTopologyManagerImpl.becomeCoordinator(ClusterTopologyManagerImpl.java:336)
      	at org.infinispan.topology.ClusterTopologyManagerImpl.handleClusterView(ClusterTopologyManagerImpl.java:315)
      	at org.infinispan.topology.ClusterTopologyManagerImpl.access$500(ClusterTopologyManagerImpl.java:88)
      	at org.infinispan.topology.ClusterTopologyManagerImpl$ClusterViewListener.lambda$handleViewChange$0(ClusterTopologyManagerImpl.java:758)
      	at org.infinispan.executors.LimitedExecutor.runTasks(LimitedExecutor.java:175)
      	at org.infinispan.executors.LimitedExecutor.access$100(LimitedExecutor.java:37)
      	at org.infinispan.executors.LimitedExecutor$Runner.run(LimitedExecutor.java:227)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      	at org.wildfly.clustering.service.concurrent.ClassLoaderThreadFactory.lambda$newThread$0(ClassLoaderThreadFactory.java:47)
      	at java.lang.Thread.run(Thread.java:748)
      Caused by: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 11 from hsc-1-gf6m2,hsc-1-8nffb
      	at org.infinispan.remoting.transport.impl.MultiTargetRequest.onTimeout(MultiTargetRequest.java:167)
      	at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:87)
      	at org.infinispan.remoting.transport.AbstractRequest.call(AbstractRequest.java:22)
      	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
      	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
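
      The ISPN000196/ISPN000476 pair means the new coordinator gave up waiting for the other members' cache topologies while rebuilding cluster state. In embedded Infinispan the bound on such cluster-wide sync RPCs is the transport's distributed sync timeout; a sketch follows, with the caveats that EAP configures this through its clustering subsystem rather than this API, and that it is an assumption this particular timeout is the one that fired here:

      import java.util.concurrent.TimeUnit;
      import org.infinispan.configuration.global.GlobalConfigurationBuilder;
      import org.infinispan.manager.DefaultCacheManager;

      public class TransportTimeoutSketch {
          public static void main(String[] args) {
              GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
              global.transport()
                    .clusterName("hsc")                           // hypothetical
                    .distributedSyncTimeout(4, TimeUnit.MINUTES); // bound for cluster-wide sync RPCs
              // Caches created from this manager inherit the transport above.
              try (DefaultCacheManager cm = new DefaultCacheManager(global.build())) {
                  // the cache manager joins the cluster on construction
              }
          }
      }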
      

      This seems to be a temporary cluster issue, and the cluster recovered from it.

      [1] https://eap-qe-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/eap-7.x-openshift-4-ha-load-tests-cd/20/testReport/junit/com.redhat.xpaas.eap.ha/HAServletCounterDnsPingAsymEncryptTest/multipleClientCanCountForTwoMinutesTest/


            People

              Assignee: pferraro@redhat.com (Paul Ferraro)
              Reporter: mnovak1@redhat.com (Miroslav Novak)
              Votes: 0
              Watchers: 2
