Infinispan / ISPN-9982

Test deadlock for clustered caches with Expiration enabled


    • Type: Bug
    • Resolution: Done
    • Priority: Minor
    • Fix Version/s: 10.0.0.Beta3

      Add a test case for the following scenario:

      If a cache is configured for expiration and the wake-up interval is != -1, there is a possibility of a deadlock between JGroups and Infinispan threads, because the expiration reaper and any concurrent access to the same entry can block each other.
      The shorter the configured reaper interval, the higher the probability.
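
      For reference, a minimal configuration sketch (not taken from the issue) of a clustered cache with the expiration reaper enabled; the cache name, wake-up interval, and max-idle values are arbitrary illustrations:

      import java.util.concurrent.TimeUnit;

      import org.infinispan.Cache;
      import org.infinispan.configuration.cache.CacheMode;
      import org.infinispan.configuration.cache.ConfigurationBuilder;
      import org.infinispan.configuration.global.GlobalConfigurationBuilder;
      import org.infinispan.manager.DefaultCacheManager;

      public class ExpirationReaperConfigSketch {
         public static void main(String[] args) {
            // Clustered cache manager; the default clustered builder enables the JGroups transport.
            DefaultCacheManager manager =
                  new DefaultCacheManager(GlobalConfigurationBuilder.defaultClusteredBuilder().build());

            // Distributed cache with expiration. A short wake-up interval (!= -1) makes the
            // reaper run frequently, increasing the chance of colliding with a concurrent access.
            ConfigurationBuilder builder = new ConfigurationBuilder();
            builder.clustering().cacheMode(CacheMode.DIST_SYNC);
            builder.expiration()
                   .wakeUpInterval(100, TimeUnit.MILLISECONDS)  // illustrative value
                   .maxIdle(500, TimeUnit.MILLISECONDS);        // illustrative value

            // "expiring-cache" is a hypothetical cache name used only for this sketch.
            manager.defineConfiguration("expiring-cache", builder.build());
            Cache<String, String> cache = manager.getCache("expiring-cache");
            cache.put("key", "value");
            manager.stop();
         }
      }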

      A thread dump might show something like the following:

      "HotRod-hotrod-internalServerWorker-4-12" #279 prio=5 os_prio=0 tid=0x00007f35980b8800 nid=0xbbb waiting for monitor entry [0x00007f356cfb4000]
      java.lang.Thread.State: BLOCKED (on object monitor)
      at org.infinispan.commons.util.concurrent.jdk8backported.EquivalentConcurrentHashMapV8.compute(EquivalentConcurrentHashMapV8.java:1910)

      • waiting to lock <0x0000000742cca028> (a org.infinispan.commons.util.concurrent.jdk8backported.EquivalentConcurrentHashMapV8$Node)
        at org.infinispan.container.DefaultDataContainer.compute(DefaultDataContainer.java:335)
        at org.infinispan.expiration.impl.ExpirationManagerImpl.handleInMemoryExpiration(ExpirationManagerImpl.java:135)
        at org.infinispan.expiration.impl.ClusterExpirationManager.handleInMemoryExpiration(ClusterExpirationManager.java:152)
      • locked <0x0000000742cc9898> (a org.infinispan.container.entries.metadata.MetadataTransientCacheEntry)
        at org.infinispan.container.DefaultDataContainer.get(DefaultDataContainer.java:201)

      "pool-7-thread-1" #158 prio=5 os_prio=0 tid=0x00007f35c816c000 nid=0xb35 waiting for monitor entry [0x00007f3575238000]
      java.lang.Thread.State: BLOCKED (on object monitor)
      at org.infinispan.expiration.impl.ExpirationManagerImpl.lambda$handleInMemoryExpiration$0(ExpirationManagerImpl.java:137)

      • waiting to lock <0x0000000742cc9898> (a org.infinispan.container.entries.metadata.MetadataTransientCacheEntry)
        at org.infinispan.expiration.impl.ExpirationManagerImpl$$Lambda$374/1904915245.compute(Unknown Source)
        at org.infinispan.container.DefaultDataContainer.lambda$compute$3(DefaultDataContainer.java:336)
        at org.infinispan.container.DefaultDataContainer$$Lambda$375/1917382785.apply(Unknown Source)
        at org.infinispan.commons.util.concurrent.jdk8backported.EquivalentConcurrentHashMapV8.compute(EquivalentConcurrentHashMapV8.java:1919)
      • locked <0x0000000742cca028> (a org.infinispan.commons.util.concurrent.jdk8backported.EquivalentConcurrentHashMapV8$Node)
        at org.infinispan.container.DefaultDataContainer.compute(DefaultDataContainer.java:335)
        at org.infinispan.expiration.impl.ExpirationManagerImpl.handleInMemoryExpiration(ExpirationManagerImpl.java:135)
        at org.infinispan.expiration.impl.ClusterExpirationManager.processExpiration(ClusterExpirationManager.java:82)
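
      A rough reproducer sketch along the lines the title suggests (not the actual test added for this issue), assuming a cache built as in the configuration sketch above; thread counts, key space, and timings are made up and only serve to force the reaper and access threads onto the same entries:

      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.TimeUnit;

      import org.infinispan.Cache;

      public class ExpirationDeadlockReproducerSketch {

         // 'cache' is assumed to be the clustered, expiration-enabled cache from the sketch above.
         static void hammerExpiringEntries(Cache<String, String> cache) throws InterruptedException {
            ExecutorService accessors = Executors.newFixedThreadPool(4);
            for (int t = 0; t < 4; t++) {
               accessors.submit(() -> {
                  for (int i = 0; i < 10_000; i++) {
                     String key = "key-" + (i % 100);
                     // Tiny lifespan so the entry is usually expired by the next access,
                     // making get() race with the reaper over the same entry.
                     cache.put(key, "value", 1, TimeUnit.MILLISECONDS);
                     cache.get(key);
                  }
               });
            }
            accessors.shutdown();
            // If an access thread and the reaper block each other as in the dump above,
            // the pool never drains and this times out.
            if (!accessors.awaitTermination(1, TimeUnit.MINUTES)) {
               throw new AssertionError("Access threads did not finish; possible expiration deadlock");
            }
         }
      }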

            Assignee: Diego Lovison
            Reporter: Diego Lovison
            Votes: 0
            Watchers: 2
