
[ISPN-6478] Netty worker thread starvation with events


    • Type: Bug
    • Resolution: Obsolete
    • Priority: Critical
    • Affects Version: 8.2.1.Final
    • Component: Server

      As a result of ISPN-6005, we decoupled incoming Hot Rod server invocations from sending events by adding an intermediate queue that holds the events to be sent to clients. However, this separation can lead to Netty worker thread starvation: events are added to the queue on Netty's I/O thread, so if the queue is full the put blocks and incoming requests are stuck.
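
      The following is a minimal, self-contained sketch of that pattern (it is not Infinispan code; the queue capacity, event type and thread roles are hypothetical stand-ins): a bounded queue filled from the I/O thread parks that thread in put() as soon as a slow client lets the queue fill up, and from that point the thread can no longer read from its channels.

      import java.util.concurrent.ArrayBlockingQueue;
      import java.util.concurrent.BlockingQueue;
      import java.util.concurrent.TimeUnit;

      public class EventQueueStarvationSketch {
          public static void main(String[] args) throws InterruptedException {
              // Hypothetical bounded per-client event queue.
              BlockingQueue<String> eventQueue = new ArrayBlockingQueue<>(2);

              // Stand-in for a client that drains its events slowly.
              Thread slowClient = new Thread(() -> {
                  try {
                      while (true) {
                          String event = eventQueue.take();
                          TimeUnit.SECONDS.sleep(3); // slow to process/ack
                          System.out.println("delivered " + event);
                      }
                  } catch (InterruptedException ignored) {
                  }
              });
              slowClient.setDaemon(true);
              slowClient.start();

              // Stand-in for the Netty I/O thread producing events: once the queue
              // is full, put() parks this thread, so it can no longer serve requests.
              for (int i = 0; i < 6; i++) {
                  System.out.println("I/O thread enqueueing event " + i);
                  eventQueue.put("event-" + i); // blocks here when the queue is full
              }
          }
      }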


            Tristan Tarrant added a comment:

            Infinispan issue tracking has been migrated to GitHub issues: https://github.com/infinispan/infinispan/issues
            If you still want this issue to be worked on, create a new issue on GitHub and link this issue.

            Karsten Klein (Inactive) added a comment:

            We are seeing this issue under higher load on our Infinispan Server based system.

            Here is the top-most part of a stack dump from an .hprof. The thread never comes back, which has severe effects on our system (entity locks remain in place):

            java.lang.Thread @ 0x6c5ee2d70
                at sun.misc.Unsafe.park(ZJ)V (Native Method)
                at java.util.concurrent.locks.LockSupport.park(Ljava/lang/Object;)V (LockSupport.java:175)
                at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await()V (AbstractQueuedSynchronizer.java:2039)
                at java.util.concurrent.LinkedBlockingQueue.put(Ljava/lang/Object;)V (LinkedBlockingQueue.java:350)
                at org.infinispan.server.hotrod.ClientListenerRegistry$BaseClientEventSender.sendEvent(...)V (ClientListenerRegistry.scala:296)
                at org.infinispan.server.hotrod.ClientListenerRegistry$BaseClientEventSender.onCacheEvent(...)V (ClientListenerRegistry.scala:262)
                at sun.reflect.GeneratedMethodAccessor247.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; (Unknown Source)

            We are using 8.2.6 at the moment. Upgrading to the latest 8.2.x version is doable; switching to 9.x at this point in time is not regarded as an option.

            How and when do you plan to approach this issue?
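
            For illustration only, and not Infinispan's actual fix: the trace above shows the worker parked in LinkedBlockingQueue.put inside BaseClientEventSender.sendEvent, so one possible mitigation is to enqueue with a timed offer and apply an explicit overflow policy instead of blocking. The class name, queue capacity and timeout below are hypothetical.

            import java.util.concurrent.ArrayBlockingQueue;
            import java.util.concurrent.BlockingQueue;
            import java.util.concurrent.TimeUnit;

            public class NonBlockingEnqueueSketch {
                // Hypothetical bounded per-client event queue.
                private final BlockingQueue<String> eventQueue = new ArrayBlockingQueue<>(100);

                // Tries to enqueue without parking the calling (I/O) thread indefinitely.
                // Returns false if the queue stayed full for the whole timeout, so the
                // caller can drop the event or disconnect the slow client.
                public boolean tryEnqueue(String event) throws InterruptedException {
                    return eventQueue.offer(event, 50, TimeUnit.MILLISECONDS);
                }

                public static void main(String[] args) throws InterruptedException {
                    NonBlockingEnqueueSketch sender = new NonBlockingEnqueueSketch();
                    for (int i = 0; i < 200; i++) {
                        if (!sender.tryEnqueue("event-" + i)) {
                            // The overflow policy is the real design decision: here we
                            // simply drop, so the I/O thread keeps serving requests.
                            System.out.println("queue full, dropping event-" + i);
                        }
                    }
                }
            }

            The trade-off sits in the overflow policy itself: dropping events weakens listener delivery guarantees, while closing the slow client's connection confines the cost to that client rather than to a shared worker thread.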


              Assignee: Galder Zamarreño (rh-ee-galder)
              Reporter: Galder Zamarreño (rh-ee-galder)
              Archiver: Amol Dongare (rhn-support-adongare)
