JGroups / JGRP-1402

NAKACK: too much lock contention between sending and receiving messages


Details

    • Type: Enhancement
    • Resolution: Done
    • Priority: Major
    • Fix Version: 3.1

    Description

      When we have only 1 node in a cluster, sending and receiving messages creates a lot of contention in NakReceiverWindow (NRW). To reproduce:

      • Start MPerf
      • Press '1' to send 1 million messages
      • The throughput is ca. 20-30 MB/sec, compared to 140 MB/sec when running multiple instances of MPerf on the same box!

      In the profiler, we can see that the write lock in NRW accounts for ca. 99% of all blocking! Ca. half of it is caused by NRW.add(), the other half by NRW.removeMany().

      The reason is that, when we send a message, it is added to the NRW via add(). The incoming thread then tries to remove as many messages as possible via removeMany(), which blocks the sender's add() calls; and vice versa, removeMany() is blocked from accessing the NRW by the many concurrent add()s.
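      To illustrate the pattern, here is a minimal sketch of a window guarded by a single write lock, as described above. The names mirror NakReceiverWindow, but this is not the actual JGroups code:

      import java.util.ArrayList;
      import java.util.List;
      import java.util.Map;
      import java.util.TreeMap;
      import java.util.concurrent.locks.ReentrantReadWriteLock;

      // Sketch only: both add() and removeMany() funnel through the same
      // write lock, so the sender and the incoming thread serialize each other.
      class WindowSketch {
          private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
          private final TreeMap<Long,Object> msgs = new TreeMap<>();
          private long highestDelivered = 0;

          // Called by the sender for every message it sends.
          boolean add(long seqno, Object msg) {
              lock.writeLock().lock();          // same lock as removeMany()
              try {
                  return msgs.putIfAbsent(seqno, msg) == null;
              } finally {
                  lock.writeLock().unlock();
              }
          }

          // Called by the incoming thread: holds the write lock while it
          // drains all deliverable messages, stalling every concurrent add().
          List<Object> removeMany(int maxBatch) {
              List<Object> batch = new ArrayList<>();
              lock.writeLock().lock();
              try {
                  Map.Entry<Long,Object> e;
                  while (batch.size() < maxBatch
                         && (e = msgs.firstEntry()) != null
                         && e.getKey() == highestDelivered + 1) {
                      msgs.pollFirstEntry();
                      highestDelivered++;
                      batch.add(e.getValue());
                  }
                  return batch;
              } finally {
                  lock.writeLock().unlock();
              }
          }
      }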

      SOLUTION 1:

      • If we only have 1 member in the cluster, call removeMany() immediately after NRW.add() on the sender: there is no need for a message to be processed by the incoming thread pool if we're the only member in the cluster (see the sketch after this list).
      • The downside is that we don't reduce the contention on NRW when we have more than 1 member; the lock contention may even slow down clusters with more than one member!
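      A hypothetical sketch of this fast path, building on WindowSketch above; numMembers, deliver() and the batch size are placeholders, not the actual NAKACK code:

      import java.util.List;

      class SingleMemberFastPath {
          private final WindowSketch window = new WindowSketch();
          private volatile int numMembers = 1;   // updated on view changes

          void send(long seqno, Object msg) {
              window.add(seqno, msg);
              if (numMembers == 1) {
                  // We're the only member: no other thread will deliver this
                  // message, so drain the window right here on the sender
                  // thread instead of going through the incoming thread pool.
                  List<Object> batch;
                  while (!(batch = window.removeMany(100)).isEmpty())
                      for (Object m : batch)
                          deliver(m);
              }
          }

          void deliver(Object msg) { /* pass the message up the stack */ }
      }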

      SOLUTION 2:

      • Make NRW.add() and remove() more efficient, so that they contend less on the same lock.
      • [1] should help; a sketch of one possible direction follows the link below.

      [1] https://issues.jboss.org/browse/JGRP-1396
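      For illustration only, one way to reduce the contention is to let add() publish into a slot array with a CAS instead of taking the window's write lock. This is not the design from [1]; capacity handling, wrap-around and retransmission are all omitted:

      import java.util.ArrayList;
      import java.util.List;
      import java.util.concurrent.atomic.AtomicReferenceArray;

      class LockFreeAddSketch {
          private final AtomicReferenceArray<Object> slots =
              new AtomicReferenceArray<>(1 << 16); // fixed capacity, sketch only
          private long highestDelivered = 0;       // touched only by the remover

          // add() no longer takes a lock: one CAS per message.
          boolean add(long seqno, Object msg) {
              int idx = (int) (seqno & (slots.length() - 1));
              return slots.compareAndSet(idx, null, msg);
          }

          // Assumes a single remover thread (as with ordered delivery): it
          // walks consecutive seqnos without ever blocking a concurrent add().
          List<Object> removeMany(int maxBatch) {
              List<Object> batch = new ArrayList<>();
              while (batch.size() < maxBatch) {
                  int idx = (int) ((highestDelivered + 1) & (slots.length() - 1));
                  Object msg = slots.getAndSet(idx, null);
                  if (msg == null)
                      break;                       // gap: wait for retransmission
                  highestDelivered++;
                  batch.add(msg);
              }
              return batch;
          }
      }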

    People

      Assignee: rhn-engineering-bban (Bela Ban)
      Reporter: rhn-engineering-bban (Bela Ban)
      Votes: 0
      Watchers: 1
