JGRP-1601 - TP: message bundling based on count per dest rather than global count


    • Type: Enhancement
    • Resolution: Won't Do
    • Priority: Major
    • Fix Version: 3.3

      Currently, the message bundlers in TP accumulate message sizes in a single global 'count' variable: multicast messages and unicast messages to A, B and C all increment the same count.

      If we sent a message bundle to destination T only when (1) there are no more messages in the queue or (2) the accumulated bytes for T would exceed max_bundle_size, then we might send more messages as batches and fewer as individual messages.
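      A minimal sketch of the per-destination idea, with hypothetical names (this is not the actual TP/Bundler code; Message, Address, serialization and the actual transport send are all simplified away):

```java
import java.util.*;

// Simplified sketch: accumulate messages per destination and only flush a
// destination's bundle when adding another message would exceed max_bundle_size
// (or when the input queue has drained). Names are illustrative, not the
// actual JGroups TP/Bundler API.
public class PerDestinationBundler {
    static class Bundle {
        final List<byte[]> msgs = new ArrayList<>();
        long accumulatedBytes;
    }

    private final long maxBundleSize;                             // e.g. 55 * 1024
    private final Map<String, Bundle> bundles = new HashMap<>();  // dest -> pending bundle
    // (a null or special key could represent the multicast destination)

    public PerDestinationBundler(long maxBundleSize) {
        this.maxBundleSize = maxBundleSize;
    }

    /** Adds a message for dest; flushes only that destination's bundle if needed. */
    public void add(String dest, byte[] msg) {
        Bundle bundle = bundles.computeIfAbsent(dest, d -> new Bundle());
        if (bundle.accumulatedBytes + msg.length > maxBundleSize)
            flush(dest, bundle);                                   // send only this dest's batch
        bundle.msgs.add(msg);
        bundle.accumulatedBytes += msg.length;
    }

    /** Called when the input queue is empty: flush all pending bundles. */
    public void flushAll() {
        bundles.forEach(this::flush);
    }

    private void flush(String dest, Bundle bundle) {
        if (bundle.msgs.isEmpty())
            return;
        System.out.printf("sending bundle of %d msgs (%d bytes) to %s%n",
                          bundle.msgs.size(), bundle.accumulatedBytes, dest);
        bundle.msgs.clear();
        bundle.accumulatedBytes = 0;
    }
}
```

      The key difference from a global count is that a burst of traffic no longer forces the (still tiny) pending bundles of all other destinations out onto the wire.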

      Example:

      • We're sending messages of 10K (bytes) each and max_bundle_size=55K
      • Different threads send concurrently:
        • T1: 6 multicast messages
        • T2: 6 messages to A
        • T3: 6 messages to B
        • T4: 6 messages to C
        • T5: 6 messages to D
        • T6: 6 messages to E
      • If each of the threads gets 1 message in, we've reached max_bundle_size and will send a bundle for the multicast destination, 1 bundle for the unicast message to A, 1 for the message to B and so on, for a total of 5 bundles containing only one message each!
      • If we counted bytes per destination, we'd be able to send a bundle of 5 messages to the multicast dest, 5 to A, 5 to B and so on.
        • This might lead to better performance, as message batches on the receivers tend to be filled better, especially if we have many messages being sent concurrently.

      With a counter per dest we'd ideally send 6 message batches (containing 6 messages each), whereas with a single global count variable we'd send 36 batches, each containing only 1 message!
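      To make the arithmetic above concrete, here is a toy model (idealized round-robin interleaving of the 6 senders, 10K messages, max_bundle_size=55K; not the real bundler) that counts how many bundles each strategy would send:

```java
import java.util.*;

// Toy model of the example above: 6 senders (one multicast, five unicast to A..E),
// 6 messages of 10K each per sender, arriving perfectly interleaved. Compares how
// many bundles a global byte count produces versus a per-destination byte count.
public class BundlingComparison {
    static final int MSG_SIZE = 10_000, MAX_BUNDLE_SIZE = 55_000;
    static final String[] DESTS = {"mcast", "A", "B", "C", "D", "E"};
    static final int MSGS_PER_DEST = 6;

    public static void main(String[] args) {
        List<String> arrivals = new ArrayList<>();
        for (int round = 0; round < MSGS_PER_DEST; round++)   // round-robin interleaving
            arrivals.addAll(Arrays.asList(DESTS));

        System.out.printf("global count : %d bundles%n", globalCount(arrivals));
        System.out.printf("per-dest count: %d bundles%n", perDestCount(arrivals));
    }

    // Global rule: flush *all* pending destinations when the next message would
    // push the total accumulated bytes over MAX_BUNDLE_SIZE.
    static int globalCount(List<String> arrivals) {
        Map<String, Integer> pending = new LinkedHashMap<>(); // dest -> queued msgs
        int accumulated = 0, bundles = 0;
        for (String dest : arrivals) {
            if (accumulated + MSG_SIZE > MAX_BUNDLE_SIZE) {
                bundles += pending.size();                    // one (tiny) bundle per dest
                pending.clear();
                accumulated = 0;
            }
            pending.merge(dest, 1, Integer::sum);
            accumulated += MSG_SIZE;
        }
        return bundles + pending.size();                      // final flush when queue drains
    }

    // Per-destination rule: flush only that destination's bundle when its own
    // accumulated bytes would exceed MAX_BUNDLE_SIZE.
    static int perDestCount(List<String> arrivals) {
        Map<String, Integer> accumulated = new LinkedHashMap<>(); // dest -> queued bytes
        int bundles = 0;
        for (String dest : arrivals) {
            int bytes = accumulated.getOrDefault(dest, 0);
            if (bytes + MSG_SIZE > MAX_BUNDLE_SIZE) {
                bundles++;                                     // flush this dest only
                bytes = 0;
            }
            accumulated.put(dest, bytes + MSG_SIZE);
        }
        for (int bytes : accumulated.values())                 // final flush when queue drains
            if (bytes > 0)
                bundles++;
        return bundles;
    }
}
```

      Under these assumptions the global rule yields 36 single-message bundles, while the per-destination rule yields 12 (a 5-message bundle plus a 1-message remainder per destination); with a max_bundle_size of 60K or more it would reach the ideal of 6 bundles of 6 messages described above.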

      Investigate whether it would make sense (performance-wise) to associate the count with individual destinations rather than using a single global count variable.

            Assignee: Bela Ban (rhn-engineering-bban)
            Reporter: Bela Ban (rhn-engineering-bban)