AMQ Clients / ENTMQCL-855

Customer feedback - Multithreading documentation doesn't explicitly advise against locking and blocking in messaging handler callbacks


    • Type: Bug
    • Resolution: Done
    • Priority: Major
    • 2.1.0.GA
    • 1.2.0.GA
    • documentation
    • None
    • Low

      We are occasionally seeing surprising delays between Proton API calls and the callbacks that indicate their successful completion. For example, we have measured delays of 10+ seconds between our call to connection::open_sender and the resulting messaging_handler::on_sender_open callback. We have sampled the call stack during these delays, and the thread running the container shows up as being in epoll_wait at those times. This leads us to believe the delays are not caused by lock contention in our code.
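      For illustration, here is a minimal sketch (not taken from the customer's code) of the call/callback pair in question, with hypothetical timing instrumentation; the broker address and queue name are made up:

          #include <proton/connection.hpp>
          #include <proton/container.hpp>
          #include <proton/messaging_handler.hpp>
          #include <proton/sender.hpp>

          #include <chrono>
          #include <iostream>

          class timing_handler : public proton::messaging_handler {
              std::chrono::steady_clock::time_point open_sender_called_;

              void on_container_start(proton::container& c) override {
                  c.connect("amqp://localhost:5672");     // hypothetical broker address
              }

              void on_connection_open(proton::connection& c) override {
                  open_sender_called_ = std::chrono::steady_clock::now();
                  c.open_sender("example-queue");         // the API call being timed
              }

              void on_sender_open(proton::sender&) override {
                  auto delay = std::chrono::steady_clock::now() - open_sender_called_;
                  std::cout << "open_sender -> on_sender_open: "
                            << std::chrono::duration_cast<std::chrono::milliseconds>(delay).count()
                            << " ms" << std::endl;
              }
          };

          int main() {
              timing_handler h;
              proton::container(h).run();                 // the event loop runs on this thread
          }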

      Upstream documentation states the following regarding serialization of callbacks:

      The calls for each connection are serialized - callbacks for the same connection are never made concurrently.

      proton::container ensures that calls to event callbacks for each connection instance are serialized (not called concurrently), but callbacks for different connections can be safely executed in parallel.

      When the work function is called by Proton, it will be serialized safely so that you can treat the work function like an event callback and safely access the handler and Proton objects stored on it.
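      The "work function" pattern the quoted text refers to, as used in the upstream multithreaded_client example, looks roughly like the sketch below; names and addresses are illustrative. The only thing an application thread does with Proton is add a work function to the connection's work_queue:

          #include <proton/container.hpp>
          #include <proton/message.hpp>
          #include <proton/messaging_handler.hpp>
          #include <proton/sender.hpp>
          #include <proton/work_queue.hpp>

          #include <mutex>

          class send_handler : public proton::messaging_handler {
              proton::sender sender_;
              proton::work_queue* work_queue_ = nullptr;  // valid once the sender is open
              std::mutex lock_;                           // guards only the two members above

              void on_container_start(proton::container& c) override {
                  c.open_sender("amqp://localhost/example-queue");  // hypothetical address
              }

              void on_sender_open(proton::sender& s) override {
                  std::lock_guard<std::mutex> g(lock_);
                  sender_ = s;
                  work_queue_ = &s.work_queue();
              }

            public:
              // Called from an application thread, never from a Proton callback.
              void send_from_other_thread(const proton::message& m) {
                  std::lock_guard<std::mutex> g(lock_);
                  if (work_queue_) {
                      // The lambda is the "work function": Proton runs it on the
                      // connection's thread, serialized with the other callbacks.
                      work_queue_->add([this, m]() { sender_.send(m); });
                  }
              }
          };

      Note that the brief lock here only protects the handler's own members and is never held across a blocking operation; that is different from the long-blocking locks discussed below.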

      None of this explicitly says anything about locking or blocking inside the callbacks. However, in discussion with the proton-c authors, it was confirmed that blocking or taking contended locks inside a callback is indeed known to be invalid.
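      To make the point concrete, here is a hedged illustration (not taken from any documentation) of the kind of callback the documentation should warn against. Anything that can block for a long time inside a callback, such as a contended lock, a sleep, or synchronous I/O, stalls the container thread, so no further I/O or callbacks happen for that connection until it returns:

          #include <proton/delivery.hpp>
          #include <proton/message.hpp>
          #include <proton/messaging_handler.hpp>

          #include <chrono>
          #include <mutex>
          #include <thread>

          std::mutex app_lock;  // hypothetical lock shared with application threads

          class bad_handler : public proton::messaging_handler {
              void on_message(proton::delivery&, proton::message&) override {
                  // DON'T: if another thread holds app_lock for seconds, the container
                  // thread blocks here and all callbacks for this connection stop,
                  // which shows up as long delays between API calls and their
                  // completion callbacks.
                  std::lock_guard<std::mutex> g(app_lock);

                  // DON'T: blocking sleeps or synchronous network/disk calls inside a
                  // callback have the same effect.
                  std::this_thread::sleep_for(std::chrono::seconds(10));
              }
          };

      The remedy is to hand long-running work off to an application thread and to re-enter Proton only through a work function added to the work_queue, as in the sketch above.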

      Also note that the downstream documentation does not mention this restriction at all, either implicitly or explicitly.

            Assignee: jross@redhat.com Justin Ross
            Reporter: rhn-support-rkieley Roderick Kieley
            Votes: 0
            Watchers: 3
