JBoss A-MQ / ENTMQ-801

mqtt: unsustainable latency by subscribers on high-throughput message flows


    • Type: Bug
    • Resolution: Done
    • Priority: Critical
    • Affects Version: JBoss A-MQ 6.1
    • Component: mqtt

      We have the following test scenario. During the test run, the rate at which the consumers receive messages is roughly half the rate at which the publishers send them.

      This causes a large accumulation of messages in the broker, which eventually becomes unsustainable.

      Producers: 800 MQTT clients connect to the Broker, create ~40 subscriptions each, and start publishing messages of 1.5KB in size at a frequency of roughly one message every 30 seconds. The publishers run on a local Ubuntu box connected to the Internet through Ethernet on our company network.

      Consumers: 5 MQTT clients connect to the Broker, create ~10 subscriptions each matching all messages published by the Producers, and very rarely publish a larger message (10KB) randomly addressed to one of the Producers. The consumers run on a local Ubuntu box connected to the Internet through Ethernet on our company network.

      However:

      The problem cannot be reproduced if the Consumers are moved to a machine in AWS, with no other changes to the test configuration.

      The problem cannot be reproduced if the OpenWire transport is used instead of MQTT, with no other changes to the test configuration.

      Additional Information:
      I also ran a scaled-down test case with one consumer and 5 producers using MQTT. With the QoS level set to 0 (at most once delivery), the consumer consumed messages much faster. With the QoS level set to 1 (at least once delivery), there was a considerable delay before the consumer consumed messages. It certainly looks like MQTT does not have proper prefetch support. At QoS level 0 (fire and forget), the broker just keeps sending messages without waiting for any acks back, so delivery was much faster. At QoS level 1, the broker needs an ACK back before sending the next message. So it appears there is no proper prefetch support that would allow the broker to do batched delivery.
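      A back-of-envelope model makes the ack-gating argument concrete. The 50 ms round-trip time and the prefetch values below are illustrative assumptions, not measurements from this test; the producer numbers come from the scenario above:

```python
# Rough model (assumption, not measured on the broker): with QoS 1 and
# an effective prefetch of 1, the broker waits for a PUBACK before
# sending the next message, so delivery is bounded by one message per
# broker<->consumer round trip; a prefetch of N keeps N in flight.
def max_delivery_rate(rtt_s, prefetch):
    """Upper bound on delivered messages/second for ack-gated delivery."""
    return prefetch / rtt_s

# Aggregate ingress in the full test: 800 producers * 1 msg / 30 s
ingress = 800 / 30.0                  # ~26.7 msg/s

print(max_delivery_rate(0.05, 1))     # 20.0 msg/s at a 50 ms RTT: falls behind ingress
print(max_delivery_rate(0.05, 10))    # 200.0 msg/s: easily keeps up
```

      Under these assumed numbers, a prefetch of 1 caps delivery below the aggregate publish rate, which matches the observed accumulation in the broker.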

      I think there could be two problems involved in this issue:

      1. There is no prefetch support for the MQTT transport.
      2. There is no TCP optimization for the MQTT transport. My tests showed that, with exactly the same test configuration, the OpenWire transport is much faster than the MQTT transport. Also, if the consumer runs on the same machine as the broker (in this case, both consumer and broker running on an AWS instance), the problem is not seen.
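      To probe the second point, socket-level options could be tried on the broker's MQTT transport connector in activemq.xml. This is a sketch, not a confirmed fix: the option names follow the general ActiveMQ transport reference, and the buffer sizes are illustrative values that should be verified against the A-MQ 6.1 documentation:

```xml
<!-- Hypothetical tuning of the MQTT connector (values are illustrative) -->
<transportConnector name="mqtt"
    uri="mqtt+nio://0.0.0.0:1883?transport.defaultKeepAlive=60000&amp;transport.tcpNoDelay=true&amp;transport.socketBufferSize=131072"/>
```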

            Assignee: dejanbosanac Dejan Bosanac
            Reporter: rhn-support-qluo Joe Luo
            Votes: 0
            Watchers: 3
