AMQ Broker / ENTMQBR-3097

In multiple scale up/down scenarios the broker accumulates many store-and-forward (SF) queues


Details

    • Release Notes
      When a broker Pod is scaled down, its messages are migrated to another broker Pod via a store-and-forward queue created in the target broker Pod. Previously, these queues were not deleted after message migration was finished. Also, because broker Pods have unique node IDs, the queues could not be reused by other Pods. Over time, accumulation of these unused queues might cause unwanted memory consumption. This issue is resolved. Store-and-forward queues created for message migration are now deleted when migration is finished.
    • Documented as Resolved Issue
    • AMQ Sprint 3219, AMQ Sprint 3519

    Description

      Performing scale up/down multiple times leaves the broker with many store_and_forward queues (old and new), which I assume is expected behaviour, as each newly created Pod has a new node ID.

      |NAME                     |ADDRESS                  |CONSUMER_COUNT |MESSAGE_COUNT |MESSAGES_ADDED |DELIVERING_COUNT |MESSAGES_ACKED |
      |$.artemis.internal.sf.my-cluster.0322339e-ac43-11e9-92ce-0a580a83002a|$.artemis.internal.sf.my-cluster.0322339e-ac43-11e9-92ce-0a580a83002a|0              |0             |0              |0                |0              |
      |$.artemis.internal.sf.my-cluster.16715b5f-ac41-11e9-9016-0a580a800221|$.artemis.internal.sf.my-cluster.16715b5f-ac41-11e9-9016-0a580a800221|0              |0             |0              |0                |0              |
      |$.artemis.internal.sf.my-cluster.26d2f567-ac3e-11e9-bbe9-0a580a830026|$.artemis.internal.sf.my-cluster.26d2f567-ac3e-11e9-bbe9-0a580a830026|0              |0             |0              |0                |0              |
      |$.artemis.internal.sf.my-cluster.42ea7608-ac4a-11e9-9c26-0a580a83002f|$.artemis.internal.sf.my-cluster.42ea7608-ac4a-11e9-9c26-0a580a83002f|1              |0             |0              |0                |0              |
      |$.artemis.internal.sf.my-cluster.46e50f81-ac3d-11e9-8601-0a580a830024|$.artemis.internal.sf.my-cluster.46e50f81-ac3d-11e9-8601-0a580a830024|0              |0             |0              |0                |0              |
      |$.artemis.internal.sf.my-cluster.695c74ed-ac3d-11e9-bc0a-0a580a830025|$.artemis.internal.sf.my-cluster.695c74ed-ac3d-11e9-bc0a-0a580a830025|0              |0             |0              |0                |0              |
      |$.artemis.internal.sf.my-cluster.6a49ce5e-ac4a-11e9-a630-0a580a830030|$.artemis.internal.sf.my-cluster.6a49ce5e-ac4a-11e9-a630-0a580a830030|1              |0             |0              |0                |0              |
      |$.artemis.internal.sf.my-cluster.7621db54-ac42-11e9-85e7-0a580a830028|$.artemis.internal.sf.my-cluster.7621db54-ac42-11e9-85e7-0a580a830028|0              |0             |0              |0                |0              |
      |$.artemis.internal.sf.my-cluster.8f2a6bf5-ac4a-11e9-ba2d-0a580a800226|$.artemis.internal.sf.my-cluster.8f2a6bf5-ac4a-11e9-ba2d-0a580a800226|1              |0             |0              |0                |0              |
      |$.artemis.internal.sf.my-cluster.91cf38f1-ac3d-11e9-b669-0a580a80021f|$.artemis.internal.sf.my-cluster.91cf38f1-ac3d-11e9-b669-0a580a80021f|0              |0             |0              |0                |0              |
      |$.artemis.internal.sf.my-cluster.9834161a-ac42-11e9-ba5a-0a580a800222|$.artemis.internal.sf.my-cluster.9834161a-ac42-11e9-ba5a-0a580a800222|0              |0             |0              |0                |0              |
      |$.artemis.internal.sf.my-cluster.b2bd7a4c-ac4a-11e9-9d13-0a580a830031|$.artemis.internal.sf.my-cluster.b2bd7a4c-ac4a-11e9-9d13-0a580a830031|1              |0             |0              |0                |0              |
      |$.artemis.internal.sf.my-cluster.bb18e313-ac42-11e9-9ebc-0a580a800223|$.artemis.internal.sf.my-cluster.bb18e313-ac42-11e9-9ebc-0a580a800223|0              |0             |0              |0                |0              |
      |$.artemis.internal.sf.my-cluster.d61098c6-ac4a-11e9-a522-0a580a800227|$.artemis.internal.sf.my-cluster.d61098c6-ac4a-11e9-a522-0a580a800227|1              |0             |0              |0                |0              |
      |$.artemis.internal.sf.my-cluster.de09210b-ac42-11e9-8454-0a580a830029|$.artemis.internal.sf.my-cluster.de09210b-ac42-11e9-8454-0a580a830029|0              |0             |0              |0                |0              |
      |$.artemis.internal.sf.my-cluster.f729f4b3-ac4a-11e9-b620-0a580a830032|$.artemis.internal.sf.my-cluster.f729f4b3-ac4a-11e9-b620-0a580a830032|1              |0             |0              |0                |0              |
      

      The output of queue stat is attached for reference.

      I have raised this enhancement to explore whether the drain Pod could delete the old store_and_forward queues, to avoid confusion.
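      The queue stat output above makes the stale queues easy to spot: every SF queue belonging to a node ID that no longer exists shows zero consumers, while queues for live cluster bridges keep one consumer attached. As a minimal sketch (not the actual drain-Pod code; the `stale_sf_queues` helper and the row format are assumptions modelled on the `artemis queue stat` columns), the filtering logic could look like:

```python
# Prefix of cluster store-and-forward queue names, as seen in the
# queue stat output above ($.artemis.internal.sf.<cluster>.<node-id>).
SF_PREFIX = "$.artemis.internal.sf."

def stale_sf_queues(rows):
    """Return names of store-and-forward queues with no attached consumer.

    `rows` is a list of dicts with at least 'name' and 'consumer_count'
    keys, mirroring the NAME and CONSUMER_COUNT columns of
    `artemis queue stat`. An SF queue with zero consumers belongs to a
    node ID that is gone, since a live cluster bridge keeps exactly one
    consumer on its own SF queue.
    """
    return [
        r["name"]
        for r in rows
        if r["name"].startswith(SF_PREFIX) and r["consumer_count"] == 0
    ]

rows = [
    {"name": "$.artemis.internal.sf.my-cluster.0322339e-ac43-11e9-92ce-0a580a83002a",
     "consumer_count": 0},
    {"name": "$.artemis.internal.sf.my-cluster.42ea7608-ac4a-11e9-9c26-0a580a83002f",
     "consumer_count": 1},
    {"name": "TEST.QUEUE", "consumer_count": 0},
]
# Only the zero-consumer SF queue is reported as stale.
print(stale_sf_queues(rows))
```

      The queues selected this way could then be removed through the broker's management API; the fix described in the release note automates this by deleting each SF queue once its message migration completes.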

      Attachments

    People

              gaohoward Howard Gao
              dbruscin Domenico Francesco Bruscino
              Roman Vais Roman Vais
      Votes: 0
      Watchers: 2
