AMQ Broker / ENTMQBR-2011

Consumers of store-and-forward internal queues get dropped when syncing large messages


Details

    • Type: Bug
    • Resolution: Done
    • Priority: Critical
    • Fix Version/s: AMQ 7.2.2.GA
    • Affects Version/s: AMQ 7.2.0.GA
    • Component/s: clustering
    • Labels: None
    • Release Notes: Using temporary destinations in a clustered environment caused messages to get dropped from the store-and-forward queues when clustering messages between brokers. As a result, the broker was not able to distribute messages across cluster nodes and needed to be restarted. Now you can safely use temporary destinations in a clustered environment. (A client-side sketch of this scenario follows the list.)
    • Documented as Resolved Issue
    • AMQ Broker 1839, AMQ Broker 1842
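
      For context, the "temporary destinations" mentioned in the release note are the ones a JMS client creates, typically for request/reply. The ActiveMQ.Advisory.TempQueue address and __HDR_* properties in the log below point to an OpenWire (ActiveMQ 5.x-style JMS) client. The following is a minimal sketch of that client-side pattern; the broker URL and queue name are illustrative and not taken from the reported environment:

      import javax.jms.Connection;
      import javax.jms.Message;
      import javax.jms.MessageProducer;
      import javax.jms.Session;
      import javax.jms.TemporaryQueue;
      import javax.jms.TextMessage;
      import org.apache.activemq.ActiveMQConnectionFactory;

      public class TempQueueRequestReply {
          public static void main(String[] args) throws Exception {
              // Illustrative URL; any one of the clustered brokers would do.
              ActiveMQConnectionFactory factory =
                      new ActiveMQConnectionFactory("tcp://broker-host:61616");
              Connection connection = factory.createConnection();
              try {
                  connection.start();
                  Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

                  // Creating the temporary queue is what makes the broker publish
                  // ActiveMQ.Advisory.TempQueue advisories, which the cluster bridges
                  // then forward between nodes (see the log in the Description).
                  TemporaryQueue replyQueue = session.createTemporaryQueue();

                  MessageProducer producer = session.createProducer(session.createQueue("requests"));
                  TextMessage request = session.createTextMessage("ping");
                  request.setJMSReplyTo(replyQueue);
                  producer.send(request);

                  Message reply = session.createConsumer(replyQueue).receive(5000);
                  System.out.println("reply: " + reply);
              } finally {
                  connection.close();
              }
          }
      }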

    Description

      In a cluster of 3 master/3 slave nodes, the following error is thrown:

      2018-09-19 21:05:10,480 WARN  [org.apache.activemq.artemis.core.server] AMQ222151: removing consumer which did not handle a message, consumer=ClusterConnectionBridge@ee644b8 [name=$.artemis.internal.sf.dsi.16c263e0-b78d-11e8-b794-005056b12ceb, queue=QueueImpl[name=$.artemis.internal.sf.dsi.16c263e0-b78d-11e8-b794-005056b12ceb, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=d076697f-bab8-11e8-aaa3-005056b15513], temp=false]@618bbba9 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@ee644b8 [name=$.artemis.internal.sf.dsi.16c263e0-b78d-11e8-b794-005056b12ceb, queue=QueueImpl[name=$.artemis.internal.sf.dsi.16c263e0-b78d-11e8-b794-005056b12ceb, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=d076697f-bab8-11e8-aaa3-005056b15513], temp=false]@618bbba9 targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=f00396-sys-dmz-wolseley-com], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@1611370719[nodeUUID=d076697f-bab8-11e8-aaa3-005056b15513, connector=TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=f00395-sys-dmz-wolseley-com, address=, server=ActiveMQServerImpl::serverUUID=d076697f-bab8-11e8-aaa3-005056b15513])) [initialConnectors=[TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=f00396-sys-dmz-wolseley-com], discoveryGroupConfiguration=null]], message=Reference[6020018]:NON-RELIABLE:CoreMessage[messageID=6020018,durable=false,userID=null,priority=0, timestamp=0,expiration=0, durable=false, address=ActiveMQ.Advisory.TempQueue,size=1525,properties=TypedProperties[_AMQ_ROUTE_TO$.artemis.internal.sf.dsi.16c263e0-b78d-11e8-b794-005056b12ceb=[0000 0000 0040 9A5A 0000 0000 0040 9A4F 0000 0000 0040 9A27 0000 0000 005B  ...  0000 0040 9AD9 0000 0000 003B A3B8 0000 0000 0040 99E1 0000 0000 003B A1D5),bytesAsLongs(4233818,4233807,4233767,6005863,4233718,4233756,4233695,4233849,5995089,6005449,4233793,4233666,4233677,6005367,5996066,6005234,4233778,4233945,3908536,4233697,3908053],__HDR_BROKER_IN_TIME=1537405510478,_AMQ_ROUTING_TYPE=0,__HDR_GROUP_SEQUENCE=0,__HDR_COMMAND_ID=0,__HDR_DATASTRUCTURE=[0000 0077 0800 0000 0000 0178 0100 3949 443A 6630 3033 3939 2E73 7973 2E64  ... 61 322D 6163 6661 2D39 3437 3930 3731 3265 6331 3701 0000 0000 0000 0000 00),_AMQ_ROUTE_TO$.artemis.internal.sf.dsi.f60ad065-b784-11e8-b3b1-005056b1dcfc=[0000 0000 000B 08B3 0000 0000 000B 0790 0000 0000 000B 01DF 0000 0000 000B  ...  0000 000B 07E8 0000 0000 000B 0147 0000 0000 000B 0800 0000 0000 000B 0219),bytesAsLongs(723123,722832,721375,722993,722920,721223,722944,721433],_AMQ_DUPL_ID=ID:f00395.sys.ds.wolseley.com-44687-1537362276159-1:1:0:0:109,__HDR_MESSAGE_ID=[0000 005C 6E00 017B 0100 3549 443A 6630 3033 3935 2E73 7973 2E64 732E 776F  ...  0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 006D 0000 0000 0000 0000),__HDR_DROPPABLE=false,__HDR_ARRIVAL=0,__HDR_PRODUCER_ID=[0000 0049 7B01 0035 4944 3A66 3030 3339 352E 7379 732E 6473 2E77 6F6C 7365  ... 37 3336 3232 3736 3135 392D 313A 3100 0000 0000 0000 0000 0000 0000 0000 00),JMSType=Advisory]]@1721081701: java.lang.IndexOutOfBoundsException: writerIndex: 4 (expected: readerIndex(0) <= writerIndex <= capacity(0))
      	at io.netty.buffer.AbstractByteBuf.writerIndex(AbstractByteBuf.java:118) [netty-all-4.1.19.Final-redhat-1.jar:4.1.19.Final-redhat-1]
      	at io.netty.buffer.WrappedByteBuf.writerIndex(WrappedByteBuf.java:129) [netty-all-4.1.19.Final-redhat-1.jar:4.1.19.Final-redhat-1]
      	at org.apache.activemq.artemis.core.buffers.impl.ResetLimitWrappedActiveMQBuffer.writerIndex(ResetLimitWrappedActiveMQBuffer.java:128) [artemis-core-client-2.6.1.amq-720004-redhat-1.jar:2.6.1.amq-720004-redhat-1]
      	at org.apache.activemq.artemis.core.buffers.impl.ResetLimitWrappedActiveMQBuffer.<init>(ResetLimitWrappedActiveMQBuffer.java:60) [artemis-core-client-2.6.1.amq-720004-redhat-1.jar:2.6.1.amq-720004-redhat-1]
      	at org.apache.activemq.artemis.core.message.impl.CoreMessage.internalWritableBuffer(CoreMessage.java:360) [artemis-core-client-2.6.1.amq-720004-redhat-1.jar:2.6.1.amq-720004-redhat-1]
      	at org.apache.activemq.artemis.core.message.impl.CoreMessage.getBodyBuffer(CoreMessage.java:353) [artemis-core-client-2.6.1.amq-720004-redhat-1.jar:2.6.1.amq-720004-redhat-1]
      	at org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.doSend(ClientProducerImpl.java:242) [artemis-core-client-2.6.1.amq-720004-redhat-1.jar:2.6.1.amq-720004-redhat-1]
      	at org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.send(ClientProducerImpl.java:129) [artemis-core-client-2.6.1.amq-720004-redhat-1.jar:2.6.1.amq-720004-redhat-1]
      	at org.apache.activemq.artemis.core.server.cluster.impl.BridgeImpl.deliverStandardMessage(BridgeImpl.java:743) [artemis-server-2.6.1.amq-720004-redhat-1.jar:2.6.1.amq-720004-redhat-1]
      	at org.apache.activemq.artemis.core.server.cluster.impl.BridgeImpl.handle(BridgeImpl.java:619) [artemis-server-2.6.1.amq-720004-redhat-1.jar:2.6.1.amq-720004-redhat-1]
      	at org.apache.activemq.artemis.core.server.impl.QueueImpl.handle(QueueImpl.java:2985) [artemis-server-2.6.1.amq-720004-redhat-1.jar:2.6.1.amq-720004-redhat-1]
      	at org.apache.activemq.artemis.core.server.impl.QueueImpl.deliver(QueueImpl.java:2341) [artemis-server-2.6.1.amq-720004-redhat-1.jar:2.6.1.amq-720004-redhat-1]
      	at org.apache.activemq.artemis.core.server.impl.QueueImpl.access$2000(QueueImpl.java:107) [artemis-server-2.6.1.amq-720004-redhat-1.jar:2.6.1.amq-720004-redhat-1]
      	at org.apache.activemq.artemis.core.server.impl.QueueImpl$DeliverRunner.run(QueueImpl.java:3211) [artemis-server-2.6.1.amq-720004-redhat-1.jar:2.6.1.amq-720004-redhat-1]
      	at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:42) [artemis-commons-2.6.1.amq-720004-redhat-1.jar:2.6.1.amq-720004-redhat-1]
      	at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31) [artemis-commons-2.6.1.amq-720004-redhat-1.jar:2.6.1.amq-720004-redhat-1]
      	at org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:66) [artemis-commons-2.6.1.amq-720004-redhat-1.jar:2.6.1.amq-720004-redhat-1]
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_171]
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_171]
      	at org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118) [artemis-commons-2.6.1.amq-720004-redhat-1.jar:2.6.1.amq-720004-redhat-1]
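
      The IndexOutOfBoundsException itself is Netty's bounds check in AbstractByteBuf.writerIndex: per the trace, ResetLimitWrappedActiveMQBuffer re-wraps the forwarded message body and tries to set a writerIndex of 4 on a buffer whose capacity is 0. A minimal standalone sketch of that check, assuming only Netty 4.1 on the classpath (this is not broker code):

      import io.netty.buffer.ByteBuf;
      import io.netty.buffer.Unpooled;

      public class WriterIndexCheck {
          public static void main(String[] args) {
              // A heap buffer with capacity() == 0, like the re-wrapped body in the trace above.
              ByteBuf body = Unpooled.buffer(0);
              try {
                  // Same precondition that fails at AbstractByteBuf.writerIndex(AbstractByteBuf.java:118)
                  body.writerIndex(4);
              } catch (IndexOutOfBoundsException e) {
                  // Prints: writerIndex: 4 (expected: readerIndex(0) <= writerIndex <= capacity(0))
                  System.out.println(e.getMessage());
              } finally {
                  body.release();
              }
          }
      }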
      

      This is followed by a huge accumulation of messages in the internal store-and-forward (SF) queues. Contents of the data folder on each node (folder -> number of files -> folder size):

      Node1:
      journal -> 101 -> 1005M
      paging -> 6 -> 32K
      large-messages -> 1 -> 2.8M

      Node2:
      journal -> 101 -> 1005M
      paging -> 4 -> 24K
      large-messages -> 0 -> 0M

      Node3:
      journal -> 101 -> 1005M
      paging -> 4 -> 28K
      large-messages -> 145 -> 124K -> Note: almost all these files have a 0 size.
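
      For reference, a per-folder summary like the one above can be produced with a small JDK-only sketch such as the following; the default data directory path is an assumption about the broker instance layout, not taken from the report:

      import java.io.IOException;
      import java.nio.file.Files;
      import java.nio.file.Path;
      import java.nio.file.Paths;
      import java.util.List;
      import java.util.stream.Collectors;
      import java.util.stream.Stream;

      public class DataFolderSummary {
          public static void main(String[] args) throws IOException {
              // Pass the broker's data directory as the first argument, or adjust the default.
              Path dataDir = Paths.get(args.length > 0 ? args[0] : "/opt/amq/broker/data");
              for (String name : new String[] {"journal", "paging", "large-messages"}) {
                  Path dir = dataDir.resolve(name);
                  if (!Files.isDirectory(dir)) {
                      continue;
                  }
                  List<Path> files;
                  try (Stream<Path> walk = Files.walk(dir)) {
                      files = walk.filter(Files::isRegularFile).collect(Collectors.toList());
                  }
                  long bytes = files.stream().mapToLong(p -> p.toFile().length()).sum();
                  // folder -> number of files -> folder size, matching the listing above
                  System.out.printf("%s -> %d -> %dK%n", name, files.size(), bytes / 1024);
              }
          }
      }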

    People

      Assignee: Francesco Nigro (fnigro)
      Reporter: Mohamed Amine Belkoura (rhn-support-abelkour)
      Votes: 0
      Watchers: 6
