JBoss Messaging
JBMESSAGING-920

Memory Leaks when opening/closing connections


      Description

      Kurt's report:

      I have extracted the 'essence' of what our ESB engine does with JMS messaging. I'm sending a message to queue/A, then receiving the same message, then doing some cleanup (closing the session and connection), and then it starts all over again. Yes, for each message I'm setting up the connection and then cleaning it up.

      The problem I'm seeing is that the GC is not able to clean up the JMS objects as it loops around. So at some point this example nicely blows up when it runs out of memory. Your mileage may vary, but it is usually around 800 messages.

      Any ideas why the GC can't make this work? I've tested this with JBM 1.2 GA, but I think JBossMQ has the same problem. I'm running JDK 1.5.0_10.

      JBossMQ crashes too. Although it's a different error, it's also due to running out of memory.
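
      A minimal sketch of the loop described above, assuming the standard JMS 1.1 API, a JNDI-bound "ConnectionFactory" and the queue/A destination mentioned in the report (the attached test may well differ in detail):

          import javax.jms.*;
          import javax.naming.InitialContext;

          public class ConnectionChurnTest {
              public static void main(String[] args) throws Exception {
                  for (int i = 0; i < 10000; i++) {
                      InitialContext ctx = new InitialContext();
                      ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
                      Queue queue = (Queue) ctx.lookup("queue/A");

                      // Set up a fresh connection and session for every message.
                      Connection connection = cf.createConnection();
                      Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                      connection.start();

                      // Send one message and immediately receive it back.
                      session.createProducer(queue).send(session.createTextMessage("msg " + i));
                      Message received = session.createConsumer(queue).receive(5000);

                      // Cleanup: close the session, the connection and the JNDI context,
                      // then loop around and do it all again.
                      session.close();
                      connection.close();
                      ctx.close();
                  }
              }
          }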

        Activity

        Kevin Conner added a comment:

        This is from an email I sent to the ESB group earlier today.
        Kev

        -----------------------------------------------------------------------------------------------------------

        From what I can see, the issue appears to be related to two things:

        • The way the current codebase repeatedly creates connections
        • The way JBoss Remoting works

        One cause appears to be the timers used by remoting. From what I can
        see there are three in play:

        • ConnectionValidator
        • BisocketServerInvoker$ControlMonitorTimerTask
        • LeasePinger$LeaseTimerTask

        Each one of these timers has indirect access to the majority of the heap.

        The big culprit in the timers appears to be LeasePinger$LeaseTimerTask.
        When connections are closed this task is cancelled, but unfortunately
        cancelling a j.u.TimerTask does not remove the task from the queue; all
        it does is mark it as cancelled. The consequence of this is that every
        instance referenced by the task cannot be garbage collected until the
        timer would normally have fired (and the task is then removed from the queue).
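
        A small, self-contained illustration of that j.u.TimerTask behaviour (the payload field is purely illustrative and stands in for the connection state the LeasePinger task keeps reachable):

            import java.util.Timer;
            import java.util.TimerTask;

            public class CancelledTaskRetention {
                public static void main(String[] args) {
                    Timer timer = new Timer(true);

                    TimerTask task = new TimerTask() {
                        private final byte[] payload = new byte[64 * 1024]; // illustrative state
                        public void run() { /* never runs; we cancel first */ }
                    };

                    // Schedule far in the future, then cancel, as happens when a connection is closed.
                    timer.schedule(task, 10 * 60 * 1000L);
                    task.cancel();

                    // cancel() only marks the task as cancelled; it stays in the timer's queue
                    // (keeping 'payload' reachable) until its scheduled time arrives or until
                    // Timer.purge() is called explicitly.
                    System.out.println("Cancelled tasks removed by purge(): " + timer.purge());
                }
            }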

        Referenced from each LeasePinger instance is a BisocketClientInvoker
        which contains a ClientSocketWrapper. Each ClientSocketWrapper
        references a Socket, a DataInputStream (containing BufferedInputStream)
        and a DataOutputStream (containing BufferedOutputStream). Each BIS/BOS
        contains a 64k array! In my tests these instances amount to a
        cumulative size of about 1/3 of the heap.
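
        For context, a buffered stream allocates its byte[] eagerly in its constructor, so each leaked wrapper pins roughly 128k of buffers on top of the socket itself. A rough illustration (not the actual ClientSocketWrapper code; the 64k size is simply what the profile showed):

            import java.io.*;

            public class WrapperFootprint {
                public static void main(String[] args) {
                    // Stand-ins for Socket.getInputStream()/getOutputStream().
                    InputStream rawIn = new ByteArrayInputStream(new byte[0]);
                    OutputStream rawOut = new ByteArrayOutputStream();

                    // Each constructor allocates its 64k buffer immediately, so one
                    // uncollected wrapper retains about 128k before any data flows.
                    DataInputStream in = new DataInputStream(new BufferedInputStream(rawIn, 64 * 1024));
                    DataOutputStream out = new DataOutputStream(new BufferedOutputStream(rawOut, 64 * 1024));

                    System.out.println("Buffers held per wrapper: " + (2 * 64 * 1024) + " bytes");
                }
            }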

        Another cause appears to be the use of hash maps. There are numerous
        HashMaps referenced from BisocketServerInvoker and BisocketClientInvoker
        which do not appear to be garbage collected. One reason is the above
        timers, but a second is that BisocketServerInvoker holds on to
        BisocketServerInvoker references in a static map called
        listenerIdToServerInvokerMap. This map currently contains an instance
        of BisocketServerInvoker for every iteration of the loop.
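
        The pattern, reduced to a minimal example (simplified names, not the actual Remoting source): a static map keyed by listener id that is populated on connect but never cleaned up on close keeps every invoker strongly reachable:

            import java.util.HashMap;
            import java.util.Map;

            // Simplified illustration only; the real BisocketServerInvoker code differs.
            public class StaticMapLeak {
                static final Map<String, Object> listenerIdToServerInvokerMap = new HashMap<String, Object>();

                static void register(String listenerId, Object serverInvoker) {
                    listenerIdToServerInvokerMap.put(listenerId, serverInvoker);
                }

                static void unregister(String listenerId) {
                    // If this remove() is missing, or is never called, every invoker added above
                    // stays strongly reachable from the static map for the life of the class loader.
                    listenerIdToServerInvokerMap.remove(listenerId);
                }
            }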

        This has all been discovered from examining profile information, not
        source code. It may be that this analysis is completely wrong and that
        examination of the source code will highlight other issues.

        Kev

        Kurt Stam added a comment:

        Here is the latest version of the test. I'm closing the JNDI context now.

        Kevin Conner added a comment:

        There were two issues found with remoting on the client side and I have now found an issue with Messaging.

        The ServerSessionEndpoint code contains an executor for each instance. Unfortunately nothing shuts down the executor, which means that the threads created by the executor are never destroyed.

        I modified the messaging code to include a call to executor.shutdownNow() in ServerSessionEndpoint.close() and this appears to have done the trick.
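
        A sketch of that change, assuming the executor is exposed as a java.util.concurrent ExecutorService; the field and the surrounding method body are simplified rather than taken from the actual ServerSessionEndpoint source:

            import java.util.concurrent.ExecutorService;

            class SessionEndpointSketch {
                private final ExecutorService executor;

                SessionEndpointSketch(ExecutorService executor) {
                    this.executor = executor;
                }

                public void close() {
                    // ... existing per-session cleanup ...

                    // Shut the per-session executor down so its worker threads terminate.
                    // A live thread is a GC root, so as long as those threads exist
                    // everything they reference stays ineligible for collection.
                    executor.shutdownNow();
                }
            }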


          People

          • Assignee: Ovidiu Feodorov
          • Reporter: Ovidiu Feodorov