Red Hat Fuse / ENTESB-6768

mq-gateway stops handling requests


Details


    Description

      In a scenario very similar to ENTESB-4610, the gateway server stops responding. The first indication that something is wrong is:

      2017-04-14 22:08:11,357 | ERROR | entloop-thread-0 | DefaultContext                   | 157 - io.fabric8.fabric-vertx - 1.2.0.redhat-630187 | Unhandled exception
      java.lang.IllegalStateException: Unbalanced calls to release detected.
      	at io.fabric8.common.util.ShutdownTracker.release(ShutdownTracker.java:88)[66:io.fabric8.common-util:1.2.0.redhat-630187]
      	at io.fabric8.gateway.handlers.detecting.DetectingGateway.handleShutdown(DetectingGateway.java:447)[158:io.fabric8.gateway-core:1.2.0.redhat-630187-04]
      	at io.fabric8.gateway.handlers.detecting.DetectingGateway.access$400(DetectingGateway.java:54)[158:io.fabric8.gateway-core:1.2.0.redhat-630187-04]
      	at io.fabric8.gateway.handlers.detecting.DetectingGateway$6$1.handle(DetectingGateway.java:421)[158:io.fabric8.gateway-core:1.2.0.redhat-630187-04]
      	at io.fabric8.gateway.handlers.detecting.DetectingGateway$6$1.handle(DetectingGateway.java:418)[158:io.fabric8.gateway-core:1.2.0.redhat-630187-04]
      	at org.vertx.java.core.net.impl.DefaultNetSocket.handleClosed(DefaultNetSocket.java:240)[157:io.fabric8.fabric-vertx:1.2.0.redhat-630187]
      	at org.vertx.java.core.net.impl.VertxHandler$3.run(VertxHandler.java:120)[157:io.fabric8.fabric-vertx:1.2.0.redhat-630187]
      	at org.vertx.java.core.impl.DefaultContext$3.run(DefaultContext.java:175)[157:io.fabric8.fabric-vertx:1.2.0.redhat-630187]
      	at org.vertx.java.core.impl.DefaultContext.execute(DefaultContext.java:135)[157:io.fabric8.fabric-vertx:1.2.0.redhat-630187]
      	at org.vertx.java.core.net.impl.VertxHandler.channelInactive(VertxHandler.java:118)[157:io.fabric8.fabric-vertx:1.2.0.redhat-630187]
      	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:233)[167:io.netty.transport:4.0.37.Final-redhat-2]
      	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:219)[167:io.netty.transport:4.0.37.Final-redhat-2]
      	at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:212)[167:io.netty.transport:4.0.37.Final-redhat-2]
      	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1275)[167:io.netty.transport:4.0.37.Final-redhat-2]
      	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:233)[167:io.netty.transport:4.0.37.Final-redhat-2]
      	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:219)[167:io.netty.transport:4.0.37.Final-redhat-2]
      	at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:872)[167:io.netty.transport:4.0.37.Final-redhat-2]
      	at io.netty.channel.AbstractChannel$AbstractUnsafe$7.run(AbstractChannel.java:679)[167:io.netty.transport:4.0.37.Final-redhat-2]
      	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:358)[165:io.netty.common:4.0.37.Final-redhat-2]
      	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:394)[167:io.netty.transport:4.0.37.Final-redhat-2]
      	at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)[165:io.netty.common:4.0.37.Final-redhat-2]
      	at java.lang.Thread.run(Thread.java:745)[:1.8.0_65]
      

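      The "Unbalanced calls to release detected" error means ShutdownTracker.release() was called more times than the matching retain(), and the stack trace shows the extra release() coming from the socket close path in DetectingGateway.handleShutdown(). The fabric8 ShutdownTracker source is not reproduced here; the class below (SimpleTracker) is a hypothetical, simplified counter that only illustrates how one release() too many, for example a close handler firing twice for the same connection, produces exactly this exception.

      import java.util.concurrent.atomic.AtomicInteger;

      // Hypothetical stand-in for a retain/release tracker; NOT the fabric8
      // ShutdownTracker source, only an illustration of the failure mode.
      class SimpleTracker {
          private final AtomicInteger refs = new AtomicInteger(0);

          void retain() {
              refs.incrementAndGet();
          }

          void release() {
              // One release() too many drives the counter negative and fails
              // the same way the gateway log does.
              if (refs.decrementAndGet() < 0) {
                  throw new IllegalStateException("Unbalanced calls to release detected.");
              }
          }

          public static void main(String[] args) {
              SimpleTracker tracker = new SimpleTracker();
              tracker.retain();   // connection accepted
              tracker.release();  // connection closed
              tracker.release();  // duplicate close notification -> IllegalStateException
          }
      }

      Whether handleShutdown() really is invoked twice for the same socket cannot be confirmed from this log alone; the sketch only shows the counting behind the exception that appears as the first sign of trouble.
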
      General sequence of events:

      1)  The first sign of trouble was:
      2017-04-14 22:08:11,357 | Unhandled exception java.lang.IllegalStateException: Unbalanced calls to release detected.
      
      2)  Repeated errors:
      2017-04-14 22:08:33,237 | Gateway client '/X.X.X.X:22145' closed the connection before it could be routed.
      
      3)  Notification that the ZooKeeper connection state was SUSPENDED (see the Curator listener sketch after this list):
      2017-04-14 22:11:52,862 | INFO  | ad-1-EventThread | ConnectionStateManager           | 75 - io.fabric8.fabric-zookeeper - 1.2.0.redhat-630187 | State change: SUSPENDED
      
      4)  ZooKeeper loss of connection:
      2017-04-14 22:12:23,358 | ERROR | ad-1-EventThread | ConnectionState                  | 75 - io.fabric8.fabric-zookeeper - 1.2.0.redhat-630187 | Connection timed out for connection string (HOST1:2181,HOST2:2181,HOST3:2181,HOST4:2181,HOST5:2181) and timeout (15000) / elapsed (30494)
      org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = ConnectionLoss
      
      5)  Notification that the ZooKeeper connection state was RECONNECTED:
      2017-04-14 22:13:57,992 | INFO  | ad-1-EventThread | ConnectionStateManager           | 75 - io.fabric8.fabric-zookeeper - 1.2.0.redhat-630187 | State change: RECONNECTED
      ...
      2017-04-14 22:13:58,518 | ERROR | p-gml-1-thread-1 | ZooKeeperGroup                   | 69 - io.fabric8.fabric-groups - 1.2.0.redhat-630187 | 
      org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /fabric/registry/clusters/git/00000000386
      
      6)  No further logging or activity occurred until the server was restarted:
      2017-04-19 20:08:24,392 | INFO  | Thread-2         | Main                             |  -  -  | Karaf shutdown socket: received shutdown command. Stopping framework...
      
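      Steps 3) to 5) are standard Curator connection-state transitions (SUSPENDED, ConnectionLoss, RECONNECTED) reported by the fabric8 ZooKeeper layer. For reference, the sketch below is a minimal standalone Curator client that logs the same state changes; the connection string and the 15000 ms session timeout are copied from the step 4) log line, while the class name and retry policy are illustrative assumptions, not the fabric8 configuration.

      import org.apache.curator.framework.CuratorFramework;
      import org.apache.curator.framework.CuratorFrameworkFactory;
      import org.apache.curator.framework.state.ConnectionState;
      import org.apache.curator.framework.state.ConnectionStateListener;
      import org.apache.curator.retry.ExponentialBackoffRetry;

      // Illustrative only: logs SUSPENDED / RECONNECTED / LOST transitions
      // like the ones seen in steps 3) to 5).
      public class ZkStateWatch {
          public static void main(String[] args) throws Exception {
              CuratorFramework client = CuratorFrameworkFactory.builder()
                      .connectString("HOST1:2181,HOST2:2181,HOST3:2181,HOST4:2181,HOST5:2181")
                      .sessionTimeoutMs(15000) // timeout reported in the step 4) log line
                      .retryPolicy(new ExponentialBackoffRetry(1000, 3)) // assumed retry policy
                      .build();

              client.getConnectionStateListenable().addListener(new ConnectionStateListener() {
                  @Override
                  public void stateChanged(CuratorFramework c, ConnectionState newState) {
                      // SUSPENDED: the session may still be alive; keep registrations.
                      // RECONNECTED: ephemeral nodes may have expired (compare the
                      //              NoNode error in step 5) and may need re-creation.
                      // LOST: the session expired; all ephemeral registrations are gone.
                      System.out.println("State change: " + newState);
                  }
              });

              client.start();
              Thread.sleep(Long.MAX_VALUE); // keep the process alive to observe transitions
          }
      }

      The NoNode error for /fabric/registry/clusters/git/00000000386 after RECONNECTED is consistent with an ephemeral registration that disappeared while the connection was suspended, but that is an inference from the log, not a confirmed root cause.
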

      Attachments

        1. gw.zip
          2.16 MB
        2. testA.zip
          1.16 MB
        3. testB.tris.zip
          1.16 MB
        4. testB.zip
          1.16 MB

        Issue Links

          Activity

            People

              atarocch@redhat.com Andrea Tarocchi (Inactive)
              rhn-support-sjavurek Susan Javurek
              Votes:
              0
              Watchers:
              10

              Dates

                Created:
                Updated:
                Resolved: