Red Hat Fuse
ENTESB-3992

GATEWAY-HTTP does not work as expected with the load balancer option set to sticky.


    • Type: Bug
    • Resolution: Done
    • Priority: Major
    • Fix Version: jboss-fuse-6.2.1
    • Affects Version: jboss-fuse-6.2
    • Component: Fabric8 v1
    • Labels: None
      • Extract the zip and build it with the command mvn clean install.
      • In Fabric, create a profile and add this WAR to it using the commands below:
        profile-create --parents feature-fabric-web TestWebProfile
        profile-edit --bundles mvn:test.war/testWar/1.0.0/war TestWebProfile
        
      • Now create containers and assign profiles to them as below:
        JBossFuse:karaf@root> container-list
        [id]   [version]  [type]  [connected]  [profiles]              [provision status]
        root*  1.0        karaf   yes          fabric                  success           
                                               fabric-ensemble-0000-1                    
                                               jboss-fuse-full                           
          abc  1.0        karaf   yes          default                 success           
                                               gateway-http                              
          lmn  1.0        karaf   yes          default                 success           
                                               TestWebProfile                            
          xyz  1.0        karaf   yes          default                 success           
                                               TestWebProfile        
        
      • The default behaviour is round-robin, so concurrent requests are divided equally between the two nodes lmn and xyz.
      • Now, within the gateway-http profile, edit the property file io.fabric8.gateway.http.mapping-webapps.properties and append the following additional property (a console-based alternative is sketched after these steps):
        loadBalancerType=sticky
        
      • After this, refresh the profile.
      • Now check the behaviour: there is no response to the requests sent, although the logs show the requests being received and proxied as expected.
        2015-09-13 00:54:22,270 | INFO  | 29-c6d85cd62e57) | HttpMappingRuleConfiguration     | 150 - io.fabric8.gateway-fabric - 1.2.0.redhat-133 | activating http mapping rule {component.name=io.fabric8.gateway.http.mapping, stickyLoadBalancerCacheSize=5, reverseHeaders=true, uriTemplate={contextPath}/, service.factoryPid=io.fabric8.gateway.http.mapping, fabric.zookeeper.pid=io.fabric8.gateway.http.mapping-webapps, zooKeeperPath=/fabric/registry/clusters/webapps, loadBalancerType=sticky, service.pid=io.fabric8.gateway.http.mapping.bc51dfe7-fd7c-4456-b329-c6d85cd62e57, component.id=62}
        2015-09-13 00:54:22,271 | INFO  | 29-c6d85cd62e57) | HttpMappingRuleConfiguration     | 150 - io.fabric8.gateway-fabric - 1.2.0.redhat-133 | activating http mapping rule /fabric/registry/clusters/webapps on 9000
        2015-09-13 00:54:22,271 | INFO  | 29-c6d85cd62e57) | HttpMappingRuleConfiguration     | 150 - io.fabric8.gateway-fabric - 1.2.0.redhat-133 | activating http mapping ZooKeeper path: /fabric/registry/clusters/webapps with URI template: {contextPath}/ enabledVersion: null with load balancer: StickyLoadBalancer{maximumCacheSize=5}
        2015-09-13 00:54:22,273 | INFO  | 29-c6d85cd62e57) | HttpMappingZooKeeperTreeCache    | 151 - io.fabric8.gateway-fabric-support - 1.2.0.redhat-133 | Started listening to ZK path /fabric/registry/clusters/webapps
        
        // concurrent requests
        2015-09-13 01:01:21,246 | INFO  | entloop-thread-0 | HttpGatewayHandler               | 149 - io.fabric8.gateway-core - 1.2.0.redhat-133 | Proxying request /demo/index.jsp to service path: /demo/index.jsp on service: http://192.168.166.1:8184/demo reverseServiceUrl: http://0.0.0.0:9000/demo
        2015-09-13 01:01:21,247 | INFO  | entloop-thread-0 | HttpGatewayHandler               | 149 - io.fabric8.gateway-core - 1.2.0.redhat-133 | Proxying request /demo/index.jsp to service path: /demo/index.jsp on service: http://192.168.166.1:8184/demo reverseServiceUrl: http://0.0.0.0:9000/demo
        2015-09-13 01:01:21,247 | INFO  | entloop-thread-0 | HttpGatewayHandler               | 149 - io.fabric8.gateway-core - 1.2.0.redhat-133 | Proxying request /demo/index.jsp to service path: /demo/index.jsp on service: http://192.168.166.1:8184/demo reverseServiceUrl: http://0.0.0.0:9000/demo
        2015-09-13 01:01:21,248 | INFO  | entloop-thread-0 | HttpGatewayHandler               | 149 - io.fabric8.gateway-core - 1.2.0.redhat-133 | Proxying request /demo/index.jsp to service path: /demo/index.jsp on service: http://192.168.166.1:8184/demo reverseServiceUrl: http://0.0.0.0:9000/demo
        2015-09-13 01:01:21,248 | INFO  | entloop-thread-0 | HttpGatewayHandler               | 149 - io.fabric8.gateway-core - 1.2.0.redhat-133 | Proxying request /demo/index.jsp to service path: /demo/index.jsp on service: http://192.168.166.1:8184/demo reverseServiceUrl: http://0.0.0.0:9000/demo
        2015-09-13 01:01:21,248 | INFO  | entloop-thread-0 | HttpGatewayHandler               | 149 - io.fabric8.gateway-core - 1.2.0.redhat-133 | Proxying request /demo/index.jsp to service path: /demo/index.jsp on service: http://192.168.166.1:8184/demo reverseServiceUrl: http://0.0.0.0:9000/demo
        
      • If the property is set as loadBalancerType=loadbalancer, there is no issue; the gateway-http profile then acts as a load balancer.
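      As mentioned in the property-edit step above, the same change can likely be applied from the Fuse console instead of editing the file by hand. This is a minimal sketch, not part of the original report; it assumes the standard profile-edit --pid syntax and uses the PID io.fabric8.gateway.http.mapping-webapps and the profile name gateway-http shown above.
        # append the property to the gateway-http profile's PID (assumed equivalent to editing the file)
        JBossFuse:karaf@root> profile-edit --pid io.fabric8.gateway.http.mapping-webapps/loadBalancerType=sticky gateway-http
        # confirm the property is now part of the profile configuration
        JBossFuse:karaf@root> profile-display gateway-http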

      • The gateway-http profile supports load balancing by default.
      • But when the load-balancing type is explicitly set to sticky, no response is received; when browsing in a browser, the page keeps loading indefinitely.
      • The URL at which this WAR can be tested directly is http://localhost:8183/demo.
      • Through gateway-http the URL is http://localhost:9000/demo/ (see the curl sketch below).
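
      A quick way to observe the reported behaviour is to hit both URLs repeatedly and compare. This is a minimal sketch, not part of the original report; it assumes curl is available and uses the URLs listed above with a 5-second timeout so the hanging gateway requests fail fast.
        # direct to the container's web server: each request should return 200
        for i in 1 2 3 4 5; do curl -s -o /dev/null -w "%{http_code}\n" --max-time 5 http://localhost:8183/demo; done

        # through the HTTP gateway with loadBalancerType=sticky: requests hang and curl reports 000 on timeout
        for i in 1 2 3 4 5; do curl -s -o /dev/null -w "%{http_code}\n" --max-time 5 http://localhost:9000/demo/; done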

            Assignee: hchirino Hiram Chirino
            Reporter: rhn-support-cpandey Chandra Shekhar Pandey (Inactive)
            Votes: 0
            Watchers: 6

              Created:
              Updated:
              Resolved: