Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Duplicate Issue
    • Affects Version/s: 2.1 GA
    • Fix Version/s: None
    • Component/s: Backend
    • Environment:

      AMP2.1, OCP 3.x (any supported configuration)

    • Steps to Reproduce:

      Authorise a few requests for one application through APIcast, then kill the backend-redis container:

      docker ps
      docker kill backend-redis-xyz
      

      Authorise the same application through APIcast. The first request will succeed because the key exists in the cache, but the backend-listener logs will show the following error:

      Exception `Redis::ConnectionError' at /opt/ruby/3scale_backend-2.77.1.1/lib/3scale/backend/logger/middleware.rb:35 - Connection lost (ECONNRESET)
      2018-01-03 17:11:41 +0000: Rack app error: #<Redis::ConnectionError: Connection lost (ECONNRESET)>
      

      The second request for the same application through APIcast will return a 403 to the client, as the cache entry will have been deleted based on the previous response; the same error as above appears in the logs.

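The steps above can be scripted end-to-end. This is a minimal sketch; the gateway URL, user_key, and container name are placeholders for your environment:

```shell
# Placeholders -- substitute real values from your deployment.
GATEWAY="https://apicast-staging.example.com"
USER_KEY="REPLACE_WITH_USER_KEY"

# 1. Warm the APIcast cache with an authorised request (expect 200).
curl -s -o /dev/null -w "%{http_code}\n" "${GATEWAY}/?user_key=${USER_KEY}"

# 2. Find and kill the backend-redis container.
docker ps --filter name=backend-redis --format '{{.Names}}'
docker kill backend-redis-xyz   # use the real name printed above

# 3. Repeat the request twice: the first call still succeeds from the
#    gateway cache; the second returns 403 once the cache entry is dropped.
curl -s -o /dev/null -w "%{http_code}\n" "${GATEWAY}/?user_key=${USER_KEY}"
curl -s -o /dev/null -w "%{http_code}\n" "${GATEWAY}/?user_key=${USER_KEY}"
```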
    • Workaround Description:

      A useful workaround that helps in some situations is to set the APICAST_BACKEND_CACHE_HANDLER environment variable to resilient, so that previously authorised keys at least do not fail. This does not help if the key was not already in the gateway cache, though.

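On OpenShift this workaround can be applied with `oc set env`. A sketch, assuming the APIcast DeploymentConfig names of a default AMP install (verify against your project with `oc get dc`):

```shell
# Assumed dc names -- adjust to your deployment.
oc set env dc/apicast-staging APICAST_BACKEND_CACHE_HANDLER=resilient
oc set env dc/apicast-production APICAST_BACKEND_CACHE_HANDLER=resilient

# Confirm the variable is set; the pods redeploy on the env change.
oc set env dc/apicast-staging --list | grep APICAST_BACKEND_CACHE_HANDLER
```

Note that resilient mode only keeps previously cached authorisations working; keys not already in the gateway cache still fail while backend-redis is down.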
    • QE Test Coverage:
      -

      Description

      When the backend-redis container is killed or scaled down, it is respawned based on the number of replicas in the dc. The backend-listener component should be able to reconnect to the new Redis container to authorise and report new traffic, but it always fails on the first two connection attempts.

      Most likely this happens because the previous connections are cached in the listener: it first fails connecting to the data db and then fails connecting to the resque db.

      Unfortunately this results in a failed request for previously authorised clients (only one client request will fail), or, if the first two requests processed by the listener are from clients that are not in the APIcast cache, both requests will fail.

      There is already an issue in GH/backend tracking why we are unable to address this currently: we cannot simply retry, because retrying carries a risk of double-reporting traffic.
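The respawn scenario above can also be reproduced on OCP itself rather than by killing the Docker container directly. A sketch, assuming default dc names:

```shell
# Scale backend-redis down and back up; OCP respawns it per the dc replicas.
oc scale dc/backend-redis --replicas=0
oc scale dc/backend-redis --replicas=1

# Send a couple of requests through APIcast, then inspect the listener logs
# for the two expected connection failures (data db, then resque db).
oc logs dc/backend-listener | grep -i 'Redis::ConnectionError'
```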

People

    • Assignee: Unassigned
    • Reporter: kevprice Kevin Price
    • Votes: 1
    • Watchers: 6

Dates

    • Created:
    • Updated:
    • Resolved: