  Hawkular Metrics / HWKMETRICS-492

Duplicate instances of compression job get scheduled on server restart

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Done
    • Affects Version/s: 0.20.0
    • Fix Version/s: 0.21.0
    • Component/s: Core, Scheduler
    • Labels:
      None

      Description

      A job gets created and scheduled by calling Scheduler.scheduleJob. Each job receives a unique id in the form of a UUID. The job scheduler does not perform any checks to see whether a job is already scheduled. JobsServiceImpl.start gets called on server startup and schedules the compression job. It does not check whether the compression job is already scheduled, so another instance of the job gets scheduled on each server restart. The operations performed by the job should be idempotent, which means that in theory having multiple instances running concurrently shouldn't break things; however, the job performs a lot of queries, both reads and writes, and having multiple instances running could overload Cassandra and/or the driver.
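
      One way to avoid the duplicate, sketched below, is for JobsServiceImpl.start to look up already scheduled jobs before calling Scheduler.scheduleJob. This is a minimal sketch only: the interfaces, the getScheduledJobs lookup, and the simplified scheduleJob signature are illustrative assumptions, not the actual Hawkular Metrics scheduler API.

{code:java}
import java.util.List;
import java.util.UUID;

// Sketch only: these interfaces stand in for the scheduler types.
// The method names and signatures (getScheduledJobs, scheduleJob) are
// illustrative assumptions, not the project's actual API.
interface JobDetails {
    UUID getJobId();
    String getJobName();
}

interface Scheduler {
    List<JobDetails> getScheduledJobs();               // hypothetical lookup of persisted jobs
    JobDetails scheduleJob(String type, String name);  // simplified stand-in for scheduleJob
}

class JobsServiceImpl {
    private static final String COMPRESS_JOB = "COMPRESS_DATA";

    private final Scheduler scheduler;

    JobsServiceImpl(Scheduler scheduler) {
        this.scheduler = scheduler;
    }

    // Guard against duplicates: only schedule the compression job when no job
    // with the same name is already registered, so a server restart does not
    // add a second instance.
    void start() {
        boolean alreadyScheduled = scheduler.getScheduledJobs().stream()
                .anyMatch(job -> COMPRESS_JOB.equals(job.getJobName()));
        if (!alreadyScheduled) {
            scheduler.scheduleJob("compression", COMPRESS_JOB);
        }
    }
}
{code}

      The same effect could also be achieved by deriving the job id deterministically (e.g. from the job name) instead of generating a fresh UUID per scheduling call, so that rescheduling the same job is a no-op.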


                People

                • Assignee: John Sanda
                • Reporter: John Sanda
                • Votes: 0
                • Watchers: 1
