A job is created and scheduled by calling Scheduler.scheduleJob. Each job receives a unique id in the form of a UUID. The scheduler performs no check to see whether a job is already scheduled. JobsServiceImpl.start is called on server start up and schedules the compression job. Because it does not check whether the compression job is already scheduled, another instance of the job gets scheduled on every server restart. The operations performed by the job should be idempotent, which means that in theory having multiple instances running concurrently shouldn't break things; however, the job performs a lot of queries, both reads and writes, so having multiple instances running could overload Cassandra and/or the driver.
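One way to avoid the duplicate-scheduling problem is to make scheduling itself idempotent: look up an existing job by name before creating a new one, so a restart re-uses the already-scheduled job instead of adding another instance. The sketch below is a minimal, hypothetical illustration of that guard using an in-memory map; the class and method names (DedupingScheduler, scheduleJob) mirror the API described above but are assumptions, and a real implementation would check the persisted job store, not a map.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: de-duplicate scheduling by job name so that calling
// scheduleJob again (e.g. on server restart) returns the existing job id
// instead of scheduling a second instance.
public class DedupingScheduler {

    // Stand-in for the persisted job index; keyed by job name.
    private final Map<String, UUID> scheduledJobs = new ConcurrentHashMap<>();

    /**
     * Schedules the named job if it is not already scheduled; otherwise
     * returns the id of the existing job. computeIfAbsent makes the
     * check-then-schedule step atomic for this in-memory map.
     */
    public UUID scheduleJob(String jobName) {
        return scheduledJobs.computeIfAbsent(jobName, name -> {
            UUID jobId = UUID.randomUUID();
            // ...persist job details and register its trigger here...
            return jobId;
        });
    }
}
```

With this guard, JobsServiceImpl.start could call scheduleJob("compression") unconditionally on every start up and still end up with exactly one scheduled compression job.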