Hawkular Metrics

Temp tables are not getting deleted


    • Type: Bug
    • Status: Resolved (View Workflow)
    • Priority: Major
    • Resolution: Done
    • Affects Version/s: 0.28.0, 0.29.0
    • Fix Version/s: 0.31.0, 0.30.3
    • Component/s: None
    • Labels:


      I have observed in a number of OpenShift clusters that old temp tables are not getting deleted. I have seen the number of temp tables grow to well over 100, and to more than 300 in one cluster. This causes performance problems with Cassandra.

      Temp tables get dropped at the end of the compression job. It would seem that the most likely cause of this issue is the compression job not running for some reason. Temp tables are created at two points: by the TempTableCreator job and during start up.

      In one OpenShift cluster I was investigating, the hawkular-metrics pod had been restarted over 600 times in a span of just 6 days. Hawkular Metrics was crashing with an OOME shortly after start up. In this instance, I can see how we would end up with extra temp tables. They were getting created at start up if necessary. The compression job where the deletion would happen was not getting a chance to run because of Hawkular Metrics continually crashing.

      Unfortunately, we do not have enough logging from the clusters to know if/when the compression job had been running. Aside from a restart scenario like the one described above, if the compression job is not running, then the temp table creation job would still have to run in order to wind up with a bunch of extra tables.
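To illustrate how an operator might audit a cluster for leftovers, here is a minimal sketch that flags temp tables whose name-embedded timestamp is older than a retention window. The naming pattern `data_temp_YYYYMMDDHH` and the 26-hour window are assumptions for the example, not necessarily the scheme Hawkular Metrics actually uses.

```python
from datetime import datetime, timedelta

def stale_temp_tables(table_names, now, max_age_hours=26):
    """Return temp table names whose embedded timestamp is older than
    max_age_hours. Assumes (hypothetically) names like data_temp_YYYYMMDDHH."""
    stale = []
    for name in table_names:
        if not name.startswith("data_temp_"):
            continue  # not a temp table
        suffix = name[len("data_temp_"):]
        try:
            created = datetime.strptime(suffix, "%Y%m%d%H")
        except ValueError:
            continue  # name does not match the assumed pattern
        if now - created > timedelta(hours=max_age_hours):
            stale.append(name)
    return stale

# Example: one table well past the window, one recent, one unrelated.
now = datetime(2018, 3, 10, 12)
tables = ["data_temp_2018030100", "data_temp_2018031011", "data_0"]
print(stale_temp_tables(tables, now))  # ['data_temp_2018030100']
```

A list like this could then be cross-checked against the compression job's logs to see whether the drop step ever ran for those tables.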

    • Assignee: john.sanda John Sanda