HornetQ / HORNETQ-399

Journal is creating smaller files than the file size setting

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Done
    • Affects Version/s: 2.0.0.GA, 2.1.0.BETA1, 2.1.0.BETA2, 2.1.0.BETA3, 2.1.0.CR1
    • Fix Version/s: 2.1.0.Final
    • Component/s: Core
    • Labels:
      None

      Description

      Journal cleanup kicks in to avoid dependencies between files, which would otherwise cause the "linked-list effect" we discussed during the design phase.

      During cleanup we create a new file, but we do not fill it up to the full 10 MiB (or whatever size is configured), as that would be a waste.

      Those files were not supposed to go back to the free-file pool when released, but that is currently happening.
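
      A minimal sketch of the invariant the description implies (hypothetical names, not the actual HornetQ API): when a journal file is released, only files that match the configured size go back to the free-file pool, while undersized files produced during cleanup are discarded.

          import java.util.ArrayDeque;
          import java.util.Deque;

          class JournalFilePool {
              private final long configuredFileSize;                 // e.g. 10 * 1024 * 1024
              private final Deque<JournalFile> freeFiles = new ArrayDeque<>();

              JournalFilePool(long configuredFileSize) {
                  this.configuredFileSize = configuredFileSize;
              }

              void release(JournalFile file) {
                  if (file.size() == configuredFileSize) {
                      freeFiles.add(file);   // full-size file: safe to reuse
                  } else {
                      file.delete();         // undersized cleanup file: do not reuse
                  }
              }
          }

          interface JournalFile {
              long size();
              void delete();
          }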

          Activity

          Tim Fox added a comment -

          I think compacted files should always be the full size (e.g. 10 MiB), like other files.

          It doesn't matter if some space is "wasted".

          If we take N files and compact them to M files, then all M files should be the full size. In the last of M files there will be some empty space, all the others will be full.
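
          A tiny worked example of that sizing rule (the numbers are illustrative, not taken from the code): if the live records surviving from the N input files amount to 37 MiB and the configured file size is 10 MiB, then M = ceil(37 / 10) = 4 files are produced, the first three completely full and only the last one partly empty.

              class CompactSizing {
                  public static void main(String[] args) {
                      long fileSize = 10L * 1024 * 1024;              // configured journal file size
                      long liveBytes = 37L * 1024 * 1024;             // live data surviving compaction
                      long m = (liveBytes + fileSize - 1) / fileSize; // ceil(liveBytes / fileSize)
                      System.out.println("files produced: " + m);     // prints 4
                  }
              }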

          Clebert Suconic added a comment -

          This is about cleanup, not compacting. Cleanup will take a file that has N other files pending and remove the dead records from it. Say you have a file with only one record still alive: the journal creates a new file with just enough space for the cleaned-up data. Compacting is not creating smaller files, although we can make cleanup also create files at the same size.

          Show
          Clebert Suconic added a comment - This is about cleanup.. not compacting. Cleanup will take a file that has N other files pending and will remove the dead records on it. Say you have a file that only has 1 record alive. The journal is creating a file with just enough place for the cleared up file. Compacting is not creating smaller files. Allthough we can make cleanup to also create files at the same size.
          Hide
          Tim Fox added a comment -

          What's the point of that?

          Can you explain some more?

          It doesn't make a lot of sense to me.

          Tim Fox added a comment -

          Clebert, you were going to add some more explanation here...

          Clebert Suconic added a comment -

          I take back what I said with regard to cleanup. What was happening was:

          If compact needed a new file but the data-file buffer was empty (i.e., a file had to be created), an empty file was created, and the flush on compact was supposed to fill the file up to its entire size. (There is no point in writing a file full of zeros and then, a second later, writing it again with the correct content.)

          So the bug was: when a new file was created during compacting, the entire file was not being filled up.

          Later, when the file is reused, the buffer control may get lost and writes may be lost, because the file does not have the exact expected size.
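
          A minimal sketch of the fix this implies (hypothetical names, not the actual HornetQ code): when compact has to allocate a brand-new journal file, extend it to the configured size up front so that a later reuse of the file sees exactly the expected length.

              import java.io.IOException;
              import java.io.RandomAccessFile;

              final class JournalFileAllocator {
                  private final long configuredFileSize;   // e.g. 10 * 1024 * 1024

                  JournalFileAllocator(long configuredFileSize) {
                      this.configuredFileSize = configuredFileSize;
                  }

                  void createForCompact(String fileName) throws IOException {
                      try (RandomAccessFile raf = new RandomAccessFile(fileName, "rw")) {
                          // Pad the file to the full configured size; the compact
                          // flush then overwrites the front with the surviving
                          // records and the tail remains as filler.
                          raf.setLength(configuredFileSize);
                      }
                  }
              }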


            People

            • Assignee: Clebert Suconic
            • Reporter: Clebert Suconic
            • Votes: 0
            • Watchers: 0
