Details

    Description

      From the beginning, one of ModeShape's requirements for ensuring data correctness has been exclusive global locking. In other words, locking is the main mechanism through which data consistency is enforced.

      ModeShape 3 and 4 relied entirely on Infinispan to perform this locking, in both standalone and clustered setups. With ModeShape 5 this is no longer an option, so we must consider alternatives:

      1. delegate (via the new persistence SPI) the exclusive locking responsibility to the actual persistence provider. For write operations (e.g. session.save), ModeShape would ask the persistence provider to exclusively lock all the entries which are about to be modified and, once the "write" operation completes, ask the same provider to unlock those entries.

      In the case of an RDBMS, for example, this can be implemented using SELECT FOR UPDATE, which virtually all major vendors support.
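
      As a rough illustration of how a relational persistence provider might do this, here is a minimal JDBC sketch; the table and column names (MODESHAPE_REPOSITORY, ID, CONTENT) are hypothetical and not part of any existing schema. The row lock taken by SELECT ... FOR UPDATE is held until the transaction commits or rolls back, which is exactly the exclusive-locking window described above.

      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;
      import java.sql.SQLException;

      public class SelectForUpdateExample {

          // Exclusively locks a document row for the duration of a transaction,
          // then updates it. Table/column names are hypothetical.
          public static void updateDocument(Connection connection, String nodeKey, byte[] newContent)
                  throws SQLException {
              boolean oldAutoCommit = connection.getAutoCommit();
              connection.setAutoCommit(false); // the row lock is held until commit/rollback
              try (PreparedStatement select = connection.prepareStatement(
                      "SELECT CONTENT FROM MODESHAPE_REPOSITORY WHERE ID = ? FOR UPDATE")) {
                  select.setString(1, nodeKey);
                  try (ResultSet rs = select.executeQuery()) {
                      // other transactions issuing the same SELECT ... FOR UPDATE block here
                      if (rs.next()) {
                          try (PreparedStatement update = connection.prepareStatement(
                                  "UPDATE MODESHAPE_REPOSITORY SET CONTENT = ? WHERE ID = ?")) {
                              update.setBytes(1, newContent);
                              update.setString(2, nodeKey);
                              update.executeUpdate();
                          }
                      }
                  }
                  connection.commit(); // releases the row lock
              } catch (SQLException e) {
                  connection.rollback();
                  throw e;
              } finally {
                  connection.setAutoCommit(oldAutoCommit);
              }
          }
      }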

      2. implement locking internally. In other words, the repository itself would be responsible, via a LockingService abstraction, for ensuring the correctness of the operation. In this case, instead of asking a persistence provider to perform the locking, ModeShape would internally lock the entries first and then ask the persistence provider to load a "fresh copy" of the entries which are about to be changed.
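
      A sketch of what such a LockingService abstraction might look like; the method names and signatures here are illustrative only, not a final design:

      import java.util.Collection;
      import java.util.concurrent.TimeUnit;

      // Illustrative only: the actual repository SPI may well look different.
      public interface LockingService {

          // Attempts to exclusively lock all the given node keys, waiting at most
          // the given amount of time; either all keys are locked or none are.
          boolean tryLock(Collection<String> nodeKeys, long time, TimeUnit unit)
                  throws InterruptedException;

          // Releases the locks previously acquired for the given node keys.
          void unlock(Collection<String> nodeKeys);
      }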

      In both cases, locking has to be considered in both standalone and clustered topologies.

      (1) has the advantage of simplifying some of the logic performed by the repository, but it places a specific constraint on the persistence provider: not only must the provider support transactions, it must also correctly support "global locking" semantics.

      (2) has the disadvantage of adding more complexity to the repository (i.e. making sure that write operations always lock first), but it also has some advantages:

      • the current DocumentStore SPI already supports locking semantics (although the calls are currently delegated to ISPN)
      • in a standalone topology, a simple in-memory named-lock implementation (see the sketch after this list) can lock the node keys that are part of a transaction. Being in memory, it should perform far better than going to an external (possibly remote) persistence provider.
      • in a clustered topology, the locking service can leverage JGroups global locking (also sketched below), again using the node keys as lock names. This is probably similar to what ISPN does internally, so the code should perform at least as well as the current code, and most likely better, since there would be no ISPN boilerplate.
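
      For the standalone case, a minimal in-memory sketch of the hypothetical LockingService above, using plain JDK named locks; for brevity it never evicts unused locks from the map, which a real implementation would have to address:

      import java.util.ArrayList;
      import java.util.Collection;
      import java.util.List;
      import java.util.TreeSet;
      import java.util.concurrent.ConcurrentHashMap;
      import java.util.concurrent.ConcurrentMap;
      import java.util.concurrent.TimeUnit;
      import java.util.concurrent.locks.ReentrantLock;

      // Suitable only for a standalone (single JVM) topology.
      public class InMemoryLockingService implements LockingService {

          private final ConcurrentMap<String, ReentrantLock> locksByKey = new ConcurrentHashMap<>();

          @Override
          public boolean tryLock(Collection<String> nodeKeys, long time, TimeUnit unit)
                  throws InterruptedException {
              // lock keys in a stable (sorted) order to avoid deadlocks between
              // transactions that touch overlapping sets of nodes
              List<String> locked = new ArrayList<>();
              for (String key : new TreeSet<>(nodeKeys)) {
                  ReentrantLock lock = locksByKey.computeIfAbsent(key, k -> new ReentrantLock());
                  if (!lock.tryLock(time, unit)) {
                      unlock(locked); // all-or-nothing: release whatever was acquired so far
                      return false;
                  }
                  locked.add(key);
              }
              return true;
          }

          @Override
          public void unlock(Collection<String> nodeKeys) {
              for (String key : nodeKeys) {
                  ReentrantLock lock = locksByKey.get(key);
                  if (lock != null && lock.isHeldByCurrentThread()) {
                      lock.unlock();
                  }
              }
          }
      }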
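
      For the clustered case, a sketch using the JGroups LockService; the configuration file name and cluster name are placeholders, and the JGroups stack must include a locking protocol (e.g. CENTRAL_LOCK):

      import java.util.concurrent.TimeUnit;
      import java.util.concurrent.locks.Lock;

      import org.jgroups.JChannel;
      import org.jgroups.blocks.locking.LockService;

      public class ClusteredLockingExample {

          public static void main(String[] args) throws Exception {
              // "jgroups-locking.xml" is a placeholder; the stack it defines must
              // contain a locking protocol such as CENTRAL_LOCK
              JChannel channel = new JChannel("jgroups-locking.xml");
              try {
                  channel.connect("modeshape-cluster"); // placeholder cluster name
                  LockService lockService = new LockService(channel);

                  // the node key doubles as the name of the cluster-wide lock
                  Lock lock = lockService.getLock("node-key-1");
                  if (lock.tryLock(5, TimeUnit.SECONDS)) {
                      try {
                          // load a fresh copy of the node and apply the changes here
                      } finally {
                          lock.unlock();
                      }
                  }
              } finally {
                  channel.close();
              }
          }
      }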

      Creating a LockingService abstraction at the repository level means that later on, if we decide to move away from JGroups, we can still switch to something similar to (1), where the persistence provider supplies the actual locking implementation.
