Details
Type: Bug
Resolution: Done
Priority: Blocker
Fix Version: 5.2.0.Beta5
Description
numOwners=1, pessimistic cache (the same applies if A is the only node in the cluster)
1. tx1 running on A with writes on k, lockOwner(k) == A
2. A.tx1.lock(k); this doesn't go remotely, and control returns through the interceptor stack
3. at this point B is started and lockOwner(k) == B
4. the StateTransferInterceptor forwards the command to B which acquires the lock locally
5. this is followed by a tx.commit/rollback that does not send the message to B, so the lock acquired on B is never released (see the sketch below).
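For illustration only (not part of the original report), here is a minimal sketch of a configuration and transaction flow that would hit this path. cacheA stands for the cache running on node A, the key/value are placeholders, and the API usage assumes Infinispan 5.x:

import javax.transaction.TransactionManager;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.transaction.LockingMode;
import org.infinispan.transaction.TransactionMode;

// numOwners=1, pessimistic, transactional distributed cache
ConfigurationBuilder cb = new ConfigurationBuilder();
cb.clustering().cacheMode(CacheMode.DIST_SYNC).hash().numOwners(1);
cb.transaction().transactionMode(TransactionMode.TRANSACTIONAL)
   .lockingMode(LockingMode.PESSIMISTIC);

// On node A (cacheA is the clustered cache built from the configuration above):
TransactionManager tm = cacheA.getAdvancedCache().getTransactionManager();
tm.begin();
cacheA.put("k", "v");   // pessimistic lock(k) acquired locally on A, no remote call
// ... node B starts here; k is now owned by B and the lock command is forwarded to B ...
tm.commit();            // commit is not sent to B, so the lock on B stays held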
The logic that determines whether the commit message should be sent remotely is in DistributionInterceptor.visitCommitCommand, which invokes:
protected boolean shouldInvokeRemoteTxCommand(TxInvocationContext ctx) {
   return ctx.isOriginLocal() && (ctx.hasModifications() ||
         !((LocalTxInvocationContext) ctx).getRemoteLocksAcquired().isEmpty());
}
The problem here is that, when forwarding, we don't register the remote node as a lock owner. I think a more generic solution would also work, e.g. if the viewId of the tx is different from the viewId of the cluster at commit time, always go remotely.
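For illustration, a rough sketch of that viewId-based variant; the txViewId/clusterViewId helpers are assumptions, not actual Infinispan methods:

// Sketch only: always send the commit/rollback remotely if the cluster view changed
// since the tx started, because the lock/prepare may have been forwarded to new owners.
// txViewId(ctx) and clusterViewId() are assumed helpers, not real Infinispan API.
protected boolean shouldInvokeRemoteTxCommand(TxInvocationContext ctx) {
   if (!ctx.isOriginLocal()) return false;
   boolean viewChanged = txViewId(ctx) != clusterViewId();
   return viewChanged || ctx.hasModifications() ||
         !((LocalTxInvocationContext) ctx).getRemoteLocksAcquired().isEmpty();
}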