concurrency: unreplicated locks don't provide isolation through commit time #142978
Closed
@stevendanna

Description

As described in issue #111536, locks by themselves don't provide isolation up to commit time because once the transaction commits, the lock is gone entirely. Any concurrently committing transaction at a lower timestamp therefore won't observe that the key was locked at that timestamp.

For replicated locks we solved this by always updating the timestamp cache when releasing a lock. The timestamp cache then ensures that a concurrently writing transaction cannot write at a timestamp lower than the commit timestamp of the transaction that held the lock.

We don't do the same for unreplicated locks. Thus, even if an unreplicated lock is held all the way until commit time, it may not provide isolation up to the commit timestamp.
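
To make the mechanism concrete, here's a minimal toy model of the timestamp cache bump. It is not CockroachDB's actual API (the `tsCache` and `hlcTimestamp` names are made up for illustration), but it shows why bumping the cache at lock release keeps a concurrent writer from sneaking in below the lock holder's commit timestamp:

    // Toy model, not CockroachDB's actual timestamp cache API: the tsCache and
    // hlcTimestamp types below are invented for illustration only.
    package main

    import "fmt"

    // hlcTimestamp stands in for an HLC timestamp; a larger value is later.
    type hlcTimestamp int64

    // tsCache is a simplified per-key timestamp cache.
    type tsCache map[string]hlcTimestamp

    // bump records that key must not be written at or below ts.
    func (c tsCache) bump(key string, ts hlcTimestamp) {
        if ts > c[key] {
            c[key] = ts
        }
    }

    // canWriteAt reports whether a writer at ts is allowed on key; if not, it
    // has to push its write timestamp above the cached value.
    func (c tsCache) canWriteAt(key string, ts hlcTimestamp) bool {
        return ts > c[key]
    }

    func main() {
        cache := tsCache{}

        // Txn A holds a replicated lock on "k" and commits at ts=10. Resolving
        // the lock bumps the timestamp cache to the commit timestamp.
        cache.bump("k", 10)

        // Txn B, a concurrent writer at ts=7, is no longer allowed to write
        // below A's commit timestamp and must push to a higher timestamp.
        fmt.Println(cache.canWriteAt("k", 7))  // false: must push above 10
        fmt.Println(cache.canWriteAt("k", 11)) // true

        // With an unreplicated lock today, the bump never happens, so a write
        // at ts=7 would slip in even though A held the lock until it committed
        // at ts=10.
    }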

While we could also update the timestamp cache for all unreplicated locks, there are a few things to consider:

  1. Unreplicated locks are not currently reliable anyway, since they can be lost. We would therefore be doing more work for every unreplicated lock and still be unable to provide an isolation guarantee. See #142977 (concurrency: unreplicated lock loss detection) for further discussion.

  2. Without further work, this would also require disabling 1PC for any transaction that takes out an unreplicated lock, just as we do today for replicated locks in the snippet below; a sketch of the extended check follows the snippet.

    // Replicated locks must be held until and provide protection up till their
    // transaction's commit timestamp[1]. We ensure this by bumping the timestamp
    // cache to the transaction's commit timestamp for all locked keys when
    // resolving locks. Let's consider external and local replicated locks
    // separately:
    //
    // - External locks: 1PC transactions do not write a transaction record. This
    // means if any of its external locks are resolved by another transaction
    // they'll be resolved as if the transaction were aborted, thus not providing us
    // protection until the transaction's commit timestamp.
    // - Local locks: we have all the information to locally resolve replicated
    // locks and bump the timestamp cache correctly if we're only dealing with local
    // replicated locks. However, the mechanics of 1PC transactions prevent us from
    // hitting it in the common case, where we're acquiring a replicated lock and
    // writing to the same key. 1PC transactions work by stripping the batch of its
    // EndTxnRequest and running it as a non-transactional batch. This means that
    // without some elbow grease, 1PC is bound to fail when it discovers its own
    // replicated lock. For now, we disable 1PC on the client for local locks as
    // well -- this can be optimized in the future.
    // TODO(arul): file an issue about improving things for local locks.
    //
    // [1] This distinction is currently moot for serializable transactions, as they
    // refresh all their reads (locked and unlocked) before committing. Doing so
    // bumps the timestamp cache. However, one can imagine a world where
    // serializable transactions do not need to refresh keys they acquired
    // replicated locks on. In such a world, we would be relying on lock resolution
    // to bump the timestamp cache to the commit timestamp of the transaction.
    func (tc *txnCommitter) maybeDisable1PC(ba *kvpb.BatchRequest) {
        if tc.disable1PC {
            return // already disabled; early return
        }
        for _, req := range ba.Requests {
            if readOnlyReq, ok := req.GetInner().(kvpb.LockingReadRequest); ok {
                _, dur := readOnlyReq.KeyLocking()
                if dur == lock.Replicated {
                    tc.disable1PC = true
                    return
                }
            }
        }
    }
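
If we did decide to bump the timestamp cache for unreplicated locks as well, the 1PC guard above would presumably need to cover both durabilities. Here's a rough sketch of what that extension might look like -- it is not code from the tree, and the `lock.None` strength check plus the `lock.Unreplicated` case are assumptions layered on top of the existing `maybeDisable1PC`:

    // Hypothetical extension (not in the tree): disable 1PC for any locking
    // read regardless of durability, so that lock resolution can bump the
    // timestamp cache to the commit timestamp in both cases.
    func (tc *txnCommitter) maybeDisable1PC(ba *kvpb.BatchRequest) {
        if tc.disable1PC {
            return // already disabled; early return
        }
        for _, req := range ba.Requests {
            if lockingReq, ok := req.GetInner().(kvpb.LockingReadRequest); ok {
                str, dur := lockingReq.KeyLocking()
                // Skip non-locking reads; any actual locking read, replicated
                // or (hypothetically) unreplicated, disables 1PC.
                if str != lock.None && (dur == lock.Replicated || dur == lock.Unreplicated) {
                    tc.disable1PC = true
                    return
                }
            }
        }
    }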

Jira issue: CRDB-48612

Epic: CRDB-49088

Labels

A-buffered-writes (Related to the introduction of buffered writes), A-kv-transactions (Relating to MVCC and the transactional model), C-enhancement (Solution expected to add code/behavior + preserve backward-compat; pg compat issues are exception), T-kv (KV Team)
