Chapter 4: Concurrency Control Techniques
Another scheme gives some transactions priority over others, but increases a
transaction's priority the longer it waits, until it eventually reaches the
highest priority and proceeds.
The Two-Phase Locking Protocol (2PL)
The two-phase locking protocol ensures conflict-serializable schedules.
It requires each transaction to issue lock and unlock requests in two
phases:
Phase 1: Growing (expanding) phase
the transaction can obtain locks
the transaction cannot release existing locks
Phase 2: Shrinking phase
the transaction can release existing locks
the transaction cannot obtain new locks
Initially, a transaction is in the growing phase. The transaction
acquires locks as needed.
Once the transaction releases a lock, it enters the shrinking phase,
and it can issue no more lock requests.
If every transaction in a schedule follows the two-phase locking
protocol, the schedule is guaranteed to be serializable.
The transactions can be serialized in the order of their lock points
(i.e., the point where a transaction acquired its final lock).
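The phase discipline above can be sketched in Python. This is a minimal illustration, not a real lock manager: the class name `Transaction2PL` and its methods are invented for this sketch, and no actual blocking or lock-mode checking is modeled, only the growing/shrinking rule.

```python
class TwoPhaseLockingViolation(Exception):
    """Raised when a transaction requests a lock after its shrinking phase began."""

class Transaction2PL:
    """Tracks the two phases of a single transaction's lock requests."""

    def __init__(self, name):
        self.name = name
        self.held = set()       # items currently locked by this transaction
        self.shrinking = False  # becomes True after the first unlock

    def lock(self, item):
        if self.shrinking:
            # Phase 2: no new locks may be obtained.
            raise TwoPhaseLockingViolation(
                f"{self.name} tried to lock {item!r} in its shrinking phase")
        self.held.add(item)

    def unlock(self, item):
        # The first unlock ends the growing phase; the lock point was the
        # moment the final lock was acquired, just before this.
        self.shrinking = True
        self.held.discard(item)

# A schedule obeying 2PL: all locks are taken before any unlock.
t = Transaction2PL("T1")
t.lock("X"); t.lock("Y")   # growing phase
t.unlock("X")              # shrinking phase begins
# t.lock("Z") would now raise TwoPhaseLockingViolation
```

Uncommenting the last line shows the protocol rejecting a lock request issued after the shrinking phase has started.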
The Two-Phase Locking Protocol (Cont.)
Basic two-phase locking does not ensure freedom from deadlocks.
Lock-mode compatibility matrix (✓ = compatible, ✗ = incompatible):

        IS    IX    S     SIX   X
  IS    ✓     ✓     ✓     ✓     ✗
  IX    ✓     ✓     ✗     ✗     ✗
  S     ✓     ✗     ✓     ✗     ✗
  SIX   ✓     ✗     ✗     ✗     ✗
  X     ✗     ✗     ✗     ✗     ✗
Multiple Granularity Locking Scheme (MGL)
Transaction Ti can lock a node Q, using the following rules:
1. The lock compatibility matrix must be observed.
2. The root of the tree must be locked first, and may be locked in any
mode.
3. A node Q can be locked by Ti in S or IS mode only if the parent of Q is
currently locked by Ti in either IS or IX mode.
4. A node Q can be locked by Ti in X, SIX, or IX mode only if the parent of
Q is currently locked by Ti in either IX or SIX mode.
5. Ti can lock a node only if it has not previously unlocked any node (that
is, Ti is two-phase); this enforces the 2PL protocol.
6. Ti can unlock a node Q only if none of the children of Q are currently
locked by Ti.
Observe that locks are acquired in root-to-leaf order, whereas they are
released in leaf-to-root order.
The multiple granularity level protocol is especially suited when processing a
mix of transactions that include
short transactions that access only a few items (records or fields) and
long transactions that access entire files.
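The compatibility matrix and the parent-mode conditions of rules 3 and 4 can be expressed as small lookup functions. This is a sketch of the checks only; the names `COMPATIBLE` and `parent_mode_allows` are illustrative, and the full protocol would also track which transaction holds each lock and enforce rules 2, 5, and 6.

```python
# Lock-mode compatibility for multiple-granularity locking (True = compatible).
COMPATIBLE = {
    "IS":  {"IS": True,  "IX": True,  "S": True,  "SIX": True,  "X": False},
    "IX":  {"IS": True,  "IX": True,  "S": False, "SIX": False, "X": False},
    "S":   {"IS": True,  "IX": False, "S": True,  "SIX": False, "X": False},
    "SIX": {"IS": True,  "IX": False, "S": False, "SIX": False, "X": False},
    "X":   {"IS": False, "IX": False, "S": False, "SIX": False, "X": False},
}

def parent_mode_allows(requested, parent_mode):
    """Rules 3 and 4: the intention mode the parent node must already hold."""
    if requested in ("S", "IS"):
        # Rule 3: S or IS requires the parent in IS or IX mode.
        return parent_mode in ("IS", "IX")
    if requested in ("X", "SIX", "IX"):
        # Rule 4: X, SIX, or IX requires the parent in IX or SIX mode.
        return parent_mode in ("IX", "SIX")
    return False

assert COMPATIBLE["IS"]["IX"]              # two intention locks can coexist
assert not COMPATIBLE["S"]["IX"]           # shared blocks intention-exclusive
assert parent_mode_allows("X", "IX")       # rule 4
assert not parent_mode_allows("S", "SIX")  # rule 3 requires IS or IX on parent
```

The symmetric matrix makes the intuition visible: intention modes (IS, IX) are compatible with each other because actual conflicts are resolved lower in the tree.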
Deadlock Handling
System is deadlocked if there is a set of transactions such that every
transaction in the set is waiting for another transaction in the set.
Consider the following two transactions:
T1: write(X)          T2: write(Y)
    write(Y)              write(X)
Schedule with deadlock:
T1                        T2
lock-X on X
write(X)
                          lock-X on Y
                          write(Y)
wait for lock-X on Y
                          wait for lock-X on X
Deadlock Handling
There are two principal methods for dealing with the deadlock problem.
Deadlock prevention
Deadlock detection and Deadlock recovery
Deadlock prevention protocols ensure that the system will never enter
into a deadlock state.
Some prevention strategies :
Require that each transaction locks all its data items before it begins
execution (predeclaration).
This is generally not practical: if any of the items
cannot be obtained, none of the items are locked.
It also limits concurrency.
Impose partial ordering of all data items and require that a
transaction can lock data items only in the order specified by the
partial order (graph-based protocol).
This requires that the programmer (or the system) is aware of the
chosen order of the items, which is also not practical in the
database context.
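The ordering-based strategy can be sketched as follows. The class `OrderedLocker` is invented for this illustration, and alphabetical item names stand in for the chosen partial order; because every transaction acquires items in the same order, no cycle of waits can form.

```python
class LockOrderViolation(Exception):
    """Raised when a transaction requests locks out of the agreed order."""

class OrderedLocker:
    """Enforces that one transaction locks items in ascending name order."""

    def __init__(self):
        self.last_locked = None   # highest-ordered item locked so far

    def lock(self, item):
        if self.last_locked is not None and item <= self.last_locked:
            raise LockOrderViolation(
                f"must lock {item!r} before {self.last_locked!r}")
        self.last_locked = item

t1 = OrderedLocker()
t1.lock("A"); t1.lock("B")   # allowed: follows the order
t2 = OrderedLocker()
t2.lock("B")
# t2.lock("A") would raise LockOrderViolation: out of order
```

As the slide notes, the practical difficulty is that every program must know and respect the same global order over all data items.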
More Deadlock Prevention Strategies
The following schemes use transaction timestamps solely for deadlock
prevention.
A transaction timestamp is a unique identifier assigned to each
transaction, based on the order in which transactions are started.
There are two deadlock prevention schemes using timestamps:
wait-die scheme — non-preemptive
an older transaction (with a smaller timestamp) may wait for a younger one
to release a data item.
Younger transactions never wait for older ones; they are rolled back
instead.
A transaction may die several times before acquiring a needed data item.
Transactions only wait for younger ones, so no cycle is created.
For Example: suppose that transactions T22, T23, and T24 have timestamps
5,10,and 15,respectively.
If T22 requests a data item held by T23, then T22 will wait.
If T24 requests a data item held by T23, then T24 will be rolled back.
wound-wait scheme — preemptive
an older transaction wounds (forces the rollback of) a younger transaction
by aborting it instead of waiting for it.
Younger transactions may wait for older ones.
There may be fewer rollbacks than in the wait-die scheme.
For Example: suppose that transactions T22, T23, and T24 have timestamps
5,10,and 15,respectively.
If T22 requests a data item held by T23, then T22 preempts T23, and T23
will be rolled back.
If T24 requests a data item held by T23, then T24 will wait.
Both schemes end up aborting the younger of the two transactions (the
transaction that started later) that may be involved in a deadlock, assuming
that this will waste less processing.
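The decision rule of both schemes fits in one function. This is a sketch of the decision logic only (the function name `resolve` and the string return values are invented for this illustration); a real system would then block or abort the chosen transaction.

```python
def resolve(scheme, requester_ts, holder_ts):
    """Outcome when the requester asks for an item the holder has locked.

    Timestamps: smaller = older.  Returns "wait", "abort requester" (die),
    or "abort holder" (wound).
    """
    if scheme == "wait-die":            # non-preemptive
        if requester_ts < holder_ts:    # older requester waits
            return "wait"
        return "abort requester"        # younger requester dies
    if scheme == "wound-wait":          # preemptive
        if requester_ts < holder_ts:    # older requester wounds the holder
            return "abort holder"
        return "wait"                   # younger requester waits
    raise ValueError(f"unknown scheme: {scheme}")

# The slides' example: T22, T23, T24 with timestamps 5, 10, 15.
assert resolve("wait-die", 5, 10) == "wait"              # T22 waits for T23
assert resolve("wait-die", 15, 10) == "abort requester"  # T24 is rolled back
assert resolve("wound-wait", 5, 10) == "abort holder"    # T22 wounds T23
assert resolve("wound-wait", 15, 10) == "wait"           # T24 waits
```

In both branches the transaction that is ever aborted is the younger one, matching the observation above.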
Deadlock prevention (Cont.)
Both in wait-die and in wound-wait schemes, a rolled back transaction
is restarted with its original timestamp.
Older transactions thus have precedence over newer ones, and
starvation is hence avoided.
Timeout-Based Schemes:
A transaction waits for a lock only for a specified amount of time.
After that, the wait times out and the transaction is rolled back.
Thus deadlocks are not possible.
Simple to implement, but starvation is possible.
It is also difficult to determine a good value for the timeout interval.
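The timeout idea maps directly onto the `timeout` parameter of Python's `threading.Lock.acquire`; the 0.1-second value below is an arbitrary illustration of exactly the tuning problem the slide mentions.

```python
import threading

lock = threading.Lock()
lock.acquire()                       # the lock is held by some other transaction

# A waiting transaction gives up after the timeout instead of blocking
# forever; it would then be rolled back and restarted.
got_it = lock.acquire(timeout=0.1)   # returns False: the wait timed out
assert not got_it
```

Note the trade-off: too short a timeout aborts transactions that were not deadlocked at all, too long a timeout leaves real deadlocks in place.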
Deadlock Detection
A second, more practical approach to dealing with deadlock is deadlock
detection, where the system checks if a state of deadlock actually exists.
A simple way to detect a state of deadlock is for the system to construct
and maintain a wait-for graph.
Deadlocks can be described as a wait-for graph, which consists of a pair
G = (V,E),
V is a set of vertices (all the transactions in the system)
E is a set of edges; each element is an ordered pair Ti → Tj.
If Ti → Tj is in E, then there is a directed edge from Ti to Tj, implying that
Ti is waiting for Tj to release a data item.
When Ti requests a data item currently held by Tj, the edge Ti → Tj is
inserted in the wait-for graph.
This edge is removed only when Tj is no longer holding a data item
needed by Ti.
The system is in a deadlock state if and only if the wait-for graph has a
cycle.
The system must invoke a deadlock-detection algorithm periodically to look for
cycles.
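Cycle detection in the wait-for graph is a standard depth-first search; this sketch (the function name `has_deadlock` and the dict-of-sets graph encoding are illustrative) returns True exactly when some cycle exists.

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph given as {Ti: set of Tj it waits on}."""
    WHITE, GREY, BLACK = 0, 1, 2      # unvisited / on current path / finished
    color = {t: WHITE for t in wait_for}

    def dfs(t):
        color[t] = GREY
        for u in wait_for.get(t, ()):
            if color.get(u, WHITE) == GREY:      # back edge: a cycle exists
                return True
            if color.get(u, WHITE) == WHITE and dfs(u):
                return True
        color[t] = BLACK
        return False

    return any(color[t] == WHITE and dfs(t) for t in wait_for)

# The deadlocked schedule earlier: T1 waits for T2 and T2 waits for T1.
assert has_deadlock({"T1": {"T2"}, "T2": {"T1"}})
# No cycle: T1 waits for T2, but T2 waits for nothing.
assert not has_deadlock({"T1": {"T2"}, "T2": set()})
```

Recovery would then pick a victim transaction on the detected cycle and roll it back, which removes its edges from the graph.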
Deadlock Detection (Cont.)