Operating System


What are Logical Clocks?

Logical clocks are a concept used in distributed systems to order events
without relying on physical time synchronization. They provide a way to
establish a partial ordering of events based on causality rather than real-time
clock values.
 By assigning logical timestamps to events, logical clocks allow distributed
systems to maintain consistency and coherence across different nodes,
despite varying clock speeds and network delays.
 This ensures that events can be correctly ordered and coordinated,
facilitating fault tolerance and reliable operation in distributed computing
environments.
Differences Between Physical and Logical Clocks
Physical clocks and logical clocks serve distinct purposes in distributed
systems:
1. Nature of Time:
 Physical Clocks: These rely on real-world time measurements and
are typically synchronized using protocols like NTP (Network Time
Protocol). They provide accurate timestamps but can be affected by
clock drift and network delays.
 Logical Clocks: These are not tied to real-world time and instead use
logical counters or timestamps to order events based on causality.
They are resilient to clock differences between nodes but may not
provide real-time accuracy.
2. Usage:
 Physical Clocks: Used for tasks requiring real-time synchronization
and precise timekeeping, such as scheduling tasks or logging events
with accurate timestamps.
 Logical Clocks: Used in distributed systems to order events across
different nodes in a consistent and causal manner, enabling
synchronization and coordination without strict real-time requirements.
3. Dependency:
 Physical Clocks: Dependent on accurate timekeeping hardware and
synchronization protocols to maintain consistency across distributed
nodes.
 Logical Clocks: Dependent on the logic of event ordering and
causality, ensuring that events can be correctly sequenced even when
nodes have different physical time readings.
Types of Logical Clocks in Distributed System
1. Lamport Clocks
Lamport clocks provide a simple way to order events in a distributed system.
Each node maintains a counter that increments with each event. When
nodes communicate, they update their counters based on the maximum
value seen, ensuring a consistent order of events.
Characteristics of Lamport Clocks:
 Simple to implement.
 Can be extended to a total order of events (using process IDs as tie-breakers) but doesn’t capture concurrency.
 Not suitable for detecting causal relationships between events: L(a) < L(b) does not imply that a caused b.
Algorithm of Lamport Clocks:
1. Initialization: Each node initializes its clock L to 0.
2. Internal Event: When a node performs an internal event, it increments its
clock L.
3. Send Message: When a node sends a message, it increments its clock
L and includes this value in the message.
4. Receive Message: When a node receives a message with timestamp
T, it sets L = max(L, T) + 1.
Advantages of Lamport Clocks:
 Simple to implement and understand.
 Ensures total ordering of events.
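The four rules above can be sketched in a few lines of Python (a minimal illustration; the class and method names are my own, not from the source):

```python
class LamportClock:
    """Minimal Lamport clock: one integer counter per node."""

    def __init__(self):
        self.time = 0

    def internal_event(self):
        self.time += 1
        return self.time

    def send(self):
        # Increment before sending; the returned value travels with the message.
        self.time += 1
        return self.time

    def receive(self, msg_time):
        # Take the max of the local and message timestamps, then step forward.
        self.time = max(self.time, msg_time) + 1
        return self.time
```

For example, if node A sends its first message (timestamp 1) to node B, B's receive sets its clock to max(0, 1) + 1 = 2, so the receive event is ordered after the send.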
2. Vector Clocks
Vector clocks use an array of integers, where each element corresponds to a
node in the system. Each node maintains its own vector clock and updates it
by incrementing its own entry and incorporating values from other nodes
during communication.
Characteristics of Vector Clocks:
 Captures causality and concurrency between events.
 Requires more storage and communication overhead compared to
Lamport clocks.
Algorithm of Vector Clocks:
1. Initialization: Each node P_i initializes its vector clock V_i to a
vector of zeros.
2. Internal Event: When a node performs an internal event, it increments its
own entry in the vector clock: V_i[i] = V_i[i] + 1.
3. Send Message: When a node P_i sends a message, it includes its
vector clock V_i in the message.
4. Receive Message: When a node P_i receives a message with vector
clock V_j:
 It updates each entry: V_i[k] = max(V_i[k], V_j[k])
 It increments its own entry: V_i[i] = V_i[i] + 1
Advantages of Vector Clocks:
 Accurately captures causality and concurrency.
 Detects concurrent events, which Lamport clocks cannot do.
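A minimal sketch of these rules, plus a helper that tests the happened-before relation (all names are illustrative; treating a send as an event is a common convention, not mandated by the text above):

```python
class VectorClock:
    """Vector clock for a system of n nodes; this node has index i."""

    def __init__(self, i, n):
        self.i = i
        self.v = [0] * n

    def internal_event(self):
        self.v[self.i] += 1

    def send(self):
        self.internal_event()      # sending counts as an event
        return list(self.v)        # a copy travels with the message

    def receive(self, other):
        # Element-wise max with the incoming vector, then step own entry.
        self.v = [max(a, b) for a, b in zip(self.v, other)]
        self.v[self.i] += 1

def happened_before(u, v):
    """True if the event stamped u causally precedes the event stamped v."""
    return all(a <= b for a, b in zip(u, v)) and u != v
```

Two events are concurrent exactly when neither vector happened-before the other, e.g. [1, 0] and [0, 1] — this is the concurrency detection that Lamport clocks cannot provide.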
3. Matrix Clocks
Matrix clocks extend vector clocks by maintaining a matrix where each entry
captures the history of vector clocks. This allows for more detailed tracking of
causality relationships.
Characteristics of Matrix Clocks:
 More detailed tracking of event dependencies.
 Higher storage and communication overhead compared to vector
clocks.
Algorithm of Matrix Clocks:
1. Initialization: Each node P_i initializes its matrix clock M_i to a
matrix of zeros.
2. Internal Event: When a node performs an internal event, it increments its
own entry in the matrix clock: M_i[i][i] = M_i[i][i] + 1.
3. Send Message: When a node P_i sends a message, it includes its
matrix clock M_i in the message.
4. Receive Message: When a node P_i receives a message with matrix
clock M_j:
 It updates each entry: M_i[k][l] = max(M_i[k][l], M_j[k][l])
 It increments its own entry: M_i[i][i] = M_i[i][i] + 1
Advantages of Matrix Clocks:
 Detailed history tracking of event causality.
 Can provide more information about event dependencies than vector
clocks.
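The matrix clock update rules can be sketched the same way (an illustrative class following the steps above, not a production implementation; here row k is interpreted as this node's view of node k's vector clock):

```python
class MatrixClock:
    """Matrix clock for n nodes; this node has index i."""

    def __init__(self, i, n):
        self.i, self.n = i, n
        self.m = [[0] * n for _ in range(n)]

    def internal_event(self):
        self.m[self.i][self.i] += 1

    def send(self):
        self.internal_event()
        return [row[:] for row in self.m]   # deep copy travels with the message

    def receive(self, other):
        # Element-wise max over the whole matrix, then step own entry.
        for k in range(self.n):
            for l in range(self.n):
                self.m[k][l] = max(self.m[k][l], other[k][l])
        self.m[self.i][self.i] += 1
```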
4. Hybrid Logical Clocks (HLCs)
Hybrid logical clocks combine physical and logical clocks to provide both
causality and real-time properties. They use physical time as a base and
incorporate logical increments to maintain event ordering.
Characteristics of Hybrid Logical Clocks:
 Combines real-time accuracy with causality.
 More complex to implement compared to pure logical clocks.
Algorithm of Hybrid Logical Clocks:
1. Initialization: Each node initializes its clock H with the current physical
time.
2. Internal Event: When a node performs an internal event, it increments the
logical part of its HLC.
3. Send Message: When a node sends a message, it includes its HLC in
the message.
4. Receive Message: When a node receives a message with HLC T:
 It sets H = max(H, T) + 1
Advantages of Hybrid Logical Clocks:
 Balances real-time accuracy and causal consistency.
 Suitable for systems requiring both properties, such as databases and
distributed ledgers.
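The description above compresses the update rule; the sketch below follows the commonly cited HLC formulation (a physical part plus a logical tie-breaker). The physical-clock function is injectable so the example is deterministic; all names are my own:

```python
import time

class HybridLogicalClock:
    """Sketch of an HLC as a (physical, logical) pair."""

    def __init__(self, now_fn=lambda: int(time.time())):
        self.now = now_fn
        self.pt = self.now()   # physical part
        self.l = 0             # logical part

    def _tick(self, msg_pt=0, msg_l=0):
        wall = self.now()
        new_pt = max(self.pt, msg_pt, wall)
        if new_pt == self.pt == msg_pt:
            self.l = max(self.l, msg_l) + 1   # all equal: step past both logicals
        elif new_pt == self.pt:
            self.l += 1                       # our physical part still leads
        elif new_pt == msg_pt:
            self.l = msg_l + 1                # message's physical part leads
        else:
            self.l = 0                        # wall clock moved ahead: reset
        self.pt = new_pt
        return (self.pt, self.l)

    def local_or_send(self):
        return self._tick()

    def receive(self, msg):
        return self._tick(*msg)
```

Comparing (pt, l) pairs lexicographically gives an ordering that respects causality while staying close to physical time.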
5. Version Vectors
Version vectors track versions of objects across nodes. Each node maintains
a vector of version numbers for objects it has seen.
Characteristics of Version Vectors:
 Tracks versions of objects.
 Similar to vector clocks, but specifically for versioning.
Algorithm of Version Vectors:
1. Initialization: Each node initializes its version vector to zeros.
2. Update Version: When a node updates an object, it increments the
corresponding entry in the version vector.
3. Send Version: When a node sends an updated object, it includes its
version vector in the message.
4. Receive Version: When a node receives an object with a version vector:
 It updates its version vector to the maximum values seen for each
entry.
Advantages of Version Vectors:
 Efficient conflict resolution.
 Tracks object versions effectively in distributed databases and file
systems.
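The version vector operations can be sketched as plain functions over dicts mapping node IDs to counters (an illustrative representation; real systems store this alongside each object replica):

```python
def update_local(vv, node):
    """A node updates an object: bump its own counter."""
    vv = dict(vv)
    vv[node] = vv.get(node, 0) + 1
    return vv

def merge(a, b):
    """Element-wise max of two version vectors."""
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in set(a) | set(b)}

def dominates(a, b):
    """True if version a has seen everything recorded in b."""
    return all(a.get(k, 0) >= c for k, c in b.items())

def in_conflict(a, b):
    """Concurrent updates: neither version dominates the other."""
    return not dominates(a, b) and not dominates(b, a)
```

Two replicas updated independently (e.g. {"n1": 1} and {"n2": 1}) are detected as conflicting, and after reconciliation the merged vector dominates both — the property conflict resolution relies on.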
Applications of Logical Clocks
Logical clocks play a crucial role in distributed systems by providing a way to
order events and maintain consistency. Here are some key applications:
 Event Ordering
o Causal Ordering: Logical clocks help establish a causal
relationship between events, ensuring that messages are
processed in the correct order.
o Total Ordering: In some systems, it’s essential to have a total
order of events. Logical clocks can be used to assign unique
timestamps to events, ensuring a consistent order across the
system.
 Causal Consistency
o Consistency Models: In distributed databases and storage
systems, logical clocks are used to ensure causal consistency.
They help track dependencies between operations, ensuring that
causally related operations are seen in the same order by all
nodes.
 Distributed Debugging and Monitoring
o Tracing and Logging: Logical clocks can be used to timestamp
logs and trace events across different nodes in a distributed
system. This helps in debugging and understanding the
sequence of events leading to an issue.
o Performance Monitoring: By using logical clocks, it’s possible
to monitor the performance of distributed systems, identifying
bottlenecks and delays.
 Distributed Snapshots
o Checkpointing: Logical clocks are used in algorithms for taking
consistent snapshots of the state of a distributed system, which
is essential for fault tolerance and recovery.
o Global State Detection: They help detect global states and
conditions such as deadlocks or stable properties in the system.
 Concurrency Control
o Optimistic Concurrency Control: Logical clocks help detect
conflicts in transactions by comparing timestamps, allowing
systems to resolve conflicts and maintain data integrity.
o Versioning: In versioned storage systems, logical clocks can be
used to maintain different versions of data, ensuring that updates
are applied correctly and consistently.
Challenges and Limitations with Logical Clocks
Logical clocks are essential for maintaining order and consistency in
distributed systems, but they come with their own set of challenges and
limitations:
 Scalability Issues
o Vector Clock Size: In systems using vector clocks, the size of
the vector grows with the number of nodes, leading to increased
storage and communication overhead.
o Management Complexity: Managing and maintaining logical
clocks across a large number of nodes can be complex and
resource-intensive.
 Synchronization Overhead
o Communication Overhead: Synchronizing logical clocks
requires additional messages between nodes, which can
increase network traffic and latency.
o Processing Overhead: Updating and maintaining logical clock
values can add computational overhead, impacting the system’s
overall performance.
 Handling Failures and Network Partitions
o Clock Inconsistency: In the presence of network partitions or
node failures, maintaining consistent logical clock values can be
challenging.
o Recovery Complexity: When nodes recover from failures,
reconciling logical clock values to ensure consistency can be
complex.
 Partial Ordering
o Limited Ordering Guarantees: Logical clocks, especially
Lamport clocks, only provide partial ordering of events, which
may not be sufficient for all applications requiring a total order.
o Conflict Resolution: Resolving conflicts in operations may
require additional mechanisms beyond what logical clocks can
provide.
 Complexity in Implementation
o Algorithm Complexity: Implementing logical clocks, particularly
vector and matrix clocks, can be complex and error-prone,
requiring careful design and testing.
o Application-Specific Adjustments: Different applications may
require customized logical clock implementations to meet their
specific requirements.
 Storage Overhead
o Vector and Matrix Clocks: These clocks require storing a vector
or matrix of timestamps, which can consume significant memory,
especially in systems with many nodes.
o Snapshot Storage: For some applications, maintaining
snapshots of logical clock values can add to the storage
overhead.
 Propagation Delay
o Delayed Updates: Updates to logical clock values may not
propagate instantly across all nodes, leading to temporary
inconsistencies.
o Latency Sensitivity: Applications that are sensitive to latency
may be impacted by the delays in propagating logical clock
updates.
Mutual exclusion in a single computer system vs. a distributed system: In
a single computer system, memory and other resources are shared between
different processes. The status of shared resources and of users is readily
available in shared memory, so the mutual exclusion problem can be solved
easily with the help of shared variables (for example, semaphores). In
distributed systems, we have neither shared memory nor a common physical
clock, and therefore we cannot solve the mutual exclusion problem using
shared variables. Instead, an approach based on message passing is used.
A site in a distributed system does not have complete information about the
state of the system, due to the lack of shared memory and a common
physical clock.
Requirements of Mutual exclusion Algorithm:
 No Deadlock: Two or more sites should not endlessly wait for
messages that will never arrive.
 No Starvation: Every site that wants to execute the critical section should
get an opportunity to do so in finite time. No site should wait
indefinitely to execute the critical section while other sites repeatedly
execute it.
 Fairness: Each site should get a fair chance to execute the critical section.
Requests to execute the critical section must be served in the order they
are made, i.e., critical section execution requests should be executed in the
order of their arrival in the system.
 Fault Tolerance: In case of a failure, the algorithm should be able to
recognize it by itself and continue functioning without any disruption.
Solution to distributed mutual exclusion: As noted above, shared variables or
a local kernel cannot be used to implement mutual exclusion in distributed
systems. Message passing is the way to implement it. Below
are the three approaches based on message passing to implement mutual
exclusion in distributed systems:
1. Token Based Algorithm:
 A unique token is shared among all the sites.
 If a site possesses the unique token, it is allowed to enter its critical
section
 This approach uses sequence numbers to order requests for the critical
section.
 Each request for the critical section contains a sequence number. This
sequence number is used to distinguish old and current requests.
 This approach ensures mutual exclusion as the token is unique.
Example : Suzuki–Kasami Algorithm
2. Non-token based approach:
 A site communicates with other sites in order to determine which site
should execute the critical section next. This requires the exchange of two or
more successive rounds of messages among sites.
 This approach uses timestamps instead of sequence numbers to order
requests for the critical section.
 Whenever a site makes a request for the critical section, it gets a timestamp.
The timestamp is also used to resolve any conflict between critical section
requests.
 All algorithms that follow the non-token-based approach maintain a logical
clock. Logical clocks are updated according to Lamport’s scheme.
Example : Ricart–Agrawala Algorithm
3. Quorum based approach:
 Instead of requesting permission to execute the critical section from all
other sites, each site requests permission only from a subset of sites, called
a quorum.
 Any two quorums contain at least one common site.
 This common site is responsible for ensuring mutual exclusion.
Example : Maekawa’s Algorithm
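The intersection property can be checked directly. The quorum sets below are made up for illustration (they are not Maekawa's exact construction); the helper verifies that every pair of quorums shares at least one site:

```python
from itertools import combinations

# Hypothetical quorums for sites 1..6; each pair shares exactly one site.
quorums = {
    1: {1, 2, 3},
    2: {2, 4, 6},
    3: {3, 5, 6},
    4: {1, 4, 5},
}

def pairwise_intersect(qs):
    """The property quorum-based mutual exclusion relies on:
    any two quorums have a non-empty intersection."""
    return all(a & b for a, b in combinations(qs.values(), 2))
```

Because every pair of quorums shares a site, that common site can never have granted permission to two requesters at once, which is what enforces mutual exclusion.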
What are Centralized Systems?
Centralized systems are a type of computing architecture where all or most
of the processing and data storage is done on a single central server or a
group of closely connected servers. This central server manages all
operations, resources, and data, acting as the hub through which all client
requests are processed. The clients, or nodes, connected to the central
server typically have minimal processing power and rely on the server for
most computational tasks.

Centralized Systems

Key Characteristics of Centralized Systems


1. Single Point of Control:
 All data processing and management tasks are handled by the central
server.
 Easier to manage and maintain since there is one primary location for
administration.
2. Simplicity:
 Simplified architecture with a clear structure where all operations are
routed through the central node.
 Easy to deploy and manage due to centralized nature.
3. Efficiency:
 Efficient use of resources as the central server can be optimized for
performance.
 Easier to implement security measures and updates centrally.
4. Scalability Issues:
 Limited scalability as the central server can become a bottleneck if the
load increases significantly.
 Adding more clients can strain the server’s resources, leading to
performance degradation.
5. Single Point of Failure:
 If the central server fails, the entire system can become inoperative.
 High availability and redundancy measures are essential to mitigate
this risk
Ricart–Agrawala algorithm is an algorithm for mutual exclusion in a
distributed system proposed by Glenn Ricart and Ashok Agrawala. This
algorithm is an extension and optimization of Lamport’s Distributed Mutual
Exclusion Algorithm. Like Lamport’s algorithm, it follows a permission-
based approach to ensure mutual exclusion. In this algorithm:
 Two types of messages (REQUEST and REPLY) are used, and
communication channels are assumed to follow FIFO order.
 A site sends a REQUEST message to all other sites to get their permission
to enter the critical section.
 A site sends a REPLY message to another site to give its permission to
enter the critical section.
 A timestamp is given to each critical section request using Lamport’s
logical clock.
 Timestamps are used to determine the priority of critical section requests:
a smaller timestamp gets higher priority than a larger one. Critical section
requests are always executed in the order of their timestamps.
Algorithm:
 To enter the critical section:
o When a site Si wants to enter the critical section, it sends a
timestamped REQUEST message to all other sites.
o When a site Sj receives a REQUEST message from site Si, it
sends a REPLY message to site Si if and only if
o Site Sj is neither requesting nor currently executing the
critical section, or
o in case site Sj is requesting, the timestamp of site Si’s
request is smaller than its own request.
 To execute the critical section:
o Site Si enters the critical section if it has received
the REPLY message from all other sites.
 To release the critical section:
o Upon exiting the critical section, site Si sends a REPLY message
to all the deferred requests.
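The REPLY decision at a receiving site Sj can be condensed into a single function. Breaking timestamp ties with the site ID is the standard trick for making priorities total; the state names and signature here are illustrative:

```python
def should_reply(state, own_ts, own_id, req_ts, req_id):
    """REPLY decision at site Sj for a REQUEST from Si.
    state is Sj's state: 'idle', 'requesting', or 'in_cs'.
    (timestamp, site_id) pairs break ties deterministically."""
    if state == "idle":
        return True          # neither requesting nor executing: reply at once
    if state == "in_cs":
        return False         # defer the request until we exit
    # Both sites are requesting: the smaller (older) timestamp wins.
    return (req_ts, req_id) < (own_ts, own_id)
```

A `False` result means the request is queued and answered by the deferred REPLY sent on exit.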
Message Complexity: The Ricart–Agrawala algorithm requires 2(N
– 1) messages per critical section execution. These 2(N – 1) messages
involve:
 (N – 1) REQUEST messages
 (N – 1) REPLY messages
Advantages of the Ricart-Agrawala Algorithm:
 Low message complexity: The algorithm requires only 2(N – 1) messages
per critical section execution, where N is
the total number of nodes in the system.
 Scalability: The algorithm is scalable and can be used in systems with a
large number of nodes.
 Non-blocking: The algorithm is non-blocking, meaning a node
can continue executing its normal operations while waiting to enter the
critical section.
Drawbacks of Ricart–Agrawala algorithm:
 Unreliable approach: Failure of any one node in the system can halt
the progress of the system; in this situation, requesting processes may starve
forever. The problem of node failure can be solved by detecting failures
after some timeout.
What is a Distributed Transaction?
A distributed transaction spans multiple systems,
ensuring that all operations either succeed or fail together.
What is the need for a Distributed Transaction?
The need for distributed transactions arises from the requirements to
ensure data consistency and reliability across multiple independent systems
or resources in a distributed computing environment. Specifically:
 Consistency: Ensuring that all changes made as part of a transaction are
committed or rolled back atomically, maintaining data integrity.
 Isolation: Guaranteeing that concurrent transactions do not interfere with
each other, preserving data integrity and preventing conflicts.
 Durability: Confirming that committed transactions persist even in the event
of system failures, ensuring reliability.
 Atomicity: Ensuring that either all operations within a transaction are
completed successfully or none of them are, avoiding partial updates that
could lead to inconsistencies.
Working of Distributed Transactions
The working of distributed transactions is the same as that of simple
transactions, but the challenge is to implement them across multiple
databases. Because multiple nodes or database systems are used, certain
problems arise, such as network failures and the need to keep additional
hardware and database servers available. For a successful distributed
transaction, the available resources are coordinated by transaction
managers.

Below are some steps to understand how distributed transactions work:


Step 1: Application to Resource – Issues Distributed
Transaction
The first step is to issue the distributed transaction. The application initiates
the transaction by sending a request to the available resources. The
request includes details such as the operations to be performed by
each resource in the given transaction.
Step 2: Resource 1 to Resource 2 – Ask Resource 2 to Prepare
to Commit
Once Resource 1 receives the transaction request, it contacts
Resource 2 and asks it to prepare to commit. This step makes
sure that both resources are able to perform their dedicated
tasks and successfully complete the given transaction.
Step 3: Resource 2 to Resource 1 – Resource 2 Acknowledges
Preparation
When Resource 2 receives the request from Resource 1, it
prepares for the commit. Resource 2 responds to Resource 1 with
an acknowledgment, confirming that it is ready to go ahead with the
allocated transaction.
Step 4: Resource 1 to Resource 2 – Ask Resource 2 to Commit
Once Resource 1 receives an acknowledgment from Resource 2, it sends a
request to Resource 2 and provides an instruction to commit the transaction.
This step makes sure that Resource 1 has completed its task in the given
transaction and now it is ready for Resource 2 to finalize the operation.
Step 5: Resource 2 to Resource 1 – Resource 2 Acknowledges
Commit
When Resource 2 receives the commit request from Resource 1, it
responds to Resource 1 with an acknowledgment that it has
successfully committed the transaction it was assigned. This step ensures
that Resource 2 has completed its task and that both
resources have synchronized their states.
Step 6: Resource 1 to Application – Receives Transaction
Acknowledgement
Once Resource 1 receives an acknowledgment from Resource 2, Resource
1 then sends an acknowledgment of the transaction back to the application.
This acknowledgment confirms that the transaction that was carried out
among multiple resources has been completed successfully.
ACID Properties
ACID stands for Atomicity, Consistency, Isolation, and Durability. These four
key properties define how a transaction should be processed in a reliable
and predictable manner, ensuring that the database remains consistent,
even in cases of failures or concurrent accesses.
The Four ACID Properties
1. Atomicity: “All or Nothing”
Atomicity ensures that either the entire
transaction completes fully or it doesn’t execute at all. There is no in-between
state, i.e., transactions do not occur partially. If a transaction has multiple
operations and one of them fails, the whole transaction is rolled back,
leaving the database unchanged. This avoids partial updates that can lead to
inconsistency.
 Commit: If the transaction is successful, the changes are permanently
applied.
 Abort/Rollback: If the transaction fails, any changes made during the
transaction are discarded.
Example: Consider a transaction T consisting of two operations, T1 and T2:
a transfer of $100 from account X to account Y.
If the transaction fails after completion of T1 but before completion of T2, the
database would be left in an inconsistent state. With atomicity, if any part of
the transaction fails, the entire process is rolled back to its original state, and
no partial changes are made.
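A toy sketch of this all-or-nothing behavior, simulating a failure between the debit (T1) and the credit (T2); the snapshot-and-restore approach stands in for a real DBMS rollback mechanism:

```python
class TransactionError(Exception):
    pass

def transfer(accounts, src, dst, amount, fail_after_debit=False):
    """All-or-nothing transfer: commit only if every step succeeds.
    fail_after_debit simulates a crash between T1 and T2."""
    snapshot = dict(accounts)           # state to roll back to
    try:
        accounts[src] -= amount         # T1: debit X
        if fail_after_debit:
            raise TransactionError("failure between T1 and T2")
        accounts[dst] += amount         # T2: credit Y
    except TransactionError:
        accounts.clear()
        accounts.update(snapshot)       # rollback: restore original state
        return False
    return True                         # commit
```

With balances X = 500 and Y = 200, a successful transfer leaves 400/300; a failed one rolls back, so no $100 ever vanishes mid-transfer.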
2. Consistency: Maintaining Valid Data States
Consistency ensures that a database remains in a valid state before and
after a transaction. It guarantees that any transaction will take the database
from one consistent state to another, maintaining the rules and constraints
defined for the data. In simple terms, a transaction should only take the
database from one valid state to another. If a transaction violates any
database rules or constraints, it should be rejected, ensuring that only
consistent data exists after the transaction.
Example: Suppose the sum of all balances in a bank system should always
be constant. Before a transfer, the total balance is $700. After the
transaction, the total balance should remain $700. If the transaction fails in
the middle (like updating one account but not the other), the system should
maintain its consistency by rolling back the transaction
Total before T occurs = 500 + 200 = 700 .
Total after T occurs = 400 + 300 = 700 .

3. Isolation: Ensuring Concurrent Transactions Don’t Interfere


This property ensures that multiple transactions can occur concurrently
without leading to the inconsistency of the database state. Transactions
occur independently without interference. Changes occurring in a particular
transaction will not be visible to any other transaction until that particular
change in that transaction is written to memory or has been committed.
This property ensures that when multiple transactions run at the same time,
the result will be the same as if they were run one after another in a specific
order. This property prevents issues such as dirty reads (reading
uncommitted data), non-repeatable reads (data changing between two reads
in a transaction), and phantom reads (new rows appearing in a result set after
the transaction starts).
Example: Consider two transactions T and T”.
 X = 500, Y = 500

Transaction T:
 T wants to transfer $50 from X to Y.
 T reads X (value: 500), deducts $50 from X (new X = 450), and adds $50
to Y (new Y = 550).
Transaction T”:
 T” starts and reads X (value: 500) and Y (value: 500), then calculates the
sum: 500 + 500 = 1000.
But if T” runs while T is in progress, it might read the new X (450) and the
old Y (500), calculating an incorrect sum of 950. Isolation ensures that T”
does not see such an intermediate state of X and Y while T is still in
progress. Both transactions should be independent, and T” should only see
either the state before T began or the final state after T commits. This
prevents inconsistent results like the incorrect sum.
4. Durability: Persisting Changes
This property ensures that once the transaction has completed execution,
the updates and modifications to the database are stored in and written to
disk and they persist even if a system failure occurs. These updates now
become permanent and are stored in non-volatile memory. In the event of a
failure, the DBMS can recover the database to the state it was in after the
last committed transaction, ensuring that no data is lost.
Example: After successfully transferring money from Account A to Account B,
the changes are stored on disk. Even if there is a crash immediately after the
commit, the transfer details will still be intact when the system recovers,
ensuring durability.
Timestamp based Concurrency Control
Last Updated : 21 Jan, 2025



Timestamp-based concurrency control is a method used in database systems


to ensure that transactions are executed safely and consistently without
conflicts, even when multiple transactions are being processed
simultaneously. This approach relies on timestamps to manage and
coordinate the execution order of transactions. Refer to the timestamp of a
transaction T as TS(T).
What is Timestamp Ordering Protocol?
The Timestamp Ordering Protocol is a method used in database systems to
order transactions based on their timestamps. A timestamp is a unique
identifier assigned to each transaction, typically determined using the system
clock or a logical counter. Transactions are executed in the ascending order of
their timestamps, ensuring that older transactions get higher priority.
For example:
 If Transaction T1 enters the system first, it gets a timestamp TS(T1) = 007
(assumption).
 If Transaction T2 enters after T1, it gets a timestamp TS(T2) = 009
(assumption).
This means T1 is “older” than T2 and T1 should execute before T2 to maintain
consistency.
Key Features of Timestamp Ordering Protocol:
Transaction Priority:
 Older transactions (those with smaller timestamps) are given higher
priority.
 For example, if transaction T1 has a timestamp of 007 and
transaction T2 has a timestamp of 009, T1 will execute first as it
entered the system earlier.
Early Conflict Management:
 Unlike lock-based protocols, which manage conflicts during execution,
timestamp-based protocols start managing conflicts as soon as a
transaction is created.
Ensuring Serializability:
 The protocol ensures that the schedule of transactions is serializable. This
means the transactions can be executed in an order that is logically
equivalent to their timestamp order.
Basic Timestamp Ordering

Precedence Graph for TS ordering


The Basic Timestamp Ordering (TO) Protocol is a method in database
systems that uses timestamps to manage the order of transactions. Each
transaction is assigned a unique timestamp when it enters the system,
ensuring that all operations follow a specific order and making the
schedule conflict-serializable and deadlock-free.
 Suppose an old transaction Ti has timestamp TS(Ti); a new transaction
Tj is assigned timestamp TS(Tj) such that TS(Ti) < TS(Tj).
 The protocol manages concurrent execution such that the timestamps
determine the serializability order.
 The timestamp ordering protocol ensures that any conflicting read and
write operations are executed in timestamp order.
 Whenever some Transaction T tries to issue a R_item(X) or a W_item(X),
the Basic TO algorithm compares the timestamp of T with R_TS(X) &
W_TS(X) to ensure that the Timestamp order is not violated.
This describes the Basic TO protocol in the following two cases:
Whenever a Transaction T issues a W_item(X) operation, check the following
conditions:
 If R_TS(X) > TS(T) or W_TS(X) > TS(T), then abort and roll back T and
reject the operation; else
 Execute the W_item(X) operation of T and set W_TS(X) to the larger
of TS(T) and the current W_TS(X).
Whenever a Transaction T issues a R_item(X) operation, check the following
conditions:
 If W_TS(X) > TS(T), then abort and roll back T and reject the operation; else
 If W_TS(X) <= TS(T), then execute the R_item(X) operation of T and set
R_TS(X) to the larger of TS(T) and the current R_TS(X).
Whenever the Basic TO algorithm detects two conflicting operations that
occur in an incorrect order, it rejects the later of the two by
aborting the Transaction that issued it.
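The two cases can be sketched as functions over a data item that carries its read and write timestamps (the dict field names are my own):

```python
def basic_to_write(item, ts):
    """Basic TO rule for W_item(X) by a transaction with timestamp ts.
    item holds 'r_ts' (R_TS) and 'w_ts' (W_TS). Returns True if allowed."""
    if item["r_ts"] > ts or item["w_ts"] > ts:
        return False                          # abort and roll back T
    item["w_ts"] = max(item["w_ts"], ts)
    return True

def basic_to_read(item, ts):
    """Basic TO rule for R_item(X)."""
    if item["w_ts"] > ts:
        return False                          # abort and roll back T
    item["r_ts"] = max(item["r_ts"], ts)
    return True
```

For instance, once a transaction with timestamp 5 has read X, a write by an older transaction (timestamp 3) arrives "too late" and is rejected.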


Advantages of Basic TO Protocol


 Conflict Serializable: Ensures all conflicting operations follow the
timestamp order.
 Deadlock-Free: Transactions do not wait for resources, preventing
deadlocks.
 Strict Ordering: Operations are executed in a predefined, conflict-free
order based on timestamps.
Drawbacks of Basic Timestamp Ordering (TO) Protocol
 Cascading Rollbacks : If a transaction is aborted, all dependent
transactions must also be aborted, leading to inefficiency.
 Starvation of Newer Transactions : Older transactions are prioritized,
which can delay or starve newer transactions.
 High Overhead: Maintaining and updating timestamps for every data item
adds significant system overhead.
 Inefficient for High Concurrency: The strict ordering can reduce
throughput in systems with many concurrent transactions
1. Two-Phase Commit Protocol (2PC)
This is a classic protocol used to achieve atomicity in distributed
transactions.
 It involves two phases: a prepare phase where all participants agree to
commit or abort the transaction, and a commit phase where the decision
is executed synchronously across all participants.
 2PC ensures that either all involved resources commit the transaction or
none do, thereby maintaining atomicity.
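The coordinator side of the two phases can be sketched as follows; the participant interface (prepare/commit/abort) is a hypothetical stand-in for whatever the resource managers expose:

```python
def two_phase_commit(participants):
    """Coordinator sketch of 2PC. Each participant exposes
    prepare() -> bool, commit(), and abort()."""
    # Phase 1 (prepare): collect votes; short-circuits on the first 'no'.
    if all(p.prepare() for p in participants):
        # Phase 2 (commit): every participant voted yes.
        for p in participants:
            p.commit()
        return "committed"
    # Any 'no' vote (or failure to vote) aborts the whole transaction.
    for p in participants:
        p.abort()
    return "aborted"
```

Either every participant commits or every participant aborts, which is exactly the atomicity guarantee described above; the blocking risk arises when a participant that voted yes must wait for the coordinator's decision.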
2. Three-Phase Commit Protocol (3PC)
3PC extends 2PC by adding an extra phase (pre-commit phase) to address
certain failure scenarios that could lead to indefinite blocking in 2PC.
 In 3PC, participants first agree to prepare to commit, then to commit, and
finally to complete or abort the transaction.
 This protocol aims to reduce the risk of blocking seen in 2PC by
introducing an additional decision-making phase.
