Unit-2
Concurrent Processes
Operating Systems (BMC-203) Prepared by- Prof. Asheesh Pandey
Lecture-1
(23/5/25)
Process Concept:-
❑ A process is a program in execution.
❑ A process is more than the program code, which is sometimes known as the text section.
❑ Process execution must progress in sequential fashion.
❑ A process includes: the program counter, processor registers, the process stack (contains temporary data such as
method parameters, return addresses, and local variables), and the data section (contains global variables).
Process State:- As a process executes, it changes its state.
New State: The process is being created.
Running State: A process is said to be running if it has
the CPU, that is, the process is actually using the CPU at
that particular instant.
Blocked (or waiting) State: A process is said to be
blocked if it is waiting for some event to happen, such
as an I/O completion, before it can proceed. Note
that a blocked process is unable to run until some external
event happens.
Ready State: A process is said to be ready if it needs the
CPU to execute. A ready process is runnable but has
temporarily stopped running to let another process run.
Terminated State: The process has finished execution.
Difference between process and program:-
1) Both are the same entity with different names: when this entity is not executing, it is called a program, and
when it is executing, it becomes a process.
2) Program is a static object whereas a process is a dynamic object.
3) A program resides in secondary storage whereas a process resides in main memory.
4) The span time of a program is unlimited but the span time of a process is limited.
5) A process is an 'active' entity whereas a program is a 'passive' entity.
6) A program is an algorithm expressed in programming language whereas a process is expressed in assembly
language or machine language.
Relationship Between Processes of an Operating System:- The processes executing in an operating system are of one
of the following two types:
1. Independent Processes
2. Cooperating Processes
1. Independent Processes:- Its state is not shared with any other process. The result of execution depends only on
the input state. The result of the execution will always be the same for the same input. The termination of the
independent process will not terminate any other.
2. Cooperating Processes:- Its state is shared with other processes. The result of execution depends on the
relative execution sequence and cannot be predicted in advance (non-deterministic). The result of the execution
will not always be the same for the same input. The termination of a cooperating process may affect other
processes.
Process Creation:- A parent process can create child processes, which in turn can create further processes. When a
new process is created, several possible implementations exist:
•Parent and child can execute concurrently.
•The parent waits until all of its children have terminated.
•The parent and children share all resources in common.
•The children share only a subset of their parent’s resources.
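As a concrete illustration (a sketch, not part of the slides): on POSIX systems a parent creates a child with fork() and can wait for it with waitpid(). The function name run_child_and_wait and the status value are illustrative.

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Parent forks a child; the child terminates with the given status,
   and the parent waits for it, then reports the observed status. */
int run_child_and_wait(int child_status)
{
    pid_t pid = fork();
    if (pid == 0) {
        _exit(child_status);      /* child: terminate immediately */
    }
    int status = 0;
    waitpid(pid, &status, 0);     /* parent: block until the child terminates */
    return WEXITSTATUS(status);
}
```

Before the waitpid() call, parent and child execute concurrently, which is the first possibility listed above.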
Process Termination:- A child process can be terminated in the following ways:
•A parent may terminate the execution of one of its children for the following reasons:
1. The child has exceeded its allocated resource usage.
2. The task assigned to the child is no longer required.
•If a parent has terminated, then its children must be terminated.
Interleaving Processes & Overlapping processes :-
Interleaving Processes- It refers to the rapid switching between processes, creating an illusion of parallelism on a single-core
processor.
On the other hand, Overlapping processes represents true parallel execution, where multiple processes are running
simultaneously on a multi-core or multi-processor system.
Concurrency in OS:-
1. Concurrency in an operating system refers to the ability to execute multiple processes or threads
simultaneously, improving resource utilization, responsiveness and system efficiency.
2. It allows several tasks to be in progress at the same time, either by running on separate processors or through
context switching on a single processor.
3. Concurrency is essential in modern OS design to handle multitasking, increase system responsiveness, and
optimize performance for users and applications.
4. It may be supported by multi-threading or multi-processing whereby more than one process or threads are
executed simultaneously or in an interleaved fashion.
Principles of Concurrency:- Both interleaved and overlapped processes can be viewed as examples of concurrent
processes. The relative speed of execution cannot be predicted. It depends on the following:
• The activities of other processes
• The way operating system handles interrupts
• The scheduling policies of the operating system
Advantages of Concurrency:-
•Running of multiple applications.
•Better resource utilization.
•Better average response time.
•Better performance.
Lecture-2
(26/5/25)
Process Synchronization:-
❑ Process synchronization should be performed to ensure that two processes accessing common data simultaneously do not
create conflicts. The coordination of several concurrent processes in the computer system is known as process
synchronization.
❑ In a multi-processing environment where several tasks or processes run simultaneously, we need mechanisms for sharing
access to resources like memory, files, and devices.
Challenges in Process Synchronization:-
There are some challenges in the process synchronization which are as follows:
1. Race Conditions: A race condition occurs when multiple processes or threads access and manipulate shared data
concurrently and the final outcome depends on the particular order in which the accesses take place.
It can lead to unexpected or incorrect behavior.
2. Mutual Exclusion: Mutual exclusion (mutex) is a mechanism of process synchronization that prevents multiple processes
from accessing a shared resource at the same time.
3. Deadlocks: When two or more processes cannot go on as they are waiting for one another to free up a shared resource.
4. Starvation: Some processes might be denied access to the resource forever, impeding progress.
Importance of Process Synchronization:-
❖ Systems that are not equipped with appropriate synchronization mechanisms may produce unpredictable and
unpleasant results such as system crashes, data corruption, and many others.
❖ A synchronization mechanism allows a stable and effective multi-process environment. The stability, reliability, and
accuracy of an operating system's operation are premised upon successful synchronization.
❖ It is essential to data integrity when the same data is used several times or by different processors.
Mutual Exclusion-
❑ Mutual exclusion (mutex) is a mechanism of process synchronization that prevents multiple processes from
accessing a shared resource at the same time.
❑ It is a property of concurrency control that helps prevent race conditions.
❑ The requirement of mutual exclusion is that when process P1 is accessing a shared resource R1, another
process should not be able to access resource R1 until process P1 has finished its operation with resource
R1.
• Examples of such shared resources include global variables, files, and I/O devices such as printers.
How does it work?-
•A mutex is a variable that is set before accessing a shared resource and released after using it.
•When a mutex is set, no other process or thread can access the shared resource.
•A thread/process that is currently using a shared resource must lock the mutex to prevent other threads from
accessing it.
•The thread unlocks the mutex when it releases the resource.
Why it is used?-
•Mutual exclusion is used to control the entry and exit of processes in critical sections because critical sections are
regions of code that should not be executed by more than one thread/process at a time.
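A minimal sketch of the lock/access/unlock pattern using a POSIX mutex (the shared counter is illustrative; real code would call increment_shared from several threads):

```c
#include <pthread.h>

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static int shared_counter = 0;          /* the shared resource */

/* Critical section guarded by the mutex: while one thread holds
   the lock, no other thread can enter this section. */
void increment_shared(void)
{
    pthread_mutex_lock(&mutex);    /* set (lock) the mutex before access */
    shared_counter++;              /* critical section */
    pthread_mutex_unlock(&mutex);  /* release the mutex after use */
}

int read_shared(void)
{
    return shared_counter;
}
```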
Approaches To Implement Mutual Exclusion:-
•Software Method: Leave the responsibility to the processes themselves. These methods are usually highly
error-prone and carry high overheads.
•Hardware Method: Special-purpose machine instructions are used for accessing shared resources. This method is
faster but cannot provide a complete solution: hardware solutions cannot guarantee the absence of deadlock
and starvation.
•Programming Language Method: Provide support through the operating system or through the programming
language.
Requirements/Conditions of Mutual Exclusion:-
1. At any time, only one process is allowed to
enter its critical section.
2. The solution is implemented purely in
software on a machine.
3. A process remains inside its critical section for
a bounded time only.
4. No assumption can be made about the
relative speeds of asynchronous concurrent
processes.
5. A process cannot prevent any other process
from entering into a critical section.
6. A process must not be indefinitely postponed
from entering its critical section.
Critical Section:-
❑ A critical section is a part of a program where shared resources like memory or files are accessed by multiple
processes or threads.
❑ To avoid issues like data inconsistency or race conditions, process synchronization techniques ensure that only
one process or thread uses the critical section at a time.
➢ Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a segment of code, called a
critical section, in which the process may be changing common variables, updating a table, writing a file, and so
on.
➢ The important feature of the system is that, when one process is executing in its critical section, no other process
is to be allowed to execute in its critical section. Thus, the execution of critical sections by the processes is
mutually exclusive in time.
➢ Examples of critical sections include updating a global variable, modifying a database table or writing a file to a network
server.
The critical-section problem is to design a protocol that the processes can use to cooperate.
• Each process must request permission to enter its critical section.
• The section of code implementing this request is the entry section.
• The critical section may be followed by an exit section.
• The remaining code is the remainder section.

while(1)
{
    Entry section
    Critical section
    Exit section
    Remainder section
}
Solution to the critical-section problem must satisfy the following three
requirements:-
1. Mutual Exclusion: If process Pi is executing in its critical section, then no other processes can be executing in
their critical sections.
2. Progress: If no process is executing in its critical section and some processes wish to enter their critical sections,
then only those processes that are not executing in their remainder sections can participate in the decision of which
will enter its critical section next, and this selection cannot be postponed indefinitely.
3. Bounded Waiting: There exists a bound on the number of times that other processes are allowed to
enter their critical sections after a process has made a request to enter its critical section and before that request is
granted.
Lecture-3
Producer-Consumer Problem:-
➢ The Producer-Consumer problem is a classical multi-process synchronization problem, that is we are trying
to achieve process synchronization between more than one process.
➢ It’s a paradigm for cooperating processes.
➢ A producer process produces information that is consumed by a consumer process.
➢ To allow producer and consumer processes to run concurrently, we must have available a buffer of items that
can be filled by the producer and emptied by the consumer.
➢ A producer can produce one item while the consumer is consuming another item. The producer and consumer
must be synchronized, so that the consumer does not try to consume an item that has not yet been produced. In
this situation, the consumer must wait until an item is produced.
➢ For this, two types of buffers can be used-
✦ unbounded-buffer places no practical limit on the size of the buffer. The consumer may have to wait for new
items, but the producer can always produce new items.
✦ bounded-buffer assumes that there is a fixed buffer size. In this case, the consumer must wait if the buffer is empty
and the producer must wait if the buffer is full.
Bounded-Buffer – Shared-Memory Solution- The producer and consumer processes share the following data:
#define BUFFER_SIZE 8
typedef struct
{
...
} item;
item buffer[BUFFER_SIZE];
int in = 0; //initially in and out are 0
int out = 0;
• The buffer can hold at most BUFFER_SIZE-1 elements (elements occupy indices 0 to 7; one slot is always left empty to distinguish a full buffer from an empty one).
• The shared buffer is implemented as a circular array with two logical pointers: in and out.
• The variable in points to the next free position in the buffer and out points to the first full position in the buffer.
• The buffer is EMPTY when in == out.
• The buffer is FULL when ((in + 1) % BUFFER_SIZE) == out.
• The producer process has a local variable itemp in which the newly produced item is held before it is stored in the buffer.
• The consumer process has a local variable itemc into which the item to be consumed is copied from the buffer.
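The in/out arithmetic and the EMPTY/FULL conditions above can be checked with a small sketch (the function names are illustrative):

```c
#define BUFFER_SIZE 8

/* Circular-array index arithmetic for the shared buffer. One slot
   is sacrificed so that FULL and EMPTY remain distinguishable. */
int next_index(int i)                { return (i + 1) % BUFFER_SIZE; }
int buffer_is_empty(int in, int out) { return in == out; }
int buffer_is_full(int in, int out)  { return next_index(in) == out; }
```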
• The code for the producer and consumer processes follows:-

Bounded-Buffer – Producer Process
int count=0;   //shared: number of items currently in the buffer
void producer(void)
{
    int itemp;
    while(true)
    {
        produce_item(itemp);
        while(count==n);        /*Buffer FULL*/
        Buffer[in]=itemp;
        in=(in+1) mod n;
        count=count+1;          //Load Rp, m[count]; INCR Rp; Store m[count], Rp
    }
}

Bounded-Buffer – Consumer Process
void consumer(void)
{
    int itemc;
    while(true)
    {
        while(count==0);        /*Buffer EMPTY*/
        itemc=Buffer[out];
        out=(out+1) mod n;
        count=count-1;          //Load Rc, m[count]; DECR Rc; Store m[count], Rc
        process_item(itemc);
    }
}

Note: count=count+1 and count=count-1 each compile to three machine instructions (Load, INCR/DECR, Store), so they are not atomic; interleaving them between the two processes causes a race condition on count.
Solution of Producer-Consumer Problem Using Semaphores-
Counting semaphores: full = 0 (number of filled slots), empty = N (number of empty slots)
Binary semaphore: S = 1 (mutex protecting the buffer)

Producer-
produce_item(itemp);
Down(empty);
Down(S);
Buffer[in]=itemp;
in=(in+1) mod n;
Up(S);
Up(full);

Consumer-
Down(full);
Down(S);
itemc=Buffer[out];
out=(out+1) mod n;
Up(S);
Up(empty);
process_item(itemc);

Example: Buffer size = 8 (elements can enter into indices 0 to 7). Initially empty = 8, full = 0, S = 1.
Lecture-4
Semaphores:-
❑ A semaphore is an integer variable used in operating systems to manage how different processes share
resources like memory, data, or I/O devices without causing conflicts.
❑ It solves the critical section problem.
❑ Semaphores are used to implement critical sections, which are regions of code that must be executed by only
one process at a time. They prevent many processes from accessing the same resource simultaneously.
❑ Initial value of a semaphore = number of resources it controls.
A semaphore S is an integer variable that is accessed only through two standard atomic operations: wait and signal.
➢ The classical definition of wait in pseudocode is-
wait(S) {
    while (S <= 0);   // busy wait
    S--;
}
•wait(): It is used when a process wants to access a resource. The wait operation decrements the value of the semaphore.

➢ The classical definition of signal in pseudocode is-
signal(S) {
    S++;
}
•signal(): It is used when a process wants to release a resource. The signal operation increments the value of the semaphore.
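The counter behavior of wait and signal can be modeled in a single-threaded sketch (here try_wait returns failure instead of busy-waiting, and nothing below is atomic; it only illustrates the semantics):

```c
/* Non-atomic model of the classical semaphore operations. */
int try_wait(int *S)
{
    if (*S <= 0) return 0;   /* the classical wait() would spin here */
    (*S)--;                  /* acquire: decrement the semaphore */
    return 1;
}

void signal_sem(int *S)
{
    (*S)++;                  /* release: increment the semaphore */
}
```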
Modifications to the integer value of the semaphore in the wait and signal operations must be executed indivisibly.
That is, when one process modifies the semaphore value, no other process can simultaneously modify that
same semaphore value.
For example- Imagine you and your friends want to use a single swing in a park. To make sure only one person
uses the swing at a time, you can use a token. Whoever has the token gets to use the swing, and when they’re
done, they pass the token to the next person.
In programming, a semaphore works like that token. It helps manage the use of resources, like memory or files, by
allowing only a certain number of processes to access the resource at the same time. This prevents conflicts and
ensures smooth operation.
Semaphores are of two types:-
•Binary Semaphore: It allows only one process to enter the critical section. It can take only the values 0 or 1 and is
initialized to 1. It is used to solve the critical section problem with multiple processes and a single resource.
It is also known as a mutex lock, as it provides mutual exclusion.
•Counting Semaphore: Used when multiple instances of a resource exist. The semaphore is initialized to the number of
resources available; its initial value can be from 0 to n, where n = number of resources to control. It allows multiple
processes to enter the critical section, up to the resource count.
Solution of critical section problem using semaphores-
Let P1, P2, P3, ..., Pn be the processes that want to enter the critical section.

do
{
    wait(S);
    //critical section
    signal(S);
    //remainder section
} while (T);   //T = true; repeat until the process terminates

where wait and signal are defined as-

wait(S) {
    while (S <= 0);
    S--;
}

signal(S) {
    S++;
}
Test and Set Operation (hardware synchronization):-
➢ Test Set Lock approach utilizes a shared variable, often referred to as a "lock," to regulate access to the
critical section of code, ensuring that only one process can execute in that section at a time and
preventing race conditions.
➢ Test and Set Lock operation (TSL) is a process synchronization mechanism among the processes
executing concurrently.
➢ If one process is currently executing a test-and-set, no other process is allowed to begin another test-
and-set until the first process test-and-set is finished.
➢ Test and Set is a hardware solution to the synchronization problem. It is implemented as-
Lock value = 0 means the critical section is currently free
and no process is present inside it.
Lock value = 1 means the critical section is currently
occupied and a process is present inside it.
The characteristics of this test and set synchronization mechanism are-
❑ It ensures mutual exclusion.
❑ It is deadlock free.
❑ It does not guarantee bounded waiting and may cause starvation.
❑ It is not architecturally neutral, since it requires the hardware platform to support the test-and-set instruction.
❑ It is a busy waiting solution which keeps the CPU busy when the process is actually waiting.
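In C11 the hardware instruction is exposed as atomic_flag_test_and_set; a sketch of the lock described above (a real spinlock would loop until acquisition succeeds, which is exactly the busy waiting noted above):

```c
#include <stdatomic.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;  /* 0 = section free */

/* Atomically set the lock and return its OLD value:
   0 means we acquired the lock, 1 means it was already held. */
int tsl_try_acquire(void)
{
    return atomic_flag_test_and_set(&lock_flag);
}

void tsl_release(void)
{
    atomic_flag_clear(&lock_flag);   /* lock = 0: section free again */
}
```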
Analysis of Test And Set Lock-
Mutual Exclusion-Mutual Exclusion is guaranteed in TSL
mechanism since a process can never be preempted just before
setting the lock variable. Only one process can see the lock
variable as 0 at a particular time and that's why, the mutual
exclusion is guaranteed.
Progress- According to the definition of progress, a process
which doesn't want to enter the critical section should not stop
other processes from getting into it. In the TSL mechanism, a process
executes the TSL instruction only when it wants to get into the
critical section. The value of the lock will always be 0 if no process
wants to enter the critical section; hence progress
is always guaranteed in TSL.
Bounded Waiting-
Bounded waiting is not guaranteed in TSL. Some process might
not get a chance for a long time; we cannot guarantee that a
process will get to enter its critical section within a bounded
time.
Architectural Neutrality-
TSL does not provide architectural neutrality. It depends on the
hardware platform: the test-and-set instruction must be provided
by the hardware, and some platforms might not provide it. Hence
it is not architecturally neutral, and portability is not guaranteed.
Advantages of Semaphores:-
1. Semaphores allow only one process into the critical section. They follow the mutual exclusion principle
strictly and are much more efficient than some other methods of synchronization.
2. There is no resource wastage because of busy waiting in semaphores as processor time is not wasted
unnecessarily to check if a condition is fulfilled to allow a process to access the critical section.
3. Semaphores are implemented in the machine independent code of the microkernel. So, they are machine
independent.
4. They allow flexible management of resources.
Disadvantages of Semaphores:-
1. Semaphores are complicated so the wait and signal operations must be implemented in the correct order to
prevent deadlocks.
2. Semaphores are impractical for large scale use as their use leads to loss of modularity. This happens
because the wait and signal operations prevent the creation of a structured layout for the system.
3. Semaphores may lead to priority inversion, where low priority processes access the critical section
first and high priority processes access it later.
4. Semaphore programming is complicated and there are chances of not achieving mutual exclusion.
Classical Synchronization Problems using Semaphores Concept:-
1. Producer-Consumer Problem
2. Traffic Light Control
3. Bank Transaction Processing
4. Print Queue Management
5. Railway Track Management
6. Dining Philosopher's Problem
7. Reader-Writer Problem
Lecture-5
Interprocess Communication (IPC):-
• Interprocess communication is the mechanism provided by the operating system that allows processes to
communicate, to share data and synchronize their actions with each other.
• A diagram that illustrates interprocess communication is as follows −
The different approaches/mechanisms to implement interprocess
communication are given as follows −
1. Pipe:- A pipe is a data channel that is unidirectional. Two pipes can be used to create a two-way data channel between two
processes. This uses standard input and output methods.
2. Socket:-The socket is the endpoint for sending or receiving data in a network. This is true for data sent between processes
on the same computer or data sent between different computers on the same network. Most of the operating systems use
sockets for interprocess communication.
3. File:-A file is a data record that may be stored on a disk or acquired on demand by a file server. Multiple processes can
access a file as required. All operating systems use files for data storage.
4. Signal:-Signals are useful in interprocess communication in a limited way. They are system messages that are sent from one
process to another. Normally, signals are not used to transfer data but are used for remote commands between processes.
5. Shared Memory:-Shared memory is the memory that can be simultaneously accessed by multiple processes. This is done
so that the processes can communicate with each other.
6. Message Queue:-Multiple processes can read and write data to the message queue without being connected to each other.
Messages are stored in the queue until their recipient retrieves them. Message queues are quite useful for interprocess
communication and are used by most operating systems.
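The pipe mechanism from item 1 can be sketched with the POSIX pipe() call; here both ends are held by one process purely to show the unidirectional data channel (the function name is illustrative):

```c
#include <unistd.h>
#include <string.h>

/* Create a pipe, write msg into its write end, and read it back
   from its read end. Returns the number of bytes read. */
ssize_t pipe_roundtrip(const char *msg, char *buf, size_t buflen)
{
    int fd[2];
    if (pipe(fd) != 0) return -1;   /* fd[0] = read end, fd[1] = write end */
    write(fd[1], msg, strlen(msg));
    ssize_t n = read(fd[0], buf, buflen);
    close(fd[0]);
    close(fd[1]);
    return n;
}
```

In real IPC, the two file descriptors would be split between a parent and a child process after fork(), giving a one-way channel between them.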
Processes within a system may be independent or cooperating.
• Independent process cannot affect or be affected by the execution of another process.
• Cooperating process can affect or be affected by other processes, including sharing data. Cooperating processes need
interprocess communication (IPC).
• Two models of Inter Process Communication are-
➢ Shared memory
➢ Message passing
Shared Memory Model:-
❑ Shared memory is the memory that can be simultaneously accessed by multiple processes.
❑ This is done so that the processes can communicate with each other. All POSIX (Portable Operating System Interface)
systems like Unix, as well as Windows operating systems use shared memory.
Advantage of Shared Memory Model:-
➢ Memory communication is faster in the shared memory model as compared to the message passing model on the same machine.

Disadvantages of Shared Memory Model:-
❑ All the processes that use the shared memory model need to make sure that they are not writing to the same memory location.
❑ The shared memory model may create problems like synchronization and memory protection that need to be addressed.

Just For Knowledge- About POSIX: POSIX ensures software written for one POSIX-compliant system can run on other POSIX-compliant systems without modification.
Message Passing Model:-
• Multiple processes can read and write data to the message queue without being connected to each other.
• Messages are stored on the message queue until their recipient retrieves them.
• Message queues are quite useful for interprocess communication and are used by most operating systems.
• It is used in distributed environments where the communicating processes are present on remote machines which are
connected with the help of a network.
• IPC facility provides two operations:-
• send(message)
• receive(message)
• The message size is either fixed or variable.
• If processes P1 and P2 wish to communicate, they need to:
➢ Establish a communication link between them.
➢ Exchange messages via send/receive.
Advantage of Message Passing Model:-
➢ The message passing model is much easier to implement than the shared
memory model.
Disadvantage of Message Passing Model:-
➢ The message passing model has slower communication than the shared
memory model, because connection setup takes time and may sometimes be
interrupted.
The major differences between the shared memory and message passing models −

Shared Memory:
1. A region of memory is shared for data communication.
2. It is used for communication on single-processor and multiprocessor systems where the communicating processes are on the same machine and share a common address space.
3. The code that reads or writes the shared data must be written explicitly by the application programmer.
4. It provides maximum speed of computation, because communication is done through shared memory; system calls are used only to establish the shared memory region.
5. The processes must make sure they are not writing to the same location simultaneously.
6. It follows a faster communication strategy compared to the message passing technique.

Message Passing:
1. Messages are used for communication.
2. It is used in distributed environments where the communicating processes are on remote machines connected by a network.
3. No such code is required, because the message passing facility provides mechanisms for communication and synchronization of the actions performed by the communicating processes.
4. It is time-consuming, because it is implemented through the kernel (system calls).
5. It is useful for sharing small amounts of data, so conflicts need not occur.
6. Communication is slower compared to the shared memory technique.
Lecture-6
Critical Section Problem
i) Dekker’s Problem
ii) Peterson’s Problem
Dekker's Solution/Problem/Algorithm:-
1. If two or more processes simultaneously access the same shared resource, it can lead to incorrect results or data
corruption.
2. To solve this problem, one of the most popular algorithms is Dekker's algorithm, attributed to the Dutch mathematician
Th. J. Dekker and first described by Edsger Dijkstra in 1965. It's a classic software-based solution to the critical-section problem.
3. It is a simple and efficient algorithm “which is used only for two processes” and allows only one process among two to
access a shared resource at a time.
4. Dekker’s algorithm achieves mutual exclusion by using two flags that indicate each process's intent to enter the critical
section. By alternating the flags' use and checking if the other process's flag is set, the algorithm ensures that only one
process enters the critical section at a time.
5. The algorithm is known for its simplicity and effectiveness in ensuring that only one process can enter a critical section at a
time, thereby preventing race conditions and ensuring data integrity.
Principles of Dekker’s Algorithm
Dekker’s Algorithm is based on the following key principles:
•Flags and turn variable: The algorithm uses a shared Boolean array to indicate each process's desire to enter the critical
section. Additionally, it uses a turn variable to determine which process should enter the critical section next; the
turn variable resolves the conflict when both processes want to enter at the same time.
•Entry Protocol: When a process wants to enter the critical section, it sets its flag in the array to indicate its
intention. It then checks if the other process’s flag is set and if it is their turn to enter. If not, the process waits until it
can proceed.
•Exit Protocol: After a process exits the critical section, it clears its flag, allowing the other process to enter. The
turn variable is then updated to switch the privilege to the other process.
Implementation of Dekker's Algorithm:-
Dekker's Algorithm can be implemented using shared memory and ordinary read and write operations; the shared
turn variable is checked on every attempt. The code for the two processes Pi and Pj follows.

Process Pi- (initially flag[i]=false)
do
{
    flag[i]=true;
    while(flag[j])
    {
        if(turn==j)
        {
            flag[i]=false;
            while(turn==j);
            flag[i]=true;
        }
    }
    //critical section
    turn=j;
    flag[i]=false;
    //remainder section
} while(true);

Process Pj- (initially flag[j]=false)
do
{
    flag[j]=true;
    while(flag[i])
    {
        if(turn==i)
        {
            flag[j]=false;
            while(turn==i);
            flag[j]=true;
        }
    }
    //critical section
    turn=i;
    flag[j]=false;
    //remainder section
} while(true);
Peterson’s Solution:-
❑ A classic software-based solution to the critical-section problem known as Peterson’s solution.
❑ Peterson's algorithm/Peterson's solution is a concurrent programming algorithm for mutual exclusion that
allows two or more processes to share a single-use resource without any conflict or interference, using only
shared memory for communication. It was formulated by Gary L. Peterson in 1981.
❑ It ensures mutual exclusion, meaning only one process can access the critical section at a time, and avoids
race conditions.
❑ It is simple, easy to understand, and serves as a foundational concept in process synchronization.
❑ Peterson’s Algorithm uses two simple variables one to indicate whose turn is to access the critical section and
another to show if a process is ready to enter.
Peterson’s Algorithm Explanation:-
• Peterson’s Algorithm is a mutual exclusion solution used to ensure that two processes do not enter
their critical sections at the same time.
• The algorithm uses two main components: a turn variable and a flag array.
➢ The turn variable is an integer that indicates whose turn it is to enter the critical section; it is
shared among the processes.
➢ The flag array contains a Boolean value for each process, indicating whether that process wants to
enter the critical section or not.
Initially, Boolean flag[2];
int turn; //shared variable
flag[0]=flag[1]=false;
Peterson’s Algorithm (Step by Step):-
•Set turn to either 0 or 1, indicating
which process can enter its critical
section first.
•Repeat indefinitely−
•Set flag[i] to true, indicating that
process i wants to enter into its critical
section.
•Set turn to j, the other process index.
•While flag[j] is true and turn equals j,
wait.
•Enter the critical section.
•Set flag[i] to false, indicating that
process i is done with its critical
section.
•Remainder section.
Differences between Dekker’s and Peterson’s Solution:-
Both Dekker's and Peterson's algorithms are solutions to the mutual exclusion problem in concurrent programming, ensuring
that only one process can access a shared resource at a time, but they differ in their approach and suitability for different
scenarios.
Dekker's algorithm is designed strictly for two processes, while Peterson's algorithm can be extended to more than two
processes, though with potential efficiency drawbacks.
Lecture-7
Classical Problem in Concurrency
Dining Philosopher Problem
The Dining Philosophers Problem:-
➢ The dining philosophers problem is a classical problem of synchronization.
➢ Five philosophers are sitting around a circular table and their job is to think and eat alternately.
➢ A bowl of noodles is placed at the center of the table, along with five chopsticks (forks), one between each pair of adjacent philosophers.
➢ To eat, a philosopher needs both their left and their right chopstick.
➢ A philosopher can only eat if both the immediate left and right chopsticks of the philosopher are available.
➢ If both immediate left and right chopsticks are not available, the philosopher puts down whichever chopstick they hold and starts thinking again.
➢ To eat, a philosopher picks up the left chopstick first and then the right chopstick.
Shared data-
1. Bowl of noodles (data set)
2. Array of five chopstick semaphores, S[5], all initialized to 1.
Operating Systems (BMC-203) Prepared by- Prof. Asheesh Pandey
Case 1- Philosophers come and eat one by one: no problem arises.
Case 2- Preemption happens and a race condition arises; it is removed by using semaphores.

Structure of philosopher i without semaphores (N = number of chopsticks):

void philosopher(int i)
{
    while (1)
    {
        // THINKING
        take_chopstick(i);             // left chopstick
        take_chopstick((i + 1) % N);   // right chopstick
        // EATING THE NOODLES
        put_chopstick(i);
        put_chopstick((i + 1) % N);
        // THINKING
    }
}

Note:- An array of semaphores is used, one per chopstick: S[5] = S0, S1, S2, S3, S4, all initialized to 1.

Structure of philosopher i with semaphores:

void philosopher(int i)
{
    while (1)
    {
        // THINKING
        wait(S[i]);                    // pick up left chopstick
        wait(S[(i + 1) % N]);          // pick up right chopstick
        // EATING THE NOODLES
        signal(S[i]);                  // put down left chopstick
        signal(S[(i + 1) % N]);        // put down right chopstick
        // THINKING
    }
}

Ref:- [Link]
Step by Step Algorithm-
➢ Suppose philosopher P0 wants to eat: it enters the philosopher() function and executes take_chopstick(i), by which it
holds chopstick C0; after that it executes take_chopstick((i + 1) % 5), by which it holds chopstick C1 (since i = 0,
(0 + 1) % 5 = 1).
➢ Similarly, suppose philosopher P1 now wants to eat: it enters the philosopher() function and executes take_chopstick(i),
by which it holds chopstick C1; after that it executes take_chopstick((i + 1) % 5), by which it holds chopstick C2
(since i = 1, (1 + 1) % 5 = 2).
➢ But practically chopstick C1 is not available, as it has already been taken by philosopher P0; hence the above code
generates problems and produces a race condition.
Solution of the Dining Philosophers Problem by Semaphore-
❑ We use a semaphore to represent each chopstick, and this truly acts as a solution of the Dining Philosophers Problem.
Wait and signal operations are used: to pick up a chopstick the wait operation is executed, while to release a chopstick
the signal operation is executed.
❑ From the above solution of the dining philosopher problem, we have proved that no two neighboring philosophers can eat
at the same point in time.
❑ But the drawback of the above solution is that this solution can lead to a deadlock condition. This situation happens if all
the philosophers pick their left chopstick at the same time, which leads to the condition of deadlock and none of the
philosophers can eat.
Solution of deadlock:-
❖ A philosopher should be allowed to pick up their chopsticks only if both chopsticks (left and right) are available at the
same time.
❖ The first four philosophers (P0, P1, P2, and P3) should pick the left chopstick first and then the right chopstick, whereas
the last philosopher P4 should pick the right chopstick first and then the left chopstick.
Note:- To avoid the deadlock condition, change the sequence of taking and putting
chopsticks of any philosopher.
For the last philosopher (i = N - 1) the order is reversed:

void philosopher(int i)
{
    while (1)
    {
        // THINKING
        wait(S[(i + 1) % N]);          // pick up right chopstick first
        wait(S[i]);                    // then the left chopstick
        // EATING THE NOODLES
        signal(S[(i + 1) % N]);
        signal(S[i]);
        // THINKING
    }
}
Lecture-8
Classical Problem in Concurrency
Sleeping Barber Problem
Sleeping Barber Problem:-
➢ The Sleeping Barber problem is a classic problem in process synchronization.
➢ It is based upon a hypothetical barber shop which has:-
❑ one barber,
❑ one barber chair,
❑ one waiting room with N chairs for the customers.
Important Conditions:-
✓ If there is no customer, then the barber sleeps in his own chair.
✓ When a customer arrives, he has to wake up the barber.
✓ If there are many customers and the barber is cutting a
customer’s hair, then the remaining customers wait if there
are empty chairs in the waiting room.
✓ If a new customer arrives and there is no empty chair
in the waiting room, the customer leaves the shop.
✓ The customer who arrived earliest goes to the barber and
checks whether the barber is free; if the barber is free, the
customer sits in the barber chair for a haircut, otherwise
the customer waits in a waiting-room chair.
Solution:- Three semaphores and a shared variable are used in the solution to this problem.
Customer : Initially zero (0); it counts the number of customers present in the waiting room (the customer in
the barber chair is not included because he is not waiting).
Barber : Initially zero (0); it counts the number of barbers (0 or 1) who are idle and waiting for a customer.
Mutex : Initially one (1); it is used as a lock to provide the mutual exclusion required when the shared
variable waiting is updated.
Waiting : A shared variable, initially zero (0), used to count the waiting customers.
Note:- “The reason for using the waiting variable is that there is no way to read the current value of the semaphore
customer”.
Step by Step Solution:-
✓ When the barber shows up in the morning, he executes the barber procedure; because
customer is initially 0, he blocks and goes to sleep until the first customer turns up.
✓ When a customer arrives, he executes the customer procedure: the customer acquires the
mutex to enter the critical region; if another customer enters thereafter, the second one
will not be able to do anything until the first one has released the mutex.
✓ The customer then checks the chairs in the waiting room: if the number of waiting customers
is less than the number of chairs, he sits down; otherwise he leaves and releases the mutex.
✓ If a chair is available, the customer sits in the waiting room, increments the variable
waiting, and also ups the customer semaphore; this wakes up the barber if he is
sleeping.
✓ At this point, customer and barber are both awake and the barber is ready to give that
person a haircut. When the haircut is over, the customer exits the procedure, and if there
are no customers in the waiting room the barber sleeps.
Lecture-9
(Reader & Writer Problem)