Module 2: Process Synchronization

Syllabus
The critical-section problem, Peterson's solution, Synchronization hardware, Mutex locks, Semaphores, Classic problems of synchronization, Monitors, Synchronization examples. Process scheduling: Criteria, Scheduling algorithms, Multiprocessor scheduling, Real-time CPU scheduling. Deadlocks: System model, Characterization, Methods for handling deadlocks, Deadlock prevention, Avoidance, Detection and recovery from deadlock.


Process Synchronization is used in a computer system to ensure that multiple processes or threads can run concurrently without interfering with each other.

The main objectives of process synchronization are:

⮚ to ensure that multiple processes access shared resources without interfering with each other
⮚ to prevent inconsistent data due to concurrent access.

Process Synchronization is the coordination of the execution of multiple processes in a multi-process system to ensure that they access shared resources in a controlled and predictable manner.
Critical Section

A critical section is a segment of code that accesses a shared resource.
It is also known as a critical region.
It is a code segment that only one process can execute at a particular point in time.
Conditions That Require Process Synchronization

1. Critical Section:
⮚ It is the part of the program where shared resources are accessed.
⮚ Only one process can execute the critical section at a given point of time.
⮚ If there are no shared resources, there is no need for synchronization.

2. Race Condition:
● When more than one process executes the same code or accesses the same shared memory, the value of a shared variable may end up wrong.
● The processes "race" to access and modify the shared data, and the final result depends on the order in which they execute.
● This situation is known as a race condition.
Critical Section Problem

The critical section problem is the problem of designing a way for cooperating processes to access shared resources without creating data inconsistencies.
Sections of a program on OS
The four essential sections of a program on an OS are:

1. Entry Section: This decides the entry of a process into the critical section.
2. Critical Section: This allows a process to access and modify the shared variable.
3. Exit Section: This allows a process waiting in the Entry Section to enter the Critical Section, and removes a process from the critical section after its execution is completed.
4. Remainder Section: The parts of the code that are not in the above three sections are collectively called the remainder section.
The critical section problem must satisfy three requirements:
• Mutual Exclusion: If a process is executing in its critical section, then no other process is allowed to execute in its critical section.
• Progress: If no process is executing in the critical section and other processes are waiting outside it, then only those processes that are not executing in their remainder section can participate in deciding which will enter the critical section next, and the selection cannot be postponed indefinitely.
• Bounded Waiting: There is a bound on the number of times other processes may enter their critical sections after a process has requested entry and before that request is granted. Thus, no process waits forever to enter its critical section.
Common Solutions to the Critical Section Problem
● Peterson's Solution
● Synchronization Hardware
● Mutex Locks
● Semaphore Solution
Peterson's Algorithm is a well-known solution for ensuring mutual exclusion in process synchronization.

It is designed to manage access to shared resources between two processes in a way that prevents conflicts or data corruption.

The algorithm ensures that only one process can enter the critical section at any given time while the other process waits its turn.

Peterson's Algorithm uses two simple variables:
• one to indicate whose turn it is to access the critical section
• another to show whether a process is ready to enter.

This method is often used in scenarios where two processes need to share resources or data without interfering with each other.
Peterson's Algorithm is a mutual exclusion solution used to ensure that two processes do not enter their critical sections at the same time.

The algorithm uses two main components: a turn variable and a flag array.
The turn variable is an integer that indicates whose turn it is to enter the critical section.
The flag array contains a Boolean value for each process, indicating whether that process wants to enter the critical section.
Peterson's Algorithm, step by step:

• Initial Setup: Initially, both processes set their respective flag values to false, meaning neither wants to enter the critical section. The turn variable is set to the ID of one of the processes (either 0 or 1), indicating that it is that process's turn to enter.

• Intention to Enter: When a process wants to enter the critical section, it sets its flag value to true, signaling its intent to enter.

• Yielding the Turn: The process then sets the turn variable to the other process's ID, giving the other process the next turn. If both processes try to enter at once, the last write to turn decides which one waits.
• Waiting Loop: Both processes enter a loop where they check the flag of the other process and the turn variable:
  • if the other process wants to enter (i.e., flag[1 - processID] == true), and
  • it is the other process's turn (i.e., turn == 1 - processID),
  then the process waits, allowing the other process to enter the critical section.
  This loop ensures that only one process can enter the critical section at a time, preventing a race condition.

• Critical Section: Once a process successfully exits the loop, it enters the critical section, where it can safely access or modify the shared resource without interference from the other process.

• Exiting the Critical Section: After finishing its work in the critical section, the process resets its flag to false. This signals that it no longer wants to enter the critical section, and the other process can now have its turn.
By alternating turns and using these checks, Peterson's algorithm ensures mutual exclusion.
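A minimal sketch of the algorithm in C, following the steps above (this is the textbook form; on modern hardware the shared variables would need atomic types or memory barriers to prevent reordering):

#include <stdbool.h>

// Shared state for two processes with IDs 0 and 1
volatile bool flag[2] = { false, false }; // flag[i]: process i wants to enter
volatile int turn = 0;                    // whose turn it is to enter

void enter_critical_section(int id)
{
    int other = 1 - id;
    flag[id] = true;                      // signal intent to enter
    turn = other;                         // yield the turn to the other process
    while (flag[other] && turn == other)
        ;                                 // busy-wait while the other goes first
}

void exit_critical_section(int id)
{
    flag[id] = false;                     // no longer interested
}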
Example of Peterson’s Algorithm

Accessing a shared printer: Peterson’s solution ensures that only one process can access the printer at a time when two processes are trying to print documents.
Reading and writing to a shared file: It can be used when two processes need to read from and write to the same file, preventing concurrent access issues.

Competing for a shared resource: When two processes are competing for a limited resource, such as a network connection or critical hardware, Peterson’s solution ensures mutual exclusion to avoid conflicts.
Semaphores in Process Synchronization
● Semaphores are a tool used in operating systems to help manage how different processes (or programs) share resources, like memory or data, without causing conflicts.
What is a Semaphore?
● A semaphore is a synchronization tool used in concurrent programming to manage access to shared resources.
● It is a lock-based mechanism designed to achieve process synchronization, built on top of basic locking techniques.
A semaphore S provides two operations:

wait(S): The wait operation decrements the value of the semaphore.
signal(S): The signal operation increments the value of the semaphore.
Types of Semaphores

Semaphores are of two types:

Binary Semaphore: This is also known as a mutex lock, as it is a lock that provides mutual exclusion. It can have only two values, 0 and 1, and its value is initialized to 1. It is used to implement the solution of critical section problems with multiple processes and a single resource.

Counting Semaphore: Counting semaphores can be used to control access to a given resource consisting of a finite number of instances. The semaphore is initialized to the number of resources available.
Wait
The wait operation decrements the value of its argument S if it is positive. If S is zero or negative, the process keeps waiting (here, busy-waiting) until S becomes positive.

wait(S)
{
    while (S <= 0)
        ;   // busy-wait until the semaphore is positive
    S--;
}
Signal
The signal operation increments the value of its argument S.

signal(S)
{
S++;
}
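As a concrete illustration, here is a minimal sketch using POSIX semaphores, where sem_wait/sem_post correspond to wait/signal above (this assumes a POSIX environment and is not part of the original slides):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t s;                     // binary semaphore guarding the shared counter
int shared_counter = 0;      // shared resource

void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&s);        // wait(S): blocks while the semaphore is 0
        shared_counter++;    // critical section
        sem_post(&s);        // signal(S): increments the semaphore
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&s, 0, 1);      // initialized to 1, so it acts as a mutex lock
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);   // expect 200000
    sem_destroy(&s);
    return 0;
}

Without the semaphore, the two threads' increments would interleave and the final count would usually be less than 200000 — exactly the race condition described earlier.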
Producer Consumer Problem
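The slide body for this classic problem did not survive the export. The standard bounded-buffer solution uses two counting semaphores (empty, full) and one binary semaphore (mutex); a minimal sketch in the same wait/signal pseudocode style, assuming a fixed-size buffer:

semaphore empty = BUFFER_SIZE;  // counts empty slots
semaphore full = 0;             // counts filled slots
semaphore mutex = 1;            // protects the buffer

// Producer
do {
    // ... produce an item ...
    wait(empty);                // wait for a free slot
    wait(mutex);                // lock the buffer
    // ... add the item to the buffer ...
    signal(mutex);              // unlock the buffer
    signal(full);               // one more filled slot
} while (true);

// Consumer
do {
    wait(full);                 // wait for a filled slot
    wait(mutex);                // lock the buffer
    // ... remove an item from the buffer ...
    signal(mutex);              // unlock the buffer
    signal(empty);              // one more free slot
    // ... consume the item ...
} while (true);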
Dining Philosopher's Problem
● The dining philosophers problem states that there are 5 philosophers sharing a circular table, and they alternately eat and think.
● There is a bowl of rice for each of the philosophers and 5 chopsticks.
● A philosopher needs both their right and left chopstick to eat.
● A hungry philosopher may only eat if both chopsticks are available; otherwise the philosopher puts down their chopstick and begins thinking again.
Solution of Dining Philosophers Problem

● A solution to the Dining Philosophers Problem is to use a semaphore to represent each chopstick.
● Each philosopher picks up first the chopstick on the left and then the chopstick on the right by executing a wait() operation on the corresponding semaphore.
● After eating, the philosopher releases the chopsticks by executing a signal() operation on the appropriate semaphores.
● The chopsticks are declared as: semaphore chopstick[5];
● Initially the elements of chopstick are initialized to 1, as the chopsticks are on the table and not picked up by any philosopher.
The structure of a random philosopher i is given as follows:
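(The code itself did not survive the slide export; this is the standard textbook structure, in the same wait/signal pseudocode used above.)

do {
    wait(chopstick[i]);               // pick up the left chopstick
    wait(chopstick[(i + 1) % 5]);     // pick up the right chopstick

    // ... eat ...

    signal(chopstick[i]);             // put down the left chopstick
    signal(chopstick[(i + 1) % 5]);   // put down the right chopstick

    // ... think ...
} while (true);

This simple solution guarantees that no two neighbors eat at the same time, but it can deadlock if all five philosophers pick up their left chopstick simultaneously. Possible remedies: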

● There should be at most four philosophers at the table.
● An even-numbered philosopher should pick up the right chopstick first and then the left, while an odd-numbered philosopher should pick up the left chopstick first and then the right.
● A philosopher should only be allowed to pick up their chopsticks if both are available at the same time.
The Readers Writers Problem
● The readers-writers problem is another example of a classic synchronization problem.
● The main complexity of this problem comes from allowing more than one reader to access the data at the same time.
● There are many variants of this problem; one of them is given below.
  ○ There is a shared resource which is accessed by multiple processes.
  ○ There are two types of processes in this context: readers and writers.
  ○ Any number of readers can read from the shared resource simultaneously, but only one writer can write to the shared resource at a time.
  ○ When a writer is writing data to the resource, no other process can access the resource.
  ○ A writer cannot write to the resource if a non-zero number of readers are accessing the resource at that time.
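The writer and reader code that the following bullets refer to did not survive the export; the standard first readers-writers solution, in the same wait/signal pseudocode style, looks like this:

semaphore w = 1;        // writer access; also held by the group of readers
semaphore mutex = 1;    // protects read_count
int read_count = 0;     // number of readers currently reading

// Writer
do {
    wait(w);
    // ... perform the write operation ...
    signal(w);
} while (true);

// Reader
do {
    wait(mutex);
    read_count++;
    if (read_count == 1)
        wait(w);        // the first reader locks out writers
    signal(mutex);

    // ... perform the read operation ...

    wait(mutex);
    read_count--;
    if (read_count == 0)
        signal(w);      // the last reader lets writers in again
    signal(mutex);
} while (true);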
● As seen above in the code for the writer, the writer simply waits on the w semaphore until it gets a chance to write to the resource.
● After performing the write operation, it signals w so that the next writer (or the waiting readers) can access the resource.
● On the other hand, in the code for the reader, the mutex lock is acquired whenever read_count is updated by a process.
● When a reader wants to access the resource, it first increments the read_count value, then accesses the resource, and then decrements the read_count value afterwards.
● The semaphore w is used only by the first reader that enters the critical section and the last reader that exits it.
● The reason is that when the first reader enters the critical section, the writer is blocked from the resource; only new readers can access the resource now.
● Similarly, when the last reader exits the critical section, it signals the writer using the w semaphore, because there are zero readers now and a writer can have the chance to access the resource.
Process Scheduling

Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy. Process scheduling is a task of the operating system to schedule the processes that are in different states like ready, running, and waiting.
Scheduling Categories
There are two categories of scheduling:

Pre-emptive Scheduling
● Circumstances:
○ Process switches from running state to ready state.
○ Process switches from waiting state to ready state.
● CPU is allocated to a process for a limited time.
● If a process with higher priority arrives, the currently running process is interrupted.
● Preemptive scheduling is flexible but has the overhead of switching processes.
● Examples: Round Robin, Shortest Remaining Time First, and preemptive Priority Scheduling.
Non-Preemptive Scheduling
● Circumstances:
○ Process switches from running state to waiting state.
○ Process terminates.
● CPU is allocated to a process until it terminates or enters a waiting state.
● Average waiting time of the processes is increased.
● A process in the running state is not interrupted, even for a process with high priority.
● Examples: First Come First Serve, Shortest Job First.
Process Scheduling Queues

● In a multiprogramming environment, all the processes that enter the system are stored in the Job Queue.
● Processes in the ready state that are residing in main memory and ready for execution are placed in the Ready Queue.
● Processes that are waiting for a device to become available are placed in the Device Queue for that device.
Schedulers
● Scheduling of a process is a central activity of the operating system.
● A process migrates among the various scheduling queues throughout its lifetime.
● Schedulers are special system software which handle process scheduling in various ways.
● Types of schedulers: there are three types of schedulers available:
1. Long Term Scheduler
2. Short Term Scheduler
3. Medium Term Scheduler
Long Term Scheduler:
● A long-term scheduler, also known as a job scheduler, determines which programs should be admitted to the system for processing.
● It selects processes and loads them into memory for execution by the CPU scheduler.
● It provides a balanced mix of jobs, such as I/O-bound and processor-bound, and controls the degree of multiprogramming.
Short Term Scheduler
● A short-term scheduler is also known as a CPU scheduler.
● It selects a process from the multiple processes that are in the ready state and allocates the CPU to it.
● It is faster than the long-term scheduler and is sometimes called a dispatcher, as it decides which process will be executed next.
Medium Term Scheduler:
● The medium-term scheduler removes processes from memory (and from active contention for the CPU), and thus reduces the degree of multiprogramming.
● At some later time, a process can be reintroduced into memory and its execution continued where it left off.
● This scheme is called swapping.
CPU Scheduling Criteria
1. CPU Utilization: The scheduling algorithm should be designed in such a way that the usage of the CPU is as efficient as possible.
● Theoretically, CPU utilization can range from 0 to 100 percent, but in a real system it varies from 40 to 90 percent depending on the load upon the system.
● The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible.
2. Throughput: The number of processes executed by the CPU in a given amount of time.
● It is used to find the efficiency of a CPU.
● For long processes the throughput rate may be low, whereas for shorter processes the throughput may be high; it varies depending on the size of the processes.
3. Turnaround Time:
● Turnaround time is the total amount of time spent by a process from its first arrival in the ready state to its completion.
● The time elapsed from the submission of a process to its completion is known as the turnaround time.
○ Turnaround Time = Exit Time - Arrival Time

4. Response Time:
● The time from the submission of a request until the first response is produced.
● This measure is the amount of time it takes to start responding.
● The formula to calculate response time is:
Response Time = CPU Allocation Time (when the CPU was allocated for the first time) - Arrival Time

5. Waiting Time: The amount of time a process spends waiting for the allocation of the CPU and other resources.
Waiting Time = Turnaround Time - Burst Time
CPU Scheduling Algorithms
1. First Come First Served (FCFS) Scheduling
2. Shortest Job First (SJF) Scheduling
3. Priority Scheduling
4. Round Robin Scheduling
5. Multilevel Queue Scheduling
1. First Come First Served (FCFS) Scheduling

First Come First Served (FCFS) is the simplest scheduling algorithm. It simply queues processes in the order they arrive in the ready queue (FIFO). In this algorithm, the process that comes first is executed first, and the next process starts only after the previous one has fully executed.

Terminologies Used in CPU Scheduling

● Arrival Time: The time at which the process arrives in the ready queue.
● Completion Time: The time at which the process completes its execution.
● Turn Around Time: The difference between completion time and arrival time. Turn Around Time = Completion Time - Arrival Time
● Waiting Time (W.T): The difference between turnaround time and burst time; CPU Burst Time is the overall CPU time a process needs. Waiting Time = Turn Around Time - Burst Time
2. Shortest Job First (SJF) Scheduling
● Shortest Job First (SJF), also known as Shortest Job Next (SJN), is a scheduling policy that selects the waiting process with the smallest execution time to execute next. It can be preemptive or non-preemptive.
● Characteristics of SJF Scheduling:
● Shortest Job First has the advantage of having the minimum average waiting time among all scheduling algorithms.
● It is a greedy algorithm.
● It may cause starvation of longer processes if shorter processes keep arriving. This problem can be solved using the concept of ageing.
● It is often impractical because the operating system may not know the burst times in advance and therefore cannot sort processes by them. While it is not possible to predict execution time exactly, several methods can be used to estimate it, such as a weighted average of previous execution times.
● SJF can be used in specialized environments where accurate estimates of running time are available.
Shortest Remaining Time First (Preemptive SJF)

● Shortest Remaining Time First (SRTF) is the preemptive version of Shortest Job First (SJF) scheduling.
● In SRTF, the process with the least time left to finish is selected to run.
● The running process continues until it finishes or a new process with a shorter remaining time arrives.
● This way, the process that can finish the fastest is always given priority.
Implementation of SRTF Algorithm

Steps:

1. Input Process Details
Take the number of processes and input the arrival time and burst time for each process.

2. Track Remaining Time
Create an array for remaining times, initialized with the burst times.

3. Initialize Variables
Set the current time to 0. Track completed processes, waiting time, and turnaround time.

4. Check for Arriving Processes
At each time unit, add processes with arrival time ≤ current time to the ready queue.

5. Select Shortest Remaining Time
Pick the process with the smallest remaining time from the ready queue. Preempt if a new process arrives with a shorter remaining time.

6. Execute Process
● Decrease the remaining time of the selected process.
● Increment the current time.

7. Process Completion
When the remaining time reaches 0:
● Mark the process as completed.
● Calculate Turnaround Time = Completion Time - Arrival Time.
● Calculate Waiting Time = Turnaround Time - Burst Time.

8. Repeat Until All Complete
● Continue checking, selecting, and executing processes until all are completed.

9. Calculate Averages
● Compute the average waiting time and turnaround time.

10. Output Results
● Print the completion, waiting, and turnaround times for each process.
● Display the average waiting and turnaround times.
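A minimal C sketch of these steps, simulating one time unit at a time (the process data here is an illustrative assumption, not from the slides):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    int at[] = {0, 1, 2};                 // arrival times (example data)
    int bt[] = {8, 4, 2};                 // burst times (example data)
    int n = 3;
    int rem[3], ct[3];
    for (int i = 0; i < n; i++) rem[i] = bt[i];   // step 2: remaining times

    int t = 0, done = 0;
    while (done < n) {                    // step 8: repeat until all complete
        int best = -1, min = INT_MAX;
        for (int i = 0; i < n; i++)       // steps 4-5: among arrived processes,
            if (at[i] <= t && rem[i] > 0 && rem[i] < min) {
                min = rem[i];             // pick the shortest remaining time
                best = i;
            }
        if (best == -1) { t++; continue; }    // CPU idle: nothing has arrived
        rem[best]--;                      // step 6: run for one time unit
        t++;
        if (rem[best] == 0) {             // step 7: process completed
            ct[best] = t;
            done++;
        }
    }
    double sum_tat = 0, sum_wt = 0;
    for (int i = 0; i < n; i++) {         // steps 9-10: compute and print
        int tat = ct[i] - at[i], wt = tat - bt[i];
        sum_tat += tat; sum_wt += wt;
        printf("P%d: CT=%d TAT=%d WT=%d\n", i + 1, ct[i], tat, wt);
    }
    printf("Avg TAT = %.2f, Avg WT = %.2f\n", sum_tat / n, sum_wt / n);
    return 0;
}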
Exercise: Consider the following processes with their CPU bursts in milliseconds:

Process   CPU Burst
P1        10
P2        1
P3        2
P4        10
P5        5

The processes arrive in the order P1, P2, P3, P4, P5. Draw the Gantt chart illustrating the execution of these processes using the FCFS and Round Robin algorithms. Calculate:
a) Average Turnaround Time
b) Average Completion Time
Note: Time quantum = 2 ms
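A worked solution (assuming all five processes arrive at t = 0, which the problem implies):

FCFS Gantt chart:
| P1 | P2 | P3 | P4 | P5 |
0    10   11   13   23   28

Completion times: P1 = 10, P2 = 11, P3 = 13, P4 = 23, P5 = 28. With arrival time 0, turnaround time equals completion time, so:
Average Turnaround Time = Average Completion Time = (10 + 11 + 13 + 23 + 28)/5 = 85/5 = 17 ms

Round Robin (quantum = 2 ms) Gantt chart:
| P1 | P2 | P3 | P4 | P5 | P1 | P4 | P5 | P1 | P4 | P5 | P1 | P4 | P1 | P4 |
0    2    3    5    7    9    11   13   15   17   19   20   22   24   26   28

Completion times: P1 = 26, P2 = 3, P3 = 5, P4 = 28, P5 = 20.
Average Turnaround Time = Average Completion Time = (26 + 3 + 5 + 28 + 20)/5 = 82/5 = 16.4 ms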
Round Robin Scheduling
Round Robin is a CPU scheduling algorithm where each process is cyclically assigned a fixed time slot. It is the preemptive version of the First Come First Serve CPU scheduling algorithm.

Here we focus on Round Robin scheduling where all processes have the same arrival time. In this scenario, all processes arrive at the same time, which makes scheduling simpler.
Round Robin Scheduling
Characteristics of Round Robin CPU Scheduling Algorithm with Same Arrival Time
● Below are the key characteristics of the Round Robin Scheduling Algorithm when all
processes share the same arrival time:
● Equal Time Allocation: Each process gets an equal and fixed time slice (time
quantum) to execute, ensuring fairness.
● Cyclic Execution: Processes are scheduled in a circular order, and the CPU moves to
the next process in the queue after completing the time quantum.
● No Process Starvation: All processes are guaranteed CPU time at regular intervals,
preventing any process from being neglected.
● Same Start Time: Since all processes arrive at the same time, there is no need to
consider arrival time while scheduling, simplifying the process.
● Context Switching: Frequent context switching occurs as the CPU moves between
processes after each time quantum, which can slightly impact performance.
Priority Scheduling

Priority scheduling is one of the most common scheduling algorithms used by the operating system to schedule processes based on their priority. Each process is assigned a priority. The process with the highest priority is executed first, and so on.

Processes with the same priority are executed on a first-come, first-served basis. Priority can be decided based on memory requirements, time requirements, or any other resource requirement. Priority can also be decided based on the ratio of average I/O burst to average CPU burst time.

Priority Scheduling can be implemented in two ways:
● Non-Preemptive Priority Scheduling
● Preemptive Priority Scheduling
How Does FCFS Work?
The mechanics of FCFS are straightforward:
1. Arrival: Processes enter the system and are placed in a queue in the order they arrive.
2. Execution: The CPU takes the first process from the front of the queue, executes it until it is complete, and then removes it from the queue.
3. Repeat: The CPU takes the next process in the queue and repeats the execution process.
4. This continues until there are no more processes left in the queue.
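A minimal C sketch of this procedure, computing completion, turnaround, and waiting times (it uses the Scenario 1 data shown below; the struct and names are illustrative assumptions):

#include <stdio.h>

typedef struct { int at, bt, ct, tat, wt; } Proc;

int main(void)
{
    Proc p[] = { {0, 5}, {0, 3}, {0, 8} };   // {arrival, burst} per process
    int n = sizeof p / sizeof p[0];
    int t = 0;
    double sum_tat = 0, sum_wt = 0;

    for (int i = 0; i < n; i++) {            // processes taken in arrival order
        if (t < p[i].at) t = p[i].at;        // CPU idles until the process arrives
        t += p[i].bt;                        // run the process to completion
        p[i].ct = t;
        p[i].tat = p[i].ct - p[i].at;        // turnaround = completion - arrival
        p[i].wt = p[i].tat - p[i].bt;        // waiting = turnaround - burst
        sum_tat += p[i].tat;
        sum_wt += p[i].wt;
        printf("P%d: CT=%d TAT=%d WT=%d\n", i + 1, p[i].ct, p[i].tat, p[i].wt);
    }
    printf("Avg TAT = %.2f, Avg WT = %.2f\n", sum_tat / n, sum_wt / n);
    return 0;
}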
Example of FCFS CPU Scheduling:
To understand the First Come, First Served (FCFS)
scheduling algorithm effectively, we’ll use two examples –
• one where all processes arrive at the same time,
• another where processes arrive at different times.
We’ll create Gantt charts for both scenarios and calculate the
turnaround time and waiting time for each process.
Scenario 1: Processes with Same Arrival Time
Consider the following table of arrival time and burst time for three processes P1, P2 and P3:

Process   Arrival Time   Burst Time
P1        0              5
P2        0              3
P3        0              8
Step-by-Step Execution:
1. P1 will start first and run for 5 units of time (from 0 to 5).
2. P2 will start next and run for 3 units of time (from 5 to 8).
3. P3 will run last, executing for 8 units (from 8 to 16).

Turnaround Time = Completion Time - Arrival Time
Waiting Time = Turnaround Time - Burst Time

AT: Arrival Time
BT: Burst Time or CPU Time
CT: Completion Time
TAT: Turn Around Time
WT: Waiting Time
Process   AT   BT   CT   TAT         WT
P1        0    5    5    5-0 = 5     5-5 = 0
P2        0    3    8    8-0 = 8     8-3 = 5
P3        0    8    16   16-0 = 16   16-8 = 8

Average Turn Around Time = (5 + 8 + 16)/3 = 29/3 = 9.67 ms
Average Waiting Time = (0 + 5 + 8)/3 = 13/3 = 4.33 ms

Scenario 2: Processes with Different Arrival Times

Process   Arrival Time (AT)   Burst Time (BT)
P1        2 ms                5 ms
P2        0 ms                3 ms
P3        4 ms                4 ms
Step-by-Step Execution:

• P2 arrives at time 0 and runs for 3 units, so its completion time is:
Completion Time of P2 = 0 + 3 = 3
• P1 arrives at time 2 but has to wait for P2 to finish. P1 starts at time 3 and runs for 5 units. Its completion time is:
Completion Time of P1 = 3 + 5 = 8
• P3 arrives at time 4 but has to wait for P1 to finish. P3 starts at time 8 and runs for 4 units. Its completion time is:
Completion Time of P3 = 8 + 4 = 12
Process   Completion Time (CT)   Turnaround Time (TAT = CT - AT)   Waiting Time (WT = TAT - BT)
P2        3 ms                   3 ms                              0 ms
P1        8 ms                   6 ms                              1 ms
P3        12 ms                  8 ms                              4 ms

• Average Turnaround Time = (3 + 6 + 8)/3 = 5.67 ms
• Average Waiting Time = (0 + 1 + 4)/3 = 1.67 ms
Round Robin Scheduling is a method used by operating systems to
manage the execution time of multiple processes that are competing
for CPU attention. It is called "round robin" because the system
rotates through all the processes, allocating each of them a fixed time
slice or "quantum", regardless of their priority.
The primary goal of this scheduling method is to ensure that all
processes are given an equal opportunity to execute, promoting
fairness among tasks
• Process Arrival: Processes enter the system and are placed in a
queue.
• Time Allocation: Each process is given a certain amount of CPU
time, called a quantum.
• Execution: The process uses the CPU for the allocated time.
• Rotation: If the process completes within the time, it leaves the
system. If not, it goes back to the end of the queue.
• Repeat: The CPU continues to cycle through the queue until all
processes are completed.
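A minimal C sketch of this cycle, assuming all processes arrive at t = 0 (using the burst times from the earlier exercise as example data):

#include <stdio.h>

int main(void)
{
    int bt[] = {10, 1, 2, 10, 5};        // burst times (example data)
    int n = 5, quantum = 2;              // fixed time slice ("quantum")
    int rem[5], ct[5];
    for (int i = 0; i < n; i++) rem[i] = bt[i];

    int t = 0, done = 0;
    while (done < n) {                   // repeat until all processes finish
        for (int i = 0; i < n; i++) {    // rotate through the queue in order
            if (rem[i] == 0) continue;   // already completed, skip
            int slice = rem[i] < quantum ? rem[i] : quantum;
            t += slice;                  // run for one quantum (or less)
            rem[i] -= slice;
            if (rem[i] == 0) {           // finished: record completion time
                ct[i] = t;
                done++;
            }                            // otherwise it implicitly returns to
        }                                // the back of the circular queue
    }
    for (int i = 0; i < n; i++)          // with AT = 0, TAT equals CT
        printf("P%d: CT=%d TAT=%d WT=%d\n", i + 1, ct[i], ct[i], ct[i] - bt[i]);
    return 0;
}

Run on this data, it reproduces the completion times from the worked exercise above (P1 = 26, P2 = 3, P3 = 5, P4 = 28, P5 = 20).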
Imagine you're at a busy restaurant with a group of friends, and there's only one waiter. The waiter could spend a long time at one table, but instead he chooses to spend exactly one minute at each table before moving to the next. Similarly, in Round Robin Scheduling, the CPU spends a predetermined slice of time on each process. If a process hasn't finished its task by the time its slice is up, it's moved to the back of the queue, and the CPU moves on to the next process.
