PG TRB OS Class 3

The document outlines the syllabus for a course on Operating Systems, covering topics such as process scheduling, memory management, and synchronization principles. It details various CPU scheduling algorithms including First Come First Serve, Shortest Job First, and Round Robin, along with examples and calculations for average waiting times. Additionally, it discusses concepts like priority scheduling, starvation, and aging in the context of operating systems.


WELCOME TO PROFESSOR ACADEMY

TRB-COMPUTER INSTRUCTOR GRADE-1

Today Topic: Operating System


One Minute of Prayer- To think about
your Goal to qualify upcoming TRB-
COMPUTER INSTRUCTOR GRADE-1
Operating Systems- Syllabus
• Introduction: System software, OS Strategies;
Multiprogramming, batch.
• Operating system Organization: basic OS
function, Kernels, device drivers.
• Device Management: buffering.
• Process Management: resource abstraction,
process hierarchy
• Scheduling: Strategy selection.
Operating Systems- Syllabus
• Synchronization Principles: deadlock,
semaphores, multiprocessors.
• Deadlock: hold and wait, Banker's algorithm, consumable resources.
• Memory Management: Memory allocation
strategies, variable partition.
• Protection and Security: Internal access
authorization
Scheduling: Strategy selection
What is Process Scheduling?
Process Scheduling
• Process or CPU scheduling can be defined as a set of policies and mechanisms that control the order in which the work to be done is completed.
• Whenever the CPU becomes idle, it is the job of the CPU scheduler to select another process from the ready queue to run next; this selection is carried out by the CPU scheduler.
Goal of CPU Scheduling
CPU Scheduling-Terms and Terminology
CPU Utilization: A scheduling algorithm should be designed so that the CPU remains as busy as possible; it should make efficient use of the CPU.
Throughput: Number of processes completed per unit time.
Arrival Time: Time at which the process arrives in the ready queue.
Completion Time: Time at which the process completes its execution.
Burst Time: Time required by a process for CPU execution.
Response Time: The amount of time from when a request was submitted until the first response is produced.
CPU Scheduling-Terms and Terminology
Turn Around Time: the difference between completion time and arrival time.
Turn Around Time (TAT) = Completion Time (CT) - Arrival Time (AT)

Waiting Time (WT): the difference between turnaround time and burst time.
Waiting Time (WT) = Turn Around Time (TAT) - Burst Time (BT)
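For example, a process that arrives at t = 2 ms, needs a 4 ms CPU burst and completes at t = 12 ms has TAT = 12 - 2 = 10 ms and WT = 10 - 4 = 6 ms.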


Types of CPU Scheduling

1. First Come First Serve (FCFS)


2. Shortest-Job-First (SJF) Scheduling
3. Shortest Remaining Time
4. Priority Scheduling
5. Round Robin Scheduling
1. First Come First Serve

• It is the simplest algorithm to implement. The process which comes first uses the CPU first.
• It is a non-preemptive type of scheduling.
FCFS Scheduling-Example
Consider the following FCFS scheduling example. In the following schedule there are 5 processes with process IDs P0, P1, P2, P3 and P4. The processes and their respective arrival and burst times are given in the following table. Find the average waiting time.

Process   Arrival Time   Burst Time
P0        0              2
P1        1              6
P2        2              4
P3        3              9
P4        4              12
FCFS Scheduling
Gantt chart:
| P0 | P1 | P2 | P3 | P4 |
0    2    8    12   21   33

Formula: TAT = CT - AT,  WT = TAT - BT

Process   AT   BT   CT   TAT   WT
P0        0    2    2    2     0
P1        1    6    8    7     1
P2        2    4    12   10    6
P3        3    9    21   18    9
P4        4    12   33   29    17

Total Waiting Time = 33
Average Waiting Time = 33/5 = 6.6
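The FCFS table above can be reproduced with a short script. This is a minimal Python sketch, not part of the original slides; the function name fcfs_schedule and the tuple layout are assumptions made only for illustration.

# Minimal FCFS sketch: compute CT, TAT and WT for the table above.
# Assumes processes are served strictly in arrival order (non-preemptive).
def fcfs_schedule(processes):
    # processes: list of (name, arrival_time, burst_time)
    time, rows = 0, []
    for name, at, bt in sorted(processes, key=lambda p: p[1]):
        time = max(time, at) + bt        # completion time of this process
        tat = time - at                  # TAT = CT - AT
        wt = tat - bt                    # WT  = TAT - BT
        rows.append((name, at, bt, time, tat, wt))
    return rows

procs = [("P0", 0, 2), ("P1", 1, 6), ("P2", 2, 4), ("P3", 3, 9), ("P4", 4, 12)]
rows = fcfs_schedule(procs)
print(sum(r[5] for r in rows) / len(rows))   # 6.6, matching the worked example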
Shortest Job First (SJF)
• The job with the shortest burst time gets the CPU first: the smaller the burst time, the sooner the process gets the CPU. In its basic form it is a non-preemptive type of scheduling.
• Shortest Job First (SJF) is an algorithm in which the process having the smallest execution time is chosen for the next execution. This scheduling algorithm is also known as Shortest Job Next (SJN).
• The method can also be made preemptive; the preemptive variant is Shortest Remaining Time First, discussed later.
• To implement it successfully, the burst time of the processes should be known to the processor in advance, which is not always practically feasible.
• This is the best approach to minimize waiting time.
Example-Shortest-Job-First (SJF)
Consider the following four processes with the arrival time and length of
CPU burst given in milliseconds
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
The average waiting time for SJF scheduling algorithm is .................
(1) 6.5 ms
(2) 7.5 ms
(3) 6.75 ms
(4) 7.75 ms
Solutions
Gantt chart:
| P1 | P2 | P4 | P3 |
0    8    12   17   26

Process   AT   BT   CT   TAT   WT
P1        0    8    8    8     0
P2        1    4    12   11    7
P3        2    9    26   24    15
P4        3    5    17   14    9

Total Waiting Time = 31
Average Waiting Time = 31/4 = 7.75 ms
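The non-preemptive SJF schedule above can be cross-checked with a short simulation. A minimal Python sketch follows, not part of the original slides; the function name sjf_nonpreemptive is an assumption.

# Minimal non-preemptive SJF sketch: at every scheduling point, pick the
# already-arrived process with the smallest burst time and run it to completion.
def sjf_nonpreemptive(processes):
    # processes: list of (name, arrival_time, burst_time)
    pending = sorted(processes, key=lambda p: p[1])
    time, done = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                       # CPU idle until the next arrival
            time = pending[0][1]
            continue
        name, at, bt = min(ready, key=lambda p: p[2])
        pending.remove((name, at, bt))
        time += bt                          # run to completion (no preemption)
        done.append((name, time - at - bt)) # waiting time = start - arrival
    return done

procs = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]
wts = dict(sjf_nonpreemptive(procs))
print(sum(wts.values()) / len(wts))   # 7.75, as in the solution above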
WORK OUT FOR YOU
An operating system uses the Shortest Job First scheduling algorithm. Consider the following set of processes with their arrival times and CPU burst times (in milliseconds):
Process Arrival Time Burst Time
P1 0 12
P2 2 4
P3 3 6
P4 8 5

The average waiting time (in milliseconds) of the processes is ________.


(A) 4.5
(B) 5.0
(C) 5.5
(D) 9.0
Gantt chart:
| P1 | P2 | P4 | P3 |
0    12   16   21   27

Process   AT   BT   CT   TAT   WT
P1        0    12   12   12    0
P2        2    4    16   14    10
P3        3    6    27   24    18
P4        8    5    21   13    8

Total Waiting Time = 36
Average Waiting Time = 36/4 = 9  (Option D)
Shortest Remaining Time First
It is the preemptive form of SJF. In this algorithm, the OS
schedules the Job according to the remaining time of the
execution.
Shortest Remaining Time First (SRTF), is a scheduling
method that is a preemptive version of Shortest Job First(SJF)
scheduling.
Shortest remaining time is advantageous because short
processes are handled very quickly
Both SRTF and Shortest Job First scheduling can suffer from the starvation problem.
Example-SRTF (Preemptive SJF)
Consider the following four processes with the arrival time and length of CPU burst
given in milliseconds :
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
The average waiting time for preemptive SJF scheduling algorithm is .................
(1) 6.5 ms
(2) 7.5 ms
(3) 6.75 ms
(4) 9.0 ms
Solutions
Gantt chart (P1 is preempted at t = 1 with 7 ms remaining):
| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26

Process   AT   BT   CT   TAT   WT
P1        0    8    17   17    9
P2        1    4    5    4     0
P3        2    9    26   24    15
P4        3    5    10   7     2

Total Waiting Time = 26
Average Waiting Time = 26/4 = 6.5 ms  (Option 1)
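The preemptive (SRTF) schedule above can be cross-checked by stepping a simulated clock one millisecond at a time. This is a minimal Python sketch, not from the slides; the function name srtf_waiting_times is an assumption.

# Minimal SRTF (preemptive SJF) sketch: every millisecond, run the arrived
# process with the least remaining burst time.
def srtf_waiting_times(processes):
    # processes: dict of name -> (arrival_time, burst_time)
    remaining = {n: bt for n, (at, bt) in processes.items()}
    completion, time = {}, 0
    while remaining:
        ready = [n for n in remaining if processes[n][0] <= time]
        if not ready:
            time += 1
            continue
        current = min(ready, key=lambda n: remaining[n])  # shortest remaining time
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            completion[current] = time
            del remaining[current]
    return {n: completion[n] - at - bt for n, (at, bt) in processes.items()}

procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}
wts = srtf_waiting_times(procs)
print(sum(wts.values()) / len(wts))   # 6.5, matching the table above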
WORK OUT FOR YOU
Consider the set of processes with arrival time (in milliseconds) and CPU burst time (in milliseconds) shown below. The operating system uses the Shortest Remaining Time First (preemptive SJF) scheduling algorithm.

Process   Arrival Time   Burst Time
P1        0              12
P2        2              4
P3        3              6
P4        8              5

The average waiting time (in milliseconds) of the processes is ________.
(A) 4.5
(B) 5.0
(C) 5.5
Solutions
Gantt chart (P1 is preempted at t = 2 with 10 ms remaining):
| P1 | P2 | P3 | P4 | P1 |
0    2    6    12   17   27

Process   AT   BT   CT   TAT   WT
P1        0    12   27   27    15
P2        2    4    6    4     0
P3        3    6    12   9     3
P4        8    5    17   9     4

Total Waiting Time = 22
Average Waiting Time = 22/4 = 5.5  (Option C)
Priority Scheduling
• There are two types of priority scheduling (preemptive and non-preemptive). Non-preemptive priority scheduling is one of the most common scheduling algorithms in batch systems.
• Each process is assigned a priority. The process with the highest priority is executed first, and so on.
• Processes with the same priority are executed on a first come, first served basis.
• Priority can be decided based on memory requirements, time requirements, or any other resource requirement.
Example: Non Pre-emptive Priority
Consider the set of processes with arrival time (in milliseconds), CPU burst time (in milliseconds), and priority (1 is the highest priority) shown below.

Process   Arrival Time   Burst Time   Priority
P1        0              12           2
P2        2              4            1
P3        3              6            4
P4        8              5            3

The average waiting time (in milliseconds) of the processes is ________.
(A) 9.75
(B) 9.5
(C) 9.0
Solutions: Non Pre-emptive Priority
Gantt chart:
| P1 | P2 | P4 | P3 |
0    12   16   21   27

Process   AT   BT   Priority   CT   TAT   WT
P1        0    12   2          12   12    0
P2        2    4    1          16   14    10
P3        3    6    4          27   24    18
P4        8    5    3          21   13    8

Total Waiting Time = 36
Average Waiting Time = 36/4 = 9.0  (Option C)
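The non-preemptive priority schedule above can be reproduced with a small sketch. This is a minimal Python illustration, not from the slides; the function name priority_nonpreemptive and the tuple layout are assumptions.

# Minimal non-preemptive priority sketch (lower number = higher priority).
def priority_nonpreemptive(processes):
    # processes: list of (name, arrival, burst, priority)
    pending = list(processes)
    time, wts = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                       # idle until the next arrival
            time = min(p[1] for p in pending)
            continue
        name, at, bt, prio = min(ready, key=lambda p: (p[3], p[1]))
        pending.remove((name, at, bt, prio))
        wts[name] = time - at               # waits from arrival until it starts
        time += bt                          # runs to completion
    return wts

procs = [("P1", 0, 12, 2), ("P2", 2, 4, 1), ("P3", 3, 6, 4), ("P4", 8, 5, 3)]
wts = priority_nonpreemptive(procs)
print(sum(wts.values()) / len(wts))   # 9.0, matching the solution above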
Example: Pre-emptive Priority
Consider the set of processes with arrival time (in milliseconds), CPU burst time (in milliseconds), and priority (1 is the highest priority) shown below.

Process   Arrival Time   Burst Time   Priority
P1        0              12           2
P2        2              4            1
P3        3              6            4
P4        8              5            3

The average waiting time (in milliseconds) of the processes is ________.
(A) 6.5
(B) 5.0
(C) 5.5
Solutions: Pre-emptive Priority
Gantt chart (P1 is preempted at t = 2 with 10 ms remaining):
| P1 | P2 | P1 | P4 | P3 |
0    2    6    16   21   27

Process   AT   BT   Priority   CT   TAT   WT
P1        0    12   2          16   16    4
P2        2    4    1          6    4     0
P3        3    6    4          27   24    18
P4        8    5    3          21   13    8

Total Waiting Time = 30
Average Waiting Time = 30/4 = 7.5
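The preemptive priority schedule above can be cross-checked with a small simulation that steps the clock one millisecond at a time. This is a minimal Python sketch, not from the slides; the function name preemptive_priority and the 1 ms time step are assumptions.

# Minimal preemptive priority sketch (1 = highest priority).
def preemptive_priority(processes):
    # processes: dict of name -> (arrival, burst, priority)
    remaining = {n: b for n, (a, b, p) in processes.items()}
    time, completion = 0, {}
    while remaining:
        ready = [n for n in remaining if processes[n][0] <= time]
        if not ready:
            time += 1
            continue
        current = min(ready, key=lambda n: processes[n][2])  # best priority wins
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            completion[current] = time
            del remaining[current]
    return completion

procs = {"P1": (0, 12, 2), "P2": (2, 4, 1), "P3": (3, 6, 4), "P4": (8, 5, 3)}
ct = preemptive_priority(procs)
wts = {n: ct[n] - a - b for n, (a, b, p) in procs.items()}
print(sum(wts.values()) / len(wts))   # 7.5, matching the table above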
Starvation and Aging
• Starvation is the problem that occurs when high
priority processes keep executing and low priority
processes get blocked for indefinite time.
• To avoid starvation, we use the concept of Aging. In
Aging, after some fixed amount of time quantum, we
increase the priority of the low priority processes. By
doing so, as time passes, the lower priority process
becomes a higher priority process.
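The aging rule described above can be sketched in a few lines. This is a minimal Python illustration, not from the slides; the boost interval, the queue representation, and the function name age_priorities are assumptions chosen only to show the idea.

# Minimal aging sketch: every AGING_INTERVAL ticks, waiting processes get a
# small priority boost (lower number = higher priority).
AGING_INTERVAL = 10   # ticks between priority boosts (illustrative value)

def age_priorities(ready_queue, tick):
    # ready_queue: list of dicts, each with a 'name' and a numeric 'priority'
    if tick % AGING_INTERVAL:
        return                              # not a boost tick
    for proc in ready_queue:
        if proc["priority"] > 0:            # 0 is already the highest priority
            proc["priority"] -= 1           # long waits gradually raise priority

queue = [{"name": "P3", "priority": 4}]
for t in range(1, 31):
    age_priorities(queue, t)
print(queue[0]["priority"])   # 1, after boosts at t = 10, 20 and 30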
Round Robin Scheduling
• Round Robin is the preemptive process scheduling
algorithm.
• Each process is provided a fixed time to execute, called a time quantum (time slice).
• Once a process is executed for a given time period, it is
preempted and other process executes for a given time
period.
• Context switching is used to save states of preempted
processes
Q1. Given a CPU time slice of 2 ms and the following list of processes.

Process   Burst Time   Arrival Time
P1        3            0
P2        4            2
P3        5            5

Find the average turnaround time and average waiting time using Round Robin CPU scheduling.
A) 4.0   B) 5.66, 1.66   C) 5.66, 0   D) 7, 2
Explanation: time slice = 2 ms
Gantt chart:
| P1 | P2 | P1 | P2 | P3 | P3 | P3 |
0    2    4    5    7    9    11   12

Process   BT   AT   CT   TAT (CT-AT)   WT (TAT-BT)
P1        3    0    5    5             2
P2        4    2    7    5             1
P3        5    5    12   7             2

Average Turn Around Time (TAT) = (5+5+7)/3 = 17/3 = 5.66
Average Waiting Time = (2+1+2)/3 = 5/3 = 1.66
ANSWER: OPTION B
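The Round Robin schedule in Q1 can be reproduced with a short simulation. This is a minimal Python sketch, not from the slides; the function name round_robin is an assumption, and newly arrived processes are placed in the ready queue before a preempted process is re-queued, matching the Gantt chart above.

# Minimal Round Robin sketch with a FIFO ready queue.
from collections import deque

def round_robin(processes, quantum):
    # processes: list of (name, arrival_time, burst_time)
    remaining = {n: bt for n, at, bt in processes}
    arrivals = sorted(processes, key=lambda p: p[1])
    queue, time, completion, i = deque(), 0, {}, 0
    while len(completion) < len(processes):
        while i < len(arrivals) and arrivals[i][1] <= time:
            queue.append(arrivals[i][0]); i += 1
        if not queue:                       # idle until the next arrival
            time = arrivals[i][1]
            continue
        name = queue.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        # enqueue processes that arrived while this slice was running
        while i < len(arrivals) and arrivals[i][1] <= time:
            queue.append(arrivals[i][0]); i += 1
        if remaining[name]:
            queue.append(name)              # goes to the back of the queue
        else:
            completion[name] = time
    return completion

procs = [("P1", 0, 3), ("P2", 2, 4), ("P3", 5, 5)]
print(round_robin(procs, 2))   # {'P1': 5, 'P2': 7, 'P3': 12}, as in the Gantt chart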
Q2. Consider the following set of processes and the length of CPU burst time given in milliseconds.

Process   Burst Time
P1        5
P2        7
P3        6
P4        4

Assume the processes are scheduled with the Round Robin scheduling algorithm with a time quantum of 4 ms. The waiting time for P4 is _________ ms.
A) 0   B) 4   C) 12   D) 6
Explanation
Gantt chart:
| P1 | P2 | P3 | P4 | P1 | P2 | P3 |
0    4    8    12   16   17   20   22

Process   BT   AT   CT   TAT (CT-AT)   WT (TAT-BT)
P1        5    0    17   17            12
P2        7    0    20   20            13
P3        6    0    22   22            16
P4        4    0    16   16            12

The waiting time for P4 is 12 ms.  ANSWER: OPTION C
NOTE
• FCFS can cause long waiting times, especially when
the first job takes too much CPU time.
• Both SJF and Shortest Remaining time first
algorithms may cause starvation.
• Consider a situation where a long process is in the ready queue and shorter processes keep arriving.
NOTE
• If time quantum for Round Robin scheduling is
very large, then it behaves same as FCFS
scheduling.
• SJF is optimal in terms of average waiting time for a given set of processes, i.e., the average waiting time is minimum with this scheduling; the problem is how to know or predict the burst time of the next job.
MCQ QUESTION
The number of processes completed per unit
time is known as __________
a) Output
b) Throughput
c) Efficiency
d) Capacity
Answer: b
MCQ QUESTION
The degree of multiprogramming is:
a) the number of processes executed per unit
time
b) the number of processes in the ready
queue
c) the number of processes in the I/O queue
d) the number of processes in memory
Answer: d
MCQ QUESTION
Turnaround time is :
a) the total waiting time for a process to finish
execution
b) the total time spent in the ready queue
c) the total time spent in the running queue
d) the total time from the submission of a process till its completion
Answer: d
MCQ QUESTION
Response time is :
a) the total time taken from the submission time
till the completion time
b) the total time taken from the submission time
till the first response is produced
c) the total time taken from submission time till
the response is output
d) none of the mentioned
Answer: b
MCQ QUESTION
In processor management, round robin method
essentially uses the preemptive version of
...................
(A) FILO (B) FIFO
(C) SJF (D) Longest time first
Answer: B
MCQ QUESTION
.................... is a pre-emptive scheduling algorithm.
(A)Shortest-Job-first
(B) Round-robin
(C) Priority based
(D) Shortest-Job-next
Answer: B
MCQ QUESTION
Pre-emptive scheduling is the strategy of
temporarily suspending a running process
(A) before the CPU time slice expires
(B) to allow starving processes to run
(C) when it requests I/O
(D) to avoid collision
Answer: B
MCQ QUESTION
Which of the following scheduling algorithms may cause
starvation?
a. First-come-first-served
b. Round Robin
c. Priority
d. Shortest process next
e. Shortest remaining time first
(1) a, c and e (2) c, d and e
(3) b, d and e (4) b, c and d
Answer: 2
TOPIC : Memory Management
What is Memory?
• Computer memory can be defined as a collection of data represented in binary format.
• A computer system understands only binary, that is, 0s and 1s. The computer converts all data into binary form and then stores it in memory.
• A device that can store information or data temporarily or permanently is called a storage device (memory).
• For example, the binary representation of 10 is 1010; if an int occupies 2 bytes (16 bits), the value is stored across 16 memory cells, each cell holding one bit.
Memory Management
• It is the process of controlling and coordinating computer
memory, assigning portions known as blocks to various running
programs to optimize the overall performance of the system.
• Memory management is the functionality of an operating
system which handles or manages primary memory and moves
processes back and forth between main memory and disk
during execution.
• Memory management keeps track of each and every memory location, regardless of whether it is allocated to some process or free. It also decides how much memory is to be allocated to each process.
Basics of Memory Management.
• The operating system takes care of mapping the
logical addresses to physical addresses at the time
of memory allocation to the program.
• There are three types of addresses used in a
program before and after memory is allocated
1. Symbolic addresses
2. Relative addresses
3. Physical addresses
Symbolic addresses
• The addresses used in a source code. The variable
names, constants, and instruction labels are the basic
elements of the symbolic address space.
Relative addresses
• At the time of compilation, a compiler converts
symbolic addresses into relative addresses.
Physical addresses
• The loader generates these addresses at the time when
a program is loaded into main memory.
MMU
• The set of all logical(virtual) addresses generated by a
program is referred to as a logical address space.
• The set of all physical addresses corresponding to
these logical addresses is referred to as a physical
address space.
• The runtime mapping from virtual to physical address
is done by the memory management unit (MMU)
which is a hardware device
Memory Management Technique
• The operating system uses various memory management mechanisms.
• Memory management techniques can be
classified into two types
1. Contiguous allocation- Single contiguous
allocation and partitioned allocation
2. Non-Contiguous allocation- Paging and
Segmentation
Single Contiguous Allocation
• It is the simplest memory management technique.
• In this method, all of the computer's memory, except a small portion reserved for the OS, is available to a single application.
• For example, the MS-DOS operating system allocates memory in this way. An embedded system also typically runs a single application.
Partitioned Allocation
• It divides primary memory into multiple memory partitions, which are mostly contiguous areas of memory.
• Every partition stores all the information for a specific task or job.
• This method consists of allotting a partition to a job when it starts and deallocating it when it ends.
• There are two types of partitioned allocation:
1. Fixed size partition   2. Variable size partition
Fixed Partitioning & Variable Partitioning
Partition Allocation
• In Partition Allocation, when there is more than one partition freely
available to accommodate a process’s request, a partition must be
selected. To choose a particular partition, a partition allocation method
is needed. A partition allocation method is considered better if it avoids
internal fragmentation.
• When it is time to load a process into the main memory and if there is
more than one free block of memory of sufficient size then the OS
decides which free block to allocate.
There are different placement algorithms:
• A. First Fit
• B. Best Fit
• C. Worst Fit
1. First Fit
• In first fit, the allocated partition is the first sufficient block from the top of main memory. It scans memory from the beginning and chooses the first available block that is large enough. Thus it allocates the first hole that is large enough.
2. Best Fit
• Allocate the process to the smallest sufficient partition among the free available partitions. It searches the entire list of holes to find the smallest hole whose size is greater than or equal to the size of the process.
3. Worst Fit
• Allocate the process to the largest sufficient partition among the freely available partitions in main memory. It is the opposite of the best-fit algorithm: it searches the entire list of holes to find the largest hole and allocates it to the process.
Example
• Consider the requests from processes in given order 300K,
25K, 125K, and 50K. Let there be two blocks of memory
available of size 150K followed by a block size 350K.
• Which of the following partition allocation schemes can
satisfy the above requests?
A) Best fit but not first fit.
B) First fit but not best fit.
C) Both First fit & Best fit.
D) neither first fit nor best fit.
Solutions
Best Fit:
300K is allocated from the block of size 350K; 50K is left in the block.
25K is allocated from the remaining 50K block; 25K is left in the block.
125K is allocated from the 150K block; 25K is left in this block also.
50K cannot be allocated even though 25K + 25K space is available.
First Fit:
The 300K request is allocated from the 350K block; 50K is left out.
25K is allocated from the 150K block; 125K is left out.
Then 125K and 50K are allocated to the remaining left-out partitions.
So first fit can handle all the requests.
So option B is the correct choice.
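The example above can be checked with a small sketch of the two placement algorithms. This is a minimal Python illustration, not from the slides; the function names first_fit and best_fit and the list-of-holes representation are assumptions.

# Minimal sketch of first-fit and best-fit placement for the example above.
# Requests are served in order; each request is carved out of the chosen hole.
def first_fit(blocks, requests):
    free = list(blocks)
    for req in requests:
        idx = next((i for i, b in enumerate(free) if b >= req), None)
        if idx is None:
            return False                 # request cannot be satisfied
        free[idx] -= req                 # allocate from the first large-enough hole
    return True

def best_fit(blocks, requests):
    free = list(blocks)
    for req in requests:
        fits = [i for i, b in enumerate(free) if b >= req]
        if not fits:
            return False
        idx = min(fits, key=lambda i: free[i])   # smallest sufficient hole
        free[idx] -= req
    return True

blocks, requests = [150, 350], [300, 25, 125, 50]
print(first_fit(blocks, requests))   # True  -> first fit satisfies all requests
print(best_fit(blocks, requests))    # False -> best fit fails on the 50K request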
Problem in Contiguous Allocation: Fragmentation
As processes are stored in and removed from memory, the free memory space gets broken into pieces that are too small to be used by other processes; this is called fragmentation.
The two types of fragmentation are:
• External fragmentation
• Internal fragmentation
Fragmentation
1. Internal Fragmentation
• When a process is assigned to a memory block and that process is smaller than the block allocated to it, the unused space in the assigned block is wasted. The difference between the allocated and the requested memory is called internal fragmentation.
2. External Fragmentation
• Total memory space is enough to load a process but the
process still can’t load because free blocks of memory are
not contiguous.
Diagram-Explanation
Internal Fragmentation: a 4 MB partition is used to load only a 3 MB process, and the remaining 1 MB goes to waste.
External Fragmentation: the remaining 1 MB of each partition cannot be used as a unit to store a 4 MB process. Despite the fact that sufficient total space is available, the process cannot be loaded.
Non-Contiguous Allocation: Paging
• Paging is a fixed-size partitioning scheme.
• In paging, secondary memory and main memory are divided into equal fixed-size partitions.
• The partitions of secondary memory are called pages.
• The partitions of main memory are called frames.
• Pages are mapped onto frames; in paging, the page size must be the same as the frame size.
Paging
Paging-Example
Logical Address
The CPU generates a logical address consisting of two parts:
• Page Number
• Page Offset
• The page number specifies the particular page of the process from which the CPU wants to read the data.
• The page offset specifies the particular word on that page that the CPU wants to read.
Physical Address
• The frame number combined with the page offset
forms the required physical address.
Address Translation
• Page address is called logical address and
represented by page number and the offset.
• Logical Address = Page number + page offset
Frame address is called physical address
• Physical Address = Frame number + page offset
• A data structure called page map table is used to
keep track of the relation between a page of a
process to a frame in physical memory.
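The translation described above can be illustrated with a tiny sketch. This is a minimal Python example, not from the slides; the 1 KB page size, the sample page table, and the function name translate are assumptions chosen only for illustration.

# Minimal sketch of logical-to-physical address translation with paging.
PAGE_SIZE = 1024                   # 1 KB pages (and therefore 1 KB frames)
page_table = {0: 5, 1: 2, 2: 7}    # page number -> frame number (illustrative)

def translate(logical_address):
    page_number = logical_address // PAGE_SIZE   # which page of the process
    offset = logical_address % PAGE_SIZE         # word within that page
    frame_number = page_table[page_number]       # looked up in the page table
    return frame_number * PAGE_SIZE + offset     # physical address

print(translate(2060))   # page 2, offset 12 -> frame 7 -> 7*1024 + 12 = 7180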
Advantages and Disadvantages of Paging
Here is a list of advantages and disadvantages of paging:
• Paging reduces external fragmentation, but can still suffer from internal fragmentation.
• Paging is simple to implement and is regarded as an efficient memory management technique.
• Due to the equal size of pages and frames, swapping becomes very easy.
• The page table requires extra memory space, so paging may not be suitable for a system with a small amount of RAM.
Segmentation
• Segmentation memory management works very similarly to paging, but here segments are of variable length, whereas in paging pages are of fixed size.
• The operating system maintains a segment map table for every
process and a list of free memory blocks along with segment
numbers, their size and corresponding memory locations in
main memory.
• For each segment, the table stores the starting address of the
segment and the length of the segment. A reference to a
memory location includes a value that identifies a segment and
an offset.

Home Work
To study about
• Demand paging
• Virtual Memory
• Page Replacement algorithm
Note: I will ask questions on these topics in the next class.
