
BE(CSE) - IV Semester

CSPC 404 – Operating Systems

UNIT – III

Critical Section
When more than one process accesses the same code segment, that segment is known as a
critical section. A critical section contains shared variables or resources that need to be
synchronized to maintain the consistency of data. In simple terms, a critical section is a group
of instructions/statements or region of code that needs to be executed atomically, such as code that
accesses a shared resource (file, input or output port, global data, etc.).

In concurrent programming, if one thread tries to change the value of shared data at the
same time as another thread tries to read the value (i.e. a data race across threads), the result is
unpredictable.
Access to such shared state (shared memory, shared files, shared ports, etc.) therefore has to be
synchronized. A few programming languages have built-in support for synchronization.
It is especially important to understand race conditions when writing kernel-mode code
(a device driver, a kernel thread, etc.), since the programmer can directly access and modify
kernel data structures.

Race Condition
A race condition is a situation that may occur inside a critical section. It happens when
the result of executing the critical section by multiple threads differs according to the order in
which the threads execute.
Race conditions in critical sections can be avoided if the critical section is treated as an
atomic instruction. Also, proper thread synchronization using locks or atomic variables can
prevent race conditions.
Mutual Exclusion
Mutual exclusion, also known as mutex, was first identified by Dijkstra. A process that is
accessing a shared variable is said to be in its critical section. When no two processes can be in
the critical section at the same time, this state is known as mutual exclusion.
It is a property of concurrency control which is used to prevent race conditions.

Mutual Exclusion Mechanisms:


Locks, reader-writer locks, recursive locks, semaphores, monitors, message passing,
etc.
Requirements:

 No more than one thread can be in its critical section at any one time.
 A thread which halts in its non-critical section must not affect the others' ability to
continue.
 No deadlock: if a thread wants to enter its critical section then it will eventually be
allowed to do so.
 No starvation.
 Threads are not forced into lock-step execution of their critical sections.

Producer Consumer Problem using Semaphores

The producer-consumer problem is a classical synchronization problem. We can solve this
problem by using semaphores.

A semaphore S is an integer variable that can be accessed only through two standard
operations, wait() and signal(). The wait() operation reduces the value of the semaphore by 1 and
the signal() operation increases its value by 1.

wait(S){
while(S<=0); // busy waiting
S--;
}
signal(S){
S++;
}
Semaphores are of two types:

1. Binary Semaphore – This is similar to mutex lock but not the same thing. It can have only
two values – 0 and 1. Its value is initialized to 1. It is used to implement the solution of
critical section problem with multiple processes.
2. Counting Semaphore – Its value can range over an unrestricted domain. It is used to
control access to a resource that has multiple instances.

Solution for Producer

do{
//produce an item
wait(empty);
wait(mutex);
//place in buffer
signal(mutex);
signal(full);
}while(true)
When the producer produces an item, the value of “empty” is reduced by 1 because one
slot will now be filled. The value of mutex is also reduced to prevent the consumer from accessing
the buffer. Once the producer has placed the item, the value of “full” is increased by 1. The
value of mutex is also increased by 1 because the task of the producer has been completed and the
consumer can access the buffer.

Solution for Consumer

do{
wait(full);
wait(mutex);
// remove item from buffer
signal(mutex);
signal(empty);
// consumes item
}while(true)
As the consumer removes an item from the buffer, the value of “full” is reduced
by 1 and the value of mutex is also reduced so that the producer cannot access the buffer at this
moment. Once the consumer has consumed the item, the value of “empty” is increased by 1.
The value of mutex is also increased so that the producer can access the buffer now.
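The pseudocode above can be realized directly with POSIX semaphores and threads. The following is a
minimal sketch, not the text's own code; the buffer size, item values and loop counts are illustrative
assumptions (compile with -pthread):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                              /* buffer capacity (assumed) */

int buffer[N];
int in = 0, out = 0;                     /* insert / remove positions */

sem_t empty;                             /* counts empty slots, initialized to N */
sem_t full;                              /* counts filled slots, initialized to 0 */
sem_t mutex;                             /* binary semaphore, initialized to 1 */

void *producer(void *arg) {
    for (int item = 0; item < 20; item++) {
        sem_wait(&empty);                /* wait(empty)  */
        sem_wait(&mutex);                /* wait(mutex)  */
        buffer[in] = item;               /* place in buffer */
        in = (in + 1) % N;
        sem_post(&mutex);                /* signal(mutex) */
        sem_post(&full);                 /* signal(full)  */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 20; i++) {
        sem_wait(&full);                 /* wait(full)   */
        sem_wait(&mutex);                /* wait(mutex)  */
        int item = buffer[out];          /* remove from buffer */
        out = (out + 1) % N;
        sem_post(&mutex);                /* signal(mutex) */
        sem_post(&empty);                /* signal(empty) */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}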
Monitors in Process Synchronization

The monitor is one of the ways to achieve Process synchronization. The monitor is
supported by programming languages to achieve mutual exclusion between processes.
It is the collection of condition variables and procedures combined together in a special kind of
module or a package.
1. The processes running outside the monitor can’t access the internal variable of the monitor
but can call procedures of the monitor.
2. Only one process at a time can execute code inside monitors.

Syntax:
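The general form of a monitor, shown here as a pseudocode sketch in the usual textbook notation
(the procedure names are placeholders):

monitor monitor_name
{
    // shared variable declarations
    condition x, y;              // condition variables

    procedure P1 (...) { ... }
    procedure P2 (...) { ... }

    initialization_code (...) { ... }
}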

Condition Variables:
Two different operations are performed on the condition variables of the monitor:
wait
signal
Let us say we have two condition variables:
condition x, y; // declaring condition variables

Wait operation
x.wait(): A process performing a wait operation on a condition variable is suspended.
The suspended process is placed in the blocked queue of that condition variable.
Note: Each condition variable has its own blocked queue.

Signal operation
x.signal(): When a process performs a signal operation on a condition variable, one of the
blocked processes is given a chance to resume.
if (blocked queue of x is empty)
// ignore the signal
else
// resume a process from the blocked queue
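In C there is no built-in monitor construct, but the same wait/signal behaviour is usually obtained
from a mutex plus a condition variable. A minimal sketch using POSIX threads (the predicate `ready`
is an assumed example condition, not part of the original text):

#include <pthread.h>
#include <stdbool.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  x = PTHREAD_COND_INITIALIZER;   /* condition variable */
bool ready = false;                             /* assumed shared condition */

void waiter(void) {
    pthread_mutex_lock(&m);
    while (!ready)                   /* x.wait(): suspend until signalled */
        pthread_cond_wait(&x, &m);   /* releases m while suspended        */
    /* ... use the shared state ... */
    pthread_mutex_unlock(&m);
}

void signaller(void) {
    pthread_mutex_lock(&m);
    ready = true;
    pthread_cond_signal(&x);         /* x.signal(): wake one blocked waiter */
    pthread_mutex_unlock(&m);
}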

Advantages of Monitor:
Monitors have the advantage of making parallel programming easier and less error prone
than using techniques such as semaphores.

Disadvantages of Monitor:
Monitors have to be implemented as part of the programming language. The compiler
must generate code for them. This gives the compiler the additional burden of having to know
what operating system facilities are available to control access to critical sections in concurrent
processes. Some languages that do support monitors are Java, C#, Visual Basic, Ada and
Concurrent Euclid.

Message-Passing System

The function of a message system is to allow processes to communicate with one another
without the need to resort to shared data. We have already seen message passing used as a
method of communication in microkernels. In this scheme, services are provided as ordinary user
processes. That is, the services operate outside of the kernel. Communication among the user
processes is accomplished through the passing of messages. An IPC facility provides at least
two operations: send(message) and receive(message).

Messages sent by a process can be of either fixed or variable size. If only fixed-sized
messages can be sent, the system-level implementation is straightforward. This restriction,
however, makes the task of programming more difficult. On the other hand, variable-sized
messages require a more complex system-level implementation, but the programming task
becomes simpler.

If processes P and Q want to communicate, they must send messages to and receive
messages from each other; a communication link must exist between them. This link can be
implemented in a variety of ways. We are concerned here not with the link's physical
implementation, but rather with its logical implementation.

Here are several methods for logically implementing a link and the send/receive operations:

• Direct or indirect communication


• Symmetric or asymmetric communication

• Automatic or explicit buffering

• Send by copy or send by reference

• Fixed-sized or variable-sized messages

We look at each of these types of message systems next.

Synchronization

Communication between processes takes place by calls to send and receive primitives.
There are different design options for implementing each primitive. Message passing may be
either blocking or nonblocking, also known as synchronous and asynchronous.

Blocking send: The sending process is blocked until the message is received by the
receiving process or by the mailbox.

Nonblocking send: The sending process sends the message and resumes operation.

Blocking receive: The receiver blocks until a message is available.

Nonblocking receive: The receiver retrieves either a valid message or a null.

Different combinations of send and receive are possible. When both the send and the receive are
blocking, we have a rendezvous between the sender and the receiver.
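As a concrete illustration, POSIX message queues provide blocking send and receive primitives:
mq_send blocks when the queue is full and mq_receive blocks when it is empty. A minimal sketch
(the queue name and size limits are illustrative assumptions; link with -lrt on older Linux systems):

#include <mqueue.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

int main(void) {
    struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 10, .mq_msgsize = 128 };
    mqd_t q = mq_open("/demo_queue", O_CREAT | O_RDWR, 0644, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    const char *msg = "hello";
    mq_send(q, msg, strlen(msg) + 1, 0);               /* blocks if the queue is full  */

    char buf[128];
    ssize_t n = mq_receive(q, buf, sizeof buf, NULL);  /* blocks if the queue is empty */
    if (n >= 0) printf("received: %s\n", buf);

    mq_close(q);
    mq_unlink("/demo_queue");
    return 0;
}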

Buffering

Whether the communication is direct or indirect, messages exchanged by communicating
processes reside in a temporary queue. Basically, such a queue can be implemented in three
ways:

Zero capacity: The queue has maximum length 0; thus, the link cannot have any
messages waiting in it. In this case, the sender must block until the recipient receives the
message.

Bounded capacity: The queue has finite length n; thus, at most n messages can reside in
it. If the queue is not full when a new message is sent, the latter is placed in the queue (either the
message is copied or a pointer to the message is kept), and the sender can continue execution
without waiting. The link has a finite capacity, however. If the link is full, the sender must block
until space is available in the queue.
Unbounded capacity: The queue has potentially infinite length; thus, any number of
messages can wait in it. The sender never blocks.

The zero-capacity case is sometimes referred to as a message
system with no buffering; the other cases are referred to as automatic buffering.
Peterson’s solution

Peterson’s solution is a software based solution to the critical section problem.

Consider two processes P0 and P1. For convenience, when presenting Pi, we use Pj to
denote the other process; that is, j == 1 - i.

The processes share two variables:

boolean flag [2] ;

int turn;

Initially flag [0] = flag [1] = false, and the value of turn is immaterial (but is either 0 or 1). The
structure of process

Pi is shown below.

do{

flag[i] = true;

turn = j;

while (flag[j] && turn == j);

// critical section

flag[i] = false;

// remainder section

} while(1);

To enter the critical section, process Pi first sets flag[i] to be true and then sets turn to the
value j, thereby asserting that if the other process wishes to enter the critical section it can do so.
If both processes try to enter at the same time, turn will be set to both i and j at roughly the same
time. Only one of these assignments will last; the other will occur, but will be overwritten
immediately. The eventual value of turn decides which of the two processes is allowed to enter
its critical section first.

We now prove that this solution is correct. We need to show that:

1. Mutual exclusion is preserved,

2. The progress requirement is satisfied,

3. The bounded-waiting requirement is met.


To prove property 1, we note that each Pi enters its critical section only if either flag[j]
== false or turn == i. Also note that, if both processes were executing in their critical sections at
the same time, then flag[0] == flag[1] == true. These two observations imply that P0 and P1
could not have successfully executed their while statements at about the same time, since the
value of turn can be either 0 or 1, but cannot be both. Hence, one of the processes, say Pj, must
have successfully executed the while statement, whereas Pi had to execute at least one additional
statement ("turn == j"). However, at that time, flag[j] == true and turn == j, and this
condition will persist as long as Pj is in its critical section; as a result, mutual exclusion is preserved.

To prove properties 2 and 3, we note that a process Pi can be prevented from entering the
critical section only if it is stuck in the while loop with the condition flag[j] == true and turn ==
j; this loop is the only one possible.

If Pj is not ready to enter the critical section, then flag[j] == false and Pi can enter its
critical section. If Pj has set flag[j] to true and is also executing in its while statement, then either
turn == i or turn == j. If turn == i, then Pi will enter the critical section. If turn == j, then Pj will
enter the critical section. However, once Pj exits its critical section, it will reset flag[j] to false,
allowing Pi to enter its critical section. If Pj resets flag[j] to true, it must also set turn to i.

Thus, since Pi does not change the value of the variable turn while executing the while
statement, Pi will enter the critical section (progress) after at most one entry by Pj (bounded
waiting).
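For completeness, here is a runnable sketch of the same algorithm in C, using C11 atomics with
their default sequentially consistent ordering so that the compiler and hardware cannot reorder the
flag and turn accesses; the shared counter and iteration count are illustrative assumptions, not part
of the original text (compile with -pthread):

#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

atomic_bool flag[2];
atomic_int  turn;
long counter = 0;                        /* shared data protected by the lock */

void lock(int i) {
    int j = 1 - i;
    atomic_store(&flag[i], true);
    atomic_store(&turn, j);
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                                /* busy wait */
}

void unlock(int i) {
    atomic_store(&flag[i], false);
}

void *worker(void *arg) {
    int i = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        lock(i);
        counter++;                       /* critical section */
        unlock(i);
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}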

Reader’s & Writer Problem

The readers-writers problem relates to an object such as a file that is shared between
multiple processes. Some of these processes are readers i.e. they only want to read the data from
the object and some of the processes are writers i.e. they want to write into the object.
The readers-writers problem is used to manage synchronization so that there are no
problems with the object data. For example - If two readers access the object at the same time
there is no problem. However if two writers or a reader and writer access the object at the same
time, there may be problems.
To solve this situation, a writer should get exclusive access to an object i.e. when a writer
is accessing the object, no reader or writer may access it. However, multiple readers can access
the object at the same time.
This can be implemented using semaphores. The codes for the reader and writer process
in the reader-writer problem are given as follows:
Reader Process
The code that defines the reader process is given below:
wait(mutex);
rc++;
if (rc == 1)
    wait(wrt);
signal(mutex);

// READ THE OBJECT

wait(mutex);
rc--;
if (rc == 0)
    signal(wrt);
signal(mutex);

In the above code, mutex and wrt are semaphores that are initialized to 1. Also, rc is a
variable that is initialized to 0. The mutex semaphore ensures mutual exclusion and wrt handles
the writing mechanism and is common to the reader and writer process code.
The variable rc denotes the number of readers accessing the object. As soon as rc
becomes 1, the wait operation is used on wrt. This means that a writer cannot access the object
anymore. After the read operation is done, rc is decremented. When rc becomes 0, the signal
operation is used on wrt, so a writer can access the object now.
Writer Process
The code that defines the writer process is given below:

wait(wrt);

// WRITE INTO THE OBJECT

signal(wrt);

If a writer wants to access the object, wait operation is performed on wrt. After that no
other writer can access the object. When a writer is done writing into the object, signal operation
is performed on wrt.
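The reader and writer code above can be turned into a runnable C sketch with POSIX semaphores.
The shared integer `data` and the thread counts below are illustrative assumptions (compile with
-pthread):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex;          /* protects rc, initialized to 1            */
sem_t wrt;            /* writer / first-reader lock, initialized to 1 */
int   rc = 0;         /* number of readers currently reading      */
int   data = 0;       /* the shared object                        */

void *reader(void *arg) {
    sem_wait(&mutex);
    rc++;
    if (rc == 1) sem_wait(&wrt);     /* first reader locks out writers */
    sem_post(&mutex);

    printf("read %d\n", data);       /* READ THE OBJECT */

    sem_wait(&mutex);
    rc--;
    if (rc == 0) sem_post(&wrt);     /* last reader lets writers in */
    sem_post(&mutex);
    return NULL;
}

void *writer(void *arg) {
    sem_wait(&wrt);
    data++;                          /* WRITE INTO THE OBJECT */
    sem_post(&wrt);
    return NULL;
}

int main(void) {
    pthread_t r[3], w;
    sem_init(&mutex, 0, 1);
    sem_init(&wrt, 0, 1);
    pthread_create(&w, NULL, writer, NULL);
    for (int i = 0; i < 3; i++) pthread_create(&r[i], NULL, reader, NULL);
    pthread_join(w, NULL);
    for (int i = 0; i < 3; i++) pthread_join(r[i], NULL);
    return 0;
}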

Dining Philosophers Problem

Solution
A solution of the Dining Philosophers Problem is to use a semaphore to represent a
chopstick. A chopstick can be picked up by executing a wait operation on the semaphore and
released by executing a signal operation on the semaphore.
The structure of the chopstick is shown below:

semaphore chopstick [5];

Initially the elements of the chopstick are initialized to 1 as the chopsticks are on the table
and not picked up by a philosopher.
The structure of a random philosopher i is given as follows:

do {

    wait( chopstick[i] );
    wait( chopstick[ (i+1) % 5] );

    // EAT THE RICE

    signal( chopstick[i] );
    signal( chopstick[ (i+1) % 5] );

    // THINK

} while(1);

In the above structure, first wait operation is performed on chopstick[i] and chopstick[
(i+1) % 5]. This means that the philosopher i has picked up the chopsticks on his sides. Then the
eating function is performed.
After that, signal operation is performed on chopstick[i] and chopstick[ (i+1) % 5]. This
means that the philosopher i has eaten and put down the chopsticks on his sides. Then the
philosopher goes back to thinking.
Difficulty with the solution
The above solution makes sure that no two neighboring philosophers can eat at the same
time. But this solution can lead to a deadlock. This may happen if all the philosophers pick their
left chopstick simultaneously. Then none of them can eat and deadlock occurs.
Some of the ways to avoid deadlock are

 There should be at most four philosophers on the table.


 An even-numbered philosopher should pick up the right chopstick first and then the left
chopstick, while an odd-numbered philosopher should pick up the left chopstick first and then
the right chopstick (a sketch of this ordering is given below).
 A philosopher should only be allowed to pick up the chopsticks if both are available at the
same time.
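A minimal sketch of the even/odd ordering, written in the same pseudocode style as the structure
above and keeping the same chopstick semaphores. It breaks the circular wait because two
neighbouring philosophers now contend for the same chopstick first:

do {
    if (i % 2 == 0) {                     // even philosopher: right chopstick first
        wait( chopstick[ (i+1) % 5] );
        wait( chopstick[i] );
    } else {                              // odd philosopher: left chopstick first
        wait( chopstick[i] );
        wait( chopstick[ (i+1) % 5] );
    }

    // EAT THE RICE

    signal( chopstick[i] );
    signal( chopstick[ (i+1) % 5] );

    // THINK

} while(1);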
Deadlock

A deadlock happens in an operating system when two or more processes need some resource
held by another process in order to complete their execution.

For example, suppose process 1 holds resource 1 and needs to acquire resource 2.
Similarly, process 2 holds resource 2 and needs to acquire resource 1. Process 1 and process 2 are
in deadlock, as each of them needs the other’s resource to complete its execution but neither of
them is willing to relinquish its resource.
Conditions
A deadlock occurs only if the following four conditions hold true simultaneously. These
conditions are not mutually exclusive.
The conditions are
Mutual Exclusion
There should be a resource that can only be held by one process at a time. For example,
there may be a single instance of Resource 1, held by Process 1 only.

Hold and Wait


A process can hold multiple resources and still request more resources from other
processes which are holding them. For example, Process 2 may hold Resource 2 and
Resource 3 while requesting Resource 1, which is held by Process 1.
No Preemption
A resource cannot be preempted from a process by force. A process can only release a
resource voluntarily. For example, Process 2 cannot preempt Resource 1 from Process
1. It will only be released when Process 1 relinquishes it voluntarily after its execution is
complete.

Circular Wait
A process is waiting for the resource held by the second process, which is waiting for the
resource held by the third process and so on, till the last process is waiting for a resource held by
the first process. This forms a circular chain. For example: Process 1 is allocated Resource2 and
it is requesting Resource 1. Similarly, Process 2 is allocated Resource 1 and it is requesting
Resource 2. This forms a circular wait loop.
Deadlock Detection
A deadlock can be detected by a resource scheduler as it keeps track of all the resources
that are allocated to different processes. After a deadlock is detected, it can be resolved using the
following methods:

 All the processes that are involved in the deadlock are terminated. This is not a good
approach as all the progress made by the processes is destroyed.
 Resources can be preempted from some processes and given to others till the deadlock is
resolved.
Deadlock Prevention
It is very important to prevent a deadlock before it can occur. So, the system checks each
transaction before it is executed to make sure it does not lead to deadlock. If there is even a slight
chance that a transaction may lead to deadlock in the future, it is never allowed to execute.
Deadlock Avoidance
It is better to avoid a deadlock rather than take measures after the deadlock has occurred.
The wait for graph can be used for deadlock avoidance. This is however only useful for smaller
databases as it can get quite complex in larger databases.

Banker’s Algorithm in Operating System

The banker’s algorithm is a resource allocation and deadlock avoidance algorithm that
tests for safety by simulating the allocation for predetermined maximum possible amounts of all
resources, then makes an “s-state” check to test for possible activities, before deciding whether
allocation should be allowed to continue.
Banker’s algorithm is named so because it is used in the banking system to check whether a
loan can be sanctioned to a person or not. Suppose there are n account holders in a
bank and the total sum of their money is S. If a person applies for a loan, then the bank first
subtracts the loan amount from the total money the bank has, and only if the remaining amount is
greater than S is the loan sanctioned. This is done so that if all the account holders
come to withdraw their money, the bank can easily do it.
In other words, the bank would never allocate its money in such a way that it can no longer
satisfy the needs of all its customers. The bank would try to be in safe state always.
Following Data structures are used to implement the Banker’s Algorithm:
Let ‘n’ be the number of processes in the system and ‘m’ be the number of resources types.

Available :
 It is a 1-d array of size ‘m’ indicating the number of available resources of each type.
 Available[ j ] = k means there are ‘k’ instances of resource type Rj
Max :
 It is a 2-d array of size ‘n*m’ that defines the maximum demand of each process in a
system.
 Max[ i, j ] = k means process Pi may request at most ‘k’ instances of resource type Rj.
Allocation :
 It is a 2-d array of size ‘n*m’ that defines the number of resources of each type currently
allocated to each process.
 Allocation[ i, j ] = k means process Pi is currently allocated ‘k’ instances of resource
type Rj
Need :
 It is a 2-d array of size ‘n*m’ that indicates the remaining resource need of each process.
 Need [ i, j ] = k means process Pi currently needs ‘k’ instances of resource type Rj
for its execution.
 Need [ i, j ] = Max [ i, j ] – Allocation [ i, j ]

Allocationi specifies the resources currently allocated to process P i and Needi specifies
the additional resources that process Pi may still request to complete its task.
Banker’s algorithm consists of Safety algorithm and Resource request algorithm
Safety Algorithm
The algorithm for finding out whether or not a system is in a safe state can be described
as follows:
1) Let Work and Finish be vectors of length ‘m’ and ‘n’ respectively.
Initialize: Work = Available
Finish[i] = false; for i=1, 2, 3, 4….n
2) Find an i such that both
a) Finish[i] = false
b) Needi <= Work
if no such i exists goto step (4)
3) Work = Work + Allocation[i]
Finish[i] = true
goto step (2)
4) if Finish [i] = true for all i
then the system is in a safe state
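The safety algorithm above translates almost line by line into C. The following is a sketch; the
number of processes N, the number of resource types M, and the function name is_safe are assumed
for illustration, and the Available, Max and Allocation matrices would come from the actual system
state:

#include <stdbool.h>

#define N 5     /* number of processes (assumed)      */
#define M 3     /* number of resource types (assumed) */

/* Returns true if the state described by available/max/alloc is safe. */
bool is_safe(int available[M], int max[N][M], int alloc[N][M]) {
    int  need[N][M], work[M];
    bool finish[N] = { false };

    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            need[i][j] = max[i][j] - alloc[i][j];   /* Need = Max - Allocation */
    for (int j = 0; j < M; j++)
        work[j] = available[j];                     /* Work = Available        */

    for (int count = 0; count < N; count++) {
        int i;
        for (i = 0; i < N; i++) {                   /* find i with !Finish[i] and Need_i <= Work */
            if (finish[i]) continue;
            int j;
            for (j = 0; j < M; j++)
                if (need[i][j] > work[j]) break;
            if (j == M) break;                      /* process i can run to completion */
        }
        if (i == N) return false;                   /* no such i exists: state is unsafe */
        for (int j = 0; j < M; j++)
            work[j] += alloc[i][j];                 /* Work = Work + Allocation_i */
        finish[i] = true;
    }
    return true;                                    /* Finish[i] == true for all i: safe */
}

Calling is_safe on the current Available, Max and Allocation matrices tells the scheduler whether a
safe sequence still exists.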

Resource-Request Algorithm
Let Requesti be the request array for process Pi. Requesti [j] = k means process Pi wants k
instances of resource type Rj. When a request for resources is made by process P i, the following
actions are taken:

1) If Requesti <= Needi, go to step (2); otherwise, raise an error condition, since the process has
exceeded its maximum claim.
2) If Requesti <= Available, go to step (3); otherwise, Pi must wait, since the resources are not
available.
3) Have the system pretend to have allocated the requested resources to process Pi by modifying
the state as follows:
Available = Available – Requesti
Allocationi = Allocationi + Requesti
Needi = Needi – Requesti
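The resource-request step can be layered on top of the is_safe sketch above: tentatively apply the
request, run the safety algorithm, and roll back if the resulting state is unsafe. This is only an
illustrative sketch; request_resources is an assumed helper name, and here both the error case and
the "must wait" case simply return false:

/* Returns true if Request_i can be granted immediately, false if Pi must wait
   (or if the request exceeds the declared maximum, which is an error). */
bool request_resources(int i, int request[M],
                       int available[M], int max[N][M], int alloc[N][M]) {
    for (int j = 0; j < M; j++)
        if (request[j] > max[i][j] - alloc[i][j])   /* Request_i <= Need_i ?    */
            return false;                           /* exceeded maximum claim   */
    for (int j = 0; j < M; j++)
        if (request[j] > available[j])              /* Request_i <= Available ? */
            return false;                           /* Pi must wait             */

    for (int j = 0; j < M; j++) {                   /* pretend to allocate      */
        available[j] -= request[j];
        alloc[i][j]  += request[j];
    }
    if (is_safe(available, max, alloc))
        return true;                                /* grant the request        */

    for (int j = 0; j < M; j++) {                   /* unsafe: roll back        */
        available[j] += request[j];
        alloc[i][j]  -= request[j];
    }
    return false;
}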

Deadlock Detection and Recovery

Deadlock Detection
If resources have single instance: In this case for Deadlock detection we can run an
algorithm to check for cycle in the Resource Allocation Graph. Presence of cycle in the graph is
the sufficient condition for deadlock.

1. For example, if resource 1 and resource 2 have single instances and there is a cycle
R1 → P1 → R2 → P2 → R1, then deadlock is confirmed.
2. If there are multiple instances of resources:
Detection of a cycle is a necessary but not a sufficient condition for deadlock; in
this case the system may or may not be in deadlock, depending on the situation.

Deadlock Recovery
A traditional operating system such as Windows doesn’t deal with deadlock recovery as
it is time and space consuming process. Real-time operating systems use Deadlock recovery.
Recovery method
1. Killing processes: either kill all the processes involved in the deadlock, or kill the processes
one by one, checking for deadlock again after each kill and repeating until the
system recovers from the deadlock.
2. Resource preemption: resources are preempted from the processes involved in the
deadlock and allocated to other processes, so that there is a possibility
of recovering the system from the deadlock. In this case, the preempted processes may starve.
UNIT – IV

Memory Management

Memory management is the functionality of an operating system which handles or
manages primary memory and moves processes back and forth between main memory and disk
during execution. Memory management keeps track of each and every memory location,
regardless of whether it is allocated to some process or free. It checks how much memory is to
be allocated to processes. It decides which process will get memory at what time. It tracks
whenever some memory gets freed or unallocated and correspondingly updates the status.

Process Address Space

The process address space is the set of logical addresses that a process references in its
code. For example, when 32-bit addressing is in use, addresses can range from 0 to 0x7fffffff;
that is, 2^31 possible numbers, for a total theoretical size of 2 gigabytes.
The operating system takes care of mapping the logical addresses to physical addresses
at the time of memory allocation to the program. There are three types of addresses used in a
program before and after memory is allocated

S.N. Memory Addresses & Description

1
Symbolic addresses
The addresses used in a source code. The variable names, constants, and instruction
labels are the basic elements of the symbolic address space.

2
Relative addresses
At the time of compilation, a compiler converts symbolic addresses into relative
addresses.

3
Physical addresses
The loader generates these addresses at the time when a program is loaded into main
memory.

Virtual and physical addresses are the same in compile-time and load-time address-
binding schemes. Virtual and physical addresses differ in execution-time address-binding
scheme.
The set of all logical addresses generated by a program is referred to as a logical address
space. The set of all physical addresses corresponding to these logical addresses is referred to as
a physical address space.
The runtime mapping from virtual to physical address is done by the memory
management unit (MMU) which is a hardware device. MMU uses following mechanism to
convert virtual address to physical address.
 The value in the base register is added to every address generated by a user process,
which is treated as offset at the time it is sent to memory. For example, if the base
register value is 10000, then an attempt by the user to use address location 100 will be
dynamically reallocated to location 10100.
 The user program deals with virtual addresses; it never sees the real physical addresses.
Static vs Dynamic Loading
The choice between static and dynamic loading is made at the time the computer
program is developed. If you have to load your program statically, then at the time of
compilation, the complete programs will be compiled and linked without leaving any external
program or module dependency. The linker combines the object program with other necessary
object modules into an absolute program, which also includes logical addresses.
If you are writing a dynamically loaded program, then your compiler will compile the
program and for all the modules which you want to include dynamically, only references will be
provided and rest of the work will be done at the time of execution.
At the time of loading, with static loading, the absolute program (and data) is loaded
into memory in order for execution to start.
If you are using dynamic loading, dynamic routines of the library are stored on a disk in
relocatable form and are loaded into memory only when they are needed by the program.
Static vs Dynamic Linking
When static linking is used, the linker combines all other modules needed by a program
into a single executable program to avoid any runtime dependency.
When dynamic linking is used, it is not required to link the actual module or library with
the program, rather a reference to the dynamic module is provided at the time of compilation
and linking. Dynamic Link Libraries (DLL) in Windows and Shared Objects in Unix are good
examples of dynamic libraries.
Swapping
Swapping is a mechanism in which a process can be swapped temporarily out of main
memory (moved) to secondary storage (disk) so as to make that memory available to other
processes. At some later time, the system swaps the process back from secondary storage to
main memory. Though performance is usually affected by the swapping process, it helps in
running multiple and big processes in parallel, and for that reason swapping is also known as a
technique for memory compaction.
The total time taken by swapping process includes the time it takes to move the entire
process to a secondary disk and then to copy the process back to memory, as well as the time
the process takes to regain main memory.
Let us assume that the user process is of size 2048 KB and that the standard hard disk where
swapping will take place has a data transfer rate of around 1 MB (1024 KB) per second. The actual
transfer of the 2048 KB process to or from memory will take
2048 KB / 1024 KB per second
= 2 seconds
= 2000 milliseconds
Now considering in and out time, it will take complete 4000 milliseconds plus other
overhead where the process competes to regain main memory.

Memory Allocation

Main memory usually has two partitions


 Low Memory − Operating system resides in this memory.
 High Memory − User processes are held in high memory.
Operating system uses the following memory allocation mechanism.

S.N. Memory Allocation & Description

1
Single-partition allocation
In this type of allocation, relocation-register scheme is used to protect user processes
from each other, and from changing operating-system code and data. Relocation
register contains value of smallest physical address whereas limit register contains
range of logical addresses. Each logical address must be less than the limit register.

2
Multiple-partition allocation
In this type of allocation, main memory is divided into a number of fixed-sized
partitions where each partition should contain only one process. When a partition is
free, a process is selected from the input queue and is loaded into the free partition.
When the process terminates, the partition becomes available for another process.

Fragmentation

As processes are loaded into and removed from memory, the free memory space is broken
into little pieces. It can then happen that processes cannot be allocated to these memory blocks
because the blocks are too small, so the memory blocks remain unused. This problem is known
as fragmentation.
Fragmentation is of two types −

S.N. Fragmentation & Description

1
External fragmentation
Total memory space is enough to satisfy a request or to reside a process in it, but it is
not contiguous, so it cannot be used.

2
Internal fragmentation
The memory block assigned to a process is bigger than requested. Some portion of memory is left
unused, as it cannot be used by another process.

Fragmentation causes waste of memory, and a compaction technique can be used to create
more free memory out of fragmented memory.
External fragmentation can be reduced by compaction, i.e. by shuffling memory contents to
place all free memory together in one large block. To make compaction feasible, relocation
should be dynamic.
The internal fragmentation can be reduced by effectively assigning the smallest partition
but large enough for the process.

Paging

A computer can address more memory than the amount physically installed on the
system. This extra memory is called virtual memory, and it is a section of a hard disk that is
set up to emulate the computer's RAM. The paging technique plays an important role in
implementing virtual memory.
Paging is a memory management technique in which process address space is broken
into blocks of the same size called pages (size is power of 2, between 512 bytes and 8192
bytes). The size of the process is measured in the number of pages. Similarly, main memory is
divided into small fixed-sized blocks of (physical) memory called frames and the size of a
frame is kept the same as that of a page to have optimum utilization of the main memory and to
avoid external fragmentation.
Address Translation
Page address is called logical address and represented by page number and the offset.
Logical Address = Page number + page offset
Frame address is called physical address and represented by a frame number and the offset.
Physical Address = Frame number + page offset
A data structure called page map table is used to keep track of the relation between a
page of a process to a frame in physical memory.

When the system allocates a frame to any page, it translates this logical address into a
physical address and creates an entry in the page table, to be used throughout the execution of the
program.
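A toy sketch of the translation performed on each memory access with a single-level page table; the
page size, the page table contents, and the example address are assumed values chosen only for
illustration:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u                     /* assumed: 4 KB pages */

/* Assumed page map table: page_table[p] holds the frame number for page p. */
static uint32_t page_table[8] = { 5, 2, 7, 0, 1, 3, 6, 4 };

uint32_t translate(uint32_t logical) {
    uint32_t page   = logical / PAGE_SIZE;  /* page number  */
    uint32_t offset = logical % PAGE_SIZE;  /* page offset  */
    uint32_t frame  = page_table[page];     /* look up the frame for this page */
    return frame * PAGE_SIZE + offset;      /* physical address = frame number : offset */
}

int main(void) {
    uint32_t logical = 2 * PAGE_SIZE + 123; /* page 2, offset 123 */
    printf("logical %u -> physical %u\n", logical, translate(logical));
    return 0;
}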
When a process is to be executed, its corresponding pages are loaded into any available
memory frames. Suppose you have a program of 8Kb but your memory can accommodate only
5Kb at a given point in time, then the paging concept will come into picture. When a computer
runs out of RAM, the operating system (OS) will move idle or unwanted pages of memory to
secondary memory to free up RAM for other processes and brings them back when needed by
the program.
This process continues during the whole execution of the program: the OS keeps
removing idle pages from main memory, writing them onto the secondary memory, and bringing
them back when required by the program.

Advantages and Disadvantages of Paging


Here is a list of advantages and disadvantages of paging
 Paging reduces external fragmentation, but still suffers from internal fragmentation.
 Paging is simple to implement and is regarded as an efficient memory management
technique.
 Due to equal size of the pages and frames, swapping becomes very easy.
 Page table requires extra memory space, so may not be good for a system having small
RAM.

Segmentation

Segmentation is a memory management technique in which each job is divided into


several segments of different sizes, one for each module that contains pieces that perform
related functions. Each segment is actually a different logical address space of the program.
When a process is to be executed, its corresponding segments are loaded into non-
contiguous memory, though every segment is loaded into a contiguous block of available
memory.
Segmentation memory management works very similar to paging but here segments are
of variable-length where as in paging pages are of fixed size.
A program segment contains the program's main function, utility functions, data
structures, and so on. The operating system maintains a segment map table for every process
and a list of free memory blocks along with segment numbers, their size and corresponding
memory locations in main memory. For each segment, the table stores the starting address of
the segment and the length of the segment. A reference to a memory location includes a value
that identifies a segment and an offset.
Virtual Memory
A computer can address more memory than the amount physically installed on the
system. This extra memory is actually called virtual memory and it is a section of a hard disk
that's set up to emulate the computer's RAM.
The main visible advantage of this scheme is that programs can be larger than physical
memory. Virtual memory serves two purposes. First, it allows us to extend the use of physical
memory by using disk. Second, it allows us to have memory protection, because each virtual
address is translated to a physical address.
Following are the situations, when entire program is not required to be loaded fully in
main memory.
 User written error handling routines are used only when an error occurred in the data or
computation.
 Certain options and features of a program may be used rarely.
 Many tables are assigned a fixed amount of address space even though only a small
amount of the table is actually used.
 The ability to execute a program that is only partially in memory would confer many
benefits.
 Fewer I/O operations would be needed to load or swap each user program into memory.
 A program would no longer be constrained by the amount of physical memory that is
available.
 Each user program could take less physical memory, so more programs could be run at the
same time, with a corresponding increase in CPU utilization and throughput.
In modern microprocessors intended for general-purpose use, a memory management unit,
or MMU, is built into the hardware. The MMU's job is to translate virtual addresses into
physical addresses.
Virtual memory is commonly implemented by demand paging. It can also be
implemented in a segmentation system. Demand segmentation can also be used to provide
virtual memory.

Demand Paging

A demand paging system is quite similar to a paging system with swapping where
processes reside in secondary memory and pages are loaded only on demand, not in advance.
When a context switch occurs, the operating system does not copy any of the old program’s
pages out to the disk or any of the new program’s pages into main memory. Instead, it just
begins executing the new program after loading the first page and fetches that program’s pages
as they are referenced.
While executing a program, if the program references a page which is not available in
the main memory because it was swapped out a little earlier, the processor treats this invalid
memory reference as a page fault and transfers control from the program to the operating
system to demand the page back into memory.
Advantages
Following are the advantages of Demand Paging

 Large virtual memory.


 More efficient use of memory.
 There is no limit on degree of multiprogramming.
Disadvantages
 Number of tables and the amount of processor overhead for handling page interrupts are
greater than in the case of the simple paged management techniques.

Page Replacement Algorithm

Page replacement algorithms are the techniques using which an operating system
decides which memory pages to swap out and write to disk when a page of memory needs to be
allocated. Page replacement happens whenever a page fault occurs and a free page cannot be used
for the allocation, either because no pages are available or because the number of free pages
is lower than required.
When the page that was selected for replacement and paged out is referenced again,
it has to be read in from disk, and this requires waiting for I/O completion. This determines the
quality of the page replacement algorithm: the less time spent waiting for page-ins, the better
the algorithm.
A page replacement algorithm looks at the limited information about accessing the pages
provided by hardware, and tries to select which pages should be replaced to minimize the total
number of page misses, while balancing it with the costs of primary storage and processor time
of the algorithm itself. There are many different page replacement algorithms. We evaluate an
algorithm by running it on a particular string of memory references and computing the number of
page faults.

Reference String

The string of memory references is called a reference string. Reference strings are
generated artificially or by tracing a given system and recording the address of each memory
reference. The latter choice produces a large amount of data, about which we note two things.
 For a given page size, we need to consider only the page number, not the entire address.
 If we have a reference to a page p, then any immediately following references to
page p will never cause a page fault. Page p will be in memory after the first reference;
the immediately following references will not fault.
 For example, consider the following sequence of addresses − 123,215,600,1234,76,96
 If page size is 100, then the reference string is 1,2,6,12,0,0

First In First Out (FIFO) algorithm

 Oldest page in main memory is the one which will be selected for replacement.
 Easy to implement: keep a list, replace pages from the tail and add new pages at the
head. (A minimal simulation is sketched below.)
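A small simulation that counts page faults for FIFO replacement. The reference string is taken from
the example in the Reference String section above; the frame count of 3 is an assumed value:

#include <stdio.h>
#include <stdbool.h>

int main(void) {
    int ref[] = { 1, 2, 6, 12, 0, 0 };       /* reference string from the example above */
    int n     = sizeof ref / sizeof ref[0];
    int frames[3] = { -1, -1, -1 };          /* assumed: 3 frames, initially empty */
    int next = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        bool hit = false;
        for (int f = 0; f < 3; f++)
            if (frames[f] == ref[i]) { hit = true; break; }
        if (!hit) {                          /* page fault: replace the oldest page */
            frames[next] = ref[i];
            next = (next + 1) % 3;
            faults++;
        }
    }
    printf("page faults = %d\n", faults);
    return 0;
}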

Optimal Page algorithm

 An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms.
An optimal page-replacement algorithm exists, and has been called OPT or MIN.
 Replace the page that will not be used for the longest period of time. Use the time when a
page is to be used.
Least Recently Used (LRU) algorithm

 Page which has not been used for the longest time in main memory is the one which will
be selected for replacement.
 Easy to implement, keep a list, replace pages by looking back into time.

Page Buffering algorithm

 To get a process started quickly, keep a pool of free frames.
 On a page fault, select a page to be replaced.
 Write the new page into a frame from the free pool, mark the page table, and restart the process.
 Now write the dirty page out to disk and place the frame holding the replaced page in the free
pool.

Least frequently Used(LFU) algorithm

 The page with the smallest count is the one which will be selected for replacement.
 This algorithm suffers from the situation in which a page is used heavily during the
initial phase of a process, but then is never used again.
Most frequently Used (MFU) algorithm

 This algorithm is based on the argument that the page with the smallest count was
probably just brought in and has yet to be used.
UNIT – V

Operating System - I/O Hardware


One of the important jobs of an Operating System is to manage various I/O devices
including mouse, keyboards, touch pad, disk drives, display adapters, USB devices, Bit-mapped
screen, LED, Analog-to-digital converter, On/off switch, network connections, audio I/O,
printers etc.
An I/O system is required to take an application I/O request and send it to the physical
device, then take whatever response comes back from the device and send it to the application.
I/O devices can be divided into two categories −
 Block devices − A block device is one with which the driver communicates by sending
entire blocks of data. For example, Hard disks, USB cameras, Disk-On-Key etc.
 Character devices − A character device is one with which the driver communicates by
sending and receiving single characters (bytes, octets). For example, serial ports, parallel
ports, sound cards etc.

Device Controllers

Device drivers are software modules that can be plugged into an OS to handle a particular
device. Operating System takes help from device drivers to handle all I/O devices.

The Device Controller works like an interface between a device and a device driver. I/O
units (Keyboard, mouse, printer, etc.) typically consist of a mechanical component and an
electronic component where electronic component is called the device controller.

There is always a device controller and a device driver for each device to communicate
with the Operating Systems. A device controller may be able to handle multiple devices. As an
interface its main task is to convert serial bit stream to block of bytes, perform error correction as
necessary.

Any device connected to the computer is connected by a plug and socket, and the socket
is connected to a device controller. Following is a model for connecting the CPU, memory,
controllers, and I/O devices where CPU and device controllers all use a common bus for
communication.
Communication to I/O Devices

The CPU must have a way to pass information to and from an I/O device. There are three
approaches available for communication between the CPU and a device.

 Special Instruction I/O


 Memory-mapped I/O
 Direct memory access (DMA)
Special Instruction I/O
This uses CPU instructions that are specifically made for controlling I/O devices. These
instructions typically allow data to be sent to an I/O device or read from an I/O device.
Memory-mapped I/O
When using memory-mapped I/O, the same address space is shared by memory and I/O
devices. The device is connected directly to certain main memory locations so that I/O device
can transfer block of data to/from memory without going through CPU.

While using memory mapped IO, OS allocates buffer in memory and informs I/O device
to use that buffer to send data to the CPU. I/O device operates asynchronously with CPU,
interrupts CPU when finished.
The advantage to this method is that every instruction which can access memory can be
used to manipulate an I/O device. Memory mapped IO is used for most high-speed I/O devices
like disks, communication interfaces.
Direct Memory Access (DMA)

Slow devices like keyboards will generate an interrupt to the main CPU after each byte is
transferred. If a fast device such as a disk generated an interrupt for each byte, the operating
system would spend most of its time handling these interrupts. So a typical computer uses direct
memory access (DMA) hardware to reduce this overhead.

Direct Memory Access (DMA) means the CPU grants the I/O module the authority to read from or
write to memory without CPU involvement. The DMA module itself controls the exchange of data between
main memory and the I/O device. The CPU is only involved at the beginning and end of the transfer
and is interrupted only after the entire block has been transferred.
Direct Memory Access needs a special hardware called DMA controller (DMAC) that
manages the data transfers and arbitrates access to the system bus. The controllers are
programmed with source and destination pointers (where to read/write the data), counters to
track the number of transferred bytes, and settings, which includes I/O and memory types,
interrupts and states for the CPU cycles.

Polling vs Interrupts I/O

A computer must have a way of detecting the arrival of any type of input. There are two
ways that this can happen, known as polling and interrupts. Both of these techniques allow the
processor to deal with events that can happen at any time and that are not related to the process
it is currently running.
Polling I/O
Polling is the simplest way for an I/O device to communicate with the processor. The
process of periodically checking status of the device to see if it is time for the next I/O
operation, is called polling. The I/O device simply puts the information in a Status register, and
the processor must come and get the information.
Most of the time, devices will not require attention, and when one does it will have to
wait until it is next interrogated by the polling program. This is an inefficient method, and much
of the processor's time is wasted on unnecessary polls.
Compare this method to a teacher continually asking every student in a class, one after
another, if they need help. Obviously the more efficient method would be for a student to
inform the teacher whenever they require assistance.

Interrupts I/O
An alternative scheme for dealing with I/O is the interrupt-driven method. An interrupt is
a signal to the microprocessor from a device that requires attention.
A device controller puts an interrupt signal on the bus when it needs the CPU’s attention.
When the CPU receives an interrupt, it saves its current state and invokes the appropriate interrupt
handler using the interrupt vector (the addresses of OS routines that handle various events). When the
interrupting device has been dealt with, the CPU continues with its original task as if it had
never been interrupted.
I/O software is organized in the following layers:
 User Level Libraries − This provides simple interface to the user program to perform
input and output. For example, stdio is a library provided by C and C++ programming
languages.
 Kernel Level Modules − This provides device driver to interact with the device
controller and device independent I/O modules used by the device drivers.
 Hardware − This layer includes actual hardware and hardware controller which interact
with the device drivers and makes hardware alive.
A key concept in the design of I/O software is that it should be device independent where it
should be possible to write programs that can access any I/O device without having to specify
the device in advance. For example, a program that reads a file as input should be able to read a
file on a floppy disk, on a hard disk, or on a CD-ROM, without having to modify the program
for each different device.
Device Drivers

Device drivers are software modules that can be plugged into an OS to handle a
particular device. Operating System takes help from device drivers to handle all I/O devices.
Device drivers encapsulate device-dependent code and implement a standard interface in such a
way that the code contains device-specific register reads/writes. A device driver is generally written
by the device's manufacturer and delivered along with the device on a CD-ROM.
A device driver performs the following jobs

 Accept requests from the device-independent software above it.
 Interact with the device controller to perform I/O and carry out the required error
handling.
 Make sure that the request is executed successfully.
How a device driver handles a request is as follows:
Suppose a request comes to read a block N. If the driver is idle at the time a request
arrives, it starts carrying out the request immediately. Otherwise, if the driver is already busy
with some other request, it places the new request in the queue of pending requests.

Interrupt handlers

An interrupt handler, also known as an interrupt service routine or ISR, is a piece of
software, more specifically a callback function in an operating system or in a device driver,
whose execution is triggered by the reception of an interrupt.
When the interrupt happens, the interrupt procedure does whatever it has to do in order to
handle the interrupt, updates data structures, and wakes up the process that was waiting for an
interrupt to happen.
The interrupt mechanism accepts an address ─ a number that selects a specific interrupt
handling routine/function from a small set. In most architectures, this address is an offset stored
in a table called the interrupt vector table. This vector contains the memory addresses of
specialized interrupt handlers.

Device-Independent I/O Software

The basic function of the device-independent software is to perform the I/O functions
that are common to all devices and to provide a uniform interface to the user-level software.
Though it is difficult to write completely device-independent software, we can write some
modules which are common among all the devices. Following is a list of functions of device-
independent I/O software −

 Uniform interfacing for device drivers


 Device naming - Mnemonic names mapped to Major and Minor device numbers
 Device protection
 Providing a device-independent block size
 Buffering because data coming off a device cannot be stored in final destination.
 Storage allocation on block devices
 Allocation and releasing dedicated devices
 Error Reporting

User-Space I/O Software

These are the libraries which provide a richer and simplified interface to access the
functionality of the kernel or, ultimately, to interact with the device drivers. Most of the user-
level I/O software consists of library procedures, with some exceptions such as the spooling system,
which is a way of dealing with dedicated I/O devices in a multiprogramming system.
I/O libraries (e.g., stdio) are in user space and provide an interface to the OS-resident
device-independent I/O software. For example, putchar(), getchar(), printf() and scanf() are examples
of the user-level I/O library stdio available in C programming.
Disk Scheduling Algorithms

Disk scheduling is done by operating systems to schedule I/O requests arriving for the
disk. Disk scheduling is also known as I/O scheduling.
Disk scheduling is important because:
 Multiple I/O requests may arrive by different processes and only one I/O request can be
served at a time by the disk controller. Thus other I/O requests need to wait in the waiting
queue and need to be scheduled.
 Two or more request may be far from each other so can result in greater disk arm
movement.
 Hard drives are one of the slowest parts of the computer system and thus need to be
accessed in an efficient manner.
There are many Disk Scheduling Algorithms but before discussing them let’s have a
quick look at some of the important terms:
 Seek Time: Seek time is the time taken to move the disk arm to the specified track where the
data is to be read or written. So the disk scheduling algorithm that gives a minimum average
seek time is better.
 Rotational Latency: Rotational latency is the time taken by the desired sector of the disk to
rotate into a position so that it can be accessed by the read/write heads. So the disk scheduling
algorithm that gives minimum rotational latency is better.
 Transfer Time: Transfer time is the time to transfer the data. It depends on the rotating
speed of the disk and number of bytes to be transferred.
 Disk Access Time: Disk Access Time is

Disk Access Time = Seek Time + Rotational Latency + Transfer Time

 Disk Response Time: Response time is the average time spent by a request waiting to
perform its I/O operation. Average response time is the mean response time of all
requests. Variance of response time is a measure of how individual requests are serviced with
respect to the average response time. So the disk scheduling algorithm that gives minimum
variance of response time is better.

Disk Scheduling Algorithms

1. FCFS: FCFS is the simplest of all the Disk Scheduling Algorithms. In FCFS, the requests
are addressed in the order they arrive in the disk queue.
Advantages:
 Every request gets a fair chance
 No indefinite postponement
Disadvantages:
 Does not try to optimize seek time
 May not provide the best possible service

2. SSTF: In SSTF (Shortest Seek Time First), requests having shortest seek time are executed
first. So, the seek time of every request is calculated in advance in the queue and then they
are scheduled according to their calculated seek time. As a result, the request near the disk
arm will get executed first. SSTF is certainly an improvement over FCFS as it decreases
the average response time and increases the throughput of system.
Advantages:
 Average Response Time decreases
 Throughput increases
Disadvantages:
 Overhead to calculate seek time in advance
 Can cause Starvation for a request if it has higher seek time as compared to incoming
requests
 High variance of response time as SSTF favours only some requests

3. SCAN: In SCAN algorithm the disk arm moves into a particular direction and services the
requests coming in its path and after reaching the end of disk, it reverses its direction and
again services the request arriving in its path. So, this algorithm works as an elevator and
hence also known as elevator algorithm. As a result, the requests at the midrange are
serviced more and those arriving behind the disk arm will have to wait.
Advantages:
 High throughput
 Low variance of response time
 Average response time
Disadvantages:
 Long waiting time for requests for locations just visited by disk arm

4. CSCAN: In the SCAN algorithm, the disk arm again scans the path that has already been scanned
after reversing its direction. So, it may be possible that too many requests are waiting at the
other end, or there may be zero or few requests pending at the area just scanned. These
situations are avoided in the CSCAN algorithm, in which the disk arm, instead of reversing its
direction, goes to the other end of the disk and starts servicing the requests from there. So,
the disk arm moves in a circular fashion; this algorithm is otherwise similar to the SCAN
algorithm and hence is known as C-SCAN (Circular SCAN).

Advantages:
 Provides more uniform wait time compared to SCAN

5. LOOK: It is similar to the SCAN disk scheduling algorithm except that the disk arm,
instead of going to the end of the disk, goes only up to the last request to be
serviced in front of the head and then reverses its direction from there. Thus it
prevents the extra delay caused by unnecessary traversal to the end of the disk.

6. CLOOK: Just as LOOK is similar to the SCAN algorithm, CLOOK is similar to the
CSCAN disk scheduling algorithm. In CLOOK, the disk arm, instead of going to the end of the disk,
goes only up to the last request to be serviced in front of the head and then jumps from there to
the other end’s last request. Thus, it also prevents the extra delay caused by
unnecessary traversal to the end of the disk.

FCFS Disk Scheduling Algorithms

Given an array of disk track numbers and initial head position, our task is to find the total
number of seek operations done to access all the requested tracks if First Come First Serve
(FCFS) disk scheduling algorithm is used.

First Come First Serve (FCFS)


FCFS is the simplest disk scheduling algorithm. As the name suggests, this algorithm
entertains requests in the order they arrive in the disk queue. The algorithm looks very fair and
there is no starvation (all requests are serviced sequentially) but generally, it does not provide the
fastest service.

Algorithm:
1. Let Request array represents an array storing indexes of tracks that have been requested in
ascending order of their time of arrival. ‘head’ is the position of disk head.
2. Let us take the tracks one by one in the given (arrival) order and calculate the absolute distance of the
track from the head.
3. Increment the total seek count with this distance.
4. Currently serviced track position now becomes the new head position.
5. Go to step 2 until all tracks in the request array have been serviced.

Example:

Input:
Request sequence = {176, 79, 34, 60, 92, 11, 41, 114}
Initial head position = 50

Output:
Total number of seek operations = 510
Seek Sequence is
176
79
34
60
92
11
41
114
The following chart shows the sequence in which requested tracks are serviced using FCFS.

Therefore, the total seek count is calculated as:


= (176-50)+(176-79)+(79-34)+(60-34)+(92-60)+(92-11)+(41-11)+(114-41)
= 510
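
A minimal C sketch of this FCFS calculation is given below (the request array and the head position of 50 come from the example above; the helper name fcfs_seek_count is an illustrative choice, not a standard routine):

#include <stdio.h>
#include <stdlib.h>

/* FCFS: service the requests strictly in arrival order and add up the
 * head movement for each request. */
int fcfs_seek_count(const int requests[], int n, int head)
{
    int total = 0;
    for (int i = 0; i < n; i++) {
        total += abs(requests[i] - head);  /* distance from current head */
        head = requests[i];                /* head moves to the serviced track */
    }
    return total;
}

int main(void)
{
    int requests[] = {176, 79, 34, 60, 92, 11, 41, 114};
    printf("Total seek operations = %d\n", fcfs_seek_count(requests, 8, 50));
    /* prints 510 for the example above */
    return 0;
}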

SSTF disk scheduling algorithm

Given an array of disk track numbers and initial head position, our task is to find the total
number of seek operations done to access all the requested tracks if the Shortest Seek Time First
(SSTF) disk scheduling algorithm is used.

Shortest Seek Time First (SSTF)


The basic idea is that the tracks closer to the current disk head position should be serviced
first in order to minimise the seek operations.

Algorithm
1. Let Request array represents an array storing indexes of tracks that have been requested.
‘head’ is the position of disk head.
2. Find the positive distance of all tracks in the request array from head.
3. Find a track from requested array which has not been accessed/serviced yet and has
minimum distance from head.
4. Increment the total seek count with this distance.
5. Currently serviced track position now becomes the new head position.
6. Go to step 2 until all tracks in the request array have been serviced.

Example
Request sequence = {176, 79, 34, 60, 92, 11, 41, 114}
Initial head position = 50
The following chart shows the sequence in which requested tracks are serviced using SSTF.
Therefore, the total seek count is calculated as:
= (50-41)+(41-34)+(34-11)+(60-11)+(79-60)+(92-79)+(114-92)+(176-114)
= 204
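
A minimal C sketch of SSTF for the same example follows (the function name sstf_seek_count and the fixed-size serviced[] array are illustrative choices, assuming at most 64 pending requests):

#include <stdio.h>
#include <stdlib.h>

/* SSTF: repeatedly pick the unserviced track closest to the current head. */
int sstf_seek_count(const int requests[], int n, int head)
{
    int serviced[64] = {0};   /* sketch assumes n <= 64 */
    int total = 0;

    for (int count = 0; count < n; count++) {
        int best = -1, best_dist = 0;
        for (int i = 0; i < n; i++) {
            if (serviced[i])
                continue;
            int dist = abs(requests[i] - head);
            if (best == -1 || dist < best_dist) {
                best = i;
                best_dist = dist;
            }
        }
        serviced[best] = 1;       /* mark the nearest pending track as done */
        total += best_dist;
        head = requests[best];    /* head moves to that track */
    }
    return total;
}

int main(void)
{
    int requests[] = {176, 79, 34, 60, 92, 11, 41, 114};
    printf("Total seek operations = %d\n", sstf_seek_count(requests, 8, 50));
    /* prints 204 for the example above */
    return 0;
}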

SCAN (Elevator) Disk Scheduling Algorithms

Given an array of disk track numbers and initial head position, our task is to find the total
number of seek operations done to access all the requested tracks if SCAN disk scheduling
algorithm is used.
SCAN (Elevator) algorithm
In the SCAN disk scheduling algorithm, the head starts from one end of the disk and moves
towards the other end, servicing the requests in between one by one until it reaches the other end. Then
the direction of the head is reversed and the process continues, as the head continuously scans back
and forth to access the disk. So, this algorithm works like an elevator and hence is also known as
the elevator algorithm. As a result, the requests at the midrange are serviced more and those
arriving behind the disk arm will have to wait.

Algorithm-
1. Let Request array represents an array storing indexes of tracks that have been requested in
ascending order of their time of arrival. ‘head’ is the position of disk head.
2. Let direction represents whether the head is moving towards left or right.
3. In the direction in which head is moving service all tracks one by one.
4. Calculate the absolute distance of the track from the head.
5. Increment the total seek count with this distance.
6. Currently serviced track position now becomes the new head position.
7. Go to step 3 until we reach one of the ends of the disk.
8. If we reach the end of the disk, reverse the direction and go to step 2 until all tracks in
the request array have been serviced.
Example:

Input:
Request sequence = {176, 79, 34, 60, 92, 11, 41, 114}
Initial head position = 50
Direction = left (We are moving from right to left)

Output:
Total number of seek operations = 226
Seek Sequence is
41
34
11
0
60
79
92
114
176

The following chart shows the sequence in which requested tracks are serviced using SCAN.

Therefore, the total seek count is calculated as:

= (50-41)+(41-34)+(34-11)
+(11-0)+(60-0)+(79-60)
+(92-79)+(114-92)+(176-114)
= 226
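
The C sketch below mirrors these steps for the case in the example, where the head starts at 50 and first moves towards track 0 (the helper names, the sorting approach and the fixed-size buffer are assumptions made only for illustration):

#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    return (*(const int *)a - *(const int *)b);
}

/* SCAN with the head initially moving towards track 0: service the requests
 * below the head in decreasing order, continue to track 0, then reverse and
 * service the remaining requests in increasing order. */
int scan_seek_count(const int requests[], int n, int head)
{
    int sorted[64];               /* sketch assumes n <= 64 */
    for (int i = 0; i < n; i++)
        sorted[i] = requests[i];
    qsort(sorted, n, sizeof(int), cmp_int);

    int total = 0, pos = head;

    /* Left sweep: nearest lower tracks first, down to the left end. */
    for (int i = n - 1; i >= 0; i--) {
        if (sorted[i] < head) {
            total += pos - sorted[i];
            pos = sorted[i];
        }
    }
    total += pos;                 /* travel on to track 0 */
    pos = 0;

    /* Right sweep: remaining tracks in increasing order. */
    for (int i = 0; i < n; i++) {
        if (sorted[i] >= head) {
            total += sorted[i] - pos;
            pos = sorted[i];
        }
    }
    return total;
}

int main(void)
{
    int requests[] = {176, 79, 34, 60, 92, 11, 41, 114};
    printf("Total seek operations = %d\n", scan_seek_count(requests, 8, 50));
    /* prints 226 for the example (head at 50, moving towards 0) */
    return 0;
}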
C-SCAN Disk Scheduling Algorithm

Given an array of disk track numbers and initial head position, our task is to find the total
number of seek operations done to access all the requested tracks if C-SCAN disk scheduling
algorithm is used.

Circular SCAN (C-SCAN) scheduling algorithm is a modified version of the SCAN disk
scheduling algorithm that deals with the inefficiency of the SCAN algorithm by servicing the
requests more uniformly. Like SCAN (Elevator Algorithm), C-SCAN moves the head from one
end to the other, servicing all the requests on the way. However, as soon as the head reaches the other
end, it immediately returns to the beginning of the disk without servicing any requests on the
return trip (see chart below) and starts servicing again once it reaches the beginning. This is also
known as the “Circular Elevator Algorithm” as it essentially treats the cylinders as a circular list
that wraps around from the final cylinder to the first one.

Algorithm:
1. Let Request array represents an array storing indexes of tracks that have been requested in
ascending order of their time of arrival. ‘head’ is the position of disk head.
2. The head services requests only in the right direction, from 0 to the size of the disk.
3. While moving in the left direction, do not service any of the tracks.
4. When we reach the beginning (left end), reverse the direction.
5. While moving in the right direction, service all tracks one by one.
6. While moving in the right direction, calculate the absolute distance of the track from the head.
7. Increment the total seek count with this distance.
8. Currently serviced track position now becomes the new head position.
9. Go to step 6 until we reach the right end of the disk.
10. If we reach the right end of the disk, reverse the direction and go to step 3 until all tracks
in the request array have been serviced.

Examples:

Input:
Request sequence = {176, 79, 34, 60, 92, 11, 41, 114}
Initial head position = 50

Output:
Initial position of head: 50
Total number of seek operations = 389
Seek Sequence is
60
79
92
114
176
199
0
11
34
41
The following chart shows the sequence in which requested tracks are serviced using C-SCAN.

Therefore, the total seek count is calculated as:


= (60-50)+(79-60)+(92-79)
+(114-92)+(176-114)+(199-176)+(199-0)
+(11-0)+(34-11)+(41-34)
= 389
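
A C sketch of this C-SCAN computation follows; it assumes a 200-track disk (tracks 0 to 199) and counts the return jump from track 199 back to track 0, matching the calculation above (helper names and the fixed-size buffer are illustrative only):

#include <stdio.h>
#include <stdlib.h>

#define DISK_END 199   /* last track; assumes a 200-track disk as in the example */

static int cmp_int(const void *a, const void *b)
{
    return (*(const int *)a - *(const int *)b);
}

/* C-SCAN: sweep right servicing requests, travel to the last track, jump back
 * to track 0 (counted here), then sweep right again over the remaining ones. */
int cscan_seek_count(const int requests[], int n, int head)
{
    int sorted[64];               /* sketch assumes n <= 64 */
    for (int i = 0; i < n; i++)
        sorted[i] = requests[i];
    qsort(sorted, n, sizeof(int), cmp_int);

    int total = 0, pos = head;

    /* First right sweep: requests at or beyond the head. */
    for (int i = 0; i < n; i++) {
        if (sorted[i] >= head) {
            total += sorted[i] - pos;
            pos = sorted[i];
        }
    }
    total += DISK_END - pos;   /* continue to the right end of the disk */
    total += DISK_END;         /* return jump from track 199 to track 0 */
    pos = 0;

    /* Second right sweep: the requests that were left behind. */
    for (int i = 0; i < n; i++) {
        if (sorted[i] < head) {
            total += sorted[i] - pos;
            pos = sorted[i];
        }
    }
    return total;
}

int main(void)
{
    int requests[] = {176, 79, 34, 60, 92, 11, 41, 114};
    printf("Total seek operations = %d\n", cscan_seek_count(requests, 8, 50));
    /* prints 389; subtract the 199-track return jump if your convention
     * does not count the non-servicing return trip */
    return 0;
}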

LOOK Disk Scheduling Algorithm

Given an array of disk track numbers and initial head position, our task is to find the total
number of seek operations done to access all the requested tracks if LOOK disk scheduling
algorithm is used. Also, write a program to find the seek sequence using LOOK disk scheduling
algorithm.

LOOK Disk Scheduling Algorithm:


LOOK is an advanced version of the SCAN (elevator) disk scheduling algorithm which
gives a slightly better seek time than the other algorithms in the hierarchy (FCFS->SSTF->SCAN-
>C-SCAN->LOOK). The LOOK algorithm services requests similarly to the SCAN algorithm,
but it also “looks” ahead to check whether there are more tracks that need to be serviced in the
same direction. If there are no pending requests in the moving direction, the head reverses the
direction and starts servicing requests in the opposite direction.

The main reason behind the better performance of the LOOK algorithm in comparison to
SCAN is that the head is not required to travel all the way to the end of the disk.
Algorithm:
1. Let Request array represents an array storing indexes of tracks that have been requested in
ascending order of their time of arrival. ‘head’ is the position of disk head.
2. The initial direction in which the head is moving is given and it services requests in the same direction.
3. The head services all the requests one by one in the direction it is moving.
4. The head continues to move in the same direction until all the requests in this direction
have been serviced.
5. While moving in this direction, calculate the absolute distance of the track from the head.
6. Increment the total seek count with this distance.
7. Currently serviced track position now becomes the new head position.
8. Go to step 5 until we reach the last request in this direction.
9. If there are no more requests to be serviced in this direction, reverse the
direction and go to step 3 until all tracks in the request array have been serviced.

Examples:
Input:
Request sequence = {176, 79, 34, 60, 92, 11, 41, 114}
Initial head position = 50
Direction = right (We are moving from left to right)

Output:
Initial position of head: 50
Total number of seek operations = 291
Seek Sequence is
60
79
92
114
176
41
34
11

The following chart shows the sequence in which requested tracks are serviced using LOOK.
Therefore, the total seek count is calculated as:
= (60-50)+(79-60)+(92-79)
+(114-92)+(176-114)
+(176-41)+(41-34)+(34-11)
= 291
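
A C sketch for LOOK with the head initially moving right is shown below (the helper names and the fixed-size buffer are assumptions made for this sketch):

#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    return (*(const int *)a - *(const int *)b);
}

/* LOOK with the head initially moving right: service the requests at or
 * beyond the head in increasing order, stop at the farthest one (no trip to
 * the disk edge), then reverse and service the rest in decreasing order. */
int look_seek_count(const int requests[], int n, int head)
{
    int sorted[64];               /* sketch assumes n <= 64 */
    for (int i = 0; i < n; i++)
        sorted[i] = requests[i];
    qsort(sorted, n, sizeof(int), cmp_int);

    int total = 0, pos = head;

    /* Right sweep up to the last pending request only. */
    for (int i = 0; i < n; i++) {
        if (sorted[i] >= head) {
            total += sorted[i] - pos;
            pos = sorted[i];
        }
    }

    /* Reverse: left sweep over the remaining requests, nearest first. */
    for (int i = n - 1; i >= 0; i--) {
        if (sorted[i] < head) {
            total += pos - sorted[i];
            pos = sorted[i];
        }
    }
    return total;
}

int main(void)
{
    int requests[] = {176, 79, 34, 60, 92, 11, 41, 114};
    printf("Total seek operations = %d\n", look_seek_count(requests, 8, 50));
    /* prints 291 for the example (head at 50, moving right) */
    return 0;
}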

C-LOOK Disk Scheduling Algorithm

Given an array of disk track numbers and initial head position, our task is to find the total
number of seek operations done to access all the requested tracks if C-LOOK disk scheduling
algorithm is used. Also, write a program to find the seek sequence using C-LOOK disk
scheduling algorithm.

C-LOOK (Circular LOOK) Disk Scheduling Algorithm:


C-LOOK is an enhanced version of both the SCAN and the LOOK disk scheduling
algorithms. This algorithm also uses the idea of wrapping the tracks around as a circular cylinder, like the C-
SCAN algorithm, but the seek time is better than that of C-SCAN. We know that C-SCAN is
used to avoid starvation and services all the requests more uniformly; the same goes for C-
LOOK.
In this algorithm, the head services requests only in one direction (either left or right) until
all the requests in this direction have been serviced, and then jumps back to the farthest request in
the other direction and services the remaining requests. This gives more uniform servicing
and also avoids wasting seek time on travelling to the end of the disk.
Algorithm-
1. Let Request array represents an array storing indexes of the tracks that have been requested
in ascending order of their time of arrival and head is the position of the disk head.
2. The initial direction in which the head is moving is given and it services in the same
direction.
3. The head services all the requests one by one in the direction it is moving.
4. The head continues to move in the same direction until all the requests in this direction
have been serviced.
5. While moving in this direction, calculate the absolute distance of the tracks from the head.
6. Increment the total seek count with this distance.
7. Currently serviced track position now becomes the new head position.
8. Go to step 5 until we reach the last request in this direction.
9. If we reach the last request in the current direction then reverse the direction and move the
head in this direction until we reach the last request that is needed to be serviced in this
direction without servicing the intermediate requests.
10. Reverse the direction and go to step 3 until all the requests have been serviced.

Examples:
Input:
Request sequence = {176, 79, 34, 60, 92, 11, 41, 114}
Initial head position = 50
Direction = right (Moving from left to right)
Output:
Initial position of head: 50
Total number of seek operations = 321
Seek Sequence is
60
79
92
114
176
11
34
41

The following chart shows the sequence in which requested tracks are serviced using C-LOOK.
Therefore, the total seek count = (60 – 50) + (79 – 60) + (92 – 79) + (114 – 92) + (176 – 114) +
(176 – 11) + (34 – 11) + (41 – 34) = 321.
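
A C sketch for C-LOOK with the head initially moving right follows; the jump from the farthest right request back to the lowest pending request is counted, matching the calculation above (helper names and the fixed-size buffer are illustrative only):

#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    return (*(const int *)a - *(const int *)b);
}

/* C-LOOK with the head initially moving right: service requests at or beyond
 * the head, jump to the lowest pending request (counted here), then sweep
 * right again over the remaining requests. */
int clook_seek_count(const int requests[], int n, int head)
{
    int sorted[64];               /* sketch assumes n <= 64 */
    for (int i = 0; i < n; i++)
        sorted[i] = requests[i];
    qsort(sorted, n, sizeof(int), cmp_int);

    int total = 0, pos = head;

    /* First right sweep: requests at or beyond the head. */
    for (int i = 0; i < n; i++) {
        if (sorted[i] >= head) {
            total += sorted[i] - pos;
            pos = sorted[i];
        }
    }

    /* Jump back to the lowest pending request, then second right sweep. */
    for (int i = 0; i < n; i++) {
        if (sorted[i] < head) {
            total += abs(pos - sorted[i]);
            pos = sorted[i];
        }
    }
    return total;
}

int main(void)
{
    int requests[] = {176, 79, 34, 60, 92, 11, 41, 114};
    printf("Total seek operations = %d\n", clook_seek_count(requests, 8, 50));
    /* prints 321 for the example (head at 50, moving right) */
    return 0;
}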

Follow the URLs

https://www.geeksforgeeks.org/operating-systems/

https://www.tutorialspoint.com/operating_system/index.htm
