
Introduction and Operating-Systems Structures:

Operating-System Operations,
Functions of Operating System,
Types of System Calls
Operating-System Structure
Processes:
Process Concept,
Process Scheduling
Operations on Processes
Interprocess Communication
Threads:
Multicore Programming
Multithreading Models
Unit II
Process Synchronization:
General structure of a typical process
race condition
The Critical-Section Problem
Peterson’s Solution
Synchronization Hardware
Mutex Locks
Semaphores
Classic Problems of Synchronization
Monitors
CPU Scheduling:
Basic Concepts
Scheduling Criteria
Scheduling Algorithms (FCFS, SJF, SRTF, Priority, RR, Multilevel Queue Scheduling,
Multilevel Feedback Queue Scheduling)
Thread Scheduling
Deadlocks:
System Model,
Deadlock Characterization,
Methods for Handling Deadlocks
Deadlock Prevention
Deadlock Avoidance
Deadlock Detection
Recovery from Deadlock
Unit III
Main Memory:
Address binding
Logical address space
Physical address space
MMU
Swapping,
Contiguous Memory Allocation → (fragmentation),
Segmentation, Paging,
Structure of the Page Table → (Hashed Page Tables, Inverted Page Tables)
Virtual Memory: Background,
Demand Paging,
Copy-on-Write,
Page Replacement (any one algorithm)
Mass-Storage Structure:
Disk Scheduling,
Disk Management → (Disk formatting)
File-System Interface:
File Concept → (File attributes and file operations),
Directory and Disk Structure → (two level directory structure, Acyclic graph directories),
File-System Mounting
File-System Implementation: Directory Implementation
Free-Space Management

Different types of system calls

The main types of system calls are:


Process Control
These system calls deal with processes such as process creation, process termination etc.
File Management
These system calls are responsible for file manipulation such as creating a file, reading a file,
writing into a file etc.
Device Management
These system calls are responsible for device manipulation such as reading from device
buffers, writing into device buffers etc.
Information Maintenance
These system calls handle information and its transfer between the operating system and the
user program.
Communication
These system calls are useful for interprocess communication. They also deal with creating
and deleting a communication connection.
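As a rough illustration of these categories on a POSIX system (the file name example.txt is just a placeholder; error checking is omitted for brevity):

    #include <fcntl.h>      /* open */
    #include <sys/types.h>
    #include <unistd.h>     /* write, close, getpid */

    int main(void) {
        /* File management: create a file, write into it, close it */
        int fd = open("example.txt", O_CREAT | O_WRONLY, 0644);
        write(fd, "hello\n", 6);
        close(fd);

        /* Information maintenance: ask the OS for this process's id */
        pid_t pid = getpid();
        (void)pid;
        return 0;
    }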

Operations on processes
The execution of a process is a complex activity involving various operations. The following operations are performed during the execution of a process:

1. Creation: This is the initial step of the process execution activity. Process creation means the construction of a new process for execution.
2. Scheduling/Dispatching: The event or activity in which the state of the process is changed from ready to running, i.e., the operating system moves the process from the ready state into the running state.
3. Blocking: When a process invokes an input-output system call, the operating system blocks the process, i.e., puts it into the blocked state, where the process waits for the input-output to complete.
4. Preemption: When a timeout occurs, meaning the process has not finished within the allotted time interval and the next process is ready to execute, the operating system preempts the process. This operation is only valid where CPU scheduling supports preemption.
5. Termination: Process termination is the activity of ending the process.
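A minimal POSIX sketch of creation, blocking and termination (the child program ls is just an example; error handling is omitted):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();                    /* creation of a new process */
        if (pid == 0) {
            execlp("ls", "ls", (char *)NULL);  /* child runs a new program */
            exit(1);                           /* reached only if exec fails */
        } else if (pid > 0) {
            wait(NULL);   /* parent blocks until the child terminates */
            printf("child finished\n");
        }
        return 0;
    }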
What is Inter Process Communication?

In general, Inter Process Communication (IPC) is a mechanism provided by the operating system (OS). Its main goal is to provide communication between several processes.
Role of Synchronization in Inter Process Communication
It is one of the essential parts of inter process communication. Typically, synchronization is provided by the interprocess communication control mechanisms, but sometimes it can also be handled by the communicating processes themselves.
The following methods are used to provide synchronization:
1. Mutual Exclusion
2. Semaphore
3. Barrier
4. Spinlock
Mutual Exclusion:-
It is generally required that only one process or thread can enter the critical section at a time. This helps in synchronization and creates a stable state that avoids race conditions.
Semaphore:-
A semaphore is a type of variable that controls access to a shared resource by several processes.
Barrier:-
A barrier does not allow an individual process to proceed until all participating processes have reached it.
Spinlock:-
A spinlock is a type of lock, as its name implies. A process trying to acquire a spinlock waits (spins) in a loop, repeatedly checking whether the lock is available.
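A minimal spinlock sketch using C11 atomics (illustrative only; real kernels add back-off and fairness):

    #include <stdatomic.h>

    atomic_flag lock = ATOMIC_FLAG_INIT;

    void spin_lock(void) {
        while (atomic_flag_test_and_set(&lock))
            ;   /* stay in a loop until the lock was previously free */
    }

    void spin_unlock(void) {
        atomic_flag_clear(&lock);
    }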

Shared Memory:-
Inter process communication through shared memory is a concept where two or more processes can access a common region of memory. Communication is done via this shared memory, where changes made by one process can be seen by another process.
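A minimal POSIX shared-memory sketch (the name /demo_shm is a placeholder; a second process that maps the same name sees the same bytes; error checks omitted, and older systems may need linking with -lrt):

    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        /* create a named shared-memory object and give it a size */
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0644);
        ftruncate(fd, 4096);

        /* map it into this process's address space and write to it */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        strcpy(p, "hello from process A");

        munmap(p, 4096);
        close(fd);
        return 0;
    }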
Message Passing:-
It is a mechanism that allows processes to synchronize and communicate with each other. Using message passing, processes can communicate without resorting to shared variables.
Usually, the inter-process communication mechanism provides two operations:
o send (message)
o receive (message)
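A minimal sketch of send(message)/receive(message) using a POSIX pipe between a parent and child (error handling omitted):

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];
        pipe(fd);              /* fd[0] is the read end, fd[1] the write end */

        if (fork() == 0) {     /* child: the receiver */
            char buf[16];
            ssize_t n = read(fd[0], buf, sizeof buf - 1);  /* receive(message) */
            buf[n] = '\0';
            printf("received: %s\n", buf);
        } else {               /* parent: the sender */
            write(fd[1], "ping", 4);                       /* send(message) */
        }
        return 0;
    }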
Multi Threading Models

Multithreading is the ability of a process to execute multiple threads at the same time.
Many operating systems support kernel threads and user threads in a combined way; an example of such a system is Solaris. Multithreading models are of three types:

Many to many model.
Many to one model.
One to one model.
Many to Many Model

In this model, multiple user threads are multiplexed onto the same or a smaller number of kernel-level threads. The number of kernel-level threads is specific to the machine. The advantage of this model is that if a user thread is blocked, other user threads can be scheduled onto other kernel threads; thus, the system does not block when a particular thread blocks.
It is the best multithreading model.

Many to One Model

In this model, multiple user threads are mapped to one kernel thread, so when a user thread makes a blocking system call, the entire process blocks. Because there is only one kernel thread and only one user thread can access the kernel at a time, multiple threads cannot run on a multiprocessor at the same time.
Thread management is done at the user level, so it is more efficient.

One to One Model

In this model, there is a one-to-one relationship between kernel threads and user threads, so multiple threads can run on multiple processors. The problem with this model is that creating a user thread requires creating the corresponding kernel thread.
As each user thread is connected to a different kernel thread, if any user thread makes a blocking system call, the other user threads are not blocked.
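On Linux, POSIX threads follow the one-to-one model: each pthread_create() produces a kernel-schedulable thread. A minimal sketch (compile with -pthread):

    #include <pthread.h>
    #include <stdio.h>

    void *worker(void *arg) {
        printf("thread %ld running\n", (long)arg);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        /* each call creates a matching kernel thread (one-to-one) */
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_create(&t2, NULL, worker, (void *)2L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }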
General structure of a typical process
There are basically four main sections through which each process has to pass. The universal structure is as follows.
Critical section:-
Consider a system consisting of n processes P0 to Pn-1. Every process has a segment of code, called the critical section, in which the process may be changing common variables, updating a table, writing a file, and so on. The critical section is therefore the area of code in which processes try to access shared information, and it is where a race condition can occur.
Entry section:-
The code just before the critical section is termed the entry section. Every process must request permission to enter its critical section; the area of code that executes this request is called the entry section.
Exit section:-
The code segment just after the critical section is termed the exit section.
Remainder section:-
The code remaining after the exit section is the remainder section.
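In outline, each process repeats the four sections like this:

    do {
        /* entry section: request permission to enter */

        /* critical section: access the shared data */

        /* exit section: announce that we are leaving */

        /* remainder section: all the remaining code */
    } while (1);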
The Critical Section Problem
The critical section is the part of a program which tries to access shared resources. The resource may be any resource in a computer, such as a memory location, data structure, CPU or I/O device.
Because the critical section must not be executed by more than one process at the same time, the operating system faces difficulty in allowing and disallowing processes to enter it.
The critical-section problem is to design a set of protocols which ensure that a race condition among the processes never arises.
In order to synchronize the cooperative processes, our main task is to solve the critical section
problem.
Peterson’s solution
Peterson’s algorithm is used to synchronize two processes. It uses two variables, a boolean array flag of size 2 and an int variable turn, to accomplish this.
In the solution, i represents the consumer and j represents the producer. Initially the flags are false. When a process wants to execute its critical section, it sets its flag to true and sets turn to the index of the other process. This means that the process wants to execute, but it will allow the other process to run first. The process busy-waits until the other process has finished its own critical section.
After this, the current process enters its critical section and adds or removes a random number from the shared buffer. After completing the critical section, it sets its own flag to false, indicating that it does not wish to execute anymore.
The program runs for a fixed amount of time before exiting. This time can be changed by changing the value of the macro RT.
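A sketch of the two entry/exit routines (i is 0 or 1; volatile hints that the variables are shared, though on modern hardware real code would also need memory barriers):

    /* shared between the two processes */
    volatile int flag[2] = {0, 0};
    volatile int turn;

    void enter_critical(int i) {
        int j = 1 - i;
        flag[i] = 1;       /* I want to enter my critical section */
        turn = j;          /* but let the other process go first if it wants */
        while (flag[j] && turn == j)
            ;              /* busy wait */
    }

    void exit_critical(int i) {
        flag[i] = 0;       /* I no longer wish to execute */
    }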
Hardware Synchronization Algorithms

Process synchronization problems occur when two processes running concurrently share the same data or the same variable. The value of that variable may not be updated correctly before it is used by a second process. Such a condition is known as a race condition.
There are software as well as hardware solutions to this problem.
There are three algorithms in the hardware approach of solving Process Synchronization
problem:
1. Test and Set
2. Swap
3. Unlock and Lock

1. Test and Set:
Here, the shared variable is lock, which is initialized to false. TestAndSet(lock) works in this way: it atomically returns the current value of lock and sets lock to true.
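In C-like pseudocode the hardware instruction behaves like this (the whole function executes atomically):

    int test_and_set(int *lock) {
        int old = *lock;   /* remember the current value of lock ... */
        *lock = 1;         /* ... and set lock to true */
        return old;        /* return the value lock had before */
    }

    /* usage, with a shared int lock initialized to 0 (false): */
    while (test_and_set(&lock))
        ;                  /* spin until the returned old value is false */
    /* critical section */
    lock = 0;              /* release */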

2. Swap:
The Swap algorithm is a lot like the TestAndSet algorithm. Instead of directly setting lock to true in the swap function, key is set to true and then swapped with lock.
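A corresponding sketch (swap itself executes atomically in hardware):

    void swap(int *a, int *b) {
        int tmp = *a;
        *a = *b;
        *b = tmp;
    }

    /* entry section: lock is shared (initially 0), key is local */
    int key = 1;
    while (key == 1)
        swap(&lock, &key);  /* loop until we swap out a free (0) lock */
    /* critical section */
    lock = 0;               /* release */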
3. Unlock and Lock:
The Unlock and Lock algorithm uses TestAndSet to regulate the value of lock, but it adds another variable, waiting[i], for each process, which records whether that process is waiting. A ready queue is maintained for the processes waiting on the critical section.

Mutex Locks
1. A mutex is binary in nature.
2. Operations like lock and release are possible.
3. A mutex is for threads, while semaphores are for processes.
4. A mutex works in user space, while a semaphore works in kernel space.
5. A mutex provides a locking mechanism.
6. A thread may acquire more than one mutex.
7. A binary semaphore and a mutex are different.
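A minimal pthread mutex sketch (lock, touch shared data, release):

    #include <pthread.h>

    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    int shared_counter = 0;

    void *worker(void *arg) {
        pthread_mutex_lock(&m);     /* acquire the lock */
        shared_counter++;           /* critical section */
        pthread_mutex_unlock(&m);   /* release the lock */
        return NULL;
    }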
Semaphores in OS
While a mutex is a lock (wait) and release mechanism, semaphores are signalling mechanisms that signal to processes the state of the critical section and grant access to it accordingly.
Semaphores use the following methods to control access to critical-section code:
1. Wait
2. Signal
Wait and Signal are the two methods associated with semaphores. In some texts they are written as wait(s) and signal(s); in others they are written as p(s) for wait and v(s) for signal.
Wait, p(s) or wait(s)
1. Wait decrements the value of the semaphore by 1.
Signal, v(s) or signal(s)
1. Signal increments the value of the semaphore by 1.
Semaphore
1. A semaphore can only have non-negative values.
2. Before the start of the program, it is always initialised to:
• n in a counting semaphore (where n is the number of processes allowed to enter the critical section simultaneously)
• 1 in the case of a binary semaphore
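Conceptually, wait and signal behave as below; a real semaphore executes them atomically and usually blocks the caller instead of spinning:

    void wait(int *S) {      /* p(s) */
        while (*S <= 0)
            ;                /* wait until the semaphore is positive */
        (*S)--;              /* then decrement it by 1 */
    }

    void signal(int *S) {    /* v(s) */
        (*S)++;              /* increment the semaphore by 1 */
    }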
Monitors

The monitor is one of the ways to achieve process synchronization. Monitors are supported by programming languages to achieve mutual exclusion between processes; for example, Java synchronized methods. Java provides the wait() and notify() constructs.
1. A monitor is a collection of condition variables and procedures combined together in a special kind of module or package.
2. Processes running outside the monitor cannot access its internal variables, but they can call its procedures.
3. Only one process at a time can execute code inside a monitor.
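C has no built-in monitors, but the same discipline can be sketched with a pthread mutex (only one thread inside at a time) plus a condition variable (playing the role of Java's wait()/notify()):

    #include <pthread.h>

    pthread_mutex_t mon  = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t ready = PTHREAD_COND_INITIALIZER;
    int item_available = 0;

    void consume(void) {
        pthread_mutex_lock(&mon);             /* enter the monitor */
        while (!item_available)
            pthread_cond_wait(&ready, &mon);  /* like Java's wait() */
        item_available = 0;                   /* use the shared state */
        pthread_mutex_unlock(&mon);           /* leave the monitor */
    }

    void produce(void) {
        pthread_mutex_lock(&mon);
        item_available = 1;
        pthread_cond_signal(&ready);          /* like Java's notify() */
        pthread_mutex_unlock(&mon);
    }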

Scheduling algorithms

First Come First Serve (FCFS)


• Jobs are executed on a first come, first served basis.
• It is a non-preemptive scheduling algorithm.
• Easy to understand and implement.
• Its implementation is based on a FIFO queue.
• Poor in performance, as the average wait time is high.
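A small worked example: with hypothetical burst times 24, 3 and 3 arriving in that order, FCFS gives waiting times 0, 24 and 27, so the average is 17:

    #include <stdio.h>

    int main(void) {
        int burst[] = {24, 3, 3};      /* hypothetical CPU bursts */
        int n = 3, wait = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {
            total_wait += wait;        /* each job waits for all earlier ones */
            wait += burst[i];
        }
        printf("average waiting time = %.2f\n", (double)total_wait / n);
        /* prints 17.00: (0 + 24 + 27) / 3 */
        return 0;
    }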
Shortest Job Next (SJN)
• This is also known as shortest job first, or SJF.
• This is a non-preemptive scheduling algorithm; its preemptive version is Shortest Remaining Time (below).
• Best approach to minimize waiting time.
• Easy to implement in batch systems where the required CPU time is known in advance.
• Impossible to implement in interactive systems where the required CPU time is not known.
• The processor should know in advance how much time the process will take.
Priority Based Scheduling
• Priority scheduling is a non-preemptive algorithm and one of the most common scheduling algorithms in batch systems.
• Each process is assigned a priority. The process with the highest priority is executed first, and so on.
• Processes with the same priority are executed on a first come first served basis.
• Priority can be decided based on memory requirements, time requirements or any other resource requirement.

Shortest Remaining Time


• Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
• The processor is allocated to the job closest to completion, but it can be preempted by a newly ready job with a shorter time to completion.
• Impossible to implement in interactive systems where the required CPU time is not known.
• It is often used in batch environments where short jobs need to be given preference.
Round Robin Scheduling
• Round Robin is a preemptive process scheduling algorithm.
• Each process is provided a fixed time to execute, called a quantum.
• Once a process has executed for its quantum, it is preempted and another process executes for its quantum.
• Context switching is used to save the states of preempted processes.
Multiple-Level Queues Scheduling
Multiple-level queues are not an independent scheduling algorithm; they make use of other existing algorithms to group and schedule jobs with common characteristics.
• Multiple queues are maintained for processes with common characteristics.
• Each queue can have its own scheduling algorithms.
• Priorities are assigned to each queue.

Methods of handling deadlocks

There are three approaches to deal with deadlocks:
1. Deadlock Prevention
2. Deadlock Avoidance
3. Deadlock Detection
These are explained below.
1. Deadlock Prevention: The strategy of deadlock prevention is to design the system in such a way that the possibility of deadlock is excluded. Indirect methods prevent the occurrence of one of the three necessary conditions of deadlock: mutual exclusion, no pre-emption, or hold and wait. Direct methods prevent the occurrence of circular wait. Prevention techniques:
Mutual exclusion – is supported by the OS.
Hold and wait – this condition can be prevented by requiring that a process requests all its required resources at one time, blocking the process until all of its requests can be granted simultaneously. But this prevention does not yield good results because:
• long waiting times are required
• allocated resources are used inefficiently
• a process may not know all its required resources in advance
No pre-emption – techniques for no pre-emption are:
• If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources currently being held are released and, if necessary, requested again together with the additional resource.
• If a process requests a resource that is currently held by another process, the OS may pre-empt the second process and require it to release its resources. This works only if the two processes do not have the same priority.
Circular wait – one way to ensure that this condition never holds is to impose a total ordering of all resource types and to require that each process requests resources in increasing order of enumeration; i.e., if a process has been allocated resources of type R, it may subsequently request only resources of types following R in the ordering.
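For example, circular wait can be prevented in code by always acquiring locks in one fixed order (the resource names here are illustrative):

    #include <pthread.h>

    pthread_mutex_t R1 = PTHREAD_MUTEX_INITIALIZER;  /* ordering: R1 before R2 */
    pthread_mutex_t R2 = PTHREAD_MUTEX_INITIALIZER;

    /* every thread takes the locks in increasing order, so no cycle can form */
    void do_work(void) {
        pthread_mutex_lock(&R1);
        pthread_mutex_lock(&R2);
        /* ... use both resources ... */
        pthread_mutex_unlock(&R2);
        pthread_mutex_unlock(&R1);
    }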
2. Deadlock Avoidance: This approach allows the three necessary conditions of deadlock but makes judicious choices to ensure that the deadlock point is never reached. It allows more concurrency than prevention. A decision is made dynamically whether the current resource allocation request will, if granted, potentially lead to deadlock. It requires knowledge of future process requests. There are two techniques to avoid deadlock:
1. Process initiation denial
2. Resource allocation denial
Advantages of deadlock avoidance techniques:
• Not necessary to pre-empt and roll back processes
• Less restrictive than deadlock prevention
Disadvantages:
• Future resource requirements must be known in advance
• Processes can be blocked for long periods
• A fixed number of resources must exist for allocation
3. Deadlock Detection: Deadlock detection employs an algorithm that tracks circular waiting and kills one or more processes so that the deadlock is removed. The system state is examined periodically to determine if a set of processes is deadlocked. A deadlock is resolved by aborting and restarting a process, relinquishing all the resources that the process held.
• This technique does not limit resources access or restrict process action.
• Requested resources are granted to processes whenever possible.
• It never delays the process initiation and facilitates online handling.
• The disadvantage is the inherent pre-emption losses.
What is Main Memory:
The main memory is central to the operation of a modern computer. Main Memory is a
large array of words or bytes, ranging in size from hundreds of thousands to billions. Main
memory is a repository of rapidly available information shared by the CPU and I/O
devices. Main memory is the place where programs and information are kept when the
processor is effectively utilizing them. Main memory is associated with the processor, so
moving instructions and information into and out of the processor is extremely fast. Main
memory is also known as RAM (Random Access Memory).

Logical and Physical Address Space:


Logical Address space: An address generated by the CPU is known as “Logical Address”. It
is also known as a Virtual address. Logical address space can be defined as the size of the
process. A logical address can be changed.
Physical address space: An address seen by the memory unit (i.e., the one loaded into the memory address register of the memory) is commonly known as a “Physical Address”. A physical address is also known as a real address. The set of all physical addresses corresponding to the logical addresses is known as the physical address space. A physical address is computed by the MMU; the run-time mapping from virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU). The physical address always remains constant.
Swapping :
When a process is executed, it must reside in memory. Swapping is the act of temporarily moving a process out of main memory into secondary memory; main memory is fast as compared to secondary memory. Swapping allows more processes to be run than can fit into memory at one time. The main cost of swapping is the transfer time, and the total time is directly proportional to the amount of memory swapped. Swapping is also known as roll-out, roll-in: if a higher-priority process arrives and wants service, the memory manager can swap out a lower-priority process and then load and execute the higher-priority process. After the higher-priority work finishes, the lower-priority process is swapped back into memory and continues its execution.

Contiguous Memory Allocation :


The main memory should oblige both the operating system and the different client
processes. Therefore, the allocation of memory becomes an important task in the
operating system. The memory is usually divided into two partitions: one for the resident
operating system and one for the user processes. We normally need several user processes
to reside in memory simultaneously. Therefore, we need to consider how to allocate
available memory to the processes that are in the input queue waiting to be brought into
memory. In contiguous memory allocation, each process is contained in a single contiguous segment of memory.
Fragmentation:
Fragmentation arises when processes are loaded into and removed from memory after execution, creating small free holes. These holes cannot be assigned to new processes because the holes are not combined or do not fulfill the memory requirement of a process. To achieve a degree of multiprogramming, we must reduce this waste of memory, the fragmentation problem. Operating systems exhibit two types of fragmentation:
Internal fragmentation:
Internal fragmentation occurs when a memory block allocated to a process is larger than the requested size. The leftover unused space creates the internal fragmentation problem.
External fragmentation:
In external fragmentation, free memory blocks exist, but we cannot assign them to a process because the blocks are not contiguous.
Paging:
Paging is a memory management scheme that eliminates the need for contiguous allocation
of physical memory. This scheme permits the physical address space of a process to be non-
contiguous.
• Logical Address or Virtual Address (represented in bits): An address generated
by the CPU
• Logical Address Space or Virtual Address Space (represented in words or bytes):
The set of all logical addresses generated by a program
• Physical Address (represented in bits): An address actually available on a
memory unit
• Physical Address Space (represented in words or bytes): The set of all physical
addresses corresponding to the logical addresses
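For instance, assuming a 4 KB page size, a logical address splits into a page number (the high bits) and an offset (the low 12 bits):

    #include <stdio.h>

    int main(void) {
        unsigned long logical = 0x12345;          /* an example logical address */
        unsigned long offset  = logical & 0xFFF;  /* logical mod 4096 */
        unsigned long page    = logical >> 12;    /* logical / 4096 */
        printf("page %lu, offset %lu\n", page, offset);  /* page 18, offset 837 */
        return 0;
    }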
Page Replacement Algorithms:
1. First In First Out (FIFO): This is the simplest page replacement algorithm. In this algorithm, the operating system keeps all pages in memory in a queue, with the oldest page at the front. When a page needs to be replaced, the page at the front of the queue is selected for removal.
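A small FIFO simulation over a sample reference string with 3 frames (it reports 7 faults for this input):

    #include <stdio.h>

    int main(void) {
        int ref[] = {7, 0, 1, 2, 0, 3, 0, 4};   /* sample reference string */
        int n = 8, nframes = 3;
        int frames[3] = {-1, -1, -1};
        int next = 0, faults = 0;               /* next indexes the oldest frame */

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < nframes; j++)
                if (frames[j] == ref[i]) { hit = 1; break; }
            if (!hit) {                         /* page fault */
                frames[next] = ref[i];          /* evict the oldest page */
                next = (next + 1) % nframes;
                faults++;
            }
        }
        printf("page faults: %d\n", faults);    /* 7 for this string */
        return 0;
    }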

Disk scheduling

Disk scheduling is done by operating systems to schedule I/O requests arriving for the
disk. Disk scheduling is also known as I/O scheduling.
Disk scheduling is important because:

• Multiple I/O requests may arrive from different processes, and only one I/O request can be served at a time by the disk controller. Thus other I/O requests need to wait in the waiting queue and need to be scheduled.
• Two or more requests may be far from each other, which can result in greater disk arm movement.
• Hard drives are one of the slowest parts of the computer system and thus need
to be accessed in an efficient manner.
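As a simple measure of why scheduling matters, here is the total head movement for first-come-first-served service of the classic request queue 98, 183, 37, 122, 14, 124, 65, 67 with the head starting at cylinder 53 (the total is 640 cylinders; smarter orderings such as SSTF or SCAN reduce this):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int queue[] = {98, 183, 37, 122, 14, 124, 65, 67};
        int n = 8, head = 53, movement = 0;

        /* FCFS: serve the requests in arrival order, summing the seeks */
        for (int i = 0; i < n; i++) {
            movement += abs(queue[i] - head);
            head = queue[i];
        }
        printf("total head movement: %d cylinders\n", movement);  /* 640 */
        return 0;
    }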

Structures of Directory in Operating System

• Two-level directory –
As we have seen, a single-level directory often leads to confusion of file names among different users. The solution to this problem is to create a separate directory for each user.
In the two-level directory structure, each user has their own user file directory (UFD). The UFDs have similar structures, but each lists only the files of a single user. The system’s master file directory (MFD) is searched whenever a user logs in. The MFD is indexed by user name or account number, and each entry points to the UFD for that user.
• Acyclic graph directory –
An acyclic graph is a graph with no cycles; it allows us to share subdirectories and files. The same file or subdirectory may appear in two different directories. It is a natural generalization of the tree-structured directory.
It is used in situations where, for example, two programmers are working on a joint project and need to access each other’s files. The associated files are stored in a subdirectory, separating them from other projects and from the files of other programmers. Since they are working on a joint project, they want the shared subdirectory to appear in both of their own directories; the common subdirectory should be shared. So here we use acyclic directories.
Note that a shared file is not the same as a copy of the file: if either programmer makes a change in the shared subdirectory, the change is visible in both directories.

File system Directory Implementation:


1. Linear List –
A linear list of file names with pointers to the data blocks is maintained. It is time-consuming, however. To create a new file, we must first search the directory to be sure that no existing file has the same name, and then add the file at the end of the directory. To delete a file, we search the directory for the named file and release its space. To reuse the directory entry, we can either mark the entry as unused or attach it to a list of free directory entries.
2. Hash Table –
The hash table takes a value computed from the file name and returns a pointer to the file. It decreases the directory search time, and insertion and deletion of files are easy. The major difficulty is that hash tables generally have a fixed size and the hash function depends on that size.
