Os Notes
Operating-System Operations,
Functions of Operating System,
Types of System Calls
Operating-System Structure Processes:
Process Concept,
Process Scheduling
Operations on Processes
Interprocess Communication
Threads:
Multicore Programming
Multithreading Models
II
Process Synchronization:
General structure of a typical process
race condition
The Critical-Section Problem
Peterson’s Solution
Synchronization Hardware
Mutex Locks
Semaphores
Classic Problems of Synchronization
Monitors
CPU Scheduling:
Basic Concepts
Scheduling Criteria
Scheduling Algorithms (FCFS, SJF, SRTF, Priority, RR, Multilevel Queue Scheduling,
Multilevel Feedback Queue Scheduling)
Thread Scheduling
Deadlocks:
System Model,
Deadlock Characterization,
Methods for Handling Deadlocks
Deadlock Prevention
Deadlock Avoidance
Deadlock Detection
Recovery from Deadlock
III
Main Memory:
Address binding
Logical address space
Physical address space
MMU
Swapping,
Contiguous Memory Allocation → (fragmentation),
Segmentation, Paging,
Structure of the Page Tables → (Hashed Page Tables, Inverted Page Tables)
Virtual Memory: Background,
Demand Paging,
Copy-on-Write,
Page Replacement (any one algorithm)
Mass-Storage Structure:
Disk Scheduling,
Disk Management → (Disk formatting)
File-System Interface:
File Concept → (File attributes and file operations),
Directory and Disk Structure → (two level directory structure, Acyclic graph directories),
File-System Mounting
File-System Implementation: Directory Implementation
Free-Space Management
Operations on Processes
The execution of a process is a complex activity that involves various operations. The following operations are performed during the execution of a process:
Shared memory:
Inter-process communication through shared memory is a mechanism in which two or more processes can access a common region of memory. Communication is done via this shared memory, where changes made by one process can be seen by another process.
Message Passing:-
It is a mechanism that allows processes to synchronize and communicate with each other. By using message passing, the processes can communicate with each other without resorting to shared variables.
Usually, the inter-process communication mechanism provides two operations:
o send (message)
o receive (message)
Multithreading Models
Many-to-Many Model
In this model, multiple user threads are multiplexed onto the same or a smaller number of kernel-level threads. The number of kernel-level threads is specific to the machine. The advantage of this model is that if a user thread is blocked, other user threads can be scheduled onto other kernel threads. Thus, the system does not block if one particular thread is blocked.
It is considered the best multithreading model.
Many-to-One Model
In this model, multiple user threads are mapped to one kernel thread. When a user thread makes a blocking system call, the entire process blocks. As there is only one kernel thread and only one user thread can access the kernel at a time, multiple threads cannot run on multiple processors at the same time.
Thread management is done at the user level, so it is more efficient.
One-to-One Model
In this model, there is a one-to-one relationship between kernel and user threads, so multiple threads can run on multiple processors. The problem with this model is that creating a user thread requires creating the corresponding kernel thread.
As each user thread is connected to a different kernel thread, if any user thread makes a blocking system call, the other user threads are not blocked.
General structure of a typical process
There are basically four main sections through which each process has to pass. The general structure is:
Critical section:-
Consider a system consisting of n processes P0 to Pn. Every process has a segment of code, called the critical section, in which the process may be changing common variables, updating a table, writing a file, and so on. The critical section is therefore the area of code in which processes try to access shared information, and it is the area where a race condition can occur.
Entry section:-
The code just before the critical section is termed the entry section. Every process must request permission to enter its critical section; the area of code that makes this request is called the entry section.
Exit section:-
The code segment just after the critical section is termed the exit section.
Remainder section:-
The remaining code after the exit section is the remainder section.
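The four sections above can be sketched as a loop. In this sketch, `threading.Lock` is only a placeholder for whatever entry/exit protocol is actually used, and the shared counter is illustrative.

```python
# Skeleton of the general structure of a typical process.
import threading

lock = threading.Lock()
counter = 0

def process(iterations):
    global counter
    for _ in range(iterations):
        lock.acquire()        # entry section: request permission
        counter += 1          # critical section: touch shared data
        lock.release()        # exit section
        # remainder section: independent work goes here

threads = [threading.Thread(target=process, args=(1000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 2000: the critical section was protected
```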
The Critical Section Problem
Critical Section is the part of a program which tries to access shared resources. That resource
may be any resource in a computer like a memory location, Data structure, CPU or any IO
device.
The critical section cannot be executed by more than one process at the same time, so the operating system faces difficulty in allowing and disallowing processes from entering the critical section.
The critical-section problem is to design a set of protocols that ensure that a race condition among the processes never arises.
In order to synchronize the cooperative processes, our main task is to solve the critical section
problem.
Peterson’s solution
Peterson’s Algorithm is used to synchronize two processes. It uses two variables, a bool
array flag of size 2 and an int variable turn to accomplish it.
In the solution, i represents the consumer and j represents the producer. Initially the flags are false. When a process wants to execute its critical section, it sets its flag to true and sets turn to the index of the other process. This means that the process wants to execute, but it will allow the other process to run first. The process busy-waits until the other process has finished its own critical section.
After this, the current process enters its critical section and adds or removes a random number from the shared buffer. After completing the critical section, it sets its own flag to false, indicating that it does not wish to execute any more.
The program runs for a fixed amount of time before exiting. This time can be changed by changing the value of the macro RT.
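A minimal Python sketch of Peterson's algorithm for two threads follows (the C program with the RT macro referred to above is not reproduced here; the shared counter and iteration count are illustrative stand-ins for the shared buffer).

```python
# Peterson's algorithm: two threads, a flag array and a turn variable.
import sys
import threading

sys.setswitchinterval(0.0001)  # switch often so the busy waits stay cheap

flag = [False, False]   # flag[i]: process i wants to enter
turn = 0                # whose turn it is to yield
counter = 0
N = 1000

def worker(i):
    global turn, counter
    other = 1 - i
    for _ in range(N):
        flag[i] = True              # I want to enter my critical section
        turn = other                # but I let the other process go first
        while flag[other] and turn == other:
            pass                    # busy wait (entry section)
        counter += 1                # critical section
        flag[i] = False             # exit section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 2000: no increments were lost
```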
Hardware Synchronization Algorithms
Process synchronization problems occur when two processes running concurrently share the same data or the same variable, and the value of that variable may not be updated correctly before it is used by the second process. Such a condition is known as a race condition.
There are software as well as hardware solutions to this problem.
There are three algorithms in the hardware approach of solving Process Synchronization
problem:
1. Test and Set
2. Swap
3. Unlock and Lock
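A hedged sketch of the first approach, Test and Set: the hardware instruction atomically returns the old value of a lock flag and sets it to true. Python cannot issue the instruction itself, so a small internal lock stands in for the hardware atomicity here.

```python
# Emulation of the atomic TestAndSet instruction and a spin lock on it.
import threading

class TestAndSetLock:
    def __init__(self):
        self._flag = False
        self._atomic = threading.Lock()   # stands in for hardware atomicity

    def test_and_set(self):
        # Atomically return the old value and set the flag to True.
        with self._atomic:
            old = self._flag
            self._flag = True
            return old

    def unlock(self):
        self._flag = False

lock = TestAndSetLock()
counter = 0

def worker():
    global counter
    for _ in range(1000):
        while lock.test_and_set():   # spin until the old value was False
            pass
        counter += 1                 # critical section
        lock.unlock()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 2000
```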
2. Swap:
Swap algorithm is a lot like the TestAndSet algorithm. Instead of directly setting lock to
true in the swap function, key is set to true and then swapped with lock.
3. Unlock and Lock :
The Unlock and Lock algorithm uses TestAndSet to regulate the value of lock, but it adds another value, waiting[i], for each process, which records whether that process is waiting. A ready queue is maintained of the processes waiting to enter the critical section.
Mutex Locks
1. Mutex is Binary in nature
2. Operations like Lock and Release are possible
3. Mutexes are typically used with threads, while semaphores can also synchronize processes.
4. A mutex can operate in user space, while a semaphore is managed by the kernel.
5. Mutex provides locking mechanism
6. A thread may acquire more than one mutex
7. Binary Semaphore and mutex are different
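Points 1, 2 and 5 can be sketched with Python's `threading.Lock` (used here as a stand-in for a generic mutex):

```python
# A mutex is binary (held or free), supports lock/release,
# and enforces single ownership.
import threading

mutex = threading.Lock()

mutex.acquire()                         # Lock: the mutex is now held
second = mutex.acquire(blocking=False)  # a second attempt fails at once
print(second)                           # False: binary, one owner only
mutex.release()                         # Release: the mutex is free again
again = mutex.acquire(blocking=False)
print(again)                            # True: it can be taken again
mutex.release()
```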
Semaphores in OS
While a mutex is a lock (wait) and release mechanism, semaphores are signalling mechanisms that signal the state of the critical section to processes and grant access to the critical section accordingly.
Semaphores use the following methods to control access to critical section code –
1. Wait
2. Signal
Wait and Signal are the two methods associated with semaphores. Some articles represent them as wait(s) and signal(s), while others represent them as p(s) for wait and v(s) for signal.
Wait p(s) or wait(s)
1. Wait decrements the value of semaphore by 1
Signal v(s) or signal(s)
1. Signal increments the value of semaphore by 1
Semaphore
1. A semaphore can never have a negative value
2. Before the start of the program, it is always initialised to:
1. n in a counting semaphore (where n is the number of processes allowed to enter the critical section simultaneously)
2. 1 in the case of a binary semaphore
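Wait and Signal can be sketched on the classic bounded-buffer problem: `empty` is a counting semaphore initialised to n (free slots), `full` counts filled slots, and a binary lock protects the buffer itself. The buffer size and items are illustrative.

```python
# Producer-consumer with wait (acquire) and signal (release).
import threading
from collections import deque

n = 4
buffer = deque()
empty = threading.Semaphore(n)   # counting semaphore, initialised to n
full = threading.Semaphore(0)
mutex = threading.Lock()         # binary protection of the buffer itself
consumed = []

def producer():
    for item in range(10):
        empty.acquire()          # wait(empty): claim a free slot
        with mutex:
            buffer.append(item)
        full.release()           # signal(full): one more filled slot

def consumer():
    for _ in range(10):
        full.acquire()           # wait(full): wait for an item
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()          # signal(empty): slot freed

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```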
Monitors
The monitor is one of the ways to achieve process synchronization. Monitors are supported by programming languages to achieve mutual exclusion between processes; for example, Java's synchronized methods together with its wait() and notify() constructs.
1. A monitor is a collection of condition variables and procedures combined together in a special kind of module or package.
2. Processes running outside the monitor cannot access its internal variables, but they can call the monitor's procedures.
3. Only one process at a time can execute code inside the monitor.
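The three points can be sketched in Python: one internal lock makes the methods mutually exclusive (like Java's synchronized methods), and a condition variable provides wait()/notify(). The account example and amounts are hypothetical.

```python
# A monitor guarding an account balance with a condition variable.
import threading

class AccountMonitor:
    def __init__(self):
        self._lock = threading.Lock()
        self._enough = threading.Condition(self._lock)
        self.balance = 0            # internal variable: access via procedures

    def deposit(self, amount):
        with self._lock:            # only one process inside at a time
            self.balance += amount
            self._enough.notify_all()    # wake any waiting withdrawers

    def withdraw(self, amount):
        with self._lock:
            while self.balance < amount: # wait on the condition variable
                self._enough.wait()
            self.balance -= amount

account = AccountMonitor()
w = threading.Thread(target=account.withdraw, args=(50,))
w.start()                 # blocks inside the monitor until funds arrive
account.deposit(80)
w.join()
print(account.balance)    # 30
```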
Scheduling algorithm
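As an illustration of FCFS, the first algorithm listed in the syllabus above, here is a hedged sketch of computing waiting times when all processes arrive at time 0 (the burst times are the classic textbook example, not data from these notes):

```python
# FCFS: each process waits for the total burst time of those before it.
def fcfs_waiting_times(bursts):
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # time spent waiting before this process runs
        elapsed += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])   # bursts for P1, P2, P3
print(waits)                             # [0, 24, 27]
print(sum(waits) / len(waits))           # 17.0 average waiting time
```

Note how the long first burst (the convoy effect) inflates the average; SJF would order the short bursts first and lower it.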
Methods of handling deadlocks : There are three approaches to deal with deadlocks.
1. Deadlock Prevention
2. Deadlock avoidance
3. Deadlock detection
These are explained below.
1. Deadlock Prevention : The strategy of deadlock prevention is to design the system in such a way that the possibility of deadlock is excluded. Indirect methods prevent the occurrence of one of three necessary conditions of deadlock, i.e., mutual exclusion, no pre-emption, or hold and wait. Direct methods prevent the occurrence of circular wait.
Prevention techniques – Mutual exclusion – is supported by the OS. Hold and wait – this condition can be prevented by requiring that a process request all its required resources at one time, blocking the process until all of its requests can be granted simultaneously. But this prevention does not yield good results because:
• long waiting times are required
• allocated resources are used inefficiently
• a process may not know all its required resources in advance
No pre-emption – techniques for no pre-emption are:
• If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources currently being held are released and, if necessary, requested again together with the additional resource.
• If a process requests a resource that is currently held by another process, the OS may pre-empt the second process and require it to release its resources. This works only if the two processes do not have the same priority.
Circular wait – One way to ensure that this condition never holds is to impose a total ordering of all resource types and to require that each process requests resources in increasing order of enumeration, i.e., if a process has been allocated resources of type R, it may subsequently request only resources of types that follow R in the ordering.
2. Deadlock Avoidance : This approach allows the three necessary conditions of deadlock but makes judicious choices to ensure that the deadlock point is never reached. It allows more concurrency than deadlock prevention. A decision is made dynamically about whether the current resource-allocation request will, if granted, potentially lead to deadlock. It requires knowledge of future process requests. Two techniques to avoid deadlock:
1. Process initiation denial
2. Resource allocation denial
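The second technique, resource-allocation denial, is usually realised by the Banker's algorithm safety test: a request is granted only if, after the grant, some ordering still lets every process finish. A hedged sketch follows; the two-process, two-resource matrices are illustrative.

```python
# Banker's algorithm safety test: is there a safe sequence?
def is_safe(available, allocation, need):
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Pretend process i runs to completion and releases
                # everything it currently holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)   # safe iff every process could finish

allocation = [[0, 1], [2, 0]]
print(is_safe([3, 3], allocation, [[4, 2], [1, 2]]))  # True: P1 then P0
print(is_safe([3, 3], allocation, [[7, 4], [1, 2]]))  # False: P0 never fits
```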
Advantages of deadlock avoidance techniques :
• Not necessary to pre-empt and rollback processes
• Less restrictive than deadlock prevention
Disadvantages :
• Future resource requirements must be known in advance
• Processes can be blocked for long periods
• There must be a fixed number of resources available for allocation
3. Deadlock Detection : Deadlock detection works by employing an algorithm that tracks circular waiting and kills one or more processes so that the deadlock is removed. The system state is examined periodically to determine if a set of processes is deadlocked. A deadlock is resolved by aborting and restarting a process, relinquishing all the resources that the process held.
• This technique does not limit resource access or restrict process actions.
• Requested resources are granted to processes whenever possible.
• It never delays the process initiation and facilitates online handling.
• The disadvantage is the inherent pre-emption losses.
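The circular-wait tracking mentioned above is commonly done by looking for a cycle in a wait-for graph (an edge P → Q meaning "P waits for a resource held by Q"). A hedged sketch with hypothetical process names:

```python
# Deadlock detection: depth-first search for a cycle in the wait-for graph.
def has_deadlock(wait_for):
    visited, on_path = set(), set()

    def dfs(p):
        visited.add(p)
        on_path.add(p)
        for q in wait_for.get(p, []):
            if q in on_path or (q not in visited and dfs(q)):
                return True   # a back edge closes a cycle: deadlock
        on_path.discard(p)
        return False

    return any(p not in visited and dfs(p) for p in wait_for)

print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False
print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
```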
What is Main Memory:
Main memory is central to the operation of a modern computer. It is a large array of words or bytes, ranging in size from hundreds of thousands to billions. Main memory is a repository of rapidly available information shared by the CPU and I/O devices. It is the place where programs and data are kept while the processor is actively using them. Main memory is closely coupled to the processor, so moving instructions and data into and out of the processor is extremely fast. Main memory is also known as RAM (Random Access Memory).
Disk scheduling
Disk scheduling is done by operating systems to schedule I/O requests arriving for the
disk. Disk scheduling is also known as I/O scheduling.
Disk scheduling is important because:
• Multiple I/O requests may arrive by different processes and only one I/O
request can be served at a time by the disk controller. Thus other I/O requests
need to wait in the waiting queue and need to be scheduled.
• Two or more requests may be far from each other, which can result in greater disk
arm movement.
• Hard drives are one of the slowest parts of the computer system and thus need
to be accessed in an efficient manner.
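The arm-movement point can be sketched by comparing total seek distance under FCFS and SSTF (shortest seek time first) for one request queue; the cylinder numbers are the classic textbook example, not data from these notes.

```python
# Total head movement for two disk-scheduling algorithms.
def fcfs_seek(start, requests):
    total, pos = 0, start
    for r in requests:              # serve strictly in arrival order
        total += abs(r - pos)
        pos = r
    return total

def sstf_seek(start, requests):
    total, pos, pending = 0, start, list(requests)
    while pending:                  # always serve the closest request next
        nearest = min(pending, key=lambda r: abs(r - pos))
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs_seek(53, queue))   # 640 cylinders of head movement
print(sstf_seek(53, queue))   # 236 cylinders: far less arm movement
```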
• Two-level directory –
As we have seen, a single-level directory often leads to confusion of file names
among different users. The solution to this problem is to create a separate
directory for each user.
In the two-level directory structure, each user has their own user file directory
(UFD). The UFDs have similar structures, but each lists only the files of a single
user. The system's master file directory (MFD) is searched whenever a new user
logs in. The MFD is indexed by username or account number, and each
entry points to the UFD for that user.
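The MFD/UFD lookup can be sketched with nested dictionaries; the user names and "inode" strings below are hypothetical placeholders for real directory entries.

```python
# Two-level directory: the MFD maps users to UFDs, each UFD maps
# file names to file entries.
mfd = {
    "alice": {"notes.txt": "inode-11", "a.out": "inode-12"},  # alice's UFD
    "bob":   {"notes.txt": "inode-27"},                       # bob's UFD
}

def lookup(user, filename):
    ufd = mfd.get(user)            # the MFD is searched when a user logs in
    if ufd is None:
        return None
    return ufd.get(filename)       # names only clash within one UFD

print(lookup("alice", "notes.txt"))  # inode-11
print(lookup("bob", "notes.txt"))    # inode-27: same name, different user
```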
• Acyclic graph directory –
An acyclic graph is a graph with no cycle and allows us to share subdirectories
and files. The same file or subdirectories may be in two different directories. It
is a natural generalization of the tree-structured directory.
It is used in situations where, for example, two programmers are working on a
joint project and need to access each other's files. The associated files are
stored in a subdirectory, separating them from other projects and the files of
other programmers. Since they are working on a joint project, they want that
subdirectory to appear in both of their own directories, and the common
subdirectory should be shared. This is where acyclic-graph directories are used.
It is important to note that a shared file is not the same as a copy of the file.
If either programmer makes a change in the shared subdirectory, the change
will be reflected in both directories.