OS - Unit 1
UNIT-I
Introduction: What Operating Systems Do, Operating-System Structure, Operating-System Operations, Process
Management, Memory Management, Storage Management, Protection and Security, Kernel Data Structures.
System Structures: Operating-System Services, User and Operating-System Interface, System Calls, Types of
System Calls, Operating-System Structure.
Process Concept: Process Concept, Process Scheduling, Operations on Processes, Inter process Communication.
Computer System Organization
The organization of a computer system can be studied as follows:
(a) Computer Startup :
For a computer to start running—for instance, when it is powered up or rebooted—it needs to
have an initial program to run.
This initial program, or bootstrap program, tends to be simple.
Typically, it is stored within the computer hardware in read-only memory (ROM) or electrically
erasable programmable read-only memory (EEPROM), known by the general term firmware.
It initializes all aspects of the system, from CPU registers to device controllers to memory
contents.
The bootstrap program must know how to load the operating system and how to start executing
that system.
To accomplish this goal, the bootstrap program must locate the operating-system kernel and
load it into memory.
Interrupt Timeline:
To start an I/O operation, the device driver loads the appropriate registers within the device
controller.
The device controller, in turn, examines the contents of these registers to determine what
action to take (such as “read a character from the keyboard”).
The controller starts the transfer of data from the device to its local buffer.
Once the transfer of data is complete, the device controller informs the device driver via an
interrupt that it has finished its operation.
The device driver then returns control to the operating system, possibly returning the data or a
pointer to the data if the operation was a read.
For other operations, the device driver returns status information.
There are two types of I/O methods:
Synchronous I/O and Asynchronous I/O.
Fig: Two I/O methods: (a) synchronous and (b) asynchronous.
Synchronous I/O means that some flow of execution (such as a process or thread) is
waiting for the operation to complete.
Asynchronous I/O means that nothing is waiting for the operation to complete and the
completion of the operation itself causes something to happen.
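As a concrete sketch of the synchronous case, the C function below (a minimal illustration, not from the text; the file path is chosen by the caller) writes a message to a file and reads it back. Both write() and read() block the calling thread until the kernel completes the transfer, which is exactly the synchronous model described above.

```c
/* Synchronous I/O sketch: write a message to a file, then read it back.
 * Returns 0 if the round trip succeeds. The path is a caller-supplied
 * example, not anything prescribed by the text. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int sync_io_roundtrip(const char *path, const char *msg)
{
    char buf[128];
    int fd = open(path, O_CREAT | O_TRUNC | O_WRONLY, 0600);
    if (fd < 0) return -1;
    /* write() does not return until the kernel has accepted the data */
    ssize_t n = write(fd, msg, strlen(msg));
    close(fd);
    if (n != (ssize_t)strlen(msg)) return -1;

    fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    /* read() blocks the calling thread until the data is available */
    n = read(fd, buf, sizeof(buf) - 1);
    close(fd);
    if (n < 0) return -1;
    buf[n] = '\0';
    return strcmp(buf, msg) == 0 ? 0 : -1;
}
```

Asynchronous I/O would instead return immediately and deliver completion later (for example via a signal or callback).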
(e) Storage Structure
Registers are a type of computer memory used to quickly accept, store, and transfer
data and instructions that are being used immediately by the CPU. The registers used by
the CPU are often termed as Processor registers.
A processor register may hold an instruction, a storage address, or any data (such as bit
sequence or individual characters).
The computer needs processor registers for manipulating data and a register for holding
a memory address. The register holding the memory location is used to calculate the
address of the next instruction after the execution of the current instruction is
completed.
Following is a list of some of the most common registers used in a basic computer:

Register              Symbol   Number of bits   Function
Data register         DR       16               Holds memory operand
Address register      AR       12               Holds address for memory
Accumulator           AC       16               Processor register
Instruction register  IR       16               Holds instruction code
Program counter       PC       12               Holds address of next instruction
Temporary register    TR       16               Holds temporary data
Input register        INPR     8                Holds input character
Output register       OUTR     8                Holds output character
Cache memory:
It is a small-sized type of volatile computer memory that provides high-speed data
access to a processor and stores frequently used computer programs, applications and
data.
Main memory :
Fig : Main Memory
The only large storage medium that the CPU can access directly.
This is Random access memory.
This is also called as volatile memory.
Secondary storage :
Extension of main memory and that provides large nonvolatile storage capacity.
Solid-state disks:
Nonvolatile.
Faster than hard disks.
Implemented with various technologies; becoming more popular.
Hard disks :
Rigid metal or glass platters covered with magnetic recording material.
Disk surface is logically divided into tracks, which are subdivided into sectors.
The disk controller determines the logical interaction between the device and the
computer.
Optical disks:
A storage medium from which data is read, and to which it is written, by lasers.
Optical disks can store much more data -- up to 6 gigabytes (6 billion bytes) -- than most portable magnetic media, such as floppies.
There are three basic types of optical disks:
CD-ROM :Like audio CDs, CD-ROMs come with data already encoded onto them. The data is
permanent and can be read any number of times, but CD-ROMs cannot be modified.
WORM : Stands for write-once, read -many. With a WORM disk drive, you can write data onto
a WORM disk, but only once. After that, the WORM disk behaves just like a CD-ROM.
Erasable: Optical disks that can be erased and loaded with new data, just like magnetic disks.
These are often referred to as EO (erasable optical) disks.
These three technologies are not compatible with one another; each requires a different type of disk drive and disk. Even within one category there are many competing formats, although CD-ROMs are relatively standardized.
A magnetic disk :
This is a storage device that uses a magnetization process to write, rewrite and access data.
It is covered with a magnetic coating and stores data in the form of tracks, spots and sectors.
Hard disks, zip disks and floppy disks are common examples of magnetic disks.
Fig:Magnetic Disk
Magnetic Tape:
A magnetic tape, in computer terminology, is a storage medium that allows for data archiving,
collection, and backup.
Computer-System Architecture
A computer system can be organized in a number of different ways, which we can categorize roughly according to the number of general-purpose processors used. They are:
(a) Single-Processor Systems:
On a single-processor system, there is one main CPU capable of executing a general-purpose instruction set, including instructions from user processes.
(b) Multiprocessor Systems:
There are mainly two types of multiprocessors, symmetric and asymmetric. Details about them are as follows:
Symmetric Multiprocessors:
In these types of systems, each processor contains a similar copy of the operating system and
they all communicate with each other. All the processors are in a peer to peer relationship i.e.
no master - slave relationship exists between them.
An example of the symmetric multiprocessing system is the Encore version of Unix for the
Multimax Computer.
Asymmetric Multiprocessors:
In asymmetric systems, each processor is given a predefined task. There is a master processor
that gives instruction to all the other processors.
Asymmetric multiprocessor system contains a master slave relationship.
Asymmetric multiprocessing was the only type of multiprocessing available before symmetric multiprocessors were created, and it is still the cheaper option.
Difference Between Asymmetric and Symmetric Multiprocessing:
In asymmetric multiprocessing, tasks of the operating system are done by the master processor only.
In symmetric multiprocessing, tasks of the operating system are done by the individual processors, all of which share a common memory.
(c ) A dual-core design :
Fig : A dual-core design with two cores placed on the same chip.
(d) Clustered Systems:
In a symmetric clustering system, two or more nodes all run applications and monitor each other. This is more efficient than asymmetric clustering because it uses all the hardware and does not keep a node merely as a hot standby.
A diagram that demonstrates a symmetric clustering system is:
In asymmetric clustering:
One node is in hot-standby mode while the others run the applications. The hot-standby node does nothing but monitor the active server; if that server fails, the hot-standby node becomes the active server.
A diagram that demonstrates an asymmetric clustering system is:
How Asymmetric Clustering Works:The following steps demonstrate the working of the
asymmetric clustering system:
There is a master node in asymmetric clustering that directs all the slave nodes to perform the tasks required. The requests are delegated by the master node.
A distributed cache is used in asymmetric clustering to improve the performance of the system.
Resources such as memory, peripheral devices etc. are divided between the nodes of the
asymmetric clustering system at boot time.
Definition: An Operating System (OS) is an interface between computer user and computer
hardware.
An operating system is software which performs all the basic tasks like file management,
memory management, process management, handling input and output, and controlling
peripheral devices such as disk drives and printers.
An operating system is a program that acts as an interface between the user and the computer
hardware and controls the execution of all kinds of programs.
Some popular Operating Systems include Linux, Windows, OS X, VMS, OS/400, AIX, z/OS, etc.
Following are some important functions of an operating system:
Memory Management
Processor Management
Device Management
Storage Management
File Management
Security
Control over system performance
Job accounting
Error detecting aids
Coordination between other software and users
(b) Processor Management
In multiprogramming environment, the OS decides which process gets the processor when and
for how much time. This function is called process scheduling.
(f) Security − By means of password and similar other techniques, it prevents unauthorized access
to programs and data.
(g) Control over system performance − Recording delays between request for a service and
response from the system.
(h) Job accounting − Keeping track of time and resources used by various jobs and users.
(i) Error detecting aids − Production of dumps, traces, error messages, and other debugging and
error detecting aids.
(j) Coordination between other software and users − Coordination and assignment of compilers, interpreters, assemblers and other software to the various users of the computer systems.
A Multiprocessor system consists of several processors that share a common physical memory.
Multiprocessor system provides higher computing power and speed.
In multiprocessor system all processors operate under single operating system.
The multiplicity of the processors and how they act together are transparent to the users.
Distributed systems use multiple central processors to serve multiple real-time applications and
multiple users.
Data processing jobs are distributed among the processors accordingly.
The processors communicate with one another through various communication lines (such as
high-speed buses or telephone lines).
Processors in a distributed system may vary in size and function.
These processors are referred as sites, nodes, computers, and so on.
A Network Operating System runs on a server and provides the server the capability to manage
data, users, groups, security, applications, and other networking functions.
The primary purpose of the network operating system is to allow shared file and printer access
among multiple computers in a network, typically a local area network (LAN), a private network
or to other networks.
Examples of network operating systems include Microsoft Windows Server 2003, Microsoft
Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD.
1. Single User: If the single user Operating System is loaded in computer’s memory; the
computer would be able to handle one user at a time.
Ex: MS-Dos, MS-Win 95-98, Win-ME
2. Multi user: If the multi-user Operating System is loaded in computer’s memory; the
computer would be able to handle more than one user at a time.
Ex: UNIX, Linux, XENIX
(c) Communication:
In case of distributed systems which are a collection of processors that do not share memory,
peripheral devices, or a clock, the operating system manages communications between all the
processes. Multiple processes communicate with one another through communication lines in
the network.
The OS handles routing and connection strategies, and the problems of contention and
security.
The major activities of an operating system with respect to communication :
Two processes often require data to be transferred between them.
Both the processes can be on one computer or on different computers, but are connected
through a computer network.
Communication may be implemented by two methods, either by Shared Memory or by
Message Passing.
(f) Protection:
Considering a computer system having multiple users and concurrent execution of multiple
processes, the various processes must be protected from each other's activities.
Protection refers to a mechanism or a way to control the access of programs, processes, or
users to the resources defined by a computer system.
The major activities of an operating system with respect to protection –
The OS ensures that all access to system resources is controlled.
The OS ensures that external I/O devices are protected from invalid access attempts.
The OS provides authentication features for each user by means of passwords.
The main difference between GUI and CLI is that the Graphical User Interface (GUI) allows the
user to interact with the system using graphical elements such as windows, icons, menus while
the Command Line Interface (CLI) allows the user to interact with the system using commands.
Fig: Abstract view of an OS
Different modes supported by the operating system are:
Kernel Mode:
When CPU is in kernel mode, the code being executed can access any memory address and any
hardware resource.
Hence kernel mode is a very privileged and powerful mode.
If a program crashes in kernel mode, the entire system will be halted.
User Mode:
When CPU is in user mode, the programs don’t have direct access to memory and hardware
resources.
In user mode, if any program crashes, only that particular program is halted.
That means the system will be in a safe state even if a program in user mode crashes.
Hence, most programs in an OS run in user mode.
System Call:
System calls provide an interface to the services made available by an operating system.
These calls are generally available as routines written in C and C++, although certain low-level tasks (for example, tasks where hardware must be accessed directly) may have to be written using assembly-language instructions.
System calls can be grouped roughly into six major categories: process control, file manipulation, device manipulation, information maintenance, communications, and protection.
Process control
◦ end, abort
◦ load, execute
◦ create process, terminate process
◦ get process attributes, set process attributes
◦ wait for time
◦ wait event, signal event
◦ allocate and free memory
File management
◦ create file, delete file
◦ open, close
◦ read, write, reposition
◦ get file attributes, set file attributes
Device management
◦ request device, release device
◦ read, write, reposition
◦ get device attributes, set device attributes
◦ logically attach or detach devices
Information maintenance
◦ get time or date, set time or date
◦ get system data, set system data
◦ get process, file, or device attributes
◦ set process, file, or device attributes
Communications
◦ create, delete communication connection
◦ send, receive messages
◦ transfer status information
◦ attach or detach remote device
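Several of the categories above can be seen side by side in a short sketch. POSIX call names are used here as one concrete system-call interface (the file path is an arbitrary example, not anything from the text):

```c
/* A few system-call categories exercised through the POSIX API:
 * information maintenance (getpid, time) and file manipulation
 * (create, write, delete). Returns 0 on success. */
#include <fcntl.h>
#include <sys/types.h>
#include <time.h>
#include <unistd.h>

int syscall_categories_demo(const char *path)
{
    /* information maintenance: get process attributes and the time */
    pid_t pid = getpid();
    time_t now = time(NULL);
    if (pid <= 0 || now == (time_t)-1) return -1;

    /* file manipulation: create a file, write to it, then delete it */
    int fd = open(path, O_CREAT | O_TRUNC | O_WRONLY, 0600);
    if (fd < 0) return -1;
    if (write(fd, "x", 1) != 1) { close(fd); return -1; }
    close(fd);
    if (unlink(path) != 0) return -1;   /* delete the file again */
    return 0;
}
```

Process-control calls such as fork() and exit() are shown later in the Operations on Processes section.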
Operating-System Structure
A general-purpose OS is a very large program.
Various ways to represent the operating system structure are:
Simple structure or MS-DOS :
Fig:MS DOS
In MS-DOS, the interfaces and levels of functionality are not well separated.
For instance, application programs are able to access the basic I/O routines to write directly to the display and disk drives.
Such freedom leaves MS-DOS vulnerable to errant (or malicious) programs, causing entire system crashes when user programs fail.
Of course, MS-DOS was also limited by the hardware of its era.
Because the Intel 8088 for which it was written provides no dual mode and no hardware protection, the designers of MS-DOS had no choice but to leave the base hardware accessible.
2. Layered – an abstraction
Fig:Layered OS
The operating system is divided into a number of layers (levels), each built on top of lower
layers.
The bottom layer (layer 0), is the hardware; the highest (layer N) is the user interface.
With modularity, layers are selected such that each uses functions (operations) and services of only lower-level layers.
3. Microkernel -Mach
Fig:Micro Kernel
The main function of the microkernel is to provide communication between the client program
and the various services that are also running in user space.
Communication is provided through message passing.
Mach is an example of an operating system that modularized the kernel using the microkernel approach.
4.Modules
Fig:Kernel Module
The best current methodology for operating-system design involves using loadable kernel
modules.
Here, the kernel has a set of core components and links in additional services via modules,
either at boot time or during run time.
The Solaris operating system structure, shown in the figure, is organized around a core kernel with seven types of loadable kernel modules:
1. Scheduling classes
2. File systems
3. Loadable system calls
4. Executable formats
5. STREAMS modules
6. Miscellaneous
7. Device and bus drivers
This type of design is common in modern implementations of UNIX, such as Solaris, Linux, and Mac OS X,
as well as Windows.
5.Hybrid Systems
Most modern operating systems do not follow one pure model. A hybrid structure combines multiple approaches to address performance, security, and usability needs.
We explore the structure of three hybrid systems: the Apple Mac OS X operating system and the two most prominent mobile operating systems, iOS and Android.
Apple Mac OS X:
Mac OS X has a hybrid, layered structure: the Aqua user interface and the Cocoa programming environment sit on top of a kernel environment that combines the Mach microkernel with BSD UNIX components.
iOS:
Figure:Architecture of Apple’s iOS.
iOS is a mobile operating system designed by Apple to run its smartphone, the iPhone, as well as its tablet computer, the iPad.
iOS is structured on the Mac OS X operating system, with added functionality pertinent to mobile devices, but does not directly run Mac OS X applications.
Cocoa Touch is an API for Objective-C that provides several frameworks for developing applications that run on iOS devices.
The fundamental difference between Cocoa, mentioned earlier, and Cocoa Touch is that the latter provides support for hardware features unique to mobile devices, such as touch screens.
The media services layer provides services for graphics, audio, and video.
The core services layer provides a variety of features, including support for cloud computing and databases.
The bottom layer represents the core operating system, which is based on the kernel environment shown in the figure.
Android
The Android operating system was designed by the Open Handset Alliance (led primarily by Google) and was developed for Android smartphones and tablet computers.
Whereas iOS is designed to run on Apple mobile devices and is closed-source, Android runs on a variety of mobile platforms and is open-source, partly explaining its rapid rise in popularity.
Android is similar to iOS in that it is a layered stack of software that provides a rich set of frameworks for developing mobile applications.
At the bottom of this software stack is the Linux kernel, although it has been modified by Google and is currently outside the normal distribution of Linux releases.
Linux is used primarily for process, memory, and device-driver support for hardware and has been expanded to include power management.
The Android runtime environment includes a core set of libraries as well as the Dalvik virtual machine.
Software designers for Android devices develop applications in the Java language.
However, rather than using the standard Java API, Google has designed a separate Android API for Java development.
The Java class files are first compiled to Java bytecode and then translated into an executable file that runs on the Dalvik virtual machine.
The Dalvik virtual machine was designed for Android and is optimized for mobile devices with limited memory and CPU processing capabilities.
The set of libraries available for Android applications includes frameworks for developing web browsers (webkit), database support (SQLite), and multimedia.
The libc library is similar to the standard C library but is much smaller and has been designed for the slower CPUs that characterize mobile devices.
Operating System Debugging
Debugging is the process of finding the problems in a computer system and solving them. There
are many different ways in which operating systems perform debugging. Some of these are:
Log Files
The log files record all the events that occur in an operating system. This is done by writing all
the messages into a log file. There are different types of log files. Some of these are given as
follows:
Event Logs
These store records of all the events that occur during the execution of a system, so that the activities of all the events can be examined to diagnose problems.
Transaction Logs
The transaction logs store the changes to the data so that the system can recover from crashes
and other errors. These logs are readable by a human.
Message Logs
These logs store both the public and private messages between the users. They are mostly plain
text files, but in some cases they may be HTML files.
Core Dump Files
The core dump files contain the memory address space of a process that terminates
unexpectedly. The creation of the core dump is triggered in response to program crashes by the
kernel. The core dump files are used by the developers to find the program’s state at the time of
its termination so that they can find out why the termination occurred. The automatic creation of the core dump files can be disabled by the users. This may be done to improve performance, clear disk space, or increase security.
Crash Dump Files
In the event of a total system failure, the information about the state of the operating system is
captured in crash dump files. There are three types of dump that can be captured when a system crashes. These are:
Complete memory dump: captures the entire contents of system memory at the time of the crash.
Kernel memory dump: captures only the memory in use by the kernel at the time of the crash.
Small memory dump: captures a minimal set of information, such as the stop message, the list of loaded drivers, and the context of the crashed process.
Process Concept:
A process is a program in execution. A process in memory is divided into the following sections:
S.N. Component & Description
1 Stack
The process Stack contains the temporary data such as method/function
parameters, return address and local variables.
2 Heap
This is dynamically allocated memory to a process during its run time.
3 Text
This includes the current activity represented by the value of Program
Counter and the contents of the processor's registers.
4 Data
This section contains the global and static variables.
Process Control Block (PCB):
Each process is represented in the operating system by a process control block (PCB), shown in the diagram below.
A PCB keeps all the information needed to keep track of a process, as listed below in the table:
1 Process State
The current state of the process i.e., whether it is ready, running, waiting,
or whatever.
2 Process privileges
This is required to allow/disallow access to system resources.
3 Process ID
Unique identification for each of the process in the operating system.
4 Pointer
A pointer to parent process.
5 Program Counter
Program Counter is a pointer to the address of the next instruction to be
executed for this process.
6 CPU registers
Various CPU registers whose contents must be saved for the process when it leaves the running state.
7 CPU scheduling information
Process priority and other scheduling information which is required to schedule the process.
8 Memory management information
This includes page table, memory limits, and segment table information, depending on the memory system used by the operating system.
9 Accounting information
This includes the amount of CPU used for process execution, time limits, execution ID, and so on.
10 IO status information
This includes a list of I/O devices allocated to the process.
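The table above maps naturally onto a struct. The sketch below is illustrative only; the field names and sizes are made up for this example and do not match any real kernel's PCB layout:

```c
/* A simplified Process Control Block mirroring the table above.
 * Numbers in the comments refer to the table rows. */
#include <stdint.h>

enum proc_state { P_NEW, P_READY, P_RUNNING, P_WAITING, P_TERMINATED };

struct pcb {
    int             pid;             /* 3: unique process ID            */
    enum proc_state state;           /* 1: current process state        */
    int             privileges;      /* 2: allowed resource access      */
    struct pcb     *parent;          /* 4: pointer to parent process    */
    uintptr_t       program_counter; /* 5: next instruction address     */
    uintptr_t       registers[8];    /* 6: saved CPU registers          */
    long            cpu_time_used;   /* 9: accounting information       */
    int             open_devices[4]; /* 10: I/O status information      */
};

/* Initialize a fresh PCB; every process starts in the new/start state. */
struct pcb make_pcb(int pid)
{
    struct pcb p = {0};
    p.pid = pid;
    p.state = P_NEW;
    return p;
}
```

Real PCBs (e.g. Linux's task_struct) carry far more fields, but the categories are the same.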
Process Life Cycle Methods
When a process executes, it passes through different states. These stages may differ in different operating systems, and the names of these states are also not standardized. In general, a process can be in one of the following five states at a time.
1 Start
This is the initial state when a process is first started/created.
2 Ready
The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run. A process may come into this state after the Start state, or while running, when it is interrupted by the scheduler to assign the CPU to some other process.
3 Running
Once the process has been assigned to a processor by the OS scheduler,
the process state is set to running and the processor executes its
instructions.
4 Waiting
Process moves into the waiting state if it needs to wait for a resource,
such as waiting for user input, or waiting for a file to become available.
5 Terminated or Exit
Once the process finishes its execution, or it is terminated by the
operating system, it is moved to the terminated state where it waits to be
removed from main memory.
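The five states and their legal transitions can be encoded in a small checker. This is a sketch of the model in the table above; the state names are invented for the example, and real systems track more states:

```c
/* The five-state life cycle: valid_transition() returns 1 iff the move
 * between the two states is one the table above permits. */
enum pstate { S_START, S_READY, S_RUNNING, S_WAITING, S_TERMINATED };

int valid_transition(enum pstate from, enum pstate to)
{
    switch (from) {
    case S_START:   return to == S_READY;       /* process admitted      */
    case S_READY:   return to == S_RUNNING;     /* dispatched by OS      */
    case S_RUNNING: return to == S_READY        /* preempted by scheduler*/
                        || to == S_WAITING      /* waits for a resource  */
                        || to == S_TERMINATED;  /* finishes or is killed */
    case S_WAITING: return to == S_READY;       /* resource available    */
    default:        return 0;                   /* Terminated is final   */
    }
}
```

Note that a waiting process cannot go straight to running; it must pass through the ready queue.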
Process Scheduling
The process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy.
Process scheduling is an essential part of Multiprogramming operating systems. Such operating
systems allow more than one process to be loaded into the executable memory at a time and
the loaded process shares the CPU using time multiplexing.
Process Scheduling Queues:
The OS maintains all PCBs in Process Scheduling Queues. The OS maintains a separate queue for
each of the process states and PCBs of all processes in the same execution state are placed in
the same queue. When the state of a process is changed, its PCB is unlinked from its current
queue and moved to its new state queue.
The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.
Job queue: This queue keeps all the processes in the system.
Ready queue: This queue keeps a set of all processes residing in main memory, ready and
waiting to execute. A new process is always put in this queue.
Device queues: The processes which are blocked due to unavailability of an I/O device
constitute this queue.
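A scheduling queue of PCBs is typically kept as a linked list. Below is a minimal FIFO ready-queue sketch in C; pids stand in for full PCB contents, and all names are illustrative:

```c
/* Minimal FIFO ready queue: PCBs (represented by pids) linked in order. */
#include <stdlib.h>

struct qnode { int pid; struct qnode *next; };
struct queue { struct qnode *head, *tail; };

void enqueue(struct queue *q, int pid)
{
    struct qnode *n = malloc(sizeof *n);
    n->pid = pid;
    n->next = NULL;
    if (q->tail) q->tail->next = n; else q->head = n;
    q->tail = n;
}

/* Unlink the PCB at the head (the process chosen next); -1 if empty. */
int dequeue(struct queue *q)
{
    if (!q->head) return -1;
    struct qnode *n = q->head;
    int pid = n->pid;
    q->head = n->next;
    if (!q->head) q->tail = NULL;
    free(n);
    return pid;
}

/* Processes leave the queue in the order they arrived. */
int fifo_demo(void)
{
    struct queue q = {0};
    enqueue(&q, 1); enqueue(&q, 2); enqueue(&q, 3);
    return dequeue(&q) == 1 && dequeue(&q) == 2 && dequeue(&q) == 3
           && dequeue(&q) == -1;
}
```

Moving a process between states then amounts to unlinking its node from one queue and enqueuing it on another.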
Scheduler
Schedulers are special system software which handles process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to
run. Schedulers are of three types :
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Long-Term Scheduler: It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the queue and loads them into memory for execution, where they become eligible for CPU scheduling. The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound jobs. It also controls the degree of multiprogramming. If the degree of multiprogramming is stable, then the average rate of process creation must be equal to the average departure rate of processes leaving the system. On some systems, the long-term scheduler may be absent or minimal; time-sharing operating systems have no long-term scheduler. The long-term scheduler is used when a process changes state from new to ready.
Short-Term Scheduler: It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with the chosen set of criteria. It carries out the change of a process from the ready state to the running state. The CPU scheduler selects a process from among the processes that are ready to execute and allocates the CPU to it. Short-term schedulers, also known as dispatchers, make the decision of which process to execute next. Short-term schedulers are faster than long-term schedulers.
Medium-Term Scheduler: Medium-term scheduling is a part of swapping. It removes processes from memory and thereby reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out processes. A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.
Context Switch:
When the CPU switches to another process, the system must save the state of the old process and load the saved state of the new process via a context switch.
(OR)
A context switch is the mechanism to store and restore the state or context of a CPU in Process
Control block so that a process execution can be resumed from the same point at a later time.
Using this technique, a context switcher enables multiple processes to share a single CPU.
Context switching is an essential feature of a multitasking operating system.
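A context switch can be sketched as two struct copies: the (simulated) CPU registers of the old process are saved into its PCB, and those of the new process are loaded. Real kernels do this in assembly; every name and value below is invented for illustration:

```c
/* Toy context switch between two simulated tasks. */
struct cpu_ctx { unsigned long pc; unsigned long regs[4]; };
struct task    { int pid; struct cpu_ctx saved; };

void context_switch(struct cpu_ctx *cpu, struct task *old, struct task *new_)
{
    old->saved = *cpu;   /* save the state of the old process in its PCB */
    *cpu = new_->saved;  /* load the saved state of the new process      */
}

/* A runs at pc=100, we switch to B (saved pc=200), B advances to 205,
 * then we switch back and A resumes exactly where it left off. */
int switch_demo(void)
{
    struct cpu_ctx cpu = { .pc = 100 };
    struct task a = { .pid = 1 };
    struct task b = { .pid = 2, .saved = { .pc = 200 } };

    context_switch(&cpu, &a, &b);           /* A -> B */
    if (cpu.pc != 200 || a.saved.pc != 100) return 0;

    cpu.pc = 205;                           /* B runs for a while */
    context_switch(&cpu, &b, &a);           /* B -> A */
    return cpu.pc == 100 && b.saved.pc == 205;
}
```

The time spent doing these saves and loads is pure overhead, which is why context-switch cost matters to scheduler design.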
Operations on Processes
The processes in most systems can execute concurrently, and they may be created and deleted
dynamically.
Thus, these systems must provide a mechanism for process creation and termination.
Process Creation: A parent process creates child processes, which, in turn, create other processes, forming a tree of processes.
Most operating systems (including UNIX, Linux, and Windows) identify processes according to a
unique process identifier (or pid), which is typically an integer number. The pid provides a
unique value for each process in the system, and it can be used as an index to access various
attributes of a process within the kernel.
The figure illustrates a typical process tree for the Linux operating system, showing the name of each process and its pid. (We use the term process rather loosely, as Linux prefers the term task instead.) The init process (which always has a pid of 1) serves as the root parent process for all user processes.
Once the system has booted, the init process can also create various user processes, such as a web or print server, an ssh server, and the like.
Diagrammatic Representation for Process creation using the fork () system call.
Here :
The fork() system call creates a new process.
The exec() system call is used after a fork() to replace the process's memory space with a new program.
The parent waits for the child process to complete with the wait() system call.
When the child process completes (by either implicitly or explicitly invoking exit()), the parent process resumes from the call to wait(), where it completes using the exit() system call.
Like all of the exec functions, execlp() replaces the calling process image with a new process image. The execlp() function is most commonly used to overlay a process image that has been created by a call to fork(); its path argument identifies the location of the new process image within the hierarchical file system.
On Unix-like operating systems, exec is also a built-in command of the Bash shell. It allows you to execute a command that completely replaces the current process.
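The fork()/wait() pattern described above can be written as a compilable sketch. The child terminates with status 42 (an arbitrary example value), and the parent retrieves that status from wait():

```c
/* fork() creates a child; the child exits with a status; the parent
 * blocks in waitpid() and then extracts the child's exit status. */
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int fork_wait_demo(void)
{
    pid_t pid = fork();          /* create a new process */
    if (pid < 0)
        return -1;               /* fork failed */
    if (pid == 0)
        _exit(42);               /* child: terminate immediately */

    /* parent: block until the child terminates */
    int status = 0;
    if (waitpid(pid, &status, 0) != pid)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

In a real shell, the child would call an exec function (e.g. execlp()) before exiting, so that it runs a different program.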
Process Termination :
A process terminates when it finishes executing its final statement and asks the operating system to delete it by using the exit() system call.
At that point, the process may return a status value (typically an integer) to its parent process (via the wait() system call). All the resources of the process, including physical and virtual memory, open files, and I/O buffers, are deallocated by the operating system.
A parent may terminate the execution of one of its children for a variety of reasons, such as
these:
The child has exceeded its usage of some of the resources that it has been allocated. (To determine whether this has occurred, the parent must have a mechanism to inspect the state of its children.)
The task assigned to the child is no longer required.
The parent is exiting, and the operating system does not allow a child to continue if its parent terminates.
To illustrate process execution and termination, consider that, in Linux and UNIX systems, we can terminate a process by using the exit() system call, providing an exit status as a parameter:
/* exit with status 1 */
exit(1);
In fact, under normal termination, exit() may be called either directly (as shown above) or
indirectly (by a return statement in main()).
Interprocess Communication
Interprocess communication (IPC) is a set of programming interfaces that allows a programmer
to coordinate activities among different program processes that can run concurrently in an
operating system. This allows a program to handle many user requests at the same time. Since
even a single user request may result in multiple processes running in the operating system on
the user's behalf, the processes need to communicate with each other. The IPC interfaces make
this possible. Each IPC method has its own advantages and limitations so it is not unusual for a
single program to use all of the IPC methods.
Processes executing concurrently in the operating system may be either independent processes or cooperating processes.
A process is independent if it cannot affect or be affected by the other processes executing in the system. Any process that does not share data with any other process is independent.
A process is cooperating (or dependent) if it can affect or be affected by the other processes executing in the system. Clearly, any process that shares data with other processes is a cooperating process.
There are several reasons for providing an environment that allows process cooperation:
Information sharing. Since several users may be interested in the same piece of information (for instance, a shared file), we must provide an environment to allow concurrent access to such information.
Computation speedup. If we want a particular task to run faster, we must break it into subtasks, each of which will be executing in parallel with the others. Notice that such a speedup can be achieved only if the computer has multiple processing cores.
Modularity. We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.
Convenience. Even an individual user may work on many tasks at the same time. For instance, a user may be editing, listening to music, and compiling in parallel.
Cooperating processes require an interprocess communication (IPC) mechanism that will allow them to exchange data and information.
There are two fundamental models of interprocess communication: shared memory and message passing.
(b) Synchronization
Communication between processes takes place through calls to send() and receive() primitives.
There are different design options for implementing each primitive.
Message passing may be either blocking or nonblocking, also known as synchronous and asynchronous. (Throughout this text, you will encounter the concepts of synchronous and asynchronous behavior in relation to various operating-system algorithms.)
• Blocking send: The sending process is blocked until the message is received by the receiving process or by the mailbox.
• Nonblocking send: The sending process sends the message and resumes operation.
• Blocking receive: The receiver blocks until a message is available.
• Nonblocking receive: The receiver retrieves either a valid message or a null.
message next_produced;
while (true) {
    /* produce an item in next_produced */
    send(next_produced);
}
Fig: The producer process using message passing.
(c) Buffering
Whether communication is direct or indirect, messages exchanged by communicating processes reside in a temporary queue.
Basically, such queues can be implemented in three ways:
Zero capacity: The queue has a maximum length of zero; thus, the link cannot have any messages waiting in it. In this case, the sender must block until the recipient receives the message.
Bounded capacity: The queue has finite length n; thus, at most n messages can reside in it. If the queue is not full when a new message is sent, the message is placed in the queue (either the message is copied or a pointer to the message is kept), and the sender can continue execution without waiting. The link's capacity is finite, however; if the link is full, the sender must block until space is available in the queue.
Unbounded capacity: The queue's length is potentially infinite; thus, any number of messages can wait in it. The sender never blocks.
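A bounded-capacity queue as described above can be sketched as follows. This single-threaded illustration returns -1 where a real implementation would block the sender or receiver, and the capacity of 2 is an arbitrary example:

```c
/* Bounded-capacity message queue: at most CAP messages may reside in it.
 * bsend() refuses (stands in for "sender blocks") when full; breceive()
 * refuses when empty. Messages here are plain ints. */
#define CAP 2

struct bqueue { int msgs[CAP]; int head, count; };

int bsend(struct bqueue *q, int msg)
{
    if (q->count == CAP) return -1;            /* full: sender must wait */
    q->msgs[(q->head + q->count) % CAP] = msg;
    q->count++;
    return 0;
}

int breceive(struct bqueue *q, int *msg)
{
    if (q->count == 0) return -1;              /* empty: receiver waits */
    *msg = q->msgs[q->head];
    q->head = (q->head + 1) % CAP;
    q->count--;
    return 0;
}

/* Two sends fill the queue; a third is refused until a receive frees a slot. */
int bqueue_demo(void)
{
    struct bqueue q = {0};
    int m;
    if (bsend(&q, 10) || bsend(&q, 20)) return 0;
    if (bsend(&q, 30) != -1) return 0;
    if (breceive(&q, &m) || m != 10) return 0;
    return bsend(&q, 30) == 0;
}
```

A zero-capacity link is the degenerate case CAP = 0 (every send must rendezvous with a receive), and an unbounded queue simply never refuses a send.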
Examples of IPC
The following are the examples of IPC :
o POSIX Shared Memory
o Mach
o Windows
o Sockets
o Remote Procedure Calls
o Pipes
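As an example of the last item in the list, a pipe gives related processes a one-way kernel-mediated channel. In this sketch the child writes a short message into the pipe and the parent reads it from the other end:

```c
/* Pipe IPC: the kernel provides a unidirectional channel; fds[0] is the
 * read end and fds[1] the write end. Returns 1 if the parent receives
 * exactly the message the child sent. */
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int pipe_demo(void)
{
    char buf[16] = {0};
    int fds[2];
    if (pipe(fds) != 0)
        return 0;

    pid_t pid = fork();
    if (pid < 0) return 0;
    if (pid == 0) {                  /* child: write, then exit */
        close(fds[0]);               /* child only writes        */
        write(fds[1], "hello", 5);
        close(fds[1]);
        _exit(0);
    }
    close(fds[1]);                   /* parent only reads        */
    ssize_t n = read(fds[0], buf, sizeof buf - 1);
    close(fds[0]);
    waitpid(pid, NULL, 0);
    return n == 5 && strcmp(buf, "hello") == 0;
}
```

This is the same mechanism the shell uses to connect commands in a pipeline such as `ls | wc`.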