
OPERATING SYSTEMS

UNIT-I
Introduction: What Operating Systems Do, Operating-System Structure, Operating-System Operations, Process
Management, Memory Management, Storage Management, Protection and Security, Kernel Data Structures.
System Structures: Operating-System Services, User and Operating-System Interface, System Calls, Types of
System Calls, Operating-System Structure.
Process Concept: Process Concept, Process Scheduling, Operations on Processes, Inter process Communication.
Computer System Organization
The organization of a computer system is discussed under the following headings:
(a) Computer Startup :
 For a computer to start running—for instance, when it is powered up or rebooted—it needs to
have an initial program to run.
 This initial program, or bootstrap program, tends to be simple.
 Typically, it is stored within the computer hardware in read-only memory (ROM) or electrically
erasable programmable read-only memory (EEPROM), known by the general term firmware.
 It initializes all aspects of the system, from CPU registers to device controllers to memory
contents.
 The bootstrap program must know how to load the operating system and how to start executing
that system.
 To accomplish this goal, the bootstrap program must locate the operating-system kernel and
load it into memory.

(b) Computer System Structure :

Fig: Abstract view of the components of a computer system.


 A computer system can be divided into four components:
Hardware: Provides basic computing resources: CPU, memory, I/O devices.
Operating system: Controls and coordinates use of hardware among various
applications and users.
Application programs: Define the ways in which the system resources are used to solve
the computing problems of the users: word processors, compilers, web browsers,
database systems, video games.
Users: People, machines, other computers.

(c) Computer System Operation:

Fig : A modern computer system.


 A modern general-purpose computer system consists of one or more CPUs and a
number of device controllers connected through a common bus that provides access to
shared memory, as shown in the figure above.
 Each device controller is in charge of a specific type of device (for example, disk drives,
audio devices or video displays).
 The CPU and the device controllers can execute in parallel, competing for memory
cycles.
 To ensure orderly access to the shared memory a memory controller synchronizes
access to the memory.
 I/O devices and the CPU can execute concurrently.
 Each device controller has a local buffer.
 CPU moves data from/to main memory to/from local buffers.
 I/O is from the device to local buffer of controller.
 Device controller informs CPU that it has finished its operation by causing an interrupt.

Interrupt Timeline:

Fig: Interrupt timeline for a single process doing output.


 When the CPU is interrupted, it stops what it is doing and immediately transfers
execution to a fixed location. The fixed location usually contains the starting address
where the service routine for the interrupt is located. The interrupt service routine
executes; on completion, the CPU resumes the interrupted computation. A timeline of
this operation is shown in Figure.
 A trap or exception is a software-generated interrupt caused either by an error or a user
request.
 An operating system is interrupt driven.
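User programs cannot install hardware interrupt handlers directly, but POSIX signals give a user-space analogue of the same register-handler/resume pattern. A minimal sketch, assuming a POSIX system; the handler here plays the role of the interrupt service routine:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

volatile sig_atomic_t interrupted = 0;

/* Plays the role of the interrupt service routine: runs when the
 * "interrupt" (here, SIGINT from Ctrl-C) arrives. */
static void handler(int signum)
{
    (void)signum;
    interrupted = 1;
}

int main(void)
{
    signal(SIGINT, handler);  /* register the "service routine" */
    while (!interrupted)
        pause();              /* suspend until a signal arrives */
    /* execution resumes here, just as the CPU resumes after an ISR */
    printf("interrupt handled, resuming computation\n");
    return 0;
}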
(d) I/O Structure:

Fig : I/O Structure

 To start an I/O operation, the device driver loads the appropriate registers within the device
controller.
 The device controller, in turn, examines the contents of these registers to determine what
action to take (such as “read a character from the keyboard”).
 The controller starts the transfer of data from the device to its local buffer.
 Once the transfer of data is complete, the device controller informs the device driver via an
interrupt that it has finished its operation.
 The device driver then returns control to the operating system, possibly returning the data or a
pointer to the data if the operation was a read.
 For other operations, the device driver returns status information.
 There are two types of I/O methods:
 Synchronous I/O and asynchronous I/O.
Fig: Two I/O methods: (a) synchronous and (b) asynchronous.
 Synchronous I/O means that some flow of execution (such as a process or thread) is
waiting for the operation to complete.
 Asynchronous I/O means that nothing is waiting for the operation to complete and the
completion of the operation itself causes something to happen.
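The contrast can be sketched in C with POSIX calls: read() blocks the caller (synchronous), while aio_read() queues the request and returns immediately (asynchronous). This is only an illustrative sketch, assuming a POSIX system with the AIO library (link with -lrt on Linux); the file name data.txt is hypothetical:

#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char buf[64];
    int fd = open("data.txt", O_RDONLY);   /* hypothetical input file */
    if (fd < 0) { perror("open"); return 1; }

    /* Synchronous I/O: the caller blocks until the data arrives. */
    read(fd, buf, sizeof buf);

    /* Asynchronous I/O: queue the request and continue executing. */
    struct aiocb cb;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof buf;
    cb.aio_offset = 0;
    aio_read(&cb);                          /* returns immediately */

    while (aio_error(&cb) == EINPROGRESS)
        ;                                   /* other work could happen here */
    printf("async read returned %zd bytes\n", aio_return(&cb));

    close(fd);
    return 0;
}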
(e) Storage Structure

Fig: Storage-Device Hierarchy


 In general, storage systems are organized in a hierarchy according to speed, cost, and volatility.
 The wide variety of storage systems can be organized in a hierarchy according to speed
and cost. The higher levels are expensive, but they are fast. As we move down the
hierarchy, the cost per bit generally decreases, whereas the access time generally
increases.
 Various levels of memory are:
 Registers :

Fig: Registers in CPU

 Registers are a type of computer memory used to quickly accept, store, and transfer
data and instructions that are being used immediately by the CPU. The registers used by
the CPU are often termed processor registers.

 A processor register may hold an instruction, a storage address, or any data (such as bit
sequence or individual characters).

 The computer needs processor registers for manipulating data and a register for holding
a memory address. The register holding the memory location is used to calculate the
address of the next instruction after the execution of the current instruction is
completed.

 Following is the list of some of the most common registers used in a basic computer:
Register Symbol Number of bits Function

Data register DR 16 Holds memory operand

Address register AR 12 Holds address for the memory

Accumulator AC 16 Processor register

Instruction register IR 16 Holds instruction code

Program counter PC 12 Holds address of the instruction

Temporary register TR 16 Holds temporary data

Input register INPR 8 Carries input character

Output register OUTR 8 Carries output character

 Cache memory:
 It is a small-sized type of volatile computer memory that provides high-speed data
access to a processor and stores frequently used computer programs, applications and
data.

Fig: Cache Memory

 Main memory :
Fig : Main Memory

 Only large storage media that the CPU can access directly.
 This is Random access memory.
 This is also called volatile memory.
 Secondary storage :
 Extension of main memory and that provides large nonvolatile storage capacity.
 Solid-state disks:
 It is Nonvolatile.
 Faster than hard disks.
 It is used in various technologies and is becoming more popular.
 Hard disks :
 Rigid metal or glass platters covered with magnetic recording material.
 Disk surface is logically divided into tracks, which are subdivided into sectors.
 The disk controller determines the logical interaction between the device and the
computer.
 Optical disks:
 A storage medium from which data is read and to which it is written by lasers.
 Can store much more data -- up to 6 gigabytes (6 billion bytes) -- than most portable magnetic
media, such as floppies.
 There are three basic types of optical disks:
CD-ROM :Like audio CDs, CD-ROMs come with data already encoded onto them. The data is
permanent and can be read any number of times, but CD-ROMs cannot be modified.
WORM : Stands for write-once, read-many. With a WORM disk drive, you can write data onto
a WORM disk, but only once. After that, the WORM disk behaves just like a CD-ROM.
Erasable: Optical disks that can be erased and loaded with new data, just like magnetic disks.
These are often referred to as EO (erasable optical) disks.
 These three technologies are not compatible with one another; each requires a different type of
disk drive and disk. Even within one category, there are many competing formats, although CD-
ROMs are relatively standardized.

 A magnetic disk :
 This is a storage device that uses a magnetization process to write, rewrite and access data.
 It is covered with a magnetic coating and stores data in the form of tracks, spots and sectors.
 Hard disks, zip disks and floppy disks are common examples of magnetic disks.

Fig:Magnetic Disk
 Magnetic Tape:
 A magnetic tape, in computer terminology, is a storage medium that allows for data archiving,
collection, and backup.

 Direct memory access (DMA):


Fig:DMA

 This is a method that allows an input/output (I/O) device to send or receive data directly
to or from the main memory, bypassing the CPU to speed up memory operations.
 The process is managed by a chip known as a DMA controller (DMAC).

Computer-System Architecture
 A computer system can be organized in a number of different ways, which we can categorize
roughly according to the number of general-purpose processors used. They are:
(a) Single Processor Systems :

 Most systems use a single processor.
 A single-processor system performs only one process at a given time, taking up the next process
in the queue only after the current process is completed.
 Such a system may still contain special-purpose processors (for example, disk controllers); the OS
monitors their status and sends them the next executable instruction.
 This relieves the CPU of disk scheduling and other such tasks.
 A single-processor system is suitable for general-purpose computing, although it cannot run
multiple processes in parallel.

(b) Multi Processor Systems (or) Tightly Coupled Systems:


 Also known as parallel or tightly coupled systems.
 Most computer systems are single processor systems i.e they only have one processor.
However, multiprocessor or parallel systems are increasing in importance nowadays.
 These systems have multiple processors working in parallel that share the computer clock,
memory, bus, peripheral devices etc.
The advantages of multiprocessor systems are:
Increased throughput: As there are a number of processors, more work can be done in less time.
These multiple processors run parallel to each other, increasing the performance of the system.
Reliability and failure-free operation: Failure of any one processor will not affect the functionality
of the system, as there are a number of processors. We can expect failure-free service from a
multiprocessor system.
Economy of scale: Multiprocessor systems cost less than a number of individual single-processor
systems. In a multiprocessor system, expenditure on the system cabinet, memory, power
supply, and accessories is saved, as these systems share resources like the power supply, memory,
and also space.

 There are mainly two types of multiprocessors i.e. symmetric and asymmetric multiprocessors.
Details about them are as follows:

 Symmetric Multiprocessors:
 In these types of systems, each processor contains a similar copy of the operating system and
they all communicate with each other. All the processors are in a peer-to-peer relationship, i.e.
no master-slave relationship exists between them.
 An example of the symmetric multiprocessing system is the Encore version of Unix for the
Multimax Computer.
 Asymmetric Multiprocessors:

 In asymmetric systems, each processor is given a predefined task. There is a master processor
that gives instruction to all the other processors.
 Asymmetric multiprocessor system contains a master slave relationship.
 Asymmetric multiprocessor was the only type of multiprocessor available before symmetric
multiprocessors were created. Now also, this is the cheaper option.
 Difference Between Asymmetric and Symmetric Multiprocessing:

In asymmetric multiprocessing, the processors are not treated equally; in symmetric
multiprocessing, all the processors are treated equally.

In asymmetric multiprocessing, tasks of the operating system are done by the master
processor; in symmetric multiprocessing, they are done by the individual processors.

In asymmetric multiprocessing, there is no communication between processors, as they are
controlled by the master processor; in symmetric multiprocessing, all processors communicate
with one another through shared memory.

In asymmetric multiprocessing, processes follow a master-slave arrangement; in symmetric
multiprocessing, each processor takes a process from the ready queue.

Asymmetric multiprocessing systems are cheaper; symmetric multiprocessing systems are
costlier.

Asymmetric multiprocessing systems are easier to design; symmetric multiprocessing systems
are more complex to design.

(c) A dual-core design :
Fig : A dual-core design with two cores placed on the same chip.

 A dual-core processor is a CPU with two processors or "execution cores" in the same integrated


circuit. Each processor has its own cache and controller, which enables it to function as
efficiently as a single processor. However, because the two processors are linked together, they
can perform operations up to twice as fast as a single processor can.
 We show a dual-core design with two cores on the same chip. In this design, each core has its
own register set as well as its own local cache.
 Other designs might use a shared cache or a combination of local and shared caches.
 Aside from architectural considerations, such as cache, memory, and bus contention, these
multicore CPUs appear to the operating system as N standard processors.
 This characteristic puts pressure on operating-system designers—and application programmers
—to make use of those processing cores.

 Note:A core, or CPU core, is the "brain" of a CPU. It receives instructions, and performs


calculations, or operations, to satisfy those instructions. A CPU can have multiple cores.
 Note: A CPU with a single core is called a uniprocessor. When a system has more than one core,
it is called a multicore. A CPU with two cores is called a dual-core processor while
a processor with four cores is called a quad-core processor. Moreover, high-performance
computers can have six to eight cores.
 Note:The main difference between multicore and multiprocessor is that the multicore refers to
a single CPU with multiple execution units while the multiprocessor refers to a system that has
two or more CPUs.
(d) Clustered Systems :

Fig : General structure of a clustered system.


 Clustered computers share storage and are closely linked via a local-area network (LAN) or a
faster interconnect, such as InfiniBand.
 Clustering can be structured asymmetrically or symmetrically.
 In symmetric clustering :

 In a symmetric clustering system, two or more nodes all run applications as well as monitor each
other. This is more efficient than an asymmetric system, as it uses all the hardware and doesn't
keep a node merely as a hot standby.
 A diagram that demonstrates a symmetric clustering system is:

In asymmetric clustering :
 In an asymmetric clustering system, one node is in hot-standby mode while the others run the
applications. The hot-standby node does nothing but monitor the active servers; if a server fails,
the hot-standby node takes over.
 A diagram that demonstrates an asymmetric clustering system is:

How Asymmetric Clustering Works:The following steps demonstrate the working of the
asymmetric clustering system:
 There is a master node in asymmetric clustering that directs all the slave nodes to perform
the tasks required. The requests are delegated by the master node.
 A distributed cache is used in asymmetric clustering to improve the performance of the system.
 Resources such as memory, peripheral devices etc. are divided between the nodes of the
asymmetric clustering system at boot time.

 The advantages of Clustered Systems are :


 Clustering is usually used to provide high-availability service—that is, service will continue even
if one or more systems in the cluster fail.
 Clusters can also be used to provide high-performance computing environments. Such systems
can supply significantly greater computational power than single-processor or even SMP
systems because they can run an application concurrently on all computers in the cluster.

What Operating Systems Do or Role of


the Operating Systems

 Definition: An Operating System (OS) is an interface between computer user and computer
hardware.
 An operating system is software which performs all the basic tasks like file management,
memory management, process management, handling input and output, and controlling
peripheral devices such as disk drives and printers.
 An operating system is a program that acts as an interface between the user and the computer
hardware and controls the execution of all kinds of programs.
 Some popular Operating Systems include Linux, Windows, OS X, VMS, OS/400, AIX, z/OS, etc.


Fig: Operating System


The goals of an Operating system are:
 Execute user programs and make solving user problems easier.
 Make the computer system convenient to use.
 Use the computer hardware in an efficient manner.

The important Operations or functions of


an operating System

 Memory Management
 Processor Management
 Device Management
 Storage Management
 File Management
 Security
 Control over system performance
 Job accounting
 Error detecting aids
 Coordination between other software and users

(a) Memory Management :


 To execute a program all (or part) of the instructions must be in memory.
 All (or part) of the data that is needed by the program must be in memory.
 Memory management determines what is in memory and when, optimizing CPU utilization and
the computer's response to users.
 The activities of the memory management are :
 Keeping track of which parts of memory are currently being used and by whom.
 Deciding which processes (or parts thereof) and data to move into and out of memory.
 Allocating and deallocating memory space as needed.

(b)Processor Management
 In multiprogramming environment, the OS decides which process gets the processor when and
for how much time. This function is called process scheduling.

The activities of the processor management are:


 Keeps track of the processor and the status of processes. The program responsible for this task is
known as the traffic controller.
 Allocates the processor (CPU) to a process.
 De-allocates the processor when a process is no longer required.
 Schedules processes and threads on the CPUs.
 Creates and deletes both user and system processes.
 Suspends and resumes processes.
 Provides mechanisms for process synchronization.
 Provides mechanisms for process communication.

(c) Device Management:


 An Operating System manages device communication via their respective drivers.
 The activities of the device management are :
 Keeps tracks of all devices. Program responsible for this task is known as the I/O controller.
 Decides which process gets the device when and for how much time.
 Allocates the device in the efficient way.
 De-allocates devices.

(d) Storage Management:


 To make the computer system convenient for users, the operating system provides a uniform,
logical view of information storage.
 The operating system abstracts from the physical properties of its storage devices to define a
logical storage unit, the file.
 The operating system maps files onto physical media and accesses these files via the storage
devices.

(e) File Management:


 A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories.

The activities of the file management are:


 Keeps track of information, location, uses, status etc. The collective facilities are often known
as the file system.
 Decides who gets the resources.
 Allocates the resources.
 De-allocates the resources.

(f) Security − By means of password and similar other techniques, it prevents unauthorized access
to programs and data.
(g) Control over system performance − Recording delays between request for a service and
response from the system.
(h) Job accounting − Keeping track of time and resources used by various jobs and users.
(i) Error detecting aids − Production of dumps, traces, error messages, and other debugging and
error detecting aids.
(j) Coordination between other software and users − Coordination and assignment of compilers,
interpreters, assemblers and other software to the various users of the computer systems.

Kernel Data Structures


We briefly describe several fundamental data structures used extensively in operating systems.
 The different types of kernel data structures are:
 Array: An array is a simple data structure in which each element can be accessed directly.
 In a singly linked list, each item points to its successor.
 In a doubly linked list, a given item can refer either to its predecessor or to its successor.
 In a circularly linked list, the last element in the list refers to the first element, rather than to
null.
 A stack is a sequentially ordered data structure that uses the last in, first out (LIFO) principle for
adding and removing items, meaning that the last item placed onto a stack is the first item
removed. The operations for inserting and removing items from a stack are known as push and
pop, respectively.
 A queue, in contrast, is a sequentially ordered data structure that uses the first in, first out (FIFO)
principle: items are removed from a queue in the order in which they were inserted.
 A tree is a data structure that can be used to represent data hierarchically. Data values in a tree
structure are linked through parent–child relationships. In a general tree, a parent may have an
unlimited number of children.
 In a binary tree, a parent may have at most two children, which we term the left child and the
right child.
 A binary search tree additionally requires an ordering between the parent's two children, in
which left child <= right child.
 A hash function takes data as its input, performs a numeric operation on this data, and returns a
numeric value. This numeric value can then be used as an index into a table (typically an array)
to quickly retrieve the data. Whereas searching for a data item through a list of size n can
require up to O(n) comparisons in the worst case, using a hash function for retrieving data from
a table can be as good as O(1), depending on implementation details.
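As an illustration of the last point, here is a minimal C sketch of a string hash function (a djb2-style hash, one common choice rather than anything the text prescribes) whose result indexes a fixed-size table:

#include <stdio.h>

#define TABLE_SIZE 101   /* a prime table size spreads indices well */

/* Performs a numeric operation on the input data and returns a value
 * usable as an index into a table of TABLE_SIZE slots. */
static unsigned hash(const char *key)
{
    unsigned h = 5381;
    while (*key)
        h = h * 33 + (unsigned char)*key++;
    return h % TABLE_SIZE;
}

int main(void)
{
    printf("slot for \"init\": %u\n", hash("init"));
    printf("slot for \"sshd\": %u\n", hash("sshd"));
    return 0;
}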

Operating System Types


 Operating systems have existed since the very first computer generation and they keep evolving
with time.
 In this, we will discuss some of the important types of operating systems which are most
commonly used.
 Batch operating systems
 The users of a batch operating system do not interact with the computer directly.
 Each user prepares his job on an off-line device like punch cards and submits it to the computer
operator.
 To speed up processing, jobs with similar needs are batched together and run as a group.
 The programmers leave their programs with the operator and the operator then sorts the
programs with similar requirements into batches.

The problems with Batch Systems are as follows :


 Lack of interaction between the user and the job.
 CPU is often idle, because the speed of the mechanical I/O devices is slower than the CPU.
 Difficult to provide the desired priority.

Multi Programming Batch Systems:


 In this type of system, the operating system picks and begins to execute one job from memory.
 Once this job needs an I/O operation, the operating system switches to another job (keeping the
CPU and OS always busy).
 The number of jobs in memory is always less than the number of jobs on disk (the job pool).
 If several jobs are ready to run at the same time, then system chooses which one to run (CPU
Scheduling).
 In a non-multiprogrammed system, there are moments when the CPU sits idle and does not do
any work.
 In a multiprogramming system, the CPU will never be idle and keeps on processing.

Time-sharing operating systems:


 Time-sharing is a technique which enables many people, located at various terminals, to use a
particular computer system at the same time.
 Time-sharing or multitasking is a logical extension of multiprogramming. Processor time
that is shared among multiple users simultaneously is termed time-sharing.
 Multiple jobs are executed by the CPU by switching between them, but the switches occur so
frequently that the user can receive an immediate response.
 The main difference between Multiprogrammed Batch Systems and Time-Sharing Systems is
that in case of multiprogrammed batch systems, the objective is to maximize processor use,
whereas in Time-Sharing Systems, the objective is to minimize response time.

Advantages of Timesharing operating systems are as follows :


 Provides the advantage of quick response.
 Avoids duplication of software.
 Reduces CPU idle time.

Disadvantages of Time-sharing operating systems are as follows :


 Problem of reliability.
 Question of security and integrity of user programs and data.
 Problem of data communication.

Multiprocessor Systems or Tightly coupled systems or Parallel systems

 A Multiprocessor system consists of several processors that share a common physical memory.
 Multiprocessor system provides higher computing power and speed.
 In multiprocessor system all processors operate under single operating system.
 The multiplicity of the processors and how they act together are transparent to the user.

Advantages of Multiprocessor Systems are:


 Enhanced performance.
 Execution of several tasks by different processors concurrently increases the system's
throughput without speeding up the execution of a single task.
 If possible, the system divides a task into many subtasks, and these subtasks can then be
executed in parallel on different processors, thereby speeding up the execution of single tasks.

Distributed operating Systems or loosely coupled systems:

Fig :Distributed Systems

 Distributed systems use multiple central processors to serve multiple real-time applications and
multiple users.
 Data processing jobs are distributed among the processors accordingly.
 The processors communicate with one another through various communication lines (such as
high-speed buses or telephone lines).
 Processors in a distributed system may vary in size and function.
 These processors are referred to as sites, nodes, computers, and so on.

The advantages of distributed systems are as follows :


 With resource sharing facility, a user at one site may be able to use the resources available at
another.
 Speeding up of the exchange of data with one another via electronic mail.
 If one site fails in a distributed system, the remaining sites can potentially continue operating.
 Better service to the customers.
 Reduction of the load on the host computer.
 Reduction of delays in data processing.

Network operating System

Fig:Network Operating Systems

 A Network Operating System runs on a server and provides the server the capability to manage
data, users, groups, security, applications, and other networking functions.
 The primary purpose of the network operating system is to allow shared file and printer access
among multiple computers in a network, typically a local area network (LAN), a private network
or to other networks.
 Examples of network operating systems include Microsoft Windows Server 2003, Microsoft
Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD.

The advantages of network operating systems are as follows:


 Centralized servers are highly stable.
 Security is server managed.
 Upgrades to new technologies and hardware can be easily integrated into the system.
 Remote access to servers is possible from different locations and types of systems.

The disadvantages of network operating systems are as follows:


 High cost of buying and running a server.
 Dependency on a central location for most operations.
 Regular maintenance and updates are required.

Real Time operating System:


 A real-time system is defined as a data processing system in which the time interval required to
process and respond to inputs is so small that it controls the environment.
 The time taken by the system to respond to an input and display of required updated
information is termed as the response time.
 The response time in this method is thus much smaller than in online processing.
 Real-time systems are used when there are rigid time requirements on the operation of a
processor or the flow of data and real-time systems can be used as a control device in a
dedicated application.
 A real-time operating system must have well-defined, fixed time constraints, otherwise the
system will fail. For example, scientific experiments, medical imaging systems, industrial control
systems, weapon systems, robots, air traffic control systems, etc.
 There are two types of real-time operating systems.
 Hard real-time systems: Hard real-time systems guarantee that critical tasks complete on
time. In hard real-time systems, secondary storage is limited or missing and the data is stored in
ROM. In these systems, virtual memory is almost never found.
 Soft real-time systems: Soft real-time systems are less restrictive. A critical real-time task gets
priority over other tasks and retains the priority until it completes. Soft real-time systems have
more limited utility than hard real-time systems. Examples include multimedia, virtual reality, and
advanced scientific projects like undersea exploration and planetary rovers.
Note:
 Types of Operating System (based on the number of users):

1. Single User: If the single user Operating System is loaded in computer’s memory; the
computer would be able to handle one user at a time.
Ex: MS-Dos, MS-Win 95-98, Win-ME

2. Multi user: If the multi-user Operating System is loaded in computer’s memory; the
computer would be able to handle more than one user at a time.
Ex: UNIX, Linux, XENIX

3. Network: If the network Operating System is loaded in the computer's memory, the
computer would be able to handle more than one computer at a time.
Ex: Novell NetWare, Win-NT, Win-2000-2003

Operating System Services


 An Operating System provides services to both the users and to the programs.
 It provides programs an environment to execute.
 It provides users the services to execute the programs in a convenient manner.

Following are a few common services provided by an operating system:


(a) Program execution
(b) I/O operations
(c) File System manipulation
(d) Communication
(e) Error Detection
(f) Resource Allocation
(g) Protection

(a) Program execution :


 Operating systems handle many kinds of activities from user programs to system programs like
printer spooler, name servers, file server, etc. Each of these activities is encapsulated as a
process.
 A process includes the complete execution context (code to execute, data to manipulate,
registers, OS resources in use).
The major activities of an operating system with respect to program management:
 Loads a program into memory.
 Executes the program.
 Handles program's execution.
 Provides a mechanism for process synchronization.
 Provides a mechanism for process communication.
 Provides a mechanism for deadlock handling.

(b) I/O Operation:


 An I/O subsystem comprises I/O devices and their corresponding driver software.
 Drivers hide the peculiarities of specific hardware devices from the users.
 An Operating System manages the communication between user and device drivers.
 I/O operation means read or write operation with any file or any specific I/O device.
 Operating system provides the access to the required I/O device when required.

(c) File system manipulation:


 A file represents a collection of related information. Computers can store files on the disk
(secondary storage), for long-term storage purpose. Examples of storage media include
magnetic tape, magnetic disk and optical disk drives like CD, DVD. Each of these media has its
own properties like speed, capacity, data transfer rate and data access methods.
 A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories.
 The major activities of an operating system with respect to file management :
 Program needs to read a file or write a file.
 The operating system gives the permission to the program for operation on file.
 Permission varies from read-only, read-write, denied and so on.
 Operating System provides an interface to the user to create/delete files.
 Operating System provides an interface to the user to create/delete directories.
 Operating System provides an interface to create the backup of file system.

(d) Communication:
 In case of distributed systems which are a collection of processors that do not share memory,
peripheral devices, or a clock, the operating system manages communications between all the
processes. Multiple processes communicate with one another through communication lines in
the network.
 The OS handles routing and connection strategies, and the problems of contention and
security.
 The major activities of an operating system with respect to communication :
 Two processes often require data to be transferred between them.
 Both the processes can be on one computer or on different computers, but are connected
through a computer network.
 Communication may be implemented by two methods, either by Shared Memory or by
Message Passing.

(e) Error handling


 Errors can occur anytime and anywhere. An error may occur in CPU, in I/O devices or in the
memory hardware.
 The major activities of an operating system with respect to error handling :
 The OS constantly checks for possible errors.
 The OS takes an appropriate action to ensure correct and consistent computing.

(f) Resource Management


 In case of multi-user or multi-tasking environment, resources such as main memory, CPU cycles
and files storage are to be allocated to each user or job.
 The major activities of an operating system with respect to resource management :
 The OS manages all kinds of resources using schedulers.
 CPU scheduling algorithms are used for better utilization of CPU.

(g) Protection:
 Considering a computer system having multiple users and concurrent execution of multiple
processes, the various processes must be protected from each other's activities.
 Protection refers to a mechanism or a way to control the access of programs, processes, or
users to the resources defined by a computer system.
 The major activities of an operating system with respect to protection –
 The OS ensures that all access to system resources is controlled.
 The OS ensures that external I/O devices are protected from invalid access attempts.
 The OS provides authentication features for each user by means of passwords.

User and Operating-System Interface


Here, we discuss two fundamental approaches.
 One provides a command-line interface, or command interpreter, that allows users to directly
enter commands to be performed by the operating system.
 The other allows users to interface with the operating system via a graphical user interface, or
GUI.

 The main difference between GUI and CLI is that the Graphical User Interface (GUI) allows the
user to interact with the system using graphical elements such as windows, icons, and menus,
while the Command Line Interface (CLI) allows the user to interact with the system using commands.

Fig: Differences between GUI & CLI

Operating System Calls


 To understand system calls, first one needs to understand the difference between kernel
mode and user mode of a CPU.
 Every modern operating system supports these two modes.

Fig:Abstraction View of OS
 Different modes supported by the operating system are:

Kernel Mode:
 When CPU is in kernel mode, the code being executed can access any memory address and any
hardware resource.
 Hence kernel mode is a very privileged and powerful mode.
 If a program crashes in kernel mode, the entire system will be halted.

User Mode:
 When CPU is in user mode, the programs don’t have direct access to memory and hardware
resources.
 In user mode, if any program crashes, only that particular program is halted.
 That means the system will be in a safe state even if a program in user mode crashes.
 Hence, most programs in an OS run in user mode.

System Call:
 System calls provide an interface to the services made available by an operating system.
 These calls are generally available as routines written in C and C++, although certain low-level
tasks (for example, tasks where hardware must be accessed directly) may have to be written
using assembly-language instructions.
 System calls can be grouped roughly into six major categories: process control, file manipulation,
device manipulation, information maintenance, communications, and protection.

 Process control
◦ end, abort
◦ load, execute
◦ create process, terminate process
◦ get process attributes, set process attributes
◦ wait for time
◦ wait event, signal event
◦ allocate and free memory

 File management
◦ create file, delete file
◦ open, close
◦ read, write, reposition
◦ get file attributes, set file attributes

 Device management
◦ request device, release device
◦ read, write, reposition
◦ get device attributes, set device attributes
◦ logically attach or detach devices

 Information maintenance
◦ get time or date, set time or date
◦ get system data, set system data
◦ get process, file, or device attributes
◦ set process, file, or device attributes

 Communications
◦ create, delete communication connection
◦ send, receive messages
◦ transfer status information
◦ attach or detach remote device
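To make the file-management category concrete, the following sketch uses the UNIX system calls open(), write(), and close() to create a file, write to it, and close it. It is only an illustrative example; the file name notes.txt is hypothetical:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* "create file" / "open": open() returns a file descriptor */
    int fd = open("notes.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* "write" */
    if (write(fd, "hello\n", 6) != 6)
        perror("write");

    /* "close" */
    close(fd);
    return 0;
}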

Operating-System Structure
The general-purpose OS is a very large program.
Various ways to represent the operating system structure are:
Simple structure or MS-DOS :

Fig:MS DOS

 In MS-DOS, the interfaces and levels of functionality are not well separated.
 For instance, application programs are able to access the basic I/O routines to write directly to
the display and disk drives.
 Such freedom leaves MS-DOS vulnerable to errant (or malicious) programs, causing entire
system crashes when user programs fail.
 Of course, MS-DOS was also limited by the hardware of its era.
 Because the Intel 8088 for which it was written provides no dual mode and no hardware
protection, the designers of MS-DOS had no choice but to leave the base hardware accessible.

1. More complex -- UNIX


Fig:Unix OS

 UNIX initially was limited by hardware functionality.


 It consists of two separable parts: the kernel and the system programs.
 The kernel is further separated into a series of interfaces and device drivers, which have been
added and expanded over the years as UNIX has evolved.
 The kernel provides the file system, CPU scheduling, memory management, and other operating-
system functions through system calls.
 Taken in sum, that is an enormous amount of functionality to be combined into one level. This
monolithic structure was difficult to implement and maintain.
 It had a distinct performance advantage, however: there is very little overhead in the system call
interface or in communication within the kernel.

2. Layered – an abstraction
Fig:Layered OS

 The operating system is divided into a number of layers (levels), each built on top of lower
layers.
 The bottom layer (layer 0), is the hardware; the highest (layer N) is the user interface.
 With modularity, layers are selected such that each uses functions (operations) and services of
only lower-level layers

3. Microkernel – Mach

Fig:Micro Kernel
 The main function of the microkernel is to provide communication between the client program
and the various services that are also running in user space.
 Communication is provided through message passing.
 An operating system called Mach that modularized the kernel using the microkernel approach.

4. Modules

Fig:Kernel Module

 The best current methodology for operating-system design involves using loadable kernel
modules.
 Here, the kernel has a set of core components and links in additional services via modules,
either at boot time or during run time.
 The Solaris operating system structure, shown in the figure, is organized around a core kernel
with seven types of loadable kernel modules:
1. Scheduling classes
2. File systems
3. Loadable system calls
4. Executable formats
5. STREAMS modules
6. Miscellaneous
7. Device and bus drivers

This type of design is common in modern implementations of UNIX, such as Solaris, Linux, and Mac OS X,
as well as Windows.

5.Hybrid Systems
 Most modern operating systems are actually not one pure model. Hybrid systems
combine multiple approaches to address performance, security, and usability needs.
 We explore the structure of three hybrid systems: the Apple Mac OS X operating system
and the two most prominent mobile operating systems—iOS and Android.

Apple Mac OS X :

Fig: Mac OS X Structure

The Apple Mac OS X operating system uses a hybrid structure.


 The top layers include the Aqua user interface and a set of application environments and
services.
 Notably, the Cocoa environment specifies an API for the Objective-C
programming language, which is used for writing Mac OS X applications.
 Below these layers is the kernel environment, which consists primarily of the
Mach microkernel and the BSD UNIX kernel.
 Mach provides memory management; support for remote procedure calls (RPCs) and
interprocess communication (IPC) facilities, including message passing; and thread
scheduling.
 The BSD component provides a BSD command-line interface, support for networking and
file systems, and an implementation of POSIX APIs, including Pthreads.
 In addition to Mach and BSD, the kernel environment provides an I/O kit for
development of device drivers and dynamically loadable modules (which Mac OS X refers
to as kernel extensions).

iOS:
Figure:Architecture of Apple’s iOS.

 iOS is a mobile operating system designed by Apple to run its smartphone, the iPhone, as
well as its tablet computer, the iPad.
 iOS is structured on the Mac OS X operating system, with added functionality pertinent
to mobile devices, but does not directly run Mac OS X applications.
 Cocoa Touch is an API for Objective-C that provides several frameworks for developing
applications that run on iOS devices.
 The fundamental difference between Cocoa, mentioned earlier, and Cocoa Touch is that
the latter provides support for hardware features unique to mobile devices, such as
touch screens.
 The media services layer provides services for graphics, audio, and video.
 The core services layer provides a variety of features, including support for cloud
computing and databases.
 The bottom layer represents the core operating system, which is based on the kernel
environment shown in the figure.

Android

Figure:Architecture of Google’s Android.

 The Android operating system was designed by the Open Handset Alliance (led primarily
by Google) and was developed for Android smartphones and tablet computers.
 Whereas iOS is designed to run on Apple mobile devices and is closed-source, Android
runs on a variety of mobile platforms and is open-source, partly explaining its rapid rise
in popularity.
 Android is similar to iOS in that it is a layered stack of software that provides a rich set of
frameworks for developing mobile applications.
 At the bottom of this software stack is the Linux kernel, although it has been modified by
Google and is currently outside the normal distribution of Linux releases.
 Linux is used primarily for process, memory, and device-driver support for hardware and
has been expanded to include power management.
 The Android runtime environment includes a core set of libraries as well as the Dalvik
virtual machine.
 Software designers for Android devices develop applications in the Java language.
 However, rather than using the standard Java API, Google has designed a separate
Android API for Java development.
 The Java class files are first compiled to Java bytecode and then translated into an
executable file that runs on the Dalvik virtual machine.
 The Dalvik virtual machine was designed for Android and is optimized for mobile devices
with limited memory and CPU processing capabilities.
 The set of libraries available for Android applications includes frameworks for developing
web browsers (webkit), database support (SQLite), and multimedia.
 The libc library is similar to the standard C library but is much smaller and has been
designed for the slower CPUs that characterize mobile devices.
Operating System Debugging

 Debugging is the process of finding the problems in a computer system and solving them. There
are many different ways in which operating systems perform debugging. Some of these are:
Log Files
 The log files record all the events that occur in an operating system. This is done by writing all
the messages into a log file. There are different types of log files. Some of these are given as
follows:
Event Logs
 These store records of all the events that occur in the execution of a system. This is done so
that the activities of all the events can be understood to diagnose problems.
Transaction Logs
 The transaction logs store the changes to the data so that the system can recover from crashes
and other errors. These logs are readable by a human.
Message Logs
 These logs store both the public and private messages between the users. They are mostly plain
text files, but in some cases they may be HTML files.
Core Dump Files
 The core dump files contain the memory address space of a process that terminates
unexpectedly. The creation of the core dump is triggered in response to program crashes by the
kernel. The core dump files are used by developers to find the program's state at the time of
its termination so that they can find out why the termination occurred. The automatic creation
of core dump files can be disabled by users. This may be done to improve performance,
clear disk space or increase security.
Crash Dump Files
 In the event of a total system failure, the information about the state of the operating system is
captured in crash dump files. There are three types of dump that can be captured when a
system crashes. These are:

Complete Memory Dump


 The whole contents of the physical memory at the time of the system crash are captured in the
complete memory dump. This is the default setting on the Windows Server System.
Kernel Memory Dump
 Only the kernel-mode read and write pages present in main memory at the time of
the system crash are stored in the kernel memory dump.
Small Memory Dump
 This memory dump contains the list of device drivers, stop code, process and thread
information, kernel stack etc.
Trace Listings
 Trace listings record information about a program's execution using logging. This information is
used by programmers for debugging. System administrators and technical personnel can use the
trace listings to find common problems with software using software monitoring tools.
Profiling
 This is a type of program analysis that measures various parameters in a program such as space
and time complexity, frequency and duration of function calls, usage of specific instructions etc.
Profiling is done by monitoring the source code of the required system program using a code
profiler.
Process Concepts
Process:
 A process is basically a program in execution. The execution of a process must progress in a
sequential fashion.
 A process is defined as an entity which represents the basic unit of work to be implemented in
the system.
 To put it in simple terms, we write our computer programs in a text file and when we execute
this program, it becomes a process which performs all the tasks mentioned in the program.
 When a program is loaded into the memory and it becomes a process, it can be divided into four
sections ─ stack, heap, text and data.
 The following image shows a simplified layout of a process inside main memory :

Fig: Process in memory.

Description :
S.N. Component & Description

1 Stack
The process Stack contains the temporary data such as method/function
parameters, return address and local variables.

2 Heap
This is dynamically allocated memory to a process during its run time.

3 Text
This includes the current activity represented by the value of Program
Counter and the contents of the processor's registers.

4 Data
This section contains the global and static variables.

Process Control Block (PCB) :


 A Process Control Block is a data structure maintained by the Operating System for
every process. The PCB is identified by an integer process ID (PID).
 The PCB is maintained for a process throughout its lifetime, and is deleted once the
process terminates.
 The architecture of a PCB is completely dependent on Operating System and may
contain different information in different operating systems. Here is a simplified

diagram of a PCB −
A PCB keeps all the information needed to keep track of a process as listed below in the table –

S.N. Information & Description

1 Process State
The current state of the process i.e., whether it is ready, running, waiting,
or whatever.

2 Process privileges
This is required to allow/disallow access to system resources.

3 Process ID
Unique identification for each of the process in the operating system.

4 Pointer
A pointer to parent process.

5 Program Counter
Program Counter is a pointer to the address of the next instruction to be
executed for this process.

6 CPU registers
Various CPU registers whose contents must be saved when the process leaves the
running state, so that its execution can later resume.

7 CPU Scheduling Information


Process priority and other scheduling information which is required to
schedule the process.

8 Memory management information


This includes the information of page table, memory limits, Segment table
depending on memory used by the operating system.

9 Accounting information
This includes the amount of CPU used for process execution, time limits,
execution ID etc.

10 IO status information
This includes a list of I/O devices allocated to the process.
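The table above translates naturally into a structure definition. The following is a simplified, hypothetical C sketch of a PCB; real kernels (for example, Linux's task_struct) carry far more fields, and every field name here is illustrative only:

/* Hypothetical, simplified PCB; field names are illustrative only. */
struct pcb {
    int           pid;               /* Process ID                      */
    enum { NEW, READY, RUNNING, WAITING, TERMINATED }
                  state;             /* Process state                   */
    struct pcb   *parent;            /* pointer to parent process       */
    unsigned long program_counter;   /* address of next instruction     */
    unsigned long registers[16];     /* saved CPU registers             */
    int           priority;          /* CPU-scheduling information      */
    void         *page_table;        /* memory-management information   */
    unsigned long cpu_time_used;     /* accounting information          */
    int           open_files[16];    /* I/O status information          */
};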
Process Life Cycle Methods
When a process executes, it passes through different states. These stages may differ in different
operating systems, and the names of these states are also not standardized. In general, a process can
have one of the following five states at a time.

Fig: Process state diagram


S.N. State & Description

1 Start
This is the initial state when a process is first started/created.

2 Ready
The process is waiting to be assigned to a processor. Ready processes are
waiting to have the processor allocated to them by the operating system
so that they can run. A process may come into this state after the Start
state, or while running, if it is interrupted by the scheduler to assign the
CPU to some other process.

3 Running
Once the process has been assigned to a processor by the OS scheduler,
the process state is set to running and the processor executes its
instructions.

4 Waiting
Process moves into the waiting state if it needs to wait for a resource,
such as waiting for user input, or waiting for a file to become available.

5 Terminated or Exit
Once the process finishes its execution, or it is terminated by the
operating system, it is moved to the terminated state where it waits to be
removed from main memory.

Process Scheduling
 The process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a particular
strategy.
 Process scheduling is an essential part of Multiprogramming operating systems. Such operating
systems allow more than one process to be loaded into the executable memory at a time and
the loaded process shares the CPU using time multiplexing.
 Process Scheduling Queues:
 The OS maintains all PCBs in Process Scheduling Queues. The OS maintains a separate queue for
each of the process states and PCBs of all processes in the same execution state are placed in
the same queue. When the state of a process is changed, its PCB is unlinked from its current
queue and moved to its new state queue.

Fig: Queuing-diagram representation of process scheduling.

 The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.
Job queue: This queue keeps all the processes in the system.
Ready queue: This queue keeps a set of all processes residing in main memory, ready and
waiting to execute. A new process is always put in this queue.
Device queues: The processes which are blocked due to unavailability of an I/O device
constitute this queue.
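A ready queue is typically just a FIFO list of PCBs (or pointers to them). A minimal sketch in C, assuming integer PIDs stand in for full PCBs:

#include <stdio.h>
#include <stdlib.h>

struct node  { int pid; struct node *next; };
struct queue { struct node *head, *tail; };

/* Link a process onto the tail of the ready queue. */
static void enqueue(struct queue *q, int pid)
{
    struct node *n = malloc(sizeof *n);
    n->pid = pid;
    n->next = NULL;
    if (q->tail) q->tail->next = n; else q->head = n;
    q->tail = n;
}

/* Unlink the process at the head (the next one to be dispatched). */
static int dequeue(struct queue *q)
{
    if (!q->head) return -1;            /* queue empty */
    struct node *n = q->head;
    int pid = n->pid;
    q->head = n->next;
    if (!q->head) q->tail = NULL;
    free(n);
    return pid;
}

int main(void)
{
    struct queue ready = { NULL, NULL };
    enqueue(&ready, 42);
    enqueue(&ready, 43);
    printf("dispatch pid %d\n", dequeue(&ready));  /* prints 42 */
    return 0;
}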

Scheduler
 Schedulers are special system software which handles process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to
run. Schedulers are of three types :
 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler
Long Term Scheduler: It is also called a job scheduler. A long-term scheduler determines which
programs are admitted to the system for processing. It selects processes from the queue and
loads them into memory for execution; the process is loaded into memory for CPU scheduling. The
primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound
and processor-bound. It also controls the degree of multiprogramming. If the degree of
multiprogramming is stable, then the average rate of process creation must be equal to the
average departure rate of processes leaving the system. On some systems, the long-term
scheduler may not be available or may be minimal. Time-sharing operating systems have no
long-term scheduler. The long-term scheduler is used when a process changes state from new
to ready.
Short Term Scheduler: It is also called the CPU scheduler. Its main objective is to increase system
performance in accordance with the chosen set of criteria. It handles the change of a process
from the ready state to the running state. The CPU scheduler selects a process among the
processes that are ready to execute and allocates the CPU to one of them. Short-term schedulers,
also known as dispatchers, make the decision of which process to execute next. Short-term
schedulers are faster than long-term schedulers.
Medium Term Scheduler: Medium-term scheduling is a part of swapping. It removes processes
from memory. It reduces the degree of multiprogramming. The medium-term scheduler is in charge
of handling the swapped-out processes. A running process may become suspended if it makes an I/O
request. Suspended processes cannot make any progress towards completion. In this condition, to
remove the process from memory and make space for other processes, the suspended process is
moved to secondary storage. This process is called swapping, and the process is said to be swapped
out or rolled out. Swapping may be necessary to improve the process mix.
Context Switch:
 When CPU switches to another process, the system must save the state of the old process and
load the saved state for the new process via a context switch.
(OR)
 A context switch is the mechanism to store and restore the state or context of a CPU in the
Process Control Block so that a process execution can be resumed from the same point at a later
time. Using this technique, a context switcher enables multiple processes to share a single CPU.
Context switching is an essential feature of a multitasking operating system.
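In outline, a context switch saves the old process's state into its PCB and restores the new process's state from its PCB. The sketch below is purely hypothetical: real context switches are architecture-specific assembly, and save_context/restore_context are stand-in stubs rather than real kernel functions.

/* Hypothetical sketch only; a real switch manipulates hardware registers. */
struct context { unsigned long regs[16], pc, sp; };
struct task    { int pid; struct context ctx; };

static void save_context(struct task *t)    { /* copy CPU registers into t->ctx */ (void)t; }
static void restore_context(struct task *t) { /* load CPU registers from t->ctx */ (void)t; }

void context_switch(struct task *old, struct task *next)
{
    save_context(old);      /* the old process's state goes into its PCB   */
    restore_context(next);  /* the new process's saved state is reloaded   */
    /* the new process now resumes from where it was last interrupted */
}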

Operations on Processes
 The processes in most systems can execute concurrently, and they may be created and deleted
dynamically.
 Thus, these systems must provide a mechanism for process creation and termination.
 Process Creation: A parent process creates child processes, which, in turn, create other
processes, forming a tree of processes.
 Most operating systems (including UNIX, Linux, and Windows) identify processes according to a
unique process identifier (or pid), which is typically an integer number. The pid provides a
unique value for each process in the system, and it can be used as an index to access various
attributes of a process within the kernel.
 The figure below illustrates a typical process tree for the Linux operating system, showing the
name of each process and its pid. (We use the term process rather loosely here, as Linux prefers
the term task instead.) The init process (which always has a pid of 1) serves as the root parent
process for all user processes.
 Once the system has booted, the init process can also create various user processes, such as a
web or print server, an ssh server, and the like.

Fig: A tree of processes on a typical Linux system.

 In the figure, we see two children of init: kthreadd and sshd.


 The kthreadd process is responsible for creating additional processes that perform tasks on
behalf of the kernel (in this situation, khelper and pdflush).
 The sshd process is responsible for managing clients that connect to the system by using ssh
(which is short for secure shell).
 The login process is responsible for managing clients that directly log onto the system.
 In this example, a client has logged on and is using the bash shell, which has been assigned pid
8416.
 Using the bash command-line interface, this user has created the process ps as well as the emacs
editor.
 On UNIX and Linux systems, we can obtain a listing of processes by using the ps command. For
example, the command
 ps -el
 will list complete information for all processes currently active in the system.
 It is easy to construct a process tree similar to the one shown in the figure by recursively tracing
parent processes all the way to the init process.
 In general, when a process creates a new process, two possibilities for execution exist:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.
 There are also two address-space possibilities for the new process:
1. The child process is a duplicate of the parent process (it has the same program and data as the
parent).
2. The child process has a new program loaded into it.
Example: Creating a separate process using the UNIX fork() system call.
#include <sys/types.h>
#include <sys/wait.h>
#include <stdio.h>
#include <unistd.h>

int main()
{
    pid_t pid;

    /* fork a child process */
    pid = fork();

    if (pid < 0) { /* error occurred */
        fprintf(stderr, "Fork Failed");
        return 1;
    }
    else if (pid == 0) { /* child process */
        execlp("/bin/ls", "ls", NULL);
    }
    else { /* parent process */
        /* parent will wait for the child to complete */
        wait(NULL);
        printf("Child Complete");
    }
    return 0;
}

 Fig: Process creation using the fork() system call.

 Here:
 The fork() system call creates a new process.
 The exec() system call is used after a fork() to replace the process's memory space with a new program.
 The parent waits for the child process to complete with the wait() system call.
 When the child process completes (by either implicitly or explicitly invoking exit()), the parent
process resumes from the call to wait(), where it completes using the exit() system call.
 Like all of the exec family of functions, execlp() replaces the calling process image with a new
process image. It is most commonly used to overlay a process image that has been created by a
call to fork(). Its path argument identifies the location of the new process image within the
hierarchical file system.
 On Unix-like operating systems, exec is also available as a built-in command of the Bash shell. It
allows you to execute a command that completely replaces the current process.
 Process Termination:
 A process terminates when it finishes executing its final statement and asks the operating system
to delete it by using the exit() system call.
 At that point, the process may return a status value (typically an integer) to its parent process
(via the wait() system call). All the resources of the process, including physical and virtual
memory, open files, and I/O buffers, are deallocated by the operating system.
 A parent may terminate the execution of one of its children for a variety of reasons, such as
these:
 The child has exceeded its usage of some of the resources that it has been allocated. (To
determine whether this has occurred, the parent must have a mechanism to inspect the state of
its children.)
 The task assigned to the child is no longer required.
 The parent is exiting, and the operating system does not allow a child to continue if its parent
terminates.
 To illustrate process execution and termination, consider that, in Linux and UNIX systems, we can
terminate a process by using the exit() system call, providing an exit status as a parameter:
/* exit with status 1 */
exit(1);
 In fact, under normal termination, exit() may be called either directly (as shown above) or
indirectly (by a return statement in main()).

Interprocess Communication
 Interprocess communication (IPC) is a set of programming interfaces that allow a programmer
to coordinate activities among different program processes that can run concurrently in an
operating system. This allows a program to handle many user requests at the same time. Since
even a single user request may result in multiple processes running in the operating system on
the user's behalf, the processes need to communicate with each other. The IPC interfaces make
this possible. Each IPC method has its own advantages and limitations, so it is not unusual for a
single program to use all of the IPC methods.
 Processes executing concurrently in the operating system may be either independent processes
or cooperating processes.
 A process is independent if it cannot affect or be affected by the other processes executing in
the system. Any process that does not share data with any other process is independent.
 A process is cooperating (or dependent) if it can affect or be affected by the other processes
executing in the system. Clearly, any process that shares data with other processes is a
cooperating process.
 There are several reasons for providing an environment that allows process cooperation:
 Information sharing. Since several users may be interested in the same piece of information (for
instance, a shared file), we must provide an environment to allow concurrent access to such
information.
Computation speedup. If we want a particular task to run faster, we must break it into subtasks,
each of which will be executing in parallel with the others. Notice that such a speedup can be
achieved only if the computer has multiple processing cores.
Modularity. We may want to construct the system in a modular fashion, dividing the system
functions into separate processes or threads.
Convenience. Even an individual user may work on many tasks at the same time. For instance, a
user may be editing, listening to music, and compiling in parallel.
 Cooperating processes require an interprocess communication (IPC) mechanism that will allow
them to exchange data and information.
 There are two fundamental models of interprocess communication: shared memory and
message passing.

(a) Shared memory:


 Interprocess communication using shared memory requires communicating processes to
establish a region of shared memory.
 Typically, a shared-memory region resides in the address space of the process creating the
shared-memory segment.
 Other processes that wish to communicate using this shared-memory segment must attach it to
their address space.
 Recall that, normally, the operating system tries to prevent one process from accessing another
process's memory. Shared memory requires that two or more processes agree to remove this
restriction.
 They can then exchange information by reading and writing data in the shared areas. The form of
the data and the location are determined by these processes and are not under the operating
system's control.
 The processes are also responsible for ensuring that they are not writing to the same location
simultaneously.
 To illustrate the concept of cooperating processes, let us consider the producer–consumer
problem, which is a common paradigm for cooperating processes.
 Producer–consumer problem: A producer process produces information that is consumed by a
consumer process. For example, a compiler may produce assembly code that is consumed by an
assembler. The assembler, in turn, may produce object modules that are consumed by the
loader.
 The following variables reside in a region of memory shared by the producer and consumer
processes:

#define BUFFER_SIZE 10

typedef struct
{
    ...
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

 The producer process using shared memory.
item next_produced;

while (true)
{
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing; buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}
 The consumer process using shared memory.
item next_consumed;

while (true)
{
    while (in == out)
        ; /* do nothing; buffer is empty */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}

 One solution to the producer–consumer problem uses shared memory.


 To allow producer and consumer processes to run concurrently, we must have available a buffer
of items that can be filled by the producer and emptied by the consumer.
 This buffer will reside in a region of memory that is shared by the producer and consumer
processes.
 A producer can produce one item while the consumer is consuming another item.
 The producer and consumer must be synchronized, so that the consumer does not try to
consume an item that has not yet been produced.
 Here two types of buffers can be used.
The unbounded buffer places no practical limit on the size of the buffer. The consumer may
have to wait for new items, but the producer can always produce new items.
The bounded buffer assumes a fixed buffer size. In this case, the consumer must wait if the
buffer is empty, and the producer must wait if the buffer is full. (Note that in the circular-buffer
code above, the buffer counts as full when ((in + 1) % BUFFER_SIZE) == out, so it can hold at
most BUFFER_SIZE - 1 items at a time.)
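 To make the shared-memory model concrete, here is a brief sketch using the POSIX shared-memory API (shm_open(), ftruncate(), and mmap()), one common way to establish a shared region on UNIX-like systems. The object name "/os_demo_shm" and the message written are arbitrary choices for this illustration; error checking is omitted for brevity, and on Linux the program may need to be linked with -lrt.
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/os_demo_shm"; /* arbitrary object name */
    const int SIZE = 4096;             /* size of the shared region */

    /* create the shared-memory object and set its size */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0666);
    ftruncate(fd, SIZE);

    /* attach (map) the object into this process's address space */
    char *ptr = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    /* write into the shared region; another process that opens and
       maps the same name would see this data */
    strcpy(ptr, "Hello from the producer!");
    printf("Wrote: %s\n", ptr);

    /* detach and remove the object */
    munmap(ptr, SIZE);
    close(fd);
    shm_unlink(name);
    return 0;
}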

(b) Message Passing:

 Message passing provides a mechanism to allow processes to communicate and to synchronize
their actions without sharing the same address space. It is particularly useful in a distributed
environment, where the communicating processes may reside on different computers
connected by a network. For example, an Internet chat program could be designed so that chat
participants communicate with one another by exchanging messages.
 A message-passing facility provides at least two operations:
send(message)
receive(message)
 Messages sent by a process can be either fixed or variable in size.
 If only fixed-sized messages can be sent, the system-level implementation is straightforward.
This restriction, however, makes the task of programming more difficult.
 Conversely, variable-sized messages require a more complex system level implementation, but
the programming task becomes simpler.
 If processes P and Q want to communicate, they must send messages to and receive messages
from each other: a communication link must exist between them. This link can be implemented
in a variety of ways. We are concerned here not with the link's physical implementation (such as
shared memory, hardware bus, or network) but rather with its logical implementation.
 Here are several methods for logically implementing a link and the send()/receive() operations:
o Direct or indirect communication
o Synchronous or asynchronous communication
o Automatic or explicit buffering
 We look at issues related to each of these features next.
(a) Naming:
 Processes that want to communicate must have a way to refer to each other. They can use
either direct or indirect communication.
 In direct communication (a symmetric naming scheme), each process that wants to communicate must
explicitly name the recipient or sender of the communication.
 Here the send() and receive() primitives are defined as:
• send(P, message)—Send a message to process P.
• receive(Q, message)—Receive a message from process Q.
 A communication link in this scheme has the following properties:
• A link is established automatically between every pair of processes that want to communicate.
The processes need to know only each other's identity to communicate.
• A link is associated with exactly two processes.
• Between each pair of processes, there exists exactly one link.
 In indirect communication (an asymmetric naming scheme), the messages are
sent to and received from mailboxes, or ports.
 Here the send() and receive() primitives are defined as follows:
• send(A, message)—Send a message to mailbox A.
• receive(A, message)—Receive a message from mailbox A.
 In this scheme, a communication link has the following properties:
o A communication link is established between a pair of processes only if both members
of the pair have a shared mailbox.
o A link may be associated with more than two processes.
o Between each pair of communicating processes, a number of different links may exist,
with each link corresponding to one mailbox.
Example:
 Now suppose that processes P1, P2, and P3 all share mailbox A. Process P1 sends a message to A,
while both P2 and P3 execute a receive() from A. Which process will receive the message sent by
P1? The answer depends on the design: the system may allow a link to be associated with at most
two processes, allow at most one process at a time to execute receive(), or select arbitrarily
which of the waiting receivers gets the message.
 The operating system then must provide a mechanism that allows a process to do the following:
• Create a new mailbox.
• Send and receive messages through the mailbox.
• Delete a mailbox.
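 One real-world realization of mailbox-style indirect communication is the POSIX message queue API; the hedged sketch below creates a mailbox, sends a message to it, and receives the message back within a single process purely for illustration. The queue name "/os_demo_mq" is arbitrary, error checking is omitted, and on Linux the program is typically linked with -lrt.
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *name = "/os_demo_mq"; /* arbitrary mailbox name */

    /* create (or open) the mailbox; NULL selects default attributes */
    mqd_t mq = mq_open(name, O_CREAT | O_RDWR, 0644, NULL);

    /* send(A, message): place a message in the mailbox */
    const char *msg = "hello via mailbox";
    mq_send(mq, msg, strlen(msg) + 1, 0);

    /* receive(A, message): take a message out of the mailbox; the
       buffer must be at least the queue's mq_msgsize (8192 by
       default on Linux) */
    char buf[8192];
    mq_receive(mq, buf, sizeof(buf), NULL);
    printf("Received: %s\n", buf);

    /* close and remove the mailbox */
    mq_close(mq);
    mq_unlink(name);
    return 0;
}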

(b) Synchronization
 Communication between processes takes place through calls to send() and receive() primitives.
There are different design options for implementing each primitive.
 Message passing may be either blocking or nonblocking, also known as synchronous and
asynchronous. (Throughout this text, you will encounter the concepts of synchronous and
asynchronous behavior in relation to various operating-system algorithms.)
• Blocking send: The sending process is blocked until the message is received by the receiving
process or by the mailbox.
• Nonblocking send: The sending process sends the message and resumes operation.
• Blocking receive: The receiver blocks until a message is available.
• Nonblocking receive: The receiver retrieves either a valid message or a null.
message next_produced;

while (true)
{
    /* produce an item in next_produced */
    send(next_produced);
}
Fig: The producer process using message passing.
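 For completeness, a consumer counterpart is sketched below; it mirrors the standard companion to the producer above, relying on a blocking receive() that waits until a message is available.
message next_consumed;

while (true)
{
    receive(next_consumed);
    /* consume the item in next_consumed */
}
Fig: The consumer process using message passing.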

(c) Buffering
 Whether communication is direct or indirect, messages exchanged by communicating processes
reside in a temporary queue.
 Basically, such queues can be implemented in three ways:
Zero capacity: The queue has a maximum length of zero; thus, the link cannot have any
messages waiting in it. In this case, the sender must block until the recipient receives the
message.
Bounded capacity: The queue has finite length n; thus, at most n messages can reside in it. If the
queue is not full when a new message is sent, the message is placed in the queue (either the
message is copied or a pointer to the message is kept), and the sender can continue execution
without waiting. The link's capacity is finite, however; if the link is full, the sender must block until
space is available in the queue.
Unbounded capacity: The queue's length is potentially infinite; thus, any number of messages
can wait in it. The sender never blocks.

Examples of IPC
 The following are the examples of IPC :
o POSIX Shared Memory
o Mach
o Windows
o Sockets
o Remote Procedure Calls
o Pipes
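 To make one of these examples concrete, here is a small sketch of an ordinary UNIX pipe combined with fork(): the parent writes a message into the pipe and the child reads it. The message text is arbitrary, and error checking is omitted for brevity.
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2]; /* fd[0] = read end, fd[1] = write end */
    char buf[64];

    pipe(fd);  /* create the pipe before forking so both ends are inherited */

    if (fork() == 0) { /* child: reads from the pipe */
        close(fd[1]);  /* child does not write */
        read(fd[0], buf, sizeof(buf));
        printf("Child read: %s\n", buf);
        close(fd[0]);
    }
    else { /* parent: writes into the pipe */
        close(fd[0]); /* parent does not read */
        const char *msg = "greetings through a pipe";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        wait(NULL); /* wait for the child to finish */
    }
    return 0;
}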
