
UNIT-I

What is an Operating System?


A program that acts as an intermediary between a user of a computer and the computer
hardware. An operating system is a collection of system programs that together control the
operations of a computer system. Some examples of operating systems are UNIX, Mach, MS-DOS,
MS-Windows, Windows/NT, Chicago, OS/2, MacOS, VMS, MVS, and VM.

Operating system goals:


● Execute user programs and make solving user problems easier.
● Make the computer system convenient to use.
● Use the computer hardware in an efficient manner.

Computer System Components


● Hardware – provides basic computing resources (CPU, memory, I/O devices).
● Operating system – controls and coordinates the use of the hardware among the various
application programs for the various users.
● Application programs – define the ways in which the system resources are used to solve the
computing problems of the users (compilers, database systems, video games, business programs).
● Users (people, machines, other computers).

Operating System Definitions


 Resource allocator – manages and allocates resources.
 Control program – controls the execution of user programs and operations of I/O devices.
 Kernel – the one program running at all times (all else being application programs).
Components of OS:
An OS has two parts.
1. Kernel: The kernel is the active part of an OS, i.e., the part of the OS running at all times. It is a
program that can interact with the hardware. Examples: device drivers, DLL files, system files, etc.

2. Shell: The shell is the command interpreter. It is a set of programs used to interact
with the application programs. It is responsible for the execution of instructions given to the OS
(called commands).

Operating systems can be explored from two viewpoints:


1. User View: From the user’s point of view, the OS is designed for one user to monopolize its
resources, to maximize the work that the user is performing, and for ease of use.

2. System View: From the computer's point of view, an operating system is a control program
that manages the execution of user programs to prevent errors and improper use of the
computer. It is concerned with the operation and control of I/O devices.

Functions of Operating System:


An Operating System (OS) is a fundamental component of a computer system that acts as an
intermediary between the hardware and the user or application software. It serves several crucial
functions:
1. Resource Management: The OS manages hardware resources like the central processing unit
(CPU), memory, storage devices, and input/output devices. It allocates these resources to various
programs and ensures their efficient use.
2. Process Management: It allows the execution of multiple processes concurrently. A process is a
program in execution. The OS schedules processes, provides mechanisms for inter-process
communication, and ensures they run without interfering with each other.
A process needs certain resources, including CPU time, memory, files, and I/O devices, to
accomplish its task. The operating system is responsible for the following activities in connection
with process management.

✦ Process creation and deletion.

✦ Process suspension and resumption.

✦ Provision of mechanisms for:


• process synchronization
• process communication
3. Memory Management: The OS manages system memory, ensuring that each program or
process gets the necessary memory space. It uses techniques like virtual memory to provide the
illusion of more memory than is physically available.
Memory is a large array of words or bytes, each with its own address. Main memory is a volatile
storage device: it loses its contents in the case of system failure. The operating system is
responsible for the following activities in connection with memory management:

 Keep track of which parts of memory are currently being used and by whom.
 Decide which processes to load when memory space becomes available.
 Allocate and de-allocate memory space as needed.

4. File System Management: A file is a collection of related information defined by its creator.
Commonly, files represent programs (both source and object forms) and data. The operating
system is responsible for the following activities in connection with file management (a minimal
sketch follows this list):

 File creation and deletion.


 Directory creation and deletion.
 Support of primitives for manipulating files and directories.
 Mapping files onto secondary storage.
 File backup on stable (non-volatile) storage media.
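
As a concrete illustration of these file primitives, here is a hedged sketch using the POSIX calls
open, write, close, and unlink on a Unix-like system; the file name example.txt is only a
placeholder.

/* file_demo.c - a minimal sketch of file-management primitives (POSIX).
   Assumes a Unix-like system; "example.txt" is a placeholder name. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* File creation: ask the OS to create (or truncate) the file. */
    int fd = open("example.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* The OS maps this write onto secondary storage on our behalf. */
    const char msg[] = "hello, file system\n";
    write(fd, msg, sizeof msg - 1);

    close(fd);              /* release the descriptor */
    unlink("example.txt");  /* file deletion */
    return 0;
}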
5. Device Management: It controls the interaction between software and hardware devices. This
includes device drivers that enable software to communicate with various hardware components.
6. User Interface: The OS provides a user-friendly interface for users to interact with the system.
This interface can be command-line (text-based) or graphical (GUI), depending on the OS.

7. Security and Access Control: It enforces security policies, user authentication, and access
controls to protect the system from unauthorized access and data breaches. The OS provides
mechanisms for handling errors and exceptions, preventing system crashes due to software bugs.
8. Networking: In modern operating systems, networking capabilities are integrated to allow
communication over networks, which is vital for internet connectivity and networked applications.

 A distributed system is a collection of processors that do not share memory or a clock. Each
processor has its own local memory.
 The processors in the system are connected through a communication network.
 Communication takes place using a protocol.
 A distributed system provides user access to various system resources.
Access to a shared resource allows:
✦ Computation speed-up ✦ Increased data availability ✦ Enhanced reliability
9. Task Scheduling: The OS employs scheduling algorithms to determine the order in which
processes are executed, ensuring fair allocation of CPU time and system responsiveness
10. Secondary-Storage Management: Since main memory (primary storage) is volatile and too
small to accommodate all data and programs permanently, the computer system must provide
secondary storage to back up main memory. Most modern computer systems use disks as the
principal on-line storage medium, for both programs and data. The operating system is responsible
for the following activities in connection with disk management:

 Free space management
 Storage allocation
 Disk scheduling

Examples of Operating Systems:


1. Windows: Microsoft Windows is a widely used OS known for its graphical user interface and
compatibility with a variety of software applications.

2. Linux: Linux is an open-source operating system known for its stability, security, and flexibility. It
comes in various distributions, such as Ubuntu, CentOS, and Debian.

3. macOS: macOS is the OS developed by Apple for their Macintosh computers, known for its user-
friendly interface and integration with Apple hardware.
4. Unix: Unix is an older, robust OS that has influenced many other operating systems, including
Linux.
5. Android: Android is a popular mobile operating system used in smartphones and tablets.
6. iOS: iOS is Apple's mobile operating system used in iPhones and iPads.

Simple Batch Systems:


A Simple Batch System is an early type of operating system that manages batch processing. Batch
processing involves the execution of multiple jobs without user interaction. Jobs are submitted to
the system in the form of batch jobs, and the OS processes them one after the other. Here are the
key characteristics and components of simple batch systems:
 Batch Jobs: In a simple batch system, users submit their jobs to the system as batch jobs. A
batch job typically consists of one or more programs or tasks that need to be executed
sequentially.
 Job Scheduling: The OS's primary responsibility is to schedule and manage the execution of
batch jobs. It maintains a job queue and selects the next job to run based on criteria like
job priority.
 Job Control Language (JCL): JCL specifies details like the input and output files, resource
requirements, and other job-specific information.
 Job Spooling: Jobs are often spooled (spooling stands for Simultaneous Peripheral
Operations On-line) before execution. This means they are placed in a queue and stored on
secondary storage, making it easier for the system to retrieve and execute them.
 No Interactivity: Simple batch systems have no user interaction during job execution. They
are suited for long-running and computationally intensive tasks.
 Resource Allocation: The OS allocates resources, such as CPU time, memory, and I/O
devices, to each job in the queue as it is scheduled.
 Job Termination: Once a job is completed, the OS releases the allocated resources,
manages output files, and may notify the user of job completion.

Advantages of Simple Batch Systems:


o Efficiency: Simple batch systems are efficient for processing large volumes of similar tasks
without the overhead of user interaction.
o Resource Utilization: They make efficient use of system resources by allowing continuous
execution of jobs without idle times.
o Error Recovery: Batch systems can be designed to restart a job in case of system failures,
improving error recovery.

Disadvantages of Simple Batch Systems:


o Lack of Interactivity: They are not suitable for tasks that require user interaction, making
them unsuitable for real-time or interactive applications.
o Limited Flexibility: Users need to submit jobs in advance, which may lead to delays if a
high-priority task suddenly arises.
o Resource Contentions: Resource allocation can be a challenge in busy batch systems,
leading to contention for resources.
o Debugging: Debugging batch jobs can be more challenging since there is no immediate
feedback from the system.
o Job Prioritization: Determining job priorities and scheduling can be complex in busy batch
environments.

Multi-programmed Batch Systems:


A Multi-programmed Batch System is an extension of simple batch systems that aims to improve
the overall efficiency of the system by allowing multiple jobs to be in memory simultaneously. This
approach addresses some of the limitations of simple batch systems. Here are the key aspects of
multi-programmed batch systems:

 Job Pool: In a multi-programmed batch system, there is a job pool that contains a
collection of batch jobs. These jobs are ready to run and are loaded into memory as space
becomes available.
 Job Scheduling: The operating system employs job scheduling algorithms to select the next
job from the job pool and load it into memory. This helps in reducing idle time of the CPU
and improves system throughput.
 Memory Management: Multi-programmed batch systems manage memory efficiently by
allocating and de-allocating memory space for each job. Jobs may need to be swapped in
and out of memory to make the best use of available resources.
 I/O Overlap: These systems aim to overlap I/O operations with CPU processing. While one
job is waiting for I/O, another job can utilize the CPU, enhancing overall system
performance.
 Job Prioritization: Jobs are prioritized based on their characteristics and requirements.
High-priority jobs may be selected for execution before lower priority ones.
 Batch Job Execution: Each job is executed as a separate program, similar to simple batch
systems. It runs until it completes or is blocked by I/O, at which point the CPU scheduler
selects the next job for execution.

Advantages of Multi-programmed Batch Systems:


o Improved Throughput: The system can execute multiple jobs concurrently, reducing CPU
idle time and increasing the overall throughput of the system.
o Resource Utilization: Resources are used efficiently as they are not wasted on idle time.
This leads to better CPU and I/O device utilization.
o Enhanced Job Scheduling: Job scheduling algorithms play a crucial role in selecting the
next job for execution, optimizing the use of system resources.
o Reduced Waiting Time: By overlapping I/O operations with CPU processing, waiting times
for I/O-bound jobs are reduced.

Disadvantages of Multi-programmed Batch Systems:


o Complexity: Managing multiple jobs in memory requires complex memory management
and job scheduling algorithms.
o Increased Overhead: The need to load and swap jobs in and out of memory introduces
some overhead in the system.
o Contention for Resources: With multiple jobs running concurrently, contention for
resources like memory and I/O devices can arise.
o Priority Inversion: Job prioritization can sometimes lead to priority inversion issues where
lower-priority jobs block resources needed by higher-priority ones.
Multi-programmed batch systems are a significant improvement over simple batch systems as
they allow for better resource utilization and system throughput. They were a crucial step in the
evolution of operating systems, laying the foundation for more advanced OS designs.
Time Sharing Systems:
A Time-Sharing System, also known as a multi-user operating system, allows multiple users to
interact with the computer simultaneously. Here are the key aspects of time-sharing systems:
 User Interaction: Time-sharing systems provide a user-friendly interface that allows
multiple users to log in and work on the computer concurrently.
 Time Slicing: The CPU's time is divided into small time slices, and each user or process is
allocated a time slice to execute their tasks. This provides the illusion of concurrent
execution for multiple users.
 Resource Sharing: Resources like CPU, memory, and I/O devices are shared among users or
processes. The system ensures fair access to resources.
 Multi-Tasking: Time-sharing systems support true multi-tasking, where multiple processes
can run concurrently, and the OS manages the context switching.
 Response Time: They are designed for fast response times to ensure that users can interact
with the system in real-time.
 Example: Unix is an example of a time-sharing operating system, providing a command-
line interface for multiple users to log in and work simultaneously

Personal-Computer Systems
Personal Computer (PC) Systems are designed for individual users and small-scale computing
needs. Here are the key characteristics:
 Single User: PC systems are typically single-user systems, designed for use by a single
individual.
 User-Friendly GUI: They often have a graphical user interface (GUI) that makes it easy for
users to interact with the system.
 Limited Resource Sharing: PC systems are not designed for heavy multi-user interaction or
resource sharing. They focus on providing resources to a single user's tasks.
 Broad Application: PC systems are used for a wide range of applications, from word
processing and web browsing to gaming and multimedia.
 Operating Systems: Common PC operating systems include Microsoft Windows, macOS,
and various Linux distributions.

Multiprocessor (Parallel) Systems:


Multiprocessor operating systems are also known as parallel OS or tightly coupled OS. Such
operating systems have more than one processor in close communication, sharing the
computer bus, the clock, and sometimes memory and peripheral devices. They execute multiple
jobs at the same time and make processing faster. Multiprocessor systems have three main
advantages:
 Increased throughput: By increasing the number of processors, the system performs more
work in less time. The speed-up ratio with N processors is less than N.
 Economy of scale: Multiprocessor systems can save more money than multiple single-
processor systems, because they can share peripherals, mass storage, and power supplies.
 Increased reliability: If one processor fails, the remaining processors must pick up a
share of the work of the failed processor. The failure of one processor will not halt the
system, only slow it down.
 Challenges: Developing parallel software can be complex due to issues like data
synchronization and load balancing.
 Examples: High-performance computing clusters, supercomputers, and multicore
processors in modern PCs are examples of parallel systems.
The multiprocessor operating systems are classified into two categories:
1. Symmetric multiprocessing system : In symmetric multiprocessing system, each processor runs
an identical copy of the operating system, and these copies communicate with one another as
needed
2. Asymmetric multiprocessing system: In an asymmetric multiprocessing system, one processor,
called the master processor, controls the other processors, called slave processors, establishing a
master-slave relationship. The master processor schedules the jobs and manages the memory for
the entire system.
The ability to continue providing service proportional to the level of surviving hardware is called
graceful degradation. Systems designed for graceful degradation are called fault tolerant.

Distributed Systems:
In distributed system, the different machines are connected in a network and each machine has its
own processor and own local memory. In this system, the operating systems on all the machines
work together to manage the collective network resource. It can be classified into two categories:
1. Client-Server systems
2. Peer-to-Peer systems

 Multiple Machines: Distributed systems consist of multiple independent machines or
nodes that communicate and collaborate to perform tasks.
 Resource Sharing: Resources like processing power, memory, and data can be shared
across the network, allowing for more efficient use of resources.
 Scalability: Distributed systems can be easily scaled by adding more machines to the
network.
 Fault Tolerance: They are designed to handle failures gracefully, ensuring that the system
continues to function even if some nodes fail.
 Examples: The internet is a massive distributed system, and cloud computing platforms like
AWS, Google Cloud, and Azure are examples of distributed systems used for various
applications.

Advantages of distributed systems:

 Resource sharing
 Computation speed-up – load sharing
 Reliability
 Communication
Distributed systems require networking infrastructure, such as local area networks (LAN) or wide
area networks (WAN).
Real-Time Systems:
Real-Time Systems are designed to respond to events or input within a predefined time constraint.
They are used in applications where timing and predictability are critical. Here are the key
characteristics of real-time systems:

 Timing Constraints: Real-time systems have strict timing constraints, and tasks must be
completed within specific time limits.
 Deterministic Behavior: These systems aim for deterministic behavior, ensuring that the
system's response is predictable and consistent.
 Hard and Soft Real-Time: Real-time systems can be classified as hard real-time (where
missing a deadline is catastrophic) or soft real-time (where occasional missed deadlines are
acceptable).
 In hard real-time systems, the primary goal is to ensure that critical tasks are
completed within a strict deadline. These systems are used in environments where
failure to meet timing constraints could result in catastrophic consequences. Every
component of a hard real-time system, including the operating system, must be
designed to provide predictable and bounded delays. Examples include air traffic
control systems, pacemakers, and anti-lock braking systems, where even a minor
delay can cause system failure or danger to human life. Hard real-time systems
often have dedicated hardware and real-time operating systems (RTOS) designed to
handle tasks with precise timing requirements, ensuring that data retrieval and
execution happen within a guaranteed timeframe

 Soft real-time systems are less rigid in their timing requirements. While critical
tasks are still given priority over other tasks, the system does not guarantee that
they will always complete within the strict deadlines. As a result, missing a deadline
might cause degraded performance or inconvenience, but not system failure. These
systems can handle tasks that are not as time-sensitive alongside real-time tasks.
Common examples include multimedia systems, online transaction systems, and
video streaming applications. These systems can tolerate occasional delays or
deadline misses without critical consequences

 Applications: Real-time systems are used in areas like aviation (flight control systems),
automotive (engine control units), and industrial automation (robotics).
 Challenges: Developing real-time systems is challenging due to the need for precise timing,
and they often require specialized hardware and software.
Both distributed systems and real-time systems are specialized types of computer systems, each
with its unique requirements and applications. Distributed systems focus on resource sharing and
scalability across multiple machines, while real-time systems prioritize time-bound responses and
determinism.

Operating Systems as Resource Managers:


Operating Systems (OS) act as resource managers that oversee and control the allocation and
utilization of a computer system's hardware and software resources. Here's how an OS functions
as a resource manager:

1. Process Management: The OS manages processes, which are running instances of
programs. It allocates CPU time to processes, schedules them for execution, and ensures
that they run without interfering with each other. Process management includes creating,
terminating, and suspending processes.
2. Memory Management: The OS handles memory allocation, ensuring that each process
gets the necessary memory space. It also manages memory protection to prevent one
process from accessing another process's memory.
3. File System Management: It manages the file system, allowing users to create, read, write,
and delete files. The OS enforces file access permissions and maintains the file hierarchy.
4. Device Management: The OS controls the interaction between software and hardware
devices. This involves device drivers that enable communication between the operating
system and various hardware components like printers, disks, and network interfaces.
5. User and Authentication Management: The OS provides user authentication and access
control, ensuring that only authorized users can access the system and specific resources. It
also maintains user profiles and security policies.
6. Scheduling and Resource Allocation: The OS employs scheduling algorithms to determine
the order in which processes are executed, ensuring fair allocation of CPU time and system
responsiveness. It also allocates resources like I/O devices, network bandwidth, and
memory.
7. Security: OSs implement security measures like encryption, firewalls, and access controls
to protect the system from unauthorized access and data breaches.
8. Load Balancing: In distributed systems, the OS manages load balancing, ensuring that tasks
are distributed evenly across the network to prevent overloading of certain nodes.

Processes
Introduction to Processes:
In the context of operating systems, a process is a fundamental concept that represents the
execution of a program. It's a unit of work in a computer system that can be managed and
scheduled by the operating system. Here's an overview of processes:
A process includes a program's code, data, and execution context (program counter, registers,
stack), operating in isolated memory to prevent interference. It enables multitasking and
concurrency by running multiple programs simultaneously. Processes communicate via OS-
provided inter-process communication mechanisms.

Process States:
Processes go through different states during their lifecycle. These states represent the different
stages a process can be in. The typical process states are:
 New State: In this step, the process is about to be created but not yet created. It is the
program present in secondary memory that will be picked up by the OS to create
the process.
 Ready State: (New -> Ready.) After the creation of a process, the process enters the
ready state, i.e., the process is loaded into main memory. The process here is ready to
run and is waiting to get CPU time for its execution. Processes that are ready for
execution by the CPU are maintained in a queue called the ready queue.
 Run State: The process is chosen from the ready queue by the OS for execution, and the
instructions within the process are executed by one of the available processors.
 Blocked or Wait State: Whenever the process requests I/O, needs input from the
user, or needs access to a critical region, it enters the blocked or wait state. The process
continues to wait in main memory and does not require the CPU. Once the I/O operation is
completed, the process goes to the ready state.

 Terminated or Completed State: The process is killed and its PCB is deleted. The resources
allocated to the process are released or deallocated.

 Suspend Ready: A process that was initially in the ready state but was swapped out of main
memory and placed onto external storage. The process will transition back to the ready state
whenever it is brought back into main memory.

 Suspend Wait or Suspend Blocked: Similar to suspend ready, but applies to a process that
was performing an I/O operation when a lack of main memory caused it to be moved to
secondary storage. When the I/O finishes, it may go to suspend ready.

Process Management:
Process management is a critical aspect of an operating system's responsibilities. It involves
various tasks related to process creation, scheduling, and termination. Here's an overview of
process management:

1. Process Creation: When a user or system request initiates a new process, the OS is responsible
for creating the process. This includes allocating memory, initializing data structures, and setting
up the execution environment.

2. Process Scheduling: The OS uses scheduling algorithms to determine which process to run next
on the CPU. It ensures fair allocation of CPU time to multiple processes and aims to maximize
system throughput.
3. Process Termination: When a process completes its execution or is terminated due to an error
or user action, the OS must clean up its resources, release memory, and remove it from the
system.
4. Process Communication: The OS provides mechanisms for processes to communicate and share
data. This can include inter-process communication (IPC) methods like message passing or shared
memory.
5. Process Synchronization: When multiple processes are accessing shared resources, the OS
manages synchronization to prevent data corruption and race conditions.
6. Process Priority and Control: The OS allows users to set process priorities, which influence their
order of execution. It also provides mechanisms to control and monitor processes
7. Process State Transitions: The OS manages the transitions between different process states,
ensuring that processes move between states as required.

Effective process management is essential for the efficient and stable operation of a computer
system, enabling multiple programs to run simultaneously, share resources, and respond to user
and system needs. A minimal sketch of process creation and termination follows.
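
The hedged sketch below shows process creation, execution, and termination on a POSIX system
using fork, execlp, and wait; the command ls is only an illustrative choice.

/* process_demo.c - a minimal sketch of process creation/termination (POSIX).
   Assumes a Unix-like system; "ls" is only an illustrative command. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();          /* process creation: duplicate the caller */
    if (pid < 0) { perror("fork"); exit(1); }

    if (pid == 0) {
        /* Child: replace its image with a new program. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");        /* reached only if exec fails */
        exit(1);
    }

    /* Parent: wait for the child to terminate, then reap its status. */
    int status;
    waitpid(pid, &status, 0);
    printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    return 0;
}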

Program vs. Process

Program: A program is a passive entity consisting of a set of instructions or code written to
perform a specific task. It is stored in the system’s memory, typically as an executable file, but
does not actively execute or use system resources until it is run.
Process: A process is an active entity that represents the instance of a program in execution. Once
a program is executed, the system creates a process to run it, allocating necessary resources like
CPU time, memory, and I/O devices.

Program: A program is a static file on disk, waiting to be loaded into memory for execution.
Process: A process is dynamic and changes state as it executes instructions, interacts with the
operating system, and consumes resources.

Program: A program itself doesn't do anything until it's executed.
Process: Each process has its own memory space and execution context, including registers and
program counters, making it independent of other processes.

Program: Examples include files like calculator.exe or a Python script like myscript.py.
Process: For example, when you open a calculator on your computer, the calculator program
becomes an active process, utilizing system resources to operate.
Process Control Block (PCB)
A Process Control Block (PCB) is a data structure used by the operating system to manage information
about a process. The PCB keeps track of many important pieces of information needed to manage
processes efficiently; the key data items are listed below (a sketch of a PCB as a C struct follows
the list).

 Pointer: It is a stack pointer that is required to be saved when the process is switched from
one state to another to retain the current position of the process.
 Process state: It stores the respective state of the process.
 Process number: Every process is assigned a unique id known as process ID or PID which
stores the process identifier.
 Program counter: Program Counter stores the counter, which contains the address of the
next instruction that is to be executed for the process.
 Register: The registers in the PCB form a data structure. When a process is running and its
time slice expires, the current values of the process-specific registers are stored in the PCB
and the process is swapped out. When the process is scheduled to run again, the
register values are read from the PCB and written to the CPU registers. This is the main
purpose of the registers in the PCB.

 Memory limits: This field contains information about the memory-management
scheme used by the operating system. This may include page tables, segment tables, etc.
 List of Open files: This information includes the list of files opened for a process.
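
The hedged sketch below models these fields as a C struct. The field names are illustrative, not an
actual kernel API; real kernels keep far larger structures (Linux's task_struct, for example).

/* pcb_demo.h - an illustrative Process Control Block as a C struct.
   Field names are hypothetical; real kernels use much larger structures. */
#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    uint32_t        pid;             /* process number (unique ID)      */
    enum proc_state state;           /* current process state           */
    void           *stack_pointer;   /* saved on every context switch   */
    void           *program_counter; /* address of next instruction     */
    uint64_t        registers[16];   /* saved general-purpose registers */
    void           *page_table;      /* memory-management information   */
    int             open_files[32];  /* descriptors of open files       */
    struct pcb     *next;            /* link in the ready/device queue  */
};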

Location of The Process Control Block


The Process Control Block (PCB) is stored in a special part of memory that normal users can’t
access. This is because it holds important information about the process. Some operating systems
place the PCB at the start of the kernel stack for the process, as this is a safe and secure spot.
Interrupts
An interrupt is a signal emitted by hardware or software when a process or an event needs
immediate attention. It alerts the processor to a high-priority task requiring interruption of the
currently running process. In I/O devices, one of the bus control lines is dedicated to this purpose
and is called the interrupt-request (IRQ) line; the routine the processor executes in response is
called the Interrupt Service Routine (ISR). When an interrupt occurs, the processor:
1. Finishes the current instruction.
2. Saves the address of the interrupted task in a temporary spot.
3. Loads the address of the ISR to handle the interrupt.

After the interrupt is handled, the processor goes back to the original task. To avoid repeated
interrupt signals, the processor informs the device that the request is acknowledged. However,
saving registers and switching tasks takes time, causing a delay known as Interrupt Latency.

A processor can execute only one instruction at a time. But, because it can be
interrupted, it can manage how programs or sets of instructions are performed. This is known
as multitasking. It allows the user to do many different things at once, and the computer
takes turns managing the programs that the user starts. Of course, the computer operates at speeds
that make it seem like all user tasks are being performed simultaneously.
Types of Interrupts:

A hardware interrupt
A hardware interrupt is a signal from a hardware device to the processor, indicating it needs attention. For example,
pressing a key or moving a mouse generates a hardware interrupt, prompting the processor to read the input. These
interrupts occur asynchronously, independent of the processor clock. To handle them effectively, interrupt signals are
synchronized with the processor clock and processed only at instruction boundaries.

Each hardware device is typically associated with a unique IRQ (Interrupt Request) signal, allowing the system to
identify and prioritize the requesting device efficiently. Hardware interrupts are further classified into two types, such
as:

Maskable interrupt

 A maskable interrupt is an interrupt signal that can be turned on or off by the processor using a special
register called the interrupt mask register. This register has bits that correspond to each interrupt signal.
Depending on the system, a bit may enable or disable the interrupt. When an interrupt is disabled (masked),
the processor ignores the interrupt signal.
 Non-maskable interrupts (NMI), on the other hand, cannot be turned off. These interrupts are very
important and must always be handled immediately, like signals from a watchdog timer indicating a critical
error.

Spurious interrupt

 A spurious interrupt is an interrupt that happens, but there is no clear source for it. It's also sometimes
called a phantom or ghost interrupt. If the interrupting device is cleared too late during the ISR, the processor
may mistakenly think another interrupt is pending, even though there is none. This can lead to issues like
system freezes or unpredictable behaviour. To prevent this, the ISR should check all interrupt sources and
only act if there is a real interrupt.

A software interrupt
A software interrupt is when the processor triggers an interrupt by executing a special instruction or when certain
conditions occur. Each software interrupt is linked to a specific handler.

These interrupts can be intentionally caused by special instructions designed to request services from the operating
system or interact with device drivers, similar to calling a subroutine.

However, software interrupts can also happen unexpectedly due to program errors, and these are called traps or
exceptions.
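
As a user-space analogy (assuming a Unix-like system, which the text above does not prescribe),
the hedged sketch below installs a handler for SIGINT, the signal delivered when the user presses
Ctrl+C; it behaves much like a software-interrupt handler.

/* signal_demo.c - a user-space analogy to an interrupt handler (POSIX).
   Assumes a Unix-like system; SIGINT is raised by pressing Ctrl+C. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_interrupt = 0;

/* The "ISR": kept short, it just records that the event happened. */
static void on_interrupt(int signo) {
    (void)signo;
    got_interrupt = 1;
}

int main(void) {
    signal(SIGINT, on_interrupt);   /* register the handler */
    while (!got_interrupt) {
        /* Normal work continues until the "interrupt" arrives. */
        pause();                    /* sleep until a signal is delivered */
    }
    printf("interrupt handled, resuming normal flow\n");
    return 0;
}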

Handling Multiple Devices


When multiple devices raise an interrupt request, additional methods are used to decide which
device to handle first:
1. Polling: The processor checks each device's IRQ bit to see if it has a pending interrupt. The
first device with the IRQ bit set is serviced. It's simple but can waste time since the
processor checks all devices, even those without interrupts.
2. Vectored Interrupts: Each device sends a unique code (interrupt vector) to the processor
to identify itself. This code can point to the location of the interrupt service routine (ISR) in
memory, helping the processor quickly determine which device caused the interrupt.
3. Interrupt Nesting: Devices are arranged in a priority order. The processor handles
interrupts from higher-priority devices first, while lower-priority interrupts are ignored. The
processor’s priority is encoded in the processor status register (PS) and can be modified by
program instructions. The processor operates in supervisor mode while running OS routines and
switches to user mode for application programs.

Inter-process Communication (IPC):


Inter-process Communication (IPC) is a set of mechanisms and techniques used by processes
in an operating system to communicate and share data with each other. IPC is essential for
processes to cooperate, exchange information, and synchronize their activities. There are
several methods and tools for IPC, depending on the needs and requirements of the processes
involved. Here are some of the key methods of IPC:
1. Shared Memory:
o Shared memory allows processes to share a portion of their address space. This
shared region of memory acts as a communication buffer, allowing multiple
processes to read and write data into it.
o Shared memory is a fast and efficient method of IPC since it doesn't involve the
overhead of copying data between processes.
o However, it requires careful synchronization and mutual exclusion to prevent data
corruption.
2. Message Passing:
o In a message-passing IPC mechanism, processes communicate by sending and
receiving messages through a predefined communication channel.
o Message-passing can be either synchronous (blocking) or asynchronous (non-
blocking), depending on whether processes wait for a response or continue their
execution.
o It is a more structured and safer method compared to shared memory since
processes don't have direct access to each other's memory.
3. Pipes and FIFOs (Named Pipes):
o Pipes are a one-way communication channel that allows data to flow in one
direction between processes.
o Named pipes (FIFOs) are similar but have a well-defined name in the file system,
allowing unrelated processes to communicate using a common pipe.

4. Sockets:
o Sockets are a network-based IPC mechanism used for communication between
processes on different machines over a network.
o They are widely used for client-server applications and network communication.
o Sockets support both stream (TCP) and datagram (UDP) communication.

5. Semaphores and Mutexes: Semaphores and mutexes are synchronization mechanisms that
are used to control access to shared resources, preventing race conditions and ensuring
mutual exclusion. They are particularly useful for coordinating concurrent access to critical
sections of code. (A pipe-based IPC sketch follows this list.)
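
To make method 3 concrete, here is a hedged sketch of pipe-based IPC on a POSIX system: a
parent creates a pipe, forks a child, and the two exchange a message through it. The message
text is arbitrary.

/* pipe_demo.c - a minimal sketch of IPC through a pipe (POSIX). */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                      /* fd[0]: read end, fd[1]: write end */
    if (pipe(fd) < 0) { perror("pipe"); return 1; }

    if (fork() == 0) {
        /* Child: close the unused write end, then read the message. */
        close(fd[1]);
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
        close(fd[0]);
        return 0;
    }

    /* Parent: close the unused read end, write, then wait for the child. */
    close(fd[0]);
    const char msg[] = "hello through the pipe";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}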

Threads: Introduction and Thread States


Introduction to Threads:
A thread is the smallest unit of execution within a process. Threads are often referred to as
"lightweight processes" because they share the same memory space as the process and can
execute independently. Multiple threads within a single process can work together to perform
tasks concurrently. Here's an introduction to threads.

Benefits of Threads:
 Improved concurrency: Threads allow multiple tasks to be executed concurrently within
the same process, potentially improving system performance.
 Resource efficiency: Threads share resources like memory, reducing the overhead
associated with creating and managing separate processes.
 Faster communication: Threads within the same process can communicate more efficiently
than separate processes since they share memory

Types of Threads:
Threads are of two types. These are described below.
● User Level Thread
● Kernel Level Thread

User Level Threads


A user-level thread is a type of thread that is not created using system calls; the kernel has no part
in the management of user-level threads, so they can be easily implemented by the user with a
thread library. Because the kernel sees only the enclosing process, it manages all of a process's
user-level threads as a single unit. Let’s look at the advantages and disadvantages of user-level
threads.
Advantages of User-Level Threads
● Implementation of the User-Level Thread is easier than Kernel Level Thread.
● Context Switch Time is less in User Level Thread.
● User-Level Thread is more efficient than Kernel-Level Thread.
● Because of the presence of only Program Counter, Register Set, and Stack Space, it has a simple
representation.
Disadvantages of User-Level Threads
● There is a lack of coordination between Thread and Kernel.
● In case of a page fault, the whole process can be blocked.

Kernel Level Threads


A kernel-level thread is a type of thread that the operating system recognizes and manages
directly. The kernel maintains its own thread table to keep track of all threads in the system, and
the operating system kernel handles thread management. Kernel threads have somewhat longer
context-switching times.
Advantages of Kernel-Level Threads
● The kernel has up-to-date information on all threads.
● Applications whose threads block frequently are better handled by kernel-level threads.
● Whenever a process requires more processing time, the kernel can allocate more time
to it.
Disadvantages of Kernel-Level threads
● Kernel-Level Thread is slower than User-Level Thread.
● Implementation of this type of thread is a little more complex than a user-level thread.

What is Multi-Threading?
A thread is also known as a lightweight process. The idea is to achieve parallelism by dividing a
process into multiple threads. For example, in a browser, multiple tabs can be different threads.
MS Word uses multiple threads: one thread to format the text, another thread to process inputs,
etc. More advantages of multithreading are discussed below.

Multithreading is a technique used in operating systems to improve the performance and
responsiveness of computer systems. Multithreading allows multiple threads (i.e., lightweight
processes) to share the same resources of a single process, such as the CPU, memory, and I/O
devices.
Thread States:
Threads go through different states during their lifecycle, just like processes. The typical thread
states are:
1. New: In this state, a thread is created but has not yet started execution.
2. Runnable: A thread in the runnable state is ready to execute and waiting for the CPU. It is
typically waiting in a queue and is eligible for execution.
3. Running: A thread in the running state is actively executing its code on the CPU.

4. Blocked (or Waiting): When a thread cannot continue its execution due to the need for some
external event (e.g., I/O operation), it enters the blocked state and is put on hold until the event
occurs.
5. Terminated: When a thread completes its execution or is explicitly terminated, it enters the
terminated state. Resources associated with the thread are released

Thread Transitions:
Threads transition between these states based on various factors, including their priority, the
availability of CPU time, and external events. Thread scheduling algorithms determine which
thread runs next and aim to provide fair execution and efficient resource utilization.

Thread Management:
Operating systems provide APIs and libraries to create, manage, and synchronize threads. Popular
programming languages like C, C++, Java, and Python have built in support for threading. Threads
can communicate and synchronize their activities using synchronization primitives like
semaphores, mutexes, and condition variables.

Effective thread management is crucial for achieving concurrent execution in applications,
improving performance, and making efficient use of modern multicore processors. However, it
also introduces challenges related to synchronization, data sharing, and avoiding race conditions.

Thread Operation:
Thread operations are fundamental for creating, managing, and controlling threads within a
program or process. Here are the key thread operations:
1. Thread Creation: To create a new thread, a program typically calls a thread creation function or
constructor provided by the programming language or threading library. The new thread starts
executing a specified function or method concurrently with the calling thread.
2. Thread Termination: Threads can terminate for various reasons, such as completing their tasks,
receiving a termination signal, or encountering an error. Proper thread termination is essential to
release resources and avoid memory leaks.
3. Thread Synchronization: Thread synchronization is crucial to coordinate the execution of
multiple threads. Synchronization mechanisms like mutexes, semaphores, and condition variables
are used to prevent race conditions and ensure orderly access to shared resources.
4. Thread Joining: A thread can wait for another thread to complete its execution by using a
thread join operation. This is often used to wait for the results of a thread's work before
continuing with the main thread.
5. Thread Prioritization: Some threading models or libraries allow you to set thread priorities,
which influence the order in which threads are scheduled to run by the operating system.
6. Thread Communication: Threads communicate with each other by passing data or signals.
Inter-thread communication mechanisms include shared memory, message queues, pipes, and
other IPC methods. (A pthreads sketch of creation, synchronization, and joining follows.)
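
The hedged sketch below exercises operations 1, 3, and 4 with POSIX threads: two threads
increment a shared counter under a mutex, and the main thread joins them. Compile with
-pthread; the iteration count is arbitrary.

/* threads_demo.c - thread creation, synchronization, and joining (POSIX).
   Build: cc threads_demo.c -pthread */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Each thread's work: increment the shared counter under the mutex. */
static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* thread synchronization */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;                      /* thread termination */
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);  /* thread creation */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);                   /* thread joining */
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);       /* always 200000 */
    return 0;
}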

Threading Models:
Threading models define how threads are created, scheduled, and managed within a program or
an operating system. Different threading models offer various advantages and trade-offs,
depending on the application's requirements. Here are common threading models:
1. Many-to-One Model:
• In this model, many user-level threads are mapped to a single kernel-level thread. It
is simple to implement and suitable for applications with infrequent thread
blocking.
• However, it doesn't fully utilize multiprocessor systems, since only one thread can
run at a time.
2. One-to-One Model:
• In the one-to-one model, each user-level thread corresponds to a separate kernel-
level thread. This model provides full support for multithreading and can take
advantage of multiprocessor systems.
• It offers fine-grained control but may have higher overhead due to the increased
number of kernel threads.
3. Many-to-Many Model:
• The many-to-many model combines characteristics of both the many-to-one and
one-to-one models. It allows multiple user-level threads to be multiplexed onto a
smaller number of kernel threads.
• This model seeks to balance control and efficiency by allowing both user level and
kernel-level threads.
4. Hybrid Model:
• A hybrid threading model combines different threading approaches to take
advantage of both user-level and kernel-level threads.
• For example, it might use one-to-one for CPU-bound threads and many-to-one for
I/O-bound threads. Hybrid models aim to strike a balance between performance
and resource utilization.
The choice of a threading model depends on factors like the application's requirements, the
platform's support, and the trade-offs between control, resource usage, and performance. It's
essential to select the appropriate model to achieve the desired concurrency and efficiency in a
multithreaded application.

Benefits of Thread in Operating System


● Responsiveness: If a process is divided into multiple threads and one thread completes its
execution, its output can be returned immediately.
● Faster context switch: Context-switch time between threads is lower than for a process
context switch. Process context switching requires more overhead from the CPU.
● Effective utilization of multiprocessor systems: If we have multiple threads in a single process,
then we can schedule multiple threads on multiple processors. This makes process execution
faster.
● Resource sharing: Resources like code, data, and files can be shared among all threads within a
process. Note: stacks and registers can’t be shared among threads; each thread has its own
stack and registers.
● Communication: Communication between multiple threads is easier, as the threads share a
common address space, while between processes we have to follow specific communication
techniques.
Processor Scheduling:
Processor scheduling is a core component of operating systems that manages the execution of
processes and threads on a CPU. It aims to allocate CPU time efficiently and fairly to multiple
competing processes. Below, we'll explore various aspects of processor scheduling in detail.
Here’s a simplified explanation of the three types of queues:
1. Job Queue: Think of this as the waiting line for all the processes that want to be executed
by the system. These are the tasks or programs that have been added to the system and
are waiting to be handled.
2. Ready Queue: This is like a line of tasks that are ready to be picked up and worked on by
the CPU (the brain of the computer). The processes here are already loaded into the
computer’s memory and are just waiting for the CPU to start executing them.
3. Device Queue: This queue holds processes that are waiting for access to specific hardware,
like a printer or a disk drive. Each type of device has its own queue, and processes will wait
in the respective queue until the device is ready to handle their request.

In short:
 Job Queue: All processes waiting to start.
 Ready Queue: Processes waiting for the CPU to execute them.
 Device Queue: Processes waiting for a specific device (like a printer) to be available
Scheduling Levels
Scheduling levels, also known as scheduling domains, represent the different stages at which
scheduling decisions are made within an operating system. These levels help determine which
process or thread gets access to the CPU at any given time. There are typically three primary
scheduling levels:

1. Long-Term Scheduling (Job Scheduling)


Objective: Decides which new processes should be loaded into the system for execution.
Role: Selects processes from the job pool (a queue of new processes) and brings them into
memory.
Characteristics:
 It manages when and how many processes can enter the system, based on available
resources like memory and CPU load.
 It tries to balance CPU-bound processes (tasks that use a lot of CPU) with I/O-bound
processes (tasks that rely more on input/output operations like file handling).
 It operates slowly and is invoked infrequently; minutes may pass between scheduling decisions.

2. Medium-Term Scheduling
Objective: Manages memory by deciding which processes should be swapped in and out of
memory.
Role: Decides which processes in memory should be temporarily moved to secondary storage
(swapped out) or brought back into memory (swapped in).
Characteristics:

 It helps manage memory effectively by ensuring the system doesn't become overloaded
with processes.

 It may swap processes out when they are waiting for I/O or when the system’s memory is
running low.

 This scheduling happens faster than long-term scheduling, usually within seconds or
minutes.

3. Short-Term Scheduling (CPU Scheduling)


Objective: Decides which process from the ready queue should get access to the CPU.
Role: The primary goal is to optimize CPU usage, system responsiveness, and throughput (how
many processes complete in a given time).
Characteristics:
 It works very quickly, making decisions in milliseconds or even microseconds.
 The scheduler considers factors like process priority, how long a process will run, and
fairness in allocating CPU time.
 It ensures that CPU time is shared efficiently among processes, making sure the system
responds quickly and keeps processes running smoothly.
Example: In a multi-tasking system, the short-term scheduler decides which process should run
next, giving time to each process based on its priority and time limits.
In Summary:
 Long-Term Scheduling: Decides which processes can enter the system, balancing system
load (slow process).
 Medium-Term Scheduling: Manages memory by swapping processes in and out (faster
than long-term, but slower than short-term).
 Short-Term Scheduling: Allocates CPU time to processes, optimizing performance and
responsiveness (very fast process).
Each of these scheduling levels plays a vital role in maintaining the efficiency and performance of
the operating system.

Preemptive vs. Non-preemptive Scheduling

Scheduling algorithms can be divided into two categories with respect to how they deal with
clock interrupts.
Non-preemptive Scheduling: A scheduling discipline is non-preemptive if, once a process has been
given the CPU, the CPU cannot be taken away from that process.

Preemptive Scheduling: A scheduling discipline is preemptive if, once a process has been given the
CPU, the CPU can be taken away. (A worked non-preemptive example follows.)
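
As a worked illustration of non-preemptive scheduling, the hedged sketch below computes waiting
and turnaround times under first-come, first-served (FCFS) order; the burst times are made-up
sample data.

/* fcfs_demo.c - non-preemptive FCFS scheduling on sample data. */
#include <stdio.h>

int main(void) {
    /* Hypothetical CPU burst times (ms) for processes P1..P4,
       all assumed to arrive at time 0 in this order. */
    int burst[] = {24, 3, 3, 10};
    int n = sizeof burst / sizeof burst[0];

    int wait = 0, total_wait = 0, total_turnaround = 0;
    for (int i = 0; i < n; i++) {
        int turnaround = wait + burst[i];   /* finish time of this process */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, wait, turnaround);
        total_wait += wait;
        total_turnaround += turnaround;
        wait += burst[i];                   /* the CPU is never preempted */
    }
    printf("average waiting=%.2f average turnaround=%.2f\n",
           (double)total_wait / n, (double)total_turnaround / n);
    return 0;
}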
Priorities in Scheduling
In the context of processor scheduling, priorities play a crucial role in determining the order in
which processes or threads are granted access to the CPU. Prioritization is used to manage the
execution of processes based on their relative importance or urgency. Let's delve into the concept
of priorities in scheduling:
1. Importance of Priorities: Priorities are assigned to processes or threads to reflect their
significance within the system. High-priority processes are given preference in CPU
allocation, ensuring that critical tasks are executed promptly. Here's how priorities are used
and their significance:
 Responsiveness: High-priority processes are scheduled more frequently, ensuring
that tasks with immediate user interaction or real-time requirements receive timely
CPU attention. This enhances system responsiveness and user experience.
 Resource Allocation: Priorities help allocate CPU resources efficiently. Processes
that require more CPU time or have higher system importance can be assigned
higher priorities.
2. Priority Levels:
Priority levels can vary from system to system, with different operating systems using
distinct scales to represent priorities. Common approaches include:
 Absolute Priorities: In some systems, priorities are assigned as absolute values,
with higher numbers indicating higher priority. For example, a process with priority
10 is more important than a process with priority 5.
 Relative Priorities: In other systems, priorities are assigned relative to each other,
with lower numbers indicating higher priority. A process with priority 1 is more
important than a process with priority 5. This approach is sometimes used to avoid
confusion.
 Priority Ranges: Some systems categorize processes into priority ranges, such as
"high," "medium," and "low." Each range represents a group of priorities,
simplifying the priority assignment process.
3. Static vs. Dynamic Priorities: Priorities can be classified as static or dynamic:

 Static Priorities: In static priority scheduling, priorities are assigned to processes at
the time of their creation and remain fixed throughout the process's lifetime.
Changes to priorities require manual intervention or administrative actions.
 Dynamic Priorities: Dynamic priority scheduling allows priorities to change during
the execution of a process based on factors like aging, process behavior, and
resource usage. This approach adapts to the system's current workload and
requirements.
4. Priority Inversion: Priority inversion is a situation in which a lower-priority process holds a
resource required by a higher-priority process. This can cause a priority inversion anomaly,
where the higher-priority process is effectively blocked by the lower-priority process. To
address this issue, priority inheritance or priority ceiling protocols are used to temporarily
boost the priority of the lower-priority process.

Demand Scheduling
Demand scheduling (also called event-driven scheduling or on-demand scheduling) is a type of scheduling mechanism
where processes request CPU time only when they need it, rather than receiving a fixed time slice
or being scheduled according to a pre-set policy. This type of scheduling is typically used in
systems that require quick responses to specific events, such as interactive or event-driven
systems. Here's a simplified explanation:
Key Characteristics of Demand Scheduling:
1. Event-Driven:
In demand scheduling, processes don’t run continuously. Instead, they signal or generate
events when they need CPU time. These events might be caused by user actions (e.g.,
clicking a button) or system events (e.g., a device signal). The process only asks for CPU
time when it’s ready or requires attention.
2. Resource Allocation:
When a process generates an event (request), the scheduler gives it access to the CPU. The
scheduler typically grants CPU time based on a first-come, first-served basis. So, whichever
process requests the CPU first gets it, ensuring processes are handled in the order they
arrive.

3. Low Response Time:
Demand scheduling is ideal for systems where quick, interactive responses are important.
For example, in a GUI-based application, when you click a button, the associated action
should be executed immediately. The system responds without waiting for a pre-defined
time slice.
4. Examples:
o User Interactions: For instance, in graphical user interfaces (GUIs), when a user
clicks a button, enters text, or interacts with the system, demand scheduling
ensures that the event is processed right away.
o Real-Time Systems: If a system has sensors or devices that trigger events, those
events will be processed promptly based on the demand.

Real-Time Scheduling:
Real-time scheduling is used in systems with time-critical tasks where meeting specific deadlines is
crucial. These systems include applications like avionics, industrial control systems, medical
devices, and telecommunications.
Real-time scheduling is classified into two categories: hard real-time and soft real-time.
Hard Real-Time Scheduling:

 In hard real-time systems, missing a task's deadline is unacceptable and can lead to system
failure. Schedulers are designed to ensure that critical tasks meet their strict timing
requirements.
 The scheduler prioritizes tasks based on their importance and ensures that high-priority
tasks are executed before lower-priority ones. This may involve preemptive scheduling.
 Examples include flight control systems, medical equipment, and automotive safety
systems
Soft Real-Time Scheduling:

 In soft real-time systems, occasional deadline misses are tolerable, and the system can
recover. While meeting deadlines is still a priority, there is some flexibility.
 The scheduler aims to maximize the number of deadlines met and minimize the number of
missed deadlines. Tasks are often assigned priorities based on their timing constraints.
 Examples include multimedia applications, online gaming, and streaming services.
Deterministic Scheduling: Real-time scheduling algorithms aim for determinism, ensuring that
tasks are executed predictably and consistently. This is essential for maintaining system reliability.
