2. Shell: The shell is known as the command interpreter. It is a set of programs used to interact
with the application programs, and it is responsible for executing the instructions given to the OS
(called commands).
2. System View: From the computer's point of view, an operating system is a control program
that manages the execution of user programs to prevent errors and improper use of the
computer. It is concerned with the operation and control of I/O devices.
3. Memory Management: The operating system is responsible for the following activities in
connection with memory management:
Keep track of which parts of memory are currently being used and by whom.
Decide which processes to load when memory space becomes available.
Allocate and de-allocate memory space as needed.
4. File System Management: A file is a collection of related information defined by its creator.
Commonly, files represent programs (both source and object forms) and data. The operating
system is responsible for the following activities in connection with file management:
7. Security and Access Control: It enforces security policies, user authentication, and access
controls to protect the system from unauthorized access and data breaches. The OS also provides
mechanisms for handling errors and exceptions, preventing system crashes due to software bugs.
8. Networking: In modern operating systems, networking capabilities are integrated to allow
communication over networks, which is vital for internet connectivity and networked applications.
A distributed system is a collection of processors that do not share memory or a clock. Each
processor has its own local memory.
The processors in the system are connected through a communication network.
Communication takes place using a protocol.
A distributed system provides user access to various system resources.
Access to a shared resource allows:
✦ Computation speed-up ✦ Increased data availability ✦ Enhanced reliability
9. Task Scheduling: The OS employs scheduling algorithms to determine the order in which
processes are executed, ensuring fair allocation of CPU time and system responsiveness.
10. Secondary-Storage Management: Since main memory (primary storage) is volatile and too
small to accommodate all data and programs permanently, the computer system must provide
secondary storage to back up main memory. Most modern computer systems use disks as the
principal on-line storage medium for both programs and data. The operating system is responsible
for the following activities in connection with disk management: ✦ Free-space management
✦ Storage allocation ✦ Disk scheduling
1. Windows: Microsoft Windows is a widely used operating system for personal computers, known
for its graphical user interface and broad application support.
2. Linux: Linux is an open-source operating system known for its stability, security, and flexibility. It
comes in various distributions, such as Ubuntu, CentOS, and Debian.
3. macOS: macOS is the OS developed by Apple for their Macintosh computers, known for its user-
friendly interface and integration with Apple hardware.
4. Unix: Unix is an older, robust OS that has influenced many other operating systems, including
Linux.
5. Android: Android is a popular mobile operating system used in smartphones and tablets.
6. iOS: iOS is Apple's mobile operating system used in iPhones and iPads.
Job Pool: In a multi-programmed batch system, there is a job pool that contains a
collection of batch jobs. These jobs are ready to run and are loaded into memory as space
becomes available.
Job Scheduling: The operating system employs job scheduling algorithms to select the next
job from the job pool and load it into memory. This reduces idle time of the CPU
and improves system throughput (a small selection sketch in C follows this list).
Memory Management: Multi-programmed batch systems manage memory efficiently by
allocating and de-allocating memory space for each job. Jobs may need to be swapped in
and out of memory to make the best use of available resources.
I/O Overlap: These systems aim to overlap I/O operations with CPU processing. While one
job is waiting for I/O, another job can utilize the CPU, enhancing overall system
performance.
Job Prioritization: Jobs are prioritized based on their characteristics and requirements.
High-priority jobs may be selected for execution before lower-priority ones.
Batch Job Execution: Each job is executed as a separate program, similar to simple batch
systems. It runs until it completes or is blocked by I/O, at which point the CPU scheduler
selects the next job for execution.
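To make the job-pool and prioritization ideas above concrete, here is a minimal C sketch (not from the original text) of how a scheduler might pick the next job. The struct job fields and the higher-number-wins priority convention are illustrative assumptions:

```c
#include <stdio.h>

/* Hypothetical job descriptor; field names are illustrative, not from the text. */
struct job {
    int id;
    int priority;   /* higher value = more important (one common convention) */
    int loaded;     /* 1 if the job is already in main memory */
};

/* Select the highest-priority job from the pool that is loaded in memory.
   Returns -1 if no job is ready. */
int select_next_job(struct job pool[], int n) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (pool[i].loaded && (best == -1 || pool[i].priority > pool[best].priority))
            best = i;
    }
    return best;
}

int main(void) {
    struct job pool[] = { {1, 2, 1}, {2, 5, 1}, {3, 9, 0} };  /* job 3 not in memory */
    int next = select_next_job(pool, 3);
    if (next >= 0)
        printf("dispatch job %d (priority %d)\n", pool[next].id, pool[next].priority);
    return 0;
}
```

Running this prints "dispatch job 2 (priority 5)": job 3 has the highest priority but is not yet loaded into memory, so the scheduler skips it, matching the memory-management behavior described above.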
Personal-Computer Systems
Personal Computer (PC) Systems are designed for individual users and small-scale computing
needs. Here are the key characteristics:
Single User: PC systems are typically single-user systems, designed for use by a single
individual.
User-Friendly GUI: They often have a graphical user interface (GUI) that makes it easy for
users to interact with the system.
Limited Resource Sharing: PC systems are not designed for heavy multi-user interaction or
resource sharing. They focus on providing resources to a single user's tasks.
Broad Application: PC systems are used for a wide range of applications, from word
processing and web browsing to gaming and multimedia.
Operating Systems: Common PC operating systems include Microsoft Windows, macOS,
and various Linux distributions.
Distributed Systems:
In a distributed system, the different machines are connected in a network and each machine has its
own processor and its own local memory. In this system, the operating systems on all the machines
work together to manage the collective network resources. It can be classified into two categories:
1. Client-Server systems
2. Peer-to-Peer systems
Timing Constraints: Real-time systems have strict timing constraints, and tasks must be
completed within specific time limits.
Deterministic Behavior: These systems aim for deterministic behavior, ensuring that the
system's response is predictable and consistent.
Hard and Soft Real-Time: Real-time systems can be classified as hard real-time (where
missing a deadline is catastrophic) or soft real-time (where occasional missed deadlines are
acceptable).
In hard real-time systems, the primary goal is to ensure that critical tasks are
completed within a strict deadline. These systems are used in environments where
failure to meet timing constraints could result in catastrophic consequences. Every
component of a hard real-time system, including the operating system, must be
designed to provide predictable and bounded delays. Examples include air traffic
control systems, pacemakers, and anti-lock braking systems, where even a minor
delay can cause system failure or danger to human life. Hard real-time systems
often have dedicated hardware and real-time operating systems (RTOS) designed to
handle tasks with precise timing requirements, ensuring that data retrieval and
execution happen within a guaranteed timeframe.
Soft real-time systems, by contrast, are less rigid in their timing requirements. While critical
tasks are still given priority over other tasks, the system does not guarantee that
they will always complete within the strict deadlines. As a result, missing a deadline
might cause degraded performance or inconvenience, but not system failure. These
systems can handle tasks that are not as time-sensitive alongside real-time tasks.
Common examples include multimedia systems, online transaction systems, and
video streaming applications. These systems can tolerate occasional delays or
deadline misses without critical consequences.
Applications: Real-time systems are used in areas like aviation (flight control systems),
automotive (engine control units), and industrial automation (robotics).
Challenges: Developing real-time systems is challenging due to the need for precise timing,
and they often require specialized hardware and software.
Both distributed systems and real-time systems are specialized types of computer systems, each
with its unique requirements and applications. Distributed systems focus on resource sharing and
scalability across multiple machines, while real-time systems prioritize time-bound responses and
determinism.
Processes
Introduction to Processes:
In the context of operating systems, a process is a fundamental concept that represents the
execution of a program. It's a unit of work in a computer system that can be managed and
scheduled by the operating system. Here's an overview of processes:
A process includes a program's code, data, and execution context (program counter, registers,
stack), operating in isolated memory to prevent interference. It enables multitasking and
concurrency by running multiple programs simultaneously. Processes communicate via OS-
provided inter-process communication mechanisms.
Process States:
Processes go through different states during their lifecycle. These states represent the different
stages a process can be in (a small C sketch of the transitions follows the list). The typical process states are:
New State: The process is about to be created but does not yet exist. The
program is present in secondary memory and will be picked up by the OS to create
the process.
Ready State: New -> Ready to run. After the creation of a process, the process enters the
ready state i.e. the process is loaded into the main memory. The process here is ready to
run and is waiting to get the CPU time for its execution. Processes that are ready for
execution by the CPU are maintained in a queue called a ready queue for ready processes.
Run State: The process is chosen from the ready queue by the OS for execution and the
instructions within the process are executed by any one of the available processors.
Blocked or Wait State: Whenever the process requests I/O, needs input from the
user, or needs access to a critical region, it enters the blocked or wait state. The process
continues to wait in the main memory and does not require the CPU. Once the I/O operation is
completed, the process goes back to the ready state.
Terminated or Completed State: The process is killed and its PCB is deleted. The resources
allocated to the process are released or deallocated.
Suspend Ready: A process that was initially in the ready state but was swapped out of main
memory and placed onto external storage. The process transitions back to the ready state
whenever it is brought back into main memory.
Suspend Wait or Suspend Blocked: Similar to suspend ready, but applies to a process that
was performing an I/O operation when a shortage of main memory caused it to be moved to
secondary memory. When the I/O completes, it may move to the suspend ready state.
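A minimal C sketch of these states and a few of the transitions described above; the enum names and event strings are illustrative, not a standard API:

```c
#include <stdio.h>
#include <string.h>

/* The seven states described above, as a C enum (names are illustrative). */
enum proc_state {
    NEW, READY, RUNNING, BLOCKED, TERMINATED, SUSPEND_READY, SUSPEND_BLOCKED
};

/* A few of the transitions described in the text. */
enum proc_state on_event(enum proc_state s, const char *event) {
    if (s == NEW && !strcmp(event, "admit"))        return READY;
    if (s == READY && !strcmp(event, "dispatch"))   return RUNNING;
    if (s == RUNNING && !strcmp(event, "io_wait"))  return BLOCKED;
    if (s == BLOCKED && !strcmp(event, "io_done"))  return READY;
    if (s == READY && !strcmp(event, "swap_out"))   return SUSPEND_READY;
    if (s == BLOCKED && !strcmp(event, "swap_out")) return SUSPEND_BLOCKED;
    if (s == RUNNING && !strcmp(event, "exit"))     return TERMINATED;
    return s;  /* unrecognized event: state unchanged */
}

int main(void) {
    enum proc_state s = NEW;
    s = on_event(s, "admit");     /* NEW -> READY */
    s = on_event(s, "dispatch");  /* READY -> RUNNING */
    s = on_event(s, "io_wait");   /* RUNNING -> BLOCKED */
    printf("state = %d\n", s);    /* prints 3 (BLOCKED) */
    return 0;
}
```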
Process Management:
Process management is a critical aspect of an operating system's responsibilities. It involves
various tasks related to process creation, scheduling, and termination. Here's an overview of
process management:
1. Process Creation: When a user or system request initiates a new process, the OS is responsible
for creating the process. This includes allocating memory, initializing data structures, and setting
up the execution environment (a minimal POSIX creation sketch appears at the end of this section).
2. Process Scheduling: The OS uses scheduling algorithms to determine which process to run next
on the CPU. It ensures fair allocation of CPU time to multiple processes and aims to maximize
system throughput.
3. Process Termination: When a process completes its execution or is terminated due to an error
or user action, the OS must clean up its resources, release memory, and remove it from the
system.
4. Process Communication: The OS provides mechanisms for processes to communicate and share
data. This can include inter-process communication (IPC) methods like message passing or shared
memory.
5. Process Synchronization: When multiple processes are accessing shared resources, the OS
manages synchronization to prevent data corruption and race conditions.
6. Process Priority and Control: The OS allows users to set process priorities, which influence their
order of execution. It also provides mechanisms to control and monitor processes.
7. Process State Transitions: The OS manages the transitions between different process states,
ensuring that processes move between states as required.
Effective process management is essential for the efficient and stable operation of a computer
system, enabling multiple programs to run simultaneously, share resources, and respond to user
and system needs.
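On POSIX systems, process creation, termination, and cleanup can be observed with the standard fork()/exec()/wait() calls. A minimal sketch, assuming a Unix-like environment:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();            /* process creation (POSIX) */
    if (pid < 0) {
        perror("fork");            /* creation failed */
        exit(1);
    } else if (pid == 0) {
        /* child: replace this process image with a new program */
        execlp("echo", "echo", "hello from the child", (char *)NULL);
        perror("execlp");          /* only reached if exec fails */
        exit(1);
    } else {
        int status;
        waitpid(pid, &status, 0);  /* parent waits; OS then reclaims the child's resources */
        printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    }
    return 0;
}
```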
Program vs. Process

Program: A program is a passive entity consisting of a set of instructions or code written to perform
a specific task. It is stored in the system's memory, typically as an executable file, but does not
actively execute or use system resources until it is run.
Process: A process is an active entity that represents the instance of a program in execution. Once a
program is executed, the system creates a process to run it, allocating necessary resources like CPU
time, memory, and I/O devices.

Program: A program is a static file on disk, waiting to be loaded into memory for execution.
Process: A process is dynamic and changes state as it executes the instructions, interacts with the
operating system, and consumes resources.

Program: A program itself doesn't do anything until it's executed.
Process: Each process has its own memory space and execution context, including registers and
program counters, making it independent of other processes.

Program: Examples include files like calculator.exe or a Python script like myscript.py.
Process: For example, when you open a calculator on your computer, the calculator program
becomes an active process, utilizing system resources to operate.
Process Control Block (PCB)
A Process Control Block (PCB) is a data structure used by the operating system to manage information
about a process. The PCB keeps track of many important pieces of information needed to
manage processes efficiently. The key fields are described below, followed by a simplified C sketch.
Pointer: It is a stack pointer that is required to be saved when the process is switched from
one state to another to retain the current position of the process.
Process state: It stores the respective state of the process.
Process number: Every process is assigned a unique id known as process ID or PID which
stores the process identifier.
Program counter: Program Counter stores the counter, which contains the address of the
next instruction that is to be executed for the process.
Registers: When a running process's time slice expires, the current values of its
process-specific CPU registers are saved in the PCB and the process is swapped out. When the
process is scheduled to run again, the register values are read from the PCB and written back to
the CPU registers. This is the main purpose of the register fields in the PCB.
Memory limits: This field contains information about the memory-management
structures the operating system uses for the process, such as page tables or segment tables.
List of Open files: This information includes the list of files opened for a process.
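A simplified C struct mirroring the fields listed above can make the PCB concrete. This is an illustrative sketch, not a real kernel structure (Linux's task_struct, for example, is far larger), and all field names are assumptions:

```c
#include <sys/types.h>

#define MAX_OPEN_FILES 16

/* Simplified PCB mirroring the fields described in the text. */
struct pcb {
    struct pcb   *next;                 /* pointer used to link PCBs in queues */
    int           state;                /* READY, RUNNING, BLOCKED, ... */
    pid_t         pid;                  /* process number (unique ID) */
    unsigned long program_counter;      /* address of the next instruction */
    unsigned long registers[16];        /* saved CPU registers for context switches */
    void         *page_table;           /* memory-management info (page/segment tables) */
    unsigned long mem_base, mem_limit;  /* memory limits for this process */
    int open_files[MAX_OPEN_FILES];     /* list of open file descriptors */
};
```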
After the interrupt is handled, the processor goes back to the original task. To avoid repeated
interrupt signals, the processor informs the device that the request is acknowledged. However,
saving registers and switching tasks takes time, causing a delay known as Interrupt Latency.
A single computer can execute only one instruction at a time. But, because it can be
interrupted, it can manage the order in which programs or sets of instructions are performed. This is known
as multitasking. It allows the user to do many different things at once, and the computer
takes turns managing the programs that the user starts. Of course, the computer operates at speeds
that make it seem like all user tasks are being performed simultaneously.
Types of Interrupts:
A hardware interrupt
A hardware interrupt is a signal from a hardware device to the processor, indicating it needs attention. For example,
pressing a key or moving a mouse generates a hardware interrupt, prompting the processor to read the input. These
interrupts occur asynchronously, independent of the processor clock. To handle them effectively, interrupt signals are
synchronized with the processor clock and processed only at instruction boundaries.
Each hardware device is typically associated with a unique IRQ (Interrupt Request) signal, allowing the system to
identify and prioritize the requesting device efficiently. Hardware interrupts are further classified into two types:
Maskable interrupt
A maskable interrupt is an interrupt signal that can be turned on or off by the processor using a special
register called the interrupt mask register. This register has bits that correspond to each interrupt signal.
Depending on the system, a bit may enable or disable the interrupt. When an interrupt is disabled (masked),
the processor ignores the interrupt signal (a small register-mask sketch in C follows).
Non-maskable interrupts (NMI), on the other hand, cannot be turned off. These interrupts are very
important and must always be handled immediately, like signals from a watchdog timer indicating a critical
error.
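A small sketch of masking in C; the register address, bit layout, and the convention that a set bit disables an IRQ are all invented for illustration and vary by hardware:

```c
#include <stdint.h>

/* Hypothetical memory-mapped interrupt mask register; the address and bit
   layout are made up for illustration and differ on real hardware. */
#define INT_MASK_REG  (*(volatile uint32_t *)0x40000010u)

/* In this imagined layout, setting a bit masks (disables) that IRQ line. */
static inline void mask_irq(unsigned irq)   { INT_MASK_REG |=  (1u << irq); }
static inline void unmask_irq(unsigned irq) { INT_MASK_REG &= ~(1u << irq); }

/* Example: mask the timer IRQ (line 3 in this made-up layout) around a
   critical section, then re-enable it afterwards. */
void critical_section(void) {
    mask_irq(3);
    /* ... code that must not be interrupted by the timer ... */
    unmask_irq(3);
}
```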
Spurious interrupt
A spurious interrupt is an interrupt that happens, but there is no clear source for it. It's also sometimes
called a phantom or ghost interrupt. If the interrupting device is cleared too late during the ISR, the processor
may mistakenly think another interrupt is pending, even though there is none. This can lead to issues like
system freezes or unpredictable behaviour. To prevent this, the ISR should check all interrupt sources and
only act if there is a real interrupt.
A software interrupt
A software interrupt occurs when the processor triggers an interrupt by executing a special instruction or when
certain conditions arise. Each software interrupt is linked to a specific handler.
These interrupts can be intentionally caused by special instructions designed to request services from the operating
system or interact with device drivers, similar to calling a subroutine.
However, software interrupts can also happen unexpectedly due to program errors, and these are called traps or
exceptions.
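On Linux, system calls reach the kernel through exactly this kind of trap: a special instruction (`syscall` on x86-64, historically `int 0x80` on 32-bit x86) transfers control to the kernel's handler. A minimal, Linux-specific illustration using the standard syscall() wrapper:

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    /* getpid() ultimately executes a trapping instruction that requests
       a service from the operating system, as described above. */
    long pid = syscall(SYS_getpid);
    printf("pid obtained via software interrupt/trap: %ld\n", pid);
    return 0;
}
```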
4. Sockets:
o Sockets are a network-based IPC mechanism used for communication between
processes on different machines over a network.
o They are widely used for client-server applications and network communication.
o Sockets support both stream (TCP) and datagram (UDP) communication.
5. Semaphores and Mutexes: Semaphores and mutexes are synchronization mechanisms that
are used to control access to shared resources, preventing race conditions and ensuring
mutual exclusion. They are particularly useful for coordinating concurrent access to critical
sections of code (a minimal POSIX example follows).
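A minimal POSIX example of item 5, using a semaphore initialized to 1 as a mutex so two threads can increment a shared counter without a race (assumes Linux with pthreads; compile with -pthread):

```c
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

static sem_t sem;          /* binary semaphore used for mutual exclusion */
static long counter = 0;   /* shared resource */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&sem);    /* enter critical section */
        counter++;         /* no race: only one thread increments at a time */
        sem_post(&sem);    /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&sem, 0, 1);               /* initial value 1 -> acts as a mutex */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter); /* always 200000 */
    sem_destroy(&sem);
    return 0;
}
```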
Benefits of Threads:
Improved concurrency: Threads allow multiple tasks to be executed concurrently within
the same process, potentially improving system performance.
Resource efficiency: Threads share resources like memory, reducing the overhead
associated with creating and managing separate processes.
Faster communication: Threads within the same process can communicate more efficiently
than separate processes since they share memory.
Types of Threads:
Threads are of two types. These are described below.
● User Level Thread
● Kernel Level Thread
What is Multi-Threading?
A thread is also known as a lightweight process. The idea is to achieve parallelism by dividing a
process into multiple threads. For example, in a browser, multiple tabs can be different threads.
MS Word uses multiple threads: one thread to format the text, another thread to process inputs,
etc. More advantages of multithreading are discussed below.
4. Blocked (or Waiting): When a thread cannot continue its execution due to the need for some
external event (e.g., I/O operation), it enters the blocked state and is put on hold until the event
occurs.
5. Terminated: When a thread completes its execution or is explicitly terminated, it enters the
terminated state. Resources associated with the thread are released.
Thread Transitions:
Threads transition between these states based on various factors, including their priority, the
availability of CPU time, and external events. Thread scheduling algorithms determine which
thread runs next and aim to provide fair execution and efficient resource utilization.
Thread Management:
Operating systems provide APIs and libraries to create, manage, and synchronize threads. Popular
programming languages like C, C++, Java, and Python have built-in support for threading. Threads
can communicate and synchronize their activities using synchronization primitives like
semaphores, mutexes, and condition variables.
Thread Operation:
Thread operations are fundamental for creating, managing, and controlling threads within a
program or process. Here are the key thread operations (a minimal pthreads sketch follows the list):
1. Thread Creation: To create a new thread, a program typically calls a thread creation function or
constructor provided by the programming language or threading library. The new thread starts
executing a specified function or method concurrently with the calling thread.
2. Thread Termination: Threads can terminate for various reasons, such as completing their tasks,
receiving a termination signal, or encountering an error. Proper thread termination is essential to
release resources and avoid memory leaks.
3. Thread Synchronization: Thread synchronization is crucial to coordinate the execution of
multiple threads. Synchronization mechanisms like mutexes, semaphores, and condition variables
are used to prevent race conditions and ensure orderly access to shared resources.
4. Thread Joining: A thread can wait for another thread to complete its execution by using a
thread join operation. This is often used to wait for the results of a thread's work before
continuing with the main thread.
5. Thread Prioritization: Some threading models or libraries allow you to set thread priorities,
which influence the order in which threads are scheduled to run by the operating system.
6. Thread Communication: Threads communicate with each other by passing data or signals.
Interthread communication mechanisms include shared memory, message queues, pipes, and
other IPC methods.
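A minimal pthreads sketch covering creation (item 1), termination (item 2), and joining (item 4); passing the argument and result through void* casts is a common idiom, assumed safe here:

```c
#include <stdio.h>
#include <pthread.h>

/* Thread body: receives an argument, does its work, returns a result. */
static void *square(void *arg) {
    long n = (long)arg;
    return (void *)(n * n);   /* returned value is collected by pthread_join */
}

int main(void) {
    pthread_t tid;
    void *result;
    pthread_create(&tid, NULL, square, (void *)7L);  /* thread creation */
    pthread_join(tid, &result);                      /* thread joining */
    printf("7 squared = %ld\n", (long)result);       /* prints 49 */
    return 0;                 /* the worker thread has already terminated */
}
```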
Threading Models:
Threading models define how threads are created, scheduled, and managed within a program or
an operating system. Different threading models offer various advantages and trade-offs,
depending on the application's requirements. Here are common threading models:
1. Many-to-One Model:
• In this model, many user-level threads are mapped to a single kernel-level thread. It
is simple to implement and suitable for applications with infrequent thread
blocking.
• However, it doesn't fully utilize multiprocessor systems, since only one thread can
run at a time.
2. One-to-One Model:
• In the one-to-one model, each user-level thread corresponds to a separate kernel-
level thread. This model provides full support for multithreading and can take
advantage of multiprocessor systems.
• It offers fine-grained control but may have higher overhead due to the increased
number of kernel threads.
3. Many-to-Many Model:
• The many-to-many model combines characteristics of both the many-to-one and
one-to-one models. It allows multiple user-level threads to be multiplexed onto a
smaller number of kernel threads.
• This model seeks to balance control and efficiency by allowing both user-level and
kernel-level threads.
4. Hybrid Model:
• A hybrid threading model combines different threading approaches to take
advantage of both user-level and kernel-level threads.
• For example, it might use one-to-one for CPU-bound threads and many-to-one for
I/O-bound threads. Hybrid models aim to strike a balance between performance
and resource utilization.
The choice of a threading model depends on factors like the application's requirements, the
platform's support, and the trade-offs between control, resource usage, and performance. It's
essential to select the appropriate model to achieve the desired concurrency and efficiency in a
multithreaded application.
In short:
Job Queue: All processes waiting to start.
Ready Queue: Processes waiting for the CPU to execute them (a minimal queue sketch in C follows this list).
Device Queue: Processes waiting for a specific device (like a printer) to be available.
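A minimal C sketch of a FIFO ready queue, which is also the first-come, first-served order used by demand scheduling later in this section. Queuing bare PIDs is an illustrative simplification, since real kernels queue PCBs:

```c
#include <stdio.h>
#include <stdlib.h>

/* A minimal FIFO ready queue of PIDs (illustrative only). */
struct node { int pid; struct node *next; };
struct queue { struct node *head, *tail; };

void enqueue(struct queue *q, int pid) {
    struct node *n = malloc(sizeof *n);
    n->pid = pid; n->next = NULL;
    if (q->tail) q->tail->next = n; else q->head = n;
    q->tail = n;
}

int dequeue(struct queue *q) {          /* returns -1 if the queue is empty */
    if (!q->head) return -1;
    struct node *n = q->head;
    int pid = n->pid;
    q->head = n->next;
    if (!q->head) q->tail = NULL;
    free(n);
    return pid;
}

int main(void) {
    struct queue ready = {0};
    enqueue(&ready, 101);
    enqueue(&ready, 102);
    printf("next to run: %d\n", dequeue(&ready));  /* 101: first come, first served */
    return 0;
}
```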
Scheduling Levels
Scheduling levels, also known as scheduling domains, represent the different stages at which
scheduling decisions are made within an operating system. These levels help determine which
process or thread gets access to the CPU at any given time. There are typically three primary
scheduling levels:
2. Medium-Term Scheduling
Objective: Manages memory by deciding which processes should be swapped in and out of
memory.
Role: Decides which processes in memory should be temporarily moved to secondary storage
(swapped out) or brought back into memory (swapped in).
Characteristics:
It helps manage memory effectively by ensuring the system doesn't become overloaded
with processes.
It may swap processes out when they are waiting for I/O or when the system’s memory is
running low.
This scheduling happens faster than long-term scheduling, usually within seconds or
minutes.
Preemptive Scheduling: A scheduling discipline is preemptive if, once a process has been given the
CPU, the CPU can be taken away from it.
Priorities in Scheduling
In the context of processor scheduling, priorities play a crucial role in determining the order in
which processes or threads are granted access to the CPU. Prioritization is used to manage the
execution of processes based on their relative importance or urgency. Let's delve into the concept
of priorities in scheduling:
1. Importance of Priorities: Priorities are assigned to processes or threads to reflect their
significance within the system. High-priority processes are given preference in CPU
allocation, ensuring that critical tasks are executed promptly. Here's how priorities are used
and their significance:
Responsiveness: High-priority processes are scheduled more frequently, ensuring
that tasks with immediate user interaction or real-time requirements receive timely
CPU attention. This enhances system responsiveness and user experience.
Resource Allocation: Priorities help allocate CPU resources efficiently. Processes
that require more CPU time or have higher system importance can be assigned
higher priorities.
2. Priority Levels:
Priority levels can vary from system to system, with different operating systems using
distinct scales to represent priorities. Common approaches include:
Absolute Priorities: In some systems, priorities are assigned as absolute values,
with higher numbers indicating higher priority. For example, a process with priority
10 is more important than a process with priority 5.
Relative Priorities: In other systems, priorities are assigned relative to each other,
with lower numbers indicating higher priority. A process with priority 1 is more
important than a process with priority 5. Unix-style nice values, for example, follow
this lower-number-wins convention.
Priority Ranges: Some systems categorize processes into priority ranges, such as
"high," "medium," and "low." Each range represents a group of priorities,
simplifying the priority assignment process.
3. Static vs. Dynamic Priorities: Priorities can be classified as static or dynamic. Static priorities
remain fixed for a process's lifetime, while dynamic priorities are adjusted by the system at run
time in response to process behavior (for example, raised gradually to prevent starvation).
Demand Scheduling
(also called event-driven scheduling or on-demand scheduling) is a type of scheduling mechanism
where processes request CPU time only when they need it, rather than receiving a fixed time slice
or being scheduled according to a pre-set policy. This type of scheduling is typically used in
systems that require quick responses to specific events, such as interactive or event-driven
systems. Here's a simplified explanation:
Key Characteristics of Demand Scheduling:
1. Event-Driven:
In demand scheduling, processes don’t run continuously. Instead, they signal or generate
events when they need CPU time. These events might be caused by user actions (e.g.,
clicking a button) or system events (e.g., a device signal). The process only asks for CPU
time when it’s ready or requires attention.
2. Resource Allocation:
When a process generates an event (request), the scheduler gives it access to the CPU. The
scheduler typically grants CPU time based on a first-come, first-served basis. So, whichever
process requests the CPU first gets it, ensuring processes are handled in the order they
arrive.
Real-Time Scheduling:
Real-time scheduling is used in systems with time-critical tasks where meeting specific deadlines is
crucial. These systems include applications like avionics, industrial control systems, medical
devices, and telecommunications.
Real-time scheduling is classified into two categories: hard real-time and soft real-time.
Hard Real-Time Scheduling:
In hard real-time systems, missing a task's deadline is unacceptable and can lead to system
failure. Schedulers are designed to ensure that critical tasks meet their strict timing
requirements.
The scheduler prioritizes tasks based on their importance and ensures that high-priority
tasks are executed before lower-priority ones. This may involve preemptive scheduling.
Examples include flight control systems, medical equipment, and automotive safety
systems.
Soft Real-Time Scheduling:
In soft real-time systems, occasional deadline misses are tolerable, and the system can
recover. While meeting deadlines is still a priority, there is some flexibility.
The scheduler aims to maximize the number of deadlines met and minimize the number of
missed deadlines. Tasks are often assigned priorities based on their timing constraints.
Examples include multimedia applications, online gaming, and streaming services.
Deterministic Scheduling: Real-time scheduling algorithms aim for determinism, ensuring that
tasks are executed predictably and consistently. This is essential for maintaining system reliability.
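The section above does not name a specific algorithm; one classic deadline-driven policy is earliest-deadline-first (EDF), which always runs the ready task whose deadline is nearest. A minimal sketch, with deadlines measured in arbitrary ticks:

```c
#include <stdio.h>

/* Illustrative task descriptor; fields are assumptions for this sketch. */
struct task { int id; long deadline; int ready; };

/* Earliest-deadline-first: pick the ready task with the nearest deadline.
   Returns -1 if nothing is ready. */
int pick_edf(struct task t[], int n) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (t[i].ready && (best == -1 || t[i].deadline < t[best].deadline))
            best = i;
    return best;
}

int main(void) {
    struct task set[] = { {1, 500, 1}, {2, 120, 1}, {3, 60, 0} };
    int next = pick_edf(set, 3);
    printf("run task %d\n", set[next].id);  /* task 2: nearest deadline among ready tasks */
    return 0;
}
```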