Unit-V - Advanced Operating System
23CSE2010 - Operating System
Class: S.Y. PLD (SEM-II)
Unit - V: Advanced Operating System
AY 2024-2025, SEM-II
Unit-V Syllabus
Introduction to Distributed Operating Systems
• A Distributed Operating System (DOS) allows
applications to run on multiple interconnected
computers while offering enhanced communication
and integration capabilities compared to a network
operating system.
• Appears to users as a typical centralized OS.
• Utilizes multiple CPUs for better resource sharing,
including CPUs, disks, network interfaces, and nodes.
• Expands data accessibility across different sites in the
system.
Distributed Operating Systems
Importance of Distributed Operating Systems
1. Resource Sharing
• Enables sharing of hardware and software resources (e.g., CPUs,
printers, storage) across multiple systems, improving utilization.
2. Scalability
• Easily accommodates growth in users, devices, and applications
without a significant performance decline.
3. Fault Tolerance
• Provides robust performance by handling hardware or software
failures, ensuring system reliability and uptime.
4. Improved Performance
• Distributes workloads among multiple nodes, enhancing system
performance and reducing response time.
Importance of Distributed Operating Systems
5. Transparency
• Offers users a unified and coherent view of the system, hiding the
complexity of distributed components.
• Types: Access, Location, Replication, and Fault Transparency.
6. Cost-Effectiveness
• Reduces costs by utilizing low-cost hardware and combining
resources instead of relying on expensive centralized systems.
7. Support for Collaboration
• Facilitates seamless communication and data sharing among users
in different locations, enabling collaborative work.
8. High Availability
• Ensures that services are accessible even in the event of node or
network failures, minimizing downtime.
Key Characteristics of Distributed Operating Systems
1. Transparency
• Access Transparency: Users can access resources without worrying
about their location.
• Location Transparency: The physical location of resources is hidden
from users.
• Replication Transparency: Users don’t need to know about replicated
resources.
• Concurrency Transparency: Multiple users can access shared resources
without interference.
• Fault Transparency: System hides hardware or software failures from
users.
Key Characteristics of Distributed Operating Systems
2. Scalability
• The system can handle an increasing number of users, nodes, or
resources efficiently.
3. Fault Tolerance
• The system can recover from hardware or software failures and
maintain operations.
4. Resource Sharing
• Enables efficient sharing of resources like processors, memory, storage,
and peripherals.
Key Characteristics of Distributed Operating Systems
5. Concurrency
• Supports multiple processes running simultaneously across distributed
nodes.
6. Security
• Provides mechanisms for authentication, data encryption, and secure
communication between nodes.
7. High Availability
• Ensures continuous system availability by distributing workload and
replicating resources.
8. Flexibility
• Supports different types of hardware, operating systems, and network
configurations.
Real-World Examples of Distributed Operating Systems
1. Apache Hadoop
• It is a framework for distributed storage and processing of large
datasets using a cluster of computers.
• Features:
• Built on the Hadoop Distributed File System (HDFS) for
fault-tolerant storage.
• Uses the MapReduce programming model for distributed
data processing.
• Applications:
• Big Data analysis, data warehousing, and machine learning.
• Widely used by companies like Facebook, Yahoo, and
Twitter.
Real-World Examples of Distributed Operating Systems
2. Google’s MapReduce
• A programming model and processing framework designed by
Google for large-scale data processing in a distributed
environment.
• Features:
• Breaks down large tasks into smaller subtasks (Map phase)
and aggregates the results (Reduce phase).
• Provides fault tolerance and scalability for massive datasets.
• Applications:
• Used internally by Google for indexing the web, processing
logs, and building machine learning models.
• Inspired the development of Apache Hadoop.
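To make the two phases concrete, here is a minimal single-machine
word-count sketch in Python (illustrative only; a real MapReduce
framework distributes the map and reduce phases across a cluster):

    from collections import defaultdict

    def map_phase(document):
        # Map: emit a (word, 1) pair for every word in the document.
        for word in document.split():
            yield (word.lower(), 1)

    def reduce_phase(pairs):
        # Reduce: aggregate the counts emitted for each distinct word.
        counts = defaultdict(int)
        for word, count in pairs:
            counts[word] += count
        return dict(counts)

    documents = ["the quick brown fox", "the lazy dog", "the fox"]
    pairs = [p for doc in documents for p in map_phase(doc)]
    print(reduce_phase(pairs))   # {'the': 3, 'quick': 1, 'fox': 2, ...}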
Architecture of Distributed Systems
1. Layered Architecture in Distributed Systems
2. Peer-to-Peer (P2P) Architecture
3. Three-Tier Architecture
4. Service-Oriented Architecture (SOA)
System Architectures in Distributed Operating Systems
Suitability for real-time applications:
• General-purpose OS: not well-suited for real-time applications
due to variable latency.
• Real-time OS: tailored for real-time requirements, ensuring
precise timing and low latency.
Task | Execution Time (Ci) | Period (Ti) | Priority
T1   | 1                   | 4           | High
T2   | 2                   | 6           | Medium
T3   | 2                   | 8           | Low
Rate Monotonic Scheduling (RMS)
• Scheduling Order:
• T1 has the shortest period (4) and the highest
priority.
• T2 has a medium period (6) and medium priority.
• T3 has the longest period (8) and the lowest priority.
• The CPU will execute the tasks in order of their
priorities, preempting lower-priority tasks when higher-
priority ones become ready.
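The schedule above can be reproduced with a minimal Python sketch
(an illustration, not a production scheduler; the task parameters come
from the table, and 24 is the hyperperiod, i.e., the LCM of 4, 6, and 8):

    # Rate Monotonic simulation over one hyperperiod (LCM of 4, 6, 8 = 24).
    # Tasks: (name, execution time Ci, period Ti); shorter period = higher priority.
    tasks = [("T1", 1, 4), ("T2", 2, 6), ("T3", 2, 8)]
    tasks.sort(key=lambda t: t[2])          # RMS: static priority by period

    remaining = {name: 0 for name, _, _ in tasks}
    timeline = []
    for t in range(24):
        for name, c, period in tasks:       # release a new job at each period start
            if t % period == 0:
                remaining[name] += c
        for name, _, _ in tasks:            # run the highest-priority ready task
            if remaining[name] > 0:
                remaining[name] -= 1
                timeline.append(name)
                break
        else:
            timeline.append("idle")
    print(timeline)

Because the highest-priority ready task is re-selected at every time
unit, a newly released T1 job preempts T2 or T3, exactly as described
above.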
Rate Monotonic Scheduling (RMS)
Advantages
• Easy to Implement: Simple and straightforward to
apply in real-time systems.
• Optimal: If any static priority assignment can meet
deadlines, RMS can as well.
• Predictable: priorities are fixed and derived from task periods,
so scheduling behavior is predictable, unlike time-sharing
algorithms such as Round Robin.
Rate Monotonic Scheduling (RMS)
Disadvantages
• Limited Support for Aperiodic Tasks: Difficult to
handle aperiodic and sporadic tasks effectively.
• Not Always Optimal: When task periods and deadlines
differ, RMS may fail to meet deadlines.
Earliest Deadline First (EDF) Scheduling
2. Earliest Deadline First (EDF) Scheduling
Definition:
Earliest Deadline First (EDF) is a dynamic priority
scheduling algorithm primarily used in real-time
systems. In EDF, tasks are assigned priorities based on
their deadlines, with the task having the earliest
deadline given the highest priority.
Earliest Deadline First (EDF) Scheduling
Key Characteristics:
1. Dynamic Priority:
The priority of a task is not fixed; it changes based on the
remaining time until its deadline.
2. Preemptive Nature:
EDF is preemptive, meaning that if a new task with an earlier
deadline arrives, it can interrupt the currently running task
3. Optimality:
EDF is optimal for uniprocessor systems, meaning it
guarantees that tasks will meet their deadlines if the total CPU
utilization is ≤ 100%.
Earliest Deadline First (EDF) Scheduling
Key Characteristics:
4. Schedulability Test:
For a set of n periodic tasks with execution times Ci and
periods Ti, the system is schedulable under EDF if:

U = (C1/T1) + (C2/T2) + … + (Cn/Tn) ≤ 1
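As a worked check, here is a small Python sketch applying this test to
the RMS task set from earlier (whose periods are known):

    # EDF schedulability: total utilization must not exceed 1 (100%).
    tasks = [(1, 4), (2, 6), (2, 8)]     # (Ci, Ti) pairs
    U = sum(c / t for c, t in tasks)
    print(U)                              # 0.833... <= 1, so schedulable under EDF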
Earliest Deadline First (EDF) Scheduling
How EDF Works (Example):
Consider three tasks with the following attributes:
Task | Execution Time (Ci) | Deadline (Di)
T1   | 2                   | 4
T2   | 1                   | 2
T3   | 2                   | 6
Earliest Deadline First (EDF) Scheduling
Scheduling Order:
1. At time t=0:
• T2 has the earliest deadline (D=2), so it runs first.
2. At t=1:
• T2 completes, and T1 runs next as it has the next
earliest deadline (D=4).
3. At t=3:
• T1 completes, and T3 runs last since it has the
furthest deadline (D=6).
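A minimal Python sketch reproduces this order, treating the three tasks
as one-shot jobs all released at t = 0 (illustrative only):

    # EDF for one-shot jobs: always run the ready job with the earliest deadline.
    jobs = {"T1": {"work": 2, "deadline": 4},
            "T2": {"work": 1, "deadline": 2},
            "T3": {"work": 2, "deadline": 6}}
    timeline = []
    while any(j["work"] > 0 for j in jobs.values()):
        # EDF rule: among unfinished jobs, pick the one with the earliest deadline.
        name = min((n for n, j in jobs.items() if j["work"] > 0),
                   key=lambda n: jobs[n]["deadline"])
        jobs[name]["work"] -= 1
        timeline.append(name)
    print(timeline)   # ['T2', 'T1', 'T1', 'T3', 'T3']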
Earliest Deadline First (EDF) Scheduling
Advantages of EDF:
1. Maximizes CPU Utilization:
Allows 100% utilization for periodic tasks.
2. Simple and Efficient:
Straightforward to implement for uniprocessor
systems.
3. Optimal for Uniprocessor Systems:
If any feasible schedule exists, EDF will produce it.
Earliest Deadline First (EDF) Scheduling
Disadvantages of EDF:
1. Overheads:
Frequent context switching due to preemption.
2. Poor Performance in Overloaded Systems:
In cases where the system is overloaded, EDF might
fail to meet several deadlines simultaneously.
3. Complexity in Multiprocessor Systems:
EDF is not optimal for multiprocessor systems and
requires additional considerations.
Virtualization and Operating System Components
Virtualization has many purposes in advanced
operating systems, including:
1. Multiple isolated systems: Virtualization allows users to run
multiple isolated systems on a single physical machine.
2. Hardware optimization: Virtualization optimizes the use of
hardware resources. For example, users can create a virtual server
pool on a single computer system instead of running one server on
each computer.
3. System security: Virtualization can enhance system security.
4. Data backup and recovery: Virtualization can simplify data
backup and recovery.
Virtualization and Operating System Components
5. Legacy applications: Virtualization can allow users to maintain and
operate outdated software alongside newer applications and systems.
6. Disaster recovery: Virtualization can simplify disaster recovery by
allowing users to quickly bring VMs back online after a system failure.
7. Remote access: Virtualization can allow users to remotely access and
interact with deployed apps without installing them on their own
devices.
8. Business continuity: Virtualization can help minimize the impact of
downtime on workloads.
9. Testing and development: Virtualization can facilitate testing and
development environments.
VIRTUAL MACHINE
• A computer has hardware resources such as RAM, CPU, and
storage, and is operated by an operating system, e.g., Windows.
• To learn and use Linux, we would traditionally need a
separate computer system with its own hardware resources,
with Linux installed on it.
• With virtualization, no separate system or hardware resources
are needed.
• It allows the user to install Linux on top of Windows.
VIRTUAL MACHINE
Figure: a computer system without a virtual machine vs. a computer
system with a virtual machine.
TYPE OF HYPERVISOR
• A Type 1 hypervisor functions as a lightweight operating
system that runs directly on the host's hardware ("bare metal").
• A Type 2 hypervisor runs as a software layer on top of a host
operating system, like any other computer program.
Linux Operating System Overview
Origin & Creator:
Linux is an open-source operating system (OS) created by Linus Torvalds in
1991.
Global Impact:
Today, Linux powers a massive range of devices, from personal computers to
the world’s 500 most powerful supercomputers.
Why Users Choose Linux:
Versatility: works across various platforms, from desktops to servers
to embedded systems.
Security: known for its robust security features.
Customization: its open-source nature allows users to tailor the system
to their specific needs.
Linux System
Definition:
• An Operating System (OS) is software that directly manages a system’s
hardware and resources, such as the CPU, memory, and storage.
Role of the OS:
• The OS acts as an intermediary between applications and hardware,
facilitating communication between software and physical resources.
User Interaction:
• Humans interact with computers in many ways, but most often through
an OS.
• It provides a user-friendly interface to access a computer’s core
functions, enabling tasks like running applications, managing files, and
controlling hardware.
History of Linux System
• Linux is a modern, free operating system based on UNIX standards.
• First developed as a small but self-contained kernel in 1991 by
Linus Torvalds, with the major design goal of UNIX compatibility.
• Its history has been one of collaboration by many users from all
around the world, corresponding almost exclusively over the
Internet.
• It has been designed to run efficiently and reliably on common PC
hardware, but also runs on a variety of other platforms.
• The core Linux operating system kernel is entirely original, but it
can run much existing free UNIX software, resulting in an entire
UNIX-compatible operating system free from proprietary code.
The Linux Kernel
• Version 0.01 (May 1991) had no networking, ran only on 80386-
compatible Intel processors and on PC hardware, had extremely
limited device-driver support, and supported only the Minix file
system.
• Linux 1.0 (March 1994) included these new features:
• Support for UNIX’s standard TCP/IP networking protocols
• BSD-compatible socket interface for networking programming
• Device-driver support for running IP over an Ethernet
• Enhanced file system
• Support for a range of SCSI controllers for
high-performance disk access
• Extra hardware support
• Version 1.2 (March 1995) was the final PC-only Linux kernel.
Linux 2.0
• Released in June 1996, 2.0 added two major new capabilities:
⁃ Support for multiple architectures, including a fully 64-bit native
Alpha port.
⁃ Support for multiprocessor architectures
• Other new features included:
⁃ Improved memory-management code
⁃ Improved TCP/IP performance
⁃ Support for internal kernel threads, for handling dependencies
between loadable modules, and for automatic loading of modules on
demand.
⁃ Standardized configuration interface
• Available for Motorola 68000-series processors, Sun Sparc systems, and
for PC and PowerMac systems.
Linux Distributions
• Standard, precompiled sets of packages, or distributions, include
the basic Linux system, system installation and management
utilities, and ready-to-install packages of common UNIX tools.
• The first distributions managed these packages by simply
providing a means of unpacking all the files into the appropriate
places; modern distributions include advanced package
management.
• Early distributions included SLS and Slackware. Red Hat and
Debian are popular distributions from commercial and
noncommercial sources, respectively.
• The RPM Package file format permits compatibility among the
various Linux distributions.
Linux Licensing
• The Linux kernel is distributed under the GNU General
Public License (GPL), the terms of which are set out
by the Free Software Foundation.
• Anyone using Linux, or creating their own derivative of
Linux, may not make the derived product proprietary;
software released under the GPL may not be
redistributed as a binary-only product.
Design Principles
• Linux is a multiuser, multitasking system with a full set of
UNIX-compatible tools.
• Its file system adheres to traditional UNIX semantics, and it
fully implements the standard UNIX networking model.
• Main design goals are speed, efficiency, and standardization.
• Linux is designed to be compliant with the relevant POSIX
documents; at least two Linux distributions have achieved
official POSIX certification.
• The Linux programming interface adheres to the SVR4 UNIX
semantics, rather than to BSD behavior.
Components of a Linux System
• Like most UNIX implementations, Linux is composed
of three main bodies of code; the most important
distinction is between the kernel and all other
components.
• The kernel is responsible for maintaining the important
abstractions of the operating system.
– Kernel code executes in kernel mode with full
access to all the physical resources of the computer.
– All kernel code and data structures are kept in the
same single address space.
Components of a Linux System
• The system libraries define a standard set of functions
through which applications interact with the kernel, and
which implement much of the operating-system
functionality that does not need the full privileges of
kernel code.
• The system utilities perform individual, specialized
management tasks.
Process Management
• UNIX process management separates the creation of
processes and the running of a new program into two
distinct operations.
⁃ The fork system call creates a new process.
⁃ A new program is run after a call to execve.
• Under UNIX, a process encompasses all the information
that the operating system must maintain to track the context
of a single execution of a single program.
• Under Linux, process properties fall into three groups: the
process’s identity, environment, and context.
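A minimal sketch of this two-step model using Python's POSIX
wrappers (Linux/UNIX only; the echo program is just an example):

    import os

    pid = os.fork()                        # fork: create a new process
    if pid == 0:
        # Child: replace its program image with a new program (the exec step).
        os.execvp("echo", ["echo", "hello from the child"])
    else:
        os.waitpid(pid, 0)                 # Parent: wait for the child to exit
        print("child", pid, "finished")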
Process Identity
• Process ID (PID). The unique identifier for the process; used to
specify processes to the operating system when an application
makes a system call to signal, modify, or wait for another
process.
• Credentials. Each process must have an associated user ID and
one or more group IDs that determine the process’s rights to
access system resources and files.
• Personality. Not traditionally found on UNIX systems, but
under Linux each process has an associated personality identifier
that can slightly modify the semantics of certain system calls.
Used primarily by emulation libraries to request that system calls
be compatible with certain specific flavors of UNIX.
Process Environment
• The process’s environment is inherited from its parent, and is composed of
two null-terminated vectors:
– The argument vector lists the command-line arguments used to
invoke the running program; conventionally starts with the name of
the program itself
– The environment vector is a list of “NAME=VALUE” pairs that
associates named environment variables with arbitrary textual values.
• Passing environment variables among processes and inheriting variables
by a process’s children are flexible means of passing information to
components of the user-mode system software.
• The environment-variable mechanism provides a customization of the
operating system that can be set on a per-process basis, rather than being
configured for the system as a whole.
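A short Python sketch of per-process environment inheritance (the
variable name GREETING is hypothetical; Linux/UNIX only):

    import os

    os.environ["GREETING"] = "hello"     # add a NAME=VALUE pair to our environment
    pid = os.fork()
    if pid == 0:
        # Child: inherits the parent's environment vector across fork/exec.
        os.execvp("printenv", ["printenv", "GREETING"])   # prints: hello
    else:
        os.waitpid(pid, 0)

Note that the setting affects only this process and its children; the
rest of the system is unchanged, which is exactly the per-process
customization described above.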
Process Context
• The (constantly changing) state of a running program at any
point in time.
• The scheduling context is the most important part of the
process context; it is the information that the scheduler needs
to suspend and restart the process.
• The kernel maintains accounting information about the
resources currently being consumed by each process, and the
total resources consumed by the process in its lifetime so far.
• The file table is an array of pointers to kernel file structures.
When making file I/O system calls, processes refer to files by
their index into this table.
Process Context
• Whereas the file table lists the existing open files, the
file-system context applies to requests to open new files. The
current root and default directories to be used for new file
searches are stored here.
• The signal-handler table defines the routine in the process’s
address space to be called when specific signals arrive.
• The virtual-memory context of a process describes the full
contents of its private address space.
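A minimal sketch of installing an entry in the signal-handler table
from Python (POSIX only):

    import os
    import signal

    def on_sigusr1(signum, frame):
        # The routine registered in this process's signal-handler table.
        print("received SIGUSR1")

    signal.signal(signal.SIGUSR1, on_sigusr1)   # install the handler
    os.kill(os.getpid(), signal.SIGUSR1)        # deliver the signal to ourselves
    # CPython runs the Python-level handler shortly after delivery.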
Processes and Threads
• Linux uses the same internal representation for processes and
threads; a thread is simply a new process that happens to share
the same address space as its parent.
• A distinction is only made when a new thread is created by the
clone system call.
– fork creates a new process with its own entirely new
process context
– clone creates a new process with its own identity, but that is
allowed to share the data structures of its parent
• Using clone gives an application fine-grained control over
exactly what is shared between two threads.
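The practical difference is visible from user space. A sketch: threads
share one address space, while a forked child gets a private copy:

    import os
    import threading

    counter = {"value": 0}

    def worker():
        counter["value"] += 1        # threads share the parent's address space

    t = threading.Thread(target=worker)
    t.start(); t.join()
    print(counter["value"])          # 1 -- the thread's write is visible

    pid = os.fork()                  # fork: child gets its own copy of memory
    if pid == 0:
        counter["value"] += 100      # modifies only the child's private copy
        os._exit(0)
    os.waitpid(pid, 0)
    print(counter["value"])          # still 1 in the parent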
Scheduling
• The job of allocating CPU time to different tasks
within an operating system.
• While scheduling is normally thought of as the running
and interrupting of processes, in Linux, scheduling also
includes the running of the various kernel tasks.
• Running kernel tasks encompasses both tasks that are
requested by a running process and tasks that execute
internally on behalf of a device driver.
Kernel Synchronization
• A request for kernel-mode execution can occur in two
ways:
– A running program may request an operating system
service, either explicitly via a system call, or implicitly,
for example, when a page fault occurs.
– A device driver may deliver a hardware interrupt that
causes the CPU to start executing a kernel-defined
handler for that interrupt.
• Kernel synchronization requires a framework that will
allow the kernel’s critical sections to run without
interruption by another critical section.
Kernel Synchronization
• Linux uses two techniques to protect critical sections:
1. Normal kernel code is non-preemptible
– When a timer interrupt is received while a process is
executing a kernel system service routine, the kernel’s
need_resched flag is set so that the scheduler will run
once the system call has completed and control is
about to be returned to user mode.
2. The second technique applies to critical sections that occur in
interrupt service routines.
– By using the processor’s interrupt control hardware to
disable interrupts during a critical section, the kernel
guarantees that it can proceed without the risk of concurrent
access of shared data structures.
Kernel Synchronization
• To avoid performance penalties, Linux’s kernel uses a synchronization
architecture that allows long critical sections to run without having
interrupts disabled for the critical section’s entire duration.
• Interrupt service routines are separated into a top half and a bottom
half.
– The top half is a normal interrupt service routine, and runs with
recursive interrupts disabled.
– The bottom half is run, with all interrupts enabled, by a miniature
scheduler that ensures that bottom halves never interrupt
themselves.
– This architecture is completed by a mechanism for disabling
selected bottom halves while executing normal, foreground kernel
code.
Interrupt Protection Levels
Figure: kernel interrupt protection levels (diagram omitted).
Process Scheduling
• The classic Linux time-sharing scheduler is credit-based: when no
runnable process has any credits left, every process is recredited
using

credits = credits/2 + priority

a rule that factors in both the process’s history and its priority.
– This crediting system automatically prioritizes interactive or I/O-bound
processes.
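A tiny Python sketch of the recrediting rule (the priority value 20 is
arbitrary, chosen just for illustration):

    # credits = credits/2 + priority, applied at every recrediting pass.
    def recredit(credits, priority):
        return credits // 2 + priority

    credits = 0
    for n in range(5):
        credits = recredit(credits, priority=20)
        print("after pass", n + 1, "credits =", credits)   # 20, 30, 35, 37, 38

A process that rarely runs keeps half of its leftover credits each
pass, so its credits approach 2 × priority, which is why interactive
and I/O-bound processes end up with the highest credits.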
Process Scheduling
• Linux implements the FIFO and round-robin real-time
scheduling classes; in both cases, each process has a priority in
addition to its scheduling class.
– The scheduler runs the process with the highest priority; for
equal-priority processes, it runs the longest-waiting one
– FIFO processes continue to run until they either exit or
block
– A round-robin process will be preempted after a time slice and
moved to the end of the scheduling queue, so that round-
robin processes of equal priority automatically time-share
between themselves.
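On Linux these scheduling classes are exposed through the
sched_setscheduler system call, which Python wraps. A minimal sketch
(real-time classes normally require root privileges):

    import os

    print(os.sched_getscheduler(0))        # 0 == SCHED_OTHER, the default class
    try:
        param = os.sched_param(10)         # static real-time priority 10
        os.sched_setscheduler(0, os.SCHED_RR, param)   # join the round-robin class
        print("now SCHED_RR, priority", os.sched_getparam(0).sched_priority)
    except PermissionError:
        print("real-time classes require elevated privileges")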
What is Android?
• Android is a Linux-based operating system designed primarily for
mobile devices such as smartphones and tablets.
• Android was first developed as an advanced operating system for
digital cameras.
• There are more than 400,000 apps in the Android market.
• Android is open source.
What is Operating System?
• An operating system, or "OS," is software that
communicates with the hardware and allows other programs to
run.
• Common desktop operating systems include Windows, OS
X, and Linux.
• Common mobile operating systems include Android and
Windows Phone.
Android Operating System
• The Android OS consists of a shell and a kernel.
• Android’s creators took the kernel from Linux 2.6 and rewrote
the shell (user-space) layers in Java; together these form the
Android OS.
Android Devices
Origin of Android
• Android was founded in Palo Alto, California, in October
2003 by Andy Rubin, Rich Miner, Nick Sears, and Chris
White, who together started Android Inc.
Origin of Android
• Android was purchased by Google in August 2005 for
about US $50 million.
• The HTC Dream was the first Android device, launched in
September 2008.
• Today, Android covers around 90% of the mobile OS market.
Open Handset Alliance (OHA)
• It is a consortium of several companies.
• The OHA is a business alliance of firms formed to develop open
standards for mobile devices.
• The OHA includes 84 firms, including HTC, Sony, Dell, Intel,
Motorola, Qualcomm, Google, Samsung Electronics, LG
Electronics, T-Mobile, and NVIDIA.
• Nokia did not develop Android phones because it was not
part of the OHA.
Features
• Android supports wireless communication using
3G networks, 4G networks, 802.11 Wi-Fi networks, and
Bluetooth connectivity.
• Developing an Android application is not difficult: using the
SDK and the Java-based emulator, we can easily develop the
applications we want.
• Open source: a free development platform.
Major components of the Android
1. Application Layer
• Description:
This is the topmost layer where user-facing applications
reside. These include apps like Contacts, Messaging, Camera,
and third-party apps installed by the user.
• Key Points:
• Built using Java/Kotlin or other supported languages.
• Interacts with underlying layers via the Android
Framework.
Major components of the Android
2. Android Framework
• Description:
Provides a collection of APIs that developers use to build applications.
It simplifies the process of interacting with hardware, system resources,
and core features.
• Key Components:
• Activity Manager: Manages the lifecycle of applications and
handles user navigation.
• Content Providers: Enables sharing of data between applications.
• Resource Manager: Accesses non-code resources like strings,
layouts, and drawables.
• Location Manager: Provides location services.
• Notification Manager: Allows apps to show alerts in the
notification bar.
Major components of the Android
3. Android Runtime (ART)
• Description:
Responsible for running Android applications. It replaces the
older Dalvik Virtual Machine (DVM) for better performance and
efficiency.
• Key Features:
• Ahead-of-Time (AOT) and Just-in-Time (JIT) compilation for
improved app execution.
• Garbage Collection: Automatic memory management.
• Core Libraries: Provides essential APIs for tasks like data
manipulation, threading, and networking.
Major components of the Android
4. Native Libraries
• Description:
This layer includes C and C++ libraries that offer low-level
functionality, improving performance and enabling
hardware access.
• Important Libraries:
• SQLite: Lightweight database engine for storing data.
• OpenGL ES: Supports 2D and 3D graphics rendering.
• WebKit: Used for web browsing functionality.
• Libc: Standard C library for system functions.
Major components of the Android
5. Hardware Abstraction Layer (HAL)
• Description:
Acts as an interface between hardware-specific drivers
and the Android system. It ensures compatibility
between the hardware and the Android framework.
• Examples:
• Camera HAL for accessing the device's camera.
• Audio HAL for handling audio input and output.
Major components of the Android
6. Linux Kernel
• Description:
Forms the foundation of the Android operating system.
It handles low-level tasks like memory management,
process scheduling, and hardware interaction.
• Key Features:
• Power management.
• Security through user permissions and file access
controls.
• Drivers for hardware components (e.g., Wi-Fi,
Bluetooth, camera, etc.).