OS Internal Unit- 3
• The principle of least privilege dictates that programs, users, and systems be
given just enough privileges to perform their tasks.
• This ensures that a failure or a compromised component can do the least
possible harm.
• For example, if a program needs special privileges to perform a task, it is better
to make it a SGID program with group ownership of "network" or "backup" or
some other pseudo group, rather than SUID with root ownership. This limits
the amount of damage that can occur if something goes wrong.
• Typically each user is given their own account, and has only enough privilege
to modify their own files.
• The root account should not be used for normal day to day activities - The
System Administrator should also have an ordinary account, and reserve use of
the root account for only those tasks which need the root privileges
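The SUID/SGID distinction above comes down to which permission bits are set on the executable. A minimal sketch of checking those bits in Python (the octal modes and the "backup" pseudo group are illustrative):

```python
import stat

# Mode 2750 (octal): an SGID helper owned by a pseudo group such as
# "backup" (illustrative); mode 4755: a SUID-root program.
sgid_mode = 0o2750
suid_mode = 0o4755

# The SGID helper carries only the setgid bit...
assert sgid_mode & stat.S_ISGID
assert not (sgid_mode & stat.S_ISUID)
# ...while the SUID program carries the far more dangerous setuid bit.
assert suid_mode & stat.S_ISUID

# If the SGID helper is compromised, only group-owned files are exposed;
# a compromised SUID-root program exposes the whole system.
```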
[Figure: the four layers of access control, top to bottom — Application, Middleware, Operating system, Hardware]
1. The access control mechanisms the user sees at the application level may
express a very rich and complex security policy. A modern online busi-
ness could assign staff to one of dozens of different roles, each of which
could initiate some subset of several hundred possible transactions in the
system. Some of these (such as refunds) might require dual control or
approval from a supervisor. And that’s nothing compared with the com-
plexity of the access controls on a modern social networking site, which
will have a thicket of rules and options about who can see, copy, and
search what data from whom.
2. The applications may be written on top of middleware, such as a
database management system or bookkeeping package, which enforces a
number of protection properties. For example, bookkeeping software
may ensure that a transaction which debits one ledger for a certain
amount must credit another ledger for the same amount, while database
software typically has access controls specifying which dictionaries a
given user can select, and which procedures they can run.
3. The middleware will use facilities provided by the underlying operating
system. As this constructs resources such as files and communications
ports from lower level components, it acquires the responsibility for pro-
viding ways to control access to them.
4. Finally, the operating system access controls will usually rely on hard-
ware features provided by the processor or by associated memory
management hardware. These control which memory addresses a given
process can access.
In order to determine who gets to do what, the first thing that the operating
system needs is the user’s identity. Typically, the login program establishes that
by getting the user’s credentials, which usually comprise a login name to
identify the user and password to authenticate the user. The system then
associates a unique user ID number with that user and grants access to
resources based on that ID.
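A toy sketch of that login flow, with hypothetical login names, salts, and UID numbers (real systems store salted hashes in /etc/shadow or equivalent):

```python
import hashlib

# Toy credential store: login name -> (salt, salted password hash, UID).
# Names, salts, and UID numbers here are illustrative.
def _hash(salt: bytes, password: str) -> str:
    return hashlib.sha256(salt + password.encode()).hexdigest()

_users = {
    "alice": (b"salt1", _hash(b"salt1", "secret"), 1001),
    "bob":   (b"salt2", _hash(b"salt2", "hunter2"), 1002),
}

def login(name: str, password: str):
    """Return the user's ID on success, None on failure."""
    entry = _users.get(name)
    if entry is None:
        return None
    salt, stored, uid = entry
    return uid if _hash(salt, password) == stored else None

# The OS then grants access to resources based on the returned user ID.
assert login("alice", "secret") == 1001
assert login("alice", "wrong") is None
```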
A subject is the thing that needs to access the resources, or objects. Often, the
subject is the user. However, the subject can also be a logical entity. For
example, you might run the postfix mail server and create a user ID of postfix
for that server even if it does not correspond to a human user. Having this
distinct ID will enable you to configure access rights for the postfix server that
are distinct from other users. You will also likely do this for certain other
servers, such as a web server.
Processes run with the identity and authority of some user identifier. This
identifies the subject or principal (or security principal). The terms are often
used interchangeably but there are slight differences. A subject is any entity
that requests access to an object, often a user. A principal is a unique identity
for a user. The subject resolves to a principal when you log in; your user ID is
the principal. A subject might have multiple identities and be associated with a
set of principals. Principals don’t need to be humans. The identity of a program
or process may be a principal. We will not worry about the distinction in our
discussions and will casually talk about users or subjects – just keep in mind
that we really refer to the user ID that the operating system assigned to that
subject.
An object is the resource that the subject may access. The resource is often a
file but may also be a device, communication link, or even another subject. In
most modern operating systems, devices are treated with the same abstraction
as files. In POSIX systems, the file system namespace contains names and
permissions for devices; an attribute tells the system it’s not a data file.
As we will soon see, most operating systems define what can be done with
different objects, meaning that permissions are associated with each object.
Figure 3.3: Access control matrix
• Copy and owner rights only allow the modification of rights within a column.
The addition of control rights, which only apply to domain objects, allows a
process operating in one domain to affect the rights available in other domains.
For example in the table below, a process operating in domain D2 has the right
to control any of the rights in domain D4.
Figure 3.7: Modified access matrix of Figure 3.3
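The matrix, including the control right just described, can be modeled as a nested dictionary; the domains, objects, and rights below are illustrative, not taken from the figure:

```python
# Access control matrix as a nested dict: matrix[domain][object] -> set of rights.
# Domains, objects, and rights here are illustrative.
matrix = {
    "D1": {"F1": {"read"}, "F3": {"read"}},
    "D2": {"printer": {"print"}, "D4": {"control"}},
    "D3": {"F2": {"read", "execute"}},
    "D4": {"F1": {"read", "write"}, "F3": {"read", "write"}},
}

def allowed(domain: str, obj: str, right: str) -> bool:
    return right in matrix.get(domain, {}).get(obj, set())

assert allowed("D1", "F1", "read")
assert not allowed("D1", "F1", "write")

# A domain holding "control" over another domain may edit that domain's row.
if allowed("D2", "D4", "control"):
    matrix["D4"]["F3"].discard("write")
assert not allowed("D4", "F3", "write")
```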
• The simplest approach is one big global table with <domain, object,
rights> entries.
• Unfortunately this table is very large (even if sparse) and so cannot be
kept in memory (without invoking virtual memory techniques).
• There is also no good way to specify groupings - If everyone has access
to some resource, then it still needs a separate entry for every domain.
• Even when a basic access matrix satisfies most of our needs,
implementing it becomes unwieldy. It will lead to a huge table with
dynamically changing rows (each time users or groups are added or
removed) and objects (whenever files or devices are added or deleted).
Even on personal computers, this table can easily contain several billion
entries, making it impractical to store and access efficiently.
• The access control matrix can easily become far too large to store in
memory, but we want lookups into it to be efficient. We’d like to avoid extra
block reads from the disk just to get access permission information for a
user on a file. In practice, implementing an access control matrix in a
system is not practical.
• Instead, we can break apart the access control matrix by columns. Each
column represents an object and we can store the permissions of each
subject with each object.
• When the operating system accesses the object, it also accesses the list of
access permissions for that object. Adding new permissions is done on
an object-by-object basis.
• When we open file F0, we can at that time fetch the list that tells us
what each subject is allowed to do to that file. This list is called an
access control list (ACL). All current operating systems use
access control lists.
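Column-wise storage can be sketched as a per-object dictionary of rights; the subjects and rights here are illustrative:

```python
# Column-wise storage: each object carries its own ACL.
# Subjects and rights are illustrative.
acl = {
    "F0": {"alice": {"read", "write"}, "bob": {"read"}},
    "F1": {"bob": {"read", "write"}},
}

def check_acl(obj: str, subject: str, right: str) -> bool:
    return right in acl.get(obj, {}).get(subject, set())

# Opening F0 pulls in that object's permission list in one step.
assert check_acl("F0", "alice", "write")
assert not check_acl("F0", "bob", "write")

# Granting a new permission touches only that object's entry.
acl["F1"].setdefault("alice", set()).add("read")
assert check_acl("F1", "alice", "read")
```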
The next way to manage the access control matrix is to store it by rows. These
are called capabilities
An access control list associates a column with each object. That is, each object
stores a list of access permissions for all the domains (subjects). Another way
of breaking up the access control matrix is by rows. We can associate a row of
the table with each domain (subject). This is called a capability list. A
capability is the set of operations that the subject is allowed to perform on a
specific object. Each subject now has a complete list of capabilities.
Before the operating system performs a request on an object, it will check the
subject's capability list and see if the requested access is allowed for that object.
A process, of course, cannot freely modify its capability list unless it has a
control attribute for all objects in the domain or is the owner of a specific
object.
Capability lists have the advantage that, because they are associated with a
subject, the system does not have to read any additional data to check access
rights each time an object is accessed, as it has to do to read an access control
list. It is also very easy to delegate rights from one user to another user: simply
copy the capability list. If a user must be deleted, it is also easy to handle that:
simply delete the capability associated with that user; there is no need to go
through the access control list of every file in the file system.
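A row-wise sketch showing the delegation and deletion operations described above (subjects and rights are illustrative):

```python
# Row-wise storage: each subject carries its capability list.
# capabilities[subject][object] -> set of permitted operations (illustrative).
capabilities = {
    "alice": {"F0": {"read", "write"}, "F1": {"read"}},
}

def check_cap(subject: str, obj: str, op: str) -> bool:
    return op in capabilities.get(subject, {}).get(obj, set())

assert check_cap("alice", "F0", "write")

# Delegation: simply copy the capability list to another subject.
capabilities["bob"] = {o: set(r) for o, r in capabilities["alice"].items()}
assert check_cap("bob", "F1", "read")

# Deleting a user removes one row; no per-file ACLs need rewriting.
del capabilities["alice"]
assert not check_cap("alice", "F0", "read")
```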
The idea of a capability is also useful with networked services. You
often connect to servers that do not know you and where you do not have an
account. In such cases, authorization and single sign-on services such as OAuth
and Kerberos can provide an unmodifiable message stating who a user is and
what operations they are allowed to perform.
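The "unmodifiable message" idea can be sketched with an HMAC-signed token; real OAuth and Kerberos tickets use different formats and keys, so this is only an illustration of the principle:

```python
import hmac, hashlib, json

# A server-side secret; in practice this lives inside the issuing service.
SECRET = b"demo-signing-key"  # illustrative

def issue_token(user: str, operations: list) -> dict:
    """Issue an unforgeable statement of who a user is and what they may do."""
    payload = json.dumps({"user": user, "ops": operations}, sort_keys=True)
    tag = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_token(token: dict) -> bool:
    expected = hmac.new(SECRET, token["payload"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["tag"])

token = issue_token("alice", ["read:calendar"])
assert verify_token(token)

# Any tampering with the claimed rights invalidates the token.
forged = dict(token, payload=token["payload"].replace("read", "write"))
assert not verify_token(forged)
```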
• Trusted Hardware: Components like the TPM chip, which act as a foundation
for security.
• Secure Boot Mechanisms: Verifies the integrity of the system during startup.
• Attestation Protocols: Enables remote verification of system trustworthiness.
Trust
• Definition:
Trust is the confidence that a system will behave as intended under specific
conditions. In computing, this means ensuring that hardware, software, and
firmware remain uncompromised.
• Trust Requirements:
o Systems must be verifiable.
o Mechanisms for integrity checking must be robust and reliable.
Root of Trust
• Definition:
The foundation upon which all other trust in the system is built. It is typically
hardware-based to ensure tamper resistance.
• Types of Roots of Trust:
1. Root of Trust for Measurement (RTM): Ensures the accuracy of
integrity measurements.
2. Root of Trust for Storage (RTS): Protects sensitive data like
cryptographic keys.
3. Root of Trust for Reporting (RTR): Enables secure and verifiable
reporting of the system state.
Trusted Platform Module (TPM)
• Overview:
A TPM is a hardware chip that provides a secure environment for
cryptographic operations and storing sensitive information.
• Key Functions:
o Platform Configuration Registers (PCRs): Store integrity metrics.
o Cryptographic Key Management: Securely generate, store, and
manage encryption keys.
o Random Number Generation: Provides a source of high-quality
random numbers for secure operations.
o Sealing: Binds data to a specific system state, ensuring its
confidentiality.
Integrity
• Definition:
Integrity ensures that data and system states are not altered without detection.
• Measurement and Reporting:
o Measurements are taken during boot or runtime to ensure the system
matches a known good state.
o Reporting involves creating a verifiable log of these measurements.
• Secure Boot:
Ensures that only trusted software components are loaded during system
startup. If an untrusted component is detected, the boot process halts.
• Measured Boot:
Logs the integrity of each component loaded during boot, allowing for
verification without halting the system.
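Measured boot can be sketched with the TPM's hash-extend operation, where each loaded component folds into a running digest (the component names are illustrative):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = H(old PCR || H(component))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = b"\x00" * 32  # PCRs start zeroed at reset
log = []
for component in [b"firmware-v1", b"bootloader-v2", b"kernel-v5"]:
    pcr = extend(pcr, component)
    log.append(component)  # the measured-boot log records what was loaded

# Replaying the log reproduces the PCR value only if nothing changed.
replay = b"\x00" * 32
for component in log:
    replay = extend(replay, component)
assert replay == pcr

# Any altered component yields a different final PCR value.
tampered = extend(extend(extend(b"\x00" * 32, b"firmware-v1"),
                         b"evil-bootloader"), b"kernel-v5")
assert tampered != pcr
```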
Remote Attestation
• Definition:
Remote attestation allows a third party to verify the integrity and
trustworthiness of a system remotely.
• How It Works:
1. The system takes measurements (hashes) of its current state using TPM.
2. The TPM signs these measurements using a secure key.
3. The signed measurements are sent to the verifier.
4. The verifier checks the measurements against a known good state.
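The four steps above can be sketched as follows; an HMAC stands in for the TPM's asymmetric signature, and the state strings are illustrative:

```python
import hmac, hashlib

# Stand-in for the TPM's attestation key; a real TPM signs asymmetrically.
AIK = b"attestation-key"  # illustrative

def measure(state: bytes) -> str:
    return hashlib.sha256(state).hexdigest()

def tpm_quote(measurement: str) -> str:
    # Step 2: the TPM signs the measurement with a secure key.
    return hmac.new(AIK, measurement.encode(), hashlib.sha256).hexdigest()

# Step 1: the system measures (hashes) its current state.
m = measure(b"kernel-v5+config")
# Step 3: measurement and signature travel to the verifier.
quote = tpm_quote(m)

# Step 4: the verifier checks the signature and the known good value.
KNOWN_GOOD = measure(b"kernel-v5+config")
assert hmac.compare_digest(quote, hmac.new(AIK, m.encode(), hashlib.sha256).hexdigest())
assert m == KNOWN_GOOD
```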
• Data Confidentiality:
Ensures sensitive data is encrypted and inaccessible to unauthorized parties.
• Sealing:
o Data is encrypted and bound to a specific system state.
o If the system deviates from the expected state, access to the data is
denied.
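Sealing can be sketched by deriving the encryption key from the system state, so a deviating state yields the wrong key; a real TPM seals under keys held in its protected storage, and the XOR cipher here is only for illustration:

```python
import hashlib

def seal(data: bytes, system_state: bytes) -> bytes:
    """Bind data to a state by XOR-encrypting with a state-derived keystream."""
    key = hashlib.sha256(system_state).digest()
    stream = (key * (len(data) // len(key) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

unseal = seal  # XOR with the same keystream is its own inverse

secret = b"disk-encryption-key"
sealed = seal(secret, b"trusted-boot-state")

# The same state recovers the data; a deviating state does not.
assert unseal(sealed, b"trusted-boot-state") == secret
assert unseal(sealed, b"tampered-boot-state") != secret
```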
Trust Chains
• Definition:
A trust chain links multiple components (hardware, firmware, software) to
ensure that the trust established at the root propagates throughout the system.
• How It Works:
o Each layer verifies the integrity of the next layer before passing control.
o Any deviation from expected behavior breaks the trust chain.
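A minimal sketch of a trust chain, where each component is checked against an expected digest before control passes on (the component names are illustrative):

```python
import hashlib

def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# Each layer knows the expected digest of the next (illustrative values).
boot_chain = [b"firmware", b"bootloader", b"kernel"]
expected = [digest(b) for b in boot_chain]

def verify_chain(components, expected_digests):
    """Pass control layer by layer; stop at the first mismatch."""
    for blob, want in zip(components, expected_digests):
        if digest(blob) != want:
            return False  # trust chain broken
    return True

assert verify_chain(boot_chain, expected)

# Swapping in an unexpected bootloader breaks the chain.
assert not verify_chain([b"firmware", b"evil", b"kernel"], expected)
```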
a. Trusted Platform Module (TPM)
b. Root of Trust
c. Secure Boot
d. Remote Attestation
• What is TPM?
o The Trusted Platform Module (TPM) is a hardware-based security
solution designed to protect sensitive data and enhance the security of
computing devices.
o It is a tamper-resistant chip embedded in modern computing devices,
acting as a root of trust for security operations.
• Purpose of TPM
o To safeguard sensitive information such as cryptographic keys,
passwords, and certificates.
o To ensure the integrity of the platform by securely verifying the system
state during boot-up and runtime.
• Functions:
o Cryptographic Operations: Key generation, encryption, and
decryption.
o Secure Storage: Protect sensitive information such as encryption keys.
o Platform Integrity Measurement: Ensures that the system is in a known good state.
2. Key Features of TPM
1. Hardware-Based Security:
o Tamper-resistant design ensures physical and logical security.
2. Cryptographic Operations:
o TPM can generate and securely store cryptographic keys, perform digital
signing, and verify signatures.
3. Secure Boot and Measured Boot:
o Secure Boot: Prevents unauthorized software from loading during
startup.
o Measured Boot: Logs measurements of boot components and verifies
their integrity.
4. Remote Attestation:
o Provides verifiable proof of a system's integrity to a remote party.
5. Sealing and Binding:
o Sealing: Encrypts data in a way that it can only be decrypted if the
platform is in a specific state.
o Binding: Encrypts data using a TPM-protected key.
6. Platform Integrity Measurement:
o Measures and records the state of the system during boot and runtime,
ensuring no unauthorized changes occur.
6. Types of TPM
1. Discrete TPM:
o A dedicated chip embedded in the device hardware.
o Provides the highest level of security as it is physically separated from
other components.
2. Integrated TPM:
o Implemented within other components, such as a chipset or CPU.
o Slightly less secure than discrete TPMs.
3. Firmware TPM (fTPM):
o Emulates TPM functionalities in firmware.
o Provides cost-effective solutions but may be less secure than hardware-
based TPMs.
4. Software TPM (sTPM):
o Fully implemented in software and used for development and testing.
o Does not provide hardware-level security.
b. Root of Trust
• The Root of Trust (RoT) is a set of hardware, firmware, or software
components that form the foundation of a trusted computing
environment.
o It serves as the baseline for all secure operations, ensuring that the
system starts in a known good state and continues to operate securely.
• Purpose:
o To establish and maintain the integrity, confidentiality, and authenticity
of a computing platform.
o Acts as a reliable anchor for cryptographic processes, authentication, and
verification of system components.
c. Secure Boot
• Secure Boot is a security standard that ensures that a device boots using only
software that is trusted by the Original Equipment Manufacturer (OEM).
• Boot Process:
1. Power On: The device is powered on, and the firmware (BIOS/UEFI)
initializes.
2. Bootloader Verification: The firmware checks the bootloader's digital
signature against trusted keys before handing over control.
3. Chain of Trust: Each component in the boot process verifies the next
component in the chain, starting from the firmware to the operating
system.
4. Loading the OS: If all components are verified, the operating system is
loaded; otherwise, the boot process is halted.
• Key Concepts:
• Firmware (BIOS/UEFI):
• Responsible for the initial hardware checks and loading the bootloader.
• Bootloader:
• Must be signed with a key trusted by the firmware, which verifies the
signature before executing it.
• Operating System:
• The OS kernel and drivers must also be signed to ensure they are trusted.
d. Remote Attestation
• Remote attestation is a mechanism that enables a device (the "attester") to
prove to a remote party (the "verifier") that it is in a known and trusted state.
• Purpose:
• To provide assurance that the software and configuration of a device
have not been tampered with.
• To establish trust in a remote device before allowing it to participate in
secure communications or transactions.
• Process Overview:
1. Measurement: During the boot process, the device measures the integrity
of its software components (e.g., BIOS, bootloader, OS) and stores these
measurements in the TPM.
2. Quote Generation: The TPM generates a cryptographic "quote" that
includes the measurements and a signature, proving that the
measurements were taken by a trusted hardware component.
3. Communication: The attester sends the quote to the verifier along with
its public key and other relevant information.
4. Verification: The verifier checks the quote against expected values
(known good measurements) to determine if the attester is in a trusted
state.
Key Concepts:
• TPM-based Protocols: Utilize the TPM for secure measurement and quoting.
• DICE (Device Identifier Composition Engine): A framework for attesting
IoT devices, focusing on lightweight and efficient attestation methods.
• Trusted Computing Group (TCG) Standards: Various standards and
specifications for implementing remote attestation in trusted computing
environments.
• Rings are numbered from 0 to 7, with outer rings having a subset of the
privileges of the inner rings.
• Each file is a memory segment, and each segment description includes
an entry that indicates the ring number associated with that segment, as
well as read, write, and execute privileges.
• Each process runs in a ring, according to the current-ring-number, a
counter associated with each process.
• A process operating in one ring can only access segments associated
with higher (farther out) rings, and then only according to the access
bits. Processes cannot access segments associated with lower rings.
• Domain switching is achieved by a process in one ring calling upon a
process operating in a lower ring, which is controlled by several factors
stored with each segment descriptor:
o An access bracket, defined by integers b1 <= b2.
o A limit b3 > b2
o A list of gates, identifying the entry points at which the segments
may be called.
• If a process operating in ring i calls a segment whose bracket is such that
b1 <= i <= b2, then the call succeeds and the process remains in ring i.
• Otherwise a trap to the OS occurs, and is handled as follows:
o If i < b1, then the call is allowed, because we are transferring to a
procedure with fewer privileges. However if any of the parameters
being passed are of segments below b1, then they must be copied
to an area accessible by the called procedure.
o If i > b2, then the call is allowed only if i <= b3 and the call is
directed to one of the entries on the list of gates.
• Overall this approach is more complex and less efficient than other
protection schemes.
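The bracket rules above can be sketched as a single decision function; the ring chosen for a successful inward call (b2 here) is an assumption of this sketch:

```python
def ring_call(i: int, b1: int, b2: int, b3: int, gates: set, entry: int):
    """Decide a cross-ring call per the access-bracket rules above.
    Returns the ring the call proceeds in, or None if it is denied.
    (Parameter copying for the i < b1 case is omitted in this sketch.)"""
    if b1 <= i <= b2:
        return i          # within the bracket: stay in ring i
    if i < b1:
        return i          # outward call, to fewer privileges: allowed
    if i <= b3 and entry in gates:
        return b2         # inward call through a listed gate (assumed ring)
    return None           # denied

# Segment with bracket (2, 4), limit 6, one gate at entry point 0.
assert ring_call(3, 2, 4, 6, {0}, entry=7) == 3     # inside the bracket
assert ring_call(1, 2, 4, 6, {0}, entry=7) == 1     # outward call allowed
assert ring_call(5, 2, 4, 6, {0}, entry=0) == 4     # inward via a gate
assert ring_call(7, 2, 4, 6, {0}, entry=0) is None  # beyond the limit b3
```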
3.7 Viruses
3.7 Worms
• A worm is a process that uses the fork / spawn process to make copies of itself
in order to wreak havoc on a system. Worms consume system resources, often
blocking out other, legitimate processes. Worms that propagate over networks
can be especially problematic, as they can tie up vast amounts of network
resources and bring down large-scale systems.
• One of the most well-known worms was launched by Robert Morris, a graduate
student at Cornell, in November 1988. Targeting Sun and VAX computers
running BSD UNIX version 4, the worm spread across the Internet in a matter of a
few hours, and consumed enough resources to bring down many systems.
• This worm consisted of two parts:
1. A small program called a grappling hook, which was deposited on the
target system through one of three vulnerabilities, and
2. The main worm program, which was transferred onto the target system
and launched by the grappling hook program.
3.8 ROOTKIT
A rootkit is a collection of software used by a hacker, specially
designed to carry out malicious attacks and gain control by infecting the
target user's computer or network. Hackers use several different methods to
install rootkits on the target user’s computer.
Methods
[Table: rootkit installation methods and their descriptions]
Types of Rootkits
[Table: the different types of rootkits in cyber security and their descriptions]
Preventive Measures:
Below are some preventive measures which we can follow for preventing
rootkit attacks.
1. A phishing attack is an attack in which hackers send malicious messages
designed to trick the targeted user. Using phishing attacks, hackers
spread malware on a target user’s computer, bypassing firewalls to
extract the target user’s sensitive or personal information. Therefore, never
open attachments or click links in emails from unknown senders, and be
wary of unfamiliar social media activity.
2. Hackers hide malware in various unknown files such as archive files (.zip, .rar),
etc. When a target user opens this malicious file, the malware automatically
enters the system and takes control of the system. So avoid downloading
various types of unknown files, such as archive files (.zip, .rar), etc., because
hackers hide malicious programs in these types of files.
3. Use up-to-date anti-spyware and firewall programs to prevent unwanted access
to your computer.
4. Protect your device or computer from known and unknown viruses, malware,
etc. with a strong, up-to-date security suite and antivirus software.
5. Keep your software and operating system updated.
Polymorphic malware refers to malicious software that can change or morph its
code, making it difficult for traditional antivirus solutions to detect. This ability
to evolve allows polymorphic malware to evade signature-based detection
methods, which rely on static patterns or signatures to identify known threats.
Types of Polymorphic Malware
Polymorphic malware can take various forms, including:
• Polymorphic Viruses – These viruses can change their code or appearance with
each infection, making it difficult for antivirus software to recognize them
based on a static signature.
• Polymorphic Worms – Similar to viruses, polymorphic worms can also alter
their code or structure to evade detection. However, worms can propagate
independently without user intervention or attaching themselves to a host file.
• Polymorphic Trojans – These Trojans can change their code or behavior to
avoid being detected by security software. They often disguise themselves as
legitimate applications to trick users into downloading and installing them.
• Polymorphic Ransomware – This type of ransomware can modify its
encryption algorithms, communication methods, or other characteristics to
bypass security measures and successfully encrypt a victim’s data.
The Mechanics of Polymorphic Malware
Polymorphic malware employs several techniques to evade detection, such as:
• Code Obfuscation – By using encryption, compression, or other obfuscation
methods, polymorphic malware can conceal its true nature from security
software.
• Dynamic Encryption Keys – Polymorphic malware can use different encryption
keys for each new instance, making it challenging for signature-based detection
tools to identify the malware based on a fixed pattern.
• Variable Code Structure – By changing its code structure, polymorphic
malware can confuse security tools that rely on static signatures for detection.
• Behavioral Adaptation – Polymorphic malware can alter its behavior or
execution patterns to blend in with normal system processes, making it harder
for behavioral-based detection methods to identify the threat.
Examples of Polymorphic Malware Techniques
To better understand how malware can become polymorphic, let’s explore
some examples:
• Subroutine Permutation – Polymorphic malware can rearrange its subroutines
or functions in different orders to change its code structure. For example:
• Original Code:
function A() {...}
function B() {...}
function C() {...}
• Polymorphic Code:
function B() {...}
function C() {...}
function A() {...}
• Register Swapping – By changing the registers used to store values,
polymorphic malware can alter its appearance without affecting its
functionality:
• Original Code:
MOV EAX, 1
ADD EBX, EAX
• Polymorphic Code:
MOV ECX, 1
ADD EBX, ECX
• Instruction Substitution – Polymorphic malware can replace instructions with
equivalent ones to change its code while retaining its functionality:
• Original Code:
SUB EAX, 5
• Polymorphic Code:
ADD EAX, -5
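The same effect can be demonstrated in Python: two functionally equivalent code variants hash differently, which is exactly what defeats a static signature:

```python
import hashlib

# Two functionally equivalent routines with different bytes on disk,
# mirroring the SUB EAX, 5 vs ADD EAX, -5 substitution above.
variant_a = "def f(x):\n    return x - 5\n"
variant_b = "def f(x):\n    return x + -5\n"

# Both behave identically...
ns_a, ns_b = {}, {}
exec(variant_a, ns_a)
exec(variant_b, ns_b)
assert ns_a["f"](12) == ns_b["f"](12) == 7

# ...but a signature scanner sees two different hashes.
sig_a = hashlib.sha256(variant_a.encode()).hexdigest()
sig_b = hashlib.sha256(variant_b.encode()).hexdigest()
assert sig_a != sig_b
```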
Virus Protection
• Implementing protection against computer viruses involves a combination of
technical measures, user practices, and organizational policies. Here’s a
detailed approach to protect against viruses:
2. System Updates
• Update the Operating System: Regularly update your OS to fix
vulnerabilities that viruses exploit.
• Patch Applications: Ensure all installed applications, especially commonly
targeted ones like browsers and productivity software, are updated.
3. Network Protection
• Enable Firewalls: Use a firewall to block unauthorized access to your system.
• Secure Wi-Fi Networks: Use strong encryption (WPA3 or WPA2) and change
default passwords on routers.
4. Email Security
• Avoid Phishing Emails: Be cautious with unexpected email attachments or
links.
• Spam Filters: Use email services with strong spam filtering capabilities to
block potentially malicious messages.
7. Backup Strategy
• Regular Backups: Regularly back up important files to external drives or
cloud storage.
• Isolated Backups: Keep backups disconnected from the network to prevent
them from being infected.
8. Educate Users
• Training: Provide training on recognizing phishing attacks, avoiding
suspicious downloads, and safe internet practices.
• Awareness Campaigns: Regularly update users about new types of threats and
best practices.
6. Educate Users
• Recognize Fake Software: Train users to identify and avoid installing fake or
suspicious applications.
• Phishing Awareness: Teach users to recognize phishing attempts that may
lead to Trojan infections.
• Safe Practices: Emphasize the importance of avoiding unknown USB drives or
external media.
8. Backup Strategy
• Regular Backups: Create regular backups of critical files to protect against
data loss from Trojan infections.
• Isolated Backups: Store backups offline or in secure cloud storage to prevent
them from being compromised.
9. Advanced Protections
• Sandboxing: Test unknown software or files in a virtual sandbox environment
before running them on the main system.
• Behavioral Monitoring: Use security software that monitors application
behavior to detect Trojan-like activities.
• File Integrity Monitoring: Monitor changes in critical files to detect
unauthorized modifications.
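File integrity monitoring reduces to comparing current digests against a trusted baseline; a temporary file stands in for a critical system file here:

```python
import hashlib, os, tempfile

def file_digest(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Build a baseline of digests for critical files (a temp file stands in).
fd, path = tempfile.mkstemp()
os.write(fd, b"original system file contents")
os.close(fd)
baseline = {path: file_digest(path)}

# Later scan: unchanged files match the baseline.
assert file_digest(path) == baseline[path]

# A Trojan-style modification is detected as a digest mismatch.
with open(path, "ab") as f:
    f.write(b"injected payload")
assert file_digest(path) != baseline[path]

os.remove(path)
```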
• What is a Rootkit?
• A rootkit is a type of malicious software designed to gain unauthorized access
to a computer system while concealing its presence. Rootkits often allow
attackers to control the system as an administrator, modify system files, and
evade detection by antivirus programs. They can be installed through phishing
attacks, software vulnerabilities, or physical access to the system.
Types of Rootkits
• Kernel-Level Rootkits: Operate at the OS kernel level, giving attackers deep
access and control over the system.
• User-Mode Rootkits: Operate at the application layer, affecting software and
user-level processes.
• Firmware Rootkits: Reside in firmware such as BIOS or UEFI, making them
highly persistent and difficult to remove.
• Hypervisor Rootkits: Target virtual machines by acting as a hypervisor
beneath the OS.
• Bootkits: Infect the master boot record (MBR) or EFI system partition to
execute before the OS loads.
• What is Ransomware?
• Ransomware is a type of malicious software that encrypts the victim's files or
locks them out of their system. The attacker demands a ransom payment in
exchange for providing a decryption key or restoring access. Some ransomware
also threatens to leak stolen data if the ransom isn't paid (known as double
extortion).
3. Backups
• Regular Backups: Regularly back up important files to external drives or
cloud services.
• Offline Backups: Store backups offline or in a separate, air-gapped network to
protect against ransomware targeting backups.
• Test Restores: Periodically test restoring files from backups to ensure
reliability.
4. Email Security
• Beware of Phishing Emails: Do not open suspicious email attachments or
click on links from unknown senders.
• Spam Filters: Use email services with strong spam filtering to block phishing
attempts.
• Email Authentication: Implement protocols like DMARC, SPF, and DKIM to
reduce email spoofing risks.
5. Network Security
• Firewalls: Enable firewalls to block unauthorized access.
• Intrusion Detection/Prevention Systems (IDS/IPS): Monitor and block
suspicious network activity.
• Secure Remote Access: Use Virtual Private Networks (VPNs) and multi-factor
authentication (MFA) for remote connections.
8. Advanced Protections
• Endpoint Detection and Response (EDR): Deploy EDR solutions to monitor,
detect, and respond to ransomware threats.
• Zero Trust Architecture: Implement zero trust principles to verify all access
requests continuously.
• Micro-Segmentation: Segment the network to prevent lateral movement of
ransomware.
Architecture of a Honeypot
The architecture of a honeypot can vary based on its purpose, but it generally
consists of the following components:
1. Honeypot System:
• Physical or Virtual Machine: The honeypot can be deployed on a
physical server or as a virtual machine. Virtual honeypots are often
easier to manage and can be quickly deployed or destroyed.
• Operating System: The honeypot can run a standard OS (like Linux or
Windows) or a custom OS designed to appear vulnerable.
• Services: The honeypot may run services that are commonly targeted by
attackers (e.g., SSH, FTP, HTTP) to simulate a real environment.
2. Data Collection:
• Logging: All interactions with the honeypot are logged. This includes
connection attempts, commands executed, and any data sent or received.
• Monitoring Tools: Tools like Snort, Suricata, or custom scripts can be
used to monitor network traffic and system calls.
3. Analysis Tools:
• Data Analysis: Collected data is analyzed to identify patterns, attack
vectors, and the tools used by attackers.
• Alerting Mechanisms: Alerts can be configured to notify administrators
of suspicious activity.
4. Isolation:
• Network Segmentation: The honeypot should be isolated from the
production network to prevent attackers from moving laterally to other
systems.
• Firewall Rules: Specific firewall rules can be implemented to control
traffic to and from the honeypot.
5. Management Interface:
• Dashboard: A management interface can be created to visualize data,
monitor activity, and manage the honeypot's configuration.
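A minimal low-interaction honeypot along the lines of the components above: listen on a fake service port, present a banner, and log whatever the attacker sends (the FTP banner and port choice are illustrative):

```python
import socket, threading

log = []  # every interaction with the honeypot gets recorded

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen()
port = srv.getsockname()[1]

def serve_one():
    conn, addr = srv.accept()
    conn.sendall(b"220 FTP server ready\r\n")  # pose as a common target
    data = conn.recv(1024)                     # capture the attacker's input
    log.append({"peer": addr[0], "sent": data})
    conn.close()

t = threading.Thread(target=serve_one)
t.start()

# Simulated attacker probes the fake service.
c = socket.create_connection(("127.0.0.1", port))
assert c.recv(1024).startswith(b"220")
c.sendall(b"USER admin\r\n")
c.close()
t.join()
srv.close()

assert log and log[0]["sent"] == b"USER admin\r\n"
```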
Implementing a Honeypot