
Kratikal OS

02 September 2025 21:34

1. What is a process vs thread?


A process is a program in execution, while a thread is the smallest unit of execution within a process that can be scheduled independently.
Threads are lightweight: all threads of a process share the same address space, including the code section, the data section, and operating system resources such as open files and signals. However, each thread has its own program counter (PC), register set, and stack, which lets it execute independently within the same process context. Unlike separate processes, threads are not fully independent of one another, so they can communicate and synchronize more efficiently, which makes them well suited to concurrent and parallel execution in multi-threaded programs.
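
A minimal POSIX threads sketch of the points above (assuming Linux/gcc, compiled with -pthread): the two threads share the global counter in the data section, while each has its own stack-local variable, program counter, and registers.

/* Two threads share the process's globals but each runs on its own stack
 * with its own program counter and registers. Compile: gcc demo.c -pthread */
#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;               /* shared data section              */

void *worker(void *arg) {
    int local = 0;                    /* lives on this thread's own stack */
    for (int i = 0; i < 1000; i++) {
        local++;
        shared_counter++;             /* visible to every thread          */
    }
    printf("thread %ld: local = %d\n", (long)arg, local);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %d (may be < 2000 without locking)\n", shared_counter);
    return 0;
}

Because shared_counter is updated without a lock, the final value may be less than 2000; this is exactly the kind of problem the mutexes and semaphores in question 5 exist to solve.
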
2. Difference between orphan and zombie process.
A zombie process is one that has finished executing but still has an entry in the process table so that its exit status can be reported to its parent. A child process always becomes a zombie briefly before being removed from the process table; when the parent reads the child's exit status, the child's entry is reaped from the table.

An orphan process is one whose parent no longer exists, i.e. the parent finished or was terminated without waiting for its child to terminate. On Unix-like systems, orphans are adopted by the init process (or systemd), which eventually reaps them.
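
A small fork()/wait() sketch (POSIX, illustrative) of the zombie lifecycle: while the parent sleeps, the exited child sits in the process table as a zombie; waitpid() then reaps it. If the parent exited without waiting, the child would instead become an orphan and be adopted by init.

/* The child exits first and remains a zombie until the parent reaps it. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {                    /* child */
        printf("child %d exiting\n", getpid());
        exit(42);                      /* becomes a zombie until reaped   */
    }
    sleep(2);                          /* child is a zombie during this
                                          sleep (shows as <defunct> in ps) */
    int status;
    waitpid(pid, &status, 0);          /* reap: entry removed from table  */
    printf("reaped child, exit status %d\n", WEXITSTATUS(status));
    return 0;
}
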

3. What is a PCB (Process Control Block)?


The process control block (PCB) is the data structure the operating system uses to track a process's execution status. It contains information about the process such as its saved registers, state, scheduling priority, time quantum, memory-management data, and open files. The process table is logically an array of PCBs, one for each process currently in the system.
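
A heavily simplified, hypothetical PCB layout in C; real kernels (for example Linux's struct task_struct) hold far more fields, and the names below are purely illustrative.

#include <stdint.h>
#include <stdio.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;              /* process identifier               */
    enum proc_state state;            /* current scheduling state         */
    uint64_t        program_counter;  /* saved PC while not running       */
    uint64_t        registers[16];    /* saved general-purpose registers  */
    int             priority;         /* scheduling priority              */
    unsigned        time_quantum_ms;  /* remaining time quantum           */
    void           *page_table;       /* memory-management information    */
    int             open_files[16];   /* open file descriptors            */
};

int main(void) {
    struct pcb process_table[4] = {0};   /* process table: an array of PCBs */
    process_table[0].pid   = 1;
    process_table[0].state = READY;
    printf("each PCB occupies %zu bytes\n", sizeof(struct pcb));
    return 0;
}
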

4. Explain context switching.


Switching the CPU to another process means saving the state of the old process and loading the saved state of the new one. During a context switch, the old process's state (registers, program counter, and so on) is stored in its Process Control Block so the CPU can serve the new process, and the old process can later be resumed from exactly the point where it left off.
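
A user-level approximation using the POSIX ucontext API: swapcontext() saves the current registers and program counter into one context and loads another, which mirrors what the kernel does with PCBs (a real switch also involves kernel mode and memory-management state not shown here).

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024];

static void task(void) {
    printf("task: running, switching back to main\n");
    swapcontext(&task_ctx, &main_ctx);   /* save task's state, resume main */
    printf("task: resumed exactly where it left off\n");
}

int main(void) {
    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp   = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link          = &main_ctx;    /* where to go when task ends  */
    makecontext(&task_ctx, task, 0);

    printf("main: switching to task\n");
    swapcontext(&main_ctx, &task_ctx);        /* save main's state, run task */
    printf("main: back in main, resuming task\n");
    swapcontext(&main_ctx, &task_ctx);        /* resume task after its swap  */
    printf("main: done\n");
    return 0;
}
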

5. What is mutex vs semaphore?


• A mutex is a locking mechanism, whereas a semaphore is a signaling mechanism.
• A mutex is a lock object, while a semaphore is an integer counter.
• A mutex has no subtypes, whereas semaphores come in two types: counting semaphores and binary semaphores.
• A semaphore can be waited on and signaled by any process or thread, whereas a mutex may only be released by the thread that acquired it.
• A semaphore's value is modified using wait() and signal() operations; a mutex, on the other hand, is simply locked or unlocked.
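
A side-by-side sketch with POSIX threads (assuming Linux, compiled with -pthread): the mutex guards the critical section and is unlocked by the same thread that locked it, while the counting semaphore, initialised to 2, simply limits how many threads are inside the region at once.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;   /* locking mechanism   */
sem_t slots;                                        /* counting semaphore  */
int shared = 0;

void *worker(void *arg) {
    (void)arg;
    sem_wait(&slots);                  /* wait(): decrement, block if zero */
    pthread_mutex_lock(&lock);         /* enter the critical section       */
    shared++;
    pthread_mutex_unlock(&lock);       /* only the locking thread unlocks  */
    sem_post(&slots);                  /* signal(): increment the count    */
    return NULL;
}

int main(void) {
    sem_init(&slots, 0, 2);            /* at most 2 workers past sem_wait  */
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("shared = %d\n", shared);
    sem_destroy(&slots);
    return 0;
}
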

6. Difference between throughput, latency, bandwidth.
Bandwidth is the maximum amount of data that can be transferred over a network per unit time (the capacity of the link), throughput is the amount of data actually transferred successfully per unit time, and latency is the time it takes for data to travel from its source to its destination.

7. Explain page, fragmentation (internal vs external).


A page, memory page, or virtual page is a fixed-length contiguous block of virtual memory, described by a single
entry in a page table. It is the smallest unit of data for memory management in an operating system that uses
virtual memory.

Paging is a technique for non-contiguous memory allocation based on fixed-size partitioning. Both main memory and secondary memory are divided into equal fixed-size partitions: the partitions of secondary memory are called pages and the partitions of main memory are called frames.
Paging brings processes from secondary memory into main memory in the form of pages. Each process is split into parts whose size equals the page size; the last part may be smaller than a page. The pages of a process are placed into whichever frames of main memory are available, so they need not be contiguous.

Fragmentation arises as processes are loaded into and removed from memory, leaving free memory broken into pieces too small to be used by other processes.
• Internal fragmentation: memory is allocated in fixed-size blocks (such as frames), and the last block given to a process is usually not completely filled, so the unused space inside the allocated block is wasted.
• External fragmentation: the total free memory would be enough to satisfy a request, but it is scattered across small non-contiguous holes, so no single hole is large enough. This typically occurs in dynamic (variable-size) allocation schemes.
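
A small numeric sketch of both ideas, assuming 4 KiB pages (the page size and addresses are illustrative): a virtual address splits into a page number and an offset, and the unused space in a process's last page is internal fragmentation.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u                       /* 4 KiB pages => 12 offset bits */

int main(void) {
    uint32_t vaddr       = 0x00012A7F;        /* example virtual address       */
    uint32_t page_number = vaddr / PAGE_SIZE; /* selects the page-table entry  */
    uint32_t offset      = vaddr % PAGE_SIZE; /* position within the frame     */
    printf("vaddr 0x%08X -> page %u, offset 0x%03X\n", vaddr, page_number, offset);

    /* Internal fragmentation: a 10 KiB process needs 3 pages (12 KiB),
     * so 2 KiB in its last frame is wasted. */
    uint32_t proc_size    = 10 * 1024;
    uint32_t pages_needed = (proc_size + PAGE_SIZE - 1) / PAGE_SIZE;
    printf("10 KiB process: %u pages, %u bytes wasted in the last frame\n",
           pages_needed, pages_needed * PAGE_SIZE - proc_size);
    return 0;
}
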

8. What is deadlock and how can it be prevented?
Deadlock is a situation in which two or more processes wait for each other to finish and none of them ever does. Consider two trains approaching each other on a single track: once they are face to face, neither can move. A similar situation occurs in operating systems when two or more processes each hold some resources while waiting for resources held by the other(s).

Deadlock Prevention
Deadlock prevention means designing the system in a way that ensures at least one Coffman condition never
holds. Common strategies:
1. Eliminate Mutual Exclusion
○ Make resources sharable whenever possible (e.g., read-only files).
○ But not always possible (printers, locks).
2. Eliminate Hold and Wait
○ Force processes to request all resources at once (before execution starts).
○ Or require a process to release all held resources before requesting new ones.
○ Downsides: Low resource utilization, possible starvation.
3. Eliminate No Preemption
○ If a process requests a resource not available, it must release all held resources and retry later.
○ Or forcibly preempt resources from processes when needed.
○ Used in databases (transaction rollbacks).
4. Eliminate Circular Wait
○ Impose a total ordering of resource types and require each process to request resources in increasing
order only.
○ This breaks the circular chain.
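
A minimal pthread sketch of strategy 4, resource ordering (the two locks and the scenario are illustrative): because every thread acquires lock_a before lock_b, a circular wait can never form.

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;   /* order 1 */
pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;   /* order 2 */

void *transfer(void *arg) {
    /* Every thread follows the same global order: lock_a, then lock_b.
     * Taking b before a in one thread is what would permit a deadlock. */
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    printf("thread %ld in critical section\n", (long)arg);
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, transfer, (void *)1L);
    pthread_create(&t2, NULL, transfer, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
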

Other Approaches
• Deadlock Avoidance → Use algorithms (like Banker’s Algorithm) that dynamically check if granting a
resource request keeps the system in a safe state.
• Deadlock Detection & Recovery → Allow deadlocks to occur, detect them (e.g., with a wait-for graph), then
recover (kill/restart processes or preempt resources).
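
A hedged sketch of the safety check at the heart of the Banker's Algorithm mentioned above (the matrices and sizes are made up for illustration): the state is "safe" if some ordering lets every process obtain its remaining need and finish, returning its allocation to the pool as it does so.

#include <stdbool.h>
#include <stdio.h>

#define P 3   /* number of processes      */
#define R 2   /* number of resource types */

bool is_safe(int available[R], int alloc[P][R], int need[P][R]) {
    int  work[R];
    bool finished[P] = { false };
    for (int r = 0; r < R; r++) work[r] = available[r];

    for (int done = 0; done < P; ) {
        bool progressed = false;
        for (int p = 0; p < P; p++) {
            if (finished[p]) continue;
            bool can_run = true;
            for (int r = 0; r < R; r++)
                if (need[p][r] > work[r]) { can_run = false; break; }
            if (can_run) {              /* p can finish and return its allocation */
                for (int r = 0; r < R; r++) work[r] += alloc[p][r];
                finished[p] = true;
                progressed  = true;
                done++;
            }
        }
        if (!progressed) return false;  /* nobody can finish -> unsafe state */
    }
    return true;
}

int main(void) {
    int available[R] = { 1, 1 };
    int alloc[P][R]  = { {1, 0}, {0, 1}, {1, 1} };
    int need[P][R]   = { {1, 1}, {1, 0}, {0, 1} };
    printf("state is %s\n", is_safe(available, alloc, need) ? "safe" : "unsafe");
    return 0;
}
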

9. Difference between static and dynamic loading/linking.


Static Loading
• All program code and data are loaded into memory at program start (load time).
• Once loaded, everything is ready to run.
• No further loading happens during execution.
• Example: Traditional C program compiled and loaded fully into RAM before running.

Advantages
• Faster execution (everything is already in memory).
• Simpler management.
Disadvantages
• Wastes memory (unused routines are still loaded).
• Larger initial load time.

Dynamic Loading
• A routine is loaded into memory only when it is first called.
• Code resides on disk until needed.
• Requires a small piece of code called a loader stub that checks if the routine is loaded; if not, it loads it.
Advantages
• Saves memory (unused routines never loaded).
• Useful for large programs with rarely used functions.
Disadvantages
• Slight overhead when routine is loaded for the first time.
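
A short illustration of dynamic loading on Linux using dlopen()/dlsym() (the library name libm.so.6 and the -ldl link flag are Linux-specific assumptions): the math library's cos() is brought into the address space only when this code runs, not at program load time.

/* Compile with: gcc demo.c -ldl */
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    void *handle = dlopen("libm.so.6", RTLD_LAZY);    /* load on demand    */
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (!cosine) { fprintf(stderr, "%s\n", dlerror()); dlclose(handle); return 1; }

    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);                                  /* unload when done  */
    return 0;
}
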

Static vs Dynamic Linking


This is about how external libraries (like .lib / .dll / .so) are connected to a program.
Static Linking
• All library code needed by the program is copied into the final executable file at compile/link time.
• Executable is self-contained.
Advantages
• No dependency on external libraries at runtime.
• Faster execution (no runtime linking).
Disadvantages
• Larger executable size (duplicate code in multiple programs).
• Updating a library requires recompiling the program.

Dynamic Linking
• Only references (stubs) to the shared library are placed in the executable.
• The actual library code is linked at runtime (when the program is executed).
• Libraries can be shared among multiple programs (e.g., .dll in Windows, .so in Linux).
Advantages
• Saves memory (single copy of library shared).
• Easier updates (updating library automatically benefits all programs).
Disadvantages
• Slower startup (linking happens at runtime).
• Program depends on the presence of the correct version of the library (dependency issues).

10. What happens in the background when you open [Link]?


1. DNS Resolution (Finding IP Address)
When you type [Link] in the browser, the first step is to find the IP address of that domain. The browser
checks its own cache, then the operating system cache, and if not found, it queries the DNS resolver (usually
your ISP or Google DNS like [Link]). This resolver may contact other DNS servers (root, TLD, authoritative) until
it gets the IP address of Google’s server. Only after this step does your computer know where to connect.

2. Establishing a TCP Connection


Once the IP is known, the browser starts a TCP connection with the Google server. This is done through a
process called the three-way handshake: the client sends a SYN, the server replies with SYN-ACK, and the client
responds with ACK. This ensures a reliable communication channel is established between your computer and
the server.
3. TLS Handshake (Secure Connection)
Because the URL is [Link], the browser also needs to secure the connection. This happens using the TLS
handshake. The server provides its SSL/TLS certificate to prove it’s really Google, and both sides exchange
cryptographic keys. After this step, all data sent between your browser and Google will be encrypted, ensuring
privacy and security.

4. Sending the HTTP Request


With the secure connection ready, the browser sends an HTTP GET request to the server. This request
essentially says: “Please send me the homepage of [Link].” It also includes headers such as the browser
type, cookies (if any), and other details that help the server respond appropriately.
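
A rough plain-sockets sketch of steps 1, 2 and 4 in C (TLS from step 3 is omitted, so this talks to port 80, and example.com is a placeholder host rather than the site from the question): getaddrinfo() performs the DNS lookup, connect() triggers the TCP three-way handshake, and send() transmits the HTTP GET.

#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    struct addrinfo hints = {0}, *res;
    hints.ai_family   = AF_UNSPEC;               /* IPv4 or IPv6            */
    hints.ai_socktype = SOCK_STREAM;             /* TCP                     */

    /* Step 1: DNS resolution */
    if (getaddrinfo("example.com", "80", &hints, &res) != 0) return 1;

    /* Step 2: TCP connection (three-way handshake happens inside connect) */
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) return 1;

    /* Step 4: send the HTTP request (step 3, the TLS handshake, is skipped) */
    const char *req = "GET / HTTP/1.1\r\nHost: example.com\r\n"
                      "Connection: close\r\n\r\n";
    send(fd, req, strlen(req), 0);

    /* Step 5: read the beginning of the server's response */
    char buf[512];
    ssize_t n = recv(fd, buf, sizeof buf - 1, 0);
    if (n > 0) { buf[n] = '\0'; printf("%.200s\n", buf); }

    close(fd);
    freeaddrinfo(res);
    return 0;
}
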

5. Server Processing and Response


Google’s server receives the request, processes it, and sends back an HTTP response. This usually contains the
HTML content of the page along with references to other resources like CSS, JavaScript, images, and fonts. The
response also includes status codes (e.g., 200 OK) and headers that provide extra information, such as caching
rules.

6. Browser Rendering the Page


The browser now takes the HTML and begins rendering the page. It parses the HTML to build a DOM tree, then
fetches CSS to build a CSSOM tree, and executes any JavaScript code. Once the structure and styles are ready,
the browser combines them into a render tree, calculates the layout of elements, and finally paints pixels on the
screen. This is the moment when you visually see Google’s homepage.

7. Additional Optimizations
While rendering, the browser may download multiple resources in parallel to speed things up. It also makes use
of cached files (for example, if you visited before, some images or scripts may already be stored locally). Modern
browsers even pre-fetch DNS records and open connections in advance to make loading faster.

✅ So in short: typing [Link] triggers DNS resolution, TCP/TLS handshakes, an HTTP request/response
cycle, and then the browser parses and renders the page for you.

