CAM 5 PYQ
Here are the simple points about Locality of Reference in Cache Memory:
1. Types of Locality:
Temporal Locality: Reuse the same data again soon (e.g., loop variables).
Spatial Locality: Access data stored close together (e.g., array elements).
2. Cache Optimization:
Caches exploit both kinds of locality: recently used data is kept in the cache (temporal), and whole blocks of neighbouring words are fetched at once (spatial).
3. In Practice:
Programs with good locality satisfy most of their accesses from the fast cache instead of slow main memory.
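The effect of locality can be sketched with a toy cache model. The block size, cache size, and access patterns below are illustrative assumptions, not values from these notes:

```python
# Sketch: counting cache hits for different access patterns on a toy
# fully associative cache with FIFO eviction.

BLOCK_SIZE = 8   # words per cache block (illustrative)
NUM_BLOCKS = 4   # blocks the toy cache can hold (illustrative)

def hit_rate(addresses):
    """Return the fraction of accesses served from the toy cache."""
    cache, hits = [], 0
    for addr in addresses:
        block = addr // BLOCK_SIZE
        if block in cache:
            hits += 1               # locality pays off: the block is cached
        else:
            cache.append(block)
            if len(cache) > NUM_BLOCKS:
                cache.pop(0)        # evict the oldest block (FIFO)
    return hits / len(addresses)

sequential = list(range(64))        # spatial locality: neighbours share a block
repeated   = [0, 1, 2, 3] * 16      # temporal locality: same words reused
scattered  = [i * 37 % 1024 for i in range(64)]  # little locality

print(hit_rate(sequential), hit_rate(repeated), hit_rate(scattered))
```

Sequential and repeated accesses score high hit rates, while the scattered pattern misses almost every time, which is exactly why caches reward locality of reference.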
Here’s a simplified explanation of Programmed I/O based on the provided content:
How It Works
1. CPU Initiates Transfer:
The CPU sends or receives data to/from a peripheral device via specific I/O instructions.
2. CPU Monitoring:
The CPU constantly checks the device status until it is ready for the next data transfer.
Drawback of Programmed I/O
Inefficient Use of CPU Time:
The CPU stays in a loop ("polling") to monitor the I/O device, wasting time while waiting for the device to be
ready.
Key Points
CPU Involvement: The CPU is heavily involved in every step of data transfer.
Inefficiency: Time is wasted as the CPU waits for the I/O device to respond.
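The polling behaviour described above can be sketched as follows; the `Device` class is a made-up stand-in for a real status/data register pair, not any specific hardware:

```python
# Sketch of programmed I/O: the CPU busy-waits ("polls") on a status flag
# until the device is ready, then moves one byte at a time.

class Device:
    def __init__(self, data):
        self._data = list(data)
        self._ticks = 0

    def ready(self):                 # status register check
        self._ticks += 1
        return self._ticks % 3 == 0  # pretend the device is slow

    def read(self):                  # data register read
        return self._data.pop(0)

def programmed_io_read(device, count):
    received = []
    for _ in range(count):
        while not device.ready():    # polling loop: wasted CPU time
            pass
        received.append(device.read())
    return received

print(programmed_io_read(Device(b"HELLO"), 5))
```

Every byte costs several wasted status checks, which is the inefficiency the notes describe.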
Interrupt-Driven I/O
Instead of polling, the device sends an interrupt signal to the CPU when it is ready for data transfer.
How It Works
1. Device Sends Interrupt:
The I/O device generates an Interrupt Request (IRQ) when it is ready for data transfer.
2. CPU Response:
The CPU saves the current state, including the return address (taken from the Program Counter).
3. Service Routine:
The CPU branches to a specific interrupt service routine to handle the data transfer.
After completing the I/O operation, the CPU returns to the original program.
4. No Continuous Monitoring:
Unlike Programmed I/O, the CPU doesn't continuously check the device's status.
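The flow above can be sketched in software; the event loop below is a stand-in for the hardware IRQ mechanism, and names like `service_routine` are illustrative:

```python
# Sketch of interrupt-driven I/O: the CPU runs its main program and the
# handler is invoked only when the device raises an interrupt.

buffer = []

def service_routine(data):
    """Interrupt service routine: handle one transfer, then return."""
    buffer.append(data)

def run(main_work, interrupts):
    done = []
    for step in range(main_work):
        done.append(step)            # CPU keeps doing useful work...
        if step in interrupts:       # ...until the device signals an IRQ
            saved = step             # save the return point (like the PC)
            service_routine(interrupts[step])
            assert saved == step     # resume exactly where we left off
    return done

run(6, {1: "pkt-A", 4: "pkt-B"})
print(buffer)                        # data arrived without any busy-waiting
```

Unlike the polling sketch, the CPU here never loops waiting on a status flag: work and I/O handling interleave.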
Types of Interrupts
1. Vectored Interrupt:
The device provides the address of the service routine when it sends the interrupt, so the CPU gets the routine's address directly from the device.
2. Non-Vectored Interrupt:
The device does not supply an address; the CPU branches to a fixed, predefined location to find the service routine.
Advantages
Better CPU Utilization: The CPU can perform other tasks while waiting for the I/O device.
Faster Response: The CPU responds as soon as the interrupt arrives, instead of waiting in a polling loop.
Comparison with Programmed I/O

| Aspect | Programmed I/O | Interrupt-Driven I/O |
| --- | --- | --- |
| Initiation of Transfer | CPU initiates and monitors every data transfer. | Device initiates transfer via an interrupt. |
| CPU Utilization | Inefficient; CPU spends time in a loop. | Efficient; CPU performs other tasks in parallel. |
| Response Time | Slower, as the CPU might miss some events while polling. | Faster, as the CPU responds immediately to interrupts. |
| Implementation Complexity | Simpler; fewer hardware requirements. | More complex; requires an interrupt controller. |
| System Throughput | Lower, due to CPU time wastage. | Higher, as the CPU is free for other tasks. |
| Suitability | Suitable for simple devices or low-speed peripherals. | Suitable for high-speed devices and multitasking. |
| Data Loss Risk | Higher, especially with fast devices. | Lower, as interrupts ensure timely handling. |
| Examples | Reading from a slow keyboard or mouse. | Handling network packets or high-speed disks. |
Key Points about Virtual Memory:
1. Definition:
Virtual memory allows a computer to use more memory than is physically available by using a portion of the
hard disk to emulate RAM.
2. Advantages:
Larger Programs: Programs larger than the available physical memory can be executed.
Memory Protection: Each virtual address is translated to a physical address, adding a layer of protection.
3. Demand Loading:
Rarely used features or options are kept out of memory until required.
4. Benefits:
Allows more programs to run simultaneously, improving CPU utilization and throughput.
5. Role of the MMU (Memory Management Unit):
The MMU translates virtual memory addresses into physical memory addresses, managing the interaction
between hardware and memory.
6. Modern Usage:
Virtual memory is a fundamental feature of modern operating systems, enabling efficient use of memory
resources and supporting advanced multitasking capabilities.
DRAM Refresh
DRAM (Dynamic RAM) stores data in tiny capacitors that lose their charge over time. To prevent data loss, DRAM must be refreshed
regularly. Here’s how it works in simple steps:
1. Capacitors lose charge: DRAM stores each bit of data in a small capacitor. These capacitors slowly lose their
charge, so the data can be lost.
2. Refresh needed: To keep the data, the memory must "refresh" the capacitors by rewriting the data.
3. Refresh circuit: A refresh circuit automatically cycles through all the memory cells and reloads the charge in the
capacitors. Every cell must be refreshed within a short interval (typically about 64 ms), which works out to thousands of row refreshes per second.
4. Automatic process: This refresh process happens in the background without affecting normal memory
operations like reading or writing data.
In short, DRAM needs to be refreshed regularly to keep the stored data from disappearing.
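A toy model of this refresh process; the charge levels, read threshold, and tick counts are illustrative numbers, not real DRAM parameters:

```python
# Toy model of DRAM refresh: each cell's charge decays every tick, and a
# refresh pass rewrites (recharges) the cells before the stored bit is lost.

FULL_CHARGE = 10
READ_LIMIT  = 3        # below this the bit can no longer be read back

def simulate(ticks, refresh_every):
    cells = [FULL_CHARGE] * 8                  # eight stored "1" bits
    for t in range(1, ticks + 1):
        cells = [c - 1 for c in cells]         # capacitors leak charge
        if t % refresh_every == 0:
            cells = [FULL_CHARGE] * 8          # refresh circuit rewrites data
        if min(cells) < READ_LIMIT:
            return False                       # data was lost
    return True

print(simulate(100, refresh_every=5))   # refreshed in time: data survives
print(simulate(100, refresh_every=20))  # refreshed too late: data lost
```

The only difference between the two runs is the refresh interval, mirroring the point that DRAM data survives exactly as long as refresh keeps pace with charge leakage.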
1. Direct Mapping
Concept: In Direct Mapping, each block of main memory is mapped to one specific cache block by a simple
formula: block j of main memory maps to cache block j % N, where N is the total number of cache blocks.
How it works:
Memory block 0, 128, 256, ... will all map to cache block 0.
Memory block 1, 129, 257, ... will map to cache block 1, and so on.
Block offset: Determines the location within the block (low-order bits).
Cache block number (index): Determines the specific cache location where the block will be placed.
Tag: The remaining high-order bits, stored with the block and compared on each access to confirm the right block is present.
Cons: Inflexible, as a specific block can only go to one place in the cache. This can cause cache conflicts if
multiple blocks from memory map to the same cache block.
2. Associative Mapping
Concept: In Associative Mapping, a memory block can be placed in any cache block, which offers more
flexibility in cache placement.
How it works:
The tag part of the memory address is used to compare with all cache blocks to find a match.
There is no fixed position for each memory block in the cache. Any block from main memory can be stored
in any cache location.
Cons: More complex because it requires comparing tags with all cache entries (requires parallel searches
for every cache block), which is more hardware-intensive.
3. Set-Associative Mapping
Concept: Set-Associative Mapping combines the flexibility of associative mapping with the simplicity of direct
mapping. Cache blocks are grouped into sets, and each memory block can be placed in any block within a
specific set.
How it works:
A cache is divided into several sets (e.g., 2-way set-associative or 4-way set-associative), and each set
can store multiple blocks.
Memory blocks are mapped to a set using a formula ( block_number % number_of_sets ), but within each set, the
memory block can go into any available cache block.
Example:
2-way set-associative: For a cache with 128 blocks, there will be 64 sets, and each set can hold 2 blocks.
So, memory block 0, 64, 128, ... will map to set 0, and these blocks can reside in either of the two blocks in
that set. Similarly, memory block 1, 65, 129, ... will map to set 1.
Structure:
Set index: Determines the set in which the block could be placed.
Pros: Reduces conflict misses compared to direct mapping. Less hardware-intensive than fully associative
mapping.
Cons: More complex than direct mapping, requires searching within a set.
Summary of Differences:
| Mapping Type | Flexibility | Cache Efficiency | Complexity |
| --- | --- | --- | --- |
| Direct | Low; each block has one fixed slot. | Prone to conflict misses. | Low; simple hardware. |
| Associative | High; any block can go anywhere. | Best use of cache space. | High; parallel tag comparison. |
| Set-Associative | Medium; any slot within one set. | Fewer conflict misses than direct. | Medium; search within one set. |
In conclusion, Direct Mapping is simpler but less flexible, Associative Mapping provides maximum flexibility at the
cost of complexity, and Set-Associative Mapping strikes a balance between the two, combining aspects of both
direct and associative mapping.
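The three placement policies can be sketched for a toy 128-block cache (matching the 2-way example above). Each function returns the list of cache slots where a given memory block may legally be placed:

```python
# Sketch: legal cache slots for a memory block under each mapping scheme,
# for an illustrative cache of 128 blocks.

NUM_BLOCKS = 128

def direct_mapped(block):
    return [block % NUM_BLOCKS]                 # exactly one legal slot

def fully_associative(block):
    return list(range(NUM_BLOCKS))              # any slot is legal

def set_associative(block, ways=2):
    sets = NUM_BLOCKS // ways                   # 64 sets for 2-way
    s = block % sets                            # which set the block maps to
    return [s * ways + w for w in range(ways)]  # any slot within that set

print(direct_mapped(256))     # blocks 0, 128, 256, ... all collide on slot 0
print(set_associative(129))   # two candidate slots in set 1
```

This makes the trade-off concrete: direct mapping returns one slot (cheap but conflict-prone), fully associative returns all of them (flexible but expensive to search), and set-associative returns a small set in between.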
Asynchronous Transmission
Asynchronous transmission is a method of data transmission where the data is sent in individual, discrete chunks
or characters, with each character preceded by a start bit and followed by a stop bit. This helps the receiver
identify where each data character begins and ends.
1. Start and Stop Bits:
These bits are used to synchronize the data between the sender and the receiver.
2. Mark State:
Between characters, the line remains in a mark state (binary 1 or negative voltage), which signifies
inactivity.
When data is being sent, the mark state is interrupted by the start bit ( 0 ), indicating the beginning of a new
character.
3. Transmission of Characters:
Data is sent one character at a time, with a start bit to signal the beginning and a stop bit to mark the end.
For example, the ASCII character "A" (which is 0100 0001 ) would be sent as 0 0100 0001 1 . The 0 at the
beginning is the start bit, and the 1 at the end is the stop bit.
4. Transmission Gaps:
There may be gaps (spaces) between characters, meaning that characters don’t need to be sent in a
continuous stream.
The gaps allow for idle times between data transmissions, where the line stays in the mark state.
5. Parity Bit:
A parity bit (optional) can be added to provide error detection. This bit is often placed after the data bits but
before the stop bit.
Parity can be even or odd, depending on the method chosen, and helps the receiver detect transmission
errors.
Example:
For example, when transmitting the ASCII character "A" ( 0100 0001 ):
Start bit: 0 (binary 0 , or positive voltage).
Data bits: 0100 0001 .
Stop bit: 1 (binary 1 , or negative voltage).
This format ensures that the receiver knows when to expect the start and end of each character.
Gaps between characters: Characters can be separated by gaps, meaning data doesn't need to be sent
continuously.
Mark state: Inactive line is marked by binary 1 , and when it is interrupted by a 0 , the receiver knows that new
data will follow.
Use in Communication:
Asynchronous transmission is commonly used in situations where data is sent intermittently (e.g., over
telephone lines, serial communication links). It’s especially useful for applications where data is not
continuously flowing, like sending individual characters or small data packets.
Summary:
Asynchronous transmission is ideal for scenarios with intermittent communication, where each character is clearly
marked with start and stop bits, making it simple to implement and use for low-speed or occasional data
transmission.
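The framing described above can be sketched as follows. Note that real UARTs send bits LSB-first; the bits here are kept MSB-first only to match the `0100 0001` notation used in the text:

```python
# Sketch: framing one character for asynchronous transmission:
# start bit 0, the data bits, an optional even-parity bit, stop bit 1.

def frame(char, parity=None):
    data = [int(b) for b in format(ord(char), "08b")]
    bits = [0] + data                      # start bit interrupts the mark state
    if parity == "even":
        bits.append(sum(data) % 2)         # make the total count of 1s even
    bits.append(1)                         # stop bit returns the line to mark
    return bits

print(frame("A"))                 # [0, 0,1,0,0,0,0,0,1, 1]
print(frame("A", parity="even"))  # parity bit 0: "A" already has two 1 bits
```

Between frames the line simply stays at 1 (the mark state), so the receiver detects a new character by the 1-to-0 transition of the start bit.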
1. Logical Address
A logical (virtual) address generated by the CPU is split into two parts:
Page Number: Identifies which part of the program (page) the CPU is accessing.
Page Offset: Identifies the location within that page.
2. Physical Address
Physical Address is the actual location in RAM.
3. Page Table
The page table maps pages (from the logical address) to frames (in physical memory).
The CPU uses the page number to look up the frame number in the page table.
Then, it combines the frame number with the page offset to form the physical address.
Step 1: The CPU generates a logical address, split into a page number and a page offset.
Step 2: The page number is used to look up the frame number in the page table.
Step 3: The physical address is created by combining the frame number and page offset.
Example
Logical Address: 0x12345
Page Faults
If the needed page is not in RAM, the operating system loads it from the hard drive and updates the page table.
In summary, virtual memory uses pages and frames with a page table to translate logical addresses to physical
addresses, allowing programs to use more memory than is physically available.
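A minimal sketch of the translation steps above, assuming 4 KB pages; the page-table entry (frame `0x7A` for page `0x12`) is a made-up value for illustration:

```python
# Sketch of logical-to-physical address translation with a page table,
# assuming 4 KB (0x1000-byte) pages.

PAGE_SIZE = 0x1000

page_table = {0x12: 0x7A}   # illustrative: page 0x12 lives in frame 0x7A

def translate(logical):
    page   = logical // PAGE_SIZE        # step 1: split off the page number
    offset = logical %  PAGE_SIZE        #         ...and the page offset
    frame  = page_table[page]            # step 2: page-table lookup
    return frame * PAGE_SIZE + offset    # step 3: frame number + offset

print(hex(translate(0x12345)))           # 0x7a345
```

Note that the offset (`0x345`) passes through unchanged: translation only swaps the page number for a frame number.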
Cache Memory and its Levels
Cache memory is a special, very high-speed memory that acts as a buffer between the CPU and main memory
(RAM). It is designed to speed up the process of accessing data, reducing the time it takes for the CPU to fetch
instructions or data from the main memory.
Cache Memory:
Purpose: Cache memory speeds up the CPU by storing frequently accessed data and instructions, making
them available for the CPU when needed.
Speed: It is faster than RAM but slower than CPU registers. It is faster than RAM both because it uses faster (static RAM) circuitry and because it is located closer to the CPU.
Cost: Cache memory is more expensive than main memory but cheaper than CPU registers.
How it works:
The cache holds copies of data from the main memory that is frequently used by the CPU.
When the CPU needs data, it first checks if it's in the cache (this is known as a cache hit).
If the data is not in the cache (a cache miss), the CPU retrieves it from the slower main memory.
Levels:
Level 1, Registers:
Purpose: Stores data that is immediately required by the CPU for calculations.
Level 2, Cache Memory:
Purpose: Stores frequently accessed data to speed up access times for the CPU.
Speed: Faster than RAM but slower than registers. It's larger than the register file but still quite small compared to RAM.
Level 3, Main Memory (RAM):
Speed: Slower than all the levels above it but provides much larger storage capacity.
Summary:
Cache memory improves the efficiency of the CPU by providing quicker access to frequently used data and
instructions.
Registers (L1) are the fastest, but they have very limited space.
Cache memory (L2) is fast and provides a larger buffer for storing data for faster CPU access.
Main memory (L3) is larger, but slower, and holds the data currently being used.
This hierarchical structure allows for a balance of speed and capacity, ensuring that the CPU can access the most
critical data quickly, while also having a large amount of data available for processing when necessary.
To input a sequence of 9 data bytes into memory, the following steps are typically involved:
7. Repeat the Process:
For each of the remaining 8 data bytes, the process is repeated:
The CPU places the next address ( 0x1001 , 0x1002 , etc.) on the address bus.
The CPU sends the next data byte on the data bus.
Example:
If you want to store the sequence 0x01, 0x02, ..., 0x09 starting from address 0x1000 , the memory will look like this
after the process:
0x1000 0x01
0x1001 0x02
0x1002 0x03
0x1003 0x04
0x1004 0x05
0x1005 0x06
0x1006 0x07
0x1007 0x08
0x1008 0x09
Summary:
Address Bus: Sends the memory location where data will be stored.
Data Bus: Carries the data byte to be stored at that location.
This process is repeated for each byte of data, and the memory stores them sequentially.
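The bus transactions can be sketched as follows; the dictionary stands in for the memory array and the local variables for the physical buses:

```python
# Sketch of storing 9 sequential bytes: for each byte the CPU drives the
# address bus, drives the data bus, and the memory latches the value.

memory = {}

def write_byte(address, data):
    address_bus = address            # CPU places the address on the address bus
    data_bus    = data               # CPU places the byte on the data bus
    memory[address_bus] = data_bus   # memory latches the byte (write strobe)

base = 0x1000
for i, byte in enumerate(range(0x01, 0x0A)):   # the 9 bytes 0x01..0x09
    write_byte(base + i, byte)

print({hex(a): hex(d) for a, d in memory.items()})
```

After the loop, addresses 0x1000 through 0x1008 hold 0x01 through 0x09, matching the table above.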
Paging is a memory management technique used in computer systems to efficiently utilize the physical memory
(RAM) and manage virtual memory. It helps to avoid issues like fragmentation and makes memory access more
efficient.
What is Paging?
Paging is a process of dividing both virtual memory and physical memory into fixed-size blocks.
Virtual memory: The memory space that the operating system creates to give the illusion of a larger
amount of memory than is physically available.
Physical memory: The actual RAM in the system where data is stored.
Pages: In virtual memory, data is divided into small fixed-size blocks called pages. The size of a page is usually
a power of 2 (like 512 bytes, 1024 bytes, etc.).
Frames: In physical memory (RAM), the memory is also divided into blocks of the same size as the pages.
These are called frames.
How Paging Works:
1. Page Table: The operating system keeps a page table that records which frame of physical memory holds each page.
2. Page Fault: If a program tries to access a page that is not currently in physical memory (RAM), a page fault
occurs. The operating system then loads the required page from secondary storage (hard disk) into an
available frame in RAM.
3. Efficient Memory Use: Paging helps in efficient memory utilization by allowing processes to use memory in
fixed-sized chunks, preventing fragmentation that can occur when memory is allocated and deallocated in
different sizes.
Key Terms:
Page: A fixed-size block of virtual memory.
Frame: A fixed-size block of physical memory (RAM), the same size as a page.
Page Table: The mapping that records which frame holds each page.
Example:
Imagine you have 4 pages of virtual memory, and each page is 1024 bytes (1KB). Your physical memory has frames
of the same size (1024 bytes).
Page 0 → Frame 1
Page 1 → Frame 0
Page 2 → Frame 3
Page 3 → Frame 2
The page table will keep track of this mapping so that when a program requests a page, it knows which physical
frame it is located in.
Advantages of Paging:
No External Fragmentation: Since both the virtual memory and physical memory are divided into fixed-size
blocks, paging avoids the problem of external fragmentation.
Efficient Memory Management: Pages can be loaded or swapped into physical memory as needed, allowing
more efficient use of RAM.
Virtual Memory: Paging allows the use of virtual memory, making it possible for programs to run with more
memory than is physically available by swapping pages in and out of the disk.
In summary, paging is a method to break memory into fixed-sized blocks (pages and frames) to manage memory
more efficiently, allowing better utilization and easier handling of larger amounts of memory than the physical RAM
size.
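Using the example mapping above (and assuming 1 KB pages), here is a small sketch of a lookup plus page-fault handling; the free-frame list and the fault policy are simplified assumptions:

```python
# Sketch of paging with the example page table above (1 KB pages);
# a missing entry triggers a simulated page fault that loads the page
# into a free frame and updates the table.

PAGE_SIZE = 1024
page_table = {0: 1, 1: 0, 2: 3, 3: 2}   # page -> frame, as in the example
free_frames = [4, 5]                    # illustrative pool of spare frames

def access(logical):
    page, offset = divmod(logical, PAGE_SIZE)
    if page not in page_table:           # page fault: page not in RAM
        frame = free_frames.pop(0)       # OS picks a free frame,
        page_table[page] = frame         # "loads" the page, updates the table
    return page_table[page] * PAGE_SIZE + offset

print(access(100))       # page 0 -> frame 1: 1*1024 + 100 = 1124
print(access(4096 + 7))  # page 4 faults, is loaded into frame 4
```

The second access shows the page-fault path: the table gains an entry for page 4, so repeating the access hits in the table with no further fault.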
Here are the simplified points regarding Synchronous Transmission:
1. Clock-based Data Transfer: Synchronous transmission uses a continuous data stream accompanied by timing
signals (clock) to keep the sender and receiver synchronized.
2. Data Blocks: Data is sent in blocks (called frames or packets) at fixed time intervals.
3. Used for Large Data: It's ideal for transferring large amounts of data quickly from one place to another.
4. Synchronization: The sending and receiving devices synchronize their transmission speeds using clock
signals.
5. Continuous Data Stream: A continuous flow of data is sent without gaps, and the connection is synchronized
with special characters.
6. No Start/Stop Bits: Unlike asynchronous transmission, there are no start and stop bits, which makes data
transfer faster.
7. Special Sync Characters: Sync characters are used to ensure the connection is synchronized before data
transmission begins.
8. Clocking: All devices on the connection must have the same clock setting for proper synchronization.
9. Faster but Prone to Errors: Due to no start/stop bits, synchronous transmission is quicker, but synchronization
errors can lead to data corruption (losing bits).
10. Error Handling: Errors can be managed with check digits/checksums, and protocols such as Ethernet, SONET, and Token
Ring use synchronous transmission.
In short, Synchronous Transmission is faster, uses clock synchronization for sending data in blocks, and is best
for large data transfers but may face errors if clocks get out of sync.
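A minimal sketch of synchronous framing: sync characters (ASCII SYN, 0x16) precede a continuous block of data bytes, with no per-character start/stop bits. The two-SYN preamble and the 4-byte block size are illustrative choices, not from any particular protocol:

```python
# Sketch: building synchronous frames. Each frame is a preamble of sync
# characters followed by an uninterrupted block of data bytes.

SYN = 0x16   # ASCII SYN (synchronous idle) character

def make_frames(data, block_size=4):
    frames = []
    for i in range(0, len(data), block_size):
        block = list(data[i:i + block_size])
        frames.append([SYN, SYN] + block)   # sync chars, then the data block
    return frames

frames = make_frames(b"ABCDEFGH")
print(frames)   # two frames, each led by 0x16 0x16
```

Compared with the asynchronous frame earlier, the per-character overhead is gone: synchronization happens once per block, which is why synchronous transmission is faster for large transfers.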
1. Read-Only: ROM is non-volatile memory used to store data that is permanently written (typically during
manufacturing). Unlike RAM (Random Access Memory), ROM is designed for reading data, not writing or
altering it frequently.
2. Access Type: RAM is called "random access" because any location can be read or written in any order, with equal
access time. ROM is technically random-access for reads as well; the real contrast is that ROM stores firmware or
system-level instructions (such as boot code) that are not meant to be modified, so the distinguishing features are
writability and purpose rather than access order.
3. Volatility: RAM is volatile, meaning it loses data when the power is turned off, whereas ROM retains data even
when power is lost. This makes ROM suitable for storing permanent or semi-permanent data, like a computer's
BIOS or other embedded system programs.
So, the key differences lie in data access behavior, modifiability, and purpose, making ROM fundamentally
different from random access memory.