
COMP ARCH.

Program-Controlled I/O

1. Move #LINE, R0

● Description: This initializes a memory pointer (R0) to the starting address of the input
line buffer (#LINE).

2. WAITK - Input Handling Loop

● TestBit #0, STATUS
○ Description: This checks status bit #0 in the STATUS register to see whether the input device has a character ready to be read (the SIN flag).
● Branch=0 WAITK
○ Description: If the input device is not ready (bit #0 is 0), branch to WAITK and
keep waiting.
● Move DATAIN, R1
○ Description: Once the input device is ready, read a character from DATAIN into
register R1.
3. WAITD - Output Handling Loop

● TestBit #1, STATUS
○ Description: Check status bit #1 in the STATUS register to see if the output device is ready to receive data (the SOUT flag).
● Branch=0 WAITD
○ Description: If the output device is not ready, branch to WAITD and wait.
● Move R1, DATAOUT
○ Description: Once the output device is ready, send the character in R1 to
DATAOUT for display.
● Move R1, (R0)+
○ Description: Store the character in the memory location pointed to by R0, then
increment the pointer (R0).

4. Compare #0D, R1

● Description: Compare the character in R1 with the Carriage Return (CR) ASCII value
(#0D).
● Branch≠0 WAITK
○ Description: If the character is not a Carriage Return, branch back to WAITK to
keep reading and processing characters.

5. Move #0A, DATAOUT

● Description: If the character is a Carriage Return, send a Line Feed (ASCII value #0A)
to the output device.

6. Call PROCESS

● Description: After the input line is complete (ending with CR), the program calls a
subroutine named PROCESS to handle the input line further (e.g., process commands or
data).

Summary of Workflow:

● Input Handling (WAITK): The program waits for an input character, reads it, and stores
it in memory.
● Output Handling (WAITD): It waits for the display to be ready, echoes the character,
and moves to the next memory location.
● Carriage Return Check: When the program encounters a Carriage Return (Enter key),
it sends a Line Feed (newline) and calls a subroutine to process the input.
This assembly routine is a basic loop for controlled I/O, used in early systems where
input/output devices required handshaking for each character transmitted or received.
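The loop above can be sketched in Python. MockDevice and its sin/sout/datain/dataout methods are invented stand-ins for the STATUS, DATAIN, and DATAOUT registers; the structure of the loop mirrors the assembly routine:

```python
CR, LF = 0x0D, 0x0A  # ASCII carriage return and line feed

class MockDevice:
    """Hypothetical stand-in for the STATUS/DATAIN/DATAOUT registers."""
    def __init__(self, keystrokes):
        self.keys = list(keystrokes)   # pending input characters
        self.echoed = []               # characters sent to the display
    def sin(self):  return bool(self.keys)   # STATUS bit 0: input ready
    def sout(self): return True              # STATUS bit 1: display always ready
    def datain(self):  return self.keys.pop(0)
    def dataout(self, ch): self.echoed.append(ch)

def read_line(dev):
    """Poll, read, echo, and store characters until a carriage return."""
    line = []                          # plays the role of the LINE buffer (R0)
    while True:
        while not dev.sin():           # WAITK: spin until SIN is set
            pass
        ch = dev.datain()              # Move DATAIN, R1
        while not dev.sout():          # WAITD: spin until SOUT is set
            pass
        dev.dataout(ch)                # echo: Move R1, DATAOUT
        line.append(ch)                # store: Move R1, (R0)+
        if ch == CR:                   # Compare #0D, R1
            dev.dataout(LF)            # Move #0A, DATAOUT
            return line                # Call PROCESS would go here
```

For example, `read_line(MockDevice(b"HI\r"))` reads and echoes the two characters plus the carriage return, then appends a line feed to the output.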

Interrupt I/O: Detailed Explanation

Interrupt I/O is an efficient method of managing input/output (I/O) operations between the
processor (CPU) and peripheral devices (such as keyboards, printers, disk drives). Instead of
the processor continuously checking the status of devices (as in polling I/O), interrupts allow
the devices to notify the processor when they are ready to send or receive data.

Here’s a detailed breakdown of the Interrupt I/O mechanism:

1. What is an Interrupt?

● An interrupt is a signal sent by a peripheral device (or sometimes an internal event) to the processor, indicating that the device needs attention (either to send or receive data).
● This signal is sent via a designated control line on the bus called the Interrupt Request (IRQ) line.

2. Interrupt Request (IRQ) Line

● IRQ is a special line in the system’s bus structure that peripheral devices use to notify
the processor of an interrupt. Devices pull this line high (or active) when they need the
CPU’s attention.
● This signal interrupts the normal flow of the CPU’s instructions and prompts the CPU to
service the device.

3. Interrupt Service Routine (ISR)

● Once the processor detects an interrupt, it stops executing the current program and
jumps to a specific piece of code designed to handle the interrupt, called the Interrupt
Service Routine (ISR).
● The ISR handles the specific needs of the device that triggered the interrupt. For
example, it may read data from the device or send data to the device.
● After executing the ISR, the processor returns to its previously running program.

4. Interrupt Acknowledge Signal

● Upon receiving the interrupt, the processor sends an Interrupt Acknowledge Signal
back to the device. This informs the device that the interrupt has been recognized, and it
can stop sending the interrupt request.
● This ensures that the processor only services the interrupt once, preventing
unnecessary repeated signals from the device.

5. Saving and Restoring Context

● Context refers to the state of the processor at the time the interrupt occurs, including the
contents of all registers, the program counter, flags, etc.
● When an interrupt occurs, the processor needs to save the current context (the state
of the currently executing program) so it can resume the program once the interrupt has
been handled.
● After the ISR is completed, the processor restores the saved context and continues
executing from where it left off.
● This task is done either automatically by the processor or manually via code in the ISR,
depending on the system architecture.

6. Interrupt Latency

● Interrupt latency is the time delay between when the device sends an Interrupt
Request (IRQ) and when the processor starts executing the corresponding Interrupt
Service Routine (ISR).
● This delay depends on factors such as:
○ The current priority level of the processor (some systems allow for interrupt
priorities).
○ Whether the processor is currently executing another ISR or is in a non-interruptible state (such as a critical section).
○ The time taken to save the current program’s context.
● Minimizing interrupt latency is crucial in real-time systems, where time-critical tasks
depend on quick interrupt servicing.

7. Types of Interrupts

Some computer systems use two types of interrupts, which handle the context saving/restoring
differently:

● Type 1: Saves all register contents
○ In this approach, all the processor’s register contents are saved automatically when an interrupt occurs. This is a more comprehensive, but potentially slower, method as it saves the entire context.
○ This method ensures that the system can return to exactly the same state as
before the interrupt, making it useful for systems where exact restoration is
necessary.
● Type 2: Doesn’t save all register contents
○ In this approach, only a minimal number of registers (or no registers at all) are
saved. This method is faster, but more manual work may be required in the ISR
to save and restore specific registers if needed.
○ This method is often used in simpler or time-critical systems where saving every
register isn’t necessary or would introduce too much latency.

8. How Interrupt I/O Works (Step-by-Step)

Here’s a summary of how the Interrupt I/O process typically works:

1. Interrupt Request: A peripheral device sends an interrupt signal via the IRQ line when it
needs to communicate with the CPU (e.g., when data is ready to be processed).
2. Processor Detects Interrupt: The processor recognizes the interrupt, either
immediately or after completing the current instruction (depending on the system’s
interrupt priority and handling mechanisms).
3. Processor Sends Acknowledge: The processor sends an acknowledgment signal back
to the device to indicate that the interrupt has been recognized.
4. Saving Context: The processor saves the current program’s context (register contents,
program counter, flags, etc.) so it can resume execution after the ISR.
5. Interrupt Service Routine (ISR): The processor executes the ISR specific to the
interrupting device. The ISR handles the device’s request (e.g., reading or writing data to
the device).
6. Restoring Context: After the ISR finishes, the processor restores the saved context to
continue executing the previously interrupted program.
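As a toy model (all names here are hypothetical; real processors do the context save in hardware or microcode), the six steps reduce to acknowledge, save context, run the ISR, and restore:

```python
import copy

class CPU:
    """Toy processor model illustrating the interrupt I/O sequence."""
    def __init__(self):
        self.registers = {"R0": 0, "R1": 0}
        self.pc = 0                    # program counter
        self.isr_table = {}            # device id -> ISR function
        self._saved = None             # saved context

    def attach(self, device_id, isr):
        self.isr_table[device_id] = isr

    def interrupt(self, device_id):
        """IRQ raised: acknowledge, save context, run ISR, restore context."""
        if device_id not in self.isr_table:
            return False                           # no acknowledge
        self._saved = (copy.deepcopy(self.registers), self.pc)  # save context
        self.isr_table[device_id](self)            # execute the ISR
        self.registers, self.pc = self._saved      # restore context
        return True
```

Note that even though the ISR may freely use the registers, the interrupted program sees exactly the state it had before the interrupt, which is the whole point of the save/restore steps.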

Advantages of Interrupt I/O

1. Efficient Use of CPU Time: The CPU can perform other tasks while waiting for I/O
operations, rather than constantly checking the device status.
2. Faster Response: Interrupts ensure a quick response to I/O requests, which is
especially important in real-time systems.
3. Reduced Latency: Interrupt I/O can minimize the delay between when a device is ready
and when it is serviced.

Disadvantages of Interrupt I/O

1. Overhead: The process of saving and restoring context can introduce overhead.
2. Interrupt Overload: If too many interrupts occur, the processor may spend too much
time handling them, reducing overall system performance (also known as "interrupt
storm").
3. Complexity: Programming with interrupts can be more complex than using polling I/O,
as it requires careful handling of shared data and registers.
Polling Explained in Detail:

Polling is a method used by the CPU to manage and detect interrupt requests from multiple devices. Instead of each device sending an interrupt signal directly to the CPU (as in interrupt-driven systems), the CPU continuously checks (or "polls") the status of each device to determine whether it needs to be serviced. Here's a detailed breakdown of each point:

1. Information in the Status Register:


o Each device connected to the system has a status register that holds
information about its current state, including whether it needs service.
o The status register has specific bits that indicate whether the device has raised
an interrupt request. One of these bits is the IRQ bit (Interrupt Request bit),
which is set to 1 when the device requires the CPU's attention.

2. Interrupt Request through IRQ Bit:


o When a device raises an interrupt request, it sets its IRQ bit in the status register
to 1.
o For example:
▪ The KIRQ bit is the Interrupt Request bit for the keyboard.
▪ The DIRQ bit is the Interrupt Request bit for the display.
o These bits signal that the corresponding devices are requesting the CPU’s
attention for some operation (e.g., reading keyboard input or refreshing the
display).

3. Polling Process:
o Polling is the process where the CPU checks the status of all devices connected
to the system to identify which device is requesting service.
o The CPU does this by reading the status register of each device. If it finds the
IRQ bit set to 1, the CPU knows that the device needs service and initiates the
corresponding Interrupt Service Routine (ISR).
o The polling process continues until the CPU checks all devices in sequence.
Once the CPU has identified a device that needs attention, it services that
device.

4. Servicing the First Device:


o The CPU checks devices one by one. The first device it encounters with its IRQ
bit set to 1 is the one that will be serviced first.
o After the CPU services this device (handles the interrupt), it can return to the
polling loop and check for other devices that may be requesting service.
o This method ensures that interrupt requests are handled sequentially.

5. Advantages of Polling:
o Simple & Easy to Implement: Polling is a straightforward technique. It does not
require complex hardware or software mechanisms to handle interrupts. Since
the CPU directly reads each device’s status register, the system doesn't need
additional components like dedicated interrupt controllers.
o No Need for Special Hardware: Unlike interrupt-driven systems that require
hardware to detect and handle multiple interrupt lines, polling relies entirely on
the CPU’s ability to check each device manually, making it a simpler approach.

6. Disadvantages of Polling:
o Inefficient and Time-Consuming: The major drawback of polling is that it can
waste a lot of CPU time. The CPU has to check the IRQ bit of every device, even
if most of them are not requesting service.
o If many devices are connected but only a few need service at any given time, the
CPU spends unnecessary time polling all devices, including those that don't
require attention. This can slow down the system and reduce overall efficiency.
o Latency: Because the CPU checks devices in a sequential manner, there may
be some delay (latency) before it identifies and services the device that is
requesting attention, especially if it’s further down the list of devices.

Summary:

In polling, the CPU continuously checks the IRQ bits in the status registers of connected
devices to identify which one needs servicing. The first device with an IRQ bit set to 1 is
serviced, and the CPU continues polling other devices. Polling is simple to implement but can be
inefficient because the CPU spends time checking devices that may not need service, leading to
increased processing time and system latency.
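The polling scan can be sketched as follows, modeling each device's status register as an integer whose bit 0 is the IRQ bit. The dict-based device list is an illustration, not a hardware interface:

```python
IRQ_BIT = 0x01   # bit 0 of each status register is the interrupt-request bit

def poll(status_registers, isr_table):
    """Scan devices in order; service the first one whose IRQ bit is set.

    status_registers: dict mapping device name -> status word
    isr_table:        dict mapping device name -> ISR function
    Returns the name of the device serviced, or None if no device needs service.
    """
    for name, status in status_registers.items():
        if status & IRQ_BIT:                     # IRQ bit set: wants service
            isr_table[name]()                    # run its service routine
            status_registers[name] &= ~IRQ_BIT   # clear the request
            return name
    return None                                  # nobody needed attention
```

The order of the dict determines the polling priority: a device earlier in the scan is always serviced first, which is exactly the sequential-servicing behavior (and the latency drawback) described above.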

Vectored Interrupts Explained in Detail:

Vectored interrupts are a more efficient way of handling interrupt requests than non-vectored or polling methods. In this system, devices not only signal that they need service, but also send specific information to the processor so it can quickly identify the correct Interrupt Service Routine (ISR) to execute.

Here's a breakdown of the points:


1. Device Identifies Itself with a Special Code:
o In a vectored interrupt system, when a device raises an interrupt request, it
doesn’t just signal that it needs service. Instead, it also sends a special code
over the bus to the processor.
o This code uniquely identifies the device and tells the processor which ISR needs
to be executed. This allows the processor to distinguish between multiple devices
even if they share a common interrupt-request line.
o This method is more efficient than polling, where the processor must check each
device individually.

2. The Code is the Starting Address of the ISR:


o The special code sent by the device contains the starting address of the ISR
for that particular device.
o Each device has a dedicated ISR, which handles its specific interrupt-related
tasks (like reading data, sending a signal, etc.).
o This ISR must always start at the same address in memory so that the device
can quickly inform the processor where to begin executing the service routine.

3. Interrupt Vector:
o The interrupt vector refers to the memory address that points to the starting
location of the ISR.
o When a device sends an interrupt vector (a special code), the processor uses it
to determine where to jump in memory to start the appropriate ISR.
o Essentially, the interrupt vector is a pointer to the ISR. Each device has a unique
interrupt vector corresponding to its ISR.

4. Processor Loads Interrupt Vector into PC (Program Counter):


o Once the processor receives the interrupt vector, it loads this value into its
Program Counter (PC).
o The PC holds the address of the next instruction to execute. By loading the
interrupt vector into the PC, the processor effectively jumps to the ISR for the
interrupting device and begins executing the service routine.

5. Interrupting Device Must Wait Until Processor is Ready:


o A device cannot simply send its interrupt vector immediately after raising an
interrupt request. It has to wait for the processor to be ready to handle the
interrupt.
o The processor needs to finish whatever it is currently doing and signal that it is
prepared to receive the interrupt vector. This is necessary because the processor
could be busy with another task when the interrupt is raised.

6. Processor Activates the INTA Line:


o The Interrupt Acknowledge (INTA) line is a signal sent by the processor to
indicate that it is ready to receive the interrupt vector from the device.
o When the processor has completed any ongoing tasks and is ready to handle the
interrupt, it activates the INTA line. This informs the interrupting device that it can
now send the interrupt vector (the address of its ISR).

7. I/O Device Sends Interrupt Vector and Turns Off INTR Signal:
o Once the INTA signal is activated by the processor, the I/O device responds by
sending its interrupt vector code (the ISR address) over the bus.
o At the same time, the device turns off the Interrupt Request (INTR) signal,
indicating that the interrupt request has been acknowledged and no further action
is needed until the ISR is executed.

Key Advantages of Vectored Interrupts:

● Efficient Interrupt Handling: The processor immediately knows which ISR to execute, avoiding the need to check all devices (as in polling).
● Direct Identification: Devices send their interrupt vectors directly, so the processor doesn’t have to guess or search for the interrupting device.
● Faster Response: Since each device has a dedicated ISR address, the processor can quickly jump to the appropriate service routine, reducing latency.

Example Workflow for Vectored Interrupts:

1. A device (e.g., a keyboard) raises an interrupt request by signaling over the interrupt-
request line.
2. The processor finishes its current operation and sends an interrupt acknowledge (INTA)
signal.
3. The keyboard device responds by sending its interrupt vector (which contains the
starting address of the keyboard’s ISR) to the processor.
4. The processor loads the interrupt vector into its program counter (PC), jumps to the
keyboard’s ISR, and starts executing the necessary routine (e.g., reading a key press).
5. The interrupt request signal (INTR) is turned off, and the processor completes the ISR
before returning to its main tasks.
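This handshake can be modeled as a table lookup. The vector codes (0x40, 0x44) and ISR names below are invented for illustration; the point is that the device's code directly selects the routine, with no searching:

```python
# Hypothetical interrupt-vector table: codes mapping to ISR entry points.
def isr_keyboard():  return "read key"
def isr_disk():      return "transfer block"

VECTOR_TABLE = {0x40: isr_keyboard, 0x44: isr_disk}

class Device:
    def __init__(self, vector_code):
        self.vector_code = vector_code   # code this device puts on the bus
        self.intr = False                # state of its INTR line
    def raise_interrupt(self):
        self.intr = True

def handle_interrupt(device):
    """Processor side: acknowledge (INTA), receive the vector, jump to the ISR."""
    if not device.intr:
        return None
    # INTA: processor signals it is ready for the vector; the device
    # responds with its code and drops its INTR signal.
    code = device.vector_code
    device.intr = False
    isr = VECTOR_TABLE[code]   # "load the interrupt vector into PC"
    return isr()
```

In hardware the "table" is a region of memory holding ISR starting addresses and the "jump" is loading the vector into the PC, but the dispatch logic is the same.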

Internal Organization of Memory Chips in Semiconductor RAM

Semiconductor Random Access Memory (RAM) chips are internally organized in a way that
allows efficient storage and retrieval of information. The typical structure is a matrix of memory
cells, with specific components like word lines, bit lines, and sense/write circuits that facilitate
the reading and writing processes. Let's break down the key components and operations in detail:
1. Memory Cells:

● Memory cells are the fundamental storage units in a RAM chip, where each cell stores one bit of information (either 0 or 1).
● These cells are typically organized in a two-dimensional grid (or array) structure where:
○ Rows represent memory words.
○ Columns represent individual bits within those words.

2. Memory Array:

● The memory array is a matrix of memory cells arranged in rows and columns.
○ Each row corresponds to a word (a group of bits).
○ Each column corresponds to a single bit position across multiple words.

3. Word Lines:

● A word line is a common electrical line that connects all memory cells in a row.
● When a particular row (or word) is selected for either a read or write operation, its corresponding word line is activated.
● Activating the word line enables the memory cells in that row to interact with the sense/write circuits, making them ready for reading or writing.

4. Bit Lines:

● Bit lines are wires that connect all the memory cells in a particular column.
○ Each memory cell in a column is connected to two bit lines. These lines carry information during both read and write operations.
● The bit lines are connected to Sense/Write circuits.

5. Sense/Write Circuits:

● The Sense/Write circuits are critical components that control the actual read and write processes. They serve two main functions:
○ Sense circuit (read operation): When a row is selected and the word line is activated, the sense circuit reads the data stored in the memory cells of that row via the bit lines. It senses the voltage difference between the bit lines and determines whether a 0 or 1 is stored in the cell.
○ Write circuit (write operation): During a write operation, the sense/write circuit takes the input data and writes it into the appropriate memory cells along the selected word line by manipulating the voltage levels on the bit lines.
6. Data Input/Output Lines:

● The Sense/Write circuits are connected to the external data input/output lines of the memory chip.
○ Data input lines carry the data to be written into the memory.
○ Data output lines carry the data read from the memory to the outside world (e.g., to a CPU or other device).

The interaction between these lines and the sense/write circuits determines the flow of data in
and out of the memory.

7. Write Operation:

● During a write operation, the process happens as follows:
1. Row Selection: The word line for the specific row (memory word) is activated.
2. Column Activation: The bit lines for the specific columns (bits) are activated, and the data to be written is provided to the Sense/Write circuits.
3. Data Storage: The Sense/Write circuits receive the input data and adjust the voltage levels on the bit lines to store the corresponding 0 or 1 in the memory cells of the selected row.

8. Read Operation:

● During a read operation, the following steps occur:
1. Row Selection: The word line for the specific row is activated, allowing the memory cells in that row to be connected to the bit lines.
2. Bit Line Activation: The sense circuit reads the voltage difference between the two bit lines connected to each column and determines whether the value in the memory cell is a 0 or 1.
3. Data Output: The Sense/Write circuits pass the read values to the data output lines, which then transfer the data to the external system.

9. Example Layout:

Imagine a 4x4 memory array where each row is a 4-bit word:

● Row (Word Line 0): [cell_00, cell_01, cell_02, cell_03]
● Row (Word Line 1): [cell_10, cell_11, cell_12, cell_13]
● Row (Word Line 2): [cell_20, cell_21, cell_22, cell_23]
● Row (Word Line 3): [cell_30, cell_31, cell_32, cell_33]

Here:

● The word line activates all cells in a row (e.g., Word Line 1 activates cell_10, cell_11, cell_12, cell_13).
● Each bit line connects the cells vertically across the rows (e.g., Bit Line 0 connects cell_00, cell_10, cell_20, cell_30).

10. Sense/Write Circuit Example:

● If the user writes data "1101" to the second row, the sense/write circuits will:
○ Activate Word Line 1.
○ Set the bit lines corresponding to the columns based on the data:
▪ Bit Line 0 will write 1 in cell_10.
▪ Bit Line 1 will write 1 in cell_11.
▪ Bit Line 2 will write 0 in cell_12.
▪ Bit Line 3 will write 1 in cell_13.

When a read operation is performed, the sense circuits will detect these values via the bit lines
and output "1101."
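The 4x4 example can be modeled directly in code. This toy class only mimics the word-line/bit-line roles described above, not the actual cell electronics:

```python
class MemoryArray:
    """Toy 4x4 cell array: rows are words, columns are bit positions."""
    def __init__(self, rows=4, cols=4):
        self.cells = [[0] * cols for _ in range(rows)]

    def write_word(self, row, bits):
        """Activate the word line for `row`, then drive each bit line.

        The write circuit's job of setting cell voltages is modeled as
        a plain assignment per column.
        """
        for col, bit in enumerate(bits):
            self.cells[row][col] = bit

    def read_word(self, row):
        """Activate the word line and sense all bit lines of that row."""
        return list(self.cells[row])
```

Writing "1101" to Word Line 1 and reading it back reproduces the sense/write example from the text.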

DRAM Chip Organization Explained in Detail

Dynamic Random Access Memory (DRAM) is one of the most commonly used memory types in
computer systems. DRAM chips are organized in a way that allows efficient access to large
amounts of data. Let’s break down the structure and functioning of DRAM, particularly focusing
on row/column selection, RAS/CAS signals, and how data is read and written.

1. 4K x 4K Array Organization:

● The DRAM chip described is organized as a 4K x 4K array.
○ This means that it has 4096 rows (4K rows) and 4096 columns (4K columns).
● Each row contains 4096 bits (cells), but these are divided into 512 groups of 8 bits (512 bytes).
○ So, each row can store a total of 512 bytes of data.
○ This division into groups makes data access more efficient, allowing for byte-level addressing within rows.
2. Addressing the Rows and Columns:

To access a particular byte in the memory, two key pieces of information are needed:

● Row address: Identifies the specific row where the data is located.
● Column address: Identifies the specific byte (group of 8 bits) within the selected row.

Given the size of the memory:

● 12 bits are used to select a row (because 2^12 = 4096, enough to address all 4096 rows).
● 9 bits are used to select one of the 512 groups of 8 bits in a row (because 2^9 = 512).

Thus, the total number of address bits required is 21 bits (12 for the row and 9 for the column).
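The address arithmetic is easy to verify in code; this helper simply mirrors the 12-bit/9-bit split described above (the function name is an illustration, not a standard API):

```python
ROW_BITS, COL_BITS = 12, 9   # 2**12 = 4096 rows, 2**9 = 512 byte groups per row

def split_address(addr):
    """Split a 21-bit DRAM address into (row, column) for multiplexing."""
    assert 0 <= addr < 1 << (ROW_BITS + COL_BITS)   # must fit in 21 bits
    row = addr >> COL_BITS                # high 12 bits, latched on RAS
    col = addr & ((1 << COL_BITS) - 1)    # low 9 bits, latched on CAS
    return row, col
```

For example, the highest address, 2^21 - 1, maps to row 4095 and column 511, the last byte group of the last row.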

3. Reducing Address Bits by Multiplexing (Row and Column Selection):

Instead of providing all 21 bits at once, DRAM uses address multiplexing to reduce the number
of address lines (pins). This is done by splitting the address into two parts:

1. Row Address: The first 12 bits of the address select the row.
2. Column Address: The next 9 bits select the column (the group of 8 bits within the row).

The process happens in two phases:

● Phase 1 (Row Selection):
○ The Row Address is provided first.
○ The Row Address Strobe (RAS) signal is then triggered, which latches (captures and holds) the row address. This selects all the 4096 cells in the specified row.
● Phase 2 (Column Selection):
○ Next, the Column Address is provided.
○ The Column Address Strobe (CAS) signal is triggered, latching the column address. This selects the specific group of 8 bits (a byte) in the row.

Thus, the multiplexing of row and column addresses reduces the number of address pins,
making it more efficient.
4. Row and Column Access:

● When a row is selected using the row address, the entire row of 4096 bits is read by the sense amplifiers (or sense circuits).
● To manage the output efficiently:
○ The output of each sense amplifier is connected to a latch (a small storage element that holds the sensed data temporarily).
○ These latches hold the data from the selected row, allowing the memory controller to subsequently select individual bytes or blocks of data from the row.

By using this structure, you only need to apply the CAS signal (without reselecting the row) to
select different bytes within the same row. This feature enables fast data transfers, especially
when reading sequential bytes from the same row.

5. Column Access Strobe (CAS) and Fast Page Mode:

● The CAS signal is used to select different bytes within the row.
○ Once a row is selected with RAS, the memory controller can issue a sequence of CAS signals to select consecutive columns (bytes) within that row.
○ This means that multiple bytes from the same row can be accessed quickly, without having to reselect the row each time.

This feature, called Fast Page Mode, allows for fast transfers of blocks of data. Instead of
randomly accessing different rows (which is slower), DRAM can quickly transfer a sequence of
bytes from the same row. This is especially useful when accessing blocks of data (such as in
cache line fetches or reading sequential bytes in an application).
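Fast page mode can be sketched as a row buffer: one RAS latches the whole row into the sense-amplifier latches, after which each CAS is just an index into the buffer. This is a simplified functional model, not a timing-accurate one:

```python
class FastPageDram:
    """Simplified DRAM: RAS latches a full row; CAS picks bytes from it."""
    def __init__(self, rows=4096, bytes_per_row=512):
        self.array = [[0] * bytes_per_row for _ in range(rows)]
        self.row_buffer = None    # models the sense-amplifier latches
        self.ras_count = 0        # how many times a row had to be opened

    def ras(self, row):
        """Row Address Strobe: sense the whole row into the latches."""
        self.row_buffer = self.array[row]
        self.ras_count += 1

    def cas(self, col):
        """Column Address Strobe: select one byte from the latched row."""
        return self.row_buffer[col]

    def read_burst(self, row, start, n):
        """Fast page mode: one RAS, then n consecutive CAS accesses."""
        self.ras(row)
        return [self.cas(start + i) for i in range(n)]
```

The payoff shows in `read_burst`: four sequential bytes cost one row activation instead of four, which is exactly why block transfers from a single row are fast.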

6. Asynchronous Control and Memory Controller:

● DRAM chips operate asynchronously, meaning the timing of operations (read/write) is not directly tied to the system clock.
○ Instead, a memory controller is used to generate and control the RAS, CAS, and other signals that govern the timing and operation of the memory.
○ The memory controller ensures that the correct timing is maintained between row and column selection, read/write operations, and data transfers.

The memory controller is critical for coordinating the communication between the CPU (or other
system components) and the DRAM.
7. Fast Data Transfers:

● Using RAS and CAS, a block of data can be transferred at a much faster rate than individual random accesses.
● When accessing data in bursts (e.g., sequential access within a row), the fast page mode feature allows the memory to transfer multiple bytes from the same row without reselecting it.
○ This capability is essential in applications like graphics processing, streaming data, or any operation that requires high-speed data transfers.

Summary of Operations:

1. Memory Array: Organized as rows and columns. Each row can store 512 bytes, and
there are 4096 rows.
2. Addressing: Requires 21 bits (12 for the row, 9 for the column) but uses multiplexing
to reduce the number of address lines.
3. Row Access: The row address is latched with the RAS signal, selecting an entire row.
4. Column Access: The column address is latched with the CAS signal, selecting
specific bytes within the row.
5. Data Transfer: Consecutive bytes in a row can be transferred quickly using fast page
mode, reducing overhead and improving performance.
6. Memory Controller: Provides the necessary control signals (RAS, CAS) and ensures
proper timing for asynchronous operations.

By organizing DRAM in this way, large blocks of data can be accessed efficiently, with faster
data transfer rates for sequential reads and writes, making DRAM suitable for high-performance
applications.
