successive values differ by only one bit, and the second half of the sequence is
created by a mirror image (or reflection) of the first half. This property ensures that
transitioning between consecutive values minimizes errors in digital systems.
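The reflected Gray code can be computed directly from the binary index with the standard formula g = i XOR (i >> 1). A minimal Python sketch:

```python
def gray(i):
    # Reflected Gray code of index i: adjacent codes differ in exactly one bit.
    return i ^ (i >> 1)

# 3-bit Gray sequence: each value differs from its neighbor by one bit,
# and the second half is a mirror image of the first half.
codes = [format(gray(i), "03b") for i in range(8)]
print(codes)  # ['000', '001', '011', '010', '110', '111', '101', '100']
```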
Don’t Care:
Don’t care terms are the input conditions for which a system’s output can
be either 0 or 1 without affecting the overall functionality of the circuit.
These terms are typically represented by an X in truth tables or Karnaugh
maps (K-maps).
OR Array: Programmable (can be customized)
Encoders convert 2^N lines of input into an N-bit code, and decoders decode the
N bits back into 2^N lines.
1. Encoders :
An encoder is a combinational circuit that converts binary information in the form
of 2^N input lines into N output lines, which represent an N-bit code for the input.
For simple encoders, it is assumed that only one input line is active at a time. As an
example, consider the octal-to-binary encoder. As shown in the following figure,
an octal-to-binary encoder takes 8 input lines and generates 3 output lines.
One limitation of this encoder is that only one input can be active at any given
time. If more than one input is active, the output is undefined. For example, if D6
and D3 are both active, the output would be 111, which is the output for D7. To
overcome this, we use priority encoders.
Truth Table –
D7 D6 D5 D4 D3 D2 D1 D0 X Y Z
0 0 0 0 0 0 0 1 0 0 0
0 0 0 0 0 0 1 0 0 0 1
0 0 0 0 0 1 0 0 0 1 0
0 0 0 0 1 0 0 0 0 1 1
0 0 0 1 0 0 0 0 1 0 0
0 0 1 0 0 0 0 0 1 0 1
0 1 0 0 0 0 0 0 1 1 0
1 0 0 0 0 0 0 0 1 1 1
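The truth table above can be captured in a short Python sketch (the function name is illustrative): the output bits X, Y, Z are simply the binary index of the single active input line.

```python
def octal_to_binary_encoder(inputs):
    # inputs: 8-element list [D0, ..., D7], exactly one of which is 1.
    # Returns (X, Y, Z): the binary index of the active line.
    assert sum(inputs) == 1, "simple encoder: exactly one input may be active"
    n = inputs.index(1)
    return (n >> 2) & 1, (n >> 1) & 1, n & 1

print(octal_to_binary_encoder([0, 0, 0, 1, 0, 0, 0, 0]))  # D3 active -> (0, 1, 1)
```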
Priority Encoder –
A priority encoder is an encoder circuit in which the inputs are assigned priorities.
When more than one input is active at the same time, the input with the highest
priority takes precedence, and the output corresponding to that input is generated.
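The priority rule can be sketched by scanning the inputs from highest to lowest (illustrative function name, D7 assumed highest priority):

```python
def priority_encoder(inputs):
    # inputs: [D0, ..., D7]; the highest-numbered active line wins.
    for n in range(7, -1, -1):
        if inputs[n]:
            return (n >> 2) & 1, (n >> 1) & 1, n & 1
    return None  # no input active: output is invalid

# With D6 and D3 both active, D6 takes precedence.
print(priority_encoder([0, 0, 0, 1, 0, 0, 1, 0]))  # (1, 1, 0)
```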
Decoders –
A decoder does the opposite job of an encoder. It is a combinational circuit that
converts N lines of input into 2^N lines of output. Let’s take the example of a
3-to-8 line decoder.
Truth Table –
X Y Z D0 D1 D2 D3 D4 D5 D6 D7
0 0 0 1 0 0 0 0 0 0 0
0 0 1 0 1 0 0 0 0 0 0
0 1 0 0 0 1 0 0 0 0 0
0 1 1 0 0 0 1 0 0 0 0
1 0 0 0 0 0 0 1 0 0 0
1 0 1 0 0 0 0 0 1 0 0
1 1 0 0 0 0 0 0 0 1 0
1 1 1 0 0 0 0 0 0 0 1
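The decoder's truth table can likewise be sketched in Python (illustrative function name): exactly one output line goes high, selected by the binary value of the inputs.

```python
def decoder_3to8(x, y, z):
    # X is the most significant select bit; output D_n goes high
    # where n is the binary value of (X, Y, Z).
    n = (x << 2) | (y << 1) | z
    return [1 if i == n else 0 for i in range(8)]

print(decoder_3to8(1, 0, 1))  # D5 high: [0, 0, 0, 0, 0, 1, 0, 0]
```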
What is a Multiplexer?
A multiplexer is a data selector that takes several inputs and produces a single
output. A multiplexer has 2^N input lines and 1 output line, where N is the
number of selection lines.
What is Demultiplexer ?
A demultiplexer is a data distributor that takes a single input and produces several
outputs. A demultiplexer has 1 input and 2^N output lines, where N is the number
of selection lines.
Given below is the block diagram of the demultiplexer; it has one input line and
2^N output lines.
Implementation: A multiplexer is majorly implemented in networking applications,
while a demultiplexer is employed in data-intensive applications where data needs
to be changed into another form.
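Both behaviors can be sketched in a few lines of Python (illustrative helper names): the multiplexer routes one of its inputs to the output, and the demultiplexer routes its single input to one of the outputs.

```python
def mux(inputs, select):
    # 2^N-to-1 multiplexer: route the selected input line to the single output.
    return inputs[select]

def demux(value, select, n_outputs):
    # 1-to-2^N demultiplexer: route the single input to the selected output line.
    out = [0] * n_outputs
    out[select] = value
    return out

print(mux([0, 1, 1, 0], 2))  # 1
print(demux(1, 2, 4))        # [0, 0, 1, 0]
```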
Cache Memory
Definition:
Cache memory is a small, high-speed memory located closer to the CPU
than the main memory (RAM). It is used to temporarily store frequently
accessed data and instructions to reduce the time the CPU takes to fetch
data from main memory. This helps improve the overall speed and
performance of the computer system.
Memory Hierarchy
The memory hierarchy in a computer system is a structure that organizes
memory components based on speed, size, cost, and proximity to the
CPU. It aims to provide the best tradeoff between performance and cost-
efficiency. The hierarchy ensures that the fastest, most expensive
memory is limited in size and closer to the CPU, while the slower,
cheaper memory is larger and farther away.
5. Tertiary Storage: External, removable storage devices like CDs, DVDs, or
backup tapes. Slowest, cheapest, and very large.
Internal Memory
Definition:
Internal memory refers to memory that is directly accessible by the CPU
and is used for storing data and instructions during execution. It is also
known as primary memory or main memory.
RAM (Random Access Memory): Temporarily stores active data and
instructions.
Cache Memory: A small, high-speed memory close to the CPU that
stores frequently accessed data.
ROM (Read-Only Memory): Permanently stores critical system
instructions, like the BIOS.
Registers: Small storage inside the CPU used for immediate data
processing.
Characteristics:
1. Proximity: Directly connected to or embedded in the CPU.
2. Speed: Very fast compared to external memory.
3. Volatility:
o RAM and cache are volatile (data is lost when the power is
off).
o ROM is non-volatile (retains data permanently).
4. Capacity: Smaller in size (bytes to a few gigabytes).
5. Cost: More expensive per unit compared to external memory.
Functions:
Temporary storage for data actively being processed.
Reduces CPU idle time by quickly supplying instructions and data.
Essential for program execution.
External Memory
Definition:
External memory refers to storage devices outside the CPU that are used
for storing data permanently or for long-term use. It is also called
secondary memory or auxiliary memory.
Examples:
Hard Disk Drives (HDDs) and Solid State Drives (SSDs).
Flash Drives (USB drives).
Optical Disks (CDs, DVDs, Blu-ray).
Characteristics:
1. Proximity: Located outside the CPU, connected via interfaces like
USB or Ethernet.
2. Speed: Slower than internal memory.
3. Volatility: Non-volatile, meaning data is retained even when power
is off.
4. Capacity: Larger in size (gigabytes to terabytes or more).
5. Cost: Cheaper per unit compared to internal memory.
Functions:
Long-term data storage for files, programs, and backups.
Portable storage for transferring data between systems
Comparison of Internal and External Memory
Cost per Unit: Internal memory is more expensive; external memory is less
expensive.
Memory Management
Definition:
Memory management is the process of efficiently allocating, organizing,
and controlling a computer's memory resources. It ensures that the
system uses its memory effectively to run programs and processes while
avoiding conflicts, wastage, or errors. Memory management is primarily
handled by the operating system (OS).
Key Goals of Memory Management:
1. Efficient Resource Utilization:
o Maximize the use of available physical and virtual memory.
2. Process Isolation:
o Ensure processes do not interfere with each other's memory.
3. Multitasking Support:
o Allow multiple programs to run simultaneously by dividing
memory between them.
4. Memory Protection:
o Prevent unauthorized access or modification of memory by
programs.
5. Minimize Latency:
o Optimize memory access time and reduce delays.
1. Programmed I/O
Definition: The CPU is responsible for managing the data transfer
by executing specific instructions (polling or checking) for every
byte or word of data.
Process:
o The CPU continuously polls (checks) the status of the
peripheral to determine if it is ready to send or receive data.
o Once the peripheral is ready, the CPU initiates the transfer.
Characteristics:
o The CPU is heavily involved in the transfer process.
o Slower because the CPU is busy waiting (polling) for the
device to be ready.
Applications: Used in simple systems or where transfer speed is not
critical.
Example: Transferring data between the CPU and a printer.
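The busy-waiting behavior described above can be simulated in a small Python sketch; the Device class and its status/read methods are purely illustrative assumptions, standing in for a peripheral's status and data registers.

```python
import random

class Device:
    # Toy peripheral: becomes "ready" after an unpredictable number of polls.
    def __init__(self):
        self.polls_needed = random.randint(1, 5)
    def ready(self):
        self.polls_needed -= 1
        return self.polls_needed <= 0
    def read(self):
        return 0x42  # placeholder data byte

def programmed_io_read(dev):
    # The CPU busy-waits: it repeatedly checks the status register and
    # can do no other useful work until the device signals ready.
    polls = 0
    while not dev.ready():
        polls += 1
    return dev.read(), polls

data, polls = programmed_io_read(Device())
print(hex(data), "after", polls, "status polls")
```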
2. Interrupt-Driven I/O
Definition: The peripheral device interrupts the CPU when it is
ready to send or receive data, eliminating the need for the CPU to
continuously poll.
Process:
1. The device sends an interrupt signal to the CPU when it is
ready.
2. The CPU temporarily stops its current task to handle the
interrupt.
3. Data transfer occurs, and the CPU resumes its previous task.
Characteristics:
o More efficient than programmed I/O because the CPU does not
waste time polling.
o Requires an interrupt controller to manage multiple devices.
o Used when real-time response is needed.
Applications: Keyboard inputs, mouse inputs, or real-time systems.
Example: A keyboard interrupt triggers the CPU to process user
input.
Efficiency: Low (polling overhead) for programmed I/O, moderate for
interrupt-driven I/O, high for DMA.
Hardware Complexity: Low for programmed I/O, moderate for interrupt-driven
I/O, high for DMA.
Interrupt:
An interrupt is a signal sent to the CPU by a hardware device or software
process to indicate that an event requires immediate attention. It
temporarily halts the CPU's current operations, saves its state, and
executes a specific service routine (Interrupt Service Routine or ISR) to
handle the event. Once the interrupt is serviced, the CPU resumes its
previous operations.
Interrupt Cycle
The interrupt cycle is very similar to the instruction cycle. At the very start, the
status of flip-flop R is checked. If it is 0, there is no interrupt and the CPU can
continue its ongoing tasks. But when R = 1, the ongoing process should halt
because an interrupt has occurred.
When R = 0, the CPU continues its tasks while checking the status of IEN in
parallel. If IEN is 1, FGI and FGO are checked in a hierarchy. If either of these
flip-flops is found set, R is immediately set to 1.
When R = 1, the content of the PC (the address of the next instruction in memory)
is saved at M[0], and the PC is then set to 1, enabling it to point to the BUN
operation. The instruction at M[1] is a BUN instruction that transfers control to the
appropriate I/O reference instruction stored at some other location in memory.
Separate fetch, decode, and execute phases are then carried out to service the I/O
reference instruction.
Once the I/O reference instruction has executed completely, the PC is loaded with
0, where it finds the saved return address. The entire process is diagrammed as
follows:
Horizontal and vertical microprogramming are two approaches to
designing microcode in computer systems. Microprogramming is a
method used to implement the control logic of a processor by using a
sequence of low-level instructions (microinstructions) stored in a
microprogram memory.
1. Horizontal Microprogramming
In horizontal microprogramming, each microinstruction specifies a wide
set of control signals that can be executed in parallel.
Characteristics:
Wide control word: Each microinstruction is very wide (e.g.,
hundreds of bits), with each bit controlling a specific part of the
hardware.
Highly parallel execution: Multiple control signals can be activated
simultaneously
Harder to program: Requires careful design to ensure that no
conflicting control signals are active.
Faster execution: Parallelism reduces the number of cycles needed
for a task.
Advantages:
Allows for very fine-grained control over hardware.
High degree of parallelism enables faster microinstruction
execution.
Disadvantages:
Control word size is very large, leading to increased memory usage.
Complexity in programming and debugging.
2. Vertical Microprogramming
In vertical microprogramming, each microinstruction specifies fewer
control signals and relies on encoding to reduce the width of the control
word.
Characteristics:
Narrow control word: Each microinstruction has fewer bits, as
control signals are encoded.
Sequential execution: A single microinstruction may activate only
one or a few control signals.
Simpler programming: Easier to design and manage compared to
horizontal microprogramming.
Slower execution: Decoding can increase execution time.
Compact design: Reduces the memory required for microprogram
storage.
Advantages:
Smaller microprogram memory requirements.
Easier to design and maintain.
More scalable for complex systems.
Disadvantages:
Reduced parallelism due to encoding.
Decoding overhead increases latency.
Comparison Table
Control Word Width: Large (hundreds of bits) in horizontal microprogramming,
small (tens of bits) in vertical microprogramming.
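To make the contrast concrete, here is a minimal Python sketch; the signal names and the vertical encoding table are illustrative assumptions. A horizontal word dedicates one bit per control line, while a vertical field names one action and must be decoded.

```python
# Horizontal: one bit per control line; several can be active at once.
SIGNALS = ["PC_inc", "MAR_load", "MDR_load", "IR_load", "ALU_add", "REG_write"]

def decode_horizontal(word):
    return [s for i, s in enumerate(SIGNALS) if word & (1 << i)]

# Vertical: a short encoded field selects one action; a decoder expands it.
VERTICAL_TABLE = {0b00: ["PC_inc"], 0b01: ["MAR_load"],
                  0b10: ["ALU_add"], 0b11: ["REG_write"]}

def decode_vertical(field):
    return VERTICAL_TABLE[field]

print(decode_horizontal(0b010011))  # ['PC_inc', 'MAR_load', 'ALU_add']
print(decode_vertical(0b10))        # ['ALU_add']
```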
Indexed: Effective address = Base + Index. Example: LOAD R1, 1000(R2)
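The effective-address calculation for indexed addressing can be sketched in Python; the register and memory contents here are illustrative assumptions.

```python
registers = {"R2": 24}   # assumed index register contents
memory = {1024: 99}      # assumed memory contents

def load_indexed(base, index_reg):
    # Indexed addressing: effective address = base (from the instruction)
    # plus the contents of the index register.
    ea = base + registers[index_reg]   # 1000 + 24 = 1024
    return memory[ea]

print(load_indexed(1000, "R2"))  # 99
```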
1. Fetch
The CPU retrieves (or fetches) an instruction from memory.
Steps in the fetch phase:
1. The Program Counter (PC) holds the memory address of the
next instruction to be executed.
2. The CPU uses the Memory Address Register (MAR) to send this
address to the memory.
3. The instruction is fetched from memory and placed into the
Memory Data Register (MDR) or directly into the Instruction
Register (IR).
4. The Program Counter is incremented to point to the next
instruction.
2. Decode
The CPU interprets (or decodes) the fetched instruction.
Steps in the decode phase:
1. The instruction is sent to the Control Unit (CU).
2. The CU identifies the operation to be performed (e.g.,
addition, memory access, branching) by decoding the binary
instruction (opcode).
3. The CPU determines what data is needed (operands) and
where it is located.
3. Execute
The CPU performs the operation specified by the instruction.
Steps in the execute phase:
1. The appropriate unit (e.g., Arithmetic Logic Unit (ALU),
memory, or I/O) carries out the operation.
2. If needed, data is fetched from memory or registers.
3. The result may be stored in a register, written back to
memory, or sent to an output device.
4. The status of the operation is updated in the flags (e.g., zero
flag, carry flag).
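The fetch-decode-execute loop above can be sketched as a toy accumulator machine in Python; the instruction set and program are illustrative assumptions, not any real ISA.

```python
# A toy accumulator machine: each instruction is (opcode, operand).
program = [("LOAD", 5), ("ADD", 3), ("STORE", 0), ("HALT", 0)]
memory = [0] * 16
acc, pc, running = 0, 0, True

while running:
    opcode, operand = program[pc]   # Fetch: read the instruction at PC
    pc += 1                         # ...and increment the PC
    # Decode + Execute: dispatch on the opcode
    if opcode == "LOAD":
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "STORE":
        memory[operand] = acc
    elif opcode == "HALT":
        running = False

print(acc, memory[0])  # 8 8
```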
CISC Characteristics:
1. Complex, multi-step instructions.
2. Variable instruction size.
3. Fewer instructions in code.
4. Many addressing modes.
5. Slower (higher cycles per instruction).
6. Uses microprogramming.
7. Examples: Intel x86, IBM System/360.
RISC Characteristics:
1. Simple, single-step instructions.
2. Fixed instruction size.
3. More instructions in code.
4. Few addressing modes.
5. Faster (lower cycles per instruction).
6. Uses hardwired control.
7. Examples: ARM, MIPS, SPARC.
1. Hardwired Control
Control Mechanism: Uses fixed, combinational logic circuits (gates,
flip-flops, etc.) to generate control signals.
Speed: Faster because control signals are generated directly by
hardware with minimal delay.
Complexity: Less flexible but more efficient for simple operations.
The design is more complex, requiring more gates and circuitry to
handle each instruction.
Cost: Generally more expensive to design and implement due to
hardware complexity.
Flexibility: Not flexible; changes to the instruction set or control
logic require redesigning the hardware.
Usage: Typically used in RISC architectures where simple operations
are performed frequently and control logic is relatively
straightforward.
Example: Simple RISC processors, such as early MIPS designs, used
hardwired control.
2. Microprogrammed Control
Control Mechanism: Uses a control memory (often ROM or RAM) to
store a set of microinstructions. These microinstructions define
control signals for each operation.
Speed: Slower than hardwired control due to the need to fetch
microinstructions from memory.
Complexity: More flexible but can be less efficient because it
requires additional memory and decoding steps to fetch
microinstructions.
Cost: Cheaper to design and modify because the control logic can
be changed by altering the microprogram stored in memory,
without needing hardware changes.
Flexibility: More flexible; new instructions or changes in control
logic can be added by modifying the microprogram.
Usage: Typically used in CISC architectures where complex
instructions need to be supported and where flexibility is more
important.
Example: The IBM System/360 and many modern x86 processors
use microprogrammed control.
Summary
Hardwired control is fast but rigid and complex, making it suitable
for simpler tasks.
Microprogrammed control is slower but more flexible, ideal for
systems where changes or complex instructions are needed.
What is Assembly Language?
Assembly language is a low-level programming language that provides a
symbolic representation of machine code. It is specific to a particular
processor architecture and uses mnemonics (e.g., MOV, ADD, SUB) for
instructions, making it easier for humans to write and understand
compared to binary machine code.
Key Features:
1. Assembly language is hardware-dependent.
2. It allows direct control of hardware components like registers,
memory, and I/O.
3. Requires an assembler to translate the code into machine
language (binary).
Shift Registers
A shift register is a sequential logic circuit that stores and transfers
data. It is made up of flip-flops, where the stored data is shifted from one
flip-flop to another on the application of a clock signal. Shift registers are
widely used in digital circuits for temporary data storage, data transfer,
and data manipulation.
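One clock pulse of a serial-in serial-out shift register can be sketched in Python (illustrative function name): every stored bit moves one flip-flop along, the new bit enters at one end, and the last bit is shifted out.

```python
def shift_register(bits, serial_in):
    # One clock pulse: shift every bit one position to the right;
    # serial_in enters on the left, the rightmost bit is shifted out.
    shifted_out = bits[-1]
    new_bits = [serial_in] + bits[:-1]
    return new_bits, shifted_out

state = [0, 0, 0, 0]
for bit in [1, 0, 1, 1]:   # clock in the serial stream 1, 0, 1, 1
    state, out = shift_register(state, bit)
print(state)  # [1, 1, 0, 1]
```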
Examples of Duality
Example 1: A Simple Boolean Expression
Original: A · (B + C) = A · B + A · C
o (This is the distributive property.)
Dual: A + (B · C) = (A + B) · (A + C)
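Both the distributive law and its dual can be verified exhaustively by enumerating all input combinations, a quick Python check:

```python
from itertools import product

# Verify the distributive law and its dual over all 8 input combinations.
for A, B, C in product([0, 1], repeat=3):
    # Original: A · (B + C) = A · B + A · C
    assert (A and (B or C)) == ((A and B) or (A and C))
    # Dual (swap AND and OR): A + (B · C) = (A + B) · (A + C)
    assert (A or (B and C)) == ((A or B) and (A or C))
print("both identities hold for all 8 input combinations")
```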
Latches
Definition: A latch is a simple memory device that stores a bit of
data. It changes its state based on the input and control signal,
typically level-triggered.
Control: The output of a latch can change as long as the control
signal (often called enable) is active. It is level-sensitive, meaning it
reacts to the level of the control signal (high or low).
Types:
o SR Latch: The simplest form, made of two cross-coupled NOR
or NAND gates.
o D Latch: A more controlled version, where the data input (D) is
transferred to the output when the enable signal is active.
Characteristics:
o Level Triggered: Output changes as long as the enable signal
is active.
o Simple: Easier to design but can cause glitches if the enable
signal is unstable.
Flip-Flops
Definition: A flip-flop is a bistable device that also stores a bit of
data but is edge-triggered, meaning it only changes its output on
the rising or falling edge of a clock signal.
Control: Flip-flops are edge-triggered, meaning their output only
changes at the transition of a clock signal (either rising edge or
falling edge).
Types:
o D Flip-Flop: Stores the input data on the clock edge.
o JK Flip-Flop: More complex, with inputs for setting, resetting,
and toggling.
o T Flip-Flop: Toggles the output on each clock edge.
Characteristics:
o Edge Triggered: Output only changes at specific clock edges.
o More Stable: Less prone to glitches compared to latches.
Key Differences:
Triggering: A latch is level-sensitive, while a flip-flop is edge-triggered
(clock edge).
Control Signal: A latch is active as long as enable is active; a flip-flop is
active only on the clock edge.
Common Devices: Latches (SR, D); flip-flops (D, JK, T).
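The level-sensitive versus edge-triggered distinction can be simulated in Python; the class and method names are illustrative. While enable stays high the latch keeps following D, but the flip-flop only samples D at a rising clock edge.

```python
class DLatch:
    # Level-sensitive: output follows D whenever enable is high.
    def __init__(self):
        self.q = 0
    def update(self, d, enable):
        if enable:
            self.q = d
        return self.q

class DFlipFlop:
    # Edge-triggered: output samples D only on a rising clock edge.
    def __init__(self):
        self.q = 0
        self.prev_clk = 0
    def update(self, d, clk):
        if clk == 1 and self.prev_clk == 0:  # rising edge detected
            self.q = d
        self.prev_clk = clk
        return self.q

latch, ff = DLatch(), DFlipFlop()
print(latch.update(1, 1), ff.update(1, 1))  # 1 1  (edge: both capture D=1)
print(latch.update(0, 1), ff.update(0, 1))  # 0 1  (no edge: only the latch follows D)
```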
Modulus Counter
A modulus counter is a type of counter that counts from 0 to a specified
value (the modulus) and then resets back to 0. The modulus determines
the number of unique states the counter goes through before it repeats.
Modulus: The modulus is the total number of states the counter can
hold. For example, a modulus-5 counter counts from 0 to 4 (5
states).
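The wrap-around behavior is just counting modulo the modulus, as this small Python sketch shows (illustrative class name):

```python
class ModulusCounter:
    # Counts 0 .. modulus-1, then wraps back to 0.
    def __init__(self, modulus):
        self.modulus = modulus
        self.count = 0
    def clock(self):
        self.count = (self.count + 1) % self.modulus
        return self.count

c = ModulusCounter(5)
print([c.clock() for _ in range(7)])  # [1, 2, 3, 4, 0, 1, 2]
```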
Datapath
A datapath is a critical component of a computer's central processing
unit (CPU) that is responsible for executing arithmetic, logical, and data
manipulation operations. It is the collection of functional units (such as
registers, ALUs, and multiplexers) and their interconnections, which work
together to process data in a system.
1. Direct-Mapped Cache
Definition: In direct-mapped cache, each block of memory maps to
exactly one cache line. This means that for each memory address,
there is only one possible location in the cache where the data can
be stored.
How it works:
o The memory address is divided into three parts: tag, index,
and block offset.
o The index is used to find a specific cache line, while the tag is
compared with the tag stored in that line to verify if the data
is present (cache hit or miss).
Advantages:
o Simple and easy to implement.
o Fast access time for cache lookup.
Disadvantages:
o Cache conflicts: If multiple memory blocks map to the same
cache line, they will overwrite each other, causing more cache
misses (thrashing).
Example: In a 4-line cache with 16 memory blocks, each memory
block is mapped to one specific cache line.
2. Set-Associative Cache
Definition: In set-associative cache, each memory block can be
mapped to any one of a set of cache lines, making the cache more
flexible. The cache is divided into several sets, and each set can
contain multiple cache lines. A memory block can map to any line
within a set.
How it works:
o The memory address is divided into three parts: tag, set
index, and block offset.
o The set index points to a specific set, and the tag is compared
to all the tags in that set. If a match is found in one of the
lines, it’s a cache hit.
Advantages:
o More flexibility than direct-mapped cache, reducing cache
conflicts.
o Better hit rate compared to direct-mapped cache.
Disadvantages:
o Slightly more complex than direct-mapped cache due to the
need to check multiple lines within a set.
o Slightly slower than direct-mapped cache because of the need
to compare tags in multiple lines in the set.
Example: In a 4-line cache, a 2-way set-associative cache would
have 2 sets, each with 2 cache lines. A memory block could map to
either of the two lines in the set.
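The address split described for both mapping schemes can be sketched in Python; the function name and the example cache sizes are illustrative assumptions. Note how reducing the number of sets moves index bits into the tag.

```python
def split_address(addr, block_size, num_sets):
    # Split a memory address into (tag, index, offset) for a cache with
    # the given block size and number of sets (both powers of two).
    offset_bits = block_size.bit_length() - 1
    index_bits = num_sets.bit_length() - 1
    offset = addr & (block_size - 1)
    index = (addr >> offset_bits) & (num_sets - 1)
    tag = addr >> (offset_bits + index_bits)
    return tag, index, offset

# Direct-mapped: 4 lines of 16 bytes each, i.e. 4 sets of one line.
print(split_address(0x1A7, 16, 4))  # (6, 2, 7)
# Same 4 lines organized 2-way set-associative: 2 sets, so one index
# bit moves into the tag.
print(split_address(0x1A7, 16, 2))  # (13, 0, 7)
```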
Instruction Cycle
The instruction cycle is the cycle through which a CPU (Central
Processing Unit) fetches, decodes, and executes instructions from
memory to perform tasks. The cycle is repetitive and continues until the
program is completed.
The instruction cycle is typically divided into the following stages:
1. Fetch
Purpose: Retrieve the next instruction from memory.
Process:
o The Program Counter (PC) holds the address of the next
instruction.
o The CPU uses the PC to fetch the instruction from the memory
(RAM).
o The fetched instruction is stored in the Instruction Register
(IR).
Actions:
o The PC is incremented to point to the next instruction.
o The instruction is fetched from the memory location.
2. Decode
Purpose: Interpret the fetched instruction and prepare the
necessary control signals for execution.
Process:
o The instruction in the Instruction Register (IR) is decoded by
the Control Unit (CU).
o The instruction is broken into opcode (operation code) and
operand.
o The opcode specifies the operation to be performed (e.g., add,
subtract), and the operand(s) specify the data or memory
addresses involved.
Actions:
o The Control Unit (CU) generates control signals based on the
opcode.
o Identifies which registers or memory locations to use.
3. Execute
Purpose: Perform the operation specified by the instruction.
Process:
o The Arithmetic and Logic Unit (ALU) performs the operation
(e.g., arithmetic operations like addition or logical operations
like AND).
o If the instruction involves memory, data is fetched from or
written to the memory.
o If the instruction is a jump or branch, the Program Counter
(PC) is updated to the new address.
Actions:
o The required data is fetched from registers or memory.
o The ALU performs the operation.
o The result is stored in the appropriate register or memory
location.
4. Store (Optional)
Purpose: Store the result of the execution (if needed).
Process:
o The result of the operation (from the ALU or other units) is
stored in a register or written back to memory.
Actions:
o The result is written back to a destination (e.g., register,
memory).
Cycle Continuation
After completing one instruction, the cycle repeats.
The Program Counter (PC) points to the next instruction, and the
cycle continues until the program ends (or until an interrupt
occurs).