
Gray code is called "reflected code" because it is constructed in such a way that

successive values differ by only one bit, and the second half of the sequence is
created by a mirror image (or reflection) of the first half. This property ensures that
transitioning between consecutive values minimizes errors in digital systems.
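As a concrete illustration, here is a minimal Python sketch of the reflection construction (the standard textbook algorithm, not tied to any particular implementation):

```python
def gray_code(n):
    """Build the n-bit Gray code sequence by reflection."""
    seq = ["0", "1"]  # the 1-bit Gray code
    for _ in range(n - 1):
        # Reflect the current sequence: prefix the original half with 0
        # and the mirrored half with 1.
        seq = ["0" + code for code in seq] + ["1" + code for code in reversed(seq)]
    return seq

print(gray_code(3))
# ['000', '001', '011', '010', '110', '111', '101', '100']
```

Successive entries (including the wrap-around from the last back to the first) differ in exactly one bit, which is the property described above.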

1. Combinational Logic Circuits


 Definition: A combinational circuit is a type of digital circuit where the
output depends only on the current inputs. It has no memory and does not
store past inputs.
Key Features:
 Outputs are calculated directly from the inputs using logic gates.
 There is no feedback or memory element.
 The circuit’s behavior is described using Boolean algebra.
Examples:
 Adders (e.g., Half Adder, Full Adder), Multiplexers (MUX), Demultiplexers (DEMUX), Encoders and Decoders, Subtractors
Real-Life Use:
 Arithmetic operations in calculators.
 Data selection in communication systems.

2. Sequential Logic Circuits


 Definition: A sequential circuit is a type of digital circuit where the output
depends on the current inputs and the previous state (stored in
memory). It has feedback paths and can store information.
Key Features:
 Includes memory elements (e.g., flip-flops, latches) to store state
information.
 Requires a clock signal to coordinate state changes (in most cases).
 Outputs are a function of both inputs and previous states.
Examples:
 Flip-flops (SR, JK, D, T), Registers, Counters (e.g., Binary Counters, Ripple Counters), Shift Registers
 Real-Life Use:
 Storing and processing data in memory chips.
 Control systems in CPUs (e.g., program counters).

Don’t Care Terms:
 Don’t care terms are the input conditions for which a system’s output can
be either 0 or 1 without affecting the overall functionality of the circuit.
 These terms are typically represented by an X in truth tables or Karnaugh
maps (K-maps).

Why are they used?


1. Simplification: They allow more flexibility in simplifying logic circuits.
2. Irrelevant inputs: These terms occur in cases where some input
combinations will never happen or where the output is not used for those
inputs.

BCD (Binary-Coded Decimal) Code:


 Definition: BCD is a method of encoding each decimal digit (0-9)
using a 4-bit binary number.
Binary (Natural Binary) Code:
 Definition: In binary code (also referred to as natural binary), the
entire number is represented in base-2 (binary). It uses a series of
0s and 1s to represent the value of a number, with each bit having a
power of 2 significance.

 Key Differences Between BCD and Binary:

 Representation: BCD represents each decimal digit separately in 4 bits; binary represents the entire number as a single binary value.
 Bit Length: in BCD, each decimal digit requires 4 bits; in binary, the bit length depends on the size of the number (roughly log2 of its value).
 Efficiency: BCD is less efficient for representing large numbers, as each digit requires 4 bits; binary is more efficient, as the number is represented in a compact binary form.
 Used For: BCD suits applications where decimal representation is required (e.g., digital clocks, calculators); binary is the general-purpose representation in computers and systems.
 Example: decimal 57 → 0101 0111 in BCD; decimal 57 → 111001 in binary.
 Range: each BCD group is limited to the valid decimal digits 0-9 (invalid states like 1010 are not allowed); binary can represent any integer value without such limitations.
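A short Python sketch contrasting the two encodings from the comparison above (illustrative only):

```python
def to_bcd(n):
    """Encode a non-negative decimal number digit by digit, 4 bits per digit."""
    return " ".join(format(int(d), "04b") for d in str(n))

def to_binary(n):
    """Encode the whole number as one natural-binary value."""
    return format(n, "b")

print(to_bcd(57))     # 0101 0111
print(to_binary(57))  # 111001
```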

Comparison Between PAL and PLA:

 AND Array: programmable in both PAL and PLA (can be customized).
 OR Array: fixed in PAL (cannot be changed); programmable in PLA.
 Flexibility: PAL is less flexible because of its fixed OR array; PLA is more flexible because both the AND and OR arrays are programmable.
 Speed: PAL is faster (only one programmable array in the signal path); PLA is slower (more programmable elements).
 Complexity: PAL is suitable for simpler logic designs; PLA is suitable for more complex logic designs.
 Cost: PAL is generally cheaper; PLA is generally more expensive.
 Common Uses: PAL is used in simple combinational logic circuits; PLA is used in complex combinational and sequential circuits.

Encoders convert 2^N input lines into an N-bit code, and decoders decode the N bits back into 2^N lines.
1. Encoders:
An encoder is a combinational circuit that converts binary information from 2^N input lines into N output lines, which carry the N-bit code for the active input. For simple encoders, it is assumed that only one input line is active at a time. As an example, consider the octal-to-binary encoder, which takes 8 input lines and generates 3 output lines.
One limitation of this encoder is that only one input can be active at any given time. If more than one input is active, the output is undefined. For example, if D6 and D3 are both active, the output would be 111, which is the code for D7. To overcome this, we use priority encoders.

Truth Table –
D7 D6 D5 D4 D3 D2 D1 D0 | X Y Z
0  0  0  0  0  0  0  1  | 0 0 0
0  0  0  0  0  0  1  0  | 0 0 1
0  0  0  0  0  1  0  0  | 0 1 0
0  0  0  0  1  0  0  0  | 0 1 1
0  0  0  1  0  0  0  0  | 1 0 0
0  0  1  0  0  0  0  0  | 1 0 1
0  1  0  0  0  0  0  0  | 1 1 0
1  0  0  0  0  0  0  0  | 1 1 1

Priority Encoder –
A priority encoder is an encoder circuit in which the inputs are given priorities. When more than one input is active at the same time, the input with the higher priority takes precedence and the corresponding output is generated.
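A minimal Python sketch of both behaviors (a software model of the truth table above, with D7 given the highest priority in the priority version):

```python
def octal_to_binary_encoder(d):
    """d lists the 8 input bits D0..D7; exactly one must be 1.
    Returns the 3-bit code XYZ as a binary string."""
    if sum(d) != 1:
        raise ValueError("simple encoder: exactly one input must be active")
    return format(d.index(1), "03b")

def priority_encoder(d):
    """When several inputs are active, the highest-numbered one (D7 first) wins."""
    for i in range(7, -1, -1):
        if d[i]:
            return format(i, "03b")
    return None  # no input active

inputs = [0, 0, 0, 1, 0, 0, 1, 0]  # D3 and D6 both active
print(priority_encoder(inputs))    # 110: D6 takes precedence
```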

Decoders –
A decoder does the opposite job of an encoder. It is a combinational circuit that converts n input lines into 2^n output lines. Let's take the example of a 3-to-8 line decoder.
Truth Table –
X Y Z | D0 D1 D2 D3 D4 D5 D6 D7
0 0 0 | 1  0  0  0  0  0  0  0
0 0 1 | 0  1  0  0  0  0  0  0
0 1 0 | 0  0  1  0  0  0  0  0
0 1 1 | 0  0  0  1  0  0  0  0
1 0 0 | 0  0  0  0  1  0  0  0
1 0 1 | 0  0  0  0  0  1  0  0
1 1 0 | 0  0  0  0  0  0  1  0
1 1 1 | 0  0  0  0  0  0  0  1
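A matching Python sketch of the 3-to-8 decoder (illustrative model): the 3-bit select value drives exactly one of the eight output lines high.

```python
def decoder_3_to_8(x, y, z):
    """Return output lines D0..D7 for select bits (x, y, z)."""
    index = (x << 2) | (y << 1) | z   # X is the MSB, Z the LSB
    return [1 if i == index else 0 for i in range(8)]

print(decoder_3_to_8(1, 0, 1))  # D5 high: [0, 0, 0, 0, 0, 1, 0, 0]
```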

Encoder: Converts multiple inputs to a smaller number of output codes, usually in binary form.
Decoder: Converts encoded binary input back to its expanded output.

What is a Multiplexer?
A multiplexer is a data selector that takes several inputs and gives a single output. A multiplexer has 2^N input lines and one output line, where N is the number of selection lines.
What is a Demultiplexer?
A demultiplexer is a data distributor that takes a single input and gives several outputs. A demultiplexer has one input line and 2^N output lines, where N is the number of selection lines.
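A minimal Python sketch of both devices (toy models; the select value is the integer formed by the N selection lines):

```python
def mux(inputs, select):
    """2^N-to-1 multiplexer: route the selected input to the single output."""
    return inputs[select]

def demux(data, select, n_outputs):
    """1-to-2^N demultiplexer: route the single input to the selected output."""
    return [data if i == select else 0 for i in range(n_outputs)]

print(mux([0, 1, 1, 0], select=2))        # 1
print(demux(1, select=2, n_outputs=4))    # [0, 0, 1, 0]
```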

Difference Between Multiplexer and Demultiplexer

 A multiplexer funnels digital information from various sources into a single line; a demultiplexer receives digital information from a single source and distributes it to several lines.
 The multiplexer is known as a data selector; the demultiplexer is known as a data distributor.
 A multiplexer is a digital switch; a demultiplexer is a digital circuit.
 A multiplexer has 2^N input data lines; a demultiplexer has a single input line.
 A multiplexer has a single output data line; a demultiplexer has 2^N output data lines.
 A multiplexer works on the many-to-one principle; a demultiplexer works on the one-to-many principle.
 In time-division multiplexing, the multiplexer is used at the transmitter end and the demultiplexer at the receiver end.

Comparison of Decoder and Demultiplexer:

 Input/Output: a decoder has n input lines and 2^n output lines; a demultiplexer has n select lines and 2^n output lines.
 Inverse of: a decoder is the inverse of an encoder; a demultiplexer is the inverse of a multiplexer.
 Application: a decoder is used in detection of bits and data encoding; a demultiplexer is used in distribution of data and switching.
 Use: a decoder is used for changing the format of an instruction into the machine-specific language; a demultiplexer is used as a routing device to route data coming from one signal into multiple signals.
 Select Lines: a decoder has none; a demultiplexer has select lines.
 Implementation: decoders are majorly implemented in networking applications; demultiplexers are employed in data-intensive applications where data needs to be changed into another form.

Cache Memory
Definition:
Cache memory is a small, high-speed memory located closer to the CPU
than the main memory (RAM). It is used to temporarily store frequently
accessed data and instructions to reduce the time the CPU takes to fetch
data from main memory. This helps improve the overall speed and
performance of the computer system.

Key Characteristics of Cache Memory:


1. Speed: Cache memory is much faster than main memory (RAM) but
slower than the CPU registers.
2. Size: Cache memory is smaller in size compared to RAM, typically
measured in kilobytes (KB) to a few megabytes (MB).
3. Cost: Cache memory is more expensive per byte compared to RAM
and hard drives due to its faster access speed and advanced
technology.
4. Proximity to CPU: Cache memory is placed closer to or inside the
CPU to minimize latency.

Types of Cache Memory:


Cache memory is often divided into levels based on proximity to the CPU
and their role:
1. L1 Cache (Level 1):
o Closest to the CPU core, often embedded within the processor
itself.
o Very small in size (typically 16 KB to 128 KB).
o Extremely fast, with the lowest latency.
2. L2 Cache (Level 2):
o Larger than L1 cache (typically 128 KB to 4 MB).
o Located either on the CPU chip or close to it.
o Slower than L1 but still much faster than main memory.
3. L3 Cache (Level 3):
o Shared among all CPU cores in multi-core processors.
o Larger than L1 and L2 (typically 4 MB to 64 MB).
o Slower than L1 and L2 but still faster than RAM.

How Cache Memory Works:


1. Data Storage: Cache stores copies of frequently accessed data and
instructions from main memory.
2. CPU Request: When the CPU needs data:
o It first checks the cache (L1, then L2, then L3).
o If the data is found in the cache, it’s called a cache hit (faster
access).
o If the data is not found, it’s called a cache miss, and the CPU
retrieves the data from main memory and stores it in the
cache for future use.
3. Replacement Policies: Cache uses algorithms (like LRU—Least
Recently Used) to decide which data to replace when the cache is
full.
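As an illustration of the hit/miss flow and LRU replacement just described, here is a minimal Python sketch (a toy fully associative cache, not any specific processor's design):

```python
from collections import OrderedDict

class LRUCache:
    """Toy fully associative cache with Least-Recently-Used replacement."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # address -> data, oldest entry first

    def read(self, address, memory):
        if address in self.lines:            # cache hit
            self.lines.move_to_end(address)  # mark as most recently used
            return self.lines[address]
        data = memory[address]               # cache miss: fetch from main memory
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)   # evict the least recently used line
        self.lines[address] = data           # install for future use
        return data

memory = {addr: addr * 10 for addr in range(8)}
cache = LRUCache(capacity=2)
cache.read(1, memory)   # miss
cache.read(2, memory)   # miss
cache.read(1, memory)   # hit
cache.read(3, memory)   # miss -> evicts address 2 (least recently used)
```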
Advantages of Cache Memory:
 Faster than RAM.
 Reduces CPU access time for frequently used data.
 Improves system performance.
Disadvantages of Cache Memory:
 Limited size.
 Expensive compared to other memory types.
 Complexity in managing and designing multi-level caches.

Memory Hierarchy
The memory hierarchy in a computer system is a structure that organizes
memory components based on speed, size, cost, and proximity to the
CPU. It aims to provide the best tradeoff between performance and cost-
efficiency. The hierarchy ensures that the fastest, most expensive
memory is limited in size and closer to the CPU, while the slower,
cheaper memory is larger and farther away.

Structure of Memory Hierarchy:


The memory hierarchy typically consists of the following levels, from the
fastest and smallest at the top to the slowest and largest at the bottom:

1. Registers: small storage locations inside the CPU that hold data for immediate execution. Speed: fastest; Cost: most expensive; Size: a few bytes.
2. Cache Memory: high-speed memory closer to the CPU, divided into L1, L2, and L3 levels. Speed: very fast; Cost: expensive; Size: kilobytes (KB).
3. Main Memory (RAM): primary memory that stores active data and instructions, used for running programs. Speed: moderate; Cost: moderate; Size: gigabytes (GB).
4. Secondary Storage: non-volatile storage like SSDs or HDDs, used for permanent data storage. Speed: slow; Cost: cheaper; Size: terabytes (TB).
5. Tertiary Storage: external, removable storage devices like CDs, DVDs, or backup tapes. Speed: slowest; Cost: cheapest; Size: very large.

Key Features of Each Level:


1. Registers:
 Located inside the CPU.
 Store temporary data for immediate CPU operations (e.g., operands
for arithmetic operations).
 Extremely fast but very small in size (few bytes).
 Example: Accumulator, Program Counter.
2. Cache Memory:
 A small, high-speed memory located inside or very near the CPU.
 Stores frequently accessed data to reduce main memory (RAM)
access.
 Divided into levels: L1, L2, and L3.
 Example: Cache in Intel or AMD processors.
3. Main Memory (RAM):
 Volatile memory used to store data and instructions for active
processes.
 Faster than secondary memory but slower than cache.
 Moderate size (GB).
 Example: DDR4 RAM.
4. Secondary Storage:
 Non-volatile memory used for long-term storage of data.
 Slower and cheaper than main memory.
 Large capacity (TB).
 Example: SSD, HDD.
5. Tertiary Storage:
 Removable and external storage devices.
 Slowest memory, used for archival and backup purposes.
 Very large capacity.
 Example: DVDs, Blu-ray discs, backup tapes.
By combining different memory types in a hierarchical structure, modern
computing systems achieve both high performance and large storage
capacity at a reasonable cost.

Associative Memory (Content-Addressable Memory, CAM)
Definition:
Associative memory is a type of memory that enables data retrieval
based on content rather than a specific memory address. Unlike
traditional memory systems, where data is accessed using a specific
address, associative memory searches the entire memory simultaneously
for a given data value or key.
Key Features of Associative Memory:
1. Content-Based Access:
o Data is retrieved using a part of the content (a key) instead of
an address.
2. Parallel Search:
o The memory hardware allows simultaneous comparison of the
key with all stored data.
o This parallel search capability makes associative memory
extremely fast for lookups.
3. High-Speed Retrieval:
o Access time is independent of the size of the memory, as all
searches are performed in parallel.

How Associative Memory Works:


1. Data Storage:
o Data is stored in the memory along with a tag or key.
o Tags are used to uniquely identify the stored data.
2. Search Operation:
o A search key is provided as input.
o The memory compares this key simultaneously with all stored
tags in parallel.
o If a match is found, the corresponding data is retrieved.
3. Example:
o If the memory stores the following data:
Tag: 1011 | Data: A
Tag: 1100 | Data: B
Tag: 1110 | Data: C
o Searching for the tag 1100 retrieves data B.
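A minimal Python sketch of this content-based lookup (software can only imitate the parallel tag comparison that CAM hardware performs simultaneously):

```python
# Each entry pairs a tag with its stored data, as in the example above.
cam = [("1011", "A"), ("1100", "B"), ("1110", "C")]

def cam_search(key):
    """Return the data whose tag matches the search key, if any.
    Hardware compares all tags at once; this loop merely models the result."""
    matches = [data for tag, data in cam if tag == key]
    return matches[0] if matches else None

print(cam_search("1100"))  # B
```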

Applications of Associative Memory:


1. Cache Memory:
o Associative memory is used in fully associative cache to
quickly determine if a memory block is present in the cache.
2. Networking (Routing Tables):
o Used in routers for fast IP address lookup in routing tables.
3. Database Systems:
o Helps in fast retrieval of database records based on content.

Advantages of Associative Memory:


1. Fast Data Retrieval:
o Parallel search capability allows extremely fast lookups.
2. Efficient Searching:
o Eliminates the need for sequential searching.
3. No Address Dependency:
o Access is based on content, not a specific address.

Disadvantages of Associative Memory:


1. High Cost:
o Associative memory is expensive due to its hardware design.
2. Complexity:
o Requires specialized hardware, making it harder to implement.
3. Power Consumption:
o The parallel comparison process requires more power.

Comparison of Associative Memory vs. Traditional Memory:

 Access Method: associative memory is content-based (using a tag or key); traditional memory is address-based (using memory addresses).
 Search Method: associative memory performs a parallel comparison of all stored data; traditional memory uses sequential or address-based retrieval.
 Speed: associative memory is very fast due to the parallel search; traditional memory is slower and depends on the memory hierarchy.
 Cost: associative memory is expensive due to its hardware complexity; traditional memory is cheaper and widely available.
Virtual Memory
Definition:
Virtual memory is an essential feature of modern operating systems,
providing a cost-effective way to extend physical memory by temporarily
transferring data to a portion of the storage drive (HDD or SSD).
Although it comes with some performance trade-offs, it enables efficient
memory utilization, better multitasking, and the ability to run larger
programs on systems with limited RAM.

Key Features of Virtual Memory:


1. Logical Extension of RAM:
o Virtual memory extends the addressable memory space
beyond the physical RAM.
2. Storage Area:
o A portion of the storage drive is designated as "swap space"
or a "page file," which acts as an extension of RAM.
3. Paging:
o Virtual memory uses a technique called paging, where data is
divided into small fixed-size blocks called pages. These pages
are moved between RAM and the storage drive as needed.
4. Thrashing:
o Happens when the system spends more time swapping pages
in and out of memory than executing tasks. This occurs when
the RAM is insufficient.

How Virtual Memory Works:


1. Page Table:
o The operating system maintains a page table to map virtual
memory addresses to physical memory addresses.
o If a requested page is not in physical memory (a page fault), it
is fetched from the disk.
2. Page Fault:
o When a program requests data that is not currently in RAM, a
page fault occurs.
o The operating system retrieves the data from the swap space
and loads it into RAM.
3. Swapping:
o If RAM is full, the operating system swaps less-used pages
from RAM to the storage drive and loads the requested data
into RAM.
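A minimal Python sketch of the page-table lookup and page-fault path described above (a toy model with a hypothetical 4 KB page size; real MMUs do this in hardware):

```python
PAGE_SIZE = 4096
page_table = {0: 2, 1: None, 2: 5}   # virtual page -> physical frame (None = on disk)

def translate(virtual_address, load_from_disk):
    page = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    frame = page_table.get(page)
    if frame is None:                 # page fault: the page is in swap space
        frame = load_from_disk(page)  # the OS fetches the page and picks a frame
        page_table[page] = frame
    return frame * PAGE_SIZE + offset

# Toy page-fault handler: pretend frame 7 is free and receives the page.
physical = translate(1 * PAGE_SIZE + 100, load_from_disk=lambda page: 7)
print(physical)  # 7 * 4096 + 100 = 28772
```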
Advantages of Virtual Memory:
1. Run Larger Programs:
o Virtual memory allows programs to use more memory than the
system's physical RAM.
2. Multitasking:
o Enables multiple programs to run simultaneously without
running out of physical memory.
3. Efficient Use of RAM:
o Keeps only the frequently accessed data in RAM, reducing
unnecessary memory usage.
Disadvantages of Virtual Memory:
1. Slower Performance:
o Accessing data from the storage drive is much slower than
accessing data from RAM.
o Excessive swapping (known as thrashing) can significantly
degrade system performance.
2. Increased Disk Wear:
o Frequent read/write operations to the disk can shorten its
lifespan, especially for SSDs.
3. Complexity:
o Implementing virtual memory requires complex memory
management algorithms.
Comparison: Virtual Memory vs. Physical Memory

 Location: virtual memory lives on the hard drive (HDD/SSD); physical memory is on the RAM chips.
 Speed: virtual memory is slower (depends on storage speed); physical memory is much faster.
 Size: virtual memory is large (limited by storage size); physical memory is limited (determined by the installed RAM).
 Cost: virtual memory is cheaper (uses existing storage); physical memory is more expensive.
 Purpose: virtual memory extends RAM capacity; physical memory is temporary storage for active data.

Internal Memory
Definition:
Internal memory refers to memory that is directly accessible by the CPU
and is used for storing data and instructions during execution. It is also
known as primary memory or main memory.
 RAM (Random Access Memory): Temporarily stores active data and
instructions.
 Cache Memory: A small, high-speed memory close to the CPU that
stores frequently accessed data.
 ROM (Read-Only Memory): Permanently stores critical system
instructions, like the BIOS.
 Registers: Small storage inside the CPU used for immediate data
processing.

Characteristics:
1. Proximity: Directly connected to or embedded in the CPU.
2. Speed: Very fast compared to external memory.
3. Volatility:
o RAM and cache are volatile (data is lost when the power is
off).
o ROM is non-volatile (retains data permanently).
4. Capacity: Smaller in size (bytes to a few gigabytes).
5. Cost: More expensive per unit compared to external memory.
Functions:
 Temporary storage for data actively being processed.
 Reduces CPU idle time by quickly supplying instructions and data.
 Essential for program execution.
External Memory
Definition:
External memory refers to storage devices outside the CPU that are used
for storing data permanently or for long-term use. It is also called
secondary memory or auxiliary memory.
Examples:
 Hard Disk Drives (HDDs) and Solid State Drives (SSDs).
 Flash Drives (USB drives).
 Optical Disks (CDs, DVDs, Blu-ray).
Characteristics:
1. Proximity: Located outside the CPU, connected via interfaces like USB or Ethernet.
2. Speed: Slower than internal memory.
3. Volatility: Non-volatile, meaning data is retained even when power
is off.
4. Capacity: Larger in size (gigabytes to terabytes or more).
5. Cost: Cheaper per unit compared to internal memory.

Functions:
 Long-term data storage for files, programs, and backups.
 Portable storage for transferring data between systems.
Comparison of Internal and External Memory

 Location: internal memory is inside the computer (CPU or motherboard); external memory is outside the CPU, often removable.
 Speed: internal memory is very fast (RAM, cache, registers); external memory is slower (HDD, SSD, USB drives).
 Volatility: internal memory can be volatile (e.g., RAM) or non-volatile (e.g., ROM); external memory is non-volatile (e.g., SSDs, HDDs).
 Capacity: internal memory is smaller (bytes to a few GBs); external memory is larger (GBs to several TBs).
 Cost per Unit: internal memory is more expensive; external memory is less expensive.
 Usage: internal memory provides temporary storage for active data; external memory provides long-term and permanent data storage.
 Access by CPU: internal memory is directly accessible; external memory is indirectly accessible via I/O devices.
 Portability: internal memory is non-portable (fixed in the system); external memory is often portable.

Memory Management
Definition:
Memory management is the process of efficiently allocating, organizing,
and controlling a computer's memory resources. It ensures that the
system uses its memory effectively to run programs and processes while
avoiding conflicts, wastage, or errors. Memory management is primarily
handled by the operating system (OS).
Key Goals of Memory Management:
1. Efficient Resource Utilization:
o Maximize the use of available physical and virtual memory.
2. Process Isolation:
o Ensure processes do not interfere with each other's memory.
3. Multitasking Support:
o Allow multiple programs to run simultaneously by dividing
memory between them.
4. Memory Protection:
o Prevent unauthorized access or modification of memory by
programs.
5. Minimize Latency:
o Optimize memory access time and reduce delays.

Memory Management Techniques:


Memory management uses various techniques to optimize the use of
memory and ensure smooth operation:

1. Memory Allocation Techniques:


 Allocation refers to how memory is divided and assigned to
processes.
a. Contiguous Memory Allocation:
 Allocates a continuous block of memory to a process.
b. Non-Contiguous Memory Allocation:
 Divides memory into non-adjacent blocks and uses pointers to
connect them.
 Examples: Paging and segmentation.
2. Paging:
 Memory is divided into fixed-sized blocks called pages (in virtual
memory) and frames (in physical memory).
 Pages are loaded into frames as needed.
3. Segmentation:
 Divides memory into segments of varying sizes based on logical
divisions (e.g., code, data, stack).
 Each segment has a unique name or number.
4. Swapping:
 Temporarily moves inactive processes from RAM to secondary
storage (swap space) to free up memory for active processes.
 When required, the data is swapped back into RAM.
5. Virtual Memory:
 Uses part of the storage drive (HDD/SSD) to simulate RAM, enabling
larger programs to run on systems with limited physical memory.

How Memory Management Provides Protection


Memory management ensures the security and stability of a system by
protecting memory spaces from unauthorized or accidental access by
different programs or processes. This protection prevents errors,
malicious activities, and interference between processes. Below are the
key ways memory management provides protection:
1. Process Isolation
 Purpose: Each process in a system is given its own memory space,
and no process is allowed to access the memory of another process.
 Benefit: Prevents one process from corrupting another process's
data or code.
2. Memory Access Control
 Purpose: Control which parts of memory a process can access and
what operations (read, write, execute) it can perform.
 Benefit: Prevents accidental overwriting of data or execution of
malicious code.
3. Virtual Memory
 Purpose: Provides an abstraction of memory, isolating the physical
memory from the virtual address space of processes.
 Benefit: Keeps processes isolated and prevents memory conflicts.
4. Swapping and Swap Space Protection
 Purpose: Secure memory content when it is swapped out to disk.
 Benefit: Prevents unauthorized access to swapped-out memory.
5. Hardware-Assisted Memory Protection
 Purpose: Leverage hardware features to enforce memory protection
efficiently.
 Benefit: Protects against common attacks like buffer overflows and
code injection.

Modes of Data Transfer


Data transfer refers to the process of moving data between a source
(e.g., memory, peripheral device, or CPU) and a destination (e.g.,
another device, memory, or CPU). There are different modes of data
transfer based on how data flows and how control signals are handled.

Types of Data Transfer Modes


1. Programmed I/O
2. Interrupt-Driven I/O
3. Direct Memory Access (DMA)

1. Programmed I/O
 Definition: The CPU is responsible for managing the data transfer
by executing specific instructions (polling or checking) for every
byte or word of data.
 Process:
o The CPU continuously polls (checks) the status of the
peripheral to determine if it is ready to send or receive data.
o Once the peripheral is ready, the CPU initiates the transfer.
 Characteristics:
o The CPU is heavily involved in the transfer process.
o Slower because the CPU is busy waiting (polling) for the
device to be ready.
 Applications: Used in simple systems or where transfer speed is not
critical.
 Example: Transferring data between the CPU and a printer.
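A minimal Python sketch of this polling loop (the ToyDevice class is hypothetical, standing in for a real device's status and data registers):

```python
class ToyDevice:
    """Hypothetical peripheral: reports ready on every other status check."""
    def __init__(self):
        self.checks = 0
        self.received = []

    def ready(self):
        self.checks += 1
        return self.checks % 2 == 0   # pretend the device is slow

    def write(self, word):
        self.received.append(word)

def programmed_io_write(device, data):
    """CPU-managed transfer: poll the status, then move one word at a time."""
    for word in data:
        while not device.ready():     # busy-wait: the CPU does nothing useful here
            pass
        device.write(word)

dev = ToyDevice()
programmed_io_write(dev, [10, 20, 30])
print(dev.received)  # [10, 20, 30]
```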

2. Interrupt-Driven I/O
 Definition: The peripheral device interrupts the CPU when it is
ready to send or receive data, eliminating the need for the CPU to
continuously poll.
 Process:
1. The device sends an interrupt signal to the CPU when it is
ready.
2. The CPU temporarily stops its current task to handle the
interrupt.
3. Data transfer occurs, and the CPU resumes its previous task.
 Characteristics:
o More efficient than programmed I/O because the CPU does not
waste time polling.
o Requires an interrupt controller to manage multiple devices.
o Used when real-time response is needed.
 Applications: Keyboard inputs, mouse inputs, or real-time systems.
 Example: A keyboard interrupt triggers the CPU to process user
input.

3. Direct Memory Access (DMA)


 Definition: The CPU delegates data transfer to a dedicated
hardware controller called the DMA controller, which manages the
transfer directly between the device and memory.
 Process:
1. The CPU initiates the transfer by sending commands to the
DMA controller.
2. The DMA controller takes over the bus system and performs
the transfer independently. The DMA controller directly
transfers data between the memory and the I/O device without
involving the CPU.
3. After the transfer is complete, the DMA controller releases
control of the system bus and notifies the CPU.
 Characteristics:
o The CPU is free to perform other tasks during the transfer.
o Faster and more efficient for bulk data transfer.
o Often used for high-speed devices like hard drives, network
interfaces, and graphics cards.
 Applications: Large block transfers such as file copying, disk I/O, or
video rendering.
 Example: Transferring data from a hard drive to memory without
CPU intervention.

Comparison of Data Transfer Modes

 CPU Involvement: high for Programmed I/O; medium for Interrupt-Driven I/O; low for DMA.
 Efficiency: low for Programmed I/O (polling overhead); moderate for Interrupt-Driven I/O; high for DMA.
 Data Transfer Speed: slow for Programmed I/O; faster than polling for Interrupt-Driven I/O; very fast for DMA.
 CPU Utilization: low for Programmed I/O (wasted time); better for Interrupt-Driven I/O; high for DMA (CPU free for other tasks).
 Hardware Complexity: low for Programmed I/O; moderate for Interrupt-Driven I/O; high for DMA.
 Best Use Case: simple devices for Programmed I/O; real-time data transfer for Interrupt-Driven I/O; high-speed data transfer for DMA.

The choice of data transfer mode depends on system requirements, device speed, and efficiency. While Programmed I/O is simpler and suited for low-speed devices, Interrupt-Driven I/O is more efficient for real-time systems. For high-speed or bulk transfers, DMA is the preferred method, as it minimizes CPU involvement and maximizes performance.

Definition of DMA (Direct Memory Access):


DMA (Direct Memory Access) is a feature in computer systems that
allows certain hardware subsystems (like disk drives, network cards, or
graphics cards) to access the main system memory (RAM) directly,
bypassing the CPU.
Need for DMA:
1. CPU Efficiency: Without DMA, the CPU would have to manage all
data transfers between peripherals and memory, which consumes a
significant amount of CPU time. DMA frees the CPU from these
tasks, improving overall system efficiency.
2. Faster Data Transfer: DMA transfers data directly between memory
and peripherals at high speeds without the overhead of CPU
intervention.
3. Reduced Latency: By avoiding the need for CPU involvement in each
data transaction, DMA reduces latency in data transfer processes.
4. Concurrent Operations: With DMA, the CPU can execute other
instructions while the DMA controller handles data transfer,
enabling multitasking and better system performance.
DMA Controller:
A DMA controller is a hardware module that manages DMA transfers. It
acts as an intermediary between the CPU, memory, and I/O devices. The
DMA controller ensures that data transfer is performed efficiently and
without CPU intervention.

Interrupt:
An interrupt is a signal sent to the CPU by a hardware device or software
process to indicate that an event requires immediate attention. It
temporarily halts the CPU's current operations, saves its state, and
executes a specific service routine (Interrupt Service Routine or ISR) to
handle the event. Once the interrupt is serviced, the CPU resumes its
previous operations.
Interrupt Cycle
The interrupt cycle is very similar to the instruction cycle. At the very start, the status of flip-flop R is checked. If it is 0, there is no interrupt and the CPU can continue its ongoing tasks. When R = 1, the ongoing process must halt because an interrupt has occurred.
While R = 0, the CPU continues its tasks and checks the status of IEN in parallel. If IEN is 1, FGI and FGO are checked in order of priority. If either of these flip-flops is found set, R is immediately set to 1.
When R = 1, the content of the PC (the address of the next instruction in memory) is saved at M[0], and the PC is then set to 1 so that it points to the BUN operation. The instruction at M[1] is a BUN instruction that transfers control to the appropriate I/O reference instruction stored at some other location in memory. Separate fetch, decode, and execute phases then service that I/O reference instruction.
Once the I/O reference instruction has executed completely, the PC is loaded with 0, where it finds the saved return address.
Horizontal and vertical microprogramming are two approaches to
designing microcode in computer systems. Microprogramming is a
method used to implement the control logic of a processor by using a
sequence of low-level instructions (microinstructions) stored in a
microprogram memory.
1. Horizontal Microprogramming
In horizontal microprogramming, each microinstruction specifies a wide
set of control signals that can be executed in parallel.
Characteristics:
 Wide control word: Each microinstruction is very wide (e.g.,
hundreds of bits), with each bit controlling a specific part of the
hardware.
 Highly parallel execution: Multiple control signals can be activated
simultaneously
 Harder to program: Requires careful design to ensure that no
conflicting control signals are active.
 Faster execution: Parallelism reduces the number of cycles needed
for a task.
Advantages:
 Allows for very fine-grained control over hardware.
 High degree of parallelism enables faster microinstruction
execution.
Disadvantages:
 Control word size is very large, leading to increased memory usage.
 Complexity in programming and debugging.

2. Vertical Microprogramming
In vertical microprogramming, each microinstruction specifies fewer control signals and relies on encoding to reduce the width of the control word.
Characteristics:
 Narrow control word: Each microinstruction has fewer bits, as
control signals are encoded.
 Sequential execution: A single microinstruction may activate only
one or a few control signals.
 Simpler programming: Easier to design and manage compared to
horizontal microprogramming.
 Slower execution: Decoding can increase execution time.
 Compact design: Reduces the memory required for microprogram
storage.
Advantages:
 Smaller microprogram memory requirements.
 Easier to design and maintain.
 More scalable for complex systems.
Disadvantages:
 Reduced parallelism due to encoding.
 Decoding overhead increases latency.

Comparison Table

 Control Word Width: large in horizontal microprogramming (hundreds of bits); small in vertical (tens of bits).
 Parallelism: high in horizontal; low in vertical.
 Execution Speed: faster in horizontal; slower in vertical.
 Complexity: complex in horizontal; simpler in vertical.
 Memory Usage: high in horizontal; low in vertical.
 Encoding: not used in horizontal (direct signals); used in vertical (encoded signals).

Addressing Mode in Computer Architecture


An addressing mode specifies how the operand (data) of an instruction is
accessed. It determines the way in which the processor locates the data
required for executing an operation. Addressing modes enhance the
flexibility and functionality of a computer's instruction set.
Summary Table

 Immediate: the operand is part of the instruction. Example: ADD R1, #5
 Register: the operand is in a register. Example: ADD R1, R2
 Direct: the operand is at a specific memory address. Example: LOAD R1, 1000
 Indirect: the memory address of the operand is stored in a register. Example: LOAD R1, (R2)
 Register Indirect: a register holds the address of the operand. Example: ADD R1, (R2)
 Indexed: effective address = Base + Index. Example: LOAD R1, 1000(R2)
 Base: effective address = Base register + Offset. Example: LOAD R1, BASE(R2)
 Relative: effective address = PC + Offset. Example: JUMP 100
 Implied: the operand is implied in the instruction. Example: CLR
 Auto-Increment/Auto-Decrement: access the operand, then modify the address held in the register. Example: LOAD R1, (R2)+

The instruction cycle, also known as the fetch-decode-execute cycle, is the fundamental process through which a computer's central processing unit (CPU) operates. It describes how the CPU processes instructions stored in memory. The cycle has three main stages, which repeat continuously:

1. Fetch
 The CPU retrieves (or fetches) an instruction from memory.
 Steps in the fetch phase:
1. The Program Counter (PC) holds the memory address of the
next instruction to be executed.
2. The CPU uses the Memory Address Register (MAR) to send this
address to the memory.
3. The instruction is fetched from memory and placed into the
Memory Data Register (MDR) or directly into the Instruction
Register (IR).
4. The Program Counter is incremented to point to the next
instruction.

2. Decode
 The CPU interprets (or decodes) the fetched instruction.
 Steps in the decode phase:
1. The instruction is sent to the Control Unit (CU).
2. The CU identifies the operation to be performed (e.g.,
addition, memory access, branching) by decoding the binary
instruction (opcode).
3. The CPU determines what data is needed (operands) and
where it is located.

3. Execute
 The CPU performs the operation specified by the instruction.
 Steps in the execute phase:
1. The appropriate unit (e.g., Arithmetic Logic Unit (ALU),
memory, or I/O) carries out the operation.
2. If needed, data is fetched from memory or registers.
3. The result may be stored in a register, written back to
memory, or sent to an output device.
4. The status of the operation is updated in the flags (e.g., zero
flag, carry flag).

The Cycle Then Repeats


After execution, the CPU starts the cycle again with the next instruction
in memory
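A minimal Python sketch of this loop (a toy machine with a made-up two-field instruction format, purely to show how fetch, decode, and execute hand off to one another):

```python
# Toy program: each instruction is (opcode, operand).
memory = [("LOAD", 5), ("ADD", 3), ("HALT", 0)]
pc, acc = 0, 0                      # Program Counter and accumulator

while True:
    instruction = memory[pc]        # fetch (PC -> MAR -> memory -> IR)
    pc += 1                         # increment PC to the next instruction
    opcode, operand = instruction   # decode: split opcode and operand
    if opcode == "LOAD":            # execute the decoded operation
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "HALT":
        break

print(acc)  # 8
```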

CISC Characteristics:
1. Complex, multi-step instructions.
2. Variable instruction size.
3. Fewer instructions in code.
4. Many addressing modes.
5. Slower (higher cycles per instruction).
6. Uses microprogramming.
7. Examples: Intel x86, IBM System/360.
RISC Characteristics:
1. Simple, single-step instructions.
2. Fixed instruction size.
3. More instructions in code.
4. Few addressing modes.
5. Faster (lower cycles per instruction).
6. Uses hardwired control.
7. Examples: ARM, MIPS, SPARC.

Hardwired Control vs Microprogrammed Control


Both hardwired control and microprogrammed control are methods used
to control the operation of a CPU. They determine how instructions are
fetched, decoded, and executed. Here's a comparison between the two:

1. Hardwired Control
 Control Mechanism: Uses fixed, combinational logic circuits (gates,
flip-flops, etc.) to generate control signals.
 Speed: Faster because control signals are generated directly by
hardware with minimal delay.
 Complexity: Less flexible but more efficient for simple operations.
The design is more complex, requiring more gates and circuitry to
handle each instruction.
 Cost: Generally more expensive to design and implement due to
hardware complexity.
 Flexibility: Not flexible; changes to the instruction set or control
logic require redesigning the hardware.
 Usage: Typically used in RISC architectures where simple operations
are performed frequently and control logic is relatively
straightforward.
 Example: Classic RISC processors, such as early MIPS designs, used hardwired control.

2. Microprogrammed Control
 Control Mechanism: Uses a control memory (often ROM or RAM) to
store a set of microinstructions. These microinstructions define
control signals for each operation.
 Speed: Slower than hardwired control due to the need to fetch
microinstructions from memory.
 Complexity: More flexible but can be less efficient because it
requires additional memory and decoding steps to fetch
microinstructions.
 Cost: Cheaper to design and modify because the control logic can
be changed by altering the microprogram stored in memory,
without needing hardware changes.
 Flexibility: More flexible; new instructions or changes in control
logic can be added by modifying the microprogram.
 Usage: Typically used in CISC architectures where complex
instructions need to be supported and where flexibility is more
important.
 Example: The IBM System/360 and many modern x86 processors
use microprogrammed control.

Key Differences:

 Control Mechanism: hardwired control uses fixed combinational circuits; microprogrammed control is stored in memory (microprograms).
 Speed: hardwired is faster (direct hardware control); microprogrammed is slower (microinstructions must be fetched).
 Flexibility: hardwired is less flexible and hard to modify; microprogrammed is more flexible and easy to modify via microprograms.
 Complexity: hardwired requires a more complex hardware design; microprogrammed adds complexity in control memory and decoding.
 Cost: hardwired is higher due to hardware complexity; microprogrammed is lower and easier to change and maintain.
 Efficiency: hardwired is more efficient for simple operations; microprogrammed is less efficient for complex operations.
 Use Case: hardwired is used in RISC architectures or simple control logic; microprogrammed is used in CISC architectures or complex systems.

Summary
 Hardwired control is fast but rigid and complex, making it suitable
for simpler tasks.
 Microprogrammed control is slower but more flexible, ideal for
systems where changes or complex instructions are needed.
What is Assembly Language?
Assembly language is a low-level programming language that provides a
symbolic representation of machine code. It is specific to a particular
processor architecture and uses mnemonics (e.g., MOV, ADD, SUB) for
instructions, making it easier for humans to write and understand
compared to binary machine code.
 Key Features:
1. Assembly language is hardware-dependent.
2. It allows direct control of hardware components like registers,
memory, and I/O.
3. Requires an assembler to translate the code into machine
language (binary).
Shift Registers
A shift register is a sequential logic circuit that stores and transfers
data. It is made up of flip-flops, where the stored data is shifted from one
flip-flop to another on the application of a clock signal. Shift registers are
widely used in digital circuits for temporary data storage, data transfer,
and data manipulation.

Types of Shift Registers


Shift registers are categorized based on the way data is input and
output:
1. Serial-In Serial-Out (SISO)
 Operation: Data is input bit by bit (serially) and shifted through the
register, with output also being serial.
 Applications: Used in applications where data transfer is slow, such
as communication systems.
 Example: Converting parallel data to serial data.
2. Serial-In Parallel-Out (SIPO)
 Operation: Data is input serially but is made available at all flip-flop
outputs in parallel.
 Applications: Used to convert serial data into parallel data for
systems that require parallel inputs, such as digital displays.
3. Parallel-In Serial-Out (PISO)
 Operation: Data is loaded into the register in parallel, then shifted
out serially.
 Applications: Used to send parallel data serially over a
communication line to save bandwidth.
4. Parallel-In Parallel-Out (PIPO)
 Operation: Data is loaded into the register in parallel and output in
parallel.
 Applications: Used for fast data transfer between two systems.
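A minimal Python sketch of a 4-bit shift register (toy model): each clock pulse shifts the stored bits one position, which covers the SISO and SIPO behaviors described above.

```python
class ShiftRegister:
    """4-bit shift register: serial input, with serial and parallel output."""
    def __init__(self, width=4):
        self.bits = [0] * width

    def clock(self, serial_in):
        """On each clock pulse, shift everything one place and take in a new bit."""
        serial_out = self.bits.pop()        # bit falling off the end (SISO output)
        self.bits.insert(0, serial_in)
        return serial_out

    def parallel_out(self):
        return list(self.bits)              # all flip-flop outputs at once (SIPO)

reg = ShiftRegister()
for bit in [1, 0, 1, 1]:                    # shift a 4-bit word in serially
    reg.clock(bit)
print(reg.parallel_out())                   # [1, 1, 0, 1]
```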

Duality in Boolean Algebra


Duality is a fundamental principle in Boolean algebra, stating that every
Boolean expression remains valid if we interchange:
1. AND (·) ↔ OR (+)
2. 1 ↔ 0
How Duality Works
Given a Boolean equation or expression, to find its dual:
1. Replace every AND (·) operation with an OR (+) operation, and vice
versa.
2. Replace constants 1 with 0, and vice versa.
3. Leave the variables and complements (NOT operations, like A' or
¬A) unchanged.

Examples of Duality
Example 1: A Simple Boolean Expression
 Original: A · (B + C) = A · B + A · C
o (This is the distributive property.)
 Dual: A + (B · C) = (A + B) · (A + C)

Example 2: A Theorem in Boolean Algebra


 Original: A + 0 = A
 Dual: A · 1 = A
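A short Python check of Example 1 and its dual (brute-force truth-table verification over all input combinations; illustrative only):

```python
from itertools import product

for a, b, c in product([0, 1], repeat=3):
    # Original: A · (B + C) = A · B + A · C
    assert (a and (b or c)) == ((a and b) or (a and c))
    # Dual:     A + (B · C) = (A + B) · (A + C)
    assert (a or (b and c)) == ((a or b) and (a or c))

print("Both the identity and its dual hold for all inputs.")
```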
Race Around Condition
The race around condition is a phenomenon that occurs in JK flip-flops when both inputs J and K are set to 1 and the flip-flop is triggered by a clock signal that remains high (level-triggered) for a sufficient duration. In this condition, the output toggles (flips) continuously between 0 and 1 during the high phase of the clock, leading to an unstable or unpredictable output.

How Race Around Condition Occurs


1. In a JK flip-flop:
o When J = K = 1, the output toggles on every clock pulse.
o If the clock pulse duration is longer than the flip-flop's propagation delay, multiple toggles can occur within a single clock cycle.
2. The repeated toggling creates instability, as the final state of the flip-flop depends on when the clock pulse ends.

Conditions for Race Around


1. A JK flip-flop is used.
2. J = K = 1.
3. The clock pulse width is greater than the propagation delay of the flip-flop.

How to Solve or Avoid the Race Around Condition


1. Edge-Triggered Flip-Flops:
o Use edge-triggered flip-flops that respond only to the rising or
falling edge of the clock signal, rather than its level.
2. Master-Slave JK Flip-Flop:
o A master-slave configuration is used to ensure that the
toggling occurs only once per clock cycle. The master is
triggered on the clock's rising edge, and the slave is triggered
on the falling edge.
3. Reducing Clock Pulse Width:
o Ensure the clock pulse width is less than the propagation
delay of the flip-flop to prevent multiple toggles within a
single cycle.
4. Using T Flip-Flops:
o Replace the JK flip-flop with a T flip-flop where toggling
happens only once per clock pulse.

Latches
 Definition: A latch is a simple memory device that stores a bit of
data. It changes its state based on the input and control signal,
typically level-triggered.
 Control: The output of a latch can change as long as the control
signal (often called enable) is active. It is level-sensitive, meaning it
reacts to the level of the control signal (high or low).
 Types:
o SR Latch: The simplest form, made of two cross-coupled NOR
or NAND gates.
o D Latch: A more controlled version, where the data input (D) is
transferred to the output when the enable signal is active.
 Characteristics:
o Level Triggered: Output changes as long as the enable signal
is active.
o Simple: Easier to design but can cause glitches if the enable
signal is unstable.

Flip-Flops
 Definition: A flip-flop is a bistable device that also stores a bit of
data but is edge-triggered, meaning it only changes its output on
the rising or falling edge of a clock signal.
 Control: Flip-flops are edge-triggered, meaning their output only
changes at the transition of a clock signal (either rising edge or
falling edge).
 Types:
o D Flip-Flop: Stores the input data on the clock edge.
o JK Flip-Flop: More complex, with inputs for setting, resetting,
and toggling.
o T Flip-Flop: Toggles the output on each clock edge.
 Characteristics:
o Edge Triggered: Output only changes at specific clock edges.
o More Stable: Less prone to glitches compared to latches.

Key Differences:

 Triggering: a latch is level-sensitive; a flip-flop is edge-triggered (clock edge).
 Control Signal: a latch is active as long as the enable signal is active; a flip-flop is active only on the clock edge.
 Response: a latch can change its output at any time during the active signal; a flip-flop changes its output only at clock edges.
 Complexity: a latch has a simpler design; a flip-flop is more complex and more stable.

 Edge Triggering: Responds only at the rising or falling edge of the clock signal, providing stable operation and reducing glitches, making it suitable for synchronous systems.
 Level Triggering: Responds to the level of the control signal (high or low), often leading to unstable behavior if the control signal is not properly managed, but simpler and useful in certain applications.

 Control Signal: edge triggering responds to clock edges (rising or falling); level triggering responds to the level (high or low).
 Timing Sensitivity: edge triggering is sensitive to clock transitions; level triggering is sensitive to the active level of the control signal.
 Operation: edge triggering changes the output only on a clock transition; level triggering changes the output as long as the enable signal is active.
 Common Devices: edge triggering is used in flip-flops (D, JK, T); level triggering is used in latches (SR, D).
 Risk of Glitches: lower for edge triggering (less likely to change unexpectedly); higher for level triggering (more prone to glitches).
 Suitability: edge triggering is best for synchronous (clock-driven) systems; level triggering suits asynchronous circuits.

A Johnson counter is a type of ring counter where the inverted output of the last flip-flop is fed back to the first flip-flop. It generates a unique counting sequence that repeats every 2n states, where n is the number of flip-flops.
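A minimal Python sketch of the feedback rule (toy model): the complement of the last bit shifts into the front, giving 2n distinct states for n flip-flops.

```python
def johnson_states(n):
    """Enumerate the 2n states of an n-flip-flop Johnson counter."""
    state = [0] * n
    states = []
    for _ in range(2 * n):
        states.append("".join(map(str, state)))
        feedback = 1 - state[-1]         # inverted output of the last flip-flop
        state = [feedback] + state[:-1]  # shift; feedback enters the first stage
    return states

print(johnson_states(3))
# ['000', '100', '110', '111', '011', '001']
```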

Modulus Counter
A modulus counter is a type of counter that counts from 0 to a specified
value (the modulus) and then resets back to 0. The modulus determines
the number of unique states the counter goes through before it repeats.
 Modulus: The modulus is the total number of states the counter can
hold. For example, a modulus-5 counter counts from 0 to 4 (5
states).
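A tiny Python illustration of the reset rule: the count advances modulo the modulus, so a modulus-5 counter cycles 0, 1, 2, 3, 4, 0, and so on.

```python
MOD = 5
count = 0
for tick in range(8):          # eight clock pulses
    print(count, end=" ")      # 0 1 2 3 4 0 1 2
    count = (count + 1) % MOD  # reset to 0 after reaching MOD - 1
```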

Datapath
A datapath is a critical component of a computer's central processing
unit (CPU) that is responsible for executing arithmetic, logical, and data
manipulation operations. It is the collection of functional units (such as
registers, ALUs, and multiplexers) and their interconnections, which work
together to process data in a system.

Cache Mapping: Direct-Mapped vs. Set-Associative Cache


In a cache memory system, data is temporarily stored for fast access.
There are different ways to map data from main memory to cache. Two
common methods are Direct Mapping and Set-Associative Mapping.

1. Direct-Mapped Cache
 Definition: In direct-mapped cache, each block of memory maps to
exactly one cache line. This means that for each memory address,
there is only one possible location in the cache where the data can
be stored.
 How it works:
o The memory address is divided into three parts: tag, index,
and block offset.
o The index is used to find a specific cache line, while the tag is
compared with the tag stored in that line to verify if the data
is present (cache hit or miss).
 Advantages:
o Simple and easy to implement.
o Fast access time for cache lookup.
 Disadvantages:
o Cache conflicts: If multiple memory blocks map to the same
cache line, they will overwrite each other, causing more cache
misses (thrashing).
 Example: In a 4-line cache with 16 memory blocks, each memory
block is mapped to one specific cache line.

2. Set-Associative Cache
 Definition: In set-associative cache, each memory block can be
mapped to any one of a set of cache lines, making the cache more
flexible. The cache is divided into several sets, and each set can
contain multiple cache lines. A memory block can map to any line
within a set.
 How it works:
o The memory address is divided into three parts: tag, set
index, and block offset.
o The set index points to a specific set, and the tag is compared
to all the tags in that set. If a match is found in one of the
lines, it’s a cache hit.
 Advantages:
o More flexibility than direct-mapped cache, reducing cache
conflicts.
o Better hit rate compared to direct-mapped cache.
 Disadvantages:
o Slightly more complex than direct-mapped cache due to the
need to check multiple lines within a set.
o Slightly slower than direct-mapped cache because of the need
to compare tags in multiple lines in the set.
 Example: In a 4-line cache, a 2-way set-associative cache would
have 2 sets, each with 2 cache lines. A memory block could map to
either of the two lines in the set.

Comparison:

 Mapping: in a direct-mapped cache, each block maps to exactly one cache line; in a set-associative cache, each block maps to any line within one set.
 Access Time: direct-mapped is faster (a single cache-line lookup); set-associative is slower (multiple lines in a set must be checked).
 Complexity: direct-mapped is simple and easy to implement; set-associative is more complex due to the set organization.
 Cache Miss Rate: higher for direct-mapped, due to more conflicts; lower for set-associative, because of the added flexibility.
 Example: direct-mapped has one possible line for each memory block; set-associative has multiple lines per set (e.g., 2-way, 4-way).
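A minimal Python sketch of the address split described above (hypothetical sizes: 16-byte blocks and a 4-line cache; in the set-associative case the index bits would select a set rather than a single line):

```python
BLOCK_SIZE = 16   # bytes per block -> 4 offset bits
NUM_LINES = 4     # cache lines     -> 2 index bits

def split_address(address):
    """Split a memory address into (tag, index, offset) for a direct-mapped cache."""
    offset = address % BLOCK_SIZE
    block = address // BLOCK_SIZE
    index = block % NUM_LINES      # which cache line the block must use
    tag = block // NUM_LINES       # identifies which block occupies that line
    return tag, index, offset

print(split_address(0x1A7))  # address 423 -> tag 6, index 2, offset 7
```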

Instruction Cycle
The instruction cycle is the cycle through which a CPU (Central
Processing Unit) fetches, decodes, and executes instructions from
memory to perform tasks. The cycle is repetitive and continues until the
program is completed.
The instruction cycle is typically divided into the following stages:

1. Fetch
 Purpose: Retrieve the next instruction from memory.
 Process:
o The Program Counter (PC) holds the address of the next
instruction.
o The CPU uses the PC to fetch the instruction from the memory
(RAM).
o The fetched instruction is stored in the Instruction Register
(IR).
 Actions:
o The PC is incremented to point to the next instruction.
o The instruction is fetched from the memory location.

2. Decode
 Purpose: Interpret the fetched instruction and prepare the
necessary control signals for execution.
 Process:
o The instruction in the Instruction Register (IR) is decoded by
the Control Unit (CU).
o The instruction is broken into opcode (operation code) and
operand.
o The opcode specifies the operation to be performed (e.g., add,
subtract), and the operand(s) specify the data or memory
addresses involved.
 Actions:
o The Control Unit (CU) generates control signals based on the
opcode.
o Identifies which registers or memory locations to use.

3. Execute
 Purpose: Perform the operation specified by the instruction.
 Process:
o The Arithmetic and Logic Unit (ALU) performs the operation
(e.g., arithmetic operations like addition or logical operations
like AND).
o If the instruction involves memory, data is fetched from or
written to the memory.
o If the instruction is a jump or branch, the Program Counter
(PC) is updated to the new address.
 Actions:
o The required data is fetched from registers or memory.
o The ALU performs the operation.
o The result is stored in the appropriate register or memory
location.

4. Store (Optional)
 Purpose: Store the result of the execution (if needed).
 Process:
o The result of the operation (from the ALU or other units) is
stored in a register or written back to memory.
 Actions:
o The result is written back to a destination (e.g., register,
memory).

Cycle Continuation
 After completing one instruction, the cycle repeats.
 The Program Counter (PC) points to the next instruction, and the
cycle continues until the program ends (or until an interrupt
occurs).
