
Hawassa University

Department of Computer Science

Module Title: Operating System and Computer Architecture

Course Title: Microprocessor and Assembly Programming

INDIVIDUAL ASSIGNMENT

NAME: AYANO BORESA

IDNO: 0472/14

SUBMITTED TO: Mr. ROBA BARETO

SUBMISSION DATE: 24/01/2024
1. What is the concept of virtual memory, cache memory and FPU (Floating
point unit) of a microprocessor?

Virtual memory is a technique that allows a computer to use more memory than it physically
has available. It does this by temporarily transferring data from the computer’s RAM to its hard
disk. This frees up space in the RAM for other programs to use. Virtual memory is managed by
the operating system and is transparent to the user.
Cache memory is a small amount of high-speed memory that is used to store frequently
accessed data. It is faster than main memory (RAM) and is used to reduce the average time it
takes to access data from the main memory. Cache memory is managed by hardware and is
transparent to the user.
Floating point unit (FPU) is a specialized processor that performs arithmetic operations on
floating-point numbers. Floating-point numbers are used to represent real numbers and are
stored in a format that allows for a wide range of values and precision. The FPU is used to
perform complex mathematical operations that are required in scientific and engineering
applications.
What is virtual memory?
Virtual memory is a memory management technique where secondary memory can
be used as if it were a part of the main memory. Virtual memory is a common
technique used in a computer's operating system (OS).

Virtual memory uses both hardware and software to enable a computer to compensate for physical memory shortages, temporarily transferring data from random access memory (RAM) to disk storage. Mapping chunks of memory to disk files enables a computer to treat secondary memory as though it were main memory.

Today, most personal computers (PCs) come with at least 8 GB (gigabytes) of RAM.
But, sometimes, this is not enough to run several programs at one time. This is
where virtual memory comes in. Virtual memory frees up RAM by swapping data
that has not been used recently over to a storage device, such as a hard drive or
solid-state drive (SSD).

Virtual memory is important for improving system performance, multitasking and using large programs. However, users should not overly rely on virtual memory, since it is considerably slower than RAM. If the OS has to swap data between virtual memory and RAM too often, the computer will begin to slow down -- this is called thrashing.

Virtual memory was developed at a time when physical memory -- also referenced as
RAM -- was expensive. Computers have a finite amount of RAM, so memory will
eventually run out when multiple programs run at the same time. A system using
virtual memory uses a section of the hard drive to emulate RAM. With virtual
memory, a system can load larger or multiple programs running at the same time,
enabling each one to operate as if it has more space, without having to purchase
more RAM.

How virtual memory works


Virtual memory uses both hardware and software to operate. When an application is
in use, data from that program is stored in a physical address using RAM. A memory
management unit (MMU) maps the address to RAM and automatically translates
addresses. The MMU can, for example, map a logical address space to a
corresponding physical address.

If, at any point, the RAM space is needed for something more urgent, data can be
swapped out of RAM and into virtual memory. The computer's memory manager is
in charge of keeping track of the shifts between physical and virtual memory. If that
data is needed again, the computer's MMU will use a context switch to resume
execution.

While copying virtual memory into physical memory, the OS divides memory with a
fixed number of addresses into either pagefiles or swap files. Each page is stored on
a disk, and when the page is needed, the OS copies it from the disk to main memory
and translates the virtual addresses into real addresses.

However, the process of swapping virtual memory to physical is rather slow. This
means using virtual memory generally causes a noticeable reduction in performance.
Because of swapping, computers with more RAM are considered to have better
performance.

Types of virtual memory


A computer's MMU manages virtual memory operations. In most computers, the
MMU hardware is integrated into the central processing unit (CPU). The CPU also
generates the virtual address space. In general, virtual memory is either paged or
segmented.

Paging divides memory into sections or paging files. When a computer uses up its
available RAM, pages not in use are transferred to the hard drive using a swap file. A
swap file is a space set aside on the hard drive to be used as the virtual memory
extension for the computer's RAM. When the swap file is needed, it is sent back to
RAM using a process called page swapping. This system ensures the computer's OS
and applications do not run out of real memory. The maximum size of the page file
can be 1 ½ to four times the physical memory of the computer.

The virtual memory paging process uses page tables, which translate the virtual
addresses that the OS and applications use into the physical addresses that the MMU
uses. Entries in the page table indicate whether the page is in RAM. If the OS or a
program does not find what it needs in RAM, then the MMU responds to the missing
memory reference with a page fault exception to get the OS to move the page back
to memory when it is needed. Once the page is in RAM, its virtual address appears in
the page table.
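To make the page-table translation step concrete, here is a minimal Python sketch of a single-level page table; the 256-byte page size, the table contents and all names are illustrative assumptions rather than any particular OS's implementation.

```python
# Minimal sketch of virtual-to-physical address translation with a
# single-level page table. Page size, table contents and names are
# illustrative assumptions.

PAGE_SIZE = 256  # bytes per page (assumed for the example)

# Maps virtual page number -> physical frame number; None means "not in RAM".
page_table = {0: 5, 1: None, 2: 9}

def translate(virtual_address):
    page_number = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    frame = page_table.get(page_number)
    if frame is None:
        # The MMU would raise a page fault here and the OS would load the page.
        raise RuntimeError(f"page fault on virtual page {page_number}")
    return frame * PAGE_SIZE + offset

print(translate(2 * PAGE_SIZE + 10))  # virtual page 2, offset 10 -> 9*256 + 10 = 2314
```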

Segmentation is also used to manage virtual memory. This approach divides virtual
memory into segments of different lengths. Segments not in use in memory can be
moved to virtual memory space on the hard drive. Segmented information or
processes are tracked in a segment table, which shows if a segment is present in
memory, whether it has been modified and what its physical address is. In addition,
file systems in segmentation are only made up of segments that are mapped into a
process's potential address space.
Segmentation and paging differ as a memory model in terms of how memory is
divided; however, the processes can also be combined. In this case, memory gets
divided into frames or pages. The segments take up multiple pages, and the virtual
address includes both the segment number and the page number.

Common page replacement methods include first-in-first-out (FIFO), the optimal algorithm and least recently used (LRU) page replacement. The FIFO method replaces the page that has been in memory for the longest time. The optimal algorithm replaces the page that will not be needed for the longest time; although difficult to implement, this leads to fewer page faults. The LRU method replaces the page that has not been used for the longest time in main memory.
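As a rough illustration of the LRU policy just described, the following Python sketch counts page faults for a fixed number of frames, evicting the least recently used page when the frames are full; the three-frame capacity and the reference string are made-up examples.

```python
from collections import OrderedDict

def lru_simulate(reference_string, num_frames):
    """Count page faults for an LRU-managed set of frames (illustrative sketch)."""
    frames = OrderedDict()  # keys are pages, ordered from least to most recently used
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)        # page hit: mark as most recently used
        else:
            faults += 1                     # page fault
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = True
    return faults

print(lru_simulate([7, 0, 1, 2, 0, 3, 0, 4], num_frames=3))  # 6 page faults
```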

How to manage virtual memory


Managing virtual memory within an OS can be straightforward, as there are default
settings that determine the amount of hard drive space to allocate for virtual
memory. Those settings will work for most applications and processes, but there may
be times when it is necessary to manually reset the amount of hard drive space
allocated to virtual memory -- for example, with applications that depend on fast
response times or when the computer has multiple hard disk drives (HDDs).

When manually resetting virtual memory, the minimum and maximum amount of
hard drive space to be used for virtual memory must be specified. Allocating too little
HDD space for virtual memory can result in a computer running out of RAM. If a
system continually needs more virtual memory space, it may be wise to consider
adding RAM. Common OSes may generally recommend users not increase virtual
memory beyond 1 ½ times the amount of RAM.

Managing virtual memory differs by OS. For this reason, IT professionals should
understand the basics when it comes to managing physical memory, virtual memory
and virtual addresses.
The flash memory cells in SSDs also have a limited lifespan. Each cell supports only a limited number of writes, so using an SSD for virtual memory often reduces the lifespan of the drive.

What are the benefits of using virtual memory?


The advantages of using virtual memory include:

• It can handle twice as many addresses as main memory.

• It enables more applications to be used at once.

• It frees applications from managing shared memory and saves users from having to add memory modules when RAM space runs out.

• It has increased speed when only a segment of a program is needed for execution.

• It has increased security because of memory isolation.

• It enables multiple larger applications to run simultaneously.

• Allocating memory is relatively inexpensive.

• It does not suffer from external fragmentation.

• CPU use is effective for managing logical partition workloads.

• Data can be moved automatically.

• Pages of the original process can be shared with the child process created by a fork system call.

In addition to these benefits, in a virtualized computing environment, administrators can use virtual memory management techniques to allocate additional memory to a virtual machine (VM) that has run out of resources. Such virtualization management tactics can improve VM performance and management flexibility.

What are the limitations of using virtual memory?


Although the use of virtual memory has its benefits, it also comes with some tradeoffs worth considering, such as:

• Applications run slower if they are running from virtual memory.

• Data must be mapped between virtual and physical memory, which requires extra hardware support for address translation, slowing the computer down further.

• The size of virtual storage is limited by the amount of secondary storage, as well as by the addressing scheme of the computer system.

• Thrashing can occur if there is not enough RAM, which makes the computer perform more slowly.

• It may take time to switch between applications that are using virtual memory.

• It lessens the amount of available hard drive space.

Virtual memory (virtual RAM) vs. physical memory (RAM)


When talking about the differences between virtual and physical memory, the biggest distinction commonly made is speed. RAM is considerably faster than virtual memory. RAM, however, tends to be more expensive.

When a computer requires storage, RAM is the first used. Virtual memory, which is
slower, is used only when the RAM is filled.
[Chart: how virtual RAM (virtual memory) compares to RAM (physical memory).]

Users can actively add RAM to a computer by buying and installing more RAM chips.
This is useful if they are experiencing slowdowns due to memory swaps happening
too often. The amount of RAM depends on what is installed on a computer. Virtual
memory, on the other hand, is limited by the size of the computer's hard drive.
Virtual memory settings can often be controlled through the OS.

In addition, RAM uses swapping techniques, while virtual memory uses paging. While physical memory is limited to the size of the installed RAM, virtual memory is limited by the size of the hard disk. The CPU can also access RAM directly, whereas it cannot access virtual memory directly.

Floating-point unit

[Image: a collection of the x87 family of math coprocessors by Intel.]

A floating-point unit (FPU, colloquially a math coprocessor) is a part of a computer system
specially designed to carry out operations on floating-point numbers. Typical operations
are addition, subtraction, multiplication, division, and square root. Some FPUs can also perform
various transcendental functions such as exponential or trigonometric calculations, but the
accuracy can be low, so some systems prefer to compute these functions in software.

In general-purpose computer architectures, one or more FPUs may be integrated as execution units within the central processing unit; however, many embedded processors do not have hardware support for floating-point operations, although they increasingly include it as standard.

When a CPU is executing a program that calls for a floating-point operation, there are three ways to carry it out:

• A floating-point unit emulator (a floating-point library in software)

• Add-on FPU hardware

• An FPU integrated into the CPU (in hardware)
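To give a feel for the number format an FPU operates on, the short Python sketch below unpacks a 32-bit IEEE 754 value into its sign, exponent and fraction fields; it only illustrates the representation, not how any particular FPU is implemented.

```python
import struct

def float_fields(x):
    """Split a 32-bit IEEE 754 float into sign, biased exponent and fraction bits."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]  # raw 32-bit pattern
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF     # 8-bit biased exponent (bias 127)
    fraction = bits & 0x7FFFFF         # 23-bit fraction (mantissa without hidden bit)
    return sign, exponent, fraction

print(float_fields(-6.25))  # (1, 129, 4718592): -6.25 = -1.5625 x 2^(129-127)
```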

Cache Memory

What Does Cache Memory Mean?


Cache memory is a small-sized type of volatile computer memory that provides high-speed data
access to a processor and stores frequently used computer programs, applications and data.

Acting as temporary storage, cache makes data retrieval easier and more efficient. It is the fastest memory in a computer, and is typically integrated onto the motherboard and directly embedded in the processor or main random access memory (RAM).

Cache memory provides faster data storage and access by storing instances of programs and
data routinely accessed by the processor. Thus, when a processor requests data that already
has an instance in the cache memory, it does not need to go to the main memory or the hard
disk to fetch the data.

Cache memory is the fastest memory available and acts as a buffer between RAM and the CPU.
The processor checks whether a corresponding entry is available in the cache every time it
needs to read or write a location, thus reducing the time required to access information from
the main memory.

Hardware cache is also called processor cache and is a physical component of the processor. Depending on how close it is to the processor core, it can be primary or secondary cache memory, with primary cache memory directly integrated into (or closest to) the processor.

Speed depends on the proximity as well as the size of the cache itself. The more data that can be stored in the cache, the more quickly it can serve requests, so chips with a smaller storage capacity tend to be slower even if they are closer to the processor.

In addition to hardware-based cache, cache memory also can be a disk cache, where a reserved
portion on a disk stores and provides access to frequently accessed data/applications from the
disk. Whenever the processor accesses data for the first time, a copy is made into the cache.

When that data is accessed again, if a copy is available in the cache, that copy is accessed first, so speed and efficiency are increased. If it is not available, then larger, more distant, and slower memories are accessed (such as RAM or the hard disk).

Modern video cards also include their own cache memory inside their graphics processing chips. This way, the GPU can complete complex rendering operations more quickly without having to rely on the system's RAM.

Other than hardware cache, software cache is also available as a method to store temporary files on the hard disk. This cache (also known as browser or application cache) is used to rapidly access previously stored files for the same reason: increasing speed. For example, a web browser might save some images from a web page by caching them to avoid re-downloading them every time that page is opened again.
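As a simplified picture of the check-the-cache-first behaviour described above, the Python sketch below models a tiny direct-mapped cache sitting in front of a slower "main memory"; the cache size, the mapping and the data are invented purely for illustration.

```python
# Toy direct-mapped cache sketch: sizes and data are illustrative assumptions.
NUM_LINES = 4
main_memory = {addr: addr * 10 for addr in range(32)}   # pretend RAM
cache = [None] * NUM_LINES                               # each line holds (tag, value)

def read(addr):
    index = addr % NUM_LINES          # which cache line the address maps to
    tag = addr // NUM_LINES           # identifies which address occupies the line
    line = cache[index]
    if line is not None and line[0] == tag:
        return line[1], "hit"         # fast path: data served from the cache
    value = main_memory[addr]         # slow path: fetch from main memory
    cache[index] = (tag, value)       # keep a copy for next time
    return value, "miss"

print(read(5))   # (50, 'miss')  first access goes to main memory
print(read(5))   # (50, 'hit')   repeated access is served from the cache
```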

2. Differentiate between the registers and memory of a microprocessor.


Both Register and Memory are types of storing elements used in computing and digital systems for
the storage of data. Although both have similar functions, they are absolutely different from each
other. In this article, we will cover all those differences, but before that let’s have a basic overview
of registers and computer memory.

What is a Register?
A register is a most elementary data-storing device that is implemented onto the processor chip
itself. It is a small, high−speed storage area within a computer's processor or central processing unit
(CPU). The processor can directly access the data stored in registers. For this reason, registers are
primarily used for storing those instructions or operands on which the CPU is currently working.
Registers allow the processor to quickly access and manipulate the stored information.

Registers have very high access speed, so the CPU can access register contents within one clock cycle. The storage capacity of a register is expressed in bits, such as a 16-bit register, a 32-bit register, and so on. The number of register bits gives an indication of the speed and power of the processor.

What is Memory?
Memory is again a data storage device used to store data, instructions, computer programs, etc.
Unlike registers, which are small and temporary, memory is typically larger and more long−term in
nature.
Based on the accessibility to the CPU, memories are classified into two types namely primary
memory and secondary memory. The primary memory is the internal memory of the system whose
data can be directly accessed by the processor at higher speed, whereas the secondary memory is
one whose data is accessed by the CPU through the primary memory.

We may also classify the memory on the basis of its nature, i.e. volatile memory and non−volatile
memory. The volatile memory stores data temporarily, whereas the non−volatile memory stores
data permanently.

Difference Between Register and Memory (Tabular Form)

1. Register: Registers are located internal to the CPU.
   Memory: Memory, or RAM, is located external to the CPU.

2. Register: Data has to be loaded into a CPU register from memory before the CPU can process it.
   Memory: Data held in memory cannot be processed until it has been loaded into a CPU register.

3. Register: Registers are faster than memory.
   Memory: RAM is much slower than registers.

4. Register: Registers are temporary storage in the CPU that holds the data the processor is currently working on.
   Memory: RAM holds the program instructions and the data the program requires.

5. Register: A register holds the data that the CPU is currently processing.
   Memory: Memory holds the data that will be required for processing.

6. Register: On RISC processors, all data must be moved into a register before it can be operated on.
   Memory: On CISC (Intel) chips, a few operations can load data from RAM, process it and save the result back out, but the fastest operations work directly with registers.

7. Register: A register holds a small amount of data.
   Memory: Memory holds a much larger amount of data than a register.

8. Register: The storage capacity of a register is around 32 to 64 bits.
   Memory: The storage capacity of memory is around gigabytes to terabytes.

9. Register: The CPU can operate on register contents at a rate of more than one operation per clock cycle.
   Memory: The CPU accesses memory at a slower rate than registers.

10. Register: Registers are the smallest data-holding elements and are built into the processor itself.
    Memory: Memory is the largest data-holding element and is built external to the processor.

11. Register: Registers are controllable; you can store and retrieve information from them directly.
    Memory: Memory is almost completely uncontrollable.

12. Register: Examples of computer registers are the accumulator register, program counter, instruction register and address registers.
    Memory: Memory usually refers to the main memory of the computer, which is RAM.

Conclusion
The most significant difference that you should note here is that registers are used to quickly access
and manipulate data, while memory is used to store data and instructions for longer periods of
time.

5. Discuss the different types of register indirect addressing modes (size and operation)?
Register Indirect Addressing Mode
We use a processor register to hold a memory location’s address wherever the operand has
been placed. This addressing mode would allow the execution of a similar set of instructions for
various different memory locations. It can be done if we increment the content of the register
and, thereby, point to the new location every single time.

In a higher-level language, we refer to this as a pointer. We denote the indirect mode by placing the given register inside parentheses.

In this case, the effective address refers to the content of the memory location that is present in
the available register.

EA=(R)

Here, for instance:

Load X3, (X2)

Here, the Load instruction loads into the X3 register the value stored at the memory location whose address is held in the X2 register.

Register indirect addressing is a method of addressing memory locations that uses the contents of a register as an address. In some variants, the effective address of the operand is obtained by adding the contents of the register to a constant or to another register value. This addressing mode is used in many microprocessors and is particularly useful for accessing data structures such as arrays and linked lists.
There are different types of register indirect addressing modes, including:

• Register indirect mode: In this addressing mode, the operand's offset is placed in one of the registers BX, BP, SI or DI, as specified in the instruction. The effective address of the data is in the base register or an index register specified by the instruction. Two register references are required to access the data.

• Auto-indexed (auto-increment) mode: In this mode, the effective address of the operand is the contents of a register specified in the instruction. After accessing the operand, the contents of this register are automatically incremented to point to the next consecutive memory location.

• Auto-decrement mode: In this mode, the effective address of the operand is the contents of a register specified in the instruction. Before accessing the operand, the contents of this register are automatically decremented to point to the previous consecutive memory location.

• Displacement mode: In this mode, the operand's offset is specified as a constant in the instruction. The effective address of the data is obtained by adding the offset to the contents of a register specified in the instruction.
The size of the register used for indirect addressing depends on the microprocessor architecture. For example, the 8086 microprocessor uses 16-bit registers for indirect addressing. The operation of register indirect addressing is transparent to the user and is managed by the hardware.
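One rough way to visualise these addressing modes in software is the Python sketch below, which mimics register indirect and auto-increment access to a small memory array; the register names, word size and memory contents are invented for illustration.

```python
# Illustrative sketch of register indirect and auto-increment addressing.
memory = [10, 20, 30, 40, 50]          # pretend data memory
registers = {"X2": 1, "X3": 0}         # X2 holds a memory address (a pointer)

def load_indirect(dest, pointer_reg):
    """Load X3, (X2): EA = (R), i.e. the address is taken from the register."""
    ea = registers[pointer_reg]
    registers[dest] = memory[ea]

def load_auto_increment(dest, pointer_reg):
    """Auto-indexed mode: use the register as the address, then bump it."""
    load_indirect(dest, pointer_reg)
    registers[pointer_reg] += 1        # point to the next consecutive location

load_auto_increment("X3", "X2")
print(registers)   # {'X2': 2, 'X3': 20} - loaded memory[1], pointer advanced to 2
```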
Pros
We can use the very same set of instructions multiple times in the case of a register indirect
addressing mode.

Cons
The total number of memory references is higher in the case of register indirect addressing mode.

6. List the 8/16/32-bit registers that are used for register addressing.

Register addressing is the most common form of data addressing. The microprocessor contains the following 8-bit registers used with register addressing: AH, AL, BH, BL, CH, CL, DH, and DL. Also present are the following 16-bit registers: AX, BX, CX, DX, SP, BP, SI, and DI. In the 80386 and above, the extended 32-bit registers are EAX, EBX, ECX, EDX, ESP, EBP, EDI, and ESI. With register addressing, some MOV instructions, and the PUSH and POP instructions, also use the 16-bit segment registers (CS, ES, DS, SS, FS, and GS).

It is important for instructions to use registers that are the same size. Never mix an 8-bit register with a 16-bit register, an 8-bit register with a 32-bit register, or a 16-bit register with a 32-bit register, because this is not allowed by the microprocessor and results in an error when assembled. This is true even though a MOV AX,AL or a MOV EAX,AL instruction may seem to make sense; these instructions are not allowed because the registers are of different sizes. Note that a few instructions, such as SHL DX,CL, are exceptions to this rule. It is also important to note that none of the MOV instructions affect the flag bits.
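The size-matching rule can be expressed as a simple check. The sketch below is only an illustration of the rule in Python, not an assembler; the register table and function name are assumptions made for the example.

```python
# Illustrative check of the "operands must be the same size" rule for MOV.
REGISTER_SIZES = {
    **{r: 8  for r in ("AH", "AL", "BH", "BL", "CH", "CL", "DH", "DL")},
    **{r: 16 for r in ("AX", "BX", "CX", "DX", "SP", "BP", "SI", "DI")},
    **{r: 32 for r in ("EAX", "EBX", "ECX", "EDX", "ESP", "EBP", "ESI", "EDI")},
}

def mov_allowed(dest, src):
    """Return True only when both registers have the same width."""
    return REGISTER_SIZES[dest] == REGISTER_SIZES[src]

print(mov_allowed("AX", "BX"))   # True  - both 16-bit
print(mov_allowed("AX", "AL"))   # False - 16-bit destination, 8-bit source
print(mov_allowed("EAX", "AL"))  # False - 32-bit destination, 8-bit source
```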

Here are some examples of registers used for register addressing in different microprocessors:

• Intel 8086: The 8086 microprocessor has eight 16-bit general-purpose registers: AX, BX, CX, DX, SI, DI, BP, and SP.

• Motorola 68000: The 68000 microprocessor has sixteen 32-bit registers: eight data registers (D0-D7) and eight address registers (A0-A7).

• ARM Cortex-M0: The Cortex-M0 microprocessor has thirteen 32-bit general-purpose registers (R0-R12), plus the stack pointer (SP).

• MIPS32: The MIPS32 architecture has thirty-two 32-bit general-purpose registers: R0-R31.

7. Define the superscalar architecture of a Pentium processor.


The Pentium has what is known as a superscalar pipelined architecture. Superscalar means
that the CPU can execute two or more instructions per cycle. To be more precise:

The Pentium can generate the results of two instructions in a single clock cycle. Superscalar
architecture is a method of parallel computing used in many processors. In a superscalar
computer, the central processing unit (CPU) manages multiple instruction pipelines to execute
several instructions concurrently during a clock cycle. This is achieved by feeding the different
pipelines through a number of execution units within the processor. To successfully implement
a superscalar architecture, the CPU's instruction fetching mechanism must intelligently retrieve
and delegate instructions. Otherwise, pipeline stalls may occur, resulting in execution units that
are often idle.
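To illustrate the idea of issuing more than one instruction per clock cycle, the toy Python sketch below pairs an instruction with the next one whenever the second does not depend on the first; this is only a simplified model, not the Pentium's actual U/V pipe pairing rules.

```python
# Toy dual-issue model: pair two instructions in one cycle when the second
# does not use a register written by the first (illustrative only).
program = [
    ("ADD", "R1", "R2"),   # R1 = R1 + R2
    ("ADD", "R3", "R1"),   # reads R1 written by the first -> cannot pair with it
    ("ADD", "R5", "R6"),
    ("ADD", "R7", "R8"),   # independent -> pairs with the previous instruction
]

cycle, i = 0, 0
while i < len(program):
    issued = [program[i]]
    nxt = program[i + 1] if i + 1 < len(program) else None
    if nxt is not None and program[i][1] not in nxt[1:]:
        issued.append(nxt)                 # second pipeline takes the next instruction
    print(f"cycle {cycle}: {issued}")
    i += len(issued)
    cycle += 1
# A strictly scalar pipeline would need 4 cycles; this toy model finishes in 3.
```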

8. What is the clock period of a clock frequency of 1 GHz?


The clock period of a clock frequency of 1 GHz is 1 nanosecond. This is because the clock period is the reciprocal of the clock frequency, which means that the clock period is equal to 1/frequency. In this case, the clock frequency is 1 GHz, which is equivalent to 1 billion cycles per second. Therefore, the clock period is 1/1,000,000,000 seconds, or 1 nanosecond. One gigahertz equals 1,000,000,000 Hz, or 1,000 MHz. A nanosecond is one-billionth of a second, or one-thousandth of a microsecond.
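The same reciprocal relationship can be checked with a couple of lines of Python:

```python
frequency_hz = 1_000_000_000           # 1 GHz = 1,000,000,000 cycles per second
clock_period_s = 1 / frequency_hz      # period is the reciprocal of frequency
print(clock_period_s)                  # 1e-09 seconds, i.e. 1 nanosecond
print(clock_period_s * 1e9, "ns")      # 1.0 ns
```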

9. Suppose memory bytes 0-4 have the following contents:


Address Contents
0 01101010
1 11011101
2 00010001
3 11111111
4 01010101

Assume that a word is 2 bytes; what are the contents (in hex) of:

- the word at memory address 2?
- the word at memory address 3?
- what is bit 7 of byte 2?

• The word at memory address 2 is 0x11FF. The word at memory address 2 is made up of the bytes at memory addresses 2 and 3, taking the byte at the lower address as the most significant byte. The byte at memory address 2 is 00010001, and the byte at memory address 3 is 11111111. When these two bytes are combined, we get the word 0001000111111111, which is equal to 0x11FF in hexadecimal.

• The word at memory address 3 is 0xFF55. The word at memory address 3 is made up of the bytes at memory addresses 3 and 4. The byte at memory address 3 is 11111111, and the byte at memory address 4 is 01010101. When these two bytes are combined, we get the word 1111111101010101, which is equal to 0xFF55 in hexadecimal.

• Bit 7 of byte 2 is 0. Bit 7 is the leftmost (most significant) bit of the byte, with a place value of 2^7 = 128. The byte at memory address 2 is 00010001, so bit 7 of this byte is 0.
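The answers above can be reproduced with a few lines of Python; the big-endian combination (the byte at the lower address taken as the most significant byte) follows the convention used in the working above.

```python
memory = [0b01101010, 0b11011101, 0b00010001, 0b11111111, 0b01010101]

def word_at(addr):
    """2-byte word, taking the byte at the lower address as the most significant."""
    return (memory[addr] << 8) | memory[addr + 1]

print(hex(word_at(2)))             # 0x11ff
print(hex(word_at(3)))             # 0xff55
print((memory[2] >> 7) & 1)        # bit 7 of byte 2 -> 0
```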
10. The processor has only 4 instructions (conditional branch, add, LDW, SDW). The processor has eight 16-bit registers and 256 B of memory. The ISA is a fixed-length ISA with 16-bit instructions. You need to implement BR, ADD, LDW and SDW. For the architectural state, you need to implement the PC, the registers, the memory, and a 3-bit CC (NZP).

• This implementation defines a Processor class with methods for each of the instructions mentioned. The BR method implements the conditional branch instruction, the ADD method implements the add instruction, the LDW method implements the load word instruction, and the SDW method implements the store word instruction. A helper method sets the condition code register based on the value of the result of the last operation.

• The Processor class has attributes for the memory, the registers, the program counter (PC), and the condition code (CC) register. The memory is a list of 256 bytes, the registers are a list of eight 16-bit registers, the PC is a 16-bit register that holds the address of the next instruction to be executed, and the CC is a 3-bit register that holds the condition code for the last operation. A sketch of this class is given below.
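Since the description above refers to a Processor class without showing it, here is one possible Python sketch consistent with that description. The question does not specify instruction encodings, so the methods below take already-decoded fields directly, and all method and attribute names are illustrative assumptions.

```python
class Processor:
    """Toy model of the 16-bit, 8-register, 256-byte machine described above."""

    def __init__(self):
        self.memory = [0] * 256          # 256 bytes of memory
        self.registers = [0] * 8         # eight 16-bit registers
        self.PC = 0                      # program counter
        self.CC = 0b010                  # 3-bit condition code N/Z/P, start at "zero"

    def set_CC(self, value):
        """Helper: set the NZP condition code from the last 16-bit result."""
        value &= 0xFFFF                  # keep results within 16 bits
        if value & 0x8000:
            self.CC = 0b100              # negative
        elif value == 0:
            self.CC = 0b010              # zero
        else:
            self.CC = 0b001              # positive

    def ADD(self, dr, sr1, sr2):
        result = (self.registers[sr1] + self.registers[sr2]) & 0xFFFF
        self.registers[dr] = result
        self.set_CC(result)

    def LDW(self, dr, base, offset):
        """Load a 16-bit word from memory[base + offset] (two consecutive bytes)."""
        addr = (self.registers[base] + offset) & 0xFF
        result = (self.memory[addr] << 8) | self.memory[(addr + 1) & 0xFF]
        self.registers[dr] = result
        self.set_CC(result)

    def SDW(self, sr, base, offset):
        """Store a 16-bit word into memory[base + offset] as two bytes."""
        addr = (self.registers[base] + offset) & 0xFF
        value = self.registers[sr]
        self.memory[addr] = (value >> 8) & 0xFF
        self.memory[(addr + 1) & 0xFF] = value & 0xFF

    def BR(self, nzp_mask, target):
        """Conditional branch: jump when the mask matches the current CC bits."""
        if self.CC & nzp_mask:
            self.PC = target & 0xFFFF
```

For example, cpu = Processor(); cpu.registers[1] = 3; cpu.registers[2] = 4; cpu.ADD(0, 1, 2) leaves 7 in register 0 and sets the CC to the "positive" pattern (001).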
