MPMC Module 5

The document discusses the evolution of microprocessors and microcontrollers, focusing on the transition from Complex Instruction Set Computers (CISCs) to Reduced Instruction Set Computers (RISCs). It highlights the benefits of RISC architecture, such as simpler design, higher performance, and efficient pipelining, while also noting drawbacks like poor code density. The text emphasizes the significance of understanding processor operations and the impact of instruction set design on performance.


MICROPROCESSORS AND MICROCONTROLLERS – UNIT 5

BY SONAL GUPTA
THE STORED-PROGRAM COMPUTER

• The stored-program digital computer keeps its instructions and data in the same memory system, allowing the instructions to be treated as data when necessary.
• This arrangement enables the processor itself to generate instructions which it can subsequently execute.
• Whenever a computer loads a new program from disk (overwriting an old program) and then executes it, the computer is employing this ability to change its own program.
COMPLEX INSTRUCTION SET
COMPUTERS

• Before 1980, designers of computer instruction sets often made them more complex to make it easier for compilers to turn programming code into machine operations.
• They added single instructions that could handle many tasks, like
starting or finishing a function, which took several steps and clock
cycles to complete.
• These processors were often marketed based on how sophisticated
and varied their operations and data handling were.
COMPLEX INSTRUCTION SET
COMPUTERS

• This trend started with minicomputers in the 1970s.


• These computers had slow main memory but were built with processors that could perform tasks more quickly using simple integrated circuits and microcode ROMs (a type of fast, read-only memory).
• It was more efficient to have the processors perform common tasks
directly through microcode instead of pulling multiple instructions
from slower main memory.
• During the 1970s, microprocessors — single-chip processors — were
evolving rapidly.
• They relied on cutting-edge technology to pack as many transistors as
possible onto one chip.
COMPLEX INSTRUCTION SET
COMPUTERS

• Most of their design ideas were borrowed from the minicomputer industry, even though the technologies used were quite different.
• One issue with these designs was that a lot of the chip space was
taken up by microcode ROM needed for complex instructions, leaving
less room for other features that could improve performance.
• This situation led to the creation of what's called Complex Instruction
Set Computers (CISCs) in the late 1970s.
• These were microprocessors that tried to include extensive
minicomputer-like instructions but were limited by the amount of
space available on a single chip.
• Intel was the pioneer in CISC.
THE RISC REVOLUTION

• In a time when computer instruction sets were becoming more and more complex, a new kind of computer architecture called the Reduced Instruction Set Computer (RISC) emerged.
• RISC had a significant impact on the design of the ARM processor,
which is even reflected in its name, originally standing for "Acorn RISC
Machine."
WHAT PROCESSORS DO

• If we want to make a processor go fast, we must first understand what it spends its time doing.
• It is a common misconception that computers spend
their time computing, that is, carrying out arithmetic
operations on user data.
• In practice they spend very little time 'computing' in
this sense.
• Although they do a fair amount of arithmetic, most of
this is with addresses in order to locate the relevant
data items and program routines.
• Then, having found the user's data, most of the work
is in moving it around rather than processing it in any
transformational sense.
PIPELINES

• A processor executes an individual instruction in a sequence of steps. A typical sequence might be:
• 1. Fetch the instruction from memory (fetch).
• 2. Decode it to see what sort of instruction it is (dec).
• 3. Access any operands that may be required from the register bank
(reg).
• 4. Combine the operands to form the result or a memory address
(ALU).
• 5. Access memory for a data operand, if necessary (mem).
• 6. Write the result back to the register bank (res).
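• As a rough illustration of these six steps (a sketch, not a model of any particular processor), the short Python fragment below walks a few instructions through the stages strictly one instruction at a time; the one-cycle-per-stage assumption and the stage names are taken from the sequence above.

    # Illustrative only: the six steps above, executed with no overlap between instructions.
    STAGES = ["fetch", "dec", "reg", "ALU", "mem", "res"]

    def run_without_overlap(num_instructions):
        """Each instruction passes through all six stages before the next one starts."""
        cycle = 0
        for i in range(num_instructions):
            for stage in STAGES:
                cycle += 1
                print(f"cycle {cycle:2}: instruction {i + 1} in {stage}")
        return cycle

    total = run_without_overlap(3)
    print(f"total cycles without overlap: {total}")  # 3 instructions x 6 steps = 18 cycles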
PIPELINES

• Not all instructions will require every step, but most instructions will
require most of them.
• These steps tend to use different hardware functions, for instance the
ALU is probably only used in step 4.
• Therefore, if an instruction does not start before its predecessor has
finished, only a small proportion of the processor hardware will be in
use in any step.
• An obvious way to improve the utilization of the hardware resources,
and also the processor throughput, would be to start the next
instruction before the current one has finished.
• This technique is called pipelining, and is a very effective way of
exploiting concurrency in a general-purpose processor.
PIPELINES

• Taking the above sequence of operations, the processor is organized so that as soon as one instruction
has completed step 1 and moved on to step 2, the next instruction begins step 1.
• This is illustrated in Figure 1.13.
• In principle such a pipeline should deliver a six times speed-up compared with non-overlapped
instruction execution; in practice things do not work out quite so well for reasons we will see below.
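• A minimal sketch of that overlapped schedule, under the same illustrative one-cycle-per-stage assumption used earlier: a new instruction enters the fetch stage every cycle, so n instructions complete in 6 + (n - 1) cycles rather than 6n, which is where the idealized six-times figure comes from.

    # Illustrative only: ideal overlap with one new instruction entering the pipeline per cycle.
    NUM_STAGES = 6  # fetch, dec, reg, ALU, mem, res

    def pipelined_cycles(n):
        """The last of n instructions starts its fetch in cycle n and needs six stages."""
        return NUM_STAGES + (n - 1)

    def non_pipelined_cycles(n):
        return NUM_STAGES * n

    for n in (1, 6, 60, 600):
        speedup = non_pipelined_cycles(n) / pipelined_cycles(n)
        print(f"{n:4d} instructions: {pipelined_cycles(n):4d} cycles, speedup {speedup:.2f}x")
    # The speedup only approaches 6 for long instruction runs with no hazards.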
PIPELINE HAZARDS

• It is relatively frequent in typical computer programs that the result from one instruction is used as an operand by the next instruction.
• When this occurs the pipeline operation shown in Figure 1.13 breaks
down, since the result of instruction 1 is not available at the time that
instruction 2 collects its operands.
• Instruction 2 must therefore stall until the result is available, giving the behaviour shown in Figure 1.14.
• This is a read-after-write pipeline hazard.
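• The toy model below (a sketch, not ARM behaviour) makes the stall concrete: an instruction that reads a register written by its immediate predecessor cannot enter its 'reg' step until the predecessor's 'res' step has finished, so with no result forwarding it is delayed by several cycles. The register names and instruction format are invented for the example.

    # Toy read-after-write hazard model: no forwarding, operands read in 'reg', result written in 'res'.
    STAGES = ["fetch", "dec", "reg", "ALU", "mem", "res"]
    REG_STAGE, RES_STAGE = STAGES.index("reg"), STAGES.index("res")

    # (text, register written, registers read) -- an invented three-instruction program
    program = [
        ("ADD r1, r2, r3", "r1", {"r2", "r3"}),
        ("SUB r4, r1, r5", "r4", {"r1", "r5"}),   # needs r1 from the previous instruction
        ("AND r6, r7, r8", "r6", {"r7", "r8"}),
    ]

    start = []  # cycle in which each instruction enters its fetch stage
    for i, (text, written, reads) in enumerate(program):
        cycle = 0 if i == 0 else start[-1] + 1    # normally one cycle behind its predecessor
        if i > 0 and program[i - 1][1] in reads:
            # the 'reg' read must come after the predecessor's 'res' write-back
            cycle = max(cycle, start[-1] + RES_STAGE + 1 - REG_STAGE)
        start.append(cycle)
        stall = cycle - (0 if i == 0 else start[-2] + 1)
        print(f"{text:16} starts in cycle {cycle} (stalled {stall} cycles)")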
PIPELINE HAZARDS

• Branch instructions result in even worse pipeline behaviour since the fetch step of the following instruction is affected by the branch target computation and must therefore be deferred.
• Unfortunately, subsequent fetches will be taking place while the branch
is being decoded and before it has been recognized as a branch, so the
fetched instructions may have to be discarded.
• If, for example, the branch target calculation is performed in the ALU
stage of the pipeline in Figure 1.13, three instructions will have been
fetched from the old stream before the branch target is available (see
Figure 1.15).
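• Using the stage positions from the six-step pipeline above, a few lines of arithmetic (a sketch only; the branch frequency is an assumed figure, not a measurement) show why three fetched instructions are wasted when the target only becomes known in the ALU stage, and how that penalty feeds into the average cost per instruction.

    # Illustrative arithmetic: fetches issued before the branch target is known.
    STAGES = ["fetch", "dec", "reg", "ALU", "mem", "res"]

    def wasted_fetches(resolve_stage):
        """Instructions fetched from the old stream before the stage that produces the target."""
        return STAGES.index(resolve_stage) - STAGES.index("fetch")

    print(wasted_fetches("ALU"))   # 3, as in the Figure 1.15 scenario described above
    print(wasted_fetches("dec"))   # 1, if the target can be computed during decode

    # Rough average cycles per instruction, assuming (purely for illustration) 15% branches:
    branch_fraction = 0.15
    print(1 + branch_fraction * wasted_fetches("ALU"))   # 1.45 cycles per instruction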
PIPELINE HAZARDS

• It is better to compute the branch target earlier in the pipeline if possible, even though this will probably require dedicated hardware.
• If branch instructions have a fixed format, the target may be computed speculatively (that is, before it has been determined that the instruction is a branch) during the 'dec' stage, thereby reducing the branch latency to a single cycle. Note, however, that in this pipeline there may still be hazards on a conditional branch due to dependencies on the condition code result of the instruction preceding the branch.
• Some RISC architectures (though not the ARM) define that the
instruction following the branch is executed whether or not the
branch is taken.
• This technique is known as the delayed branch.
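• A minimal sketch of the delayed-branch arithmetic, with assumed figures rather than measurements: if the compiler can usefully fill the single delay slot some fraction of the time, the average branch cost falls by that fraction.

    # Illustrative only: effect of one delay slot on the average branch cost.
    slot_penalty = 1        # one lost fetch when the target is known after 'dec'
    fill_rate = 0.6         # assumed fraction of delay slots filled with useful work
    branch_fraction = 0.15  # assumed fraction of instructions that are branches

    effective_penalty = slot_penalty * (1 - fill_rate)
    print(f"extra cycles per instruction: {branch_fraction * effective_penalty:.3f}")  # 0.060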
PIPELINE EFFICIENCY

• Though there are techniques which reduce the impact of these pipeline problems, they cannot remove the difficulties altogether.
• The deeper the pipeline (that is, the more pipeline stages there are),
the worse the problems get.
• For reasonably simple processors, there are significant benefits in
introducing pipelines from three to five stages long, but beyond this the
law of diminishing returns begins to apply and the added costs and
complexity outweigh the benefits.
• Pipelines clearly benefit from all instructions going through a similar
sequence of steps.
PIPELINE EFFICIENCY

• Processors with very complex instructions, where every instruction behaves differently from the next, are hard to pipeline.
• In 1980 the complex instruction set microprocessor of the day was
not pipelined due to the limited silicon resource, the limited design
resource and the high complexity of designing a pipeline for a complex
instruction set.
RISC ARCHITECTURE

• A fixed (32-bit) instruction size with few formats;


• CISC processors typically had variable length instruction sets with
many formats.
• A load-store architecture where instructions that process data operate
only on registers and are separate from instructions that access
memory;
• CISC processors typically allowed values in memory to be used as
operands in data processing instructions.
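• The toy interpreter below sketches the load-store distinction just described; the instruction names and register file are invented for illustration and are not ARM syntax. Data-processing instructions touch only registers, and only the load and store touch memory, so the single statement c = a + b needs four instructions.

    # Toy load-store machine (invented syntax, illustrative only).
    regs = {f"r{i}": 0 for i in range(32)}      # a large, general-purpose register bank
    memory = {"a": 5, "b": 7, "c": 0}

    def execute(instruction):
        op, *args = instruction.split()
        if op == "LDR":                          # only loads read memory
            rd, addr = args
            regs[rd] = memory[addr]
        elif op == "STR":                        # only stores write memory
            rs, addr = args
            memory[addr] = regs[rs]
        elif op == "ADD":                        # data processing uses registers only
            rd, rn, rm = args
            regs[rd] = regs[rn] + regs[rm]

    # c = a + b: four instructions on the load-store machine,
    # where a memory-to-memory CISC might use a single add instruction.
    for instruction in ["LDR r1 a", "LDR r2 b", "ADD r3 r1 r2", "STR r3 c"]:
        execute(instruction)
    print(memory["c"])   # 12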
RISC ARCHITECTURE

• A large register bank of thirty-two 32-bit registers, all of which could be used for any purpose, to allow the load-store architecture to operate efficiently;
• CISC register sets were getting larger, but none was this large and
most had different registers for different purposes.
• These differences greatly simplified the design of the processor and allowed the designers to implement the architecture using the organizational features described below.
RISC ORGANIZATION

• Hard-wired instruction decode logic;


• CISC processors used large microcode ROMs to decode their
instructions.
• Pipelined execution;
• CISC processors allowed little, if any, overlap between consecutive
instructions (though they do now).
• Single-cycle execution;
• CISC processors typically took many clock cycles to complete a single
instruction.
RISC ADVANTAGES

• A smaller die size.


• A simple processor should require fewer transistors and less silicon
area.
• A shorter development time.
• A simple processor should take less design effort and therefore have a lower design cost and be better matched to the process technology when it is launched (since process technology developments need only be predicted over a shorter development period).
RISC ADVANTAGES

• A higher performance.
• This is the tricky one! The previous two advantages are easy to accept,
but in a world where higher performance had been sought through
ever-increasing complexity, this was a bit hard to swallow.
• A simple processor allows a high clock rate.
RISC IN RETROSPECT

• Since the RISC is now well established in commercial use, it is possible to look back and see more clearly what its contribution to the evolution of the microprocessor really was.
• Early RISCs achieved their performance through:
• Pipelining.
• Pipelining is the simplest form of concurrency to implement in a
processor and delivers around two to three times speed-up.
• A simple instruction set greatly simplifies the design of the pipeline.
RISC IN RETROSPECT

• A high clock rate with single-cycle execution.


• In 1980 standard semiconductor memories (DRAMs - Dynamic
Random Access Memories) could operate at around 3 MHz for
random accesses and at 6 MHz for sequential (page mode) accesses.
• The CISC microprocessors of the time could access memory at most
at 2 MHz, so memory bandwidth was not being exploited to the full.
• RISC processors, being rather simpler, could be designed to operate at
clock rates that would use all the available memory bandwidth.
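• A short back-of-the-envelope check of the figures quoted above (the numbers are those stated in the text; the arithmetic simply restates them as utilization percentages).

    # Re-deriving the bandwidth point from the 1980 figures quoted above.
    dram_page_mode_mhz = 6.0    # sequential (page-mode) access rate
    dram_random_mhz = 3.0       # random access rate
    cisc_access_mhz = 2.0       # fastest memory access rate of contemporary CISC microprocessors

    print(f"CISC used about {cisc_access_mhz / dram_page_mode_mhz:.0%} of the sequential bandwidth")
    print(f"and about {cisc_access_mhz / dram_random_mhz:.0%} of the random-access bandwidth")
    # A simpler single-cycle RISC clocked near the memory's rate could use the rest.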
RISC DRAWBACKS

• RISCs generally have poor code density compared with CISCs.


• RISCs don't execute x86 code.
