Computer Organization | Instruction Formats (Zero, One, Two and Three Address Instruction)
Last Updated: 03 Feb, 2025
Instruction formats refer to the way instructions are encoded and represented in machine language. There are several types of instruction formats, including zero, one, two, and three-address instructions.
Each type of instruction format has its own advantages and disadvantages in terms of code size, execution time, and flexibility. Modern computer architectures typically use a combination of these formats to provide a balance between simplicity and power.
Different Types of Instruction Fields
A computer performs a task based on the instructions provided. Instructions in computers are comprised of groups called fields. These fields contain different information for computers which are all written in 0s and 1s. Each field has a different significance or meaning, based on which a CPU decides what to perform. The most common fields are:
- The operation field specifies the operation to be performed, such as addition.
- The address field specifies the location of the operand, i.e., a register or a memory location.
- The mode field specifies how the operand is to be found.
An instruction's length varies with the number of addresses it contains. Based on the number of address fields, CPU organization is generally of three types:
Types of Instructions
Based on the number of addresses, instructions are classified as zero-, one-, two-, or three-address instructions.
NOTE: We will use the expression X = (A+B)*(C+D) to illustrate the evaluation procedure for each format.
Zero Address Instructions
These instructions do not specify any operands or addresses; they operate on data implicitly defined by the instruction, such as the values on top of a stack. For example, a zero-address ADD might pop the top two values of the stack, add them, and push the result, without naming any operand.
Zero Address Instruction
A stack-based computer does not use an address field in its computational instructions. To evaluate an expression, it is first converted to Reverse Polish Notation, i.e., postfix notation.
Expression: X = (A+B)*(C+D)
Postfix: AB+CD+*
TOP means top of stack
M[X] is any memory location
| Operation | Instruction | Stack (TOP Value After Execution) |
|---|---|---|
| Push A | PUSH A | TOP = A |
| Push B | PUSH B | TOP = B |
| Add | ADD | TOP = A + B |
| Push C | PUSH C | TOP = C |
| Push D | PUSH D | TOP = D |
| Add | ADD | TOP = C + D |
| Multiply | MUL | TOP = (A + B) * (C + D) |
| Pop X | POP X | M[X] = TOP |
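The trace above can be mirrored by a tiny interpreter. Below is a minimal Python sketch of a hypothetical stack machine; the mnemonics (PUSH, ADD, MUL, POP) and the sample values A = 2, B = 3, C = 4, D = 5 are assumptions for illustration, not a real ISA.

```python
# Minimal sketch of a zero-address (stack) machine; mnemonics and
# memory values are illustrative assumptions, not a real ISA.

def run_stack_machine(program, memory):
    stack = []
    for op, *args in program:
        if op == "PUSH":                 # push M[addr] onto the stack
            stack.append(memory[args[0]])
        elif op in ("ADD", "MUL"):       # operands are implicit: top two of stack
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if op == "ADD" else a * b)
        elif op == "POP":                # store TOP into M[addr]
            memory[args[0]] = stack.pop()
    return memory

memory = {"A": 2, "B": 3, "C": 4, "D": 5}
program = [("PUSH", "A"), ("PUSH", "B"), ("ADD",),
           ("PUSH", "C"), ("PUSH", "D"), ("ADD",),
           ("MUL",), ("POP", "X")]
print(run_stack_machine(program, memory)["X"])  # prints 45, i.e. (2+3)*(4+5)
```

Note that ADD and MUL carry no address at all: both operands come implicitly from the top of the stack, which is what makes the format "zero-address".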
One Address Instructions
These instructions specify one operand or address, which typically refers to a memory location or register. The instruction operates on the contents of that operand, and the result may be stored in the same or a different location. For example, a one-address instruction might load the contents of a memory location into a register.
This format uses an implied ACCUMULATOR (AC) register for data manipulation: one operand is in the accumulator and the other is in a register or memory location. "Implied" means the CPU already knows that one operand is in the accumulator, so there is no need to specify it.
One Address Instruction
Expression: X = (A+B)*(C+D)
AC is accumulator
M[] is any memory location
M[T] is temporary location
| Operation | Instruction | Register / Memory After Execution |
|---|---|---|
| Load A | AC = M[A] | AC = A |
| Add B | AC = AC + M[B] | AC = A + B |
| Store T | M[T] = AC | M[T] = A + B |
| Load C | AC = M[C] | AC = C |
| Add D | AC = AC + M[D] | AC = C + D |
| Multiply T | AC = AC * M[T] | AC = (A + B) * (C + D) |
| Store X | M[X] = AC | M[X] = (A + B) * (C + D) |
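The same expression can be traced with a small accumulator-machine sketch in Python; the mnemonics (LOAD, ADD, MUL, STORE) and the sample values are illustrative assumptions.

```python
# Minimal sketch of a one-address (accumulator) machine: each
# instruction names one explicit address; the other operand is
# implicitly the accumulator (AC).

def run_accumulator_machine(program, memory):
    ac = 0
    for op, addr in program:
        if op == "LOAD":
            ac = memory[addr]
        elif op == "ADD":
            ac = ac + memory[addr]
        elif op == "MUL":
            ac = ac * memory[addr]
        elif op == "STORE":
            memory[addr] = ac
    return memory

memory = {"A": 2, "B": 3, "C": 4, "D": 5}
program = [("LOAD", "A"), ("ADD", "B"), ("STORE", "T"),
           ("LOAD", "C"), ("ADD", "D"), ("MUL", "T"),
           ("STORE", "X")]
print(run_accumulator_machine(program, memory)["X"])  # prints 45
```

Notice the temporary location T: because every result lands in AC, the partial sum A + B must be spilled to memory before C + D can be computed.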
Two Address Instructions
These instructions specify two operands or addresses, which may be memory locations or registers. The instruction operates on the contents of both operands, and the result may be stored in the same or a different location. For example, a two-address instruction might add the contents of two registers together and store the result in one of the registers.
This format is common in commercial computers. Two addresses can be specified in the instruction. Unlike one-address instructions, where the result always goes to the accumulator, here the result can be stored in different locations; however, more bits are required to represent the two addresses.
Two Address Instruction
Here the destination address can also contain an operand.
Expression: X = (A+B)*(C+D)
R1, R2 are registers
M[] is any memory location
| Operation | Instruction | Register / Memory After Execution |
|---|---|---|
| Move A to R1 | MOV R1, A | R1 = A |
| Add B | ADD R1, B | R1 = A + B |
| Move C to R2 | MOV R2, C | R2 = C |
| Add D | ADD R2, D | R2 = C + D |
| Multiply | MUL R1, R2 | R1 = (A + B) * (C + D) |
| Store X | MOV X, R1 | M[X] = (A + B) * (C + D) |
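A two-address trace can be sketched the same way; the MOV/ADD/MUL mnemonics and sample values are illustrative assumptions. The key property is that the result overwrites the destination operand.

```python
# Minimal sketch of a two-address machine: each instruction names a
# destination and a source; the result overwrites the destination.
# Registers and memory share one dict here purely for simplicity.

def run_two_address(program, state):
    for op, dst, src in program:
        if op == "MOV":
            state[dst] = state[src]
        elif op == "ADD":
            state[dst] = state[dst] + state[src]
        elif op == "MUL":
            state[dst] = state[dst] * state[src]
    return state

state = {"A": 2, "B": 3, "C": 4, "D": 5}
program = [("MOV", "R1", "A"), ("ADD", "R1", "B"),
           ("MOV", "R2", "C"), ("ADD", "R2", "D"),
           ("MUL", "R1", "R2"), ("MOV", "X", "R1")]
print(run_two_address(program, state)["X"])  # prints 45
```

Because MUL R1, R2 destroys the old value of R1 (the partial sum A + B), a program that still needs that value would have to copy it elsewhere first, which is the operand-overwriting trade-off discussed below.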
Three Address Instructions
These instructions specify three operands or addresses, which may be memory locations or registers. The instruction operates on the contents of all three operands, and the result may be stored in the same or a different location. For example, a three-address instruction might multiply the contents of two registers together and add the contents of a third register, storing the result in a fourth register.
This format has three address fields to specify registers or memory locations. Programs are much shorter, but the number of bits per instruction increases. These instructions make programs easier to write, but that does not mean they run faster: each instruction simply carries more information, and every micro-operation (changing register contents, loading an address onto the address bus, etc.) still takes its own cycle.
Three Address Instruction
Expression: X = (A+B)*(C+D)
R1, R2 are registers
M[] is any memory location
| Operation | Instruction | Register / Memory After Execution |
|---|---|---|
| Add A and B | ADD R1, A, B | R1 = A + B |
| Add C and D | ADD R2, C, D | R2 = C + D |
| Multiply and store | MUL X, R1, R2 | M[X] = (A + B) * (C + D) |
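Finally, a three-address sketch shows why this format needs the fewest instructions: each instruction names a destination and both sources explicitly (mnemonics and values are again illustrative assumptions).

```python
# Minimal sketch of a three-address machine: the destination and both
# source operands are named explicitly in every instruction.

def run_three_address(program, state):
    for op, dst, src1, src2 in program:
        a, b = state[src1], state[src2]
        state[dst] = a + b if op == "ADD" else a * b
    return state

state = {"A": 2, "B": 3, "C": 4, "D": 5}
program = [("ADD", "R1", "A", "B"),    # R1 = A + B
           ("ADD", "R2", "C", "D"),    # R2 = C + D
           ("MUL", "X", "R1", "R2")]   # M[X] = R1 * R2
print(run_three_address(program, state)["X"])  # prints 45
```

Three instructions suffice, versus seven or more in the earlier formats; the trade-off is the larger number of bits each instruction must carry.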
Advantages of Zero-Address, One-Address, Two-Address and Three-Address Instructions
Zero-address instructions
- Stack-based Operations: In stack-based architectures, where operations implicitly employ the top items of the stack, zero-address instructions are commonly used.
- Reduced Instruction Set: It reduces the complexity of the CPU design by streamlining the instruction set, which may boost reliability.
- Less Decoding Complexity: Fewer bits are needed to specify operands, which simplifies the logic involved in decoding instructions.
- Efficient in Nested Operations: Especially helpful for recursive or nested computations, which occur frequently in function calls and mathematical expressions.
- Compiler Optimization: Because operations are stack-based, compilers can exploit this structure to optimize the order of operations.
One-address instructions
- Intermediate Complexity: Strikes a balance between versatility and simplicity, making it more adaptable than zero-address instructions yet simpler to implement than multi-address instructions.
- Reduced Operand Handling: Compared to multi-address instructions, operand fetching is made simpler by just needing to handle a single explicit operand.
- Implicit Accumulator: Often makes use of an implicit accumulator register, which can speed up some operations and simplify the design.
- Code Density: Smaller code compared to two- and three-address instructions, which may use memory and the instruction cache more efficiently.
- Efficient Use of Addressing Modes: Can use different addressing modes (such as indexed, direct, and indirect) to improve flexibility without adding much complexity.
Two-address instructions
- Improved Efficiency: Operations can be performed directly on memory or registers, reducing the number of instructions required for many tasks.
- Flexible Operand Use: Offers more options for operand selection and addressing modes, increasing programming flexibility.
- Intermediate Data Storage: Can store intermediate results directly, improving the efficiency of some algorithms and calculations.
- Enhanced Code Readability: Produces code that is often easier to read and maintain than one-address code, which is beneficial for maintenance and troubleshooting.
- Better Performance: Minimizing the number of memory accesses for certain operations can improve overall performance.
Three-address instructions
- Direct Representation of Expressions: Reduces the need for temporary variables and extra instructions by enabling the direct representation of complicated expressions.
- Parallelism: Allows for the simultaneous fetching and processing of several operands, which facilitates parallelism in CPU architecture.
- Compiler Optimization: Makes it possible for more complex compiler optimizations to be implemented, which improve execution efficiency by scheduling and reordering instructions.
- Reduced Instruction Count: May lower the overall number of instructions needed for complex operations, which can improve execution performance despite the larger instruction size.
- Improved Pipeline Utilization: More information in each instruction allows CPU pipelines to be used more efficiently, increasing throughput overall.
- Better Register Allocation: Permits direct manipulation of several registers inside a single instruction, enabling more effective usage of registers.
Disadvantages of Zero-Address, One-Address, Two-Address and Three-Address Instructions
Zero-address instructions
- Stack Dependency: In contrast to register-based architectures, zero-address instructions might result in inefficiencies when it comes to operand access because of their heavy reliance on the stack.
- Overhead of Stack Operations: Performance might be negatively impacted by the frequent push and pop actions needed to maintain the stack.
- Limited Addressing Capability: The processing of intricate data structures may become more difficult since they do not directly support accessing memory regions or registers.
- Difficult to Optimize: Because operand access is implied in stack-based designs, code optimization might be more difficult.
- Harder to Debug: When compared to register-based operations, stack-based operations might be less obvious and more difficult to debug.
One-address instructions
- Accumulator Bottleneck: Often uses an accumulator, which can act as a bottleneck and reduce efficiency and parallelism.
- Increased Instruction Count: Multiple instructions may be needed for complex processes, which would increase the overall number of instructions and code size.
- Less Efficient Operand Access: There is just one operand that is specifically addressed, which might result in inefficient access patterns and extra data management instructions.
- Complex Addressing Modes: The instruction set and decoding procedure get more complicated when several addressing modes are supported.
- Data Movement Overhead: Moving data between memory and the accumulator could need more instructions, which would increase overhead.
Two-address instructions
- Operand Overwriting: Usually, the result overwrites one of the source operands, which might lead to an increase in the number of instructions needed to maintain data.
- Larger Instruction Size: Because two-address instructions are bigger than zero- and one-address instructions, the memory footprint may be increased.
- Intermediate Results Handling: It is frequently necessary to handle intermediate outcomes carefully, which can make programming more difficult and result in inefficiencies.
- Decoding Complexity: The design and performance of the CPU may be impacted by the greater complexity involved in decoding two addresses.
- Inefficient for Some Operations: The two-address style could still be inefficient for some tasks, needing more instructions to get the desired outcome.
Three-address instructions
- Largest Instruction Size: Has the highest memory requirements per instruction, which can put strain on the instruction cache and increase code size.
- Complex Instruction Decoding: Three addresses to decode adds complexity to the CPU architecture, which might affect power consumption and performance.
- Increased Operand Fetch Time: Each instruction may execute more slowly if obtaining three operands takes a long period.
- Higher Hardware Requirements: Has the potential to raise cost and power consumption since it requires more advanced hardware to handle the higher operand handling and addressing capabilities.
- Power Consumption: More complex instructions and heavier memory utilization can increase power consumption, a crucial factor for battery-powered devices.
Overall, the choice of instruction format depends on the specific requirements of the computer architecture and the trade-offs between code size, execution time, and flexibility.
Note: Zero-address instructions are the fastest, followed by three-address, then two-address, and finally one-address instructions, because the time taken in memory reference accesses depends heavily on the length of an instruction.