1. What Are The Different Stages of Evolution of Computer Architecture? Explain in Detail
Ans: Zeroth Generation Computers: The zeroth generation of computers (1642-1946) was marked by the development of largely mechanical computers. In 1642, a French mathematician named Blaise Pascal invented the first mechanical calculating device, called the Pascaline. In 1822, Charles Babbage, an English mathematician, invented a machine called the Difference Engine to compute tables of numbers for naval navigation. Later, in the year 1834, Babbage attempted to build a digital computer, called the Analytical Engine. The Analytical Engine had all the parts of a modern computer, i.e. the store (memory unit), the mill (computation unit), the punched card reader (input unit) and the punched/printed output (output unit). As all the basic parts of modern computers were thought out by Charles Babbage, he is known as the Father of Computers.

First Generation Computers: The first generation of computers (1946-1954) was marked by the use of vacuum tubes, or valves, as their basic electronic component. Although these computers were faster than the earlier mechanical devices, they had many disadvantages. First of all, they were very large in size. They consumed too much power and generated too much heat, even when used for short durations. They were very unreliable and broke down frequently. They required regular maintenance, and their components also had to be assembled manually. Some examples of first generation computers are ENIAC (Electronic Numerical Integrator and Calculator), EDVAC (Electronic Discrete Variable Automatic Computer), EDSAC (Electronic Delay Storage Automatic Calculator) and UNIVAC.

Second Generation Computers: The first generation of computers became outdated when, in 1954, the Philco Corporation developed transistors that could be used in place of vacuum tubes. The second generation of computers (1953-64) was marked by the use of transistors in place of vacuum tubes. Transistors had a number of advantages over vacuum tubes. As transistors were made from pieces of silicon, they were more compact than vacuum tubes. The second generation computers were smaller in size and generated less heat than first generation computers. Although they were slightly faster and more reliable than the earlier computers, they also had many disadvantages. They had limited storage capacity, consumed considerable power and were also relatively slow in performance. Some examples of second generation computers are IBM 701, PDP-1 and IBM 650.

Third Generation Computers: Second generation computers became outdated after the invention of ICs. The third generation of computers (1964-1978) was marked by the use of Integrated Circuits (ICs) in place of transistors. As hundreds of transistors could be put on a single small circuit, ICs were more compact than transistors. The third generation computers removed many drawbacks of second generation computers. They were even smaller in size, generated much less heat and required much less power compared to the earlier two generations of computers. These computers also required less human labour at the assembly stage. Some examples of third generation computers are IBM 360, PDP-8, Cray-1 and VAX.

Fourth Generation Computers: The third generation computers became outdated when it was found, around 1978, that thousands of ICs could be integrated onto a single chip, called LSI (Large Scale Integration). The fourth generation of computers (1978-till date) was marked by the use of Large Scale Integrated (LSI) circuits in place of ICs.
As thousands of ICs could be put onto a single circuit, LSI circuits are still more compact than ICs. In 1978, it was found that millions of components could be packed onto a single circuit, known as Very Large Scale Integration (VLSI). VLSI is the latest computer technology and led to the development of the popular Personal Computers (PCs), also called Microcomputers. Some examples of fourth generation computers are IBM PC, IBM PC/AT, 386, 486, Pentium and CRAY-2.

Fifth Generation Computers: Although fourth generation computers offer many advantages to users, they still have one main disadvantage. The major drawback of these computers is that they have no intelligence of their own. Scientists are now trying to remove this drawback by making computers that would have artificial intelligence. The fifth generation computers (tomorrow's computers) are still in the research and development stage. These computers would have artificial intelligence. They will use ULSI (Ultra Large-Scale Integration) chips in place of VLSI chips. One ULSI chip contains millions of components on a single IC. Robots have some features of fifth generation computers.

2. What are the components of Instruction Set architecture? Discuss in brief.

Ans: The most common fields found in instruction formats are:
1. An operation code field that specifies the operation to be performed.
2. An address field that designates a memory address or a processor register.
3. A mode field that specifies the way the operand or the effective address is determined.

Zero-address instructions: In zero-address machines, both operands are assumed to be stored at a default location. The stack is used as the source of the input operands, and the result goes back onto the stack. A stack is a LIFO (last-in-first-out) data structure, supported by all processors whether or not they are zero-address machines. LIFO implies that the last item placed on the stack is the first item to be taken out of it. All operations on this type of machine assume that the required input operands are the top two values on the stack, and the result of the operation is placed on top of the stack.

One-address instructions: Earlier, memory used to be costly and slow to access, so a special set of registers was used to provide an input operand and to receive the result from the ALU. Due to this, these registers are known as accumulators. Mostly, there is only one accumulator register in a machine. This type of design, called an accumulator machine, is prevalent only when memory is expensive. Most operations in accumulator machines are performed on the contents of the accumulator and the operand supplied by the instruction. Therefore, instructions on these machines need to state only the address of a single operand.

Two-address instructions: Here the instruction has two address fields; each can specify either a memory word or a processor register. Usually, we use dest to indicate the address used for the destination; this address also supplies one of the source operands.
Three-address instructions: Here the instruction has three address fields. The general format of an instruction is:

operation dest, op1, op2

where:
- operation: the operation to be carried out;
- dest: the address to store the result;
- op1, op2: the operands on which the instruction is to be executed.
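To make the contrast between these formats concrete, here is a minimal Python sketch (not part of the original text; the mnemonics PUSH, POP, ADD and MUL are illustrative, not from any real ISA) that interprets a zero-address program computing X = (A + B) * C. The same computation in three-address form would need only two instructions, e.g. ADD T, A, B followed by MUL X, T, C.

def run(program, memory):
    # Interpret a tiny hypothetical zero-address (stack) instruction set.
    stack = []
    for op, *args in program:
        if op == "PUSH":                  # push a memory operand onto the stack
            stack.append(memory[args[0]])
        elif op == "POP":                 # store the top of the stack back to memory
            memory[args[0]] = stack.pop()
        elif op == "ADD":                 # operands are implicitly the top two stack items
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return memory

# X = (A + B) * C expressed with zero-address arithmetic instructions
mem = {"A": 2, "B": 3, "C": 4}
prog = [("PUSH", "A"), ("PUSH", "B"), ("ADD",),
        ("PUSH", "C"), ("MUL",), ("POP", "X")]
run(prog, mem)
print(mem["X"])   # 20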
3. Explain the different types of addressing modes.

Ans: A distinct addressing mode field is required in the instruction format. An address field in an instruction may or may not be present. If it is present, it may designate a memory address; if not, a processor register may be designated. Note that each address field may be associated with its own specific addressing mode.

Implied Mode: The operands in this mode are specified implicitly in the definition of the instruction. For example, the instruction "complement accumulator" is an implied mode instruction, as the definition of the instruction implies the operand in the accumulator register. In fact, all register-reference instructions that use an accumulator are implied mode instructions. Zero-address instructions are also implied mode instructions.

Immediate Mode: The operand in this mode is stated in the instruction itself, i.e. there is an operand field rather than an address field in an immediate mode instruction.

Register Mode: In this mode, the operands are in registers that reside within the CPU. The required register is selected from a register field in the instruction.

Register Indirect Mode: In this mode, the instruction specifies a register in the CPU that contains the address of the operand, not the operand itself. Using a register indirect mode instruction requires placing the memory address of the operand in the processor register with a previous instruction.

Auto-increment or Auto-decrement Mode: This is similar to the register indirect mode, except that the register holding the operand address is automatically incremented or decremented after (or before) its value is used to access memory.

Direct Addressing Mode: In this mode, the operand resides in memory and its address is given directly by the address field of the instruction, so that the effective address is equal to the address part of the instruction.

Indirect Addressing Mode: Unlike the direct addressing mode, in this mode the address field gives the address where the effective address is stored in memory. Control fetches the instruction from memory and uses its address part to access memory again to read the effective address.

Relative Addressing Mode: This mode is often applied with branch-type instructions, where the branch address is relative to the address of the instruction word. In this mode, the contents of the program counter are added to the address part of the instruction to obtain the effective address, whose location in memory is relative to the address of the following instruction.

Indexed Addressing Mode: In this mode, the effective address is obtained by adding the content of an index register to the address part of the instruction. The index register is a special CPU register that contains an index value; it may be incremented after its value is used to access memory.

Base Register Addressing Mode: In this mode, the effective address is obtained by adding the content of a base register to the address part of the instruction, like the indexed addressing mode, except that the register here is a base register and not an index register. The difference between the base register and indexed addressing modes lies in their usage rather than their computation. An index register is assumed to hold an index number that is relative to the address part of the instruction.
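As a small illustration of how some of these modes compute the effective address (EA), here is a Python sketch; the register contents, memory contents and address field value below are hypothetical, chosen only to show the arithmetic:

# Hypothetical machine state
memory = {100: 500, 500: 42}          # address -> contents
pc, index_reg, base_reg = 200, 10, 400
addr_field = 100                      # address part of the instruction

ea_direct   = addr_field              # Direct:        EA = address field
ea_indirect = memory[addr_field]      # Indirect:      EA = M[address field]
ea_relative = pc + addr_field         # Relative:      EA = PC + address field
ea_indexed  = index_reg + addr_field  # Indexed:       EA = XR + address field
ea_based    = base_reg + addr_field   # Base register: EA = BR + address field

print(ea_direct, ea_indirect, ea_relative, ea_indexed, ea_based)
# 100 500 300 110 500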
4. What is meant by direct mapping? Discuss the various types of mapping?

Ans: Mapping refers to the translation of a main memory address to a cache memory address. The transfer of information from main memory to cache memory is conducted in units of cache blocks, or cache lines. Blocks in the cache are called block frames, denoted Bi for i = 1, 2, ..., m, where m is the total number of block frames in the cache. The corresponding memory blocks are denoted Bj for j = 1, 2, ..., n, where n is the total number of blocks in main memory. It is assumed that n >> m, with n = 2^s and m = 2^r, where s is the number of bits required to address a main memory block and r is the number of bits required to address a cache block frame. There are four types of mapping schemes: direct mapping, associative mapping, set-associative mapping, and sector mapping.

Direct mapping: Associative memories are very costly compared to RAM because of the additional logic associated with each cell, which motivates direct mapping. Generally, there are 2^j words in main memory and 2^k words in cache memory. The j-bit memory address is divided into two fields: the k low-order bits form the index field and the remaining j - k bits form the tag field. The direct mapping cache organization uses the k-bit index to access the cache and the full j-bit address for main memory. Each cache word contains data and an associated tag. In direct mapping, every memory block is assigned to a particular cache line; if that line already holds a memory block when a new block is to be brought in, the old block is removed. The tag bits are stored alongside the data bits when a new word enters the cache. When the processor generates a memory request, the index field of the main memory address is used to access the cache. The tag of the word read from the cache is compared with the tag field of the processor's address. If the comparison is positive, there is a hit and the word is found in the cache.
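The index/tag split can be sketched in a few lines of Python (the 9-bit index width and the sample addresses below are assumptions for illustration, not values from the text):

INDEX_BITS = 9                  # assume a 512-line cache
cache = {}                      # index -> (tag, data); models the cache lines

def split(address):
    index = address & ((1 << INDEX_BITS) - 1)   # low-order bits: index field
    tag = address >> INDEX_BITS                 # remaining bits: tag field
    return tag, index

def is_hit(address):
    tag, index = split(address)
    line = cache.get(index)
    return line is not None and line[0] == tag  # hit only if the tags match

tag, index = split(0x1234)
cache[index] = (tag, "word from memory")        # fill the selected line
print(is_hit(0x1234))   # True: same index, matching tag
print(is_hit(0x0034))   # False: same index, different tag (old block would be replaced)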
Associative mapping: Associative mapping gives the quickest and most flexible cache organization. Both the addresses of words and the contents of the words are stored in associative memory. This means the cache can store any word from main memory. The associative memory is also called content addressable memory (CAM). When a memory address produced by the processor is sent to the CAM, the CAM simultaneously compares that address with all addresses currently stored in the cache.

5. Explain the hardware architecture of parallel processing.

Ans: CPUs are the core element of parallel processing. The essential computing process is the execution of a sequence of instructions on a set of data. The term stream is used here to denote a sequence of items as executed by a single processor or multiprocessor. Based on the number of instruction and data streams that can be processed simultaneously, Flynn classifies computer systems into four categories.

Single Instruction Multiple Data (SIMD): The term single instruction implies that all processing units execute the same instruction at any given clock cycle. The term multiple data implies that each processing unit can work on a different data element. Generally, this type of machine has one instruction dispatcher, a very large array of very small-capacity processing units and a network of very high bandwidth. This type is suitable for specialised problems characterised by a high degree of regularity, for example, image processing.

Today, modern microprocessors can execute the same instruction on multiple data; this is called Single Instruction Multiple Data (SIMD). SIMD instructions handle floating-point real numbers and also provide important speedups in algorithms. As the execution units for SIMD instructions typically belong to a physical core, as many SIMD instructions can run in parallel as there are available physical cores. As mentioned, using these vector-processing capabilities in parallel can give significant speedups in certain specific algorithms. Adding SIMD instructions and hardware to a multi-core CPU is somewhat more drastic than adding floating-point capability.

Since their inception, microprocessors have been SISD devices. SIMD is also referred to as vector processing, as its fundamental unit of organisation is the vector. A normal CPU operates on scalars, one at a time. A superscalar CPU operates on multiple scalars at a given moment, but it performs a different operation on each of them. A vector processor, on the other hand, lines up a whole row of scalars of the same type and operates on them as a single unit. Modern superscalar SISD machines exploit a property of the instruction stream called instruction-level parallelism: multiple instructions can be executed at the same time on the same data stream. A SIMD machine exploits a different property, of the data stream, called data parallelism: you get data parallelism when you have a large mass of uniform data that requires the same instruction performed on it. Therefore, a SIMD machine is a totally separate class of machine from the normal microprocessor.
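As a loose illustration of data parallelism, the following Python sketch (assuming NumPy is available; NumPy's vectorised operations are typically backed by SIMD instructions on the host CPU) contrasts a scalar, one-element-at-a-time loop with a single vectorised operation over whole arrays:

import numpy as np

a = np.arange(100_000, dtype=np.float32)
b = np.arange(100_000, dtype=np.float32)

# SISD style: one addition per loop iteration
c_scalar = [x + y for x, y in zip(a, b)]

# SIMD style: one conceptual operation applied across many data elements
c_vector = a + b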
6. Discuss scalable and multithreaded architectures. Define RAID. Also explain the levels of RAID.

Ans: Scalability is the ability to increase the amount of processing that can be done by adding further resources to a system. It differs from performance in that it does not improve performance but rather sustains it while providing higher throughput. In other words, performance is the system's response time under a typical load, whereas scalability is the ability of a system to take on increased load without degrading response time. A computer architecture that is designed to support more than one processor is called a scalable architecture. Almost all business applications are scalable. Scalability can be achieved in several ways, such as using more powerful CPUs or adding extra CPUs. There are two different modes of scaling: scale up and scale out. Scaling up is achieved by adding extra resources to a single machine to allow an application to service more requests; the most common ways to do this are adding memory (RAM) or using a faster CPU. Scaling out is achieved by adding servers to a server group, making applications scale by distributing the processing among multiple computers. An understanding of the bottlenecks and of where each scaling method applies is required before a particular method can be productively utilised.

Multithreading is the capability of a processor to utilise multiple threads of execution simultaneously in one application. In simple words, it allows a program to do two things at once. When an application is run, each of its processes contains at least one thread. However, many concurrent threads may belong to one process in a multithreading system. For example, a Java Virtual Machine (JVM) is a system of one process with multiple threads. Most modern operating systems, such as Solaris, Linux, Windows 2000 and OS/2, support multiple processes with multiple threads per process. However, the traditional operating system MS-DOS supports a single user process and a single thread. Some traditional UNIX systems are multiprogramming systems: they maintain multiple user processes but allow only one execution path per process.
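A minimal Python sketch of one process running several concurrent threads (the thread count and worker function are illustrative; note that in CPython the global interpreter lock limits pure-Python threads to concurrency rather than true parallelism):

import threading

def worker(task_id):
    # Each thread executes this function concurrently within the same process
    print(f"task {task_id} running on {threading.current_thread().name}")

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()                   # begin concurrent execution
for t in threads:
    t.join()                    # wait for all threads to finish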