Evolution of Computers (Cont’d)
B. Generations of Computers
The evolution of digital computing is often divided into generations. Each
generation is characterized by dramatic improvements over the previous
generation in the technology used to build computers, the internal
organization of computer systems, and programming languages. Although algorithms, including those used in computational science, are not usually associated with particular computer generations, they too have improved steadily. The following history has been organized using these widely
recognized generations as mileposts.
What generation a computer belongs to is determined by the technology it uses. With each successive generation, computer performance improved not only through better hardware technology but also through superior operating systems and other system software.
1. First Generation Electronic Computers (1941 – 1956)
First-generation computers were characterized by vacuum tubes. A vacuum tube is a delicate glass device that can control and amplify electronic signals. Built from thousands of vacuum tubes, these machines were the fastest calculating devices of their time, but they were very large, consumed a great deal of electricity, and generated enormous amounts of heat.
They relied on machine language, the lowest-level programming language
understood by computers, to perform operations, and they could only solve
one problem at a time. It would take operators days or even weeks to set up a new problem. Input was based on punched cards and paper tape, and output
was displayed on printouts.
The Universal Automatic Computer (UNIVAC), Electronic Discrete Variable Automatic Computer (EDVAC), Electronic Delay Storage Automatic Calculator (EDSAC), and Electronic Numerical Integrator and Computer (ENIAC) are examples of first-generation computing devices.
2. Second Generation (1957 – 1962)
The second generation saw several important developments at all levels of
computer system design, from the technology used to build the basic circuits
to the programming languages used to write scientific applications. First-
generation computers were notoriously unreliable, largely because the
vacuum tubes kept burning out. The invention of the transistor changed the way computers were built, ushering in the second generation of computer technology.
A transistor is a solid-state device that functions as an electronic switch. It
regulates current or voltage flow and acts as a switch or gate for electronic
signals in an electronic circuit, but at a tiny fraction of the weight, power
consumption, and heat output of vacuum tubes. Because transistors are small
and can last indefinitely, second-generation computers were much smaller
and more reliable than first-generation computers. They looked much more
like the computers we use today. Although they still used punched cards for input, they had printers, tape storage, and disk storage.
In contrast to the first-generation computer’s reliance on cumbersome
machine language, the second generation saw the development of the first
high-level programming languages, which are much easier for people to
understand and work with than machine language. Two programming languages introduced during the second generation, Common Business-Oriented Language (COBOL) and Formula Translator (FORTRAN), remain in use even today. COBOL is preferred by businesses, and FORTRAN is used by scientists and engineers.
Important commercial machines of this era include the IBM 1620 and the IBM 7094. The IBM 1620 was developed for scientific computing and became the computer of choice for university research labs.
3. Third Generation (1963 – 1972)
The development of the Integrated Circuit (IC) brought about the third generation of computers, which incorporated many transistors and electronic circuits on a single wafer or chip of silicon. Essentially, an integrated circuit is a solid-state device on which an entire circuit, the transistors and the connections between them, can be created; in other words, it is a semiconductor device with several transistors built into one physical component. This meant that a single integrated circuit chip, not much bigger than an early transistor, could replace entire circuit boards containing many transistors, again reducing the size of computers. Because second-generation computers often occupied entire rooms or buildings, these much smaller machines came to be known as minicomputers. The invention of the IC, credited to Jack Kilby and Robert Noyce in 1958-59, was the greatest achievement of this period.
4. Fourth Generation (1973 – 1984)
The fourth-generation computers emerged with development of the VLSI
(Very Large-Scale Integration). Very-large-scale integration (VLSI) is the
process of creating an integrated circuit (IC) by combining thousands of
transistors into a single chip. VLSI began in the 1970s when complex
semiconductor and communication technologies were being developed.
Before the introduction of VLSI technology, most ICs had a limited set of
functions they could perform. An electronic circuit might consist of a CPU,
ROM, RAM and other glue logic. VLSI lets IC designers add all of these into
one chip. With the help of VLSI technology microprocessor came into
existence. From then, the evolution of computing technology has been an
ever-increasing miniaturization of the electronic circuitry. Some computers of
this generation were − DEC 10, STAR 1000, PDP 11, CRAY-1(Super
Computer), and CRAY-X-MP(Super Computer)
5. Fifth-Generation Computers (Now and the Future)
Fifth-generation computers are most commonly defined as those that are
based on artificial intelligence, allowing them to think, reason, and learn.
Some aspects of fifth-generation computers, such as voice and touch input and speech recognition, are already in use today. In the future, fifth-generation computers are expected to be constructed quite differently, for example as optical computers that process data using light instead of electrons, as tiny computers built with nanotechnology, or as general-purpose computers embedded in desks, home appliances, and other everyday devices.