A broad range of industrial and consumer products use computers as control systems,
including simple special-purpose devices like microwave ovens and remote controls,
and factory devices like industrial robots. Computers are at the core of general-purpose
devices such as personal computers and mobile devices such as smartphones.
Computers power the Internet, which links billions of computers and users.
Early computers were meant to be used only for calculations. Simple manual
instruments like the abacus have aided people in doing calculations since ancient times.
Early in the Industrial Revolution, some mechanical devices were built to automate long,
tedious tasks, such as guiding patterns for looms. More sophisticated electrical
machines did specialized analog calculations in the early 20th century. The
first digital electronic calculating machines were developed during World War II,
some electromechanical, others using thermionic valves. The
first semiconductor transistors in the late 1940s were followed by the silicon-
based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in
the late 1950s, leading to the microprocessor and the microcomputer revolution in the
1970s. The speed, power, and versatility of computers have been increasing
dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's
law noted that counts doubled every two years), leading to the Digital Revolution during
the late 20th and early 21st centuries.
Etymology
A human computer, with microscope and calculator, 1952
It was not until the mid-20th century that the word acquired its modern definition;
according to the Oxford English Dictionary, the first known use of the
word computer was in a different sense, in a 1613 book called The Yong Mans
Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer
of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes
into a short number." This usage of the term referred to a human computer, a person
who carried out calculations or computations. The word continued to have the same
meaning until the middle of the 20th century. During the latter part of this period, women
were often hired as computers because they could be paid less than their male
counterparts.[1] By 1943, most human computers were women.[2]
The Online Etymology Dictionary gives the first attested use of computer in the 1640s,
meaning 'one who calculates'; this is an "agent noun from compute (v.)". The same
dictionary states that the use of the term to mean "'calculating machine' (of any type) is
from 1897", and that the "modern use" of the term, meaning 'programmable digital
electronic computer', dates from "1945 under this name; [in a] theoretical [sense] from
1937, as Turing machine".[3] The name has remained, although modern computers are
capable of many higher-level functions.
History
Main articles: History of computing and History of computing hardware
For a chronological guide, see Timeline of computing.
Pre-20th century
The Ishango bone, a bone tool dating back to prehistoric Africa
Devices have been used to aid computation for thousands of years, mostly using one-
to-one correspondence with fingers. The earliest counting device was most likely a form
of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi
(clay spheres, cones, etc.) which represented counts of items, likely livestock or grains,
sealed in hollow unbaked clay containers.[a][4] The use of counting rods is one example.
The planimeter was a manual instrument to calculate the area of a closed figure by
tracing over it with a mechanical linkage.
A slide rule
The slide rule was invented around 1620–1630 by the English clergyman William
Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-
operated analog computer for doing multiplication and division. As slide rule
development progressed, added scales provided reciprocals, squares and square roots,
cubes and cube roots, as well as transcendental functions such as logarithms and
exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with
special scales are still used for quick performance of routine calculations, such as
the E6B circular slide rule used for time and distance calculations on light aircraft.
In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a
series of advanced analog machines that could solve real and complex roots
of polynomials,[17][18][19][20] which were published in 1901 by the Paris Academy of Sciences.[21]
First computer
Charles Babbage
A diagram of a portion of Babbage's Difference engine
After working on his difference engine, a machine designed to aid in navigational
calculations, Babbage announced his invention in 1822 in a paper to the Royal
Astronomical Society, titled "Note on the application of machinery to the computation of
astronomical and mathematical tables".[23] By 1833 he had realized that a much more
general design, an analytical engine, was possible. The input of programs and data was to be provided
to the machine via punched cards, a method being used at the time to direct
mechanical looms such as the Jacquard loom. For output, the machine would have a
printer, a curve plotter and a bell. The machine would also be able to punch numbers
onto cards to be read in later. The engine would incorporate an arithmetic logic
unit, control flow in the form of conditional branching and loops, and integrated memory,
making it the first design for a general-purpose computer that could be described in
modern terms as Turing-complete.[24][25]
The machine was about a century ahead of its time. All the parts for his machine had to
be made by hand – this was a major problem for a device with thousands of parts.
Eventually, the project was dissolved with the decision of the British Government to
cease funding. Babbage's failure to complete the analytical engine can be chiefly
attributed to political and financial difficulties as well as his desire to develop an
increasingly sophisticated computer and to move ahead faster than anyone else could
follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the
analytical engine's computing unit (the mill) in 1888. He gave a successful
demonstration of its use in computing tables in 1906.
Analog computers
Main article: Analog computer
The art of mechanical analog computing reached its zenith with the differential analyzer,
built by H. L. Hazen and Vannevar Bush at MIT starting in 1927. This built on the
mechanical integrators of James Thomson and the torque amplifiers invented by H. W.
Nieman. A dozen of these devices were built before their obsolescence became
obvious. By the 1950s, the success of digital electronic computers had spelled the end
for most analog computing machines, but analog computers remained in use during the
1950s in some specialized applications such as education (slide rule) and aircraft
(control systems).
Digital computers
Electromechanical
Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with
his insight of applying Boolean algebra to the analysis and synthesis of switching
circuits being the basic concept which underlies all electronic digital computers.[35][36]
By 1938, the United States Navy had developed an electromechanical analog computer
small enough to use aboard a submarine. This was the Torpedo Data Computer, which
used trigonometry to solve the problem of firing a torpedo at a moving target.
During World War II similar devices were developed in other countries as well.
Konrad Zuse's next computer, the Z4, became the world's first commercial computer; after initial
delay due to the Second World War, it was completed in 1950 and delivered to the ETH
Zurich.[46] The computer was manufactured by Zuse's own company, Zuse KG, which
was founded in 1941 as the first company with the sole purpose of developing
computers in Berlin.[46] The Z4 served as the inspiration for the construction of
the ERMETH, the first Swiss computer and one of the first in Europe.[47]
Colossus was the world's first electronic digital programmable computer.[34] It used a
large number of valves (vacuum tubes). It had paper-tape input and was capable of
being configured to perform a variety of boolean logical operations on its data, but it was
not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II,
making ten machines in total). Colossus Mark I contained 1,500 thermionic valves
(tubes), but the Mark II, with 2,400 valves, was both five times faster and simpler to
operate than the Mark I, greatly speeding the decoding process.[55][56]
ENIAC was the first electronic, Turing-complete device, and performed ballistics trajectory
calculations for the United States Army.
The ENIAC[57] (Electronic Numerical Integrator and Computer) was the first
electronic programmable computer built in the U.S. Although the ENIAC was similar to
the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the
Colossus, a "program" on the ENIAC was defined by the states of its patch cables and
switches, a far cry from the stored program electronic machines that came later. Once a
program was written, it had to be mechanically set into the machine with manual
resetting of plugs and switches. The programmers of the ENIAC were six women, often
known collectively as the "ENIAC girls".[58][59]
It combined the high speed of electronics with the ability to be programmed for many
complex problems. It could add or subtract 5000 times a second, a thousand times
faster than any other machine. It also had modules to multiply, divide, and square root.
High-speed memory was limited to 20 words (about 80 bytes). Built under the direction
of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's
development and construction lasted from 1943 to full operation at the end of 1945. The
machine was huge, weighing 30 tons, using 200 kilowatts of electric power and
contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of
resistors, capacitors, and inductors.[60]
Modern computers
Concept of modern computer
The principle of the modern computer was proposed by Alan Turing in his seminal 1936
paper,[61] On Computable Numbers. Turing proposed a simple device that he called
"Universal Computing machine" and that is now known as a universal Turing machine.
He proved that such a machine is capable of computing anything that is computable by
executing instructions (program) stored on tape, allowing the machine to be
programmable. The fundamental concept of Turing's design is the stored program,
where all the instructions for computing are stored in memory. Von
Neumann acknowledged that the central concept of the modern computer was due to
this paper.[62] Turing machines are to this day a central object of study in theory of
computation. Except for the limitations imposed by their finite memory stores, modern
computers are said to be Turing-complete, which is to say, they
have algorithm execution capability equivalent to a universal Turing machine.
Stored programs
Main article: Stored-program computer
The Manchester Baby was the world's first stored-program computer. It was built at
the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff
Tootill, and ran its first program on 21 June 1948.[63] It was designed as a testbed for
the Williams tube, the first random-access digital storage device.[64] Although the
computer was described as "small and primitive" by a 1998 retrospective, it was the first
working machine to contain all of the elements essential to a modern electronic
computer.[65] As soon as the Baby had demonstrated the feasibility of its design, a
project began at the university to develop it into a practically useful computer,
the Manchester Mark 1.
The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first
commercially available general-purpose computer.[66] Built by Ferranti, it was delivered to
the University of Manchester in February 1951. At least seven of these later machines
were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam.[67] In
October 1947 the directors of British catering company J. Lyons & Company decided to
take an active role in promoting the commercial development of computers.
Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became
operational in April 1951[68] and ran the world's first routine office computer job.
Transistors
Main articles: Transistor and History of the transistor
Further information: Transistor computer and MOSFET
At the University of Manchester, a team under the leadership of Tom Kilburn designed
and built a machine using the newly developed transistors instead of valves.[72] Their
first transistorized computer, and the first in the world, was operational by 1953, and a
second version was completed there in April 1955. However, the machine did make use
of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write
on its magnetic drum memory, so it was not the first completely transistorized computer.
That distinction goes to the Harwell CADET of 1955,[73] built by the electronics division of
the Atomic Energy Research Establishment at Harwell.[73][74]
Integrated circuits
Main articles: Integrated circuit and Invention of the integrated circuit
Further information: Planar process and Microprocessor
The first working ICs were invented by Jack Kilby at Texas Instruments and Robert
Noyce at Fairchild Semiconductor.[92] Kilby recorded his initial ideas concerning the
integrated circuit in July 1958, successfully demonstrating the first working integrated
example on 12 September 1958.[93] In his patent application of 6 February 1959, Kilby
described his new device as "a body of semiconductor material ... wherein all the
components of the electronic circuit are completely integrated".[94][95] However, Kilby's
invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated
circuit (IC) chip.[96] Kilby's IC had external wire connections, which made it difficult to
mass-produce.[97]
Noyce also came up with his own idea of an integrated circuit half a year later than
Kilby.[98] Noyce's invention was the first true monolithic IC chip.[99][97] His chip solved many
practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was
made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC
was fabricated using the planar process, developed by his colleague Jean Hoerni in
early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's
work on semiconductor surface passivation by silicon dioxide.[100][101][102][103][104][105]
Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a
coin.[115] They may or may not have integrated RAM and flash memory. If not integrated,
the RAM is usually placed directly above (known as Package on package) or below (on
the opposite side of the circuit board) the SoC, and the flash memory is usually placed
right next to the SoC. This is done to improve data transfer speeds, as the data signals
do not have to travel long distances. Since ENIAC in 1945, computers have advanced
enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin
while also being hundreds of thousands of times more powerful than ENIAC, integrating
billions of transistors, and consuming only a few watts of power.
Mobile computers
The first mobile computers were heavy and ran from mains power. The 50 lb
(23 kg) IBM 5100 was an early example. Later portables such as the Osborne
1 and Compaq Portable were considerably lighter but still needed to be plugged in. The
first laptops, such as the Grid Compass, removed this requirement by incorporating
batteries – and with the continued miniaturization of computing resources and
advancements in portable battery life, portable computers grew in popularity in the
2000s.[116] The same developments allowed manufacturers to integrate computing
resources into cellular mobile phones by the early 2000s.
These smartphones and tablets run on a variety of operating systems and recently
became the dominant computing device on the market.[117] These are powered
by systems on a chip (SoCs), which are complete computers on a microchip the size of
a coin.[115]
Types
See also: Classes of computers
Computers can be classified in a number of different ways, including:
By architecture
Analog computer
Digital computer
Hybrid computer
Harvard architecture
Von Neumann architecture
Complex instruction set computer
Reduced instruction set computer
By size, form-factor and purpose
See also: List of computer size categories
Supercomputer
Mainframe computer
Minicomputer (term no longer used),[118] Midrange computer
Server
Rackmount server
Blade server
Tower server
Personal computer
Workstation
Microcomputer (term no longer used)[119]
Home computer (term fallen into disuse)[120]
Desktop computer
Tower desktop
Slimline desktop
Multimedia computer (non-linear editing systems, video-editing PCs and the like; this term is no longer used)[121]
Gaming computer
All-in-one PC
Nettop (Small form factor PCs, Mini PCs)
Home theater PC
Keyboard computer
Portable computer
Thin client
Internet appliance
Laptop computer
Desktop replacement computer
Gaming laptop
Rugged laptop
2-in-1 PC
Ultrabook
Chromebook
Subnotebook
Smartbook
Netbook
Mobile computer
Tablet computer
Smartphone
Ultra-mobile PC
Pocket PC
Palmtop PC
Handheld PC
Pocket computer
Wearable computer
Smartwatch
Smartglasses
Single-board computer
Plug computer
Stick PC
Programmable logic controller
Computer-on-module
System on module
System in a package
System-on-chip (Also known as an Application Processor or AP if it lacks circuitry
such as radio circuitry)
Microcontroller
Hardware
Main articles: Computer hardware, Personal computer hardware, Central processing
unit, and Microprocessor
Video demonstrating the standard components of a "slimline" computer
The term hardware covers all of those parts of a computer that are tangible physical
objects. Circuits, computer chips, graphics cards, sound cards, memory (RAM),
motherboard, displays, power supplies, cables, keyboards, printers and "mice" input
devices are all hardware.
First generation (mechanical/electromechanical)
  Calculators: Pascal's calculator, Arithmometer, Difference engine, Quevedo's analytical machines
  Programmable devices: Jacquard loom, Analytical engine, IBM ASCC/Harvard Mark I, Harvard Mark II, IBM SSEC, Z1, Z2, Z3
4-bit microcomputer: Intel 4004, Intel 4040
Embedded computer: Intel 8048, Intel 8051
Theoretical/experimental: Chemical computer, DNA computing, Optical computer, Spintronics-based computer, Wetware/Organic computer
Peripheral device (input/output)
  Output: Monitor, printer, loudspeaker
Computer buses
  Long range (computer networking): Ethernet, ATM, FDDI
A general-purpose computer has four main components: the arithmetic logic unit (ALU),
the control unit, the memory, and the input and output devices (collectively termed I/O).
These parts are interconnected by buses, often made of groups of wires. Inside each of
these parts are thousands to trillions of small electrical circuits which can be turned off
or on by means of an electronic switch. Each circuit represents a bit (binary digit) of
information so that when the circuit is on it represents a "1", and when off it represents a
"0" (in positive logic representation). The circuits are arranged in logic gates so that one
or more of the circuits may control the state of one or more of the other circuits.
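To make the idea of gates controlling other circuits concrete, the following minimal
sketch (an illustration invented for this article, not a description of real hardware)
models a half adder, a small circuit that adds two one-bit numbers using an XOR gate
for the sum and an AND gate for the carry:

    # Logic gates modelled on single bits (0 or 1).
    def AND(a, b):
        return a & b   # on only when both inputs are on

    def XOR(a, b):
        return a ^ b   # on when exactly one input is on

    def half_adder(a, b):
        # Add two one-bit values; return (sum bit, carry bit).
        return XOR(a, b), AND(a, b)

    print(half_adder(1, 1))  # (0, 1): binary 10, i.e. 1 + 1 = 2

Chaining such adders bit by bit yields circuits that add numbers of any width, which is
essentially how an arithmetic logic unit performs addition.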
Input devices
When unprocessed data is sent to the computer with the help of input devices, the data
is processed and sent to output devices. The input devices may be hand-operated or
automated. The act of processing is mainly regulated by the CPU. Some examples of
input devices are:
Computer keyboard
Digital camera
Graphics tablet
Image scanner
Joystick
Microphone
Mouse
Overlay keyboard
Real-time clock
Trackball
Touchscreen
Light pen
Output devices
The means through which a computer gives output are known as output devices. Some
examples of output devices are:
Computer monitor
Printer
PC speaker
Projector
Sound card
Graphics card
Control unit
Main articles: CPU design and Control unit
Diagram showing how a particular MIPS
architecture instruction would be decoded by the control system
The control unit (often called a control system or central controller) manages the
computer's various components; it reads and interprets (decodes) the program
instructions, transforming them into control signals that activate other parts of the
computer.[d] Control systems in advanced computers may change the order of execution
of some instructions to improve performance.
A key component common to all CPUs is the program counter, a special memory cell
(a register) that keeps track of which location in memory the next instruction is to be
read from.[e]
The control system's function is as follows (this is a simplified description, and some of
these steps may be performed concurrently or in a different order depending on the type
of CPU; a minimal code sketch of the cycle appears after the list):
1. Read the code for the next instruction from the cell indicated by the program
counter.
2. Decode the numerical code for the instruction into a set of commands or signals
for each of the other systems.
3. Increment the program counter so it points to the next instruction.
4. Read whatever data the instruction requires from cells in memory (or perhaps
from an input device). The location of this required data is typically stored within
the instruction code.
5. Provide the necessary data to an ALU or register.
6. If the instruction requires an ALU or specialized hardware to complete, instruct
the hardware to perform the requested operation.
7. Write the result from the ALU back to a memory location or to a register or
perhaps an output device.
8. Jump back to step (1).
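Condensed into code, the cycle above might look like the following toy sketch (the
instruction format, opcode names, and single accumulator register are invented for the
example; real instruction sets are far richer):

    # Toy control unit: each instruction is a (opcode, operand) pair.
    def run(program, memory):
        pc = 0    # program counter
        acc = 0   # a single accumulator register
        while True:
            opcode, operand = program[pc]    # 1. fetch the next instruction
            pc += 1                          # 3. increment the program counter
            if opcode == "LOAD":             # 2. decode, then 4./5. read data
                acc = memory[operand]
            elif opcode == "ADD":            # 6. have the ALU operate
                acc += memory[operand]
            elif opcode == "STORE":          # 7. write the result back
                memory[operand] = acc
            elif opcode == "JUMP":           # a jump: overwrite the counter
                pc = operand
            elif opcode == "HALT":
                return memory

    # Add cells 0 and 1 of memory, leaving the result in cell 2.
    print(run([("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)],
              [20, 22, 0]))   # [20, 22, 42]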
Since the program counter is (conceptually) just another set of memory cells, it can be
changed by calculations done in the ALU. Adding 100 to the program counter would
cause the next instruction to be read from a place 100 locations further down the
program. Instructions that modify the program counter are often known as "jumps" and
allow for loops (instructions that are repeated by the computer) and often conditional
instruction execution (both examples of control flow).
The sequence of operations that the control unit goes through to process an instruction
is in itself like a short computer program, and indeed, in some more complex CPU
designs, there is another yet smaller computer called a microsequencer, which runs
a microcode program that causes all of these events to happen.
Superscalar computers may contain multiple ALUs, allowing them to process several
instructions simultaneously.[123] Graphics processors and computers
with SIMD and MIMD features often contain ALUs that can perform arithmetic
on vectors and matrices.
Memory
Main articles: Computer memory and Computer data storage
Magnetic-core memory (using magnetic cores) was
the computer memory of choice in the 1960s, until it was replaced by semiconductor
memory (using MOS memory cells).
A computer's memory can be viewed as a list of cells into which numbers can be placed
or read. Each cell has a numbered "address" and can store a single number. The
computer can be instructed to "put the number 123 into the cell numbered 1357" or to
"add the number that is in cell 1357 to the number that is in cell 2468 and put the
answer into cell 1595." The information stored in memory may represent practically
anything. Letters, numbers, even computer instructions can be placed into memory with
equal ease. Since the CPU does not differentiate between different types of information,
it is the software's responsibility to give significance to what the memory sees as
nothing but a series of numbers.
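Modelling memory as a plain Python list of numbered cells makes the two quoted
instructions almost literal (a sketch; real memory is addressed by the hardware, not by
list indexing):

    memory = [0] * 4096   # 4,096 cells, each holding one number

    memory[1357] = 123                           # "put 123 into cell 1357"
    memory[1595] = memory[1357] + memory[2468]   # "add cell 1357 to cell 2468
                                                 #  and put the answer in 1595"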
In almost all modern computers, each memory cell is set up to store binary numbers in
groups of eight bits (called a byte). Each byte is able to represent 256 different numbers
(2⁸ = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several
consecutive bytes may be used (typically, two, four or eight). When negative numbers
are required, they are usually stored in two's complement notation. Other arrangements
are possible, but are usually not seen outside of specialized applications or historical
contexts. A computer can store any kind of information in memory if it can be
represented numerically. Modern computers have billions or even trillions of bytes of
memory.
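The quoted ranges follow directly from the bit patterns, and Python's built-in
int.to_bytes and int.from_bytes can demonstrate both multi-byte storage and
two's-complement encoding:

    print(2 ** 8)                                  # 256 patterns per byte
    print((255).to_bytes(1, "big"))                # b'\xff': largest unsigned byte
    print((-128).to_bytes(1, "big", signed=True))  # b'\x80': two's complement

    # Larger numbers span several consecutive bytes (four, here).
    raw = (100000).to_bytes(4, "big")
    print(raw, int.from_bytes(raw, "big"))         # b'\x00\x01\x86\xa0' 100000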
The CPU contains a special set of memory cells called registers that can be read and
written to much more rapidly than the main memory area. There are typically between
two and one hundred registers depending on the type of CPU. Registers are used for
the most frequently needed data items to avoid having to access main memory every
time data is needed. As data is constantly being worked on, reducing the need to
access main memory (which is often slow compared to the ALU and control units)
greatly increases the computer's speed.
In more sophisticated computers there may be one or more RAM cache memories,
which are slower than registers but faster than main memory. Generally computers with
this sort of cache are designed to move frequently needed data into the cache
automatically, often without the need for any intervention on the programmer's part.
Input/output (I/O)
Main article: Input/output
Multitasking
Main article: Computer multitasking
While a computer may be viewed as running one gigantic program stored in its main
memory, in some systems it is necessary to give the appearance of running several
programs simultaneously. This is achieved by multitasking i.e. having the computer
switch rapidly between running each program in turn.[127] One means by which this is
done is with a special signal called an interrupt, which can periodically cause the
computer to stop executing instructions where it was and do something else instead. By
remembering where it was executing prior to the interrupt, the computer can return to
that task later. If several programs are running "at the same time", then the interrupt
generator might be causing several hundred interrupts per second, causing a program
switch each time. Since modern computers typically execute instructions several orders
of magnitude faster than human perception, it may appear that many programs are
running at the same time even though only one is ever executing in any given instant.
This method of multitasking is sometimes termed "time-sharing" since each program is
allocated a "slice" of time in turn.[128]
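A cooperative round-robin scheduler gives the flavour of this in a few lines; the sketch
below uses Python generators, with each yield standing in for the timer interrupt that
ends a program's time slice:

    from collections import deque

    def program(name, steps):
        for i in range(steps):
            print(f"{name}: step {i}")
            yield                      # give up the CPU; resume here later

    def scheduler(programs):
        ready = deque(programs)
        while ready:
            prog = ready.popleft()     # pick the next program in turn
            try:
                next(prog)             # run it for one time slice
                ready.append(prog)     # not finished: back of the queue
            except StopIteration:
                pass                   # this program has completed

    scheduler([program("A", 2), program("B", 3)])
    # The output interleaves A and B, as if both were running at once.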
Before the era of inexpensive computers, the principal use for multitasking was to allow
many people to share the same computer. Seemingly, multitasking would cause a
computer that is switching between several programs to run more slowly, in direct
proportion to the number of programs it is running, but most programs spend much of
their time waiting for slow input/output devices to complete their tasks. If a program is
waiting for the user to click on the mouse or press a key on the keyboard, then it will not
take a "time slice" until the event it is waiting for has occurred. This frees up time for
other programs to execute so that many programs may be run simultaneously without
unacceptable speed loss.
Multiprocessing
Main article: Multiprocessing
Software
Main article: Software
Software refers to parts of the computer which do not have a material form, such as
programs, data, protocols, etc. Software is that part of a computer system that consists
of encoded information or computer instructions, in contrast to the
physical hardware from which the system is built. Computer software includes computer
programs, libraries and related non-executable data, such as online
documentation or digital media. It is often divided into system software and application
software. Computer hardware and software require each other and neither can be
realistically used on its own. When software is stored in hardware that cannot easily be
modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes
called "firmware".
Macintosh operating systems: Classic Mac OS, macOS (previously OS X and Mac OS X)
Embedded and real-time: List of embedded operating systems
User interface
  Graphical user interface (WIMP): Microsoft Windows, GNOME, KDE, QNX Photon, CDE, GEM, Aqua
  Text-based user interface: Command-line interface, Text user interface
Languages
There are thousands of different programming languages—some intended for general
purpose, others useful for only highly specialized applications.
Programming languages
Commonly used assembly languages: ARM, MIPS, x86
Commonly used high-level programming languages: Ada, BASIC, C, C++, C#, COBOL, Fortran, PL/I, REXX, Java, Lisp, Pascal, Object Pascal
Commonly used scripting languages: Bourne script, JavaScript, Python, Ruby, PHP, Perl
Programs
The defining feature of modern computers which distinguishes them from all other
machines is that they can be programmed. That is to say that some type
of instructions (the program) can be given to the computer, and it will process them.
Modern computers based on the von Neumann architecture often have machine code in
the form of an imperative programming language. In practical terms, a computer
program may be just a few instructions or extend to many millions of instructions, as do
the programs for word processors and web browsers for example. A typical modern
computer can execute billions of instructions per second (gigaflops) and rarely makes a
mistake over many years of operation. Large computer programs consisting of several
million instructions may take teams of programmers years to write, and due to the
complexity of the task almost certainly contain errors.
In most cases, computer instructions are simple: add one number to another, move
some data from one location to another, send a message to some external device, etc.
These instructions are read from the computer's memory and are generally carried out
(executed) in the order they were given. However, there are usually specialized
instructions to tell the computer to jump ahead or backwards to some other place in the
program and to carry on executing from there. These are called "jump" instructions
(or branches). Furthermore, jump instructions may be made to happen conditionally so
that different sequences of instructions may be used depending on the result of some
previous calculation or some external event. Many computers directly
support subroutines by providing a type of jump that "remembers" the location it jumped
from and another instruction to return to the instruction following that jump instruction.
Program execution might be likened to reading a book. While a person will normally
read each word and line in sequence, they may at times jump back to an earlier place in
the text or skip sections that are not of interest. Similarly, a computer may sometimes
go back and repeat the instructions in some section of the program over and over again
until some internal condition is met. This is called the flow of control within the program
and it is what allows the computer to perform tasks repeatedly without human
intervention.
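For example, consider a program that adds up the numbers from 1 to 1,000. Written as
a minimal Python sketch (a real machine would run the equivalent machine code), the
conditional jump appears as the loop test:

    total = 0
    count = 1
    while count <= 1000:    # the conditional "jump back" happens here
        total += count      # the addition the computer repeats
        count += 1
    print(total)            # 500500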
Once told to run this program, the computer will perform the repetitive addition task
without further human intervention. It will almost never make a mistake and a modern
PC can complete the task in a fraction of a second.
Machine code
In most computers, individual instructions are stored as machine code with each
instruction being given a unique number (its operation code or opcode for short). The
command to add two numbers together would have one opcode; the command to
multiply them would have a different opcode, and so on. The simplest computers are
able to perform any of a handful of different instructions; the more complex computers
have several hundred to choose from, each with a unique numerical code. Since the
computer's memory is able to store numbers, it can also store the instruction codes.
This leads to the important fact that entire programs (which are just lists of these
instructions) can be represented as lists of numbers and can themselves be
manipulated inside the computer in the same way as numeric data. The fundamental
concept of storing programs in the computer's memory alongside the data they operate
on is the crux of the von Neumann, or stored program, architecture.[130][131] In some cases,
a computer might store some or all of its program in memory that is kept separate from
the data it operates on. This is called the Harvard architecture after the Harvard Mark
I computer. Modern von Neumann computers display some traits of the Harvard
architecture in their designs, such as in CPU caches.
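A toy illustration of the stored-program idea (with an opcode encoding invented for this
sketch) is a single array holding both the instructions and the data they operate on:

    # Invented encoding: opcode 1 = "add cell a and cell b into cell dest",
    # opcode 2 = "halt". Instructions and data share one memory array.
    memory = [
        1, 8, 9, 10,   # cells 0-3: add cell 8 and cell 9 into cell 10
        2, 0, 0, 0,    # cells 4-7: halt
        20, 22, 0,     # cells 8-10: the data being operated on
    ]
    pc = 0
    while memory[pc] != 2:          # fetch instructions until a halt opcode
        op, a, b, dest = memory[pc:pc + 4]
        if op == 1:
            memory[dest] = memory[a] + memory[b]
        pc += 4
    print(memory[10])               # 42

Because the program is just numbers in memory, it can itself be read and modified like
any other data, which is exactly the property the von Neumann design exploits.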
Bugs
Main article: Software bug
Networking and the Internet
In the 1970s, computer engineers at research institutions throughout the United States
began to link their computers together using telecommunications technology. The effort
was funded by ARPA (now DARPA), and the computer network that resulted was called
the ARPANET.[141] The technologies that made the Arpanet possible spread and evolved.
In time, the network spread beyond academic and military institutions and became
known as the Internet.
The number of computers that are networked is growing phenomenally. A very large
proportion of personal computers regularly connect to the Internet to communicate and
receive information. "Wireless" networking, often utilizing mobile phone networks, has
meant networking is becoming increasingly ubiquitous even in mobile computing
environments.
Unconventional computers
Main article: Human computer
See also: Harvard Computers
A computer does not need to be electronic, nor even have a processor, nor RAM, nor
even a hard disk. While popular usage of the word "computer" is synonymous with a
personal electronic computer,[l] a typical modern definition of a computer is: "A device
that computes, especially a programmable [usually] electronic machine that performs
high-speed mathematical or logical operations or that assembles, stores, correlates, or
otherwise processes information."[142] According to this definition, any device
that processes information qualifies as a computer.
Future
There is active research to make unconventional computers out of many promising new
types of technology, such as optical computers, DNA computers, neural computers,
and quantum computers. Most computers are universal, and are able to calculate
any computable function, and are limited only by their memory capacity and operating
speed. However, different designs of computers can give very different performance for
particular problems; for example, quantum computers can potentially break some
modern encryption algorithms (by quantum factoring) very quickly.
Artificial intelligence
A computer will solve problems in exactly the way it is programmed to, without regard to
efficiency, alternative solutions, possible shortcuts, or possible errors in the code.
Computer programs that learn and adapt are part of the emerging field of artificial
intelligence and machine learning. Artificial intelligence based products generally fall
into two major categories: rule-based systems and pattern recognition systems. Rule-
based systems attempt to represent the rules used by human experts and tend to be
expensive to develop. Pattern-based systems use data about a problem to generate
conclusions. Examples of pattern-based systems include voice recognition, font
recognition, translation and the emerging field of on-line marketing.
Professions and organizations
As the use of computers has spread throughout society, there are an increasing number
of careers involving computers.
Computer-related professions
The need for computers to work well together and to be able to exchange information
has spawned the need for many standards organizations, clubs and societies of both a
formal and informal nature.
Organizations
See also
Computability theory
Computer security
Glossary of computer hardware terms
History of computer science
List of computer term etymologies
List of computer system manufacturers
List of fictional computers
List of films about computers
List of pioneers in computer science
Outline of computers
Pulse computation
TOP500 (list of most powerful computers)
Unconventional computing