
Information Technology (IT)

Just as the Internet and the Web have affected all of us, they have affected computer technology as well. Today communication links to the Internet are a common feature of almost all computer systems. Information technology is a modern term that describes this combination of traditional computer and communication technologies.

The purpose of this book is to help you use and understand information technology. This involves two aspects: computer competency and computer knowledge. Computer competency refers to acquiring computer-related skills.

Information technology (IT) is the application of computers and telecommunications equipment to store, retrieve, transmit and manipulate data, often in the context of a business or other enterprise.

The term is commonly used as a synonym for computers and computer networks, but it
also encompasses other information distribution technologies such as television and
telephones. Several industries are associated with information
technology, including computer
hardware, software, electronics, semiconductors, internet, telecommunications
equipment, engineering, healthcare, e-commerce and computer services.

Humans have been storing, retrieving, manipulating and communicating information since
the Sumerians developed writing in about 3000 BC, but the term information technology
in its modern sense first appeared in a 1958 article published in the Harvard Business Review; authors Harold J. Leavitt and Thomas L.
Whisler commented that "the new technology does not yet have a single established
name. We shall call it information technology (IT)." Their definition consists of three
categories: techniques for processing, the application of statistical and mathematical
methods to decision-making, and the simulation of higher-order thinking through computer
programs.

Based on the storage and processing technologies employed, it is possible to distinguish four distinct phases of IT development: pre-mechanical (3000 BC – 1450 AD), mechanical (1450–1840), electromechanical (1840–1940) and electronic (1940–present).[5] This article focuses on the most recent period (electronic), which began in about 1940.

Zuse Z3 replica on display at Deutsches Museum in Munich. The Zuse Z3 is the first
programmable computer.

Devices have been used to aid computation for thousands of years, probably initially in
the form of a tally stick. The Antikythera mechanism, dating from about the beginning of
the first century BC, is generally considered to be the earliest known mechanical analog
computer, and the earliest known geared mechanism. Comparable geared devices did not
emerge in Europe until the 16th century, and it was not until 1645 that the first mechanical
calculator capable of performing the four basic arithmetical operations was developed.

Electronic computers, using either relays or valves, began to appear in the early 1940s.
The electromechanical Zuse Z3, completed in 1941, was the world's first programmable
computer, and by modern standards one of the first machines that could be considered
a complete computing machine. Colossus, developed during the Second World War to decrypt German messages, was the first
electronic digital computer. Although it was programmable, it was not general-purpose,
being designed to perform only a single task. It also lacked the ability to store its program
in memory; programming was carried out using plugs and switches to alter the internal
wiring. The first recognizably modern electronic digital stored-program computer was the
Manchester Small-Scale Experimental Machine (SSEM), which ran its first program on
21 June 1948.

The development of transistors in the late 1940s at Bell Laboratories allowed a new
generation of computers to be designed with greatly reduced power consumption. The
first commercially available stored-program computer, the Ferranti Mark I, contained
4050 valves and had a power consumption of 25 kilowatts. By comparison the first
transistorized computer, developed at the University of Manchester and operational by
November 1953, consumed only 150 watts in its final version.

Computers are used in many fields in our daily life. Engineers, doctors, students, teachers and government organizations all use computers to perform specific tasks, for entertainment or just to finish office work. Computers have made our lives easier. With greater precision and accuracy, a computer can do a lot in a short time, while the same task done manually can take much longer. Computers have taken industries and businesses to a whole new level. They are used at home for work and entertainment purposes, at the office, in hospitals and in government organizations. Here we are going to discuss some of the uses of computers in various fields.

Uses Of Computer At Home

Computers can be used at home in the following ways.


Home Budget

Computers can be used to manage the home budget. You can easily calculate your expenses and income. You can list all expenses in one column and income in another column. Then you can apply any calculation on these columns to plan your home budget. There is also specialized software that can manage your income and expenses and generate some useful reports.
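As an illustration, the column-and-formula idea above can be sketched in a few lines of C; this is a minimal sketch with made-up figures, not tied to any particular budgeting product:

#include <stdio.h>

/* Sum an income column and an expense column, then report the
   balance. All figures are invented sample data. */
int main(void) {
    double income[]  = { 2500.00, 300.00 };        /* salary, side job  */
    double expense[] = { 800.00, 150.00, 95.50 };  /* rent, food, power */
    double total_in = 0.0, total_out = 0.0;

    for (size_t i = 0; i < sizeof income / sizeof income[0]; i++)
        total_in += income[i];
    for (size_t i = 0; i < sizeof expense / sizeof expense[0]; i++)
        total_out += expense[i];

    printf("Income:  %.2f\n", total_in);
    printf("Expense: %.2f\n", total_out);
    printf("Balance: %.2f\n", total_in - total_out);
    return 0;
}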

Computer Games

An important use of computers at home is playing games. Different types of games are available. These games are a source of entertainment and recreation. Many games are available that are specially developed to improve your mental capability and thinking power.

Working From Home

People can manage office work at home. The owner of a company can check the work of the employees from home and control the office while sitting at home.

Entertainment

People can find entertainment on the internet. They can watch movies, listen to songs, watch videos and download different content. They can also watch live matches on the internet.

Information

People can find any type of information on the internet. Educational and informative websites are available for downloading books, tutorials, etc. to improve their knowledge and learn new things.

Chatting & Social Media

People can chat with friends and family on the internet using different software like
Skype etc. One can interact with friends over social media websites like Facebook, Twitter
& Google Plus. They can also share photos and videos with friends.

Uses Of Computers In Education

Computer-based training (CBT) programs are supplied on CD-ROM. These programs include text, graphics and sound. Audio and video lectures are recorded on the CDs. CBT is a low-cost solution for educating people. You can train a large number of people easily.

Benefits Of CBT

Some benefits of CBT are as follows:

1. The students can learn new skills at their own pace. They can easily
acquire knowledge in any available time of their own choice.

2. Training time can be reduced.

3. Training materials are interactive and easy to learn. They encourage students to learn the topic.

4. Planning and timing problems are reduced or eliminated.

5. The skills can be taught at any time and at any place.

6. It is a very cost-effective way to train a large number of students.

7. Training videos and audios are available at affordable prices.

Computer Aided Learning (CAL)

Computer-aided learning is the process of using information technology to help teaching and enhance the learning process. The use of computers can reduce the time that is spent on preparing teaching material. It can also reduce the administrative load of teaching and research. The use of multimedia projectors and PowerPoint presentations has improved the quality of teaching. It has also helped the learning process.

Distance Learning

Distance learning is a new learning methodology. Computers play the key role in this kind of learning. Many institutes provide distance learning programs. The student does not need to come to the institute. The institute provides the reading material, and the student attends a virtual classroom. In a virtual classroom, the teacher delivers the lecture from his own workplace while the student attends it at home by connecting to a network. The student can also ask the teacher questions.

Online Examination

The trend of online examination is becoming popular. Different examinations like GRE, GMAT and SAT are conducted online all over the world. The questions are marked by computer. This minimizes the chance of mistakes. It also enables results to be announced on time.

Online Training Resources

• Lynda.com (for software training and web development and CMS tutorials)

• CBT Nuggets (for certification in networking technologies by Cisco & Microsoft)

• Nettuts+ (for web technologies and web programming languages)

• Byte-Notes (and of course Byte-Notes.com for introductory-level programming tutorials and lecture notes)

Uses Of Computers In Business

The use of computer technology in business provides many facilities. Businessmen use computers to interact with their customers anywhere in the world. Many business tasks are performed more quickly and efficiently. Computers also help them to reduce the overall cost of their business. Computers can be used in business in the following ways.

Marketing

An organization can use computers for marketing its products. Marketing applications provide information about the products to customers. Computers are also used to manage distribution systems, advertising and selling activities. They can also be used in deciding pricing strategies. Companies can learn more about their customers and their needs and requirements.

Stock Exchange

The stock exchange is an important place for businessmen. Many stock exchanges use computers to conduct bids. Stockbrokers perform all trading activities electronically. They connect to a computer system where brokers match buyers with sellers. This reduces cost, as no paper or special building is required to conduct these activities.

Uses Of Computers In Medical Field

Hospital Management System

Specialized hospital management software is used to automate the day-to-day procedures and operations at hospitals. These tasks may include online appointments, payroll, and admittance and discharge records.

Patient History

Hospital management systems can store data about patients. Computers are used to store data about patients, their diseases and symptoms, and the medicines that are prescribed.

Patient Monitoring

Monitoring systems are installed in medical wards and intensive care units to monitor patients continuously. These systems can monitor pulse, blood pressure and body temperature, and can alert medical staff about any serious situation.

Life Support Systems

Specialised devices, such as hearing aids, are used to help impaired patients.

Diagnosis Purpose

A variety of software is used to investigate symptoms and prescribe medication accordingly. Sophisticated systems are used for tests like CT scans, ECGs, and other medical tests.

System software is software designed to provide a platform for other software. Examples of system software include operating systems like macOS, GNU/Linux and Microsoft Windows, computational science software, game engines, industrial automation software, and software as a service applications.

In contrast to system software, software that allows users to do user-oriented tasks such as creating text documents, playing games, listening to music, or browsing the web is collectively referred to as application software.

In the early days of computing most application software was custom-written by computer users to fit their specific hardware and requirements. In contrast, system software was usually supplied by the manufacturer of the computer hardware and was intended to be used by most or all users of that system.

The line where the distinction should be drawn is not always clear. Many operating systems bundle application software. Such software is not considered system software when it can be uninstalled without affecting the functioning of other software. Exceptions include web browsers such as Internet Explorer, where Microsoft argued in court that it was system software that could not be uninstalled. Later examples are Chrome OS and Firefox OS, where the browser functions as the only user interface and the only way to run programs (and other web browsers cannot be installed in their place); there, the browser can well be argued to be (part of) the operating system and hence system software.

Another borderline example is cloud-based software. This software provides services
to a software client (usually a web browser or a JavaScript application running in the
web browser), not to the user directly, and is therefore systems software. It is also
developed using system programming methodologies and systems programming
languages. Yet from the perspective of functionality there is little difference between a word processing application and a word processing web application.

Operating systems or system control program

The operating system (prominent examples being Microsoft Windows, macOS, Linux, and z/OS) allows the parts of a computer to work together by performing tasks like transferring data between memory and disks or rendering output onto a display device. It provides a platform (hardware abstraction layer) to run high-level system software and application software.

A kernel is the core part of the operating system that defines an API for application programs (including some system software) and an interface to device drivers. Device drivers, including the computer's BIOS and device firmware, provide basic functionality to operate and control the hardware connected to or built into the computer.
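For instance, an ordinary application never touches a disk controller directly; it goes through the API the kernel exposes. Here is a minimal C sketch using the POSIX system-call interface (the file name is invented for illustration):

#include <fcntl.h>    /* open */
#include <unistd.h>   /* read, close */
#include <stdio.h>

int main(void) {
    /* The kernel and its drivers talk to the disk hardware; the
       program only uses the open/read/close API. */
    int fd = open("notes.txt", O_RDONLY);   /* hypothetical file */
    if (fd < 0) { perror("open"); return 1; }

    char buf[256];
    ssize_t n = read(fd, buf, sizeof buf);  /* kernel copies data in */
    if (n > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);
    return 0;
}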
A user interface "allows users to interact with a computer", through either a command-line interface (CLI) or, since the 1980s, a graphical user interface (GUI). Since this is the part of the operating system the user directly interacts with, it may be considered an application and therefore not system software.

Utility software or system support programs

For historical reasons, some organizations use the term systems programmer to
describe a job function which is more accurately termed systems administrator.
Software tools these employees use are then called system software. This so-called utility software helps to analyze, configure, optimize and maintain the computer; virus protection is one example. In some publications, the term system software also includes software development tools (like a compiler, linker or debugger).

Systems programming, or system programming, is the activity of programming
computer system software. The primary distinguishing characteristic of systems
programming when compared to application programming is that application
programming aims to produce software which provides services to the user directly
(e.g. word processor), whereas systems programming aims to produce software and
software platforms which provide services to other software, are performance
constrained, or both (e.g. operating systems, computational science applications,
game engines, industrial automation, and software as a service applications).[1]

Systems programming requires a great degree of hardware awareness. Its goal is to achieve efficient use of available resources, either because the software itself is performance critical or because even small efficiency improvements directly transform into significant monetary savings for the service provider (cloud-based word processors).

Overview

The following attributes characterize systems programming:

• The programmer can make assumptions about the hardware and other
properties of the system that the program runs on, and will often exploit
those properties, for example by using an algorithm that is known to be
efficient when used with specific hardware.

• Usually a low-level programming language or programming language dialect is used so that:

o Programs can operate in resource-constrained environments

o Programs can be efficient, with little runtime overhead; they may have a small runtime library, or none at all

o Programs may use direct and "raw" control over memory access and control flow (see the sketch after this list)

o The programmer may write parts of the program directly in assembly language

• Often systems programs cannot be run in a debugger. Running the
program in a simulated environment can sometimes be used to reduce this
problem.
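As a toy illustration of the "raw control over memory" point above, the following C fragment treats a fixed address as a memory-mapped device register, in the style of bare-metal firmware. The address and register are invented for the example; on real hardware they would come from the device's datasheet:

#include <stdint.h>

/* Hypothetical memory-mapped UART data register at a made-up address.
   volatile tells the compiler every access really touches the device. */
#define UART_DATA ((volatile uint8_t *)0x10000000u)

/* Write a string byte by byte straight to the device register,
   with no runtime library and no operating system involved. */
static void uart_puts(const char *s) {
    while (*s)
        *UART_DATA = (uint8_t)*s++;
}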

Systems programming is sufficiently different from application programming that programmers tend to specialize in one or the other.

In systems programming, often only limited programming facilities are available. The use of automatic garbage collection is not common, and debugging is sometimes hard to do. The runtime library, if available at all, is usually far less powerful and does less error checking. Because of those limitations, monitoring and logging are often used; operating systems may have extremely elaborate logging subsystems.

Implementing certain parts of operating systems and networking requires systems programming, for example implementing paging (virtual memory) or a device driver for an operating system.

History

Originally systems programmers invariably wrote in assembly language. Experiments with hardware support in high-level languages in the late 1960s led to such languages as PL/S, BLISS, BCPL, and extended ALGOL for Burroughs large systems. Forth also has applications as a systems language. In the 1970s, C became ubiquitous, aided by the growth of Unix. More recently a subset of C++ called Embedded C++ has seen some use, for instance in the I/O Kit drivers of macOS.

Alternate usage

For historical reasons, some organizations use the term systems programmer to
describe a job function which would be more accurately termed systems administrator.
This is particularly true in organizations whose computer resources have historically
been dominated by mainframes, although the term is even used to describe job
functions which do not involve mainframes. This usage arose because administration
of IBM mainframes often involved the writing of custom assembler code (IBM's Basic
Assembly Language (BAL)), which integrated with the operating system such as
OS/MVS, DOS/VSE or VM/CMS. Indeed, some IBM software products had substantial
code contributions from customer programming staff. This type of programming is progressively less common, but the term systems programmer is still the de facto job title for staff directly
administering IBM mainframes.

A system programming language is a programming language used for system programming; such languages are designed for writing system software, which usually requires different development approaches when compared with application software. Edsger Dijkstra refers to these languages as machine oriented high order languages, or mohol.

General-purpose programming languages tend to focus on generic features to allow programs written in the language to use the same code on different platforms. Examples of such languages include ALGOL and Pascal. This generic quality typically comes at the cost of denying direct access to the machine's internal workings, and this often has negative effects on performance.

System languages, in contrast, are designed not for compatibility, but for performance
and ease of access to the underlying hardware while still providing high-level
programming concepts like structured programming. Examples include Systems
Programming Language (SPL or SPL/3000) and Executive Systems Problem
Oriented Language (ESPOL), both of which are similar to ALGOL in syntax but
tuned to their respective platforms. Others are cross-platform but designed to work
close to the hardware, like JOVIAL and BCPL.

Some languages straddle the system and application domains, bridging the gap
between these uses. The canonical example is C, which is used widely for both system
and application programming. Some modern languages also do this such as Rust and
Swift.

In contrast with application languages, system programming languages typically offer more-direct access to the physical hardware of the machine: an archetypical system programming language in this sense was BCPL. System programming languages often lack built-in input/output (I/O) facilities because a system-software project usually develops its own I/O mechanisms or builds on basic monitor I/O or screen management facilities. The distinction between languages used for system programming and application programming became blurred over time with the widespread popularity of PL/I, C and Pascal.

History

The earliest system software was written in assembly language primarily because
there was no alternative, but also for reasons including efficiency of object code,
compilation time, and ease of debugging. Application languages such as
FORTRAN were used for system programming, although they usually still required
some routines to be written in assembly language.

Mid-level languages

Mid-level languages "have much of the syntax and facilities of a higher level language, but also provide direct access in the language (as well as providing assembly language) to machine features." The earliest of these was ESPOL on Burroughs mainframes in about 1960, followed by Niklaus Wirth's PL360 (first written on a Burroughs system as a cross compiler), which had the general syntax of ALGOL 60 but whose statements directly manipulated CPU registers and memory. Other languages in this category include MOL-360 and PL/S.

As an example, a typical PL360 statement is R9 := R8 and R7 shl 8 or R6, signifying that registers 8 and 7 should be anded together, the result shifted left 8 bits, the result of that ored with the contents of register 6, and the final result placed into register 9.
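For readers more familiar with C, the same computation can be written as an ordinary expression; this is a paraphrase for illustration, not actual PL360 compiler output:

#include <stdint.h>

/* C equivalent of the PL360 statement above: and registers 8 and 7,
   shift the result left 8 bits, or in register 6, store in register 9. */
uint32_t pl360_example(uint32_t r8, uint32_t r7, uint32_t r6) {
    uint32_t r9 = ((r8 & r7) << 8) | r6;
    return r9;
}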

Higher-level languages

While PL360 is at the semantic level of assembly language, another kind of system
programming language operates at a higher semantic level, but has specific
extensions designed to make the language suitable for system programming. An early
example of this kind of language is LRLTRAN, which extended Fortran with features
for character and bit manipulation, pointers, and directly addressed jump tables.

Subsequently, languages such as C were developed, where the combination of features was sufficient to write system software, and a compiler could be developed that generated efficient object programs on modest hardware. Such a language generally omits features that cannot be implemented efficiently, and adds a small number of machine-dependent features needed to access specific hardware abilities; inline assembly code, such as C's asm statement, is often used for this purpose. Although many such languages were developed, C and C++ are the ones which survived.
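As a small illustration of that asm escape hatch, here is a sketch using GCC's extended inline assembly on x86-64; the syntax is compiler- and architecture-specific, and the function is deliberately equivalent to plain addition:

#include <stdint.h>

/* Add two integers with an explicit x86-64 instruction via GCC's
   extended asm; "+r" marks a read-write register operand. */
static inline uint64_t add_u64(uint64_t a, uint64_t b) {
    uint64_t result = a;
    __asm__ ("addq %1, %0"      /* result += b */
             : "+r"(result)
             : "r"(b));
    return result;
}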

System Programming Language (SPL) is also the name of a specific language on the HP 3000 computer series, used for its operating system HP Multi-Programming Executive (MPE), and other parts of its system software.

The Xbox 360 system software or the Xbox 360 Dashboard is the updateable software
and operating system for the Xbox 360. It formerly resided in a 16 MB file
system. However, starting with the NXE Update, more storage became a requirement,
rectified by either having a Hard Drive installed, or one of the later revisions of the
console with adequate flash storage embedded within the console. The system
software has access to a maximum of 32 MB of the system's Random Access
Memory.[8] The updates can be downloaded from the Xbox Live service directly
to the Xbox 360 and subsequently installed. Microsoft has also provided the ability
to download system software updates from their respective official Xbox website to
their PCs and then storage media, from which the update can be installed to the
system.

The Xbox 360 game system allows users to download applications that add to the
functionality of the dashboard. Most apps required the user to be signed into a valid
Xbox Live Gold account in order to use the features advertised for the given app. But
as of the 2.0.16756.0 update, most apps do not require an Xbox Live Gold Subscription
to access them, although the app may have its own subscription to be able to use
it.[9][10] With the exception of a few early apps, Microsoft has added partners to
develop apps for the Xbox 360 system since the New Xbox Experience (NXE)
Dashboard update in 2008.[11]

Following the success of the Xbox One preview program launched in 2014, Microsoft announced the Xbox 360 preview program to the public in March 2015.

Microsoft released the Xbox 360 console on November 22, 2005, a whole year earlier
than both the Sony PlayStation 3 and Nintendo Wii. Having the advantage of the lead,
Microsoft was able to experiment with various customization options for the
consumer's individual consoles. The ability to customize the way the console looked
with various themes to fit the front and sides of it was something very different for
home console users. In the system, the Xbox 360 Dashboard supported multiple password-protected profiles on the same console, with each user being able to customize the dashboard to exactly fit their own unique style. There were premium themes available for purchase on the Xbox Live Marketplace apart from the default styles. Originally there were five tabs or sections known as the "blades" for the Xbox 360 menu, namely the Marketplace, Xbox Live, Games,
Media and System. In scrolling from left to right, each section would have a different-
colored background signifying its own unique area but users also had the option to
change all sections to one background color as well. In 2008, however, when the gaming scene changed dramatically because of competition with the PlayStation 3 and the Wii, a new Xbox Dashboard titled the New Xbox Experience (NXE) was launched, which featured major changes in both the user interface and other functionalities. The new user interface had a navigation system similar to that of Windows Media Center.

Multimedia features

While the Xbox 360 console is primarily designed to play games just like other video
game consoles, it can be used as a media player too. Similar to the PlayStation 3
from Sony, Xbox 360 has media center capabilities built in, so it is relatively easy to
set up. With the Xbox 360 users can also copy videos directly to the hard drive, or play
via a USB stick. There are two ways to watch videos on Xbox 360. The first is to
download videos from the Xbox Live Marketplace. Some of these videos are available
for free while others have to be paid for. Microsoft is in control of what videos are available through the Xbox Live Marketplace. The second is to stream videos from a Windows Media Center PC by using Xbox 360 as a Media Center Extender. In this way users are in control of what videos they want to watch; however, there are restrictions on what kind of video they can play back. More specifically, it only supports playback of DVR-MS, MPEG-1, MPEG-2 and WMV videos.[16] Every Xbox 360 can play DVD
movies out of the box using the built-in DVD drive, with no additional parts necessary,
although the user may control everything with an optional remote. There are other
improvements to the experience on the Xbox 360 over the original Xbox too, including
the ability to upscale the image so it will look better. Progressive scan is another
feature of the DVD output in the Xbox 360 that produces smoother output when playing
movies on televisions that support high definition, although using a dedicated DVD
player would offer even more features and sound quality.

Backward compatibility

See also: List of Xbox games compatible with Xbox 360

The Xbox 360 system software includes built-in software emulation support for the original Xbox game system. Software emulation is achieved with downloadable emulation profiles, which require a hard drive. Not all original Xbox games are supported; the last reported update to the compatibility list was in 2007, and support for adding new titles has since been discontinued. There are more than 400 titles on the list, which covers most of the big-name titles, and as a requirement for backwards compatibility users have to have a hard drive for their Xbox 360, specifically an official Microsoft-brand Xbox 360 hard drive. In contrast, Xbox 360's successor, the Xbox One console, was not backward compatible at launch, but after applying the November 2015 "New Xbox One Experience" system update it also supports a select group of Xbox 360 games using a software emulator, similar to Xbox 360's backward compatibility feature. However, there are also notable differences between their approaches to emulation: unlike Xbox 360's emulation of the original Xbox, in Xbox One's emulation the Xbox 360 games do not have to be specifically patched but instead need to be repackaged in the Xbox One format.

Xbox Live Preview Program

Starting with the NXE Dashboard in November 2008, Larry Hryb (known on Xbox Live
as "Major Nelson") and other team members hosted a new segment using Microsoft
Connect to allow members of the Xbox Live community to get a chance to have a
preview of the next dashboard. Small bug fixes & minor improvements were not
included in the Preview Program; it was limited to major releases (NXE, Kinect, Metro)
released in November of some years. In 2009, the Preview Program returned in August
rather than November for a summer update.

The Preview Program incentive enrollment started in either March/April or August/September, depending on when a major dashboard release was being planned. Applicants were notified no later than 2 weeks after the enrollment date, with
the new Preview Program dashboard launching on their consoles as early as 24 hours
after acceptance. The Preview program is by invitation only. Users can be invited by
their friends by using the Invite friends page in the Xbox Preview Dashboard app if
their friends are already in the program. The Xbox Preview Dashboard app is the place for Preview participants to give feedback about
the program, get the latest news, change console enrollment settings, and report
problems. If users decided that they didn't want to get Preview updates anymore, they could opt out in the Xbox Preview Dashboard app. Details for the Preview Program were located on the official Xbox website.

The Xbox Live Preview Program for the Xbox 360 has since been discontinued.

History of updates

The first version of the Xbox 360 system software was 2.0.1888.0, released on
November 22, 2005, as shipped in the original Xbox 360 consoles, although the
version numbered "2.0" was available at product launch. Over the course of next a few
years saw the continuous updates of the system software. While early updates such
as version 2.0.4532.0 released on October 31, 2006 added supports for 1080p video
output and the external HD DVD drive attachment, version 2.0.7357.0 released on
November 19, 2008 was the first major upgrade of the system software, titled the New
Xbox Experience that had added many new features, [22] including a completely
redesigned GUI. It included changes in the menu system, featuring a more 3D style
vibe with more options and sections, new sound effects (menus only, notification
sounds remain the same), support for 1440×900 and 1680×1050 16:10
resolutions (letterboxed) over VGA, HDMI and when using DVI, as well as the
abilities to preview themes before setting them, to disable notifications (new
messages, chat requests, etc.) or mute the notification sound, and to change to a QWERTY keyboard in place of the alphabetical keyboard.

Subsequent system software updates after this major upgrade continued to add
(although usually numerically smaller) new features or make other changes, including
bugfixes. An example of the new features introduced in version 2.0.8498.0 released
on August 11, 2009, was the addition of Display Discovery, allowing the console to override
factory settings for HDTV resolutions and refresh rates as well as discovering the best
possible resolution and refresh rates that the HDTV is capable of displaying (Selected
HDTVs). Version 2.0.12611.0 released on November 1, 2010 also added features
such as the ability to install game updates to the HDD (select games only) and a visual
refresh to incorporate elements of Microsoft's Metro design style. It also featured a
new boot screen animation with a redesigned Xbox 360 orb and ribbons. A new anti-piracy 2.5 scheme for newly released games was also added in this version, later updated to anti-piracy 2.6 in version 2.0.13599.0, released on July 19, 2011. Version
2.0.14699.0 released on December 6, 2011 introduced a redesigned interface and a
fresh new take on a platform that has had more than half a decade of changes and
enhancements.[14] The releases after version 2.0.16197.0, released October 16, 2012, were typically minor, usually bugfixes or mandatory updates that prepared for subsequent growth of the service, but the system software continued to be updated. On June 15, 2020, the advertisements included in the Xbox 360 software were removed via a server-side update without notice.

See also

• Xbox 360 applications, non-game software applications designed to run on the Xbox 360 platform

• Windows Phone, a family of mobile operating systems developed by Microsoft for smartphones as the replacement successor to Windows Mobile and Zune

Other gaming platforms from Microsoft

• Xbox One system software, the operating system for the eighth-generation
home video game console, Xbox One

Other gaming platforms from the next generation:

• Nintendo 3DS system software, a set of updatable firmware versions and software frontend on the Nintendo 3DS family of video game consoles

• PlayStation 4 system software, the updatable firmware and operating system of the PlayStation 4

• PlayStation Vita system software, the official, updatable firmware and operating system for the PlayStation Vita and PlayStation TV (known in Asia as PlayStation Vita TV)

• Wii U system software, the official firmware version and operating system for Nintendo's Wii U game console

• Nintendo Switch system software, the official firmware version and operating system for the Nintendo Switch gaming console

Other gaming platforms from this generation:

• Nintendo DSi system software, a set of updatable firmware versions, and a software frontend on the Nintendo DSi (including its XL variant) video game console

• PlayStation 3 system software, the updatable firmware and operating system of the PlayStation 3

• PlayStation Portable system software, the official firmware for the PlayStation Portable

• Wii system software, a set of updatable firmware versions, and a software frontend on the Wii video game console

The Wii system software is a discontinued set of updatable firmware versions and a software frontend on the Wii home video game console. Updates, which can be downloaded over the Internet or read from a game disc, allowed Nintendo to add additional features and software, as well as to patch security vulnerabilities used by users to load homebrew software. When a new update became available, Nintendo sent a message to the Wii Message Board of Internet-connected systems notifying them of the available update.

Most game discs, including first-party and third-party games, include system software
updates so that systems that are not connected to the Internet can still receive
updates. The system menu will not start such games if their updates have not been
installed, so this has the consequence of forcing users to install updates in order to
play these games. Some games, such as online games like Super Smash Bros.
Brawl and Mario Kart Wii, contain specific extra updates, such as the ability to receive
Wii Message Board posts from game-specific addresses; therefore, these games
always require that an update be installed before their first time running on a given
console.

Technology: IOS

Not to be confused with Apple iOS or Cisco IOS.

The Wii's firmware is in the form of IOSes, thought by Wii homebrew developers to stand for "Input Output Systems" or "Internal Operating Systems". IOS runs on a separate ARM926EJ-S processor, unofficially named Starlet. The patent for the Wii U shows a similar device which is simply named "Input/Output Processor". IOS controls I/O between the code running on the main Broadway processor and the various Wii hardware that does not also exist on the GameCube.

Except for bug fixes, new IOS versions do not replace existing IOS versions. Instead,
Wii consoles have multiple IOS versions installed. All native Wii software (including
games distributed on Nintendo optical discs, the System Menu itself, Virtual Console
games, WiiWare, and Wii Channels), with the exception of certain homebrew
applications, have the IOS version hardcoded into the software.

When the software is run, the IOS that is hardcoded gets loaded by the Wii, which then
loads the software itself. If that IOS does not exist on the Wii, in the case of disc-based
software, it gets installed automatically (after the user is prompted). With downloaded
software, this should not theoretically happen, as the user cannot access the shop to
download software unless the player has all the IOS versions that they require.
However, if homebrew is used to forcefully install or run a piece of software when the
required IOS does not exist, the user is brought back to the system menu.

Nintendo created this system so that new updates would not unintentionally break
compatibility with older games, but it does have the side effect that it uses up space
on the Wii's internal NAND Flash memory. IOSes are referred to by their number,
which can theoretically be between 0 and 254, although many numbers are skipped,
presumably being development versions that were never completed.

Only one IOS version can run at any given time. The only time an IOS is not running
is when the Wii enters GameCube backward compatibility mode, during which the Wii
runs a variant of IOS specifically for GameCube games, MIOS, which contains a
modified version of the GameCube's IPL.

User interface

The system provides a graphical interface to the Wii's abilities. All games run directly on the Broadway processor, and either directly interface with the hardware (for the hardware common to the Wii and GameCube) or interface with IOS running on the ARM architecture processor (for Wii-specific hardware). The ARM processor does not have access to the screen, and therefore neither does IOS. This means that while a piece of software is running, everything
seen on the screen comes from that software, and not from any operating system or
firmware. Therefore, the version number reported by the Wii is actually only the version
number of the System Menu. This is why some updates do not result in a change of
the version number: the System Menu itself is not updated, only (for example) IOSes
and channels. As a side effect, this means it is impossible for Nintendo to implement
any functions that would affect the games themselves, for example an in-game system
menu (similar to the Xbox 360's in-game Dashboard or the PlayStation 3's in-game
XMB).

The Wii Menu (known internally as the System Menu) is the name of the user interface for the Wii game console, and it is the first thing to be seen when the system boots up. Similar to many other video game consoles, the Wii is not only about games. For example, it is possible to install applications such as Netflix to stream media (without requiring a disc) on the Wii. The Wii Menu lets users access both game and non-game functions through built-in applications called Channels, which are designed to represent television channels. There are six primary channels: the Disc Channel, Mii Channel, Photo Channel, Wii Shop Channel, Forecast Channel and News Channel, although the latter two were not initially included and only became available via system updates. Some of the functions provided by these Channels, such as a full-featured web browser and a digital photo viewer, used to be limited to a computer; they can be navigated using the pointer capability of the Wii Remote.[10] Users can also rearrange these Channels if they are not satisfied with how the Channels are originally organized on the menu.

Network features

The Wii system supports wireless connectivity with the Nintendo DS handheld
console with no additional accessories. This connectivity allows players to use the
Nintendo DS microphone and touch screen as inputs for Wii games. Pokémon Battle
Revolution is the first example Nintendo has given of a game using Nintendo DS-Wii
connectivity. Nintendo later released the Nintendo Channel for
the Wii allowing its users to download game demos or additional data to their Nintendo
DS.

Like many other video game consoles, the Wii console is able to connect to the
Internet, although this is not required for the Wii system itself to function. Each Wii has
its own unique 16-digit Wii Code for use with Wii's non-game features. With Internet
connection enabled users are able to access the established Nintendo Wi-Fi
Connection service. Wireless encryption by WEP, WPA (TKIP/RC4) and WPA2 (CCMP/AES) is supported.[12]
AOSS support was added in System Menu version 3.0. As with the Nintendo DS,
Nintendo does not charge for playing via the service; the 12-digit Friend Code system
controls how players connect to one another. The service has a few features
for the console, including the Virtual Console, WiiConnect24 and several
Channels. The Wii console can also communicate and connect with other Wii systems
through a self-generated wireless LAN, enabling local wireless multiplayer on different
television sets. The system also implements console-based software, including the Wii
Message Board. One can connect to the Internet with third-party devices as well.

The Wii console also includes a web browser known as the Internet Channel, which is a version of the Opera 9 browser with menus. It is meant to be a convenient way to access the web on the television screen, although it is far from offering a comfortable user interface compared with modern Internet browsers. A virtual keyboard pops up when needed for input, and the Wii Remote acts like a mouse, making it possible to click anywhere on the screen and navigate through web links. However, the browser cannot always handle all the features of most normal web pages, although it does support Adobe Flash and is thus capable of playing Flash games. The Internet Channel and system updates may be restricted by the parental controls.

Backward compatibility

The original designs of the Nintendo Wii console, more specifically the Wii models made pre-2011, were fully backward compatible with GameCube devices, including game
discs, memory cards and controllers. This was because the Wii hardware had ports
for both GameCube memory cards, and peripherals and its slot-loading drive was able
to accept and read the previous console's discs. GameCube games work with the Wii
without any additional configuration, but a GameCube controller is required to play
GameCube titles; neither the Wii Remote nor the Classic Controller functions in this capacity. The Wii supports progressive-scan output in 480p-enabled GameCube titles. Peripherals can be connected via a set of four GameCube controller sockets and two Memory Card slots (concealed by removable flip-open panels). The console
retains connectivity with the Game Boy Advance and e-Reader through the Game Boy
Advance Cable, which is used in the same manner as with the GameCube; however,
this feature can only be accessed on select GameCube titles which previously utilized
it.

There are also a few limitations in the backward compatibility. For example, online
and LAN features of certain GameCube games were not available since the Wii
does not have serial ports for the Nintendo GameCube Broadband Adapter and
Modem Adapter. The Wii uses a proprietary port for video output, and is
incompatible with all Nintendo GameCube audio/video cables (composite video,
S-Video, component video and RGB SCART). The console also lacks the
GameCube footprint and high-speed port needed for Game Boy Player support.

Furthermore, only GameCube functions were available, and only compatible memory cards and controllers could be used when playing a GameCube game. This is because the Wii's internal memory would not save GameCube data.

Application software (app for short) is a program or group of programs designed for
end users. Examples of an application include a word processor, a
spreadsheet, an accounting application, a web browser, an email client, a
media player, a file viewer, simulators, a console game or a photo editor. The
collective noun application software refers to all applications collectively. This
contrasts with system software, which is mainly involved with running the
computer.

Applications may be bundled with the computer and its system software or
published separately, and may be coded as proprietary, open-source or university
projects.[2] Apps built for mobile platforms are called mobile apps.

Terminology

In information technology, an application (app), application program or application software is a computer program designed to help people perform an
activity. Depending on the activity for which it was designed, an application can
manipulate text, numbers, audio, graphics and a combination of these elements.
Some application packages focus on a single task, such as word processing; others, called integrated software, include several applications.

User-written software tailors systems to meet the user's specific needs. User-
written software includes spreadsheet templates, word processor macros,
scientific simulations, audio, graphics, and animation scripts. Even email filters
are a kind of user software. Users create this software themselves and often
overlook how important it is.

The delineation between system software such as operating systems and
application software is not exact, however, and is occasionally the object of
controversy. For example, one of the key questions in the United States v. Microsoft
Corp. antitrust trial was whether Microsoft's Internet Explorer web browser was
part of its Windows operating system or a separable piece of application software.
As another example, the GNU/Linux naming controversy is, in part, due to
disagreement about the relationship between the Linux kernel and the operating
systems built over this kernel. In some types of embedded systems, the application
software and the operating system software may be indistinguishable to the user,
as in the case of software used to control a VCR, DVD player or microwave oven.
The above definitions may exclude some applications that may exist on some
computers in large organizations. For an alternative definition of an app: see
Application Portfolio Management.

Metonymy

The word "application" used as an adjective is not restricted to the "of or pertaining
to application software" meaning.] For example, concepts such as application
programming interface (API), application server, application virtualization,
application lifecycle management and portable application apply to all computer
programs alike, not just application software.

Apps and killer apps

Main article: Killer application

Some applications are available in versions for several different platforms; others
only work on one and are thus called, for example, a
geography application for Microsoft Windows, or an Android application for
education, or a Linux game. Sometimes a new and popular application arises which
only runs on one platform, increasing the desirability of that platform. This is
called a killer application or killer app. For example, VisiCalc was the first modern spreadsheet software for the Apple II and helped sell the then-new personal computers into offices. For BlackBerry it was their email software.

In recent years, the shortened term "app" (coined in 1981 or earlier) has become
popular to refer to applications for mobile devices such as smart phones
and tablets, the shortened form matching their typically smaller scope compared
to applications on PCs. Even more recently, the shortened version is used for desktop application software as well.

Classification

There are many different and alternative ways to classify application software.

From the legal point of view, application software is mainly classified with a black-box approach, in relation to the rights of its final end-users or subscribers (with eventual intermediate and tiered subscription levels).

Software applications are also classified in respect of the programming language in which the source code is written or executed, and in respect of their purpose and outputs.

By property and use rights

Application software is usually distinguished among two main classes: closed source vs open source software applications, and among free or proprietary software applications.

Proprietary software is placed under the exclusive copyright, and a software license grants limited usage rights. The open-closed principle states that software may be "open only for extension, but not for modification". Such applications can only get add-ons from third parties.

Free and open-source software shall be run, distributed, sold or extended for any purpose, and, being open, shall be modified or reversed in the same way.

FOSS software applications released under a free license may be perpetual and also royalty-free. However, the owner, the holder or a third-party enforcer of any right (copyright, trademark, patent, or ius in re aliena) is entitled to add exceptions, limitations, time decays or expiring dates to the license terms of use.

By coding language

Since the development and near-universal adoption of the web, an important distinction that has emerged has been between web applications, written with HTML, JavaScript and other web-native technologies and typically requiring one to be online and running a web browser, and the more traditional native applications written in whatever languages are available for one's particular type of computer. There has been a contentious debate in the computing community regarding web applications replacing native applications for many purposes, especially on mobile devices such as smartphones and tablets. Web apps have indeed greatly increased in popularity for some uses, but the advantages of native applications make them unlikely to disappear soon, if ever. Furthermore, the two can be complementary, and even integrated.

By purpose and output

Application software can also be seen as being either horizontal or vertical. Horizontal applications are more popular and widespread because they are general purpose, for example word processors or databases. Vertical applications are niche products, designed for a particular type of industry or business, or a department within an organization. Integrated suites of software will try to handle every specific aspect possible of, for example, manufacturing or banking workers, or accounting, or customer service.

There are many types of application software:

LibreOffice Writer, an open-source word processor that is a component of LibreOffice (running on Linux Mint)
• An application suite consists of multiple applications bundled together. They
usually have related functions, features and user interfaces, and may be
able to interact with each other, e.g. open each other's files. Business
applications often come in suites, e.g. Microsoft Office,
LibreOffice and iWork, which bundle together a word processor, a
spreadsheet, etc.; but suites exist for other purposes, e.g. graphics or music.
• Enterprise software addresses the needs of an entire organization's processes
and data flows, across several departments, often in a large distributed
environment. Examples include enterprise resource planning systems,
customer relationship management (CRM) systems and supply chain
management software. Departmental software is a subtype of enterprise software with a focus on smaller organizations or groups within a large organization. (Examples include travel expense management and IT helpdesk.)
• Enterprise infrastructure software provides common capabilities needed to
support enterprise software systems. (Examples include databases, email
servers, and systems for managing networks and security.)
• Application platform as a service (aPaaS) is a cloud computing service that offers
development and deployment environments for application services.
• Information worker software lets users create and manage information, often
for individual projects within a department, in contrast to enterprise
management. Examples include time management, resource management,
analytical, collaborative and documentation tools. Word processors,
spreadsheets, email and blog clients, personal information system, and
individual media editors may aid in multiple information worker tasks.
• Content access software is used primarily to access content without editing,
but may include software that allows for content editing. Such software
addresses the needs of individuals and groups to consume digital
entertainment and published digital content. (Examples include media
players, web browsers, and help browsers.)
• Educational software is related to content access software, but has the content or features adapted for use by educators or students. For example, it may deliver evaluations (tests), track progress through material, or include collaborative capabilities.
• Simulation software simulates physical or abstract systems for either
research, training or entertainment purposes.
• Media development software generates print and electronic media for others
to consume, most often in a commercial or educational setting. This
includes graphic-art software, desktop publishing software, multimedia
development software, HTML editors, digital-animation editors, digital
audio and video composition, and many others.

• Product engineering software is used in developing hardware and software
products. This includes computer-aided design (CAD), computer- aided
engineering (CAE), computer language editing and compiling tools,
integrated development environments, and application programmer
interfaces.
• Entertainment Software can refer to video games, screen savers, programs to
display motion pictures or play recorded music, and other forms of
entertainment which can be experienced through use of a computing
device.

Applications can also be classified by computing platform, such as a particular operating system; by delivery network, such as in cloud computing and Web 2.0 applications; or by delivery device, such as mobile apps for mobile devices.

The operating system itself can be considered application software when performing simple calculating, measuring, rendering, and word processing tasks not used to control hardware via a command-line interface or graphical user interface. This does not include application software bundled within operating systems such as a software calculator or text editor.

Accounting software describes a type of application software that records and processes accounting transactions within functional modules such as accounts payable, accounts receivable, journal, general ledger, payroll, and trial balance. It functions as an accounting information system. It may be developed in-house by the organization using it, may be purchased from a third party, or may be a combination of a third-party application software package with local modifications. Accounting software may be online, accessed anywhere at any time with any device which is Internet enabled, or may be desktop based. It varies greatly in its complexity and cost.

The market has been undergoing considerable consolidation since the mid-1990s,
with many suppliers ceasing to trade or being bought by larger groups.

A general ledger is a bookkeeping ledger that serves as a central repository for
accounting data transferred from all subledgers, such as accounts payable,
accounts receivable, cash management, fixed assets, purchasing and projects.
Each account maintained by an organization is known as a ledger account, and
the collection of all these accounts is known as the general ledger. The general
ledger is the backbone of any accounting system that holds financial and
non-financial data for an organization.

An organization's statement of financial position and statement of income and
comprehensive income are both derived from the general ledger. Each account in
the general ledger consists of one or more pages. The general ledger is where
posting to the accounts occurs. Posting is the process of recording amounts as
credits (right side) and as debits (left side) in the pages of the general ledger.
Additional columns to the right hold a running activity total (similar to a
chequebook).

The listing of the account names is called the chart of accounts. The extraction of
account balances is called a trial balance. The purpose of the trial balance is, at a
preliminary stage of the financial statement preparation process, to ensure the
equality of the total debits and credits.

A spreadsheet is a computer application for the organization, analysis and
storage of data in tabular form. Spreadsheets were developed as computerized
analogs of paper accounting worksheets. The program operates on data entered
in the cells of a table. Each cell may contain either numeric or text data, or the
results of formulas that automatically calculate and display a value based on the
contents of other cells. A spreadsheet may also refer to one such electronic
document.

Spreadsheet users can adjust any stored value and observe the effects on calculated
values. This makes the spreadsheet useful for "what-if" analysis since many cases
can be rapidly investigated without manual recalculation. Modern spreadsheet
software can have multiple interacting sheets, and can display data either as text
and numerals, or in graphical form.
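The automatic recalculation described above can be sketched in a few lines of Python. This is a toy model for illustration only, not how production spreadsheet engines are implemented; the cell names and the price/quantity example are invented.

# Toy spreadsheet: cells hold values or formulas (functions of other
# cells); reading a cell recomputes it, so changing one input updates
# every dependent value -- the essence of "what-if" analysis.
cells = {
    "A1": 100,                                 # unit price
    "A2": 3,                                   # quantity
    "A3": lambda get: get("A1") * get("A2"),   # total = price * qty
}

def get(name):
    v = cells[name]
    return v(get) if callable(v) else v

print(get("A3"))   # 300
cells["A2"] = 5    # what-if: change the quantity...
print(get("A3"))   # 500 -- the dependent cell updates automatically

Real spreadsheet engines track a dependency graph and recalculate only affected cells; this sketch simply recomputes on every read for clarity.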

Besides performing basic arithmetic and mathematical functions, modern
spreadsheets provide built-in functions for common financial and statistical
operations. Calculations such as net present value or standard deviation can be
applied to tabular data with a pre-programmed function in a formula.
Spreadsheet programs also provide conditional expressions, functions to convert
between text and numbers, and functions that operate on strings of text.
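As a rough illustration of such pre-programmed functions, the Python snippet below applies a net-present-value formula and a standard deviation to one column of tabular data. The cash flows and the 8% rate are invented, and this NPV convention discounts from period 0:

# Pre-programmed functions applied to a column of tabular data, as a
# spreadsheet formula would. Values here are hypothetical.
from statistics import stdev

def npv(rate, cashflows):
    # NPV = sum of cashflow_t / (1 + rate)**t for t = 0, 1, 2, ...
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

column = [-1000, 400, 400, 400]        # one column of a table
print(round(npv(0.08, column), 2))     # 30.84
print(round(stdev(column), 2))         # 700.0, dispersion of the column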

Spreadsheets have replaced paper-based systems throughout the business world.
Although they were first developed for accounting and bookkeeping tasks, they
are now used extensively in any context where tabular lists are built, sorted,
and shared.

LANPAR, available in 1969, was the first electronic spreadsheet on mainframe and
time-sharing computers. LANPAR was an acronym: LANguage for Programming
Arrays at Random. VisiCalc was the first electronic spreadsheet on a
microcomputer, and it helped turn the Apple II computer into a popular and
widely used system. Lotus 1-2-3 was the leading spreadsheet when DOS was the
dominant operating system. Excel now has the largest market share on the
Windows and Macintosh platforms. A spreadsheet program is a standard feature
of an office productivity suite; since the advent of web apps, office suites now also
exist in web app form. Web-based spreadsheets are a relatively new category.

Lotus 1-2-3 and other MS-DOS spreadsheets

The acceptance of the IBM PC following its introduction in August 1981 began
slowly, because most of the programs available for it were translations from other
computer models. Things changed dramatically with the introduction of Lotus
1-2-3 in November 1982 and its release for sale in January 1983. Since it was
written especially for the IBM PC, it had good performance and became the killer
app for that PC. Lotus 1-2-3 drove sales of the PC due to its improvements in
speed and graphics compared to VisiCalc on the Apple II.

Lotus 1-2-3, along with its competitor Borland Quattro, soon displaced VisiCalc.
Lotus 1-2-3 was released on January 26, 1983, started outselling the then most
popular VisiCalc the very same year, and for a number of years was the leading
spreadsheet for DOS.

Microsoft Excel

Microsoft released the first version of Excel for the Macintosh on September 30,
1985, and then ported it to Windows, with the first version numbered 2.05
(to synchronize with the Macintosh version 2.2) and released in November 1987.
The Windows 3.x platforms of the early 1990s made it possible for Excel to take
market share from Lotus. By the time Lotus responded with usable Windows
products, Microsoft had begun to assemble its Office suite. By 1995, Excel was
the market leader, edging out Lotus 1-2-3, and in 2013 IBM discontinued Lotus
1-2-3 altogether.

Web-based spreadsheets

Main article: List of online spreadsheets

With the advent of advanced web technologies such as Ajax circa 2005, a new
generation of online spreadsheets has emerged. Equipped with a rich Internet
application user experience, the best web-based online spreadsheets have many of
the features seen in desktop spreadsheet applications. Some of them, such as
EditGrid, Google Sheets, Microsoft Excel Online, Smartsheet, or Zoho Sheet, also
have strong multi-user collaboration features or offer real-time updates from
remote sources such as stock prices and currency exchange rates.

Other spreadsheets

Gnumeric is a free, cross-platform spreadsheet program that is part of the
GNOME Free Software Desktop Project. OpenOffice.org Calc and the closely
related LibreOffice Calc (using the LGPL license) are free and open-source
spreadsheets.

A database is an organized collection of data, generally stored and accessed
electronically from a computer system. Where databases are more complex, they
are often developed using formal design and modeling techniques.

The database management system (DBMS) is the software that interacts with end
users, applications, and the database itself to capture and analyze the data. The
DBMS software additionally encompasses the core facilities provided to
administer the database. The sum total of the database, the DBMS and the
associated applications can be referred to as a "database system". Often the term
"database" is also used to loosely refer to any of the DBMS, the database system or
an application associated with the database.

Computer scientists may classify database management systems according to the
database models that they support. Relational databases became dominant in the
1980s. These model data as rows and columns in a series of tables, and the vast
majority use SQL for writing and querying data. In the 2000s, non-relational
databases became popular, collectively referred to as NoSQL because they use
different query languages.
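A minimal sketch of the rows-and-columns model and a declarative SQL query, using Python's built-in sqlite3 module (the employee table and its contents are hypothetical):

# Rows-and-columns model with SQL, via the standard sqlite3 module.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")
con.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                [(1, "Ada", "Engineering"), (2, "Grace", "Research")])

# Declarative query: state *what* rows are wanted, not how to find them.
for row in con.execute("SELECT name FROM employee WHERE dept = ?",
                       ("Engineering",)):
    print(row)   # ('Ada',)

The query names the rows wanted and leaves the access strategy entirely to the DBMS, which is the essence of the relational approach.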

1960s, navigational DBMS

Basic structure of navigational CODASYL database model

The introduction of the term database coincided with the availability of
direct-access storage (disks and drums) from the mid-1960s onwards. The term
represented a contrast with the tape-based systems of the past, allowing shared
interactive use rather than daily batch processing. The Oxford English Dictionary
cites a 1962 report by the System Development Corporation of California as the
first to use the term "data-base" in a specific technical sense.

1970s, relational DBMS

Edgar F. Codd worked at IBM in San Jose, California, in one of their offshoot
offices that was primarily involved in the development of hard disk systems. He
was unhappy with the navigational model of the CODASYL approach, notably the
lack of a "search" facility. In 1970, he wrote a number of papers that outlined a new
approach to database construction that eventually culminated in the
groundbreaking A Relational Model of Data for Large Shared Data Banks.

In this paper, he described a new system for storing and working with large
databases. Instead of records being stored in some sort of linked list of free-form
records as in CODASYL, Codd's idea was to organise the data as a number of
"tables", each table being used for a different type of entity. Each table would
contain a fixed number of columns containing the attributes of the entity.

Late 1970s, SQL DBMS

IBM started working on a prototype system loosely based on Codd's concepts as
System R in the early 1970s. The first version was ready in 1974/5, and work then
started on multi-table systems in which the data could be split so that all of the
data for a record (some of which is optional) did not have to be stored in a single
large "chunk". Subsequent multi-user versions were tested by customers in 1978
and 1979, by which time a standardized query language, SQL, had been added.
Codd's ideas were establishing themselves as both workable and superior to
CODASYL, pushing IBM to develop a true production version of System R, known
as SQL/DS and, later, Database 2 (DB2).

Document automation (also known as document assembly) is the design of systems
and workflows that assist in the creation of electronic documents. These include
logic-based systems that use segments of pre-existing text and/or data to
assemble a new document. This process is increasingly used within certain
industries to assemble legal documents, contracts and letters. Document
automation systems can also be used to automate all conditional text, variable
text, and data contained within a set of documents.

Automation systems allow companies to minimize data entry, reduce the time
spent proofreading, and reduce the risks associated with human error. Additional
benefits include time and financial savings from decreased paper handling,
document loading, storage, distribution, postage/shipping, faxes, telephone calls,
labor and waste.

Document assembly

The basic function is to replace the cumbersome manual filling-in of repetitive
documents with template-based systems, where the user answers software-driven
interview questions or completes a data entry screen. The information collected
then populates the document to form a good first draft. Today's more advanced
document automation systems allow users to create their own data and rules
(logic) without the need for programming.
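The template-plus-interview idea can be illustrated with Python's standard string.Template. The letter text, field names, and answers below are invented, and real document automation systems are far more elaborate:

# Template-based document assembly sketch: interview answers populate
# placeholders in pre-existing text (all content here is hypothetical).
from string import Template

template = Template(
    "Dear $borrower,\n"
    "Your promissory note for $$${amount} is enclosed.\n"
)

answers = {"borrower": "A. Smith", "amount": "250,000"}  # from the interview
print(template.substitute(answers))  # a "good first draft" of the letter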

While document automation software is used primarily in the legal, financial
services, and risk management industries, it can be used in any industry that
creates transaction-based documents. A good example of how document
automation software can be used is with commercial mortgage documents. A
typical commercial mortgage transaction can include several documents,
including:

• promissory note

• environmental indemnity

• trust deed

• mortgage

• guaranty

A word processor (WP) is a device or computer program that provides for input,
editing, formatting, and output of text, often with some additional features.

Early word processors were stand-alone devices dedicated to the function, but
current word processors are word processor programs running on general-purpose
computers.

The functions of a word processor program fall somewhere between those of a
simple text editor and a fully featured desktop publishing program. However, the
distinctions between these three have changed over time and had become unclear
by the 2010s.

Contents

• 1 Background

• 2 Mechanical word processing

• 3 Electromechanical and electronic word processing

• 4 Word processing software

• 5 See also

• 6 References

Background

Word processors did not develop out of computer technology. Rather, they evolved
from mechanical machines and only later merged with the computer field. The
history of word processing is the story of the gradual automation of the
physical aspects of writing and editing, and then the refinement of that
technology to make it available to corporations and individuals.

The term word processing appeared in American offices in the early 1970s,
centered on the idea of streamlining the work of typists, but the meaning soon
shifted toward the automation of the whole editing cycle.

At first, the designers of word processing systems combined existing technologies
with emerging ones to develop stand-alone equipment, creating a new business
distinct from the emerging world of the personal computer. The concept of word
processing arose from the more general data processing, which since the 1950s had
been the application of computers to business administration.

Through history, there have been three types of word processors: mechanical,
electronic and software.

Mechanical word processing

The first word processing device (a "Machine for Transcribing Letters" that
appears to have been similar to a typewriter) was patented by Henry Mill for a
machine capable of "writing so clearly and accurately you could not distinguish it
from a printing press". More than a century later, another patent appeared in the
name of William Austin Burt for the typographer. In the late 19th century,
Christopher Latham Sholes created the first recognizable typewriter, which,
although large, was described as a "literary piano".

The only "word processing" these mechanical systems could perform was to
change where letters appeared on the page, to fill in spaces that were previously
left on the page, or to skip over lines. It was not until decades later that the
introduction of electricity and electronics into typewriters began to help the writer
with the mechanical part. The term “word processing” itself was created in the
1950s by Ulrich Steinhilper, a German IBM typewriter sales executive. However,
it did not appear in the office management or computing literature of the 1960s,
though many of the ideas, products, and technologies to which it would later be
applied were already well known.

Electromechanical and electronic word processing

By the late 1960s, IBM had developed the IBM MT/ST (Magnetic Tape/Selectric
Typewriter). This was a model of the IBM Selectric typewriter from earlier in
that decade, but built into its own desk and integrated with magnetic tape
recording and playback facilities, with controls and a bank of electrical relays.
The MT/ST automated word wrap, but it had no screen. It allowed rewriting of
text that had been recorded on another tape, and it supported collaboration:
a tape could be sent to another person, who could edit it or make a copy. It was
a revolution for the word processing industry. In 1969, the tapes were replaced
by magnetic cards. These memory cards were inserted into an extra device that
accompanied the MT/ST, which could read and record the work.

In the early 1970s, word processing then became computer-based (although only
with single-purpose hardware) with the development of several innovations. Just
before the arrival of the personal computer (PC), IBM developed the floppy disk.
Also in the early 1970s, word-processing systems with CRT screen display editing
were designed.

Word processing software

Main article: Word processor program

The final step in word processing came with the advent of the personal computer
in the late 1970s and 1980s and with the subsequent creation of word processing
software. Word processing systems that would create much more complex and
capable text were developed and prices began to fall, making them more accessible
to the public.

The first word processing program for personal computers (microcomputers) was
Electric Pencil, from Michael Shrayer Software, which went on sale in December of
1976. In 1978 WordStar appeared and, because of its many new features, soon
dominated the market. However, WordStar was written for the early CP/M (Control
Program–Micro) operating system, and by the time it was rewritten for the newer MS-
DOS (Microsoft Disk Operating System), it was obsolete. WordPerfect and its
competitor Microsoft Word replaced it as the main word processing programs during
the MS-DOS era, although there were less successful programs such as XyWrite.

Most early word processing software required users to memorize semi-mnemonic
key combinations rather than pressing keys such as "copy" or "bold". Moreover,
CP/M lacked cursor keys; for example WordStar used the E-S-D-X-centered
"diamond" for cursor navigation. However, the price differences between
dedicated word processors and general-purpose PCs, and the value added to the
latter by software such as “killer app” spreadsheet applications,
e.g. VisiCalc and Lotus 1-2-3, were so compelling that personal computers and
word processing software became serious competition for the dedicated machines
and soon dominated the market.

Desktop publishing (DTP) is the creation of documents using page layout software
on a personal ("desktop") computer. It was first used almost exclusively for print
publications, but now it also assists in the creation of various forms of online
content.[1] Desktop publishing software can generate layouts and produce
typographic-quality text and images comparable to traditional typography and
printing. Desktop publishing is also the main reference for digital typography. This
technology allows individuals, businesses, and other organizations to self-publish
a wide variety of content, from menus to magazines to books, without the expense
of commercial printing.

Desktop publishing often requires the use of a personal computer
and WYSIWYG page layout software to create documents for either large-scale
publishing or small-scale local multifunction peripheral output and distribution –
although a non-WYSIWYG system such as LaTeX could also be used for the creation
of highly structured and technically demanding documents.[3] Desktop
publishing methods provide more control over design, layout, and typography than
word processing. However, word processing software has evolved to include most, if
not all, capabilities previously available only with professional printing or desktop
publishing.

A presentation program (also called presentation software) is a software
package used to display information in the form of a slide show. It has three major
functions: an editor that allows text to be inserted and formatted, a method for
inserting and manipulating graphic images, and a slide-show system to display
the content.[1] Presentation software can be viewed as enabling a functionally-
specific category of electronic media, with its own distinct culture and practices as
compared to traditional presentation media.

Presentations in this mode of delivery are pervasive in all aspects of business
communications, especially in business planning, as well as in academic and
professional conference settings, and in the knowledge economy generally, where
ideas are a primary work output. Presentations may also feature prominently in
political settings, especially workplace politics, where persuasion is a central
determinant of group outcomes.

Most modern meeting rooms and conference halls are configured to include
presentation electronics, such as overhead projectors suitable for displaying
presentation slides, often driven by the presenter's own laptop, under direct
control of the presentation program used to develop the presentation. Often the
presenter will deliver a lecture using the slides as a visual aid for both the
presenter (to track the lecture's coverage) and the audience (especially when an
audience member mishears or misunderstands the verbal component).

Electronic mail (email or e-mail) is a method of exchanging messages ("mail")
between people using electronic devices. Email entered limited use in the 1960s,
but users could only send to users of the same computer, and some early email
systems required the author and the recipient to both be online simultaneously,
similar to instant messaging. Ray Tomlinson is credited as the inventor of email;
in 1971, he developed the first system able to send mail between users on
different hosts across the ARPANET, using the @ sign to link the user name with a
destination server. By the mid-1970s, this was the form recognized as email.

Email operates across computer networks, primarily the Internet. Today's email
systems are based on a store-and-forward model. Email servers accept, forward,
deliver, and store messages. Neither the users nor their computers are required to
be online simultaneously; they need connect only briefly, typically to a mail
server or a webmail interface, to send or receive messages.
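As a rough sketch of handing a message to a store-and-forward server, the standard-library code below composes a message and submits it over SMTP. It assumes a mail server reachable on localhost, and the addresses are invented:

# Store-and-forward in practice: hand a message to a mail server, which
# delivers it later; neither party need be online at the same moment.
# Sketch only -- assumes an SMTP server on localhost; addresses invented.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Hello"
msg.set_content("Sent via a store-and-forward relay.")

with smtplib.SMTP("localhost") as server:   # the accepting mail server
    server.send_message(msg)                # the server stores/forwards it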

Computers are so powerful and useful that they make our lives convenient and
comfortable. They are used in scheduling airline flights, which makes ticket
reservation easy and fast, not to mention accurate and convenient. They are even
used to predict or forecast weather conditions, keeping us well informed about an
incoming typhoon (or even a tidal wave). Automated teller machines (ATMs) are
computers that allow us to withdraw cash anytime, anywhere. These are but a few
of the benefits we have enjoyed because of the invention of computers. Imagine
living without computers; how tedious life could be.

But what can the computer really do? Why can it accomplish such incredible
tasks? The truth is that computers perform only four simple tasks: receive input,
process information, produce output, and store information. The question, then,
is why they can accomplish such tremendous work. The answer is that people
made them so. Behind every computer's power and tremendous capability is an
intelligent person. Human creativity and brilliance are the driving forces that
power up the computer, and they are expressed in the form of programs. A
program is a set of commands or instructions for a computer to follow. We
usually call this end product software; it will be our topic in the next chapter.
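A program really is just such a list of instructions. The toy Python script below (the passenger-booking scenario and file name are invented) walks through all four tasks in order:

# A program is a list of instructions covering the four tasks named in
# the text: receive input, process it, produce output, store it.
name = input("Passenger name: ")           # receive an input
seat = name[:3].upper() + "-01"            # process the information
print("Reservation confirmed:", seat)      # produce output
with open("bookings.txt", "a") as f:       # store the information
    f.write(f"{name},{seat}\n")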

Every computer system contains hardware components that are used for receiving
input, processing information, producing output, and storing information. Let
us start with the input devices. The most common input devices are the keyboard,
mouse, and joystick (used in computer games), and the most common output
devices are the monitor (screen) and printer.

Let us go to the microprocessor. The microprocessor is used for processing
information, such as performing arithmetic computations and making decisions
based on given choices or alternatives. Technically speaking, the microprocessor
is also called the central processing unit (CPU): the brain of the computer.

The most common storage devices used for storing information are floppy
diskettes, hard disks, CD-ROMs, DVD-ROMs, USB flash drives, and backup tapes.
The computer's memory (both temporary and permanent storage) is also,
technically speaking, a storage device: it holds information temporarily (in the
case of RAM, random access memory, chips) and permanently (in the case of ROM
chips). The computer is made of these major components.

The history of computing hardware covers the developments from early simple
devices to aid calculation to modern day computers. Before the 20th century, most
calculations were done by humans. Early mechanical tools to help humans with
digital calculations, like the abacus, were referred to as calculating machines or
calculators (and other proprietary names). The machine operator was called the
computer.

The first aids to computation were purely mechanical devices which required the
operator to set up the initial values of an elementary arithmetic operation, then
manipulate the device to obtain the result. Later, computers represented numbers
in a continuous form (e.g. distance along a scale, rotation of a shaft, or a voltage).
Numbers could also be represented in the form of digits, automatically
manipulated by a mechanism. Although this approach generally required more
complex mechanisms, it greatly increased the precision of results. The
development of transistor technology and then the integrated circuit chip led to a
series of breakthroughs, starting with transistor computers and then integrated
circuit computers, causing digital computers to largely replace analog computers.
Metal-oxide-semiconductor (MOS) large-scale integration (LSI) then enabled
semiconductor memory and the microprocessor, leading to another key
breakthrough, the miniaturized personal computer (PC), in the 1970s. The cost of
computers gradually became so low that personal computers by the 1990s, and
then mobile computers (smartphones and tablets) in the 2000s, became
ubiquitous.

Contents

• 1 Early devices

o 1.1 Ancient and medieval

o 1.2 Renaissance calculating tools

o 1.3 Mechanical calculators

o 1.4 Punched-card data processing

o 1.5 Calculators

• 2 First general-purpose computing device

• 3 Analog computers

• 4 Advent of the digital computer

o 4.1 Electromechanical computers

o 4.2 Digital computation

o 4.3 Electronic data processing

o 4.4 The electronic programmable computer

• 5 Stored-program computer

o 5.1 Theory

o 5.2 Manchester Baby

o 5.3 Manchester Mark 1

o 5.4 EDSAC

o 5.5 EDVAC

o 5.6 Commercial computers

o 5.7 Microprogramming

• 6 Magnetic memory

• 7 Early digital computer characteristics

• 8 Transistor computers

o 8.1 Transistor peripherals

o 8.2 Transistor supercomputers

• 9 Integrated circuit computers

• 10 Semiconductor memory

• 11 Microprocessor computers

• 12 Epilogue

• 13 See also

• 14 Notes

• 15 References

• 16 Further reading

• 17 External links

Early devices

See also: Timeline of computing hardware before 1950

Ancient and medieval

The Ishango bone is thought to be a Paleolithic tally stick.

Suanpan (The number represented on this abacus is 6,302,715,408)

Devices have been used to aid computation for thousands of years, mostly using
one-to-one correspondence with fingers. The earliest counting device was
probably a form of stick. The Lebombo bone from the mountains between
Swaziland and South Africa may be the oldest known mathematical artifact. It
dates from 35,000 BCE and consists of 29 distinct notches that were deliberately
cut into a baboon's fibula. Later record keeping aids throughout the Fertile
Crescent included calculi (clay spheres, cones, etc.) which represented counts of
items, probably livestock or grains, sealed in hollow unbaked clay containers. The
use of counting rods is one example. The abacus was used early on for arithmetic
tasks. What we now call the Roman abacus was used in Babylonia as
early as c. 2700–2300 BC. Since then, many other forms of reckoning boards or
tables have been invented. In a medieval European counting house, a checkered
cloth would be placed on a table, and markers moved around on it according to
certain rules, as an aid to calculating sums of money.

Several analog computers were constructed in ancient and medieval times to
perform astronomical calculations. These included the astrolabe and Antikythera
mechanism from the Hellenistic world (c. 150–100 BC). In Roman Egypt, Hero of
Alexandria (c. 10–70 AD) made mechanical devices including automata and a
programmable cart. Other early mechanical devices used to perform one or
another type of calculation include the planisphere and other mechanical
computing devices invented by Abu Rayhan al-Biruni (c. AD 1000); the
equatorium and universal latitude-independent astrolabe by Abū Ishāq Ibrāhīm
al-Zarqālī (c. AD 1015); the astronomical analog computers of other medieval
Muslim astronomers and engineers; and the astronomical clock tower of Su Song
(1094) during the Song dynasty. The castle clock, a hydropowered mechanical
astronomical clock invented by Ismail al-Jazari in 1206, was the first
programmable analog computer. Ramon Llull invented the Lullian Circle: a
notional machine for calculating answers to philosophical questions (in this case,
to do with Christianity) via logical combinatorics. This idea was taken up by
Leibniz centuries later, and is thus one of the founding elements in computing and
information science.

Renaissance calculating tools

A set of John Napier's calculating tables from around 1680

Scottish mathematician and physicist John Napier discovered that the
multiplication and division of numbers could be performed by the addition and
subtraction, respectively, of the logarithms of those numbers. While producing the
first logarithmic tables, Napier needed to perform many tedious multiplications.
It was at this point that he designed his "Napier's bones", an abacus-like device
that greatly simplified calculations involving multiplication and division.
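Napier's key identity is log(a·b) = log a + log b: look up two logarithms, add them, and take the antilogarithm of the sum. A quick Python check (the numbers 37 and 52 are arbitrary):

# Napier's insight: multiplication reduces to addition of logarithms.
import math

a, b = 37.0, 52.0
log_sum = math.log10(a) + math.log10(b)   # two table lookups, one addition
print(10 ** log_sum)                      # ~1924.0, the antilogarithm
print(a * b)                              # 1924.0, for comparison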

A slide rule

Since real numbers can be represented as distances or intervals on a line, the
slide rule was invented in the 1620s, shortly after Napier's work, to allow
multiplication and division operations to be carried out significantly faster than
was previously possible. Edmund Gunter built a calculating device with a single
logarithmic scale at the University of Oxford. His device greatly simplified
arithmetic calculations, including multiplication and division. William Oughtred
greatly improved this in 1630 with his circular slide rule. He followed this up with
the modern slide rule in 1632, essentially a combination of two Gunter rules,
held together with the hands. Slide rules were used by generations of engineers
and other mathematically involved professional workers, until the invention of
the pocket calculator.

Mechanical calculators

Wilhelm Schickard, a German polymath, designed a calculating machine in 1623
which combined a mechanized form of Napier's rods with the world's first
mechanical adding machine built into the base. Because it made use of a
single-tooth gear, there were circumstances in which its carry mechanism would
jam. A fire destroyed at least one of the machines in 1624, and it is believed
Schickard was too disheartened to build another.

View through the back of Pascal's calculator. Pascal invented his machine in 1642.

In 1642, while still a teenager, Blaise Pascal started some pioneering work on
calculating machines and after three years of effort and 50 prototypes he invented
a mechanical calculator. He built twenty of these machines (called Pascal's
calculator or Pascaline) in the following ten years. Nine Pascalines have survived,
most of which are on display in European museums. A continuing debate exists
over whether Schickard or Pascal should be regarded as the "inventor of the
mechanical calculator" and the range of issues to be considered is discussed
elsewhere.

Gottfried Wilhelm von Leibniz invented the stepped reckoner and his famous
stepped drum mechanism around 1672. He attempted to create a machine that
could be used not only for addition and subtraction but would utilise a moveable
carriage to enable long multiplication and division. Leibniz once said "It is
unworthy of excellent men to lose hours like slaves in the labor of calculation
which could safely be relegated to anyone else if machines were used." However,
Leibniz did not incorporate a fully successful carry mechanism. Leibniz also
described the binary numeral system, a central ingredient of all modern
computers. However, up to the 1940s, many subsequent designs (including
Charles Babbage's machines and even the 1945 ENIAC) were based on the
decimal system.

Punched-card data processing

In 1804, French weaver Joseph Marie Jacquard developed a loom in which the
pattern being woven was controlled by a paper tape constructed from punched
cards. The paper tape could be changed without changing the mechanical design
of the loom. This was a landmark achievement in programmability. His machine
was an improvement over similar weaving looms. Punched cards were preceded
by punch bands, as in the machine proposed by Basile Bouchon. These bands
would inspire information recording for automatic pianos and more recently
numerical control machine tools.

IBM punched-card accounting machines, 1936

In the late 1880s, the American Herman Hollerith invented data storage on
punched cards that could then be read by a machine. To process these punched
cards, he invented the tabulator and the keypunch machine. His machines used
electromechanical relays and counters. Hollerith's method was used in the 1890
United States Census. That census was processed two years faster than the prior
census had been. Hollerith's company eventually became the core of IBM.

By 1920, electromechanical tabulating machines could add, subtract, and print
accumulated totals. Machine functions were directed by inserting dozens of wire
jumpers into removable control panels. When the United States instituted Social
Security in 1935, IBM punched-card systems were used to process records of 26
million workers. Punched cards became ubiquitous in industry and government
for accounting and administration.

Leslie Comrie's articles on punched-card methods and W. J. Eckert's publication
of Punched Card Methods in Scientific Computation in 1940 described punched-card
techniques sufficiently advanced to solve some differential equations or perform
multiplication and division using floating-point representations, all on punched
cards and unit record machines. Such machines were used during World War II
for cryptographic statistical processing, as well as a vast number of administrative
uses. The Astronomical Computing Bureau, Columbia University, performed
astronomical calculations representing the state of the art in computing.

The book IBM and the Holocaust by Edwin Black outlines the ways in which IBM's
technology helped facilitate Nazi genocide through generation and tabulation of
punch cards based on national census data. See also: Dehomag

Calculators

Main article: Calculator

The Curta calculator could also do multiplication and division.

By the 20th century, earlier mechanical calculators, cash registers, accounting
machines, and so on were redesigned to use electric motors, with gear position as
the representation for the state of a variable. The word "computer" was a job title
assigned primarily to women who used these calculators to perform mathematical
calculations.[35] By the 1920s, British scientist Lewis Fry Richardson's interest in
weather prediction led him to propose human computers and numerical analysis
to model the weather; to this day, the most powerful computers on Earth are
needed to adequately model the weather using the Navier–Stokes equations.

Companies like Friden, Marchant Calculator and Monroe made desktop
mechanical calculators from the 1930s that could add, subtract, multiply and
divide. In 1948, the Curta was introduced by Austrian inventor Curt Herzstark. It
was a small, hand-cranked mechanical calculator and, as such, a descendant of
Gottfried Leibniz's stepped reckoner and Thomas's Arithmometer.

The world's first all-electronic desktop calculator was the British Bell Punch
ANITA, released in 1961. It used vacuum tubes, cold-cathode tubes and
Dekatrons in its circuits, with 12 cold-cathode "Nixie" tubes for its display. The
ANITA sold well since it was the only electronic desktop calculator available, and
was silent and quick. The tube technology was superseded in June 1963 by the
U.S.-manufactured Friden EC-130, which had an all-transistor design, a stack of
four 13-digit numbers displayed on a 5-inch (13 cm) CRT, and introduced reverse
Polish notation (RPN).

First general-purpose computing device

Main article: Analytical Engine

A portion of Babbage's Difference Engine

Charles Babbage, an English mechanical engineer and polymath, originated the
concept of a programmable computer. Considered the "father of the computer",
he conceptualized and invented the first mechanical computer in the early 19th
century. After working on his revolutionary difference engine, designed to aid in
navigational calculations, in 1833 he realized that a much more general design, an
Analytical Engine, was possible. The input of programs and data was to be
provided to the machine via punched cards, a method being used at the time to
direct mechanical looms such as the Jacquard loom. For output, the machine
would have a printer, a curve plotter and a bell. The machine would also be able to
punch numbers onto cards to be read in later. It employed ordinary base-10 fixed-
point arithmetic.

The Engine incorporated an arithmetic logic unit, control flow in the form of
conditional branching and loops, and integrated memory, making it the first
design for a general-purpose computer that could be described in modern terms
as Turing-complete.

There was to be a store, or memory, capable of holding 1,000 numbers of 40
decimal digits each (ca. 16.7 kB). An arithmetical unit, called the "mill", would be
able to perform all four arithmetic operations, plus comparisons and optionally
square roots. Initially it was conceived as a difference engine curved back upon
itself, in a generally circular layout, with the long store exiting off to one side.
(Later drawings depict a regularized grid layout.) Like the central processing unit
(CPU) in a modern computer, the mill would rely on its own internal procedures,
roughly equivalent to microcode in modern CPUs, to be stored in the form of pegs
inserted into rotating drums called "barrels", to carry out some of the more
complex instructions the user's program might specify.

Trial model of a part of the Analytical Engine, built by Babbage, as displayed at the
Science Museum (London)

The programming language to be employed by users was akin to modern day
assembly languages. Loops and conditional branching were possible, and so the
language as conceived would have been Turing-complete as later defined by Alan
Turing. Three different types of punch cards were used: one for arithmetical
operations, one for numerical constants, and one for load and store operations,
transferring numbers from the store to the arithmetical unit or back. There were
three separate readers for the three types of cards.

The machine was about a century ahead of its time. However, the project was
slowed by various problems, including disputes with the chief machinist building
parts for it. All the parts for his machine had to be made by hand; this was a major
problem for a machine with thousands of parts. Eventually, the project was
dissolved with the decision of the British Government to cease funding. Babbage's
failure to complete the Analytical Engine can be chiefly attributed not only to
difficulties of politics and financing, but also to his desire to develop an
increasingly sophisticated computer and to move ahead faster than anyone else
could follow. Ada Lovelace translated and added notes to the "Sketch of the
Analytical Engine" by Luigi Federico Menabrea. This appears to be the first
published description of programming, so Ada Lovelace is widely regarded as the
first computer programmer.

Analog computers

Main article: Analog computer

Further information: Mechanical computer

Sir William Thomson's third tide-predicting machine design, 1879–81

In the first half of the 20th century, analog computers were considered by many to
be the future of computing. These devices used the continuously changeable
aspects of physical phenomena, such as electrical, mechanical, or hydraulic
quantities, to model the problem being solved, in contrast to digital computers,
which represented varying quantities symbolically, as their numerical values
change. As an analog computer does not use discrete values, but rather
continuous values, processes cannot be reliably repeated with exact equivalence,
as they can with Turing machines.

The first modern analog computer was a tide-predicting machine, invented by Sir
William Thomson, later Lord Kelvin, in 1872. It used a system of pulleys and wires
to automatically calculate predicted tide levels for a set period at a particular
location and was of great utility to navigation in shallow waters. His device was the
foundation for further developments in analog computing.

The differential analyzer, a mechanical analog computer designed to solve
differential equations by integration using wheel-and-disc mechanisms, was
conceptualized in 1876 by James Thomson, the brother of the more famous Lord
Kelvin. He explored the possible construction of such calculators, but was stymied
by the limited output torque of the ball-and-disk integrators. In a differential
analyzer, the output of one integrator drove the input of the next integrator, or a
graphing output.

A Mk. I Drift Sight. The lever just in front of the bomb aimer's fingertips sets the
altitude, the wheels near his knuckles set the wind and airspeed.

An important advance in analog computing was the development of the first
fire-control systems for long-range ship gun laying. When gunnery ranges
increased dramatically in the late 19th century, it was no longer a simple matter of
calculating the proper aim point, given the flight times of the shells. Various
spotters on board the ship would relay distance measures and observations to a
central plotting station. There the fire direction teams fed in the location, speed
and direction of the ship and its target, as well as various adjustments for the
Coriolis effect, weather effects on the air, and other adjustments; the computer
would then output a firing solution, which would be fed to the turrets for laying.
In 1912, British engineer Arthur Pollen developed the first electrically powered
mechanical analogue computer (called at the time the Argo Clock). It was used by
the Imperial Russian Navy in World War I. The alternative Dreyer Table fire
control system was fitted to British capital ships by mid-1916.

Mechanical devices were also used to aid the accuracy of aerial bombing. Drift
Sight was the first such aid, developed by Harry Wimperis in 1916 for the Royal
Naval Air Service; it measured the wind speed from the air, and used that
measurement to calculate the wind's effects on the trajectory of the bombs. The
system was later improved with the Course Setting Bomb Sight, and reached a
climax with World War II bomb sights, the Mark XIV bomb sight (RAF Bomber
Command) and the Norden (United States Army Air Forces).

The art of mechanical analog computing reached its zenith with the differential
analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927, which
built on the mechanical integrators of James Thomson and the torque amplifiers
invented by H. W. Nieman. A dozen of these devices were built before their
obsolescence became obvious; the most powerful was constructed at the
University of Pennsylvania's Moore School of Electrical Engineering, where the
ENIAC was built.

In his landmark 1936 paper, Alan Turing introduced the notion of a "universal
machine" (now known as a universal Turing machine), with the idea that such a
machine could perform the tasks of any other machine; in other words, it is
provably capable of computing anything that is computable by executing a
program stored on tape, allowing the machine to be programmable. Von Neumann
acknowledged that the central concept of the modern computer was due to this
paper. Turing machines are to this day a central object of study in the theory of
computation.
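The heart of the universal-machine idea is that the rule table itself is just data. The toy Python interpreter below (the rule format and the unary-increment example are invented for illustration, not Turing's own notation) executes whatever rules it is handed:

# A tiny Turing-machine interpreter: the "program" is a table of rules
# stored as data, which is the essence of the universal machine.
def run(rules, tape, state="start"):
    pos = 0
    while state != "halt":
        symbol = tape[pos] if pos < len(tape) else "_"   # blank beyond end
        write, move, state = rules[(state, symbol)]
        if pos < len(tape):
            tape[pos] = write
        else:
            tape.append(write)
        pos += 1 if move == "R" else -1
    return tape

# Example rules (hypothetical): scan right over 1s, append one more 1.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run(rules, list("111")))   # ['1', '1', '1', '1']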

Electromechanical computers

Further information: Mechanical computer § Electro-mechanical computers

The era of modern computing began with a flurry of development before and
during World War II. Most digital computers built in this period were
electromechanical: electric switches drove mechanical relays to perform the
calculation. These devices had a low operating speed and were eventually
superseded by much faster all-electric computers, originally using vacuum tubes.

The Z2 was one of the earliest examples of an electromechanical relay computer,
and was created by German engineer Konrad Zuse in 1940. It was an improvement
on his earlier Z1; although it used the same mechanical memory, it replaced the
arithmetic and control logic with electrical relay circuits.

Replica of Zuse's Z3, the first fully automatic, digital (electromechanical)
computer

In the same year, electro-mechanical devices called bombes were built by British
cryptologists to help decipher German Enigma-machine-encrypted secret
messages during World War II. The bombe's initial design was created in 1939 at
the UK Government Code and Cypher School (GC&CS) at Bletchley Park by
Alan Turing, with an important refinement devised in 1940 by Gordon Welchman.
The engineering design and construction were the work of Harold Keen of the
British Tabulating Machine Company. It was a substantial development from a
device that had been designed in 1938 by Polish Cipher Bureau cryptologist
Marian Rejewski, and known as the "cryptologic bomb" (Polish: "bomba
kryptologiczna").

In 1941, Zuse followed his earlier machine up with the Z3, the world's first
working electromechanical programmable, fully automatic digital computer. The
Z3 was built with 2,000 relays, implementing a 22-bit word length that operated
at a clock frequency of about 5–10 Hz. Program code and data were stored on
punched film. It was quite similar to modern machines in some respects,
pioneering numerous advances such as floating-point numbers. Replacement of
the hard-to-implement decimal system (used in Charles Babbage's earlier design)
by the simpler binary system meant that Zuse's machines were easier to build and
potentially more reliable, given the technologies available at that time. The Z3
was probably a Turing-complete machine. In two 1936 patent applications, Zuse
also anticipated that machine instructions could be stored in the same storage
used for data: the key insight of what became known as the von Neumann
architecture, first implemented in 1948 in America in the electromechanical IBM
SSEC and in Britain in the fully electronic Manchester Baby.

Digital computation

The term digital was first suggested by George Robert Stibitz and refers to a
signal, such as a voltage, being used not to directly represent a value (as it would
be in an analog computer), but to encode it. In November 1937, George Stibitz,
then working at Bell Labs (1930–1941), completed a relay-based calculator he
later dubbed the "Model K" (for "kitchen table", on which he had assembled it),
which became the first binary adder. Typically signals have two states: low
(usually representing 0) and high (usually representing 1), but sometimes
three-valued logic is used, especially in high-density memory. Modern computers
generally use binary logic, but many early machines were decimal computers. In
these machines, the basic unit of data was the decimal digit, encoded in one of
several schemes, including binary-coded decimal (BCD), bi-quinary, excess-3, and
two-out-of-five code.
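To see what two of these digit encodings look like, the short Python sketch below prints a number in binary-coded decimal and in excess-3 (the sample value 1937 is arbitrary):

# Decimal digits encoded for machine use: BCD gives each digit its own
# 4-bit pattern; excess-3 adds 3 to the digit before encoding.
def to_bcd(n):
    return " ".join(format(int(d), "04b") for d in str(n))

def to_excess3(n):
    return " ".join(format(int(d) + 3, "04b") for d in str(n))

print(to_bcd(1937))      # 0001 1001 0011 0111
print(to_excess3(1937))  # 0100 1100 0110 1010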

The mathematical basis of digital computing is Boolean algebra, developed by the
British mathematician George Boole in his work The Laws of Thought, published in
1854. His Boolean algebra was further refined in the 1860s by William Jevons
and Charles Sanders Peirce, and was first presented systematically by Ernst
Schröder and A. N. Whitehead. In 1879 Gottlob Frege developed the formal
approach to logic and proposed the first logic language for logical equations.

In the 1930s, working independently, American electronic engineer Claude
Shannon and Soviet logician Victor Shestakov both showed a one-to-one
correspondence between the concepts of Boolean logic and certain electrical
circuits, now called logic gates, which are now ubiquitous in digital computers.
They showed that electronic relays and switches can realize the expressions of
Boolean algebra. This thesis essentially founded practical digital circuit design.
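The Boolean-to-circuit correspondence can be sketched by wiring Python functions together the way gates are wired: below, an AND gate and an XOR gate form a half adder, the building block of the kind of binary adder mentioned earlier (the gate-as-function style is just an illustration):

# Boolean algebra realized as logic gates: a half adder from AND and
# XOR, echoing the switching-circuit correspondence Shannon described.
def AND(a, b):
    return a & b

def XOR(a, b):
    return a ^ b

def half_adder(a, b):
    # Add two 1-bit inputs; returns (sum bit, carry bit).
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))   # 1 + 1 -> sum 0, carry 1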

Electronic data processing

Atanasoff–Berry Computer replica on the first floor of Durham Center, Iowa State
University

Purely electronic circuit elements soon replaced their mechanical and
electromechanical equivalents, at the same time that digital calculation replaced
analog. Machines such as the Z3, the Atanasoff–Berry Computer, the Colossus
computers, and the ENIAC were built by hand, using circuits containing relays or
valves (vacuum tubes), and often used punched cards or punched paper tape for
input and as the main (non-volatile) storage medium.

The engineer Tommy Flowers joined the telecommunications branch of the
General Post Office in 1926. While working at the research station in Dollis Hill in
the 1930s, he began to explore the possible use of electronics for the telephone
exchange. Experimental equipment that he built in 1934 went into operation five
years later, converting a portion of the telephone exchange network into an
electronic data processing system, using thousands of vacuum tubes.

Computers whose logic was primarily built using vacuum tubes are now known as
first generation computers.

The electronic programmable computer

Main articles: Colossus computer and ENIAC

Colossus was the first electronic digital programmable computing device, and was
used to break German ciphers during World War II. It remained unknown, as a
military secret, well into the 1970s.

During World War II, British code breakers at Bletchley Park, 40 miles (64 km)
north of London, achieved a number of successes at breaking encrypted enemy
military communications. The German encryption machine, Enigma, was first
attacked with the help of the electro-mechanical bombes. Women often operated
these bombe machines. They ruled out possible Enigma settings by performing
chains of logical deductions implemented electrically. Most possibilities led to a
contradiction, and the few remaining could be tested by hand.

The Germans also developed a series of teleprinter encryption systems, quite
different from Enigma. The Lorenz SZ 40/42 machine was used for high-level
Army communications, code-named "Tunny" by the British. The first intercepts
of Lorenz messages began in 1941. As part of an attack on Tunny, Max Newman
and his colleagues developed the Heath Robinson, a fixed-function machine to aid
in code breaking. Tommy Flowers, a senior engineer at the Post Office Research
Station, was recommended to Max Newman by Alan Turing and spent eleven
months from early February 1943 designing and building the more flexible
Colossus computer (which superseded the Heath Robinson). After a functional
test in December 1943, Colossus was shipped to Bletchley Park, where it was
delivered on 18 January 1944 and attacked its first message on 5 February.

Wartime photo of Colossus No. 10

Colossus was the world's first electronic digital programmable computer. It used
a large number of valves (vacuum tubes). It had paper-tape input and was
capable of being configured to perform a variety of Boolean logical operations on
its data, but it was not Turing-complete. Data input to Colossus was by
photoelectric reading of a paper tape transcription of the enciphered
intercepted message. This was arranged in a continuous loop so that it could be
read and re-read multiple times – there being no internal store for the data. The
reading mechanism ran at 5,000 characters per second with the paper tape
moving at 40 ft/s (12.2 m/s; 27.3 mph). Colossus Mark 1 contained 1,500
thermionic valves (tubes); Mark 2, with 2,400 valves and five processors operating
in parallel, was both five times faster and simpler to operate than Mark 1, greatly
speeding the decoding process. Mark 2 was designed while Mark 1 was being
when Tommy Flowers moved on to other projects.[91] The first Mark 2 Colossus
became operational on 1 June 1944, just in time for the Allied Invasion of
Normandy on D-Day.

Most of the use of Colossus was in determining the start positions of the Tunny
rotors for a message, which was called "wheel setting". Colossus included the first-
ever use of shift registers and systolic arrays, enabling five simultaneous tests, each
involving up to 100 Boolean calculations. This enabled five different possible start
positions to be examined for one transit of the paper tape. As well as wheel
setting, some later Colossi included mechanisms intended to help determine pin
patterns, known as "wheel breaking". Both models were programmable using switches and
plug panels in a way their predecessors had not been. Ten Mk 2 Colossi were
operational by the end of the war.

ENIAC was the first Turing-complete electronic device, and performed ballistics
trajectory calculations for the United States Army.

Without the use of these machines, the Allies would have been deprived of the very
valuable intelligence that was obtained from reading the vast quantity of
enciphered high-level telegraphic messages between the German High Command
(OKW) and their army commands throughout occupied Europe. Details of their
existence, design, and use were kept secret well into the 1970s. Winston
Churchill personally issued an order for their destruction into pieces no larger than
a man's hand, to keep secret that the British were capable of cracking Lorenz SZ
cyphers (from German rotor stream cipher machines) during the oncoming Cold
War. Two of the machines were transferred to the newly formed GCHQ and the
others were destroyed. As a result, the machines were not included in many
histories of computing. A reconstructed working copy of one of the Colossus
machines is now on display at Bletchley Park.

ENIAC combined the high speed of electronics with the ability to be programmed
for many complex problems. It could add or subtract 5,000 times a second, a
thousand times faster than any other machine. It also had modules to multiply,
divide, and take square roots. High-speed memory was limited to 20 words
(equivalent to about 80 bytes). Built under the direction of John Mauchly and
J. Presper Eckert at the University of Pennsylvania, ENIAC's development and
construction lasted from 1943 to full operation at the end of 1945. The machine
was huge, weighing 30 tons, using 200 kilowatts of electric power, and contained
over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors,
capacitors, and inductors. One of its major engineering feats was to minimize the
effects of tube burnout, which was a common problem in machine reliability at
that time. The machine was in almost constant use for the next ten years.

Stored-program computer

Main article: Stored-program computer

Further information: List of vacuum tube computers

Early computing machines were programmable in the sense that they could follow
the sequence of steps they had been set up to execute, but the "program", or steps
that the machine was to execute, were set up usually by changing how the wires
were plugged into a patch panel or plug board. "Reprogramming", when it was
possible at all, was a laborious process, starting with engineers working out
flowcharts, designing the new set up, and then the often-exacting process of physically
re-wiring patch panels. Stored-program computers, by contrast, were designed to
store a set of instructions (a program), in memory – typically the same memory as
stored data.

Theory

Design of the von Neumann architecture, 1947

The theoretical basis for the stored-program computer had been proposed by
Alan Turing in his 1936 paper. In 1945 Turing joined the National Physical
Laboratory and began his work on developing an electronic stored-program digital
computer. His 1945 report ‘Proposed Electronic Calculator’ was the first
specification for such a device.

Meanwhile, John von Neumann at the Moore School of Electrical Engineering,
University of Pennsylvania, circulated his First Draft of a Report on the EDVAC in
1945. Although substantially similar to Turing's design and containing
comparatively little engineering detail, the computer architecture it outlined
became known as the "von Neumann architecture". Turing presented a more
detailed paper to the National Physical Laboratory (NPL) Executive Committee
in 1946, giving the first reasonably complete design of a stored-program
computer, a device he called the Automatic Computing Engine (ACE). However,
the better-known EDVAC design of John von Neumann, who knew of

Turing's theoretical work, received more publicity, despite its incomplete nature
and questionable lack of attribution of the sources of some of the ideas.

Manchester Baby

Main article: Manchester Baby

A section of the rebuilt Manchester Baby, the first electronic stored-program
computer

The Manchester Baby was the world's first electronic stored-program computer. It
was built at the Victoria University of Manchester by Frederic C. Williams, Tom
Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.

The machine was not intended to be a practical computer but was instead designed
as a test bed for the Williams tube, the first random-access digital storage device.
Invented by Freddie Williams and Tom Kilburn at the University of Manchester in 1946
and 1947, it was a cathode ray tube that used an effect called secondary emission to temporarily
store electronic binary data, and was used successfully in several early computers.

Although the computer was small and primitive by the standards of its time, it was the first
working machine to contain all of the elements essential to a modern electronic computer. As soon
as the Baby had demonstrated the feasibility of its design, a project was initiated at the university
to develop it into a more usable computer, the Manchester Mark 1. The Mark 1 in turn quickly
became the prototype for the Ferranti Mark 1, the world's first commercially available
general-purpose computer.

The Baby had a 32-bit word length and a memory of 32 words. As it was designed to be the simplest
possible stored-program computer, the only arithmetic operations implemented in hardware were
subtraction and negation; other arithmetic operations were implemented in software. The first of
three programs written for the machine found the highest proper divisor of 2^18 (262,144), a
calculation that was known would take a long time to run—and so prove the computer's
reliability—by testing every integer from 2^18 − 1 downwards, as division was implemented by
repeated subtraction of the divisor. The program consisted of 17 instructions and ran for 52
minutes before reaching the correct answer of 131,072, after the Baby had performed 3.5 million
operations (for an effective CPU speed of 1.1 kIPS).
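To make the algorithm concrete, here is a minimal Python sketch of the same computation (illustrative only; the Baby's actual program was 17 machine instructions, not Python). It searches downward from 2^18 − 1 and, like the Baby, performs division by repeated subtraction:

```python
# A sketch of the Baby's first program: find the highest proper divisor
# of 2^18 by testing every candidate from 2^18 - 1 downwards.

def divides_by_subtraction(n, d):
    """Return True if d divides n, using repeated subtraction (no '/' or '%')."""
    remainder = n
    while remainder >= d:
        remainder -= d
    return remainder == 0

def highest_proper_divisor(n):
    candidate = n - 1
    while not divides_by_subtraction(n, candidate):
        candidate -= 1
    return candidate

print(highest_proper_divisor(2 ** 18))  # 131072, the answer the Baby reached
```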

Manchester Mark 1

The Small-Scale Experimental Machine led on to the development of the Manchester Mark 1 at the University
of Manchester.[104] Work began in August 1948, and the first version was operational by April 1949;
a program written to search for Mersenne primes ran error-free for nine hours on the night of
16/17 June 1949. The machine's successful operation was widely reported in the British press,
which used the phrase "electronic brain" in describing it to their readers.

The computer is especially historically significant because of its pioneering inclusion of index
registers, an innovation which made it easier for a program to read sequentially through an array
of words in memory. Thirty-four patents resulted from the machine's development, and many of the
ideas behind its design were incorporated in subsequent commercial products such as the IBM
701 and 702 as well as the Ferranti Mark 1. The chief designers, Frederic C. Williams and Tom
Kilburn, concluded from their experiences with the Mark 1 that computers would be used more in
scientific roles than in pure mathematics. In 1951 they started development work on Meg,
the Mark 1's successor, which would include a floating point unit.

EDSAC


The other contender for being the first recognizably modern digital stored-
program computer was the EDSAC, designed and constructed by Maurice Wilkes
and his team at the University of Cambridge Mathematical Laboratory in
England in 1949. The machine was inspired by John von Neumann's seminal
First Draft of a Report on the EDVAC and was one of the first usefully
operational electronic digital stored-program computers.

EDSAC ran its first programs on 6 May 1949, when it calculated a table of
squares and a list of prime numbers. The EDSAC also served as the basis for the first
commercially applied computer, the LEO I, used by food manufacturing
company J. Lyons & Co. Ltd. EDSAC 1 was finally shut down on 11 July
1958, having been superseded by EDSAC 2, which stayed in use until 1965.

EDVAC


ENIAC inventors John Mauchly and J. Presper Eckert proposed the EDVAC's
construction in August 1944, and design work for the EDVAC commenced at the
University of Pennsylvania's Moore School of Electrical Engineering, before the
ENIAC was fully operational. The design implemented a number of important
architectural and logical improvements conceived during the ENIAC's
construction, and a high-speed serial-access memory. However, Eckert and
Mauchly left the project and its construction floundered.

Commercial computers

The first commercial computer was the Ferranti Mark 1, built by Ferranti and
delivered to the University of Manchester in February 1951. It was based on the
Manchester Mark 1. The main improvements over the Manchester Mark 1 were
in the size of the primary storage (using random access Williams tubes),
secondary storage (using a magnetic drum), a faster multiplier, and additional
instructions. The basic cycle time was 1.2 milliseconds, and a multiplication
could be completed in about 2.16 milliseconds. The multiplier used almost a
quarter of the machine's 4,050 vacuum tubes (valves). [112] A second machine
was purchased by the University of Toronto, before the design was revised into
the Mark 1 Star. At least seven of these later machines were delivered between
1953 and 1957, one of them to Shell labs in Amsterdam.[113]

In October 1947, the directors of J. Lyons & Company, a British catering company
famous for its teashops but with strong interests in new office management
techniques, decided to take an active role in promoting the commercial
development of computers. The LEO I computer became operational in April
1951[114] and ran the world's first regular routine office computer job. On 17
November 1951, the J. Lyons Company began weekly operation of a bakery
valuations job on the LEO (Lyons Electronic Office). This was the first business
application to go live on a stored program computer.[115]

Front panel of the IBM 650

In June 1951, the UNIVAC I (Universal Automatic Computer) was delivered to the
U.S. Census Bureau. Remington Rand eventually sold 46 machines at more than
US$1 million each ($9.85 million as of 2020). UNIVAC was the first
"mass-produced" computer. It used 5,200 vacuum tubes and consumed 125 kW of
power. Its primary storage was serial-access mercury delay lines capable of
storing 1,000 words of 11 decimal digits plus sign (72-bit words).

IBM introduced a smaller, more affordable computer in 1954 that proved very
popular. The IBM 650 weighed over 900 kg, the attached power supply weighed
around 1350 kg and both were held in separate cabinets of roughly 1.5 meters by
0.9 meters by 1.8 meters. It cost US$500,000 ($4.76 million as of 2020) or could
be leased for US$3,500 a month ($30 thousand as of 2020). Its drum memory was
originally 2,000 ten-digit words, later expanded to 4,000 words. Memory
limitations such as this were to dominate programming for decades afterward.

Microprogramming

In 1951, British scientist Maurice Wilkes developed the concept of
microprogramming from the realization that the central processing unit of a
computer could be controlled by a miniature, highly specialized computer
program in high-speed ROM. Microprogramming allows the base instruction set
to be defined or extended by built-in programs (now called
firmware or microcode). This concept greatly simplified CPU development. He first
described this at the University of Manchester Computer Inaugural Conference in
1951, then published in expanded form in IEEE Spectrum in 1955.

It was widely used in the CPUs and floating-point units of mainframe and other
computers; it was implemented for the first time in EDSAC 2, which also used
multiple identical "bit slices" to simplify design. Interchangeable, replaceable tube
assemblies were used for each bit of the processor.
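To make the idea concrete, here is a toy Python sketch of a microprogrammed control unit. It is purely illustrative: the opcodes, micro-operations, and control store below are invented for this example and do not correspond to Wilkes's design or to any real machine. Each machine instruction expands into a sequence of primitive micro-operations held in a read-only control store, so the instruction set can be extended by adding table entries rather than hardware:

```python
# Toy microprogrammed control unit (invented example). The datapath knows only
# three primitive micro-operations; everything else is defined by the control store.

def load_a(state, addr):   # micro-op: A <- mem[addr]
    state["A"] = state["mem"][addr]

def add_mem(state, addr):  # micro-op: A <- A + mem[addr]
    state["A"] += state["mem"][addr]

def store_a(state, addr):  # micro-op: mem[addr] <- A
    state["mem"][addr] = state["A"]

# Control store: opcode -> microprogram (a sequence of micro-ops).
# "ADS" is a composite instruction defined purely in microcode.
CONTROL_STORE = {
    "LDA": [load_a],
    "ADD": [add_mem],
    "STA": [store_a],
    "ADS": [add_mem, store_a],  # add then store, with no new hardware needed
}

def execute(program, memory):
    state = {"A": 0, "mem": memory}
    for opcode, operand in program:
        for micro_op in CONTROL_STORE[opcode]:  # step through the microprogram
            micro_op(state, operand)
    return state

# Usage: compute mem[2] = mem[0] + mem[1]
result = execute([("LDA", 0), ("ADD", 1), ("STA", 2)], [5, 7, 0])
print(result["mem"])  # [5, 7, 12]
```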

Magnetic memory

Diagram of a 4×4 plane of magnetic core memory in an X/Y line coincident-
current setup. X and Y are drive lines, S is sense, Z is inhibit. Arrows indicate the
direction of current for writing.

Magnetic drum memories were developed for the US Navy during World War II, with the
work continuing at Engineering Research Associates (ERA) in 1946 and 1947.
ERA, then a part of Univac, included a drum memory in its 1103, announced in
February 1953. The first mass-produced computer, the IBM 650, also announced
in 1953, had about 8.5 kilobytes of drum memory.

Magnetic core memory was patented in 1949, with its first usage demonstrated for the
Whirlwind computer in August 1953. Commercialization followed quickly.
Magnetic core was used in peripherals of the IBM 702 delivered in July 1955, and
later in the 702 itself. The IBM 704 (1955) and the Ferranti Mercury (1957) used
magnetic-core memory. It went on to dominate the field into the 1970s, when it
was replaced with semiconductor memory. Magnetic core peaked in volume about
1975 and declined in usage and market share thereafter.
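As a rough illustration of the coincident-current scheme in the diagram above, the following Python sketch models a core plane in which each drive line carries only half of the switching current. It is a deliberately simplified, assumed model: it ignores the sense (S) and inhibit (Z) wires and treats core switching as a simple current threshold. Only the core where an energized X line and an energized Y line cross receives enough current to change state:

```python
# Simplified model of coincident-current core selection (illustrative only).
FULL, HALF = 1.0, 0.5  # normalized currents; a core switches only at >= FULL

class CorePlane:
    def __init__(self, size):
        self.bits = [[0] * size for _ in range(size)]

    def write(self, x, y, value):
        """Drive column x and row y with half-select currents."""
        for row in range(len(self.bits)):
            for col in range(len(self.bits[row])):
                current = (HALF if col == x else 0.0) + (HALF if row == y else 0.0)
                if current >= FULL:  # only the core at (x, y) sees full current
                    self.bits[row][col] = value

plane = CorePlane(4)
plane.write(2, 1, 1)
for row in plane.bits:
    print(row)  # only row 1, column 2 holds a 1
```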

Defining characteristics of some early digital computers of the 1940s (In the history of
computing hardware)

Name | First operational | Numeral system | Computing mechanism | Programming | Turing complete
Arthur H. Dickinson IBM (US) | Jan 1940 | Decimal | Electronic | Not programmable | No
Joseph Desch NCR (US) | March 1940 | Decimal | Electronic | Not programmable | No
Zuse Z3 (Germany) | May 1941 | Binary floating point | Electromechanical | Program-controlled by punched 35 mm film stock (but no conditional branch) | In theory (1998)
Atanasoff–Berry Computer (US) | 1942 | Binary | Electronic | Not programmable – single purpose | No
Colossus Mark 1 (UK) | February 1944 | Binary | Electronic | Program-controlled by patch cables and switches | No
Harvard Mark I – IBM ASCC (US) | May 1944 | Decimal | Electromechanical | Program-controlled by 24-channel punched paper tape (but no conditional branch) | Debatable
Colossus Mark 2 (UK) | June 1944 | Binary | Electronic | Program-controlled by patch cables and switches[126] | In theory (2011)
Zuse Z4 (Germany) | March 1945 | Binary floating point | Electromechanical | Program-controlled by punched 35 mm film stock | Yes
ENIAC (US) | February 1946 | Decimal | Electronic | Program-controlled by patch cables and switches | Yes
ARC2 (SEC) (UK) | May 1948 | Binary | Electronic | Stored-program in rotating drum memory | Yes
Manchester Baby (UK) | June 1948 | Binary | Electronic | Stored-program in Williams cathode ray tube memory | Yes
Modified ENIAC (US) | September 1948 | Decimal | Electronic | Read-only stored programming mechanism using the Function Tables as program ROM | Yes
Manchester Mark 1 (UK) | April 1949 | Binary | Electronic | Stored-program in Williams cathode ray tube memory and magnetic drum memory | Yes
EDSAC (UK) | May 1949 | Binary | Electronic | Stored-program in mercury delay line memory | Yes
CSIRAC (Australia) | November 1949 | Binary | Electronic | Stored-program in mercury delay line memory | Yes

Transistor computers

Main article: Transistor computer

Further information: List of transistorized computers

A bipolar junction transistor

The bipolar transistor was invented in 1947. From 1955 onward transistors
replaced vacuum tubes in computer designs, giving rise to the "second generation"
of computers. Compared to vacuum tubes, transistors have many advantages: they
are smaller and require less power than vacuum tubes, so they give off less heat. Silicon
junction transistors were much more reliable than vacuum tubes and had longer
service life. Transistorized computers could contain tens of thousands of binary
logic circuits in a relatively compact space. Transistors greatly reduced computers'
size, initial cost, and operating cost. Typically, second-generation computers were
composed of large numbers of printed circuit boards such as the IBM Standard
Modular System, each carrying one to four logic gates or flip-flops.

At the University of Manchester, a team under the leadership of Tom Kilburn
designed and built a machine using the newly developed transistors
instead of valves. Initially the only devices available were germanium point-
contact transistors, less reliable than the valves they replaced but which consumed
far less power. Their first transistorized computer, and the first in the world, was
operational by 1953, and a second version was completed there in April 1955.
The 1955 version used 200 transistors, 1,300 solid-state diodes, and had a power
consumption of 150 watts. However, the machine did make use of valves to
generate its 125 kHz clock waveforms and in the circuitry to read and write on its
magnetic drum memory, so it was not the first completely transistorized
computer.

That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. CADET used 324 point-contact transistors provided by the UK company
Standard Telephones and Cables; 76 junction transistors were used for the first
stage amplifiers for data read from the drum, since point-contact transistors were
too noisy. From August 1956 CADET was offering a regular computing service,
during which it often executed continuous computing runs of 80 hours or more.
Problems with the reliability of early batches of point contact and alloyed junction
transistors meant that the machine's mean time between failures was about 90
minutes, but this improved once the more reliable bipolar junction transistors
became available.

The Manchester University Transistor Computer's design was adopted by the local
engineering firm of Metropolitan-Vickers in their Metrovick 950, the first
commercial transistor computer anywhere. Six Metrovick 950s were built, the first
completed in 1956. They were successfully deployed within various departments
of the company and were in use for about five years. A second-generation
computer, the IBM 1401, captured about one third of the world market. IBM
installed more than ten thousand 1401s between 1960 and 1964.

Transistor peripherals

Transistorized electronics improved not only the CPU (Central Processing Unit),
but also the peripheral devices. The second generation disk data storage units
were able to store tens of millions of letters and digits. Next to the fixed disk
storage units, connected to the CPU via high-speed data transmission, were
removable disk data storage units. A removable disk pack could be easily exchanged
with another pack in a few seconds. Even though the removable disks' capacity was smaller
than that of fixed disks, their interchangeability guaranteed a nearly unlimited quantity
of data close at hand. Magnetic tape provided archival capability for this data, at
a lower cost than disk.

Many second-generation CPUs delegated peripheral device communications to a
secondary processor. For example, while the communication processor controlled
card reading and punching, the main CPU executed calculations and binary
branch instructions. One data bus would bear data between the main CPU and
core memory at the CPU's fetch-execute cycle rate, and other data busses would
typically serve the peripheral devices. On the PDP-1, the core memory's cycle time
was 5 microseconds; consequently most arithmetic instructions took 10
microseconds (100,000 operations per second) because most operations took at
least two memory cycles; one for the instruction, one for the operand data fetch.
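The PDP-1 figures quoted above follow directly from the two-cycles-per-instruction observation, as this small Python check restates (no new data, just the arithmetic):

```python
# Worked check of the PDP-1 timing quoted above.
core_cycle_us = 5              # core memory cycle time, in microseconds
cycles_per_instruction = 2     # one cycle for the instruction fetch, one for the operand
instruction_time_us = core_cycle_us * cycles_per_instruction
ops_per_second = 1_000_000 / instruction_time_us

print(instruction_time_us)     # 10 microseconds per arithmetic instruction
print(int(ops_per_second))     # 100000 operations per second
```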

Transistor supercomputers

The University of Manchester Atlas in January 1963

The early 1960s saw the advent of supercomputing. The Atlas was a joint development
between the University of Manchester, Ferranti, and Plessey, and was first installed
at Manchester University and officially commissioned in 1962 as one of the world's
first supercomputers – considered to be the most powerful computer in the world at
that time. It was said that whenever Atlas went offline half of the United Kingdom's
computer capacity was lost. It was a second-generation machine, using discrete
germanium transistors. Atlas also pioneered the Atlas Supervisor, "considered by
many to be the first recognizable modern operating system".

In the US, a series of computers at Control Data Corporation (CDC) were
designed by Seymour Cray to use innovative designs and parallelism to achieve
superior computational peak performance. The CDC 6600, released in 1964, is
generally considered the first supercomputer. The CDC 6600 outperformed its
predecessor, the IBM 7030 Stretch, by about a factor of 3. With performance of
about 1 megaflops, the CDC 6600 was the world's fastest computer from 1964
to 1969, when it relinquished that status to its successor, the CDC 7600.

Integrated circuit computers

Main article: History of computing hardware (1960s–present) § Third generation

The "third-generation" of digital electronic computers used integrated circuit (IC)


chips as the basis of their logic.

The idea of an integrated circuit was conceived by Geoffrey W. A. Dummer, a radar
scientist working for the Royal Radar Establishment of the Ministry of Defence.

The first working integrated circuits were invented by Jack Kilby at Texas
Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his
initial ideas concerning the integrated circuit in July 1958, successfully
demonstrating the first working integrated example on 12 September 1958.
Kilby's invention was a hybrid integrated circuit (hybrid IC). It had external wire
connections, which made it difficult to mass-produce.

Third generation (integrated circuit) computers first appeared in the early 1960s
in computers developed for government purposes, and then in commercial
computers beginning in the mid-1960s.

Semiconductor memory

Main article: Semiconductor memory

The MOSFET (metal-oxide-semiconductor field-effect transistor, or MOS
transistor) was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in
1959. In addition to data processing, the MOSFET enabled the practical use of
MOS transistors as memory cell storage elements, a function previously served by
magnetic cores. Semiconductor memory, also known as MOS memory, was
cheaper and consumed less power than magnetic-core memory. MOS random-
access memory (RAM), in the form of static RAM (SRAM), was developed by John
Schmidt at Fairchild Semiconductor in 1964.

Microprocessor computers

Main article: History of computing hardware (1960s–present) § Fourth generation

The "fourth-generation" of digital electronic computers used microprocessors as


the basis of their logic. The microprocessor has origins in the MOS integrated
circuit (MOS IC) chip. The MOS IC was first proposed by Mohamed M. Atalla
at Bell Labs in 1960, and then fabricated by Fred Heiman and Steven Hofstein at
RCA in 1962.] Due to rapid MOSFET scaling, MOS IC chips rapidly increased in
complexity at a rate predicted by Moore's law, leading to large-scale integration
(LSI) with hundreds of transistors on a single MOS chip by the late 1960s. The
application of MOS LSI chips to computing was the basis for the first
microprocessors, as engineers began recognizing that a complete computer
processor could be contained on a single MOS LSI chip.
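As a rough illustration of the growth the paragraph describes, the following sketch applies the commonly cited doubling rule; the two-year doubling period and the 100-transistor starting count are assumptions for the example, not figures from the text:

```python
# Hypothetical Moore's-law growth: the count doubles every `doubling_years`.
def transistor_count(initial, years, doubling_years=2):
    return initial * 2 ** (years / doubling_years)

print(transistor_count(100, 10))  # 100 transistors grow to 3200 after a decade
```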

The subject of exactly which device was the first microprocessor is contentious,
partly due to lack of agreement on the exact definition of the term
"microprocessor". The earliest multi-chip microprocessors were the Four-Phase
Systems AL-1 in 1969 and Garrett AiResearch MP944 in 1970, developed with
multiple MOS LSI chips. The first single-chip microprocessor was the Intel 4004,
developed on a single PMOS LSI chip. It was designed and realized by Ted Hoff,
Federico Faggin, Masatoshi Shima and Stanley Mazor at Intel, and released in 1971.
Tadashi Sasaki and Masatoshi Shima at Busicom, a calculator manufacturer, had
the initial insight that the CPU could be a single MOS LSI chip, supplied by Intel.

The die from an Intel 8742, an 8-bit microcontroller that includes a CPU running
at 12 MHz, RAM, EPROM, and I/O.

While the earliest microprocessor ICs literally contained only the processor, i.e.
the central processing unit, of a computer, their progressive development
naturally led to chips containing most or all of the internal electronic parts of a
computer. The Intel 8742 pictured above, for example, is an 8-bit microcontroller
that includes a CPU running at 12 MHz, 128 bytes of RAM, 2048 bytes of EPROM,
and I/O, all in the same chip.

During the 1960s there was considerable overlap between second and third
generation technologies.[169] IBM implemented its IBM Solid Logic Technology
modules in hybrid circuits for the IBM System/360 in 1964. As late as 1975, Sperry
Univac continued the manufacture of second-generation machines such as the
UNIVAC 494.

The Burroughs large systems such as the B5000 were stack machines, which
allowed for simpler programming; these pushdown automata were later also
implemented in minicomputers and microprocessors, which influenced
programming language design (a sketch of the idea follows below).

Minicomputers served as low-cost computer centers for industry, business and
universities.[170] It became possible to simulate analog circuits with the
simulation program with integrated circuit emphasis, or SPICE (1971), on
minicomputers, one of the programs for electronic design automation (EDA).
The microprocessor led to the development of the microcomputer: small,
low-cost computers that could be owned by individuals and small businesses.
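To illustrate why stack machines made programming simpler, here is a minimal Python sketch of a pushdown (stack-based) evaluator. It is purely illustrative: the instruction set is invented and is not the B5000's. Expressions compile naturally to postfix sequences, and evaluation needs nothing more than a stack:

```python
# Minimal stack-machine evaluator (invented instruction set, for illustration).
def run(program):
    stack = []
    for op in program:
        if isinstance(op, (int, float)):
            stack.append(op)                  # operands are simply pushed
        elif op == "+":
            b, a = stack.pop(), stack.pop()   # operators pop their arguments...
            stack.append(a + b)               # ...and push the result back
        elif op == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4 in postfix form: 2 3 + 4 *
print(run([2, 3, "+", 4, "*"]))  # 20
```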

Microcomputers, the first of which appeared in the 1970s, became ubiquitous in
the 1980s and beyond.

Altair 8800

Which specific system is considered the first microcomputer is a matter of
debate, as there were several unique hobbyist systems developed based on the
Intel 4004 and its successor, the Intel 8008. The first commercially available
microcomputer kit was the Intel 8080-based Altair 8800, which was announced
in the January 1975 cover article of Popular Electronics. However, this was an
extremely limited system in its initial stages, having only 256 bytes of DRAM in its
initial package and no input-output except its toggle switches and LED register
display. Despite this, it was initially surprisingly popular, with several hundred
sales in the first year, and demand rapidly outstripped supply. Several early third-
party vendors such as Cromemco and Processor Technology soon began supplying
additional S-100 bus hardware for the Altair 8800.

In April 1975 at the Hannover Fair, Olivetti presented the P6060, the world's first
complete, pre-assembled personal computer system. The central processing unit
consisted of two cards, code-named PUCE1 and PUCE2, and unlike most other
personal computers was built with TTL components rather than a microprocessor.
It had one or two 8" floppy disk drives, a 32-character plasma display, an 80-column
graphical thermal printer, 48 Kbytes of RAM, and the BASIC language. It
weighed 40 kg (88 lbs). As a complete system, this was a significant step from the
Altair, though it never achieved the same success. It was in competition with a
similar product by IBM that had an external floppy disk drive.

Microprocessor-based architectures, with features added from their larger brethren,
are now dominant in most market segments.

A NeXT Computer and its object-oriented development tools and libraries were used
by Tim Berners-Lee and Robert Cailliau at CERN to develop the world's first web
server software, CERN httpd, and also used to write the first web browser,
WorldWideWeb.

An indication of the rapidity of development of this field can be inferred from the
history of the seminal 1947 article by Burks, Goldstine and von Neumann. By the
time that anyone had time to write anything down, it was obsolete. After 1945, others
read John von Neumann's First Draft of a Report on the EDVAC, and immediately started
implementing their own systems. To this day, the rapid pace of development has
continued, worldwide.

A 1966 article in Time predicted that: "By 2000, the machines will be producing so
much that everyone in the U.S. will, in effect, be independently wealthy. How to use
leisure time will be a major problem."
