Increased Capacity To Express Ideas
Factor 1: Readability
• One of the most important criteria for judging a programming language is the ease with which programs can be read and understood.
• Before 1970, software development was largely thought of in terms of writing code.
• The primary positive characteristic of programming languages was efficiency.
• Language constructs were designed more from the point of view of the computer than of the computer users.
• Because ease of maintenance is determined in large part by the readability of programs, readability became an important measure of the quality of programs and programming languages.
• This was an important juncture in the evolution of programming languages.
• There was a distinct crossover from a focus on machine orientation to a focus on human orientation.
• Readability must be considered in the context of the problem domain.

Characteristics that contribute to readability:
• Overall Simplicity
• Orthogonality
• Data Types
• Syntax Design
1. Overall Simplicity – Example 2
• A third potential problem is operator overloading, in which a single operator symbol has more than one meaning.
• Although this is often useful, it can lead to reduced readability if users are allowed to create their own overloading and do not do it sensibly.
• For example, it is clearly acceptable to overload + to use it for both integer and floating-point addition.
• In fact, this overloading simplifies a language by reducing the number of operators.
• However, suppose the programmer defined + used between single-dimensioned array operands to mean the sum of all elements of both arrays (see the first sketch below).
• Because the usual meaning of vector addition is quite different from this, it would make the program more confusing for both the author and the program’s readers.
• An even more extreme example of program confusion would be a user defining + between two vector operands to mean the difference between their respective first elements.

2. Orthogonality
• Orthogonality in a programming language means that a relatively small set of primitive constructs can be combined in a relatively small number of ways to build the control and data structures of the language.
• Furthermore, every possible combination of primitives is legal and meaningful.
• Suppose a language has four primitive data types (integer, float, double, and character) and two type operators (array and pointer).
• If the two type operators can be applied to themselves and the four primitive data types, a large number of data structures can be defined (see the second sketch below).
• The meaning of an orthogonal language feature is independent of the context of its appearance in a program.
• The word orthogonal comes from the mathematical concept of orthogonal vectors, which are independent of each other.
• Orthogonality follows from a symmetry of relationships among primitives.
• A lack of orthogonality leads to exceptions to the rules of the language.
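The overloading discussion above can be made concrete with a short C++ sketch. The Vec type, its member name, and the element-summing operator+ are hypothetical, chosen only to show how a familiar symbol can be given a surprising meaning; this is an illustration of the pitfall, not a recommended design.

    #include <iostream>
    #include <numeric>
    #include <vector>

    // Hypothetical single-dimensioned array type used only for illustration.
    struct Vec {
        std::vector<double> elems;
    };

    // A surprising overload: '+' on two Vec operands yields the sum of all
    // elements of both arrays (a single number), not elementwise addition.
    double operator+(const Vec& a, const Vec& b) {
        return std::accumulate(a.elems.begin(), a.elems.end(), 0.0)
             + std::accumulate(b.elems.begin(), b.elems.end(), 0.0);
    }

    int main() {
        Vec x{{1.0, 2.0, 3.0}};
        Vec y{{4.0, 5.0, 6.0}};
        // A reader who expects vector addition is misled: this prints 21.
        std::cout << x + y << '\n';
    }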
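To make the orthogonality counting argument concrete, the C++ declarations below are one possible rendering (the original example speaks of a hypothetical language with four primitive types; C++ is used here only for illustration). The two type operators, array and pointer, combine freely with the primitive types and with each other.

    int main() {
        int    a[10];      // array of integers
        int*   p;          // pointer to integer
        int*   ap[10];     // array of pointers to integer
        int  (*pa)[10];    // pointer to an array of integers
        int**  pp;         // pointer to pointer to integer
        float  f[10];      // the same operators apply to the other primitive types
        (void)a; (void)p; (void)ap; (void)pa; (void)pp; (void)f;  // silence unused warnings
        return 0;
    }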
Simplicity and Orthogonality
• A smaller number of primitive constructs and a consistent set of rules for combining them is much better than simply having a large number of primitives.
• A programmer can design a solution to a complex problem after learning only a simple set of primitive constructs.
• On the other hand, too much orthogonality can be a detriment to writability.
• Errors in programs can go undetected when nearly any combination of primitives is legal.

Support for Abstraction
• Abstraction means the ability to define and then use complicated structures or operations in ways that allow many of the details to be ignored.
• Abstraction is a key concept in contemporary programming language design.
• This is a reflection of the central role that abstraction plays in modern program design methodologies.
• The degree of abstraction allowed by a programming language and the naturalness of its expression are therefore important to its writability.
• Programming languages can support two distinct categories of abstraction, process and data.
• A simple example of process abstraction is the use of a subprogram to implement a sort algorithm that is required several times in a program (see the sketch below).
• Without the subprogram, the sort code would need to be replicated in all places where it was needed, which would make the program much longer and more tedious to write.
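A minimal C++ sketch of the process-abstraction point above: the sorting details live in one subprogram and are reused wherever sorting is needed instead of being replicated. The function and data names are hypothetical.

    #include <algorithm>
    #include <iostream>
    #include <vector>

    // Process abstraction: the details of sorting are hidden behind one subprogram.
    void sortValues(std::vector<int>& v) {
        std::sort(v.begin(), v.end());
    }

    int main() {
        std::vector<int> scores = {42, 7, 19};
        std::vector<int> ids    = {3, 1, 2};

        // The same abstraction is reused in several places; without it, the
        // sorting code would have to be written out at each call site.
        sortValues(scores);
        sortValues(ids);

        for (int s : scores) std::cout << s << ' ';
        std::cout << '\n';
    }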
Expressivity
• In a language such as APL, it means that there are very powerful operators that allow a great deal of computation to be accomplished with a very small program.
• More commonly, it means that a language has relatively convenient, rather than cumbersome, ways of specifying computations.
• For example, in C, the notation count++ is more convenient and shorter than count = count + 1 (see the sketch after the Reliability list below).
• Also, the and then Boolean operator in Ada is a convenient way of specifying short-circuit evaluation of a Boolean expression.
• The inclusion of the for statement in Java makes writing counting loops easier than with the use of while, which is also possible.
• All of these increase the writability of a language.

Factor 3: Reliability
• A program is said to be reliable if it performs to its specifications under all conditions.
– Type Checking
– Exception Handling
– Aliasing
– Readability and Writability
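A small C++ sketch of the expressivity examples above (the variable names are arbitrary): count++ versus the longer assignment, and a counting loop written with for rather than the equally possible while.

    #include <iostream>

    int main() {
        int count = 0;
        count = count + 1;   // the more cumbersome form
        count++;             // the more convenient, shorter form

        // A counting loop is more direct with 'for' ...
        for (int i = 0; i < 3; i++)
            std::cout << i << ' ';
        std::cout << '\n';

        // ... than with the equivalent 'while', which is also possible.
        int j = 0;
        while (j < 3) {
            std::cout << j << ' ';
            j++;
        }
        std::cout << '\n';
    }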
Type Checking
• It is simply testing for type errors in a given program, either by the compiler or during program execution.
• Type checking is an important factor in language reliability.
• Because run-time type checking is expensive, compile-time type checking is more desirable.
• The earlier errors in programs are detected, the less expensive it is to make the required repairs.
• The design of Java requires checks of the types of nearly all variables and expressions at compile time.
• This virtually eliminates type errors at run time in Java programs.
• One example of how failure to type check, at either compile time or run time, has led to countless program errors is the use of subprogram parameters in the original C language.
• An int type variable could be used as an actual parameter in a call to a function that expected a float type as its formal parameter, and neither the compiler nor the run-time system would detect the inconsistency.

Exception Handling
• The ability of a program to intercept run-time errors (as well as other unusual conditions detectable by the program), take corrective measures, and then continue is an obvious aid to reliability. This language facility is called exception handling (see the sketch after the Aliasing list below).
• Ada, C++, Java, and C# include extensive capabilities for exception handling, but such facilities are practically nonexistent in many widely used languages, including C and Fortran.

Aliasing
• Loosely defined, aliasing is having two or more distinct names that can be used to access the same memory cell.
• It is now widely accepted that aliasing is a dangerous feature in a programming language.
• Most programming languages allow some kind of aliasing—for example, two pointers set to point to the same variable, which is possible in most languages.
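The C++ sketch below (with hypothetical names) illustrates the two reliability features just discussed: an exception intercepted so the program can take corrective action and continue, and aliasing created by two pointers that refer to the same variable.

    #include <iostream>
    #include <stdexcept>

    double divide(double a, double b) {
        if (b == 0.0)
            throw std::runtime_error("division by zero");  // raise an exception
        return a / b;
    }

    int main() {
        // Exception handling: intercept the run-time error, recover, continue.
        try {
            std::cout << divide(1.0, 0.0) << '\n';
        } catch (const std::runtime_error& e) {
            std::cout << "recovered from: " << e.what() << '\n';
        }

        // Aliasing: p and q are distinct names for the same memory cell as x.
        int x = 10;
        int* p = &x;
        int* q = &x;
        *p = 99;                              // changes the value seen through q and x
        std::cout << *q << ' ' << x << '\n';  // prints 99 99
    }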
Chapter 1: IMPLEMENTATION METHODS
• Two of the primary components of a computer are its internal memory and its processor.
• The internal memory is used to store programs and data.
• The processor is a collection of circuits that provides a realization of a set of primitive operations, or machine instructions, such as those for arithmetic and logic operations.
• In most computers, some of these instructions, which are sometimes called macroinstructions, are actually implemented with a set of instructions called microinstructions, which are defined at an even lower level.
• The machine language of the computer is its set of instructions.
• In the absence of other supporting software, its own machine language is the only language that most hardware computers “understand.”
• A language implementation system cannot be the only software on a computer.
• Also required is a large collection of programs, called the operating system, which supplies higher-level primitives than those of the machine language.
Compilation
• Programming languages can be implemented by any of three general
methods.
• At one extreme, programs can be translated into machine language, which
can be executed directly on the computer.
• This method is called a compiler implementation and has the advantage
of very fast program execution, once the translation process is complete.
• Most production implementations of languages, such as C, COBOL, C++,
and Ada, are by compilers.
• The language that a compiler translates is called the source language.
• The process of compilation and program execution takes place in several
phases….
• The lexical analyzer gathers the characters of the source program into lexical units.
• The lexical units of a program are identifiers, special words, operators, and punctuation symbols.
• The syntax analyzer takes the lexical units from the lexical analyzer and uses them to construct hierarchical structures called parse trees (a sketch follows this list).
• These parse trees represent the syntactic structure of the program.
• The intermediate code generator produces a program in a different language, at an intermediate level between the source program and the final output of the compiler: the machine language program.
• Optimization, which improves programs by making them smaller or faster or both, is often an optional part of compilation.
• The code generator translates the optimized intermediate code version of the program into an equivalent machine language program.
• The symbol table serves as a database for the compilation process.
• The primary contents of the symbol table are the type and attribute information of each user-defined name in the program.

• The user and system code together are sometimes called a load module, or executable image.
• The process of collecting system programs and linking them to user programs is called linking and loading, or sometimes just linking.
• It is accomplished by a systems program called a linker.
• The speed of the connection between a computer’s memory and its processor usually determines the speed of the computer, because instructions often can be executed faster than they can be moved to the processor for execution.
• This connection is called the von Neumann bottleneck; it is the primary limiting factor in the speed of von Neumann architecture computers.
• The von Neumann bottleneck has been one of the primary motivations for the research and development of parallel computers.
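As a rough illustration of the first two compilation phases described above, the sketch below (purely illustrative; the token categories are simplified) lists the lexical units a lexical analyzer might gather for the statement count = count + 1; and notes, in comments, the hierarchical structure a syntax analyzer would build from them.

    #include <iostream>
    #include <string>
    #include <vector>

    // Simplified token: a category plus the characters it was gathered from.
    struct Token {
        std::string kind;   // identifier, operator, literal, punctuation, ...
        std::string text;
    };

    int main() {
        // Lexical units of:  count = count + 1;
        std::vector<Token> tokens = {
            {"identifier",  "count"},
            {"operator",    "="},
            {"identifier",  "count"},
            {"operator",    "+"},
            {"literal",     "1"},
            {"punctuation", ";"},
        };

        // A syntax analyzer would arrange these into a parse tree, roughly:
        //   assignment
        //   |-- identifier: count
        //   `-- expression (+)
        //       |-- identifier: count
        //       `-- literal: 1
        for (const Token& t : tokens)
            std::cout << t.kind << ": " << t.text << '\n';
    }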
Pure Interpretation
• Pure interpretation lies at the opposite end (from compilation) of implementation methods.
• With this approach, programs are interpreted by another program called an interpreter, with no translation.
• The interpreter program acts as a software simulation of a machine whose fetch-execute cycle deals with high-level language program statements rather than machine instructions (see the sketch below).
• This software simulation obviously provides a virtual machine for the language.
• Pure interpretation has the advantage of allowing easy implementation of many source-level debugging operations, because all run-time error messages can refer to source-level units.
• It has a serious disadvantage that execution is 10 to 100 times slower than in compiled systems.
• The primary source of this slowness is the decoding of the high-level language statements, which are far more complex than machine language instructions.
• Another disadvantage of pure interpretation is that it often requires more space.
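A toy C++ sketch of the idea above: the interpreter's loop fetches one source-level statement at a time, decodes it, and executes it, playing the role that the hardware fetch-execute cycle plays for machine instructions. The three-word statement form (set/add/print) is invented purely for illustration.

    #include <iostream>
    #include <map>
    #include <sstream>
    #include <string>
    #include <vector>

    int main() {
        // The "program" is kept in source form; nothing is translated in advance.
        std::vector<std::string> program = {
            "set x 5",
            "add x 3",
            "print x",
        };

        std::map<std::string, int> variables;

        // Software fetch-execute cycle over high-level statements.
        for (const std::string& stmt : program) {      // fetch
            std::istringstream in(stmt);
            std::string op, name;
            int value = 0;
            in >> op >> name;                           // decode
            if (op == "set")        { in >> value; variables[name] = value; }
            else if (op == "add")   { in >> value; variables[name] += value; }
            else if (op == "print") {
                std::cout << name << " = " << variables[name] << '\n';   // execute
            }
        }
    }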
Preprocessors
• A preprocessor is a program that processes a program immediately before the program is compiled.
• Preprocessor instructions are embedded in programs.
• The preprocessor is essentially a macro expander.
• Preprocessor instructions are commonly used to specify that the code from another file is to be included.
• Examples
– #include "myLib.h" causes the preprocessor to copy the contents of myLib.h into the program at the position of the #include.
– #define max(A, B) ((A) > (B) ? (A) : (B)) defines a macro that yields the larger of two given expressions.
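A short C++ sketch of the macro example above. The expansion is purely textual, which is why each argument is parenthesized in the definition; the second call also shows a well-known pitfall of textual expansion: an argument with a side effect, such as i++, is evaluated twice when it is the larger operand.

    #include <iostream>

    // Macro expansion is textual substitution performed before compilation.
    #define max(A, B) ((A) > (B) ? (A) : (B))

    int main() {
        int a = 3, b = 7;
        // The preprocessor rewrites this line as ((a) > (b) ? (a) : (b)).
        std::cout << max(a, b) << '\n';        // prints 7

        // Pitfall: i++ appears twice in the expanded text.
        int i = 5;
        int m = max(i++, 0);
        std::cout << m << ' ' << i << '\n';    // prints 6 7
        return 0;
    }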