
System Software (3160715)

New L J Institute of Engineering and Technology, Bodakdev

Branch: CSE
Semester: 6
Subject: System Software
Created By: Prof. Roshni Mandli

IMP Questions

1. Define system software. Give the difference between system software and application software.
System Software:
System software refers to the low-level software that manages and controls a computer's hardware and provides basic services to higher-level software. There are two main types of software: system software and application software. System software includes the programs that are dedicated to managing the computer itself, such as the operating system, file management utilities, and the disk operating system (DOS).
Features of System Software:
System software is closer to the computer system.
System software is generally written in a low-level language.
System software is difficult to design and understand.
System software is fast in working speed.
System software is less interactive for the users in comparison to application software.

System Software | Application Software
System software maintains the system resources and provides the platform on which application software runs. | Application software is built for specific user tasks.
Low-level languages are used to write system software. | High-level languages are used to write application software.
It is general-purpose software. | It is specific-purpose software.
Without system software, the system stops and cannot run. | Without application software, the system still runs.
System software runs when the system is turned on and stops when the system is turned off. | Application software runs as per the user's request.
Example: operating system, etc. | Example: Photoshop, VLC player, etc.

2. Explain different kinds of system software.

1. Operating System
An operating system (OS) is a type of system software that manages a computer's hardware and software resources. It provides common services for computer programs. An OS acts as a link between the software and the hardware. It controls and keeps a record of the execution of all other programs that are present in the computer, including application programs and other system software.
The main functions of operating systems are as follows:
• Resource Management
• Process Management
• Memory Management
• Security
• File Management
• Device Management

2. Programming Language Translator



Programming language translators are programs that translate code written in one programming language into another programming language. Below are examples of programming language translators.
• Compiler: A compiler is a program that translates code written in one language into some other language without changing the meaning of the program. The compiler is also expected to make the target code efficient and optimized in terms of time and space. A compiler performs almost all of the following operations during compilation: pre-processing, lexical analysis, parsing, semantic analysis (syntax-directed translation), conversion of the input program to an intermediate representation, code optimization, and code generation. Examples of compilers include gcc (C compiler), g++ (C++ compiler), javac (Java compiler), etc.
• Interpreter: An interpreter is a computer program that directly executes instructions written in a programming or scripting language. Interpreters do not require the program to be previously compiled into a machine language program. An interpreter translates high-level instructions into an intermediate form, which is then executed. Interpreters start executing quickly because there is no separate compilation stage in which machine instructions are generated. The interpreter translates and executes the program statement by statement until the first error is met, at which point it stops executing; hence debugging is easy. Examples include Ruby, Python, PHP, etc.
• Assembler: An assembler is a program that converts assembly language into machine code. It takes the basic commands and operations and converts them into binary code specific to a type of processor. Assemblers produce executable code, similar to compilers. However, assemblers are simpler, since they only convert low-level code (assembly language) to machine code. Since each assembly language is designed for a specific processor, assembling a program is performed using a simple one-to-one mapping from assembly code to machine code. Compilers, on the other hand, must convert generic high-level source code into machine code for a specific processor.

3. Device Drivers

Device drivers are a class of system software that minimizes the need for system troubleshooting. Internally, the operating system communicates with the hardware elements, and device drivers make it simple to manage and regulate this communication.
To operate the hardware components, the operating system comes with a variety of device drivers. The majority of device drivers, including those for the mouse, keyboard, etc., are pre-installed in the computer system by the computer manufacturers.

4. Firmware

Firmware consists of the operational programs installed on computer motherboards that help the operating system identify and work with Flash, ROM, EPROM, and other memory chips. Managing and controlling all of a device's actions is the main purpose of any firmware software. For its initial installation, it makes use of non-volatile chips.
There are two main types of firmware chips:
• BIOS (Basic Input/Output System) chips.
• UEFI (Unified Extensible Firmware Interface) chips.


5. Utility Software
System software and application software interact through utility software. Utility software is a third-party product created to reduce maintenance problems and detect computer system defects. It is included with your computer's operating system.
Listed below are some particular attributes of utility software:
• It protects users from threats and viruses.
• Programs such as WinRAR and WinZip help in reducing disk usage by compressing files.
• It assists with disk partitioning and functions as a Windows disk management service.
• It makes it easier for users to back up old data and improves system security.
• It operates as a disk defragmenter to organize the scattered files on the drive.
• It aids in the recovery of lost data.

3. Explain the various stages of the life cycle of a source program with a neat diagram.
Whenever we create source code and start evaluating it, the computer only shows the output and errors (if any); we do not see the actual process behind it. This answer explains the exact procedure behind the compilation task and the step-by-step processing of the source code.

High Level Languages:

The source program is written in a high-level language, which uses natural-language elements and makes it easier to create programs. A high-level language is a programming language with a very strong abstraction. It makes the process of developing source code easier, simpler and more understandable. High-level languages are much closer to English and use English-like structure for program coding. Examples of high-level languages are Visual Basic, PHP, Python, Delphi, FORTRAN, COBOL, C, Pascal, C++, LISP, BASIC, etc.

Low Level Languages

Low-level languages are languages which can be directly understood by machines. A low-level language is a programming language with little or no abstraction; such languages are described as being close to the hardware. Examples of low-level languages are machine language, binary language, assembly languages and object code.


Pre-Processors:
A pre-processor is a computer program that manipulates its input data in order to generate output which is ultimately used as input to some other program or compiler. The input of the pre-processor is a source program that may contain macros and file-inclusion directives; its output is a pure high-level-language program in which these have been expanded. A macro is a set of instructions which can be used repeatedly in the program; macro pre-processing is done by the pre-processor. The pre-processor also allows the user to include header files which may be required by the program, known as file inclusion. Example: #define PI 3.14 means that wherever PI is encountered in the program, it is replaced by the value 3.14.
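As a small illustration (a hypothetical snippet, not part of the original notes), the C pre-processor performs both tasks described above before the compiler proper runs:

#include <stdio.h>      /* file inclusion: the contents of stdio.h are pasted in here */
#define PI 3.14         /* macro: every later occurrence of PI is replaced by 3.14    */

int main(void) {
    double r = 2.0;
    double area = PI * r * r;   /* after pre-processing this reads: 3.14 * r * r      */
    printf("area = %f\n", area);
    return 0;
}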
Translators:
A translator is a program that takes a source program as input and converts it into another form as output. Here the translator takes a high-level language as input and converts it into a low-level language. There are mainly three types of translators:
[1] Compilers
[2] Assemblers
[3] Interpreters

[1] Compilers
A compiler reads the whole program at a time and reports errors (if any). It generates intermediate code in order to generate the target code. Once the whole program has been checked, the errors are displayed. Examples of compilers are the Borland compiler and the Turbo C compiler. The target code generated by compilation should be efficient, and the process of compilation itself must be done efficiently.
[2] Assemblers
An assembler is a translator which takes assembly language as input and generates machine language as output. The output of the compiler, which is assembly language, is the input of the assembler. Assembly code is a mnemonic version of machine code: binary codes for operations are replaced by names. Binary language or relocatable machine code is generated by the assembler. An assembler typically uses two passes, where a pass means one complete scan of the input program.

[3] Interpreters
An interpreter performs line-by-line execution of the source code. It takes a single instruction as input, reads the statement, analyzes it and executes it. It shows errors immediately if they occur. An interpreter is machine independent and does not produce object code or intermediate code; it executes the source statements directly. Many languages can be implemented using both compilers and interpreters, such as BASIC, Python, C#, Pascal, Java and Lisp. An example of an interpreter is the UPS Debugger (a built-in C interpreter).
Linkers and Loaders:


A linker combines two or more separate object programs. It combines the target program with other library routines, links the library files, and prepares a single module or file. The linker also resolves external references and thus allows us to create a single program from several files.

A loader is a utility program which takes object code as input and prepares it for execution. It converts the object code into executable form, loads it into memory and initiates the execution process. The tasks done by loaders are mentioned below.

Relocation of an object means allocating load-time addresses and placing the code and data into memory at the proper locations.
Target Program:
The final output of the language processing system is the target program. This is the executable code that can
be run on a computer to perform the desired tasks as specified by the original source program.
In summary, the target program is the end result of the language processing system, representing the
executable code that can be executed on a computer or a specific target platform.
Execution by hardware:
The program continues to execute instructions until it reaches the end or encounters a specific termination
condition. At this point, the target program has completed its execution.
Overall, the execution of a target program involves the coordination of various hardware components, with
the CPU playing a central role in fetching, decoding, and executing instructions. The program interacts with
the computer's memory, registers, and other resources to perform the specified computations and produce the
desired output.

4. Compare user-centric view and system-centric view of system software.

Aspect | User-Centric View | System-Centric View
Focus | Primarily focuses on user experience and interaction. | Primarily focuses on system efficiency and behaviour.
Emphasis | Prioritizes user needs, preferences, and convenience. | Prioritizes system functionality, performance, etc.
Concern | Addresses how the software serves the end-user. | Addresses how the software operates within the system.
Decision Making | The user's perspective influences design decisions. | System requirements dictate design and implementation.
Examples | Graphical User Interfaces (GUI), User Experience (UX). | Operating System Kernel, Device Drivers.

5. Define Language Processing. List various phases of Language Processor (compiler). Explain each phase in detail.
Language Processing:
The computer is an intelligent combination of software and hardware. Hardware is simply a piece of mechanical equipment, and its functions are controlled by the relevant software. The hardware understands instructions as electronic charges, which correspond to the binary language of software programming. The binary language has only 0s and 1s, so hardware code would have to be written as a series of 0s and 1s. Writing such code would be an inconvenient and complicated task for programmers, so we write programs in a high-level language, which is convenient for us to comprehend and memorize. These programs are then fed into a series of tools and operating system (OS) components to obtain the desired code that can be used by the machine. This is known as a language processing system.

Phases of Language Processor:

A compiler is a software program that converts high-level source code written in a programming language into low-level machine code that can be executed by the computer hardware. The process of converting the source code into machine code involves several phases or stages, which are collectively known as the phases of a compiler. The typical phases of a compiler are:


1. Lexical Analysis: The first phase of a compiler is lexical analysis, also known as scanning. This phase reads the source code and breaks it into a stream of tokens, which are the basic units of the programming language. The tokens are then passed on to the next phase for further processing.
Example:
Consider the following code:
i: integer;
a, b: real;
a = b + i;

The statement a = b + i is represented as a string of tokens:

a      =      b      +      i
Id#1   Op#1   Id#2   Op#2   Id#3

As another example, consider the printf statement below.

printf("Hello");

There are 5 valid tokens in this printf statement: printf, (, "Hello", ), and ;.
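A minimal scanner sketch in C (illustrative only, not the course's code) that splits the statement a = b + i into identifier and operator tokens:

#include <stdio.h>
#include <ctype.h>

int main(void) {
    const char *src = "a = b + i";
    int id_no = 0, op_no = 0;

    for (const char *p = src; *p != '\0'; p++) {
        if (isspace((unsigned char)*p))
            continue;                              /* white space separates tokens            */
        if (isalpha((unsigned char)*p)) {
            const char *start = p;                 /* identifier: letter, then letters/digits */
            while (isalnum((unsigned char)p[1]))
                p++;
            printf("Id#%d -> %.*s\n", ++id_no, (int)(p - start + 1), start);
        } else {
            printf("Op#%d -> %c\n", ++op_no, *p);  /* everything else: single-char operator   */
        }
    }
    return 0;
}

Running it prints Id#1 -> a, Op#1 -> =, Id#2 -> b, Op#2 -> +, Id#3 -> i, matching the token string above.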

2. Syntax Analysis: The second phase of a compiler is syntax analysis, also known as parsing. This phase takes the stream of tokens generated by the lexical analysis phase and checks whether they conform to the grammar of the programming language. The output of this phase is usually an Abstract Syntax Tree (AST).

Example:

The statement a = b + i can be represented in tree form as:
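        =
       / \
      a   +
         / \
        b   i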


3. Semantic Analysis: The third phase of a compiler is semantic analysis. This phase checks whether the
code is semantically correct, i.e., whether it conforms to the language’s type system and other semantic
rules. In this stage, the compiler checks the meaning of the source code to ensure that it makes sense.
The compiler performs type checking, which ensures that variables are used correctly and that
operations are performed on compatible data types. The compiler also checks for other semantic errors,
such as undeclared variables and incorrect function calls.
Example:

While evaluating the expression a = b + i, the type of b is real and the type of i is int, so the type of i is converted to real (shown as i* in the annotated tree).

4. Intermediate Code Generation: The fourth phase of a compiler is intermediate code generation. This phase generates an intermediate representation of the source code that can be easily translated into machine code.
Intermediate code can be either language-specific (e.g., bytecode for Java) or language-independent (e.g., three-address code).

Three-Address Code:
Two important properties of intermediate code are:
1. It should be easy to produce.
2. It should be easy to translate into the target program.
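A small illustration of three-address code for the statement a = b + i used above, written here as C statements so that each line has at most one operator (an assumed rendering, not the course's exact notation):

#include <stdio.h>

int main(void) {
    int    i = 3;
    double b = 2.5, a;

    /* three-address form of  a = b + i : explicit temporaries, one operator per statement */
    double t1 = (double) i;   /* t1 := itor(i)  -- the int-to-real conversion from semantic analysis */
    double t2 = b + t1;       /* t2 := b + t1                                                        */
    a = t2;                   /* a  := t2                                                            */

    printf("a = %f\n", a);
    return 0;
}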

5. Optimization: The fifth phase of a compiler is optimization. This phase applies various optimization techniques to the intermediate code to improve the performance of the generated machine code.
The compiler's optimization process should meet the following objectives:
• The optimization must be correct; it must not, in any way, change the meaning of the program.
• Optimization should increase the speed and performance of the program.
• The compilation time must be kept reasonable.
• The optimization process should not delay the overall compiling process.
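A tiny hand-worked illustration (a hypothetical example, not from the original notes) of two classic optimizations that satisfy these objectives, showing the code as a programmer writes it and the form a compiler might transform it into:

#include <stdio.h>

int main(void) {
    double r = 1.5;
    int    i = 7;

    /* as written by the programmer */
    double c1 = 2.0 * 3.14 * r;
    int    o1 = i * 2;

    /* as the optimizer might rewrite it, with the meaning unchanged */
    double c2 = 6.28 * r;    /* constant folding: 2.0 * 3.14 evaluated at compile time */
    int    o2 = i << 1;      /* strength reduction: multiply by 2 becomes a shift      */

    printf("%f %f %d %d\n", c1, c2, o1, o2);
    return 0;
}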


6. Code Generation: The final phase of a compiler is code generation. This phase takes the optimized
intermediate code and generates the actual machine code that can be executed by the target hardware.

Symbol Table: It is a data structure being used and maintained by the compiler, consisting of all the
identifier’s names along with their types. It helps the compiler to function smoothly by finding the
identifiers quickly.
Error Handler: Error handling in compiler design refers to the methods used to deal with and recover
from compilation mistakes. These techniques are necessary for a compiler to generate trustworthy machine
code from high-level source code.

6. Define following terms: 1) OPTAB 2) SYMTAB 3) LITTAB 4) POOLTAB

OPTAB:
OPTAB is a data structure used by the assembler to store information about machine instructions or operations. It contains the mnemonic codes for instructions along with their corresponding opcodes. Opcodes are unique codes assigned to each instruction for execution by the processor. During assembly, the assembler looks up instructions in the OPTAB to retrieve their opcodes for generating machine code.
Key points:
• A table of mnemonic opcodes and related information.
• An OPTAB entry contains the fields mnemonic opcode, class and mnemonic info.
• The class field indicates whether the opcode belongs to an imperative statement (IS), a declaration statement (DS), or an assembler directive (AD).
• The opcode table is as follows:


SYMTAB:
SYMTAB is a data structure used by the assembler to store information about symbols (labels, variables, constants) defined in the source program. It associates each symbol with its corresponding address or value in memory. SYMTAB facilitates the resolution of symbols and their addresses during the assembly process.
• A SYMTAB entry contains the fields symbol name, address and length. Example of a symbol table:

Symbol | Address | Length
LOOP   | 202     | 1
NEXT   | 214     | 1
LAST   | 216     | 1
A      | 217     | 1
BACK   | 202     | 1
B      | 218     | 1


LITTAB:

LITTAB is a data structure used by the assembler to store information about literals (constants) used in the source program. It maintains a list of literals along with their addresses. LITTAB assists in assigning addresses to literals and replacing their references in the generated machine code.

Key points:
• A table of literals used in the program.
• A LITTAB entry contains the fields literal and address.
• The first pass uses LITTAB to collect all literals used in the program. For example:

Literal no | Literal | Address
1          | ='5'    |
2          | ='1'    |
3          | ='1'    |

POOLTAB (Pool Table):


POOLTAB is a data structure used by the assembler to maintain information about literal pools. It keeps track
of the addresses of the first literals in each literal pool.
Key points:
• Awareness of different literal pools is maintained using the auxiliary table POOLTAB.
• This table contains the literal number of the starting literal of each literal pool.
• At any stage, the current literal pool is the last pool in the LITTAB.
• On encountering an LTORG statement (or the END statement), literals in the current pool are allocated
addresses starting with the current value in LC and LC is appropriately incremented.

For example:

Literal no
#1
#3


7. Explain the tasks performed by the PASS-1 and PASS-2 assembler?

An assembler is a program for converting instructions written in low-level assembly code into relocatable machine code, and for generating the associated information for the loader.

It generates instructions by evaluating the mnemonics (symbols) in the operation field and finding the values of symbols and literals to produce machine code. If the assembler does all this work in one scan, it is called a single-pass assembler; if it does it in multiple scans, it is called a multi-pass assembler.

A two-pass assembler divides these tasks between the two passes:

• Pass-1:
1. Define symbols and literals and remember them in the symbol table and literal table respectively.
2. Keep track of the location counter (LC).
3. Process pseudo-operations.
4. Assign memory addresses to the variables so that the source code can be translated into machine code.
• Pass-2:
1. Generate object code by converting symbolic op-codes into their respective numeric op-codes.
2. Generate data for literals and look up the values of symbols.
3. Read the source code a second time.
4. Translate the source code into object code.

First, we will take a small assembly language program to understand the working of the respective passes. Assembly language statement format:

Let's take a look at how this program works:

1. START: This instruction starts the execution of the program from location 200, and the label with START provides a name for the program (JOHN is the name of the program).
2. MOVER: It moves the content of the literal (='3') into register operand R1.
3. MOVEM: It moves the content of the register into memory operand (X).
4. MOVER: It again moves the content of the literal (='2') into register operand R2, and its label is specified as L1.
5. LTORG: It assigns addresses to literals (the current LC value).
6. DS (Data Space): It assigns a data space of 1 to symbol X.
7. END: It finishes the program execution.

Working of Pass-1:

Define the symbol and literal tables with their addresses. Note: literal addresses are assigned at LTORG or END.
Step-1: START 200
(here no symbol or literal is found, so both tables are empty)

Step-2: MOVER R1, =’3′ 200


( =’3′ is a literal so literal table is made)

Literal Address

=’3′ –––

Step-3: MOVEM R1, X 201

X is a symbol referred to prior to its declaration, so it is stored in the symbol table with a blank address field.


Symbol Address

X –––

Step-4: L1 MOVER R2, =’2′ 202


L1 is a label and =’2′ is a literal so store them in respective tables

Symbol Address

X –––

L1 202

Literal Address

=’3′ –––

=’2′ –––

Step-5: LTORG 203


Assign address to first literal specified by LC value, i.e., 203

Literal Address

=’3′ 203

=’2′ –––

Step-6: X DS 1 204
This is a data declaration statement, i.e., X is assigned a data space of 1. But X is a symbol which was referred to earlier in step 3 and is defined only here in step 6. This condition is called the Forward Reference Problem, where a variable is referred to prior to its declaration; it can be solved by back-patching. So the assembler will now assign X the address specified by the LC value of the current step.

Symbol Address

X 204

L1 202


Step-7: END 205

The program ends here, and the remaining literal gets the address specified by the LC value of the END statement. This completes the symbol and literal tables made by pass 1 of the assembler.

The tables generated by pass 1, along with their LC values, now go to pass 2 of the assembler for further processing of pseudo-op-codes and machine op-codes.
further processing of pseudo-opcodes and machine op-codes.
Working of Pass-2:

Pass-2 of the assembler generates machine code by converting symbolic machine op-codes into their respective bit configurations (machine-understandable form). It stores all machine op-codes in the MOT (machine op-code table) with the symbolic code, their length and their bit configuration. In this pass the assembler generates the object code by converting symbolic op-codes into numeric op-codes, generates data for literals, and looks up the values of symbols to complete the translation of the source code into object code.
8.


9. Explain the following. 1. ORIGIN 2. EQU 3. LTORG


10. Differentiate one pass and two pass assembler. Explain how forward references are handled in two pass assembler.

Aspect | One Pass Assembler | Two Pass Assembler
Number of Passes | Performs assembly in a single pass over the source code. | Performs assembly in two passes over the source code: the first pass builds the symbol table, and the second pass generates the machine code.
Forward References | Cannot handle forward references efficiently because it processes instructions in a single pass. | Can handle forward references efficiently, since the symbol table built in the first pass allows labels to be resolved in the second pass.
Memory Requirement | Requires less memory, as it processes the source code in a single pass. | Requires more memory, as it maintains the symbol table and intermediate data structures between passes.
Speed | Generally faster due to a single pass over the source code. | May be slower due to the additional pass and symbol table construction.

Forward reference:

• A forward reference occurs when a symbol is referenced before it is defined within the program. This can lead to errors during the assembly process, because the assembler does not know the address or value associated with the symbol at the time of the reference.
• A forward reference of a program entity is a reference to the entity in some statement of the program that occurs before the statement containing the definition or declaration of the entity.

Example: Consider the following assembly code:

(Note: ';' introduces a comment in assembly language)

LOOP  LOAD  R1, NEXT   ; Load the value at NEXT into R1
      ...
      ...
NEXT  DC    5          ; Define the value 5 at NEXT

In this example, the assembler encounters the symbol NEXT before it is defined. When it tries to resolve the reference to NEXT in the LOAD instruction, it does not yet know the address of NEXT, leading to a forward reference problem.

Back-patching:

• Back-patching is a technique used to resolve forward references by patching, or updating, the addresses or values in the instructions or data items that refer to a symbol, once its value or address becomes known.
• The operand field of an instruction containing a forward reference is initially left blank.
• The assembler keeps track of such forward references and, once it encounters the definition of the symbol, it updates the instructions or data items that referred to that symbol with the correct address or value. This process is called back-patching.
• It builds a Table of Incomplete Instructions (TII) to record information about instructions whose operand fields were left blank.
• Each entry in TII is a pair of the form (instruction address, symbol).
• When the END statement is processed, the symbol table contains the addresses of all symbols defined in the source program.
• TII then contains information describing all forward references.
• The assembler can now process each entry in TII to complete the concerned instruction.

For example, for the code fragment above, TII would contain an entry pairing the address of the LOAD instruction with the symbol NEXT; when NEXT is later defined, that entry is used to patch the operand field of the LOAD instruction.

So, in summary, forward references occur when symbols are referenced before they are defined, and back-patching is used to resolve these references by updating the instructions or data items once the symbols' values or addresses are known.
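A rough C sketch of back-patching with a TII (hypothetical structures and addresses, not the course's exact code):

#include <stdio.h>
#include <string.h>

#define MAX 100

typedef struct { char symbol[8]; int address; } SymtabEntry;    /* filled during pass 1             */
typedef struct { int instr_index; char symbol[8]; } TiiEntry;   /* (incomplete instruction, symbol) */

static SymtabEntry symtab[MAX]; static int n_sym = 0;
static TiiEntry    tii[MAX];    static int n_tii = 0;
static int         operand[MAX];              /* operand fields of the generated instructions */

static int lookup(const char *s) {            /* return a symbol's address, or -1 if unknown  */
    for (int i = 0; i < n_sym; i++)
        if (strcmp(symtab[i].symbol, s) == 0)
            return symtab[i].address;
    return -1;
}

int main(void) {
    /* forward reference: instruction 1 uses NEXT before NEXT is defined,
       so its operand field is left blank and an entry is recorded in TII */
    tii[n_tii].instr_index = 1;
    strcpy(tii[n_tii].symbol, "NEXT");
    n_tii++;

    /* later, NEXT is defined and entered into the symbol table */
    strcpy(symtab[n_sym].symbol, "NEXT");
    symtab[n_sym].address = 214;
    n_sym++;

    /* at END: back-patch every incomplete instruction recorded in TII */
    for (int i = 0; i < n_tii; i++)
        operand[tii[i].instr_index] = lookup(tii[i].symbol);

    printf("operand field of instruction 1 patched to %d\n", operand[1]);
    return 0;
}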

11. An assembly program contains the statement


X EQU Y+25
Indicate how the EQU statement can be processed if
a) Y is a back reference
b) Y is a forward reference.

a) If Y is a back reference, it means that Y has already been defined before the statement where X is defined.
In this case, the assembler can easily substitute the value of Y into the expression for X during the assembly
process.
For example:

Y    DC   F'100'
X    EQU  Y+25

In this scenario, the value of Y is known during the assembly process, so the assembler can directly substitute it into the expression for X.
b) If Y is a forward reference, it means that Y is defined after the statement where X is defined. In this case, the
assembler cannot immediately resolve the value of Y during the assembly process. Instead, it creates a
placeholder for the forward reference and postpones the calculation of X until it encounters the definition of
Y later in the program.
For example:
X EQU Y+25
...
Y DC F'100'


Here, the assembler notes that Y is referenced in the expression for X but doesn't yet know its value. It records
the forward reference and continues assembling the program. Once it encounters the definition of Y later in
the program, it revisits the X statement and substitutes the actual value of Y into the expression to calculate
the final value of X.

12. Explain macro definition and call in detail.

1. Macro definition:
To define a macro, you choose a name (abbreviation) that suits the macro's purpose. The macro definition starts with the MACRO keyword, followed by the macro name and its formal parameters, then the macro body, and it ends with the MEND keyword. The following shows the general format of a macro definition:
Syntax:
MACRO
macro_name   formal parameters
   body
MEND

The body of the macro consists of the macro statements that define either operations or data. The body of a macro ends with the MEND keyword.

MACRO – This keyword identifies the beginning of the macro definition.
MEND – This keyword identifies the end of a macro definition.
Parameters – These are the parameters passed to the macro. Every parameter has to begin with '&'.
Body – The body includes all the statements that the processor will substitute in response to a macro call in the program.

Macro Calls (Macro Invocation):

A macro invocation is a statement in a program that gives the name of a macro definition to be invoked. Along with the macro name it also provides the actual parameters (arguments) to be used while expanding the macro.

Syntax:
macro_name   actual parameters

Example (illustrative):

MACRO
STORE    &ARG
MOVEM    R1, &ARG
MEND

A macro call such as  STORE X  is replaced during expansion by the statement  MOVEM R1, X.

13. List and explain all the tasks involved in macro expansion.
Tasks Involved in Macro Expansion:
Macro Call Detection:
• The assembler scans through the source code to identify macro calls.
• It recognizes macro calls by comparing them with the names of defined macros.
Macro Expansion:
• When a macro call is detected, the assembler replaces it with the corresponding macro definition.
• The macro definition includes all the instructions and statements defined within the macro.

Parameter Substitution:
• During macro expansion, the actual parameters passed in the macro call are substituted into the macro definition.
• Formal parameters within the macro definition are replaced with the corresponding actual parameters.

Recursive Expansion:
• If the expanded macro contains nested macro calls, the expansion process continues recursively.
• Nested macro calls are expanded in the same manner as the initial macro call.
Error Handling:
• The assembler checks for any errors during macro expansion, such as undefined macros or incorrect parameter counts.
• It generates appropriate error messages to notify the user about any issues encountered during expansion.

Output Generation:
• After successful expansion, the assembler generates the final output code.
• The expanded code may be written to an output file or stored in memory for further processing.

Symbol Table Management:
• As macros are expanded, symbol table entries may need to be updated to reflect any new labels or symbols introduced during expansion.
• The assembler ensures that the symbol table remains accurate and up-to-date throughout the expansion process.

Explanation:

Macro expansion involves several tasks, starting with the detection of macro calls in the source code. Once a
macro call is identified, the assembler expands it by replacing it with the corresponding macro definition.
This expansion includes substituting actual parameters into the macro definition and handling any nested
macro calls. The assembler also performs error handling to catch any issues that may arise during expansion
and ensures that the symbol table is properly managed throughout the process. Finally, the assembler generates
the final output code containing the expanded macros.

14. Define a macro taking starting location and N as parameters to find summation of all N numbers
stored at location starting from starting location. The result is to be stored at starting location.


15. What is the difference between Keyword parameters and positional parameters?

Positional Parameters | Keyword Parameters
Parameters are identified by their position in the macro definition. | Parameters are identified by their names, known as keywords.
The order of parameters in the macro call must match the order in the macro definition. | Parameters can be specified in any order in the macro call, using their corresponding keywords.
Example: MACRO ADD &A, &B, where &A is the first positional parameter and &B is the second. | Example: MACRO ADD &A=, &B=, where &A and &B are keyword parameters.
Less flexible, as changing the order of parameters requires modifying all macro calls. | More flexible, as parameters can be specified independently of their position, simplifying macro calls and enhancing readability.
Suitable for simple macros with a fixed parameter order. | Suitable for complex macros with multiple parameters or optional arguments.

(Note: the example from the previous question can also be written here.)

16. Compare and contrast the properties of macros and subroutines with respect to the following criteria.
1. Code space requirement
2. Execution speed
3. Processing requirement by assembler
4. Flexibility

Criteria | Macros | Subroutines
Code space requirement | Typically larger due to in-line code expansion, especially for repetitive code segments. | Generally smaller, as the subroutine code is centralized and reused from multiple locations.
Execution speed | Faster, as there is no overhead of calling and returning from a subroutine. | Slower, due to the overhead of calling and returning.
Processing requirement by assembler | Requires additional processing during assembly to expand macro calls. | Straightforward assembly process.
Flexibility | More flexible, as it allows parameterized code expansion and conditional assembly. | Less flexible, as it follows a fixed execution path and cannot be conditionally assembled.

17. Explain design of macro preprocessor.

Macro preprocessors are vital for processing all programs that contain macro definitions and/or calls. Language translators such as assemblers and compilers cannot directly generate the target code from programs containing macro definitions and calls. Therefore, language processors such as assemblers and compilers have these programs preprocessed by a macro preprocessor. A macro preprocessor accepts an assembly program with macro definitions and calls as its input and converts it into an equivalent expanded assembly program with no macro definitions and calls. The output of the macro preprocessor is then passed on to an assembler to generate the target object program.


18. Draw a flowchart and explain a simple one pass macro processor.


19. What is program relocation? How is it performed?


Program Relocation:
Program relocation refers to the process of adjusting the memory addresses of a program during the loading or execution phase. It is necessary when a program is loaded into memory at a different address than the one it was originally compiled for. Relocation allows the program to run correctly regardless of its actual memory location.

Modification Record for Relocation in Assembler:

In assembly language, a modification record (also known as a mod record) is used to specify the changes that need to be made to the program's object code during relocation. It contains information about the memory locations that need to be modified and the values to be inserted at those locations.

The modification record consists of three parts:


1. The starting address of the memory location to be modified.
2. The length of the memory location to be modified.
3. The value to be inserted at the modified memory location.
Example
Let's consider the following assembly code snippet:

        START   1000
        LDA     VALUE
        ADD     VALUE2
        STA     RESULT
        END     START
VALUE   DC      5
In this example, the program starts at memory address 1000. The LDA, ADD, and STA instructions refer to the labels VALUE, VALUE2, and RESULT, respectively. These labels represent memory locations that need to be modified during relocation.

During the assembly process, the assembler generates a modification record for each label used in the program. For example, the modification record for the LDA instruction would be:

M1003+01

Here, M indicates a modification record, 1003 is the starting address of the memory location to be modified, and +01 is the length of the memory location. The assembler will replace the VALUE label with the actual memory address of the VALUE variable during the relocation process.

In summary, program relocation involves adjusting the memory addresses of a program, and modification records are used in assembly language to specify the changes that need to be made to the object code during relocation.

20. What is overlay? Explain how the linking of an overlay structured program is performed.


Overlay is a technique used to run a program that is bigger than the physical memory by keeping in memory only those instructions and data that are needed at any given time. The program is divided into modules in such a way that not all modules need to be in memory at the same time.
Overlays manage memory efficiently by overlaying a region of memory first with one part of the program and later with another.
The idea behind overlays is to load only the necessary parts of a program into memory at a given time, freeing up memory for other tasks. The unused portions of the program are kept on disk or other storage and are loaded into memory as needed. This allows programs to be larger than the available memory yet still run smoothly.
The concept rests on the observation that a running process does not use the complete program at the same time, only some part of it. Whatever part is required is loaded; once that part is done, it is unloaded (pulled back) and the next required part is loaded and run.
Formally:
"The process of transferring a block of program code or other data into internal memory, replacing what is already stored."
If the size of the program is larger than even the biggest memory partition, overlays should be used.

For example:


linking of overlay structured program.


• An overlay is part of a program (or software package) which has the same load origin as some
other part of the program.
• Overlay is used to reduce the main memory requirement of a program.
Overlay structured program
• We refer to a program containing overlays as an overlay structured program. Such a program
consists of
o A permanently resident portion, called the root.
o A set of overlays.
• Execution of an overlay structured program proceeds as follows:
• To start with, the root is loaded in memory and given control for the purpose of execution.
• Other overlays are loaded as and when needed.
• Note that the loading of an overlay overwrites a previously loaded overlay with the same load
origin.
• This reduces the memory requirement of a program.
• It also makes it possible to execute programs whose size exceeds the amount of memory which
can be allocated to them.


• The overlay structure of a program is designed by identifying mutually exclusive modules, that is, modules which do not call each other.
• Such modules do not need to reside simultaneously in memory.

Execution of an overlay structured program

• For linking and execution of an overlay structured program in MS DOS, the linker produces a single executable file as output, which contains two provisions to support overlays.
• First, an overlay manager module is included in the executable file. This module is responsible for loading the overlays when needed.
• Second, all calls that cross overlay boundaries are replaced by an interrupt-producing instruction.
• To start with, the overlay manager receives control and loads the root.
• A procedure call which crosses overlay boundaries leads to an interrupt.
• This interrupt is processed by the overlay manager, and the appropriate overlay is loaded into memory.
• When each overlay is structured into a separate binary program, as in IBM mainframe systems, a call which crosses overlay boundaries leads to an interrupt which is attended to by the OS kernel.
• Control is then transferred to the OS loader to load the appropriate binary program.

21. Write and explain an algorithm for first pass of the Linker program.
First pass of the Linker program: Relocation

• The linker uses an area of memory called the work area for constructing the binary program.
• It loads the machine language program found in the program component of an object module into the work area and relocates the address-sensitive instructions in it by processing entries of the RELOCTAB.
• For each RELOCTAB entry, the linker determines the address of the word in the work area that contains the address-sensitive instruction and relocates it.
• The details of the address computation depend on whether the linker loads and relocates one object module at a time, or loads all object modules that are to be linked together into the work area before performing relocation.

Algorithm: Program Relocation

1. program_linked_origin := <link origin> from the linker command;
2. For each object module mentioned in the linker command
   (a) t_origin := translated origin of the object module;
       OM_size  := size of the object module;
   (b) relocation_factor := program_linked_origin - t_origin;
   (c) Read the machine language program contained in the program component of the object module into the work area.
   (d) Read the RELOCTAB of the object module.
   (e) For each entry in RELOCTAB
       i.   translated_address := address found in the RELOCTAB entry;
       ii.  address_in_work_area := address of work_area + translated_address - t_origin;
       iii. Add relocation_factor to the operand address found in the word that has the address address_in_work_area.
   (f) program_linked_origin := program_linked_origin + OM_size;

22. With algorithm and example, explain how relocation is performed by linker?

• Program relocation is the process of modifying the addresses used in the address-sensitive instructions of a program such that the program can execute correctly from the designated area of memory.
• If linked origin ≠ translated origin, relocation must be performed by the linker.
• If load origin ≠ linked origin, relocation must be performed by the loader.
• Let AA be the set of absolute addresses (instruction or data addresses) used in the instructions of a program P.
• AA ≠ φ implies that program P assumes its instructions and data to occupy memory words with specific addresses.
• Such a program, called an address-sensitive program, contains one or more of the following:
  o An address-sensitive instruction: an instruction which uses an address αi ∈ AA.
  o An address constant: a data word which contains an address αi ∈ AA.

• The linker uses an area of memory called the work area for constructing the binary program.
• It loads the machine language program found in the program component of an object module into the work area and relocates the address-sensitive instructions in it by processing entries of the RELOCTAB.
• For each RELOCTAB entry, the linker determines the address of the word in the work area that contains the address-sensitive instruction and relocates it.
• The details of the address computation depend on whether the linker loads and relocates one object module at a time, or loads all object modules that are to be linked together into the work area before performing relocation.

Algorithm: Program Relocation

1. program_linked_origin := <link origin> from the linker command;
2. For each object module mentioned in the linker command
   (a) t_origin := translated origin of the object module;
       OM_size  := size of the object module;
   (b) relocation_factor := program_linked_origin - t_origin;
   (c) Read the machine language program contained in the program component of the object module into the work area.
   (d) Read the RELOCTAB of the object module.
   (e) For each entry in RELOCTAB
       i.   translated_address := address found in the RELOCTAB entry;
       ii.  address_in_work_area := address of work_area + translated_address - t_origin;
       iii. Add relocation_factor to the operand address found in the word that has the address address_in_work_area.
   (f) program_linked_origin := program_linked_origin + OM_size;
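A compact C sketch of this relocation loop (the flat one-word-per-int memory model and the field names are assumptions made for illustration):

#include <stdio.h>

#define WORK_SIZE 1000

static int work_area[WORK_SIZE];        /* the linker's work area                     */

typedef struct {
    int t_origin;                       /* translated origin of the object module     */
    int om_size;                        /* size of the object module                  */
    int reloctab[10];                   /* translated addresses of address-sensitive  */
    int n_reloc;                        /*   words (the RELOCTAB)                     */
} ObjectModule;

void relocate(const ObjectModule *om, int program_linked_origin) {
    int relocation_factor = program_linked_origin - om->t_origin;
    for (int i = 0; i < om->n_reloc; i++) {
        int translated_address   = om->reloctab[i];
        int address_in_work_area = translated_address - om->t_origin;   /* offset into work_area */
        work_area[address_in_work_area] += relocation_factor;           /* patch the operand     */
    }
}

int main(void) {
    /* one module translated at origin 500, to be linked at origin 900 */
    ObjectModule om = { .t_origin = 500, .om_size = 4, .reloctab = { 501 }, .n_reloc = 1 };
    work_area[1] = 540;                 /* the word at translated address 501 holds operand 540 */
    relocate(&om, 900);
    printf("patched operand = %d\n", work_area[1]);   /* prints 940 = 540 + 400 */
    return 0;
}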


23. Explain compile-and-go loaders in brief.

In this scheme, an assembler (translator) is always present in memory along with the assembled machine instructions of the source program; the assembled machine instructions are placed directly into their assigned memory locations.

Working:
In this scheme, the source code goes into the translator line by line, and each translated line is loaded directly into memory. In other words, chunks of source code are translated and executed directly, so no proper object code is produced. Because of that, if the user runs the same source program again, every line of code is translated by the translator again; re-translation happens every time.

The source program goes through the translator (compiler/assembler), which consumes one part of the memory, and another part of the memory is consumed by the assembler itself. The source program does not need the assembler after translation, but it remains in memory, so this is a waste of memory.
Advantages:
1. It is very simple to implement.
2. The translator is enough to do the task, no subroutines are needed.
3. It is the simplest loading scheme.
4. Improved performance: The use of a compiler and loader can result in faster and more efficient code
execution. This is because the compiler can optimize the code during the compilation process, and the
loader can perform certain memory-related optimizations during program loading.
5. Portability: A compiler and loader can help make software more portable by allowing the same source
code to be compiled and loaded on different hardware platforms and operating systems.
6. Security: The loader can perform various security checks during program loading to ensure that the
program does not have any malicious code. This can help prevent security vulnerabilities and protect
the user’s data.
7. Ease of use: Compilers and loaders can automate many aspects of the software development process,
making it easier and faster to develop, test, and deploy software.
8. Flexibility: The use of a compiler and loader can provide developers with greater flexibility in terms of
the programming languages and tools they can use.
Disadvantages:
1. The assembler remains in memory even when it is not being used, so memory is wasted.
2. Whenever the source program is run, it is translated again, so re-translation happens every time.
3. It is difficult to produce an orderly modular program.
4. It is difficult to handle multiple segments, for example if the source program modules are written in different languages.

24. Differentiate Absolute loader and Direct linking loader.

Criteria | Absolute Loader | Direct Linking Loader
Memory Addressing | Uses absolute memory addresses for program execution. | Uses relative or symbolic addresses.
Relocation | Does not support relocation of programs. | Supports relocation of programs.
Address Translation | Requires modification of the program code for relocation. | Automatically translates and relocates addresses during loading.
Memory Efficiency | Less memory efficient due to fixed addresses. | More memory efficient, as it allows sharing of memory space.
Program Flexibility | Programs are less flexible and harder to modify. | Programs are more flexible and easier to modify.

25. Explain in brief design of an absolute Loader.

• An absolute loader loads a binary program into memory for execution.
• The binary program is stored in a file that contains the following:
  o A header record showing the load origin, length and load-time execution start address of the program.
  o A sequence of binary image records containing the program's code. Each binary image record contains a part of the program's code in the form of a sequence of bytes, the load address of the first byte of this code and a count of the number of bytes of code.

• The absolute loader notes the load origin and the length of the program mentioned in the header record.
• It then enters a loop that reads a binary image record and moves the code contained in it to the memory area starting at the address mentioned in the binary image record.
• At the end, it transfers control to the execution start address of the program.
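A minimal sketch of this loading loop in C (the record layouts and the in-memory array are assumptions made for illustration, not an actual format from the notes):

#include <stdio.h>
#include <string.h>

#define MEM_SIZE 1000

typedef struct { int load_origin, length, start_address; } HeaderRecord;
typedef struct { int load_address, count; unsigned char bytes[8]; } BinaryImageRecord;

static unsigned char memory[MEM_SIZE];            /* simulated main memory */

void absolute_load(const HeaderRecord *hdr,
                   const BinaryImageRecord recs[], int n_recs) {
    /* note the load origin and length from the header record */
    printf("loading %d bytes at origin %d\n", hdr->length, hdr->load_origin);

    /* loop: move each binary image record to the load address it specifies */
    for (int i = 0; i < n_recs; i++)
        memcpy(&memory[recs[i].load_address], recs[i].bytes, (size_t) recs[i].count);

    /* finally, control would be transferred to the execution start address */
    printf("transferring control to address %d\n", hdr->start_address);
}

int main(void) {
    HeaderRecord hdr = { 100, 6, 100 };
    BinaryImageRecord recs[2] = {
        { 100, 3, { 0x04, 0x01, 0x65 } },         /* three bytes of code at address 100 */
        { 103, 3, { 0x05, 0x01, 0x66 } }          /* three bytes of code at address 103 */
    };
    absolute_load(&hdr, recs, 2);
    return 0;
}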

• Advantages
  o Simple to implement and efficient in execution.
  o Saves memory (core), because the size of the loader is smaller than that of the assembler.
  o Allows the use of multiple source programs written in different languages. In such cases, each language's assembler converts its source program into object code, and a common object file is then prepared by address resolution.
  o The loader is simple and just obeys the instructions regarding where to place the object code in main memory.

• Disadvantages
  o The programmer must know and clearly specify to the translator (the assembler) the addresses in memory for linking and loading the programs, and care must be taken that the addresses do not overlap.
  o For programs with multiple subroutines, the programmer must remember the absolute address of each subroutine and use it explicitly in the other subroutines to perform linking.
  o If a subroutine is modified, the whole program has to be assembled again from start to finish.

26. Explain types of grammar.

A grammar is a finite set of formal rules for generating syntactically correct (meaningful) sentences.
Formal Definition of Grammar:
Any grammar can be represented by the 4-tuple <N, T, P, S>, where
• N – finite, non-empty set of non-terminal symbols
• T – finite set of terminal symbols
• P – finite, non-empty set of production rules
• S – start symbol (the symbol from which we start producing sentences or strings)

Hierarchy of grammar

Type-0 grammar
• This grammar is also known as phrase structure grammar.
• Their productions are of the form:

α⭢β
• Where both α and β can be strings of terminal and nonterminal symbols.
• Such productions permit arbitrary substitution of strings during derivation or reduction, hence
they are not relevant to specification of programming languages.

• Example: S ⭢ ACaB

Bc ⭢ acB

CB ⭢ DB

aD ⭢ Db
Type-1 grammar
• Their productions are of the form:

αAβ ⭢ απβ
• Where A is non terminal and α, β, π are strings of terminals and non-terminals.
• The strings α and β may be empty, but π must be non-empty.
• Here, a string π can be replaced by ′A′ (or vice versa) only when it is enclosed by the strings α
and β in a sentential form.


• Productions of Type-1 grammars specify that derivation or reduction of strings can take place
only in specific contexts. Hence these grammars are also known as context sensitive grammars.
• These grammars are also not relevant for programming language specification since
recognition of programming language constructs is not context sensitive in nature.

• Example: AB ⭢ AbBc

A ⭢ bcA

B⭢b
Type-2 grammar
• This grammar is also known as Context Free Grammar (CFG).
• Their productions are of the form:

A⭢π
• Where A is non terminal and π is string of terminals and non terminals.
• These grammars do not impose any context requirements on derivations or reductions which
can be applied independent of its context.
• CFGs are ideally suited for programming language specification.

• Example: S ⭢ Xa

X⭢a

X ⭢ aX

X ⭢ abc
Type-3 grammar
• This grammar is also known as linear grammar or regular grammar.
• Their productions are of the form: A ⭢ tB | t  or  A ⭢ Bt | t
• Where A, B are non-terminals and t is a terminal.
• Each RHS alternative has a specific form: either a single terminal symbol, or a string containing a single terminal and a single non-terminal.
• However, the nature of the productions restricts the expressive power of these grammars; e.g., nesting of constructs or matching of parentheses cannot be specified using such productions.
• Hence the use of Type-3 productions is restricted to the specification of lexical units, e.g., identifiers, constants, labels, etc.

• Example: X ⭢ a | aY
          Y ⭢ b
          Z ⭢ c
          A ⭢ dX | Zc


27. Explain Ambiguous grammar with any suitable example.

1. Ambiguous grammar:
A CFG is said to be ambiguous if there exists more than one derivation tree for the given input
string i.e., more than one LeftMost Derivation Tree (LMDT) or RightMost Derivation Tree
(RMDT).
Definition: G = (V,T,P,S) is a CFG that is said to be ambiguous if and only if there exists
astring in T* that has more than one parse tree.
In short, Ambiguous grammar is one that produces more than one leftmost or more than
onerightmost derivation for the same sentence.
For Example:
Let us consider this grammar:
E -> E+E|id
We can construct two parse trees from this grammar for the string id+id+id.
The two parse trees correspond to the following leftmost derivations:
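Derivation 1 (leftmost): E ⇒ E + E ⇒ id + E ⇒ id + E + E ⇒ id + id + E ⇒ id + id + id
(this tree groups the operands as id + (id + id))

Derivation 2 (leftmost): E ⇒ E + E ⇒ E + E + E ⇒ id + E + E ⇒ id + id + E ⇒ id + id + id
(this tree groups the operands as (id + id) + id)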

Both parse trees are derived from the same grammar rules, yet the trees are different.
Hence the grammar is ambiguous.

28. Find the First and Follow sets for the following grammar


S-> Aa|bAc|Bc|bBa
A-> d
B-> d
Check whether the grammar is LL(1) or not.

                             First         Follow

S -> Aa | bAc | Bc | bBa     { d, b }      { $ }

A -> d                       { d }         { a, c }

B -> d                       { d }         { a, c }

Parse Table:

        a         b                          c         d                        $

S                 S -> bAc / S -> bBa                  S -> Aa / S -> Bc

A                                                      A -> d

B                                                      B -> d

Here, we can see that two cells of the table, M[S, b] and M[S, d], contain two productions each.
Hence, this grammar is not feasible for an LL(1) parser.
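
The same conclusion can be reached programmatically. Below is a minimal Python sketch (the dictionary representation of the grammar and the helper names are assumptions made purely for illustration):

from collections import defaultdict

# Grammar of this question: S -> Aa | bAc | Bc | bBa, A -> d, B -> d
grammar = {
    "S": [["A", "a"], ["b", "A", "c"], ["B", "c"], ["b", "B", "a"]],
    "A": [["d"]],
    "B": [["d"]],
}
nonterminals = set(grammar)

def first_of(symbol, first):
    # FIRST of a terminal is the terminal itself
    return first[symbol] if symbol in nonterminals else {symbol}

def compute_first(grammar):
    # No production here derives the empty string, so FIRST of a
    # right-hand side is simply FIRST of its first symbol.
    first = {nt: set() for nt in grammar}
    changed = True
    while changed:
        changed = False
        for nt, productions in grammar.items():
            for rhs in productions:
                f = first_of(rhs[0], first)
                if not f <= first[nt]:
                    first[nt] |= f
                    changed = True
    return first

first = compute_first(grammar)
print(first)        # S: {b, d}, A: {d}, B: {d}

# Build the LL(1) table; any cell holding two productions is a conflict.
table = defaultdict(list)
for nt, productions in grammar.items():
    for rhs in productions:
        for terminal in first_of(rhs[0], first):
            table[(nt, terminal)].append(rhs)

conflicts = {cell: rules for cell, rules in table.items() if len(rules) > 1}
print(conflicts)    # conflicts at (S, b) and (S, d), so the grammar is not LL(1)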

29. Differentiate top-down parsing and bottom-up parsing. Check whether the following grammar is LL(1) or not:
E –> TE'
E' –> +TE' / ε
T –> FT'
T' –> *FT' / ε
F –> id / (E)

Top-Down Parsing                                           Bottom-Up Parsing

It is a parsing strategy that first looks at the           It is a parsing strategy that first looks at the
highest level of the parse tree and works down the         lowest level of the parse tree and works up the
parse tree by using the rules of grammar.                  parse tree by using the rules of grammar.

Top-down parsing attempts to find the leftmost             Bottom-up parsing can be defined as an attempt to
derivation for an input string.                            reduce the input string to the start symbol of a
                                                           grammar.

In this technique we start parsing from the top            In this technique we start parsing from the bottom
(the start symbol of the parse tree) down to the           (the leaf nodes of the parse tree) up to the start
leaf nodes, in a top-down manner.                          symbol, in a bottom-up manner.

This parsing technique uses Leftmost Derivation.           This parsing technique uses Rightmost Derivation
                                                           (in reverse).

The main decision is to select what production rule        The main decision is to select when to use a
to use in order to construct the string.                   production rule to reduce the string to get the
                                                           start symbol.

Example: Recursive Descent parser.                         Example: Shift-Reduce parser.

Step 1: The grammar satisfies all the properties below:

1. The grammar is free from left recursion.
2. The grammar is not ambiguous.
3. The grammar is left factored, so that it is deterministic.

Step 2: Calculate first() and follow(). Their First and Follow sets are:

                        First          Follow

E  –> TE'               { id, ( }      { $, ) }

E' –> +TE' / ε          { +, ε }       { $, ) }

T  –> FT'               { id, ( }      { +, $, ) }

T' –> *FT' / ε          { *, ε }       { +, $, ) }

F  –> id / (E)          { id, ( }      { *, +, $, ) }


Step 3: Make a parser table. Now, the LL(1) Parsing Table is:

       id          +             *              (           )           $

E      E –> TE'                                 E –> TE'

E'                 E' –> +TE'                               E' –> ε     E' –> ε

T      T –> FT'                                 T –> FT'

T'                 T' –> ε       T' –> *FT'                 T' –> ε     T' –> ε

F      F –> id                                  F –> (E)


Since no cell of the parsing table contains more than one production, the grammar is LL(1). Note that the
ε-productions (null productions) are placed under the Follow set of the corresponding symbol, and all the
remaining productions are placed under the First set of that symbol.
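
Because the grammar is LL(1), it can also be parsed by a recursive-descent parser with one procedure per nonterminal and a single token of lookahead. A minimal Python sketch (the token representation and names are illustrative assumptions):

def parse(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else "$"

    def match(expected):
        nonlocal pos
        if peek() != expected:
            raise SyntaxError(f"expected {expected!r}, found {peek()!r}")
        pos += 1

    def E():            # E -> T E'
        T(); E_prime()

    def E_prime():      # E' -> + T E' | ε   (ε when the lookahead is in Follow(E') = { $, ) })
        if peek() == "+":
            match("+"); T(); E_prime()

    def T():            # T -> F T'
        F(); T_prime()

    def T_prime():      # T' -> * F T' | ε   (ε when the lookahead is in Follow(T') = { +, $, ) })
        if peek() == "*":
            match("*"); F(); T_prime()

    def F():            # F -> id | ( E )
        if peek() == "id":
            match("id")
        elif peek() == "(":
            match("("); E(); match(")")
        else:
            raise SyntaxError(f"unexpected token {peek()!r}")

    E()
    match("$")          # the whole input must be consumed
    return True

print(parse(["id", "+", "id", "*", "id", "$"]))   # True: the string is accepted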

30. Eliminate left recursion from the following grammar.

S → Aa / b
A → Ac / Sd / ∈
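
Solution (using the standard elimination procedure on the given productions):

Step 1: Remove the indirect left recursion. A is indirectly left recursive through S, because A → Sd and S → Aa. Substituting the S-productions into A → Sd gives:
A → Ac / Aad / bd / ∈

Step 2: Remove the direct left recursion in A. Productions of the form A → Aα / β are rewritten as A → βA' and A' → αA' / ∈. Here the α-parts are c and ad, and the β-parts are bd and ∈, so:
A → bdA' / A'
A' → cA' / adA' / ∈

The grammar after eliminating left recursion is:
S → Aa / b
A → bdA' / A'
A' → cA' / adA' / ∈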

31. Explain in brief the causes of a large semantic gap.


The semantic gap is the mismatch between the concepts of the application domain, as expressed in a high-level programming language, and the concepts supported by the execution domain (the machine). The causes of a large semantic gap can be attributed to several factors:
Abstraction Levels: Programming languages often operate at different levels of abstraction. Higher-level
languages allow developers to express ideas more naturally and concisely, abstracting away low-level details.
However, this abstraction can lead to a significant semantic gap when translating these high-level concepts
into machine-executable code, which operates at a much lower level of abstraction.
Expressiveness vs. Precision: High-level languages are designed to be expressive and flexible, enabling
programmers to write complex algorithms and logic efficiently. On the other hand, machine code requires
precise instructions that directly manipulate hardware resources. This difference in expressiveness and
precision contributes to the semantic gap, as translating abstract concepts into precise machine operations can
be challenging.
Data Types and Operations: Programming languages support various data types and operations, such as
arithmetic, logical, and relational operations. Translating these operations into machine-level instructions
involves mapping high-level data types and operations to their corresponding low-level representations, which
may not always align perfectly, leading to a semantic mismatch.
Memory Management: High-level languages often provide automatic memory management through features
like garbage collection, while low-level languages require manual memory management. Bridging the


semantic gap involves translating memory-related operations and management strategies from high-level
constructs to low-level memory addressing and management techniques.
Optimization and Efficiency: High-level languages prioritize programmer productivity and readability, often
sacrificing optimization and efficiency. On the other hand, machine code optimizations focus on performance
and resource utilization. The semantic gap widens when optimizing code for execution efficiency, as the
optimized machine code may differ significantly from the original high-level code in terms of structure and
execution flow.
Addressing these causes requires sophisticated compiler techniques and translation mechanisms to ensure that
the translated code preserves the intended semantics and functionality while optimizing for efficient execution
on the target hardware architecture.

32. Discuss the dead code elimination method with a suitable example. Explain any three Code Optimization
Techniques.
Optimizing transformation refers to the process of modifying a program's code or structure to
improve its performance, reduce resource usage, or enhance its functionality. This transformation
involves making changes at the code level or the algorithmic level to achieve better efficiency, speed,
or reliability in program execution.

Code Optimization is an approach to enhance the performance of the code.

The process of code optimization involves-


 Eliminating the unwanted code lines
 Rearranging the statements of the code
Advantages-
The optimized code has the following advantages-
 Optimized code has faster execution speed.
 Optimized code utilizes the memory efficiently.

 Optimized code gives better performance.

Code Optimization Techniques-

Important code optimization techniques are-


1. Compile Time Evaluation


2. Common sub-expression elimination
3. Dead Code Elimination
4. Code Movement
5. Strength Reduction

1. Compile Time Evaluation-


Two techniques that fall under compile time evaluation are-
A) Constant Folding-
In this technique,
 As the name suggests, it involves folding the constants.
 The expressions that contain the operands having constant values at compile time are
evaluated.
 Those expressions are then replaced with their respective results.
Example-
Circumference of Circle = (22/7) x Diameter
Here,
 This technique evaluates the expression 22/7 at compile time.
 The expression is then replaced with its result 3.14.
 This saves the time at run time.


B) Constant Propagation-
In this technique,
 If some variable has been assigned some constant value, then that variable is replaced with
its constant value in the rest of the program during compilation.
 The condition is that the value of the variable must not get altered in between.
Example-
pi = 3.14
radius = 10
Area of circle = pi x radius x radius

Here,
 This technique substitutes the value of variables ‘pi’ and ‘radius’ at compile time.
 It then evaluates the expression 3.14 x 10 x 10.
 The expression is then replaced with its result 314.
 This saves the time at run time.
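
A small runnable sketch of compile-time evaluation is shown below (purely illustrative: it folds constant sub-expressions of a Python arithmetic expression using the standard ast module; the function name fold_constants is an assumption):

import ast
import operator

# Map AST operator nodes to the corresponding arithmetic functions.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

class Folder(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)                 # fold the children first
        if (isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant)
                and type(node.op) in OPS):
            value = OPS[type(node.op)](node.left.value, node.right.value)
            return ast.copy_location(ast.Constant(value), node)
        return node

def fold_constants(expr):
    tree = Folder().visit(ast.parse(expr, mode="eval"))
    return ast.unparse(tree)

print(fold_constants("(22 / 7) * diameter"))   # 3.142857142857143 * diameter
print(fold_constants("4 * 60 + b"))            # 240 + b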

2. Common Sub-Expression Elimination-

An expression that has already been computed and appears again in the code
is called a Common Sub-Expression.

In this technique,
 As the name suggests, it involves eliminating the common sub expressions.
 The redundant expressions are eliminated to avoid their re-computation.
 The already computed result is used in the further program when required.

Example-

Code Before Optimization                  Code After Optimization

S1 = 4 x i                                S1 = 4 x i
S2 = a[S1]                                S2 = a[S1]
S3 = 4 x j                                S3 = 4 x j
S4 = 4 x i    // Redundant Expression
S5 = n                                    S5 = n
S6 = b[S4] + S5                           S6 = b[S1] + S5

3. Code Movement-
In this technique,
 As the name suggests, it involves movement of the code.
 The code present inside the loop is moved out if it does not matter whether it is present
inside or outside.
 Such a code unnecessarily gets execute again and again with each iteration of the loop.
 This leads to the wastage of time at run time.

Example-

Code Before Optimization                  Code After Optimization

for ( int j = 0 ; j < n ; j++ )           x = y + z ;
{                                         for ( int j = 0 ; j < n ; j++ )
    x = y + z ;                           {
    a[j] = 6 x j ;                            a[j] = 6 x j ;
}                                         }

4. Dead Code Elimination-


In this technique,
 As the name suggests, it involves eliminating the dead code.
 The statements of the code which either never executes or are unreachable or their
output is never used are eliminated.

Example-


Code Before Optimization                  Code After Optimization

i = 0 ;                                   i = 0 ;
if (i == 1)
{
    a = x + 5 ;
}
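
A small runnable sketch of one common form of dead code elimination, the removal of assignments whose results are never used, is given below (the three-address representation, the live_out parameter and the function name are assumptions made for the example):

def eliminate_dead_code(code, live_out):
    """code: list of (target, operands) tuples; live_out: variables needed after the block."""
    live = set(live_out)
    kept = []
    for target, operands in reversed(code):      # scan backwards, tracking liveness
        if target in live:
            kept.append((target, operands))
            live.discard(target)
            live.update(op for op in operands if op.isidentifier())
        # otherwise the assignment is dead and is dropped
    return list(reversed(kept))

block = [
    ("i", ["0"]),
    ("t1", ["x", "5"]),     # a = x + 5 is never used afterwards
    ("a", ["t1"]),
]
print(eliminate_dead_code(block, live_out={"i"}))   # only ('i', ['0']) survives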

5. Strength Reduction-
In this technique,
 As the name suggests, it involves reducing the strength of expressions.
 This technique replaces the expensive and costly operators with simple and cheaper ones.
Example-

Code Before Optimization                  Code After Optimization

B = A x 2                                 B = A + A

Here,
 The expression “A x 2” is replaced with the expression “A + A”.
 This is because the cost of the multiplication operator is higher than that of the addition
operator.


33. Explain phases of Compiler with suitable example.



For Example:
Input: a=b+c*60 (Show output at each stage of compiler.)
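
In outline, the phases are Lexical Analysis, Syntax Analysis, Semantic Analysis, Intermediate Code Generation, Code Optimization and Code Generation, supported by the symbol-table manager and the error handler. Assuming, as in the usual textbook treatment of this example, that a, b and c are declared as floating-point variables, the output at each stage for a = b + c * 60 is:

1. Lexical Analysis: the statement is converted into the token stream id1 = id2 + id3 * 60, and the identifiers a, b, c are entered in the symbol table.
2. Syntax Analysis: a syntax tree is built for id1 = id2 + id3 * 60, with * applied before + and the result assigned to id1.
3. Semantic Analysis: type checking finds that 60 is an integer while the identifiers are floats, so a conversion is inserted: id1 = id2 + id3 * inttofloat(60).
4. Intermediate Code Generation: three-address code is produced:
   t1 = inttofloat(60)
   t2 = id3 * t1
   t3 = id2 + t2
   id1 = t3
5. Code Optimization: the code is simplified, for example to:
   t1 = id3 * 60.0
   id1 = id2 + t1
6. Code Generation: target (assembly) code is produced, for example:
   MOVF id3, R2
   MULF #60.0, R2
   MOVF id2, R1
   ADDF R2, R1
   MOVF R1, id1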


34. What is an interpreter? Explain its types and the benefits of an interpreter. Compare interpreter and compiler.
All high-level languages need to be converted to machine code so that the computer can understand the
program after taking the required inputs. The software that performs this conversion line by line into
machine-level language (as distinct from a compiler or an assembler) is known as an INTERPRETER.

An interpreter checks the source code line by line, and if an error is found on any line it stops the execution
until the error is resolved. Error correction is quite easy, because the interpreter reports errors line by line,
but the program takes more time to complete its execution. Interpreters were first used in 1952 to ease
programming within the limitations of the computers of the time. An interpreter translates source code into
some efficient intermediate representation and executes it immediately.
Need for an Interpreter
The first and vital need of an interpreter is to translate source code from a high-level language to machine
language. A compiler also serves this purpose and is a very powerful tool for developing programs in a
high-level language, but it has some demerits: if the source code is huge, compilation can take a very long
time. Here the interpreter plays its role. The interpreter cuts this duration because it is designed to translate
a single instruction at a time and execute it immediately. So, instead of waiting for the entire code to be
translated, the interpreter translates a single line and executes it.


Types of interpreter:

- Pure interpreter: the source program is maintained in the source form throughout its interpretation;
no preliminary processing is performed on it.

- Impure interpreter: some preliminary processing of the source program is performed to reduce the
analysis overhead during interpretation. A pre-processor converts the program to an intermediate
representation (IR), which is then used during interpretation.
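
To make this concrete, the following toy sketch (entirely illustrative, not taken from these notes) reads one statement at a time, translates it and executes it immediately, stopping at the first faulty line:

def interpret(lines):
    env = {}                                    # run-time values of the variables
    for number, line in enumerate(lines, start=1):
        try:
            target, expr = line.split("=", 1)   # only simple assignments are supported
            env[target.strip()] = eval(expr, {}, env)
        except Exception as err:
            print(f"Error on line {number}: {err}")
            return env                          # execution stops at the faulty line
    return env

program = [
    "a = 10",
    "b = a * 6",
    "c = b / 0",      # run-time error: interpretation stops here
    "d = a + b",      # never executed
]
print(interpret(program))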
Benefits of Interpreter
The following are the advantages of an interpreter:
1. Interactive debugging: Interpreters allow programmers to test their code interactively, meaning they
can execute code one line at a time and see the results immediately. This makes it easier to debug
code and identify errors quickly.
2. Ease of use: Interpreters typically have a simple and easy-to-use interface, making them accessible
to novice programmers. Programmers can run their code without having to worry about the
complexities of compilation and linking.
3. Portability: Interpreted code can be run on any platform that has an interpreter for the programming
language used. This means that the same code can be run on different operating systems and hardware
configurations without the need for modification.


4. Faster development: Interpreted languages allow programmers to write code more quickly because
they can test their code immediately. This leads to faster development cycles and shorter time-to-
market for software projects.
5. More detailed error messages: Interpreters can provide more detailed error messages than compilers
because they analyze and execute code one line at a time. This can help programmers identify and fix
errors more quickly.
Compiler                                                 Interpreter

A compiler is a program that converts the entire         An interpreter takes a source program and runs
source code of a programming language into               it line by line, translating each line as it
executable machine code for a CPU.                       comes to it.

The compiler takes a large amount of time to             An interpreter takes less time to analyze the
analyze the entire source code, but the overall          source code, but the overall execution time of
execution time of the program is comparatively           the program is slower.
faster.

The compiler generates error messages only after         Debugging is easier, as the interpreter
scanning the whole program, so debugging is              continues translating the program only until
comparatively hard as the error can be present           the error is met.
anywhere in the program.

The compiler requires a lot of memory for                It requires less memory than a compiler,
generating object code.                                  because no object code is generated.

Generates intermediate object code.                      No intermediate object code is generated.

For security purposes, a compiler is more useful.        The interpreter is a little vulnerable in the
                                                         case of security.

Examples: C, C++, C#                                     Examples: Python, Perl, JavaScript, Ruby

35. What is a debugger? Explain different types of errors in a program.


A debugger is a tool that allows you to examine the state of a running program. Debugging is the methodical
process of locating and then removing bugs or defects in a computer program. An interactive debugging system
gives programmers tools to help them test and debug their programs.
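
As a brief illustration, Python's built-in pdb debugger can pause a running program so that its state can be examined (the script and function below are hypothetical):

# buggy_avg.py -- a hypothetical script used to illustrate interactive debugging
def average(values):
    total = 0
    for v in values:
        total += v
    breakpoint()                 # pauses execution here and opens the pdb debugger
    return total / len(values)

print(average([10, 20, 30]))

At the pdb prompt, commands such as p total (print a variable), n (execute the next line), s (step into a call) and c (continue) let the programmer examine and control the running program while locating the fault.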
1. Compile Time Errors: These errors occur during the compilation of the code before the program is
executed. There are three main types:


 Lexical Error: This error occurs when the scanner encounters an invalid character or symbol
that cannot form any valid token of the language. For example, in C, a stray character such as @
appearing outside a string or comment results in a lexical error.
 Syntax Error: Syntax errors happen when the code violates the grammar rules of the
programming language. For instance, in C++, forgetting to add a semicolon at the end of a
statement can lead to a syntax error:
int x = 10 // Missing semicolon here
 Semantic Error: Semantic errors are more subtle and occur when the code is syntactically
correct but does not produce the desired outcome due to logical mistakes. For example,
consider the following Python code:
# Calculate the average of two numbers
a = 10
b = '20'             # Incorrect: should be an integer
avg = (a + b) / 2

Here, the code is syntactically correct, but it tries to perform arithmetic operations on incompatible
data types, leading to a semantic error.
2. Run Time Error: Run time errors occur while the program is running. They are not detected during
compilation but arise due to unexpected conditions during execution. Examples include:
 Division by Zero: Attempting to divide a number by zero results in a run time error:
result = 10 / 0 # Division by zero error
 Index Out of Range: Accessing an array or list element with an invalid index can cause a run
time error:
numbers = [1, 2, 3]
print(numbers[5]) # Index out of range error

Understanding and being able to identify these types of errors is crucial for debugging and improving the
quality of software programs.

