sepm

Software re-engineering involves examining and altering a software system to improve its efficiency and effectiveness, often requiring reverse engineering and restructuring of code and data. Key activities include inventory analysis, documentation reconstruction, and forward engineering, while cohesion and coupling are important concepts that describe the functional relationships within modules and their interdependencies. The document also outlines the characteristics of a good Software Requirements Specification (SRS) and various software project estimation techniques, including Function Point Analysis and the COCOMO model.

Explain software reengineering

Software re-engineering is the examination and alteration of a system to reconstitute it in a new form.
Applying the principles of re-engineering to the software development process is called software
re-engineering. It positively affects software cost, quality, customer service, and delivery speed. In
software re-engineering, we improve the software to make it more efficient and effective. It is a
process in which the software's design is changed and the source code is re-created. Sometimes
software engineers notice that certain software product components need more upkeep than other
components, necessitating their re-engineering.

The re-engineering procedure requires the following steps:

1. Decide which components of the software to re-engineer: the complete software or just some of its components.
2. Perform reverse engineering to learn about the existing software's functionality.
3. Restructure the source code if needed, for example converting function-oriented programs into object-oriented programs.
4. Restructure the data if required.
5. Use forward engineering ideas to generate the re-engineered software.
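As an illustration of step 3, the hypothetical sketch below (invented names, not from any real system) restructures a function-oriented fragment into object-oriented form while preserving its behaviour:

```python
# Hypothetical sketch of step 3: restructuring function-oriented code
# into object-oriented code while preserving behaviour.

# Before: function-oriented style, with state passed around explicitly.
def create_account(owner):
    return {"owner": owner, "balance": 0}

def deposit(account, amount):
    account["balance"] += amount

# After: the same behaviour restructured into a class, so the data and
# the operations on it live together in one cohesive unit.
class Account:
    def __init__(self, owner):
        self.owner = owner
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount

# Re-engineering must preserve behaviour: both versions should agree.
old = create_account("asha")
deposit(old, 100)
new = Account("asha")
new.deposit(100)
print(old["balance"] == new.balance)  # True
```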

Software Re-Engineering Activities:


1. Inventory Analysis:
Every software organization should have an inventory of all the applications.
Inventory can be nothing more than a spreadsheet model containing information that
provides a detailed description of every active application.

2. Documentation Reconstruction:
Documentation of a system explains how it operates and how to use it.
Documentation must be kept up to date. It may not be necessary to fully document every
application; however, if a system is business-critical, it should be fully re-documented.

3. Reverse Engineering:
Reverse engineering is a process of design recovery. Reverse engineering tools
extract data and architectural and procedural design information from an existing
program.

4. Code Restructuring:
To accomplish code restructuring, the source code is analyzed using a
restructuring tool. Violations of structured programming constructs are noted and the
code is then restructured. The resultant restructured code is reviewed and tested to
ensure that no anomalies have been introduced.

5. Data Restructuring:
Data restructuring analyzes the existing data architecture and, where required, redesigns the
data structures or data formats (step 4 of the re-engineering procedure above).

6. Forward Engineering:
Forward engineering, also called renovation or reclamation, not only recovers design
information from existing software but uses this information to alter or reconstitute the
existing system in an effort to improve its overall quality.

What do you mean by Cohesion & Coupling? Explain different types of cohesion and coupling.

Cohesion:
1. The measure of how strongly the elements inside a module are related functionally is called cohesion.
2. A good software design will have high cohesion.
3. In software engineering, the elements inside a module can be instructions, groups of instructions,
definitions of data, calls to other modules, etc.
4. The aim is always for functions that are strongly related, with the expectation that
everything inside the module is connected with everything else.
5. Cohesion is a measure of the functional strength of a module.
6. Basically, cohesion is the internal glue that keeps the module together.
7. A good system design must have high cohesion within the components of the system.

TYPES
Coincidental cohesion (worst): An unplanned cohesion where the elements are unrelated. The
elements have no conceptual relationship other than their location in the source code. It is accidental and the
worst form of cohesion. E.g., printing the next line and reversing the characters of a string in a single component.

Logical cohesion: When logically related elements, but not functionally related ones, are combined into
a single module, it is called logical cohesion. Here all elements of the module perform operations of the
same logical class. E.g., print functions: a set of print functions generating different output reports
arranged into a single module.

Procedural Cohesion: A module is said to possess procedural cohesion, if the set of functions of the
module are all part of a procedure (algorithm) in which a certain sequence of steps have to be carried out
for achieving an objective. Eg.- a function which checks file permissions then opens the file.

Sequential Cohesion: Sequential cohesion is when parts of a module are grouped because the output
of one part is the input to another, like an assembly line (e.g., a function which reads data from a file
and then processes that data). Sequential cohesion makes maintenance easier. E.g., in a transaction processing
system (TPS), the get-input, validate-input, and sort-input functions are grouped into one module.

Communicational cohesion: When the elements of a module perform different functions, but each
operates on the same input data or contributes to the same output data, the module is said to be
communicationally cohesive. E.g., a module that determines customer details: it uses the customer
account number to find and return both the customer name and the loan balance.
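The cohesion types above can be made concrete with a small sketch. The hypothetical module below mirrors the TPS example and shows sequential cohesion: each function's output is the next function's input.

```python
# Hypothetical module with sequential cohesion, mirroring the TPS
# example: get-input -> validate-input -> sort-input.

def get_input(raw):
    # Parse raw comma-separated text into transaction amounts.
    return [int(x) for x in raw.split(",")]

def validate_input(amounts):
    # Keep only positive transaction amounts.
    return [a for a in amounts if a > 0]

def sort_input(amounts):
    return sorted(amounts)

def process_transactions(raw):
    # The three functions are grouped in one module because each one
    # consumes the previous one's output -- sequential cohesion.
    return sort_input(validate_input(get_input(raw)))

print(process_transactions("5,-2,9,1"))  # [1, 5, 9]
```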

Coupling:
1. Coupling is the measure of the degree of interdependence between modules.
2. Good software will have low coupling.
3. The lower the coupling, the more modular a program is, which means that
less code has to be changed when the program's functionality is altered later on.
4. However, coupling cannot be completely eliminated; it can only be minimised.

Content Coupling: In a content coupling, one module can modify the data of another module or control
flow is passed from one module to the other module. E.g. a branch from one module into another module.

Common Coupling: The modules have shared data such as global data structures. The changes in global
data mean tracing back to all modules which access that data to evaluate the effect of the change. Eg.-
when two classes access the same shared data (e.g., a global variable).
Data Coupling: If the dependency between the modules is based on the fact that they communicate by
passing only data, then the modules are said to be data coupled. In data coupling, the components are
independent of each other and communicate through data.

Stamp Coupling: Stamp coupling occurs when two modules communicate by passing a composite data
structure, and the receiving module uses only a portion of the data structure's fields. This type of coupling
is considered a moderate form of coupling, where the dependency between modules is not as tight as
with content or control coupling, but it's also not as loose as data coupling.

External Coupling: External coupling occurs when modules in a system depend on external entities or
systems outside the software, such as communication protocols, device interfaces, or external data
formats. This means the modules rely on external standards or specific formats for interaction, making the
system vulnerable to changes in those external elements.
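To make the contrast concrete, here is a small hypothetical sketch of common coupling versus data coupling (the tax-calculation example is invented):

```python
# Hypothetical sketch contrasting common coupling and data coupling.

# Common coupling (undesirable): the function depends on shared global
# data, so changing tax_rate forces re-checking every module that reads it.
tax_rate = 0.25

def price_with_tax_common(price):
    return price * (1 + tax_rate)  # hidden dependency on the global

# Data coupling (desirable): modules communicate only by passing data,
# so every dependency is visible in the function signature.
def price_with_tax_data(price, rate):
    return price * (1 + rate)

print(price_with_tax_common(100))      # 125.0
print(price_with_tax_data(100, 0.25))  # 125.0 -- same result, looser coupling
```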
4 P's in Software Engineering
The management spectrum focuses on the four P's: people, product, process, and
project. The project manager has to control all these P's to have a smooth flow in
the progress of the project and to reach the goal.

1. People:
• People have the most important contribution in software projects.
• The success of a project depends on selecting the right kind of people with the right talent.
• Depending on roles & responsibilities, following are main categories:
Senior manager-Define the business issues & have significant influence on the project.
Project Manager- Plan, Motivate, organize & control project work.
Has problem solving skills & team management capabilities.
Software Engineer- Who delivers technical skills which are necessary in a project.

2. Product:
• It is the ultimate goal of the project.
• It can be any type of software product that has to be delivered.
• Before product development, its objectives & scope should be established, solutions
should be considered & technical or management constraints should be identified.

3. Process:
• The project manager and team should have a methodology and plan that completes the project as per customer requirements.
• Without a clearly defined process, team members will not know what to do and when to carry out project activities.
• Using the right process increases the project execution success rate and helps meet goals and objectives.
• It includes requirement analysis, design, coding, testing, deployment, and maintenance.

4. Project:
• The project is the complete software project, which includes requirements analysis, development, delivery, maintenance, and updates.
• The project manager of a project or sub-project is responsible for managing the people, product, and process.
• The responsibilities and activities of a software project manager form a long list, but it has to
be followed to avoid project failure.
Software Requirements Specification (SRS):
A document that describes the features and behavior of a software system.
It serves as a blueprint for the development process, guiding developers and other stakeholders.
An SRS includes functional and non-functional requirements, system constraints, and other relevant information.

• Characteristics of good SRS :

1. Correctness: User review is used to verify the accuracy of the requirements stated in the SRS.
An SRS is said to be correct if it covers all the needs that are truly expected from the system.

2. Completeness: The SRS is complete if, and only if, it includes the following elements:
• All essential requirements, whether relating to functionality, performance, design constraints,
attributes, or external interfaces.
• Definition of the responses of the software to all realizable classes of input data in all available
categories of situations.
• Full labels and references for all figures, tables, and diagrams in the SRS, and definitions of
all terms and units of measure.

3. Consistency: The SRS is consistent if, and only if, no subset of the individual requirements
described in it conflict. For example:
• The format of an output report may be described in one requirement as tabular but in
another as textual.
• One condition may state that "A" must always follow "B," while another requires that "A" and "B" occur together.
• A program's request for user input may be called a "prompt" in one requirement and a "cue" in another.
• The use of standard terminology and descriptions promotes consistency.

4. Unambiguousness: An SRS is unambiguous when every stated requirement has only one
interpretation. This means that each element is uniquely interpreted.
In case a term is used with multiple definitions, the requirements document
should clarify its intended meaning in the SRS so that it is clear and simple to
understand.

5. Ranking for importance and stability: The SRS is ranked for importance and stability if
each requirement in it has an identifier to indicate either the significance or the stability
of that particular requirement.

6. Modifiability: The SRS should be made as modifiable as possible and should be capable of
quickly incorporating changes to the system to some extent. Modifications should be properly
indexed and cross-referenced.

7. Understandable by the customer: An end user may be an expert in his/her own domain but
might not be trained in computer science. Hence, the use of formal notations and symbols
should be avoided as much as possible.
The language should be kept simple and clear.
Software Requirement Specification (SRS) Format

Introduction
Purpose of this Document - At first, the main aim of why this document is necessary and what's
the purpose of the document is explained and described.
Scope of this document - In this, the overall working and main objective of the doc and what
value it will provide to customers is described and explained.
It also includes a description of development cost and time required.
Overview - A description of the product is given; simply a summary or overall review of the product.
General description - In this, the general functions of the product are described, including the
objectives of the user, user characteristics, features, benefits, and why it is important. It also
describes the features of the user community.

Functional Requirements
In this, the possible outcomes of the software system, including the effects of operating the
program, are fully explained. All functional requirements, which may include calculations, data processing,
etc., are placed in ranked order. Functional requirements specify the expected behavior of the
system: which outputs should be produced from the given inputs. They describe the relationship
between the inputs and outputs of the system.

Interface Requirements
In this, software interfaces which mean how software programs communicate with each other or
users either in the form of any language, code, or message are fully described and explained.
Examples can be shared memory, data streams, etc.

Performance Requirements
In this, how the software system performs the desired functions under specific conditions is explained. It
also covers the required time, required memory, maximum error rate, etc. There are two types of
performance requirements: static and dynamic. Static requirements are those that do not
impose constraints on the execution characteristics of the system. Dynamic requirements specify
constraints on the execution behaviour of the system.

Design Constraints
In this, constraints which simply means limitation or restriction are specified and explained for
the design team. Examples may include use of a particular algorithm, h/w & s/w limitations, etc.
An SRS should identify and specify all such constraints.

Non-Functional Attributes
In this, non-functional attributes are explained that are required by the software system for
better performance. An example may include Security, Portability, Reliability, Reusability,
Application compatibility, Data integrity, Scalability capacity, etc.
Preliminary Schedule and Budget
In this, initial version and budget of project plan are explained which include overall time
duration required and overall cost required for development of project.

Appendices
In this, additional information like references from where information is gathered, definitions of
some specific terms, acronyms, abbreviations, etc. are given and explained.

Uses of SRS document


Development team requires it for developing products according to the need.
Test plans are generated by a testing group based on the described external behaviour.
Maintenance and support staff need it to understand what the s/w product is supposed to do.
Project managers base their plans and estimates of schedule, effort and resources on it.

SOFTWARE PROJECT ESTIMATION TECHNIQUES

LOC(Line of Code): (not universally accepted)


• It is one of the earliest and simplest metrics for calculating the size of a computer program.
• It is generally used in calculating and comparing the productivity of programmers.
• Size-oriented metrics depend on the programming language used.
• Productivity is defined as KLOC / EFFORT, where effort is measured in person-months.
• Because productivity depends on KLOC, assembly-language code will show higher apparent
productivity, since it needs more lines for the same functionality.
• The LOC measure requires a level of detail which may not be practically achievable early on.
• It requires that all organizations use the same method for counting LOC. Some
organizations count only executable statements, some include comments, and some do not;
thus, a standard needs to be established.
Advantages: It is simple to measure.
Disadvantages:
• It is defined on the code; for example, it cannot measure the size of the specification.
• Bad software design may cause an excessive number of lines of code.
• It is language dependent.
• Users cannot easily understand it.
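A small worked example of the size-oriented metrics above, with made-up project numbers:

```python
# Worked example of size-oriented metrics (the numbers are invented).
loc = 33_200       # total lines of code
effort = 24        # person-months
cost = 168_000     # total project cost, in arbitrary currency units

kloc = loc / 1000                 # size in KLOC
productivity = kloc / effort      # KLOC per person-month
cost_per_kloc = cost / kloc       # currency units per KLOC

print(round(productivity, 3))     # 1.383
print(round(cost_per_kloc, 2))    # 5060.24
```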

FP (Function Point) Analysis:


• Allan J. Albrecht initially developed Function Point Analysis in 1979 at IBM, and it has
been further modified by the International Function Point Users Group (IFPUG).
• The functional size of the product is measured in terms of the function point, which is a
standard of measurement to measure the software application.
• The basic and primary purpose of the functional point analysis is to measure and provide the
software application functional size to the client, customer, and the stakeholder on their request.

Characteristics of Functional Point Analysis


The function point is calculated from the number and types of functions used in the
application. The counted components are classified into five types: External Inputs, External
Outputs, External Inquiries, Internal Logical Files, and External Interface Files.
1. Function points help in describing system complexity and also indicate project timelines.
2. FP is language and technology independent, meaning it can be applied to software systems
developed using any programming language or technology stack.

Function Point Analysis Steps:


1.Determine the Type of Count: Decide whether to perform a basic or extended FPA.
2.Determine the Boundary of the Count: Define the scope of the system to be analyzed.
3.Identify Elementary Processes (EP): Identify the functions or processes that the user interacts
with, such as input screens, reports, inquiries, files, and interfaces.
4.Count the Functional Elements: Count the number of External Inputs, External Outputs,
External Inquiries, Internal Logical Files, and External Interface Files.
5.Assign Complexity Weights: Assign complexity weights to each functional element based on
their complexity (simple, medium, or complex).
6.Calculate Unadjusted Function Points (UFP): Calculate the UFP based on the number of
functional elements and their complexity weights.
7.Determine the Value Adjustment Factor (VAF): Assess the impact of factors that affect the
complexity of the system, such as data complexity, file complexity, etc
8.Calculate Adjusted Function Points (AFP): Calculate the AFP by applying the VAF to the UFP.
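Steps 4-8 above can be sketched numerically. In the sketch below the counts are invented, the weights are the standard IFPUG average-complexity weights, and the VAF formula assumes the usual 14 general system characteristics rated 0-5:

```python
# Sketch of FPA steps 4-8. Counts are invented; weights are the
# standard average-complexity weights for the five element types.
avg_weights = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}
counts      = {"EI": 24, "EO": 46, "EQ": 8, "ILF": 4, "EIF": 2}

# Step 6: Unadjusted Function Points.
ufp = sum(counts[k] * avg_weights[k] for k in counts)

# Step 7: Value Adjustment Factor from the 14 general system
# characteristics, each rated 0-5 (here all are rated 3).
gsc_ratings = [3] * 14
vaf = 0.65 + 0.01 * sum(gsc_ratings)

# Step 8: Adjusted Function Points.
afp = ufp * vaf
print(ufp, round(vaf, 2), round(afp, 2))  # 412 1.07 440.84
```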

Benefits of Function Point Analysis:


Helps in estimating project timelines, resources, and costs based on historical data.
Facilitates better communication between developers and clients.
FP analysis is independent of the technology or programming language used.

Disadvantages of Function Point Analysis:


Complexity: FP analysis can be complex and time-consuming.
Subjectivity: The determination of complexity weights can be subjective.
Requires Expertise: Requires expertise in software analysis and FPA techniques

numerical
COCOMO Model: • Proposed by Barry Boehm in 1981 and based on the study of 63 projects.
• COCOMO (Constructive Cost Model) is a regression model based on the number of lines of code (LOC).
• It is a procedural cost estimation model for software projects.
• This model is used for predicting the various parameters associated with developing a project
such as size, effort, cost and time.
• Effort and schedule are the two outcome parameters of the COCOMO model.
• Effort: the amount of labor (staff effort) required to complete a task.
It is measured in person-months.
• Schedule: the amount of time required for the completion of the job, which is proportional to the effort.
It is measured in units of time such as weeks or months.
Different models of COCOMO have been proposed to predict the cost estimation at different
levels, based on the amount of accuracy and correctness required. All of these models can be
applied to a variety of projects. Boehm categorizes the projects as follows:

1. Organic: If the team size required to develop a software project is small then the software
project is called an Organic project. In this the problem is well understood and has been solved
in the past and also the team members have nominal experience regarding the problem.

2. Semi-detached: If the main features such as team-size, experience, knowledge of the various
programming environments lie in between Organic and Embedded then a software project is
called a Semi-detached type project. The Semi-Detached projects are less familiar and difficult
to develop compared to the organic ones and require more experience and better guidance and
creativity. E.g., compilers or different embedded systems.

3. Embedded: Embedded software projects require the highest level of complexity, creativity,
and exp. This type of software requires a larger team size than the other 2 models & also the
developers need to be sufficiently experienced and creative to develop such complex models.
The Six phases of detailed COCOMO are:
1.​ Planning and requirements
2.​ System design
3.​ Detailed design
4.​ Module code and test
5.​ Integration and test
6.​ Cost Constructive model

Planning and requirements: This initial phase involves defining the scope, objectives, and constraints of
the project. It includes developing a project plan that outlines the schedule, resources, and milestones

System design: : In this phase, the high-level architecture of the software system is created. This includes
defining the system’s overall structure, including major components, their interactions, and the data flow
between them.

Detailed design: This phase involves creating detailed specifications for each component of the system. It
breaks down the system design into detailed descriptions of each module, including data structures,
algorithms, and interfaces.

Module code and test: This involves writing the actual source code for each module or component as
defined in the detailed design. It includes coding the functionalities, implementing algorithms, and
developing interfaces.

Integration and test: This phase involves combining individual modules into a complete system and
ensuring that they work together as intended.

Cost Constructive model: The Constructive Cost Model (COCOMO) is a widely used method for
estimating the cost and effort required for software development projects.

Importance of the COCOMO Model

1.​ Cost Estimation: To help with resource planning and project budgeting, COCOMO offers a
methodical approach to software development cost estimation.
2.​ Resource Management: By taking team experience, project size, and complexity into
account, the model helps with efficient resource allocation.
3.​ Project Planning: COCOMO assists in developing practical project plans that include
attainable objectives, due dates, and benchmarks.
4.​ Risk management: Early in the development process, COCOMO assists in identifying and
mitigating potential hazards by including risk elements.

Formulae and numericals
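A sketch of the Basic COCOMO formulae and a small numerical, using the standard coefficient table and an assumed 32 KLOC organic project (the project size is invented for illustration):

```python
# Basic COCOMO sketch (assumed input: a 32 KLOC organic project).
#   Effort E = a * (KLOC)^b   person-months
#   Time   T = c * (E)^d      months
#   Staff    = E / T          average persons

COEFFS = {  # (a, b, c, d) for each project class
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b     # person-months
    time = c * effort ** d     # months
    staff = effort / time      # average staff size
    return effort, time, staff

e, t, s = basic_cocomo(32, "organic")
# Roughly 91 person-months, about 14 months, and 6-7 people on average.
print(round(e, 1), round(t, 1), round(s, 1))
```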


A Data Flow Diagram (DFD) is a graphical representation of the flow of data in a system. It is capable
of illustrating incoming data flow, outgoing data flow, and stored data. A data flow diagram describes
how data flows through the system.
DFDs can be divided into different levels, which provide varying degrees of detail about the
system. Higher levels of DFD provide a broad overview of the system, while lower levels provide
more detail about the system's processes, data flows, and data stores.
A combination of different levels of DFD can provide a complete understanding of the system.
The following are the four levels of DFDs:

0-Level Data Flow Diagram (DFD)


Level 0 is the highest-level Data Flow Diagram (DFD), which provides an overview of the entire
system. It shows the major processes, data flows, and data stores in the system, without
providing any details about the internal workings of these processes.
It’s designed to be an abstraction view, showing the system as a single process with its
relationship to external entities.

1-Level Data Flow Diagram (DFD)


1-Level provides a more detailed view of the system by breaking down the major processes
identified in the level 0 Data Flow Diagram (DFD) into sub-processes. Each sub-process is
depicted as a separate process on the level 1 Data Flow Diagram (DFD). The data flows and
data stores associated with each sub-process are also shown.
In this level, we highlight the main functions of the system and break down the high-level
process of 0-level Data Flow Diagram (DFD) into subprocesses.

2-Level Data Flow Diagram (DFD)


2-Level provides an even more detailed view of the system by breaking down the
sub-processes identified in the level 1 Data Flow Diagram (DFD) into further sub-processes.
Each sub-process is depicted as a separate process on the level 2 DFD. The data flows and
data stores associated with each sub-process are also shown.
2-Level Data Flow Diagram (DFD) goes one step deeper into parts of 1-level DFD. It can be
used to plan or record the specific/necessary detail about the system’s functioning.

3-Level Data Flow Diagram (DFD)


3-Level is the most detailed level of Data Flow Diagram (DFDs), which provides a detailed view
of the processes, data flows, and data stores in the system. This level is typically used for
complex systems, where a high level of detail is required to understand the system. Each
process on the level 3 DFD is depicted with a detailed description of its input, processing, and
output. The data flows and data stores associated with each process are also shown.
DFD for Hotel Reservation: (Level 0, Level 1, and Level 2 diagrams not reproduced here)

DFD for Library Management: (Level 0, Level 1, and Level 2 diagrams not reproduced here)
User Interface Design
The user interface is the front-end application view with which the user interacts to use the software.
The software becomes more popular if its user interface is:
1. Attractive 2. Simple to use 3. Responsive in a short time 4. Clear to understand
5. Consistent on all interface screens

Types of User Interface:


1. Command Line Interface: The Command Line Interface provides a command prompt, where
the user types the command and feeds it to the system. The user needs to remember the syntax
of the command and its use. It is mostly used by technical people or programmers.
2. Graphical User Interface: Graphical User Interface provides a simple interactive interface to
interact with the system. GUI is easier to learn and use compared to command-line interfaces.
GUI provides multiple windows to users simultaneously to interact with the system.

User Interface Design Process: diag (mod3-58)

1. GUI Requirement Gathering - The designers may like to have a list of all functional and
non-functional requirements of the GUI.
2. User Analysis - The designer studies who is going to use the software's GUI. The target audience
matters, as the design details change according to the knowledge and competency level of the user.
3. Task Analysis - Designers have to analyze what tasks are to be done by the software
solution. Here in GUI design, it does not matter how the task will be done. Tasks can be represented in a
hierarchical manner, taking one major task and dividing it further into smaller sub-tasks.
4. GUI Design & Implementation - After gathering information about requirements, tasks, and the
user environment, designers design the GUI, implement it in code (with tools such as Visual Studio),
and embed the GUI with working or dummy software in the background.
5. Testing - GUI testing can be done in various ways: organizations can conduct in-house
inspection, involve users directly, or release beta versions, to name a few. Testing
may include usability, compatibility, user acceptance, etc.

User Interface design Principles:


1. User Familiarity: Instead of using computer terminology, make use of user-oriented
terminology. E.g., in Microsoft Office software, terms such as document, spreadsheet,
letter, and folder are used, while terms such as directory, file, and identifier are avoided.
2. Consistency: The appropriate level of consistency should be maintained in the user interface.
For example, the commands or menus should be of the same format.
3. Minimal Surprise: The commands should operate as the user expects. This makes the
interface easy for the user to predict. For example, in a word processing document, the File
menu should contain New, Open, Save, Print, and Close options.
4.Recoverability: The system should provide a recovering facility to the user from his errors so
that the user can correct those errors. For example, an undo can correct those errors.
User Interface Design Golden Rules:

1. Place the user in control:


• The user should be able to easily enter and exit the mode with little or no effort.
• Provide for flexible interaction using keyboard, mouse or touch screen
• When a user is doing a sequence of actions the user must be able to interrupt the sequence to
do some other work without losing the work that had been done.
• Allows users to customize the interface like shortcut keys etc.
• Hide technical internals from casual users.
• The user should be able to use the objects and manipulate the objects that are present on the
screen to perform a necessary task.

2. Reduce the User’s Memory Load:


• Reduce demand on short-term memory.
• if a user needs to add some new features then he should be able to add the required features.
• Define shortcuts that are intuitive.
• The visual layout of the interface should be based on a real-world metaphor.
• Disclose information in a progressive, hierarchical fashion

3.Make the Interface Consistent:


• Allow the user to put the current task into a meaningful context.
• Maintain consistency across a family of applications.
• If past interactive models have created user expectations do not make changes unless there is
a compelling reason.
Explain functional and non-functional requirements.
Functional Requirements: These are the essential features and functionalities demanded by the end user,
forming the core capabilities of the system. Represented in terms of inputs, operations, and expected outputs,
they are directly observable in the final product and are integral to fulfilling the user's needs. Examples
include determining the system's required features and identifying edge cases for consideration during design.

Non-Functional Requirements: These are the quality constraints mandated by the project contract,
dictating the standards the system must meet. They encompass aspects such as portability, security,
reliability, and performance, contributing to the overall system’s effectiveness and usability. Examples
include specifying minimum latency for request processing and ensuring high system availability.
Software reverse engineering, also known as backward engineering, is the process of forward
engineering performed in reverse. In this, information is collected from the given or existing application. It
takes less time than forward engineering to develop an application. In reverse engineering, the
application is broken down to extract knowledge or its architecture.


Steps of Software Reverse Engineering:

1.​ Collecting Information: This step focuses on collecting all possible information (i.e., source
design documents, etc.) about the software.
2.​ Examining the Information: The information collected in step-1 is studied so as to get
familiar with the system.
3.​ Extracting the Structure: This step concerns identifying program structure in the form of a
structure chart where each node corresponds to some routine.
4.​ Recording the Functionality: During this step, processing details of each module of the
structure chart are recorded using structured language, such as decision tables.
5.​ Recording Data Flow: From the information extracted in step-3 and step-4, a set of data
flow diagrams is derived to show the flow of data among the processes.
6.​ Recording Control Flow: The high-level control structure of the software is recorded.
7.​ Review Extracted Design: The design document extracted is reviewed several times to
ensure consistency and correctness. It also ensures that the design represents the program.
8.​ Generate Documentation: Finally, in this step, the complete documentation including SRS,
design document, history, overview, etc. is recorded for future use.

Reverse Engineering Tools: Reverse engineering tools accept source code as input and produce a
variety of structural, procedural, data, and behavioral design. Some of the tools are given below:

1.​ Rigi: A visual software understanding tool.


2.​ Bunch: A software clustering/modularization tool.
3.​ GEN++: An application generator to support the development of analysis tools for the C++
language.
4.​ PBS: Software Bookshelf tools for extracting and visualizing the architecture of programs.
Software Requirements Specification (SRS) for a University Management System
1. Introduction
Purpose: The University Management System (UMS) is designed to manage and streamline all
university operations, including student registration, course management, and result processing.
Scope: The UMS will be used by students, faculty, and administrative staff to facilitate the
efficient management of academic and administrative tasks within the university.

2. General Description
Product Perspective: The UMS will be a web-based application accessible from any device with
an internet connection. It will integrate with existing university systems for seamless operation.
Product Functions: The UMS will manage:
Student registration, Course management, Result processing, Faculty management

3. Specific Requirements
Student Registration: Students can register for courses each semester.
Students can view their course schedules and grades.
Course Management: Faculty can create and manage courses and can upload course materials and grades.
Result Processing: The system will calculate students' GPA each semester based on their grades.
Faculty Management: Administrative staff can add and manage faculty members.
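The Result Processing requirement above can be sketched in code. This is a minimal illustration only: the 4-point grade scale, the credit-weighted average, and the rounding are assumptions for the sketch, not part of the SRS.

```python
# Hypothetical sketch of the Result Processing requirement: a credit-weighted
# GPA on an assumed 4-point scale. Grade points and rounding are assumptions,
# not taken from the SRS above.

GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def semester_gpa(grades):
    """grades: list of (letter_grade, credits) tuples for one semester."""
    total_points = sum(GRADE_POINTS[g] * c for g, c in grades)
    total_credits = sum(c for _, c in grades)
    if total_credits == 0:
        return 0.0  # no registered courses this semester
    return round(total_points / total_credits, 2)
```

For example, an "A" in a 3-credit course and a "B" in another 3-credit course would give a GPA of 3.5 under these assumed grade points.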

4. Non-Functional Requirements
Performance Requirements: The system should handle a large number of users simultaneously without performance degradation.
Security Requirements: The system should have secure login and data encryption to protect sensitive information.
User roles and permissions must be implemented to ensure data confidentiality and integrity.
Reliability Requirements: The system should have backup and recovery features to prevent data loss.
The system should ensure high availability with minimal downtime.

5. System Features
User Interface: The system should have a user-friendly interface that is intuitive and easy to
navigate.
Interfaces should be responsive and accessible from various devices and screen sizes.
Database: The system should have a robust and scalable database to store all data securely.
Data should be normalized to reduce redundancy and improve data integrity.

6. External Interface Requirements


User Interfaces: The system should be accessible via common web browsers (e.g., Chrome,
Firefox, Safari, Edge).The interface should support modern web standards (HTML5, CSS3,
JavaScript).
Hardware Interfaces: The system should be compatible with standard computer hardware,
including desktops, laptops, tablets, and smartphones.
Software Requirements Specification (SRS) for Online Movie Booking System
1. Introduction
1.1 Purpose: The purpose of this document is to outline the requirements for an Online Movie Booking
System, designed to facilitate the booking of movie tickets online. The system will enable users to browse
available movies, check showtimes, select seats, and book tickets, offering convenience and efficiency for
users.
1.2 Scope: Browse movies and view details (showtimes, ratings, trailers, etc.). Select preferred seats and
showtimes. Make payments securely. Receive e-tickets and booking confirmation via email. The system
serves cinema operators and customers, aiming to reduce wait times, minimize errors, and increase
user satisfaction.
1.3 Definitions, Acronyms, and Abbreviations
User: Anyone who uses the Online Movie Booking System.
Admin: The administrator responsible for managing movies, schedules, and other system functionalities.
SRS: Software Requirements Specification.
UI: User Interface.
1.4 Overview
This document outlines the system functionality, user requirements, and technical
requirements for the development of an Online Movie Booking System.

2. Overall Description
2.1 Product Perspective : The Online Movie Booking System is an independent, web-based platform that
connects users with movie theaters for seamless ticket bookings.
2.2 Product Functions: User Management: Registration, login, profile management.
Movie and Show Management: Display movies, schedules, and related information.
Seat Selection: View available seats and choose preferences.
Payment Gateway: Secure transactions with various payment options.
2.3 User Classes and Characteristics
Registered Users: Users who can book tickets and manage profiles.
Guests: Users who can view information but need to register to book tickets.
Admin: Manages the system by adding movies, managing schedules, and overseeing bookings.
2.4 Operating Environment
Web application compatible with modern browsers (Chrome, Firefox, Safari, etc.).
Mobile-responsive design for smartphone and tablet users.
2.5 Design and Implementation Constraints
Compliance with industry-standard security protocols.
Integration with third-party payment gateways.

3. Functional Requirements
3.1 User Registration and Login: Users can register by providing personal details and login credentials. A
login function with password recovery options.
3.2 Movie Listings and Search: The system displays a list of currently available movies.
Users can search for movies based on title, genre, or rating. Each movie page shows details such as
trailers, descriptions, reviews, and showtimes.
3.3 Showtimes and Seat Selection: Users can view showtimes for selected movies and select preferred
showtimes. Users can view seating layouts and choose available seats.
3.4 Ticket Booking and Confirmation: Users can confirm the booking by reviewing selected seats, date,
and time.System sends confirmation emails with e-tickets and booking details after successful payment.
3.5 Payment Processing: Secure payment integration (credit card, debit card, PayPal, etc.).
Explain different types of software maintenance. 10M
Over a software product's lifetime, the type of maintenance may vary based on its nature. It may be just a
routine maintenance task, such as fixing a bug discovered by a user, or it may be a large event in itself,
depending on the size and nature of the maintenance.
Following are some types of maintenance based on their characteristics:

Corrective Maintenance: Corrective maintenance deals with the repair of faults or defects
found in day-to-day system functions. A defect can result due to errors in software design,
logic and coding. Design errors occur when changes made to the software are incorrect,
incomplete, wrongly communicated, or the change request is misunderstood. Logical errors
result from invalid tests and conclusions, incorrect implementation of design specifications,
faulty logic flow, or incomplete test of data.

Adaptive Maintenance: Adaptive maintenance is the implementation of changes in a part of


the system, which has been affected by a change that occurred in some other part of the
system. Adaptive maintenance consists of adapting software to changes in the environment
such as the hardware or the operating system. Adaptive maintenance accounts for 25% of
all the maintenance activities.

Perfective Maintenance: Perfective maintenance mainly deals with implementing new or


changed user requirements. Perfective maintenance involves making functional
enhancements to the system in addition to the activities to increase the system's
performance even when the changes have not been suggested by faults. This includes
enhancing both the function and efficiency of the code and changing the functionalities of the
system as per the users' changing needs. Perfective maintenance accounts for 50%, that is,
the largest of all the maintenance activities.

Preventive Maintenance: Preventive maintenance involves performing activities to prevent


the occurrence of errors. It tends to reduce the software complexity thereby improving
program understandability and increasing software maintainability. It comprises
documentation updating, code optimization, and code restructuring. Preventive maintenance
is limited to the maintenance organization only and no external requests are acquired for this
type of maintenance. Preventive maintenance accounts for only 5% of all the maintenance
activities.
What are the different phases in the project life cycle? Explain with suitable examples.
The project life cycle refers to the series of phases that a project goes through from initiation to
completion. Each phase represents a distinct stage in the project where specific tasks and
objectives are accomplished. The phases in a project life cycle, with suitable examples, are:

1.​ Initiation Phase:


○​ This is the first phase of the project life cycle where the project is conceived, authorized,
and defined.
○​ Example: Imagine a company wants to develop a new mobile application. In the initiation
phase, the project stakeholders define the project’s purpose, scope, objectives, and initial
budget.
2.​ Planning Phase:
○​ In this phase, detailed planning is carried out to define project scope, objectives,
deliverables, resources, schedule, and budget.
○​ Example: Continuing with the mobile application project, in the planning phase, the
project team creates a project plan outlining tasks, milestones, dependencies, and
resource allocation. They also conduct risk analysis and develop a communication plan.
3.​ Execution Phase:
○​ This phase involves executing the project plan, coordinating resources, and completing
project deliverables as outlined in the plan.
○​ Example: For the mobile application project, the execution phase involves actual
development activities like coding, designing user interfaces, implementing features, and
conducting testing.
4.​ Controlling Phase:
○​ During this phase, project performance is monitored, and project activities are controlled
to ensure that the project stays on track in terms of scope, schedule, budget, and quality.
○​ Example: In the mobile application project, the project manager monitors progress
against the project plan, tracks budget expenditure, manages risks, and resolves
issues as they arise. The manager may also conduct regular meetings to review project status.
5.​ Closing Phase: This is the final phase of the project life cycle where the project is
formally completed, and project closure activities are conducted. Example: After the
mobile application is developed, tested, and deployed to users, the closing phase
involves activities such as obtaining client acceptance, documenting lessons learned,
releasing project resources, and handing over deliverables to the operations team.
Describe the details of FTR and Walkthrough.
FTR (Formal Technical Review) and Walkthrough are both methods used in software
engineering for inspecting and reviewing software artifacts, such as requirements documents,
design specifications, or code. Let’s break down each one:

1.​ Formal Technical Review (FTR):


●​ Purpose: FTR aims to identify defects and issues in software artifacts early in the
development process.
●​ Participants: Typically involves a formal group of reviewers, including developers,
testers, project managers, and sometimes users or stakeholders.
●​ Process:
○​ Preparation: Before the review meeting, the artifact to be reviewed is distributed
to all participants. Reviewers examine the artifact thoroughly, noting any defects,
ambiguities, or inconsistencies.
○​ Meeting: In a structured meeting, the reviewers discuss their findings and
document them. Discussions may involve clarifying requirements, questioning
design decisions, or pointing out potential problems.
○​ Follow-up: After the meeting, the author of the artifact addresses the identified
issues, making corrections or modifications as necessary.
●​ Documentation: The results of the review, including identified issues and actions taken,
are documented for future reference.

2. Walkthrough:

●​ Purpose: Walkthroughs are more informal compared to FTR and are primarily aimed at
familiarizing team members with the content of a document or code.
●​ Participants: Typically involves the author of the artifact and a small group of peers or
stakeholders.
●​ Process:
○​ Presenter: The author presents the artifact, walking through its content,
explaining its purpose, structure, and key points.
○​ Discussion: Participants can ask questions, seek clarification, or provide
feedback during the walkthrough. This interaction helps in ensuring everyone
understands the artifact and can provide valuable insights.
○​ Note-taking: Notes may be taken during the walkthrough to capture feedback,
questions, and suggestions.
●​ Follow-up: After the walkthrough, the author may revise the artifact based on the
feedback received.
What is a project, and what are the different metrics used in software management?
A project is a temporary endeavor undertaken to create a unique product, service, or result. In software
management, metrics are used to track and assess project performance, identify issues, and improve
processes.
Metrics are crucial for software management because they provide quantifiable data that can be used to:
Track Project Progress: Monitor the project's performance against its planned schedule and budget.
Identify Issues: Detect potential problems and risks early in the project lifecycle.
Improve Project Performance: Use data to identify areas for improvement and make changes that will
lead to better outcomes.
Demonstrate Project Value: Show stakeholders the impact of the project and its contribution to the
organization's goals.
Different Types of Metrics in Software Management:
Project Metrics:
These metrics focus on the overall health and performance of the software project.
Examples: Schedule Variance: Measures the difference between planned and actual project completion time.
Cost Variance: Tracks the difference between planned and actual project costs.
Productivity: Measures the output of the development team, often expressed as lines of code per developer per month or similar.

Process Metrics:
These metrics focus on the efficiency and effectiveness of the software development processes.
Examples: Defect Density: Measures the number of defects per unit of code (e.g., defects per thousand lines of code).
Test Coverage: Measures the percentage of code that is covered by tests.
Time to Market: Measures the time it takes to release a software product.

Product Metrics:
These metrics focus on the characteristics and quality of the software product itself.
Examples: Code Complexity: Measures the difficulty of understanding and maintaining the code.
Maintainability: Measures how easy it is to modify and update the software.
Usability: Measures how easy it is for users to use the software.

Organizational Metrics:
These metrics focus on the impact of the project on the organization as a whole. Examples include:
Employee Satisfaction: Measures the satisfaction of team members with the project.
Customer Satisfaction: Measures the satisfaction of users with the software product.
Return on Investment (ROI): Measures the financial benefits of the project.
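Several of the metrics above reduce to simple formulas. The sketch below uses common simple forms (planned minus actual, and defects per KLOC); these definitions are assumptions for illustration, and organizations may define them differently (e.g., earned-value SV/CV).

```python
# Illustrative formulas for three of the metrics above. The exact definitions
# vary by organization; these simple forms are assumed for illustration.

def schedule_variance(planned_days, actual_days):
    """Positive = ahead of schedule, negative = behind schedule."""
    return planned_days - actual_days

def cost_variance(planned_cost, actual_cost):
    """Positive = under budget, negative = over budget."""
    return planned_cost - actual_cost

def defect_density(defects, loc):
    """Defects per thousand lines of code (KLOC)."""
    return defects / (loc / 1000)

# Example: a project planned for 30 days that took 35 has a schedule
# variance of -5 days; 20 defects in 10,000 LOC is a density of 2.0/KLOC.
print(schedule_variance(30, 35), defect_density(20, 10_000))
```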
Software Testing is the process of verifying and validating whether a software product or application is
working as expected. Complete testing includes identifying the errors and bugs that would cause future
problems for the performance of an application.
Software testing is mainly divided into two parts, both used in the software development process:
Verification: This step involves checking whether the software is doing what it is supposed to do. It's like asking,
"Are we building the product the right way?"
Validation: This step verifies that the software actually meets the customer's needs and requirements. It's
like asking, "Are we building the right product?"
Importance of Software Testing:
1. Defects can be Identified Early: Software testing is important because any bugs can be identified
early and fixed before the delivery of the software.
2. Improves Quality of Software: Software testing uncovers the defects in the software, and fixing them
improves the quality of the software.
3. Increased Customer Satisfaction: Software testing ensures reliability, security, and high performance,
which results in saved time and costs and in customer satisfaction.
4. Helps with Scalability: Non-functional testing helps to identify scalability issues and the point where an
application might stop working.
5. Saves Time and Money: After the application is launched, it is very difficult to trace and resolve issues,
and doing so incurs more cost and time.

Software Testing Types


1. Manual Testing: A technique in which the software is tested by hand, exercising the functions and
features of an application one by one and checking manually whether each works as expected. It is
further divided into black-box, white-box, and gray-box testing.

2. Automation Testing: A technique in which the tester writes scripts independently and uses suitable
software or automation tools to test the software. It is the automation of a manual process and allows
repetitive tasks to be executed without a manual tester.
Types of Manual Testing

White Box Testing : is a software testing technique that involves testing the internal structure and
workings of a software application. The tester has access to the source code and uses this knowledge to
design test cases that can verify the correctness of the software at the code level.

Black-box testing : is a type of software testing in which the tester is not concerned with the internal
knowledge or implementation details of the software but rather focuses on validating the functionality
based on the provided specifications or requirements.
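Because a black-box tester sees only the specification, test cases are derived from the stated behavior, typically probing the boundaries of valid input. A minimal sketch, assuming a hypothetical `validate_age` function specified as "accept ages 18 to 60 inclusive" (the function and its rule are stand-ins, not from the text above):

```python
# Black-box sketch: test a hypothetical validate_age() purely against its
# specification ("accept ages 18-60 inclusive"), with no knowledge of its
# internal implementation.

def validate_age(age):
    # Stand-in implementation; a black-box tester would never see this body.
    return 18 <= age <= 60

# Boundary-value cases derived from the specification alone: just below,
# at, and just above each boundary.
cases = [(17, False), (18, True), (60, True), (61, False)]
for value, expected in cases:
    assert validate_age(value) == expected
```

Note that the same four cases would be reused unchanged even if the implementation were completely rewritten, which is the defining property of black-box testing.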

Gray Box Testing : is a software testing technique that is a combination of the Black Box Testing
technique and the White Box Testing technique. The internal structure is partially known in Gray Box
Testing

Types of Black Box Testing


1. Functional Testing : is a type of Software Testing in which the system is tested against the functional
requirements and specifications. Functional testing ensures that the requirements or specifications are
properly satisfied by the application. This type of testing is particularly concerned with the result of
processing.
2. Non-functional Testing: A software testing technique that checks the non-functional attributes of the
system. It is designed to test the readiness of a system as per non-functional parameters (such as
performance, usability, and compatibility), which are never addressed by functional testing.

Types of Functional Testing

1. Unit Testing: A method of testing individual units or components of a software application. It is typically
done by developers and is used to ensure that the individual units of the software are working as intended.
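As a sketch of what a developer-written unit test looks like, the example below tests one isolated function using Python's built-in unittest module. The function, its discount rule, and the test values are assumptions for illustration.

```python
import unittest

# Hypothetical function under test; its rules are assumed for this sketch.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_normal_discount(self):
        # A 10% discount on 200.0 should yield 180.0.
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        # Out-of-range discounts must raise an error.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Each test exercises the unit in isolation, which is why unit testing can run early, before any integration with other modules. The tests can be run with `python -m unittest` against the file containing them.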

2.Integration testing is a method of testing how different units or components of a software application
interact with each other. It is used to identify and resolve any issues that may arise when different units of
the software are combined. Integration testing is typically done after unit testing and before functional
testing and is used to verify that the different units of the software work together as intended.
• Different ways of Integration Testing are discussed below.
1. Top-down IT: It starts with the highest-level modules and integrates them with lower-level modules.
2. Bottom-up IT : It starts with the lowest-level modules and integrates them with higher-level modules.
3. Big-Bang IT : It combines all the modules and integrates them all at once.
4. Incremental IT : It integrates the modules in small groups, testing each group as it is added.

3. System Testing: is a type of software testing that evaluates the overall functionality and performance of
a complete and fully integrated software solution. It tests if the system meets the specified requirements
and if it is suitable for delivery to the end-users. This type of testing is performed after the integration
testing and before the acceptance testing.

4. End-to-end Testing: The type of software testing used to test the entire software from start to end,
along with its integration with external interfaces. The main purpose of end-to-end testing is to identify
system dependencies and to ensure data integrity and communication with other systems, interfaces, and
databases in a complete, production-like scenario.
5. Acceptance Testing : It is formal testing according to user needs, requirements, and business
processes conducted to determine whether a system satisfies the acceptance criteria or not and to enable
the users, customers, or other authorized entities to determine whether to accept the system or not.

Types of Non Functional Testing:

1. Performance Testing: a type of software testing that focuses on evaluating the performance and
scalability of a system or application. The goal of performance testing is to identify bottlenecks, measure
system performance under various loads and conditions, and ensure that the system can handle the
expected number of users or transactions.

2. Usability Testing: Suppose you design a product (say, a refrigerator); when it is completely ready, you
need potential customers to test it to check that it works and to understand whether it is ready to come to
market. Likewise, in usability testing, the software undergoes testing performed by potential users before it
is launched into the market. It is a part of the software development lifecycle (SDLC).

3. Compatibility Testing: It is performed on an application to check its compatibility (running capability) on


different platforms/environments. This testing is done only when the application becomes stable. Simply
put, the compatibility test aims to check the developed software application's functionality on various
software and hardware platforms, networks, browsers, etc. This testing is very important from a product
production and implementation point of view, as it is performed to avoid future compatibility issues.

DIFFERENCE BETWEEN ALPHA AND BETA TESTING


Principles of s/w testing
1. Testing shows the Presence of Defects
Software testing talks about the presence of defects and does not talk about the absence of defects.
Software testing can show that defects are present, but it cannot prove that the software is defect-free.
Even multiple tests can never ensure that software is 100% bug-free. Testing can reduce the number of
defects but not remove all defects.

2. Exhaustive Testing is not Possible


Testing the functionality of the software with all possible inputs (valid or invalid) and pre-conditions is
known as exhaustive testing. Exhaustive testing is impossible, meaning the software can never be tested
with every possible test case.
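A back-of-the-envelope calculation makes the impossibility concrete. The throughput figure below (one million tests per second) is an assumed, optimistic rate, not a measurement.

```python
# Why exhaustive testing is impossible: even one 32-bit integer input
# already has about 4.3 billion possible values.

cases_one_input = 2 ** 32          # all values of a single 32-bit integer
cases_two_inputs = 2 ** 64         # all combinations of two such integers

tests_per_second = 1_000_000       # assumed (very optimistic) throughput
seconds_per_year = 60 * 60 * 24 * 365

years_for_two = cases_two_inputs / tests_per_second / seconds_per_year
print(f"{cases_one_input:,} cases for one 32-bit input")
print(f"~{years_for_two:,.0f} years to exhaustively cover two such inputs")
```

Covering two 32-bit inputs at a million tests per second would take over half a million years, which is why testers sample inputs (e.g., via boundary-value analysis) rather than enumerate them.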

3. Early Testing
To find defects in the software, test activities shall be started early. A defect detected in the early
phases of the SDLC is much less expensive to fix. For better software quality, testing should start
at the initial phase, i.e., at the requirement analysis phase.

4. Defect Clustering
In a project, a small number of modules can contain most of the defects. The Pareto Principle for software
testing states that 80% of software defects come from 20% of modules.

5. Pesticide Paradox
Repeating the same test cases, again and again, will not find new bugs. So it is necessary to review the
test cases and add or update test cases to find new bugs.

6. Testing is Context-Dependent
The testing approach depends on the context of the software developed. Different types of software need
to perform different types of testing. For example, The testing of the e-commerce site is different from the
testing of the Android application.

7. Absence of Errors Fallacy


If the software built is 99% bug-free but does not meet the user's requirements, it is unusable. It is not
enough for software to be 99% bug-free; it is also mandatory that it fulfill all the customer's
requirements.
RISK MITIGATION, MONITORING & MANAGEMENT (RMMM)

• R.M.M.M stands for risk mitigation, monitoring and management.


• The Project manager uses RMMM plan as part of the overall project plan.
• Risk is documented with the help of a Risk Information Sheet(RIS).
• This RIS is controlled by using a database system for easier management of information.
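A Risk Information Sheet record, as it might be stored in the database mentioned above, could be sketched as follows. The field names and the exposure formula (probability times cost) are assumptions for illustration; real RIS formats vary by organization.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch of a Risk Information Sheet (RIS) record. Field
# names and value ranges are assumed, not taken from the text above.

@dataclass
class RiskInformationSheet:
    risk_id: str
    description: str
    probability: float          # estimated likelihood, 0.0-1.0
    impact: str                 # e.g. "catastrophic", "critical", "marginal"
    mitigation_plan: str
    status: str = "open"
    date_raised: date = field(default_factory=date.today)

    def exposure(self, cost_if_occurs):
        """Risk exposure = probability x cost incurred if the risk occurs."""
        return self.probability * cost_if_occurs
```

For example, a risk with probability 0.3 and a potential cost of 10,000 has an exposure of 3,000, which helps the project manager prioritize which risks to mitigate first.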

Risk Mitigation: means preventing the risk from occurring (risk avoidance).


Following are the steps to be taken for mitigating the risks.
• Communicate with the concerned staff to find probable risk.
• Find out and eliminate all those causes that can create risk before the project starts.
• Develop a policy in the organization which will help the project continue even if some staff leave
the organization.

Risk Monitoring: is an activity used to track a project’s progress.


In risk monitoring process following things must be monitored by the project manager :
• The approach of the team members as the pressure of the project varies.
• The degree in which the team performs with the spirit of "team-work".
• The type of cooperation among the team members.
• The types of problems that are occurring.
• The objectives of risk monitoring are: A) To check whether the predicted risks really occur or not.
B)To ensure the steps defined to avoid the risk are applied properly or not.
C)To gather the information which can be useful for future risk assessment.
D) To determine which risks generate which problems throughout the project.

Risk Management:
• It is a reactive approach which can be applied after risks have been generated.
• Project managers perform this task when risk becomes a reality.
• If the project manager has applied risk mitigation effectively, then managing the risks becomes
much easier.
• For example, Consider a scenario that many people are leaving the organization then:
1. If sufficient additional staff is available;
2. If current development activity is known to everybody in the team;
3. If latest and systematic documentation is available;
• Then any newcomer can easily understand the current development activity.
• This will ultimately help in continuing the work without any problems.

RMMM Plan example:


• Risk generated: Late delivery of project
1. Mitigation: Before development, apply some precautionary measures. The developer already knew the
project would be completed in 20 days, but told the customer it would be completed in 30 days.
2. Monitoring: Develop a project schedule mentioning the start and end dates of the project, within the 20-to-30-day window.
3. Management: If the project is not completed within the deadline, then negotiate with the customer.
SOFTWARE CONFIGURATION MANAGEMENT

When we develop software, the product (software) undergoes many changes during its development phase.
Changes can be in the requirements, the team or organization, government policies and rules, or the project schedule.
• Software configuration management(SCM) is the process to systematically manage, organize, and
control changes in documents, code and other entities during software development life cycle.
• Several individuals work together to achieve these common goals. These individuals produce several work
products (SC items), e.g., intermediate versions of modules or test data used during debugging.
A configuration of the product refers not only to the product’s component but also to a particular version
of the component.
• Therefore, SCM is the discipline which
Identify change
Monitor and control change
Ensure the proper implementation of change made to the item.
Auditing and reporting on the change made.
• Configuration Management (CM) is a technique of identifying, organizing, and controlling modification to
software being built by a programming team. The objective is to maximize productivity by minimizing errors.

Importance of Configuration Management


• Multiple people are working on software which is consistently updating. SCM helps teams to collaborate
and coordinate their work.
• It provides the tool to ensure that changes are being properly implemented.
• It manages and tracks different versions of the system and allows reverting to earlier versions if necessary.
• It ensures that software systems can be easily replicated and distributed to other environments such as
test, production and customer sites.
• It improves the quality and reliability of software systems, as well as increases efficiency and reduces the risk of errors.

SCM Repository : The SCM repository is the set of mechanisms and data structures that allow a software
team to manage change in an effective manner.
• It manages version control, change
control and release control process.

SCM Features:
• Versioning: As a project progresses, many
versions of individual work products will be
created. The repository must be able to
save all of these versions to enable
effective management of product releases
and to permit developers to go back to
previous versions during testing and
debugging.

• Dependency tracking and change management: The repository manages a wide variety of relationships
among the data elements stored in it. Some of these relationships are merely associations, and some are
dependencies or mandatory relationships. The repository keeps track and manages all of these
relationships which is crucial to the integrity of the information stored in the repository and to the
generation of deliverables based on it. For example, UML diagrams.
• Requirements tracing :
This special function depends on link management and provides the ability to track all the design and
construction components that result from a specific requirements specification (forward tracing).
In addition, it provides the ability to identify which requirement is responsible for which feature; this is
done after software development to check whether changes were made properly as per the requirements
(backward tracing).

• Configuration management:
A configuration management facility keeps track of a series of configurations representing specific
project milestones or production releases.

Audit trails:
• An audit trail establishes additional information about when, why, and by whom changes are made.
Information about the source of changes can be entered as attributes of specific objects in the repository.

SCM Process:
• It uses the tools which keep track of the necessary change has been implemented adequately to the
appropriate component. The SCM process defines a number of tasks:
1. Planning and Identification
2. Version Control and Baseline
3. Change Control
4. Configuration Audit
5. Status Reporting

Planning and Identification:


• In this step, the goal is to plan the development of the software project and
identify the items within its scope.
• This is accomplished by having meetings and brainstorming sessions
with your team to figure out the basic criteria for the rest of the project.
• Part of this process involves figuring out how the project will proceed
and identifying the exit criteria.
• This way, your team will know how to recognize when all of the goals of the project have been met.
• Specific activities during this step include:
Identifying items like test cases, specification requirements, code modules, and schedule time.
Identifying each computer software configuration item in the process
Group basic details of why, when & what changes will be made & who will be in charge of making them
Create a list of necessary resources, like tools, files, documents, etc.

Version Control and Baseline:


• The version control and baseline step ensures the continuous integrity of the product by identifying an
accepted version of the software.
• The point of this step is to control the changes being made to the product. As the project develops, new
baselines are established, resulting in several versions of the software.
• This step involves the following activities:
Identifying and classifying the components that are covered by the project.
Developing a way to track the hierarchy of different versions of the software.
Identifying the essential relationships between various components.
Change Control:
• Change control is the method used to ensure that any changes made are consistent with the rest of the project.
• Change control is essential to the successful completion of the project.
• Having these controls in place helps with quality assurance and the approval and release of new baselines.
• In this step, requests to change configurations are submitted to the team and approved or denied by the
software configuration manager.
• The most common types of requests are to add or edit various configuration items, or to change user permissions.

Configuration Status Accounting:


• The next step is to ensure the project is developing according to the plan by testing and verifying
according to the predetermined baselines.
• It involves looking at release notes and related documents to ensure the software meets all functional requirements.
• Configuration status accounting tracks each version released during the process, assessing what is new
in each version and why the changes were necessary.

Audits and Reviews:


• The final step is a technical review of every stage in the software development life cycle.
• Audits and reviews look at the process, configurations, workflow, change requests, and everything that
has gone into developing each baseline throughout the project’s development.
• The team performs multiple reviews of the application to verify its integrity and also put together
essential accompanying documentation such as release notes, user manuals, and installation guides.

White box testing is a Software Testing Technique that involves testing the internal structure and
workings of a Software Application. The tester has access to the source code and uses this knowledge to
design test cases that can verify the correctness of the software at the code level.
White box testing is also known as Structural Testing or Code-based Testing, and it is used to test the
software’s internal logic, flow, and structure. The tester creates test cases to examine the code paths and
logic flows to ensure they meet the specified requirements.

Types Of White Box Testing:


There are three main types of white box testing, as follows:
Unit Testing: Unit testing checks if each part or function of the application works correctly. It checks that
the application meets design requirements during development.
Integration Testing: Integration testing examines how different parts of the application work together. It is
done after unit testing to make sure components work well both alone and together.
Regression Testing: Regression testing verifies that changes or updates don't break existing functionality
of the code. It checks that the application still passes all existing tests after updates.

white box testing uses the following techniques:

1. Statement Coverage: In this technique, the aim is to traverse all statements at least once; hence, each
line of code is tested. In the case of a flowchart, every node must be traversed at least once. Since all
lines of code are covered, it helps in pointing out faulty code.
2. Branch Coverage: Branch coverage focuses on testing the decision points or conditional branches in
the code. It checks whether both possible outcomes (true and false) of each conditional statement are
tested. In this technique, test cases are designed so that each branch from all decision points is traversed
at least once. In a flowchart, all edges must be traversed at least once.
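A small illustration of the difference between statement and branch coverage, using a hypothetical function (the function name and test values are invented for this sketch):

```python
def classify(x):
    result = "non-negative"   # statement 1
    if x < 0:                 # decision point (one true/false branch pair)
        result = "negative"   # statement 2
    return result             # statement 3

# Statement coverage: the single test x = -1 executes every line,
# yet the False outcome of the decision is never exercised.
assert classify(-1) == "negative"

# Branch coverage: both outcomes of the decision are exercised.
assert classify(-1) == "negative"     # True branch
assert classify(5) == "non-negative"  # False branch
```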

3. Condition Coverage: In this technique, all individual conditions must be covered, as shown in the example:
READ X, Y
IF(X == 0 || Y == 0)
PRINT ‘0’
#TC1 – X = 0, Y = 55
#TC2 – X = 5, Y = 0

4. Multiple Condition Coverage: In this technique, all the possible combinations of the possible outcomes
of conditions are tested at least once. Let’s consider the following example:
READ X, Y
IF(X == 0 || Y == 0)
PRINT ‘0’
#TC1: X = 0, Y = 0
#TC2: X = 0, Y = 5
#TC3: X = 55, Y = 0
#TC4: X = 55, Y = 5
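The four test cases TC1–TC4 can be generated mechanically by enumerating every combination of condition outcomes; a Python sketch of the same example (the function name and sample values are invented):

```python
from itertools import product

def prints_zero(x, y):
    # Mirrors IF(X == 0 || Y == 0) from the example above.
    return x == 0 or y == 0

# One representative input for each truth value of each condition.
values_x = {True: 0, False: 55}  # outcome of X == 0 -> sample X
values_y = {True: 0, False: 5}   # outcome of Y == 0 -> sample Y

# product() yields all 4 combinations, matching TC1..TC4 above.
for cx, cy in product([True, False], repeat=2):
    x, y = values_x[cx], values_y[cy]
    assert prints_zero(x, y) == (cx or cy)
```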

5. Basis Path Testing: In this technique, control flow graphs are made from code or flowchart and then
Cyclomatic complexity is calculated which defines the number of independent paths so that the minimal
number of test cases can be designed for each independent path.
Steps:
Make the corresponding control flow graph
Calculate the cyclomatic complexity
Find the independent paths
Design test cases corresponding to each independent path
V(G) = P + 1, where P is the number of predicate nodes in the flow graph
V(G) = E – N + 2, where E is the number of edges and N is the total number of nodes
V(G) = Number of non-overlapping regions in the graph
#P1: 1 – 2 – 4 – 7 – 8
#P2: 1 – 2 – 3 – 5 – 7 – 8
#P3: 1 – 2 – 3 – 6 – 7 – 8
#P4: 1 – 2 – 4 – 7 – 1 – . . . – 7 – 8
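The three formulas agree on the flow graph implied by paths P1–P4; a quick check in Python (the edge list is reconstructed from the listed paths, so it is an assumption):

```python
# Nodes 1..8; edges reconstructed from paths P1..P4 above.
edges = {(1, 2), (2, 3), (2, 4), (3, 5), (3, 6),
         (4, 7), (5, 7), (6, 7), (7, 8), (7, 1)}
nodes = {n for edge in edges for n in edge}

E, N = len(edges), len(nodes)
v_g = E - N + 2                    # V(G) = E - N + 2 = 10 - 8 + 2
assert v_g == 4                    # four independent paths P1..P4

predicates = {2, 3, 7}             # nodes with two outgoing edges
assert len(predicates) + 1 == v_g  # V(G) = P + 1 agrees
```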

6. Loop Testing: Loops are widely used and these are fundamental to many algorithms hence, their
testing is very important. Errors often occur at the beginnings and ends of loops.
Simple loops: For simple loops of size n, test cases are designed that:
Skip the loop entirely
Only one pass through the loop
2 passes
m passes, where m < n
n-1, n, and n+1 passes
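These boundary cases translate directly into tests; a sketch for a loop whose pass count depends on its input (the function name and size are invented):

```python
def count_passes(limit, max_size=5):
    """A simple loop of size n = max_size; 'limit' controls how many
    passes are actually made (never more than max_size)."""
    passes = 0
    i = 0
    while i < limit and i < max_size:
        passes += 1
        i += 1
    return passes

n = 5
assert count_passes(0) == 0          # skip the loop entirely
assert count_passes(1) == 1          # only one pass
assert count_passes(2) == 2          # two passes
assert count_passes(3) == 3          # m passes, m < n
assert count_passes(n - 1) == n - 1  # n-1 passes
assert count_passes(n) == n          # n passes
assert count_passes(n + 1) == n      # n+1 requested; loop caps at n
```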

Software Quality Assurance (SQA) is a planned effort to ensure that a software product fulfills its specified
requirements and has additional attributes specific to the project, e.g., portability, efficiency, reusability, and flexibility.

•It is the collection of activities and functions used to monitor and control a software project so that
specific objectives are achieved with the desired level of confidence.

• It is not the sole responsibility of the software quality assurance group but is determined by the
consensus of the project manager, project leader, project personnel, and users.

•Quality assurance is the set of support activities needed to provide adequate confidence that processes
are established and continuously improved in order to produce products that meet specifications and are
fit for use.

The Software Quality Assurance (SQA) focuses on the following

Software's portability: Software's portability refers to its ability to be easily transferred or adapted to
different environments or platforms without needing significant modifications. This ensures that the
software can run efficiently across various systems, enhancing its accessibility and flexibility.

software's usability: Usability of software refers to how easy and intuitive it is for users to interact with
and navigate through the application. A high level of usability ensures that users can effectively
accomplish their tasks with minimal confusion or frustration, leading to a positive user experience.

software's reusability: Reusability in software development involves designing components or modules
that can be reused in multiple parts of the software or in different projects. This promotes efficiency and
reduces development time by eliminating the need to reinvent the wheel for similar functionalities,
enhancing productivity and maintainability.

software's correctness: Correctness of software refers to its ability to produce the desired results under
specific conditions or inputs. Correct software behaves as expected without errors or unexpected
behaviors, meeting the requirements and specifications defined for its functionality.

software's maintainability: Maintainability of software refers to how easily it can be modified, updated, or
extended over time. Well-maintained software is structured and documented in a way that allows
developers to make changes efficiently without introducing errors or compromising its stability.

software's error control: Error control in software involves implementing mechanisms to detect, handle,
and recover from errors or unexpected situations gracefully.

Agile Software Development Model is a combination of iterative and incremental process models with
focus on process adaptability and customer satisfaction by rapid delivery of working software product.
• Agile methods break tasks into smaller iterations, or parts that do not directly involve long term
planning.
• The project scope and requirements are laid down at the beginning of the development process.
• Plans regarding the number of iterations, the duration and the scope of each iteration are clearly defined
in advance.
• Each iteration is considered as a short time "frame" in the Agile process model, which typically lasts
from one to four weeks.
• The division of the entire project into smaller parts helps to minimize the project risk and to reduce the
overall project delivery time requirements.
• Each iteration involves a team working through a full software development life cycle including planning,
requirements analysis, design, coding, and testing before a working product is demonstrated to the client.

Agile Process (Agile Software Development Model)


Agile Principles:
• The highest priority of this process is to satisfy the customer.
• Acceptance of changing requirements even late in development.
• Frequently deliver working software in a small time span.
• Throughout the project business people and developers work together on a daily basis.
• Projects are created around motivated people if they are given the proper environment and support.
• Face to face interaction is the most efficient method of moving information in the development team.

Phases of Agile Model:


• Requirements gathering : In this phase, you must define the requirements. You should explain business
opportunities and plan the time and effort needed to build the project. Based on this
information, you can evaluate technical and economic feasibility.

• Design the requirements : When you have identified the project, work with stakeholders to define
requirements. You can use the user flow diagram or the high-level UML diagram to show the work of
new features and show how it will apply to your existing system.

• Construction/ iteration
When the team defines the requirements, the work begins. Designers and developers
start working on their project, which aims to deploy a working product. The product
will undergo various stages of improvement, so it includes simple, minimal functionality.

• Testing/ Quality assurance : In this phase, the Quality Assurance team examines the product's
performance and looks for the bug.

• Deployment : In this phase, the team issues a product for the user's work environment.

• Feedback : After releasing the product, the last step is feedback. In this, the team receives
feedback about the product and works through the feedback.
Advantages:
• Is a very realistic approach to software development. • Little or no planning required.
• Promotes Teamwork and cross training. • Easy to manage. • Gives flexibility to developers.
• Functionality can be developed rapidly and demonstrated.
• Resource requirements are minimum. • Suitable for fixed or changing requirements
• Delivers early partial working solutions. • Good model for environments that change steadily.

Disadvantages:
• Not Suitable for handling complex dependencies.
• More risk of sustainability, maintainability and extensibility.
• Depends heavily on customer interaction, so if the customer is not clear, the team can be driven in the
wrong direction.
• Transfer of technology to new team members may be quite challenging due to lack of documentation.

Scrum is a type of Agile framework. • It is a framework within which people can address complex
adaptive problems while the productivity and creativity of product delivery are at the highest possible values.
Features of Scrum:
• Scrum is light-weighted framework
• Scrum emphasizes self-organization
• Scrum is simple to understand
• Scrum framework help the team to work together

Scrum Lifecycle:
• Sprint : A Sprint is a time-box of one month or less.
A new Sprint Starts immediately after the completion of the previous Sprint.
• Release: When the product is completed then it goes to the Release stage.
• Sprint Review: If the product still has some unachieved features, they are checked in this stage, and
then the product is passed to the Sprint Retrospective stage.
• Sprint Retrospective: In this stage the quality or status of the product is checked.
• Product Backlog: The product features are organized according to their priority.
• Sprint Backlog: It is divided into 2 parts: the product features assigned to the sprint, and the sprint planning meeting.

Advantages:
• The Scrum framework is fast-moving and cost-efficient. • Scrum works by dividing the large
product into small sub-products, like a divide-and-conquer strategy.
• In Scrum, customer satisfaction is very important. • Scrum is adaptive in nature because it has short
sprints. • As the Scrum framework relies on constant feedback, the quality of the product increases
in less time.

Disadvantages:
• Scrum frameworks do not allow changes into their sprint.
• The Scrum framework is not a fully described model. If you want to adopt it you need to fill in the
framework with your own details like Extreme Programming(XP), Kanban, DSDM.
• The daily Scrum meetings and frequent reviews require substantial resources.

Kanban is a popular framework which is used to implement agile software development.
• Kanban was developed by Taiichi Ohno, an industrial engineer at Toyota. Kanban means ‘visual card’.
• It takes real time communication of capacity and complete transparency of work.
• The work items are represented in a Kanban board visually, allowing team members to see
the state of every piece of work at any time.
• Kanban Boards: The Kanban board is the agile project management tool that designs the necessary
visualized work, limited work-in-progress, and maximizes flow (or efficiency). • It uses cards, columns, and
provides continuous improvement to help technology and service teams who commit the right amount of
work and get it done.

Elements of Kanban Boards:

1. Visual Signals: The Kanban board is built from visual cards (stickies, tickets, or otherwise). • Kanban
teams write their projects and work items onto cards, usually one item per card. • For agile teams, each card
could encapsulate one user story. • Once the board is populated, these visual signals help team members
and stakeholders quickly understand what the team is working on.

2. Columns: • The column represents the specific activities that compose a "workflow" together.
• The card flows through a workflow until its completion.

3. Work In Progress (WIP) Limits: The work-in-progress limit is the maximum number of cards that
can be in one column at a time. • When the limit is reached, it gives an alert signal that you have committed to too much work.
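A WIP limit is simple to express in code; a toy sketch (the function and card names are invented for illustration):

```python
def can_pull(column_cards, wip_limit):
    """A new card may enter the column only if it stays under the limit."""
    return len(column_cards) < wip_limit

in_progress = ["card-1", "card-2", "card-3"]
assert can_pull(in_progress, 4) is True   # room for one more
assert can_pull(in_progress, 3) is False  # limit reached: alert signal
```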

4. Commitment Point: • Kanban teams also maintain a backlog for their board. • This is where
customers and team members put ideas for projects that the team can pick up. • Team members pick
up ideas when they are ready. • The commitment point is the moment when an idea is picked up by the
team and work starts on the project.

5. Delivery Point: • It is the end point of a Kanban team's workflow. • Mostly the delivery point for every
team is when the product and services are handed to the customer.

Software Project Scheduling is the responsibility of the project manager.
• Project schedule is a mechanism that is used to communicate and know about what tasks are required
and how to perform them.
• Project managers separate total work tasks in projects into different activities.
• Project managers estimate the time and resources required to complete tasks and organize them into sequence.
• Schedule revolves around time.
• Early stages of planning – macroscopic schedule - identifies all major software engineering activities
and the product functions to which they are applied.
• Project under way – detailed schedule - each entry on the macroscopic schedule is
refined. Here, specific software tasks are identified and scheduled.

Principles of Software Project Scheduling:


• Compartmentalization : The project must be divided into a number of manageable activities and tasks.
• Interdependency : The interdependency of each compartmentalized activity or task must be determined.
Some tasks must occur in sequence while others can occur in parallel.
• Time allocation : Each task assigned a start date and a completion date that are a function of the
interdependencies and whether work will be conducted on a full-time or part-time basis.
• Effort validation : Every project has a defined number of staff members.
As time allocation occurs, the project manager ensures that no more than the allocated number of
people has been scheduled at any given time. The PM allocates a number of people for each task.
• Defined responsibilities : Every task that is scheduled should be assigned to a specific team member.
• Defined outcomes : Every task that is scheduled should have a defined outcome, normally a work product or deliverable.
• Defined milestones : Every task or group of tasks should be associated with a project milestone.

Timeline Charts (Gantt Charts)


• A Gantt chart is a type of chart in which a series of
horizontal bars show the amount of work done or production completed in a given period
of time in relation to the amount planned for those projects.
• It is a horizontal bar chart developed by Henry L. Gantt (American engineer and social
scientist) in 1917 as a production control tool, and it is named after him.
• It is simply used for graphical representation of schedule that helps to plan in an efficient way,
coordinate, and track some particular tasks in a project.
• It can be developed for an entire project or it can be developed for individual functions.
• In most projects, after generation of timeline charts, project tables are prepared.
• In project tables, all tasks are listed in proper manner along with start date & end date, dependencies &
milestones.

• Key Components of a Gantt Chart


1.Tasks – Individual activities that make up the project.
2.Timeline – A horizontal axis representing project duration (days, weeks,months).
3.Bars – Represent the length of time each task takes.
4.Dependencies – Relationships between tasks (e.g., one task must be completed before another starts).
5.Milestones – Key events or significant points in the project.
6.Resources – Team members or departments assigned to task

• Advantages: Gantt charts are generally used for simplifying complex projects.
It brings efficiency in planning and allows teams to better coordinate project activities.

Software design is a process to plan or convert the software requirements into the steps needed
to develop a software system. Several principles are used to organize and
arrange the structural components of a software design. These principles are stated below

Principles Of Software Design :


Should not suffer from "Tunnel Vision" - While designing the process, it should not suffer from “tunnel
vision”, i.e., it should not focus only on completing or achieving the aim but should consider other effects too

Traceable to analysis model - The design process should be traceable to the analysis model which
means it should satisfy all the requirements that software requires to develop a high-quality product.

Should not "Reinvent The Wheel" - The design process should not reinvent the wheel, meaning it
should not waste time or effort creating things that already exist; doing so would only increase overall
development time.

Minimize Intellectual distance - The design process should reduce the gap between real-world problems
and software solutions for that problem meaning it should simply minimize intellectual distance.

Exhibit uniformity and integration - The design should display uniformity which means it should be uniform
throughout the process without any change. Integration means it should mix or combine all parts of
software i.e. subsystems into one system.

Accommodate change - The software should be designed in such a way that it accommodates the
change implying that the software should adjust to the change that is required to be done as per the
user's need.

Degrade gently - The software should be designed in such a way that it degrades gracefully, meaning
it should continue to provide useful partial functionality even if an error occurs during execution.

Assessed for quality - The design should be assessed or evaluated for quality, meaning that during the
evaluation, the quality of the design needs to be checked and focused on.

Review to discover errors - The design should be reviewed which means that the overall evaluation
should be done to check if there is any error present or if it can be minimized.

Design is not coding and coding is not design - Design means describing the logic of the program to solve
any problem and coding is a type of language that is used for the implementation of a design.

Unified Modelling Language (UML)
• Unified Modeling Language (UML) is a standardized visual modeling language that is a versatile,
flexible, and user-friendly method for visualizing a system’s design.
• Software system artifacts can be specified, visualized, built, and documented with the use of UML. • We
use UML diagrams to show the behavior and structure of a system.
• UML helps sw engineers, businessmen, and system architects with modeling, design, and analysis.

Types of UML Diagrams:


Class Diagram: Class diagrams are used to depict the static structure of a system by showing the system’s
classes, their methods, and attributes. Class diagrams also help us identify relationships between different
classes or objects.

Object Diagram: An Object Diagram can be referred to as a screenshot of the instances in a system and
the relationship that exists between them. An object diagram is similar to a class diagram except it shows
the instances of classes in the system.

State Diagram: A state diagram is a UML diagram used to represent the condition of the system, or
part of the system, at finite instances of time.

Activity Diagram: An activity diagram is basically a flowchart representing the flow from one activity to
another. An activity can be described as an operation of the system, and the control flow is drawn from
one operation to another.

Component Diagram: A component diagram shows how the components of a system are arranged and how
they relate to one another. Component diagrams are widely used in system design to promote modularity
and enhance understanding of the system architecture.

Deployment Diagram: A Deployment Diagram shows how the software design turns into the actual
physical system where the software will run. They show where software components are placed on
hardware devices and how they connect with each other. This diagram helps visualize how the software
will operate across different devices.

Use case diagram: A Use Case Diagram in Unified Modeling Language (UML) is a visual representation
that illustrates the interactions between users (actors) and a system. It captures the functional
requirements of a system, showing how different users engage with various use cases, or specific
functionalities, within the system.

Explain project scheduling and describe CPM and PERT.
Project scheduling is the process of determining start and end dates for project activities, allocating resources, and
establishing dependencies among tasks to achieve project objectives within defined constraints such as time, cost,
and resources. Effective scheduling helps in organizing and managing project activities, optimizing resource
utilization, and ensuring timely project completion. Two widely used methods for project scheduling are the
Critical Path Method (CPM) and the Program Evaluation and Review Technique (PERT).

Critical Path Method (CPM):


● Definition: CPM is a deterministic scheduling technique used to determine the longest sequence of
dependent activities (the critical path) in a project, which dictates the minimum duration required to complete the project.
● Key Features:
1. Activity-Based: Breaks down the project into a network of activities and their dependencies.
2. Deterministic: Relies on known activity durations and dependencies to calculate the project duration.
3. Focus on Critical Path: Identifies the sequence of activities that collectively determine the
shortest possible project duration.
4. Float Analysis: Identifies non-critical paths and activities with slack or float time, which
can be delayed without impacting project completion.
●​ Steps:
1.​ Identify project activities and their dependencies.
2.​ Estimate activity durations.
3.​ Construct a network diagram (Activity-on-Node or Activity-on-Arrow).
4.​ Calculate earliest start and finish times, and latest start and finish times for each activity.
5.​ Identify the critical path (activities with zero float) and total project duration.
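Steps 4–5 above can be sketched in a few lines of Python over a hypothetical four-activity network (the activity names, durations, and dependencies are invented for illustration):

```python
# activity: (duration, predecessors) -- listed in topological order.
activities = {
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
}

# Forward pass: earliest start (ES) and earliest finish (EF).
ES, EF = {}, {}
for name, (dur, preds) in activities.items():
    ES[name] = max((EF[p] for p in preds), default=0)
    EF[name] = ES[name] + dur

duration = max(EF.values())  # total project duration

# Backward pass: latest finish (LF) and latest start (LS).
LS, LF = {}, {}
for name in reversed(list(activities)):
    succs = [s for s, (_, preds) in activities.items() if name in preds]
    LF[name] = min((LS[s] for s in succs), default=duration)
    LS[name] = LF[name] - activities[name][0]

# Critical path = activities with zero float (ES == LS).
critical = [a for a in activities if ES[a] == LS[a]]
assert duration == 8 and critical == ["A", "C", "D"]
```

Here activity B has a float of LS["B"] − ES["B"] = 2, so it can slip two time units without delaying the project.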

Program Evaluation and Review Technique (PERT):


●​ Definition: PERT is a probabilistic scheduling technique used to analyze and represent the
uncertainty in project duration by considering three time estimates for each activity: optimistic
(O), most likely (M), and pessimistic (P).
● Key Features:
1. Probabilistic Approach: Incorporates uncertainty by using three time estimates for each activity.
2. Activity Duration Calculation: Calculates activity durations using a weighted average of
the three time estimates (the PERT formula).
3. Variability Focus: Identifies activities with the highest variability and potential impact on
project duration.
4. Probability Analysis: Estimates the probability of completing the project within a specified duration.
●​ Steps:
1.​ Identify project activities and their dependencies.
2.​ Determine optimistic (O), most likely (M), and pessimistic (P) time estimates for each
activity.
3.​ Calculate activity durations using the PERT formula: (O + 4M + P) / 6.
4.​ Construct a network diagram (similar to CPM).
5.​ Perform forward and backward pass calculations to determine earliest start and finish
times, latest start and finish times, and float.
6. Analyze the critical path and the uncertainty in project duration.
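The formula in step 3, together with the standard PERT variance ((P − O)/6)², for one hypothetical activity (the O/M/P values are invented):

```python
def pert(o, m, p):
    """Expected duration and variance under PERT's beta-distribution
    assumptions: E = (O + 4M + P) / 6, Var = ((P - O) / 6) ** 2."""
    expected = (o + 4 * m + p) / 6
    variance = ((p - o) / 6) ** 2
    return expected, variance

# Optimistic 2, most likely 4, pessimistic 8 (days).
e, v = pert(2, 4, 8)
assert round(e, 2) == 4.33 and v == 1.0
```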

What is Risk management? Discuss common sources of risk in IT projects.
Risk management is the process of identifying, assessing, prioritizing, and mitigating risks that could
potentially impact the success of a project or an organization.
Steps in Risk Management:

●​ Risk Identification:
1. This step involves identifying potential risks that could arise during the course of the project. Risks
can come from various sources such as technical, organizational, external, or environmental factors.
2.​ Techniques like brainstorming, checklists, interviews, and documentation review are
commonly used to identify risks.
●​ Risk Analysis:
1.​ Once risks are identified, they need to be analyzed to determine their potential impact and
likelihood of occurrence. This step helps in understanding the severity of each risk and
prioritizing them for further action.
2. It involves assessing the consequences of risks if they occur and the probability of their occurrence.
●​ Risk Planning:
1.​ After analyzing risks, a plan is developed to mitigate, avoid, transfer, or accept them. Risk
planning involves developing strategies and action plans to address identified risks.
2.​ Strategies could include risk avoidance (eliminating the risk altogether), risk mitigation
(reducing the impact or likelihood of the risk), risk transfer (shifting the risk to a third
party, like insurance), or risk acceptance (accepting the consequences if the risk occurs).
●​ Risk Monitoring:
1.​ Risk management is an ongoing process throughout the project lifecycle. Risk monitoring
involves tracking identified risks, assessing their status, and evaluating the effectiveness
of risk mitigation strategies.
2. It is important to regularly review and update the risk management plan as new risks emerge or
existing risks evolve.
●​ Common Sources of Risk in IT Projects:
1.​ Technological Complexity:
■​ Complexity in technology can lead to technical challenges, delays, and increased
costs if not managed properly.
2.​ Unclear Requirements:
■​ Incomplete or ambiguous requirements can lead to scope creep, rework, and
project delays.
3.​ Resource Constraints:
■​ Limited availability of skilled resources, hardware, or software can impact project
timelines and quality.
4.​ Integration Issues:
■​ Integration of different systems or components may pose compatibility issues,
leading to system failures or performance issues.
5. Security Threats:
■ Cybersecurity threats such as data breaches, malware attacks, or unauthorized
access can compromise the confidentiality, integrity, and availability of IT
systems.
