Final

The System Development Life Cycle (SDLC) is a structured process for developing software, encompassing phases from planning to maintenance, aimed at delivering high-quality systems that meet user expectations. It includes phases such as planning, analysis, design, development, testing, implementation, and maintenance, each with specific objectives and tasks. Additionally, feasibility analysis, cost-benefit analysis, and design methodologies such as top-down and bottom-up design are essential components in evaluating and executing software projects.

System Development Life Cycle (SDLC)

The System Development Life Cycle (SDLC) is a systematic process used to
develop software or information systems. It provides a structured framework that
outlines the phases involved in the development of a system, from the initial
planning to the final maintenance.

The main objective of SDLC is to deliver a high-quality system that meets or exceeds
user expectations, is completed within time and cost estimates, and functions
efficiently in the real world.

Phases of SDLC: The SDLC consists of the following phases:


1. Planning: In this initial phase, the project goals are defined, and a feasibility study is
conducted to evaluate whether the proposed system is practical. It includes identifying the scope,
resources, time, cost, and potential risks of the project.

2. System Analysis: This phase focuses on understanding the business requirements and
problems of the current system. Detailed requirements are gathered through techniques
such as interviews, questionnaires, and observation.

3. System Design: In the design phase, the logical and physical design of the system is prepared
based on the requirements identified in the analysis phase. This includes designing input/output
screens, the database structure, data flow diagrams, and user interfaces.

4. Development (Coding): This phase involves the actual writing of the code based on the system
design. Developers use suitable programming languages and follow standard coding practices to
build the system.

5. Testing: After development, the system undergoes rigorous testing to find and fix errors. Various
levels of testing are performed, such as unit testing, integration testing, system testing, and user
acceptance testing.

6. Implementation: Once the system is tested and approved, it is installed in the live environment.
Users are trained, and data is migrated from the old system to the new one.

7. Maintenance: After implementation, the system enters the maintenance phase, where it is
monitored for performance. Any issues, errors, or enhancement requests are handled during this
phase.
Advantages of SDLC:

 Provides a clear project roadmap.
 Improves quality and efficiency of the system.
 Allows for proper documentation.
 Ensures better management of time, cost, and resources.

Disadvantages of SDLC:

 Rigid and not suitable for projects with changing requirements (in traditional models).
 Time-consuming due to detailed documentation.
 Requires expert project management.
📘 Feasibility Analysis
Feasibility Analysis is the process of evaluating whether a proposed system or project is practical
and achievable with available resources. It helps decision-makers determine if the project should
proceed or be rejected.

🔷 Purpose of Feasibility Analysis:


 To assess the viability of a proposed system.
 To identify potential risks and limitations.
 To support decision-making in the planning phase.
 To ensure optimal use of resources (time, money, people).
🔶 Types of Feasibility:

Feasibility analysis is typically divided into the following major categories:

Type Description

1. Technical Feasibility – Checks whether the technology needed is available and practical.

2. Economic Feasibility – Evaluates cost vs. benefit. Is the project financially viable?

3. Operational Feasibility – Determines if the system will function in the user's environment.

4. Legal Feasibility – Checks compliance with laws and regulations.

5. Schedule Feasibility – Assesses whether the project can be completed within the time frame.

Technical Feasibility (In-Depth)

Technical Feasibility is an evaluation of whether the technical resources, tools,
hardware, software, and expertise required to develop and implement a system
are available and sufficient.

🔶 Purpose of Technical Feasibility:


 To assess if the system can be designed and implemented with current technology.
 To identify technical risks early in the project.
 To ensure the required infrastructure and skills are available.
 To guide the selection of appropriate tools and platforms.
Key Factors to Consider in Technical Feasibility:
1. Hardware Requirements:
 Is the existing hardware capable of supporting the new system?
 Will new hardware need to be purchased?
2. Software Requirements:
 Are appropriate software platforms (OS, DBMS, programming languages) available?
 Are there licensing or compatibility issues?
3. Technical Expertise:
 Do team members have the required technical knowledge and skills?
 Is training needed?
4. System Compatibility:
 Will the new system integrate with existing systems?
 Are data formats and communication protocols compatible?
5. Network Requirements:
 Is the current network infrastructure (bandwidth, security) sufficient?
 Will additional networking resources be needed?
6. Security and Reliability:
 Can the system be protected against data breaches or failures?
 Is backup and disaster recovery feasible?
7. Scalability:
 Can the system grow in the future if user base or data volume increases?

Cost-Benefit Analysis (CBA) is a financial technique used to compare the total
expected costs of a project against its total expected benefits, to determine whether
the project is economically viable.

It helps decision-makers evaluate whether a proposed system is worth implementing
by identifying if the benefits outweigh the costs.

🔷 Purpose of Cost-Benefit Analysis:

 To evaluate the economic feasibility of a system.
 To compare alternative solutions.
 To ensure that the project yields maximum return on investment (ROI).
 To make informed decisions during system planning.
🔶 Components of CBA:
1. Costs:

These are the total expenditures required to develop and operate the system.

Types of Costs:

 💻 Hardware Costs – Servers, computers, networking devices.
 💻 Software Costs – Licenses, development tools, databases.
 👷 Manpower Costs – Salaries of developers, analysts, testers.
 🛠️ Training Costs – User and technical training expenses.
 Operational Costs – Maintenance, electricity, upgrades.
2. Benefits:

These are the positive outcomes (monetary or non-monetary) the organization gains
from the system.

Types of Benefits:

 💰 Tangible Benefits – Can be measured in money (e.g., reduced labor costs,
increased sales).
 🌟 Intangible Benefits – Cannot be measured directly (e.g., improved user
satisfaction, better brand image).
📐 Formulas in Cost-Benefit Analysis

1. Net Present Value (NPV)

NPV = Σ (from t = 1 to n) [ (Benefits_t − Costs_t) / (1 + r)^t ]

Where:

 t = year
 r = discount rate
 Benefits_t = benefits in year t
 Costs_t = costs in year t

Interpretation:

 If NPV > 0, the project is economically feasible.
 If NPV < 0, the project should not be accepted.

2. Benefit-Cost Ratio (BCR)

BCR = Present Value of Total Benefits / Present Value of Total Costs

If BCR > 1, the project is profitable.
If BCR < 1, the project should be rejected.

3. Return on Investment (ROI)

ROI = (Net Benefit / Total Costs) × 100

Where: Net Benefit = Total Benefits – Total Costs

The COCOMO (Constructive Cost Model) is a software cost estimation model developed
by Barry W. Boehm in 1981. It is used to estimate the effort, cost, and time required to develop
a software project based on the size of the software (measured in Kilo Lines of Code, or KLOC).

🔷 Purpose of COCOMO:
 To predict project cost, effort, and schedule.
 To help in budgeting and resource planning.
 To aid in project evaluation and bidding.
🔷 Types of COCOMO Models:
COCOMO is divided into three models based on complexity and accuracy:

1. Basic COCOMO

A quick, rough estimate of software development effort.

2. Intermediate COCOMO

Adds cost drivers like experience, tools, etc., for more accurate estimation.

3. Detailed COCOMO

Considers each part of the system separately with all cost drivers in detail.

Basic COCOMO Model

The Basic COCOMO model estimates:

 Effort (E) in person-months
 Development Time (TDEV) in months

Formulas:

E = a × (KLOC)^b
TDEV = c × (E)^d

Where:

 KLOC = Thousands of Delivered Lines of Code
 a, b, c, d = Constants depending on the type of project

🔶 Project Categories in Basic COCOMO:


Project Type Description a b c d

Organic Simple projects, small teams, familiar environments 2.4 1.05 2.5 0.38

Semi-Detached Medium-size projects, mixed experience 3.0 1.12 2.5 0.35

Embedded Complex systems, real-time constraints 3.6 1.20 2.5 0.32

🔍 Intermediate COCOMO Model


 In addition to KLOC, it uses 15 cost drivers such as:
 Product attributes (complexity, reliability)
 Hardware constraints
 Personnel capability
 Project attributes

Detailed COCOMO Model


 Divides the project into modules.
 Applies cost drivers to each module.
 Most accurate but requires the most data.
Advantages of COCOMO:
 Simple to use.
 Provides early cost estimation.
 Offers different levels of accuracy (basic to detailed).
 Useful for budgeting and project planning.
Limitations of COCOMO:
 Based on KLOC, which is hard to estimate early.
 Assumes waterfall model, less effective for Agile methods.
 Not suitable for small or rapidly changing projects.
 Doesn't consider reuse or object-oriented development directly.
Context Diagram
 Represents the entire system as one single process.
 Shows how the system interacts with external entities (users, other systems).
 Displays data flow between the system and external entities (inputs and outputs).
 Does not show internal processes or data storage.
 Helps define the system boundary and provides an overview of the system.
 Used for system analysis and design to map out processes and data flow clearly.
🔷 Definition of DFD:

A Data Flow Diagram (DFD) is a graphical representation used to visualize the flow
of data within a system. It shows how data enters the system, how it moves between
different processes, where it is stored, and how it leaves the system as output. DFD
focuses on the movement of data rather than the program logic or physical
implementation. It helps system analysts and designers understand the system’s
functions and the flow of information.

🔷 Purpose of DFD:
 To model the system from the perspective of data flow.
 To break down complex processes into smaller, understandable parts.
 To document how data flows through the system.
 To identify inputs, outputs, storage points, and processing steps.
🔷 Types / Levels of DFD:
DFDs are created in different levels of detail, which help to understand the system
gradually from a high-level overview to detailed processes

1. Level 0 DFD (Context Diagram):


 Shows the system as a single process.
 Illustrates the interaction between the system and external entities (sources or
destinations of data).
 Represents the boundary of the system.
 Does not show internal processes or data stores.
 Provides a big picture or overview of the system.

2. Level 1 DFD:
 Breaks down the single process from Level 0 into major sub-processes or modules.
 Shows the main functions of the system.
 Illustrates the flow of data between these sub-processes, external entities, and data
stores.
 Gives more detail than Level 0 but still a high-level view.
3. Level 2 (and Lower) DFD:
 Further breaks down the Level 1 sub-processes into more detailed processes.
 Shows detailed data flow, processing steps, and interactions within the system.
 Used for very large and complex systems where more detail is necessary.
 Each process in Level 1 can be decomposed into Level 2 processes, and so on.
Symbols Used in DFD:
Symbol Meaning

Circle / Oval Process (transforms data)

Rectangle (open-ended) Data Store (where data is stored)

Arrow Data Flow (movement of data)

Square / Rectangle External Entity (source or destination)

📘 Problem Partitioning
Problem Partitioning is the process of breaking down a large, complex system or
problem into smaller, manageable parts or modules. Each module represents a sub-
problem that is easier to analyze, design, develop, and maintain.

Purpose of Problem Partitioning:


 To simplify complex systems by dividing them into smaller pieces.
 To make system development more organized and manageable.
 To allow different teams or individuals to work on separate modules simultaneously.
 To improve understandability, testing, and debugging.
 To promote code reuse and modular design.
 To help in assigning responsibilities clearly.
How Problem Partitioning Works:
1. Identify the main problem or system that needs to be developed.
2. Divide the system into smaller sub-systems or modules, each focusing on a
specific function or task.
3. Each module should be independent as much as possible, with a clearly defined
interface for communication with other modules.
4. Continue breaking down modules into smaller parts until each module is simple
enough to be easily implemented and understood.

Benefits of Problem Partitioning:


 Improves clarity: Smaller problems are easier to understand and solve.
 Simplifies development: Developers can focus on one module at a time.
 Facilitates parallel work: Different modules can be developed simultaneously by
different teams.
 Enhances maintainability: Changes in one module do not heavily affect others.
 Encourages modular design: Supports reuse of modules in other projects.
Types of Partitioning Techniques:
1. Functional Partitioning: Dividing the system based on functions or processes it
performs.
2. Data Partitioning: Dividing the system based on data it handles or manages.
3. Object-Oriented Partitioning: Dividing the system based on objects or entities and
their interactions.
4. Process Partitioning: Breaking the system based on different processes or
workflows.
Example: In a Library Management System, problem partitioning may break down the
system into modules like:
 User Registration Module
 Book Search Module
 Book Issue Module
 Fine Calculation Module
 Report Generation Module

Each module can then be developed separately and integrated to form the complete
system.

Top-Down Design:

Top-Down Design is a systematic approach where the system is designed by
starting from the highest level of abstraction and breaking it down into smaller,
more detailed parts or modules.

 The process begins with the overall system or problem.
 It is divided into major modules or subsystems.
 Each module is further divided into smaller components.
 This continues until the modules are simple enough to be implemented directly.
 It is sometimes called stepwise refinement or functional decomposition.

Advantages of Top-Down Design:

 Easy to understand the system as a whole first.


 Helps in organizing the development process clearly.
 Ensures that the overall system architecture is defined before detailed work.
 Errors can be detected early in the design process.

Disadvantages:

 Requires a clear understanding of the whole system upfront.
 May overlook low-level details initially.
 Changes at higher levels may require rework of lower-level modules.

Bottom-Up Design: Bottom-Up Design is the opposite approach, where design starts
with designing the most basic or fundamental components first and then integrating them
to form higher-level modules or the complete system.
 Begins with designing small, well-understood, reusable modules.
 These modules are combined to build larger subsystems.
 Gradually, the complete system is constructed by integrating these modules.
 Often used when some components or modules already exist and can be reused.

Advantages of Bottom-Up Design:

 Encourages code reuse of existing modules.


 Modules can be tested individually early in the process.
 Useful when detailed knowledge of components is available first.
 Flexible for changes at higher levels.

Disadvantages:

 May be difficult to see the overall system early on.


 Integration of modules can be complex if the design is not well planned.
 Risk of lack of overall system control or architecture clarity.
📘 1. Decision Tree
🔷 Definition:

A Decision Tree is a graphical representation used to make decisions and show the
various possible outcomes or actions based on different conditions. It resembles a
tree structure, where:

 Nodes represent conditions or decisions.


 Branches represent possible outcomes or actions based on those conditions.
 Leaves represent final results or actions.
🔷 Features:
 Easy to understand and visualize.
 Helps in identifying all possible paths based on decision logic.
 Used when conditions and actions are sequential.
🔷 Advantages:
 Easy to follow and interpret.
 Visually represents all possible outcomes.
 Suitable for complex decision-making.
🔷 Disadvantages:
 Can become very large and complex.
 Difficult to manage if too many conditions exist.
2. Decision Table
🔷 Definition:

A Decision Table is a tabular method for representing and analyzing complex
decision logic. It lists conditions, possible condition combinations,
and corresponding actions.

It is usually divided into four parts:

 Condition Stub: Lists all the conditions.


 Action Stub: Lists all possible actions.
 Condition Entries: Different combinations of condition values (Yes/No or
True/False).
 Action Entries: Actions taken for each combination.
🔷 Features:
 Organizes conditions and actions in a logical table.
 Ensures all possible conditions and rules are covered.
 Good for handling multiple conditions at once.
🔷 Advantages:
 Compact and organized format.
 Easy to verify and validate logic.
 Helps avoid missing any rule.
🔷 Disadvantages:
 Not very visual like decision trees.
 Becomes complex with too many conditions.
📘 1. Functional Approach
🔷 Definition:

The Functional Approach focuses on functions or procedures that perform
operations on data. The system is seen as a set of functions that take input, process
it, and produce output.

🔷 Characteristics:
 The main focus is on what the system does.
 Data and functions are kept separate.
 Uses top-down design (problem is divided into smaller sub-problems).
 Flow of data is managed through function calls and parameters.
 Examples: C, Structured programming languages.
🔷 Advantages:
 Easy to design small systems.
 Simple and straightforward for data-centric problems.
 Useful for applications with clear and fixed processes.
🔷 Disadvantages:
 Difficult to manage large systems.
 Code reuse is limited.
 Data security is low (data is accessible by any function).
 Maintenance becomes harder as the system grows.

📘 2. Object-Oriented Approach

The Object-Oriented (OO) Approach organizes a system as a collection of objects,
which are instances of classes. Each object holds data (attributes) and functions
(methods) that operate on that data.

🔷 Characteristics:
 Focus is on real-world entities like users, products, orders, etc.
 Data and behavior are encapsulated in objects.
 Uses bottom-up design (build small reusable classes first).
 Promotes inheritance, polymorphism, and abstraction.
 Examples: Java, Python, C++ (OOP).
🔷 Advantages:
 Modular and reusable code.
 Easier to manage complex and large systems.
 Data is more secure (private or protected).
 Easier to maintain, modify, and extend.
 Better mapping to real-world problems.
🔷 Disadvantages:
 More complex to design initially.
 Overhead of learning OOP concepts.
 May be overkill for small, simple tasks.
📘 1. Structured Programming
🔷 Definition: Structured Programming is a programming paradigm that emphasizes a logical
structure in the code to improve clarity, quality, and development time. It
uses functions, sequential flow, and control structures (like loops, conditionals) to write clean
and organized code.
🔷 Key Features:
 Based on top-down design.
 Programs are divided into functions or procedures.
 Uses control structures: if, else, while, for, etc.
 Data and functions are kept separate.
 Focuses on how to solve the problem step-by-step.
🔷 Advantages:
 Easy to understand and implement.
 Ideal for small and medium-sized programs.
 Encourages code reuse through functions.
🔷 Disadvantages:
 Poor scalability for large systems.
 Difficult to maintain and modify.
 No data hiding or encapsulation.
 Reusability is limited.
🔷 Examples of Structured Languages:
 C, Pascal, FORTRAN
2. Object-Oriented Programming (OOP):Object-Oriented Programming (OOP) is a
paradigm based on the concept of "objects", which contain both data (attributes)
and functions (methods). It models real-world entities using classes and supports concepts
like encapsulation, inheritance, and polymorphism.
🔷 Key Features:
 Based on bottom-up design.
 Uses objects and classes.
 Combines data and functions into a single unit (object).
 Promotes encapsulation, abstraction, inheritance, and polymorphism.
 Focuses on what objects do rather than how to do it.
🔷 Advantages:
 Modular, maintainable, and reusable code.
 Better security and data protection.
 Suitable for large and complex systems.
 Closer to real-world modeling.
🔷 Disadvantages:
 More complex and has a learning curve.
 Overhead of designing objects and classes.
 Slower for small programs compared to structured programming.
🔷 Examples of OOP Languages:
 Java, C++, Python (when using classes), C#
📘 Comparison Table:
Feature Structured Programming Object-Oriented Programming (OOP)

Design Approach Top-down Bottom-up

Basic Unit Function/Procedure Object

Data & Functions Separate Combined in objects

Data Security Low (global access) High (encapsulation)

Reusability Limited High (through classes & inheritance)

Modularity Function-based Class-based

Scalability Less scalable Highly scalable

Real-World Mapping Less direct Direct (models real-world entities)

Language Examples C, Pascal, FORTRAN Java, Python (OOP), C++, C#

1. Information Hiding
🔷 Definition:

Information Hiding is a software design principle that hides the internal details or
complexities of a module/class from other modules. Only essential information is
exposed, and unnecessary implementation details are kept private.

🔷 Key Points:
 Promotes encapsulation.
 Commonly used in Object-Oriented Programming (OOP).
 Achieved using access specifiers like private, public, protected.
 Example: In a class, internal variables are made private and accessed only
through public methods (getters/setters).
🔷 Benefits:
 Reduces system complexity.
 Increases security and robustness.
 Makes maintenance and updates easier.
 Prevents unintended interference between modules.
🔷 Example (C++):

class BankAccount {
private:
    double balance;                // hidden data
public:
    void deposit(double amount);   // exposed behavior
    void withdraw(double amount);
};
2. Reuse
🔷 Definition: Reuse in software refers to the practice of using existing software components (like
classes, modules, functions) in new applications, rather than building from scratch.
🔷 Types of Reuse:
 Code Reuse: Using existing functions, libraries, or APIs.
 Design Reuse: Using standard design patterns or templates.
 Component Reuse: Using ready-made modules like login systems, payment
gateways, etc.
🔷 Benefits:
 Saves time and cost.
 Increases productivity.
 Reduces errors (already tested code).
 Promotes consistency across systems.
🔷 Example: Using a login module in multiple applications or using a library
like jQuery or Bootstrap in web development.
3. System Documentation: System Documentation is a detailed written description of
the system, its components, and how it works. It helps developers, users, and maintainers
understand and work with the system effectively.
🔷 Types of Documentation:
Type Description

User Documentation Guides end users on how to use the system

Technical Documentation Provides details about system design, architecture, and code

System Requirements Lists functional and non-functional requirements



Design Documentation Explains system design, flowcharts, DFDs, UML diagrams

Code Documentation In-line comments, API docs, etc., used by developers

Maintenance Manual Helps in troubleshooting and system updates

🔷 Importance:
 Ensures smooth development and maintenance.
 Reduces dependency on original developers.
 Helps in onboarding new team members.
 Supports compliance and legal requirements.
Software Testing
🔷 Definition:

Software Testing is the process of evaluating a system or its components to check
whether it meets the specified requirements and to identify any defects. It ensures
that the software is reliable, functional, and of high quality.

Levels of Testing
Software testing is performed at different levels in the software development
lifecycle. The main levels of testing are:

1. Unit Testing
 Tests individual components or modules of the software.
 Done by developers.
 Ensures each function or class works as expected.
2. Integration Testing
 Tests the interaction between modules.
 Ensures that combined modules work together correctly.
3. System Testing
 Tests the entire software system as a whole.
 Done by testers (QA team).
 Verifies the system against the specified requirements.
4. Acceptance Testing
 Done by the end users or clients.
 Ensures the system meets business needs and is ready for deployment.
 Types: Alpha Testing (by internal users), Beta Testing (by external users).
Integration Testing
Integration Testing is a level of testing where individual units/modules are
combined and tested as a group to detect interface defects between them.

Why Integration Testing is Needed:


 Modules may work well independently but fail when integrated.
 Interfaces between modules may have bugs (e.g., wrong data formats, function
calls).
 It ensures smooth communication and data exchange between modules.

Types of Integration Testing


🔹 1. Big Bang Integration Testing
 All modules are integrated simultaneously, and the entire system is tested.
 Advantages: Simple and saves time in planning.
 Disadvantages: Hard to find which module caused an error, if any issue occurs.
🔹 2. Top-Down Integration Testing
 Testing starts from the top-level modules and moves to the lower-level modules.
 Stubs are used to simulate lower modules that are not yet developed.

Example:

 Test Main Menu before Sub Menu options are ready.


🔹 3. Bottom-Up Integration Testing
 Testing starts from the lowest-level modules and moves upward.
 Drivers are used to simulate higher-level modules.

Example:

 Test individual database modules before combining with user interface.


🔹 4. Sandwich/Hybrid Integration Testing
 Combines both top-down and bottom-up approaches.
 Tests in layers (middle, top, and bottom) simultaneously.

Test Case Specification


Test Case Specification is the process of defining and documenting individual test cases that will
be used to validate that a software system meets its requirements. Each test case outlines the
inputs, execution conditions, expected outputs, and postconditions for a specific feature or function.

Purpose of Test Case Specification


 To ensure system behavior is correct under various conditions.
 To provide a repeatable method for testing.
 To ensure completeness in test coverage.
 To aid manual or automated testing processes.

Components of a Test Case Specification Document

1. Test Case ID
A unique identifier assigned to each test case.
Example: TC001

2. Test Case Title


A short descriptive title for the test case.
Example: "Login with valid credentials"

3. Objective / Description
Describes what the test case intends to verify or validate.

4. Preconditions
Conditions that must be met before the test is executed.
Example: The user must be registered.

5. Test Steps / Procedure


A sequence of steps to execute the test.
Example:

 Open the login page


 Enter username and password
 Click the login button

6. Test Data
Input values required for the test.
Example:
Username: test_user
Password: pass123
7. Expected Result
The output or system behavior expected if it works correctly.

8. Actual Result
What actually happens when the test is run.

9. Status
Indicates whether the test case passed or failed.

10. Remarks / Comments


Any additional notes or observations.

Example Test Case Table

Test Case ID Title Input Data Expected Result Status

TC001 Login with valid credentials Username: user, Pass: 1234 Redirect to dashboard page Pass

TC002 Login with invalid password Username: user, Pass: xyz Display error message Pass

Importance of Test Case Specification

 Ensures complete and accurate testing


 Helps in identifying and fixing defects early
 Serves as documentation for future testing cycles
 Improves communication among developers, testers, and stakeholders
 Aids in automation and regression testing

RELIABILITY ASSESSMENT

Definition:
Reliability assessment is the process of evaluating the software system’s ability to
perform its intended functions consistently and without failure over a specified
period under given conditions.

Purpose:

 To ensure the software is dependable and stable.


 To measure the likelihood and frequency of software failures.
 To identify weaknesses and improve design/test processes.
 To build confidence among users and stakeholders.

Key Aspects:
1. Failure Rate:
 Number of failures occurring during a time interval.
2. Mean Time Between Failures (MTBF):
 Average operational time between consecutive failures.
 Higher MTBF indicates better reliability.
3. Mean Time To Repair (MTTR):
 Average time taken to fix a failure and restore service.
4. Fault Tolerance:
 Ability of software to continue functioning despite faults.

Methods of Reliability Assessment:

 Reliability Testing:
 Testing software under normal and stress conditions to detect failures.
 Fault Injection:
 Intentionally introducing faults to verify error handling and recovery.
 Statistical Modeling:
 Using historical failure data to predict future reliability.
 Failure Mode and Effects Analysis (FMEA):
 Systematic identification of possible failure points and their impacts.

Importance:

 Critical for systems in sensitive domains (banking, healthcare, aviation).


 Helps in planning maintenance and support activities.
 Enables risk management by anticipating failures.
 Increases user trust and satisfaction.

VALIDATION & VERIFICATION

Verification

 Ensures the product is built correctly according to specifications.


 Focuses on process-oriented activities like reviews, inspections, and walkthroughs.
 Answers the question: “Are we building the product right?”

Validation

 Ensures the product meets the user needs and requirements.
 Focuses on product-oriented activities like testing and user acceptance.
 Answers the question: “Are we building the right product?”

12 METRICS IN SOFTWARE ENGINEERING
1. Lines of Code (LOC)
 Measures size by counting lines in the code.
2. Function Points (FP)
 Measures functionality delivered to the user.
3. Cyclomatic Complexity
 Measures the complexity of a program’s control flow.
4. Defect Density
 Number of defects per size unit (e.g., per 1000 LOC).
5. Mean Time to Failure (MTTF)
 Average time before a system failure occurs.
6. Mean Time to Repair (MTTR)
 Average time taken to fix a defect.
7. Test Coverage
 Percentage of code or requirements tested.
8. Requirements Stability Index
 Measures how much the requirements change over time.
9. Customer Problem Reports
 Number of issues reported by users after release.
10. Schedule Variance
 Difference between planned and actual schedule.
11. Cost Variance
 Difference between budgeted and actual cost.
12. Productivity
 Output per unit effort (e.g., LOC per person-month).
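Several of these metrics are simple ratios. A short sketch with hypothetical project figures shows how defect density, productivity, and schedule variance are computed:

```python
# Hypothetical project data (illustrative values only).
loc = 25_000                  # delivered lines of code
defects_found = 75            # defects found in the delivered code
effort_person_months = 20     # total effort spent
planned_days, actual_days = 90, 104

defect_density = defects_found / (loc / 1000)   # defects per KLOC
productivity = loc / effort_person_months       # LOC per person-month
schedule_variance = actual_days - planned_days  # positive => behind schedule

print(defect_density, productivity, schedule_variance)
```

For instance, 75 defects in 25 KLOC gives a density of 3 defects/KLOC; comparing that against past projects is what makes the number useful.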

MONITORING & CONTROL

Monitoring

 Continuous measurement and tracking of project parameters like cost, schedule,
quality, and progress.
 Tools used: dashboards, status reports, metrics collection.

Control

 Actions taken to keep the project on track.
 Includes corrective measures based on monitoring feedback.
 Ensures that deviations from the plan are identified and addressed promptly.
SOFTWARE PROJECT MANAGEMENT

Software project management involves planning, organizing, and managing
resources to successfully complete software development projects on time, within
budget, and with the required quality.

PROJECT SCHEDULING

Definition:
Project scheduling is the process of creating a timeline for the project activities,
defining when and how long each task should take, and determining the sequence
of these tasks.

Key Objectives:

 To allocate time for each task.
 To identify task dependencies and order.
 To estimate the total project duration.
 To provide a clear timeline for the team and stakeholders.

Steps in Project Scheduling:

1. Define Activities: Break down the project into manageable tasks or activities.
2. Sequence Activities: Determine the order of tasks and dependencies.
3. Estimate Duration: Estimate how long each task will take.
4. Develop Schedule: Create a timeline using tools like Gantt charts or network
diagrams (PERT/CPM).
5. Assign Resources: Allocate resources to tasks.

Tools for Scheduling:

 Gantt Chart: Visual bar chart showing start and end dates of activities.
 PERT (Program Evaluation Review Technique): Uses probabilistic time estimates
for tasks.
 CPM (Critical Path Method): Identifies the longest sequence of dependent tasks
that determine project duration.
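The core idea of CPM — project duration equals the longest path through the dependency graph — can be sketched in a few lines. The four tasks and durations below are hypothetical:

```python
# Hypothetical tasks: duration (days) and predecessor dependencies.
durations = {"A": 3, "B": 2, "C": 4, "D": 2}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

earliest_finish = {}

def finish(task):
    # Earliest finish = own duration + latest earliest-finish of predecessors.
    if task not in earliest_finish:
        start = max((finish(p) for p in predecessors[task]), default=0)
        earliest_finish[task] = start + durations[task]
    return earliest_finish[task]

project_duration = max(finish(t) for t in durations)
print(project_duration)  # longest path A -> C -> D = 3 + 4 + 2 = 9
```

Tasks on that longest (critical) path have zero slack: delaying any of them delays the whole project, which is why CPM highlights them.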

STAFFING

Definition:
Staffing involves recruiting, selecting, training, and assigning the right people to the
right tasks to ensure successful project completion.
Importance of Staffing:

 Proper staffing ensures that the project has the necessary skills and manpower.
 Avoids overloading or underutilization of team members.
 Influences project productivity and quality.

Staffing Process:

1. Human Resource Planning: Determine project roles and skills required.
2. Recruitment: Identify and bring suitable candidates onboard.
3. Selection: Choose the best-fit team members based on skills and experience.
4. Training and Development: Equip team members with necessary knowledge and
tools.
5. Team Formation: Assign tasks, set responsibilities, and establish communication.

Challenges in Staffing:

 Balancing team size with project budget.
 Managing skill gaps and training needs.
 Motivating and retaining staff throughout the project.

SOFTWARE CONFIGURATION MANAGEMENT (SCM)

Definition:
Software Configuration Management (SCM) is the process of systematically
controlling, organizing, and tracking changes in software products throughout their
lifecycle. SCM ensures that software remains reliable, consistent, and traceable as it
evolves.

Purpose of SCM:

 To control and manage changes in software code, documents, and other artifacts.
 To maintain the integrity and traceability of the software configuration.
 To support collaborative development by multiple team members.
 To facilitate version control, build management, and release management.

Key Activities in SCM:

1. Configuration Identification:
 Defining and identifying the configuration items (CIs) such as source code,
documents, requirements, and test cases.
 Assigning unique identifiers to each item for tracking.
2. Version Control:
 Managing multiple versions of software artifacts.
 Keeping track of changes, who made them, and when.
 Tools: Git, SVN, Mercurial.
3. Change Control:
 Managing requests for changes in a controlled manner.
 Evaluating, approving, and implementing changes while minimizing disruption.
4. Configuration Status Accounting:
 Recording and reporting the status of configuration items and changes.
 Providing visibility into the current state of software artifacts.
5. Configuration Auditing:
 Verifying that configurations conform to requirements and standards.
 Ensuring consistency between the software and its documentation.

Benefits of SCM:

 Improves software quality and reliability.
 Enables easier tracking of bugs and defects.
 Facilitates collaboration among distributed teams.
 Supports rollback to previous stable versions if needed.
 Helps in managing multiple releases and patches.
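The version-control and rollback ideas at the heart of SCM can be illustrated with a minimal sketch: each commit to a configuration item creates a new numbered version, and rollback simply re-reads an older one. Real tools (Git, SVN) add diffs, branching, and merging on top of this; the class below is a toy illustration, not a real SCM API:

```python
class ConfigurationItem:
    """Toy configuration item keeping a linear version history."""

    def __init__(self, name, content):
        self.name = name
        self.versions = [content]       # index 0 holds version 1

    def commit(self, new_content):
        # Every change produces a new, numbered version.
        self.versions.append(new_content)
        return len(self.versions)       # the new version number

    def checkout(self, version=None):
        # Latest version by default; pass an older number to roll back.
        idx = (version or len(self.versions)) - 1
        return self.versions[idx]

spec = ConfigurationItem("requirements.txt", "v1 draft")
spec.commit("v2 reviewed")
print(spec.checkout())    # latest: "v2 reviewed"
print(spec.checkout(1))   # rollback view: "v1 draft"
```

Because every version is retained with an identifier, the "rollback to previous stable versions" benefit listed above falls out naturally.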

QUALITY ASSURANCE (QA)

Definition:
Quality Assurance is a systematic process designed to ensure that the software
development and maintenance processes are adequate to produce a product that
meets specified requirements and customer expectations. QA focuses
on preventing defects through planned and systematic activities.

Purpose of Quality Assurance:

 To improve and stabilize development processes.
 To prevent defects before they occur.
 To ensure compliance with standards and procedures.
 To enhance customer satisfaction by delivering a high-quality product.
 To provide confidence to stakeholders that quality requirements will be met.

Key Activities in Quality Assurance:

1. Process Definition and Implementation:
 Establishing and documenting software development and testing standards and
procedures.
2. Audits and Reviews:
 Conducting process audits and product reviews to ensure adherence to standards.
3. Process Monitoring and Control:
 Tracking process performance metrics and identifying areas for improvement.
4. Training and Improvement:
 Training teams on quality standards and best practices.
 Continuously improving processes based on feedback.
5. Quality Planning:
 Defining quality objectives and specifying necessary quality standards for the
project.

Difference between Quality Assurance and Quality Control:

 Quality Assurance focuses on processes to prevent defects.
 Quality Control focuses on product testing to identify defects.

Benefits of Quality Assurance:

 Reduces development costs by preventing defects early.
 Improves product reliability and user satisfaction.
 Enhances team productivity by standardizing processes.
 Helps in compliance with regulatory and industry standards.

PROJECT MONITORING

Definition:
Project monitoring is the continuous process of tracking, reviewing, and regulating
the progress and performance of a project to ensure that it meets its objectives on
time, within budget, and according to quality standards.

Purpose of Project Monitoring:

 To track actual progress against the project plan.
 To identify deviations or problems early.
 To provide timely information for decision-making.
 To ensure project goals are achieved effectively.
 To improve communication among team members and stakeholders.

Key Elements of Project Monitoring:

1. Progress Tracking:
 Measuring completed work against planned milestones and deliverables.
2. Schedule Monitoring:
 Checking whether tasks are being completed on time.
3. Cost Monitoring:
 Comparing actual expenditures with the budget.
4. Quality Monitoring:
 Ensuring deliverables meet the required quality standards.
5. Risk Monitoring:
 Identifying new risks and tracking existing risks and mitigation plans.

Techniques and Tools for Monitoring:

 Gantt Charts: Visualize project timelines and task progress.
 Earned Value Management (EVM): Combines scope, schedule, and cost data to
assess project health.
 Status Reports: Regular reports summarizing progress, issues, and risks.
 Dashboards: Real-time graphical displays of key project metrics.
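The standard EVM arithmetic is simple enough to sketch directly. The figures below are hypothetical (all in the same currency unit):

```python
pv = 100_000   # Planned Value: budgeted cost of work scheduled to date
ev = 80_000    # Earned Value: budgeted cost of work actually completed
ac = 90_000    # Actual Cost: money actually spent to date

schedule_variance = ev - pv    # negative => behind schedule
cost_variance = ev - ac        # negative => over budget
spi = ev / pv                  # Schedule Performance Index (< 1 is bad)
cpi = ev / ac                  # Cost Performance Index (< 1 is bad)

print(schedule_variance, cost_variance, round(spi, 2), round(cpi, 2))
```

Here SPI = 0.8 and CPI ≈ 0.89, signaling a project that is both behind schedule and over budget — exactly the early warning project monitoring is meant to provide.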

Benefits of Project Monitoring:

 Enables early detection and correction of problems.
 Improves resource utilization and efficiency.
 Helps maintain stakeholder confidence through transparency.
 Supports informed decision-making and adaptive planning.

STATIC AND DYNAMIC MODELS

Static Models

 Represent the structure or organization of a system at a specific point in time.
 Show the system’s components and their relationships but do not depict how the
system changes over time.
 Examples include:
 Class diagrams (in object-oriented design)
 Entity-Relationship diagrams (ER diagrams)
 Data flow diagrams (showing data stores and flows at a snapshot)

Dynamic Models

 Represent the behavior of the system and how it changes over time.
 Show interactions, events, and state changes within the system.
 Examples include:
 Sequence diagrams
 State transition diagrams
 Activity diagrams

WHY MODELING IS IMPORTANT


1. Simplification:
 Models help simplify complex real-world systems by focusing on relevant details.
2. Communication:
 Serve as a common language for developers, stakeholders, and clients to understand
system design.
3. Visualization:
 Allow visualization of system structure and behavior before actual implementation.
4. Analysis:
 Help identify errors, inconsistencies, and missing requirements early in development.
5. Documentation:
 Provide clear documentation for maintenance and future enhancements.
6. Planning:
 Assist in project planning, resource allocation, and risk management.

UML DIAGRAMS

Unified Modeling Language (UML) diagrams are standardized visual representations
used to model software systems. They help describe structure, behavior, and
interactions within the system.

CLASS DIAGRAM
 Represents the static structure of a system.
 Shows classes with their:
 Attributes (data members)
 Operations/Methods (functions or behaviors)
 Displays relationships between classes such as:
 Association: A link between objects (e.g., Student attends Course).
 Multiplicity: Number of instances in relationships (e.g., one-to-many).
 Inheritance (Generalization): Subclass inherits from superclass (e.g., Car is a
Vehicle).
 Aggregation: Whole-part relationship where parts can exist independently.
 Composition: Stronger form of aggregation where parts depend on the whole.
 Used during analysis and design phases to model the data and object structures.
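The class relationships listed above map directly onto code. A short sketch with illustrative class names (Vehicle/Car come from the text; Engine, Motorcycle, and Student are hypothetical additions):

```python
class Vehicle:                  # superclass
    def describe(self):
        return "a vehicle"

class Car(Vehicle):             # inheritance (generalization): Car IS-A Vehicle
    def describe(self):
        return "a car"

class Engine:                   # part owned via composition
    pass

class Motorcycle(Vehicle):
    def __init__(self):
        # Composition: the engine is created by, and dies with, the whole.
        self.engine = Engine()

class Student:
    def __init__(self, courses):
        # Aggregation/association: the course list exists independently of
        # the student; multiplicity here is one-to-many.
        self.courses = courses

course_list = ["SE101", "SE202"]
s = Student(course_list)
print(Car().describe())
```

The difference between aggregation and composition shows up in ownership: `course_list` outlives any `Student`, while `Motorcycle`'s `Engine` is created inside the whole.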

INTERACTION DIAGRAMS

These focus on how objects communicate to perform a function or process.

Collaboration Diagram (Communication Diagram):

 Emphasizes object relationships and message passing.
 Displays objects as nodes connected by links showing associations.
 Messages are numbered to show the sequence of interactions.
 Good for understanding how objects are linked and collaborate.

Sequence Diagram:

 Focuses on time-ordered interactions between objects.
 Objects are arranged horizontally, and time progresses vertically.
 Arrows represent messages/calls between objects.
 Shows the order and timing of method calls, returns, and lifelines.
 Useful to model detailed behavior in specific scenarios or use cases.

STATECHART DIAGRAM (STATE MACHINE DIAGRAM)


 Models the lifecycle of an object by depicting its different states and transitions
triggered by events.
 States represent conditions during the object's life (e.g., “Idle”, “Running”, “Paused”).
 Transitions show changes caused by events or conditions (e.g., “Start”, “Stop”).
 Supports hierarchical states (substates) and concurrent states.
 Widely used in event-driven and reactive systems.
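A statechart like the one described can be encoded as a transition table. The states "Idle", "Running", "Paused" and the events "Start", "Stop" come from the text; the "Pause" event is an assumed addition to connect them:

```python
# (state, event) -> next state; anything not listed is ignored.
TRANSITIONS = {
    ("Idle", "Start"): "Running",
    ("Running", "Pause"): "Paused",
    ("Paused", "Start"): "Running",
    ("Running", "Stop"): "Idle",
    ("Paused", "Stop"): "Idle",
}

def step(state, event):
    # Unknown events leave the object in its current state.
    return TRANSITIONS.get((state, event), state)

state = "Idle"
for event in ["Start", "Pause", "Start", "Stop"]:
    state = step(state, event)
print(state)  # Idle -> Running -> Paused -> Running -> Idle
```

Keeping the transitions in a table rather than scattered `if` statements makes the diagram and the code easy to check against each other.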

ACTIVITY DIAGRAM
 Represents the workflow or business process using activities and actions.
 Shows the flow of control from one activity to another, including:
 Decision points (branching based on conditions)
 Forks and joins (parallel processing)
 Start and end nodes
 Can model sequential and concurrent activities.
 Useful for process modeling and visualizing complex operations.

IMPLEMENTATION DIAGRAMS

These describe the physical aspects of a system.

Component Diagram:

 Shows the organization and dependencies of software components (modules,
libraries, executables).
 Components are units of software with defined interfaces.
 Displays how components are wired together to form a system.
 Helps in understanding modular design and managing large projects.

Deployment Diagram:

 Models the hardware nodes (servers, devices) where software components are
deployed.
 Shows the configuration of physical hardware and software artifacts.
 Nodes are connected to show communication paths.
 Useful for system architects to plan deployment and infrastructure.
