Final
The main objective of SDLC is to deliver a high-quality system that meets or exceeds
user expectations, is completed within time and cost estimates, and functions
efficiently in the real world.
5. Testing: After development, the system undergoes rigorous testing to find and fix errors. Various
levels of testing are performed, such as unit testing, integration testing, system testing, and user
acceptance testing.
6. Implementation: Once the system is tested and approved, it is installed in the live environment.
Users are trained, and data is migrated from the old system to the new one.
Costs
These are the total expenditures required to develop and operate the system.
Types of Costs:
Benefits
These are the positive outcomes (monetary or non-monetary) the organization gains
from the system.
Types of Benefits:
NPV = Σ (from t = 1 to n) of (Benefits_t − Costs_t) / (1 + r)^t
where:
t = year
r = discount rate
Benefits_t = benefits in year t
Costs_t = costs in year t
Interpretation:
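The NPV comparison of costs and benefits can be made concrete with a small sketch. The cash-flow figures below are hypothetical, assuming benefits and costs are known per year:

```python
# Net Present Value of a system: discounted benefits minus discounted costs.
# A positive NPV suggests the system is economically justified.

def npv(benefits, costs, rate):
    """NPV = sum over years t of (Benefits_t - Costs_t) / (1 + rate)^t."""
    return sum(
        (b - c) / (1 + rate) ** t
        for t, (b, c) in enumerate(zip(benefits, costs), start=1)
    )

# Hypothetical 3-year project: per-year benefits and costs, 10% discount rate.
benefits = [0, 50000, 80000]
costs = [60000, 10000, 10000]
print(round(npv(benefits, costs, 0.10), 2))  # positive, so the project pays off
```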
The COCOMO (Constructive Cost Model) is a software cost estimation model developed
by Barry W. Boehm in 1981. It is used to estimate the effort, cost, and time required to develop
a software project based on the size of the software (measured in Kilo Lines of Code, or KLOC).
🔷 Purpose of COCOMO:
To predict project cost, effort, and schedule.
To help in budgeting and resource planning.
To aid in project evaluation and bidding.
🔷 Types of COCOMO Models:
COCOMO is divided into three models based on complexity and accuracy:
1. Basic COCOMO
Gives a quick, rough estimate of effort and time using only the size of the software (KLOC).
2. Intermediate COCOMO
Adds cost drivers like experience, tools, etc., for more accurate estimation.
3. Detailed COCOMO
Considers each part of the system separately with all cost drivers in detail.
Mode | Description | a | b | c | d
Organic | Simple projects, small teams, familiar environments | 2.4 | 1.05 | 2.5 | 0.38
Semi-detached | Medium-sized projects, mixed-experience teams | 3.0 | 1.12 | 2.5 | 0.35
Embedded | Complex projects with tight hardware/software constraints | 3.6 | 1.20 | 2.5 | 0.32
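The Basic COCOMO equations for an organic-mode project can be sketched as follows. The constants are the standard organic-mode values (a = 2.4, b = 1.05, c = 2.5, d = 0.38); the 10 KLOC project size is a hypothetical example:

```python
# Basic COCOMO (organic mode):
#   Effort E = a * (KLOC)^b   (person-months)
#   Time   T = c * (E)^d      (months)

def basic_cocomo(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
    effort = a * kloc ** b      # person-months
    time = c * effort ** d      # development time in months
    staff = effort / time       # average team size
    return effort, time, staff

effort, time, staff = basic_cocomo(10)  # hypothetical 10 KLOC project
print(f"Effort: {effort:.1f} PM, Time: {time:.1f} months, Staff: {staff:.1f}")
```

For semi-detached or embedded projects, the same function would be called with the corresponding constants from the table above.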
A Data Flow Diagram (DFD) is a graphical representation used to visualize the flow
of data within a system. It shows how data enters the system, how it moves between
different processes, where it is stored, and how it leaves the system as output. DFD
focuses on the movement of data rather than the program logic or physical
implementation. It helps system analysts and designers understand the system’s
functions and the flow of information.
🔷 Purpose of DFD:
To model the system from the perspective of data flow.
To break down complex processes into smaller, understandable parts.
To document how data flows through the system.
To identify inputs, outputs, storage points, and processing steps.
🔷 Types / Levels of DFD:
DFDs are created in different levels of detail, which help to understand the system
gradually, from a high-level overview to detailed processes.
1. Level 0 DFD (Context Diagram):
Represents the entire system as a single process.
Shows the system's interaction with external entities and the major data flows in and out.
2. Level 1 DFD:
Breaks down the single process from Level 0 into major sub-processes or modules.
Shows the main functions of the system.
Illustrates the flow of data between these sub-processes, external entities, and data
stores.
Gives more detail than Level 0 but still a high-level view.
3. Level 2 (and Lower) DFD:
Further breaks down the Level 1 sub-processes into more detailed processes.
Shows detailed data flow, processing steps, and interactions within the system.
Used for very large and complex systems where more detail is necessary.
Each process in Level 1 can be decomposed into Level 2 processes, and so on.
Symbols Used in DFD:
Symbol | Meaning
Circle / rounded rectangle | Process that transforms data
Arrow | Data flow
Two parallel lines / open-ended rectangle | Data store
Square | External entity (source or destination of data)
📘 Problem Partitioning
Problem Partitioning is the process of breaking down a large, complex system or
problem into smaller, manageable parts or modules. Each module represents a sub-
problem that is easier to analyze, design, develop, and maintain.
Each module can then be developed separately and integrated to form the complete
system.
Top-Down Design: Design starts with the overall system, which is repeatedly broken
down (refined) into smaller sub-modules until each module is simple enough to
implement directly.
Disadvantages:
Reusable low-level components may be identified only late in the design.
Working code appears only after most of the design is complete, delaying testing.
Bottom-Up Design: Bottom-Up Design is the opposite approach, where design starts
with designing the most basic or fundamental components first and then integrating them
to form higher-level modules or the complete system.
Begins with designing small, well-understood, reusable modules.
These modules are combined to build larger subsystems.
Gradually, the complete system is constructed by integrating these modules.
Often used when some components or modules already exist and can be reused.
Disadvantages:
The overall system picture may remain unclear until late in the design.
Modules built early may not fit together well, causing rework during integration.
A Decision Tree is a graphical representation used to make decisions and show the
various possible outcomes or actions based on different conditions. It resembles a
tree structure, where:
Internal nodes represent conditions or decisions.
Branches represent the possible values or outcomes of each condition.
Leaf nodes represent the final actions or results.
🔷 1. Structured (Function-Oriented) Approach
🔷 Characteristics:
The main focus is on what the system does.
Data and functions are kept separate.
Uses top-down design (problem is divided into smaller sub-problems).
Flow of data is managed through function calls and parameters.
Examples: C, Structured programming languages.
🔷 Advantages:
Easy to design small systems.
Simple and straightforward for data-centric problems.
Useful for applications with clear and fixed processes.
🔷 Disadvantages:
Difficult to manage large systems.
Code reuse is limited.
Data security is low (data is accessible by any function).
Maintenance becomes harder as the system grows.
🔷 2. Object-Oriented Approach
🔷 Characteristics:
Focus is on real-world entities like users, products, orders, etc.
Data and behavior are encapsulated in objects.
Uses bottom-up design (build small reusable classes first).
Promotes inheritance, polymorphism, and abstraction.
Examples: Java, Python, C++ (OOP).
🔷 Advantages:
Modular and reusable code.
Easier to manage complex and large systems.
Data is more secure (private or protected).
Easier to maintain, modify, and extend.
Better mapping to real-world problems.
🔷 Disadvantages:
More complex to design initially.
Overhead of learning OOP concepts.
May be overkill for small, simple tasks.
🔷 1. Structured Programming
🔷 Definition: Structured Programming is a programming paradigm that emphasizes a logical
structure in the code to improve clarity, quality, and development time. It
uses functions, sequential flow, and control structures (like loops, conditionals) to write clean
and organized code.
🔷 Key Features:
Based on top-down design.
Programs are divided into functions or procedures.
Uses control structures: if, else, while, for, etc.
Data and functions are kept separate.
Focuses on how to solve the problem step-by-step.
🔷 Advantages:
Easy to understand and implement.
Ideal for small and medium-sized programs.
Encourages code reuse through functions.
🔷 Disadvantages:
Poor scalability for large systems.
Difficult to maintain and modify.
No data hiding or encapsulation.
Reusability is limited.
🔷 Examples of Structured Languages:
C, Pascal, FORTRAN
2. Object-Oriented Programming (OOP):Object-Oriented Programming (OOP) is a
paradigm based on the concept of "objects", which contain both data (attributes)
and functions (methods). It models real-world entities using classes and supports concepts
like encapsulation, inheritance, and polymorphism.
🔷 Key Features:
Based on bottom-up design.
Uses objects and classes.
Combines data and functions into a single unit (object).
Promotes encapsulation, abstraction, inheritance, and polymorphism.
Focuses on what objects do rather than how to do it.
🔷 Advantages:
Modular, maintainable, and reusable code.
Better security and data protection.
Suitable for large and complex systems.
Closer to real-world modeling.
🔷 Disadvantages:
More complex and has a learning curve.
Overhead of designing objects and classes.
Slower for small programs compared to structured programming.
🔷 Examples of OOP Languages:
Java, C++, Python (when using classes), C#
🔷 Comparison Table:
Feature | Structured Programming | Object-Oriented Programming (OOP)
Design approach | Top-down | Bottom-up
Data and functions | Kept separate | Combined in objects (encapsulation)
Reusability | Limited | High (classes, inheritance)
Data security | Low (no data hiding) | High (private/protected access)
Best suited for | Small and medium programs | Large, complex systems
Examples | C, Pascal, FORTRAN | Java, C++, Python, C#
1. Information Hiding
🔷 Definition:
Information Hiding is a software design principle that hides the internal details or
complexities of a module/class from other modules. Only essential information is
exposed, and unnecessary implementation details are kept private.
🔷 Key Points:
Promotes encapsulation.
Commonly used in Object-Oriented Programming (OOP).
Achieved using access specifiers like private, public, protected.
Example: In a class, internal variables are made private and accessed only
through public methods (getters/setters).
🔷 Benefits:
Reduces system complexity.
Increases security and robustness.
Makes maintenance and updates easier.
Prevents unintended interference between modules.
🔷 Example:
class BankAccount {
private:
    double balance;                  // hidden data
public:
    void deposit(double amount);     // exposed behavior
    void withdraw(double amount);
};
2. Reuse
🔷 Definition: Reuse in software refers to the practice of using existing software components (like
classes, modules, functions) in new applications, rather than building from scratch.
🔷 Types of Reuse:
Code Reuse: Using existing functions, libraries, or APIs.
Design Reuse: Using standard design patterns or templates.
Component Reuse: Using ready-made modules like login systems, payment
gateways, etc.
🔷 Benefits:
Saves time and cost.
Increases productivity.
Reduces errors (already tested code).
Promotes consistency across systems.
🔷 Example: Using a login module in multiple applications or using a library
like jQuery or Bootstrap in web development.
3. System Documentation: System Documentation is a detailed written description of
the system, its components, and how it works. It helps developers, users, and maintainers
understand and work with the system effectively.
🔷 Types of Documentation:
Type | Description
Technical Documentation | Provides details about system design, architecture, and code
User Documentation | Explains how to use the system (manuals, tutorials, help guides)
🔷 Importance:
Ensures smooth development and maintenance.
Reduces dependency on original developers.
Helps in onboarding new team members.
Supports compliance and legal requirements.
Software Testing
🔷 Definition:
Software testing is the process of executing a program with the intent of finding
defects and verifying that it meets the specified requirements.
Levels of Testing
Software testing is performed at different levels in the software development
lifecycle. The main levels of testing are:
1. Unit Testing
Tests individual components or modules of the software.
Done by developers.
Ensures each function or class works as expected.
2. Integration Testing
Tests the interaction between modules.
Ensures that combined modules work together correctly.
3. System Testing
Tests the entire software system as a whole.
Done by testers (QA team).
Verifies the system against the specified requirements.
4. Acceptance Testing
Done by the end users or clients.
Ensures the system meets business needs and is ready for deployment.
Types: Alpha Testing (by internal users), Beta Testing (by external users).
Integration Testing
Integration Testing is a level of testing where individual units/modules are
combined and tested as a group to detect interface defects between them.
Example: Verifying that the login module correctly passes user details to the
dashboard module once the two are combined.
1. Test Case ID
A unique identifier assigned to each test case.
Example: TC001
2. Test Case Title
A short, descriptive name for the test case.
3. Objective / Description
Describes what the test case intends to verify or validate.
4. Preconditions
Conditions that must be met before the test is executed.
Example: The user must be registered.
5. Test Steps
The ordered steps followed to execute the test.
6. Test Data
Input values required for the test.
Example:
Username: test_user
Password: pass123
7. Expected Result
The output or system behavior expected if it works correctly.
8. Actual Result
What actually happens when the test is run.
9. Status
Indicates whether the test case passed or failed.
Test Case ID | Description | Test Data | Expected Result | Status
TC001 | Login with valid credentials | Username: user, Pass: 1234 | Redirect to dashboard page | Pass
TC002 | Login with invalid password | Username: user, Pass: xyz | Display error message | Pass
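The two login test cases above can also be expressed as automated unit tests. The `login` function below is a hypothetical stand-in for the system under test:

```python
import unittest

# Hypothetical login function standing in for the system under test.
def login(username, password):
    valid_users = {"user": "1234"}
    if valid_users.get(username) == password:
        return "dashboard"
    return "error: invalid username or password"

class LoginTests(unittest.TestCase):
    def test_tc001_valid_credentials(self):
        # TC001: valid username and password redirect to the dashboard.
        self.assertEqual(login("user", "1234"), "dashboard")

    def test_tc002_invalid_password(self):
        # TC002: an invalid password produces an error message.
        self.assertTrue(login("user", "xyz").startswith("error"))

# Run the suite and report pass/fail for each test case.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LoginTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each test method plays the role of one row in the test case table: the method body supplies the test data, and the assertion encodes the expected result.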
RELIABILITY ASSESSMENT
Definition:
Reliability assessment is the process of evaluating the software system’s ability to
perform its intended functions consistently and without failure over a specified
period under given conditions.
Purpose:
To estimate how dependable the software is in operation, identify weak points, and
build confidence in the system before release.
Key Aspects:
1. Failure Rate:
Number of failures occurring during a time interval.
2. Mean Time Between Failures (MTBF):
Average operational time between consecutive failures.
Higher MTBF indicates better reliability.
3. Mean Time To Repair (MTTR):
Average time taken to fix a failure and restore service.
4. Fault Tolerance:
Ability of software to continue functioning despite faults.
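The MTBF and MTTR metrics above combine into an availability figure. A minimal sketch, using a hypothetical failure log:

```python
# Reliability metrics from a simple failure log.
#   MTBF = total operating time / number of failures
#   MTTR = total repair time / number of failures
#   Availability = MTBF / (MTBF + MTTR)

def reliability_metrics(operating_hours, repair_hours_per_failure):
    failures = len(repair_hours_per_failure)
    mtbf = operating_hours / failures
    mttr = sum(repair_hours_per_failure) / failures
    availability = mtbf / (mtbf + mttr)
    return mtbf, mttr, availability

# Hypothetical: 1000 operating hours, 4 failures taking 2, 1, 3 and 2 hours to fix.
mtbf, mttr, avail = reliability_metrics(1000, [2, 1, 3, 2])
print(f"MTBF={mtbf:.0f}h  MTTR={mttr:.1f}h  Availability={avail:.3f}")
```

A higher MTBF or a lower MTTR both push availability toward 1, matching the interpretation of the two metrics given above.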
Assessment Techniques:
Reliability Testing:
Testing software under normal and stress conditions to detect failures.
Fault Injection:
Intentionally introducing faults to verify error handling and recovery.
Statistical Modeling:
Using historical failure data to predict future reliability.
Failure Mode and Effects Analysis (FMEA):
Systematic identification of possible failure points and their impacts.
Importance:
Reliable software builds user trust, reduces maintenance costs, and is essential for
safety-critical and business-critical systems.
Verification: Checking that the software conforms to its specification ("Are we
building the product right?").
Validation: Checking that the software meets the user's actual needs ("Are we
building the right product?").
Monitoring: Tracking project progress and performance against the plan.
Control: Taking corrective action when progress deviates from the plan.
PROJECT SCHEDULING
Definition:
Project scheduling is the process of creating a timeline for the project activities,
defining when and how long each task should take, and determining the sequence
of these tasks.
Key Objectives:
1. Define Activities: Break down the project into manageable tasks or activities.
2. Sequence Activities: Determine the order of tasks and dependencies.
3. Estimate Duration: Estimate how long each task will take.
4. Develop Schedule: Create a timeline using tools like Gantt charts or network
diagrams (PERT/CPM).
5. Assign Resources: Allocate resources to tasks.
Scheduling Techniques:
Gantt Chart: Visual bar chart showing start and end dates of activities.
PERT (Program Evaluation Review Technique): Uses probabilistic time estimates
for tasks.
CPM (Critical Path Method): Identifies the longest sequence of dependent tasks
that determine project duration.
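The CPM idea of "the longest sequence of dependent tasks" can be computed directly from a task dependency graph. A minimal sketch with hypothetical tasks and durations:

```python
# Critical Path Method: the longest path through a task dependency DAG
# determines the minimum project duration.

def critical_path(durations, deps):
    """durations: task -> days; deps: task -> list of prerequisite tasks."""
    earliest_finish = {}

    def finish(task):
        # Earliest finish = latest prerequisite finish + own duration (memoized).
        if task not in earliest_finish:
            start = max((finish(p) for p in deps.get(task, [])), default=0)
            earliest_finish[task] = start + durations[task]
        return earliest_finish[task]

    return max(finish(t) for t in durations)

# Hypothetical schedule: A precedes B and C, then both feed D.
durations = {"A": 3, "B": 2, "C": 4, "D": 1}
deps = {"B": ["A"], "C": ["A"], "D": ["B", "C"]}
print(critical_path(durations, deps))  # A -> C -> D = 3 + 4 + 1 = 8 days
```

For PERT, each duration would instead be the expected time (Optimistic + 4 × Most likely + Pessimistic) / 6 rather than a single estimate.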
STAFFING
Definition:
Staffing involves recruiting, selecting, training, and assigning the right people to the
right tasks to ensure successful project completion.
Importance of Staffing:
Proper staffing ensures that the project has the necessary skills and manpower.
Avoids overloading or underutilization of team members.
Influences project productivity and quality.
Staffing Process:
1. Identify the skills and number of people the project requires.
2. Recruit and select suitable candidates.
3. Train them where needed.
4. Assign them to appropriate tasks and monitor performance.
Challenges in Staffing:
Shortage of people with the required skills.
Staff turnover during the project.
Balancing workload fairly across the team.
SOFTWARE CONFIGURATION MANAGEMENT (SCM)
Definition:
Software Configuration Management (SCM) is the process of systematically
controlling, organizing, and tracking changes in software products throughout their
lifecycle. SCM ensures that software remains reliable, consistent, and traceable as it
evolves.
Purpose of SCM:
To control and manage changes in software code, documents, and other artifacts.
To maintain the integrity and traceability of the software configuration.
To support collaborative development by multiple team members.
To facilitate version control, build management, and release management.
Key Activities of SCM:
1. Configuration Identification:
Defining and identifying the configuration items (CIs) such as source code,
documents, requirements, and test cases.
Assigning unique identifiers to each item for tracking.
2. Version Control:
Managing multiple versions of software artifacts.
Keeping track of changes, who made them, and when.
Tools: Git, SVN, Mercurial.
3. Change Control:
Managing requests for changes in a controlled manner.
Evaluating, approving, and implementing changes while minimizing disruption.
4. Configuration Status Accounting:
Recording and reporting the status of configuration items and changes.
Providing visibility into the current state of software artifacts.
5. Configuration Auditing:
Verifying that configurations conform to requirements and standards.
Ensuring consistency between the software and its documentation.
Benefits of SCM:
Fewer integration conflicts and reliable, repeatable builds.
Full traceability of changes and easier rollback to earlier versions.
SOFTWARE QUALITY ASSURANCE (SQA)
Definition:
Quality Assurance is a systematic process designed to ensure that the software
development and maintenance processes are adequate to produce a product that
meets specified requirements and customer expectations. QA focuses
on preventing defects through planned and systematic activities.
PROJECT MONITORING
Definition:
Project monitoring is the continuous process of tracking, reviewing, and regulating
the progress and performance of a project to ensure that it meets its objectives on
time, within budget, and according to quality standards.
Key Aspects of Monitoring:
1. Progress Tracking:
Measuring completed work against planned milestones and deliverables.
2. Schedule Monitoring:
Checking whether tasks are being completed on time.
3. Cost Monitoring:
Comparing actual expenditures with the budget.
4. Quality Monitoring:
Ensuring deliverables meet the required quality standards.
5. Risk Monitoring:
Identifying new risks and tracking existing risks and mitigation plans.
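Cost and schedule monitoring are often quantified with earned value indices; the technique (earned value analysis) and the figures below are illustrative additions, not from the notes above:

```python
# Earned Value indicators for cost and schedule monitoring.
#   CPI = EV / AC  (cost performance: > 1 means under budget)
#   SPI = EV / PV  (schedule performance: > 1 means ahead of schedule)

def performance_indices(earned_value, actual_cost, planned_value):
    cpi = earned_value / actual_cost
    spi = earned_value / planned_value
    return cpi, spi

# Hypothetical: work worth 40k completed, 50k spent, 45k planned by this date.
cpi, spi = performance_indices(40000, 50000, 45000)
print(f"CPI={cpi:.2f}  SPI={spi:.2f}")  # CPI=0.80 SPI=0.89
```

Here both indices are below 1, so the hypothetical project is over budget and behind schedule, which is exactly the situation progress, schedule, and cost monitoring are meant to detect early.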
SYSTEM MODELS
Static Models
Represent the structure of the system that does not change over time.
Show the system's components, such as classes and objects, and their relationships.
Examples include: Class diagrams, Object diagrams.
Dynamic Models
Represent the behavior of the system and how it changes over time.
Show interactions, events, and state changes within the system.
Examples include:
Sequence diagrams
State transition diagrams
Activity diagrams
UML DIAGRAMS
CLASS DIAGRAM
Represents the static structure of a system.
Shows classes with their:
Attributes (data members)
Operations/Methods (functions or behaviors)
Displays relationships between classes such as:
Association: A link between objects (e.g., Student attends Course).
Multiplicity: Number of instances in relationships (e.g., one-to-many).
Inheritance (Generalization): Subclass inherits from superclass (e.g., Car is a
Vehicle).
Aggregation: Whole-part relationship where parts can exist independently.
Composition: Stronger form of aggregation where parts depend on the whole.
Used during analysis and design phases to model the data and object structures.
INTERACTION DIAGRAMS
Sequence Diagram:
Shows how objects interact by exchanging messages in a time-ordered sequence.
Lifelines represent objects, and arrows represent the messages between them.
Component Diagram:
Shows the organization of and dependencies among software components (modules,
libraries, executables).
Helps visualize the high-level structure of the implementation.
Deployment Diagram:
Models the hardware nodes (servers, devices) where software components are
deployed.
Shows the configuration of physical hardware and software artifacts.
Nodes are connected to show communication paths.
Useful for system architects to plan deployment and infrastructure.