SDLC NOTES (GPT SUMMARY FROM SLIDES)
Software Development Life Cycle (SDLC) Phases:
1. System Study
What it is:
This is the initial investigation stage — understanding the problem,
current system (if any), and needs of the stakeholders.
Activities:
Meet users, managers, and stakeholders.
Study existing system (manual or computerized).
Identify pain points (e.g., inefficiency, errors, delays).
Gather initial requirements.
Example:
In a university admission system, the system study reveals that
applications are being processed manually, causing delays and lost
documents. Stakeholders want faster online applications with automated
shortlisting.
2. Feasibility Study
What it is:
Evaluate if building the new system is worth doing. Focus on practicality
and viability.
Types of Feasibility:
Technical → Do we have the right technology/tools?
Economic → Is it cost-effective? ROI?
Operational → Will people actually use it? Is training possible?
Schedule → Can we finish on time?
Example:
For the admission system:
Technical → University has IT infrastructure for a web-based system.
Economic → Costs $50,000 to build but saves $10,000/year in admin
costs.
Operational → Staff and students are ready for online admissions.
Schedule → Must be ready before the next admissions cycle.
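A quick sanity check on the economic figures above, as an illustrative
payback calculation only (it ignores maintenance costs, intangible
benefits, and discounting):

    \[
      \text{Payback period} = \frac{\text{initial cost}}{\text{annual savings}}
                            = \frac{\$50{,}000}{\$10{,}000/\text{year}} = 5\ \text{years}
    \]

So the example system pays for itself in roughly five years of saved
admin costs.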
3. System Analysis
What it is:
Detailed study of requirements. Defines what the system must do (not
how).
Activities:
Gather requirements using interviews, questionnaires, observation.
Document functional requirements (use cases, data requirements).
Draw DFDs (Data Flow Diagrams), ER Diagrams, or UML
diagrams.
Example:
For admissions, analysis identifies:
Users: Students, Admin staff.
Requirements: Students must be able to apply online, track status;
Admin staff must shortlist candidates.
Data entities: Student, Application, Course.
4. System Design
What it is:
Translate requirements into a blueprint of the system. Focus on how the
system will work.
Activities:
High-level design (HLD): overall architecture, modules, database
design, interfaces.
Low-level design (LLD): detailed algorithms, class diagrams, data
structures, input/output screens.
Example:
For admissions:
HLD: Three-tier architecture (Web UI, Application Server, Database).
LLD: Student class with attributes (name, rollNo), SQL schema for
storing applications, login UI screen.
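To make the LLD above concrete, here is a minimal Java sketch of the
Student class and (as a comment) the kind of SQL table that might store
applications. Class, field, and table names are illustrative
assumptions based on the example, not a prescribed design.

    // Low-level design sketch: Student entity from the LLD above.
    // Field names and the application table layout are illustrative.
    public class Student {
        private String name;
        private String rollNo;

        public Student(String name, String rollNo) {
            this.name = name;
            this.rollNo = rollNo;
        }

        public String getName()   { return name; }
        public String getRollNo() { return rollNo; }
    }

    /* One possible table for storing applications (illustrative only):
       CREATE TABLE application (
           application_id INT PRIMARY KEY,
           roll_no        VARCHAR(20),
           course_code    VARCHAR(10),
           status         VARCHAR(20)
       );
    */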
5. Coding (Implementation)
What it is:
Developers write the actual program code in the chosen language,
following the design.
Practices:
Follow coding standards.
Use version control (Git).
Write unit tests alongside code.
Example:
Developers write Python/Java/C++ code to implement user login, course
application form, database queries, and admin dashboards.
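As a tiny illustration of the kind of code written in this phase
(small, standard-compliant, and easy to unit-test), a hypothetical
login-validation helper might look like this in Java; the class name
and validation rules are assumptions for the example, not taken from
the slides.

    // Hypothetical login-validation helper for the admission system.
    // The rules (non-empty ID, minimum password length) are illustrative.
    public final class LoginValidator {

        private static final int MIN_PASSWORD_LENGTH = 8;

        private LoginValidator() { }

        /** Returns true only if both the student ID and password look usable. */
        public static boolean isValid(String studentId, String password) {
            if (studentId == null || studentId.trim().isEmpty()) {
                return false;
            }
            return password != null && password.length() >= MIN_PASSWORD_LENGTH;
        }
    }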
6. Testing
What it is:
Verify that the system works correctly and meets requirements.
Levels of Testing:
Unit Testing: test individual modules (e.g., login validation).
Integration Testing: test combined modules (e.g., login → application
form).
System Testing: test the full system end-to-end.
Acceptance Testing: users test system to confirm it meets their
needs.
Example:
QA team tests the admission system by:
Creating a fake student application.
Checking if application flows correctly through the database.
Verifying error messages for invalid inputs.
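At the unit-testing level, a test for the hypothetical LoginValidator
sketched in the Coding section could be written with JUnit 5, for
example:

    // Unit test for the hypothetical LoginValidator (JUnit 5).
    import static org.junit.jupiter.api.Assertions.assertFalse;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;

    class LoginValidatorTest {

        @Test
        void acceptsWellFormedCredentials() {
            assertTrue(LoginValidator.isValid("STU-1001", "s3cretPass"));
        }

        @Test
        void rejectsEmptyIdOrShortPassword() {
            assertFalse(LoginValidator.isValid("", "s3cretPass"));    // missing ID
            assertFalse(LoginValidator.isValid("STU-1001", "short")); // too short
        }
    }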
7. Implementation (Deployment)
What it is:
Put the system into use.
Approaches:
Direct cutover: replace old system immediately.
Parallel running: run old and new system together for a while.
Phased implementation: deploy in stages (by module, location, or
feature).
Example:
University deploys the online admission system:
Run it in parallel with paper applications for the first semester.
After success, fully switch to the online system.
8. Maintenance
What it is:
After deployment, the system requires ongoing support, updates, and fixes.
Types of Maintenance:
Corrective: fix bugs.
Adaptive: update system for new environment (e.g., OS upgrade).
Perfective: improve performance/features.
Preventive: improve reliability for the future.
Example:
After deployment:
Fix a bug in the payment integration (Corrective).
Update the system after the university upgrades its server OS or
database version (Adaptive).
Add support for new courses and improve page load speed (Perfective).
Add automated database backups (Preventive).
OOA vs OOD
1. OOA – Object-Oriented Analysis
What it is:
OOA is the problem understanding phase in object-oriented
software development.
It focuses on what the system should do, not on how it will be built.
Goal: To analyze the requirements of the system in terms of
objects, their attributes, behaviors, and relationships.
Key Activities in OOA:
1. Identify objects/entities in the system (e.g., Student, Teacher,
Course in a university system).
2. Define their attributes and operations (e.g., Student has
name, rollNo; operations: enrollCourse()).
3. Establish relationships (e.g., Student enrolls in Course →
multiplicity: many-to-many).
4. Use UML diagrams (Use Case, Class Diagrams, Activity
Diagrams) to model the system.
Example (University Management System):
o During OOA, we find entities like Student, Course, Professor.
o We describe: “A student can enroll in many courses. A professor
teaches one or many courses.”
o We don’t care yet about databases, code, or UI – just the
requirements, seen from an object-oriented perspective.
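A minimal Java sketch of how these analysis findings could eventually
map to classes (at the OOA stage this would normally remain a UML class
diagram; the names below come from the example, everything else is an
illustrative assumption):

    import java.util.ArrayList;
    import java.util.List;

    // OOA-level sketch: a Student enrolls in many Courses (many-to-many).
    class Course {
        String courseName;
        List<Student> enrolledStudents = new ArrayList<>();
    }

    class Student {
        String name;
        String rollNo;
        List<Course> courses = new ArrayList<>();

        // "enrollCourse()" operation identified during analysis.
        void enrollCourse(Course course) {
            courses.add(course);
            course.enrolledStudents.add(this);
        }
    }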
2. OOD – Object-Oriented Design
What it is:
OOD is the solution phase, where we decide how the system will
be implemented using objects.
It translates the OOA models into a blueprint for coding.
Goal: To design a system that can be implemented in an object-
oriented programming language (like C++, Java, Python).
Key Activities in OOD:
1. Refine classes into detailed software components (define
methods, visibility, interfaces).
2. Design inheritance hierarchies (use Generalization to create
parent/child classes).
3. Decide on Aggregation/Composition (e.g., A Course contains
Lessons).
4. Define software architecture (e.g., database schema, GUI
layout, class interactions).
5. Use UML diagrams like Class Diagrams, Sequence Diagrams,
Deployment Diagrams.
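A small Java sketch of what OOD refinement of activities 1–3 above
might produce (explicit visibility, an interface, and composition of
Lessons inside a Course). All names are illustrative assumptions, not
part of the slides.

    import java.util.ArrayList;
    import java.util.List;

    // OOD-level sketch: refined visibility, an explicit interface, and
    // composition (a Course owns its Lessons; they do not exist on their own).
    interface Enrollable {
        void enroll(String studentRollNo);
    }

    class Course implements Enrollable {
        private final String courseName;
        private final List<Lesson> lessons = new ArrayList<>();        // composition
        private final List<String> enrolledRollNos = new ArrayList<>();

        Course(String courseName) {
            this.courseName = courseName;
        }

        // The Course creates and owns its Lessons.
        void addLesson(String title) {
            lessons.add(new Lesson(title));
        }

        @Override
        public void enroll(String studentRollNo) {
            enrolledRollNos.add(studentRollNo);
        }

        // Lesson is an internal detail of Course in this sketch.
        private static class Lesson {
            private final String title;
            Lesson(String title) { this.title = title; }
        }
    }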
OOP CONCEPTS REVISION
1. UML Diagrams and Its Categories
UML (Unified Modeling Language) is a standard way to visually
represent the design and structure of a software system.
It helps software developers, analysts, and stakeholders understand the
system better before (and during) coding.
UML diagrams are broadly divided into two categories:
(a) Structural Diagrams (show what the system contains)
These describe the static aspects of the system – the objects, classes, and
their relationships.
Examples:
Class Diagram → Shows classes, their attributes, methods, and
relationships.
Example: A Student class with attributes like name, rollNo and a
Course class with courseName.
Object Diagram → Shows objects and links at a specific moment in
time.
Component Diagram → Shows how different software components
interact.
Deployment Diagram → Shows how the system is deployed across
hardware nodes.
(b) Behavioral Diagrams (show how the system behaves)
These describe the dynamic aspects of the system – workflows,
interactions, and state changes.
Examples:
Use Case Diagram → Shows interactions between users (actors) and
the system.
Example: In a Library System, a Student actor performs a Borrow Book
use case.
Sequence Diagram → Shows the order of interactions between
objects.
Activity Diagram → Shows workflow of activities (like a flowchart).
State Diagram → Shows how an object changes states due to events.
👉 In practice: Analysts use UML diagrams to communicate system
design, validate requirements, and guide developers.
2. Multiplicity
Multiplicity defines how many objects of one class can be associated
with objects of another class in a relationship.
It is shown on UML class diagrams as numbers near the association line.
Examples:
1..1 → Exactly one.
Example: Every Person must have one Passport.
0..1 → Zero or one.
Example: A Student may or may not have a Scholarship.
0..* or * → Zero or many.
Example: A Customer can place many Orders.
1..* → One or many.
Example: An Author must write at least one Book.
👉 Multiplicity ensures analysts correctly model constraints in
relationships between entities.
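In code, multiplicity usually surfaces as either a single reference
(1..1 or 0..1) or a collection (0..* or 1..*). A small illustrative
Java sketch of the Customer–Order example above (class names assumed):

    import java.util.ArrayList;
    import java.util.List;

    // Multiplicity mapped to fields:
    //   Customer 1 --- 0..* Order    (a customer may place zero or many orders)
    //   Order    *  --- 1    Customer (every order belongs to exactly one customer)
    class Customer {
        private final List<Order> orders = new ArrayList<>(); // 0..*

        Order placeOrder() {
            Order order = new Order(this); // each order gets exactly one customer
            orders.add(order);
            return order;
        }
    }

    class Order {
        private final Customer customer; // 1..1

        Order(Customer customer) {
            this.customer = customer;
        }
    }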
3. Aggregation
Aggregation is a “whole-part” relationship between classes, where one
class is made up of others, but the parts can exist independently of the
whole.
It is shown with a hollow (white) diamond at the “whole” end of the
association in UML.
Example:
A Library aggregates Books.
o If the Library is closed, the Books still exist.
A Team aggregates Players.
o If the Team disbands, players can still join another team.
👉 Key idea: Aggregation is a “Has-A” relationship in which the parts are
not lifetime-dependent on the whole (unlike composition, drawn with a
filled diamond, where they are).
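A Java sketch of the Team/Player aggregation (names taken from the
example above, everything else illustrative): the Team only holds
references to Players, so a Player keeps existing if the Team is
discarded.

    import java.util.ArrayList;
    import java.util.List;

    // Aggregation ("has-a", weak ownership): Players are created independently
    // and merely referenced by the Team, so they outlive any particular Team.
    class Player {
        final String name;
        Player(String name) { this.name = name; }
    }

    class Team {
        private final List<Player> players = new ArrayList<>();

        void addPlayer(Player player) {   // the Team does not create the Player
            players.add(player);
        }
    }

    class AggregationDemo {
        public static void main(String[] args) {
            Player alice = new Player("Alice");
            Team team = new Team();
            team.addPlayer(alice);
            team = null;                  // the Team is discarded...
            System.out.println(alice.name + " still exists"); // ...the Player is not
        }
    }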
4. Generalization
Generalization is the process of defining a general (parent) class and
specializing it into more specific (child) classes.
It represents an inheritance relationship in object-oriented design.
Shown in UML with a hollow triangle arrowhead pointing to the parent class.
Example:
Vehicle is a general class.
Car, Bike, and Truck are specific classes inheriting from Vehicle.
o Vehicle has a method start().
o Car can override start() with its own implementation.
👉 Key idea: Generalization promotes reusability and abstraction in
software design.
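The Vehicle/Car generalization above, sketched in Java (the concrete
start() behaviour is an illustrative assumption):

    // Generalization / inheritance: Vehicle is the parent, Car specializes it.
    class Vehicle {
        void start() {
            System.out.println("Vehicle starting...");
        }
    }

    class Car extends Vehicle {
        @Override
        void start() {                   // Car overrides the inherited behaviour
            System.out.println("Car starting with push-button ignition");
        }
    }

    class GeneralizationDemo {
        public static void main(String[] args) {
            Vehicle v = new Car();       // a Car can be used wherever a Vehicle is expected
            v.start();                   // prints the Car version (dynamic dispatch)
        }
    }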
UNIFIED AND RATIONAL UNIFIED PROCESS
1 — What is the Unified Process (short definition)
The Unified Process (UP) is an object-oriented, UML-driven, iterative
and incremental process framework for developing large software
systems. It does not prescribe one rigid lifecycle; instead it provides a
flexible process framework made of phases (time dimension) and
workflows (technical/task dimension) so teams can adapt the process
to their product and organization. The UP emphasizes developing and
validating high-risk elements early, delivering executable increments
frequently, and using UML models to communicate design.
2 — Core characteristics (and why each matters)
1. Object-oriented
o UP assumes an OO solution domain: identify classes/objects in
analysis, refine them in design, implement as
classes/components.
o Why: maps naturally to modern languages and supports
modularity and reuse.
2. Use-case driven
o Use cases capture functional requirements and drive iteration
scope, tests and acceptance criteria.
o Why: keeps focus on user-visible behavior and ensures that
delivered increments provide real value.
3. Architecture-centric
o The architecture (the structural backbone) is a primary artifact
developed and stabilized early (early iterations focus on
architectural risks).
o Why: early architectural validation reduces costly rework later and
helps manage non-functional requirements (scalability, security).
4. Iterative & incremental
o Work is organized into time-boxed iterations; each iteration
produces an executable increment of the system.
o Why: enables early feedback, progressive risk reduction, and the
ability to change requirements based on real user feedback.
5. Risk driven & time-boxed (practical rules from UP slides)
o Tackle highest risks first; use short fixed iterations (“time-boxes”)
to keep momentum and avoid speculative design.
6. Process framework (configurable)
o UP is a framework: you pick which artifacts/workflows to use, how
many iterations, and how to adapt it to the product.
3 — Goals and features (what UP tries to achieve)
Deliver working, valuable software early and frequently. Each
iteration should end with something tangible (an increment).
Find and resolve major risks early. (e.g., performance, integration,
conceptual design flaws.)
Support change: accept that requirements will evolve and design the
process to accommodate that change early when it’s cheap.
Promote a single-team mindset: analysts, designers, developers,
testers and stakeholders work as one coordinated team.
Use UML and visual models to communicate designs quickly and
accurately.
4 — The four UP phases (time dimension) — what happens in each,
deliverables and examples
UP divides the project lifecycle into four major phases. Each phase has
one or more iterations.
1. Inception
Purpose: establish the project’s scope and viability.
Key outcomes/deliverables: vision/business case, initial requirements
(≈10% identified), top risks, initial domain model, rough estimate & plan for
next phase.
Example activities: gather primary use cases, sketch initial UI screens,
identify payment gateway risk for an e-commerce project.
2. Elaboration
Purpose: build and validate the architecture; eliminate architecture and
major functional risks.
Key outcomes/deliverables: architectural baseline (often an executable
prototype), ~80% of use cases identified/priority-ordered, project
management plan for construction, updated risk list.
Example activities: implement a prototype checkout flow that integrates
with the payment service to validate feasibility.
3. Construction
Purpose: build the product’s components and features; most
implementation occurs here.
Key outcomes/deliverables: fully implemented product feature set (beta
releases), completed architecture, user manuals in draft.
Example activities: iterative implementation of catalog browsing, cart,
user accounts, admin functions; frequent integration builds and tests.
4. Transition
Purpose: deploy the system to users, perform acceptance testing, training,
and fine tuning.
Key outcomes/deliverables: production release, final documentation,
training materials, support processes.
Example activities: final acceptance testing with pilot users, deploy to
production, run data migration scripts.
(Extended UP variants) — Enterprise Unified Process (EUP) may add
Production (operational support) and Retirement phases for long-lived
enterprise apps.
5 — Workflows (the second dimension) — what they are and the
primary workflows
Whereas phases describe when work happens, workflows describe what kind
of work happens. Workflows cut across phases; each iteration may run many
workflows in parallel.
Primary workflows (the UP slide deck lists these):
Business modeling — model the business processes (business use
cases, activity diagrams).
Requirements — capture actors and use cases (use case model,
textual scenarios, supplementary requirements).
Analysis — create analysis model: domain classes, state machines,
identify boundary/entity/control classes. This yields a precise spec for
designers.
Design — refine analysis into implementable design:
component/subsystem decomposition, interfaces, data structures,
design patterns, mapping to implementation technology.
Implementation — produce code, packages, components, perform
unit tests, integrate builds.
Test — develop test cases (many per use case), automate tests,
system testing; testing runs continuously in parallel with development.
Deployment — packaging, distribution, installation, data migration,
beta releases and acceptance.
Post-delivery maintenance — bug fixes, regression testing,
maintenance releases.
Supporting / managerial workflows: project management,
configuration/change management, environment (tooling), and occasionally
specialized workflows like usability.
6 — What each workflow produces (artifacts & roles)
Requirements → Use case model, actors, feature list. Roles: business
analyst, stakeholder.
Analysis → Analysis class diagrams, state diagrams, sequence
diagrams, a specification document (contract). Roles: systems analyst,
architect.
Design → Design class diagrams, components, subsystem packages,
interfaces, database schema. Roles: software architect, designers.
Implementation → Source code, unit tests, builds, component
binaries. Roles: developers.
Test → Test cases (per use case scenarios), automated test scripts, test
reports. Roles: QA/testers, developers.
Deployment → Installers, release notes, migration scripts, user
documentation. Roles: release manager, ops, technical writers.
7 — Iterations: “mini-waterfalls” and risk focus
Each iteration is a time-boxed mini-project that runs (in much smaller
scope) the core activities: requirements → analysis → design →
implementation → test. Iterations are risk-driven: you select the subset of
use cases and architecture elements that reduce the highest current risks.
The result of every iteration is an executable increment that stakeholders
can evaluate.
8 — Six essential (“must do”) UP practices (how to run the process well)
From the week01c slides (and worth repeating with examples):
1. Time-boxed iterations — e.g., two-week sprints/iterations avoid
speculative architecture work.
2. Cohesive architecture & reuse — build a small core, well-designed
architecture early; reuse components where possible.
3. Continuously verify quality — integrate and test often; automated
CI builds with unit and integration tests.
4. Visual modeling — use UML diagrams to explore and validate design
ideas before coding.
5. Manage requirements — track, prioritize and trace requirements to
design/code/test.
6. Manage change — version control, change requests, baselines at
iteration end.
9 — Example walkthrough (concrete) — Online Bookstore (showing phases,
iterations, workflows)
Project goal: build an online bookstore with search, browsing, shopping
cart, checkout (payment), order tracking, and admin catalog management.
Inception (1–3 iterations, short)
Work done: stakeholder interviews; capture primary use cases
(Search Book, Add to Cart, Checkout).
Artifacts: Vision document, initial use case list (10% of use cases),
business case, top-3 risks (payment integration, search performance,
inventory sync).
Decision: proceed to Elaboration if business case is acceptable.
Elaboration (2–6 iterations)
Work done: design and prototype the architecture to reduce top risks.
Build an executable architectural prototype that integrates search
engine service + a mock payment gateway.
Artifacts: domain model (classes: Book, Customer, Order, Cart,
Payment), initial component diagram (SearchService, CatalogService,
PaymentAdapter), sequence diagram for Checkout.
Goal: validate payment flow and search performance; stabilize
architecture. After success, produce project management plan for
Construction.
Construction (many iterations)
Work done: implement features in prioritized increments:
o Iteration A: Implement catalog browsing + search (unit tests).
o Iteration B: Implement shopping cart and persistent session.
o Iteration C: Add user registration, order submission, integration
with real payment gateway.
Artifacts per iteration: code modules, unit tests, integration builds,
updated design docs and more detailed sequence/class diagrams.
Quality: continuous integration, regression test suite grows.
Transition (1–2 iterations)
Work done: beta release to pilot customers, finalize documentation,
performance tuning, user training, finalize data migration scripts.
Artifacts: production installer, final user manual, support procedures.
Outcome: production go-live.
During every iteration: run requirements → analysis → design →
implementation → test flows; update risk list and backlog; re-prioritize
remaining use cases.
10 — Rational Unified Process (RUP) — what it is and how it relates
to UP
RUP is a concrete, commercial instantiation/implementation of the
Unified Process created by Rational (later IBM). It takes the UP framework
and provides detailed guidance, templates, roles, tailoring instructions and
best practices. RUP emphasizes: iterative development, requirements
management, component-based architecture, visual modeling,
quality management, and disciplined change control. RUP versions (e.g.,
RUP 5.0) packaged the process with tool support (Rational Rose, Rational
Team Concert, etc.).
Key distinctions / features of RUP:
Process guidance: RUP ships with prescriptive guidance, role
definitions, artifact templates and example workflows.
Tool integration: strong tool support for UML modeling, configuration
management and build automation (historically Rational tools).
Use of best practices: RUP codifies the six best practices and gives
concrete templates and work products.
Tailorability: RUP is intended to be tailored – you pick what you need
based on project size and criticality.
11 — RUP in practice — extended example snippet (how RUP looks
when applied)
Using the same Online Bookstore example, RUP provides:
Detailed role descriptions (Business Analyst, Use-Case Specifier,
Architect, Component Developer, Tester, Configuration Manager).
Work products/templates (Use Case specifications, Supplementary
Requirements, Analysis Model, Design Model, Test Plan templates).
Workflows defined with activities such as “elaborate the
architecture,” “write use case descriptions,” “implement component
X.”
Tooling: model the design in a UML tool, generate skeleton code, use
a configuration manager to perform controlled builds.