Module 3 to 5: SE & PM
AGILE DEVELOPMENT
Introduction
Agile software engineering combines a philosophy and a set of development guidelines. The philosophy
encourages customer satisfaction and early incremental delivery of software; small, highly motivated
project teams; informal methods; minimal software engineering work products; and overall development
simplicity. The development guidelines stress delivery over analysis and design (although these
activities are not discouraged), and active and continuous communication between developers and
customers.
Definition:
Agility means the characteristics of being dynamic, context specific, aggressively change embracing, and
growth oriented.
Agility has become today’s buzzword when describing a modern software process. Everyone is agile. An
agile team is a nimble team able to appropriately respond to changes. Change is what software
development is very much about. Changes in the software being built, changes to the team members,
changes because of new technology, changes of all kinds that may have an impact on the product they
build or the project that creates the product. Support for change should be built into everything we do in
software.
An agile team recognizes that software is developed by individuals working in teams and that the skills
of these people and their ability to collaborate are at the core of the success of the project.
All of these changes can be represented as shown in the diagram below, which reflects Ivar Jacobson's view
of the agility process for software.
Note :Ivar Hjalmar Jacobson (born 1939) is a Swedish computer scientist and software engineer,
known as a major contributor to UML, Objectory, Rational Unified Process (RUP), aspect-oriented
software development and Essence.
In Jacobson’s view, the pervasiveness of change is the primary driver for agility.
Software engineers must be quick on their feet if they are to accommodate the rapid changes that
Jacobson describes. But agility is more than an effective response to change. It encourages team structures
and attitudes that make communication (among team members, between technologists and business people,
between software engineers and their managers) more facile.
It emphasizes rapid delivery of operational software and de-emphasizes the importance of
intermediate work products (not always a good thing); it adopts the customer as a part of the
development team and works to eliminate the “us and them” attitude that continues to pervade many
software projects; it recognizes that planning in an uncertain world has its limits and that a project plan
must be flexible.
Agility can be applied to any software process. To accomplish this, it is essential that the process be
designed in a way that allows the project team to adapt tasks and to streamline them, conduct planning
in a way that understands the fluidity of an agile development approach, eliminate all but the most
essential work products and keep them lean, and emphasize an incremental delivery strategy that gets
working software to the customer as rapidly as feasible for the product type and operational
environment.
The conventional wisdom in software development (supported by decades of experience) is that the cost
of change increases nonlinearly as a project progresses (Figure 3.1, solid black curve). It is relatively
easy to accommodate a change when a software team is gathering requirements (early in a project).
A usage scenario might have to be modified, a list of functions may be extended, or a written
specification can be edited. The costs of doing this work are minimal, and the time required will not
adversely affect the outcome of the project.
Now consider a change requested when the team is in the middle of validation testing (something that
occurs relatively late in the project), and an important stakeholder is requesting a major functional change.
The change requires a modification to the architectural design of the software, the design and
construction of three new components, modifications to another five components, the design of new tests,
and so on. Costs escalate quickly, and the time and cost required to ensure that the change is made without
unintended side effects are nontrivial.
The agile process encompasses incremental delivery. When incremental delivery is coupled with other
agile practices, such as continuous unit testing and pair programming, the cost of making a change is
attenuated.
Definition:
An agile process is one that is adaptable to changes in requirements, delivers software incrementally, and
copes with unpredictability. Such processes are based on three assumptions, all of which concern the
unpredictability found at different stages of software development, whether in requirements, in analysis
and design, or during construction. Agile processes are therefore adaptable at all stages of the SDLC.
Any agile software process is characterized in a manner that addresses a number of key assumptions
about the majority of software projects:
1. It is difficult to predict in advance which software requirements will persist and which will change. It
is equally difficult to predict how customer priorities will change as the project proceeds.
2. For many types of software, design and construction are interleaved. That is, both activities should be
performed in tandem so that design models are proven as they are created. It is difficult to predict how
much design is necessary before construction is used to prove the design.
3. Analysis, design, construction, and testing are not as predictable (from a planning point of view) as we
might like.
Given these three assumptions, an important question arises: How do we create a process that can
manage unpredictability?
The answer lies in process adaptability (to rapidly changing project and technical conditions).
An agile process, therefore, must be adaptable. But continual adaptation without forward progress
accomplishes little. Therefore, an agile software process must adapt incrementally. To accomplish
incremental adaptation,an agile team requires customer feedback (so that the appropriate adaptations
can be made).
The Agile Alliance defines 12 agility principles for those who want to achieve agility.
Jim Highsmith states the extremes when he characterizes the feeling of the pro-agility camp (“agilists”).
“Traditional methodologists are a bunch of stick-in-the-muds who’d rather produce flawless
documentation than a working system that meets business needs.”
Note : James A. Highsmith (born 1945) is an American software engineer and author of books in the
field of software development methodology. He is the creator of Adaptive Software Development,
described in his 1999 book "Adaptive Software Development", and winner of the 2000 Jolt Award, and
the Stevens Award in 2005. Highsmith was one of the 17 original signatories of the Agile Manifesto, the
founding document for agile software development.
There are no absolute answers in this debate. Even within the agile school itself, there are
many proposed process models (Section 3.4), each with a subtly different approach to the agility
problem.
Within each model there is a set of “ideas” (agilists are loath to call them “work tasks”) that represent a
significant departure from traditional software engineering. And yet, many agile concepts are simply
adaptations of good software engineering concepts. Bottom line: there is much that can be gained by
considering the best of both schools and virtually nothing to be gained by denigrating either approach.
Proponents of agile software development take great pains to emphasize the importance of “people
factors.” As Cockburn and Highsmith state, “Agile development focuses on the talents and skills of
individuals, molding the process to specific people and teams.”
The key point in this statement is that the process molds to the needs of the people and team, not the other
way around.
Common focus. Although members of the agile team may perform different tasks and bring different
skills to the project, all should be focused on one goal—to deliver a working software increment to the
customer within the time promised.
To achieve this goal, the team will also focus on continual adaptations (small and large) that will make
the process fit the needs of the team.
Collaboration. Software engineering (regardless of process) is about assessing, analyzing, and using
information that is communicated to the software team; creating information that will help all
stakeholders understand the work of the team; and building information (computer software and
relevant databases) that provides business value for the customer. To accomplish these tasks, team
members must collaborate—with one another and all other stakeholders.
Decision-making ability. Any good software team (including agile teams) must be allowed the freedom
to control its own destiny. This implies that the team is given autonomy—decision-making authority for
both technical and project issues.
Fuzzy problem-solving ability. Software managers must recognize that the agile team will continually
have to deal with ambiguity and will continually be buffeted by change. In some cases, the team must
accept the fact that the problem they are solving today may not be the problem that needs to be solved
tomorrow.
Lessons learned from any problem-solving activity (including those that solve the wrong problem) may
be of benefit to the team later in the project.
Mutual trust and respect. The agile team must become what DeMarco and Lister call a “jelled” team.
A jelled team exhibits the trust and respect that are necessary to make them “so strongly knit that the
whole is greater than the sum of the parts.”
Ken Schwaber addresses these issues when he writes: “The team selects how much work it believes it
can perform within the iteration, and the team commits to the work. Nothing demotivates a team as
much as someone else making commitments for it. Nothing motivates a team as much as accepting the
responsibility for fulfilling commitments that it made itself.”
3.4.1 XP Values
Beck defines a set of five values that establish a foundation for all work performed as part of XP—
communication, simplicity, feedback, courage, and respect. Each of these values is used as a driver for
specific XP activities, actions, and tasks.
In order to achieve effective communication between software engineers and other stakeholders (e.g., to
establish required features and functions for the software), XP emphasizes close, yet informal (verbal)
collaboration between customers and developers, the establishment of effective metaphors for
communicating important concepts, continuous feedback, and the avoidance of voluminous
documentation as a communication medium.
To achieve simplicity, XP restricts developers to design only for immediate needs, rather than consider
future needs. The intent is to create a simple design that can be easily implemented in code. If the
design must be improved, it can be refactored at a later time.
Feedback is derived from three sources: the implemented software itself, the customer, and other
software team members.
By designing and implementing an effective testing strategy, the software (via test results) provides
the agile team with feedback. XP makes use of the unit test as its primary testing tactic.
As each class is developed, the team develops a unit test to exercise each operation according to its
specified functionality.
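A minimal sketch of this practice (the Account class, its deposit operation, and the test values below are hypothetical, not taken from the text) might look like this in Python, using the standard unittest framework:

import unittest

class Account:
    # A small hypothetical class developed as part of a story.
    def __init__(self, balance=0.0):
        self.balance = balance

    def deposit(self, amount):
        # Operation under test: add a positive amount to the balance.
        if amount <= 0:
            raise ValueError("deposit amount must be positive")
        self.balance += amount
        return self.balance

class AccountTest(unittest.TestCase):
    # One test per operation, exercising its specified functionality.
    def test_deposit_increases_balance(self):
        account = Account(balance=10.0)
        self.assertEqual(account.deposit(5.0), 15.0)

    def test_deposit_rejects_non_positive_amounts(self):
        account = Account()
        with self.assertRaises(ValueError):
            account.deposit(0)

if __name__ == "__main__":
    unittest.main()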
As an increment is delivered to a customer, the user stories or use cases that are implemented by the
increment are used as a basis for acceptance tests. The degree to which the software implements the
output, function, and behavior of the use case is a form of feedback.
Finally, as new requirements are derived as part of iterative planning, the team provides the customer
with rapid feedback regarding cost and schedule impact.
Beck argues that strict adherence to certain XP practices demands courage. A better word might be
discipline.
For example, there is often significant pressure to design for future requirements. Most software teams
succumb, arguing that “designing for tomorrow” will save time and effort in the long run.
An agile XP team must have the discipline (courage) to design for today, recognizing that future
requirements may change dramatically, thereby demanding substantial rework of the design and
implemented code.
Extreme Programming uses an object-oriented approach as its preferred development paradigm and
encompasses a set of rules and practices that occur within the context of four framework activities:
planning, design, coding, and testing.
Figure 3.2 illustrates the XP process and notes some of the key ideas and tasks that are associated with
each framework activity. Key XP activities are summarized in the paragraphs that follow.
Planning: The planning activity (also called the planning game) begins with listening—a requirements
gathering activity that enables the technical members of the XP team to understand the business context
for the software and to get a broad feel for required output and major features and functionality.
Listening leads to the creation of a set of “stories” (also called user stories) that describe required
output, features, and functionality for the software to be built. Each story is written by the customer
and is placed on an index card.
The customer assigns a value (i.e., a priority) to the story based on the overall business value of the
feature or function.
Members of the XP team then assess each story and assign a cost—measured in development weeks—to
it. If the story is estimated to require more than three development weeks, the customer is asked to split
the story into smaller stories and the assignment of value and cost occurs again.
Customers and developers work together to decide how to group stories into the next release (the next
software increment) to be developed by the XP team. Once a basic commitment (agreement on stories to
be included, delivery date, and other project matters) is made for a release, the XP team orders the
stories that will be developed in one of three ways: (1) all stories will be implemented immediately
(within a few weeks), (2) the stories with highest value will be moved up in the schedule and implemented
first, or (3) the riskiest stories will be moved up in the schedule and implemented first.
After the first project release (also called a software increment) has been delivered,the XP team
computes project velocity. Stated simply, project velocity is the number of customer stories
implemented during the first release.
Project velocity can then be used to
(1) help estimate delivery dates and schedule for subsequent releases and
(2) determine whether an overcommitment has been made for all stories across the entire development
project. If an overcommitment occurs, the content of releases is modified or end delivery dates are
changed.
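As a back-of-the-envelope illustration (all numbers are hypothetical), velocity can drive both checks:

velocity = 12              # customer stories implemented during the first release
stories_remaining = 48     # stories still committed across the whole project
releases_planned = 3       # releases remaining on the current plan

releases_needed = stories_remaining / velocity   # 4.0 releases at current velocity
overcommitted = releases_needed > releases_planned
print(releases_needed, overcommitted)            # 4.0 True -> adjust release content or dates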
As development work proceeds, the customer can add stories, change the value of an existing story, split
stories, or eliminate them. The XP team then reconsiders all remaining releases and modifies its plans
accordingly.
Design: XP encourages the use of CRC cards as an effective mechanism for thinking about the software in an
object-oriented context.
CRC (class-responsibility collaborator) cards identify and organize the object-oriented classes that are
relevant to the current software increment. The CRC cards are the only design work product produced
as part of the XP process.
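For illustration only (the class and collaborators are hypothetical), a single CRC card might record:

Class name: ShoppingCart
Responsibilities: maintain the list of cart items; compute the order total; apply a discount code
Collaborators: LineItem, PricingService, Discount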
Refactoring is the process of changing a software system in such a way that it does not alter the
external behavior of the code yet improves the internal structure.
It is a disciplined way to clean up code [and modify/simplify the internal design] that minimizes the
chances of introducing bugs. In essence, when you refactor you are improving the design of the code
after it has been written.
A central notion in XP is that design occurs both before and after coding commences.
Refactoring means that design occurs continuously as the system is constructed. The construction
activity itself will provide the XP team with guidance on how to improve the design.
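A minimal sketch of refactoring in this spirit (the pricing function and names are hypothetical): the externally visible behavior stays the same while the internal structure is simplified, and a quick check guards against unintended changes.

import math

# Before refactoring: duplicated logic and an unclear conditional.
def price_before(quantity, unit_price, is_member):
    if is_member:
        total = quantity * unit_price
        total = total - total * 0.10
        return total
    else:
        total = quantity * unit_price
        return total

# After refactoring: same external behavior, simpler internal structure.
MEMBER_DISCOUNT = 0.10

def price_after(quantity, unit_price, is_member):
    total = quantity * unit_price
    discount = MEMBER_DISCOUNT if is_member else 0.0
    return total * (1 - discount)

# Existing tests guard the refactoring: behavior must not change.
assert math.isclose(price_before(3, 10.0, True), price_after(3, 10.0, True))
assert math.isclose(price_before(3, 10.0, False), price_after(3, 10.0, False))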
Coding: After stories are developed and preliminary design work is done, the team does not move to
code, but rather develops a series of unit tests that will exercise each of the stories that is to be included
in the current release (software increment).
Once the unit test has been created, the developer is better able to focus on what must be
implemented to pass the test. Nothing extraneous is added (KIS, keep it simple).
Once the code is complete, it can be unit-tested immediately, thereby providing instantaneous
feedback to the developers.
A key concept during the coding activity (and one of the most talked about aspects of XP) is pair
programming.
XP recommends that two people work together at one computer workstation to create code for a
story. This provides a mechanism for real-time problem solving (two heads are often better than
one) and real-time quality assurance (the code is reviewed as it is created). It also keeps the
developers focused
on the problem at hand. In practice, each person takes on a slightly different role.
For example, one person might think about the coding details of a particular portion of the design while
the other ensures that coding standards (a required part of XP) are being followed or that the code for
the story will satisfy the unit test that has been developed to validate the code against the story.
As pair programmers complete their work, the code they develop is integrated with the work of others.
In some cases this is performed on a daily basis by an integration team.
In other cases, the pair programmers have integration responsibility. This “continuous integration”
strategy helps to avoid compatibility and interfacing problems and provides a “smoke testing”
environment that helps to uncover errors early.
Testing: Note that the creation of unit tests before coding commences is a key element of the XP
approach.
The unit tests that are created should be implemented using a framework that enables them to be
automated (hence, they
can be executed easily and repeatedly). This encourages a regression testing strategy whenever code is
modified (which is often, given the XP refactoring philosophy).
Wells states: “Fixing small problems every few hours takes less time than fixing huge problems just
before the deadline.”
XP acceptance tests, also called customer tests, are specified by the customer and focus on overall
system features and functionality that are visible and reviewable by the customer.
Acceptance tests are derived from user stories that have been implemented as part of a software release.
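As a small, hypothetical sketch (the story, function, and discount code are illustrative, not drawn from the text), a user story's acceptance criteria can be restated as executable checks:

import unittest

# User story: "As a customer, I can apply a valid discount code at checkout
# so that my order total is reduced."
def apply_discount(total, code):
    valid_codes = {"SAVE10": 0.10}
    return total * (1 - valid_codes.get(code, 0.0))

class CheckoutAcceptanceTest(unittest.TestCase):
    # Acceptance criterion from the story: a valid code reduces the total.
    def test_valid_code_reduces_total(self):
        self.assertAlmostEqual(apply_discount(100.0, "SAVE10"), 90.0)

    # Acceptance criterion: an unknown code leaves the total unchanged.
    def test_unknown_code_leaves_total_unchanged(self):
        self.assertAlmostEqual(apply_discount(100.0, "BOGUS"), 100.0)

if __name__ == "__main__":
    unittest.main()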
3.4.3 Industrial XP
Joshua Kerievsky describes Industrial Extreme Programming (IXP) in the following manner:
“IXP is an organic evolution of XP. It is imbued with XP’s minimalist, customer-centric, test-driven spirit.
IXP differs most from the original XP in its greater inclusion of management, its expanded role for
customers, and its upgraded technical practices.”
IXP incorporates six new practices that are designed to help ensure that an XP project works
successfully for significant projects within a large organization.
Readiness assessment: Prior to the initiation of an IXP project, the organization should conduct a
readiness assessment. The assessment ascertains whether (1) an appropriate development environment exists
to support IXP, (2) the team will be populated by the proper set of stakeholders, (3) the organization has
a distinct quality program and supports continuous improvement, (4) the organizational culture will
support the new values of an agile team, and (5) the broader project community will be populated
appropriately.
Project community: Classic XP suggests that the right people be used to populate the agile team to
ensure success. The implication is that people on the team must be well-trained, adaptable and skilled,
and have the proper temperament to contribute to a self-organizing team.
When XP is to be applied for a significant project in a large organization, the concept of the “team”
should morph into that of a community.
A community may have technologists and customers who are central to the success of a project as well
as many other stakeholders (e.g., legal staff, quality auditors, manufacturing or sales types) who “are
often at the periphery of an IXP project yet they may play important roles on the project” .
In IXP, the community members and their roles should be explicitly defined and mechanisms for
communication and coordination between community members should be established.
Project chartering: The IXP team assesses the project itself to determine whether an appropriate
business justification for the project exists and whether the project will further the overall goals and
objectives of the organization.
Test-driven management: An IXP project requires measurable criteria for assessing the state of the
project and the progress that has been made to date. Test-driven management establishes a series of
measurable “destinations”and then defines mechanisms for determining whether or not these
destinations have been reached.
Retrospectives: An IXP team conducts a specialized technical review after a software increment is
delivered. Called a retrospective, the review examines “issues, events, and lessons-learned” across a
software increment and/or the entire software release. The intent is to improve the IXP process.
Continuous learning: Because learning is a vital part of continuous process improvement, members of
the XP team are encouraged (and possibly, incented) to learn new methods and techniques that can lead
to a higher-quality product.
In addition to these six new practices, IXP modifies a number of existing XP practices.
Story-driven development (SDD) insists that stories for acceptance tests be written before a single
line of code is generated.
Domain-driven design (DDD) is an improvement on the “system metaphor” concept used in XP. DDD
suggests the evolutionary creation of a domain model that “accurately represents how domain experts
think about their subject” .
Pairing extends the XP pair programming concept to include managers and other stakeholders. The
intent is to improve knowledge sharing among XP team members who may not be directly involved in
technical development.
Iterative usability discourages front-loaded interface design in favor of usability design that evolves as
software increments are delivered and users’ interaction with the software is studied.
IXP makes smaller modifications to other XP practices and redefines certain roles and responsibilities to
make them more amenable to significant projects for large organizations.
All new process models and methods spur worthwhile discussion and in some instances heated debate.
Extreme Programming has done both.
Stephens and Rosenberg argue that many XP practices are worthwhile, but others have been overhyped,
and a few are problematic.
The authors suggest that the codependent nature of XP practices is both its strength and its weakness.
Because many organizations adopt only a subset of XP practices, they weaken the efficacy of the entire
process.
Proponents counter that XP is continuously evolving and that many of the issues raised by critics have
been addressed as XP practice matures. Among the issues that continue to trouble some critics of XP are:
• Conflicting customer needs:- Many projects have multiple customers, each with his own set of needs.
In XP, the team itself is tasked with assimilating the needs of different customers, a job that may be
beyond their scope of authority.
• Requirements are expressed informally. User stories and acceptance tests are the only explicit
manifestation of requirements in XP.
Critics argue that a more formal model or specification is often needed to ensure that omissions,
inconsistencies, and errors are uncovered before the system is built. Proponents counter that the
changing nature of requirements makes such models and specification obsolete almost as soon as they
are developed.
• Lack of formal design. XP deemphasizes the need for architectural design and in many instances,
suggests that design of all kinds should be relatively informal. Critics argue that when complex systems
are built, design must be emphasized to ensure that the overall structure of the software will exhibit
quality and maintainability.
XP proponents suggest that the incremental nature of the XP process limits complexity (simplicity is a
core value) and therefore reduces the need for extensive design.
Note: Every software process has flaws, and many software organizations have used XP successfully.
The key is to recognize where a process may have weaknesses and to adapt it to the specific needs of
your organization.
The history of software engineering is littered with dozens of obsolete process descriptions and
methodologies, modeling methods and notations, tools, and technology. Each flared in notoriety and was
then eclipsed by something new and (purportedly) better. With the introduction of a wide array of agile
process models— each contending for acceptance within the software development community—the
agile movement is following the same historical path.
The most widely used of all agile process models is Extreme Programming (XP). But many other agile
process models have been proposed and are in use across the industry. Among the most common are:
• Adaptive Software Development (ASD)
• Scrum
• Dynamic Systems Development Method (DSDM)
• Crystal
• Feature Driven Development (FDD)
• Lean Software Development (LSD)
• Agile Modeling (AM)
• Agile Unified Process (AUP)
Adaptive Software Development (ASD) has been proposed by Jim Highsmith as a technique for
building complex software and systems. The philosophical underpinnings of ASD focus on human
collaboration and team self-organization.
Highsmith argues that an agile, adaptive development approach based on collaboration is “as much a
source of order in our complex interactions as discipline and engineering.” He defines an ASD “life cycle”
(Figure 3.3) that incorporates three phases: speculation, collaboration, and learning.
During speculation, the project is initiated and adaptive cycle planning is conducted. Adaptive cycle
planning uses project initiation information (the customer's mission statement, project constraints such
as delivery dates, and basic requirements) to define the set of release cycles (software increments) that
will be required for the project. Based on information obtained at the completion of the first cycle, the
plan is reviewed and adjusted so that planned work better fits the reality in which an ASD team is working.
Motivated people use collaboration in a way that multiplies their talent and creative output beyond their
absolute numbers. This approach is a recurring theme in all agile methods. But collaboration is not easy.
It encompasses communication and teamwork, but it also emphasizes individualism, because individual
creativity plays an important role in collaborative thinking. It is, above all, a matter of trust.
As members of an ASD team begin to develop the components that are part of an adaptive cycle, the
emphasis is on “learning” as much as it is on progress toward a completed cycle.
Highsmith argues that software developers often overestimate their own understanding (of the
technology, the process, and the project) and that learning will help them to improve their level of real
understanding.
ASD teams learn in three ways: focus groups, technical reviews, and project postmortems.
The ASD philosophy has merit regardless of the process model that is used. ASD’s overall emphasis on
the dynamics of self-organizing teams, interpersonal collaboration, and individual and team learning
yield software project teams that have a much higher likelihood of success.
3.5.2 Scrum
Scrum (the name is derived from an activity that occurs during a rugby match) is an agile software
development method that was conceived by Jeff Sutherland and his development team in the early
1990s.
In recent years, further development on the Scrum methods has been performed by Schwaber and
Beedle.
Scrum principles are consistent with the agile manifesto and are used to guide development activities
within a process that incorporates the following framework activities: requirements, analysis, design,
evolution, and delivery.
Within each framework activity, work tasks occur within a process called a sprint. The work conducted
within a sprint (the number of sprints required for each framework activity will vary depending on
product complexity and size) is adapted to the problem at hand and is defined and often modified in real
time by the Scrum team.
Scrum emphasizes the use of a set of software process patterns that have proven effective for projects
with tight timelines, changing requirements, and business criticality.
Backlog—a prioritized list of project requirements or features that provide business value for the
customer. Items can be added to the backlog at any time (this is how changes are introduced). The
product manager assesses the backlog and updates priorities as required.
Sprints—consist of work units that are required to achieve a requirement defined in the backlog that
must be fit into a predefined time-box (typically 30 days). Changes (e.g., backlog work items) are not
introduced during the sprint. Hence, the sprint allows team members to work in a short-term, but stable
environment.
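A small sketch of the relationship between a prioritized backlog and a time-boxed sprint (the items, priorities, and capacity below are hypothetical):

# Product backlog: prioritized requirements/features; items may be added or
# reprioritized at any time between sprints.
backlog = [
    {"item": "customer login", "priority": 1},
    {"item": "order history page", "priority": 2},
    {"item": "export invoices to PDF", "priority": 3},
    {"item": "dark-mode theme", "priority": 4},
]

SPRINT_CAPACITY = 2   # work units the team believes it can finish in the time-box

# At sprint planning, the highest-priority items are pulled into the sprint;
# the sprint content is then frozen until the time-box (e.g., 30 days) ends.
sprint_backlog = sorted(backlog, key=lambda entry: entry["priority"])[:SPRINT_CAPACITY]
print([entry["item"] for entry in sprint_backlog])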
Scrum meetings—are short (typically 15 minutes) meetings held daily by the Scrum team.
Three key questions are asked and answered by all team members :
• What did you do since the last team meeting?
• What obstacles are you encountering?
• What do you plan to accomplish by the next team meeting?
A team leader, called a Scrum master, leads the meeting and assesses the responses from each person.
The Scrum meeting helps the team to uncover potential problems as early as possible. Also, these daily
meetings lead to “knowledge socialization” and thereby promote a self-organizing team structure.
Note :
Scrum : Scrum is a management framework that teams use to self-organize and work towards a
common goal.
Scrum allows us to develop products of the highest value while making sure that we maintain
creativity and productivity.
The iterative and incremental approach used in scrum allows the teams to adapt to the changing
requirements.
Demos—deliver the software increment to the customer so that functionality that has been
implemented can be demonstrated and evaluated by the customer. It is important to note that the demo
may not contain all planned functionality, but rather those functions that can be delivered within the
time-box that was established.
Beedle and his colleagues present a comprehensive discussion of these patterns in which they state:
“Scrum assumes up-front the existence of chaos. . . . ”
The Scrum process patterns enable a software team to work successfully in a world where the
elimination of uncertainty is impossible.
The Dynamic Systems Development Method (DSDM) is an agile software development approach that
“provides a framework for building and maintaining systems which meet tight time constraints through
the use of incremental prototyping in a controlled project environment” .
The DSDM philosophy is borrowed from a modified version of the Pareto principle—80 percent of an
application can be delivered in 20 percent of the time it would take to deliver the complete (100
percent) application.
DSDM is an iterative software process in which each iteration follows the 80 percent rule. That is, only
enough work is required for each increment to facilitate movement to the next increment. The
remaining detail can be completed later when more business requirements are known or changes have
been requested and accommodated.
The DSDM Consortium (www.dsdm.org) is a worldwide group of member companies that collectively
take on the role of “keeper” of the method.
The consortium has defined an agile process model, called the DSDM life cycle that defines three
different iterative cycles, preceded by two additional life cycle activities:
Feasibility study—establishes the basic business requirements and constraints associated with the
application to be built and then assesses whether the application is a viable candidate for the DSDM
process.
Business study—establishes the functional and information requirements that will allow the
application to provide business value; also, defines the basic application architecture and identifies the
maintainability requirements for the application.
Functional model iteration—produces a set of incremental prototypes that demonstrate functionality for
the customer, with the intent of gathering additional requirements by eliciting feedback from users as
they exercise each prototype.
Design and build iteration—revisits prototypes built during functional model iteration to ensure that
each has been engineered in a manner that will enable it to provide operational business value for end
users; in some cases, functional model iteration and design and build iteration occur concurrently.
Implementation—places the latest software increment (an “operationalized” prototype) into the
operational environment.
It should be noted that
(1) the increment may not be 100 percent complete or
(2) changes may be requested as the increment is put into place. In either case, DSDM development
work continues by returning to the functional model iteration activity.
DSDM can be combined with XP to provide a combination approach that defines a solid process model
(the DSDM life cycle) with the nuts and bolts practices (XP) that are required to build software
increments.
In addition, the ASD concepts of collaboration and self-organizing teams can be adapted to a combined
process model.
3.5.4 Crystal
Alistair Cockburn and Jim Highsmith created the Crystal family of agile methods in order to achieve a
software development approach that puts a premium on “maneuverability” during what Cockburn
characterizes as “a resource limited,
cooperative game of invention and communication, with a primary goal of delivering useful, working
software and a secondary goal of setting up for the next game”.
To achieve maneuverability, Cockburn and Highsmith have defined a set of methodologies, each with
core elements that are common to all, and roles, process patterns, work products, and practices that are
unique to each.
The Crystal family is actually a set of example agile processes that have been proven effective for
different types of projects. The intent is to allow agile teams to select the member of the Crystal family
that is most appropriate for their project and environment.
Feature Driven Development (FDD) was originally conceived by Peter Coad and his colleagues as a
practical process model for object-oriented software engineering.
Stephen Palmer and John Felsing have extended and improved Coad's work, describing an adaptive,
agile process that can be applied to moderately sized and larger software projects. In the context of FDD,
a feature is a client-valued function that can be implemented in two weeks or less. This emphasis on
features provides the following benefits:
• Because features are small blocks of deliverable functionality, users can describe them more easily;
understand how they relate to one another more readily; and better review them for ambiguity, error, or
omissions.
• Features can be organized into a hierarchical business-related grouping.
• Since a feature is the FDD deliverable software increment, the team develops operational features
every two weeks.
• Because features are small, their design and code representations are easier to inspect effectively.
• Project planning, scheduling, and tracking are driven by the feature hierarchy, rather than an
arbitrarily adopted software engineering task set.
Coad and his colleagues suggest the following template for defining a feature:
<action> the <result> <by | for | of | to> a(n) <object>, where an <object> is “a person, place, or thing
(including roles, moments in time or intervals of time, or catalog-entry-like descriptions).”
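For example, features written to this template might read “Calculate the total of a sale,” “Check the password of a user,” or “Display the technical specifications of a product.”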
Lean Software Development (LSD) has adapted the principles of lean manufacturing to the world of
software engineering. The lean principles that inspire the LSD process can be summarized as eliminate
waste, build quality in, create knowledge, defer commitment, deliver fast, respect people, and optimize
the whole.
Each of these principles can be adapted to the software process.
For example, eliminate waste within the context of an agile software project can be interpreted to
mean :
(1) adding no extraneous features or functions,
(2) assessing the cost and schedule impact of any newly requested requirement,
(3) removing any superfluous process steps,
(4) establishing mechanisms to improve the way team members find information,
(5) ensuring that testing finds as many errors as possible,
(6) reducing the time required to request and get a decision that affects the software
or the process that is applied to create it, and
(7) streamlining the manner in which information is transmitted to all stakeholders involved in the
process.
There are many situations in which software engineers must build large, business critical systems.
The scope and complexity of such systems must be modeled so that
(1) all constituencies can better understand what needs to be accomplished,
(2) the problem can be partitioned effectively among the people who must solve it, and
(3) quality can be assessed as the system is being engineered and built.
Over the past 30 years, a wide variety of software engineering modeling methods and notations have
been proposed for analysis and design (both architectural and component-level). These methods have
merit, but they have proven to be difficult to apply and challenging to sustain (over many projects). Part
of the problem is the “weight” of these modeling methods.
Analysis and design modeling have substantial benefit for large projects—if for no other reason than
to make these projects intellectually manageable.
Agile Modeling (AM) is a practice-based methodology for effective modeling and documentation
of software-based systems.
Agile Modeling (AM) is a collection of values, principles, and practices for modeling software that
can be applied on a software development project in an effective and light-weight manner. Agile
models are more effective than traditional models because they are just barely good enough; they don't
have to be perfect.
Agile modeling adopts all of the values that are consistent with the agile manifesto.
The agile modeling philosophy recognizes that an agile team must have the courage to make
decisions that may cause it to reject a design and refactor.
The team must also have the humility to recognize that technologists do not have all the answers
and
that business experts and other stakeholders should be respected and embraced.
AM suggests a wide array of “core” and “supplementary” modeling principles. Those that make AM
unique are:
Model with a purpose- A developer who uses AM should have a specific goal (e.g., to communicate
information to the customer or to help better understand some aspect of the software) in mind before
creating the model.
Once the goal for the model is identified, the type of notation to be used and level of detail required will
be more obvious.
Use multiple models- There are many different models and notations that can be used to describe
software. Only a small subset is essential for most projects. AM suggests that to provide needed insight,
each model should present a different aspect of the system and only those models that provide value to
their intended audience should be used.
Travel light- As software engineering work proceeds, keep only those models that will provide long-
term value and jettison the rest. Every work product that is kept must be maintained as changes occur.
This represents work that slows the team down.
Ambler notes that “Every time you decide to keep a model you trade-off agility for the convenience of
having that information available to your team in an abstract manner (hence potentially enhancing
communication within your team as well as with project stakeholders).”
Content is more important than representation- Modeling should impart information to its intended
audience. A syntactically perfect model that imparts little useful content is not as valuable as a model
with flawed notation that nevertheless provides valuable content for its audience.
Know the models and the tools you use to create them. Understand the strengths and weaknesses of
each model and the tools that are used to create it.
Adapt locally. The modeling approach should be adapted to the needs of the agile team.
The Unified Process has been developed to provide a framework for the application of UML. Scott
Ambler has developed a simplified version of the UP that integrates his agile modeling philosophy.
The Agile Unified Process (AUP) adopts a “serial in the large” and “iterative in the small” philosophy for
building computer-based systems. Each AUP iteration addresses the following activities:
• Modeling. UML representations of the business and problem domains are created. However, to stay
agile, these models should be “just barely good enough” to allow the team to proceed.
• Implementation. Models are translated into source code.
• Testing. Like XP, the team designs and executes a series of tests to uncover
errors and ensure that the source code meets its requirements.
• Deployment. Deployment in this context focuses on the delivery of a software increment and the
acquisition of feedback from end users.
• Configuration and project management. In the context of AUP, configuration management
addresses change management, risk management, and the control of any persistent work products that
are produced by the team. Project management tracks and controls the progress of the team
and coordinates team activities.
• Environment management. Environment management coordinates a process infrastructure that
includes standards, tools, and other support technology available to the team.
Although the AUP has historical and technical connections to the Unified Modeling Language, it is
important to note that UML modeling can be used in conjunction with any of the agile process models.
Some proponents of the agile philosophy argue that automated software tools (e.g., design tools) should
be viewed as a minor supplement to the team’s activities, and not at all pivotal to the success of the team.
Alistair Cockburn suggests that tools can have a benefit and that “agile teams stress using tools that
permit the rapid flow of understanding.” Some of these tools are social rather than technological, even
starting at the hiring stage.
For example, a hiring “tool” might be the requirement to have a prospective team member spend a few
hours pair programming with an existing member of the team. The “fit” can be assessed immediately.
Collaborative and communication “tools” are generally low tech and incorporate any mechanism
(“physical proximity, whiteboards, poster sheets, index cards, and sticky notes”) that provides
information and coordination among agile developers.
Active communication is achieved via the team dynamics (e.g., pair programming), while passive
communication is achieved by “information radiators” (e.g., a flat panel display that presents the overall
status of different components of an increment).
Project management tools deemphasize the Gantt chart and replace it with earned value charts or
“graphs of tests created versus passed . . . other agile tools are used to optimize the environment in
which the agile team works (e.g., more efficient meeting areas), improve the team culture by nurturing
social interactions
(e.g., collocated teams), physical devices (e.g., electronic whiteboards), and process enhancement (e.g.,
pair programming or time-boxing)” .
In a generic sense, practice is a collection of concepts, principles, methods, and tools that a software
engineer calls upon on a daily basis.
Practice allows managers to manage software projects and software engineers to build computer
programs.
Practice populates a software process model with the necessary technical and management how-to’s to
get the job done.
Practice transforms a haphazard unfocused approach into something that is more organized, more
effective, and more likely to achieve success.
In an editorial published in IEEE Software a decade ago, Steve McConnell made the following comment:
Many software practitioners think of software engineering knowledge almost exclusively as knowledge
of specific technologies: Java, Perl, html, C, Linux, Windows NT, and so on. Knowledge of specific
technology details is necessary to perform computer programming.
If someone assigns you to write a program in C, you have to know something about C to get your
program to work.
McConnell goes on to argue that the body of software engineering knowledge (circa the year 2000) had
evolved to a “stable core” that he estimated represented about “75 percent of the knowledge needed to
develop a complex system.” But what resides within this stable core?
As McConnell indicates, core principles—the elemental ideas that guide software engineers in the work
that they do—now provide a foundation from which software engineering models, methods, and tools
can be applied and evaluated.
Software engineering is guided by a collection of core principles that help in the application of a
meaningful software process and the execution of effective software engineering methods. At the
process level, core principles establish a philosophical foundation that guides a software team as it
performs framework and umbrella activities, navigates the process flow, and produces a set of software
engineering work products.
At the level of practice, core principles establish a collection of values and rules that serve as a guide as
you analyze a problem, design a solution, implement and test the solution, and ultimately deploy the
software in the user community.
The following general principles span software engineering process and practice:
(1) provide value to end users,
(2) keep it simple,
(3) maintain the vision (of the product and the project),
(4) recognize that others consume (and must understand) what you produce,
(5) be open to the future,
(6) plan ahead for reuse, and
(7) think!
Although these general principles are important, they are characterized at such a high level of
abstraction that they are sometimes difficult to translate into day-to-day software engineering practice.
The software process can be characterized using the generic process framework that is applicable to all
process models. The following set of core principles can be applied to the framework, and by extension,
to every software process.
Principle 1. Be agile. Whether the process model you choose is prescriptive or agile, the basic tenets of
agile development should govern your approach. Every aspect of the work you do should emphasize
economy of action: keep it lean.
Principle 2. Focus on quality at every step. The exit condition for every process activity, action, and
task should focus on the quality of the work product that has been produced.
Principle 3. Be ready to adapt. Process is not a religious experience, and dogma has no place in it.
When necessary, adapt your approach to constraints imposed by the problem, the people, and the
project itself.
Principle 4. Build an effective team. Software engineering process and practice are important, but the
bottom line is people. Build a self-organizing team that has mutual trust and respect.
Principle 5. Establish mechanisms for communication and coordination. Projects fail because important
information falls into the cracks and/or stakeholders fail to coordinate their efforts to create a
successful end product.
Principle 6. Manage change. The approach may be either formal or informal, but mechanisms must be
established to manage the way changes are requested, assessed, approved, and implemented.
Principle 7. Assess risk. Lots of things can go wrong as software is being developed. It’s essential that
you establish contingency plans.
Principle 8. Create work products that provide value for others. Create only those work products
that provide value for other process activities, actions, or tasks.
Every work product that is produced as part of software engineering practice will be passed on to
someone else. A list of required functions and features will be passed along to the person (people) who
will develop a design, the design will be passed along to those who generate code, and so on. Be sure that
the work product imparts the necessary information without ambiguity or omission.
Software engineering practice has a single overriding goal—to deliver on-time, high quality, operational
software that contains functions and features that meet the needs of all stakeholders. To achieve this
goal, you should adopt a set of core principles that guide your technical work. These principles have
merit regardless of the analysis and design methods that you apply, the construction techniques (e.g.,
programming language, automated tools) that you use, or the verification and validation approach that
you choose.
The following set of core principles are fundamental to the practice of software engineering:
Principle 1. Divide and conquer. Stated in a more technical manner, analysis and design should always
emphasize separation of concerns (SoC). A large problem is easier to solve if it is subdivided into a
collection of elements (or concerns). Ideally, each concern delivers distinct functionality that can be
developed, and in some cases validated, independently of other concerns.
Principle 2. Understand the use of abstraction. At its core, an abstraction is a simplification of some
complex element of a system used to communicate meaning in a single phrase.
In analysis and design work, a software team normally begins with models that represent high levels of
abstraction (e.g., a spreadsheet) and slowly refines those models into lower levels of abstraction (e.g., a
column or the SUM function).
Joel Spolsky suggests that “all non-trivial abstractions, to some degree, are leaky.” The intent of an
abstraction is to eliminate the need to communicate details. But sometimes, problematic effects
precipitated by these details “leak” through. Without an understanding of the details, the cause of a
problem cannot be easily diagnosed.
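A small sketch of the idea (all names are hypothetical): the top-level function is the abstraction a caller works with, while the lower-level functions hold details that normally stay hidden but can “leak” when something goes wrong.

def send_report(report_text, recipient):
    # High level of abstraction: "send the report" in a single phrase.
    payload = format_message(report_text)
    deliver(payload, recipient)

def format_message(text):
    # Lower level of abstraction: formatting details hidden from the caller.
    return text.strip() + "\n"

def deliver(payload, recipient):
    # Lower level still: delivery details. If this step fails (for example, a
    # network error in a real system), the detail "leaks" up to the caller.
    print(f"delivering {len(payload)} characters to {recipient}")

send_report("Quarterly totals ...", "manager@example.com")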
Principle 3. Strive for consistency. Whether it’s creating a requirements model, developing a software
design, generating source code, or creating test cases, the principle of consistency suggests that a
familiar context makes software easier to use.
As an example, consider the design of a user interface for a WebApp. Consistent placement of menu
options, the use of a consistent color scheme, and the consistent use of recognizable icons all help to
make
the interface ergonomically sound.
Principle 4. Focus on the transfer of information. Software is about the transfer of information: into and
out of databases, between software components, and to and from end users and the outside world. In every
case, information flows across an interface, and as a consequence, there are opportunities for error,
omission, or ambiguity. The implication of this principle is that you must pay special attention to the
analysis, design, construction, and testing of interfaces.
Principle 5. Build software that exhibits effective modularity. Separation of concerns (Principle 1)
establishes a philosophy for software.
Modularity provides a mechanism for realizing the philosophy. Any complex system can be divided into
modules (components), but good software engineering practice demands more.
Modularity must be effective. Each module should focus exclusively on one well-constrained aspect of
the system—it should be cohesive in its function and/or constrained in the content it represents.
Additionally, modules should be interconnected in a relatively simple manner—each module should
exhibit low coupling to other modules, to data sources, and to other environmental aspects.
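A small sketch of effective modularity (the modules and functions are hypothetical): each part keeps to one concern, and the parts are connected through a narrow interface rather than shared internal state.

# Billing concern: computing charges (cohesive, knows nothing about persistence).
def compute_invoice_total(line_items):
    # Each line item is a (quantity, unit_price) pair.
    return sum(quantity * unit_price for quantity, unit_price in line_items)

# Storage concern: persistence (coupled to billing only through plain values).
def save_invoice(invoice_id, total, store):
    store[invoice_id] = total

# The two concerns are connected by a simple call, not by shared internals.
store = {}
total = compute_invoice_total([(2, 9.99), (1, 4.50)])
save_invoice("INV-001", total, store)
print(store)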
Principle 6. Look for patterns. Brad Appleton suggests that: “The goal of patterns within the software
community is to create a body of literature to help software developers resolve recurring problems
encountered throughout all of software development. Patterns help create a shared language for
communicating insight and experience about these problems and their solutions.”
Principle 7. When possible, represent the problem and its solution from a number of different
perspectives. When a problem and its solution are examined from a number of different perspectives, it
is more likely that greater insight will be achieved and that errors and omissions will be uncovered.
For example, a requirements model can be represented using a data-oriented viewpoint, a function-
oriented viewpoint, or a behavioral viewpoint. Each provides a different view of the problem and its
requirements.
Principle 8. Remember that someone will maintain the software. Over the long term, software will
be corrected as defects are uncovered, adapted as its environment changes, and enhanced as
stakeholders request more capabilities. These maintenance activities can be facilitated if solid software
engineering practice is applied throughout the software process.
The following principles have a strong bearing on the success of each generic framework activity defined
as part of the software process. In many cases, the principles that are discussed for each of the framework
activities are a refinement of the principles presented in Section 4.2. They are simply core principles
stated at a lower level of abstraction.
Before customer requirements can be analyzed, modeled, or specified, they must be gathered through
the communication activity. A customer has a problem that may be amenable to a computer-based
solution. Communication has begun. But the road from communication to understanding is often full of
potholes.
Effective communication (among technical peers, with the customer and other stakeholders, and with
project managers) is among the most challenging activities that we will confront.
Many of the principles apply equally to all forms of communication that occur within a software project.
Principle 1. Listen. Try to focus on the speaker’s words, rather than formulating your response to
those words. Ask for clarification if something is unclear, but avoid constant interruptions. Never
become contentious in your words or actions (e.g., rolling your eyes or shaking your head) as a person is
talking.
Principle 2. Prepare before you communicate. Spend the time to understand the problem before you
meet with others. If necessary, do some research to understand business domain jargon. If you have
responsibility for conducting a meeting, prepare an agenda in advance of the meeting.
Principle 3. Someone should facilitate the activity. Every communication meeting should have a
facilitator to keep the conversation moving in a productive direction, to mediate any conflict that does
occur, and to ensure that the other principles are followed.
Principle 4. Face-to-face communication is best. But it usually works better when some other
representation of the relevant information is present. For example, a participant may create a drawing
or a “strawman” document that serves as a focus for discussion.
Principle 5. Take notes and document decisions. Things have a way of falling into the cracks.
Someone participating in the communication should serve as a “recorder” and write down all important
points and decisions.
Principle 6. Strive for collaboration. Collaboration and consensus occur when the collective
knowledge of members of the team is used to describe product or system functions or features.
Each small collaboration serves to build trust among team members and creates a common goal for
the team.
Principle 7. Stay focused; modularize your discussion. The more people involved in any
communication, the more likely that discussion will bounce from one topic to the next. The facilitator
should keep the conversation modular, leaving one topic only after it has been resolved.
Principle 8. If something is unclear, draw a picture. Verbal communication goes only so far. A sketch
or drawing can often provide clarity when words fail to do the job.
Principle 9.
(a) Once you agree to something, move on.
(b) If you can’t agree to something, move on.
(c) If a feature or function is unclear and cannot be clarified at the moment, move on.
Communication, like any software engineering activity, takes time.
Principle 10. Negotiation is not a contest or a game. It works best when both parties win. There are
many instances in which you and other stakeholders must negotiate functions and features, priorities,
and delivery dates. If the team has collaborated well, all parties have a common goal. Still, negotiation
will demand compromise from all parties.
The communication activity helps you to define your overall goals and objectives (subject, of course, to
change as time passes). However, understanding these goals and objectives is not the same as defining a
plan for getting there.
The planning activity encompasses a set of management and technical practices that enable the
software team to define a road map as it travels toward its strategic goal and tactical objectives.
The Difference Between Customers and End Users
Software engineers communicate with many different stakeholders, but customers and end
users have the most significant impact on the technical work that follows.
In some cases the customer and the end user are one and the same, but for many projects, the customer
and the end users are different people.
There are many different planning philosophies. Some people are “minimalists,” arguing that change
often obviates the need for a detailed plan. Others are “traditionalists,” arguing that the plan provides an
effective road map and the more detail it has, the less likely the team will become lost. Still others are
“agilists,” arguing that a quick “planning game” may be necessary, but that the road map will emerge as
“real work” on the software begins.
On many projects, overplanning is time consuming and fruitless (too many things change), but
underplanning is a recipe for chaos.
Regardless of the rigor with which planning is conducted, the following principles always apply:
Principle 1. Understand the scope of the project. It’s impossible to use a road map if you don’t know
where you’re going. Scope provides the software team with a destination.
Principle 2. Involve stakeholders in the planning activity. Stakeholders define priorities and
establish project constraints. To accommodate these realities, software engineers must often negotiate
order of delivery, timelines, and other project-related issues.
Principle 3. Recognize that planning is iterative. A project plan is never engraved in stone. As work
begins, it is very likely that things will change. As a consequence, the plan must be adjusted to
accommodate these changes. In addition, iterative, incremental process models dictate replanning after
the delivery of each software increment based on feedback received from users.
Principle 4. Estimate based on what you know. The intent of estimation is to provide an indication of
effort, cost, and task duration, based on the team’s current understanding of the work to be done. If
information is vague or unreliable, estimates will be equally unreliable.
Principle 5. Consider risk as you define the plan. If you have identified risks that have high impact
and high probability, contingency planning is necessary. In addition, the project plan (including the
schedule) should be adjusted to accommodate the likelihood that one or more of these risks will occur.
Principle 6. Be realistic. People don’t work 100 percent of every day. Noise always enters into any
human communication. Omissions and ambiguity are facts of life. Change will occur. Even the best
software engineers make mistakes. These and other realities should be considered as a project plan is
established.
Principle 7. Adjust granularity as you define the plan. Granularity refers to the level of detail that is
introduced as a project plan is developed. A “high-granularity” plan provides significant work task detail
that is planned over relatively short time increments (so that tracking and control occur frequently).
A “low-granularity” plan provides broader work tasks that are planned over longer time periods. In
general, granularity moves from high to low as the project time line moves away from the current date.
Over the next few weeks or months, the project can be planned in significant detail.
Activities that won’t occur for many months do not require high granularity (too much can change).
Principle 8. Define how you intend to ensure quality. The plan should identify how the software
team intends to ensure quality. If technical reviews are to be conducted, they should be scheduled. If pair
programming is to be used during construction, it should be explicitly defined within the plan.
Principle 9. Describe how you intend to accommodate change. Even the best planning can be
obviated by uncontrolled change.
Identify how changes are to be accommodated as software engineering work proceeds. For example, can
the customer request a change at any time? If a change is requested, is the team obliged to implement it
immediately? How is the impact and cost of the change assessed?
Principle 10. Track the plan frequently and make adjustments as required.
Software projects fall behind schedule one day at a time. Therefore, it makes sense to track progress on a
daily basis, looking for problem areas and situations in which scheduled work does not conform to
actual work conducted. When slippage is encountered, the plan is adjusted accordingly.
To be most effective, everyone on the software team should participate in the planning activity. Only
then will team members “sign up” to the plan.
In software engineering work, two classes of models can be created: requirements models and design
models.
Scott Ambler and Ron Jeffries define a set of modeling principles4 that are intended for those who use
the agile process model but are appropriate for all software engineers who perform modeling actions
and tasks:
Principle 1. The primary goal of the software team is to build software, not create models. Agility
means getting software to the customer in the fastest possible time. Models that make this happen are
worth creating, but models that slow the process down or provide little new insight should be avoided.
Principle 2. Travel light—don’t create more models than you need. Every model that is created must
be kept up-to-date as changes occur.
More importantly, every new model takes time that might otherwise be spent on construction (coding
and testing). Therefore, create only those models that make it easier and faster to construct the
software.
Principle 3. Strive to produce the simplest model that will describe the
problem or the software. Don’t overbuild the software. By keeping models simple, the resultant
software will also be simple.
The result is software that is easier to integrate, easier to test, and easier to maintain (to change). In
addition, simple models are easier for members of the software team to understand and critique,
resulting in an ongoing form of feedback that optimizes the end result.
Principle 4. Build models in a way that makes them amenable to change.
For example, since requirements will change, there is a tendency to give requirements models short
shrift. Why? Because you know that they’ll change anyway. The problem with this attitude is that
without a reasonably complete requirements model, you’ll create a design (design model) that will
invariably miss important functions and features.
Principle 5. Be able to state an explicit purpose for each model that is created. Every time you
create a model, ask yourself why you’re doing so. If you can’t provide solid justification for the existence
of the model, don’t spend time on it.
Principle 6. Adapt the models you develop to the system at hand. It may be necessary to adapt
model notation or rules to the application; for example, a video game application might require a
different modeling technique than real-time, embedded software that controls an automobile engine.
Principle 7. Try to build useful models, but forget about building perfect models. When building
requirements and design models, a software engineer reaches a point of diminishing returns. That is, the
effort required to make the model absolutely complete and internally consistent is not worth the
benefits of these properties. Am I suggesting that modeling should be sloppy or low quality? The answer is no.
Principle 8. Don’t become dogmatic about the syntax of the model. If it communicates content
successfully, representation is secondary. Although everyone on a software team should try to use
consistent notation during modeling, the most important characteristic of the model is to communicate
information that enables the next software engineering task. If a model does this successfully, incorrect
syntax can be forgiven.
Principle 9. If your instincts tell you a model isn’t right even though it seems okay on paper, you
probably have reason to be concerned. If you are an experienced software engineer, trust your
instincts. Software work teaches many lessons—some of them on a subconscious level. If something
tells you that a design model is doomed to fail (even though you can’t prove it explicitly), you have
reason to spend additional time examining the model or developing a different one.
Principle 10. Get feedback as soon as you can. Every model should be reviewed by members of the
software team. The intent of these reviews is to provide feedback that can be used to correct modeling
mistakes, change misinterpretations, and add features or functions that were inadvertently omitted.
Over the past three decades, a large number of requirements modeling methods have been developed.
Investigators have identified requirements analysis problems and their causes and have developed a
variety of modeling notations and corresponding sets of heuristics to overcome them. Each analysis
method has a unique point of view. However, all analysis methods are related by a set of operational
principles:
Principle 1. The information domain of a problem must be represented and understood. The
information domain encompasses the data that flow into the system (from end users, other systems, or
external devices), the data that flow out of the system (via the user interface, network interfaces,
reports,
graphics, and other means), and the data stores that collect and organize persistent data objects (i.e.,
data that are maintained permanently).
Principle 2. The functions that the software performs must be defined. Software functions provide
direct benefit to end users and also provide internal support for those features that are user visible.
Some functions transform data that flow into the system.
In other cases, functions effect some level of control over internal software processing or external
system elements. Functions can be described at many different levels of abstraction, ranging from a
general statement of purpose to a detailed description of the processing elements that must be invoked.
Principle 3. The behavior of the software (as a consequence of external events) must be
represented. The behavior of computer software is driven by its interaction with the external
environment. Input provided by end users, control data provided by an external system, or monitoring
data collected over a network all cause the software to behave in a specific way.
Principle 4. The models that depict information, function, and behavior must be partitioned in a
manner that uncovers detail in a layered (or hierarchical) fashion. Requirements modeling is the first step in software engineering problem solving.
Complex problems are difficult to solve in their entirety. For this reason, you should use a divide-and-
conquer strategy. A large, complex problem is divided into subproblems until each subproblem is
relatively easy to understand. This concept is called partitioning or separation of concerns, and it is a key
strategy in requirements modeling.
Principle 5. The analysis task should move from essential information toward implementation
detail. Requirements modeling begins by describing the problem from the end-user’s perspective. The
“essence” of the problem is described without any consideration of how a solution will be implemented.
For example, a video game requires that the player “instruct” its protagonist on what direction to
proceed as she moves into a dangerous maze. That is the essence of the problem. Implementation detail
(normally described as part of the design model) indicates how the essence will be implemented. For the
video game, voice input might be used. Alternatively, a keyboard command might be typed, a joystick (or
mouse) might be pointed in a specific direction, or a motion-sensitive device might be waved in the air.
By applying these principles, a software engineer approaches a problem systematically.
There is no shortage of methods for deriving the various elements of a software design. Some methods
are data driven, allowing the data structure to dictate the program architecture and the resultant
processing components.
Others are pattern driven, using information about the problem domain (the requirements model) to
develop architectural styles and processing patterns. Still others are object oriented, using problem
domain objects as the driver for the creation of data structures and the methods that manipulate them.
The following design principles can be applied regardless of the method that is used:
Principle 4. Interfaces (both internal and external) must be designed with care. The manner in
which data flows between the components of a system has much to do with processing efficiency, error
propagation, and design simplicity. A well-designed interface makes integration easier and assists the
tester in validating component functions.
Principle 5. User interface design should be tuned to the needs of the end user. In every case, it
should stress ease of use. The user interface is the visible manifestation of the software. No matter how
sophisticated its internal functions, no matter how comprehensive its data structures, no matter how
well designed its architecture, a poor interface design often leads to the perception that the software is
“bad.”
Principle 7. Components should be loosely coupled to one another and to the external
environment.
Coupling is achieved in many ways— via a component interface, by messaging, through global data. As
the level of coupling increases, the likelihood of error propagation also increases and the overall
maintainability of the software decreases. Therefore, component coupling should be kept as low as is
reasonable.
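To make the coupling principle concrete, the following minimal sketch (in Python, with invented names) contrasts a function that silently depends on shared global data with one that receives everything it needs through its parameters; only the latter can be understood, tested, and replaced in isolation.

# Tightly coupled: the component reaches out to shared global state.
TAX_RATE = 0.2   # global data that many components may silently depend on

def net_price_tight(gross):
    # Any change to the global ripples through every caller.
    return gross * (1 + TAX_RATE)

# Loosely coupled: the dependency arrives through the component's interface.
def net_price_loose(gross, tax_rate):
    # Everything the function needs is passed in explicitly.
    return gross * (1 + tax_rate)

print(net_price_tight(100.0))        # depends on hidden global state
print(net_price_loose(100.0, 0.2))   # explicit, easy to test and to change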
Principle 9. The design should be developed iteratively. With each iteration, the designer should
strive for greater simplicity. Like almost all creative activities, design occurs iteratively. The first
iterations work to refine the design and correct errors, but later iterations should strive to make the
design as simple as is possible.
When these design principles are properly applied, you create a design that exhibits both external and
internal quality factors .
Testing Principles. Glen Myers states a number of rules that can serve well as testing objectives:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as-yet-undiscovered error.
3. A successful test is one that uncovers an as-yet-undiscovered error.
In addition, the data collected as testing is conducted provide a good indication of software reliability
and some indication of software quality as a whole. But testing cannot show the absence of errors and
defects; it can show only that software errors and defects are present.
It is important to keep this (rather gloomy) statement in mind as testing is being conducted.
Davis suggests a set of testing principles that have been adapted for use:
Principle 1. All tests should be traceable to customer requirements. The most severe defects (from the customer’s point of view) are those that cause the program to fail to meet its requirements.
Principle 2. Tests should be planned long before testing begins. Test planning can begin as soon as
the requirements model is complete. Detailed definition of test cases can begin as soon as the design
model
has been solidified. Therefore, all tests can be planned and designed before any code has been
generated.
Principle 3. The Pareto principle applies to software testing. In this context the Pareto principle
implies that 80 percent of all errors uncovered during testing will likely be traceable to 20 percent of all
program components. The problem, of course, is to isolate these suspect components and to thoroughly
test them.
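As a rough illustration of how a team might act on the Pareto principle, the short Python sketch below (the defect data is invented) tallies logged defects per component so that the "vital few" components attracting most of the errors can be isolated and tested more thoroughly.

from collections import Counter

# Hypothetical defect log: one entry per defect, naming the component it was found in.
defect_log = ["billing", "billing", "ui", "billing", "reports",
              "billing", "ui", "billing", "auth", "billing"]

counts = Counter(defect_log)
total = sum(counts.values())

# Components sorted by defect count; the top few are the ones to test hardest.
for component, n in counts.most_common():
    print(f"{component:8s} {n:3d} defects ({100 * n / total:.0f}% of total)")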
Principle 4. Testing should begin “in the small” and progress toward testing “in the large.” The
first tests planned and executed generally focus on individual components. As testing progresses, focus
shifts in an attempt to find errors in integrated clusters of components and ultimately in the entire
system.
Principle 5. Exhaustive testing is not possible. The number of path permutations for even a
moderately sized program is exceptionally large. For this reason, it is impossible to execute every combination of paths during testing.
The deployment activity encompasses three actions: delivery, support, and feedback. Because modern
software process models are evolutionary or incremental in nature, deployment happens not once, but a
number of times as software moves toward completion.
Each delivery cycle provides the customer and end users with an operational software increment that
provides usable functions and features. Each support cycle provides documentation and human
assistance for all functions and features introduced during all deployment cycles to
date.
Each feedback cycle provides the software team with important guidance that results in modifications to
the functions, features, and approach taken for the next increment.
The delivery of a software increment represents an important milestone for any software project. A
number of key principles should be followed as the team prepares to deliver an increment:
Principle 1. Customer expectations for the software must be managed. Too often, the customer
expects more than the team has promised to deliver, and disappointment occurs immediately. This
results in feedback that is not productive and ruins team morale. In her book on managing expectations,
Naomi Karten states: “The starting point for managing expectations is to become more conscientious
about what you communicate and how.”
She suggests that a software engineer must be careful about sending the customer conflicting messages
(e.g., promising more than you can reasonably deliver in the time frame provided or delivering more
than you promise for one software increment and then less than promised for the next).
Principle 2. A complete delivery package should be assembled and tested. A CD-ROM or other
media (including Web-based downloads) containing all executable software, support data files, support
documents, and other relevant information should be assembled and thoroughly beta-tested with actual
users.
All installation scripts and other operational features should be thoroughly exercised in as many
different computing configurations (i.e., hardware, operating systems, peripheral devices, networking
arrangements) as possible.
Principle 3. A support regime must be established before the software is delivered. An end user
expects responsiveness and accurate information when a question or problem arises. If support is ad
hoc, or worse, nonexistent, the customer will become dissatisfied immediately.
Support should be planned, support materials should be prepared, and appropriate recordkeeping
mechanisms should be established so that the software team can conduct a categorical assessment of the
kinds of support requested.
Principle 4. Appropriate instructional materials must be provided to end users. The software team delivers more than the software itself. Appropriate training aids (if required) should be developed; troubleshooting guidelines should be provided; and a description of what is new or different about the delivered increment should be published.
Principle 5. Buggy software should be fixed first, delivered later. Under time pressure, some
software organizations deliver low-quality increments with a warning to the customer that bugs “will be
fixed in the next release.” This is a mistake. There’s a saying in the software business: “Customers will
forget you delivered a high-quality product a few days late, but they will never forget the problems that a
low-quality product caused them. The software reminds them every day.”
The delivered software provides benefit for the end user, but it also provides useful feedback for the
software team. As the increment is put into use, end users should be encouraged to comment on features
and functions, ease of use, reliability, and any other characteristics that are appropriate.
----------------------------------*****************************------------------------------------***********------------
MODULE- 4
The reason for many project shortcomings is often poor management of the projects. The National Audit Office in the UK, for example, identified ‘lack of skills and proven approach to project management and risk management’ among the factors causing project failure.
The dictionary definitions put a clear emphasis on the project being a planned activity. The emphasis on being planned assumes we can determine how to carry out a task before we start it. Some activities are genuinely exploratory, and the detailed steps cannot be worked out in advance.
Other activities, such as routine maintenance, will have been performed so many times
that everyone knows exactly what to do. In these cases, planning hardly seems
necessary, although procedures might be documented to ensure consistency and to help
newcomers.
The activities that benefit most from conventional project management are likely to
lie between these two extremes – see Figure 1.1.
There is a hazy boundary between the non-routine project and the routine job. The
first time you do a routine task it will be like a project. On the other hand, a project to
develop a system similar to previous ones that you have developed will have a large
element of the routine.
Also, expertise built up during the project may be lost when the team is eventually
dispersed at the end of the project.
Invisibility When a physical artefact such as a bridge is constructed the progress can
actually be seen. With software, progress is not immediately visible. Software project
management can be seen as the process of making the invisible visible.
Complexity Per dollar, pound or euro spent, software products contain more
complexity than other engineered artefacts.
Conformity The ‘traditional’ engineer usually works with physical systems and materials like cement and steel. These physical systems have complexity, but are governed by consistent physical laws. Software developers, in contrast, have to conform to the requirements of human clients and organizations, which can be inconsistent and can change.
Flexibility That software is easy to change is seen as a strength. However, where the
software system interfaces with a physical or organizational system, it is expected
that the software will change to accommodate the other components rather than
vice versa. Thus software systems are particularly subject to change.
1.5.1 Some of the common features of contract management and technical project management are as follows:
1. Stakeholders are involved in both.
2. Team from both the clients and suppliers are involved for accomplishing the
project.
3. They generally evolve out of need and requirements from both the clients and
suppliers.
4. They are interdependent on each other.
5. Standard protocols are maintained by both the clients and suppliers.
Thus, the project manager will not worry about estimating the effort needed to write
individual software components as long as the overall project is within budget and
on time. On the supplier side, there will need to be project managers who deal with
the more technical issues.
A software project is not only concerned with the actual writing of software. In fact,
where a software application is bought ‘off the shelf’, there may be no software
writing as such, but this is still fundamentally a software project because so many of
the other activities associated with software will still be present.
Usually there are three successive processes that bring a new system into being – see
Figure 1.2.
1.The feasibility study assesses whether a project is worth starting – that it has a
valid business case. Information is gathered about the requirements of the proposed
application. Requirements elicitation can, at least initially, be complex and difficult.
The stakeholders may know the aims they wish to
pursue, but not be sure about the means of achievement. The developmental and
operational costs, and the value of the benefits of the new system, will also have to be
estimated. With a large system, the feasibility study could be a project in its own right
with its own plan. The study could be part of a strategic planning exercise examining a
range of potential software developments. Sometimes an organization assesses a
programme of development made up of a number of projects.
Planning If the feasibility study indicates that the prospective project appears viable, the
project planning can start. For larger projects, we would not do all our detailed planning
at the beginning.
We create an outline plan for the whole project and a detailed one for the first stage.
Because we will have more detailed and accurate project information after the earlier
stages of the project have been completed, planning of the later stages is left to nearer
their start.
Project execution The project can now be executed. The execution of a project often
contains design and implementation sub-phases.
Students new to project planning often find that the boundary between design and
planning can be hazy. Design is making decisions about the form of the products to be
created. This could relate to the external appearance of the software, that is, the user
interface, or the internal architecture. The plan details the activities to be carried out to
create these products.
Planning and design can be confused because, at the most detailed level, planning decisions and design decisions tend to merge.
Figure 1.3 shows the typical sequence of software development activities recommended
in the international standard ISO 12207. Some activities are concerned with the system
while others relate to software. The development of software will be only one part of a
project.
Software could be developed, for example, for a project which also requires the
installation of an ICT infrastructure, the design of user jobs and user training.
Training to ensure that operators use the computer system efficiently is an example of a
system requirement for the project, as opposed to a specifically software requirement.
There would also be resource requirements that relate to application development costs.
Architecture design The components of the new system that fulfil each requirement have to
be identified. Existing components may be able to satisfy some requirements.
In other cases, a new component will have to be made. These components are not only
software: they could be new hardware or work processes. Although software developers
are primarily concerned with software components, it is very rare that these can be
developed in isolation. They will, for example, have to take account of existing legacy
systems with which they will interoperate.
The design of the system architecture is thus an input to the software requirements.
A second architecture design process then takes place that maps the software
requirements to software components.
Detailed design Each software component is made up of a number of software units that
can be separately coded and tested. The detailed design of these units is carried out
separately.
Code and test refers to writing code for each software unit. Initial testing to debug
individual software units would be carried out at this stage.
Integration The components are tested together to see if they meet the overall
requirements. Integration could involve combining different software components,
or combining and testing the software element of the system in conjunction with
the hardware platforms and user interactions.
Installation This is the process of making the new system operational. It would include activities such as setting up standing data (for example, the details of employees for a payroll system), setting system parameters, and training users.
Acceptance support This is the resolving of problems with the newly installed
system, including the correction of any errors, and implementing agreed extensions
and improvements. Software maintenance can be seen as a series of minor
software projects. In many environments, most software development is in fact
maintenance.
A plan for an activity must be based on some idea of a method of work. For
example, if you were asked to test some software, you may know nothing about
the software to be tested, but you could assume that you would need to:
● devise and write test cases that will check that each requirement has been
satisfied;
● create test scripts and expected results for each test case;
● compare the actual results and the expected results and identify discrepancies.
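A minimal sketch of the last two steps in Python's unittest framework, assuming a hypothetical calculate_discount function is the unit under test: each test case records the expected result in advance, and the framework compares it with the actual result and reports any discrepancy.

import unittest

def calculate_discount(order_total):
    # Hypothetical unit under test: 10% discount on orders of 100 or more.
    return order_total * 0.9 if order_total >= 100 else order_total

class DiscountTestCase(unittest.TestCase):
    def test_discount_applied_at_threshold(self):
        # Expected result is written down before the test is run.
        self.assertAlmostEqual(calculate_discount(100), 90.0)

    def test_no_discount_below_threshold(self):
        self.assertAlmostEqual(calculate_discount(99), 99.0)

if __name__ == "__main__":
    unittest.main()   # compares actual and expected results and reports discrepancies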
While a method relates to a type of activity in general, a plan takes that method (and perhaps others) and converts it to real activities, identifying for each activity when it will take place, who will carry it out, and what resources and materials will be needed.
Groups of methods or techniques are often grouped into methodologies such as object-
oriented design.
In workplaces there are systems that staff have to use if they want to do something,
such as recording a sale.
A traditional distinction has been between information systems which enable staff
to carry out office processes and embedded systems which control machines. A
stock control system would be an information system. An embedded, or process
control, system might control the air conditioning equipment in a building. Some
systems may have elements of both where, for example, the stock control system
also controls an automated warehouse.
All types of software projects can be classified into software product development
projects and software services projects.
These two broad classes of software projects can be further classified into subclasses as shown in Figure 1.4 below.
Example: BANCS from TCS and FINACLE from Infosys in the banking domain, and AspenPlus from Aspen Corporation in the chemical process simulation domain.
Outsourced projects
While developing a large project, sometimes, it makes good commercial sense for a
company to outsource some parts of its work to other companies.
For example, a company may consider outsourcing as a good option, if it feels that it
does not have sufficient expertise to develop some specific parts of the product or if
it determines that some parts can be developed cost-effectively by another
company. Since an outsourced project is only a part of a larger project, it is usually small in size and needs to be completed within a few months.
The type of development work being handled by a company can have an impact on
its profitability. For example, a company that has developed a generic software
product usually gets an uninterrupted stream of revenue over several years.
However, outsourced projects fetch only one time revenue to any company.
Objective-driven development
A project might be to create a product, the details of which have been specified by the
client. The client has the responsibility for justifying the product.
On the other hand, the project requirement might be to meet certain objectives which
could be met in a number of ways. An organization might have a problem and ask a
specialist to recommend a solution.
This is useful where the technical work is being done by an external group and the
user needs are unclear at the outset. The external group can produce a preliminary
design at a fixed fee. If the design is acceptable the developers can then quote a price
for the second, implementation, stage based on an agreed requirement.
The computers that are in a distributed system can be physically close together and
connected by a local network, or they can be geographically distant and connected
by a Wide Area Network. A distributed system permits resource sharing, including
software by systems linked to the network. Some examples of distributed systems are intranets, the Internet, the World Wide Web, and email.
3. Free Software Projects – Free software is software that can be freely used, modified, and redistributed with only one constraint: any redistributed version of the software must be distributed with the original terms of free use, modification, and distribution. In other words, the user has the liberty to copy, run, download, distribute, and modify the software for their own purposes. Thus, this software gives these freedoms without charge, and users can adapt the programs to their needs.
4. Software Hosted on CodePlex – CodePlex is Microsoft’s open-source project hosting website. CodePlex is a site for managing open-source software projects; most of those projects are permissively licensed, commonly written in C#, and can serve as building blocks (for example, advanced GUI control libraries) for one’s own open-source project. The great thing about permissively licensed building blocks is that one doesn’t have to worry about the project being pulled under the GPL if one decides to close the source. Because CodePlex is based on Team Foundation Server, it also provides enterprise bug tracking and build management for open-source projects, which is far better than the services provided by SourceForge.
A project charter explains the project in clear, concise wording for high level
management.
The document provides key information about a project and provides approval to
start the project. Therefore, it serves as a formal announcement that a new
approved project is about to commence.
The project charter also contains the appointment of the project manager, the person who is overall responsible for the project.
The project charter is a final official document that is prepared in accordance with
the mission and visions of the company along with the deadlines and the
milestones to be achieved in the project.
It acts as a road map for the project manager and clearly describes the objectives that have to be achieved in the project.
The project charter clearly defines the projects, its attributes, the end results, and
the project authorities who will be handling the project.
The project charter along with the project plan provide strategic plans for the
implementation of the projects. It is also the green signal for the project manager to
commence the project.
In a nutshell, the elements of the project charter are:
● Reasons for the project
● Objectives and constraints of the project
● The main stakeholders
● Risks identified
● Benefits of the project
● General overview of the budget
1.10 Stakeholders
These are people who have a stake or interest in the project. Their early identification
is important as you need to set up adequate communication channels with them.
Stakeholders can be categorized as:
● Internal to the project team This means that they will be under the direct
managerial control of the project leader.
● External to the project team but within the same organization For example, the project leader might need the assistance of the users to carry out systems testing. Here the commitment of the people involved has to be negotiated.
● External to both the project team and the organization For example, customers or contractors who will carry out work for the project; here the relationship is usually governed by a contract.
Different types of stakeholder may have different objectives and one of the jobs of the
project leader is to recognize these different interests and to be able to reconcile them.
For example, end-users may be concerned with the ease of use of the new application,
while their managers may be more focused on staff savings.
The project leader needs to be a good communicator and negotiator. Boehm and Ross
proposed a ‘Theory W’ of software project management where the manager concentrates
on creating situations where all parties benefit from a project and therefore have an
interest in its success. (The ‘W’ stands for ‘win–win’.)
Among all these stakeholders are those who actually own the project. They control the
financing of the project. They also set the objectives of the project.
The objectives should define what the project team must achieve for project success.
Although different stakeholders have different motivations, the project objectives identify
the shared intentions for the project.
Objectives focus on the desired outcomes of the project rather than the tasks within it –
they are the ‘post-conditions’ of the project.
Informally the objectives could be written as a set of statements following the opening words ‘the project will be a success if…’. Thus one statement in a set of objectives might be ‘customers can order our products online’ rather than ‘to build an e-commerce website’.
There is often more than one way to meet an objective and the more possible routes to
success the better.
There may be several stakeholders, including users in different business areas, who might
have some claim to project ownership. In such a case, a project authority needs to be
explicitly identified with overall authority over the project.
This authority is often a project steering committee (or project board or project
management board) with overall responsibility for setting, monitoring and modifying
objectives. The project manager runs the project on a day-to-day basis, but regularly
reports to the steering committee.
An effective objective for an individual must be something that is within the control of
that individual. An objective might be that the software application produced must pay for
itself by reducing staff costs. As an overall business objective this might be reasonable.
We can say that in order to achieve the objective we must achieve certain goals or sub-objectives first. Effective objectives are often summarized as needing to be SMART:
Specific Effective objectives are concrete and well defined. Vague aspirations such as ‘to
improve customer relations’ are unsatisfactory. Objectives should be defined so that it is
obvious to all whether the project has been successful.
Measurable Ideally there should be measures of effectiveness which tell us how successful
the project has been. For example, ‘to reduce customer complaints’ would be more
satisfactory as an objective than ‘to improve customer relations’. The measure can, in some
cases, be an answer to simple yes/no question, e.g. ‘Did we install the new software by 1
June?’
Achievable It must be within the power of the individual or group to achieve the
objective.
Relevant The objective must be relevant to the true purpose of the project.
Time constrained There should be a defined point in time by which the objective should
have been achieved.
Measures of effectiveness
Measures of effectiveness provide practical ways of checking that an objective has been met. Some measures can only be taken after the system is operational, so predictive measures taken during development are also useful. For example, a large number of errors found during code inspections might indicate potential problems with reliability later.
A cost–benefit analysis will often be part of the project’s feasibility study. This will
itemize and quantify the project’s costs and benefits. The benefits will be affected by the
completion date: the sooner the project is completed, the sooner the benefits can be
experienced.
The quantification of benefits will often require the formulation of a business model
which explains how the new application can generate the claimed benefits.
For example, the plan should ensure that:
● the development costs are not allowed to rise to a level which threatens to exceed the value of benefits;
● the features of the system are not reduced to a level where the expected benefits cannot be realized;
● the delivery date is not delayed so that there is an unacceptable loss of benefits.
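As a rough numerical illustration of these trade-offs (all figures are invented), the short Python sketch below shows how a delay in delivery eats into the benefit period and can erode the business case.

# Invented figures for a small cost-benefit check.
development_cost = 120_000     # total estimated development cost
monthly_benefit = 5_000        # value generated per month once the system is live
useful_life_months = 48        # period over which benefits are counted
delay_months = 6               # slippage shortens the period in which benefits accrue

benefits = monthly_benefit * (useful_life_months - delay_months)
net_benefit = benefits - development_cost

print(f"Total benefits: {benefits}")      # 210000
print(f"Net benefit:    {net_benefit}")   # 90000 - still positive, so the business case holds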
The project plan should be designed to ensure project success by preserving the
business case for the project. However, every non-trivial project will have problems,
so at what stage do we say that a project is actually a failure? Because different
stakeholders have different interests, some stakeholders in a project might see it as a
success while others do not.
The project objectives are the targets that the project team is expected to achieve. In
the case of software projects, they can usually be summarized as delivering: the agreed functionality to the required level of quality, on time, and within budget.
A project could meet these targets but the application, once delivered, could fail to meet the business case. A computer game could be delivered on time and within budget, but might then not sell.
We have seen that in business terms it can generally be said that a project is a success if the
value of benefits exceeds the costs. While project managers have considerable control
over development costs, the value of the benefits of the project deliverables is dependent
on external factors such as the number of customers.
Project objectives still have some bearing on eventual business success. A delay in
completion reduces the amount of time during which benefits can be generated and
diminishes the value of the project.
A project can be a success on delivery but then be a business failure. On the other hand, a
project could be late and over budget, but its deliverables could still, over time, generate
benefits that outweigh the initial expenditure.
The possible gap between project and business concerns can be reduced by having a
broader view of projects that includes business issues.
Because the focus of project management is, not unnaturally, on the immediate project, it may
not be seen that the project is actually one of a sequence. Later projects benefit from the
technical skills learnt on earlier projects.
Technical learning will increase costs on the earlier projects, but later projects benefit as
the learnt technologies can be deployed more quickly, cheaply and accurately. This
expertise is often accompanied by additional software assets.
For example reusable code. Where software development is outsourced, there may be
immediate savings, but these longer-term benefits of increased expertise will be lost.
Astute managers may assess which areas of technical expertise it would be beneficial to
develop.
Customer relationships can also be built up over a number of projects. If a client has
trust in a supplier who has done satisfactory work in the past, they are more likely to
use that company again, particularly if the new requirement builds on functionality
already delivered. It is much more expensive to acquire new clients than it is to retain existing ones.
Much of the project manager’s time is spent on only three of the eight identified
activities, viz., project planning, monitoring, and control. The time period
during which these activities are carried out is indicated in Fig. 1.4.
It shows that project management is carried out over three well-defined stages or processes, irrespective of the methodology used. In the project initiation stage, an initial plan is made.
Once the project execution starts, monitoring and control activities are taken up to ensure that the project execution proceeds as planned.
The monitoring activity involves monitoring the progress of the project. Control activities are initiated to minimize any significant variation in the plan.
Finally, the project is closed. In the project closing stage, all activities are logically completed and all contracts are formally closed.
Note that we have given a very brief description of these activities in this
chapter. We will discuss these activities in more detail in subsequent chapters.
During project planning, the project manager estimates three project parameters:
● Cost How much would it cost to complete the project?
● Duration How long would it take to complete the project?
● Effort How much effort would be necessary for completing the project?
The effectiveness of all activities such as scheduling and staffing, which are planned at a later stage, depends on the accuracy with which the above three project parameters have been estimated.
Project monitoring and control activities are undertaken after the initiation
of development activities.
• The aim of project monitoring and control activities is to ensure that the
software development proceeds as planned.
• At the start of a project, the project manager does not have complete
knowledge about the details of the project. As the project progresses
through different development phases, the manager’s information base
gradually improves.
• By taking these developments into account, the project manager can plan
subsequent activities more accurately with increasing levels of
confidence.
Figure 1.4 shows this aspect as iterations between monitoring and control, and the
plan revision activities.
In Figure 1.5 the ‘real world’ is shown as being rather formless. Especially in
the case of large undertakings, there will be a lot going on about which
management should be aware.
This will involve the local managers in data collection. Bare details, such as
‘location X has processed 2000 documents’, will not be very useful to higher
management: data processing will be needed to transform this raw data into
useful information. This might be in such forms as ‘percentage of records
processed’, ‘average documents processed per day per person’ and ‘estimated
completion date’.
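The short Python sketch below (figures invented) illustrates that data-processing step: raw counts reported from each location are turned into derived figures such as percentage processed and documents per person per day, which higher management can act on.

# Raw progress data collected from each location (invented figures).
progress = {
    "location X": {"processed": 2000, "total": 2500, "staff": 4, "days": 10},
    "location Y": {"processed": 1200, "total": 3000, "staff": 3, "days": 10},
}

for name, d in progress.items():
    pct_done = 100 * d["processed"] / d["total"]
    per_person_day = d["processed"] / (d["staff"] * d["days"])
    days_remaining = (d["total"] - d["processed"]) / (per_person_day * d["staff"])
    print(f"{name}: {pct_done:.0f}% processed, "
          f"{per_person_day:.0f} documents/person/day, "
          f"about {days_remaining:.0f} working days to finish")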
In effect they are comparing actual performance with one aspect of the overall
project objectives. They might find that one or two branches will fail to
complete the transfer of details in time. They would then need to consider what
to do (this is represented in Figure 1.5 by the box Making decisions/ plans).
One possibility would be to move staff temporarily from one branch to another. If this is
done, there is always the danger that while the completion date for the one branch is
pulled back to before the overall target date, the date for the branch from which staff are
being moved is pushed forward beyond that date.
The project manager would need to calculate carefully what the impact would be in
moving staff from particular branches. This is modelling the consequences of a
potential solution. Several different proposals could be modelled in this way before
one was chosen for implementation.
Having implemented the decision, the situation needs to be kept under review by
collecting and processing further progress details.
For instance, the next time that progress is reported, a branch to which staff have been transferred could still be behind in transferring details. This might be because the manual records are incomplete and another department, for which the project has a low priority, has to be involved in providing the missing information. In this case, transferring extra staff to do data inputting will not have accelerated data transfer.
A project plan is dynamic and will need constant adjustment during the execution of the
project.
A good plan provides a foundation for a good project, but is nothing without intelligent
execution. The original plan will not be set in stone but will be modified to take account of
changing circumstances.
Software Development and Project management Life Cycles
Software development life cycle denotes the stages through which software is developed.
In Figure 1.7 a software development life cycle (SDLC) is shown in terms of the set of activities that are undertaken during a software development project, their grouping into different phases, and their sequencing.
During the software development life cycle, starting from its conception, the developers carry out several processes (or development methodologies) till the software is fully developed and deployed at the client site.
During the software management life cycle, the software manager carries out several project management activities (or project management methodologies) to perform the required software management activities.
The activities carried out by the developers during software development life
cycle as well as the management life cycle are grouped into a number of
phases.
Sets of phases and their sequencing in the software development life cycle and the project management life cycle are shown in Figure 1.8 below.
The different phases in the software development life cycle are requirements analysis, design, development, test, and delivery.
The different phases of the project management life cycle are shown in Figure 1.8. In
the following, we discuss the main activities that are carried out in each phase.
Project Initiation
As shown in Figure 1.8, the software project management life cycle starts with
project initiation.
The project initiation phase usually starts with project concept development. During
concept development the different characteristics of the software to be developed
are thoroughly understood. The different aspects of the project that are investigated
and understood include: the scope of the project, project constraints, the cost that
would be incurred and the benefits that would accrue.
For example, an organization might feel a need for software to automate some of
its activities, possibly for more efficient operation. Based on the feasibility study, the
business case is developed.
Once the top management agrees to the business case, the project manager is
appointed, the project charter is written, and finally the project team is formed. This
sets the ground for the manager to start the project planning phase.
During the project initiation phase it is crucial for the champions of the project to
develop a thorough understanding of the important characteristics of the project.
W5HH Principle: Boehm suggested that during project initiation, the project
champions should have comprehensive answers to a set of key questions pertaining
to the project.
The name of this principle (W5HH) is an acronym constructed from the first letter of
each question.
Project bidding:
Three types of bidding techniques are in common use: request for quotation (RFQ), request for proposal (RFP), and request for information (RFI); each has its own implications and applicability.
Request for quotation (RFQ) The organization soliciting bids publishes a statement of the work to be done and invites vendors to quote a price for it. The RFQ-issuing organization can select a vendor based on the price quoted as well as the competency of the vendor.
In government organizations, the term Request For Tender (RFT) is usually used in place of RFQ. RFT is similar to RFQ; however, in RFT the bidder needs to deposit a tender fee in order to participate in the bidding process.
Request for proposal (RFP) Many times it so happens that an organization has a reasonable understanding of the problem to be solved; however, it does not have a good grasp of the solution aspects.
• The organization may not have sufficient knowledge about the different
features that are to be implemented, and may lack familiarity with the
possible choices of the implementation environment, such as, databases,
operating systems, client-server deployment, etc.
• In this case, the organization may solicit solution proposals from vendors.
The vendors may submit a few alternative solutions and the approximate
costs for each solution. In order to develop a better understanding, the
requesting organization may ask the vendors to explain or demonstrate their
solutions.
• Based on the RFP process, the requesting organization can form a clear idea
of the project solutions required, based on which it can form a statement of
work (SOW) for requesting RFQ from the vendors.
Request for Information (RFI) An organization soliciting bids may publish an RFI.
Based on the vendor response to the RFI, the organization can assess the
competencies of the vendors and shortlist the vendors who can bid for the work.
However, it must be noted that vendor selection is seldom done based on an RFI alone; it is usually followed by an RFQ or RFP process involving the shortlisted vendors.
Project planning
An important outcome of the project initiation phase is the project charter. During the
project planning phase, the project manager carries out several processes and creates the
following documents:
Project plan: This document identifies the project tasks, and a schedule for the project tasks that assigns project resources and time frames to the tasks (a small illustrative sketch follows below).
Resource plan: It lists the resources, manpower, and equipment that would be required to execute the project.
Financial plan: It documents the plan for manpower, equipment, and other costs.
Quality plan: Quality targets and quality control plans are included in this document.
Risk plan: This document lists the identified potential risks, their prioritization, and a plan for the actions that would be taken to contain the different risks.
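As a minimal sketch (task names and dates invented) of the kind of record a project plan ties together — a task, the resources assigned to it, and its time frame — consider the following Python fragment.

from datetime import date

# Invented entries: each task is tied to resources and a time frame.
project_plan = [
    {"task": "Requirements analysis", "assigned_to": ["analyst"],
     "start": date(2024, 1, 8), "end": date(2024, 1, 26)},
    {"task": "Architecture design", "assigned_to": ["architect", "senior developer"],
     "start": date(2024, 1, 29), "end": date(2024, 2, 16)},
]

for entry in project_plan:
    duration = (entry["end"] - entry["start"]).days + 1
    print(f'{entry["task"]}: {", ".join(entry["assigned_to"])}, {duration} calendar days')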
Project execution
• In this phase the tasks are executed as per the project plan developed during
the planning phase. A series of management processes are undertaken to
ensure that the tasks are executed as per plan.
• Monitoring and control processes are executed to ensure that the tasks are
executed as per plan and corrective actions are initiated whenever any
deviations from the plan are noticed.
Project closure
Project closure involves completing the release of all the required deliverables to
the customer along with the necessary documentation.
Subsequently, all the project resources are released and supply agreements with the vendors are closed.
Over the last two decades, the basic approach taken by the software industry to develop
software has undergone a radical change. Hardly any software is being developed from
scratch any more. Software development projects are increasingly being based on either
tailoring some existing product or reusing certain pre-built libraries.
In either case, two important goals of recent life cycle models are maximization of code
reuse and compression of project durations. Other goals include facilitating and
accommodating client feedbacks and customer participation in project development
work, and incremental delivery of the product with evolving functionalities.
Change requests from customers are encouraged, rather than circumvented. Clients on
the other hand, are demanding further reductions in product delivery times and costs.
These recent developments have changed project management practices in many
significant ways.
Planning Incremental Delivery: A few decades ago, projects were much simpler and therefore more predictable than present-day projects. In those days, projects were planned in sufficient detail well before the actual project execution started. After the project initiation, monitoring and control activities were carried out to ensure that the project execution proceeded as per plan.
Now, projects are required to be completed over a much shorter duration, and rapid
application development and deployment are considered key strategies. The traditional
long-term planning has given way to adaptive short-term planning.
Instead of making a long-term project completion plan, the project manager now plans all
incremental deliveries with evolving functionalities. This type of project management is
often called extreme project management.
Quality Management: Of late, customer awareness about product quality has increased
significantly. Tasks associated with quality management have become an important
responsibility of the project manager. The key responsibilities of a project manager now
include assessment of project progress and tracking the quality of all intermediate
artifacts.
Change Management: Earlier, when the requirements were signed off by the customer,
any changes to the requirements were rarely entertained. Customer suggestions are now
actively being solicited and incorporated throughout the development process.
To facilitate customer feedback, incremental delivery models are popularly being used.
Product development is being carried out through a series of product versions
implementing increasingly greater functionalities.
Also customer feedback is solicited on each version for incorporation. This has made it
necessary for an organization to keep track of the various versions and revisions through
which the product develops.
Another reason for the increased importance of keeping track of the versions and
revisions is the following. Application development through customization has become a
popular business model. Therefore, existence of a large number of versions of a product
and the need to support these by a development organization has become common.
From this viewpoint, modern software development practices advocate delivery of software in increments as and when the increments are completed by the development team, and actively soliciting change requests from the customer as they use the increments of the software delivered to them.
Traditionally, change requests from the customer after the start of the project were discouraged; at present, in contrast, the requirements change frequently in most projects during the development cycle. It has, therefore, become necessary to properly manage the
requirements, so that as and when there is any change in requirements, the latest and up-
to-date requirements become available to all.
Starting with an initial release, releases are made each time the code changes. There are several reasons why the code needs to change. These reasons include functionality
enhancements, bug fixes and improved execution speed. Further, modern development
processes such as the agile development processes advocate frequent and regular
releases of the software to be made to the customer during the software development.
Starting with the release of the basic or core functionalities of the software, more
complete functionalities are made available to the customers every couple of weeks. In
this context, effective release management has become important.
Every project is susceptible to a host of risks that could usually be attributed to factors
such as technology, personnel and customer. Unless proper risk management is
practised, the progress of the project may get adversely affected. Risk management involves identifying the risks, assessing their likely impact, and planning and monitoring actions to contain them, and it has therefore become an important responsibility of the project manager.
Scope Management: Once a project gets underway, many requirement change requests
usually arise. Some of these can be attributed to the customers and the others to the
development team.
While accepting change requests, it must be remembered that the three critical project
parameters: scope, schedule and project cost are interdependent and are very intricately
related. If the scope is allowed to change extensively, while strictly maintaining the
schedule and cost, then the quality of the work would be the major casualty.
For every scope change request, the project managers examine whether the change
request is really necessary and whether the budget and time schedule would permit it.
Often, the scope change requests are superfluous.
For example, an overenthusiastic project team member may suggest adding features that are not required by the customer. Such scope change requests originated by overenthusiastic team members are called gold plating and should be discouraged if the project is to succeed.
The customer may also initiate scope change requests that are more ornamental or at
best nonessential. These serve only to jeopardize the success of the project, while not
adding any perceptible value to the delivered software. Such avoidable scope change
requests originated by the customer are termed as scope creep. To ensure the success of
the project, the project manager needs to guard against both gold plating and scope
creep.
QUESTIONS
10) What are the reasons behind the success and failure of a project?
----------------------******************------------------------****************-------------------
Module 5
Software quality
5.1 Introduction
While quality is generally agreed to be a good thing, in practice what is meant by the
'quality' of a system can be vague. We need to define precisely what qualities we require of a system.
Rather than concentrating on the quality of the final system, a potential customer for
software might check that the suppliers were using the best methods.
Quality will be of concern at all stages of project planning and execution, but will be of
particular interest at the following points in the Step Wise framework (Figure 5.1).
Step 1: Identify project scope and objectives :Some objectives could relate to the
qualities of the application to be delivered.
Step 2: Identify project infrastructure :Within this step, an activity identifies installation
standards and procedures. Some of these will almost certainly be about quality.
Step 3: Analyze project characteristics :In activity 5.2 ('Analyze other project
characteristics - including quality based ones') the application to be implemented is
examined to see if it has any special quality requirements.
If, for example, it is safety critical then a range of activities could be added, such as n-
version development where a number of teams develop versions of the same software
which are then run in parallel with the outputs being cross-checked for discrepancies.
Step 4: Identify the products and activities of the project :It is at this point that the
entry, exit and process requirements are identified for each activity.
Step 8: Review and publicize plan: At this stage the overall quality aspects of the
project plan are reviewed.
We would expect quality to be a concern of all producers of goods and services. However,
the special characteristics of software create special demands.
->Increasing criticality of software: The final customer or user is naturally anxious
about the general quality of software, especially its reliability. This is increasingly so as
organizations rely more on their computer systems and software is used in more safety-
critical applications, for example to control aircraft.
->The intangibility of software: can make it difficult to know that a project task was
completed satisfactorily. Task outcomes can be made tangible by demanding that the
developer produce 'deliverables' that can be examined for quality.
Definitions :
Software quality refers to how well a software application conforms to a set of functional
and non-functional requirements, as well as how well it satisfies the needs or expectations of
its users. It encompasses various attributes such as reliability, efficiency, maintainability,
usability, security, and scalability.
Some qualities of a software product reflect the external view of software held by users,
as in the case of usability. These external qualities have to be mapped to internal factors
of which the developers would be aware. It could be argued, for example, that well-
structured code is likely to have fewer errors and thus improve reliability.
Defining quality is not enough. If we are to judge whether a system meets our
requirements we need to be able to measure its qualities.
A good measure must relate the number of units to the maximum possible."
The maximum number of faults in a program, for example, is related to the size of the
program, so a measure of faults per thousand lines of code is more helpful than total
faults in a program.
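As a small illustrative sketch (the function name and the figures are assumptions, not taken from the text), a defect density measure of this kind could be computed as follows:

```python
def defect_density(total_faults: int, lines_of_code: int) -> float:
    """Return faults per thousand lines of code (faults/KLOC)."""
    return total_faults / (lines_of_code / 1000)

# Illustrative figures: 45 faults found in a 30,000-line program
print(defect_density(45, 30_000))   # 1.5 faults per KLOC
```

Expressing the fault count relative to program size in this way allows programs of different sizes to be compared.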
Trying to find measures for a particular quality helps to clarify and communicate what
that quality really is. What is being asked is, in effect, 'how do we know when we
have been successful?'
The measures may be direct, where we can measure the quality directly, or indirect,
where the thing being measured is not the quality itself but an indicator that the quality is
present. For example, the number of enquiries by users received by a help desk about
how one operates a particular software application might be an indirect measurement of
its usability.
When project managers identify quality measurements they effectively set targets for
project team members, so care has to be taken that an improvement in the measured
quality is always meaningful.
For example, the number of errors found in program inspections could be counted, on
the grounds that the more thorough the inspection process, the more errors will be
discovered. This count could be improved by allowing more errors to go through to the
inspection stage rather than eradicating them earlier - which is not quite the point.
Test: the practical test of the extent to which the attribute quality exists
Target range: the range of values within which it is planned the quality measurement
value should lie.
-> Availability: the percentage of a particular time interval that a system is usable
-> Mean time between failures: the total service time divided by the number of failures
-> Failure on demand: the probability that a system will not be available at the time
required, or the probability that a transaction will fail
-> Support activity: the number of fault reports that are generated and processed.
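The short sketch below illustrates how the first three of these measures might be computed from raw service records; the function names and figures are illustrative assumptions, not taken from the text.

```python
def availability(usable_hours: float, interval_hours: float) -> float:
    """Percentage of a particular time interval during which the system was usable."""
    return 100.0 * usable_hours / interval_hours

def mean_time_between_failures(total_service_hours: float, failures: int) -> float:
    """Total service time divided by the number of failures."""
    return total_service_hours / failures

def failure_on_demand(failed_transactions: int, total_transactions: int) -> float:
    """Probability that a transaction will fail."""
    return failed_transactions / total_transactions

# Illustrative figures for one month of operation
print(availability(718, 720))                  # about 99.7%
print(mean_time_between_failures(718, 3))      # about 239 hours
print(failure_on_demand(4, 10_000))            # 0.0004
```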
Associated with reliability is maintainability, which is how quickly a fault, once detected,
can be corrected. A key component of this is changeability, which is the ease with which
the software can be modified.
Before an amendment can be made, the fault has to be diagnosed. Maintainability can
therefore be seen as changeability plus a new quality, analysability, which is the ease
with which causes of failure can be identified.
The user will be concerned with the elapsed time between a fault being detected
and it being corrected, while the software development managers will be
concerned about the effort involved.
There are several well-established quality models, including McCall's, Dromey's and
Boehm's. Since there was no standardization among the large number of quality models
that became available, the ISO 9126 model of quality was developed.
Garvin reasoned that sometimes users have subjective judgment of the quality of a
program (perceived quality) that must be taken into account to judge its quality.
McCall's model:
McCall defined the quality of a software in terms of three broad parameters: its
operational characteristics, how easy it is to fix defects and how easy it is to port it to
different platforms.
Dromey's model
Dromey proposed that software product quality depends on four major high-level
properties of the software: Correctness, internal characteristics, contextual
characteristics and certain descriptive properties. Each of these high-level properties
of a software product, in turn, depends on several lower-level quality attributes of the
software. Dromey's hierarchical quality model is shown in Figure 5.2.
Boehm's model: Boehm postulated that the quality of a software can be defined based
on three high-level characteristics that are important for the users of the software.
Over the years, various lists of software quality characteristics have been put forward,
such as those of James McCall and of Barry Boehm.
The term 'maintainability' has been used, for example, to refer to the ease with which an
error can be located and corrected in a piece of software, and also in a wider sense to
include the ease of making any changes. For some, 'robustness' has meant the software's
tolerance of incorrect input, while for others it has meant the ability to change program
code without introducing errors.
->The ISO 9126 standard was first introduced in 1991 to tackle the question of the
definition of software quality. The original 13-page document was designed as a
foundation upon which further, more detailed, standards could be built. The ISO 9126
standards documents are now very lengthy.
->Currently, in the UK, the main ISO 9126 standard is known as BS ISO/IEC 9126-
1:2001. This is supplemented by some technical reports (TRs), published in 2003, which
are provisional standards. At the time of writing, a new standard in this area, ISO 25000,
is being developed.
->ISO 9126 has separate documents to cater for the above three sets of needs. Despite the
size of this set of documentation, it relates only to the definition of software quality
attributes.
->A separate standard, ISO 14598, describes the procedures that should be carried out when
assessing the degree to which a software product conforms to the selected ISO 9126 quality
characteristics.
->ISO 14598 could be used to carry out an assessment using a different set of quality
characteristics from those in ISO 9126 if circumstances required it.
The difference between internal and external quality attributes has already been noted.
Note :
External Quality (Functional)
External quality is the usefulness of the system as perceived from outside. It provides
customer value and meets the product owner’s specifications. This quality can be
measured through feature tests, QA and customer feedback. This is the quality that
affects your clients directly, as opposed to internal quality which affects them
indirectly.
Internal Quality (Structural)
Internal quality has to do with the way that the system has been constructed. It is a
much more granular measurement and considers things like clean code, complexity,
duplication, component reuse. This quality can be measured through predefined
standards, linting tools, unit tests etc. Internal quality affects your ability to manage
and reason about the program.
External software quality focuses on the end user perceiving the quality while
interacting with the software system.
->ISO 9126 also introduces another type of quality - quality in use- for which the
following elements have been identified:
• Effectiveness: the ability to achieve user goals with accuracy and completeness
• Productivity: avoiding the excessive use of resources, such as staff effort, in achieving
user goals.
• Safety: within reasonable levels of risk of harm to people and other entities such as
business, software, property and the environment
• Satisfaction: smiling users
'Users' in this context includes not just those who operate the system containing the
software, but also those who maintain and enhance the software. The idea of quality in
use underlines how the required quality of the software is an attribute not just of the
software but also of the context of use.
For instance :
In the IOE scenario, suppose the maintenance job reporting procedure varies
considerably, depending on the type of equipment being serviced, because different
inputs are needed to calculate the cost to IOE. Say that 95% of jobs currently involve
maintaining photocopiers and 5% concern maintenance of printers.
If the software is written for this application, then despite good testing, some errors
might still get into the operational system. As these are reported and corrected, the
software would become more 'mature' as faults become rarer.
If there were a rapid switch so that more printer maintenance jobs were being
processed, there could be an increase in reported faults as coding bugs in previously less
heavily used parts of the software code for printer maintenance were flushed out by the
larger number of printer maintenance transactions. Thus, changes to software use
involve changes to quality requirements.
ISO 9126 suggests sub-characteristics for each of the primary characteristics. They are
useful as they clarify what is meant by each of the main characteristics.
Typically these could be auditing requirements. Since the original 1999 draft, a sub-
characteristic called 'compliance' has been added to all six ISO external characteristics. In
each case, this refers to any specific standards that might apply to the particular quality
attribute.
'Interoperability' is a good illustration of the efforts of ISO 9126 to clarify terminology.
'Interoperability' refers to the ability of the software to interact with other systems.
The framers of ISO 9126 have chosen this word rather than 'compatibility' because the
latter causes confusion with the characteristic referred to by ISO 9126 as 'replaceability'
(see below).
'Maturity' refers to the frequency of failure due to faults in a software product, the
implication being that the more the software has been used, the more faults will have
been uncovered and removed.
Note that 'recoverability' has been clearly distinguished from 'security' which describes
the control of access to a system.
Note how 'learnability' is distinguished from 'operability'. A software tool could be easy
to learn but time- consuming to use because, say, it uses a large number of nested menus.
This might be fine for a package used intermittently, but not where the system is used for
many hours each day. In this case 'learnability' has been incorporated at the expense of
'operability'.
'Analysability' is the ease with which the cause of a failure can be determined.
'Changeability' is the quality that others call 'flexibility': the latter name is a better one as
'changeability' has a different connotation in plain English - it might imply that the
suppliers of the software are always changing it!
'Stability', on the other hand, does not refer to software never changing: it means that
there is a low risk of a modification to the software having unexpected effects.
'Portability compliance' relates to those standards that have a bearing on portability. The
use of a standard programming language common to many software/hardware
environments would be an example of this.
'Replaceability' refers to the factors that give 'upwards compatibility' between old
software components and the new ones. 'Downwards' compatibility is not implied by the
definition.
A new version of a word processing package might read the documents produced by
previous versions and thus be able to replace them, but previous versions might not be
able to read all documents created by the new version.
'Coexistence' refers to the ability of the software to share resources with other software
components; unlike 'interoperability', no direct data passing is necessarily involved.
ISO 9126 provides guidelines for the use of the quality characteristics. Variation in the
importance of different quality characteristics depending on the type of product is
stressed.
Once the requirements for the software product have been established, the following
steps are suggested:
1. Judge the importance of each quality characteristic for the application. Thus reliability
will be of particular concern with safety-critical systems, while efficiency will be
important for some real-time systems.
2. Select the external quality measurements within the ISO 9126 framework relevant to
the qualities prioritized above. Thus for reliability, mean time between failures would be
an important measurement, while for efficiency, and more specifically 'time behaviour',
response time would be an obvious measurement.
4. Identify the relevant internal measurements and the intermediate products in which
they appear. This would only be important where software was being developed, rather
than existing software being evaluated.
According to ISO 9126, measurements that might act as indicators of the final quality of
the software can be taken at different stages of the development life cycle. For products
at the early stages these indicators might be qualitative. For example, they could be
based on checklists where compliance with predefined criteria is assessed by expert
judgement. As the product nears completion, objective, quantitative, measurements
would increasingly be taken.
For example, the efficiency characteristics of time behaviour and resource utilization
could be enhanced by exploiting the particular characteristics of the operating system
and hardware environments within which the software will perform. This, however,
would probably be at the expense of portability.
It was noted above that quality assessment could be carried out for a number of different
reasons: to assist software development, acquisition or independent assessment.
Where potential users are assessing a number of different software products in order to
choose the best one, the outcome will be along the lines that product A is more
satisfactory than product B or C. Here some idea of relative satisfaction exists and there is
a justification in trying to model how this satisfaction might be formed.
One approach recognizes some mandatory quality rating levels which a product must
reach or be rejected, regardless of how good it is otherwise. Other characteristics might
be desirable but not essential.
For these a user satisfaction rating could be allocated in the range, say, 0-5. This could be
based on having an objective measurement of some function and then relating different
measurement values to different levels of user satisfaction - see Table 5.2 above.
Along with the rating for satisfaction, a rating in the range 1-5, say, could be assigned to
reflect how important each quality characteristic was. The scores for each quality could
be given due weight by multiplying it by its importance weighting. These weighted scores
can then be summed to obtain an overall score for the product. The scores for various
products are then put in the order of preference.
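A minimal sketch of this weighted-scoring calculation is given below; the quality characteristics, importance weightings and satisfaction ratings are invented figures used only to illustrate the arithmetic.

```python
# Importance of each quality characteristic on a 1-5 scale (assumed values)
importance = {"reliability": 5, "usability": 3, "efficiency": 2}

# User satisfaction ratings on a 0-5 scale for each candidate product (assumed values)
satisfaction = {
    "Product A": {"reliability": 4, "usability": 3, "efficiency": 5},
    "Product B": {"reliability": 5, "usability": 2, "efficiency": 3},
}

def weighted_score(ratings: dict) -> int:
    """Sum of satisfaction ratings, each multiplied by the importance of that quality."""
    return sum(ratings[quality] * importance[quality] for quality in importance)

# The products can then be placed in order of preference by their overall scores
for product, ratings in satisfaction.items():
    print(product, weighted_score(ratings))
```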
Finally, a quality assessment can be made on behalf of a user community as a whole. For
example, a professional body might assess software tools that support the working
practices of its members. Unlike the selection by an individual user/purchaser, this is an
attempt to produce an objective assessment of the software independently of a particular
user environment.
It is clear that the result of such an exercise would vary considerably depending on the
weightings given to each software characteristic, and different users could have different
requirements. Caution would be needed here.
The users assess the quality of a software product based on its external attributes,
whereas during development, the developers assess the product's quality based on
various internal attributes.
The internal attributes may measure either some aspects of the product (called product metrics)
or of the development process (called process metrics).
Let us understand the basic differences between product and process metrics.
Product metrics help measure the characteristics of a product being developed. A few
examples of product metrics and the specific product characteristics that they measure
are the following: the LOC and function point metrics are used to measure size, the PM
(person-month) metric is used to measure the effort required to develop a product, and
the time required to develop the product is measured in months.
Errors not removed at early stages become more expensive to correct at later stages.
Each development step that passes before the error is found increases the amount of
rework needed. An error in the specification found in testing will mean rework at all the
stages between specification and testing. Each successive step of development is also
more detailed and less able to absorb change.
Note that Extreme Programming advocates suggest that the extra effort needed to amend
software at later stages can be exaggerated and is, in any case, often justified as adding
value to the software.
• Entry requirements, which have to be in place before an activity can start. An example
would be that a comprehensive set of test data and expected results be prepared and
approved before program testing
can commence.
These requirements may be laid out in installation standards, or a Software Quality Plan
may be drawn up for the specific project if it is a major one.
Note : The concept of the Internet of Everything originated at Cisco, which defines IoE as
"the intelligent connection of people, process, data and things", in contrast to the Internet
of Things, where communications are between machines.
BS EN ISO 9001:2000
At IOE, a decision might be made to use an outside contractor to produce the annual
maintenance contracts subsystem. A natural concern would be the standard of the
contractor's deliverables.
Quality control would involve the rigorous testing of all the software produced by the
contractor, insisting on rework where defects are found. This would be very time-
consuming. An alternative approach would focus on quality assurance.
In this case IOE would check that the contractors themselves were carrying out effective
quality control. A key element of this would be ensuring that the contractor had the right
quality management system in place. Various national and international standards
bodies, including the British Standards Institution (BSI), have engaged in the creation
of standards for quality management systems.
Standards such as the ISO 9000 series try to ensure that a monitoring and control system
to check quality is in place. They are concerned with the certification of the development
process, not of the end-product as in the case of crash helmets and electrical appliances
with their familiar CE marks. Standards in the ISO 9000 series relate to quality systems in
general and not just those in software development.
ISO 9000 describes the fundamental features of a Quality Management System (QMS)
and its terminology.
ISO 9001 describes how a QMS can be applied to the creation of products and the
provision of services.
ISO 9004 applies to process improvement.
There has been some controversy over the value of these standards. Stephen Halliday,
writing in The Observer, had misgivings that these standards are taken by many
customers to imply that the final product is of a certified standard although, as Halliday
says, 'It has nothing to do with the quality of the product going out of the gate. You set
down your own specifications and just have to maintain them, however low they may be.'
Obtaining certification can be expensive and time-consuming which can put smaller, but
still well-run, businesses at a disadvantage. Finally, there has been a concern that a
preoccupation with certification might distract attention from the real problems of
producing quality products.
Putting aside these reservations, let us examine how the standard works. First, we
identify those things to be the subject of quality requirements. We then put a system in
place which checks that the requirements are being fulfilled and corrective action taken
when necessary.
These principles are applied through cycles which involve the following activities:
Documentation of objectives -procedures (in the form of a quality manual), plans, and
records relating to the actual operation of processes. The documentation must be subject
to a change control system that ensures that it is current. Essentially one needs to be able
to demonstrate to an outsider that the QMS exists and is actually adhered to.
Management responsibility - the organization needs to show that the QMS and the
processes that produce goods and services conforming to the quality objectives are
actively and properly managed.
Planning
Determination and review of customer requirements
Effective communications between the customer and supplier.
Design and development being subject to planning, control and review
Requirements and other information used in design being adequately and clearly
recorded
Design outcomes being verified, validated and documented in a way that provides
sufficient information for those who have to use the designs
Changes to the designs should be properly controlled
Adequate measures to specify and evaluate the quality of purchased components
Production of goods and the provision of services should be under controlled
conditions to ensure adequate provision of information, work instruction,
equipment, measurement devices, and post- delivery activities
Measurement - to demonstrate that products conform to standards, and the QMS
is effective, and to improve the effectiveness of processes that create products or
services
A historical perspective
Before the 1950s, the primary means of realizing quality products was by undertaking
extensive testing of the finished products. The emphasis of the quality paradigms later
shifted from product assurance (extensive testing of the finished product) to process
assurance (ensuring that a good quality process is used for product development).
The United States Department of Defence (US DoD) is one of the largest buyers of
software products in the world. It has often faced difficulties dealing with the quality of
performance of vendors, to whom it assigned contracts. The department had to live with
recurring problems of delivery of low quality products, late delivery, and cost escalations.
DoD worked with the Software Engineering Institute (SEI) of the Carnegie Mellon
University to develop CMM. Originally, the objective of CMM was to assist DoD in
developing an effective software acquisition method by predicting the likely contractor
performance through an evaluation of their development practices.
Definition:
CMM is a reference model for appraising a software development organization into one of
five process maturity levels. The maturity level of an organization is a ranking of the quality
of the development process used by the organization. This information can be used to
predict the most likely outcome of a project that the organization undertakes.
It should be remembered that SEI CMM can be used in two different ways, viz., capability
evaluation and process assessment.
The different levels of SEI CMM have been designed so that it is easy for an organization
to slowly ramp up.
Level 1: Initial Organizations at this level work in an ad hoc, and often chaotic, manner.
Each developer feels free to follow any process that he or she may like. Due to the
chaotic development process practised, when a developer leaves the organization,
the new incumbent usually faces great difficulty in understanding the process that
was followed for the portion of the work that has been completed.
Consequently, time pressure builds up towards the product delivery time. To cope
up with the time pressure, many short cuts are tried out leading to low quality
products.
Though project failures and project completion delays are commonplace in these
level 1 organizations, yet it is possible that some projects may get successfully
completed. But an analysis of any incidence of successful completion of a project
would reveal the heroic efforts put in by some members of the project team.
Level 2: Repeatable Organizations at this level usually practise some basic project
management practices such as planning and tracking cost and schedule. Further, these
organizations make use of configuration management tools to keep the deliverable items
under configuration control.
Level 3: Defined At this level, the processes for both management and development
activities are defined and documented. There is a common organization-wide
understanding of activities, roles, and responsibilities. At this level, the organization
builds up the capabilities of its employees through periodic training programs. Also,
systematic reviews are practised to achieve phase containment of errors.
Level 4: Managed At this level, quantitative quality goals are set for the products, and
process and product metrics are collected and used to manage projects quantitatively.
Level 5: Optimizing Organizations operating at this level not only collect process and
product metrics, but analyze them to identify scopes for improving and optimizing the
various development and management activities. In other words, these organizations
strive for continuous process improvement.
Note :
Some of the CMMI Level 5 software IT companies in India include Tata Consultancy
Services (TCS), Infosys, Wipro, HCL Technologies, and Tech Mahindra. These
companies have achieved the highest level of maturity in the Capability Maturity
Model Integration (CMMI) for software development.
In a level 5 organization, the lessons learned from specific projects are incorporated in to
the process. Continuous process improvement is achieved both by careful analysis of the
process measurement results and assimilation of innovative ideas and technologies.
Except for level 1, each maturity level is characterized by several Key Process Areas
(KPAs). The KPAs of a level indicate the areas on which an organization at the lower maturity
level needs to focus to reach this level.
KPAs provide a way for an organization to gradually improve its quality over several
stages. In other words, at each stage of process maturity, KPAs identify the key areas on
which an organization needs to focus to take it to the next level of maturity. Each stage
has been carefully designed such that one stage enhances the capability already built up.
For example, trying to implement a defined process (level 3) before a repeatable process
(level 2) would be counterproductive as it becomes difficult to follow the defined process
due to schedule and budget pressures. In other words, trying to focus on some higher
level KPAs without achieving the lower level KPAs would be counterproductive.
CMMI is the successor of the Capability Maturity Model (CMM). In 2002, CMMI Version
1.1 was released. Version 1.2 followed in 2006. The genesis of CMMI is the following.
After CMM was first released in 1990, it was adopted and used in many domains other
than software development, such as human resource management (HRM).
CMMs were developed for disciplines such as systems engineering (SE-CMM), people
management (PCMM), software acquisition (SA-CMM), and others. Although many
organizations found these models to be useful, they faced difficulties arising from
overlap, inconsistencies, as well as integration of the models.
For example, all the terminologies that are used are very generic in nature and even the
word software does not appear in the definition documentation of CMMI. However, CMMI
has much in common with CMM, and also describes the five distinct levels of process
maturity of CMM.
ISO/IEC 15504 is a standard for process assessment that shares many concepts with
CMMI. The two standards should be compatible. Like CMMI the standard is designed to
provide guidance on the assessment of software development processes. To do this there
must be some benchmark or process reference model which represents the ideal
development life cycle against which the actual processes can be compared.
Various process reference models could be used but the default is the one described in
ISO 12207, which has been briefly discussed in Chapter 1 and which describes the main
processes - such as requirements analysis and architectural design - in the classic
software development life cycle.
Processes are assessed on the basis of nine process attributes - see Table 5.5.
When assessors are judging the degree to which a process attribute is being fulfilled they
allocate one of the following scores:
The CMMI standard has now grown to over 500 pages. Without getting bogged down in
detail, this section explores how the general approach might usefully be employed. To do
this we will take a scenario from industry.
UVW is a company that builds machine tool equipment containing sophisticated control
software. This equipment also produces log files of fault and other performance data in
electronic format. UVW produces software that can read these log files and produce
analysis reports and execute queries.
Both the control and analysis software are produced and maintained by the Software
Engineering department. Within this department there are separate teams who deal with
the software for different types of equipment.
Lisa is a Software Team Leader in the Software Engineering department with a team of
six systems designers reporting to the leader.
The group is responsible for new control systems and the maintenance of existing
systems for a particular product line. The dividing line between new development and
maintenance is sometimes blurred as a new control system often makes use of existing
software components which are modified to create the new
software.
A separate Systems Testing Group test software for new control systems, but not fault
correction and adaptive maintenance of released systems.
A project for a new control system is controlled by a Project Engineer with overall
responsibility for managing both the hardware and software sides of the project.
The Project Engineer is not primarily a software specialist and would make heavy
demands on the Software Team Leader, such as Lisa, in an advisory capacity. Lisa may, as
a Software Team Leader, work for a number of different Project Engineers in respect of
different projects, but in the UVW organizational chart she is shown as reporting to the
Head of Software Engineering.
A new control system starts with the Project Engineer writing a software requirement
document which is reviewed by a Software Team Leader, who will
then agree to the document, usually after some amendment. A copy of the requirements
document is also passed to the Systems Testing Group.
Lisa, if she were the designated Software Team Leader, would then write an Architecture
Design document mapping the requirements to actual software components. These
would be allocated to Work Packages carried out by individual members of Lisa's team.
UVW teams get the software quickly written and uploaded onto the newly developed
hardware platform for initial debugging. The hardware and software engineers will then
invariably have to alter the requirement and consequently the software as they find
inconsistencies, faults and missing functions.
The Systems Testing Group should be notified of these changes, but this can be patchy.
Once the system seems to be satisfactory to the developers, it is released to the Systems
Testing Group for final testing before shipping to customers.
Lisa's work problems mainly relate to late deliveries of software by her group because:
(i) The Head of Software Engineering and the Project Leaders may not liaise properly,
leading to the over- commitment of resources to both new systems and maintenance jobs
at the same time
(ii) The initial testing of the prototype often leads to major new requirements being
identified.
(iii) There is no proper control over change requests - the volume of these can
sometimes increase the demand for software development well beyond that originally
planned
(iv) Completion of system testing can be delayed because of the number of bug fixes
We can see that there is plenty of scope for improvements.
One problem is knowing where to start.Approaches like that of CMMI can help us identify
the order in which steps in improvement have to take place.
Some steps need to build on the completion of others. An immediate step would be to
introduce more formal planning and control. This would at least enable us to assess the
size of the problems even if we are not yet able to solve them all.
Effective change control procedures would make managers more aware of how changes
in the system's functionality can force project deadlines to be breached. These process
developments would help an organization move from Level 1 to Level 2.
Figure 5.5 illustrates how a project control system could be envisaged at this level of
maturity.
The steps of defining procedures for each development task and ensuring that they are
actually carried out help to bring an organization up to Level 3.
When more formalized processes exist, the behaviour of component processes can be
monitored.
For example, the numbers of change reports generated and system defects detected at
the system testing phase. Apart from information about the products passing between
processes, we can also collect effort information about each process itself. This enables
effective remedial action to be taken speedily when problems are found. The
development processes are now properly managed, bringing the organization up to Level
4.
PSP is based on the work of Watts Humphrey. Unlike CMMI that is intended for
companies, PSP is suitable for individual use. It is important to note that SEI CMM does
not tell software developers how to analyze, design, code, test or document software
products, but assumes that engineers use effective personal practices.
PSP recognizes that the process for individual use is different from that necessary for a
team. The quality and productivity of an engineer is to a great extent dependent on his
process.
PSP is a framework that helps engineers to measure and improve the way they work. It
helps in developing personal skills and methods by estimating, planning and tracking
performance against plans, and provides a defined process which can be tuned by
individuals.
Time measurement
PSP advocates that developers should track the way they spend time. This is because boring
activities seem to take longer than they actually do, while interesting activities seem to take less
time. Therefore, the actual time spent on a task should be measured with the help of a stop-clock
to get an objective picture of the time spent.
For example, a developer may stop the clock when attending a telephone call, taking a coffee break,
etc. An engineer should measure the time he spends on various development activities
such as designing, writing code, testing, etc.
PSP Planning
Individuals must plan their project. Unless an individual properly plans his activities,
disproportionately high effort may be spent on trivial activities and important activities
may be compromised, leading to poor quality results.
The developers must estimate the maximum, minimum and the average LOC required for
the product. They should use their productivity in minutes/LOC to calculate the
maximum, minimum and the average development time. They must record the plan data
in a project plan summary. The PSP is schematically shown in Figure 5.7.
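The planning arithmetic described above can be illustrated with a short sketch; the LOC estimates and the productivity figure are assumed values, not taken from the text.

```python
# Assumed size estimates (LOC) and the developer's own historical productivity
estimates = {"minimum": 400, "average": 500, "maximum": 650}
productivity_minutes_per_loc = 3.0

for label, loc in estimates.items():
    hours = loc * productivity_minutes_per_loc / 60
    print(f"{label}: {loc} LOC -> about {hours:.1f} hours of development time")
```

These figures would then be recorded in the project plan summary and later compared with the actual log data.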
After completing a project, the developer compares the log data with the initial plan to achieve
better planning in the future projects, to improve his process, etc. The four maturity levels of PSP
have schematically been shown in Figure 5.8.
The activities that the developer must perform for achieving a higher level of maturity have
also been annotated on the diagram. PSP2 introduces defect management via the use of
checklists for code and design reviews. The checklists are
developed by analysing the defect data gathered from earlier projects.
Six Sigma
The purpose of six sigma is to improve processes to do things better, faster, and at
a lower cost. It can in fact, be used to improve every facet of business, i.e.,
production, human resources, order entry, and technical support areas.
Six sigma becomes applicable to any activity that is concerned with cost,
timeliness, and quality of results. Therefore, it is applicable to virtually every
industry.
Six sigma seeks to improve the quality of process outputs by identifying and
removing the causes of defects and minimizing variability in the use of process. It
uses many quality management methods, including statistical methods, and
requires presence of six sigma experts within the organization (black belts, green
belts, etc.).
A six sigma defect is defined as any system behaviour that is not as per customer
specifications. Total number of six sigma defect opportunities is then the total
number of chances for committing an error. Sigma of a process can easily be
calculated using a six sigma calculator.
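As a rough illustration of such a calculation (the figures are invented, and the conventional 1.5-sigma long-term shift is assumed), the sigma level can be estimated from the defects-per-million-opportunities (DPMO) figure:

```python
from statistics import NormalDist

def sigma_level(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Estimate the sigma level of a process from its defect counts,
    applying the conventional 1.5-sigma long-term shift."""
    dpmo = defects / (units * opportunities_per_unit) * 1_000_000
    process_yield = 1 - dpmo / 1_000_000
    return NormalDist().inv_cdf(process_yield) + 1.5

# Illustrative figures: 25 defects observed in 1,000 units,
# each unit having 10 opportunities for a defect
print(sigma_level(25, 1_000, 10))   # roughly 4.3
```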
The six sigma DMAIC process (define, measure, analyze, improve, control) is an
improvement system for existing processes falling below specification and looking
for incremental improvement. The six sigma DMADV process (define, measure,
analyze, design, verify) is an improvement system used to develop new processes or
products at six sigma quality levels.
Note : Green Belts typically focus on process improvement, while Black Belts also focus
on process design and innovation.
Green Belts typically report to a Black Belt or other senior leaders, while Black Belts
typically report directly to a Six Sigma executive or sponsor.
Procedural structure: At first, programmers were left to get on with writing programs
as best they could. Over the years there has been the growth of methodologies where
every process in the software development cycle has carefully laid down steps.
Focus has shifted from relying solely on checking the products of intermediate stages
towards building an application as a number of smaller, relatively independent
components developed quickly and tested at an early stage. This can reduce some of the
problems, noted earlier, of attempting to predict the external quality of the software from
intermediate products.
Inspections
● Inspections can be carried out by colleagues at all levels except the very top.
● The inspection is led by a moderator who has had specific training in the
technique.
● The other participants have defined roles. For example, one person will act as a
recorder and note all defects found, and another will act as reader and take the
other participants through the document under inspection.
● Checklists are used to assist the fault-finding process.
● Statistics are maintained so that the effectiveness of the inspection process can
be monitored.
Note : A Fagan inspection is a process of trying to find defects in documents (such as
source code or formal specifications) during various phases of the software
development process. It is named after Michael Fagan, who is credited with the
invention of formal software inspections.
In the late 1960s, software was seen to be getting more complex while the capacity of the
human mind to hold detail remained limited. It was also realized that it was impossible to
test any substantial piece of software completely given the huge number of possible input
combinations. Testing, at best, could prove the presence of errors, not their absence. Thus
Dijkstra and others suggested that the only way to reassure ourselves about the
correctness of software was by examining the code.
The way to deal with complex systems, it was contended, was to break them down into
components of a size the human mind could comprehend. For a large system there would
be a hierarchy of components and subcomponents. For this decomposition to work
properly, each component would have to be self-contained, with only one entry and exit
point.
One such approach organizes the work around three separate teams:
● A specification team, which obtains the user requirements and also a usage
profile estimating the volume of use for each feature in the system;
● A development team, which develops the code but which does no machine
testing of the program code produced;
Usage profiles reflect the need to assess quality in use as discussed earlier in relation to
ISO 9126. They will be further discussed in Section 5.12 on testing below.
The development team does no debugging; instead, all software has to be verified by
them using mathematical techniques. The argument is that software which is constructed
by throwing up a crude program, which then has test data thrown at it and a series of hit-
and-miss amendments made to it until it works, is bound to be unreliable.
The certification team carry out the testing, which is continued until a statistical model
shows that the failure intensity has been reduced to an acceptable level.
Formal methods
Preconditions define the allowable states, before processing, of the data items upon
which a procedure is to work.
The postconditions define the state of those data items after processing. The
mathematical notation should ensure that such a specification is precise and
unambiguous.
It should also be possible to prove mathematically (in much the same way that at school
you learnt to prove Pythagoras' theorem) that a particular algorithm will work on the
data defined by the preconditions in such a way as to produce the postconditions.
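The flavour of pre- and postconditions can be conveyed informally with executable assertions, as in the sketch below. This is only an illustration: formal methods use mathematical notations such as Z or VDM and proofs rather than run-time checks, and the function here is an invented example.

```python
def integer_square_root(n: int) -> int:
    # Precondition: the input must be a non-negative integer
    assert isinstance(n, int) and n >= 0, "precondition violated"

    root = 0
    while (root + 1) * (root + 1) <= n:
        root += 1

    # Postcondition: root is the largest integer whose square does not exceed n
    assert root * root <= n < (root + 1) * (root + 1), "postcondition violated"
    return root

print(integer_square_root(17))   # 4
```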
Despite the claims made for many years about the effectiveness of using a formal notation
to define software specifications, it is rarely used in mainstream software development.
This is despite it being quite widely taught in universities.
A newer development that may meet with more success is the development of Object
Constraint Language (OCL). It adds precise, unambiguous, detail to the UML models, for
example about the ranges of values that would be valid for a named attribute. It uses an
unambiguous, but non-mathematical, notation which developers who are familiar with
Java-like programming languages should grasp relatively easily.
Much interest has been shown in Japanese software quality practices. The aim of the
'Japanese' approach is to examine and modify the activities in the development process in
order to reduce the number of errors in the end-products.
Testing and Fagan inspections can assist the removal of errors - but the same types of
error could occur repeatedly in successive products created by a faulty process. By
uncovering the source of errors, this repetition can be eliminated.
Staff are involved in the identification of sources of errors through the formation of
quality circles. These can be set up in all departments of an organization, including those
producing software where they are known as Software Quality Circles (SWQC).
A quality circle is a group of four to ten volunteers working in the same area who meet
for, say, an hour a week to identify, analyze and solve their work-related problems. One of
their number is the group leader and there could be an outsider, a facilitator, who can
advise on procedural matters. In order to make the quality circle work effectively,
training needs to be given.
Together the quality group select a pressing problem that affects their work. They
identify what they think are the causes of the problem and decide on a course of action to
remove these causes. Often, because of resource or possible organizational constraints,
they will have to present their ideas to management to obtain approval before
implementing the process improvement.
Associated with quality circles is the compilation of most probable error lists.
For example, at IOE, Amanda might find that the annual maintenance contracts project
is being delayed because of errors in the requirements specifications. The project team
could be assembled and spend some time producing a list of the most common types of
error that occur in requirements specifications. This is then used to identify measures
which can reduce the occurrence of each type of error. They might suggest, for instance,
that test cases be produced at the same time as the requirements specification and that
these test cases should be dry run at an inspection. The result is a checklist for use when
conducting inspections of requirement specifications.
A Post Implementation Review (PIR) takes place after a significant period of operation of
the new system, and focuses on the effectiveness of the new system, rather than the original
project process. The PIR is often produced by someone who was not involved in the original
project, in order to ensure neutrality. An outcome of the PIR will often be changes to enhance
the effectiveness of the installed system.
The Lessons Learnt report, on the other hand, is written by the project manager as soon
as possible after the completion of the project. This urgency is because the project team is
often dispersed to new work soon after the finish of the project. One problem that is
frequently voiced is that there is often very little follow-up on the recommendations of
such reports, as there is often no body within the organization with the responsibility and
authority to do so.
5.12 Testing
The final judgement of the quality of a software application is whether it actually works
correctly when executed. This section looks at aspects of the planning and management
of testing. A major headache with testing is estimating how much testing remains at any
point. This estimate of the work still to be done depends on an unknown, the number of
bugs left in the code. We will briefly discuss how we can deal with this problem.
The V-process model can be seen as expanding the activity box 'testing' in the waterfall
model. Each step has a matching validation process which can, where defects are found,
cause a loop back to the corresponding development stage and a reworking of the
following steps. Ideally this feeding back should occur only where a discrepancy has been
found between what was specified by a particular activity and what was actually
implemented in the next lower activity on the descent of the V loop.
For example, the system designer might have written that a calculation be carried out in
a certain way. A developer building code to meet this design might have misunderstood
what was required. At system testing stage, the original designer would be responsible
for checking that the software is doing what was specified and this would discover the
coder's misreading of that document.
Using the V-process model as a framework, planning decisions can be made at the outset
as to the types and amounts of testing to be done. An obvious example of this would be
that if the software were acquired 'off-the-shelf', the program design and code stages
would not be relevant and so program testing would not be needed.
The objectives of both verification and validation techniques are very similar. Both these
techniques have been designed to help remove errors in software. In spite of the
apparent similarity between their objectives, the underlying principles of these two bug
detection techniques and their applicability are very different.
The main differences between these two techniques are the following:
Example :
All the boxes shown in the right hand side of the V-process model of Figure 5.9
correspond to verification activities except the system testing block which corresponds
to validation activity.
For this reason, black-box testing is also known as functional testing and also as
requirements-driven testing. Design of white-box test cases, on the other hand, requires
analysis of the source code. Consequently, white-box testing is also called structural
testing or structure-driven testing.
Levels of testing
A software product is normally tested at three different stages or levels. These three
testing stages are .
• Unit testing
• Integration testing
• System testing
During unit testing, the individual components (or units) of a program are tested.
For every module, unit testing is carried out as soon as the coding for it is complete. Since
every module is tested separately, there is a good scope for parallel activities during unit
testing.
The objective of integration testing is to check whether the modules have any errors
pertaining to interfacing with each other.
Unit testing is referred to as testing in the small, whereas integration and system testing
are referred to as testing in the large. After testing all the units individually, the units are
integrated over a number of steps and tested after each step of integration (integration
testing). Finally, the fully integrated system is tested (system testing).
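The sketch below shows what unit testing of a single, isolated unit might look like in practice; the function under test and the test cases are invented purely for illustration.

```python
import unittest

def classify_triangle(a: int, b: int, c: int) -> str:
    """Unit under test: classify a triangle by the lengths of its sides."""
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class ClassifyTriangleTest(unittest.TestCase):
    def test_equilateral(self):
        self.assertEqual(classify_triangle(3, 3, 3), "equilateral")

    def test_isosceles(self):
        self.assertEqual(classify_triangle(5, 5, 3), "isosceles")

    def test_not_a_triangle(self):
        self.assertEqual(classify_triangle(1, 2, 3), "not a triangle")

if __name__ == "__main__":
    unittest.main()
```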
Testing activities
Test Planning Since many activities are carried out during testing, careful planning is
needed. The specific test case design strategies that would be deployed are also planned.
Test planning consists of determining the relevant test strategies and planning for any
test bed that may be required. A suitable test bed is an especially important concern
while testing embedded applications. A test bed usually includes setting up the hardware
or simulator.
Test Suite Design Planned testing strategies are used to design the set of test cases
(called test suite) using which a program is to be tested.
Test Case Execution and Result Checking Each test case is run and the results are
compared with the expected results. A mismatch between the actual result and expected
results indicates a failure. The test cases for which the system fails are noted down for
test reporting.
Test Reporting When the test cases are run, the tester may raise issues, that is, report
discrepancies between the expected and the actual findings. A means of formally
recording these issues and their history is needed. A review body adjudicates these
issues.
● The issue is dismissed on the grounds that there has been a misunderstanding of a
requirement by the tester.
● The issue is identified as a fault which the developers need to correct. Where
development is being done by contractors, they would be expected to cover the
cost of the correction.
In a commercial project, execution of the entire test suite can take several weeks
to complete. Therefore, in order to optimize the turnaround time, test failures are
usually reported to the development team as and when they are detected, rather
than only after the entire test suite has been run.
Debugging : For each failure observed during testing, debugging is carried out to identify
the statements that are in error. There are several debugging strategies, but essentially in
each the failure symptoms are analysed to locate the errors.
Error Correction : After an error is located through a debugging activity, the code is
appropriately changed to correct the error.
Defect Retesting : Once a defect has been dealt with by the development team, the
corrected code is retested by the testing team to check whether the defect has
successfully been addressed. Defect retesting is also popularly called resolution testing. The
resolution tests are a subset of the complete test suite (see Figure 5.10).
FIGURE 5.10 Types of test cases in the original test suite after a change
Regression Testing : While resolution testing checks whether the defect has been fixed,
regression testing checks whether the unmodified functionalities still continue to work
correctly. Whenever a defect is corrected and the change is incorporated in the
program code, there is a danger that the change introduced to correct an error could actually
introduce errors in functionalities that were previously working correctly. The regression
tests check whether the unmodified functionalities continue to work correctly. As a
result, after a bug-fixing session, both the resolution and regression test cases need to be
run. This is where the additional effort required to create automated test scripts can pay
off. As shown in Figure 5.10, some test cases may no longer be valid after the change; these
obsolete test cases are identified and removed from the test suite.
Test Closure Once the system successfully passes all the tests, documents related to
lessons learned, test results, logs, etc., are archived for use as a reference in future
projects.
Of all the above-mentioned testing activities, debugging is usually the most time-
consuming activity.
Who performs testing?
A question to be settled at the planning stage is who would carry out testing. Many
organizations have separate system testing groups to provide an independent
assessment of the correctness of software before release.
In other organizations, staff are allocated to a purely testing role but work alongside the
developers rather than in a separate group. While an independent testing group can provide
a final quality check, it has been argued that developers may take less care over their work
if they know of the existence of this safety net.
Test automation
Testing is usually the most time consuming and laborious of all software development
activities. This is especially true for large and complex software products that are being
developed currently.
At present, testing cost often exceeds all other development life-cycle costs. With the
growing size of programs and the increased importance being given to product quality,
test automation is drawing considerable attention from both industry circles and
academia.
Test automation is a generic term for automating one or some activities of the test
process.
Other than reducing human effort and time in this otherwise time and effort-intensive
work, test automation also significantly improves the thoroughness of testing. This is
because more testing can be carried out using a large number of test cases within a short
period of time without any significant cost overhead. The effectiveness of testing, to a
large extent, depends on the exact test case design strategy used.
Considering the large overheads that sophisticated testing techniques incur, in many
industrial projects testing is often carried out using randomly selected test values. With
automation, more sophisticated test case design techniques can be deployed. Without the
support of automation, applying these techniques manually would usually be impractical
because of the effort involved.
A further advantage of using testing tools is that automated test results are much more
reliable and eliminate human errors during testing. Regression testing after every change
or error correction requires running several old test cases. In this situation, test
automation simplifies repeated running of the test cases. Testing tools hold out the
promise of substantial cost and time reduction even in the testing and maintenance
phases.
Every software product undergoes significant change over time. Each time the code
changes, it needs to be tested to check whether the changes induce any failures in the unchanged
features. Thus the originally designed test suite needs to be run repeatedly each time the
code changes. Additional tests have to be designed and carried out on the enhanced
features.
Repeated running of the same set of test cases over and over after every change is monotonous, boring, and error-prone. Automated testing tools can be of considerable use in repeatedly running the same set of test cases: they can entirely, or at least substantially, eliminate the drudgery of running the same test cases and also significantly reduce testing costs.
A large number of tools are at present available, both in the public domain and from commercial sources. These tools can be classified into the following types according to the specific methodology on which they are based.
Capture and Playback
In this type of tool, the test cases are executed manually only once.
During the manual execution, the sequence and values of various inputs as well as the
outputs produced are recorded. On any subsequent occasion, the test can be
automatically replayed and the results are checked against the recorded output.
An important advantage of capture and playback tools is that, once the test data are captured and the results verified, the tests can be rerun several times over easily and cheaply. Thus, these tools are very useful for regression testing. However, capture and playback tools have a few disadvantages as well. Test maintenance can be costly when the unit under test changes, since some of the captured tests may become invalid. It would then require considerable effort to identify and remove the invalid test cases, or to modify the captured test data so that they become valid again.
Automated Test Script
Test scripts are used to drive an automated test tool. The scripts provide input to the unit under test and record the output. Testers employ a variety of languages to express test scripts.
An important advantage of test script-based tools is that, once the test script has been debugged and verified, it can be rerun a large number of times easily and cheaply. However, debugging a test script to ensure its accuracy requires significant effort. Also, every subsequent change to the unit under test entails effort to identify the impacted test scripts and to modify, rerun and reconfirm them.
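As a rough illustration of a script-driven approach (not any particular product), the following Python script feeds a table of inputs to a stand-in unit under test, records the outputs in a log file, and marks each run as PASS or FAIL; the function sqrt_int, the file name and the test values are all invented for the example.

import math

def sqrt_int(n):
    # Hypothetical unit under test: integer square root.
    return math.isqrt(n)

# Each row of the script's data table: (input value, expected output).
test_table = [(0, 0), (1, 1), (15, 3), (16, 4), (17, 4)]

with open("test_log.txt", "w") as log:
    for value, expected in test_table:
        actual = sqrt_int(value)
        verdict = "PASS" if actual == expected else "FAIL"
        log.write(f"{verdict}: sqrt_int({value}) = {actual}, expected {expected}\n")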
Random Input Test
In this type of automatic testing tool, test values are randomly generated to cover the input space of the unit under test. The outputs are ignored, because analysing them would be prohibitively expensive; the goal is usually to crash the unit under test, not to check whether the produced results are correct.
An advantage of random input testing tools is that they are relatively easy to build and use, and this approach can be the most cost-effective way of finding certain types of defects. However, random input testing is a very limited form of testing: it finds only those defects that crash the unit under test, and not the majority of defects, which do not crash the system but simply produce incorrect results.
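A minimal sketch of the idea, with an invented parse_record function standing in for the unit under test, is shown below: inputs are generated at random, the outputs are never checked, and only uncaught exceptions (crashes) are reported.

import random
import string

def parse_record(text):
    # Hypothetical unit under test: expects input of the form "id,name".
    fields = text.split(",")
    return {"id": int(fields[0]), "name": fields[1]}

crashes = {}
for _ in range(10_000):
    # Build a random string of printable characters as the test input.
    payload = "".join(random.choices(string.printable, k=random.randint(0, 40)))
    try:
        parse_record(payload)                  # the result itself is ignored
    except Exception as exc:                   # only crashes are detected
        crashes.setdefault(type(exc).__name__, payload)

print("Exception types provoked:", sorted(crashes))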
Reliability Growth Modelling
The reliability of a software product usually keeps improving with time during the testing and operational phases, as defects are identified and repaired. The growth of reliability over the testing and operational phases can be modelled using a mathematical expression called a Reliability Growth Model (RGM). Thus, an RGM shows how the reliability of a software product improves as failures are reported and bugs are corrected.
A large number of RGMs have been proposed by researchers, based on various failure and bug repair patterns. A few popular reliability growth models are the Jelinski-Moranda model, the Littlewood-Verrall model, and the Goel-Okumoto model. For a given development project, a suitable RGM can be used to predict when (or whether at all) a particular level of reliability is likely to be attained. Thus, reliability growth modelling can be used to determine when during the testing phase a given reliability level will be attained, so that testing can be stopped.
We can summarize the main reasons that make software reliability more difficult to
measure than hardware reliability:
The reliability improvement due to fixing a single bug depends on where the bug
is located in the code.
The perceived reliability of a software product is observer-dependent.
The reliability of a product keeps changing as errors are detected and fixed.
A fundamental issue that sets the study of software reliability apart from that of hardware reliability is the difference in the manner in which the two fail. Hardware components fail mostly due to wear and tear, whereas software components fail due to the presence of bugs.
As an example of a hardware failure, consider an electronic circuit in which a logic gate may be stuck at 1 or 0, or a resistor may short circuit. To fix a hardware fault, one has to either replace or repair the failed part. In contrast, a software product continues to fail until the underlying bug is tracked down and the design or the code is corrected.
For this reason, when a hardware part is repaired its reliability is maintained at the level that existed before the failure occurred; whereas when a software failure is repaired, the reliability may either increase or decrease (reliability may decrease if the bug fix introduces new errors). To put this fact in a different perspective, hardware reliability study is concerned with stability (e.g. the inter-failure times remaining constant), whereas the aim of software reliability study is reliability growth (i.e. an increase in inter-failure times).
A comparison of the changes in failure rate over the product lifetime for a typical hardware product and a software product is sketched in Figure 5.11. Observe that the plot of failure rate versus time for a hardware component [Figure 5.11(a)] appears like a 'bath tub'. As shown in Figure 5.11(a), for a hardware system the failure rate is initially high, but decreases as the faulty components are identified and are either repaired or replaced.
Figure 5.11: Reliability growth with time for hardware and software products
Reliability Metrics
Mean Time to Failure (MTTF)
MTTF is the time between two successive failures, averaged over a large number of failures. To measure MTTF, we can record the failure data for n failures. Let the observed times between successive failures be t1, t2, ..., tn. Then, MTTF can be computed as (t1 + t2 + ... + tn)/n. It is important to note that only run time is considered in the time measurements; the time for which the system is down to fix an error, the boot time, etc., are not taken into account, and the clock is stopped during these periods.
Mean Time to Repair (MTTR)
Once a failure occurs, some time is required to fix the error. MTTR measures the average time it takes to track down the errors causing the failure and to fix them.
Mean Time between Failure (MTBF)
The MTTF and MTTR metrics can be combined to get the MTBF metric:
MTBF = MTTF + MTTR. Thus, an MTBF of 300 hours indicates that, once a failure occurs, the next failure is expected after 300 hours. In this case, the time measurements are real time, not the execution time used for MTTF.
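A small numerical sketch of these three metrics, using invented failure data, is given below; the inter-failure times are measured in run-time hours, as required for MTTF, and the repair times in real hours.

# Hypothetical run-time gaps (hours) between successive failures, and the
# repair time (hours) needed after each failure. All numbers are invented.
inter_failure_times = [90.0, 120.0, 60.0, 150.0]
repair_times = [4.0, 6.0, 3.0, 7.0]

mttf = sum(inter_failure_times) / len(inter_failure_times)   # average time between failures
mttr = sum(repair_times) / len(repair_times)                 # average time to track down and fix
mtbf = mttf + mttr                                           # MTBF = MTTF + MTTR, as above

print(f"MTTF = {mttf:.1f} h, MTTR = {mttr:.1f} h, MTBF = {mtbf:.1f} h")
# MTTF = 105.0 h, MTTR = 5.0 h, MTBF = 110.0 h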
Probability of Failure on Demand (POFOD)
Unlike the other metrics discussed, this metric does not explicitly involve time measurements. POFOD measures the likelihood of the system failing when a service request is made. For example, a POFOD of 0.001 would mean that 1 out of every 1000 service requests results in a failure. We have already mentioned that the reliability of a software product should be determined through specific service invocations, rather than by making the software run continuously. Thus, the POFOD metric is very appropriate for software products that are not required to run continuously.
Availability
Availability of a system is a measure of how likely the system is to be available for use over a given period of time. This metric not only considers the number of failures occurring during a time interval, but also takes into account the repair time (down time) of the system when a failure occurs.
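Both of these metrics reduce to simple ratios, as the following sketch with invented observation data shows.

# Invented observation data, for illustration only.
service_requests = 20_000     # number of service invocations observed
failed_requests = 20          # invocations that ended in a failure
pofod = failed_requests / service_requests
print(f"POFOD = {pofod}")     # 0.001, i.e. 1 failure in every 1000 requests

up_time = 970.0               # hours the system was available for use
down_time = 30.0              # hours lost to repairs after failures
availability = up_time / (up_time + down_time)
print(f"Availability = {availability:.2%}")    # 97.00%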
Failures that are transient and whose consequences are not serious are, in practice, of little concern in the operational use of a software product; at worst, they are minor irritants. More severe types of failures may render the system totally unusable. In order to estimate the reliability of a software product more accurately, it is necessary to classify the various types of failures.
In the following, we give a simple classification of software failures into different types.
Transient. Transient failures occur only for certain input values while invoking a
function of the system.
Permanent. Permanent failures occur for all input values while invoking a function of the
system.
Recoverable. When a recoverable failure occurs, the system can recover without having
to shut down and restart the system (with or without operator intervention).
Unrecoverable. In unrecoverable failures, the system may need to be restarted.
Cosmetic. This class of failures causes only minor irritations and does not lead to incorrect results. An example of a cosmetic failure is the situation where a mouse button has to be clicked twice instead of once to invoke a given function through the graphical user interface.
Jelinski-Moranda Model
The simplest reliability growth model is a step function model, in which it is assumed that the reliability increases by a constant increment each time an error is detected and repaired. Perfect error fixing is therefore implicit in this model. Another implicit assumption is that all errors contribute equally to reliability growth.
The instantaneous failure rate (or hazard rate) in this model is given by Z(t) = K(N - i), where K is a constant, N is the total number of errors initially present in the program, and t is any time between the ith and the (i+1)th failure.
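The step-like decrease of the hazard rate can be seen from a small numerical sketch; the values of K and N below are assumed purely for illustration.

K = 0.005    # assumed proportionality constant (failures per hour per remaining error)
N = 40       # assumed total number of errors initially present in the program

for i in range(5):
    z = K * (N - i)    # hazard rate Z(t) between the ith and (i+1)th failure
    print(f"after {i} repairs: Z(t) = {z:.3f} failures/hour")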
Littlewood-Verrall Model
This model allows for negative reliability growth, to reflect the fact that when a repair is carried out it may introduce additional errors. It also models the fact that, as errors are repaired, the average improvement to the product reliability per repair decreases. It treats an error's contribution to reliability improvement as an independent random variable having a Gamma distribution. This distribution models the fact that errors whose correction contributes most to reliability growth are removed first, representing the diminishing returns obtained as testing continues.
Goel-Okumoto Model
In this model, it is assumed that the execution times between failures are exponentially distributed. The cumulative number of failures at any time t can therefore be given in terms of µ(t), the expected number of failures observed by time t.
It is assumed that the failure process follows a Non-Homogeneous Poisson Process (NHPP); that is, the expected number of error occurrences in any interval from t to t + Δt is proportional to the expected number of undetected errors remaining at time t. Once a failure has been detected, it is assumed that the error correction is perfect and immediate. The expected number of failures over time is shown in Figure 5.13 and is given by µ(t) = N(1 - e^(-bt)), where N is the expected total number of errors in the program and b is a constant.
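The mean value function can be used directly to estimate how long testing must continue before a desired fraction of the defects has been exposed. The following sketch assumes values for N and b; in practice both parameters are estimated from the observed failure data.

import math

N = 120.0    # assumed expected total number of defects in the program
b = 0.02     # assumed per-hour defect detection rate constant

def mu(t):
    # Expected cumulative number of failures observed by test time t (hours).
    return N * (1.0 - math.exp(-b * t))

for t in (50, 100, 200, 400):
    print(f"t = {t:3d} h : expected failures = {mu(t):6.1f}")

# Testing time needed for 95% of the defects to surface: solve 1 - e^(-b*t) = 0.95.
t_95 = -math.log(1.0 - 0.95) / b
print(f"About {t_95:.0f} hours of testing are needed to expose 95% of the defects")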
Quality Plans
Some organizations produce quality plans for each project. These show how the standard quality procedures and standards laid down in the organization's quality manual will actually be applied to the project. If an approach to planning such as Step Wise has been followed, quality-related activities and requirements will already have been identified by the main planning process, with no need for a separate quality plan. However, where software is being produced for an external client, the client's quality assurance staff might require that a quality plan be produced to ensure the quality of the delivered products. A quality plan can be seen as a checklist confirming that all quality issues have been dealt with by the planning process. Thus, most of its content will be references to other documents.
A quality plan might have entries for the following (this contents list is based on a draft IEEE standard for software quality assurance plans):
Purpose and scope of the plan
List of references to other documents
Questions :
----------------**************---------------------