
Module 3

AGILE DEVELOPMENT
Introduction

Agile software engineering combines a philosophy and a set of development guidelines. The philosophy encourages customer satisfaction and early incremental delivery of software; small, highly motivated project teams; informal methods; minimal software engineering work products; and overall development simplicity. The development guidelines stress delivery over analysis and design (although these activities are not discouraged), and active and continuous communication between developers and customers.

3.1 WHAT IS AGILITY?

Definition:
Agility is the characteristic of being dynamic, context specific, aggressively change embracing, and growth oriented.

Agility has become today’s buzzword when describing a modern software process. Everyone is agile. An agile team is a nimble team able to appropriately respond to changes. Change is what software development is very much about: changes in the software being built, changes to the team members, changes because of new technology, changes of all kinds that may have an impact on the product being built or the project that creates it. Support for change should be built into everything we do in software.

An agile team recognizes that software is developed by individuals working in teams and that the skills of these people and their ability to collaborate are at the core of the project's success.

All such changes can be represented as shown in the diagram below, which reflects Ivar Jacobson's view of agility in software.

Agility – Software Engineering

Note: Ivar Hjalmar Jacobson (born 1939) is a Swedish computer scientist and software engineer, known as a major contributor to UML, Objectory, the Rational Unified Process (RUP), aspect-oriented software development, and Essence.

In Jacobson’s view, the pervasiveness of change is the primary driver for agility. Software engineers must be quick on their feet if they are to accommodate the rapid changes that Jacobson describes. But agility is more than an effective response to change. It encourages team structures and attitudes that make communication (among team members, between technologists and business people, between software engineers and their managers) more facile.

It emphasizes rapid delivery of operational software and de-emphasizes the importance of
intermediate work products (not always a good thing); it adopts the customer as a part of the
development team and works to eliminate the “us and them” attitude that continues to pervade many
software projects; it recognizes that planning in an uncertain world has its limits and that a project plan
must be flexible.

Agility can be applied to any software process. To accomplish this, it is essential that the process be designed in a way that allows the project team to adapt tasks and to streamline them; to conduct planning in a way that understands the fluidity of an agile development approach; to eliminate all but the most essential work products and keep them lean; and to emphasize an incremental delivery strategy that gets working software to the customer as rapidly as feasible for the product type and operational environment.

3.2 AGILITY AND THE COST OF CHANGE

The conventional wisdom in software development (supported by decades of experience) is that the cost
of change increases nonlinearly as a project progresses (Figure 3.1, solid black curve). It is relatively
easy to accommodate a change when a software team is gathering requirements (early in a project).

FIGURE 3.1: Change costs as a function of time in development

A usage scenario might have to be modified, a list of functions may be extended, or a written
specification can be edited. The costs of doing this work are minimal, and the time required will not
adversely affect the outcome of the project.

Now consider that the team is in the middle of validation testing (something that occurs relatively late in the project), and an important stakeholder requests a major functional change. The change requires a modification to the architectural design of the software, the design and construction of three new components, modifications to another five components, the design of new tests, and so on.



Costs escalate quickly, and the time and cost required to ensure that the change is made without unintended side effects are nontrivial. Proponents of agility argue that a well-designed agile process allows a software team to accommodate such changes late in a software project without dramatic cost and time impact.

The agile process encompasses incremental delivery. When incremental delivery is coupled with other agile practices, such as continuous unit testing and pair programming, the cost of making a change is attenuated.

3.3 WHAT IS AN AGILE PROCESS?

Definition:
An agile process is one that is adaptable to changing requirements, delivers software incrementally, and copes with unpredictability. Such processes are based on three assumptions, all of which refer to unpredictability at different stages of software development: unpredictability in requirements, in analysis and design, and in construction. Agile processes are therefore adaptable at all stages of the SDLC.

Any agile software process is characterized in a manner that addresses a number of key assumptions
about the majority of software projects:

1. It is difficult to predict in advance which software requirements will persist and which will change. It
is equally difficult to predict how customer priorities will change as the project proceeds.

2. For many types of software, design and construction are interleaved. That is, both activities should be
performed in tandem so that design models are proven as they are created. It is difficult to predict how
much design is necessary before construction is used to prove the design.

3. Analysis, design, construction, and testing are not as predictable (from a planning point of view) as we might like.

Given these three assumptions, an important question arises: How do we create a process that can manage unpredictability?

The answer lies in process adaptability (to rapidly changing project and technical conditions).

An agile process, therefore, must be adaptable. But continual adaptation without forward progress accomplishes little. Therefore, an agile software process must adapt incrementally. To accomplish incremental adaptation, an agile team requires customer feedback (so that the appropriate adaptations can be made).

An effective catalyst for customer feedback is an operational prototype or a portion of an operational system. Hence, an incremental development strategy should be instituted.

Software increments (executable prototypes or portions of an operational system) must be delivered in short time periods so that adaptation keeps pace with change (unpredictability). This iterative approach enables the customer to evaluate the software increment regularly, provide necessary feedback to the software team, and influence the process adaptations that are made to accommodate the feedback.

3.3.1 Agility Principles

The Agile Alliance defines 12 agility principles for those who want to achieve agility:



1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable
software.
2. Welcome changing requirements, even late in development. Agile processes harness change for the
customer’s competitive advantage.
3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference
to the shorter timescale.
4. Business people and developers must work together daily throughout the project.
5. Build projects around motivated individuals. Give them the environment and support they need, and
trust them to get the job done.
6. The most efficient and effective method of conveying information to and within a development team is
face-to-face conversation.
7. Working software is the primary measure of progress.
8. Agile processes promote sustainable development. The sponsors, developers, and users should be
able to maintain a constant pace indefinitely.
9. Continuous attention to technical excellence and good design enhances agility.
10. Simplicity—the art of maximizing the amount of work not done—is essential.
11. The best architectures, requirements, and designs emerge from self-organizing teams.
12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its
behavior accordingly.

3.3.2 The Politics of Agile Development

Jim Highsmith states the extremes when he characterizes the feeling of the pro-agility camp (“agilists”).
“Traditional methodologists are a bunch of stick-in-the-muds who’d rather produce flawless
documentation than a working system that meets business needs.”
Note: James A. Highsmith (born 1945) is an American software engineer and author of books in the field of software development methodology. He is the creator of Adaptive Software Development, described in his 1999 book "Adaptive Software Development", and winner of the 2000 Jolt Award and the Stevens Award in 2005. Highsmith was one of the 17 original signatories of the Agile Manifesto, the founding document for agile software development.

There are no absolute answers in this debate. Even within the agile school itself, there are many proposed process models (Section 3.4), each with a subtly different approach to the agility problem.

Within each model there is a set of “ideas” (agilists are loath to call them “work tasks”) that represent a
significant departure from traditional software engineering. And yet, many agile concepts are simply
adaptations of good software engineering concepts. Bottom line: there is much that can be gained by
considering the best of both schools and virtually nothing to be gained by denigrating either approach.

3.3.3 Human Factors

Proponents of agile software development take great pains to emphasize the importance of “people
factors.” As Cockburn and Highsmith state, “Agile development focuses on the talents and skills of
individuals, molding the process to specific people and teams.”

The key point in this statement is that the process molds to the needs of the people and team, not the other
way around.



If members of the software team are to drive the characteristics of the process that is applied to build
software, a number of key traits must exist among the people on an agile team and the team itself:

Competence. In an agile development (as well as software engineering) context, “competence” encompasses innate talent, specific software-related skills, and overall knowledge of the process that the team has chosen to apply. Skill and knowledge of process can and should be taught to all people who serve as agile team members.

Common focus. Although members of the agile team may perform different tasks and bring different
skills to the project, all should be focused on one goal—to deliver a working software increment to the
customer within the time promised.

To achieve this goal, the team will also focus on continual adaptations (small and large) that will make
the process fit the needs of the team.

Collaboration. Software engineering (regardless of process) is about assessing, analyzing, and using
information that is communicated to the software team; creating information that will help all
stakeholders understand the work of the team; and building information (computer software and
relevant databases) that provides business value for the customer. To accomplish these tasks, team
members must collaborate—with one another and all other stakeholders.

Decision-making ability. Any good software team (including agile teams) must be allowed the freedom
to control its own destiny. This implies that the team is given autonomy—decision-making authority for
both technical and project issues.

Fuzzy problem-solving ability. Software managers must recognize that the agile team will continually
have to deal with ambiguity and will continually be buffeted by change. In some cases, the team must
accept the fact that the problem they are solving today may not be the problem that needs to be solved
tomorrow.
Lessons learned from any problem-solving activity (including those that solve the wrong problem) may
be of benefit to the team later in the project.

Mutual trust and respect. The agile team must become what DeMarco and Lister call a “jelled” team. A jelled team exhibits the trust and respect that are necessary to make it “so strongly knit that the whole is greater than the sum of the parts.”

Self-organization. In the context of agile development, self-organization implies three things:
(1) the agile team organizes itself for the work to be done,
(2) the team organizes the process to best accommodate its local environment, and
(3) the team organizes the work schedule to best achieve delivery of the software increment.
Self-organization has a number of technical benefits, but more importantly, it serves to improve
collaboration and boost team morale. In essence, the team serves as its own management.

Ken Schwaber addresses these issues when he writes: “The team selects how much work it believes it
can perform within the iteration, and the team commits to the work. Nothing demotivates a team as
much as someone else making commitments for it. Nothing motivates a team as much as accepting the
responsibility for fulfilling commitments that it made itself.”

3.4 EXTREME PROGRAMMING (XP)



Extreme Programming (XP) is the most widely used approach to agile software development. Although early work on the ideas and methods associated with XP occurred during the late 1980s, the seminal work on the subject was written by Kent Beck. More recently, a variant of XP, called Industrial XP (IXP), has been proposed. IXP refines XP and targets the agile process specifically for use within large organizations.

3.4.1 XP Values

Beck defines a set of five values that establish a foundation for all work performed as part of XP—
communication, simplicity, feedback, courage, and respect. Each of these values is used as a driver for
specific XP activities, actions, and tasks.

In order to achieve effective communication between software engineers and other stakeholders (e.g., to
establish required features and functions for the software), XP emphasizes close, yet informal (verbal)
collaboration between customers and developers, the establishment of effective metaphors for
communicating important concepts, continuous feedback, and the avoidance of voluminous
documentation as a communication medium.

To achieve simplicity, XP restricts developers to design only for immediate needs, rather than for future needs. The intent is to create a simple design that can be easily implemented in code. If the design must be improved, it can be refactored at a later time.

Feedback is derived from three sources: the implemented software itself, the customer, and other software team members.

By designing and implementing an effective testing strategy, the software (via test results) provides the agile team with feedback. XP makes use of the unit test as its primary testing tactic. As each class is developed, the team develops a unit test to exercise each operation according to its specified functionality.

As an increment is delivered to a customer, the user stories or use cases that are implemented by the increment are used as a basis for acceptance tests. The degree to which the software implements the output, function, and behavior of the use case is a form of feedback.

Finally, as new requirements are derived as part of iterative planning, the team provides the customer
with rapid feedback regarding cost and schedule impact.

Beck argues that strict adherence to certain XP practices demands courage. A better word might be
discipline.
For example, there is often significant pressure to design for future requirements. Most software teams
succumb, arguing that “designing for tomorrow” will save time and effort in the long run.

An agile XP team must have the discipline (courage) to design for today, recognizing that future
requirements may change dramatically, thereby demanding substantial rework of the design and
implemented code.



By following each of these values, the agile team inculcates respect among its members, between other stakeholders and team members, and indirectly, for the software itself. As the team achieves successful delivery of software increments, it develops growing respect for the XP process.

3.4.2 The XP Process

Extreme Programming uses an object-oriented approach as its preferred development paradigm and
encompasses a set of rules and practices that occur within the context of four framework activities:
planning, design, coding, and testing.

Figure 3.2 illustrates the XP process and notes some of the key ideas and tasks that are associated with
each framework activity. Key XP activities are summarized in the paragraphs that follow.

Figure 3.2 : The Extreme programming process

Planning: The planning activity (also called the planning game) begins with listening—a requirements gathering activity that enables the technical members of the XP team to understand the business context for the software and to get a broad feel for required output and major features and functionality. Listening leads to the creation of a set of “stories” (also called user stories) that describe required output, features, and functionality for the software to be built. Each story (similar to a use case) is written by the customer and is placed on an index card.

The customer assigns a value (i.e., a priority) to the story based on the overall business value of the
feature or function.

Members of the XP team then assess each story and assign a cost—measured in development weeks—to
it. If the story is estimated to require more than three development weeks, the customer is asked to split
the story into smaller stories and the assignment of value and cost occurs again.

Customers and developers work together to decide how to group stories into the next release (the next software increment) to be developed by the XP team. Once a basic commitment (agreement on stories to be included, delivery date, and other project matters) is made for a release, the XP team orders the stories that will be developed in one of three ways:



(1) all stories will be implemented immediately (within a few weeks),
(2) the stories with highest value will be moved up in the schedule and implemented first, or
(3) the riskiest stories will be moved up in the schedule and implemented first.

After the first project release (also called a software increment) has been delivered, the XP team computes project velocity. Stated simply, project velocity is the number of customer stories implemented during the first release.

Project velocity can then be used to

(1) help estimate delivery dates and schedule for subsequent releases and
(2) determine whether an overcommitment has been made for all stories across the entire development
project. If an overcommitment occurs, the content of releases is modified or end delivery dates are
changed.

As development work proceeds, the customer can add stories, change the value of an existing story, split
stories, or eliminate them. The XP team then reconsiders all remaining releases and modifies its plans
accordingly.
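To make the planning-game arithmetic concrete, the sketch below (in Python, with entirely hypothetical story names, values, and costs) applies the split rule for oversized stories, computes project velocity from a delivered release, and uses the velocity to estimate how many further releases the remaining stories need.

    # Illustrative sketch of XP planning-game arithmetic (all data hypothetical).
    from dataclasses import dataclass

    @dataclass
    class Story:
        name: str
        value: int         # customer-assigned priority (higher = more valuable)
        cost_weeks: float  # team-assigned cost in development weeks

    backlog = [
        Story("add product to cart", value=9, cost_weeks=2),
        Story("display product specs", value=6, cost_weeks=1),
        Story("store shipping info", value=8, cost_weeks=2),
        Story("order history report", value=4, cost_weeks=3),
    ]

    # XP rule of thumb: a story costing more than three weeks should be split.
    oversized = [s for s in backlog if s.cost_weeks > 3]
    assert not oversized, f"ask the customer to split: {oversized}"

    # Project velocity = number of stories implemented during the first release.
    stories_done_in_release_1 = 2
    velocity = stories_done_in_release_1

    remaining = len(backlog) - stories_done_in_release_1
    releases_needed = -(-remaining // velocity)  # ceiling division
    print(f"velocity={velocity} stories/release, ~{releases_needed} more releases")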

Design: XP design rigorously follows the KIS (keep it simple) principle. A simple design is always preferred over a more complex representation. In addition, the design provides implementation guidance for a story as it is written—nothing less, nothing more. The design of extra functionality (because the developer assumes it will be required later) is discouraged.

XP encourages the use of CRC cards as an effective mechanism for thinking about the software in an
object-oriented context.
CRC (class-responsibility collaborator) cards identify and organize the object-oriented classes that are
relevant to the current software increment. The CRC cards are the only design work product produced
as part of the XP process.
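A CRC card is an index card, not code, but its structure is easy to picture as a simple record. The ShoppingCart class, its responsibilities, and its collaborators below are hypothetical examples, not part of the XP definition.

    # A CRC (class-responsibility-collaborator) card sketched as plain data.
    # The ShoppingCart example is hypothetical.
    crc_card = {
        "class": "ShoppingCart",
        "responsibilities": [
            "hold line items for the current session",
            "compute the order subtotal",
        ],
        "collaborators": ["Product", "Customer"],
    }

    for field, entries in crc_card.items():
        print(field, ":", entries)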

If a difficult design problem is encountered as part of the design of a story, XP recommends the immediate creation of an operational prototype of that portion of the design. Called a spike solution, the design prototype is implemented and evaluated. The intent is to lower risk when true implementation starts and to validate the original estimates for the story containing the design problem.
As noted, XP encourages refactoring—a construction technique that is also a method for design optimization. Fowler describes refactoring in the following manner:

Refactoring is the process of changing a software system in such a way that it does not alter the
external behavior of the code yet improves the internal structure.
It is a disciplined way to clean up code [and modify/simplify the internal design] that minimizes the
chances of introducing bugs. In essence, when you refactor you are improving the design of the code
after it has been written.
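A minimal before-and-after sketch of refactoring, using an invented discount calculation: the external behavior (the price returned) is unchanged, while the internal structure improves by naming a duplicated magic number.

    # Before: a magic number obscures the intent (hypothetical example).
    def price_before(quantity, unit_price):
        if quantity >= 10:
            return quantity * unit_price * 0.9  # what does 0.9 mean?
        return quantity * unit_price

    # After refactoring: same external behavior, clearer internal structure.
    BULK_THRESHOLD = 10
    BULK_DISCOUNT = 0.10

    def price_after(quantity, unit_price):
        total = quantity * unit_price
        if quantity >= BULK_THRESHOLD:
            total *= 1 - BULK_DISCOUNT
        return total

    # Behavior is preserved; a unit test pins it down.
    assert price_before(12, 5.0) == price_after(12, 5.0)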



XP design uses virtually no notation and produces few, if any, work products other than CRC cards and spike solutions. Design is viewed as a transient artifact that can and should be continually modified as construction proceeds.
 The intent of refactoring is to control these modifications by suggesting small design changes that “can radically improve the design”. It should be noted, however, that the effort required for refactoring can grow dramatically as the size of an application grows.

 A central notion in XP is that design occurs both before and after coding commences.
 Refactoring means that design occurs continuously as the system is constructed. The construction activity itself will provide the XP team with guidance on how to improve the design.

Coding: After stories are developed and preliminary design work is done, the team does not move to
code, but rather develops a series of unit tests that will exercise each of the stories that is to be included
in the current release (software increment).

 Once the unit test has been created, the developer is better able to focus on what must be
implemented to pass the test. Nothing extraneous is added (KIS).

 Once the code is complete, it can be unit-tested immediately, thereby providing instantaneous
feedback to the developers.

 A key concept during the coding activity (and one of the most talked about aspects of XP) is pair
programming.

 XP recommends that two people work together at one computer workstation to create code for a story. This provides a mechanism for real-time problem solving (two heads are often better than one) and real-time quality assurance (the code is reviewed as it is created). It also keeps the developers focused on the problem at hand. In practice, each person takes on a slightly different role.

For example, one person might think about the coding details of a particular portion of the design while
the other ensures that coding standards (a required part of XP) are being followed or that the code for
the story will satisfy the unit test that has been developed to validate the code against the story.

As pair programmers complete their work, the code they develop is integrated with the work of others.
In some cases this is performed on a daily basis by an integration team.

In other cases, the pair programmers have integration responsibility. This “continuous integration”
strategy helps to avoid compatibility and interfacing problems and provides a “smoke testing”
environment that helps to uncover errors early.
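The continuous integration idea can be sketched as a small script that runs the automated unit-test suite as a smoke test before a pair's integrated work is accepted. This is a minimal sketch under assumptions: the test directory name and the use of Python's unittest discovery are illustrative, not a toolchain prescribed by XP.

    # Minimal continuous-integration smoke test (illustrative; paths are assumptions).
    import subprocess
    import sys

    def smoke_test() -> bool:
        """Run the automated unit-test suite against the integrated code line."""
        result = subprocess.run(
            [sys.executable, "-m", "unittest", "discover", "-s", "tests"],
            capture_output=True,
            text=True,
        )
        print(result.stdout or result.stderr)
        return result.returncode == 0

    if __name__ == "__main__":
        # Integrate only when the whole suite passes, so errors surface early.
        sys.exit(0 if smoke_test() else 1)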

Testing: Note that the creation of unit tests before coding commences is a key element of the XP approach. The unit tests that are created should be implemented using a framework that enables them to be automated (hence, they can be executed easily and repeatedly). This encourages a regression testing strategy whenever code is modified (which is often, given the XP refactoring philosophy).
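As a minimal sketch of this test-first, automated approach using Python's standard unittest framework: the test pins down a (hypothetical) story's behavior, just enough code is written to pass it, and because the suite is automated it can be rerun as a regression check after every change.

    # Test-first sketch for a hypothetical "compute cart subtotal" story.
    import unittest

    class ShoppingCart:
        """Just enough implementation to make the test below pass (KIS)."""
        def __init__(self):
            self.items = []  # list of (unit_price, quantity) pairs

        def add(self, unit_price, quantity):
            self.items.append((unit_price, quantity))

        def subtotal(self):
            return sum(price * qty for price, qty in self.items)

    class TestShoppingCart(unittest.TestCase):
        def test_subtotal_sums_all_line_items(self):
            cart = ShoppingCart()
            cart.add(unit_price=2.50, quantity=4)  # 10.00
            cart.add(unit_price=1.00, quantity=3)  # 3.00
            self.assertAlmostEqual(cart.subtotal(), 13.00)

    if __name__ == "__main__":
        unittest.main()  # automated, so it can run on every change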



As the individual unit tests are organized into a “universal testing suite”, integration and validation testing of the system can occur on a daily basis. This provides the XP team with a continual indication of progress and also can raise warning flags early if things go awry.

Wells states: “Fixing small problems every few hours takes less time than fixing huge problems just
before the deadline.”

XP acceptance tests, also called customer tests, are specified by the customer and focus on overall
system features and functionality that are visible and reviewable by the customer.
Acceptance tests are derived from user stories that have been implemented as part of a software release.

3.4.3 Industrial XP

Joshua Kerievsky describes Industrial Extreme Programming (IXP) in the following manner:
“IXP is an organic evolution of XP. It is imbued with XP’s minimalist, customer-centric, test-driven spirit.
IXP differs most from the original XP in its greater inclusion of management, its expanded role for
customers, and its upgraded technical practices.”

IXP incorporates six new practices that are designed to help ensure that an XP project works
successfully for significant projects within a large organization.

Readiness assessment: Prior to the initiation of an IXP project, the organization should conduct a readiness assessment. The assessment ascertains whether
(1) an appropriate development environment exists to support IXP,
(2) the team will be populated by the proper set of stakeholders,
(3) the organization has a distinct quality program and supports continuous improvement,
(4) the organizational culture will support the new values of an agile team, and
(5) the broader project community will be populated appropriately.

Project community: Classic XP suggests that the right people be used to populate the agile team to
ensure success. The implication is that people on the team must be well-trained, adaptable and skilled,
and have the proper temperament to contribute to a self-organizing team.
When XP is to be applied for a significant project in a large organization, the concept of the “team”
should morph into that of a community.

A community may have technologists and customers who are central to the success of a project, as well as many other stakeholders (e.g., legal staff, quality auditors, manufacturing or sales types) who “are often at the periphery of an IXP project yet they may play important roles on the project”.

In IXP, the community members and their roles should be explicitly defined and mechanisms for
communication and coordination between community members should be established.

Project chartering: The IXP team assesses the project itself to determine whether an appropriate
business justification for the project exists and whether the project will further the overall goals and
objectives of the organization.



Chartering also examines the context of the project to determine how it complements, extends, or
replaces existing systems or processes.

Test-driven management: An IXP project requires measurable criteria for assessing the state of the project and the progress that has been made to date. Test-driven management establishes a series of measurable “destinations” and then defines mechanisms for determining whether or not these destinations have been reached.

Retrospectives: An IXP team conducts a specialized technical review after a software increment is
delivered. Called a retrospective, the review examines “issues, events, and lessons-learned” across a
software increment and/or the entire software release. The intent is to improve the IXP process.

Continuous learning: Because learning is a vital part of continuous process improvement, members of the XP team are encouraged (and possibly given incentives) to learn new methods and techniques that can lead to a higher-quality product.

IXP modifies a number of existing XP practices.

Story-driven development (SDD) insists that stories for acceptance tests be written before a single
line of code is generated.

Domain-driven design (DDD) is an improvement on the “system metaphor” concept used in XP. DDD
suggests the evolutionary creation of a domain model that “accurately represents how domain experts
think about their subject” .

Pairing extends the XP pair programming concept to include managers and other stakeholders. The
intent is to improve knowledge sharing among XP team members who may not be directly involved in
technical development.

Iterative usability discourages front-loaded interface design in favor of usability design that evolves as
software increments are delivered and users’ interaction with the software is studied.

IXP makes smaller modifications to other XP practices and redefines certain roles and responsibilities to
make them more amenable to significant projects for large organizations.

3.4.4 The XP Debate

All new process models and methods spur worthwhile discussion and in some instances heated debate.
Extreme Programming has done both.
Stephens and Rosenberg argue that many XP practices are worthwhile, but others have been overhyped,
and a few are problematic.
The authors suggest that the codependent nature of XP practices is both its strength and its weakness. Because many organizations adopt only a subset of XP practices, they weaken the efficacy of the entire process.
Proponents counter that XP is continuously evolving and that many of the issues raised by critics have
been addressed as XP practice matures. Among the issues that continue to trouble some critics of XP are:



• Requirements volatility: Because the customer is an active member of the XP team, changes to requirements are requested informally. As a consequence, the scope of the project can change and earlier work may have to be modified to accommodate current needs. Proponents argue that this happens regardless of the process that is applied and that XP provides mechanisms for controlling scope creep.

• Conflicting customer needs: Many projects have multiple customers, each with a different set of needs. In XP, the team itself is tasked with assimilating the needs of different customers, a job that may be beyond its scope of authority.

• Requirements are expressed informally: User stories and acceptance tests are the only explicit manifestation of requirements in XP. Critics argue that a more formal model or specification is often needed to ensure that omissions, inconsistencies, and errors are uncovered before the system is built. Proponents counter that the changing nature of requirements makes such models and specifications obsolete almost as soon as they are developed.

• Lack of formal design: XP deemphasizes the need for architectural design and, in many instances, suggests that design of all kinds should be relatively informal. Critics argue that when complex systems are built, design must be emphasized to ensure that the overall structure of the software will exhibit quality and maintainability.

XP proponents suggest that the incremental nature of the XP process limits complexity (simplicity is a core value) and therefore reduces the need for extensive design.
Note: Every software process has flaws, yet many software organizations have used XP successfully. The key is to recognize where a process may have weaknesses and to adapt it to the specific needs of your organization.

3.5 OTHER AGILE PROCESS MODELS

The history of software engineering is littered with dozens of obsolete process descriptions and
methodologies, modeling methods and notations, tools, and technology. Each flared in notoriety and was
then eclipsed by something new and (purportedly) better. With the introduction of a wide array of agile
process models— each contending for acceptance within the software development community—the
agile movement is following the same historical path.

The most widely used of all agile process models is Extreme Programming (XP). But many other agile
process models have been proposed and are in use across the industry. Among the most common are:
• Adaptive Software Development (ASD)
• Scrum
• Dynamic Systems Development Method (DSDM)
• Crystal
• Feature Driven Development (FDD)
• Lean Software Development (LSD)
• Agile Modeling (AM)
• Agile Unified Process (AUP)



It is important to note that all agile process models conform (to a greater or lesser degree) to the Manifesto for Agile Software Development and the principles noted in Section 3.3.1. For additional detail, refer to the references noted in each subsection or, for a survey, examine the “agile software development” entry in Wikipedia.

3.5.1 Adaptive Software Development (ASD)

Adaptive Software Development (ASD) has been proposed by Jim Highsmith as a technique for
building complex software and systems. The philosophical underpinnings of ASD focus on human
collaboration and team self-organization.

Highsmith argues that an agile, adaptive development approach based on collaboration is “as much a
source of order in our complex interactions as discipline and engineering.” He defines an ASD “life cycle”
(Figure 3.3) that incorporates three phases: speculation, collaboration, and learning.

Figure 3.3 :Adaptive software development


During speculation, the project is initiated and adaptive cycle planning is conducted. Adaptive cycle planning uses project initiation information—the customer’s mission statement, project constraints (e.g., delivery dates or user descriptions), and basic requirements—to define the set of release cycles (software increments) that will be required for the project. No matter how complete and farsighted the cycle plan, it will invariably change.

Based on information obtained at the completion of the first cycle, the plan is reviewed and adjusted so
that planned work better fits the reality in which an ASD team is working.

Motivated people use collaboration in a way that multiplies their talent and creative output beyond their
absolute numbers. This approach is a recurring theme in all agile methods. But collaboration is not easy.
It encompasses communication and teamwork, but it also emphasizes individualism, because individual
creativity plays an important role in collaborative thinking. It is, above all, a matter of trust.

People working together must trust one another to
(1) criticize without animosity,
(2) assist without resentment,
(3) work as hard as or harder than they do,
(4) have the skill set to contribute to the work at hand, and
(5) communicate problems or concerns in a way that leads to effective action.

As members of an ASD team begin to develop the components that are part of an adaptive cycle, the
emphasis is on “learning” as much as it is on progress toward a completed cycle.

Highsmith argues that software developers often overestimate their own understanding (of the
technology, the process, and the project) and that learning will help them to improve their level of real
understanding.
ASD teams learn in three ways: focus groups, technical reviews, and project postmortems.

The ASD philosophy has merit regardless of the process model that is used. ASD’s overall emphasis on the dynamics of self-organizing teams, interpersonal collaboration, and individual and team learning yields software project teams that have a much higher likelihood of success.

3.5.2 Scrum

Scrum (the name is derived from an activity that occurs during a rugby match) is an agile software
development method that was conceived by Jeff Sutherland and his development team in the early
1990s.

In recent years, further development on the Scrum methods has been performed by Schwaber and
Beedle.

Scrum principles are consistent with the agile manifesto and are used to guide development activities
within a process that incorporates the following framework activities: requirements, analysis, design,
evolution, and delivery.

Figure 3.4 :Scrum process flow

Within each framework activity, work tasks occur within a process called a sprint. The work conducted within a sprint (the number of sprints required for each framework activity will vary depending on product complexity and size) is adapted to the problem at hand and is defined, and often modified, in real time by the Scrum team. The overall flow of the Scrum process is illustrated in Figure 3.4.

Scrum emphasizes the use of a set of software process patterns that have proven effective for projects
with tight timelines, changing requirements, and business criticality.

Each of these process patterns defines a set of development actions:

Backlog—a prioritized list of project requirements or features that provide business value for the
customer. Items can be added to the backlog at any time (this is how changes are introduced). The
product manager assesses the backlog and updates priorities as required.
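The backlog can be pictured as a priority-ordered structure that accepts new items at any time and always surfaces the highest-value item next. This is an illustrative sketch only; the item names and priority scale are assumptions.

    # Illustrative product backlog: items can arrive at any time, and the
    # owner's priorities decide what the next sprint pulls first.
    import heapq

    class Backlog:
        def __init__(self):
            self._heap = []  # (negated priority, arrival order, item)
            self._counter = 0

        def add(self, item, priority):
            """Add or reintroduce a backlog item; higher priority = pulled sooner."""
            heapq.heappush(self._heap, (-priority, self._counter, item))
            self._counter += 1

        def pull_next(self):
            """Take the highest-priority item into the upcoming sprint."""
            _, _, item = heapq.heappop(self._heap)
            return item

    backlog = Backlog()
    backlog.add("checkout flow", priority=9)
    backlog.add("wish list", priority=3)
    backlog.add("search by category", priority=7)
    print(backlog.pull_next())  # -> "checkout flow"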

Sprints—consist of work units that are required to achieve a requirement defined in the backlog that must be fit into a predefined time-box (typically 30 days). Changes (e.g., backlog work items) are not introduced during the sprint. Hence, the sprint allows team members to work in a short-term, but stable, environment.

Scrum meetings—are short (typically 15 minutes) meetings held daily by the Scrum team.
Three key questions are asked and answered by all team members :
• What did you do since the last team meeting?
• What obstacles are you encountering?
• What do you plan to accomplish by the next team meeting?
A team leader, called a Scrum master, leads the meeting and assesses the responses from each person.
The Scrum meeting helps the team to uncover potential problems as early as possible. Also, these daily
meetings lead to “knowledge socialization” and thereby promote a self-organizing team structure.
Note:
Scrum is a management framework that teams use to self-organize and work toward a common goal. Scrum allows us to develop products of the highest value while making sure that we maintain creativity and productivity. The iterative and incremental approach used in Scrum allows teams to adapt to changing requirements.

Demos—deliver the software increment to the customer so that functionality that has been
implemented can be demonstrated and evaluated by the customer. It is important to note that the demo
may not contain all planned functionality, but rather those functions that can be delivered within the
time-box that was established.
Beedle and his colleagues present a comprehensive discussion of these patterns in which they state:
“Scrum assumes up-front the existence of chaos. . . . ”
The Scrum process patterns enable a software team to work successfully in a world where the
elimination of uncertainty is impossible.


3.5.3 Dynamic Systems Development Method (DSDM)

The Dynamic Systems Development Method (DSDM) is an agile software development approach that
“provides a framework for building and maintaining systems which meet tight time constraints through
the use of incremental prototyping in a controlled project environment” .

The DSDM philosophy is borrowed from a modified version of the Pareto principle—80 percent of an
application can be delivered in 20 percent of the time it would take to deliver the complete (100
percent) application.
DSDM is an iterative software process in which each iteration follows the 80 percent rule. That is, only
enough work is required for each increment to facilitate movement to the next increment. The
remaining detail can be completed later when more business requirements are known or changes have
been requested and accommodated.

The DSDM Consortium (www.dsdm.org) is a worldwide group of member companies that collectively
take on the role of “keeper” of the method.

The consortium has defined an agile process model, called the DSDM life cycle, that defines three different iterative cycles, preceded by two additional life cycle activities:

Feasibility study—establishes the basic business requirements and constraints associated with the
application to be built and then assesses whether the application is a viable candidate for the DSDM
process.

Business study—establishes the functional and information requirements that will allow the
application to provide business value; also, defines the basic application architecture and identifies the
maintainability requirements for the application.

Functional model iteration—produces a set of incremental prototypes that demonstrate functionality for the customer. (Note: All DSDM prototypes are intended to evolve into the deliverable application.) The intent during this iterative cycle is to gather additional requirements by eliciting feedback from users as they exercise the prototype.



Design and build iteration—revisits prototypes built during functional model iteration to ensure that
each has been engineered in a manner that will enable it to provide operational business value for end
users.
In some cases, functional model iteration and design and build iteration occur concurrently.

Implementation—places the latest software increment (an “operationalized” prototype) into the
operational environment.
It should be noted that
(1) the increment may not be 100 percent complete or
(2) changes may be requested as the increment is put into place. In either case, DSDM development
work continues by returning to the functional model iteration activity.

DSDM can be combined with XP to provide a combination approach that defines a solid process model
(the DSDM life cycle) with the nuts and bolts practices (XP) that are required to build software
increments.

In addition, the ASD concepts of collaboration and self-organizing teams can be adapted to a combined
process model.

3.5.4 Crystal

Alistair Cockburn and Jim Highsmith created the Crystal family of agile methods in order to achieve a software development approach that puts a premium on “maneuverability” during what Cockburn characterizes as “a resource-limited, cooperative game of invention and communication, with a primary goal of delivering useful, working software and a secondary goal of setting up for the next game”.

To achieve maneuverability, Cockburn and Highsmith have defined a set of methodologies, each with core elements that are common to all, and roles, process patterns, work products, and practices that are unique to each.

The Crystal family is actually a set of example agile processes that have been proven effective for different types of projects. The intent is to allow agile teams to select the member of the Crystal family that is most appropriate for their project and environment.

3.5.5 Feature Driven Development (FDD)

Feature Driven Development (FDD) was originally conceived by Peter Coad and his colleagues as a
practical process model for object-oriented software engineering.

Stephen Palmer and John Felsing have extended and improved Coad’s work, describing an adaptive,
agile process that can be applied to moderately sized and larger software projects.

FDD adopts a philosophy that
(1) emphasizes collaboration among people on an FDD team,
(2) manages problem and project complexity using feature-based decomposition followed by the integration of software increments, and
(3) communicates technical detail using verbal, graphical, and text-based means.

FDD emphasizes software quality assurance activities by encouraging an incremental development strategy, the use of design and code inspections, the application of software quality assurance audits, the collection of metrics, and the use of patterns (for analysis, design, and construction).
In the context of FDD, a feature “is a client-valued function that can be implemented in two weeks or less”.

The emphasis on the definition of features provides the following benefits:

• Because features are small blocks of deliverable functionality, users can describe them more easily;
understand how they relate to one another more readily; and better review them for ambiguity, error, or
omissions.
• Features can be organized into a hierarchical business-related grouping.
• Since a feature is the FDD deliverable software increment, the team develops operational features
every two weeks.
• Because features are small, their design and code representations are easier to inspect effectively.
• Project planning, scheduling, and tracking are driven by the feature hierarchy, rather than an
arbitrarily adopted software engineering task set.

Coad and his colleagues suggest the following template for defining a feature:
<action> the <result> <by for of to> a(n) <object> where an <object> is “a person, place, or thing
(including roles, moments in time or intervals of time, or catalog-entry-like descriptions).”

Examples of features for an e-commerce application might be:
 Add the product to shopping cart
 Display the technical-specifications of the product
 Store the shipping-information for the customer

A feature set groups related features into business-related categories and is defined as:
<action><-ing> a(n) <object>
For example: Making a product sale is a feature set that would encompass the features noted earlier and others.

The FDD approach defines five “collaborating” framework activities (in FDD these are called “processes”) as shown in Figure 3.5.

Figure 3.5 : Feature driven development



FDD provides greater emphasis on project management guidelines and techniques than many other agile methods. As projects grow in size and complexity, ad hoc project management is often inadequate. It is essential for developers, their managers, and other stakeholders to understand project status—what accomplishments have been made and what problems have been encountered.

If deadline pressure is significant, it is critical to determine if software increments (features) are properly scheduled. To accomplish this, FDD defines six milestones during the design and implementation of a feature: “design walkthrough, design, design inspection, code, code inspection, promote to build”.

3.5.6 Lean Software Development (LSD)

Lean Software Development (LSD) has adapted the principles of lean manufacturing to the world of software engineering. The lean principles that inspire the LSD process can be summarized as eliminate waste, build quality in, create knowledge, defer commitment, deliver fast, respect people, and optimize the whole.
Each of these principles can be adapted to the software process.
For example, eliminate waste within the context of an agile software project can be interpreted to mean:
(1) adding no extraneous features or functions,
(2) assessing the cost and schedule impact of any newly requested requirement,
(3) removing any superfluous process steps,
(4) establishing mechanisms to improve the way team members find information,
(5) ensuring that testing finds as many errors as possible,
(6) reducing the time required to request and get a decision that affects the software or the process that is applied to create it, and
(7) streamlining the manner in which information is transmitted to all stakeholders involved in the process.

3.5.7 Agile Modeling (AM)

There are many situations in which software engineers must build large, business critical systems.
The scope and complexity of such systems must be modeled so that
(1) all constituencies can better understand what needs to be accomplished,
(2) the problem can be partitioned effectively among the people who must solve it, and
(3) quality can be assessed as the system is being engineered and built.
Over the past 30 years, a wide variety of software engineering modeling methods and notations have been proposed for analysis and design (both architectural and component-level). These methods have merit, but they have proven to be difficult to apply and challenging to sustain (over many projects). Part of the problem is the “weight” of these modeling methods.

Analysis and design modeling have substantial benefit for large projects—if for no other reason than
to make these projects intellectually manageable.



At “The Official Agile Modeling Site,” Scott Ambler describes agile modeling (AM) in the following
manner:

 Agile Modeling (AM) is a practice-based methodology for effective modeling and documentation
of software-based systems.
 Agile Modeling (AM) is a collection of values, principles, and practices for modeling software that
can be applied on a software development project in an effective and light-weight manner. Agile
models are more effective than traditional models because they are just barely good, they don’t
have to be perfect.

 Agile modeling adopts all of the values that are consistent with the agile manifesto.
 The agile modeling philosophy recognizes that an agile team must have the courage to make decisions that may cause it to reject a design and refactor.
 The team must also have the humility to recognize that technologists do not have all the answers and that business experts and other stakeholders should be respected and embraced.
AM suggests a wide array of “core” and “supplementary” modeling principles; those that make AM unique are:
Model with a purpose. A developer who uses AM should have a specific goal (e.g., to communicate information to the customer or to help better understand some aspect of the software) in mind before creating the model. Once the goal for the model is identified, the type of notation to be used and the level of detail required will be more obvious.

Use multiple models. There are many different models and notations that can be used to describe software. Only a small subset is essential for most projects. AM suggests that, to provide needed insight, each model should present a different aspect of the system, and only those models that provide value to their intended audience should be used.

Travel light. As software engineering work proceeds, keep only those models that will provide long-term value and jettison the rest. Every work product that is kept must be maintained as changes occur. This represents work that slows the team down. Ambler notes that “Every time you decide to keep a model you trade-off agility for the convenience of having that information available to your team in an abstract manner (hence potentially enhancing communication within your team as well as with project stakeholders).”

Content is more important than representation. Modeling should impart information to its intended audience. A syntactically perfect model that imparts little useful content is not as valuable as a model with flawed notation that nevertheless provides valuable content for its audience.

Know the models and the tools you use to create them. Understand the strengths and weaknesses of
each model and the tools that are used to create it.

Adapt locally. The modeling approach should be adapted to the needs of the agile team.



A major segment of the software engineering community has adopted the Unified Modeling Language
(UML) as the preferred method for representing analysis and design models.

The Unified Process has been developed to provide a framework for the application of UML. Scott
Ambler has developed a simplified version of the UP that integrates his agile modeling philosophy.

3.5.8 Agile Unified Process (AUP)

The Agile Unified Process (AUP) adopts a “serial in the large” and “iterative in the small” philosophy for
building computer-based systems.

By adopting the classic UP phased activities—inception, elaboration, construction, and transition—AUP provides a serial overlay (i.e., a linear sequence of software engineering activities) that enables a team to visualize the overall process flow for a software project. However, within each of the activities, the team iterates to achieve agility and to deliver meaningful software increments to end users as rapidly as possible.

Each AUP iteration addresses the following activities :

• Modeling. UML representations of the business and problem domains are created. However, to stay
agile, these models should be “just barely good enough” to allow the team to proceed.
• Implementation. Models are translated into source code.
• Testing. Like XP, the team designs and executes a series of tests to uncover errors and to ensure that the source code meets its requirements.
• Deployment. Deployment in this context focuses on the delivery of a software increment and the
acquisition of feedback from end users.
• Configuration and project management. In the context of AUP, configuration management
addresses change management, risk management, and the control of any persistent work products that
are produced by the team. Project management tracks and controls the progress of the team
and coordinates team activities.
• Environment management. Environment management coordinates a process infrastructure that
includes standards, tools, and other support technology available to the team.
Although the AUP has historical and technical connections to the Unified Modeling Language, it is important to note that UML modeling can be used in conjunction with any of the agile process models.

3.6 A TOOL SET FOR THE AGILE PROCESS

Some proponents of the agile philosophy argue that automated software tools (e.g., design tools) should
be viewed as a minor supplement to the team’s activities, and not at all pivotal to the success of the team.

Alistair Cockburn suggests that tools can have a benefit and that “agile teams stress using tools that permit the rapid flow of understanding. Some of those tools are social, starting even at the hiring stage. Some tools are technological, helping distributed teams simulate being physically present. Many tools are physical, allowing people to manipulate them in workshops.” Because acquiring the right people (hiring), team collaboration, stakeholder communication, and indirect management are key elements in virtually all agile process models, Cockburn argues that “tools” that address these issues are critical success factors for agility.

For example, a hiring “tool” might be the requirement to have a prospective team member spend a few hours pair programming with an existing member of the team. The “fit” can be assessed immediately. Collaborative and communication “tools” are generally low tech and incorporate any mechanism (“physical proximity, whiteboards, poster sheets, index cards, and sticky notes”) that provides information and coordination among agile developers.

Active communication is achieved via the team dynamics (e.g., pair programming), while passive
communication is achieved by “information radiators” (e.g., a flat panel display that presents the overall
status of different components of an increment).

Project management tools deemphasize the Gantt chart and replace it with earned value charts or “graphs of tests created versus passed . . . other agile tools are used to optimize the environment in which the agile team works (e.g., more efficient meeting areas), improve the team culture by nurturing social interactions (e.g., collocated teams), physical devices (e.g., electronic whiteboards), and process enhancement (e.g., pair programming or time-boxing)”.
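As a small illustration of the “tests created versus passed” radiator mentioned above, the sketch below renders an invented iteration history as a simple text chart; all numbers are made up.

    # Invented iteration history: (iteration, cumulative tests created, tests passing).
    history = [(1, 40, 31), (2, 75, 64), (3, 110, 102)]

    for iteration, created, passing in history:
        bar = "#" * int(20 * passing / created)
        print(f"iter {iteration}: {passing}/{created} tests passing {bar}")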

Module 4

PRINCIPLES THAT GUIDE PRACTICE

In a generic sense, practice is a collection of concepts, principles, methods, and tools that a software
engineer calls upon on a daily basis.
Practice allows managers to manage software projects and software engineers to build computer
programs.
Practice populates a software process model with the necessary technical and management how-to’s to
get the job done.
Practice transforms a haphazard unfocused approach into something that is more organized, more
effective, and more likely to achieve success.

4.1 SOFTWARE ENGINEERING KNOWLEDGE

In an editorial published in IEEE Software a decade ago, Steve McConnell made the following comment:
Many software practitioners think of software engineering knowledge almost exclusively as knowledge
of specific technologies: Java, Perl, html, C, Linux, Windows NT, and so on. Knowledge of specific
technology details is necessary to perform computer programming.

If someone assigns you to write a program in C, you have to know something about C to get your
program to work.



You often hear people say that software development knowledge has a 3-year half-life: half of what you
need to know today will be obsolete within 3 years. In the domain of technology-related knowledge,
that’s probably about right. But there is another kind of software development knowledge—a kind that I
think of as “software
engineering principles”—that does not have a three-year half-life. These software engineering principles
are likely to serve a professional programmer throughout his or her career.

McConnell goes on to argue that the body of software engineering knowledge (circa the year 2000) had
evolved to a “stable core” that he estimated represented about “75 percent of the knowledge needed to
develop a complex system.” But what resides within this stable core?

As McConnell indicates, core principles—the elemental ideas that guide software engineers in the work
that they do—now provide a foundation from which software engineering models, methods, and tools
can be applied and evaluated.

4.2 CORE PRINCIPLES

Software engineering is guided by a collection of core principles that help in the application of a
meaningful software process and the execution of effective software engineering methods. At the
process level, core principles establish a philosophical foundation that guides a software team as it
performs framework and umbrella activities, navigates the process flow, and produces a set of software
engineering work products.

At the level of practice, core principles establish a collection of values and rules that serve as a guide as
you analyze a problem, design a solution, implement and test the solution, and ultimately deploy the
software in the user community.
A set of general principles that span software engineering process and practice:
(1) provide value to end users,
(2) keep it simple,
(3) maintain the vision (of the product and the project),
(4) recognize that others consume (and must understand) what you produce,
(5) be open to the future,
(6) plan ahead for reuse, and
(7) think!
Although these general principles are important, they are characterized at such a high level of
abstraction that they are sometimes difficult to translate into day-to-day software engineering practice.

4.2.1 Principles That Guide Process

The software process can be characterized using the generic process framework that is applicable to all
process models. The following set of core principles can be applied to the framework, and by extension,
to every software process.

Principle 1. Be agile. Whether the process model you choose is prescriptive or agile, the basic tenets of
agile development should govern your approach. Every aspect of the work you do should emphasize
economy of action—keep your technical approach as simple as possible, keep the work products you
produce as concise as possible, and make decisions locally whenever possible.

Principle 2. Focus on quality at every step. The exit condition for every process activity, action, and
task should focus on the quality of the work product that has been produced.

Principle 3. Be ready to adapt. Process is not a religious experience, and dogma has no place in it.
When necessary, adapt your approach to constraints imposed by the problem, the people, and the
project itself.

Principle 4. Build an effective team. Software engineering process and practice are important, but the
bottom line is people. Build a self-organizing team that has mutual trust and respect.

Principle 5. Establish mechanisms for communication and coordination. Projects fail because
important information falls into the cracks and/or stakeholders fail to coordinate their efforts to create a
successful end product. These are management issues, and they must be addressed.

Principle 6. Manage change. The approach may be either formal or informal, but mechanisms must be
established to manage the way changes are requested, assessed, approved, and implemented.

Principle 7. Assess risk. Lots of things can go wrong as software is being developed. It’s essential that
you establish contingency plans.

Principle 8. Create work products that provide value for others. Create only those work products
that provide value for other process activities, actions, or tasks.

Every work product that is produced as part of software engineering practice will be passed on to
someone else. A list of required functions and features will be passed along to the person (people) who
will develop a design, the design will be passed along to those who generate code, and so on. Be sure that
the work product imparts the necessary information without ambiguity or omission.

4.2.2 Principles That Guide Practice

Software engineering practice has a single overriding goal—to deliver on-time, high quality, operational
software that contains functions and features that meet the needs of all stakeholders. To achieve this
goal, you should adopt a set of core principles that guide your technical work. These principles have
merit regardless of the analysis and design methods that you apply, the construction techniques (e.g.,
programming language, automated tools) that you use, or the verification and validation approach that
you choose.

The following set of core principles are fundamental to the practice of software engineering:

Principle 1. Divide and conquer. Stated in a more technical manner, analysis and design should always
emphasize separation of concerns (SoC). A large problem is easier to solve if it is subdivided into a
collection of elements (or concerns). Ideally, each concern delivers distinct functionality that can be
developed, and in some cases validated, independently of other concerns.
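
As a minimal, hypothetical Java sketch of separation of concerns (the names here are illustrative, not
taken from the text), a reporting feature might be split so each concern can be built and validated on its
own:

    import java.util.List;

    // Each interface captures exactly one concern; implementations can be
    // developed and tested independently of one another.
    interface ReportSource {                 // concern: obtaining the data
        List<String> fetchRows();
    }

    interface ReportFormatter {              // concern: presenting the data
        String format(List<String> rows);
    }

    class ReportService {                    // concern: coordinating the work
        private final ReportSource source;
        private final ReportFormatter formatter;

        ReportService(ReportSource source, ReportFormatter formatter) {
            this.source = source;
            this.formatter = formatter;
        }

        String produceReport() {
            return formatter.format(source.fetchRows());
        }
    }

A new data source or output format can then be developed and tested without touching the other
concerns.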

Principle 2. Understand the use of abstraction. At its core, an abstraction is a simplification of some
complex element of a system used to communicate meaning in a single phrase.

In analysis and design work, a software team normally begins with models that represent high levels of
abstraction (e.g., a spreadsheet) and slowly refines those models into lower levels of abstraction (e.g., a
column or the SUM function).

Joel Spolsky suggests that “all non-trivial abstractions, to some degree, are leaky.” The intent of an
abstraction is to eliminate the need to communicate details. But sometimes, problematic effects
precipitated by these details “leak” through. Without an understanding of the details, the cause of a
problem cannot be easily diagnosed.
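
A small, hypothetical Java illustration of both points (the RecordStore name and file path are invented
for this sketch): the interface lets callers say “save a record” in a single phrase, while the catch block
marks where the abstraction can “leak”:

    import java.io.FileWriter;
    import java.io.IOException;

    // The abstraction: callers need only the single phrase "save a record".
    interface RecordStore {
        void save(String record);
    }

    class FileRecordStore implements RecordStore {
        @Override
        public void save(String record) {
            try (FileWriter out = new FileWriter("records.txt", true)) {
                out.write(record + System.lineSeparator());
            } catch (IOException e) {
                // The abstraction leaks here: a full disk or a permissions
                // problem exposes details the caller hoped never to see.
                throw new IllegalStateException("persistence failed", e);
            }
        }
    }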

Principle 3. Strive for consistency. Whether it’s creating a requirements model, developing a software
design, generating source code, or creating test cases, the principle of consistency suggests that a
familiar context makes software easier to use.
As an example, consider the design of a user interface for a WebApp. Consistent placement of menu
options, the use of a consistent color scheme, and the consistent use of recognizable icons all help to
make
the interface ergonomically sound.

Principle 4. Focus on the transfer of information. Software is about information transfer—from a
database to an end user, from a legacy system to a WebApp, from an end user into a graphical user
interface (GUI), from an operating system to an application, from one software component to another—
the list is almost endless.

In every case, information flows across an interface, and as a consequence, there are opportunities for
error, omission, or ambiguity. The implication of this principle is that you must pay special attention
to the analysis, design, construction, and testing of interfaces.

Principle 5. Build software that exhibits effective modularity. Separation of concerns (Principle 1)
establishes a philosophy for software.
Modularity provides a mechanism for realizing the philosophy. Any complex system can be divided into
modules (components), but good software engineering practice demands more.

Modularity must be effective. Each module should focus exclusively on one well-constrained aspect of
the system—it should be cohesive in its function and/or constrained in the content it represents.
Additionally, modules should be interconnected in a relatively simple manner—each module should
exhibit low coupling to other modules, to data sources, and to other environmental aspects.
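
The following hedged Java sketch (invented names) shows one way these two properties look in code:
the module is cohesive (it computes tax and nothing else), and it is coupled to its collaborator only
through a narrow interface:

    // Cohesive: this module does one well-constrained thing (tax calculation).
    interface TaxPolicy {
        double rateFor(String region);
    }

    class TaxCalculator {
        // Low coupling: the dependency is on a small interface,
        // not on a concrete class, a database, or global data.
        private final TaxPolicy policy;

        TaxCalculator(TaxPolicy policy) {
            this.policy = policy;
        }

        double taxOn(double amount, String region) {
            return amount * policy.rateFor(region);
        }
        // A low-cohesion version would also format invoices, write logs,
        // and read configuration files from inside this same class.
    }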

Principle 6. Look for patterns. Brad Appleton suggests that: “The goal of patterns within the software
community is to create a body of literature to help software developers resolve recurring problems
encountered throughout all of software development. Patterns help create a shared language for
communicating insight and experience about these problems and their solutions.

Formally codifying these solutions and their relationships lets us successfully capture the body of
knowledge which defines our understanding of good architectures that meet the needs of their users.”
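
As one widely documented example of such a codified solution, the Observer pattern names a recurring
answer to the problem of notifying interested parties about a state change. A minimal Java sketch (the
build-server names are invented for illustration):

    import java.util.ArrayList;
    import java.util.List;

    // Observer pattern: the subject (BuildServer) notifies listeners of an
    // event without knowing, or caring, what each listener does with it.
    interface BuildListener {
        void onBuildFinished(boolean success);
    }

    class BuildServer {
        private final List<BuildListener> listeners = new ArrayList<>();

        void addListener(BuildListener listener) {
            listeners.add(listener);
        }

        void finishBuild(boolean success) {
            for (BuildListener listener : listeners) {
                listener.onBuildFinished(success);
            }
        }
    }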

Principle 7. When possible, represent the problem and its solution from a number of different
perspectives. When a problem and its solution are examined from a number of different perspectives, it
is more likely that greater insight will be achieved and that errors and omissions will be uncovered.
For example, a requirements model can be represented using a data-oriented viewpoint, a function-
oriented viewpoint, or a behavioral viewpoint. Each provides a different view of the problem and its
requirements.

Principle 8. Remember that someone will maintain the software. Over the long term, software will
be corrected as defects are uncovered, adapted as its environment changes, and enhanced as
stakeholders request more capabilities. These maintenance activities can be facilitated if solid software
engineering practice is applied throughout the software process.

4.3 PRINCIPLES THAT GUIDE EACH FRAMEWORK ACTIVITY

This section presents principles that have a strong bearing on the success of each generic framework
activity defined as part of the software process. In many cases, the principles discussed for each of the
framework activities are a refinement of the principles presented in Section 4.2. They are simply core
principles stated at a lower level of abstraction.

4.3.1 Communication Principles

Before customer requirements can be analyzed, modeled, or specified, they must be gathered through
the communication activity. A customer has a problem that may be amenable to a computer-based
solution. Communication has begun. But the road from communication to understanding is often full of
potholes.
Effective communication (among technical peers, with the customer and other stakeholders, and with
project managers) is among the most challenging activities that we will confront.

Many of the principles apply equally to all forms of communication that occur within a software project.

Principle 1. Listen. Try to focus on the speaker’s words, rather than formulating your response to
those words. Ask for clarification if something is unclear, but avoid constant interruptions. Never
become contentious in your words or actions (e.g., rolling your eyes or shaking your head) as a person is
talking.

Principle 2. Prepare before you communicate. Spend the time to understand the problem before you
meet with others. If necessary, do some research to understand business domain jargon. If you have
responsibility for conducting a meeting, prepare an agenda in advance of the meeting.

Principle 3. Someone should facilitate the activity. Every communication meeting should have a
leader (a facilitator)
(1) to keep the conversation moving in a productive direction,
(2) to mediate any conflict that does occur, and
(3) to ensure that other principles are followed.

Principle 4. Face-to-face communication is best. But it usually works better when some other
representation of the relevant information is present. For example, a participant may create a drawing
or a “strawman” document that serves as a focus for discussion.

Principle 5. Take notes and document decisions. Things have a way of falling into the cracks.
Someone participating in the communication should serve as a “recorder” and write down all important
points and decisions.

Principle 6. Strive for collaboration. Collaboration and consensus occur when the collective
knowledge of members of the team is used to describe product or system functions or features.
Each small collaboration serves to build trust among team members and creates a common goal for
the team.

Principle 7. Stay focused; modularize your discussion. The more people involved in any
communication, the more likely it is that discussion will bounce from one topic to the next. The
facilitator should keep the conversation modular, leaving one topic only after it has been resolved.

Principle 8. If something is unclear, draw a picture. Verbal communication goes only so far. A sketch
or drawing can often provide clarity when words fail to do the job.

Principle 9.
(a) Once you agree to something, move on.
(b) If you can’t agree to something, move on.
(c) If a feature or function is unclear and cannot be clarified at the moment, move on.
Communication, like any software engineering activity, takes time.

Principle 10. Negotiation is not a contest or a game. It works best when both parties win. There are
many instances in which you and other stakeholders must negotiate functions and features, priorities,
and delivery dates. If the team has collaborated well, all parties have a common goal. Still, negotiation
will demand compromise from all parties.

4.3.2 Planning Principles

The communication activity helps you to define your overall goals and objectives (subject, of course, to
change as time passes). However, understanding these goals and objectives is not the same as defining a
plan for getting there.

The planning activity encompasses a set of management and technical practices that enable the
software team to define a road map as it travels toward its strategic goal and tactical objectives.
The Difference Between Customers and End Users
Software engineers communicate with many different stakeholders, but customers and end
users have the most significant impact on the technical work that follows.

In some cases the customer and the end user are one and the same, but for many projects, the
customer and the end user are different people, working for different managers, in different business
organizations.
A customer is the person or group who
(1) originally requested the software to be built,
(2) defines overall business objectives for the software,
(3) provides basic product requirements, and
(4) coordinates funding for the project. In a product or system business, the customer is often the
marketing department. In an information technology (IT) environment, the customer might be a
business component or department.

An end user is the person or group who
(1) will actually use the software that is built to achieve some business purpose and
(2) will define operational details of the software so the business purpose can be achieved.

There are many different planning philosophies. Some people are “minimalists,” arguing that change
often obviates the need for a detailed plan. Others are “traditionalists,” arguing that the plan provides an
effective road map and the more detail it has, the less likely the team will become lost. Still others are
“agilists,” arguing that a quick “planning game” may be necessary, but that the road map will emerge as
“real work” on the software begins.

On many projects, overplanning is time consuming and fruitless (too many things change), but
underplanning is a recipe for chaos.

Regardless of the rigor with which planning is conducted, the following principles always apply:

Principle 1. Understand the scope of the project. It’s impossible to use a road map if you don’t know
where you’re going. Scope provides the software team with a destination.

Principle 2. Involve stakeholders in the planning activity. Stakeholders define priorities and
establish project constraints. To accommodate these realities, software engineers must often negotiate
order of delivery, timelines, and other project-related issues.

Principle 3. Recognize that planning is iterative. A project plan is never engraved in stone. As work
begins, it is very likely that things will change. As a consequence, the plan must be adjusted to
accommodate these changes. In addition, iterative, incremental process models dictate replanning after
the delivery of each software increment based on feedback received from users.

Principle 4. Estimate based on what you know. The intent of estimation is to provide an indication of
effort, cost, and task duration, based on the team’s current understanding of the work to be done. If
information is vague or unreliable, estimates will be equally unreliable.

Principle 5. Consider risk as you define the plan. If you have identified risks that have high impact
and high probability, contingency planning is necessary. In addition, the project plan (including the
schedule) should be adjusted to accommodate the likelihood that one or more of these risks will occur.

Principle 6. Be realistic. People don’t work 100 percent of every day. Noise always enters into any
human communication. Omissions and ambiguity are facts of life. Change will occur. Even the best
software engineers make mistakes. These and other realities should be considered as a project plan is
established.

Principle 7. Adjust granularity as you define the plan. Granularity refers to the level of detail that is
introduced as a project plan is developed. A “high-granularity” plan provides significant work task detail
that is planned over relatively short time increments (so that tracking and control occur frequently).

A “low-granularity” plan provides broader work tasks that are planned over longer time periods. In
general, granularity moves from high to low as the project time line moves away from the current date.
Over the next few weeks or months, the project can be planned in significant detail.
Activities that won’t occur for many months do not require high granularity (too much can change).

Principle 8. Define how you intend to ensure quality. The plan should identify how the software
team intends to ensure quality. If technical reviews are to be conducted, they should be scheduled. If pair
programming is to be used during construction, it should be explicitly defined within the plan.

Principle 9. Describe how you intend to accommodate change. Even the best planning can be
obviated by uncontrolled change.
Identify how changes are to be accommodated as software engineering work proceeds. For example, can
the customer request a change at any time? If a change is requested, is the team obliged to implement it
immediately? How is the impact and cost of the change assessed?

Principle 10. Track the plan frequently and make adjustments as required.
Software projects fall behind schedule one day at a time. Therefore, it makes sense to track progress on a
daily basis, looking for problem areas and situations in which scheduled work does not conform to
actual work conducted. When slippage is encountered, the plan is adjusted accordingly.
To be most effective, everyone on the software team should participate in the planning activity. Only
then will team members “sign up” to the plan.

4.3.3 Modeling Principles


We create models to gain a better understanding of the actual entity to be built. When the entity is a
physical thing (e.g., a building, a plane, a machine), we can build a model that is identical in form and
shape but smaller in scale. However, when the entity to be built is software, our model must take a
different form.
It must be capable of representing the information that software transforms, the architecture and
functions that enable the transformation to occur, the features that users desire, and the behavior of the
system as the transformation is taking place.
Models must accomplish these objectives at different levels of abstraction—first depicting the software
from the customer’s viewpoint and later representing the software at a more technical level.

In software engineering work, two classes of models can be created: requirements models and design
models.



Requirements models (also called analysis models) represent customer requirements by depicting the
software in three different domains: the information domain, the functional domain, and the
behavioral domain.
Design models represent characteristics of the software that help practitioners to construct it effectively:
the architecture, the user interface, and component-level detail.

Scott Ambler and Ron Jeffries define a set of modeling principles that are intended for those who use
the agile process model but are appropriate for all software engineers who perform modeling actions
and tasks:
Principle 1. The primary goal of the software team is to build software, not create models. Agility
means getting software to the customer in the fastest possible time. Models that make this happen are
worth creating, but models that slow the process down or provide little new insight should be avoided.

Principle 2. Travel light—don’t create more models than you need. Every model that is created must
be kept up-to-date as changes occur.
More importantly, every new model takes time that might otherwise be spent on construction (coding
and testing). Therefore, create only those models that make it easier and faster to construct the
software.

Principle 3. Strive to produce the simplest model that will describe the problem or the software.
Don’t overbuild the software. By keeping models simple, the resultant software will also be simple.
The result is software that is easier to integrate, easier to test, and easier to maintain (to change). In
addition, simple models are easier for members of the software team to understand and critique,
resulting in an ongoing form of feedback that optimizes the end result.

Principle 4. Build models in a way that makes them amenable to change. The intent of any model is
to communicate information. To accomplish this, use a consistent format. Assume that you won’t be
there to explain the model. It should stand on its own.

For example, since requirements will change, there is a tendency to give requirements models short
shrift. Why? Because you know that they’ll change anyway. The problem with this attitude is that
without a reasonably complete requirements model, you’ll create a design (design model) that will
invariably miss important functions and features.

Principle 5. Be able to state an explicit purpose for each model that is created. Every time you
create a model, ask yourself why you’re doing so. If you can’t provide solid justification for the existence
of the model, don’t spend time on it.

Principle 6. Adapt the models you develop to the system at hand. It may be necessary to adapt
model notation or rules to the application; for example, a video game application might require a
different modeling technique than real-time, embedded software that controls an automobile engine.

Principle 7. Try to build useful models, but forget about building perfect models. When building
requirements and design models, a software engineer reaches a point of diminishing returns. That is, the
effort required to make the model absolutely complete and internally consistent is not worth the
benefits of these properties. Am I suggesting that modeling should be sloppy or low quality? The answer
is “no.” But modeling should be conducted with an eye to the next software engineering steps. Iterating
endlessly to make a model “perfect” does not serve the need for agility.

Principle 8. Don’t become dogmatic about the syntax of the model. If it communicates content
successfully, representation is secondary. Although everyone on a software team should try to use
consistent notation during modeling, the most important characteristic of the model is to communicate
information that enables the next software engineering task. If a model does this successfully, incorrect
syntax can be forgiven.

Principle 9. If your instincts tell you a model isn’t right even though it seems okay on paper, you
probably have reason to be concerned. If you are an experienced software engineer, trust your
instincts. Software work teaches many lessons—some of them on a subconscious level. If something
tells you that a design model is doomed to fail (even though you can’t prove it explicitly), you have
reason to spend additional time examining the model or developing a different one.

Principle 10. Get feedback as soon as you can. Every model should be reviewed by members of the
software team. The intent of these reviews is to provide feedback that can be used to correct modeling
mistakes, change misinterpretations, and add features or functions that were inadvertently omitted.

Requirements modeling principles.

Over the past three decades, a large number of requirements modeling methods have been developed.
Investigators have identified requirements analysis problems and their causes and have developed a
variety of modeling notations and corresponding sets of heuristics to overcome them. Each analysis
method has a unique point of view. However, all analysis methods are related by a set of operational
principles:
Principle 1. The information domain of a problem must be represented and understood. The
information domain encompasses the data that flow into the system (from end users, other systems, or
external devices), the data that flow out of the system (via the user interface, network interfaces,
reports,
graphics, and other means), and the data stores that collect and organize persistent data objects (i.e.,
data that are maintained permanently).

Principle 2. The functions that the software performs must be defined. Software functions provide
direct benefit to end users and also provide internal support for those features that are user visible.
Some functions transform data that flow into the system.
In other cases, functions effect some level of control over internal software processing or external
system elements. Functions can be described at many different levels of abstraction, ranging from a
general statement of purpose to a detailed description of the processing elements that must be invoked.

Principle 3. The behavior of the software (as a consequence of external events) must be
represented. The behavior of computer software is driven by its interaction with the external
environment. Input provided by end users, control data provided by an external system, or monitoring
data collected over a network all cause the software to behave in a specific way.
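
A toy Java sketch of such event-driven behavior (a hypothetical media player, not an example from the
text): each external event maps to a state change, which is exactly what a behavioral model, such as a
state diagram, must capture:

    // External events (play, pause, stop) drive the state changes that a
    // behavioral requirements model would represent.
    enum PlayerState { STOPPED, PLAYING, PAUSED }

    class MediaPlayer {
        private PlayerState state = PlayerState.STOPPED;

        void onPlayPressed()  { state = PlayerState.PLAYING; }

        void onPausePressed() {
            if (state == PlayerState.PLAYING) {
                state = PlayerState.PAUSED;
            }
        }

        void onStopPressed()  { state = PlayerState.STOPPED; }

        PlayerState currentState() { return state; }
    }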

Principle 4. The models that depict information, function, and behavior must be partitioned in a
manner that uncovers detail in a layered (or hierarchical) fashion. Requirements modeling is the
first step in software engineering problem solving. It allows you to better understand the problem and
establishes a basis for the solution (design).

Complex problems are difficult to solve in their entirety. For this reason, you should use a divide-and-
conquer strategy. A large, complex problem is divided into subproblems until each subproblem is
relatively easy to understand. This concept is called partitioning or separation of concerns, and it is a key
strategy in requirements modeling.

Principle 5. The analysis task should move from essential information toward implementation
detail. Requirements modeling begins by describing the problem from the end-user’s perspective. The
“essence” of the problem is described without any consideration of how a solution will be implemented.

For example, a video game requires that the player “instruct” its protagonist on what direction to
proceed as she moves into a dangerous maze. That is the essence of the problem. Implementation detail
(normally described as part of the design model) indicates how the essence will be implemented. For the
video game, voice input might be used. Alternatively, a keyboard command might be typed, a joystick (or
mouse) might be pointed in a specific direction, or a motion-sensitive device might be waved in the air.
By applying these principles, a software engineer approaches a problem systematically.


Design Modeling Principles.


The software design model is analogous to an architect’s plans for a house. It begins by representing the
totality of the thing to be built (e.g., a three-dimensional rendering of the house) and slowly refines the
thing to provide guidance for constructing each detail (e.g., the plumbing layout). Similarly, the design
model that is created for software provides a variety of different views of the system.

There is no shortage of methods for deriving the various elements of a software design. Some methods
are data driven, allowing the data structure to dictate the program architecture and the resultant
processing components.

Others are pattern driven, using information about the problem domain (the requirements model) to
develop architectural styles and processing patterns. Still others are object oriented, using problem
domain objects as the driver for the creation of data structures and the methods that manipulate them.

A set of design principles that can be applied regardless of the method that is used:

Principle 1. Design should be traceable to the requirements model. The requirements model
describes the information domain of the problem, user-visible functions, system behavior, and a set of
requirements classes that package business objects with the methods that service them. The design
model translates this information into an architecture, a set of subsystems that implement major
functions, and a set of components that are the realization of requirements classes. The elements of the
design model should be traceable to the requirements model.



Principle 2. Always consider the architecture of the system to be built.
Software architecture is the skeleton of the system to be built. It affects interfaces, data structures,
program control flow and behavior, the manner in which testing can be conducted, the maintainability of
the resultant system, and much more. For all of these reasons, design should start with architectural
considerations. Only after the architecture has been established should component-level issues be
considered.

Principle 3. Design of data is as important as design of processing functions. Data design is an
essential element of architectural design. The manner in which data objects are realized within the
design cannot be left to chance. A well-structured data design helps to simplify program flow, makes
the design and implementation of software components easier, and makes overall processing more
efficient.

Principle 4. Interfaces (both internal and external) must be designed with care. The manner in
which data flows between the components of a system has much to do with processing efficiency, error
propagation, and design simplicity. A well-designed interface makes integration easier and assists the
tester in validating component functions.

Principle 5. User interface design should be tuned to the needs of the end user. In every case, it
should stress ease of use. The user interface is the visible manifestation of the software. No matter how
sophisticated its internal functions, no matter how comprehensive its data structures, no matter how
well designed its architecture, a poor interface design often leads to the perception that the software is
“bad.”

Principle 6. Component-level design should be functionally independent. Functional independence
is a measure of the “single-mindedness” of a software component. The functionality that is delivered by a
component should be cohesive—that is, it should focus on one and only one function or subfunction.

Principle 7. Components should be loosely coupled to one another and to the external
environment.
Coupling is achieved in many ways— via a component interface, by messaging, through global data. As
the level of coupling increases, the likelihood of error propagation also increases and the overall
maintainability of the software decreases. Therefore, component coupling should be kept as low as is
reasonable.

Principle 8. Design representations (models) should be easily understandable. The purpose of
design is to communicate information to practitioners who will generate code, to those who will test the
software, and to others who may maintain the software in the future. If the design is difficult to
understand, it will not serve as an effective communication medium.

Principle 9. The design should be developed iteratively. With each iteration, the designer should
strive for greater simplicity. Like almost all creative activities, design occurs iteratively. The first
iterations work to refine the design and correct errors, but later iterations should strive to make the
design as simple as is possible.
When these design principles are properly applied, you create a design that exhibits both external and
internal quality factors.


External quality factors are those properties of the software that can be readily observed by users (e.g.,
speed, reliability, correctness, usability).
Internal quality factors are of importance to software engineers. They lead to a high-quality design
from the technical perspective. To achieve internal quality factors, the designer must understand basic
design concepts.

4.3.4 Construction Principles


The construction activity encompasses a set of coding and testing tasks that lead to operational software
that is ready for delivery to the customer or end user.
In modern software engineering work, coding may be
(1) the direct creation of programming language source code (e.g., Java),
(2) the automatic generation of source code using an intermediate design-like representation of the
component to be built, or
(3) the automatic generation of executable code using a “fourth-generation programming
language” (e.g., Visual C).
The initial focus of testing is at the component level, often called unit testing.
Other levels of testing include:
(1) integration testing (conducted as the system is constructed),
(2) validation testing that assesses whether requirements have been met for the complete system (or
software increment), and
(3) acceptance testing that is conducted by the customer in an effort to exercise all required features
and functions.
The following set of fundamental principles and concepts are applicable to coding and testing:

Coding Principles. The principles that guide the coding task are closely aligned with programming
style, programming languages, and programming methods. There are a number of fundamental
principles that can be stated:
Preparation principles: Before you write one line of code, be sure you
• Understand the problem you’re trying to solve.
• Understand basic design principles and concepts.
• Pick a programming language that meets the needs of the software to be built and the environment in
which it will operate.
• Select a programming environment that provides tools that will make your work easier.
• Create a set of unit tests that will be applied once the component you code is completed (a minimal
sketch follows this list).
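
To make the last preparation principle concrete, here is a minimal JUnit 5 sketch (JUnit and the
PriceCalculator component are assumptions of this example, not part of the text). The tests are written
before the component exists and are applied once it is complete:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Written before PriceCalculator is coded: the tests record the intended
    // behavior and are run as soon as the component is completed.
    class PriceCalculatorTest {

        @Test
        void discountAppliedAtThreshold() {
            // 10% discount on orders of 100.0 or more (invented rule)
            PriceCalculator calc = new PriceCalculator(0.10, 100.0);
            assertEquals(90.0, calc.priceFor(100.0), 0.001);
        }

        @Test
        void noDiscountBelowThreshold() {
            PriceCalculator calc = new PriceCalculator(0.10, 100.0);
            assertEquals(99.0, calc.priceFor(99.0), 0.001);
        }
    }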
Programming principles: As you begin writing code, be sure you
• Constrain your algorithms by following structured programming [Boh00] practice.
• Consider the use of pair programming.
• Select data structures that will meet the needs of the design.
• Understand the software architecture and create interfaces that are consistent with it.
• Keep conditional logic as simple as possible.
• Create nested loops in a way that makes them easily testable.
• Select meaningful variable names and follow other local coding standards.
• Write code that is self-documenting.
• Create a visual layout (e.g., indentation and blank lines) that aids understanding.



Validation Principles:
• Conduct a code walkthrough when appropriate.
• Perform unit tests and correct errors you’ve uncovered.
• Refactor the code (a small before/after sketch follows this list).
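
As a small before/after illustration of the refactoring step (a hypothetical billing method; the rates are
invented), the behavior stays identical while the structure and naming improve:

    class BillingExample {

        // Before: duplicated rate logic and unexplained numeric literals.
        double charge(int minutes) {
            if (minutes <= 60) return minutes * 0.05;
            return 60 * 0.05 + (minutes - 60) * 0.03;
        }

        // After: the same behavior, but intent is named and duplication is gone.
        private static final int    BASE_MINUTES = 60;
        private static final double BASE_RATE    = 0.05;
        private static final double EXTRA_RATE   = 0.03;

        double chargeRefactored(int minutes) {
            int baseMinutes  = Math.min(minutes, BASE_MINUTES);
            int extraMinutes = Math.max(0, minutes - BASE_MINUTES);
            return baseMinutes * BASE_RATE + extraMinutes * EXTRA_RATE;
        }
    }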

Testing Principles. Glen Myers states a number of rules that can serve well as testing objectives:
• Testing is a process of executing a program with the intent of finding an error.
• A good test case is one that has a high probability of finding an as-yet-undiscovered error.
• A successful test is one that uncovers an as-yet-undiscovered error.
These objectives imply a dramatic change in viewpoint for some software developers.
They move counter to the commonly held view that a successful test is one in which no errors are found.
The objective is to design tests that systematically uncover different classes of errors and to do so with a
minimum amount of time and effort.
If testing is conducted successfully (according to the objectives stated previously), it will uncover errors
in the software. As a secondary benefit, testing demonstrates that software functions appear to be
working according to specification, and that behavioral and performance requirements appear to have
been met.

In addition, the data collected as testing is conducted provide a good indication of software reliability
and some indication of software quality as a whole. But testing cannot show the absence of errors and
defects; it can show only that software errors and defects are present.

It is important to keep this (rather gloomy) statement in mind as testing is being conducted.

Davis suggests a set of testing principles that have been adapted for use:

Principle 1. All tests should be traceable to customer requirements. The objective of software testing
is to uncover errors. It follows that the most severe defects (from the customer’s point of view) are those
that cause the program to fail to meet its requirements.

Principle 2. Tests should be planned long before testing begins. Test planning can begin as soon as
the requirements model is complete. Detailed definition of test cases can begin as soon as the design
model
has been solidified. Therefore, all tests can be planned and designed before any code has been
generated.

Principle 3. The Pareto principle applies to software testing. In this context the Pareto principle
implies that 80 percent of all errors uncovered during testing will likely be traceable to 20 percent of all
program components. The problem, of course, is to isolate these suspect components and to thoroughly
test them.

Principle 4. Testing should begin “in the small” and progress toward testing “in the large.” The
first tests planned and executed generally focus on individual components. As testing progresses, focus
shifts in an attempt to find errors in integrated clusters of components and ultimately in the entire
system.

Principle 5. Exhaustive testing is not possible. The number of path permutations for even a
moderately sized program is exceptionally large. For this reason, it is impossible to execute every
combination of paths during testing. It is possible, however, to adequately cover program logic and to
ensure that all conditions in the component-level design have been exercised.
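
A rough, illustrative calculation (not from the text) shows why: a component containing just 10
independent binary decisions already has up to 2^10 = 1,024 distinct execution paths, and 32 such
decisions give 2^32 paths, more than four billion. Exercising every condition at least once, by contrast,
needs only on the order of tens of test cases, which is why covering logic and conditions is the practical
goal.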

4.3.5 Deployment Principles

The deployment activity encompasses three actions: delivery, support, and feedback. Because modern
software process models are evolutionary or incremental in nature, deployment happens not once, but a
number of times as software moves toward completion.

Each delivery cycle provides the customer and end users with an operational software increment that
provides usable functions and features. Each support cycle provides documentation and human
assistance for all functions and features introduced during all deployment cycles to
date.
Each feedback cycle provides the software team with important guidance that results in modifications to
the functions, features, and approach taken for the next increment.

The delivery of a software increment represents an important milestone for any software project. A
number of key principles should be followed as the team prepares to deliver an increment:

Principle 1. Customer expectations for the software must be managed. Too often, the customer
expects more than the team has promised to deliver, and disappointment occurs immediately. This
results in feedback that is not productive and ruins team morale. In her book on managing expectations,
Naomi Karten states: “The starting point for managing expectations is to become more conscientious
about what you communicate and how.”
She suggests that a software engineer must be careful about sending the customer conflicting messages
(e.g., promising more than you can reasonably deliver in the time frame provided or delivering more
than you promise for one software increment and then less than promised for the next).

Principle 2. A complete delivery package should be assembled and tested. A CD-ROM or other
media (including Web-based downloads) containing all executable software, support data files, support
documents, and other relevant information should be assembled and thoroughly beta-tested with actual
users.
All installation scripts and other operational features should be thoroughly exercised in as many
different computing configurations (i.e., hardware, operating systems, peripheral devices, networking
arrangements) as possible.

Principle 3. A support regime must be established before the software is delivered. An end user
expects responsiveness and accurate information when a question or problem arises. If support is ad
hoc, or worse, nonexistent, the customer will become dissatisfied immediately.

Support should be planned, support materials should be prepared, and appropriate recordkeeping
mechanisms should be established so that the software team can conduct a categorical assessment of the
kinds of support requested.

Principle 4. Appropriate instructional materials must be provided to end users. The software team
delivers more than the software itself. Appropriate training aids (if required) should be developed;
troubleshooting guidelines should be provided; and, when necessary, a “what’s different about this
software increment” description should be published.

Principle 5. Buggy software should be fixed first, delivered later. Under time pressure, some
software organizations deliver low-quality increments with a warning to the customer that bugs “will be
fixed in the next release.” This is a mistake. There’s a saying in the software business: “Customers will
forget you delivered a high-quality product a few days late, but they will never forget the problems that a
low-quality product caused them. The software reminds them every day.”

The delivered software provides benefit for the end user, but it also provides useful feedback for the
software team. As the increment is put into use, end users should be encouraged to comment on features
and functions, ease of use, reliability, and any other characteristics that are appropriate.




Module 4 and 5

MODULE- 4

INTRODUCTION TO PROJECT MANAGEMENT


1.1 Introduction
We need to look at some key ideas about the planning, monitoring and control
of software projects. We will see that all projects are about meeting objectives.
Like any other project, a software project must satisfy real needs. To do this we
must identify the project’s stakeholders and their objectives. Ensuring that their
objectives are met is the aim of project management. However, we cannot know
that a project will meet its objectives in the future unless we know the present
state of the project.

1.2 Why is Software Project Management Important?


First, there is the question of money. A lot of money is at stake with ICT projects. In
the United Kingdom during the financial year 2002–2003, the central
government spent more on contracts for ICT projects than on contracts related to
roads (about £2.3 billion as opposed to £1.4 billion). The biggest departmental
spender was the Department for Work and Pensions, who spent over £800
million on ICT. Mismanagement of ICT projects means that there is less to
spend on good things such as hospitals.

Unfortunately, projects are not always successful. In a report published in 2003, the Standish Group in
the United States analysed 13,522 projects and concluded that only a third of projects were successful;
82% of projects were late and 43% exceeded their budget.

The reason for these project shortcomings is often the management of projects. The
National Audit Office in the UK, for example, among other factors causing project
failure identified ‘lack of skills and proven approach to project management and risk
management’.

1.3 What is a Project?

The dictionary definitions put a clear emphasis on the project being a planned activity. The emphasis on
being planned assumes we can determine how to carry out a task before we start.

Planning is in essence thinking carefully about something before you do it – even with
uncertain projects this is worth doing as long as the resulting plans are seen as provisional.

Other activities, such as routine maintenance, will have been performed so many times that everyone
knows exactly what to do. In these cases, planning hardly seems necessary, although procedures might
be documented to ensure consistency and to help newcomers.
The activities that benefit most from conventional project management are likely to
lie between these two extremes – see Figure 1.1.

There is a hazy boundary between the non-routine project and the routine job. The
first time you do a routine task it will be like a project. On the other hand, a project to
develop a system similar to previous ones that you have developed will have a large
element of the routine.

FIGURE 1.1 Activities most likely to benefit from project management

The following characteristics distinguish projects:


● non-routine tasks are involved;
● planning is required;
● specific objectives are to be met or a specified product is to be created;
● the project has a predetermined time span;
● work is carried out for someone other than yourself;
● work involves several specialisms;
● people are formed into a temporary work group to carry out the task;
● work is carried out in several phases;
● the resources that are available for use on the project are constrained;
● the project is large or complex.

The more any of these factors apply to a task, the more difficult that task will be.
Project size is particularly important.

Example: The project that employs 20 developers is likely to be disproportionately more difficult than
one with only 10 staff because of the need for additional coordination.
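
One commonly cited way to quantify this coordination overhead (an illustration drawn from general
software engineering literature, notably Brooks, rather than from this text): a team of n people has
n(n − 1)/2 potential communication channels, so 10 developers give 10 × 9 / 2 = 45 channels while 20
developers give 20 × 19 / 2 = 190 channels, roughly four times the coordination load for twice the staff.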

Also, expertise built up during the project may be lost when the team is eventually
dispersed at the end of the project.

1.4 Software Projects versus Other Types of Project

Many techniques in general project management also apply to software project management, but Fred
Brooks identified some characteristics of software projects which make them particularly difficult:

Invisibility When a physical artefact such as a bridge is constructed the progress can
actually be seen. With software, progress is not immediately visible. Software project
management can be seen as the process of making the invisible visible.

Complexity Per dollar, pound or euro spent, software products contain more
complexity than other engineered artefacts.

Conformity The ‘traditional’ engineer usually works with physical systems and
materials like cement and steel. These physical systems have complexity, but are
governed by consistent physical laws.

Software developers have to conform to the requirements of human clients. It is not


just that individuals can be inconsistent. Organizations, because of lapses in
collective memory, in internal communication or in effective decision making, can
exhibit remarkable ‘organizational stupidity’.

Flexibility That software is easy to change is seen as a strength. However, where the
software system interfaces with a physical or organizational system, it is expected
that the software will change to accommodate the other components rather than
vice versa. Thus software systems are particularly subject to change.

1.5 Contract Management and Technical Project Management

In-house projects are where the users and the developers of new software work for
the same organization. However, increasingly organizations contract out ICT
development to outside developers. Here, the client organization will often appoint a
‘project manager’ to supervise the contract who will delegate many technically
oriented decisions to the contractors.

1.5.1 Some of the common features of contract management and technical project management are as
follows:
1. Stakeholders are involved in both.
2. Teams from both the clients and the suppliers are involved in accomplishing the project.
3. They generally evolve out of needs and requirements from both the clients and the suppliers.
4. They are interdependent on each other.
5. Standard protocols are maintained by both the clients and the suppliers.
Thus, the project manager will not worry about estimating the effort needed to write
individual software components as long as the overall project is within budget and
on time. On the supplier side, there will need to be project managers who deal with
the more technical issues.

1.5.2 Differences between Contract Management and Technical Project Management

Some of the differences between contract management and technical project management are as
follows:

1. Contract management is part of the procurement function; it confirms that the terms and conditions
mentioned in the contract are properly adhered to. Technical project management is about managing all
aspects of the project, from planning till completion, within the constraints of the main project.

2. Contract managers are generally responsible for delivering the said project within the budget and on
time. Technical project managers are generally responsible for checking the progress of the project and
its validity as per the needs of the organization and the timeline.

3. Contract managers focus solely on the contract, which is a bond between the suppliers and the
clients. Technical project managers generally meet the suppliers on a regular basis in order to
emphasize their needs and demands.

4. The contract manager’s primary duty is to conduct research, do risk analysis, and negotiate on the
terms of the contract. The technical project manager’s primary duty is proper documentation, from
planning till completion, along with timelines and budgetary areas.

5. Contract managers are more involved in working closely with the clients to help them understand the
paperwork that the clients have signed. Technical project managers are involved in meeting the
deadlines of their projects with specifications, budgets, and timeframes in mind.

1.6 Activities Covered by Software Project Management

A software project is not only concerned with the actual writing of software. In fact,
where a software application is bought ‘off the shelf’, there may be no software
writing as such, but this is still fundamentally a software project because so many of
the other activities associated with software will still be present.

Usually there are three successive processes that bring a new system into being – see Figure 1.2.

FIGURE 1.2 The feasibility study/plan/execution cycle

The feasibility study assesses whether a project is worth starting – that it has a valid business case.
Information is gathered about the requirements of the proposed application. Requirements elicitation
can, at least initially, be complex and difficult. The stakeholders may know the aims they wish to
pursue, but not be sure about the means of achievement. The developmental and
operational costs, and the value of the benefits of the new system, will also have to be
estimated. With a large system, the feasibility study could be a project in its own right
with its own plan. The study could be part of a strategic planning exercise examining a
range of potential software developments. Sometimes an organization assesses a
programme of development made up of a number of projects.

Planning If the feasibility study indicates that the prospective project appears viable, the
project planning can start. For larger projects, we would not do all our detailed planning
at the beginning.

We create an outline plan for the whole project and a detailed one for the first stage.
Because we will have more detailed and accurate project information after the earlier
stages of the project have been completed, planning of the later stages is left to nearer
their start.

3. Project execution The project can now be executed. The execution of a project often
contains design and implementation sub-phases.

Students new to project planning often find that the boundary between design and
planning can be hazy. Design is making decisions about the form of the products to be
created. This could relate to the external appearance of the software, that is, the user
interface, or the internal architecture. The plan details the activities to be carried out to
create these products.

Planning and design can be confused because, at the most detailed level, planning
decisions are influenced by design decisions. Thus a software product with five major
components is likely to require five sets of activities to create them.

Figure 1.3 shows the typical sequence of software development activities recommended
in the international standard ISO 12207. Some activities are concerned with the system
while others relate to software. The development of software will be only one part of a
project.

Software could be developed, for example, for a project which also requires the
installation of an ICT infrastructure, the design of user jobs and user training.

Requirements analysis starts with requirements elicitation or requirements gathering,
which establishes what the potential users and their managers require of the new system.
It could relate to a function – that the system should do something. It could be a quality
requirement – how well the functions must work.

FIGURE 1.3 The ISO 12207 software development life cycle

An example of such a quality requirement is the transaction time for dispatching an
ambulance in response to an emergency telephone call. In this case the transaction
time would be affected by hardware and software performance as well as the speed of
human operation.

Training to ensure that operators use the computer system efficiently is an example of a
system requirement for the project, as opposed to a specifically software requirement.
There would also be resource requirements that relate to application development costs.

Architecture design The components of the new system that fulfil each requirement have to
be identified. Existing components may be able to satisfy some requirements.

In other cases, a new component will have to be made. These components are not only
software: they could be new hardware or work processes. Although software developers
are primarily concerned with software components, it is very rare that these can be
developed in isolation. They will, for example, have to take account of existing legacy
systems with which they will interoperate.

The design of the system architecture is thus an input to the software requirements.

A second architecture design process then takes place that maps the software
requirements to software components.

Detailed design Each software component is made up of a number of software units that
can be separately coded and tested. The detailed design of these units is carried out
separately.

● Code and test refers to writing code for each software unit. Initial testing to debug
individual software units would be carried out at this stage.

● Integration The components are tested together to see if they meet the overall
requirements. Integration could involve combining different software components,
or combining and testing the software element of the system in conjunction with
the hardware platforms and user interactions.

● Qualification testing The system, including the software components, has to be
tested carefully to ensure that all the requirements have been fulfilled.

● Installation This is the process of making the new system operational. It would
include activities such as setting up standing data (for example, the details for
employees in a payroll system), setting system parameters, installing the software
onto the hardware platforms and user training.

● Acceptance support This is the resolving of problems with the newly installed
system, including the correction of any errors, and implementing agreed extensions
and improvements. Software maintenance can be seen as a series of minor
software projects. In many environments, most software development is in fact
maintenance.

1.7 Plans, Methods and Methodologies

A plan for an activity must be based on some idea of a method of work. For
example, if you were asked to test some software, you may know nothing about
the software to be tested, but you could assume that you would need to:

● analyse the requirements for the software;

● devise and write test cases that will check that each requirement has been
satisfied;

● create test scripts and expected results for each test case;

● compare the actual results and the expected results and identify discrepancies.
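
As a minimal illustration of the last two steps, the hypothetical sketch below writes
one test case with an expected result and compares it against the actual result. The
function calculate_discount() and its figures are invented for the example:

```python
# A minimal, hypothetical sketch of a single test case. The requirement
# assumed here is that orders of 100 or more get a 10% discount
# (whole currency units); the function is a stand-in for real software.

def calculate_discount(order_value):
    # Unit under test (invented for illustration).
    return order_value * 10 // 100 if order_value >= 100 else 0

def test_discount_applied_at_threshold():
    expected = 10                      # expected result from the test script
    actual = calculate_discount(100)   # actual result from the software
    assert actual == expected          # a mismatch is a discrepancy to report

test_discount_applied_at_threshold()
print("No discrepancy between actual and expected results")
```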

While a method relates to a type of activity in general, a plan takes that method
(and perhaps others) and converts it to real activities, identifying for each
activity:

● its start and end dates;

● who will carry it out;

● what tools and materials – including information – will be needed.

The output from one method might be the input to another.

Methods and techniques are often grouped into methodologies, such as object-oriented
design.

1.8 Some Ways of Categorizing Software Projects

Projects may differ because of the different technical products to be created. Thus
we need to identify the characteristics of a project which could affect the way in
which it should be planned and managed. Other factors are discussed below.

Compulsory versus voluntary users

In workplaces there are systems that staff have to use if they want to do something,
such as recording a sale.

Use of a system is increasingly voluntary, as in the case of computer games. Here it is
difficult to elicit precise requirements from potential users as we could with a
business system. What the game will do will thus depend much on the informed
ingenuity of the developers, along with techniques such as market surveys, focus
groups and prototype evaluation.

Information systems versus embedded systems

A traditional distinction has been between information systems which enable staff
to carry out office processes and embedded systems which control machines. A
stock control system would be an information system. An embedded, or process
control, system might control the air conditioning equipment in a building. Some
systems may have elements of both where, for example, the stock control system
also controls an automated warehouse.

Software products versus services:

All types of software projects can be classified into software product development
projects and software services projects.

These two broad classes of software projects can be further classified into
subclasses, as shown in Figure 1.4 below.

A software product development project concerns developing the software keeping the
requirements of general customers in mind; the developed software is usually sold
off-the-shelf to a large number of customers.

A software product development project can further be classified depending on
whether it concerns developing a generic product or a domain-specific product.

A generic software product is sold to a broad spectrum of customers and is said to
have a horizontal market.

Examples: the MS Windows operating systems and Oracle Corporation's Oracle 8i
database management software.

Domain-specific software products are sold to specific categories of customers
(called verticals), such as banking, telecommunication, and finance and accounts.

Examples: BANCS from TCS and FINACLE from Infosys in the banking domain, and
AspenPlus from Aspen Corporation in chemical process simulation.

Outsourced projects

While developing a large project, sometimes, it makes good commercial sense for a
company to outsource some parts of its work to other companies.

For example, a company may consider outsourcing as a good option, if it feels that it
does not have sufficient expertise to develop some specific parts of the product or if
it determines that some parts can be developed cost-effectively by another
company. Since an outsourced project is a small part of some project, it is usually
small in size and needs to be completed within a few months.

Considering these differences between an outsourced project and a conventional
project, managing an outsourced project entails special challenges.

Indian software companies excel in executing outsourced software projects and have
earned a fine reputation in this field all over the world. Of late, the Indian companies
have slowly begun to focus on product development as well.

The type of development work being handled by a company can have an impact on
its profitability. For example, a company that has developed a generic software
product usually gets an uninterrupted stream of revenue over several years.
Outsourced projects, in contrast, fetch only one-time revenue.

Objective-driven development

Projects may be distinguished by whether their aim is to produce a product or to meet
certain objectives.

A project might be to create a product, the details of which have been specified by the
client. The client has the responsibility for justifying the product.

On the other hand, the project requirement might be to meet certain objectives which
could be met in a number of ways. An organization might have a problem and ask a
specialist to recommend a solution.

Many software projects have two stages.

● First is an objective-driven project resulting in recommendations. This might
identify the need for a new software system.

● The next stage is a project actually to create the software product.

This is useful where the technical work is being done by an external group and the
user needs are unclear at the outset. The external group can produce a preliminary
design at a fixed fee. If the design is acceptable the developers can then quote a price
for the second, implementation, stage based on an agreed requirement.

1.8.1 CATEGORIES OF SOFTWARE PROJECTS: -

Software projects are categorized into 4 categories. They are as follows: -

1. Custom Software Projects - In this category, the software application is developed
as per the specific requirements of a user or an organization. Such software meets
the specific needs of that customer, unlike traditional off-the-shelf software. It is
customized either by a third party, after a contract is signed between the customer
and the developer, or by the in-house research and development team of the
organization. Custom software is tailored to the needs of a single entity and is
used only by that entity.

2. Distributed Computing Projects - A distributed computing system consists of
multiple software components that are on multiple computers but run as a single
system.

The computers in a distributed system can be physically close together and
connected by a local network, or they can be geographically distant and connected
by a wide area network. A distributed system permits resource sharing, including
software, by systems linked to the network. Examples of distributed systems include
intranets, the Internet, the World Wide Web and email.

3. Free Software Projects – Free software is software that can be freely used,
modified and redistributed, with only one constraint: any redistributed version of
the software must be distributed with the original terms of free use, modification
and distribution. In other words, the user has the liberty to copy, run, download
and distribute the software, and to adapt it to their own needs. Such software
grants these liberties without payment, and users can modify the programs as they
see fit.

4. Software Hosted on CodePlex – CodePlex is Microsoft's open-source project
hosting website. It is a site for managing open-source software projects; most of
those projects are permissively licensed, commonly written in C#, and serve as
building blocks for one's own open-source projects, including advanced GUI control
libraries. The great thing about permissively licensed building blocks is that one
does not have to worry about the project being pulled into the GPL if one decides
to close the source. Because CodePlex is based on Team Foundation Server, it also
provides enterprise bug tracking and build management for open-source projects,
which goes beyond the services provided by SourceForge.

1.9 Project Charter

● A project charter is a formal, characteristically brief document that describes the
project in its entirety — including what the objectives are, how it will be carried
out, and who the stakeholders are. It is a critical component in planning the
project because it is used throughout the project life cycle.

● A project charter explains the project in clear, concise wording for high-level
management.

● Project charters summarize the project to help teams rapidly comprehend the
goals, tasks, timelines and stakeholders.

● The document provides key information about a project and provides approval to
start the project. It therefore serves as a formal announcement that a newly
approved project is about to commence.

● The project charter also contains the appointment of the project manager, the
person who is overall responsible for the project.

● The project charter is a formal official document prepared in accordance with
the mission and vision of the company, along with the deadlines and milestones
to be achieved in the project.

● It acts as a road map for the project manager and clearly describes the
objectives that have to be achieved in the project.

● The project charter clearly defines the project, its attributes, the end results,
and the project authorities who will be handling the project.

● The project charter, along with the project plan, provides the strategic plan for
implementing the project. It is also the green signal for the project manager to
commence the project.

Elements of the project charter:

In a nutshell, the elements of the project charter are:

● Reasons for the project
● Objectives and constraints of the project
● The main stakeholders
● Risks identified
● Benefits of the project
● General overview of the budget

Benefits of project charter:

Some of the benefits of a project charter are as follows:

● It improves the customer relationship
● It improves project management methods
● It expands and enhances regional and headquarters communications
● It supports in gaining project funding
● It recognizes senior management roles and authorities
● It allows development aimed at achieving industry best practices

1.10 Stakeholders

These are people who have a stake or interest in the project. Their early identification
is important as you need to set up adequate communication channels with them.
Stakeholders can be categorized as:

● Internal to the project team This means that they will be under the direct
managerial control of the project leader.

● External to the project team but within the same organization For example, the
project leader might need the assistance of the users to carry out systems
testing. Here the commitment of the people involved has to be negotiated.

● External to both the project team and the organization External stakeholders
may be customers (or users) who will benefit from the system that the project
implements. They may be contractors who will carry out work for the project.
The relationship here is usually based on a contract.

Different types of stakeholder may have different objectives and one of the jobs of the
project leader is to recognize these different interests and to be able to reconcile them.
For example, end-users may be concerned with the ease of use of the new application,
while their managers may be more focused on staff savings.

The project leader needs to be a good communicator and negotiator. Boehm and Ross
proposed a ‘Theory W’ of software project management where the manager concentrates
on creating situations where all parties benefit from a project and therefore have an
interest in its success. (The ‘W’ stands for ‘win–win’.)

Project managers can sometimes miss an important stakeholder group, especially in
unfamiliar business contexts. These could be departments supplying important services
that are taken for granted.

Given the importance of coordinating the efforts of stakeholders, the recommended
practice is for a communication plan to be created at the start of a project.
1.11 Setting Objectives

Among all these stakeholders are those who actually own the project. They control the
financing of the project. They also set the objectives of the project.

The objectives should define what the project team must achieve for project success.
Although different stakeholders have different motivations, the project objectives identify
the shared intentions for the project.

Objectives focus on the desired outcomes of the project rather than the tasks within it –
they are the ‘post-conditions’ of the project.

Informally the objectives could be written as a set of statements following the opening
words 'the project will be a success if. . .'. Thus one statement in a set of objectives
might be 'customers can order our products online' rather than 'to build an e-commerce
website'. There is often more than one way to meet an objective, and the more possible
routes to success the better.

There may be several stakeholders, including users in different business areas, who might
have some claim to project ownership. In such a case, a project authority needs to be
explicitly identified with overall authority over the project.

This authority is often a project steering committee (or project board or project
management board) with overall responsibility for setting, monitoring and modifying
objectives. The project manager runs the project on a day-to-day basis, but regularly
reports to the steering committee.

Sub-objectives and goals

An effective objective for an individual must be something that is within the control of
that individual. An objective might be that the software application produced must pay for
itself by reducing staff costs. As an overall business objective this might be reasonable.

For software developers it would be unreasonable, as any reduction in operational staff
costs depends not just on them but on the operational management of the delivered
system. A more appropriate goal or sub-objective for the software developers would be
to keep development costs within a certain budget.

We can say that in order to achieve the objective we must achieve certain goals or
sub-objectives first. These are steps on the way to achieving an objective, just as goals scored
in a football match are steps towards the objective of winning the match. Informally this
can be expressed as a set of statements following the words ‘To reach objective. . ., the
following must be in place. . .’.

The mnemonic SMART is sometimes used to describe well-defined objectives:

Specific Effective objectives are concrete and well defined. Vague aspirations such as ‘to
improve customer relations’ are unsatisfactory. Objectives should be defined so that it is
obvious to all whether the project has been successful.

Measurable Ideally there should be measures of effectiveness which tell us how successful
the project has been. For example, ‘to reduce customer complaints’ would be more
satisfactory as an objective than 'to improve customer relations'. The measure can, in some
cases, be the answer to a simple yes/no question, e.g. 'Did we install the new software by 1
June?'

Achievable It must be within the power of the individual or group to achieve the
objective.

Relevant The objective must be relevant to the true purpose of the project.

Time constrained There should be a defined point in time by which the objective should
have been achieved.

Measures of effectiveness

Measures of effectiveness provide practical methods of checking that an objective has
been met. 'Mean time between failures' (mtbf) might be used to measure reliability. This
is a performance measurement and, as such, can only be taken once the system is
operational. Project managers want to get some idea of the performance of the
completed system as it is being constructed, so they will seek predictive measures.

For example, a large number of errors found during code inspections might indicate
potential problems with reliability later.
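
A minimal sketch of both kinds of measure, using invented figures, might look like
this (the module names and thresholds are illustrative, not from any real project):

```python
# A minimal sketch, with invented figures, of a direct measure of
# reliability (mean time between failures) and a predictive measure
# (errors found in code inspections) of the kind described above.

operational_hours = 2000          # observed running time of the system
failures_observed = 4             # failures logged over that period
mtbf = operational_hours / failures_observed
print(f"Mean time between failures: {mtbf:.0f} hours")   # 500 hours

# Predictive measure taken during construction: a module whose code
# inspections found many errors may signal reliability problems later.
inspection_errors = {"module_a": 3, "module_b": 21, "module_c": 5}
for module, errors in inspection_errors.items():
    if errors > 10:
        print(f"{module}: {errors} inspection errors - potential reliability risk")
```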

1.12 The Business Case

Most projects need to have a justification or business case: the effort and expense of
pushing the project through must be seen to be worthwhile in terms of the benefits that
will eventually be felt.

A cost–benefit analysis will often be part of the project’s feasibility study. This will
itemize and quantify the project’s costs and benefits. The benefits will be affected by the
completion date: the sooner the project is completed, the sooner the benefits can be
experienced.

The quantification of benefits will often require the formulation of a business model
which explains how the new application can generate the claimed benefits.

A simple example of a business model is that a new web-based application might
allow customers from all over the world to order a firm's products via the internet,
increasing sales and thus increasing revenue and profits. Any project plan must
ensure that the business case is kept intact.

For example:

● the development costs are not allowed to rise to a level which threatens to exceed
the value of the benefits;
● the features of the system are not reduced to a level where the expected benefits
cannot be realized;
● the delivery date is not delayed so that there is an unacceptable loss of benefits.
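
As a minimal worked sketch of this check, with all figures invented for illustration,
the business case can be tested by comparing costs against benefits, allowing for the
effect of a delivery delay:

```python
# A minimal sketch, with invented figures, of the checks listed above:
# the business case stays intact only while the value of the benefits
# still exceeds the costs, allowing for any delivery delay.

development_cost = 120_000     # estimated cost of building the system
annual_benefit = 60_000        # e.g. extra revenue from online ordering
years_of_use = 4               # period over which benefits are felt
delay_months = 3               # a late delivery erodes the benefits

total_benefit = annual_benefit * years_of_use
lost_benefit = annual_benefit * delay_months / 12
net_benefit = total_benefit - lost_benefit - development_cost

print(f"Net benefit: {net_benefit:,.0f}")   # 105,000
print("Business case intact" if net_benefit > 0 else "Business case broken")
```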

1.13 Project Success and Failure

The project plan should be designed to ensure project success by preserving the
business case for the project. However, every non-trivial project will have problems,
and at what stage do we say that a project is actually a failure? Because different
stakeholders have different interests, some stakeholders in a project might see it as a
success while others do not.

We can distinguish between project objectives and business objectives.

The project objectives are the targets that the project team is expected to achieve. In
the case of software projects, they can usually be summarized as delivering the agreed
functionality, to the required level of quality, on time and within budget.

A project could meet these targets but the application, once delivered, could fail to meet
the business case. A computer game could be delivered on time and within budget, but
might then not sell. A commercial website used for online sales could be created
successfully, but customers might not use it to buy products, because they could buy the
goods more cheaply elsewhere.

We have seen that in business terms it can generally be said that a project is a success if the
value of benefits exceeds the costs. While project managers have considerable control
over development costs, the value of the benefits of the project deliverables is dependent
on external factors such as the number of customers.

Project objectives still have some bearing on eventual business success. A delay in
completion reduces the amount of time during which benefits can be generated and
diminishes the value of the project.

A project can be a success on delivery but then be a business failure. On the other hand, a
project could be late and over budget, but its deliverables could still, over time, generate
benefits that outweigh the initial expenditure.

The possible gap between project and business concerns can be reduced by having a
broader view of projects that includes business issues.

For example, the project management of an e-commerce website implementation could
plan activities such as market surveys, competitor analysis, focus groups, prototyping,
and evaluation by typical potential users – all designed to reduce business risks.

Because the focus of project management is, not unnaturally, on the immediate project, it may
not be seen that the project is actually one of a sequence. Later projects benefit from the
technical skills learnt on earlier projects.

Technical learning will increase costs on the earlier projects, but later projects benefit as
the learnt technologies can be deployed more quickly, cheaply and accurately. This
expertise is often accompanied by additional software assets.

For example, reusable code. Where software development is outsourced, there may be
immediate savings, but these longer-term benefits of increased expertise will be lost.
Astute managers may assess which areas of technical expertise it would be beneficial to
develop.

Customer relationships can also be built up over a number of projects. If a client has
trust in a supplier who has done satisfactory work in the past, they are more likely to
use that company again, particularly if the new requirement builds on functionality
already delivered. It is much more expensive to acquire new clients than it is to

Project Management and Software quality 57


Module 4 and 5
retain existing ones.

1.14 What is Management?

Definition :

Software Engineering Management can be defined as the application of management activities –
planning, coordinating, measuring, monitoring, controlling, and reporting – to ensure that the
development and maintenance of software is systematic, disciplined, and quantified.

Software Project Management (SPM) is a proper way of planning and leading software
projects. It is a part of project management in which software projects are planned,
implemented, monitored, and controlled.

Leaving aside the special characteristics of software, management in general involves
the following activities:

● planning – deciding what is to be done;

● organizing – making arrangements;

● staffing – selecting the right people for the job etc.;

● directing – giving instructions;

● monitoring – checking on progress;

● controlling – taking action to remedy hold-ups;

● innovating – coming up with new solutions;

● representing – liaising with clients, users, developers, suppliers and other
stakeholders.

Much of the project manager’s time is spent on only three of the eight identified
activities, viz., project planning, monitoring, and control. The time period
during which these activities are carried out is indicated in Fig. 1.4.

It shows that project management is carried out over three well-defined stages
or processes, irrespective of the methodology used. In the project initiation
stage, an initial plan is made.

As the project starts, the project is monitored and controlled to proceed as
planned.

The initial plan is revised periodically to accommodate additional details and
constraints about the project as they become available.

Finally, the project is closed. In the project closing stage, all activities are logically
completed and all contracts are formally closed.

Initial project planning is undertaken immediately after the feasibility study
phase and before starting the requirements analysis and specification process.
Figure 1.4 shows this project initiation period.

Initial project planning involves estimating several characteristics of a project.
Based on these estimates, all subsequent project activities are planned. The
initial project plans are revised periodically as the project progresses and more
project data becomes available.

Once the project execution starts, monitoring and control activities are taken
up to ensure that the project execution proceeds as planned.

The monitoring activity involves monitoring the progress of the project. Control
activities are initiated to minimize any significant variation in the plan.

FIGURE 1.4 Principal project management processes

Project planning is an important responsibility of the project manager. During
project planning, the project manager needs to perform a few well-defined
activities that have been outlined below.

Note that we have given a very brief description of these activities in this
chapter. We will discuss these activities in more detail in subsequent chapters.

Several best practices have been proposed for software project planning
activities.

One of these is Step Wise planning, which is based on the popular PRINCE2
(PRojects IN Controlled Environments) method.

While PRINCE2 is used extensively in the UK and Europe, similar software project
management best practices have been put forward in the USA by the Project Management
Institute in its publication 'A Guide to the Project Management Body of Knowledge'
(PMBOK).

● Estimation The following project attributes are estimated (a minimal worked
sketch follows this list).

● Cost How much is it going to cost to complete the project?

● Duration How long is it going to take to complete the project?

● Effort How much effort would be necessary to complete the project?

The effectiveness of all activities such as scheduling and staffing, which are
planned at a later stage, depends on the accuracy with which these three
project parameters have been estimated.

● Scheduling Based on estimations of effort and duration, the schedules for
manpower and other resources are developed.

● Staffing Staff organization and staffing plans are made.

● Risk Management This activity includes risk identification, analysis, and
abatement planning.

● Miscellaneous Plans This includes making several other plans, such as the
quality assurance plan and the configuration management plan.
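
The sketch promised above uses invented size, productivity and cost figures to show
how the three estimates interrelate; real projects would use calibrated estimation
models rather than these illustrative numbers:

```python
# A minimal worked sketch, with invented figures, of how cost, effort
# and duration estimates interrelate. All values are illustrative only.

size_kloc = 20                  # estimated size: thousands of lines of code
productivity = 0.5              # assumed KLOC per person-month
team_size = 5                   # planned number of developers
cost_per_person_month = 8_000   # assumed loaded staff cost

effort = size_kloc / productivity       # effort in person-months: 40
duration = effort / team_size           # naive duration estimate: 8 months
cost = effort * cost_per_person_month   # 320,000

print(f"Effort:   {effort:.0f} person-months")
print(f"Duration: {duration:.0f} months")
print(f"Cost:     {cost:,.0f}")
```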

Project monitoring and control activities are undertaken after the initiation
of development activities.

• The aim of project monitoring and control activities is to ensure that the
software development proceeds as planned.

• While carrying out project monitoring and control activities, a project
manager may sometimes find it necessary to change the plan to cope with
specific situations and make the plan more accurate as more project data
becomes available.

• At the start of a project, the project manager does not have complete
knowledge about the details of the project. As the project progresses
through different development phases, the manager’s information base
gradually improves.

• The complexities of different project activities become clear, some of the
anticipated risks get resolved, and new risks appear.

• The project parameters are re-estimated periodically, incorporating new
understanding and changes in project parameters.

• By taking these developments into account, the project manager can plan
subsequent activities more accurately with increasing levels of
confidence.

Figure 1.4 shows this aspect as iterations between monitoring and control, and the
plan revision activities.

1.15 Management Control

Management, in general, involves setting objectives for a system and then
monitoring the performance of the system.

In Figure 1.5 the ‘real world’ is shown as being rather formless. Especially in
the case of large undertakings, there will be a lot going on about which
management should be aware.

This will involve the local managers in data collection. Bare details, such as
‘location X has processed 2000 documents’, will not be very useful to higher
management: data processing will be needed to transform this raw data into
useful information. This might be in such forms as ‘percentage of records
processed’, ‘average documents processed per day per person’ and ‘estimated
completion date’.
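
A minimal sketch of this data-to-information step, with invented progress figures,
might compute exactly those items:

```python
# A minimal sketch, with invented progress data, of turning raw data
# ('location X has processed 2000 documents') into management
# information: percentage processed, throughput per person, and an
# estimated completion date.

from datetime import date, timedelta

records_total = 5000     # records to be transferred at this branch
records_done = 2000      # raw datum collected by the local manager
staff = 4
days_worked = 10

pct_processed = 100 * records_done / records_total
per_person_per_day = records_done / (staff * days_worked)
days_remaining = (records_total - records_done) / (per_person_per_day * staff)
estimated_completion = date.today() + timedelta(days=round(days_remaining))

print(f"Percentage of records processed: {pct_processed:.0f}%")           # 40%
print(f"Average documents per day per person: {per_person_per_day:.0f}")  # 50
print(f"Estimated completion date: {estimated_completion}")
```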


FIGURE 1.5 The project control cycle

In our example, the project management might examine the 'estimated completion
date' for completing data transfer for each branch. These can be checked against
the overall target date for completion of this phase of the project.

In effect they are comparing actual performance with one aspect of the overall
project objectives. They might find that one or two branches will fail to
complete the transfer of details in time. They would then need to consider what
to do (this is represented in Figure 1.5 by the box Making decisions/ plans).

One possibility would be to move staff temporarily from one branch to another. If this is
done, there is always the danger that while the completion date for the one branch is
pulled back to before the overall target date, the date for the branch from which staff are
being moved is pushed forward beyond that date.

The project manager would need to calculate carefully what the impact would be of
moving staff from particular branches. This is modelling the consequences of a
potential solution. Several different proposals could be modelled in this way before
one was chosen for implementation.

Having implemented the decision, the situation needs to be kept under review by
collecting and processing further progress details.

For instance, the next time that progress is reported, a branch to which staff have been
transferred could still be behind in transferring details. This might be because the
manual records are incomplete, and another department, for whom the project has a low
priority, has to be involved in providing the missing information. In this case,
transferring extra staff to do data inputting will not have accelerated data transfer.

A project plan is dynamic and will need constant adjustment during the execution of the
project.
A good plan provides a foundation for a good project, but is nothing without intelligent
execution. The original plan will not be set in stone but will be modified to take account of
changing circumstances.
Software Development and Project Management Life Cycles

● The software development life cycle denotes the stages through which a software
is developed.

In Figure 1.7 a software development life cycle (SDLC) is shown in terms of the set of
activities that are undertaken during a software development project, their grouping
into different phases and their sequencing.

● During the software development cycle, starting from its conception, the
developers carry out several processes (or development methodologies) till
the software is fully developed and deployed at the client site.

● Examples of the development processes undertaken by the development team
include requirements gathering and analysis, requirements specification,
architectural design, detailed design, coding and testing.

● In contrast, the project management life cycle typically starts well before
the software development activities start and continues for the entire
duration of the SDLC shown in Figure 1.7.

● During the software development life cycle, the developers carry out several
types of development processes.

● During the software management life cycle, the software manager carries out
several project management activities (or project management methodologies)
to perform the required software management activities.

Examples of the project management processes carried out by a project manager
include project initiation, planning, execution, monitoring, controlling and
closing.

● The activities carried out by the developers during the software development
life cycle as well as the management life cycle are grouped into a number of
phases.

● The sets of phases and their sequencing in the software development life cycle
and the project management life cycle are shown in Figure 1.8 below.

● The different phases in the software development life cycle are requirements
analysis, design, development, test and delivery.

● The different phases in the project management life cycle are initiation,
planning, execution and closing.


1.16 Project Management Life Cycle

The different phases of the project management life cycle are shown in Figure 1.8. In
the following, we discuss the main activities that are carried out in each phase.

Project Initiation

As shown in Figure 1.8, the software project management life cycle starts with
project initiation.

The project initiation phase usually starts with project concept development. During
concept development the different characteristics of the software to be developed
are thoroughly understood. The different aspects of the project that are investigated
and understood include: the scope of the project, project constraints, the cost that
would be incurred and the benefits that would accrue.

Based on this understanding, a feasibility study is undertaken to determine whether
the project would be financially and technically feasible. This is true for all types
of projects, including in-house product development projects as well as outsourced
projects.

For example, an organization might feel a need for a software to automate some of
its activities, possibly for more efficient operation. Based on the feasibility study, the
business case is developed.

Once the top management agrees to the business case, the project manager is
appointed, the project charter is written, and finally the project team is formed. This
sets the ground for the manager to start the project planning phase.

During the project initiation phase it is crucial for the champions of the project to
develop a thorough understanding of the important characteristics of the project.

In his W5HH principle, Barry Boehm summarized the questions that need to be
asked and answered in order to have an understanding of these project
characteristics.

W5HH Principle: Boehm suggested that during project initiation, the project
champions should have comprehensive answers to a set of key questions pertaining
to the project.

The name of this principle (W5HH) is an acronym constructed from the first letter of
each question.

This set of seven questions is the following:

● Why is the software being built?
● What will be done?
● When will it be done?
● Who is responsible for a function?
● Where are they organizationally located?
● How will the job be done technically and managerially?
● How much of each resource is needed?

Project bidding:

Once an organization's top management is convinced by the business case, the project
charter is developed. For some categories of projects, it may be necessary to have a
formal bidding process to select a suitable vendor based on some cost-performance
criteria. If the project involves automating some activities of an organization, the
organization may either decide to develop it in-house or may get various software
vendors to bid for the project.

The different types of bidding techniques, their implications and their applicability are discussed below.

Request for quotation (RFQ): An organization advertises an RFQ if it has a good
understanding of the project and the possible solutions. While publishing the RFQ,
the organization would have to mention the scope of the work in a statement of
work (SOW) document. Based on the RFQ, different vendors can submit their
quotations.

The RFQ issuing organization can select a vendor based on the price quoted as well
as the competency of the vendor.

In government organizations, the term Request For Tender (RFT) is usually used in
place of RFQ. An RFT is similar to an RFQ; however, in an RFT the bidder needs to
deposit a tender fee in order to participate in the bidding process.

Request for proposal (RFP): Many times it so happens that an organization has a
reasonable understanding of the problem to be solved, but does not have a good
grasp of the solution aspects.

• The organization may not have sufficient knowledge about the different
features that are to be implemented, and may lack familiarity with the
possible choices of the implementation environment, such as, databases,
operating systems, client-server deployment, etc.

• In this case, the organization may solicit solution proposals from vendors.
The vendors may submit a few alternative solutions and the approximate
costs for each solution. In order to develop a better understanding, the
requesting organization may ask the vendors to explain or demonstrate their
solutions.

• It needs to be understood that the purpose of an RFP is to get an understanding
of the alternative solutions that can be deployed, and not vendor selection.

• Based on the RFP process, the requesting organization can form a clear idea
of the project solutions required, based on which it can form a statement of
work (SOW) for requesting RFQ from the vendors.

Request for Information (RFI): An organization soliciting bids may publish an RFI.
Based on the vendor response to the RFI, the organization can assess the
competencies of the vendors and shortlist the vendors who can bid for the work.
However, it must be noted that vendor selection is seldom done based on the RFI
alone; the RFI responses from the vendors may be used in conjunction with RFP and
RFQ responses for vendor selection.

Project planning

An important outcome of the project initiation phase is the project charter. During the
project planning phase, the project manager carries out several processes and creates the
following documents:
Project plan: This document identifies the project tasks and a schedule for the
project tasks, assigning project resources and time frames to the tasks.

Resource plan: It lists the resources, manpower and equipment that would be required
to execute the project.

Financial plan: It documents the plan for manpower, equipment and other costs.

Quality plan: Plans for quality targets and controls are included in this document.

Risk plan: This document lists the identified potential risks, their prioritization,
and a plan for the actions that would be taken to contain the different risks.

Project execution
• In this phase the tasks are executed as per the project plan developed during
the planning phase. A series of management processes are undertaken to
ensure that the tasks are executed as per plan.

• Monitoring and control processes are executed to ensure that the tasks are
executed as per plan and corrective actions are initiated whenever any
deviations from the plan are noticed.

• The project plan may have to be revised periodically to accommodate any
changes that may arise on account of change requests, risks and various
events that occur during project execution.

• Quality of the deliverables is ensured through the execution of proper processes.
Once all the deliverables are produced and accepted by the customer, the
project execution phase completes and the project closure phase starts.

Project closure

Project closure involves completing the release of all the required deliverables to
the customer along with the necessary documentation.

Subsequently, all the project resources are released, supply agreements with the
vendors are terminated and all pending payments are completed.

Finally, a post-implementation review is undertaken to analyse the project
performance and to list the lessons learnt for use in future projects.

1.17 Traditional versus Modern Project Management Practices

Over the last two decades, the basic approach taken by the software industry to develop
software has undergone a radical change. Hardly any software is being developed from
scratch any more. Software development projects are increasingly being based on either
tailoring some existing product or reusing certain pre-built libraries.

In either case, two important goals of recent life cycle models are maximization of code
reuse and compression of project durations. Other goals include facilitating and
accommodating client feedbacks and customer participation in project development
work, and incremental delivery of the product with evolving functionalities.

Change requests from customers are encouraged, rather than circumvented. Clients on
the other hand, are demanding further reductions in product delivery times and costs.
These recent developments have changed project management practices in many
significant ways.

Some important differences between modern project management practices and
traditional practices are discussed below.

Planning Incremental Delivery: A few decades ago, projects were much simpler and
therefore more predictable than present-day projects. In those days, projects were
planned in sufficient detail well before the actual project execution started. After
project initiation, monitoring and control activities were carried out to ensure that
the project execution proceeded as per plan.

Now, projects are required to be completed over a much shorter duration, and rapid
application development and deployment are considered key strategies. The traditional
long-term planning has given way to adaptive short-term planning.

Instead of making a long-term project completion plan, the project manager now plans all
incremental deliveries with evolving functionalities. This type of project management is
often called extreme project management.

Extreme project management is a highly flexible approach to project management that
concentrates on the human side of project management (e.g., managing project
stakeholders), rather than on formal and complex planning and monitoring techniques.

Quality Management: Of late, customer awareness about product quality has increased
significantly. Tasks associated with quality management have become an important
responsibility of the project manager. The key responsibilities of a project manager now
include assessment of project progress and tracking the quality of all intermediate
artifacts.

Change Management: Earlier, when the requirements were signed off by the customer,
any changes to the requirements were rarely entertained. Customer suggestions are now
actively solicited and incorporated throughout the development process.

To facilitate customer feedback, incremental delivery models are popularly being used.
Product development is being carried out through a series of product versions
implementing increasingly greater functionalities.

Also customer feedback is solicited on each version for incorporation. This has made it
necessary for an organization to keep track of the various versions and revisions through
which the product develops.

Another reason for the increased importance of keeping track of the versions and
revisions is the following. Application development through customization has become a
popular business model. Therefore, existence of a large number of versions of a product
and the need to support these by a development organization has become common.

Change management is a crucial responsibility of the project manager. Change
management is also known as configuration management.

Requirements Management: In modern software development practices, there is a
conscious effort to develop software such that it would, to a large extent, meet the
actual requirements of the customer.

A basic premise of these modern development methodologies is that at the start of a
project the customers are often unable to fully visualize their exact needs, and are
only able to determine their actual requirements after they start using the software.

From this view point, modern software development practices advocate delivery of
software in increments as and when the increments are completed by the development
team, and actively soliciting change requests from the customer as they use the
increments of the software delivered to them.

A few customer representatives are included in the development team to foster close
every day interactions with the customers. Contrast this with the practice followed in
older development methodologies, where the requirements had to be identified upfront
and these were then 'signed off' by the customer and 'frozen' before the development
could start.

Change requests from the customer after the start of the project were discouraged.
Consequently, at present in most projects, the requirements change frequently during the
development cycle. It has, therefore, become necessary to properly manage the
requirements, so that as and when there is any change in requirements, the latest and up-
to-date requirements become available to all.

Requirements management has therefore become a systematic process of controlling
changes and of documenting, analysing, tracing and prioritizing requirements, and then
communicating the changes to the relevant stakeholders. By the term controlling
changes, we mean that every change request is well managed, and problems such as
accidental overwriting of a newer document with an older document are avoided.

Release Management: Release management concerns planning, prioritizing and
controlling the different releases of a software. For almost every software, multiple
releases are made during its life cycle.

Starting with an initial release, releases are made each time the code changes. There are
several reasons as to why the code needs to change. These reasons include functionality
enhancements, bug fixes and improved execution speed. Further, modern development
processes such as the agile development processes advocate frequent and regular
releases of the software to be made to the customer during the software development.

Starting with the release of the basic or core functionalities of the software, more
complete functionalities are made available to the customers every couple of weeks. In
this context, effective release management has become important.

Risk Management: In modern software project management practices, effective risk
management is considered very important to the success of a project. A risk is any
negative situation that may arise as the project progresses and may threaten the success
of the project.

Every project is susceptible to a host of risks that could usually be attributed to factors
such as technology, personnel and customer. Unless proper risk management is
practised, the progress of the project may get adversely affected. Risk management
involves identification of risks, assessment of the impacts of various risks, prioritization
of the risks and preparation of risk-containment plans.

Scope Management : Once a project gets underway, many requirement change requests
usually arise. Some of these can be attributed to the customers and the others to the
development team.

Modern development practices encourage the customer to come up with change
requests. While all essential changes must be carried out, superfluous and
ornamental changes must be scrupulously avoided.

While accepting change requests, it must be remembered that the three critical project
parameters: scope, schedule and project cost are interdependent and are very intricately
related. If the scope is allowed to change extensively, while strictly maintaining the
schedule and cost, then the quality of the work would be the major casualty.

For every scope change request, the project managers examine whether the change
request is really necessary and whether the budget and time schedule would permit it.
Often, the scope change requests are superfluous.

For example, an over-enthusiastic project team member may suggest adding features
that are not required by the customer. Scope change requests originated by
over-enthusiastic team members are called gold plating and should be discouraged if
the project is to succeed.

The customer may also initiate scope change requests that are more ornamental or at
best nonessential. These serve only to jeopardize the success of the project, while not
adding any perceptible value to the delivered software. Such avoidable scope change
requests originated by the customer are termed as scope creep. To ensure the success of
the project, the project manager needs to guard against both gold plating and scope
creep.

QUESTIONS

1) What is Software Project Management?

2) Why is Software Project Management important?

3) What is a project? State its features.

4) State the difference between Software project and other projects.

5) State the difference between Contract Management and Technical Project
Management.

6) Describe the activities covered by Software Project Management.

7) What are the ways of categorizing Software Project?

8) What is Project Charter? Explain briefly.

9) What is business case? Describe in detail

10) What are the reasons behind the success and failure of a project?

11) What is Management and Management control?

12) Describe the project management life cycle.

----------------------******************------------------------****************-------------------

Module 5
Software quality
5.1 Introduction

While quality is generally agreed to be 'a good thing', in practice what is meant by the
quality of a system can be vague. We need to define precisely what qualities we require
of a system.

We also need to be able to judge objectively whether a system meets our quality
requirements, and this needs measurement. This would be of particular concern to
someone like Brigette at Brightmouth College in the process of selecting a package.

Rather than concentrating on the quality of the final system, a potential customer for
software might check that the suppliers were using the best development methods.

5.2 The Place of Software Quality in Project Planning

Quality will be of concern at all stages of project planning and execution, but will be of
particular interest at the following points in the Step Wise framework (Figure 5.1).

Step 1: Identify project scope and objectives Some objectives could relate to the
qualities of the application to be delivered.


Step 2: Identify project infrastructure Within this step, an activity identifies
installation standards and procedures. Some of these will almost certainly be about quality.

Step 3: Analyze project characteristics In activity 5.2 ('Analyze other project
characteristics – including quality-based ones') the application to be implemented is
examined to see if it has any special quality requirements.

If, for example, it is safety critical then a range of activities could be added, such as n-
version development where a number of teams develop versions of the same software
which are then run in parallel with the outputs being cross-checked for discrepancies.
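
As a minimal sketch of the n-version idea, the three implementations below are
invented stand-ins for versions built by separate teams; their outputs are run in
parallel and cross-checked:

```python
# A minimal sketch of n-version development: independently developed
# versions of the same function run in parallel and their outputs are
# cross-checked for discrepancies. All three versions are invented.

from collections import Counter

def version_a(x): return x * x
def version_b(x): return x ** 2
def version_c(x): return x * x if x != 3 else 8   # deliberately faulty

def cross_check(x):
    outputs = [version_a(x), version_b(x), version_c(x)]
    result, votes = Counter(outputs).most_common(1)[0]
    if votes < len(outputs):
        print(f"input {x}: discrepancy detected among outputs {outputs}")
    return result   # majority output is taken forward

for value in (2, 3, 4):
    print(value, "->", cross_check(value))
```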

FIGURE 5.1 The place of software quality in Step Wise


Step 4: Identify the products and activities of the project It is at this point that the
entry, exit and process requirements are identified for each activity.

Step 8: Review and publicize plan: At this stage the overall quality aspects of the
project plan are reviewed.

5.3 Importance of Software Quality

We would expect quality to be a concern of all producers of goods and services. However,
the special characteristics of software create special demands.
->Increasing criticality of software: The final customer or user is naturally anxious
about the general quality of software, especially its reliability. This is increasingly so as
organizations rely more on their computer systems and software is used in more safety-
critical applications, for example to control aircraft.

->The intangibility of software: This can make it difficult to know whether a project task was completed satisfactorily. Task outcomes can be made tangible by demanding that the developers produce 'deliverables' that can be examined for quality.

->Accumulating errors during software development: As computer system development comprises steps where the output from one step is the input to the next, the errors in the later deliverables will be added to those in the earlier steps, leading to an accumulating detrimental effect.
In general, the later in a project that an error is found the more expensive it will be to fix.
In addition, because the number of errors in the system is unknown, the debugging
phases of a project are particularly difficult to control.
For these reasons quality management is an essential part of effective overall project
management.

5.4 Defining Software Quality

Definitions:

Software quality is the degree of conformance of a software product to requirements and
expectations.

Software quality refers to how well a software application conforms to a set of functional
and non-functional requirements, as well as how well it satisfies the needs or expectations of
its users. It encompasses various attributes such as reliability, efficiency, maintainability,
usability, security, and scalability.

A system has functional, quality and resource requirements. Functional requirements define what the system is to do, the resource requirements specify allowable costs, and the quality requirements state how well this system is to operate.

Some qualities of a software product reflect the external view of software held by users,
as in the case of usability. These external qualities have to be mapped to internal factors
of which the developers would be aware. It could be argued, for example, that well-
structured code is likely to have fewer errors and thus improve reliability.

Defining quality is not enough. If we are to judge whether a system meets our
requirements we need to be able to measure its qualities.

A good measure must relate the number of units to the maximum possible.

The maximum number of faults in a program, for example, is related to the size of the
program, so a measure of faults per thousand lines of code is more helpful than total
faults in a program.
Trying to find measures for a particular quality helps to clarify and communicate what that quality really is. What is being asked is, in effect, 'how do we know when we have been successful?'

The measures may be direct, where we can measure the quality directly, or indirect,
where the thing being measured is not the quality itself but an indicator that the quality is
present. For example, the number of enquiries by users received by a help desk about
how one operates a particular software application might be an indirect measurement of
its usability.

When project managers identify quality measurements, they effectively set targets for project team members, so care has to be taken that an improvement in the measured quality is always meaningful.

For example, the number of errors found in program inspections could be counted, on
the grounds that the more thorough the inspection process, the more errors will be
discovered. This count could be improved by allowing more errors to go through to the
inspection stage rather than eradicating them earlier - which is not quite the point.

When there is concern about the need for a specific quality characteristic in a software
product then a quality specification with the following minimum details should be
drafted:

Definition/description: definition of the quality characteristic


Scale: the unit of measurement

Test: the practical test of the extent to which the attribute quality exists

Minimally acceptable: the worst value which might be acceptable if other characteristics compensated for it, and below which the product would have to be rejected out of hand

Target range: the range of values within which it is planned the quality measurement
value should lie.

Now: the value that applies currently.
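As a minimal illustration (the characteristic and all of the figures below are invented for the example, not drawn from any real project), such a quality specification could be recorded and checked like this:

```python
# Illustrative quality specification for a 'response time' characteristic.
# All values are invented for the example.
response_time_spec = {
    "definition": "Time from submitting a query to the display of results",
    "scale": "seconds",
    "test": "Measure 100 representative queries under normal load",
    "minimally_acceptable": 10.0,  # worst tolerable value; reject the product above this
    "target_range": (1.0, 5.0),    # planned range for the measured value
    "now": 7.5,                    # the value that applies currently
}

def assess(measured: float, spec: dict) -> str:
    """Classify a measured value against the quality specification."""
    low, high = spec["target_range"]
    if measured > spec["minimally_acceptable"]:
        return "reject out of hand"
    if low <= measured <= high:
        return "within target range"
    return "acceptable, but outside target range"

print(assess(response_time_spec["now"], response_time_spec))
# -> acceptable, but outside target range
```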

There could be several measurements applicable to a quality characteristic. For example, in the case of reliability, this might be measured in terms of:

-> Availability: the percentage of a particular time interval that a system is usable
-> Mean time between failures: the total service time divided by the number of failures (a worked example follows this list)
-> Failure on demand: the probability that a system will not be available at the time required, or the probability that a transaction will fail
-> Support activity: the number of fault reports that are generated and processed.
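A minimal sketch of the first two measures, using hypothetical service figures:

```python
# Hypothetical operating figures for one month of service.
scheduled_hours = 720.0   # hours in the measurement interval
downtime_hours = 7.2      # hours the system was unusable
failures = 4              # number of recorded failures

service_hours = scheduled_hours - downtime_hours

# Availability: percentage of the interval that the system was usable.
availability = service_hours / scheduled_hours * 100

# Mean time between failures: total service time divided by number of failures.
mtbf = service_hours / failures

print(f"Availability: {availability:.1f}%")   # Availability: 99.0%
print(f"MTBF: {mtbf:.1f} hours")              # MTBF: 178.2 hours
```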

Associated with reliability is maintainability, which is how quickly a fault, once detected,
can be corrected. A key component of this is changeability, which is the ease with which
the software can be modified.
Before an amendment can be made, the fault has to be diagnosed. Maintainability can therefore be seen as changeability plus a new quality, analysability, which is the ease with which the causes of failure can be identified.

5.5 Software Quality Models

Maintainability can be seen from two different perspectives. The user will be concerned with the elapsed time between a fault being detected and it being corrected, while the software development managers will be concerned about the effort involved.

The need to be able to quantitatively measure the quality of software is often felt. For example, one may want to set quantitative quality requirements for a software product, or to verify whether a software product meets the quality requirements set for it. Unfortunately, it is hard to measure the quality of software directly. It can, however, be expressed in terms of several attributes of the software that can be directly measured.

The quality models give a characterization (often hierarchical) of software quality in terms of a set of characteristics of the software. The bottom level of the hierarchy can be directly measured, thereby enabling a quantitative assessment of the quality of the software.

There are several well-established quality models, including McCall's, Dromey's and
Boehm's. Since there was no standardization among the large number of quality models
that became available, the ISO 9126 model of quality was developed.

Garvin's quality dimensions: David Garvin, a professor at Harvard Business School, defined the quality of any product in terms of eight general attributes of the product, some of which are measurable and some of which are not.

Garvin reasoned that users sometimes make a subjective judgement of the quality of a program (perceived quality) that must be taken into account when judging its quality.

o Performance: How well it performs the job.
o Features: How well it supports the required features.
o Reliability: Probability of a product working satisfactorily within a specific period of time.
o Conformance: Degree to which the product meets the requirements.
o Durability: Measure of the product life.
o Serviceability: Speed and effectiveness of maintenance.
o Aesthetics: The look and feel of the product.
o Perceived quality: User's opinion about the product quality.

McCall's model:

McCall defined the quality of software in terms of three broad parameters: its operational characteristics, how easy it is to fix defects, and how easy it is to port it to different platforms.

These three high-level quality attributes are defined based on the following eleven
attributes of the software:
1. Correctness: The extent to which a software product satisfies its specifications.
2. Reliability: The probability of the software product working satisfactorily over a
given duration.
3. Efficiency: The amount of computing resources required to perform the required
functions.
4. Integrity: The extent to which the data of the software product remains valid.
5. Usability: The effort required to operate the software product.
6. Maintainability: The ease with which it is possible to locate and fix bugs in the software product.
7. Flexibility: The effort required to adapt the software product to changing
requirements.
8. Testability: The effort required to test a software product to ensure that it
performs its intended function.
9. Portability: The effort required to transfer the software product from one
hardware or software system environment to another.
10. Reusability: The extent to which a software can be reused in other applications.
11. Interoperability: The effort required to integrate the software with other software.

Dromey's model

Dromey proposed that software product quality depends on four major high-level
properties of the software: Correctness, internal characteristics, contextual
characteristics and certain descriptive properties. Each of these high-level properties
of a software product, in turn, depends on several lower-level quality attributes of the
software. Dromey's hierarchical quality model is shown in Figure 5.2.


FIGURE 5.2: Dromey's quality model

Boehm's model: Boehm postulated that the quality of software can be defined based on three high-level characteristics that are important for the users of the software.

These three high-level characteristics are the following:


1. As-is utility: How well (easily, reliably and efficiently) can it be used?
2. Maintainability: How easy is it to understand, modify and then retest the software?
3. Portability: How difficult would it be to make the software work in a changed environment?
Boehm expressed these high-level product quality attributes in terms of several
measurable product attributes. As compared to McCall's and Dromey's quality models,
Boehm's quality model is based on a wider range of software attributes and with greater
focus on software maintainability. Boehm's hierarchical quality model is shown
schematically in Figure 5.3 below.


Figure 5.3: Boehm's quality model.


5.6 ISO 9126

Over the years, various lists of software quality characteristics have been put forward,
such as those of James McCall and of Barry Boehm.


Characteristics of Quality Model

The term 'maintainability' has been used, for example, to refer to the ease with which an
error can be located and corrected in a piece of software, and also in a wider sense to
include the ease of making any changes. For some, 'robustness' has meant the software's
tolerance of incorrect input, while for others it has meant the ability to change program
code without introducing errors.

->The ISO 9126 standard was first introduced in 1991 to tackle the question of the
definition of software quality. The original 13-page document was designed as a
foundation upon which further, more detailed, standards could be built. The ISO 9126
standards documents are now very lengthy.

->Currently, in the UK, the main ISO 9126 standard is known as BS ISO/IEC 9126-1:2001. This is supplemented by some technical reports (TRs), published in 2003, which are provisional standards. At the time of writing, a new standard in this area, ISO 25000, is being developed.

The standard is intended to serve three sets of users:

• Acquirers, who are obtaining software from external suppliers
• Developers, who are building a software product
• Independent evaluators, who are assessing the quality of a software product, not for themselves but for a community of users - for example, those who might use a particular type of software tool as part of their professional practice.

->ISO 9126 has separate documents to cater for the above three sets of needs. Despite the size of this set of documentation, it relates only to the definition of software quality attributes.

->A separate standard, ISO 14598, describes the procedures that should be carried out when assessing the degree to which a software product conforms to the selected ISO 9126 quality characteristics.

->ISO 14598 could be used to carry out an assessment using a different set of quality
characteristics from those in ISO 9126 if circumstances required it.
The difference between internal and external quality attributes has already been noted.
Note :
External Quality (Functional)
External quality is the usefulness of the system as perceived from outside. It provides
customer value and meets the product owner’s specifications. This quality can be
measured through feature tests, QA and customer feedback. This is the quality that
affects your clients directly, as opposed to internal quality which affects them
indirectly.
Internal Quality (Structural)
Internal quality has to do with the way that the system has been constructed. It is a
much more granular measurement and considers things like clean code, complexity,
duplication, component reuse. This quality can be measured through predefined
standards, linting tools, unit tests etc. Internal quality affects your ability to manage
and reason about the program.

External software quality

External software quality focuses on how the end user perceives quality while interacting with the software system.

Common external software quality considerations include:

Correctness (does the software behave according to its requirements?)
Robustness (is the software resilient to incorrect input?)
Usability (how much effort is needed to learn to use the software?)
Efficiency (does the software's performance support normal operation?)

Internal software quality

Internal software quality focuses on software design in source code.

Common internal software quality considerations include:

Readability (is the code easy to read?)
Reusability (how easily can the code be reused?)
Testability (is the code easy to test?)
Maintainability (how easily can bugs be found and fixed?)

->ISO 9126 also introduces another type of quality - quality in use - for which the following elements have been identified:
• Effectiveness: the ability to achieve user goals with accuracy and completeness
• Productivity: avoiding the excessive use of resources, such as staff effort, in achieving user goals
• Safety: within reasonable levels of risk of harm to people and other entities such as businesses, software, property and the environment
• Satisfaction: smiling users!

'Users' in this context includes not just those who operate the system containing the software, but also those who maintain and enhance the software. The idea of quality in use underlines how the required quality of the software is an attribute not just of the software but also of the context of use.
For instance :
In the IOE scenario, suppose the maintenance job reporting procedure varies
considerably, depending on the type of equipment being serviced, because different
inputs are needed to calculate the cost to IOE. Say that 95% of jobs currently involve
maintaining photocopiers and 5% concern maintenance of printers.

If the software is written for this application, then despite good testing, some errors
might still get into the operational system. As these are reported and corrected, the
software would become more 'mature' as faults become rarer.

If there were a rapid switch so that more printer maintenance jobs were being
processed, there could be an increase in reported faults as coding bugs in previously less
heavily used parts of the software code for printer maintenance were flushed out by the
larger number of printer maintenance transactions. Thus, changes to software use
involve changes to quality requirements.


ISO 9126 identifies six major external software quality characteristics:

 Functionality: which covers the functions that a software product provides to satisfy user needs

 Reliability: which relates to the capability of the software to maintain its level of performance

 Usability: which relates to the effort needed to use the software

 Efficiency: which relates to the physical resources used when the software is executed

 Maintainability: which relates to the effort needed to make changes to the software

 Portability: which relates to the ability of the software to be transferred to a different environment.

ISO 9126 suggests sub-characteristics for each of the primary characteristics. They are
useful as they clarify what is meant by each of the main characteristics.

'Functionality compliance' refers to the degree to which the software adheres to application-related standards or legal requirements. Typically these could be auditing requirements. Since the original 1999 draft, a sub-characteristic called 'compliance' has been added to all six ISO external characteristics. In each case, this refers to any specific standards that might apply to the particular quality attribute.
'Interoperability' is a good illustration of the efforts of ISO 9126 to clarify terminology. 'Interoperability' refers to the ability of the software to interact with other systems.

The framers of ISO 9126 have chosen this word rather than 'compatibility' because the
latter causes confusion with the characteristic referred to by ISO 9126 as 'replaceability'
(see below).


'Maturity' refers to the frequency of failure due to faults in a software product, the
implication being that the more the software has been used, the more faults will have
been uncovered and removed.

Note that 'recoverability' has been clearly distinguished from 'security' which describes
the control of access to a system.

Note how 'learnability' is distinguished from 'operability'. A software tool could be easy to learn but time-consuming to use because, say, it uses a large number of nested menus. This might be fine for a package used intermittently, but not where the system is used for many hours each day. In this case 'learnability' has been incorporated at the expense of 'operability'.

'Attractiveness' is a recent addition to the sub-characteristics of usability and is especially important where users are not compelled to use a particular software product, as in the case of games and other entertainment products.

'Analysability' is the ease with which the cause of a failure can be determined.
'Changeability' is the quality that others call 'flexibility': the latter name is a better one as
'changeability' has a different connotation in plain English - it might imply that the
suppliers of the software are always changing it!
'Stability', on the other hand, does not refer to software never changing: it means that
there is a low risk of a modification to the software having unexpected effects.


'Portability compliance' relates to those standards that have a bearing on portability. The
use of a standard programming language common to many software/hardware
environments would be an example of this.
'Replaceability' refers to the factors that give 'upwards compatibility' between old
software components and the new ones. 'Downwards' compatibility is not implied by the
definition.

A new version of a word processing package might read the documents produced by
previous versions and thus be able to replace them, but previous versions might not be
able to read all documents created by the new version.

'Coexistence' refers to the ability of the software to share resources with other software
components; unlike 'interoperability', no direct data passing is necessarily involved.

ISO 9126 provides guidelines for the use of the quality characteristics. Variation in the
importance of different quality characteristics depending on the type of product is
stressed.
Once the requirements for the software product have been established, the following
steps are suggested:

1. Judge the importance of each quality characteristic for the application. Thus reliability will be of particular concern with safety-critical systems, while efficiency will be important for some real-time systems.
2. Select the external quality measurements within the ISO 9126 framework relevant to the qualities prioritized above. Thus for reliability, mean time between failures would be an important measurement, while for efficiency, and more specifically 'time behaviour', response time would be an obvious measurement.

3. Map measurements onto ratings that reflect user satisfaction. For response time, for example, the mappings might be as in Table 5.1.

4. Identify the relevant internal measurements and the intermediate products in which they appear. This would only be important where software was being developed, rather than existing software being evaluated.

TABLE 5.1 Mapping measurements to user satisfaction.

According to ISO 9126, measurements that might act as indicators of the final quality of the software can be taken at different stages of the development life cycle. For products at the early stages these indicators might be qualitative. For example, they could be based on checklists where compliance with predefined criteria is assessed by expert judgement. As the product nears completion, objective, quantitative measurements would increasingly be taken.

5. Overall assessment of product quality: To what extent is it possible to combine ratings for different quality characteristics into a single overall rating for the software? A factor which discourages attempts at combining the assessments of different quality characteristics is that they can, in practice, be measured in very different ways, which makes comparison and combination difficult. Sometimes the presence of one quality could be to the detriment of another.

For example, the efficiency characteristics of time behaviour and resource utilization
could be enhanced by exploiting the particular characteristics of the operating system
and hardware environments within which the software will perform. This, however,
would probably be at the expense of portability.
It was noted above that quality assessment could be carried out for a number of different
reasons: to assist software development, acquisition or independent assessment.

During the development of a software product, the assessment would be driven by the
need to focus the minds of the developers on key quality requirements. The aim would be
to identify possible weaknesses early on and there would be no need for an overall
quality rating.

TABLE 5.2: Mapping response times onto user satisfaction

Where potential users are assessing a number of different software products in order to
choose the best one, the outcome will be along the lines that product A is more
satisfactory than product B or C. Here some idea of relative satisfaction exists and there is
a justification in trying to model how this satisfaction might be formed.

One approach recognizes some mandatory quality rating levels which a product must
reach or be rejected, regardless of how good it is otherwise. Other characteristics might
be desirable but not essential.

For these a user satisfaction rating could be allocated in the range, say, 0-5. This could be
based on having an objective measurement of some function and then relating different
measurement values to different levels of user satisfaction - see Table 5.2 above.
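A mapping of this kind could be expressed as a simple lookup, as sketched below; the time bands and ratings are invented for illustration, since the body of Table 5.2 is not reproduced here:

```python
def satisfaction_rating(response_time_seconds: float) -> int:
    """Map a measured response time onto a 0-5 user satisfaction rating.
    The bands below are illustrative, not taken from the standard."""
    bands = [
        (1.0, 5),    # up to 1 second  -> rating 5
        (2.0, 4),    # up to 2 seconds -> rating 4
        (4.0, 3),
        (6.0, 2),
        (10.0, 1),
    ]
    for upper_bound, rating in bands:
        if response_time_seconds <= upper_bound:
            return rating
    return 0  # slower than 10 seconds -> rating 0

print(satisfaction_rating(3.2))   # -> 3
```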

Along with the rating for satisfaction, a rating in the range 1-5, say, could be assigned to reflect how important each quality characteristic was. The score for each quality could then be given due weight by multiplying it by its importance weighting. These weighted scores can then be summed to obtain an overall score for the product, and the scores for the various products put in order of preference.

For example, two products might be compared as to usability, efficiency and maintainability. The importance of each of these qualities might be rated as 3, 4 and 2, respectively, out of a possible maximum of 5. Quality tests might result in the situation shown in Table 5.3 below.

TABLE 5.3 Weighted quality scores
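The body of Table 5.3 is not reproduced here, so the sketch below invents satisfaction ratings for two products purely to show the arithmetic of weighting and summing described above:

```python
# Importance weightings from the text: usability 3, efficiency 4, maintainability 2.
importance = {"usability": 3, "efficiency": 4, "maintainability": 2}

# Hypothetical 0-5 satisfaction ratings for two candidate products.
ratings = {
    "Product A": {"usability": 4, "efficiency": 2, "maintainability": 3},
    "Product B": {"usability": 3, "efficiency": 4, "maintainability": 2},
}

def overall_score(product_ratings: dict) -> int:
    """Sum of (satisfaction rating x importance weighting) over all qualities."""
    return sum(rating * importance[quality]
               for quality, rating in product_ratings.items())

for product, product_ratings in ratings.items():
    print(product, overall_score(product_ratings))
# Product A 26   (4x3 + 2x4 + 3x2)
# Product B 29   (3x3 + 4x4 + 2x2)
```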

Finally, a quality assessment can be made on behalf of a user community as a whole. For
example, a professional body might assess software tools that support the working
practices of its members. Unlike the selection by an individual user/purchaser, this is an
attempt to produce an objective assessment of the software independently of a particular
user environment.

It is clear that the result of such an exercise would vary considerably depending on the
weightings given to each software characteristic, and different users could have different
requirements. Caution would be needed here.

5.7 Product and Process Metrics


Types of Software Metrics

Note:
1. Product Metrics: Product metrics are used to evaluate the state of the product, tracing risks and uncovering prospective problem areas. The ability of the team to control quality is evaluated. Examples include lines of code, cyclomatic complexity, code coverage, defect density, and code maintainability index.
2. Process Metrics: Process metrics pay particular attention to enhancing the long-term process of the team or organization. These metrics are used to optimize the development process and maintenance activities of software. Examples include effort variance, schedule variance, defect injection rate, and lead time.
3. Project Metrics: Project metrics describe the characteristics and execution of a project. Examples include effort estimation accuracy, schedule deviation, cost variance, and productivity. They usually measure:
- Number of software developers
- Staffing patterns over the life cycle of the software
- Cost and schedule
- Productivity
Advantages of Software Metrics
• Reduction in cost or budget.
• It helps to identify particular areas for improvement.
• It helps to increase product quality.
• Managing workloads and teams.
• Reduction in the overall time to produce the product.
• It helps to determine the complexity of the code and the resources needed to test it.
• It helps in the effective planning, controlling and managing of the entire product.

Disadvantages of Software Metrics


• It is expensive and difficult to implement the metrics in some cases.
• The performance of the entire team or of an individual from the team can't be determined; only the performance of the product is determined.
• Sometimes the quality of the product does not meet expectations.
• It can lead to measuring unwanted data, which is a waste of time.
• Measuring incorrect data leads to wrong decision making.

The users assess the quality of a software product based on its external attributes, whereas during development, the developers assess the product's quality based on various internal attributes. We can also say that during development, the developers can ensure the quality of a software product based on a measurement of the relevant internal attributes.

The internal attributes may measure either some aspects of the product (called product metrics) or of the development process (called process metrics).
Let us understand the basic differences between product and process metrics.

Product metrics help measure the characteristics of a product being developed. A few
examples of product metrics and the specific product characteristics that they measure
are the following: the LOC and function point metrics are used to measure size, the PM
(person-month) metric is used to measure the effort required to develop a product, and
the time required to develop the product is measured in months.

• Process metrics help measure how a development process is performing. Examples of process metrics are review effectiveness, average number of defects found per hour of inspection, average defect correction time, productivity, average number of failures detected during testing per LOC, and the number of latent defects per line of code in the developed product.
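As a small illustration of the difference (all figures hypothetical): defect density is a product metric because it describes the delivered code, whereas review effectiveness is a process metric because it describes how well an activity performed:

```python
# --- Product metric: defect density (defects per thousand lines of code) ---
lines_of_code = 24_000
defects_found = 60
defect_density = defects_found / (lines_of_code / 1000)   # 2.5 defects/KLOC

# --- Process metric: review effectiveness ---
# The fraction of all known defects caught by reviews rather than later testing.
defects_caught_in_review = 45
review_effectiveness = defects_caught_in_review / defects_found   # 0.75

print(f"Defect density: {defect_density:.1f} defects/KLOC")
print(f"Review effectiveness: {review_effectiveness:.0%}")
```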

5.8 Product versus Process Quality Management

The measurements described above relate to products. With a product-based approach to planning and control, as advocated by the PRINCE2 project management method, this focus on products is convenient. However, we saw that it is often easier to measure these product qualities in a completed computer application than during its development.

Note: PRINCE2 is a project management methodology that emphasizes organization and control. The acronym PRINCE stands for "PRojects IN Controlled Environments." This project management framework is linear and process-based, focusing on moving initiatives through predefined stages.
Trying to use the attributes of intermediate products created at earlier stages to predict the quality of the final application is difficult. An alternative approach is to scrutinize the quality of the processes used to develop the software product.
The system development process comprises a number of activities linked so that the
output from one activity is the input to the next (Figure 5.4).

Errors can enter the process at any stage. They can be caused either by defects in a
process, as when software developers make mistakes in the logic of their software, or by
information not passing clearly and accurately between development stages.

Errors not removed at early stages become more expensive to correct at later stages.
Each development step that passes before the error is found increases the amount of
rework needed. An error in the specification found in testing will mean rework at all the
stages between specification and testing. Each successive step of development is also
more detailed and less able to absorb change.

Note that Extreme Programming advocates suggest that the extra effort needed to amend
software at later stages can be exaggerated and is, in any case, often justified as adding
value to the software.

FIGURE 5.4 An example of the sequence of processes and deliverables.

Errors should therefore be eradicated by careful examination of the deliverables of each step before they are passed on. One way of doing this is by having the following process requirements for each step.


• Entry requirements, which have to be in place before an activity can start. An example would be that a comprehensive set of test data and expected results be prepared and approved before program testing can commence. These requirements may be laid out in installation standards, or a Software Quality Plan may be drawn up for the specific project if it is a major one.

• Implementation requirements, which define how the process is to be conducted. In the testing phase, for example, it could be laid down that whenever an error is found and corrected, all test runs must be repeated, even those that have previously been found to run correctly.
• Exit requirements, which have to be fulfilled before an activity is deemed to have been
completed. For example, for the testing phase to be recognized as being completed, all
tests will have to have been run successfully with no outstanding errors.
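One way to make such process requirements checkable is to record them explicitly against each step. The sketch below is illustrative only; the particular criteria are invented for the example:

```python
# Illustrative entry/implementation/exit requirements for a 'program testing' step.
testing_step = {
    "entry": [
        "Test data and expected results prepared and approved",
    ],
    "implementation": [
        "After every fault correction, all test runs are repeated",
    ],
    "exit": [
        "All tests run successfully with no outstanding errors",
    ],
}

def may_start(step: dict, satisfied: set) -> bool:
    """An activity may start only when every entry requirement is in place."""
    return all(req in satisfied for req in step["entry"])

def may_finish(step: dict, satisfied: set) -> bool:
    """An activity is complete only when every exit requirement is fulfilled."""
    return all(req in satisfied for req in step["exit"])

print(may_start(testing_step, set()))   # False - entry requirement missing
```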

5.9 Quality Management Systems

Note: The concept of the Internet of Everything originated at Cisco, which defines IoE as "the intelligent connection of people, process, data and things", whereas in the Internet of Things, communications are between machines.

BS EN ISO 9001:2000

At IOE, a decision might be made to use an outside contractor to produce the annual
maintenance contracts subsystem. A natural concern would be the standard of the
contractor's deliverables.
Quality control would involve the rigorous testing of all the software produced by the contractor, insisting on rework where defects are found. This would be very time-consuming. An alternative approach would focus on quality assurance.

In this case IOE would check that the contractors themselves were carrying out effective
quality control. A key element of this would be ensuring that the contractor had the right
quality management system in place. Various national and international standards
bodies, including the British Standards Institution (BSI), have engaged in the creation
of standards for quality management systems.

The British Standard is now called BS EN ISO 9001:2000, which is identical to the
international standard, ISO 9001:2000.

Standards such as the ISO 9000 series try to ensure that a monitoring and control system
to check quality is in place. They are concerned with the certification of the development
process, not of the end-product as in the case of crash helmets and electrical appliances
with their familiar CE marks. Standards in the ISO 9000 series relate to quality systems in
general and not just those in software development.

ISO 9000 describes the fundamental features of a Quality Management System (QMS)
and its terminology.
ISO 9001 describes how a QMS can be applied to the creation of products and the
provision of services.
ISO 9004 applies to process improvement.

There has been some controversy over the value of these standards. Stephen Halliday, writing in The Observer, had misgivings that these standards are taken by many customers to imply that the final product is of a certified standard although, as Halliday says, 'It has nothing to do with the quality of the product going out of the gate. You set down your own specifications and just have to maintain them, however low they may be.'

Obtaining certification can be expensive and time-consuming which can put smaller, but
still well-run, businesses at a disadvantage. Finally, there has been a concern that a
preoccupation with certification might distract attention from the real problems of
producing quality products.

Putting aside these reservations, let us examine how the standard works. First, we identify those things that are to be the subject of quality requirements. We then put a system in place which checks that the requirements are being fulfilled and that corrective action is taken when necessary.

An overview of BS EN ISO 9001:2000 QMS requirements :-

The standard is built on a foundation of the following principles:

 Understanding the requirements of customers so that they can be met, or even exceeded
 Leadership to provide the unity of purpose and direction needed to achieve quality objectives
 Involvement of staff at all levels
 A focus on the individual processes which create intermediate or deliverable products and services

 A focus on the systems of interrelated processes that create delivered products
and services
 Continuous improvement of processes
 Decision making based on factual evidence
 Building mutually beneficial relationships with suppliers.

These principles are applied through cycles which involve the following activities:

1. Determining the needs and expectations of the customer


2. Establishing a quality policy, that is, a framework which allows the organization's
objectives in relation to quality to be defined
3. Design of the processes which will create the products (or deliver the services) which
will have the qualities implied in the organization's quality objectives
4. Allocation of the responsibilities for meeting these requirements for each element of
each process
5. Ensuring that resources are available to execute these processes properly
6. Design of methods for measuring the effectiveness and efficiency of each process in
contributing to the organization's quality objectives
7. Gathering of measurements
8. Identification of any discrepancies between the actual measurements and the target
values
9. Analysis and elimination of the causes of discrepancies
The procedures above should be designed and executed so that there is continual improvement. They should, if carried out properly, lead to an effective QMS.

More detailed ISO 9001 requirements include:

Documentation of objectives - procedures (in the form of a quality manual), plans, and records relating to the actual operation of processes. The documentation must be subject to a change control system that ensures that it is current. Essentially, one needs to be able to demonstrate to an outsider that the QMS exists and is actually adhered to.

Management responsibility - the organization needs to show that the QMS and the
processes that produce goods and services conforming to the quality objectives are
actively and properly managed.

Resources - an organization must ensure that adequate resources, including appropriately trained staff and appropriate infrastructure, are applied to the processes.


Production should be characterized by:

 Planning
 Determination and review of customer requirements
 Effective communications between the customer and supplier.
 Design and development being subject to planning, control and review
 Requirements and other information used in design being adequately and clearly
recorded
 Design outcomes being verified, validated and documented in a way that provides
sufficient information for those who have to use the designs
 Changes to the designs should be properly controlled
 Adequate measures to specify and evaluate the quality of purchased components
 Production of goods and the provision of services should be under controlled conditions to ensure adequate provision of information, work instructions, equipment, measurement devices, and post-delivery activities
 Measurement - to demonstrate that products conform to standards and that the QMS is effective, and to improve the effectiveness of the processes that create products or services

5.10 Process Capability Models


As compared to the product metrics, the process metrics are more meaningfully
measured during product development. Consequently, to manage quality during
development, process-based techniques are very important. In this section, we discuss
SEI CMM, CMMI, ISO 15504, and Six Sigma, which are popular process capability models.

A historical perspective

Before the 1950s, the primary means of realizing quality products was by undertaking
extensive testing of the finished products. The emphasis of the quality paradigms later
shifted from product assurance (extensive testing of the finished product) to process
assurance (ensuring that a good quality process is used for product development).

In this context, it needs to be emphasized that a basic assumption made by all modern quality paradigms is that if an organization's processes are good and are followed rigorously, then the products developed using them will certainly be of good quality. Therefore, all modern quality assurance techniques focus on providing sufficient guidance for recognizing, defining, analysing, and improving the process.

A well-documented process enables the setting up of a good quality system. However, to reach the next quality level, it is necessary to improve the process whenever any shortcomings in it are noticed. It is also necessary to incorporate into the development process any new tools or techniques that may become available. This forms the essential idea behind Total Quality Management (TQM).
In a nutshell, TQM advocates that the process followed by an organization must
continuously be improved through process measurements. Continuous process
improvement is achieved through process redesign. A term related to TQM is Business
Process Reengineering (BPR). BPR aims at reengineering the way business is carried out
in an organization.

SEI capability maturity model (CMM):-

Note : Capability Maturity Model (CMM) was developed by the Software


Engineering Institute (SEI) at Carnegie Mellon University in 1987. It is not a
software process model. It is a framework that is used to analyze the approach and
techniques followed by any organization to develop software products. It also
provides guidelines to enhance further the maturity of the process used to develop
those software products.

The United States Department of Defence (US DoD) is one of the largest buyers of software products in the world. It has often faced difficulties dealing with the quality of performance of the vendors to whom it assigned contracts. The department had to live with recurring problems of delivery of low-quality products, late delivery, and cost escalations.

DoD worked with the Software Engineering Institute (SEI) of the Carnegie Mellon
University to develop CMM. Originally, the objective of CMM was to assist DoD in
developing an effective software acquisition method by predicting the likely contractor
performance through an evaluation of their development practices.

Most of the DoD contractors began to undertake CMM-based process improvement initiatives as they vied for DoD contracts. It was soon observed that the SEI CMM model helped organizations to actually improve the quality of the software they developed.

These organizations were quickly convinced that adoption of SEI CMM model had
significant business benefits even when they were developing software for clients other
than the DoD. Gradually many other commercial organizations began to adopt CMM in
their own internal improvement initiatives.

Definition:
CMM is a reference model for appraising a software development organization into one of
five process maturity levels. The maturity level of an organization is a ranking of the quality
of the development process used by the organization. This information can be used to
predict the most likely outcome of a project that the organization undertakes.

It should be remembered that SEI CMM can be used in two different ways, viz., capability
evaluation and process assessment.

->Capability evaluation and software process assessment differ in motivation, objective, and the final use of the result.
->Capability evaluation essentially concerns assessing the software process capability of an organization.
->Capability evaluation is administered by the contract awarding authority, and therefore the results are indicative of the likely contractor performance if the contractor is awarded the work.

• Process assessment is used by an organization with the objective of improving its own process capability. Thus, the result of the latter type of assessment is purely for internal use by a company.

• In process assessment, the quality level is assessed by a team of assessors coming into an organization and interviewing the key staff about their practices, using a standard questionnaire to capture information. It needs to be remembered that in this case, a key objective is not just to assess, but to recommend specific actions to bring the organization up to a higher process maturity level.

The different levels of SEI CMM have been designed so that it is easy for an organization
to slowly ramp up.

Level 1: Initial A software development organization at this level is characterized by haphazard activities by the members of project teams. The chaotic activities are primarily brought about by the lack of any definition of the development and management processes.

 Each developer feels free to follow any process that he or she may like. Due to the
chaotic development process practised, when a developer leaves the organization,
the new incumbent usually faces great difficulty in understanding the process that
was followed for the portion of the work that has been completed.

 Besides the lack of any agreed development processes in the organization, no systematic project management process is prevalent.

 Consequently, time pressure builds up towards the product delivery time. To cope with the time pressure, many shortcuts are tried out, leading to low-quality products.

 Though project failures and project completion delays are commonplace in these level 1 organizations, it is possible that some projects may get successfully completed. But an analysis of any incidence of successful completion of a project would reveal the heroic efforts put in by some members of the project team.

 The chances of a successful project execution by a level 1 organization depend to a large extent on who exactly the members of the development team are.

Level 2: Repeatable Organizations at this level usually practise some basic project
management practices such as planning and tracking cost and schedule. Further, these
organizations make use of configuration management tools to keep the deliverable items
under configuration control.

As with level 1 organizations, level 2 organizations are not characterized by any documented process. However, the developers usually have a rough understanding of the process being followed in the organization. As a result, such an organization can usually repeat its success on one project on other similar projects.

Level 3: Defined At this level, the processes for both management and development
activities are defined and documented. There is a common organization-wide
understanding of activities, roles, and responsibilities. At this level, the organization
builds up the capabilities of its employees through periodic training programs. Also,
systematic reviews are practised to achieve phase containment of errors.

Level 4: Managed Organizations at this level focus on effectively managing development
tasks by collecting appropriate process and product metrics. Quantitative quality goals
are set for the products and processes. At the time of project completion, it is checked
whether the quantitative quality goals for these have been met. The process metrics are
used to check if project activities were performed satisfactorily. In other words, the
collected metrics are used to measure and track project performance rather than
improve the process.

Level 5: Optimizing Organizations operating at this level not only collect process and
product metrics, but analyze them to identify scopes for improving and optimizing the
various development and management activities. In other words, these organizations
strive for continuous process improvement.
Note :
Some of the CMMI Level 5 software IT companies in India include Tata Consultancy
Services (TCS), Infosys, Wipro, HCL Technologies, and Tech Mahindra. These
companies have achieved the highest level of maturity in the Capability Maturity
Model Integration (CMMI) for software development.

As an example of a process optimization: From an analysis of the process measurement results, it is observed that the code reviews are not very effective and a large number of errors are detected only during unit testing. In this case, the review process would be fine-tuned to make it more effective.

In a level 5 organization, the lessons learned from specific projects are incorporated into the process. Continuous process improvement is achieved both by careful analysis of the process measurement results and by the assimilation of innovative ideas and technologies.

Level 5 organizations usually have a department whose sole responsibility is to assimilate the latest tools and technologies and propagate them across the organization. Since the processes change continuously in these organizations, it becomes necessary to effectively manage these changing processes. To effectively manage process changes, level 5 organizations use configuration management techniques.

Key process areas(KPA)

Except for level 1, each maturity level is characterized by several Key Process Areas (KPAs). The KPAs of a level indicate the areas that an organization at the lower maturity level needs to focus on to reach this level. The KPAs for the different process maturity levels are shown in Table 5.4. Note that level 1 has no KPA associated with it, since by default all organizations are in level 1.

KPAs provide a way for an organization to gradually improve its quality over several stages. In other words, at each stage of process maturity, KPAs identify the key areas on which an organization needs to focus to take it to the next level of maturity. Each stage has been carefully designed so that one stage enhances the capability already built up.

For example, trying to implement a defined process (level 3) before a repeatable process (level 2) would be counterproductive, as it becomes difficult to follow the defined process due to schedule and budget pressures. In other words, trying to focus on some higher-level KPAs without achieving the lower-level KPAs would be counterproductive.

CMMI (Capability Maturity Model Integration)

CMMI is the successor of the Capability Maturity Model (CMM). In 2002, CMMI Version 1.1 was released. Version 1.2 followed in 2006. The genesis of CMMI is the following. After CMM was first released, it was adopted and used in many domains other than software development, such as human resource management (HRM).

CMMs were developed for disciplines such as systems engineering (SE-CMM), people management (P-CMM), software acquisition (SA-CMM), and others. Although many organizations found these models to be useful, they faced difficulties arising from overlap and inconsistencies, as well as integration of the models.

In this context, CMMI was generalized to be applicable to many domains using a single framework. However, this unification has resulted in making CMMI much more abstract than its predecessors such as CMM.

For example, all the terminologies that are used are very generic in nature and even the
word software does not appear in the definition documentation of CMMI. However, CMMI
has much in common with CMM, and also describes the five distinct levels of process
maturity of CMM.


TABLE 5.4: CMMI key process areas

Level 2: Requirements management, project planning and monitoring and control, supplier agreement management, measurement and analysis, process and product quality assurance, configuration management.

Level 3: Requirements development, technical solution, product integration, verification, validation, organizational process focus and definition, training, integrated project management, risk management, integrated teaming, integrated supplier management, decision analysis and resolution, organizational environment for integration.

Level 4: Organizational process performance, quantitative project management.

Level 5: Organizational innovation and deployment, causal analysis and resolution.

ISO 15504 process assessment

The main reference in the UK for this standard is BS ISO/IEC 15504-1:2004.

ISO/IEC 15504 is a standard for process assessment that shares many concepts with
CMMI. The two standards should be compatible. Like CMMI the standard is designed to
provide guidance on the assessment of software development processes. To do this there
must be some benchmark or process reference model which represents the ideal
development life cycle against which the actual processes can be compared.

Various process reference models could be used but the default is the one described in
ISO 12207, which has been briefly discussed in Chapter 1 and which describes the main
processes - such as requirements analysis and architectural design - in the classic
software development life cycle.
Processes are assessed on the basis of nine process attributes - see Table 5.5.


TABLE 5.5: ISO 15504 framework for process capability

When assessors are judging the degree to which a process attribute is being fulfilled, they allocate one of the following scores: N (not achieved), P (partially achieved), L (largely achieved) or F (fully achieved).

In order to assess the process attribute of a process as being at a certain level of achievement, indicators have to be found that provide evidence for the assessment.

For example: The requirements analysis processes of an organization are being assessed. Assessors might wish to test whether the organization is at Level 3, which relates to there being an established process.
The assessor might find a section in a procedures manual relating to the conduct of requirements analysis. This could be evidence of the process being defined (3.1 in Table 5.5). They might also come across control documents which have been signed off as each step of the requirements analysis process has been completed. This would indicate that the defined process is actually deployed (3.2).

Implementing process improvement

The CMMI standard has now grown to over 500 pages. Without getting bogged down in
detail, this section explores how the general approach might usefully be employed. To do
this we will take a scenario from industry.

UVW is a company that builds machine tool equipment containing sophisticated control
software. This equipment also produces log files of fault and other performance data in
electronic format. UVW produces software that can read these log files and produce
analysis reports and execute queries.

Both the control and analysis software are produced and maintained by the Software Engineering department. Within this department there are separate teams who deal with the software for different types of equipment.
Lisa is a Software Team Leader in the Software Engineering department, with a team of six systems designers reporting to her.
The group is responsible for new control systems and the maintenance of existing systems for a particular product line. The dividing line between new development and maintenance is sometimes blurred, as a new control system often makes use of existing software components which are modified to create the new software.
A separate Systems Testing Group tests software for new control systems, but not fault correction and adaptive maintenance of released systems.
A project for a new control system is controlled by a Project Engineer with overall responsibility for managing both the hardware and software sides of the project.

The Project Engineer is not primarily a software specialist and would make heavy
demands on the Software Team Leader, such as Lisa, in an advisory capacity. Lisa may, as
a Software Team Leader, work for a number of different Project Engineers in respect of
different projects, but in the UVW organizational chart she is shown as reporting to the
Head of Software Engineering.

A new control system starts with the Project Engineer writing a software requirements document, which is reviewed by a Software Team Leader, who will then agree to the document, usually after some amendment. A copy of the requirements
document will pass to the Systems Testing Group so that they can create system test
cases and a systems test environment.

Lisa, if she were the designated Software Team Leader, would then write an Architecture
Design document mapping the requirements to actual software components. These
would be allocated to Work Packages carried out by individual members of Lisa's team.

UVW teams get the software quickly written and uploaded onto the newly developed
hardware platform for initial debugging. The hardware and software engineers will then
invariably have to alter the requirement and consequently the software as they find
inconsistencies, faults and missing functions.

The Systems Testing Group should be notified of these changes, but this can be patchy.
Once the system seems to be satisfactory to the developers, it is released to the Systems
Testing Group for final testing before shipping to customers.

Lisa's work problems mainly relate to late deliveries of software by her group because:

(i) The Head of Software Engineering and the Project Leaders may not liaise properly, leading to the over-commitment of resources to both new systems and maintenance jobs at the same time.

(ii) The initial testing of the prototype often leads to major new requirements being identified.

(iii) There is no proper control over change requests - the volume of these can sometimes increase the demand for software development well beyond that originally planned.

(iv) Completion of system testing can be delayed because of the number of bug fixes.

We can see that there is plenty of scope for improvement.

One problem is knowing where to start. Approaches like that of CMMI can help us identify
the order in which improvement steps have to take place.

Some steps need to build on the completion of others. An immediate step would be to
introduce more formal planning and control. This would at least enable us to assess the
size of the problems even if we are not yet able to solve them all.

Given a software requirement, formal plans enable staff workloads to be distributed
more carefully. The monitoring of plans would also allow managers to identify emerging
problems with particular projects.

Effective change control procedures would make managers more aware of how changes
in the system's functionality can force project deadlines to be breached. These process
developments would help an organization move from Level 1 to Level 2.
Figure 5.5 illustrates how a project control system could be envisaged at this level of
maturity.

FIGURE 5.5: Project as a 'closed box'


The next step would be to define carefully the processes involved in each stage of the
development life cycle - see Figure 5.6.

The steps of defining procedures for each development task and ensuring that they are
actually carried out help to bring an organization up to Level 3.


FIGURE 5.6 Process diagram

When more formalized processes exist, the behaviour of component processes can be
monitored.
For example, the numbers of change reports generated and system defects detected at
the system testing phase. Apart from information about the products passing between
processes, we can also collect effort information about each process itself. This enables
effective remedial action to be taken speedily when problems are found. The
development processes are now properly managed, bringing the organization up to Level
4.

Finally, at Level 5 of process management, the information collected is used to improve
the process model itself. It might, for example, become apparent that changes to
software requirements are a major source of defects. Steps could therefore be taken to
improve this process.
For example, the hardware component of the system could be simulated using software
tools. This could help the hardware engineers to produce more realistic designs and
reduce changes. It might even be possible to build control software and test it against a
simulated hardware system. This could enable earlier and cheaper resolution of technical
problems.

Personal Software Process (PSP)

PSP is based on the work of Watts Humphrey. Unlike CMMI, which is intended for
organizations, PSP is suitable for individual use. It is important to note that SEI CMM does
not tell software developers how to analyze, design, code, test or document software
products, but assumes that engineers use effective personal practices.

PSP recognizes that the process for individual use is different from that necessary for a
team. The quality and productivity of an engineer is to a great extent dependent on his
process.
PSP is a framework that helps engineers to measure and improve the way they work. It
helps in developing personal skills and methods by estimating, planning and tracking
performance against plans, and provides a defined process which can be tuned by
individuals.

Time measurement

PSP advocates that developers should track the way they spend time, because boring
activities seem to take longer than they actually do, while interesting activities seem
shorter. Therefore, the actual time spent on a task should be measured with the help of a
stop-clock to get an objective picture of the time spent.

For example, a developer may stop the clock when attending a telephone call, taking a
coffee break, etc. An engineer should measure the time he spends on various development
activities such as designing, writing code, testing, etc.

PSP planning

Individuals must plan their project. Unless an individual properly plans his activities,
disproportionately high effort may be spent on trivial activities and important activities
may be compromised, leading to poor quality results.

The developers must estimate the maximum, minimum and average LOC required for
the product. They should use their productivity in minutes/LOC to calculate the
maximum, minimum and average development time. They must record the plan data
in a project plan summary. The PSP is shown schematically in Figure 5.7.
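To make this planning arithmetic concrete, here is a minimal Python sketch, with invented LOC and productivity figures, of turning size estimates into the minimum, average and maximum development times that would be recorded in a plan summary:

def development_time(loc_estimate, minutes_per_loc):
    # Estimated development time in minutes = size x productivity
    return loc_estimate * minutes_per_loc

loc_estimates = {"minimum": 350, "average": 500, "maximum": 700}  # assumed LOC figures
minutes_per_loc = 1.5  # historical productivity in minutes/LOC, an assumed value

for name, loc in loc_estimates.items():
    minutes = development_time(loc, minutes_per_loc)
    print(f"{name:8s}: {loc} LOC -> {minutes:.0f} min ({minutes / 60:.1f} h)")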

As shown in Figure 5.7, an individual developer must plan the personal activities and
make the basic plans before starting the development work. While carrying out the
activities of the different phases of software development, the individual developer
must record the log data using time measurement. During the post-implementation
project review, the developer can compare the log data with the initial plan to achieve
better planning in future projects, to improve his process, etc.

FIGURE 5.7: Schematic representation of PSP.

The four maturity levels of PSP have been shown schematically in Figure 5.8.

The activities that the developer must perform for achieving a higher level of maturity
have also been annotated on the diagram. PSP2 introduces defect management via the
use of checklists for code and design reviews. The checklists are developed by analysing
the defect data gathered from earlier projects.

FIGURE 5.8 PSP levels

Six Sigma

 Motorola, USA, initially developed the six sigma method in the early 1980s. Since
then, thousands of companies around the world have discovered the benefits of
adopting six sigma methodologies.

 The purpose of six sigma is to improve processes to do things better, faster, and at
a lower cost. It can in fact, be used to improve every facet of business, i.e.,
production, human resources, order entry, and technical support areas.

 Six sigma becomes applicable to any activity that is concerned with cost,
timeliness, and quality of results. Therefore, it is applicable to virtually every
industry.

 Six sigma seeks to improve the quality of process outputs by identifying and
removing the causes of defects and minimizing variability in processes. It
uses many quality management methods, including statistical methods, and
requires the presence of six sigma experts within the organization (black belts,
green belts, etc.).

 Six sigma is essentially a disciplined, data-driven approach to eliminating defects
in any process. The statistical representation of six sigma describes quantitatively
how a process is performing. To achieve six sigma, a process must not produce
more than 3.4 defects per million defect opportunities.

 A six sigma defect is defined as any system behaviour that is not as per customer
specifications. The total number of six sigma defect opportunities is then the total
number of chances for committing an error. The sigma of a process can easily be
calculated using a six sigma calculator.
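To make the arithmetic behind such a calculator concrete, the sketch below computes defects per million opportunities (DPMO) and the corresponding sigma level using the conventional 1.5-sigma shift; the defect counts are invented for illustration:

from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    # Defects per million opportunities
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value, shift=1.5):
    # Short-term sigma level, using the conventional 1.5-sigma shift
    yield_fraction = 1 - dpmo_value / 1_000_000
    return NormalDist().inv_cdf(yield_fraction) + shift

d = dpmo(defects=12, units=4_000, opportunities_per_unit=5)  # assumed figures
print(f"DPMO = {d:.0f}, sigma level = {sigma_level(d):.2f}")
print(f"six sigma quality: {sigma_level(3.4):.2f}")  # 3.4 DPMO -> about 6.0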

 A basic objective of the six sigma methodology is the implementation of a
measurement-based strategy that focuses on process improvement and variation
reduction through the application of six sigma improvement methodologies. This
is accomplished through the use of two six sigma sub-methodologies: DMAIC and
DMADV.

 The six sigma DMAIC process (define, measure, analyze, improve, control) is an
improvement system for existing processes falling below specification and looking
for incremental improvement. The six sigma DMADV process (define, measure,
analyze, design, verify) is an improvement system used to develop new processes
or products at six sigma quality levels.

 Both six sigma processes are executed by six sigma green belts and six sigma black
belts, and are overseen by six sigma master black belts. Many frameworks exist for
implementing the six sigma methodology. Six sigma consultants all over the world
have also developed proprietary methodologies for implementing six sigma
quality, based on various philosophies and tools.

Note : Green Belts typically focus on process improvement, while Black Belts also focus
on process design and innovation.
Green Belts typically report to a Black Belt or other senior leaders, while Black Belts
typically report directly to a Six Sigma executive or sponsor.

5.11 Techniques to Help Enhance Software Quality

Three main themes emerge in this discussion of software quality:

Increasing visibility: A landmark in the movement towards a focus on software quality
was Gerald Weinberg's advocacy of 'egoless programming'. Weinberg encouraged the
simple practice of programmers looking at each other's code.

Procedural structure: At first, programmers were left to get on with writing programs
as best they could. Over the years there has been the growth of methodologies where
every process in the software development cycle has carefully laid down steps.

Checking intermediate stages: It is tempting to push forward quickly with the
development of any engineered object until a 'working' model, however imperfect, has
been produced which can then be 'debugged'. The move towards quality practices has
emphasized checking the correctness of work at its earlier, conceptual, stages.

Gerald Weinberg (1998) The Psychology of Computer Programming, Silver Anniversary
Edition. Dorset House.
The creation of an early working model of a system may still be useful, as the creation
of prototypes shows.

Focus has shifted from relying solely on checking the products of intermediate stages
towards building an application as a number of smaller, relatively independent
components developed quickly and tested at an early stage. This can reduce some of the
problems, noted earlier, of attempting to predict the external quality of the software from
early design documents. It does not preclude careful checking of the design of
components.
We are now going to look at some specific techniques. The push towards more visibility
has been dominated by the increasing use of walk-throughs, inspections and reviews. The
movement towards a more procedural structure inevitably leads to discussion of
structured programming techniques and to its later manifestation in the ideas of
'clean-room' software development.
The interest in the dramatic improvements made by the Japanese in product quality has
led to much discussion of the quality techniques they have adopted, such as the use of
quality circles, and these will be looked at briefly. Some of these ideas are variations on
the theme of inspection and clean-room development.

Inspections

Inspections can be applied to documents produced at any development stage.


For instance, test cases need to be reviewed - their production is usually not a
high-profile task, even though errors in them can get through to operational running
because of their poor quality.
When a piece of work is completed, copies are distributed to co-workers who examine
the work, noting defects. A meeting then discusses the work and a list of defects requiring
rework is produced. The work to be examined could be, typically, a program listing that is
free of compilation errors.

The main problem is maintaining the commitment of participants to a thorough
examination of the work distributed to them, after the novelty value of reviews has worn
off a little.
This is sometimes called 'peer review', where 'peers' are people who are equals.
Our own experience of using this technique has been that:

 it is a very effective way of removing superficial errors;
 it motivates developers to produce better structured and self-explanatory software;
 it helps spread good programming practices as the participants discuss specific
pieces of code;
 it can enhance team spirit.
The item will usually be reviewed by colleagues who work in the same area, so that
software developers, for example, will have their work reviewed by fellow developers. To
reduce the problems of communication between different stages, there may be
representatives from the stages preceding and following the one which produced the
work under review.
IBM made the review process more structured and formal, producing statistics to show
its effectiveness. A Fagan inspection (named after the IBM employee who pioneered the
technique) is led, not by the author of the work, but by a specially trained 'moderator'.

The general principles behind the Fagan method

● Inspections are carried out on all major deliverables.
● All types of defect are noted - not just logic or function errors.

● Inspections can be carried out by colleagues at all levels except the very top.

● Inspections are carried out using a predefined set of steps.

● Inspection meetings do not last for more than two hours.

● The inspection is led by a moderator who has had specific training in the
technique.
● The other participants have defined roles. For example, one person will act as a
recorder and note all defects found, and another will act as reader and take the
other participants through the document under inspection.
● Checklists are used to assist the fault-finding process.
● Material is inspected at an optimal rate of about 100 lines an hour.
● Statistics are maintained so that the effectiveness of the inspection process can
be monitored.
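As a rough worked example of the rate and meeting-length guidelines above (the document size is invented), the sketch below estimates how many two-hour inspection meetings a work product would need:

import math

def inspection_meetings(total_lines, rate_per_hour=100, max_meeting_hours=2):
    # Hours needed at the guideline rate, split into meetings of bounded length
    hours = total_lines / rate_per_hour
    return math.ceil(hours / max_meeting_hours)

print(inspection_meetings(1_500))  # 1500 lines -> 15 hours -> 8 meetings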
Note : A Fagan inspection is a process of trying to find defects in documents (such as
source code or formal specifications) during various phases of the software
development process. It is named after Michael Fagan, who is credited with the
invention of formal software inspections.

Structured programming and clean-room software development

In the late 1960s, software was seen to be getting more complex while the capacity of the
human mind to hold detail remained limited. It was also realized that it was impossible to
test any substantial piece of software completely given the huge number of possible input
combinations. Testing, at best, could prove the presence of errors, not their absence. Thus
Dijkstra and others suggested that the only way to reassure ourselves about the
correctness of software was by examining the code.
The way to deal with complex systems, it was contended, was to break them down into
components of a size the human mind could comprehend. For a large system there would
be a hierarchy of components and subcomponents. For this decomposition to work
properly, each component would have to be self-contained, with only one entry and exit
point.

The ideas of structured programming have been further developed into the ideas of
clean-room software development by people such as the late Harlan Mills of IBM.

With this type of development there are three separate teams:

● A specification team, which obtains the user requirements and also a usage
profile estimating the volume of use for each feature in the system;
● A development team, which develops the code but which does no machine
testing of the program code produced;
● A certification team, which carries out testing.

Usage profiles reflect the need to assess quality in use, as discussed earlier in relation to
ISO 9126. They will be further discussed in the section on testing below (Section 5.12).

The development team does no debugging; instead, all software has to be verified by
them using mathematical techniques. The argument is that software which is constructed
by throwing up a crude program, which then has test data thrown at it and a series of hit-
and-miss amendments made to it until it works, is bound to be unreliable.

The certification team carry out the testing, which is continued until a statistical model
shows that the failure intensity has been reduced to an acceptable level.

Formal methods

Clean-room development, mentioned above, uses mathematical verification techniques.
These techniques use unambiguous, mathematically based specification languages, of
which Z and VDM are examples. They are used to define preconditions and
postconditions for each procedure.

Preconditions define the allowable states, before processing, of the data items upon
which a procedure is to work.

The postconditions define the state of those data items after processing. The
mathematical notation should ensure that such a specification is precise and
unambiguous.

It should also be possible to prove mathematically (in much the same way that at school
you learnt to prove Pythagoras' theorem) that a particular algorithm will work on the
data defined by the preconditions in such a way as to produce the postconditions.
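As an informal illustration of the idea (a Python sketch with an invented operation, not Z or VDM notation), preconditions and postconditions can be expressed as executable assertions around a procedure:

def withdraw(balance, amount):
    # Precondition: the allowable state of the data before processing
    assert amount > 0 and balance >= amount, "precondition violated"
    new_balance = balance - amount
    # Postcondition: the required state of the data after processing
    assert new_balance == balance - amount, "postcondition violated"
    return new_balance

print(withdraw(100, 30))  # 70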

Despite claims for the effectiveness of formal notations in defining software
specifications over many years, they are rarely used in mainstream software development.
This is despite their being quite widely taught in universities.

A newer development that may meet with more success is the development of Object
Constraint Language (OCL). It adds precise, unambiguous, detail to the UML models, for
example about the ranges of values that would be valid for a named attribute. It uses an
unambiguous, but non-mathematical, notation which developers who are familiar with
Java-like programming languages should grasp relatively easily.

Software quality circles

Much interest has been shown in Japanese software quality practices. The aim of the
'Japanese' approach is to examine and modify the activities in the development process in
order to reduce the number of errors in the end-products.

Testing and Fagan inspections can assist the removal of errors - but the same types of
error could occur repeatedly in successive products created by a faulty process. By
uncovering the source of errors, this repetition can be eliminated.

Staff are involved in the identification of sources of errors through the formation of
quality circles. These can be set up in all departments of an organization, including those
producing software where they are known as Software Quality Circles (SWQC).

A quality circle is a group of four to ten volunteers working in the same area who meet
for, say, an hour a week to identify, analyze and solve their work-related problems. One of
their number is the group leader and there could be an outsider, a facilitator, who can
advise on procedural matters. In order to make the quality circle work effectively,
training needs to be given.
Together the quality group select a pressing problem that affects their work. They
identify what they think are the causes of the problem and decide on a course of action to
remove these causes. Often, because of resource or possible organizational constraints,
they will have to present their ideas to management to obtain approval before
implementing the process improvement.

Associated with quality circles is the compilation of most probable error lists.
For example, at IOE, Amanda might find that the annual maintenance contracts project
is being delayed because of errors in the requirements specifications. The project team
could be assembled and spend some time producing a list of the most common types of
error that occur in requirements specifications. This is then used to identify measures
which can reduce the occurrence of each type of error. They might suggest, for instance,
that test cases be produced at the same time as the requirements specification and that
these test cases should be dry run at an inspection. The result is a checklist for use when
conducting inspections of requirement specifications.

Lessons learnt reports

Another way by which an organization can improve its performance is by reflecting on
the performance of a project at its immediate end, when the experience is still fresh. This
reflection may identify lessons to be applied to future projects. Project managers are
required to write a Lessons Learnt report at the end of the project. This should be
distinguished from a Post Implementation Review (PIR).

A PIR takes place after a significant period of operation of the new system, and focuses on
the effectiveness of the new system, rather than the original project process. The PIR is
often produced by someone who was not involved in the original project, in order to
ensure neutrality. An outcome of the PIR will often be changes to enhance the
effectiveness of the installed system.

The Lessons Learnt report, on the other hand, is written by the project manager as soon
as possible after the completion of the project. This urgency is because the project team is
often dispersed to new work soon after the finish of the project. One problem that is
frequently voiced is that there is often very little follow-up on the recommendations of
such reports, as there is often no body within the organization with the responsibility and
authority to do so.

5.12 Testing

The final judgement of the quality of a software application is whether it actually works
correctly when executed. This section looks at aspects of the planning and management
of testing. A major headache with testing is estimating how much testing remains at any
point. This estimate of the work still to be done depends on an unknown, the number of
bugs left in the code. We will briefly discuss how we can deal with this problem.

The V-process model was introduced as an extension to the waterfall process model.
Figure 5.9 gives a diagrammatic representation of this model. This stresses the necessity
for validation activities that match the activities creating the products of the project.

FIGURE 5.9 V-process model

The V-process model can be seen as expanding the activity box 'testing' in the waterfall
model. Each step has a matching validation process which can, where defects are found,
cause a loop back to the corresponding development stage and a reworking of the
following steps. Ideally this feeding back should occur only where a discrepancy has been
found between what was specified by a particular activity and what was actually
implemented in the next lower activity on the descent of the V loop.

For example, the system designer might have written that a calculation be carried out in
a certain way. A developer building code to meet this design might have misunderstood
what was required. At system testing stage, the original designer would be responsible
for checking that the software is doing what was specified and this would discover the
coder's misreading of that document.

Using the V-process model as a framework, planning decisions can be made at the outset
as to the types and amounts of testing to be done. An obvious example of this would be
that if the software were acquired 'off-the-shelf', the program design and code stages
would not be relevant and so program testing would not be needed.


Verification versus validation

The objectives of both verification and validation techniques are very similar. Both these
techniques have been designed to help remove errors in software. In spite of the
apparent similarity between their objectives, the underlying principles of these two bug
detection techniques and their applicability are very different.

The main differences between these two techniques are the following:
1. Verification is the process of determining whether the output of one phase of
software development conforms to that of its previous phase. Validation is the process of
determining whether fully developed software conforms to its requirements specification.

2. The objective of verification is to check whether the artifacts produced after a phase
conform to those of the previous phase. Validation tests ensure that the product matches
and adheres to customer demands, preferences and expectations under different
conditions (slow connectivity, low battery, etc.).

3. For example, a verification step can be to check whether the design documents
produced after the design step conform to the requirements specification. Examples of
validation include user acceptance testing, alpha and beta testing, and compatibility
testing; tests are also required to ensure the software functions flawlessly across
different browser-device-OS combinations.

4. Verification is undertaken by both developers and testers to ensure that the software
adheres to predetermined standards and expectations. Validation is applied to the fully
developed and integrated software to check whether it satisfies the customer's
requirements.

5. The primary techniques used for verification include review, simulation and formal
verification. Validation techniques are primarily based on product testing.

6. Verification is carried out during the development process to check whether the
development activities are being carried out correctly. Validation is carried out towards
the end of the development process to check whether the right product, as required by
the customer, has been developed.

7. Verification techniques can be viewed as an attempt to achieve phase containment of
errors; phase containment of errors has been acknowledged to be a cost-effective way to
eliminate program bugs, and is accepted as an important software engineering principle.
Quality control, in which the code is actually executed, comes under validation testing.

Note : The Phase Containment Effectiveness metric is used to detect defects at the
current phase and also the defects that escaped from previous phases. A defect injection
model can be used to implement the metric; such an implementation has been carried
out on real software development projects.

All the boxes shown on the right-hand side of the V-process model of Figure 5.9
correspond to verification activities, except the system testing block, which corresponds
to a validation activity.

Test case design

There are essentially two main approaches to systematically design test cases: black-box
approach and white-box (or glass-box) approach.
In the black-box approach, test cases are designed using only the functional specification
of the software. That is, test cases are designed solely based on an analysis of the
input/output behaviour (that is, functional behaviour) and does not require any
knowledge of the internal structure of a program.

For this reason, black-box testing is also known as functional testing and as
requirements-driven testing. Design of white-box test cases, on the other hand, requires
analysis of the source code. Consequently, white-box testing is also called structural
testing or structure-driven testing.

Levels of testing

A software product is normally tested at three different stages or levels. These three
testing stages are:
• Unit testing
• Integration testing
• System testing
During unit testing, the individual components (or units) of a program are tested.

For every module, unit testing is carried out as soon as the coding for it is complete. Since
every module is tested separately, there is a good scope for parallel activities during unit
testing.

The objective of integration testing is to check whether the modules have any errors
pertaining to interfacing with each other.

Unit testing is referred to as testing in the small, whereas integration and system testing
are referred to as testing in the large. After testing all the units individually, the units are
integrated over a number of steps and tested after each step of integration (integration
testing). Finally, the fully integrated system is tested (system testing).

Testing activities

Testing involves performing the following main activities:

Test Planning: Since many activities are carried out during testing, careful planning is
needed. The specific test case design strategies that would be deployed are also planned.

Test planning consists of determining the relevant test strategies and planning for any
test bed that may be required. A suitable test bed is an especially important concern
while testing embedded applications. A test bed usually includes setting up the hardware
or simulator.

Test Suite Design: Planned testing strategies are used to design the set of test cases
(called a test suite) with which a program is to be tested.

Test Case Execution and Result Checking: Each test case is run and the results are
compared with the expected results. A mismatch between the actual and expected
results indicates a failure. The test cases for which the system fails are noted down for
test reporting.

Test Reporting: When the test cases are run, the tester may raise issues, that is, report
discrepancies between the expected and the actual findings. A means of formally
recording these issues and their history is needed. A review body adjudicates these
issues.

The outcome of this scrutiny would be one of the following:

 The issue is dismissed on the grounds that there has been a misunderstanding of a
requirement by the tester.

 The issue is identified as a fault which the developers need to correct. Where
development is being done by contractors, they would be expected to cover the
cost of the correction.

 It is recognized that the software is behaving as specified, but the requirement
originally agreed is in fact incorrect. Remedying this means adding a new
requirement, and a contractor could expect to receive payment for the additional
work.

 The issue is identified as a fault but is treated as an off-specification. It is decided
that the application can be made operational with the error still in place.

 In a commercial project, execution of the entire test suite can take several weeks
to complete. Therefore, in order to optimize the turnaround time, the test failure
information is usually informally intimated to the development team as and when
failures are noticed.

Debugging: For each failure observed during testing, debugging is carried out to identify
the statements that are in error. There are several debugging strategies, but essentially in
each one the failure symptoms are analysed to locate the errors.

Error Correction: After an error is located through a debugging activity, the code is
appropriately changed to correct the error.

Defect Retesting: Once a defect has been dealt with by the development team, the
corrected code is retested by the testing team to check whether the defect has
successfully been addressed. Defect retesting is also popularly called resolution testing.
The resolution tests are a subset of the complete test suite (see Figure 5.10).

FIGURE 5.10 Types of test cases in the original test suite after a change

Regression Testing: While resolution testing checks whether the defect has been fixed,
regression testing checks whether the unmodified functionalities still continue to work
correctly. Whenever a defect is corrected and the change is incorporated in the program
code, the danger is that the change introduced to correct an error could actually
introduce errors in functionalities that were previously working correctly. As a result,
after a bug-fixing session, both the resolution and regression test cases need to be run.
This is where the additional effort required to create automated test scripts can pay off.
As shown in Figure 5.10, some test cases may no longer be valid after the change. These
have been shown as invalid test cases. The rest are redundant test cases, which check
those parts of the program code that are not at all affected by the change.

Test Closure: Once the system successfully passes all the tests, documents related to
lessons learned, test results, logs, etc., are archived for use as a reference in future
projects. Of all the above-mentioned testing activities, debugging is usually the most
time-consuming.

Who performs testing?

A question to be settled at the planning stage is who would carry out testing. Many
organizations have separate system testing groups to provide an independent
assessment of the correctness of software before release.

In other organizations, staff are allocated to a purely testing role but work alongside the
developers rather than in a separate group. While an independent testing group can
provide a final quality check, it has been argued that developers may take less care over
their work if they know of the existence of this safety net.

Test automation

Testing is usually the most time consuming and laborious of all software development
activities. This is especially true for large and complex software products that are being
developed currently.
At present, testing cost often exceeds all other development life-cycle costs. With the
growing size of programs and the increased importance being given to product quality,
test automation is drawing considerable attention from both industry circles and
academia.

Test automation is a generic term for automating one or some activities of the test
process.
Other than reducing human effort and time in this otherwise time and effort-intensive
work, test automation also significantly improves the thoroughness of testing. This is
because more testing can be carried out using a large number of test cases within a short
period of time without any significant cost overhead. The effectiveness of testing, to a
large extent, depends on the exact test case design strategy used.

Considering the large overheads that sophisticated testing techniques incur, in many
industrial projects, often testing is carried out using randomly selected test values. With
automation, more sophisticated test case design techniques can be deployed. Without the
use of proper tools, testing large and complex software products can especially be
extremely time consuming and laborious.

A further advantage of using testing tools is that automated test results are much more
reliable and eliminate human errors during testing. Regression testing after every change
or error correction requires running several old test cases. In this situation, test
automation simplifies repeated running of the test cases. Testing tools hold out the
promise of substantial cost and time reduction even in the testing and maintenance
phases.

Every software product undergoes significant change over time. Each time the code
changes, it needs to be tested to check whether the changes induce any failures in the unchanged
features. Thus the originally designed test suite needs to be run repeatedly each time the
code changes. Additional tests have to be designed and carried out on the enhanced
features.

Repeated running of the same set of test cases over and over after every change is
monotonous, boring, and error-prone. Automated testing tools can be of considerable use
in repeatedly running the same set of test cases. Testing tools can entirely, or at least
substantially, eliminate the drudgery of running the same test cases and also significantly
reduce testing costs.

A large number of tools are at present available, both in the public domain and from
commercial sources. It is possible to classify these tools into the following types with
regard to the specific methodology on which they are based.

Capture and Playback: In this type of tool, the test cases are executed manually only once.

During the manual execution, the sequence and values of various inputs as well as the
outputs produced are recorded. On any subsequent occasion, the test can be
automatically replayed and the results are checked against the recorded output.

An important advantage of the capture playback tools is that once test data are captured
and the results verified, the tests can be rerun several times over easily and cheaply.
Thus, these tools are very useful for regression testing. However, capture and playback
tools have a few disadvantages as well. Test maintenance can be costly when the unit
under test changes, since some of the captured tests may become invalid. It would
require considerable effort to determine and remove the invalid test cases or modify the
test input and output data. Also, new test cases would have to be added for the altered
code.
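A toy sketch of the capture-and-playback idea is given below; the unit under test, the file name and the inputs are all invented for illustration. On the first run the input/output pairs are recorded; on later runs the inputs are replayed and the outputs compared:

import json, os

def unit_under_test(x):
    return x * x  # a stand-in for the real system under test

def capture_or_replay(inputs, log_file="captured_tests.json"):
    if not os.path.exists(log_file):
        # First (manual) run: record the inputs and the outputs produced
        recorded = [{"input": x, "output": unit_under_test(x)} for x in inputs]
        with open(log_file, "w") as f:
            json.dump(recorded, f)
        print(f"captured {len(recorded)} test cases")
        return
    # Subsequent runs: replay the inputs and check against recorded outputs
    with open(log_file) as f:
        for case in json.load(f):
            actual = unit_under_test(case["input"])
            status = "ok" if actual == case["output"] else "FAIL"
            print(f"input={case['input']}: expected={case['output']}, got={actual} [{status}]")

capture_or_replay([1, 2, 3])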

Automated Test Script: Test scripts are used to drive an automated test tool. The scripts
provide input to the unit under test and record the output. The testers employ a variety
of languages to express test scripts.

An important advantage of test script-based tools is that once the test script is debugged
and verified, it can be rerun a large number of times easily and cheaply. However,
debugging the test script to ensure its accuracy requires significant effort. Also, every
subsequent change to the unit under test entails effort to identify impacted test scripts,
modify, rerun and reconfirm them.
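As an illustration, a script-based test might look like the following sketch, written for the pytest tool; the function under test and its expected values are invented. Once such a script is debugged, it can be rerun cheaply after every change, which is exactly the property described above:

import pytest

def compute_discount(order_total):
    # Unit under test (hypothetical): 10% discount on orders of 100 or more
    return order_total * 0.9 if order_total >= 100 else order_total

@pytest.mark.parametrize("amount, expected", [
    (50.0, 50.0),    # below threshold: no discount
    (100.0, 90.0),   # boundary value
    (200.0, 180.0),  # above threshold
])
def test_compute_discount(amount, expected):
    # The script supplies the input and checks the output against the expectation
    assert compute_discount(amount) == pytest.approx(expected)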

Random Input Testing: In this type of automatic testing tool, test values are randomly
generated to cover the input space of the unit under test. The outputs are ignored,
because analysing them would be extremely expensive. The goal is usually to crash the
unit under test, not to check whether the produced results are correct.

An advantage of random input testing tools is that the approach is relatively easy to
apply, and it can be the most cost-effective way of finding some types of defects. However,
random input testing is a very limited form of testing: it finds only the defects that crash
the unit under test, not the majority of defects that do not crash the system but simply
produce incorrect results.
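A minimal random-input (fuzz) sketch is shown below; the unit under test is an invented parsing function, and only crashes are reported, never incorrect results:

import random
import string

def parse_record(text):
    return text.split(",")  # a stand-in for the real unit under test

def fuzz(runs=1_000):
    for _ in range(runs):
        size = random.randint(0, 64)
        candidate = "".join(random.choices(string.printable, k=size))
        try:
            parse_record(candidate)  # the result is ignored; only crashes matter
        except Exception as exc:
            print(f"crash on input {candidate!r}: {exc}")

fuzz()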

Model-based Testing: A model is a simplified representation of a program. There can be
several types of models of a program: these can be either structural models or
behavioural models. Examples of behavioural models are state models and activity
models. State model-based testing generates tests that adequately cover the state space
described by the model.
Estimation of latent errors
Earlier, we noted the problem of estimating the number of errors left in an application
under test. At the start of testing, there is one relatively straightforward way of
estimating the number of errors in code. Simply put, bigger programs are likely to have
more errors. If you have collected error data from past projects, you can arrive at the
historic number of errors per 1000 lines of code. This can then be used to arrive at a
reasonable estimate of the number of errors likely to be found in a new system
development of a known size.
This estimate could be confirmed during the actual testing. One suggestion is that known
errors can be seeded in the software. This seeding could be done by having one or more
people doing a desk-check of code, but then leaving any errors found in the code. Say 10
such errors are found. Then suppose that after the first set of tests 30 errors were found
of which six were known errors, that is 60% of the seeded errors. This suggests that
around 40% of the errors have still to be detected, that is 20 errors (of which four are
already known). The method of calculating an estimate of the errors in the software is:

estimated total errors = (total errors found)/(seeded errors found) × (total number of seeded errors)
Tom Gilb (1977) Software Metrics, Winthrop Publishers, Cambridge, MA.
You may be thinking that deliberately putting (or leaving) known errors in software is a
bit sneaky. It might be more acceptable to use a slightly different approach originally
suggested by Tom Gilb. Two different reviewers, or groups of reviewers, are asked to
inspect or test the same code. They must be completely independent of one another.
Three counts are collected:

 n1, the number of valid errors found by A;
 n2, the number of valid errors found by B;
 n12, the number of cases where the same error is found by both A and B.

The smaller the proportion of errors found by both A and B compared to those found by
only one reviewer, the larger the total number of errors likely to be in the software. An
estimate of the total number of errors (n) can be calculated by the formula:

n = (n1 × n2)/n12
For example, A finds 30 errors and B finds 20 errors, of which 15 are common to both A
and B. The estimated total number of errors would be:

n = (30 × 20)/15 = 40
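Both estimation methods are easy to mechanize; the sketch below reproduces the two worked examples above (error seeding, then the two-reviewer formula):

def seeded_estimate(total_found, seeded_found, total_seeded):
    # Error seeding: estimated total = (found / seeded found) x seeded
    return total_found / seeded_found * total_seeded

def two_reviewer_estimate(n1, n2, n12):
    # Gilb's independent-reviewer estimate: n = (n1 x n2) / n12
    if n12 == 0:
        raise ValueError("no common errors found; the estimate is unbounded")
    return n1 * n2 / n12

print(seeded_estimate(total_found=30, seeded_found=6, total_seeded=10))  # 50.0
n = two_reviewer_estimate(n1=30, n2=20, n12=15)                          # 40.0
distinct_found = 30 + 20 - 15     # errors actually found by A or B
print(n, n - distinct_found)      # 40.0 total, 5.0 still latent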
5.13 Software Reliability
We have pointed out earlier that reliability is an important quality attribute. In this
section, we discuss some basic concepts in software reliability engineering. The
reliability of a software product essentially denotes its trustworthiness or dependability.
Alternatively, the reliability of a software product can be defined as the probability of its
working correctly over a given period of time.
Intuitively, it is obvious that a software product having a large number of defects is
unreliable. It is also very reasonable to assume that the reliability of a system would
improve if the number of defects in it is reduced. However, it is very difficult to formulate
a mathematical expression to characterize the reliability of a system in terms of the
number of latent defects in it. To get an insight into this issue, consider the following.
Removing errors from those parts of a software product that are infrequently executed
makes little difference to the reliability of the product. It has been experimentally
observed by analysing the behaviour of a large number of programs that 90% of the
execution time of a typical program is spent in executing only 10% of the instructions in
the program. Therefore, in addition to the number of defects, the specific point in the
program (core or non-core part) where the bug is located also matters. Further,
reliability is observer dependent, in the sense that it depends on the relative frequency
with which different users invoke the functionalities of a system. It is possible that
because of different usage patterns of the available functionalities of software, a bug
which frequently shows up for one user, may not show up at all for another user, or may
show up very infrequently.

Reliability of a software product usually keeps on improving with time during the testing
and operational phases as defects are identified and repaired. In this context, the growth
of reliability over the testing and operational phases can be modelled using a
mathematical expression called Reliability Growth Model (RGM). Thus, RGM models
show how the reliability of a software product improves as failures are reported and
bugs are corrected.

A large number of RGMs have been proposed by researchers based on various failure and
bug repair patterns. A few popular reliability growth models are the Jelinski-Moranda
model, the Littlewood-Verrall model and the Goel-Okumoto model. For a given development project,
a suitable RGM can be used to predict when (or if at all) a particular level of reliability is
likely to be attained. Thus, reliability growth modelling can be used to determine when
during the testing phase a given reliability level will be attained, so that testing can be
stopped.
We can summarize the main reasons that make software reliability more difficult to
measure than hardware reliability:

 The reliability improvement due to fixing a single bug depends on where the bug
is located in the code.
 The perceived reliability of a software product is observer-dependent.
 The reliability of a product keeps changing as errors are detected and fixed.

Hardware versus Software Reliability

A fundamental issue that sets the reliability study of software apart from that of
hardware is the difference between their failure patterns. Hardware components fail
mostly due to wear and tear, whereas software components fail due to the presence of
bugs.
As an example of hardware, consider an electronic circuit. In this circuit, a failure may
occur because a logic gate is stuck at 1 or 0, or a resistor short circuits. To fix a
hardware fault, one has to either replace or repair the failed part. In contrast, a software
product would continue to fail until the error is tracked down and either the design or
the code is changed to fix the bug.

For this reason, when a hardware part is repaired its reliability would be maintained at
the level that existed before the failure occurred; whereas when a software failure is
repaired, the reliability may either increase or decrease (reliability may decrease if a bug
fix introduces new errors). To put this fact in a different perspective, hardware reliability
study is concerned with stability (e.g. the inter-failure times remain constant). On the
other hand, the aim of software reliability study would be reliability growth (i.e. increase
in inter-failure times).
A comparison of the changes in failure rate over the product life time for a typical
hardware product as well as a software product is sketched in Figure 5.11. Observe that
the plot of change of reliability with time for a hardware component [Figure 5.11(a)]
appears like a 'bath tub'. As shown in Figure 5.11(a), for a hardware system the failure
rate is initially high, but decreases as the faulty components are identified and are either
repaired or replaced.

Figure 5.11: Reliability growth with time for hardware and software products

Reliability Metrics

The reliability requirements for different categories of software products may be
different. For this reason, it is necessary that the level of reliability required for a
software product should be specified in the SRS (software requirements specification)
document. In order to be able to do this, we need some metrics to quantitatively express
the reliability of a software product.
A good reliability measure should be observer-independent, so that different people can
agree on the degree of reliability a system has. However, in practice, it is very difficult to
formulate a metric using which precise reliability measurement would be possible.

Six metrics that correlate with reliability are as follows.

Rate of occurrence of failure (ROCOF)

ROCOF measures the frequency of occurrence of failures. The ROCOF measure of a
software product can be obtained by observing the behaviour of the product in operation
over a specified time interval and then calculating the ROCOF value as the ratio of the
total number of failures observed to the duration of observation.
However, many software products do not run continuously (unlike a car or a mixer), but
deliver a certain service when a demand is placed on them. For example, a library
software system is idle until a book issue request is made. Therefore, for a typical
software product such as payroll software, the applicability of ROCOF is limited.

Mean Time to Failure (MTTF)

MTTF is the time between two successive failures, averaged over a large number of
failures. To measure MTTF, we can record the failure data for n failures. Let the failures
occur at the time instants t1, t2, ..., tn. Then, MTTF can be calculated as (Σ ti)/n. It is
important to note that only run time is considered in the time measurements. That is, the
time for which the system is down to fix the error, the boot time, etc., are not taken into
account; the clock is stopped at these times.

Mean Time to Repair (MTTR)

Once a failure occurs, some time is required to fix the error. MTTR measures the average
time it takes to track down the errors causing the failure and to fix them.
Mean Time between Failure (MTBF)
The MTTF and MTTR metrics can be combined to get the MTBF metric:

MTBF = MTTF + MTTR. Thus, MTBF of 300 hours indicates that once a failure occurs, the
next failure is expected after 300 hours. In this case, the time measurements are real time
and not the execution time as in MTTF.
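The time-based metrics above are simple averages, so they are easy to compute from a failure log; the sketch below uses invented figures. The availability line uses the common approximation availability = MTTF/MTBF, which is an assumption rather than a definition given in this text:

failure_times = [120.0, 250.0, 410.0, 530.0]  # run-time instants of failures (hours)
repair_times = [4.0, 6.0, 5.0, 5.0]           # hours spent fixing each failure

n = len(failure_times)
mttf = sum(failure_times) / n                  # as defined above: (sum of ti)/n
mttr = sum(repair_times) / len(repair_times)
mtbf = mttf + mttr
availability = mttf / mtbf                     # assumed approximation: uptime fraction

print(f"MTTF = {mttf:.1f} h, MTTR = {mttr:.1f} h, "
      f"MTBF = {mtbf:.1f} h, availability = {availability:.3f}")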

Probability of Failure on Demand (POFOD)

Unlike the other metrics discussed, this metric does not explicitly involve time
measurements. POFOD measures the likelihood of the system failing when a service
request is made. For example, a POFOD of 0.001 would mean that 1 out of every 1000
service requests would result in a failure. We have already mentioned that the reliability
of a software product should be determined through specific service invocations, rather
than by making the software run continuously. Thus, the POFOD metric is very
appropriate for software products that are not required to run continuously.

Availability

Availability of a system is a measure of how likely the system would be available for use
over a given period of time. This metric not only considers the number of failures
occurring during a time interval, but also takes into account the repair time (down time)
of a system when a failure occurs. It is important for systems such as telecommunication
systems, operating systems and embedded controllers, which are supposed to be never
down, and where repair and restart times are significant and loss of service during that
time cannot be overlooked.

Failures which are transient and whose consequences are not serious are in practice of
little concern in the operational use of a software product. These types of failures can at
best be minor irritants.
More severe types of failures may render the system totally unusable. In order to
estimate the reliability of a software product more accurately, it is necessary to classify
various types of failures.

In the following, we give a simple classification of software failures into different types.

Transient. Transient failures occur only for certain input values while invoking a
function of the system.
Permanent. Permanent failures occur for all input values while invoking a function of the
system.
Recoverable. When a recoverable failure occurs, the system can recover without having
to shut down and restart the system (with or without operator intervention).
Unrecoverable. In unrecoverable failures, the system may need to be restarted.
Cosmetic. These classes of failures cause only minor irritations, and do not lead to
incorrect results. An example of a cosmetic failure is the situation where the mouse
button has to be clicked twice instead of once to invoke a given function through the
graphical user interface.

Reliability Growth Modelling

A reliability growth model is a mathematical model of how software reliability improves
as errors are detected and repaired.
A reliability growth model can be used to predict when (or if at all) a particular level of
reliability is likely to be attained. Thus, reliability growth modelling can be used to
determine when to stop testing to attain a given reliability level.

Jelinski and Moranda Model

The simplest reliability growth model is a step function model where it is assumed that
the reliability increases by a constant increment each time an error is detected and
repaired. Therefore, perfect error fixing is implicit in this model. Another implicit
assumption in this model is that all errors contribute equally to reliability growth
(reflected in the equal step size). Both assumptions are unrealistic, since different errors
contribute differently to reliability growth and error fixes may not be perfect. Typical
reliability growth predicted using this model is shown in Figure 5.12 below.

Figure 5.12 : Jelinski and Moranda Model

The instantaneous failure rate (or hazard rate) in this model is given by Z(t) = K(N - i),
where K is a constant, N is the total number of errors in the program, and t is any time
between the ith and the (i+1)th failure.

Littlewood and Verrall's Model

This model allows for negative reliability growth, reflecting the fact that when a repair is
carried out, it may introduce additional errors. It also models the fact that as errors are
repaired, the average improvement to the product reliability per repair decreases. It
treats an error's contribution to reliability improvement as an independent random
variable having a Gamma distribution. This distribution models the fact that error
corrections with large contributions to reliability growth are removed first. This
represents diminishing returns as testing continues.

Goel-Okumoto Model

In this model, it is assumed that the execution times between failures are exponentially
distributed. The cumulative number of failures at any time can therefore be expressed in
terms of µ(t), the expected number of failures observed up to time t.

It is assumed that the failure process follows a Non-Homogeneous Poisson Process
(NHPP). That is, the expected number of error occurrences for any time t to t + Δt is
proportional to the expected number of undetected errors at time t. Once a failure has
been detected, it is assumed that the error correction is perfect and immediate. The
number of failures over time is shown in Figure 5.13. The expected number of failures at
time t is given by µ(t) = N(1 - e^(-bt)), where N is the expected total number of defects
in the code and b is the rate at which the failure rate decreases.

FIGURE 5.13 Goel-Okumoto reliability growth model
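A small sketch of this mean value function, with illustrative (invented) parameter values, shows how the expected failure count saturates as latent defects are exhausted:

import math

def expected_failures(t, N, b):
    # Goel-Okumoto mean value function: mu(t) = N * (1 - exp(-b * t))
    return N * (1 - math.exp(-b * t))

N, b = 100.0, 0.02   # assumed: 100 latent defects, failure-rate decay 0.02 per hour
for t in (10, 50, 100, 300):
    mu = expected_failures(t, N, b)
    print(f"t = {t:3d} h: {mu:5.1f} failures expected, {N - mu:5.1f} still latent")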

5.14 Quality Plans

Some organizations produce quality plans for each project. These show how the standard
quality procedures and standards laid down in an organization's quality manual will
actually be applied to the project. If an approach to planning such as Step Wise has been
followed, quality-related activities and requirements will have been identified by the
main planning process, with no need for a separate quality plan. However, where
software is being produced for an external client, the client's quality assurance staff
might require that a quality plan be produced to ensure the quality of the delivered
products. A quality plan can be seen as a checklist that all quality issues have been dealt
with by the planning process. Thus, most of the content will be references to other
documents.

A quality plan might have entries for:

This contents list is based on a draft IEEE standard for software quality assurance plans.

 Purpose - scope of plan
 List of references to other documents
 Management arrangements, including organization, tasks and responsibilities
 Documentation to be produced
 Standards, practices and conventions
 Reviews and audits
 Testing
 Problem reporting and corrective action
 Tools, techniques and methodologies
 Code, media and supplier control
 Records collection, maintenance and retention
 Training
 Risk management - the methods of risk management that are to be used

Questions :

Q.1 Explain place of Software Quality in Project Planning


Q.2 Write about Importance of Software Quality
Q.3 Explain in detail Software Quality Models and their types.
Q.4 What is process and product metrics?
Q.5 Write short notes on software metrics
Q.6 Write about types of software metrics
Q.7 What are the advantages and disadvantages of software metrics?
Q.8 Explain in detail Product versus Process Quality Management
Q.9 Write short notes on Quality Management Systems
Q.10 Explain in detail Process Capability Models
Q.11 Write about Techniques to Help Enhance Software Quality
Q.12 Explain different levels of software testing
Q.13 Compare black box testing and white box testing
Q.14 Explain software testing and its types in detail
Q.15 What are the Quality Plans?
Q.16 Explain Software Reliability

----------------**************---------------------

