SPPM Notes
UNIT-II
Software Project Management Renaissance
Conventional Software Management: The Waterfall Model, Conventional Software Management
Performance
Evolution of Software Economics: Software Economics, Pragmatic Software Cost Estimation.
Improving Software Economics: Reducing Software Product Size, Improving Software Processes,
Improving Team Effectiveness, Improving Automation, Achieving Required Quality, Peer Inspections.
The old way and the new: Principles of Conventional Software Engineering, Principles of Modern
Software Management, Transitioning to an Iterative Process.
Life-Cycle Phases and Process Artifacts
Engineering and Production Stages, Inception, Elaboration, Construction, Transition Phases, The Artifact
Sets, Management Artifacts, Engineering Artifacts, Pragmatic Artifacts.
Model Based Software Architectures: A Management Perspective and Technical Perspective.
UNIT-III
WorkFlows and Checkpoints of the Process
Software Process Workflows, Iteration Workflows, Major Milestones, Minor Milestones, Periodic Status
Assessments.
Iterative Process Planning
Work Breakdown Structures, Planning Guidelines, Cost and Schedule Estimating. Iteration Planning
Process, Pragmatic Planning.
UNIT-IV
Project Organizations
Line-of-Business Organizations, Project Organizations, Evolution of Organizations,
Process Automation: Building Blocks, the Project Environment.
Project Control and Process Instrumentation
Seven Core Metrics, Management Indicators, Quality Indicators, Life-Cycle Expectations, Pragmatic
Software Metrics, Metrics Automation.
UNIT-V
CCPDS-R Case Study and Future Software Project Management Practices, Modern Project Profiles,
Next-Generation Software Economics, Modern Process Transitions.
TEXT BOOKS:
1. Managing the Software Process, Watts S. Humphrey, Pearson Education.
2. Software Project Management, Walker Royce, Pearson Education.
Outcomes:
At the end of the course, the student shall be able to:
Describe and determine the purpose and importance of project management from the perspectives
of planning, tracking, and completion of a project.
Compare and differentiate organization structures and project structures.
Implement a project to manage project schedule, expenses, and resources with the application of
suitable project management tools.
UNIT – II
Waterfall Model, Part 1:
1. There are two basic steps to building a program: analysis and coding.
2. In order to manage and control all of the intellectual freedom associated with software
development, one must introduce several other "overhead" steps, including system
requirements definition, software requirements definition, program design, and testing. These
steps supplement the analysis and coding steps. Below Figure illustrates the resulting project
profile and the basic steps in developing a large-scale program.
3. The basic framework described in the waterfall model is risky and invites failure. The testing
phase that occurs at the end of the development cycle is the first event for which timing, storage,
input/output transfers, etc., are experienced as distinguished from analyzed. The resulting design
changes are likely to be so disruptive that the software requirements upon which the design is
based are likely violated. Either the requirements must be modified or a substantial design change
is warranted.
1. Program design comes first. Write an overview document that is understandable, informative, and
current so that every worker on the project can gain an elemental understanding of the system.
2. Document the design. The amount of documentation required on most software programs is
considerably more than most programmers are accustomed to producing. Documentation is necessary for
the following reasons:
Each designer must communicate with interfacing designers, managers, and possibly customers.
During early phases, the documentation is the design.
The real monetary value of documentation is to support later modifications by a separate test
team, a separate maintenance team, and operations personnel who are not software literate.
3. Do it twice. If a computer program is being developed for the first time, arrange matters so that the
version finally delivered to the customer for operational deployment is actually the second version insofar
as critical design/operations areas are concerned. In the first version, the team must have a special broad
competence so that they can quickly sense trouble spots in the design, model them, model alternatives,
and finally arrive at an error-free program.
4. Plan, control, and monitor testing. The biggest user of project resources-manpower, computer time,
and/or management judgment-is the test phase. This is the phase of greatest risk in terms of cost and
schedule. It occurs at the latest point in the schedule, when backup alternatives are least available, if at all.
The previous three recommendations were all aimed at uncovering and solving problems before entering
the test phase. However, even after doing these things, there is still a test phase and there are still
important things to be done, including: (1) employ a team of test specialists who were not responsible for
the original design; (2) employ visual inspections to spot the obvious errors like dropped minus signs,
missing factors of two, jumps to wrong addresses (do not use the computer to detect this kind of thing, it
is too expensive); (3) test every logic path; (4) employ the final checkout on the target computer.
5. Involve the customer. It is important to involve the customer in a formal way so that he has
committed himself at earlier points before final delivery. There are three points following requirements
definition where the insight, judgment, and commitment of the customer can bolster the development
effort. These include a "preliminary software review" following the preliminary program design step, a
sequence of "critical software design reviews" during program design, and a "final software acceptance
review".
1.1.2 IN PRACTICE
Some software projects still practice the conventional software management approach. It is useful to
summarize the characteristics of the conventional process as it has typically been applied, which is not
necessarily as it was intended. Projects destined for trouble frequently exhibit the following symptoms:
Protracted integration and late design breakage.
Late risk resolution.
Requirements-driven functional decomposition.
Adversarial (conflict or opposition) stakeholder relationships.
Focus on documents and review meetings.
Protracted Integration and Late Design Breakage
For a typical development project that used a waterfall model management process, Figure 1-2 illustrates
development progress versus time. Progress is defined as percent coded, that is, demonstrable in its target
form.
The following sequence was common:
Early success via paper designs and thorough (often too thorough) briefings.
Commitment to code late in the life cycle.
Integration nightmares (unpleasant experience) due to unforeseen implementation issues and
interface ambiguities.
Heavy budget and schedule pressure to get the system working.
Late shoehorning of suboptimal fixes, with no time for redesign.
A very fragile, unmaintainable product delivered late.
In the conventional model, the entire system was designed on paper, then implemented all at once,
then integrated. Table 1-1 provides a typical profile of cost expenditures across the spectrum of software
activities.
Late risk resolution A serious issue associated with the waterfall lifecycle was the lack of early risk
resolution. Figure 1.3 illustrates a typical risk profile for conventional waterfall model projects. It
includes four distinct periods of risk exposure, where risk is defined as the probability of missing a cost,
schedule, feature, or quality goal. Early in the life cycle, as the requirements were being specified, the
actual risk exposure was highly unpredictable.
Requirements-Driven Functional Decomposition: This approach depends on specifying requirements
completely and unambiguously before other development activities begin. It naively treats all
requirements as equally important and depends on those requirements remaining constant over the
software development life cycle. These conditions rarely occur in the real world. Specification of
requirements is a difficult and important part of the software development process. Another property of
the conventional approach is that the requirements were typically specified in a functional manner. Built
into the classic waterfall process was the fundamental assumption that the software itself was
decomposed into functions; requirements were then allocated to the resulting components. This
decomposition was often very different from a decomposition based on object-oriented design and the use
of existing components. Figure 1-4 illustrates the result of requirements-driven approaches: a software
structure that is organized around the requirements specification structure.
Organizations are achieving better economies of scale in successive technology eras-with very large
projects (systems of systems), long-lived products, and lines of business comprising multiple similar
projects. Figure 2-2 provides an overview of how a return on investment (ROI) profile can be achieved in
subsequent efforts across life cycles of various domains.
2.2 PRAGMATIC SOFTWARE COST ESTIMATION
One critical problem in software cost estimation is a lack of well-documented case studies of projects that
used an iterative development approach. Because the software industry has inconsistently defined metrics
or atomic units of measure, the data from actual projects are highly suspect in terms of consistency and
comparability. It is hard enough to collect a homogeneous set of project data within one organization; it is
extremely difficult to homogenize data across different organizations with different processes, languages,
domains, and so on. There have been many debates among developers and vendors of software cost
estimation models and tools. Three topics of these debates are of particular interest here:
1. Which cost estimation model to use.
2. Whether to measure software size in source lines of code or function points.
3. What constitutes a good estimate?
There are several popular cost estimation models (such as COCOMO, CHECKPOINT, ESTIMACS,
KnowledgePlan, Price-S, ProQMS, SEER, SLIM, SOFTCOST, and SPQR/20); COCOMO is one of
the most open and well-documented cost estimation models. The general accuracy of conventional cost
models (such as COCOMO) has been described as "within 20% of actuals, 70% of the time." Most real-
world use of cost models is bottom-up (substantiating a target cost) rather than top-down (estimating the
"should" cost). Figure 2-3 illustrates the predominant practice: The software project manager defines the
target cost of the software, and then manipulates the parameters and sizing until the target cost can be
justified. The rationale for the target cost may be to win a proposal, to solicit customer funding, to attain
internal corporate funding, or to achieve some other goal. The process described in Figure 2-3 is not all
bad. In fact, it is absolutely necessary to analyze the cost risks and understand the sensitivities and trade-
offs objectively. It forces the software project manager to examine the risks associated with achieving the
target costs and to discuss this information with other stakeholders.
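The bottom-up, target-cost practice of Figure 2-3 can be sketched as a simple inversion of a parametric cost model. The coefficient 2.94 and exponent 1.10 below are illustrative assumptions in the spirit of COCOMO II, not calibrated values:

```python
# Sketch of the predominant practice: instead of estimating cost from size,
# the manager fixes a target cost and backs out the sizing that justifies it.
# Model coefficients here are illustrative assumptions, not calibrated data.
def effort(ksloc, a=2.94, exponent=1.10):
    """Estimated effort in person-months for `ksloc` thousand SLOC."""
    return a * ksloc ** exponent

def size_justifying(target_effort, a=2.94, exponent=1.10):
    """Invert effort = a * ksloc**exponent to find the size that 'justifies' a target."""
    return (target_effort / a) ** (1 / exponent)

target = 500                      # person-months chosen to win the proposal
ksloc = size_justifying(target)   # the sizing asserted to substantiate it
# Feeding the back-solved size through the model reproduces the target cost:
assert abs(effort(ksloc) - target) < 1e-6
```

This makes concrete why the practice is "not all bad": the manager must still state, and can then examine, the size and parameter assumptions behind the target.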
A good software cost estimate has the following attributes:
It is conceived and supported by the project manager, architecture team, development team, and
test team accountable for performing the work.
It is accepted by all stakeholders as ambitious but realizable.
It is based on a well-defined software cost model with a credible basis.
It is based on a database of relevant project experience that includes similar processes, similar
technologies, similar environments, similar quality requirements, and similar people.
It is defined in enough detail so that its key risk areas are understood and the probability of
success is objectively assessed.
Extrapolating from a good estimate, an ideal estimate would be derived from a mature cost
model with an experience base that reflects multiple similar projects done by the same team with
the same mature processes and tools.
3.1 REDUCING SOFTWARE PRODUCT SIZE
The most significant way to improve affordability and return on investment (ROI) is usually to produce a
product that achieves the design goals with the minimum amount of human-generated source material.
Component-based development is introduced as the general term for reducing the "source" language size
to achieve a software solution. Reuse, object-oriented technology, automatic code production, and higher
order programming languages are all focused on achieving a given system with fewer lines of human-
specified source directives (statements). Size reduction is the primary motivation behind improvements in
higher order languages (such as C++, Ada 95, Java, Visual Basic), automatic code generators (CASE
tools, visual modeling tools, GUI builders), reuse of commercial components (operating systems,
windowing environments, database management systems, middleware, networks), and object-oriented
technologies (Unified Modeling Language, visual modeling tools, architecture frameworks).
3.1.1 LANGUAGES
Universal function points (UFPs) are useful estimators for language-independent, early life-cycle
estimates. The basic units of function points are external user inputs, external outputs, internal logical
data groups, external data interfaces, and external inquiries. SLOC metrics are useful estimators for
software after a candidate solution is formulated and an implementation language is known. Substantial
data have been documented relating SLOC to function points. Some of these results are shown in Table 3-
2.
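As a rough illustration of function-point counting, the sketch below uses the conventional average-complexity weights and an assumed expansion ratio of roughly 55 SLOC per function point (in the range documented for C++-class languages); real counts classify each element by complexity and use the language ratios of Table 3-2:

```python
# Unadjusted function point (UFP) sketch using conventional average-complexity
# weights. Real counts rate each element low/average/high; these fixed weights
# are a simplifying assumption.
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interfaces": 7,
}

def unadjusted_function_points(counts):
    """Sum the weighted counts of the five basic function point element types."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

ufp = unadjusted_function_points({
    "external_inputs": 10,
    "external_outputs": 7,
    "external_inquiries": 5,
    "internal_logical_files": 4,
    "external_interfaces": 2,
})                              # -> 149 UFP
# Converting to SLOC uses a language expansion ratio (assumed ~55 SLOC/FP here):
estimated_sloc = ufp * 55
```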
3.1.2 OBJECT-ORIENTED METHODS AND VISUAL MODELING
Object-oriented programming languages appear to benefit both software productivity and software
quality. The fundamental impact of object-oriented technology is to reduce the overall size of what needs
to be developed by providing more formalized notations for capturing and visualizing software
abstractions.
Booch described three reasons that an object-oriented approach helps improve software economics:
1. An object-oriented model of the problem and its solution encourages a common vocabulary between
the end users of a system and its developers, thus creating a shared understanding of the problem being
solved.
2. The use of continuous integration creates opportunities to recognize risk early and make incremental
corrections without destabilizing the entire development effort.
3. An object-oriented architecture provides a clear separation of concerns among disparate elements of a
system, creating firewalls that prevent a change in one part of the system from rending the fabric of the
entire architecture.
Booch also summarized five characteristics of a successful object-oriented project.
1. A ruthless focus on the development of a system that provides a well understood collection of essential
minimal characteristics.
2. The existence of a culture that is centered on results, encourages communication, and yet is not afraid
to fail.
3. The effective use of object-oriented modeling.
4. The existence of a strong architectural vision.
5. The application of a well-managed iterative and incremental development life cycle.
3.1.3 REUSE
Reusing existing components and building reusable components have been natural software engineering
activities since the earliest improvements in programming languages. Reuse minimizes development costs
while achieving all the other required attributes of performance, feature set, and quality. Treat reuse as an
essential part of achieving a return on investment.
Most truly reusable components of value are transitioned to commercial products supported by
organizations with the following characteristics:
They have an economic motivation for continued support.
They take ownership of improving product quality, adding new features, and transitioning to new
technologies.
They have a sufficiently broad customer base to be profitable.
The cost of developing a reusable component is not trivial. Figure 3-1 examines the economic trade-offs.
The steep initial curve illustrates the economic obstacle to developing reusable components. Reuse is an
important discipline that has an impact on the efficiency of all workflows and the quality of most
artifacts.
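The economic obstacle of Figure 3-1 can be illustrated with a minimal break-even sketch; the 2.5x up-front cost multiplier for building a reusable component is an illustrative assumption, not a number taken from the figure:

```python
# Break-even sketch for the reuse trade-off: a reusable component costs more
# to build up front but amortizes across the projects that reuse it.
# The 2.5x multiplier is an illustrative assumption.
def costs(n_projects, one_off_cost=1.0, reusable_multiplier=2.5):
    """Total cost of N projects: rebuild-each-time vs. build-once-and-reuse."""
    custom = n_projects * one_off_cost            # build a one-off every time
    reusable = reusable_multiplier * one_off_cost  # build once, reuse freely
    return custom, reusable

# Under these assumptions, two projects do not yet justify the investment,
# but three do:
custom2, reusable2 = costs(2)   # 2.0 vs 2.5 -> one-off development is cheaper
custom3, reusable3 = costs(3)   # 3.0 vs 2.5 -> the reusable component pays off
```

The steep initial curve in the figure corresponds to the multiplier: until enough projects reuse the component, the investment is a net loss.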
3.2 IMPROVING SOFTWARE PROCESSES
Process is an overloaded term. Three distinct process perspectives are:
1. Metaprocess: an organization's policies, procedures, and practices for pursuing a software-
intensive line of business. The focus of this process is on organizational economics, long-term
strategies, and software ROI.
2. Macroprocess: a project's policies, procedures, and practices for producing a complete software
product within certain cost, schedule, and quality constraints. The focus of the macroprocess is on
creating an adequate instance of the metaprocess for a specific set of constraints.
3. Microprocess: a project team's policies, procedures, and practices for achieving an artifact of the
software process. The focus of the microprocess is on achieving an intermediate product baseline
with adequate quality and adequate functionality as economically and rapidly as practical.
Although these three levels of process overlap somewhat, they have different objectives,
audiences, metrics, concerns, and time scales as shown in Table 3-4
Table 3-4: Three Levels of Process and their attributes
The primary focus of process improvement should be on achieving an adequate solution in the minimum
number of iterations and eliminating as much rework and scrap as possible. The quality of the software
process has an impact on the required effort and also on the schedule of the development.
3.3 IMPROVING TEAM EFFECTIVENESS
Teamwork is much more important than the sum of the individuals. With software teams, a project
manager needs to configure a balance of solid talent with highly skilled people in the leverage positions.
Some maxims of team management include the following:
A well-managed project can succeed with a nominal engineering team.
A mismanaged project will almost never succeed, even with an expert team of engineers.
A well-architected system can be built by a nominal team of software builders.
A poorly architected system will struggle even with an expert team of builders.
Boehm's five staffing principles are:
1. The principle of top talent: Use better and fewer people
2. The principle of job matching: Fit the tasks to the skills and motivation of the people available.
3. The principle of career progression: An organization does best in the long run by helping its people to
self-actualize.
4. The principle of team balance: Select people who will complement and harmonize with one another.
5. The principle of phase-out: Keeping a misfit on the team doesn't benefit anyone.
Software project managers need many leadership qualities in order to enhance team effectiveness. The
following are some crucial attributes of successful software project managers that deserve much more
attention:
Hiring skills. Few decisions are as important as hiring decisions. Placing the right person in the right job
seems obvious but is surprisingly hard to achieve.
Customer-interface skill. Avoiding adversarial relationships among stakeholders is a prerequisite for
success.
Decision-making skill. The jillion books written about management have failed to provide a clear
definition of this attribute. We all know a good leader when we run into one, and decision-making skill
seems obvious despite its intangible definition.
Team-building skill. Teamwork requires that a manager establish trust, motivate progress, exploit
eccentric prima donnas, transition average people into top performers, eliminate misfits, and consolidate
diverse opinions into a team direction.
Selling skill. Successful project managers must sell all stakeholders (including themselves) on decisions
and priorities, sell candidates on job positions, sell changes to the status quo in the face of resistance, and
sell achievements against objectives. In practice, selling requires continuous negotiation, compromise,
and empathy
3.4 IMPROVING AUTOMATION THROUGH SOFTWARE ENVIRONMENTS
The tools and environment used in the software process generally have a linear effect on the productivity
of the process. Planning tools, requirements management tools, visual modeling tools, compilers, editors,
debuggers, quality assurance analysis tools, test tools, and user interfaces provide crucial automation
support for evolving the software engineering artifacts. Above all, configuration management
environments provide the foundation for executing and instrumenting the process. At first order, the isolated
impact of tools and automation generally allows improvements of 20% to 40% in effort. However, tools
and environments must be viewed as the primary delivery vehicle for process automation and
improvement, so their impact can be much higher.
Automation of the design process provides payback in quality, the ability to estimate costs and
schedules, and overall productivity using a smaller team. Round-trip engineering describes the key
capability of environments that support iterative development. As we have moved into maintaining
different information repositories for the engineering artifacts, we need automation support to ensure
efficient and error-free transition of data from one artifact to another. Forward engineering is the
automation of one engineering artifact from another, more abstract representation. For example,
compilers and linkers have provided automated transition of source code into executable code. Reverse
engineering is the generation or modification of a more abstract representation from an existing artifact
(for example, creating a visual design model from a source code representation). Consider the economic
improvements associated with tools and environments: it is common for tool vendors to make relatively
accurate individual assessments of life-cycle activities to support claims about the potential economic
impact of their tools. For example, it is easy to find statements such as the following from companies in a
particular tool market:
Requirements analysis and evolution activities consume 40% of life-cycle costs.
Software design activities have an impact on more than 50% of the resources.
Coding and unit testing activities consume about 50% of software development effort and
schedule.
Test activities can consume as much as 50% of a project's resources.
Configuration control and change management are critical activities that can consume as much as
25% of resources on a large-scale project.
Documentation activities can consume more than 30% of project engineering resources.
Project management, business administration, and progress assessment can consume as much as
30% of project budgets.
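A quick bit of arithmetic shows why such isolated vendor claims should be read with care: each may be individually defensible, yet taken together the claimed shares sum to far more than 100% of a project's resources.

```python
# Sum the resource-share claims quoted above. The category names are just
# labels for the bullets; the percentages are taken directly from the list.
claims = {
    "requirements_analysis": 40,
    "design": 50,
    "coding_and_unit_test": 50,
    "test": 50,
    "configuration_and_change_management": 25,
    "documentation": 30,
    "management_and_assessment": 30,
}
total = sum(claims.values())   # 275 percent of the project's resources
```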
3.5 ACHIEVING REQUIRED QUALITY
Software best practices are derived from the development process and technologies. Table 3-5
summarizes some dimensions of quality improvement.
Key practices that improve overall software quality include the following:
Focusing on driving requirements and critical use cases early in the life cycle, focusing on
requirements completeness and traceability late in the life cycle, and focusing throughout the life
cycle on a balance between requirements evolution, design evolution, and plan evolution.
Using metrics and indicators to measure the progress and quality of an architecture as it evolves
from a high-level prototype into a fully compliant product
Providing integrated life-cycle environments that support early and continuous configuration
control, change management, rigorous design methods, document automation, and regression test
automation
Using visual modeling and higher level languages that support architectural control, abstraction,
reliable programming, reuse, and self-documentation
Early and continuous insight into performance issues through demonstration-based evaluations
Conventional development processes stressed early sizing and timing estimates of computer program
resource utilization. However, the typical chronology of events in performance assessment was as follows:
Project inception. The proposed design was asserted to be low risk with adequate performance
margin.
Initial design review. Optimistic assessments of adequate design margin were based mostly on
paper analysis or rough simulation of the critical threads. In most cases, the actual application
algorithms and database sizes were fairly well understood.
Mid-life-cycle design review. The assessments started whittling away at the margin, as early
benchmarks and initial tests began exposing the optimism inherent in earlier estimates.
Integration and test. Serious performance problems were uncovered, necessitating fundamental
changes in the architecture. The underlying infrastructure was usually the scapegoat, but the real
culprit was immature use of the infrastructure, immature architectural solutions, or poorly
understood early design trade-offs.
3.6 PEER INSPECTIONS: A PRAGMATIC VIEW
Peer inspections are frequently overhyped as the key aspect of a quality system. In my experience, peer
reviews are valuable as secondary mechanisms, but they are rarely significant contributors to quality
compared with the following primary quality mechanisms and indicators, which should be emphasized in
the management process:
Transitioning engineering information from one artifact set to another, thereby assessing the
consistency, feasibility, understandability, and technology constraints inherent in the engineering
artifacts
Major milestone demonstrations that force the artifacts to be assessed against tangible criteria in
the context of relevant use cases
Environment tools (compilers, debuggers, analyzers, automated test suites) that ensure
representation rigor, consistency, completeness, and change control
Life-cycle testing for detailed insight into critical trade-offs, acceptance criteria, and requirements
compliance
Change management metrics for objective insight into multiple-perspective change trends and
convergence or divergence from quality and progress goals
Inspections are also a good vehicle for holding authors accountable for quality products.
All authors of software and documentation should have their products scrutinized as a natural by-
product of the process. Therefore, the coverage of inspections should be across all authors rather
than across all components.
4.1 PRINCIPLES OF CONVENTIONAL SOFTWARE ENGINEERING
Davis's principles of conventional software engineering include the following:
1. Make quality #1: Quality must be quantified and mechanisms put into place to motivate its
achievement.
2. High-quality software is possible: Techniques that have been demonstrated to increase quality include
involving the customer, prototyping, simplifying design, conducting inspections, and hiring the best
people.
3. Give products to customers early: No matter how hard you try to learn users' needs during the
requirements phase, the most effective way to determine real needs is to give users a product and let them
play with it.
4. Determine the problem before writing the requirements: When faced with what they believe is a
problem, most engineers rush to offer a solution. Before you try to solve a problem, be sure to explore all
the alternatives and don't be blinded by the obvious solution.
5. Evaluate design alternatives: After the requirements are agreed upon, you must examine a variety of
architectures and algorithms. You certainly do not want to use an "architecture" simply because it was
used in the requirements specification.
6. Use an appropriate process model: Each project must select a process that makes the most sense for
that project on the basis of corporate culture, willingness to take risks, application area, volatility of
requirements, and the extent to which requirements are well understood.
7. Use different languages for different phases: Our industry's eternal thirst for simple solutions to
complex problems has driven many to declare that the best development method is one that uses the same
notation throughout the life cycle.
8. Minimize intellectual distance: To minimize intellectual distance, the software's structure should be
as close as possible to the real-world structure.
9. Put techniques before tools: An undisciplined software engineer with a tool becomes a dangerous,
undisciplined software engineer.
10. Get it right before you make it faster: It is far easier to make a working program run faster than it is
to make a fast program work. Don't worry about optimization during initial coding.
11. Inspect code: Inspecting the detailed design and code is a much better way to find errors than testing.
12. Good management is more important than good technology: Good management motivates people
to do their best, but there are no universal "right" styles of management.
13. People are the key to success: Highly skilled people with appropriate experience, talent, and training
are key.
14. Follow with care: Just because everybody is doing something does not make it right for you. It may
be right, but you must carefully assess its applicability to your environment.
15. Take responsibility: When a bridge collapses we ask, "What did the engineers do wrong?" Even
when software fails, we rarely ask this. The fact is that in any engineering discipline, the best methods can
be used to produce awful designs, and the most antiquated methods to produce elegant designs.
16. Understand the customer's priorities: It is possible the customer would tolerate 90% of the
functionality delivered late if they could have 10% of it on time.
17. The more they see, the more they need: The more functionality (or performance) you provide a user,
the more functionality (or performance) the user wants.
18. Plan to throw one away: One of the most important critical success factors is whether or not a
product is entirely new. Such brand-new applications, architectures, interfaces, or algorithms rarely work
the first time.
19. Design for change: The architectures, components, and specification techniques you use must
accommodate change.
20. Design without documentation is not design: I have often heard software engineers say, "I have
finished the design. All that is left is the documentation."
21. Use tools, but be realistic: Software tools make their users more efficient.
22. Avoid tricks: Many programmers love to create programs with tricky constructs that perform a
function correctly, but in an obscure way. Show the world how smart you are by avoiding tricky code.
23. Encapsulate: Information-hiding is a simple, proven concept that results in software that is easier to
test and much easier to maintain.
24. Use coupling and cohesion: Coupling and cohesion are the best ways to measure software's inherent
maintainability and adaptability.
25. Use the McCabe complexity measure: Although there are many metrics available to report the
inherent complexity of software, none is as intuitive and easy to use as Tom McCabe's.
26. Don't test your own software: Software developers should never be the primary testers of their own
software.
27. Analyze causes for errors: It is far more cost-effective to reduce the effect of an error by preventing
it than it is to find and fix it. One way to do this is to analyze the causes of errors as they are detected.
28. Realize that software's entropy increases: Any software system that undergoes continuous change
will grow in complexity and become more and more disorganized.
29. People and time are not interchangeable: Measuring a project solely by person-months makes little
sense.
30. Expect excellence: Your employees will do much better if you have high expectations for them.
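Principle 25 above refers to McCabe's cyclomatic complexity, V(G) = E - N + 2P for a control-flow graph with E edges, N nodes, and P connected components. A minimal sketch, with the graph represented as an adjacency list:

```python
# McCabe's cyclomatic complexity V(G) = E - N + 2P, computed from a
# control-flow graph given as an adjacency list (node -> successor nodes).
def cyclomatic_complexity(graph, connected_components=1):
    nodes = len(graph)
    edges = sum(len(successors) for successors in graph.values())
    return edges - nodes + 2 * connected_components

# Control-flow graph of a function with a single if/else:
# entry branches to two paths that rejoin at exit (4 nodes, 4 edges).
flow = {
    "entry": ["then", "else"],
    "then": ["exit"],
    "else": ["exit"],
    "exit": [],
}
# V(G) = 4 - 4 + 2 = 2: one decision point plus one.
```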
4.2 PRINCIPLES OF MODERN SOFTWARE MANAGEMENT
1. Base the process on an architecture-first approach.
2. Establish an iterative life-cycle process that confronts risk early.
3. Transition design methods to emphasize component-based development.
4. Establish a change management environment.
5. Enhance change freedom through tools that support round-trip engineering. Round-trip engineering is
the environment support necessary to automate and synchronize engineering information in different
formats (such as requirements specifications, design models, source code, executable code, test cases).
6. Capture design artifacts in rigorous, model-based notation. A model-based approach (such as UML)
supports the evolution of semantically rich graphical and textual design notations.
7. Instrument the process for objective quality control and progress assessment. Life-cycle assessment of
the progress and the quality of all intermediate products must be integrated into the process.
8. Use a demonstration-based approach to assess intermediate artifacts.
9. Plan intermediate releases in groups of usage scenarios with evolving levels of detail. It is essential that
the software management process drive toward early and continuous demonstrations within the
operational context of the system, namely its use cases.
10. Establish a configurable process that is economically scalable. No single process is suitable for all
software developments.
Table 4-1 maps the top 10 risks of the conventional process to the key attributes and principles of a modern
process.
4.3 TRANSITIONING TO AN ITERATIVE PROCESS
Modern software development processes have moved away from the conventional waterfall
model, in which each stage of the development process is dependent on completion of the previous stage.
The economic benefits inherent in transitioning from the conventional waterfall model to an iterative
development process are significant but difficult to quantify. As one benchmark of the expected economic
impact of process improvement, consider the process exponent parameters of the COCOMO II model.
(Appendix B provides more detail on the COCOMO model.) This exponent can range from 1.01 (virtually
no diseconomy of scale) to 1.26 (significant diseconomy of scale). The parameters that govern the value
of the process exponent are application precedentedness, process flexibility, architecture risk resolution,
team cohesion, and software process maturity.
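The effect of the process exponent can be made concrete with a simplified calculation. This sketch deliberately ignores COCOMO II's linear effort multipliers and the scale-factor formula that produces the exponent; it only shows how a superlinear exponent turns a 10x growth in size into more than 10x growth in effort:

```python
# Simplified illustration of COCOMO II's diseconomy of scale:
# effort grows as Size**E, where E is the process exponent.
def relative_effort(ksloc: float, exponent: float) -> float:
    """Effort in arbitrary units, ignoring the linear multipliers."""
    return ksloc ** exponent

for e in (1.01, 1.26):
    small = relative_effort(10, e)    # 10 KSLOC project
    large = relative_effort(100, e)   # 100 KSLOC project
    # A 10x size increase costs more than 10x effort whenever E > 1.
    print(f"E={e}: 10x size -> {large / small:.1f}x effort")
# E=1.01: 10x size -> 10.2x effort
# E=1.26: 10x size -> 18.2x effort
```

At the worst-case exponent, a tenfold increase in size costs roughly eighteen times the effort, which is why improving the exponent parameters (precedentedness, flexibility, architecture risk resolution, team cohesion, and process maturity) pays off most on large projects.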
The following paragraphs map the process exponent parameters of COCOMO II to my top 10 principles
of a modern process.
Application precedentedness. Domain experience is a critical factor in understanding how to
plan and execute a software development project. For unprecedented systems, one of the key goals
is to confront risks and establish early precedents, even if they are incomplete or experimental.
This is one of the primary reasons that the software industry has moved to an iterative life-cycle
process. Early iterations in the life cycle establish precedents from which the product, the process,
and the plans can be elaborated in evolving levels of detail.
Process flexibility. Development of modern software is characterized by such a broad solution
space and so many interrelated concerns that there is a paramount need for continuous
incorporation of changes. These changes may be inherent in the problem understanding, the
solution space, or the plans. Project artifacts must be supported by efficient change management
commensurate with project needs. A configurable process that allows a common framework to be
adapted across a range of projects is necessary to achieve a software return on investment.
Architecture risk resolution. Architecture-first development is a crucial theme underlying a
successful iterative development process. A project team develops and stabilizes architecture
before developing all the components that make up the entire application suite.
An architecture-first and component-based development approach forces the infrastructure,
common mechanisms, and control mechanisms to be elaborated early in the life cycle and drives
all component make/buy decisions into the architecture process.
Team cohesion. Successful teams are cohesive, and cohesive teams are successful. Successful
teams and cohesive teams share common objectives and priorities. Advances in technology (such
as programming languages, UML, and visual modeling) have enabled more rigorous and
understandable notations for communicating software engineering information, particularly in the
requirements and design artifacts that previously were ad hoc and based completely on paper
exchange. These model-based formats have also enabled the round-trip engineering support
needed to establish change freedom sufficient for evolving design representations.
Software process maturity. The Software Engineering Institute's Capability Maturity Model
(CMM) is a well-accepted benchmark for software process assessment. One of its key themes is that
truly mature processes are enabled through an integrated environment that provides the
appropriate level of automation to instrument the process for objective quality control.
The transition between engineering and production is a crucial event for the various stakeholders. The
production plan has been agreed upon, and there is a good enough understanding of the problem and the
solution that all stakeholders can make a firm commitment to go ahead with production. The engineering
stage is decomposed into two distinct phases, inception and elaboration, and the production stage into
construction and transition. These four phases of the life-cycle process are loosely mapped to the
conceptual framework of the spiral model, as shown in Figure 5-1.
PRIMARY OBJECTIVES
Baselining the architecture as rapidly as practical (establishing a configuration-managed snapshot
in which all changes are rationalized, tracked, and maintained)
Baselining the vision
Baselining a high-fidelity plan for the construction phase
Demonstrating that the baseline architecture will support the vision at a reasonable cost in a
reasonable time
ESSENTIAL ACTIVITIES
Elaborating the vision.
Elaborating the process and infrastructure.
Elaborating the architecture and selecting components.
Does the executable demonstration show that the major risk elements have been addressed and
credibly resolved?
Is the construction phase plan of sufficient fidelity, and is it backed up with a credible basis of
estimate?
Do all stakeholders agree that the current vision can be met if the current plan is executed to
develop the complete system in the context of the current architecture?
Are actual resource expenditures versus planned expenditures acceptable?
ESSENTIAL ACTIVITIES
Resource management, control, and process optimization
Complete component development and testing against evaluation criteria
Assessment of product releases against acceptance criteria of the vision
PRIMARY EVALUATION CRITERIA
Is this product baseline mature enough to be deployed in the user community? (Existing defects
are not obstacles to achieving the purpose of the next release.)
Is this product baseline stable enough to be deployed in the user community? (Pending changes
are not obstacles to achieving the purpose of the next release.)
Are the stakeholders ready for transition to the user community?
Are actual resource expenditures versus planned expenditures acceptable?
PRIMARY OBJECTIVES
Achieving user self-supportability
Achieving stakeholder concurrence that deployment baselines are complete and consistent with
the evaluation criteria of the vision
Achieving final product baselines as rapidly and cost-effectively as practical
ESSENTIAL ACTIVITIES
Synchronization and integration of concurrent construction increments into consistent
deployment baselines
Deployment-specific engineering (cutover, commercial packaging and production, sales rollout
kit development, field personnel training)
Assessment of deployment baselines against the complete vision and acceptance criteria in the
requirements set
EVALUATION CRITERIA
Is the user satisfied?
Are actual resource expenditures versus planned expenditures acceptable?
manager, regulatory agency), and between project personnel and stakeholders. Specific artifacts included
in this set are the work breakdown structure (activity breakdown and financial tracking mechanism), the
business case (cost, schedule, profit expectations), the release specifications (scope, plan, objectives for
release baselines), the software development plan (project process instance), the release descriptions
(results of release baselines), the status assessments (periodic snapshots of project progress), the software
change orders (descriptions of discrete baseline changes), the deployment documents (cutover plan,
training course, sales rollout kit), and the environment (hardware and software tools, process automation,
& documentation).
Management set artifacts are evaluated, assessed, and measured through a combination of the
following:
Relevant stakeholder review
Analysis of changes between the current version of the artifact and previous versions
Major milestone demonstrations of the balance among all artifacts and, in particular, the accuracy
of the business case and vision artifacts
Design Set
UML notation is used to engineer the design models for the solution. The design set contains
varying levels of abstraction that represent the components of the solution space (their identities,
attributes, static relationships, dynamic interactions). The design set is evaluated, assessed, and measured
through a combination of the following:
Analysis of the internal consistency and quality of the design model
Analysis of consistency with the requirements models
Translation into implementation and deployment sets and notations (for example, traceability,
source code generation, compilation, linking) to evaluate the consistency and completeness and
the semantic balance between information in the sets
Analysis of changes between the current version of the design model and previous versions (scrap,
rework, and defect elimination trends)
Subjective review of other dimensions of quality
Implementation Set
The implementation set includes source code (programming language notations) that represents
the tangible implementations of components (their form, interface, and dependency relationships)
Implementation sets are human-readable formats that are evaluated, assessed, and measured through a
combination of the following:
Analysis of consistency with the design models
Translation into deployment set notations (for example, compilation and linking) to evaluate the
consistency and completeness among artifact sets
Assessment of component source or executable files against relevant evaluation criteria through
inspection, analysis, demonstration, or testing
Execution of stand-alone component test cases that automatically compare expected results with
actual results
Analysis of changes between the current version of the implementation set and previous versions
(scrap, rework, and defect elimination trends)
Subjective review of other dimensions of quality
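A stand-alone component test that automatically compares expected with actual results might look like the following Python sketch. The `parse_record` component and its record format are hypothetical; the point is that the test runs unattended and reports pass or fail without human judgment:

```python
import unittest

def parse_record(line: str) -> dict:
    """Hypothetical component under test: parses 'key=value;...' records."""
    pairs = (field.split("=", 1) for field in line.strip().split(";") if field)
    return {key: value for key, value in pairs}

class ParseRecordTest(unittest.TestCase):
    """Stand-alone component test: expected results are compared automatically."""

    def test_two_fields(self):
        self.assertEqual(parse_record("id=7;name=probe"),
                         {"id": "7", "name": "probe"})

    def test_empty_line(self):
        self.assertEqual(parse_record(""), {})

# Run the component test stand-alone and report the outcome.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ParseRecordTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("passed" if result.wasSuccessful() else "failed")
```

Because the expected results are encoded in the test itself, the same comparison can be repeated after every change to the implementation set, which is what makes scrap and rework trends measurable.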
Deployment Set
The deployment set includes user deliverables and machine language notations, executable
software, and the build scripts, installation scripts, and executable target specific data necessary to use the
product in its target environment. Deployment sets are evaluated, assessed, and measured through a
combination of the following:
Testing against the usage scenarios and quality attributes defined in the requirements set to
evaluate the consistency and completeness and the semantic balance between information in the
two sets
Testing the partitioning, replication, and allocation strategies in mapping components of the
implementation set to physical resources of the deployment system (platform type, number,
network topology)
Testing against the defined usage scenarios in the user manual such as installation, user-oriented
dynamic reconfiguration, mainstream usage, and anomaly management
Analysis of changes between the current version of the deployment set and previous versions
(defect elimination trends, performance changes)
Subjective review of other dimensions of quality
Each artifact set is the predominant development focus of one phase of the life cycle; the
other sets take on check and balance roles.
As illustrated in Figure 6-2, each phase has a predominant focus: Requirements are the focus of
the inception phase; design, the elaboration phase; implementation, the construction phase; and
deployment, the transition phase. The management artifacts also evolve, but at a fairly constant level
across the life cycle. Most of today's software development tools map closely to one of the five artifact
sets.
1. Management: scheduling, workflow, defect tracking, change management, documentation,
spreadsheet, resource management, and presentation tools
2. Requirements: requirements management tools
3. Design: visual modeling tools
4. Implementation: compiler/debugger tools, code analysis tools, test coverage analysis tools, and
test management tools
5. Deployment: test coverage and test automation tools, network management tools, commercial
components (operating systems, GUIs, RDBMS, networks, middleware), and installation tools.
The inception phase focuses mainly on critical requirements usually with a secondary focus on an initial
deployment view. During the elaboration phase, there is much greater depth in requirements, much more
breadth in the design set, and further work on implementation and deployment issues. The main focus of
the construction phase is design and implementation. The main focus of the transition phase is on
achieving consistency and completeness of the deployment set in the context of the other sets.
6.1.4 TEST ARTIFACTS
Conventional software testing followed the same document-driven approach that was applied to software
development. Development teams built requirements documents, top-level design documents, and detailed
design documents before constructing any source files. Similarly, test teams built test plan documents,
system test procedure documents, integration test plan documents, unit test plan documents, and so on.
The test artifacts must be developed concurrently with the product from inception through
deployment. Thus, testing is a full-life-cycle activity, not a late life-cycle activity.
The test artifacts are communicated, engineered, and developed within the same artifact sets as the
developed product.
The test artifacts are implemented in programmable and repeatable formats (as software
programs).
The test artifacts are documented in the same way that the product is documented.
Developers of the test artifacts use the same tools, techniques, and training as the software
engineers developing the product.
Although test artifact subsets are highly project-specific, the following example clarifies the relationship
between test artifacts and the other artifact sets. Consider a project to perform seismic data processing for
the purpose of oil exploration. This system has three fundamental subsystems: (1) a sensor subsystem that
captures raw seismic data in real time and delivers these data to (2) a technical operations subsystem that
converts raw data into an organized database and manages queries to this database from (3) a display
subsystem that allows workstation operators to examine seismic data in human-readable form. Such a
system would result in the following test artifacts:
Management set. The release specifications and release descriptions capture the objectives,
evaluation criteria, and results of an intermediate milestone. These artifacts are the test plans and
test results negotiated among internal project teams. The software change orders capture test
results (defects, testability changes, requirements ambiguities, enhancements) and the closure
criteria associated with making a discrete change to a baseline.
Requirements set. The system-level use cases capture the operational concept for the system and
the acceptance test case descriptions, including the expected behaviour of the system and its
quality attributes. The entire requirement set is a test artifact because it is the basis of all
assessment activities across the life cycle.
Design set. A test model for non-deliverable components needed to test the product baselines is
captured in the design set. These components include such design set artifacts as a seismic event
simulation for creating realistic sensor data; a "virtual operator" that can support unattended,
after-hours test cases; specific instrumentation suites for early demonstration of resource usage,
transaction rates, or response times; and use case test drivers and component stand-alone test
drivers.
Implementation set. Self-documenting source code representations for test components and test
drivers provide the equivalent of test procedures and test scripts. These source files may also
include human-readable data files representing certain statically defined data sets that are explicit
test source files. Output files from test drivers provide the equivalent of test reports.
Deployment set. Executable versions of test components, test drivers, and data files are provided.
The management set includes several artifacts that capture intermediate results and ancillary information
necessary to document the product/process legacy, maintain the product, improve the product, and
improve the process.
Release Specifications
The scope, plan, and objective evaluation criteria for each baseline release are derived from the vision
statement as well as many other sources (make/buy analyses, risk management concerns, architectural
considerations, shots in the dark, implementation constraints, quality thresholds). These artifacts are
intended to evolve along with the process, achieving greater fidelity as the life cycle progresses and
requirements understanding matures. Figure 6-5 provides a default outline for a release specification.
Release Descriptions
Release description documents describe the results of each release, including performance against each of
the evaluation criteria in the corresponding release specification. Release baselines should be
accompanied by a release description document that describes the evaluation criteria for that
configuration baseline and provides substantiation (through demonstration, testing, inspection, or
analysis) that each criterion has been addressed in an acceptable manner. Figure 6-7 provides a default
outline for a release description.
Status Assessments
Status assessments provide periodic snapshots of project health and status, including the software project
manager's risk assessment, quality indicators, and management indicators. Typical status assessments
should include a review of resources, personnel staffing, financial data (cost and revenue), top 10 risks,
technical progress (metrics snapshots), major milestone plans and results, total project or product scope,
and action items.
Environment
An important emphasis of a modern approach is to define the development and maintenance environment
as a first-class artifact of the process. A robust, integrated development environment must support
automation of the development process. This environment should include requirements management,
visual modeling, document automation, host and target programming tools, automated regression testing,
continuous and integrated change management, and feature and defect tracking.
Deployment
A deployment document can take many forms. Depending on the project, it could include several
document subsets for transitioning the product into operational status. In big contractual efforts in which
the system is delivered to a separate maintenance organization, deployment artifacts may include
computer system operations manuals, software installation manuals, plans and procedures for cutover
(from a legacy system), site surveys, and so forth. For commercial software products, deployment
artifacts may include marketing plans, sales rollout kits, and training courses.
6.3 ENGINEERING ARTIFACTS
Most of the engineering artifacts are captured in rigorous engineering notations such as UML,
programming languages, or executable machine codes. Three engineering artifacts are explicitly intended
for more general review, and they deserve further elaboration.
Vision Document
The vision document provides a complete vision for the software system under development and
supports the contract between the funding authority and the development organization. A project vision is
meant to be changeable as understanding evolves of the requirements, architecture, plans, and technology.
A good vision document should change slowly. Figure 6-9 provides a default outline for a vision
document.
Architecture Description
The architecture description provides an organized view of the software architecture under development.
It is extracted largely from the design model and includes views of the design, implementation, and
deployment sets sufficient to understand how the operational concept of the requirements set will be
achieved. The breadth of the architecture description will vary from project to project depending on many
factors. Figure 6-10 provides a default outline for an architecture description.
Architecture development and process definition are the intellectual steps that map the problem to
a solution without violating the constraints; they require human innovation and cannot be
automated.
7.2 ARCHITECTURE: A TECHNICAL PERSPECTIVE
An architecture framework is defined in terms of views that are abstractions of the UML models in the
design set. The design model includes the full breadth and depth of information. An architecture view is
an abstraction of the design model; it contains only the architecturally significant information. Most real-
world systems require four views: design, process, component, and deployment. The purposes of these
views are as follows:
Design: describes architecturally significant structures and functions of the design model
Process: describes concurrency and control thread relationships among the design, component,
and deployment views
Component: describes the structure of the implementation set
Deployment: describes the structure of the deployment set
Figure 7-1 summarizes the artifacts of the design set, including the architecture views and architecture
description
The requirements model addresses the behaviour of the system as seen by its end users, analysts, and
testers. This view is modeled statically using use case and class diagrams, and dynamically using
sequence, collaboration, state chart, and activity diagrams.
The use case view describes how the system's critical (architecturally significant) use cases are
realized by elements of the design model. It is modeled statically using use case diagrams, and
dynamically using any of the UML behavioural diagrams.
The design view describes the architecturally significant elements of the design model. This view,
an abstraction of the design model, addresses the basic structure and functionality of the solution.
It is modeled statically using class and object diagrams, and dynamically using any of the UML
behavioural diagrams.
The process view addresses the run-time collaboration issues involved in executing the
architecture on a distributed deployment model, including the logical software network topology
(allocation to processes and threads of control), interprocess communication, and state
management. This view is modeled statically using deployment diagrams, and dynamically using
any of the UML behavioural diagrams.
The component view describes the architecturally significant elements of the implementation set.
This view, an abstraction of the design model, addresses the software source code realization of
the system from the perspective of the project's integrators and developers, especially with regard
to releases and configuration management. It is modeled statically using component diagrams, and
dynamically using any of the UML behavioral diagrams.
The deployment view addresses the executable realization of the system, including the allocation
of logical processes in the distribution view (the logical software topology) to physical resources
of the deployment network (the physical system topology). It is modeled statically using
deployment diagrams, and dynamically using any of the UML behavioral diagrams.
Generally, an architecture baseline should include the following:
Requirements: critical use cases, system-level quality objectives, and priority relationships among
features and qualities
Design: names, attributes, structures, behaviours, groupings, and relationships of significant
classes and components
Implementation: source component inventory and bill of materials (number, name, purpose, cost)
of all primitive components
Deployment: executable components sufficient to demonstrate the critical use cases and the risk
associated with achieving the system qualities
UNIT-III
WorkFlows and Checkpoints of the Process-Software Process Workflows, Iteration Workflows, Major
Milestones, Minor Milestones, Periodic Status Assessments.
Iterative Process Planning-Work Breakdown Structures, Planning Guidelines, Cost and Schedule
Estimating, Iteration Planning Process, Pragmatic Planning.
Table 8-1 shows the allocation of artifacts and the emphasis of each workflow in each of the life-cycle
phases of inception, elaboration, construction, and transition.
ITERATION WORKFLOWS
An iteration consists of a loosely sequential set of activities in various proportions, depending on where the
iteration is located in the development cycle. Each iteration is defined in terms of a set of allocated usage
scenarios. An individual iteration's workflow, illustrated in Figure 8-2, generally includes the following
sequence:
Management: iteration planning to determine the content of the release and develop the detailed
plan for the iteration; assignment of work packages, or tasks, to the development team
Environment: evolving the software change order database to reflect all new baselines and
changes to existing baselines for all product, test, and environment components
Requirements: analyzing the baseline plan, the baseline architecture, and the baseline
requirements set artifacts to fully elaborate the use cases to be demonstrated at the end of this
iteration and their evaluation criteria; updating any requirements set artifacts to reflect changes
necessitated by results of this iteration's engineering activities.
Design: evolving the baseline architecture and the baseline design set artifacts to elaborate fully
the design model and test model components necessary to demonstrate against the evaluation
criteria allocated to this iteration; updating design set artifacts to reflect changes necessitated by
the results of this iteration's engineering activities
Implementation: developing or acquiring any new components, and enhancing or modifying any
existing components, to demonstrate the evaluation criteria allocated to this iteration; integrating
and testing all new and modified components with existing baselines (previous versions)
Assessment: evaluating the results of the iteration, including compliance with the allocated
evaluation criteria and the quality of the current baselines; identifying any rework required and
determining whether it should be performed before deployment of this release or allocated to the
next release; assessing results to improve the basis of the subsequent iteration's plan
Deployment: transitioning the release either to an external organization (such as a user,
independent verification and validation contractor, or regulatory agency) or to internal closure by
conducting a post-mortem so that lessons learned can be captured and reflected in the next
iteration
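The loosely sequential workflow above can be sketched as an ordered checklist. The structure below is purely illustrative (activity names from the text, one-line goals paraphrased), not part of any defined process automation:

```python
# Hypothetical sketch: one iteration's workflow as an ordered checklist.
ITERATION_WORKFLOW = [
    ("management",     "plan iteration content; assign work packages"),
    ("environment",    "update change-order database for new baselines"),
    ("requirements",   "elaborate use cases and their evaluation criteria"),
    ("design",         "evolve design and test models for the criteria"),
    ("implementation", "build/modify components; integrate with baselines"),
    ("assessment",     "evaluate results against the evaluation criteria"),
    ("deployment",     "release externally or close out with lessons learned"),
]

def run_iteration(execute):
    """Run each activity in order; 'execute' returns True on success."""
    for name, goal in ITERATION_WORKFLOW:
        if not execute(name, goal):
            return name          # first activity that failed, for rework planning
    return None                  # iteration met all of its evaluation criteria

# Example: every activity succeeds, so the iteration completes cleanly.
print(run_iteration(lambda name, goal: True))   # None
```

The ordering matters only loosely in practice; as the text notes, the proportions of these activities shift across the life cycle, with early iterations weighted toward management, requirements, and design and later ones toward implementation and assessment.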
Iterations in the inception and elaboration phases focus on management, requirements, and design
activities. Iterations in the construction phase focus on design, implementation, and assessment. Iterations
in the transition phase focus on assessment and deployment. Figure 8-3 shows the emphasis on different
activities across the life cycle. An iteration represents the state of the overall architecture and the
complete deliverable system. An increment represents the current progress that will be combined with the
preceding iteration to form the next iteration. Figure 8-4, an example of a simple development life cycle,
illustrates the differences between iterations and increments.
CHECKPOINTS OF THE PROCESS
Three types of joint management reviews are conducted throughout the process:
1. Major milestones. These system-wide events are held at the end of each development phase. They
provide visibility to system-wide issues, synchronize the management and engineering perspectives, and
verify that the aims of the phase have been achieved.
2. Minor milestones. These iteration-focused events are conducted to review the content of an iteration in
detail and to authorize continued work.
3. Status assessments. These periodic events provide management with frequent and regular insight into
the progress being made.
Each of the four phases (inception, elaboration, construction, and transition) consists of one or more
iterations and concludes with a major milestone when a planned technical capability is produced in
demonstrable form. An iteration represents a cycle of activities for which there is a well-defined
intermediate result, a minor milestone, captured with two artifacts: a release specification (the evaluation
criteria and plan) and a release description (the results). Major milestones at the end of each phase use
formal, stakeholder-approved evaluation criteria and release descriptions; minor milestones use informal,
development-team-controlled versions of these artifacts.
Figure 9-1 illustrates a typical sequence of project checkpoints for a relatively large project.
MAJOR MILESTONES
The four major milestones occur at the transition points between life-cycle phases. They can be used in
many different process models, including the conventional waterfall model. In an iterative model, the
major milestones are used to achieve concurrence among all stakeholders on the current state of the
project. Different stakeholders have very different concerns:
Customers: schedule and budget estimates, feasibility, risk assessment, requirements understanding,
progress, product line compatibility
Users: consistency with requirements and usage scenarios, potential for accommodating growth, quality
attributes
Architects and systems engineers: product line compatibility, requirements changes, trade-off analyses,
completeness and consistency, balance among risk, quality, and usability
Developers: sufficiency of requirements detail and usage scenario descriptions, frameworks for
component selection or development, resolution of development risk, product line compatibility,
sufficiency of the development environment
Maintainers: sufficiency of product and documentation artifacts, understandability, interoperability with
existing systems, sufficiency of maintenance environment
Others: possibly many other perspectives by stakeholders such as regulatory agencies, independent
verification and validation contractors, venture capital investors, subcontractors, associate contractors,
and sales and marketing teams
Table 9-1 summarizes the balance of information across the major milestones.
Life-Cycle Objectives Milestone
The life-cycle objectives milestone occurs at the end of the inception phase. The goal is to present to all
stakeholders a recommendation on how to proceed with development, including a plan, estimated cost
and schedule, and expected benefits and cost savings. A successfully completed life-cycle objectives
milestone will result in authorization from all stakeholders to proceed with the elaboration phase.
Life-Cycle Architecture Milestone
The life-cycle architecture milestone occurs at the end of the elaboration phase. The primary goal is to
demonstrate an executable architecture to all stakeholders. The baseline architecture consists of both a
human readable representation (the architecture document) and a configuration-controlled set of software
components captured in the engineering artifacts. A successfully completed life-cycle architecture
milestone will result in authorization from the stakeholders to proceed with the construction phase.
The technical data listed in Figure 9-2 should have been reviewed by the time of the lifecycle architecture
milestone.
Figure 9-3 provides default agendas for this milestone.
Product Release Milestone
The product release milestone occurs at the end of the transition phase. The goal is to assess the
completion of the software and its transition to the support organization, if any. The results of acceptance
testing are reviewed, and all open issues are addressed. Software quality metrics are reviewed to
determine whether quality is sufficient for transition to the support organization.
MINOR MILESTONES
For most iterations, which have a one-month to six-month duration, only two minor milestones are
needed: the iteration readiness review and the iteration assessment review.
Iteration Readiness Review. This informal milestone is conducted at the start of each iteration to
review the detailed iteration plan and the evaluation criteria that have been allocated to this
iteration.
Iteration Assessment Review. This informal milestone is conducted at the end of each iteration to
assess the degree to which the iteration achieved its objectives and satisfied its evaluation criteria,
to review iteration results, to review qualification test results (if part of the iteration), to determine
the amount of rework to be done, and to review the impact of the iteration results on the plan for
subsequent iterations.
The format and content of these minor milestones tend to be highly dependent on the project and the
organizational culture. Figure 9-4 identifies the various minor milestones to be considered when a project
is being planned.
PERIODIC STATUS ASSESSMENTS
Periodic status assessments provide the following:
• A mechanism for openly addressing, communicating, and resolving management issues, technical
issues, and project risks
• Objective data derived directly from ongoing activities and evolving product configurations
• A mechanism for disseminating process, progress, quality trends, practices, and experience
information to and from all stakeholders in an open forum
Periodic status assessments are crucial for focusing continuous attention on the evolving health of the
project and its dynamic priorities. They force the software project manager to collect and review the data
periodically, force outside peer review, and encourage dissemination of best practices to and from other
stakeholders.
The default content of periodic status assessments should include the topics identified in Table 9-2.
ITERATIVE PROCESS PLANNING
WORK BREAKDOWN STRUCTURES
Many parameters can drive the decomposition of work into discrete tasks: product subsystems,
components, functions, organizational units, life-cycle phases, even geographies. Most systems have a
first-level decomposition by subsystem. Subsystems are then decomposed into their components, one of
which is typically the software.
Figure 10-1 Conventional work breakdown structure, following the product hierarchy:
Management
System requirement and design
Subsystem 1
    Component 11
        Requirements
        Design
        Code
        Test
        Documentation
    …(similar structures for other components)
    Component 1N
        Requirements
        Design
        Code
        Test
        Documentation
…(similar structures for other subsystems)
Subsystem M
    Component M1
        Requirements
        Design
        Code
        Test
        Documentation
    …(similar structures for other components)
    Component MN
        Requirements
        Design
        Code
        Test
        Documentation
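The product hierarchy above maps naturally onto a recursive data structure, with leaf-level estimates rolled up to subsystem and project totals. A minimal sketch in Python (element names and effort figures are illustrative, not taken from the text):

```python
from dataclasses import dataclass, field

@dataclass
class WBSElement:
    """One node in a product-oriented work breakdown structure."""
    name: str
    effort: float = 0.0          # staff-months booked directly to this element
    children: list = field(default_factory=list)

    def total_effort(self) -> float:
        """Roll leaf-level estimates up through the hierarchy."""
        return self.effort + sum(c.total_effort() for c in self.children)

# Hypothetical decomposition mirroring the outline above
component_11 = WBSElement("Component 11", children=[
    WBSElement("Requirements", 2), WBSElement("Design", 3),
    WBSElement("Code", 5), WBSElement("Test", 4),
    WBSElement("Documentation", 1),
])
subsystem_1 = WBSElement("Subsystem 1", children=[component_11])
project = WBSElement("Project", children=[
    WBSElement("Management", 6),
    WBSElement("System requirement and design", 4),
    subsystem_1,
])

print(project.total_effort())  # 25.0
```

Rolling estimates up through the tree this way is what lets lower level budgets be compared against the first-level allocations.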
E Implementation
    EA Inception phase component prototyping
    EB Elaboration phase component implementation
        EBA Critical component coding demonstration integration
    EC Construction phase component implementation
        ECA Initial release(s) component coding and stand-alone testing
        ECB Alpha release component coding and stand-alone testing
        ECC Beta release component coding and stand-alone testing
        ECD Component maintenance
F Assessment
    FA Inception phase assessment
    FB Elaboration phase assessment
        FBA Test modeling
        FBB Architecture test scenario implementation
        FBC Demonstration assessment and release descriptions
    FC Construction phase assessment
        FCA Initial release assessment and release description
        FCB Alpha release assessment and release description
        FCC Beta release assessment and release description
    FD Transition phase assessment
        FDA Product release assessment and release description
G Deployment
    GA Inception phase deployment planning
    GB Elaboration phase deployment planning
    GC Construction phase deployment
        GCA User manual baselining
    GD Transition phase deployment
        GDA Product transition to user
56
PLANNING GUIDELINES
Software projects span a broad range of application domains, so it is valuable but risky to make
specific planning recommendations independent of project context: such guidelines may be adopted
blindly, without being adapted to specific project circumstances. Two simple planning guidelines should
be considered when a project plan is being initiated or assessed. The first, detailed in Table 10-1,
prescribes a default allocation of costs among the first-level WBS elements. The second, detailed in
Table 10-2, prescribes the allocation of effort and schedule across the life-cycle phases.
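Once a macro-level estimate exists, these default allocations can be applied mechanically. A small sketch, using representative percentages in the spirit of Tables 10-1 and 10-2 (the exact values here are assumptions for illustration, not quotations from the tables):

```python
# Illustrative default allocations (assumed values, in the spirit of
# Tables 10-1 and 10-2).
WBS_BUDGET = {           # first-level WBS element -> % of total cost
    "Management": 10, "Environment": 10, "Requirements": 10,
    "Design": 15, "Implementation": 25, "Assessment": 25, "Deployment": 5,
}
PHASE_EFFORT = {"Inception": 5, "Elaboration": 20, "Construction": 65, "Transition": 10}
PHASE_SCHEDULE = {"Inception": 10, "Elaboration": 30, "Construction": 50, "Transition": 10}

def allocate(total, percentages):
    """Partition a total estimate according to default percentages."""
    return {name: total * pct / 100 for name, pct in percentages.items()}

budget = allocate(1_000_000, WBS_BUDGET)    # dollars
effort = allocate(100, PHASE_EFFORT)        # staff-months
schedule = allocate(20, PHASE_SCHEDULE)     # months

print(budget["Implementation"], effort["Construction"], schedule["Construction"])
```

The point of the sketch is that effort and schedule partition by different percentages: construction consumes the bulk of the effort, but only about half the schedule.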
COST AND SCHEDULE ESTIMATING
Project plans need to be derived from two perspectives. The first is a forward-looking, top-down
approach. It starts with an understanding of the general requirements and constraints, derives a macro-
level budget and schedule, then decomposes these elements into lower level budgets and intermediate
milestones. From this perspective, the following planning sequence would occur:
1. The software project manager (and others) develops a characterization of the overall size,
process, environment, people, and quality required for the project.
2. A macro-level estimate of the total effort and schedule is developed using a software cost
estimation model.
3. The software project manager partitions the estimate for the effort into a top-level WBS using
guidelines such as those in Table 10-1.
4. At this point, subproject managers are given the responsibility for decomposing each of the
WBS elements into lower levels using their top-level allocation, staffing profile, and major milestone
dates as constraints.
The second perspective is a backward-looking, bottom-up approach. We start with the end in
mind, analyze the micro-level budgets and schedules, then sum all these elements into the higher level
budgets and intermediate milestones. This approach tends to define and populate the WBS from the
lowest levels upward. From this perspective, the following planning sequence would occur:
1. The lowest level WBS elements are elaborated into detailed tasks.
2. Estimates are combined and integrated into higher level budgets and milestones.
3. Comparisons are made with the top-down budgets and schedule milestones.
Milestone scheduling or budget allocation through top-down estimating tends to exaggerate the
project management biases and usually results in an overly optimistic plan. Bottom-up estimates usually
exaggerate the performer biases and result in an overly pessimistic plan. These two planning approaches
should be used together, in balance, throughout the life cycle of the project. During the engineering stage,
the top-down perspective dominates because there is usually neither enough depth of understanding nor
enough stability in the detailed task sequences to perform credible bottom-up planning. During the
production stage, there should be enough precedent experience and planning fidelity that the bottom-up
perspective dominates. By then, the top-down approach should be well tuned to the project-specific
parameters, so it is used more as a global assessment technique. Figure 10-4 illustrates this life-cycle
planning balance.
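The balance between the two perspectives can be checked numerically: compare the macro-level estimate against the sum of the task-level estimates and treat a large spread as a signal to renegotiate the plan. A minimal sketch (all task names and numbers are hypothetical):

```python
def reconcile(top_down: float, bottom_up_tasks: dict) -> dict:
    """Compare a macro-level (top-down) effort estimate with the sum of
    micro-level (bottom-up) task estimates and report the relative spread."""
    bottom_up = sum(bottom_up_tasks.values())
    spread = (bottom_up - top_down) / top_down
    return {"top_down": top_down, "bottom_up": bottom_up, "spread": spread}

# Hypothetical staff-month figures: top-down tends optimistic,
# bottom-up tends pessimistic, so the spread is typically positive.
tasks = {"Design": 30, "Implementation": 60, "Assessment": 45, "Management": 25}
result = reconcile(top_down=140, bottom_up_tasks=tasks)

print(round(result["spread"], 3))  # positive spread -> negotiate toward the middle
```

A spread near zero suggests the two perspectives have converged; a persistently large spread means either the macro-level model or the task decomposition needs rework.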
THE ITERATION PLANNING PROCESS
Planning is concerned with defining the actual sequence of intermediate results. An evolutionary build
plan is important because there are always adjustments in build content and schedule as early conjecture
evolves into well-understood project circumstances. Iteration is used to mean a complete synchronization
across the project, with a well-orchestrated global assessment of the entire project baseline.
Inception iterations. The early prototyping activities integrate the foundation components of a
candidate architecture and provide an executable framework for elaborating the critical use cases
of the system. This framework includes existing components, commercial components, and
custom prototypes sufficient to demonstrate a candidate architecture and sufficient requirements
understanding to establish a credible business case, vision, and software development plan.
Elaboration iterations. These iterations result in an architecture, including a complete framework and
infrastructure for execution. Upon completion of the architecture iteration, a few critical use cases
should be demonstrable: (1) initializing the architecture, (2) injecting a scenario to drive the worst-
case data processing flow through the system (for example, the peak transaction throughput or
peak load scenario), and (3) injecting a scenario to drive the worst-case control flow through the
system (for example, orchestrating the fault-tolerance use cases).
Construction iterations. Most projects require at least two major construction iterations: an alpha
release and a beta release.
Transition iterations. Most projects use a single iteration to transition a beta release into the final
product.
The general guideline is that most projects will use between four and nine iterations. The typical
project would have the following six-iteration profile:
One iteration in inception: an architecture prototype.
Two iterations in elaboration: architecture prototype and architecture baseline.
Two iterations in construction: alpha and beta releases.
One iteration in transition: product release.
A very large or unprecedented project with many stakeholders may require an additional inception
iteration and two additional iterations in construction, for a total of nine iterations.
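The default and expanded profiles can be captured as a simple phase-to-iterations map; a sketch with illustrative iteration names:

```python
# The typical six-iteration profile, as a phase -> iterations map
# (iteration names are illustrative).
DEFAULT_PROFILE = {
    "Inception":    ["architecture prototype"],
    "Elaboration":  ["architecture prototype", "architecture baseline"],
    "Construction": ["alpha release", "beta release"],
    "Transition":   ["product release"],
}

def total_iterations(profile: dict) -> int:
    """Count iterations across all life-cycle phases."""
    return sum(len(iters) for iters in profile.values())

# A very large or unprecedented project might add one inception iteration
# and two construction iterations, growing the plan from six to nine.
large = {phase: list(iters) for phase, iters in DEFAULT_PROFILE.items()}
large["Inception"].append("feasibility prototype")
large["Construction"] += ["beta 2 release", "release candidate"]

print(total_iterations(DEFAULT_PROFILE), total_iterations(large))  # 6 9
```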
PRAGMATIC PLANNING
Even though good planning is more dynamic in an iterative process, doing it accurately is far easier.
While executing iteration N of any phase, the software project manager must be monitoring and
controlling against a plan that was initiated in iteration N - 1 and must be planning iteration N + 1. The art
of good project management is to make trade-offs in the current iteration plan and the next iteration plan
based on objective results in the current iteration and previous iterations. Aside from bad architectures
and misunderstood requirements, inadequate planning (and subsequent bad management) is one of the
most common reasons for project failures. Conversely, the success of every successful project can be
attributed in part to good planning.
A project's plan is a definition of how the project requirements will be transformed into a product
within the business constraints. It must be realistic, it must be current, it must be a team product, it must
be understood by the stakeholders, and it must be used.
Plans are not just for managers. The more open and visible the planning process and results, the
more ownership there is among the team members who need to execute it. Bad, closely held plans cause
attrition. Good, open plans can shape cultures and encourage teamwork.
UNIT-IV
Project Organizations: Line-of-Business Organizations, Project Organizations, Evolution of
Organizations, Process Automation: Building Blocks, the Project Environment.
Project Control and Process Instrumentation: Seven Core Metrics, Management Indicators, Quality
Indicators, Life-Cycle Expectations, Pragmatic Software Metrics, Metrics Automation
LINE-OF-BUSINESS ORGANIZATIONS
The main features of the default organization are as follows:
• Responsibility for process definition & maintenance is specific to a cohesive line of business.
• Responsibility for process automation is an organizational role & is equal in importance to the process
definition role.
• Organizational role may be fulfilled by a single individual or several different teams.
PROJECT ORGANIZATIONS
The figure below shows a default project organization and maps project-level roles and responsibilities:
The main features of the default organization are as follows:
• The project management team is an active participant, responsible for producing as well as
managing.
• The architecture team is responsible for real artifacts and for the integration of components, not
just for staff functions.
• The development team owns the component construction and maintenance activities.
• The assessment team is separate from development.
• Quality is everyone's job, integrated into all activities and checkpoints.
• Each team takes responsibility for a different quality perspective.
Software Management Team:
The software management team is responsible for planning the effort, conducting the plan, and adapting
the plan to changes in the understanding of requirements or design. It takes ownership of all aspects of
quality. Schedules, costs, functionality, and quality expectations are highly interrelated and require
continuous negotiation among multiple stakeholders who have different goals. The software management
team carries the burden of delivering win conditions to all stakeholders.
Software Architecture Team:
The software architecture team is responsible for the architecture. For any project, the skill of the
software architecture team is crucial. With a good architecture team, an average development team can
succeed; if the architecture is weak, even an expert development team is unlikely to succeed. The
inception and elaboration phases of any project are dominated by the management and architecture teams.
The architecture team needs both domain experience and software technology experience. Domain
experience is necessary to produce an acceptable use case view and design view; software technology
experience is necessary to produce acceptable process, component, and deployment views.
Software Assessment Team:
EVOLUTION OF ORGANIZATIONS
The project organization represents the architecture of the team, which should evolve consistently
with the project plan captured in the WBS. The following figure illustrates how the team's focus shifts
over the life cycle: